
What is ShadowClone?

ShadowClone is designed to delegate time-consuming tasks to the cloud by distributing the input data to multiple serverless functions (AWS Lambda, Azure Functions, etc.) and running the tasks in parallel, resulting in a huge performance boost!

ShadowClone uses IBM's awesome Lithops library, which is at the core of this tool, to distribute the workloads to serverless functions. Effectively, it is a proof-of-concept script showcasing the power of cloud computing for performing our regular pentesting tasks.

These are a few of the use cases:

  1. DNS Bruteforce using a very large wordlist within seconds
  2. Fuzz through a huge wordlist using ffuf on a single host
  3. Fuzz a list of URLs on a single path all from different IP addresses
  4. Port scan thousands of IPs in seconds
  5. Run a nuclei template on a list of hosts
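
For instance, use case 2 maps onto a single ShadowClone invocation: the wordlist is split into chunks, each chunk is shipped to its own Lambda, and ffuf runs against the target in parallel. An illustrative command, assuming the {INPUT} placeholder that ShadowClone substitutes with each chunk (file names here are hypothetical; see the Usage section below for the flags):

python shadowclone.py -i big-wordlist.txt -s 1000 -o ffuf-results.txt -c "ffuf -u https://target.example/FUZZ -w {INPUT}"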

Get Started

Prerequisites

  • AWS/GCP/Azure/IBM Cloud account
  • Docker installed on your local machine (required for the initial setup only)
  • Python 3.10

Configuration

There are two parts to the configuration: Cloud and Local.

Although the final script is cloud-agnostic and should work with any supported platform, I have only tested it on AWS so far. Instructions for setting up GCP, Azure, and IBM Cloud environments will be added soon.

Cloud

  • Log in to your AWS account and get API credentials (access key & secret)

  • Go to IAM in AWS console and create a new policy with the following permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:*",
                "lambda:*",
                "ec2:*",
                "ecr:*",
                "sts:GetCallerIdentity"
            ],
            "Resource": "*"
        }
    ]
}
  • Create a new role with "Lambda" use case and attach the above policy to it.

  • Keep a note of the ARN of this role, you will need it later.

  • Go to S3 and create two buckets in the same region where your Lambda functions are going to be executed.

    • One bucket is used for storing logs, runtime information, etc., and the other bucket will be used for storing uploaded files
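
If you prefer to script this setup, the equivalent AWS CLI calls look roughly like the following. This is a sketch, assuming the policy JSON above is saved as policy.json; the policy, role, and bucket names are placeholders:

# Create the policy from the JSON above
aws iam create-policy --policy-name shadowclone-policy --policy-document file://policy.json

# Trust policy that lets Lambda assume the role (the "Lambda" use case)
cat > trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "lambda.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }]
}
EOF
aws iam create-role --role-name lithops-execution-role --assume-role-policy-document file://trust.json
aws iam attach-role-policy --role-name lithops-execution-role --policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/shadowclone-policy

# Two buckets in the same region where the Lambdas will run
aws s3 mb s3://<logs-bucket> --region us-east-1
aws s3 mb s3://<files-bucket> --region us-east-1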

If you are using AWS and would like to keep costs within the free tier, I highly recommend following this article and setting up some budgets and alerts.

Local machine

  • Ensure docker is installed on your local machine. This is required for the initial setup only.
  • Clone the repo and install the Python dependencies:
git clone https://github.com/fyoorer/ShadowClone.git
cd ShadowClone
python -m venv env
source env/bin/activate
pip install -r requirements.txt

All the magic happens in the lithops library, which should have been installed by the previous command.

  • Verify that the lithops command-line utility is installed by running lithops test:
⚡ lithops test
2022-01-18 08:08:45,832 [INFO] lithops.config -- Lithops v2.5.8
2022-01-18 08:08:45,833 [INFO] lithops.storage.backends.localhost.localhost -- Localhost storage client created
2022-01-18 08:08:45,833 [INFO] lithops.localhost.localhost -- Localhost compute client created
2022-01-18 08:08:45,833 [INFO] lithops.invokers -- ExecutorID b9419a-0 | JobID A000 - Selected Runtime: python
2022-01-18 08:08:45,833 [INFO] lithops.invokers -- Runtime python is not yet installed
2022-01-18 08:08:45,833 [INFO] lithops.localhost.localhost -- Extracting preinstalled Python modules from python
2022-01-18 08:08:46,110 [INFO] lithops.invokers -- ExecutorID b9419a-0 | JobID A000 - Starting function invocation: hello() - Total: 1 activations
2022-01-18 08:08:46,111 [INFO] lithops.invokers -- ExecutorID b9419a-0 | JobID A000 - View execution logs at /tmp/lithops/logs/b9419a-0-A000.log
2022-01-18 08:08:46,111 [INFO] lithops.wait -- ExecutorID b9419a-0 - Getting results from functions

  100%|████████████████████████████████████████████████████████████| 1/1

2022-01-18 08:08:48,125 [INFO] lithops.executors -- ExecutorID b9419a-0 - Cleaning temporary data

Hello fyoorer! Lithops is working as expected :)

If you see this output, Lithops is installed and working as intended.

  • Now, to make lithops work with your cloud provider, create a configuration file at ~/.lithops/config and copy the following content into it:

Lithops Config

vi ~/.lithops/config

lithops:
 backend: aws_lambda
 storage: aws_s3
 data_limit: False
 monitoring_interval: 2

aws:
 access_key_id: AKIA[REDACTED] #changeme
 secret_access_key: xxxx[REDACTED]xxxx #changeme
 #account_id: <ACCOUNT_ID>  # Optional

aws_lambda:
 execution_role: arn:aws:iam::123123123123:role/lithops-execution-role #changeme
 region_name: us-east-1
 runtime_memory: 512
 runtime_timeout: 330
  
aws_s3:
 storage_bucket: mybucket #changeme
 region_name: us-east-1

The lines marked with #changeme need to be updated with the values noted above:

  • access_key_id & secret_access_key - Your account's API credentials
  • execution_role - Enter the IAM Role ARN noted above
  • storage_bucket - Enter the name of the bucket you wish to use for storing logs

Ensure that the config file is placed at ~/.lithops/config
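
Once the config is in place, you can re-run the hello-world test against the cloud backend instead of localhost, which confirms that the credentials, role, and bucket all work. A quick check, assuming your lithops version supports the -b/-s backend-selection flags:

lithops test -b aws_lambda -s aws_s3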

Build

Now we need to build a container image, with all our tools baked in, which will be used by the serverless functions.

Build the image using the lithops runtime build command:

lithops runtime build sc-runtime -f Dockerfile_v0.2

The Dockerfile provided in this repo can be treated as a starting point. You are free to add any tools you wish to use.
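
For example, baking an extra tool into the image is usually a one-line change. An illustrative addition, assuming the base image is Amazon Linux based and uses yum (adjust to your base image's package manager):

# Append to the Dockerfile to include an extra tool in the runtime
RUN yum install -y nmap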

Important! Make sure the Python version in your local environment matches the one specified in the Dockerfile. If they mismatch, a runtime error will be thrown.
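
A quick way to confirm the versions line up before building (the version shown in the comment is hypothetical):

python --version                 # local environment, e.g. Python 3.10.x
grep -i '^FROM' Dockerfile_v0.2  # the base image tag should carry the same minor version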

Next, register the runtime in your cloud environment with the following command:

lithops runtime deploy sc-runtime --memory 512 --timeout 300

Check that the runtime was registered successfully:

lithops runtime list

Copy the runtime name displayed in the output. We will need it in the next step.

Finally, update config.py with the name of your runtime and the bucket:

LITHOPS_RUNTIME="sc-runtime" #runtime name obtained from above
STORAGE_BUCKET="mytestbucket" #name of the 2nd bucket created above

Run

Finally, we are ready to run some lambdas!

Usage

⚡ python shadowclone.py -h
usage: shadowclone.py [-h] -i INPUT [-s SPLITNUM] [-o OUTPUT] -c COMMAND

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT, --input INPUT
  -s SPLITNUM, --split SPLITNUM
                        Number of lines per chunk of file
  -o OUTPUT, --output OUTPUT
  -c COMMAND, --command COMMAND
                        command to execute
  --no-split NOSPLIT    File to be used without splitting

See the Examples page for actual syntax and commands.
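
As a taste of the syntax, a nuclei run over a large target list might look like this. Illustrative only, with hypothetical file names, again assuming the {INPUT} placeholder is replaced with each chunk of the input file:

python shadowclone.py -i targets.txt -s 50 -o nuclei-results.txt -c "nuclei -l {INPUT} -t cves/"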