Commit 136e64a (0 parents)
Showing 40 changed files with 17,695 additions and 0 deletions.
@@ -0,0 +1,8 @@
.vscode/
__pycache__/
*.py[cod]
*$py.class
#data/*
package-lock.json
node_modules
dist
@@ -0,0 +1,103 @@
# Contributing to awspx

The easiest way you can contribute to awspx is by using the tool and reporting bugs you encounter through the Github issue tracker. But if you'd like to do a bit more than that, the project has two naturally expandable components: service ingestors and attack patterns. This file provides a couple of brief tutorials on how to add those.

<br/>
<br/>

## Adding a new service ingestor

Depending on the service you want to add support for, you may be able to use a Resources-based ingestor, or you may need to write a custom one.
### Adding a new service with boto3 Resources

The base Ingestor class uses [boto3's Resources](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/resources.html) interface to ingest resources and infer relationships between them. Currently, this interface is implemented for the following services:

* cloudformation
* cloudwatch
* dynamodb
* ec2
* glacier
* iam
* opsworks
* s3
* sns
* sqs
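To make this inference concrete, here is a toy stand-in (plain Python, no AWS access needed; the class and attribute names are illustrative, not boto3's or awspx's actual ones) showing how collection-valued attributes on a Resources-style object imply relationships between resource types:

```python
class Vpc:
    """Toy stand-in for a boto3 Vpc resource: a collection-valued
    attribute (here `subnets`) implies an edge between the two types."""
    def __init__(self, vpc_id, subnets):
        self.id = vpc_id
        self.subnets = subnets

class Subnet:
    def __init__(self, subnet_id):
        self.id = subnet_id

def infer_edges(resource):
    # Any attribute holding a list of other resources becomes an edge
    # from the owning type to each member's type.
    edges = set()
    for attr, value in vars(resource).items():
        if isinstance(value, list):
            for member in value:
                edges.add((type(resource).__name__, type(member).__name__))
    return edges

vpc = Vpc("vpc-123", [Subnet("subnet-a"), Subnet("subnet-b")])
print(infer_edges(vpc))  # {('Vpc', 'Subnet')}
```

The real Resources interface works the same way at a larger scale: the base Ingestor walks these collections generically instead of hand-writing an API call per relationship.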
Currently, awspx's EC2 and S3 ingestors use this architecture. If you would like to add one of the other services in this list (for example, CloudFormation), you can start with the following class declaration:

```python
class CloudFormation(Ingestor):
    pass
```
To test your ingestor, run `awspx ingest` with the following arguments:

```shell
awspx ingest --profile your-profile --services CloudFormation
```

It is likely that the first run will crash with an error. These errors usually result from the ingestor failing to determine a `Name` or `Arn` property for one or more resources in the service. With verbose output, you will be able to see the structure of the data the ingestor fails on. Using this output, you will need to amend the methods `_get_resource_arn` and `_get_resource_name` in the `Ingestor` base class. Continue until all ingested resources have both a `Name` and an `Arn`.
Once you've fixed these bugs, you can start considering which resources you want to ingest and which relationships are worth showing. These two aspects of ingestion are represented by `run` and `associates` respectively.

An ingestor's `run` attribute is a list of resources it will ingest by default. These are formatted as plurals of the final part of each resource name in `lib/aws/resources.py`. Thus `AWS::S3::Bucket` becomes `buckets` and `AWS::Ec2::Vpc` becomes `vpcs`.
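The naming rule above can be sketched in a couple of lines (naive `s` pluralization for illustration; awspx ships the `inflect` package, so irregular nouns may be handled differently in the real code):

```python
def run_label(resource_type):
    """Convert a resource type name from lib/aws/resources.py into the
    plural label used in an ingestor's `run` list (naive pluralization)."""
    return resource_type.split("::")[-1].lower() + "s"

print(run_label("AWS::S3::Bucket"))  # buckets
print(run_label("AWS::Ec2::Vpc"))    # vpcs
```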
An ingestor's `associates` attribute is a list of tuples, representing relationships between types to map. The ingestor will infer relationships between resources based on boto3's Resource model, e.g. if an `instance` object has an attribute called `snapshots`, there is a relationship between instances and snapshots. If `associates` is not defined, all inferred relationships will be mapped. Thus, `associates` is a good way to prune excessive edges. See the EC2 ingestor for a good example.
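For illustration, an `associates` list might look like the following (these tuples are hypothetical examples of the shape, not copied from the real EC2 ingestor):

```python
# Hypothetical pruning of inferred edges: only these type pairs will be
# mapped; all other relationships inferred from the Resource model are
# dropped.
associates = [
    ("AWS::Ec2::Instance", "AWS::Ec2::Vpc"),
    ("AWS::Ec2::Instance", "AWS::Ec2::SecurityGroup"),
    ("AWS::Ec2::Vpc", "AWS::Ec2::Subnet"),
]
```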
### Adding a new service without boto3 Resources

When adding services not supported by boto3 Resources, you will have to fall back to boto3 service clients, which are thin wrappers over AWS CLI commands. You will need to manually discover the different resource types and their associations. The Lambda service ingestor is an example of such an ingestor.

You can start with code like the following, which inherits from the base Ingestor but tells the initializer not to run the default ingestion, while still accounting for type and ARN selection:
```python
class MyService(Ingestor):
    run = ['...']

    def __init__(self, session, account="0000000000000",
                 only_types=[], except_types=[], only_arns=[], except_arns=[]):

        super().__init__(session=session, default=False)
        if not (super()._resolve_type_selection(only_types, except_types)
                and super()._resolve_arn_selection(only_arns, except_arns)):
            return

        self.client = self.session.client(self.__class__.__name__.lower())

        for rt in self.run:

            print(f"{self.__class__.__name__}: Loading {rt}")

            # ... fetch each resource type with self.client and add it here
```
### Enriching services

While the above methods will ingest a large amount of data about resources and their relationships, many AWS services require additional enrichment to capture all the information that's useful for mapping out the environment and drawing attack paths. For these cases, we need to do some ingestor enrichment.

The S3 ingestor provides a good example of this. Because bucket policies and ACLs are not pulled by the standard ingestor logic, it defines two additional methods to pull this information for each bucket. Consider the method for getting bucket policies:
```python
def get_bucket_policies(self):

    sr = self.session.resource(self.__class__.__name__.lower())

    for bucket in self.get("AWS::S3::Bucket").get("Resource"):
        try:
            bucket.set("Policy", json.loads(sr.BucketPolicy(
                bucket.get('Name')).policy))
        except Exception:  # no policy for this bucket
            pass
```
This method uses additional, S3-specific methods to get the policy for each ingested bucket. The buckets themselves are enumerated using methods provided by `Ingestor`'s parent class, `Elements` (see `lib/graph/base.py`). The same procedure is followed to get bucket ACLs, and to get instance user data in EC2.

<br/>
<br/>

## Adding a new attack pattern

A dictionary of attacks is defined in `lib/aws/attacks.py`. An overview of the thinking behind attack resolution in awspx can be found [on the F-Secure Labs blog](https://labs.f-secure.com/blog/awspx).
@@ -0,0 +1,119 @@
#!/bin/bash

NAME="awspx"
DIR="/opt/awspx"

function install(){

    MOUNT="/opt/awspx"
    BASE="neo4j"
    DEPS=("docker" "rsync")

    SOURCE="$(dirname $(realpath $0))"

    APT_DEPS=("nodejs" "npm" "python3-pip" "procps")
    PY_DEPS=("argparse" "awscli-local"
             "boto3" "configparser"
             "pip" "neo4j" "inflect")
    # Ensure script is run as root
    if [[ "$(whoami)" != "root" ]]; then
        echo "[-] awspx must be run with root privileges."
        exit 1
    fi
    # Ensure dependencies are met
    for dep in ${DEPS[*]}; do
        if [ -z "$(which ${dep} 2>/dev/null)" ]; then
            echo "[-] awspx requires \"$(basename ${dep})\" to function."
            echo "    Ensure it has been installed before running this script."
            exit 2
        fi
    done
    # Delete all containers named $NAME (prompt for confirmation)
    uid=($(docker ps -a -f name=${NAME} -q))
    if [ -n "${uid}" ]; then
        echo "[!] An existing container named \"$NAME\" was detected"
        echo "    In order to continue, it must be deleted. All data will be lost."
        read -p "    Continue [y/n]? " response
        [[ "${response^^}" == "Y" ]] || exit
        docker stop ${NAME} >/dev/null 2>&1
        docker rm ${NAME} >/dev/null 2>&1
    fi
    # Update
    mkdir -p $MOUNT/data
    rsync -avrt $SOURCE/* ${MOUNT}/. >/dev/null
    rsync -avrt $0 /bin/awspx >/dev/null
    # I'm sorry for not putting it in /usr/local/bin
    # please don't kick me out of the sysadmin club

    echo ""
    set -e
    # Create awspx container
    docker pull $BASE
    docker run -itd \
        --name $NAME \
        --hostname=$NAME \
        -p 127.0.0.1:80:80 \
        -p 127.0.0.1:7687:7687 \
        -p 127.0.0.1:7373:7373 \
        -p 127.0.0.1:7474:7474 \
        -v ${MOUNT}:${DIR} \
        --restart=always $BASE
    # Modify entrypoint
    HEADER=(
        '#NEO4J_dbms_memory_heap_initial__size="2048m"'
        '#NEO4J_dbms_memory_heap_max__size="2048m"'
        '#NEO4J_dbms_memory_pagecache__size="2048m"'
        'if [[ "${1}" == "neo4j" ]] && [[ -z "${2:-}" ]] && [[ -z "$(pgrep java)" ]]; then'
        '[[ -z "$(pgrep npm)" ]] && cd /opt/awspx/www && nohup npm run serve>/dev/null 2>&1 &'
        '/docker-entrypoint.sh neo4j init &'
        'bash'
        'exit'
        'fi'
    )

    # Insert HEADER in reverse so each line lands at line 4 of the
    # entrypoint and the block ends up in its original order
    for i in `seq $((${#HEADER[@]} - 1)) -1 0`; do
        docker exec -it $NAME sed -i "4i${HEADER[$i]}" /docker-entrypoint.sh
    done
    # Install dependencies
    docker exec -it $NAME \
        apt -y update
    docker exec -it $NAME \
        apt install -y ${APT_DEPS[@]}
    docker exec -it $NAME \
        pip3 install --upgrade ${PY_DEPS[@]}

    # Set neo4j user:pass to neo4j:neo4j
    docker exec -it $NAME \
        rm /var/lib/neo4j/data/dbms/auth
    docker exec -it $NAME \
        neo4j-admin set-initial-password neo4j

    # Install npm packages
    docker exec -it $NAME \
        sh -c "cd ${DIR}/www && npm install"

    docker restart $NAME

    echo -e "\n[+] Done! Server should soon be available at http://localhost"
    echo -e "\tThe client can be run by executing \`awspx\`"
}
function awspx(){
    docker exec -it $NAME $DIR/cli.py "$@"
}

if [ "$(systemctl is-active docker)" != "active" ]; then
    echo "[-] \"docker\" must first be started."
    exit 2
fi

case "$(basename $0)" in
    awspx) awspx "$@";;
    *) install;;
esac