Saferwall is an open source malware analysis platform.
It aims for the following goals:
- Be the collaborative platform for teams and researchers to analyze and share new threats.
- Streamline the analysis process to help researchers generate IoCs and reports with zero friction.
- Provide a searchable database to build intelligence feeds for threat hunting.
- Be open-source, developer friendly and user driven.
- File metadata, packer identification and crypto hashes.
- String (ASCII/Unicode and ASM) extraction.
- PE (Portable Executable) file parser.
Multiple AV scanners, including major vendors:

| Vendors | Status | Vendors | Status |
| ------- | ------ | ------- | ------ |
| Avast | ✔️ | FSecure | ✔️ |
| Avira | ✔️ | Kaspersky | ✔️ |
| Bitdefender | ✔️ | McAfee | ✔️ |
| ClamAV | ✔️ | Sophos | ✔️ |
| Comodo | ✔️ | Symantec | ✔️ |
| ESET | ✔️ | Windows Defender | ✔️ |
| TrendMicro | ✔️ | DrWeb | ✔️ |
Integrations with your own data processing pipeline.
Saferwall takes advantage of Kubernetes for its high availability, scalability, and the rich ecosystem behind it.
Everything runs inside Kubernetes. You can either deploy it in the cloud or have it self hosted.
Here are the different deployment options available depending on how you are planning to use it:
"I want to try it first": use the cloud instance at https://saferwall.com.
"I want to make a PR or make changes": if you intend to change the code or submit a pull request, see this guide for detailed steps.
The production deployment using Kops automatically provisions a Kubernetes cluster hosted on AWS, GCE, DigitalOcean, or OpenStack, and also on bare metal. For the time being, only AWS is officially supported. A Helm chart is also provided for fast deployment. This setup works well for companies or small teams planning to scan massive amounts of files.
- Golang mostly.
- Backend: Echo
- Frontend: VueJS + Tailwind CSS
- Messaging: NSQ
- Database: Couchbase
- Logging: FileBeat + ElasticSearch + Kibana
- Metrics: Prometheus
- Object storage: MinIO
- Deployment: Helm + Kubernetes
Current architecture / Workflow:
Here is a basic workflow of what happens when a new file is submitted:
- Frontend talks to the backend via REST APIs.
- Backend uploads samples to the object storage.
- Backend pushes a message into the scanning queue.
- Consumer fetches the file and copies it to the NFS share, avoiding pulling the sample into every container.
- Consumer starts scanning routines for static information such as file metadata and file format details.
- Consumer calls asynchronously scanning services (like AV scanners) via gRPC calls and waits for results.
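The fan-out in the last step can be sketched in Go with goroutines and a channel. This is a simplified sketch: the stub engine functions stand in for the real gRPC scanner services, and the `ScanResult` shape is a hypothetical stand-in for the actual protobuf response types:

```go
package main

import (
	"fmt"
	"sync"
)

// ScanResult is a hypothetical result shape; the real responses come
// from the scanner services' gRPC definitions.
type ScanResult struct {
	Engine    string
	Infected  bool
	Signature string
}

// scanAll fans out one scan per engine concurrently, mirroring the
// consumer's asynchronous calls to the AV scanner services, and
// gathers all results before returning.
func scanAll(sample []byte, engines map[string]func([]byte) ScanResult) []ScanResult {
	var wg sync.WaitGroup
	ch := make(chan ScanResult, len(engines))
	for name, scan := range engines {
		wg.Add(1)
		go func(name string, scan func([]byte) ScanResult) {
			defer wg.Done()
			ch <- scan(sample)
		}(name, scan)
	}
	wg.Wait()
	close(ch)
	var results []ScanResult
	for r := range ch {
		results = append(results, r)
	}
	return results
}

func main() {
	// Stub engines standing in for real gRPC scanner services.
	engines := map[string]func([]byte) ScanResult{
		"clamav": func(b []byte) ScanResult { return ScanResult{Engine: "clamav"} },
		"eset":   func(b []byte) ScanResult { return ScanResult{Engine: "eset"} },
	}
	for _, r := range scanAll([]byte("sample"), engines) {
		fmt.Printf("%s: infected=%v\n", r.Engine, r.Infected)
	}
}
```

In the real pipeline each stub would be a gRPC call with a deadline, but the concurrency pattern (fan out, wait, collect) is the same.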
Please read docs/CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.