By Code Monkey Cybersecurity (ABN 77 177 673 061)
Motto: "Cybersecurity. With humans."
Shells is a comprehensive security scanning platform designed for bug bounty hunters and security researchers. Point it at a target (company name, domain, IP, or email) and it automatically discovers assets, tests for vulnerabilities, and generates actionable findings.
Current Status: 1.0.0-beta - Production ready with known limitations
Super Easy - One Command Installation:
# Clone and run install script (handles everything automatically)
git clone https://github.com/CodeMonkeyCybersecurity/artemis
cd artemis
./install.sh
# Start web dashboard
artemis serve --port 8080
# Open browser to http://localhost:8080 and start scanning!

What install.sh does automatically:
- Installs/updates Go 1.24.4
- Installs PostgreSQL and creates database
- Builds artemis binary
- Sets up Python workers (GraphCrawler, IDORD)
- Configures everything - just run and go!
Run scans:
# Full automated workflow
artemis example.com
# Or specify target type
artemis "Acme Corporation" # Discover company assets
artemis admin@example.com # Discover from email
artemis 192.168.1.0/24       # Scan IP range

- From company name: Certificate transparency logs, WHOIS, DNS enumeration
- From domain: Subdomain discovery, related domains, tech stack fingerprinting
- From IP/range: Network scanning, service discovery, reverse DNS
- From email: Domain extraction, mail server analysis
- Authentication Security: SAML (Golden SAML, XSW), OAuth2/OIDC (JWT attacks, PKCE bypass), WebAuthn/FIDO2
- SCIM Vulnerabilities: Unauthorized provisioning, filter injection, privilege escalation
- HTTP Request Smuggling: CL.TE, TE.CL, TE.TE desync attacks
- Business Logic Testing: Password reset flows, payment processing
- Infrastructure: SSL/TLS analysis, port scanning
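To illustrate the desync class above: in a CL.TE smuggle, the front-end honors Content-Length while the back-end honors Transfer-Encoding, so bytes the front-end forwarded as body become the prefix of the next request on the shared connection. A hypothetical payload (target and smuggled bytes are illustrative only):

```
POST / HTTP/1.1
Host: target.example
Content-Length: 13
Transfer-Encoding: chunked

0

SMUGGLED
```

The front-end forwards all 13 body bytes (`0\r\n\r\nSMUGGLED`); the back-end's chunked parser stops at the `0` chunk, leaving `SMUGGLED` queued as the start of the next request.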
- PostgreSQL database: Production-ready storage with full ACID compliance
- Web dashboard: Real-time scan progress and findings viewer at
http://localhost:8080 - Export formats: JSON, CSV, HTML
- Query & filter: By severity, tool, target, date range
- Statistics: Aggregate findings, trend analysis
- Auto-refresh: Dashboard updates every 5 seconds
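As a sketch of what you can do with an exported findings file: the exact export schema is not documented here, so this assumes a JSON array of findings with `severity` and `tool` keys, mirroring the query and stats features above.

```python
import json
from collections import Counter

# Hypothetical sample export (schema is an assumption, not shells' documented format)
sample_export = json.loads("""
[
  {"id": "f-1", "severity": "critical", "tool": "nuclei", "target": "example.com"},
  {"id": "f-2", "severity": "low",      "tool": "nmap",   "target": "example.com"},
  {"id": "f-3", "severity": "critical", "tool": "zap",    "target": "api.example.com"}
]
""")

# Filter by severity, like `results query --severity critical`
critical = [f for f in sample_export if f["severity"] == "critical"]

# Aggregate counts per tool, like `results stats`
by_tool = Counter(f["tool"] for f in sample_export)

print(len(critical))      # 2
print(by_tool["nuclei"])  # 1
```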
Just run ./install.sh - it handles EVERYTHING:
# Clone and install (one command does it all!)
git clone https://github.com/CodeMonkeyCybersecurity/artemis
cd artemis
./install.sh
# That's it! Now start scanning:
artemis serve --port 8080

What install.sh does automatically:
- Detects your platform (Linux/macOS)
- Updates system packages (if needed)
- Installs Go 1.24.4
- Installs PostgreSQL (brew on macOS, apt/dnf on Linux)
- Creates database and user (shells database with shells user)
- Builds the artemis binary
- Installs to /usr/local/bin/artemis
- Sets up Python workers (GraphCrawler, IDORD)
No manual PostgreSQL setup needed! The script detects if you have Docker and offers to use a container, or installs PostgreSQL natively. Everything just works.
No configuration files needed! Like kubectl, gh, and other modern CLI tools, shells uses command-line flags and environment variables - no YAML files to manage.
After installation:
# Start the web dashboard (workers auto-start)
artemis serve --port 8080
# Open http://localhost:8080 in your browser
# Or run a scan directly
artemis example.com

# Clone repository
git clone https://github.com/CodeMonkeyCybersecurity/artemis
cd artemis
# Build binary
go build -o artemis
# Optional: Install to PATH
sudo cp artemis /usr/local/bin/
sudo chmod 755 /usr/local/bin/artemis

- Go: 1.21 or higher (automatically installed by install.sh)
- PostgreSQL: 15 or higher (required for database storage)
- Python 3.8+: Optional, for GraphCrawler and IDORD workers
- Docker: Optional, for containerized deployment
- Git: For cloning and updates
For production deployments, use Docker Compose to run shells with all dependencies in a single network:
cd deployments/docker
# Start full stack (PostgreSQL + Redis + OTEL + Shells API + Workers)
docker-compose up -d
# View logs
docker-compose logs -f shells-api
# Access web dashboard at http://localhost:8080

Architecture:
- shells-api: Main API server with web dashboard (port 8080)
- webscan-worker: 3 worker instances for distributed scanning
- postgres: PostgreSQL database (single source of truth)
- redis: Job queue for worker coordination
- otel-collector: OpenTelemetry metrics collection
- nmap: Network scanning container
- zap: OWASP ZAP proxy container
Key Benefits:
- Single database: All containers share the same PostgreSQL instance
- No data duplication: Workers and API use the same dataset
- Scalable: Increase worker replicas in docker-compose.yml
- Persistent storage: Database and Redis data survive container restarts
- Network isolation: All containers in isolated Docker network
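Worker replica counts can also be pinned in docker-compose.yml rather than passed via `--scale`; a sketch, assuming the service names listed above and a Compose v2 client (which honors deploy.replicas outside Swarm):

```yaml
services:
  webscan-worker:
    deploy:
      replicas: 10   # equivalent to: docker-compose up -d --scale webscan-worker=10
```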
# Scale workers
docker-compose up -d --scale webscan-worker=10
# Stop all services
docker-compose down
# Stop and remove data volumes
docker-compose down -v

The main command runs the full orchestrated pipeline:
# Full automated workflow: Discovery → Prioritization → Testing → Reporting
./artemis example.com

# Asset discovery only
./artemis discover example.com
# Authentication testing
./artemis auth discover --target https://example.com
./artemis auth test --target https://example.com --protocol saml
./artemis auth chain --target https://example.com # Find attack chains
# SCIM security testing
./artemis scim discover https://example.com
./artemis scim test https://example.com/scim/v2 --test-all
# HTTP request smuggling
./artemis smuggle detect https://example.com
./artemis smuggle exploit https://example.com --technique cl.te
# Results querying
./artemis results query --severity critical
./artemis results stats
./artemis results export scan-12345 --format json
# Bug bounty platform integration
./artemis platform programs --platform hackerone
./artemis platform submit <finding-id> --platform bugcrowd --program my-program
./artemis platform auto-submit --severity CRITICAL
# Self-management
./artemis self update # Update to latest version
./artemis self update --branch develop # Update from specific branch

Shells integrates specialized Python tools for GraphQL and IDOR vulnerability detection:
# One-time setup (clones GraphCrawler & IDORD, creates venv)
artemis workers setup
# Start worker service
artemis workers start
# Or start API server with workers auto-started
artemis serve # Workers start automatically
# Check worker health
artemis workers status
# Stop workers
./artemis workers stop

Integrated Tools:
- GraphCrawler (gsmith257-cyber/GraphCrawler)
  - GraphQL endpoint discovery
  - Schema introspection
  - Mutation detection
  - Authorization testing
- IDORD (AyemunHossain/IDORD)
  - Automated IDOR vulnerability scanning
  - Multi-user authorization testing
  - Smart ID fuzzing (numeric, UUID, alphanumeric)
  - Authenticated testing support
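To make "smart ID fuzzing" concrete, here is an illustrative sketch of the kind of identifier classification and neighbor generation an IDOR scanner performs. This is not IDORD's actual algorithm, just the general technique: numeric IDs are enumerable, while UUIDs are not and must instead be harvested from other sessions.

```python
import uuid

def classify_id(value: str) -> str:
    """Classify an identifier the way an IDOR fuzzer might (illustrative)."""
    try:
        uuid.UUID(value)
        return "uuid"
    except ValueError:
        pass
    if value.isdigit():
        return "numeric"
    return "alphanumeric"

def neighbor_ids(value: str, radius: int = 2) -> list[str]:
    """For numeric IDs, propose nearby values to probe cross-user access.
    Non-numeric IDs return no candidates: UUIDs are not enumerable."""
    if classify_id(value) != "numeric":
        return []
    n = int(value)
    return [str(n + d) for d in range(-radius, radius + 1) if d != 0 and n + d >= 0]

print(classify_id("550e8400-e29b-41d4-a716-446655440000"))  # uuid
print(neighbor_ids("1007"))  # ['1005', '1006', '1008', '1009']
```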
Worker Architecture:
- FastAPI service wraps Python tools
- REST API for job submission and status
- Background task execution with polling
- Automatic integration when running artemis serve
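The submit-then-poll pattern described above can be sketched as follows. The status endpoint path and response shape are assumptions (the worker API is not documented here), so the HTTP call is injected as a function and stubbed:

```python
import time

def poll_job(fetch_status, job_id, interval=0.0, max_attempts=30):
    """Poll a worker job until it leaves the 'running' state.

    `fetch_status` stands in for an HTTP GET against the worker's status
    endpoint; in a real client it would hit the FastAPI service.
    """
    for _ in range(max_attempts):
        status = fetch_status(job_id)
        if status["state"] in ("completed", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish")

# Stubbed worker: reports 'running' twice, then 'completed'.
responses = iter([{"state": "running"}, {"state": "running"},
                  {"state": "completed", "findings": 3}])
result = poll_job(lambda job_id: next(responses), "job-42")
print(result["state"])  # completed
```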
No config files needed! Configure via flags or environment variables:
# Using flags
artemis example.com --log-level debug --rate-limit 20 --workers 5
# Using environment variables
export SHELLS_LOG_LEVEL=debug
export SHELLS_DATABASE_DSN="postgres://user:pass@localhost:5432/shells"
export SHELLS_REDIS_ADDR="localhost:6379"
export SHELLS_WORKERS=5
export SHELLS_RATE_LIMIT=20
artemis example.com
# Common configuration flags
artemis --help
--db-dsn PostgreSQL connection (default: postgres://shells:shells_password@localhost:5432/shells)
--log-level Log level: debug, info, warn, error (default: error)
--log-format Log format: json, console (default: console)
--redis-addr Redis server address (default: localhost:6379)
--workers Number of worker processes (default: 3)
--rate-limit Requests per second (default: 10)
--rate-burst Rate limit burst size (default: 20)
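The --rate-limit/--rate-burst pair describes a token bucket: each request drains one token, tokens refill at the rate limit, and the burst size caps how many can accumulate. A minimal sketch of those semantics (not shells' actual implementation):

```python
class TokenBucket:
    def __init__(self, rate: float, burst: int):
        self.rate = rate          # tokens added per second (--rate-limit)
        self.capacity = burst     # maximum stored tokens (--rate-burst)
        self.tokens = float(burst)
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, burst=20)   # the documented defaults
# A burst of 20 requests at t=0 is allowed; the 21st is rejected.
allowed = sum(bucket.allow(0.0) for _ in range(21))
print(allowed)              # 20
print(bucket.allow(0.1))    # True: 0.1 s at 10 tokens/s refills one token
```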
# API keys (environment variables only - never use flags!)
export SHODAN_API_KEY="your-key"
export CENSYS_API_KEY="your-key"
export CENSYS_SECRET="your-secret"

Project structure:
/cmd/        - CLI commands (Cobra)
/internal/   - Internal packages
  config/        - Configuration structs (populated from flags/env vars)
  database/      - PostgreSQL storage layer
  discovery/     - Asset discovery modules
  orchestrator/  - Bug bounty workflow engine
  logger/        - Structured logging (otelzap)
/pkg/        - Public packages
  auth/          - Authentication testing (SAML, OAuth2, WebAuthn)
  scim/          - SCIM vulnerability testing
  smuggling/     - HTTP request smuggling detection
  discovery/     - Asset discovery utilities
- Go: Performance and reliability
- PostgreSQL: Production-ready database with ACID compliance
- Cobra + Viper: CLI framework with flags/env var support
- OpenTelemetry: Observability and tracing
- Context: Proper cancellation and timeouts
# Run all tests
make test
# Run specific package tests
go test ./pkg/auth/...
go test ./pkg/scim/...
# With coverage
go test -cover ./...
# Verify build
make check # Runs fmt, vet, and test

See docs/TESTING.md for a comprehensive testing guide, including IPv6 verification.
- New Scanner Command:
  - Add command in /cmd/
  - Follow existing patterns (see cmd/auth.go)
  - Register in init() function
  - Add tests
- New Scanner Plugin:
  - Create directory in /internal/plugins/
  - Implement plugin interface
  - Add configuration options
  - Register in worker system
See CLAUDE.md for detailed development guidance including:
- Collaboration principles
- Code standards
- Priority system (P0-P3)
- Testing guidelines
make deps # Download dependencies
make build # Build binary
make dev # Build with race detection
make test # Run tests
make check # Run fmt, vet, test (pre-commit)
make fmt # Format code
make vet # Check for issues
make clean # Remove binary

- Mail Server Testing: Planned for v1.1.0
- Advanced API Testing: Planned for v1.1.0
- Test Coverage: Currently ~8%, targeting 50% for v1.2.0
- cmd/root.go is 3,169 lines (refactoring in progress)
- Some TODO markers in codebase
- See CLAUDE.md for complete technical debt inventory
- Optimized for thoroughness over speed
- Rate limiting prevents target overload
- Parallel scanning of discovered assets
This tool is for authorized security testing only:
- Always obtain explicit permission before scanning
- Respect rate limits and terms of service
- Follow responsible disclosure practices
- Never use against production systems without authorization
- Verify scope before running automated scans
- No hardcoded credentials
- SQL injection protection (parameterized queries)
- SSRF protection in HTTP client
- Context cancellation prevents hangs
- Graceful error handling
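SSRF protection typically means resolving a URL's host and refusing private, loopback, and link-local addresses before fetching. A sketch of that check (an assumption about the general technique, not shells' exact implementation):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs whose host resolves to a private/loopback/link-local IP."""
    host = urlparse(url).hostname
    if not host:
        return False
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return False
    return True

print(is_safe_url("http://127.0.0.1/admin"))       # False: loopback
print(is_safe_url("http://169.254.169.254/meta"))  # False: link-local (cloud metadata)
```

Note that a hardened client would also re-check the IP at connect time to defeat DNS rebinding.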
See docs/BUG-BOUNTY-GUIDE.md for complete workflow guide.
Typical Usage:
- Research target scope
- Run discovery: ./artemis discover target.com
- Review discovered assets
- Run full scan: ./artemis target.com
- Query findings: ./artemis results query --severity high
- Export evidence: ./artemis results export scan-id --format json
- Verify findings manually
- Submit responsible disclosure
We welcome contributions! Please:
- Read CLAUDE.md for development guidelines
- Follow existing code patterns
- Add tests for new functionality
- Run make check before committing
- Write clear commit messages
- Focus on sustainable, maintainable solutions
Philosophy: We prioritize human-centric security, evidence-based approaches, and collaboration. See CLAUDE.md for our working principles.
- CLAUDE.md - Development guide and collaboration principles
- docs/BUG-BOUNTY-GUIDE.md - Bug bounty workflow
- docs/TESTING.md - Testing and verification guide
- Modular Architecture: Clean architecture with dependency injection and plugin system
- Multiple Scanner Integration:
- Network & Infrastructure: Nmap (port scanning, service detection), SSL/TLS analysis
- Web Application Security: OWASP ZAP, Nikto, directory/file discovery
- Advanced Reconnaissance: httpx (HTTP probing), DNS enumeration
- Vulnerability Assessment: Nuclei (template-based scanning), OpenVAS
- OAuth2/Authentication Testing: Comprehensive OAuth2/OIDC security assessment
- API Security: GraphQL introspection, batching attacks, complexity analysis
- JavaScript Analysis: Secret extraction, library vulnerability detection, DOM XSS sinks
- Workflow Engine: Complex multi-stage scanning pipelines
- Distributed Scanning: Redis-based job queue with worker pools
- Observability: OpenTelemetry integration with structured logging via otelzap
- Result Management: Normalized result schema with PostgreSQL storage
- Deployment Ready: Docker containers and Nomad job specifications
- Security Features: Rate limiting, scope validation, audit trails
[Add license information]
- Issues: GitHub Issues
- Documentation: See /docs directory
- Contact: Code Monkey Cybersecurity
Remember: "Cybersecurity. With humans." - This tool assists security researchers, it doesn't replace human judgment and expertise.