Community-maintained database of MCP server audit results and security assessments. Contains structured audit findings, compliance reports, and security ratings to help organizations evaluate MCP server safety and make informed deployment decisions.
The Audit Database is a transparent, community-driven repository of comprehensive MCP server audits. Unlike traditional vulnerability databases that only report final conclusions, this database emphasizes complete transparency by requiring full audit methodology, supporting evidence, and reproducible findings.
Every audit entry must provide sufficient detail for independent verification and reproduction, including the prompts used, reasoning processes, supporting evidence, and complete methodology. This transparency ensures that audit findings can be validated, challenged, and improved by the community, creating a trusted foundation for MCP server security assessments.
- Open Methodology: All audit approaches, prompts, and reasoning must be fully documented
- Evidence-Based: Every finding must be supported by concrete evidence and reproduction steps
- Process Documentation: Complete audit conversations and decision-making processes should be included
- No Black Box Results: Simple "good" or "bad" conclusions without supporting rationale are not acceptable
- Open Source Targets: Only audits of open source MCP servers are accepted to ensure reproducible analysis
- Detailed Steps: All audit procedures must be documented with sufficient detail for reproduction
- Version Specificity: Audits must specify exact versions, commits, or releases of audited servers
- Environment Documentation: Testing environments and configurations must be clearly documented
- Future Validation: Reproducible audits enable future re-auditing to verify whether identified problems have been fixed
- Peer Review: Audit submissions undergo community review and validation
- Challenge Process: Mechanisms for questioning and refining audit findings
- Iterative Improvement: Support for updated audits as servers evolve and improve
- Quality Standards: Consistent standards for audit quality and completeness
The audit database uses a hierarchical structure designed for scalability, searchability, and standardization:
```
audit-db/
├── audits/
│   └── [domain]/
│       └── [org]/
│           └── [repo]/
│               ├── metadata.json                      # Repo-level metadata
│               └── audits/
│                   ├── [auditor]-[date]-[seq]/        # Individual audit folders
│                   │   ├── audit-manifest.json        # Standardized metadata
│                   │   ├── security-assessment.md     # Main report
│                   │   ├── findings/                  # Structured findings
│                   │   │   ├── high-001-credential-exposure.md
│                   │   │   ├── medium-002-input-validation.md
│                   │   │   └── info-003-dependency-analysis.md
│                   │   ├── artifacts/                 # Supporting files
│                   │   │   ├── code-samples/
│                   │   │   ├── network-diagrams/
│                   │   │   └── dependency-tree.json
│                   │   └── raw-notes.md               # Working notes
│                   └── [auditor]-[date]-[seq]/        # Additional audits
├── indexes/
│   ├── by-auditor.json
│   ├── by-date.json
│   ├── by-severity.json
│   └── by-repo.json
├── templates/
│   ├── audit-manifest.schema.json
│   ├── security-assessment.template.md
│   └── finding.template.md
└── tools/
    ├── generate-index.py
    └── validate-audit.py
```
GitHub Repository Audit:
```
audits/github.com/makenotion/notion-mcp-server/
├── metadata.json
└── audits/
    ├── kurtseifried-2025-08-07-001/       # First audit on this date
    ├── kurtseifried-2025-08-07-002/       # Second audit on this date
    └── automated-scan-2025-08-07-003/     # Automated scan
```
Other Source Control Systems:
```
audits/gitlab.com/company/mcp-server/
audits/bitbucket.org/team/server/
audits/codeberg.org/user/project/
```
- Hierarchical Organization: Domain → Organization → Repository → Individual Audits
- Version Control Friendly: Structure supports git operations and branching
- Scalable: Handles growth in audits, auditors, and repositories
- Collision-Free: Sequence numbers prevent conflicts from multiple audits per day (see the path sketch below)
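For illustration, here is a minimal sketch, using only the Python standard library, of how these conventions map a repository URL and audit identifiers onto a database path (the `audit_dir` helper is hypothetical, not part of the repository tooling):

```python
from urllib.parse import urlparse

def audit_dir(repo_url: str, auditor: str, date: str, seq: int) -> str:
    """Map a repo URL onto the Domain -> Organization -> Repository hierarchy."""
    parsed = urlparse(repo_url)
    repo_path = parsed.path.strip("/")  # e.g. "makenotion/notion-mcp-server"
    # Zero-padded sequence numbers keep same-day audits collision-free.
    return f"audits/{parsed.netloc}/{repo_path}/audits/{auditor}-{date}-{seq:03d}/"

print(audit_dir("https://github.com/makenotion/notion-mcp-server",
                "kurtseifried", "2025-08-07", 1))
# audits/github.com/makenotion/notion-mcp-server/audits/kurtseifried-2025-08-07-001/
```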
- by-auditor.json: Find all audits by a specific auditor
- by-date.json: Chronological index of all audits
- by-severity.json: Index of findings by severity level
- by-repo.json: Repository-focused view with latest audit status
- Generated Automatically: Tools maintain indexes as audits are added (an illustrative entry is shown below)
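The index schemas are maintained by the tooling; as a purely illustrative example (this structure is an assumption, not a published schema), a `by-severity.json` entry might look like:

```json
{
  "high": [
    {
      "finding_id": "high-001-credential-exposure",
      "audit_id": "kurtseifried-2025-08-07-001",
      "repo": "github.com/makenotion/notion-mcp-server"
    }
  ]
}
```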
- audit-manifest.schema.json: JSON schema for audit metadata validation
- security-assessment.template.md: Template for main audit reports
- finding.template.md: Standardized finding documentation format
- Quality Assurance: Ensures consistency across all audit submissions
- generate-index.py: Rebuilds search indexes from audit data
- validate-audit.py: Validates audit submissions against templates and schemas
- Integration Ready: Scripts support CI/CD integration and quality gates (see the sketch below)
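As a hedged sketch of the index-generation step (the actual `generate-index.py` may differ), the core loop walks the `audits/` tree, loads each `audit-manifest.json`, and rewrites an index file:

```python
import json
from pathlib import Path

def generate_by_date_index(root: Path = Path("audit-db")) -> None:
    """Rebuild indexes/by-date.json from all audit manifests (illustrative sketch)."""
    entries = []
    for manifest_path in (root / "audits").rglob("audit-manifest.json"):
        manifest = json.loads(manifest_path.read_text())
        entries.append({
            "audit_id": manifest["audit_id"],
            "date": manifest["audit_metadata"]["start_date"],
            "repo": manifest["target"]["repo_url"],
        })
    entries.sort(key=lambda e: e["date"])  # chronological ordering for by-date.json
    (root / "indexes" / "by-date.json").write_text(json.dumps(entries, indent=2))

if __name__ == "__main__":
    generate_by_date_index()
```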
Each audit includes a standardized `audit-manifest.json` file:
```json
{
  "audit_id": "kurtseifried-2025-08-07-001",
  "auditor": {
    "name": "Kurt Seifried",
    "github": "kurtseifried",
    "organization": "Cloud Security Alliance",
    "contact": "kurt@example.org"
  },
  "target": {
    "repo_url": "https://github.com/makenotion/notion-mcp-server",
    "commit_hash": "a1b2c3d4e5f67890abcdef1234567890abcdef12",
    "version": "v1.8.1",
    "audit_scope": ["security", "architecture", "dependencies"],
    "audit_depth": "comprehensive"
  },
  "audit_metadata": {
    "start_date": "2025-08-07T10:00:00Z",
    "completion_date": "2025-08-07T16:30:00Z",
    "status": "completed",
    "audit_type": "manual_security_review",
    "time_spent_hours": 4.5,
    "methodology": "MCP Security Framework v1.0"
  },
  "findings_summary": {
    "critical": 0,
    "high": 2,
    "medium": 3,
    "low": 1,
    "info": 2,
    "total_issues": 8
  },
  "tools_used": [
    "manual_code_review",
    "dependency_analysis",
    "static_analysis",
    "threat_modeling"
  ],
  "compliance_checks": {
    "mcp_security_baseline": "partial",
    "owasp_top_10": "addressed",
    "supply_chain_security": "reviewed"
  },
  "references": {
    "related_audits": [],
    "external_reports": [],
    "vulnerability_references": []
  }
}
```
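Manifest validation against `templates/audit-manifest.schema.json` can be automated; below is a minimal sketch using the third-party `jsonschema` package (the real `validate-audit.py` interface may differ):

```python
import json
import sys
from pathlib import Path

from jsonschema import ValidationError, validate  # pip install jsonschema

SCHEMA = Path("audit-db/templates/audit-manifest.schema.json")

def validate_manifest(manifest_file: str) -> bool:
    """Check one audit-manifest.json against the repository's JSON schema."""
    manifest = json.loads(Path(manifest_file).read_text())
    schema = json.loads(SCHEMA.read_text())
    try:
        validate(instance=manifest, schema=schema)
    except ValidationError as err:
        print(f"FAIL {manifest_file}: {err.message}")
        return False
    print(f"OK   {manifest_file}")
    return True

if __name__ == "__main__":
    sys.exit(0 if validate_manifest(sys.argv[1]) else 1)
```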
Individual findings follow a standardized template:
```markdown
# Finding: [Title]

**Finding ID**: high-001-credential-exposure
**Severity**: High
**Category**: Authentication & Authorization
**CWE**: CWE-798 (Use of Hard-coded Credentials)
**CVSS Score**: 7.5 (if applicable)

## Executive Summary
Brief description suitable for management review.

## Technical Description
Detailed technical analysis of the security issue.

## Evidence
- Code snippets
- Screenshots
- Log entries
- Reproduction steps

## Impact Assessment
- **Confidentiality**: High/Medium/Low
- **Integrity**: High/Medium/Low
- **Availability**: High/Medium/Low
- **Exploitability**: High/Medium/Low
- **Scope**: Specific impact scope

## Affected Components
- File: `src/component.ts` (lines 123-145)
- Function: `handleAuthentication()`
- Dependency: `vulnerable-library@1.2.3`

## Reproduction Steps
1. Step-by-step reproduction instructions
2. Expected vs actual behavior
3. Environmental requirements

## Risk Scenarios
Specific attack scenarios and their potential impact.

## Recommendations

### Immediate Actions
- [ ] Priority fixes that should be implemented immediately

### Short-term Improvements
- [ ] Improvements for next release cycle

### Long-term Strategic Changes
- [ ] Architectural or process improvements

## Remediation Validation
Steps to verify that remediation is effective.

## References
- External security resources
- Related findings
- Vendor documentation

## Status Tracking
- [x] Identified: 2025-08-07
- [x] Documented: 2025-08-07
- [ ] Reported to maintainers:
- [ ] Acknowledged by maintainers:
- [ ] Fix available:
- [ ] Fix verified:
- [ ] Closed:

## Auditor Notes
Additional context, assumptions, or limitations of the finding.
```
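Finding filenames encode a severity, a zero-padded sequence number, and a short slug (for example, `high-001-credential-exposure.md`); here is a small hypothetical helper for parsing them, of the kind the severity index could be built on:

```python
import re

# Matches finding filenames such as high-001-credential-exposure.md
FINDING_RE = re.compile(r"^(critical|high|medium|low|info)-(\d{3})-([a-z0-9-]+)\.md$")

def parse_finding_filename(name: str) -> dict | None:
    """Split a finding filename into severity, sequence number, and slug."""
    m = FINDING_RE.match(name)
    if m is None:
        return None
    severity, seq, slug = m.groups()
    return {"severity": severity, "seq": int(seq), "slug": slug}

print(parse_finding_filename("high-001-credential-exposure.md"))
# {'severity': 'high', 'seq': 1, 'slug': 'credential-exposure'}
```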
- Target Server: Name, version, repository URL, and specific commit hash
- Audit Purpose: Objective and scope of the audit (security, functionality, compliance)
- Auditor Information: Contributor identity and relevant expertise
- Audit Date: When the audit was conducted
- Methodology: High-level approach and frameworks used
- Initial Prompts: Starting prompts and instructions used to begin the audit
- Conversation Transcripts: Complete or substantial portions of audit conversations
- Decision Points: Key decision-making moments and rationale
- Tool Usage: Any automated tools or scripts used during the audit
- Environmental Setup: Testing environment and configuration details
- Specific Findings: Detailed description of each finding with severity classification
- Supporting Evidence: Code snippets, screenshots, logs, or other proof of findings
- Reproduction Steps: Complete steps needed to reproduce each finding
- Impact Assessment: Analysis of potential impact and exploitability
- Remediation Guidance: Specific recommendations for addressing findings
- Testing Evidence: Proof that claimed functionality was actually tested
- False Positive Analysis: Discussion of potential false positives and validation steps
- Limitations: Acknowledgment of audit scope limitations and areas not covered
- Confidence Levels: Auditor confidence in various findings and assessments
- Complete Conversations: Full audit conversations when context allows
- Research Notes: Background research and preliminary analysis
- Comparative Analysis: Comparison with similar servers or alternatives
- Follow-up Actions: Planned or recommended follow-up audits or improvements
- Response to Feedback: Auditor responses to community questions and challenges
- Updates and Revisions: Revised findings based on community input
- Cross-References: Links to related audits or vulnerability reports
- Discussion Links: References to relevant community discussions
As the community contributes audits and develops better techniques, these improvements must be integrated back into the broader ecosystem:
- mcpserver-audit Integration: Successful audit methodologies and techniques discovered through community contributions should be incorporated into the mcpserver-audit tool
- Tool Enhancement: Automated tools and scripts that prove effective in community audits should be integrated into the audit tool suite
- Standard Development: Quality standards and best practices developed through community experience should be formalized in the audit tooling
- Feedback Loop: The audit database serves as a testing ground for new audit approaches that can then be systematized and automated
This ensures that the manual audit efforts in the database continuously improve the automated audit capabilities, creating a virtuous cycle of security assessment enhancement.
- Comprehensive Coverage: Audit addresses all stated objectives and scope
- Methodology Documentation: Complete description of audit approach and tools used
- Evidence Provision: Sufficient evidence to support all findings and conclusions
- Reproduction Details: Enough detail for independent reproduction of findings
- Verified Findings: All findings have been validated and tested
- False Positive Management: Clear discussion of potential false positives
- Version Accuracy: Findings accurately reflect the specific version audited
- Environmental Consistency: Results are consistent with the documented test environment
- Open Process: Audit methodology and decision-making process is fully documented
- Bias Acknowledgment: Clear statement of any potential conflicts of interest
- Limitation Recognition: Honest assessment of audit limitations and scope
- Community Accessibility: Documentation is clear and accessible to the community
- Initial Validation: Check for completeness and format compliance
- Technical Review: Verify technical accuracy and reproducibility
- Community Review: Open review period for community feedback
- Final Acceptance: Approval for inclusion in the database
- Regular Updates: Mechanism for updating audits as servers evolve
- Community Challenges: Process for questioning and refining findings
- Quality Improvement: Continuous improvement of audit standards and processes
- Archive Management: Handling of outdated or superseded audits
- Quality First: Prioritize thoroughness and accuracy over speed
- Document Everything: Include all relevant process documentation and evidence
- Be Responsive: Engage with community feedback and questions
- Continuous Learning: Incorporate feedback to improve future audits
- Constructive Feedback: Provide specific, actionable feedback on submissions
- Verify Claims: Attempt to reproduce findings when possible
- Challenge Respectfully: Question findings constructively and professionally
- Improve Standards: Contribute to improving audit quality standards
- Quality Audits: Recognition for high-quality, thorough audit contributions
- Community Value: Acknowledgment of audits that provide significant community value
- Reproducibility: Special recognition for audits that enable successful reproduction
- Continuous Contribution: Recognition for ongoing participation and improvement
- Shared Learning: Community learns from audit methodologies and findings
- Improved Security: Better security outcomes through transparent audit processes
- Standard Setting: Development of community standards and best practices
- Ecosystem Improvement: Feedback loops that improve the overall MCP ecosystem
- mcpserver-audit: Automated audit reports from the audit tool
- Independent Auditors: Manual audits from security researchers and practitioners
- Community Contributions: Audits from users and organizations
- Academic Research: Formal research and analysis of MCP server security
- mcpserver-finder: Historical audit data to inform server recommendations
- vulnerability-db: Cross-reference with known vulnerabilities and security issues
- Community Dashboard: Public visualization of audit results and trends
- Research Projects: Data for academic research and security analysis
- Open Repository: Full audit database is publicly accessible
- Search Capabilities: Comprehensive search across audits, findings, and evidence
- API Access: Programmatic access for tool integration and analysis
- Export Functions: Data export for research and analysis purposes
- Tool Integration: APIs for integration with security tools and workflows
- Automated Queries: Support for automated audit result queries (see the sketch after this list)
- Notification System: Alerts for new audits of servers of interest
- Trending Analysis: Identification of common issues and improvement trends
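Until a formal API is published, the JSON indexes themselves are the integration surface. Here is a hedged sketch of an automated query against `by-repo.json` (the index layout used here is an assumption, not a specification):

```python
import json
from pathlib import Path

INDEX = Path("audit-db/indexes/by-repo.json")

def latest_audit_for(repo: str) -> dict | None:
    """Look up a repository's latest audit entry from the by-repo index.

    Assumes the index maps repo identifiers to entries with a 'latest_audit' field.
    """
    index = json.loads(INDEX.read_text())
    return index.get(repo)

entry = latest_audit_for("github.com/makenotion/notion-mcp-server")
if entry is not None:
    print(entry["latest_audit"])
```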
We welcome contributions from the security community, including:
- Comprehensive Audits: Thorough security assessments of MCP servers
- Methodology Improvements: Better approaches and techniques for MCP server auditing
- Tool Development: Tools that enhance audit quality and reproducibility
- Standard Development: Contributions to audit quality standards and processes
- Community Moderation: Help with review processes and quality assurance
- Review Standards: Familiarize yourself with audit quality standards
- Choose Target: Select an open source MCP server for audit
- Follow Template: Use provided templates for consistent audit structure
- Submit for Review: Submit audit for community review and feedback
- Engage with Community: Respond to feedback and participate in discussions
Part of the Model Context Protocol Security initiative - A Cloud Security Alliance community project.