GitHub Docs MCP Server

A Model Context Protocol (MCP) server that lets an agent (or LLM tool) answer natural-language questions about public GitHub repositories in real time. It combines up-to-date repository documentation, code, and issues with OpenAI GPT-4 or GPT-3.5-turbo to generate answers.

🤖 OpenAI Integration

This server integrates with OpenAI's GPT models to provide intelligent, context-aware answers to questions about GitHub repositories. The AI analyzes repository documentation, code, and issues to generate comprehensive responses.

Features

  • AI-Powered Responses: Uses GPT-4 or GPT-3.5-turbo to generate intelligent answers
  • Context-Aware: Analyzes repository docs, code snippets, and issues
  • Markdown Formatting: Returns well-formatted answers with proper structure
  • Source Links: Includes links back to original GitHub files and issues
  • Fallback Support: Gracefully falls back to template responses if OpenAI API fails
  • Configurable: Supports different OpenAI models and parameters

Setup

  1. Get an OpenAI API Key

    • Visit the OpenAI platform (https://platform.openai.com)
    • Create a new API key
    • Ensure you have credits available
  2. Configure Environment Variables (a sketch of how the server might load these variables follows this list)

    # Copy the example environment file
    cp .env.example .env
    
    # Edit .env and add your keys
    OPENAI_API_KEY=your_openai_api_key_here
    GITHUB_TOKEN=your_github_token_here
    
    # Optional: Configure model preferences
    OPENAI_MODEL=gpt-4  # or gpt-3.5-turbo
    OPENAI_MAX_TOKENS=1500
    OPENAI_TEMPERATURE=0.3
  3. Install Dependencies

    pip install -r requirements.txt
  4. Validate Installation

    python validate_openai.py
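
As a rough illustration of step 2, the server's configuration loading might look like the following Python sketch. It assumes python-dotenv is a dependency and mirrors the variable names from .env above; the actual module layout under src/github_docs_mcp may differ.

import os

from dotenv import load_dotenv  # assumption: python-dotenv is installed

load_dotenv()  # read key=value pairs from .env into the process environment

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")        # required for AI answers
GITHUB_TOKEN = os.getenv("GITHUB_TOKEN")            # used for GitHub API access
OPENAI_MODEL = os.getenv("OPENAI_MODEL", "gpt-4")   # or gpt-3.5-turbo
OPENAI_MAX_TOKENS = int(os.getenv("OPENAI_MAX_TOKENS", "1500"))
OPENAI_TEMPERATURE = float(os.getenv("OPENAI_TEMPERATURE", "0.3"))

if not OPENAI_API_KEY:
    # The server still runs without a key; it serves template fallback answers.
    print("OPENAI_API_KEY not set; AI answers disabled, using fallback templates")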

🚀 Usage

Start the Server

python -m src.github_docs_mcp.main

The server will start on http://localhost:8000

Check Service Status

curl http://localhost:8000/status

This endpoint shows the status of all services including OpenAI integration:

{
  "server": {
    "name": "github-docs-qa",
    "version": "1.0.0",
    "status": "running"
  },
  "services": {
    "openai_service": {
      "status": "healthy",
      "model": "gpt-4",
      "features": {
        "ai_answers": true,
        "fallback_answers": true
      }
    }
  }
}
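
From code, the same health check is a single GET request. A minimal sketch using the requests package (an assumption; any HTTP client works), keyed to the JSON shape above:

import requests  # assumption: the requests package is installed

resp = requests.get("http://localhost:8000/status", timeout=10)
resp.raise_for_status()
status = resp.json()

# Verify the OpenAI integration before sending questions.
print(status["services"]["openai_service"]["status"])  # expected: "healthy"
print(status["services"]["openai_service"]["model"])   # e.g. "gpt-4"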

Ask Questions

Send POST requests to /ask with repository questions:

curl -X POST http://localhost:8000/ask \
  -H "Content-Type: application/json" \
  -d '{
    "repository": "microsoft/vscode",
    "question": "How do I create a VS Code extension?",
    "include_code": true,
    "include_issues": false,
    "max_results": 5
  }'
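
The same request from Python, using the requests package (an assumption; the payload fields match the curl example above):

import requests  # assumption: the requests package is installed

payload = {
    "repository": "microsoft/vscode",
    "question": "How do I create a VS Code extension?",
    "include_code": True,
    "include_issues": False,
    "max_results": 5,
}
resp = requests.post("http://localhost:8000/ask", json=payload, timeout=60)
resp.raise_for_status()
result = resp.json()
print(result["answer"])      # Markdown-formatted answer
print(result["confidence"])  # e.g. 0.85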

Response Format

The AI generates responses in Markdown format with:

  • Structured answers with headers and sections
  • Code examples with proper syntax highlighting
  • Source links to original GitHub files
  • Contextual information based on repository content

Example response:

{
  "question": "How do I create a VS Code extension?",
  "answer": "# Creating a VS Code Extension\n\nTo create a VS Code extension, you'll need to...\n\n## Getting Started\n\n1. Install the required tools\n2. Generate the extension scaffold\n3. Configure your extension\n\n## 📚 Sources\n\n1. **Documentation:** [extension-authoring.md](https://github.com/microsoft/vscode/blob/main/docs/extension-authoring.md)\n2. **Code:** [example-extension.js](https://github.com/microsoft/vscode/blob/main/examples/example-extension.js)",
  "repository": "microsoft/vscode",
  "sources": [...],
  "confidence": 0.85,
  "processing_time_ms": 2500
}

🔄 Fallback Behavior

When the OpenAI API is unavailable or a call fails, the server automatically falls back to template-based responses (the pattern is sketched after this list):

  • No API Key: Uses structured templates with source content
  • API Errors: Gracefully handles rate limits and other errors
  • Network Issues: Continues serving requests with fallback responses
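
A minimal sketch of this fallback pattern, assuming the openai Python package (v1+); ask_openai and build_template_answer are hypothetical names standing in for the server's real helpers:

import openai  # assumption: the openai package (v1+) is installed

def ask_openai(question: str, context: str) -> str:
    client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"{question}\n\nContext:\n{context}"}],
        max_tokens=1500,
        temperature=0.3,
    )
    return response.choices[0].message.content

def build_template_answer(question: str, context: str) -> str:
    # Hypothetical template: structure the raw source content without AI.
    return f"## {question}\n\n{context[:1000]}\n\n_(Template response; OpenAI unavailable.)_"

def answer_question(question: str, context: str) -> str:
    """Prefer an AI answer; fall back to a template on any API failure."""
    try:
        return ask_openai(question, context)
    except openai.OpenAIError:
        # Covers missing/invalid keys, rate limits, and network failures alike.
        return build_template_answer(question, context)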

🧪 Testing

Test the OpenAI integration:

# Run validation checks
python validate_openai.py

# Test integration with live server
python test_openai_integration.py

# Run full test suite
python test_runner.py

📊 Monitoring

Monitor OpenAI usage through:

  • Service Status: /status endpoint shows AI service health
  • Server Logs: Detailed logging of AI interactions
  • Response Analysis: Check for AI vs fallback responses

⚙️ Configuration

OpenAI Models

Supported models:

  • gpt-4 (recommended, higher quality)
  • gpt-3.5-turbo (faster, lower cost)

Parameters

  • Max Tokens: Control response length (default: 1500)
  • Temperature: Control creativity (default: 0.3 for factual responses)
  • Timeout: API call timeout (configured automatically)
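
These knobs map onto the OpenAI client roughly as follows. A sketch assuming the openai v1 Python package; the values mirror the .env defaults from Setup, and the 30-second timeout is illustrative:

from openai import OpenAI  # assumption: openai package v1+

client = OpenAI(timeout=30.0)  # explicit per-client timeout; the server sets this for you

response = client.chat.completions.create(
    model="gpt-4",    # OPENAI_MODEL: or "gpt-3.5-turbo" for speed/cost
    max_tokens=1500,  # OPENAI_MAX_TOKENS: caps the length of the answer
    temperature=0.3,  # OPENAI_TEMPERATURE: low values keep answers factual
    messages=[{"role": "user", "content": "Summarize this repository's purpose."}],
)
print(response.choices[0].message.content)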

Rate Limiting

The server respects OpenAI rate limits and handles:

  • Rate limit errors: Automatic fallback to templates
  • Token limits: Smart content truncation
  • Cost management: Configurable token limits
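
"Smart content truncation" could be implemented along these lines; this sketch assumes the tiktoken package for exact token counts (a plain character budget also works):

import tiktoken  # assumption: tiktoken is installed for accurate token counts

def truncate_to_tokens(text: str, max_tokens: int, model: str = "gpt-4") -> str:
    """Trim repository context so the prompt stays within the token budget."""
    encoding = tiktoken.encoding_for_model(model)
    tokens = encoding.encode(text)
    if len(tokens) <= max_tokens:
        return text
    return encoding.decode(tokens[:max_tokens])

# Example: keep the combined docs/code context under ~6000 tokens.
docs = "...repository documentation and code snippets..."
prompt_context = truncate_to_tokens(docs, 6000)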

🔧 Troubleshooting

Common Issues

  1. "OpenAI service unavailable"

    • Check API key in .env file
    • Verify OpenAI account has credits
    • Test API key with OpenAI directly
  2. Slow responses

    • Try gpt-3.5-turbo for faster responses
    • Reduce max_results in requests
    • Check network connectivity
  3. Rate limit errors

    • Upgrade OpenAI plan for higher limits
    • Implement request queuing if needed
    • Monitor usage in OpenAI dashboard

Debug Mode

Enable detailed logging:

DEBUG=true LOG_LEVEL=debug python -m src.github_docs_mcp.main

🛡️ Security

  • API Keys: Never commit API keys to version control
  • Environment Variables: Store sensitive data in .env files
  • Request Validation: All inputs are validated and sanitized
  • Error Handling: Sensitive information is not exposed in error messages
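
For instance, validation of the /ask payload might use a Pydantic model like this sketch (field names follow the curl example above; the constraints are illustrative, not the server's actual rules):

from pydantic import BaseModel, Field  # assumption: pydantic v2 is a dependency

class AskRequest(BaseModel):
    repository: str = Field(pattern=r"^[\w.-]+/[\w.-]+$")  # owner/repo format
    question: str = Field(min_length=1, max_length=2000)
    include_code: bool = True
    include_issues: bool = False
    max_results: int = Field(default=5, ge=1, le=20)

# Malformed input raises a ValidationError before any GitHub or OpenAI call.
req = AskRequest(repository="microsoft/vscode", question="How do I build?")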

📈 Performance

Typical response times:

  • GPT-4: 2-5 seconds
  • GPT-3.5-turbo: 1-3 seconds
  • Fallback: < 1 second

Optimization tips:

  • Use gpt-3.5-turbo for speed-critical applications
  • Limit max_results to reduce context size
  • Cache frequently asked questions
  • Monitor token usage to optimize costs
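
Caching frequent questions can be as simple as memoization. A sketch (generate_answer is a hypothetical stand-in for the server's handler, and a real cache would also expire entries as repositories change):

from functools import lru_cache

def generate_answer(repository: str, question: str) -> str:
    # Hypothetical stand-in for the server's full docs + OpenAI pipeline.
    return f"Answer for {question!r} on {repository}"

@lru_cache(maxsize=256)
def cached_answer(repository: str, question: str) -> str:
    """Reuse answers for repeat (repository, question) pairs, skipping OpenAI calls."""
    return generate_answer(repository, question)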

🤝 Contributing

When contributing to OpenAI features:

  1. Test with both GPT-4 and GPT-3.5-turbo
  2. Ensure fallback behavior works correctly
  3. Add appropriate error handling
  4. Update documentation and examples
  5. Test with various repository types and question formats

📄 License

MIT License - see LICENSE for details.
