When production fails, the on-call engineer is in a panic. The only question that matters is "What changed?"
The "Code Red" Assistant is a Model Context Protocol (MCP) server built to answer that question in seconds. It acts as a "smart context engine" that finds the "needle in the haystack" by correlating production alerts with the exact code deployments and feature flags that caused them.
The easiest way to see the project's value is to run the interactive Streamlit showcase. It demonstrates the "dumb" AI (before) vs. the "smart" AI (after) powered by our MCP server.
- Terminal 1 (Backend): Start the MCP server.

  ```bash
  # (Activate your venv first: source venv/bin/activate)
  uvicorn main:app --reload
  ```

- Terminal 2 (Frontend): Run the Streamlit showcase.

  ```bash
  # (Activate your venv first: source venv/bin/activate)
  streamlit run showcase.py
  ```
This project is a perfect example of the judging criteria: Contextual Intelligence, Clever Integration, and Efficiency (Signal vs. Noise).
Without our server, an AI assistant has no visibility into your live systems and is useless in an outage.
- Query: "Why is the 'auth-service' failing?"
- AI Response: "I'm sorry, I don't have access to your live system status or deployment logs."
Our MCP server intercepts the query and performs its "smart context" retrieval:
- **Infer Context:** The server's Correlation Engine infers that the user needs the cause of the failure.
- **Combine Sources:** It fetches the alert time from Datadog, then searches GitHub and LaunchDarkly for any events that happened in the 15 minutes leading up to the alert.
- **Clever Integration:** The "magic" is the correlation: the server finds the direct link between the alert and the specific deployment that caused it.
- **High-Signal Context:** It assembles a high-signal, low-noise context package, filtering out thousands of irrelevant logs to provide only the critical information.
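The time-window filter at the heart of the correlation step can be sketched as follows. This is a minimal illustration; the function name, event shape, and window constant are assumptions, not the project's actual internals:

```python
from datetime import datetime, timedelta

# Illustrative sketch of the Correlation Engine's time-window filter.
# The event dict shape and 15-minute window are assumptions from the
# README's description, not the real server's API.
CORRELATION_WINDOW = timedelta(minutes=15)

def correlate(alert_time: datetime, events: list[dict]) -> list[dict]:
    """Keep only events that landed in the window just before the alert."""
    window_start = alert_time - CORRELATION_WINDOW
    suspects = [e for e in events if window_start <= e["timestamp"] <= alert_time]
    # Most recent change first: usually the likeliest culprit.
    suspects.sort(key=lambda e: e["timestamp"], reverse=True)
    return suspects
```

Sorting newest-first means the top suspect is the last change made before the alert fired.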
The server then injects this "context package" into the prompt, unlocking the AI's true potential:
**Final Prompt Sent to the AI:**

**AI Response:** "The outage on 'auth-service' started at 2:10 PM. This directly correlates with a deployment by 'jane.doe' and a feature flag toggle. I recommend investigating that commit or rolling back the flag."
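The injection step described above can be sketched like this. The field names (`events`, `source`, `summary`, `time`) are hypothetical stand-ins for whatever shape the server's context package actually uses:

```python
# Hypothetical sketch of assembling the final prompt from the context
# package; the dict keys below are illustrative assumptions.
def build_prompt(query: str, context_package: dict) -> str:
    lines = [f"User question: {query}", "", "Production context (auto-correlated):"]
    for event in context_package["events"]:
        lines.append(f"- [{event['source']}] {event['summary']} at {event['time']}")
    return "\n".join(lines)
```

The AI never sees the raw log firehose, only the handful of correlated events.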
- Language: Python
- Backend: FastAPI (for the high-speed, asynchronous MCP server)
- Frontend: Streamlit (for the interactive showcase)
- Server: Uvicorn
- Data Validation: Pydantic
- HTTP Client: httpx
- Data Sources (APIs): GitHub (Live), Datadog (Simulated), LaunchDarkly (Simulated)
1. Clone the repository:

   ```bash
   git clone https://github.com/TankEngine1234/mcp-devops-server.git
   cd mcp-devops-server
   ```

2. Create a virtual environment:

   ```bash
   python3 -m venv venv
   source venv/bin/activate
   ```

3. Install dependencies:

   ```bash
   pip3 install -r requirements.txt
   ```

4. Set up API keys (optional): The project runs out of the box with `USE_MOCK=True` in the `.env` file; no keys are needed to run the demo. If you want to add your own keys, copy `.env.example` to `.env` and set `USE_MOCK=False`.

5. Run the backend and frontend:

   - Terminal 1 (Backend):

     ```bash
     uvicorn main:app --reload
     ```

   - Terminal 2 (Frontend):

     ```bash
     streamlit run showcase.py
     ```
If you want to bypass the UI and test the endpoint directly with curl:

```bash
curl -X POST "http://127.0.0.1:8000/analyze" \
  -H "Content-Type: application/json" \
  -d '{"query": "Why is auth-service failing?"}'
```

Note: the JSON body must stay inside one pair of single quotes; nested unescaped single quotes around the service name would break the shell quoting.