An AI-powered agent that generates Harness.io pipeline and connector YAML configurations using LangChain, OpenAI, and the Harness.io MCP server. The agent exposes its functionality through a REST API built with FastAPI.
- Generate Harness.io pipeline YAML from natural language descriptions
- Generate Harness.io connector YAML configurations
- Query existing pipelines and connectors
- Interact with Harness.io through MCP server integration
- RESTful API with interactive documentation
- Powered by OpenAI GPT-4 and LangChain
```
┌─────────────┐      ┌──────────────┐      ┌─────────────────┐
│   FastAPI   │─────>│  LangChain   │─────>│   Harness MCP   │
│  REST API   │      │    Agent     │      │     Server      │
└─────────────┘      └──────────────┘      └─────────────────┘
                            │
                            ▼
                     ┌──────────────┐
                     │    OpenAI    │
                     │    GPT-4     │
                     └──────────────┘
```
- Python 3.9 or higher
- OpenAI API key
- Harness.io account with API access
- Harness.io MCP server installed and configured
The easiest way to run the application is with Docker:

```bash
# 1. Configure environment
cp .env.example .env
# Edit .env with your credentials

# 2. Place Harness MCP server binary
cp /path/to/harness-mcp mcp_server/harness-mcp

# 3. Build and run
./build-docker.sh
docker-compose up -d
```

See DOCKER.md for detailed Docker deployment instructions.
- Clone or navigate to the project directory:

```bash
cd harness_agent
```

- Create a virtual environment:

```bash
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```

- Install dependencies:

```bash
pip install -r requirements.txt
```

- Configure environment variables:

```bash
cp .env.example .env
```

Edit `.env` with your credentials:

```bash
OPENAI_API_KEY=your_openai_api_key_here
HARNESS_ACCOUNT_ID=your_harness_account_id
HARNESS_API_KEY=your_harness_api_key
HARNESS_API_URL=https://app.harness.io
HARNESS_DEFAULT_ORG_ID=default
HARNESS_DEFAULT_PROJECT_ID=default
MCP_SERVER_PATH=path_to_harness_mcp_server

# Optional: LangSmith Tracing
LANGCHAIN_TRACING_V2=false
LANGCHAIN_API_KEY=your_langsmith_api_key_here
LANGCHAIN_PROJECT=harness-agent
```

To start the application, use the startup script:

```bash
./run.sh
```

Or run it manually:

```bash
source venv/bin/activate
python main.py
```

Or start it with uvicorn directly:

```bash
uvicorn main:app --host 0.0.0.0 --port 8000 --reload
```

The API will be available at http://localhost:8000
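Once the server is up, you can call it from Python as well as curl. The sketch below uses only the standard library; the helper names (`build_payload`, `generate_pipeline`) are illustrative and not part of the project's own code, and the base URL assumes the default host and port above.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # assumed default host/port


def build_payload(description: str) -> bytes:
    """Encode a natural-language request as the JSON body the API expects."""
    return json.dumps({"request": description}).encode("utf-8")


def generate_pipeline(description: str) -> dict:
    """POST the description to /api/v1/generate/pipeline and return the JSON reply."""
    req = urllib.request.Request(
        f"{BASE_URL}/api/v1/generate/pipeline",
        data=build_payload(description),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Usage (requires the server to be running):
# result = generate_pipeline("Create a CI pipeline for a Python app")
```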
- Documentation Hub - Complete documentation index
- Quick Start Guide - Get started in 5 minutes
- Troubleshooting - Common issues and solutions
- Examples - Usage examples and patterns
- Docker Deployment - Container deployment guide
- LangSmith Tracing - Observability and debugging
- Debugging Guide - Debug and troubleshoot
- Change History - Implementation notes and fixes
Once the application is running, you can access:
- Interactive API documentation (Swagger UI): http://localhost:8000/docs
- Alternative API documentation (ReDoc): http://localhost:8000/redoc
`GET /health`

Returns the health status of the agent and MCP connection.
`POST /api/v1/generate/pipeline`

Content-Type: application/json

```json
{
  "request": "Create a CI pipeline for a Python application with build, test, and deploy stages"
}
```

`POST /api/v1/generate/connector`

Content-Type: application/json

```json
{
  "request": "Create a GitHub connector for my repository https://github.com/myorg/myrepo"
}
```

`POST /api/v1/query`

Content-Type: application/json

```json
{
  "request": "List all available pipelines in my Harness account"
}
```

Example requests with curl:

```bash
curl -X POST "http://localhost:8000/api/v1/generate/pipeline" \
  -H "Content-Type: application/json" \
  -d '{
    "request": "Create a CI/CD pipeline for a Node.js application with these stages: 1) Build and run tests, 2) Build Docker image, 3) Deploy to Kubernetes"
  }'
```

```bash
curl -X POST "http://localhost:8000/api/v1/generate/connector" \
  -H "Content-Type: application/json" \
  -d '{
    "request": "Create a GitHub connector named my-github with OAuth authentication"
  }'
```

```bash
curl -X POST "http://localhost:8000/api/v1/query" \
  -H "Content-Type: application/json" \
  -d '{
    "request": "Show me all pipelines in the production project"
  }'
```

```
harness_agent/
├── main.py              # FastAPI application
├── agent.py             # LangChain agent implementation
├── mcp_client.py        # Harness MCP client
├── models.py            # Pydantic models for API
├── config.py            # Configuration management
├── requirements.txt     # Python dependencies
├── Dockerfile           # Docker image definition
├── docker-compose.yml   # Docker Compose configuration
├── build-docker.sh      # Docker build script
├── .env.example         # Example environment variables
├── .dockerignore        # Docker build exclusions
├── .gitignore           # Git ignore rules
├── run.sh               # Local startup script
├── test_client.py       # API test client
├── mcp_server/          # Harness MCP server binary location
│   └── README.md        # MCP setup instructions
├── README.md            # This file
├── DOCKER.md            # Docker deployment guide
└── examples.md          # Usage examples
```
- User Request: A user sends a request through the REST API
- LangChain Agent: The request is processed by a LangChain agent powered by OpenAI GPT-4
- MCP Integration: The agent uses tools from the Harness MCP server to interact with Harness.io
- YAML Generation: The agent generates or retrieves the appropriate YAML configuration
- Response: The YAML and any additional information is returned to the user
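The flow above can be sketched in miniature with the LLM and MCP calls stubbed out. This is stdlib-only and purely illustrative: the function names (`run_agent`, `call_llm`, `call_mcp_tool`) and the tool name `list_pipelines` are assumptions, not the project's actual code.

```python
def call_llm(prompt: str) -> str:
    # Stand-in for the OpenAI GPT-4 call made through LangChain.
    return (
        "pipeline:\n"
        "  name: demo-pipeline\n"
        "  identifier: demo_pipeline\n"
        "  stages: []\n"
    )


def call_mcp_tool(tool: str, args: dict) -> dict:
    # Stand-in for a Harness MCP server tool invocation.
    return {"tool": tool, "args": args, "result": "ok"}


def run_agent(user_request: str) -> dict:
    # Steps 1-2: receive the request and hand it to the LLM-backed agent.
    yaml_text = call_llm(f"Generate Harness YAML for: {user_request}")
    # Step 3: the agent may consult MCP tools (e.g. to look up existing entities).
    context = call_mcp_tool("list_pipelines", {"project": "default"})
    # Steps 4-5: return the generated YAML plus any supporting information.
    return {"yaml": yaml_text, "context": context}


result = run_agent("Create a CI pipeline")
```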
To run in development mode with auto-reload:

```bash
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```

To run the tests:

```bash
pytest tests/
```

The following environment variables can be configured in `.env`:
| Variable | Description | Required | Default |
|---|---|---|---|
| `OPENAI_API_KEY` | Your OpenAI API key | Yes | - |
| `HARNESS_ACCOUNT_ID` | Harness account identifier | Yes | - |
| `HARNESS_API_KEY` | Harness API key | Yes | - |
| `HARNESS_API_URL` | Harness API URL | No | https://app.harness.io |
| `HARNESS_DEFAULT_ORG_ID` | Default organization ID for pipelines | Yes | - |
| `HARNESS_DEFAULT_PROJECT_ID` | Default project ID for pipelines | Yes | - |
| `MCP_SERVER_PATH` | Path to Harness MCP server executable | Yes | - |
| `API_HOST` | API server host | No | 0.0.0.0 |
| `API_PORT` | API server port | No | 8000 |
| `LANGCHAIN_TRACING_V2` | Enable LangSmith tracing | No | false |
| `LANGCHAIN_API_KEY` | LangSmith API key | No | - |
| `LANGCHAIN_PROJECT` | LangSmith project name | No | harness-agent |
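A config loader for the variables above might look like the following. This is a stdlib-only sketch with assumed behavior (fail fast on missing required variables, apply the table's defaults otherwise); the project's actual `config.py` may differ.

```python
import os


def load_config(env=os.environ) -> dict:
    """Validate required variables and apply defaults from the table above."""
    required = [
        "OPENAI_API_KEY", "HARNESS_ACCOUNT_ID", "HARNESS_API_KEY",
        "HARNESS_DEFAULT_ORG_ID", "HARNESS_DEFAULT_PROJECT_ID",
        "MCP_SERVER_PATH",
    ]
    missing = [k for k in required if not env.get(k)]
    if missing:
        raise RuntimeError(f"Missing required environment variables: {missing}")
    return {
        **{k: env[k] for k in required},
        # Optional variables fall back to the documented defaults.
        "HARNESS_API_URL": env.get("HARNESS_API_URL", "https://app.harness.io"),
        "API_HOST": env.get("API_HOST", "0.0.0.0"),
        "API_PORT": int(env.get("API_PORT", "8000")),
        "LANGCHAIN_TRACING_V2": env.get("LANGCHAIN_TRACING_V2", "false"),
    }
```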
LangSmith provides observability and debugging for your AI agent. When enabled, it automatically traces:
- All agent executions
- LLM calls and responses
- Tool calls and results
- Token usage and costs
- Execution timing
- Sign up at smith.langchain.com
- Get your API key from Settings → API Keys
- Add to your `.env` file:

```bash
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=your_langsmith_api_key
LANGCHAIN_PROJECT=harness-agent
```
- Restart the application
That's it! All traces will automatically appear in your LangSmith dashboard.
- Complete execution traces with timing for each step
- LLM prompts and responses for debugging
- Tool calls showing which Harness tools were used
- Token usage for cost tracking
- Error traces for debugging failures
Set `LANGCHAIN_TRACING_V2=false` or remove the variable from `.env`.
- Verify all environment variables are set correctly
- Check that the Harness MCP server path is correct
- Ensure your OpenAI API key is valid
- Verify the MCP server is accessible
- Check Harness API credentials
- Review MCP server logs
- Ensure your request is clear and specific
- Check the agent logs for detailed error messages
- Verify Harness account permissions
- Never commit the `.env` file to version control
- Rotate API keys regularly
- Use HTTPS in production
- Implement authentication/authorization for the API in production environments
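One lightweight way to add authentication in production is an API-key check on each request. The sketch below shows only the comparison step, using the standard library's constant-time compare; how the key is transported (e.g. an `X-API-Key` header) and stored is an assumption left to your deployment.

```python
import hmac


def is_authorized(presented_key: str, expected_key: str) -> bool:
    """Return True only if the presented API key matches the expected one.

    hmac.compare_digest runs in constant time, which avoids leaking
    information about the key through timing differences.
    """
    return hmac.compare_digest(presented_key.encode(), expected_key.encode())


# Usage: reject the request (e.g. with HTTP 401) when this returns False.
```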
Contributions are welcome! Please ensure your code follows the existing style and includes appropriate tests.
MIT License - feel free to use this project as you see fit.
For issues related to:
- Harness.io: Visit Harness Documentation
- OpenAI API: Visit OpenAI Documentation
- LangChain: Visit LangChain Documentation
- Built with FastAPI
- Powered by LangChain
- Uses OpenAI GPT-4
- Integrates with Harness.io