The Gradient™ Agent Development Kit (ADK) is a comprehensive toolkit for building, testing, deploying, and evaluating AI agents on DigitalOcean's Gradient™ AI Platform. It provides both a CLI for development workflows and a runtime environment for hosting agents with automatic trace capture.
- Local Development: Run and test your agents locally with hot-reload support
- Seamless Deployment: Deploy agents to DigitalOcean with a single command
- Evaluation Framework: Run comprehensive evaluations with custom metrics and datasets
- Observability: View traces and runtime logs directly from the CLI
- Framework Agnostic: Works with any Python framework for building AI agents
- Automatic LangGraph Integration: Built-in trace capture for LangGraph nodes and state transitions
- Custom Decorators: Capture traces from any framework using the `@trace_llm`, `@trace_tool`, and `@trace_retriever` decorators
- Streaming Support: Full support for streaming responses with trace capture
- Production Ready: Designed for seamless deployment to DigitalOcean infrastructure
Install the ADK with pip:

```bash
pip install gradient-adk
```

🎥 Watch the Getting Started Video for a complete walkthrough.
Create a new agent project:

```bash
gradient agent init
```

This creates a new agent project with:

- `main.py` - Agent entrypoint with example code
- `agents/` - Directory for agent implementations
- `tools/` - Directory for custom tools
- `config.yaml` - Agent configuration
- `requirements.txt` - Python dependencies
Run the agent locally:

```bash
gradient agent run
```

Your agent will be available at http://localhost:8080 with automatic trace capture enabled.
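Once the server is running, you can exercise it with a plain HTTP request. A minimal sketch, assuming the runtime accepts a POST with a JSON body at the root path and hands that body to your entrypoint's `input` dict; the path and payload keys are illustrative, not a documented contract:

```python
# Sketch: calling the locally running agent.
# Assumptions: POST to the root path, JSON body forwarded to the @entrypoint `input` dict.
import requests

resp = requests.post(
    "http://localhost:8080/",  # local ADK runtime; path assumed
    json={"query": "What can you do?"},  # key matches what the example entrypoint reads
    timeout=30,
)
resp.raise_for_status()
print(resp.text)
```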
Deploy to DigitalOcean:

```bash
export DIGITALOCEAN_API_TOKEN=your_token_here
gradient agent deploy
```

Run an evaluation against a dataset:

```bash
gradient agent evaluate \
  --test-case-name "my-evaluation" \
  --dataset-file evaluation_dataset.csv \
  --categories correctness,context_quality
```

LangGraph agents automatically capture traces for all nodes and state transitions:
```python
from gradient_adk import entrypoint
from langgraph.graph import StateGraph
from typing import TypedDict

class State(TypedDict):
    input: str
    output: str

async def llm_call(state: State) -> State:
    # This node execution is automatically traced.
    # `llm` is your chat model client (e.g., a LangChain chat model).
    response = await llm.ainvoke(state["input"])
    state["output"] = response
    return state

@entrypoint
async def main(input: dict, context: dict):
    graph = StateGraph(State)
    graph.add_node("llm_call", llm_call)
    graph.set_entry_point("llm_call")
    graph.set_finish_point("llm_call")  # mark the single node as the finish point
    app = graph.compile()
    result = await app.ainvoke({"input": input.get("query")})
    return result["output"]
```

For frameworks beyond LangGraph, use trace decorators to capture custom spans:
```python
from gradient_adk import entrypoint, trace_llm, trace_tool, trace_retriever

@trace_retriever("vector_search")
async def search_knowledge_base(query: str):
    # Retriever spans capture search/lookup operations
    results = await vector_db.search(query)
    return results

@trace_llm("generate_response")
async def generate_response(prompt: str):
    # LLM spans capture model calls with token usage
    response = await llm.generate(prompt)
    return response

@trace_tool("calculate")
async def calculate(x: int, y: int):
    # Tool spans capture function execution
    return x + y

@entrypoint
async def main(input: dict, context: dict):
    docs = await search_knowledge_base(input["query"])
    result = await calculate(5, 10)
    response = await generate_response(f"Context: {docs}")
    return response
```

The runtime supports streaming responses with automatic trace capture:
```python
from gradient_adk import entrypoint, StreamingResponse

@entrypoint
async def main(input: dict, context: dict):
    # Stream text chunks
    async def generate_chunks():
        async for chunk in llm.stream(input["query"]):
            yield chunk

    return StreamingResponse(generate_chunks(), media_type="text/plain")
```

For JSON streaming:
```python
from gradient_adk import entrypoint, stream_json

@entrypoint
async def main(input: dict, context: dict):
    async def generate_events():
        # process_stream is whatever async generator your framework exposes
        async for event in process_stream(input["query"]):
            yield {"type": "chunk", "data": event}

    return stream_json(generate_events())
```
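To watch streamed output arrive incrementally, a client can consume the response chunk by chunk. A minimal sketch using `httpx`, with the endpoint path and payload keys assumed for illustration (the runtime forwards the JSON body to your entrypoint's `input` dict):

```python
# Sketch: consuming a streaming response from the local runtime.
# The endpoint path and payload keys are assumptions for illustration.
import asyncio
import httpx

async def consume() -> None:
    async with httpx.AsyncClient(timeout=None) as client:
        async with client.stream(
            "POST",
            "http://localhost:8080/",  # path assumed
            json={"query": "Summarize today's tickets"},
        ) as response:
            async for chunk in response.aiter_text():
                print(chunk, end="", flush=True)  # print chunks as they arrive

asyncio.run(consume())
```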
Common CLI commands:

```bash
# Initialize new project
gradient agent init
# Configure existing project
gradient agent configure
# Run locally with hot-reload
gradient agent run --dev
# Deploy to DigitalOcean
gradient agent deploy
# View runtime logs
gradient agent logs
# Open traces UI
gradient agent traces
```

Evaluation commands:

```bash
# Run evaluation (interactive)
gradient agent evaluate
# Run evaluation (non-interactive)
gradient agent evaluate \
  --test-case-name "my-test" \
  --dataset-file data.csv \
  --categories correctness,safety_and_security \
  --star-metric-name "Correctness (general hallucinations)" \
  --success-threshold 80.0
```
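The `--dataset-file` flag points at a CSV of evaluation cases. The exact column schema is defined by the platform, so the columns below are purely hypothetical placeholders; check the Gradient AI Platform documentation for the expected format. A sketch of generating such a file programmatically:

```python
# Sketch: writing an evaluation dataset CSV.
# The column names ("input", "expected_output") are hypothetical placeholders,
# NOT the documented schema - consult the platform docs for the real format.
import csv

cases = [
    {"input": "What regions does DigitalOcean operate in?", "expected_output": "A list of regions"},
    {"input": "How do I rotate my API token?", "expected_output": "Steps for rotating an API token"},
]

with open("data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["input", "expected_output"])
    writer.writeheader()
    writer.writerows(cases)
```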
The ADK runtime automatically captures detailed traces:

- LangGraph Nodes: All node executions, state transitions, and edges
- LLM Calls: Functions decorated with `@trace_llm` or detected via network interception
- Tool Calls: Functions decorated with `@trace_tool`
- Retriever Calls: Functions decorated with `@trace_retriever`
- HTTP Requests: Request/response payloads for LLM API calls
- Errors: Full exception details and stack traces
- Streaming Responses: Individual chunks and aggregated outputs
```python
from gradient_adk import trace_llm, trace_tool, trace_retriever

@trace_llm("model_call")        # For LLM/model invocations
@trace_tool("calculator")       # For tool/function calls
@trace_retriever("db_search")   # For retrieval/search operations
```
Traces are:

- Automatically sent to DigitalOcean's Gradient Platform
- Available in real-time through the web console
- Accessible via the `gradient agent traces` command
```bash
# Required for deployment and evaluations
export DIGITALOCEAN_API_TOKEN=your_do_api_token

# Required for Gradient serverless inference (if using)
export GRADIENT_MODEL_ACCESS_KEY=your_gradient_key

# Optional: Enable verbose trace logging
export GRADIENT_VERBOSE=1
```
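Inside agent code, these variables can be read with the standard library. A small sketch; the model client shown is a hypothetical placeholder, and only the variable names come from the list above:

```python
# Sketch: reading the ADK-related environment variables from agent code.
import os

model_access_key = os.environ.get("GRADIENT_MODEL_ACCESS_KEY")  # only needed for serverless inference
verbose_tracing = os.getenv("GRADIENT_VERBOSE", "0") == "1"

# Pass the key to whatever model client you use, e.g.:
# llm = SomeModelClient(api_key=model_access_key)  # hypothetical client
```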
A typical project looks like this:

```text
my-agent/
├── main.py              # Agent entrypoint with @entrypoint decorator
├── .gradient/agent.yml  # Agent configuration (auto-generated)
├── requirements.txt     # Python dependencies
├── .env                 # Environment variables (not committed)
├── agents/              # Agent implementations
│   └── my_agent.py
└── tools/               # Custom tools
    └── my_tool.py
```
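As a sketch of how the pieces fit together, `main.py` typically wires implementations from `agents/` and `tools/` into the entrypoint. The module and class names below are hypothetical; only the `@entrypoint` decorator and its `(input, context)` signature come from the ADK:

```python
# main.py - hypothetical wiring of the generated layout
from gradient_adk import entrypoint

from agents.my_agent import MyAgent  # placeholder agent class in agents/my_agent.py
from tools.my_tool import my_tool    # placeholder tool function in tools/my_tool.py

agent = MyAgent(tools=[my_tool])

@entrypoint
async def main(input: dict, context: dict):
    # Delegate the incoming request to your agent implementation
    return await agent.run(input["query"])
```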
The Gradient ADK is designed to work with any Python-based AI agent framework:
- ✅ LangGraph - Automatic trace capture (zero configuration)
- ✅ LangChain - Use trace decorators (`@trace_llm`, `@trace_tool`, `@trace_retriever`) for custom spans
- ✅ CrewAI - Use trace decorators for agent and task execution
- ✅ AutoGen - Use trace decorators for agent interactions
- ✅ Custom Frameworks - Use trace decorators for any function
- Examples: https://github.com/digitalocean/gradient-adk-examples
- Gradient Platform: https://www.digitalocean.com/products/gradient/platform
- Documentation: https://docs.digitalocean.com/products/gradient-ai-platform/
- API Reference: https://docs.digitalocean.com/reference/api
- Community: DigitalOcean Community Forums
Licensed under the Apache License 2.0. See the LICENSE file for details.