
DigitalOcean Gradient™ Agent Development Kit (ADK)

The Gradient™ Agent Development Kit (ADK) is a comprehensive toolkit for building, testing, deploying, and evaluating AI agents on DigitalOcean's Gradient™ AI Platform. It provides both a CLI for development workflows and a runtime environment for hosting agents with automatic trace capture.

Features

🛠️ CLI (Command Line Interface)

  • Local Development: Run and test your agents locally with hot-reload support
  • Seamless Deployment: Deploy agents to DigitalOcean with a single command
  • Evaluation Framework: Run comprehensive evaluations with custom metrics and datasets
  • Observability: View traces and runtime logs directly from the CLI

🚀 Runtime Environment

  • Framework Agnostic: Works with any Python framework for building AI agents
  • Automatic LangGraph Integration: Built-in trace capture for LangGraph nodes and state transitions
  • Custom Decorators: Capture traces from any framework using @trace decorators
  • Streaming Support: Full support for streaming responses with trace capture
  • Production Ready: Designed for seamless deployment to DigitalOcean infrastructure

Installation

pip install gradient-adk

Quick Start

🎥 Watch the Getting Started Video for a complete walkthrough

1. Initialize a New Agent Project

gradient agent init

This creates a new agent project with:

  • main.py - Agent entrypoint with example code
  • agents/ - Directory for agent implementations
  • tools/ - Directory for custom tools
  • config.yaml - Agent configuration
  • requirements.txt - Python dependencies

2. Run Locally

gradient agent run

Your agent will be available at http://localhost:8080 with automatic trace capture enabled.

3. Deploy to DigitalOcean

export DIGITALOCEAN_API_TOKEN=your_token_here
gradient agent deploy

4. Evaluate Your Agent

gradient agent evaluate \
  --test-case-name "my-evaluation" \
  --dataset-file evaluation_dataset.csv \
  --categories correctness,context_quality
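
The dataset file pairs inputs with reference outputs. The exact column schema is not confirmed here, so the headers below are purely illustrative; check the evaluation docs for the columns your chosen metrics expect:

```csv
input,expected_output
"What is the capital of France?","Paris"
"Summarize our refund policy.","Refunds are available within 30 days of purchase."
```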

Usage Examples

Using LangGraph (Automatic Trace Capture)

LangGraph agents automatically capture traces for all nodes and state transitions:

from gradient_adk import entrypoint
from langgraph.graph import StateGraph
from typing import TypedDict

class State(TypedDict):
    input: str
    output: str

async def llm_call(state: State) -> State:
    # This node execution is automatically traced
    response = await llm.ainvoke(state["input"])
    state["output"] = response
    return state

@entrypoint
async def main(input: dict, context: dict):
    graph = StateGraph(State)
    graph.add_node("llm_call", llm_call)
    graph.set_entry_point("llm_call")
    
    app = graph.compile()
    result = await app.ainvoke({"input": input.get("query")})
    return result["output"]

Using Custom Decorators (Any Framework)

For frameworks beyond LangGraph, use trace decorators to capture custom spans:

from gradient_adk import entrypoint, trace_llm, trace_tool, trace_retriever

@trace_retriever("vector_search")
async def search_knowledge_base(query: str):
    # Retriever spans capture search/lookup operations
    results = await vector_db.search(query)
    return results

@trace_llm("generate_response")
async def generate_response(prompt: str):
    # LLM spans capture model calls with token usage
    response = await llm.generate(prompt)
    return response

@trace_tool("calculate")
async def calculate(x: int, y: int):
    # Tool spans capture function execution
    return x + y

@entrypoint
async def main(input: dict, context: dict):
    docs = await search_knowledge_base(input["query"])
    result = await calculate(5, 10)
    response = await generate_response(f"Context: {docs}")
    return response

Streaming Responses

The runtime supports streaming responses with automatic trace capture:

from gradient_adk import entrypoint, StreamingResponse

@entrypoint
async def main(input: dict, context: dict):
    # Stream text chunks
    async def generate_chunks():
        async for chunk in llm.stream(input["query"]):
            yield chunk
    
    return StreamingResponse(generate_chunks(), media_type="text/plain")

For JSON streaming:

from gradient_adk import entrypoint, stream_json

@entrypoint
async def main(input: dict, context: dict):
    async def generate_events():
        async for event in process_stream(input["query"]):
            yield {"type": "chunk", "data": event}
    
    return stream_json(generate_events())
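
Conceptually, JSON streaming serializes each yielded event as one newline-delimited JSON (NDJSON) line on the wire. The helper below is a self-contained sketch of that serialization, not the actual stream_json implementation (generate_events and to_ndjson are illustrative names):

```python
import asyncio
import json

async def generate_events():
    # Hypothetical event source; stands in for process_stream(...)
    for token in ["Hello", "world"]:
        yield {"type": "chunk", "data": token}

async def to_ndjson(events):
    # Serialize each event dict as one newline-delimited JSON line,
    # the format a JSON-streaming response typically emits per chunk.
    lines = []
    async for event in events:
        lines.append(json.dumps(event) + "\n")
    return lines

lines = asyncio.run(to_ndjson(generate_events()))
print("".join(lines), end="")
```

Clients can then parse the stream incrementally by splitting on newlines and calling json.loads on each complete line.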

CLI Commands

Agent Management

# Initialize new project
gradient agent init

# Configure existing project
gradient agent configure

# Run locally with hot-reload
gradient agent run --dev

# Deploy to DigitalOcean
gradient agent deploy

# View runtime logs
gradient agent logs

# Open traces UI
gradient agent traces

Evaluation

# Run evaluation (interactive)
gradient agent evaluate

# Run evaluation (non-interactive)
gradient agent evaluate \
  --test-case-name "my-test" \
  --dataset-file data.csv \
  --categories correctness,safety_and_security \
  --star-metric-name "Correctness (general hallucinations)" \
  --success-threshold 80.0

Trace Capture

The ADK runtime automatically captures detailed traces:

What Gets Traced

  • LangGraph Nodes: All node executions, state transitions, and edges
  • LLM Calls: Functions decorated with @trace_llm, or calls detected via network interception
  • Tool Calls: Functions decorated with @trace_tool
  • Retriever Calls: Functions decorated with @trace_retriever
  • HTTP Requests: Request/response payloads for LLM API calls
  • Errors: Full exception details and stack traces
  • Streaming Responses: Individual chunks and aggregated outputs
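
Conceptually, each traced call becomes a span recording its name, kind, duration, and any error. The following is a minimal stand-alone sketch of that bookkeeping, not the gradient_adk implementation (trace_span and SPANS are illustrative names; the real runtime exports spans to the platform rather than a list):

```python
import functools
import time

SPANS = []  # collected spans; the real runtime exports these instead

def trace_span(kind, name):
    """Record a span (name, kind, duration, error) for each call."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            span = {"kind": kind, "name": name, "error": None}
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            except Exception as exc:
                span["error"] = repr(exc)  # errors are captured, then re-raised
                raise
            finally:
                span["duration_s"] = time.perf_counter() - start
                SPANS.append(span)
        return wrapper
    return decorator

@trace_span("tool", "calculate")
def calculate(x, y):
    return x + y

calculate(5, 10)  # returns 15 and appends one span to SPANS
```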

Available Decorators

from gradient_adk import trace_llm, trace_tool, trace_retriever

@trace_llm("model_call")      # For LLM/model invocations
@trace_tool("calculator")      # For tool/function calls
@trace_retriever("db_search")  # For retrieval/search operations

Viewing Traces

Traces are:

  • Automatically sent to DigitalOcean's Gradient Platform
  • Available in real-time through the web console
  • Accessible via gradient agent traces command

Environment Variables

# Required for deployment and evaluations
export DIGITALOCEAN_API_TOKEN=your_do_api_token

# Required for Gradient serverless inference (if using)
export GRADIENT_MODEL_ACCESS_KEY=your_gradient_key

# Optional: Enable verbose trace logging
export GRADIENT_VERBOSE=1

Project Structure

my-agent/
├── main.py              # Agent entrypoint with @entrypoint decorator
├── .gradient/agent.yml  # Agent configuration (auto-generated)
├── requirements.txt     # Python dependencies
├── .env                 # Environment variables (not committed)
├── agents/              # Agent implementations
│   └── my_agent.py
└── tools/               # Custom tools
    └── my_tool.py

Framework Compatibility

The Gradient ADK is designed to work with any Python-based AI agent framework:

  • LangGraph - Automatic trace capture (zero configuration)
  • LangChain - Use trace decorators (@trace_llm, @trace_tool, @trace_retriever) for custom spans
  • CrewAI - Use trace decorators for agent and task execution
  • AutoGen - Use trace decorators for agent interactions
  • Custom Frameworks - Use trace decorators for any function

License

Licensed under the Apache License 2.0. See the LICENSE file for details.
