AgentAPI is an open-source Python framework for building agentic AI backends with a clean developer experience: minimal setup, provider abstraction, tool calling, memory, and streaming-first APIs.
It is designed for teams that want FastAPI-style simplicity for LLM agents, without heavy orchestration overhead.
Documentation site: https://agentapi.prajwalsuryawanshi.in
## Table of Contents

- Why AgentAPI
- Features
- Installation
- Quick Start
- Provider Configuration
- Tool Calling
- Streaming
- CLI
- Custom Providers
- Error Handling
- Project Structure
- Project Status
- Roadmap
- Contributing
- License
## Why AgentAPI

- Keep agent backends simple and readable.
- Use one `Agent` interface across providers.
- Add tools as plain Python functions.
- Stream responses with minimal boilerplate.
- Start fast, then customize deeply when needed.
## Features

- `Agent` class with memory and a tool execution loop.
- Provider abstraction for `openai`, `gemini`, and `openrouter`.
- AgentAPI app integration with `@app.chat`.
- Automatic SSE when a chat handler returns an async iterator.
- Built-in project scaffolding and run helper via the CLI.
- Environment-based configuration using `.env`.
- Extensible provider system (custom instance or registered factory).
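The memory-plus-tool-execution-loop idea from the feature list can be illustrated with a self-contained sketch. `ToolCall`, `ProviderReply`, and `run_agent` are made-up names for illustration, not AgentAPI's API:

```python
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    name: str
    args: dict

@dataclass
class ProviderReply:
    content: str
    tool_calls: list = field(default_factory=list)

def run_agent(user_message, provider, tools, max_steps=5):
    """Minimal agent loop: call the provider, execute any requested
    tools, feed results back, and repeat until a plain-text reply."""
    registry = {fn.__name__: fn for fn in tools}
    messages = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        reply = provider(messages)
        if not reply.tool_calls:
            return reply.content  # final answer, no tools requested
        for call in reply.tool_calls:
            result = registry[call.name](**call.args)
            messages.append({"role": "tool", "name": call.name, "content": result})
    raise RuntimeError("tool loop did not converge")

# A fake provider: first asks for a tool, then answers with its output.
def get_weather(city: str) -> str:
    return f"Weather in {city}: sunny"

def fake_provider(messages):
    if messages[-1]["role"] == "tool":
        return ProviderReply(content=f"It looks like {messages[-1]['content']}.")
    return ProviderReply(content="", tool_calls=[ToolCall("get_weather", {"city": "Pune"})])

print(run_agent("Weather in Pune?", fake_provider, [get_weather]))
# prints: It looks like Weather in Pune: sunny.
```

The real `Agent` adds conversation memory across turns and provider-specific tool-call formats on top of this basic loop.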
## Installation

Install from PyPI:

```bash
pip install agentapi-core
```

Install in editable mode while developing:

```bash
pip install -e .
```

## Quick Start

Create `main.py`:
```python
from agentapi import AgentApp, Agent

app = AgentApp()

agent = Agent(
    system_prompt="You are a helpful assistant",
    provider="openai",
)

@app.chat("/chat")
async def chat(message: str):
    return await agent.run(message)

@app.chat("/stream")
async def stream_chat(message: str):
    return agent.stream(message)
```

Run it:

```bash
uvicorn main:app --reload
```

Open the API docs:

- http://127.0.0.1:8000/docs
- http://127.0.0.1:8000/redoc
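To demystify the decorator pattern in the Quick Start, a toy registry shows how a decorator like `@app.chat(path)` can map paths to async handlers. `MiniApp` is purely illustrative, not AgentAPI's internals:

```python
import asyncio

class MiniApp:
    """Toy stand-in for AgentApp: @app.chat(path) registers a handler."""

    def __init__(self):
        self.routes = {}

    def chat(self, path):
        def decorator(fn):
            self.routes[path] = fn  # remember the handler for this path
            return fn
        return decorator

app = MiniApp()

@app.chat("/chat")
async def chat(message: str):
    return f"echo: {message}"

# A real framework would dispatch HTTP requests; here we call directly.
print(asyncio.run(app.routes["/chat"]("hi")))  # prints: echo: hi
```

The real `AgentApp` additionally wires the handler into an ASGI app, which is why `uvicorn main:app` works.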
## Provider Configuration

Create `.env`:

```env
OPENAI_API_KEY=
GEMINI_API_KEY=
OPENROUTER_API_KEY=
DEFAULT_PROVIDER=openai
```

Supported provider names:

- `openai`
- `gemini`
- `openrouter`
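Environment-based resolution of the default provider might look roughly like this. This is a sketch of the idea only; the actual logic lives in `config/settings.py` and may differ:

```python
import os

SUPPORTED_PROVIDERS = {"openai", "gemini", "openrouter"}

def resolve_provider(name=None):
    """Use an explicit provider name if given, else DEFAULT_PROVIDER,
    else fall back to 'openai'; reject unsupported names."""
    provider = name or os.getenv("DEFAULT_PROVIDER", "openai")
    if provider not in SUPPORTED_PROVIDERS:
        raise ValueError(f"Unsupported provider: {provider!r}")
    return provider

os.environ["DEFAULT_PROVIDER"] = "gemini"
print(resolve_provider())          # prints: gemini
print(resolve_provider("openai"))  # prints: openai
```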
## Tool Calling

Define tools with plain Python:

```python
from agentapi import tool

@tool
def get_weather(city: str) -> str:
    """Get weather for a city."""
    return f"Weather in {city}: sunny"
```

Attach tools to an agent:

```python
agent = Agent(
    system_prompt="You are a weather assistant",
    provider="openai",
    tools=[get_weather],
)
```

Tool schemas are generated from function signatures and mapped to provider-specific tool formats internally.
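Generating a schema from a function signature can be approximated with `inspect` and type hints. This is a sketch of the general technique, not AgentAPI's implementation; `tool_schema` and `TYPE_MAP` are made-up names:

```python
import inspect

# Map Python annotations to JSON Schema types (illustrative subset).
TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(fn):
    """Build an OpenAI-style tool schema from a function's signature."""
    params = {}
    required = []
    for name, p in inspect.signature(fn).parameters.items():
        params[name] = {"type": TYPE_MAP.get(p.annotation, "string")}
        if p.default is inspect.Parameter.empty:
            required.append(name)  # no default value means required
    return {
        "type": "function",
        "function": {
            "name": fn.__name__,
            "description": (fn.__doc__ or "").strip(),
            "parameters": {
                "type": "object",
                "properties": params,
                "required": required,
            },
        },
    }

def get_weather(city: str) -> str:
    """Get weather for a city."""
    return f"Weather in {city}: sunny"

schema = tool_schema(get_weather)
print(schema["function"]["name"])  # prints: get_weather
print(schema["function"]["parameters"]["properties"])
```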
## Streaming

`@app.chat` auto-switches to SSE (`text/event-stream`) when your handler returns an async iterator.

Example:

```python
@app.chat("/stream")
async def stream_chat(message: str):
    return agent.stream(message)
```

## CLI

Create a new project scaffold:

```bash
agentapi new myproject
```

Interactive setup (asks for project name and provider):

```bash
agentapi new
```

Run the app via the helper:

```bash
agentapi run --app main:app --reload
```

## Custom Providers

`OpenAICompatibleProvider` is an internal helper for OpenAI-compatible APIs; AgentAPI is not locked to it.
You can customize providers in two ways:
- Pass a provider instance directly: `provider=<BaseProvider instance>`
- Register a provider factory and reference it by name.
```python
from agentapi import Agent, BaseProvider
from agentapi.providers.base import ProviderResponse

class MyProvider(BaseProvider):
    async def chat(self, messages, *, tools=None, tool_calling=None):
        return ProviderResponse(content="hello", tool_calls=[], raw_message={})

    async def stream(self, messages, *, tools=None, tool_calling=None):
        yield "hello"

Agent.register_provider(
    "myprovider",
    lambda agent, settings, model: MyProvider(),
)

agent = Agent(system_prompt="You are helpful", provider="myprovider")
```

## Error Handling

AgentAPI converts common runtime issues into clear API-level errors:
- Missing API keys -> configuration error message.
- Upstream provider failures -> provider error message with status context.
- Streaming endpoints emit SSE error events instead of hard crashes.
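One way the streaming error behavior described above can work: wrap the handler's async iterator in SSE frames and, on failure, emit an error event instead of letting the exception crash the connection. A self-contained sketch under assumed event names, not AgentAPI's actual wire format:

```python
import asyncio

async def sse_events(chunks):
    """Wrap an async iterator in SSE frames; on failure, emit an
    'error' event instead of propagating the exception."""
    try:
        async for chunk in chunks:
            yield f"data: {chunk}\n\n"
    except Exception as exc:
        yield f"event: error\ndata: {exc}\n\n"

async def demo():
    async def good():
        yield "hel"
        yield "lo"

    async def bad():
        yield "partial"
        raise RuntimeError("upstream provider failed")

    # A healthy stream produces plain data events...
    print([frame async for frame in sse_events(good())])
    # ...a failing one ends with an error event instead of a crash.
    print([frame async for frame in sse_events(bad())])

asyncio.run(demo())
```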
## Project Structure

```
agentapi/
    agent/
        agent.py
        memory.py
        tools.py
    assets/
        agentapi-logo.png
        agentapi-favicon.png
    config/
        settings.py
    core/
        app.py
    providers/
        base.py
        gemini.py
        openai_compatible.py
        openai.py
        openrouter.py
examples/
    main.py
```
## Project Status

Current phase: MVP
Implemented:
- Core agent runtime
- Provider abstraction (OpenAI, Gemini, OpenRouter)
- Tool calling and in-memory conversation memory
- Automatic SSE streaming on chat endpoints
- CLI scaffolding and run helper
## Roadmap

- Add Anthropic provider.
- Expand memory backends (Redis/Postgres).
- Add richer observability and tracing hooks.
- Improve generated project templates.
- Add test suite and CI workflows.
## Contributing

Contributions are welcome. See CONTRIBUTING.md for setup and PR workflow.
## Publishing

This repository is configured to publish on GitHub Release:

- Bump `version` in `pyproject.toml`.
- Commit and tag a release version.
- Create a GitHub Release.
- GitHub Actions publishes to PyPI using trusted publishing.
Required one-time setup:

- In PyPI, create the project `agentapi-core`.
- Configure a trusted publisher for this GitHub repository.
- Keep the release workflow enabled in `.github/workflows/publish.yml`.
Use these exact values when adding publishers.
TestPyPI pending publisher:

- Project name: `agentapi-core`
- Owner: `prajwalsuryawanshi`
- Repository: `agentapi`
- Workflow filename: `publish-testpypi.yml`
- Environment name: `testpypi`

PyPI trusted publisher (after the project exists):

- Project name: `agentapi-core`
- Owner: `prajwalsuryawanshi`
- Repository: `agentapi`
- Workflow filename: `publish.yml`
- Environment name: `pypi`
Note: package names on PyPI are normalized, so use lowercase agentapi-core.
## License

MIT License. See LICENSE.
