Nanitics

The Python SDK for production agents.


Build agentic systems your team can debug, extend, and own. Built and used by Propodeum for production client engagements. Compose any architecture from reusable primitives. Trace every agent decision.

Why Nanitics?

  • Your team owns the result. Apache-2.0, typed, documented, with an explicit public API (nanitics.__all__). The codebase a client inherits at the end of an engagement is one their engineers can read, extend, and operate.
  • Trace-first observability. Every agent loop, tool call, and coordination event emits a structured event. The built-in Observatory trace viewer turns that into a live debugging surface from day one. No instrumentation to add.
  • Real-services validation. Public components are validated against real LLM providers before release, not just mocks. Mocks drive fast tests; real services prove correctness.
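The trace-first model above — every loop, tool call, and coordination event emitted as a structured record — can be pictured with a minimal sketch. This is a generic in-memory event log illustrating the pattern, not Nanitics' actual emitter or the Observatory event format; all names here (`TraceEvent`, `InMemoryEventLog`) are invented for the example.

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class TraceEvent:
    """One structured event in an agent trace."""
    trace_id: str
    kind: str                       # e.g. "tool_call", "llm_response"
    payload: dict = field(default_factory=dict)
    timestamp: float = field(default_factory=time.time)

class InMemoryEventLog:
    """Collects events in order so a trace viewer can replay the run."""
    def __init__(self, trace_id: str):
        self.trace_id = trace_id
        self.events: list[TraceEvent] = []

    def emit(self, kind: str, **payload) -> None:
        self.events.append(TraceEvent(self.trace_id, kind, payload))

    def to_json(self) -> str:
        """Serialize the whole trace for storage or a viewer UI."""
        return json.dumps([asdict(e) for e in self.events])

log = InMemoryEventLog("trace-001")
log.emit("tool_call", name="greet", arguments={"name": "world"})
log.emit("llm_response", stop_reason="end_turn")
print(len(log.events))  # 2
```

Because the agent emits as it runs, the log is a complete debugging surface after the fact — no extra instrumentation step.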

Features

Agent strategies — Built-in strategies for different problem shapes: ReAct, Reasoning, CodeAct, Reflexion, ReWOO, LATS, and Tree of Thought.

Memory — Working, episodic, long-term, and semantic memory for persistent agent state.

Orchestration — Compose agents into pipelines, DAGs, loops, conditionals, and map-reduce workflows.

Multi-agent coordination — Handoff, supervisor, agent-as-tool, blackboard, debate, consensus, bidding, broadcast, message bus, peer network, judge router.

Evaluation — Programmatic and LLM-based evaluators for assessing agent output quality.

Human-in-the-Loop — Approval gates, revision gates, and durable HITL with checkpoint suspension for long-running workflows.

Tools — Function tools, conditional tools, and tool composition with automatic schema generation.

Observability — Event-based tracing with the Observatory trace viewer for inspecting agent execution.

Planning — Upfront and adaptive planning with goal tracking and plan adherence evaluation.

Safety — Iteration limits, cancellation tokens, and sandboxed code execution.
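As a rough illustration of what "automatic schema generation" for function tools typically involves, here is a generic standard-library sketch: type hints and the function signature are turned into a JSON-Schema-like tool spec. This is not Nanitics' implementation — the helper name `tool_schema` and the exact output shape are invented for this example.

```python
import inspect
import typing

# Map Python annotations to JSON Schema type names.
_JSON_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(fn) -> dict:
    """Derive a JSON-Schema-like parameter spec from a function's type hints."""
    hints = typing.get_type_hints(fn)
    hints.pop("return", None)
    sig = inspect.signature(fn)
    props = {
        name: {"type": _JSON_TYPES.get(hints.get(name, str), "string")}
        for name in sig.parameters
    }
    # Parameters without defaults are required.
    required = [
        name for name, p in sig.parameters.items()
        if p.default is inspect.Parameter.empty
    ]
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": props, "required": required},
    }

def greet(name: str, shout: bool = False) -> str:
    """Greet someone by name."""
    msg = f"Hello, {name}!"
    return msg.upper() if shout else msg

schema = tool_schema(greet)
print(schema["parameters"]["required"])  # ['name']
```

The appeal of this pattern is that the plain function stays the single source of truth: its signature, hints, and docstring produce the spec the LLM sees.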

Quick Start

For a full end-to-end run against a real LLM, see the deployment guide.

Install Nanitics:

pip install nanitics

Create a ReAct agent with a tool, driven by a scripted MockLLMClient so the snippet runs without an API key:

import asyncio
from nanitics import (
    InMemoryEmitter,
    LLMResponse,
    MockLLMClient,
    ReActAgent,
    ToolCall,
    Usage,
    tool,
)

@tool("greet", "Greet someone by name")
async def greet(name: str) -> str:
    return f"Hello, {name}!"

async def main():
    llm = MockLLMClient(responses=[
        LLMResponse(
            content="I'll greet them.",
            tool_calls=[ToolCall(id="1", name="greet", arguments={"name": "world"})],
            usage=Usage(input_tokens=50, output_tokens=20),
            model="mock",
            stop_reason="tool_use",
        ),
        LLMResponse(
            content="Hello, world!",
            tool_calls=[],
            usage=Usage(input_tokens=80, output_tokens=15),
            model="mock",
            stop_reason="end_turn",
        ),
    ])
    agent = ReActAgent(
        name="my-agent",
        llm_client=llm,
        emitter=InMemoryEmitter("trace-001"),
        system_prompt="You are a helpful assistant.",
        tools=[greet],
    )
    result = await agent.run("Say hello to the world")
    print(result.output)

asyncio.run(main())

To run the same agent against a real provider, set ANTHROPIC_API_KEY and swap MockLLMClient(...) for AnthropicLLMClient(model="claude-haiku-4-5"). Everything else above is unchanged.

See the Getting Started guide for a full walkthrough. For the full API, read the docstrings in the source tree under nanitics/.

LLM Providers

Nanitics supports multiple LLM providers:

Provider   Install                        Client
Anthropic  pip install nanitics           AnthropicLLMClient
OpenAI     pip install nanitics           OpenAILLMClient
Mistral    pip install nanitics[mistral]  MistralLLMClient
LiteLLM    pip install nanitics[litellm]  LiteLLMClient

For testing and development, use MockLLMClient — no API keys required.
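Swapping `MockLLMClient` for a real client works because every provider client exposes the same completion interface. The following is a generic sketch of that pattern using `typing.Protocol` — the names `LLMClient`, `ScriptedClient`, and `run_agent` are illustrative, not part of the Nanitics API.

```python
import asyncio
from typing import Protocol

class LLMClient(Protocol):
    """The shared interface: any client that can complete a prompt."""
    async def complete(self, prompt: str) -> str: ...

class ScriptedClient:
    """Returns canned responses in order -- deterministic, no API key needed."""
    def __init__(self, responses: list[str]):
        self._responses = iter(responses)

    async def complete(self, prompt: str) -> str:
        return next(self._responses)

async def run_agent(client: LLMClient, task: str) -> str:
    # Agent code only sees the protocol, so the client is freely swappable.
    return await client.complete(task)

out = asyncio.run(run_agent(ScriptedClient(["Hello, world!"]), "Say hello"))
print(out)  # Hello, world!
```

A real provider client satisfying the same protocol drops in with no change to the agent code — the same structural idea behind swapping `MockLLMClient` for `AnthropicLLMClient` in the Quick Start.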

Examples

The examples directory contains runnable examples covering every SDK component. All examples use MockLLMClient for deterministic, API-key-free execution.

See the examples README for a complete index.

Documentation

Primary entry points:

Guide                     Description
Getting Started           Build your first agent
Core Concepts             The agent loop, tools, prompts, LLM clients
Agent Types               Agent strategies and when to use each
Multi-Agent Coordination  Coordination patterns for multi-agent systems
Deployment                Full-stack compose, take-to-own-infra, resource and shutdown patterns
API Reference             Generated from source docstrings — signatures, fields, constraints

For the complete catalogue — Memory, Orchestration, Evaluation, HITL, Tools, Planning, Context Management, Error Handling, Safety, Security, Observability, Building Applications, Architecture, SDK Internals, Diagnosing Agent Issues, Testing, Streaming, Production, Built-in Tools, Local LLMs — see the full guides index.

Project

About

Nanitics is built and maintained by Propodeum and used in production client engagements with technical teams taking agentic AI from prototype to production. If you want help integrating it, see propodeum.com.

License

Apache License 2.0. See LICENSE for the full text.
