# Logician


*A lightweight, deterministic LLM agent framework built on llama.cpp with tool calling, RAG, persistent memory, and structured tracing.*


## 🎯 What Is Logician?

Logician is a production-grade agent framework for building tool-using AI agents on top of llama.cpp. It provides:

- Deterministic tool execution (JSON or TOON format)
- Persistent memory with semantic search over conversations
- RAG integration for external knowledge
- Structured logging & tracing for debugging and monitoring
- Self-reflection loops for iterative improvement
- llama.cpp backend for local, privacy-preserving inference

## 🚀 Quick Start

```bash
# Clone and install
git clone https://github.com/lseman/logician.git
cd logician
pip install -e .

# Run a demo
python apps/runners/demo.py

# Or start a REPL
python apps/runners/repl_demo.py

# Or ingest a repo without the TUI
python apps/runners/repo_ingest.py /path/to/repo

# Or clone + ingest a git URL into .logician/repos/_checkouts
python apps/runners/repo_ingest.py https://github.com/org/repo.git --base-dir /path/to/workspace
```

### Basic Usage

```python
from agent import Agent
from agent.tools import ToolRegistry

# Register your tools
tool_registry = ToolRegistry()

# Create agent
agent = Agent(
    model="llama3:8b",
    tools=[tool_registry],
    memory_path=".memory",
)

# Run with reflection
result = agent.run(
    prompt="Analyze this time series and forecast next 30 days",
    max_turns=5,
    use_reflection=True,
)

print(result.answer)
print(result.trace_md)  # Markdown timeline
```

πŸ—οΈ Architecture

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                         Agent Core                           β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”‚
β”‚  β”‚  Planning    β”‚β†’β”‚   Acting      β”‚β†’β”‚   Observing       β”‚  β”‚
β”‚  β”‚  (think)     β”‚  β”‚   (tools)    β”‚  β”‚   (verify)       β”‚  β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                              ↓
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                         Backends                             β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”‚
β”‚  β”‚ llama.cpp    β”‚  β”‚  vLLM        β”‚  β”‚  MCP Client      β”‚  β”‚
β”‚  β”‚ (local)      β”‚  β”‚ (GPU)        β”‚  β”‚ (model context)  β”‚  β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                              ↓
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                     Memory & Storage                         β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”‚
β”‚  β”‚  MessageDB   β”‚  β”‚  DocumentDB  β”‚  β”‚  RAG Vector Store β”‚  β”‚
β”‚  β”‚  (SQLite)    β”‚  β”‚  (Chroma)    β”‚  β”‚  (Chroma)        β”‚  β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

## ✨ Key Features

πŸ› οΈ Tool Registry

  • Typed parameters for safe tool calling
  • Dynamic tool loading via add_tool(...) API
  • Dual format support: JSON or TOON (Token-Oriented Object Notation)
from agent.tools import ToolParameter, ToolRegistry

tool_registry.add_tool(
    name="fetch_data",
    description="Fetch time series data from CSV",
    parameters={
        "filepath": ToolParameter(type="string", required=True),
        "date_column": ToolParameter(type="string", required=True),
    },
)

### 🧠 Persistent Memory (MessageDB)

- SQLite storage for conversation history
- Semantic search via Chroma + SentenceTransformers
- Auto-summarization of long histories into SYSTEM prompts
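
The general pattern can be sketched as follows. This is an illustrative toy, not the actual MessageDB API: messages are persisted in SQLite and retrieved by relevance, with a naive keyword-overlap score standing in for the real Chroma + SentenceTransformers semantic search.

```python
import sqlite3

class ToyMessageDB:
    """Toy stand-in: SQLite persistence plus a crude relevance search."""

    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS messages "
            "(id INTEGER PRIMARY KEY, role TEXT, content TEXT)"
        )

    def add(self, role, content):
        self.conn.execute(
            "INSERT INTO messages (role, content) VALUES (?, ?)", (role, content)
        )
        self.conn.commit()

    def search(self, query, top_k=3):
        # Naive relevance: count overlapping words. The real framework would
        # embed the query and search a Chroma collection instead.
        terms = set(query.lower().split())
        rows = self.conn.execute("SELECT role, content FROM messages").fetchall()
        scored = sorted(rows, key=lambda r: -len(terms & set(r[1].lower().split())))
        return scored[:top_k]

db = ToyMessageDB()
db.add("user", "forecast the next 30 days of sales")
db.add("assistant", "used ARIMA on the sales series")
db.add("user", "what is the weather today")
hits = db.search("sales forecast", top_k=2)
```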

### 📚 RAG Document Store

- Separate Chroma collection for external documents
- Chunking & metadata with top-k retrieval
- Injected as SYSTEM context in prompts

```python
from agent import DocumentDB

db = DocumentDB(collection_name="docs")
db.add_directory("./docs/")

# Documents appear as SYSTEM context in agent prompts
```

### 🔄 Self-Reflection

- Second-pass critique of final answers
- Optional refinement or additional tool calls
- Configurable prompt and token budget

```python
result = agent.run(
    prompt="Debug this code",
    use_reflection=True,  # Enable self-reflection
    reflection_prompt="Review your answer for accuracy and completeness",
)
```

### 📊 Logging & Tracing

- Structured logging via `AGENT_LOG_LEVEL` and `AGENT_LOG_JSON`
- Per-module loggers: `agent`, `agent.tools`, `agent.db`, `agent.rag`
- Trace output:
  - `debug`: structured JSON trace (events, timings, config)
  - `trace_md`: Markdown timeline for UI rendering
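
As a sketch of how these environment variables could drive a per-module logger (the helper below is hypothetical, not the framework's actual implementation):

```python
import json
import logging
import os

def make_logger(name="agent"):
    """Build a logger honoring AGENT_LOG_LEVEL and AGENT_LOG_JSON (sketch)."""
    level = os.environ.get("AGENT_LOG_LEVEL", "INFO").upper()
    logger = logging.getLogger(name)
    logger.setLevel(getattr(logging, level, logging.INFO))
    handler = logging.StreamHandler()
    if os.environ.get("AGENT_LOG_JSON", "false").lower() == "true":
        # Emit one JSON object per log record
        class JsonFormatter(logging.Formatter):
            def format(self, record):
                return json.dumps({
                    "logger": record.name,
                    "level": record.levelname,
                    "msg": record.getMessage(),
                })
        handler.setFormatter(JsonFormatter())
    else:
        handler.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger

os.environ["AGENT_LOG_LEVEL"] = "DEBUG"
log = make_logger("agent.tools")
```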

βš™οΈ llama.cpp Backend

  • Dual API support: /v1/chat/completions and /completion
  • Streaming with token callbacks
  • Retry with backoff, configurable stop tokens
  • Temperature, max tokens, and other inference parameters
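
The retry-with-backoff behavior can be sketched generically. Everything below is illustrative: `flaky_backend` is a stand-in for the real llama.cpp HTTP call, not the actual backend API.

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn(), retrying on connection errors with exponential backoff."""
    for i in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if i == attempts - 1:
                raise  # Out of attempts: surface the error
            time.sleep(base_delay * (2 ** i))  # 1x, 2x, 4x, ... base delay

# Simulate a backend that fails twice, then succeeds
failures = {"left": 2}

def flaky_backend():
    if failures["left"] > 0:
        failures["left"] -= 1
        raise ConnectionError("backend unavailable")
    return {"content": "ok"}

result = with_retries(flaky_backend)
```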

## 📦 Project Structure

```
agent/
├── src/
│   ├── agent/          # Core agent logic
│   │   ├── core.py     # Main agent loop
│   │   ├── trace.py    # Structured tracing
│   │   ├── memory.py   # Memory management
│   │   └── tools/      # Tool registry
│   ├── backends/       # Model backends
│   ├── db/             # Database layer
│   ├── eoh/            # Evolution of Heuristics (meta-learning)
│   ├── mcp/            # Model Context Protocol client
│   └── reasoners/      # Reasoning strategies (CoT, ToT, etc.)
├── apps/
│   ├── runners/        # Demo and diagnostic runners
│   └── plotting/       # Visualization helpers
├── skills/             # Tool definitions
├── tests/              # Unit and integration tests
├── docs/               # Documentation
├── pyproject.toml      # Dependencies and build config
└── README.md
```

## 🔧 Configuration

### Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `AGENT_LOG_LEVEL` | Log level (`DEBUG`, `INFO`, `WARNING`, `ERROR`) | `INFO` |
| `AGENT_LOG_JSON` | Emit JSON logs | `false` |
| `AGENT_MODEL_PATH` | Path to llama.cpp model | `./models/` |
| `AGENT_MEMORY_PATH` | Memory storage path | `.memory` |
| `AGENT_TOON_FORMAT` | Use TOON format | `false` |
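
For example, a shell session enabling debug-level JSON logs for one run might look like this (variable names from the table above; the demo path comes from the Quick Start):

```shell
export AGENT_LOG_LEVEL=DEBUG
export AGENT_LOG_JSON=true
export AGENT_MEMORY_PATH=.memory
python apps/runners/demo.py
```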

### Agent Parameters

```python
agent = Agent(
    model="llama3:8b",
    tools=[tool_registry],
    memory_path=".memory",
    max_turns=10,
    temperature=0.7,
    use_reflection=True,
    toon_format=False,
)
```

## 🧪 Testing

```bash
# Run all tests
pytest tests/

# Run specific test
pytest tests/test_agent.py

# Run with coverage
pytest tests/ --cov=src/agent
```

## 🚧 Roadmap

- Python SDK for easier integration
- UI dashboard for tracing and monitoring
- Multi-agent collaboration workflows
- Fine-tuning support for custom models
- Event streaming for real-time tracing

## 🤝 Contributing

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

## 📄 License

MIT License - see `LICENSE` for details.


πŸ™ Acknowledgments


Need help? Open an issue or join the discussion.
