# litecrew

Multi-agent orchestration for people who don't want a framework.


Most multi-agent libraries want you to learn their abstractions, their decorators, their 47 integration patterns. You just want two LLMs to pass data to each other.

20% of the features. 1% of the code. Zero learning curve.

```python
from litecrew import Agent, crew

researcher = Agent("researcher", model="gpt-4o-mini")
writer = Agent("writer", model="claude-3-5-sonnet-20241022")

@crew(researcher, writer)
def write_article(topic: str) -> str:
    research = researcher(f"Research {topic}, return key facts")
    return writer(f"Write article using: {research}")

article = write_article("quantum computing")
```

That's it. No config files. No YAML. No 200-page docs.

```bash
pip install litecrew
```

## ⚡ Why litecrew?

| Framework | To run 2 agents in sequence, you need... |
| --- | --- |
| CrewAI | `Crew`, `Task`, `Agent` classes, YAML config, decorators |
| LangGraph | `StateGraph`, nodes, edges, conditional routing |
| AutoGen | `ConversableAgent`, `GroupChat`, `GroupChatManager` |
| **litecrew** | `sequential(agent1, agent2)` |

We're not better. We're smaller. If you need complex orchestration, use the big frameworks. If you need something working in 5 minutes, we're here.


## 🔑 BYOK: Bring Your Own Keys

litecrew never touches your API keys. We don't proxy, store, or even see them.

```bash
# Set your keys as environment variables (standard practice)
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
```

The official openai and anthropic Python libraries read these automatically. litecrew just calls those libraries. Your keys stay on your machine.

- ✅ No litecrew account required
- ✅ No API proxy
- ✅ No telemetry
- ✅ No key storage
- ✅ Works offline with local models (via OpenAI-compatible APIs)

## 🦙 Ollama & Local Models

litecrew works with any OpenAI-compatible API, including Ollama, LM Studio, vLLM, and more.

```python
import openai
from litecrew import Agent

# Point to your local Ollama server
openai.base_url = "http://localhost:11434/v1"
openai.api_key = "ollama"  # Ollama doesn't need a real key

# Use any local model
agent = Agent(
    name="local",
    model="llama3.2",  # or mistral, qwen2.5, phi3, etc.
    system="You are a helpful assistant."
)

response = agent("Explain quantum computing in simple terms.")
print(response)
```

Or use environment variables:

```bash
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_API_KEY="ollama"
```

Supported local providers:

| Provider | Base URL | Notes |
| --- | --- | --- |
| Ollama | `http://localhost:11434/v1` | Most popular |
| LM Studio | `http://localhost:1234/v1` | GUI-based |
| vLLM | `http://localhost:8000/v1` | Production-grade |
| LocalAI | `http://localhost:8080/v1` | Docker-friendly |
| text-generation-webui | `http://localhost:5000/v1` | With OpenAI extension |

Mix cloud and local:

```python
# Use local for research (free), cloud for final output (quality)
researcher = Agent("researcher", model="llama3.2")  # Local via Ollama
writer = Agent("writer", model="gpt-4o")  # Cloud via OpenAI
```

## 🎯 What litecrew IS

A minimal orchestration layer for simple multi-agent workflows.

✅ **Use litecrew when:**

- You have 2-5 agents that pass data to each other
- You're prototyping and want to move fast
- You want to understand every line of your orchestration code
- You're learning how multi-agent systems work
- You need something working in 10 minutes, not 10 hours

Core features:

- Define agents (model + tools + system prompt)
- Sequential handoffs (A → B → C)
- Parallel fan-out (A → [B, C, D] → collect)
- Tool calling (OpenAI function-calling format)
- Token tracking and cost awareness
- Optional persistent memory via soul-agent
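There's no magic behind those two combinators: a sequential handoff is a left fold (each agent's output becomes the next agent's input), and fan-out is a map. A dependency-free sketch of the pattern, with plain functions standing in for LLM-backed agents (the function names are illustrative, not litecrew's internals):

```python
from typing import Callable

def sequential_run(*steps: Callable[[str], str]) -> Callable[[str], str]:
    """Chain callables: the output of each becomes the input of the next."""
    def pipeline(prompt: str) -> str:
        for step in steps:
            prompt = step(prompt)
        return prompt
    return pipeline

def parallel_run(*branches: Callable[[str], str]) -> Callable[[str], list[str]]:
    """Fan the same input out to every callable and collect the results."""
    def fan_out(prompt: str) -> list[str]:
        return [branch(prompt) for branch in branches]
    return fan_out

# Plain functions standing in for agents:
upper = lambda s: s.upper()
exclaim = lambda s: s + "!"

print(sequential_run(upper, exclaim)("hello"))  # HELLO!
print(parallel_run(upper, exclaim)("hello"))    # ['HELLO', 'hello!']
```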

## 🚫 What litecrew is NOT

We're honest about scope. If you need any of these, use a full framework:

| ❌ Don't use litecrew when you need... | Use instead |
| --- | --- |
| Complex hierarchical agent management | CrewAI, AutoGen |
| Stateful conversation with branching | LangGraph |
| Production enterprise workflows | LangGraph, Temporal |
| Visual workflow builders | Flowise, n8n |
| 47 pre-built integrations | LangChain |
| Human-in-the-loop approval flows | CrewAI, custom |
| Automatic retry with exponential backoff | Tenacity + custom |
| Streaming responses | Direct API calls |
| Agent-to-agent negotiation | AutoGen |

**The deal:** we do 20% of what CrewAI does in 1% of the code. That's a tradeoff. If you need the other 80%, you've outgrown us, and that's fine.
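If you do need retries before graduating, a stdlib-only wrapper with exponential backoff fits in a dozen lines. This is our sketch, not a litecrew feature; `with_retry` and the flaky stand-in are hypothetical names:

```python
import time

def with_retry(fn, attempts: int = 3, base_delay: float = 1.0):
    """Wrap a callable (e.g. an Agent) with exponential-backoff retries."""
    def wrapped(prompt: str) -> str:
        for attempt in range(attempts):
            try:
                return fn(prompt)
            except Exception:
                if attempt == attempts - 1:
                    raise  # out of attempts: surface the error
                time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
    return wrapped

# Stand-in for a flaky agent: fails twice, then succeeds.
calls = {"n": 0}
def flaky(prompt: str) -> str:
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return f"ok: {prompt}"

print(with_retry(flaky, base_delay=0.01)("hello"))  # ok: hello
```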


## 📊 Comparison

| Framework | Lines of Code | Learning Curve | Flexibility | Our Take |
| --- | --- | --- | --- | --- |
| litecrew | ~150 | Minutes | Limited | Start here |
| CrewAI | ~15,000 | Hours | High | Graduate to this |
| LangGraph | ~50,000 | Days | Very high | For complex flows |
| AutoGen | ~30,000 | Days | High | For agent negotiation |

Our recommendation:

1. **Start with litecrew:** get your prototype working.
2. **Hit a limitation:** you need something we don't do.
3. **Graduate to CrewAI + crewai-soul:** keep your memory layer.

## Installation

```bash
pip install litecrew
```

With providers (the quotes keep shells like zsh from globbing the brackets):

```bash
pip install "litecrew[openai]"      # OpenAI support
pip install "litecrew[anthropic]"   # Anthropic support
pip install "litecrew[all]"         # Everything, including memory
```

## Usage

### Basic Agent

```python
from litecrew import Agent

agent = Agent(
    name="assistant",
    model="gpt-4o-mini",  # or "claude-3-5-sonnet-20241022"
    system="You are a helpful assistant."
)

response = agent("What is the capital of France?")
print(response)
print(agent.tokens)  # {"in": 23, "out": 15}
```
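Since `agent.tokens` exposes input/output counts, a rough cost estimate is one multiplication away. The helper and the per-million-token rates below are placeholders, not litecrew API or current provider pricing:

```python
def estimate_cost(tokens: dict, usd_per_m_in: float, usd_per_m_out: float) -> float:
    """Rough USD cost from a {'in': ..., 'out': ...} token count."""
    return (tokens["in"] * usd_per_m_in + tokens["out"] * usd_per_m_out) / 1_000_000

# Placeholder rates; substitute your provider's price sheet.
print(estimate_cost({"in": 23, "out": 15}, usd_per_m_in=0.15, usd_per_m_out=0.60))
```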

### Sequential Handoff

```python
from litecrew import Agent, sequential

researcher = Agent("researcher", model="gpt-4o-mini")
writer = Agent("writer", model="gpt-4o-mini")
editor = Agent("editor", model="gpt-4o-mini")

pipeline = sequential(researcher, writer, editor)
result = pipeline("Write about AI safety")
```

### Parallel Execution

```python
from litecrew import Agent, parallel

security = Agent("security", system="Review for security issues.")
performance = Agent("performance", system="Review for performance.")
style = Agent("style", system="Review for code style.")

review_all = parallel(security, performance, style)
results = review_all("def get_user(id): return db.query(f'SELECT * FROM users WHERE id={id}')")
# Returns: ["SQL injection risk...", "Consider caching...", "Use parameterized queries..."]
```

### With Tools

```python
from litecrew import Agent, tool

@tool(schema={
    "type": "object",
    "properties": {"query": {"type": "string"}},
    "required": ["query"]
})
def search(query: str) -> str:
    return f"Results for: {query}"

agent = Agent("assistant", tools=[search])
response = agent("Search for the latest AI news")
```
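The schema passed to `@tool` is plain JSON Schema. In OpenAI's function-calling request format, a tool like `search` would be described roughly as below; this is a hand-written illustration of the wire format (the description string is ours), not litecrew output:

```python
# What an OpenAI-style tool definition for `search` looks like on the wire.
tool_spec = {
    "type": "function",
    "function": {
        "name": "search",
        "description": "Search for information.",  # illustrative
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}

print(tool_spec["function"]["name"])  # search
```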

### With Persistent Memory

```python
from litecrew import Agent, with_memory

agent = Agent("assistant", model="gpt-4o-mini")
agent = with_memory(agent, namespace="my-assistant")

# Agent now remembers across sessions
agent("My name is Alice and I work at Acme Corp")
# ... later, even after restart ...
agent("Where do I work?")  # "You work at Acme Corp"
```

## Testing

```bash
# Install dev dependencies
pip install "litecrew[dev]"

# Run tests
pytest tests/

# Run with coverage
pytest tests/ --cov=litecrew
```

## The Soul Ecosystem

litecrew is part of a family of simple, composable AI tools:

| Package | Purpose | When to Use |
| --- | --- | --- |
| litecrew | Minimal orchestration | Starting out, prototypes |
| soul-agent | Persistent memory | Add memory to any agent |
| crewai-soul | CrewAI + memory | Production multi-agent |
| langchain-soul | LangChain + memory | Complex chains |
| llamaindex-soul | LlamaIndex + memory | RAG pipelines |

## Philosophy

> "Perfection is achieved not when there is nothing more to add, but when there is nothing left to take away." – Antoine de Saint-Exupéry

Most frameworks race to add features. We race to keep them out.

The SQLite strategy: SQLite doesn't try to be PostgreSQL. It does one thing well and says "if you need more, use something else." That's us.


## FAQ

**Q: Why not just use CrewAI?**
A: CrewAI is great when you need it. But sometimes you just want two agents to pass data without learning a framework. That's us.

**Q: How do I add feature X?**
A: Fork it. The code is ~150 lines. Add what you need. Or graduate to CrewAI.

**Q: Will you add streaming/callbacks/hierarchies?**
A: No. Adding features would make us the thing we're replacing.

**Q: Is this production-ready?**
A: For simple workflows, yes. For complex enterprise needs, use CrewAI + crewai-soul.

**Q: Do you store my API keys?**
A: No. We never see them. They stay in your environment variables.


## License

MIT. Do whatever you want.


## Contributing

- **Bug?** Open an issue.
- **Feature request?** Consider whether it keeps us simple. If not, fork it.
- **PR?** Keep it minimal.

Built by The Menon Lab | Blog | Twitter
