A scalable research assistant with modular architecture supporting multiple LLM providers and extensible workflow nodes.
- Modular Architecture - Pluggable components for maximum flexibility
- Multi-LLM Support - Easy switching between Ollama, OpenAI, Anthropic
- Interactive Workflow - Human review and approval steps
- Extensible Nodes - Add new research capabilities easily
- Structured Generation - JSON-constrained LLM output for reliability
- Web Interface - Streamlit UI for easy interaction
- Configuration-Driven - YAML-based workflow management
Install dependencies:
```shell
pip install -r requirements.txt
```
Set up the LLM provider:
```shell
# For Ollama (default)
ollama serve
ollama pull llama3.2

# For OpenAI (coming soon)
export OPENAI_API_KEY="your-key"

# For Anthropic (coming soon)
export ANTHROPIC_API_KEY="your-key"
```
Run the application:
```shell
# Web interface
streamlit run main.py
```
Workflow and provider settings live in `config/settings.yaml`:
```yaml
llm_provider:
  type: "ollama"  # or "openai", "anthropic"

  ollama:
    model: "llama3.2"
    base_url: "http://localhost:11434"

  openai:
    model: "gpt-4"
    temperature: 0.7

  anthropic:
    model: "claude-3-sonnet"
    temperature: 0.7

workflow:
  enable_human_review: true
  max_retries: 3
  timeout: 300
```
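As a minimal sketch of how the configuration above can be consumed (assuming PyYAML is installed; the helper name `active_provider` is illustrative, not part of the project), the active provider block is selected by the `type` key:

```python
import yaml

# Illustrative helper: parse the settings text and return the config
# block for the configured provider type (structure matches the YAML above).
def active_provider(settings_text):
    cfg = yaml.safe_load(settings_text)
    provider_type = cfg["llm_provider"]["type"]
    return provider_type, cfg["llm_provider"][provider_type]

settings = """
llm_provider:
  type: "ollama"
  ollama:
    model: "llama3.2"
    base_url: "http://localhost:11434"
"""
name, block = active_provider(settings)
print(name, block["model"])  # ollama llama3.2
```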
Custom workflow nodes extend `BaseNode`:
```python
from agents.nodes.base_node import BaseNode

class CustomAnalysisNode(BaseNode):
    def __init__(self, llm_provider):
        super().__init__(
            name="custom_analysis",
            llm_provider=llm_provider
        )

    def execute(self, state):
        # Your custom logic here
        return {"custom_results": "analysis_data"}
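Outside the repository the `agents.nodes.base_node` import isn't available, so the sketch below substitutes a minimal stand-in for `BaseNode` (assumed to store the name and provider) to show how such a node would be driven:

```python
# Hypothetical stand-in for the real BaseNode from agents.nodes.base_node.
class BaseNode:
    def __init__(self, name, llm_provider):
        self.name = name
        self.llm_provider = llm_provider

class CustomAnalysisNode(BaseNode):
    def __init__(self, llm_provider):
        super().__init__(name="custom_analysis", llm_provider=llm_provider)

    def execute(self, state):
        # A real node would call self.llm_provider here.
        return {"custom_results": "analysis_data"}

node = CustomAnalysisNode(llm_provider=None)  # pass a configured provider in practice
result = node.execute(state={"topic": "example"})
print(result)  # {'custom_results': 'analysis_data'}
```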
To add a new LLM provider:
- Implement the `BaseLLMProvider` interface
- Add it to `LLMProviderFactory`
- Update the configuration schema

To add a new workflow node:
- Extend `BaseNode` or `ConditionalNode`
- Implement the `execute()` method
- Register it in the workflow configuration
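The provider steps above can be sketched as follows. The real `BaseLLMProvider` interface and `LLMProviderFactory` registration API aren't shown in this README, so the method names (`generate`, `register`, `create`) are assumptions for illustration:

```python
from abc import ABC, abstractmethod

# Assumed shape of the provider interface (method name is illustrative).
class BaseLLMProvider(ABC):
    @abstractmethod
    def generate(self, prompt: str) -> str: ...

class EchoProvider(BaseLLMProvider):
    """Toy provider used only to illustrate the registration flow."""
    def generate(self, prompt: str) -> str:
        return f"echo: {prompt}"

# Assumed factory with a simple name -> class registry.
class LLMProviderFactory:
    _registry = {}

    @classmethod
    def register(cls, name, provider_cls):
        cls._registry[name] = provider_cls

    @classmethod
    def create(cls, name, **kwargs):
        return cls._registry[name](**kwargs)

LLMProviderFactory.register("echo", EchoProvider)
provider = LLMProviderFactory.create("echo")
print(provider.generate("hello"))  # echo: hello
```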
For a deeper dive into the system's architecture, refer to Architecture Details. This document provides an in-depth explanation of the modular design, workflow orchestration, and integration with multiple LLM providers.
LLM Provider Connection:
```shell
# Check Ollama status
ollama list
curl http://localhost:11434/api/version
```
Configuration Issues:
- Verify `config/settings.yaml` syntax
- Check provider-specific environment variables
- Ensure model names match available models
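The checks above can be automated with a small sanity-check sketch (assuming PyYAML and the settings layout shown earlier; the helper name is hypothetical):

```python
import os
import yaml

def check_settings(text):
    """Return a list of human-readable problems found in the config text."""
    problems = []
    try:
        cfg = yaml.safe_load(text)
    except yaml.YAMLError as e:
        return [f"invalid YAML: {e}"]
    if not isinstance(cfg, dict):
        return ["config is not a mapping"]
    provider = cfg.get("llm_provider", {}).get("type")
    if provider not in ("ollama", "openai", "anthropic"):
        problems.append(f"unknown provider type: {provider!r}")
    # Provider-specific environment variables.
    if provider == "openai" and not os.environ.get("OPENAI_API_KEY"):
        problems.append("OPENAI_API_KEY is not set")
    if provider == "anthropic" and not os.environ.get("ANTHROPIC_API_KEY"):
        problems.append("ANTHROPIC_API_KEY is not set")
    return problems

print(check_settings("llm_provider:\n  type: ollama"))  # []
```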