
LLMAgent

LLMAgent is an abstraction library for building domain-specific intelligent agents based on Large Language Models (LLMs). Built on top of AgentForge's signal-driven architecture, LLMAgent provides specialized patterns for LLM-powered agents, including predefined signals, handlers, store structures, and flows optimized for conversational agents.

Features

  • 🧠 LLM-specific interaction patterns and signal types
  • πŸ”€ Message processing workflows and handlers
  • πŸ› οΈ Tool integration and execution
  • ⏱️ Long-running task management
  • πŸ’¬ Context and conversation management
  • πŸ”Œ Plugin-based provider integrations
  • πŸ”„ AgentForge compatibility
  • 🌊 Dynamic workflow orchestration: lets LLMs compose multi-step workflows based on conversation context

Installation

Add llm_agent to your list of dependencies in mix.exs:

def deps do
  [
    {:llm_agent, "~> 0.1.1"},
    # Optional LLM provider dependencies
    {:openai, "~> 0.5.0"}, # If using OpenAI
    {:anthropic, "~> 0.1.0"} # If using Anthropic
  ]
end

Quick Start

# Create agent with system prompt and basic tools
{flow, initial_state} = LLMAgent.new(
  "You are a helpful assistant that can answer questions and use tools.",
  [
    %{
      name: "search",
      description: "Search the web for information",
      parameters: %{
        "type" => "object",
        "properties" => %{
          "query" => %{
            "type" => "string",
            "description" => "Search query"
          }
        },
        "required" => ["query"]
      },
      execute: &MyApp.Tools.search/1
    }
  ]
)

# Process a user message
{:ok, result, new_state} = LLMAgent.process(flow, initial_state, "What is the capital of France?")

# Handle result
case result do
  %{type: :response, data: content} ->
    IO.puts("Agent response: #{content}")
    
  %{type: :thinking, data: thought} ->
    IO.puts("Agent thinking: #{thought}")
    
  %{type: :error, data: %{message: message}} ->
    IO.puts("Error: #{message}")
end
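The `execute` function referenced in the tool definition receives the tool's arguments as a map and returns a result the agent can feed back to the LLM. A minimal sketch of what `MyApp.Tools.search/1` might look like (the module body and its stubbed return shape are assumptions, not part of LLMAgent):

```elixir
defmodule MyApp.Tools do
  # Hypothetical tool implementation: receives the arguments map built
  # from the tool's JSON-schema parameters and returns a plain map.
  def search(%{"query" => query}) do
    # A real implementation would call a search API here;
    # this stub returns a placeholder result.
    %{results: [%{title: "Stub result", snippet: "Placeholder answer for: #{query}"}]}
  end
end
```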

Core Components

Signals

LLMAgent defines specialized signal types for LLM interactions:

# Create a user message signal
signal = LLMAgent.Signals.user_message("Help me analyze AAPL stock")

# Create a thinking signal
thinking = LLMAgent.Signals.thinking("I need to get stock price data", 1)

# Create a tool call signal
tool_call = LLMAgent.Signals.tool_call("get_stock_price", %{ticker: "AAPL"})

Handlers

Handlers process LLM-specific signals:

# Message handler processes user messages
LLMAgent.Handlers.message_handler(signal, state)

# Tool handler executes tool calls
LLMAgent.Handlers.tool_handler(tool_call, state)
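Custom handlers can follow the same shape. A minimal sketch, assuming the convention that a handler takes a signal and state and returns both (the module name, logging behavior, and `{signal, state}` return tuple are illustrative assumptions):

```elixir
defmodule MyApp.Handlers do
  # Hypothetical handler: logs each signal's type, then passes the
  # signal and state through unchanged. The exact return contract is
  # defined by AgentForge; a {signal, state} tuple is assumed here.
  def logging_handler(%{type: type} = signal, state) do
    IO.puts("Handling signal: #{inspect(type)}")
    {signal, state}
  end
end
```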

Store

Manage conversation state:

# Create a new store
state = LLMAgent.Store.new()

# Add a message to history
state = LLMAgent.Store.add_message(state, "user", "Hello")

# Get conversation history
history = LLMAgent.Store.get_llm_history(state)
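Because the store holds ordinary Elixir data, helpers are easy to layer on top. For example, a hypothetical helper (not part of LLMAgent's API) that trims history to the most recent messages before sending it to the LLM:

```elixir
defmodule MyApp.History do
  # Hypothetical helper: keep only the most recent `max` messages so a
  # long conversation stays within the model's context window.
  def trim(messages, max \\ 20) do
    # Enum.take/2 with a negative count keeps the last `max` elements.
    Enum.take(messages, -max)
  end
end
```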

Flows

Create standard workflow compositions:

# Create a conversation flow with tools
{flow, state} = LLMAgent.Flows.conversation(system_prompt, tools)

# Create a simple QA agent
{flow, state} = LLMAgent.Flows.qa_agent(system_prompt)

Tasks

Manage long-running operations:

# Define a task with AgentForge primitives
task_def = [
  AgentForge.Primitives.transform(fn data -> Map.put(data, :processed, true) end)
]

# Start the task
{task_id, signal} = LLMAgent.Tasks.start(task_def, params, state)
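Conceptually, each transform step is just a function from data to data, applied in order. The composition can be sketched in plain Elixir (the step functions here are illustrative, not a real task definition):

```elixir
# Each step takes the accumulated data map and returns an updated map.
steps = [
  fn data -> Map.put(data, :processed, true) end,
  fn data -> Map.put(data, :step_count, 2) end
]

# Running the task amounts to reducing the data through the steps.
result = Enum.reduce(steps, %{input: "report"}, fn step, acc -> step.(acc) end)
# result => %{input: "report", processed: true, step_count: 2}
```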

LLM Providers

LLMAgent supports multiple LLM providers through its plugin system:

# Use OpenAI for completions
{:ok, response} = LLMAgent.Providers.OpenAI.completion(%{
  model: "gpt-4",
  messages: [%{role: "user", content: "Hello"}],
  max_tokens: 500
})

# Use Anthropic for completions
{:ok, response} = LLMAgent.Providers.Anthropic.completion(%{
  model: "claude-3-opus-20240229",
  messages: [%{role: "user", content: "Hello"}],
  max_tokens: 500
})

Domain-Specific Agents

Build specialized agents by:

  1. Creating domain-specific handlers
  2. Registering domain-specific tools
  3. Defining domain-specific tasks
  4. Creating domain-specific flows

defmodule MyApp.InvestmentAgent do
  def new(options \\ %{}) do
    # Define system prompt
    system_prompt = "You are an AI investment assistant..."
    
    # Define investment tools
    tools = [
      %{
        name: "get_stock_price",
        description: "Get current price for a stock",
        parameters: %{
          "type" => "object",
          "properties" => %{
            "ticker" => %{
              "type" => "string",
              "description" => "Stock ticker symbol"
            }
          },
          "required" => ["ticker"]
        },
        execute: &MyApp.Tools.get_stock_price/1
      }
    ]
    
    # Create agent flow
    LLMAgent.new(system_prompt, tools, options)
  end
end
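The `MyApp.Tools.get_stock_price/1` function referenced above follows the same contract as any tool's `execute` function. A stubbed sketch (the return shape and values are assumptions for illustration):

```elixir
defmodule MyApp.Tools do
  # Hypothetical tool implementation: a real version would query a
  # market-data API; this stub returns fixed data in a plausible shape.
  def get_stock_price(%{"ticker" => ticker}) do
    %{ticker: ticker, price: 189.84, currency: "USD"}
  end
end
```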

Contributing

We welcome contributions from the community! Please see our Contributing Guide for more information on how to get involved.

License

LLMAgent is released under the MIT License. See the LICENSE file for details.
