# multi_agent_tool

## Environment Variables (Local LLM)

This project loads `.env` automatically via python-dotenv.

Set the local LLM variables below to run against a local OpenAI-compatible server:

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| `LLAMA_SERVER_BASE_URL` | Yes (for local mode) | none | Base URL of your local server, e.g. `http://127.0.0.1:8080/v1`. |
| `LLAMA_SERVER_MODEL_ID` | Yes (for local mode) | none | Model name/ID sent as `model` in chat completion requests. |
| `LLAMA_SERVER_API_KEY` | No | empty | Optional bearer token for local server auth. |
| `LLAMA_SERVER_TIMEOUT` | No | `120` | Request timeout in seconds. Must be > 0. |
| `LLAMA_SERVER_FOLLOWUP_PROMPT` | No | `Please provide your response.` | Prompt appended when the previous message role is `assistant` or `tool`. |
| `LLAMA_SERVER_CONTINUE_PROMPT` | No | `Continue.` | Prompt used when the model stops with `finish_reason=length`. |
| `LLAMA_SERVER_MAX_CONTINUATIONS` | No | `0` | Number of continuation retries when the response is truncated. Must be >= 0. |
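As a sketch of how these variables could be consumed, the helper below reads the `LLAMA_SERVER_*` settings with the documented defaults and validation rules. Note that `load_llama_config` is a hypothetical name for illustration, not a function from this project's code.

```python
import os

# Defaults mirror the table above.
DEFAULTS = {
    "LLAMA_SERVER_TIMEOUT": "120",
    "LLAMA_SERVER_FOLLOWUP_PROMPT": "Please provide your response.",
    "LLAMA_SERVER_CONTINUE_PROMPT": "Continue.",
    "LLAMA_SERVER_MAX_CONTINUATIONS": "0",
}


def load_llama_config() -> dict:
    """Read LLAMA_SERVER_* variables, applying defaults and validation."""
    base_url = os.getenv("LLAMA_SERVER_BASE_URL")
    model_id = os.getenv("LLAMA_SERVER_MODEL_ID")
    if not base_url or not model_id:
        # Both are required when running in local mode.
        raise RuntimeError(
            "LLAMA_SERVER_BASE_URL and LLAMA_SERVER_MODEL_ID are required"
        )

    timeout = float(
        os.getenv("LLAMA_SERVER_TIMEOUT", DEFAULTS["LLAMA_SERVER_TIMEOUT"])
    )
    if timeout <= 0:
        raise ValueError("LLAMA_SERVER_TIMEOUT must be > 0")

    max_continuations = int(
        os.getenv(
            "LLAMA_SERVER_MAX_CONTINUATIONS",
            DEFAULTS["LLAMA_SERVER_MAX_CONTINUATIONS"],
        )
    )
    if max_continuations < 0:
        raise ValueError("LLAMA_SERVER_MAX_CONTINUATIONS must be >= 0")

    return {
        "base_url": base_url,
        "model_id": model_id,
        "api_key": os.getenv("LLAMA_SERVER_API_KEY", ""),
        "timeout": timeout,
        "followup_prompt": os.getenv(
            "LLAMA_SERVER_FOLLOWUP_PROMPT",
            DEFAULTS["LLAMA_SERVER_FOLLOWUP_PROMPT"],
        ),
        "continue_prompt": os.getenv(
            "LLAMA_SERVER_CONTINUE_PROMPT",
            DEFAULTS["LLAMA_SERVER_CONTINUE_PROMPT"],
        ),
        "max_continuations": max_continuations,
    }
```

With only the two required variables set, the optional settings fall back to the defaults from the table.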

## Optional App Variables

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| `MAF_WORKFLOW_MODE` | No | `sequential` | One of `sequential`, `group_chat`, `concurrent`. |
| `MAF_GROUPCHAT_MAX_ROUNDS` | No | number of agents | Max rounds for `group_chat` mode. Must be an integer > 0. |
| `AGENTS_TABLE_PATH` | No | `agents.csv` | Path to the agent configuration CSV. |
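The workflow settings above could be resolved along these lines; `resolve_workflow_settings` is a hypothetical helper for illustration, assuming the group-chat round limit defaults to the number of configured agents as the table states.

```python
import os

# The three workflow modes documented above.
VALID_MODES = {"sequential", "group_chat", "concurrent"}


def resolve_workflow_settings(agent_count: int) -> tuple[str, int]:
    """Validate MAF_WORKFLOW_MODE and resolve the group-chat round limit.

    MAF_GROUPCHAT_MAX_ROUNDS falls back to the number of agents when unset.
    """
    mode = os.getenv("MAF_WORKFLOW_MODE", "sequential")
    if mode not in VALID_MODES:
        raise ValueError(
            f"MAF_WORKFLOW_MODE must be one of {sorted(VALID_MODES)}, got {mode!r}"
        )

    raw_rounds = os.getenv("MAF_GROUPCHAT_MAX_ROUNDS")
    max_rounds = int(raw_rounds) if raw_rounds is not None else agent_count
    if max_rounds <= 0:
        raise ValueError("MAF_GROUPCHAT_MAX_ROUNDS must be an integer > 0")

    return mode, max_rounds
```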

## Quick Start

1. Copy `.env.example` to `.env`.
2. Fill in local server values.
3. Run:

```shell
python run_test_agent.py
```
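For step 2, a minimal `.env` for local mode might look like the following; the base URL and model ID are placeholder values to replace with your own server's settings.

```shell
# Required for local mode
LLAMA_SERVER_BASE_URL=http://127.0.0.1:8080/v1
LLAMA_SERVER_MODEL_ID=my-local-model

# Optional (defaults shown)
LLAMA_SERVER_TIMEOUT=120
LLAMA_SERVER_MAX_CONTINUATIONS=0
MAF_WORKFLOW_MODE=sequential
AGENTS_TABLE_PATH=agents.csv
```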
