InfiAgent: An Open-Source Agent Framework

This is the repo for the InfiAgent project. InfiAgent supports building an agent demo from scratch, with code execution, API calls, batch inference, and sandbox management. You can easily build your own agents on top of InfiAgent.

🔥 News

  • [2024/2/21] InfiAgent-DABench and DA-Agent released on Hugging Face.
  • [2024/1/27] DAEval with human filtering updated.
  • [2024/1/10] Our paper on InfiAgent-DABench released on arXiv.
  • [2023/12/15] InfiAgent-DABench released.
  • [2023/11/29] InfiAgent repo and website released.

DA-Agent: An Example Implemented with the InfiAgent Framework

We develop DA-Agent, an open-source LLM-based agent for data analysis, built with the InfiAgent framework.

Framework Features

InfiAgent is a project for building your own agents via code execution and API calls. By default, it formulates LLMs as agents via a ReAct pipeline; you can also define your own prompt templates.
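
For intuition, the sketch below shows what a minimal ReAct-style loop looks like. It is an illustration of the pattern only, not InfiAgent's actual implementation; the llm and tool callables are hypothetical stand-ins for a model client and a code executor.

from typing import Callable

def react_loop(question: str,
               llm: Callable[[str], str],
               tool: Callable[[str], str],
               max_steps: int = 5) -> str:
    # Alternate model "Thought/Action" steps with tool "Observation" steps
    # until the model emits a final answer or the step budget runs out.
    transcript = f"Question: {question}\nThought:"
    for _ in range(max_steps):
        step = llm(transcript)  # the model continues the transcript
        transcript += step
        if "Final Answer:" in step:
            return step.split("Final Answer:")[-1].strip()
        if "Action:" in step:  # the model requested a tool call
            action = step.split("Action:")[-1].strip()
            observation = tool(action)  # e.g. run code in the sandbox
            transcript += f"\nObservation: {observation}\nThought:"
    return transcript  # no final answer within the budget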

We have provided comprehensive pipeline code for running your agent locally. Within the pipeline, we offer API calls, local model inference, a Python code sandbox based on Docker, and a frontend built on Streamlit. We take GPT-3.5 as an example to show how to build a data analysis agent given an LLM interface.

  1. API Calls:

    Simply fill in your API key, and you can begin making calls to models such as GPT-3.5, GPT-4, Claude, and more to run your code interpreter pipeline.

  2. Local Model Inference:

    If you would like to run models locally, we have incorporated the capability to perform local inference on your NVIDIA GPU. The local inference feature is built on vLLM for optimized performance on your local machine.

  3. Python Code Sandbox:

    To execute Python code, we implement a Python code sandbox using a subprocess, which is convenient but may introduce certain security issues; a minimal sketch of this idea appears after this list.

    We have also implemented a Python code sandbox based on Docker, where the pipeline executes LLM-generated code in an isolated environment, without risk of damage to local files.

    Considering the complexity of the Docker environment configuration, we have set the subprocess sandbox as the default option.

  4. Streamlit-based Frontend:

    We have provided a Streamlit-based frontend, allowing you to interact with the pipeline visually for a clearer and simpler experience; a minimal sketch of such a frontend also follows this list.
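
As referenced in item 3, here is a minimal sketch of a subprocess-based sandbox. It is illustrative only, not InfiAgent's implementation; note that a bare subprocess isolates interpreter state but not the filesystem or network, which is exactly the security concern mentioned above and the gap the Docker sandbox closes.

import subprocess
import sys

def run_python(code: str, timeout: int = 10) -> str:
    # Run the snippet in a fresh child interpreter and capture its output.
    try:
        result = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True,
            text=True,
            timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return "Error: execution timed out"
    return result.stdout if result.returncode == 0 else result.stderr

print(run_python("print(1 + 1)"))  # prints: 2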
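
And for item 4, here is a minimal sketch of a Streamlit chat frontend, again illustrative rather than InfiAgent's shipped frontend; agent_run is a hypothetical stand-in for the agent pipeline.

import streamlit as st

def agent_run(prompt: str) -> str:
    # Hypothetical stand-in for the agent pipeline described above.
    return f"(agent reply to: {prompt})"

st.title("Agent Demo")

# Render a chat box; route each user message through the agent stand-in.
if prompt := st.chat_input("Ask the agent something..."):
    st.chat_message("user").write(prompt)
    st.chat_message("assistant").write(agent_run(prompt))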

Usage

Installation

InfiAgent requires Python version >= 3.9.

  1. Install infiagent and its requirements:
pip install .
  2. Use the following command to start a demo using APIs. You can change the configs if needed.
# Supported LLMs: OPEN_AI, AZURE_OPEN_AI, LLAMA, OPT
# api_key is required for API-based models
bash run_demo.sh --llm OPEN_AI --api_key <YOUR API KEY> --config_path configs/agent_configs/react_agent_gpt4_async.yaml
  3. (Optional) Use the Docker Python sandbox:
docker build -t codesandbox .
bash run_demo.sh --llm OPEN_AI --api_key <YOUR API KEY> --config_path configs/agent_configs/react_agent_gpt4_async_docker.yaml

Demo Usage

Run your local model serving

  1. Our local LLM serving is built on vLLM. First, install the required dependencies:
pip install vllm fschat
  2. Start a vLLM model server by running:
python3 ./activities/vllm_api_server.py --model "meta-llama/Llama-2-7b-hf"  --served_model_name "meta-llama/Llama-2-7b-hf"

At this point, only Linux environments are supported.

  3. You can check whether the server started successfully with this command (a Python equivalent is shown after this list):
curl http://localhost:8000/v1/completions \
      -H "Content-Type: application/json" \
      -d '{
         "model": "meta-llama/Llama-2-7b-hf",
         "prompt": "San Francisco is a",
         "max_tokens": 7,
         "temperature": 0
      }'
  4. Then run the demo with your local model:
bash run_demo.sh --llm "meta-llama/Llama-2-7b-hf" --config_path configs/agent_configs/react_agent_llama_async.yaml
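
Equivalently, the health check in step 3 can be issued from Python. This is a small sketch using the requests library against the same OpenAI-compatible endpoint shown in the curl command:

import requests

# Query the local vLLM server's OpenAI-compatible completions endpoint,
# mirroring the curl command in step 3.
resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "meta-llama/Llama-2-7b-hf",
        "prompt": "San Francisco is a",
        "max_tokens": 7,
        "temperature": 0,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])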

If you run on a remote worker, change openai.api_base in ./src/infiagent/llm/client/llama.py to your pod IP and then run pip install . again.

For example, you can change the setting from "http://localhost:8000/v1" to "http://[fdbd:dc03:9:130:5500::e5]:8000/v1".

Acknowledgements

We would like to express our sincere gratitude to the open-source projects whose work provided invaluable assistance to our project.

Contact

If you have any questions, feedback, or would like to collaborate on this project, please feel free to reach out to us at huxueyu@zju.edu.cn. Your inquiries and suggestions are highly appreciated.

Thank you for your interest in our work!

Citation

If you find our repo useful, please kindly consider citing:

@misc{hu2024infiagentdabench,
      title={InfiAgent-DABench: Evaluating Agents on Data Analysis Tasks}, 
      author={Xueyu Hu and Ziyu Zhao and Shuang Wei and Ziwei Chai and Qianli Ma and Guoyin Wang and Xuwu Wang and Jing Su and Jingjing Xu and Ming Zhu and Yao Cheng and Jianbo Yuan and Jiwei Li and Kun Kuang and Yang Yang and Hongxia Yang and Fei Wu},
      year={2024},
      eprint={2401.05507},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
