Lagent 🚀

👋 Join us on Twitter, Discord, and WeChat

What's Lagent?

Lagent is a lightweight open-source framework that allows users to efficiently build large language model (LLM)-based agents. It also provides some typical tools to augment LLMs. An overview of the framework is shown below:

(Framework overview diagram)

💻 Tech Stack

Python

Major Features

v0.1.2 was released on Oct 24, 2023:

  • Support for an efficient inference engine. Lagent now supports the efficient inference engine lmdeploy turbomind.

  • Support for multiple kinds of agents out of the box. Lagent now supports ReAct, AutoGPT and ReWOO, which can drive large language models (LLMs) through multiple rounds of reasoning and function calling.

  • Extremely simple and easy to extend. The framework has a simple, clear structure; with only about 20 lines of code you can construct your own agent. It also supports three typical tools: a Python interpreter, API calls, and Google Search (a combined example follows this list).

  • Support for various large language models. We support different LLMs, including API-based (GPT-3.5/4) and open-source (LLaMA 2, InternLM) models.
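As a rough sketch of how these pieces fit together (not taken verbatim from the docs, and assuming LLMQA can sit alongside the other actions in a single executor), one agent can combine all three tool types:

from lagent.actions import ActionExecutor, GoogleSearch, LLMQA, PythonInterpreter
from lagent.agents import ReAct
from lagent.llms import GPTAPI

llm = GPTAPI(model_type='gpt-3.5-turbo', key=['Your OPENAI_API_KEY'])

# One executor holding all three tool types; the agent picks a tool at each reasoning step.
chatbot = ReAct(
    llm=llm,
    action_executor=ActionExecutor(actions=[
        GoogleSearch(api_key='Your SERPER_API_KEY'),  # web search
        PythonInterpreter(),                          # code execution
        LLMQA(llm),                                   # LLM-backed question answering
    ]),
)

print(chatbot.chat('How many prime numbers are there below 100?').response)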

Getting Started

Please see the overview for a general introduction to Lagent. Meanwhile, we provide extremely simple code for a quick start. You may refer to the examples for more details.

Installation

Install with pip (Recommended).

pip install lagent

Optionally, you can also build Lagent from source if you want to modify the code:

git clone https://github.com/InternLM/lagent.git
cd lagent
pip install -e .
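To check that the installation worked, you can print the package version (this assumes lagent exposes __version__, which may vary between releases):

import lagent
print(lagent.__version__)  # assumes this release exposes __version__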

Run ReAct Web Demo

# You need to install streamlit first
# pip install streamlit
streamlit run examples/react_web_demo.py

Then you can chat through the web UI shown in the screenshot below.

(Web demo screenshot)

Run a ReWOO agent with GPT-3.5

Below is an example of running ReWOO with GPT-3.5:

from lagent.agents import ReWOO
from lagent.actions import ActionExecutor, GoogleSearch, LLMQA
from lagent.llms import GPTAPI

llm = GPTAPI(model_type='gpt-3.5-turbo', key=['Your OPENAI_API_KEY'])
search_tool = GoogleSearch(api_key='Your SERPER_API_KEY')
llmqa_tool = LLMQA(llm)

chatbot = ReWOO(
    llm=llm,
    action_executor=ActionExecutor(
        actions=[search_tool, llmqa_tool]),
)

response = chatbot.chat('What profession does Nicholas Ray and Elia Kazan have in common')
print(response.response)
>>> Film director.
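Hard-coding keys works for a quick test, but a safer variant reads them from environment variables. This sketch assumes OPENAI_API_KEY and SERPER_API_KEY are already exported; the rest of the setup is identical to the block above.

import os

from lagent.actions import GoogleSearch
from lagent.llms import GPTAPI

# Same backends as above, with the secrets pulled from the environment instead of the source.
llm = GPTAPI(model_type='gpt-3.5-turbo', key=[os.environ['OPENAI_API_KEY']])
search_tool = GoogleSearch(api_key=os.environ['SERPER_API_KEY'])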

Run a ReAct agent with InternLM

NOTE: If you want to run a HuggingFace model, please run pip install -e .[all] first.

from lagent.agents import ReAct
from lagent.actions import ActionExecutor, GoogleSearch, PythonInterpreter
from lagent.llms import HFTransformer

llm = HFTransformer('internlm/internlm-chat-7b-v1_1')
search_tool = GoogleSearch(api_key='Your SERPER_API_KEY')
python_interpreter = PythonInterpreter()

chatbot = ReAct(
    llm=llm,
    action_executor=ActionExecutor(
        actions=[search_tool, python_interpreter]),
)

# Use a raw string so the LaTeX backslashes reach the model unchanged.
response = chatbot.chat(r'If $z=-1+\sqrt{3}i$, then $\frac{z}{{z\overline{z}-1}}=\left(\ \ \right)$')
print(response.response)
>>> $-\\frac{1}{3}+\\frac{{\\sqrt{3}}}{3}i$
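Because the agent and its tools are decoupled from the model backend, the same ReAct setup can also run on the API-based model from the previous example if no GPU is available. A minimal sketch, reusing only classes shown above:

from lagent.actions import ActionExecutor, GoogleSearch, PythonInterpreter
from lagent.agents import ReAct
from lagent.llms import GPTAPI

# Identical agent wiring; only the LLM backend changes from HFTransformer to GPTAPI.
llm = GPTAPI(model_type='gpt-3.5-turbo', key=['Your OPENAI_API_KEY'])
chatbot = ReAct(
    llm=llm,
    action_executor=ActionExecutor(
        actions=[GoogleSearch(api_key='Your SERPER_API_KEY'), PythonInterpreter()]),
)
print(chatbot.chat('What is the square root of 2, rounded to five decimal places?').response)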

All Thanks To Our Contributors:

Citation

If you find this project useful in your research, please consider citing it:

@misc{lagent2023,
    title={{Lagent: InternLM} a lightweight open-source framework that allows users to efficiently build large language model (LLM)-based agents},
    author={Lagent Developer Team},
    howpublished = {\url{https://github.com/InternLM/lagent}},
    year={2023}
}

License

This project is released under the Apache 2.0 license.
