This project allows you to enhance large language models (LLMs) with custom tools and agents developed in bash/javascript/python. Imagine your LLM being able to execute system commands, access web APIs, or perform other complex tasks – all triggered by simple, natural language prompts.
Make sure you have argc installed; it is used below to build and install the tools and agents.
Getting Started with AIChat
git clone https://github.com/sigoden/llm-functions
I. Create a ./tools.txt file with each tool filename on a new line.
get_current_weather.sh
#execute_command.sh
execute_py_code.py
search_tavily.sh
II. Create a ./agents.txt file with each agent name on a new line.
coder
todo
III. Run argc build to build the tools and agents.
Symlink this repo directory to AIChat's functions_dir:
ln -s "$(pwd)" "$(aichat --info | grep -w functions_dir | awk '{print $2}')"
# OR
argc install
Done! You can experience the magic of llm-functions in AIChat.
Building tools for our platform is remarkably straightforward. You can leverage your existing programming knowledge, as tools are essentially just functions written in your preferred language.
LLM Functions automatically generates the JSON declarations for the tools based on comments. Refer to ./tools/demo_tool.{sh,js,py} for examples of how to use comments to autogenerate declarations.
Create a new bash script in the ./tools/ directory (e.g., may_execute_command.sh).
#!/usr/bin/env bash
set -e
# @describe Runs a shell command.
# @option --command! The command to execute.
main() {
    eval "$argc_command" >> "$LLM_OUTPUT"
}
eval "$(argc --argc-eval "$0" "$@")"
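From the @describe and @option comments above, the build step produces a JSON declaration for the tool along these general lines (a sketch of the shape only; the exact schema emitted by argc build may differ):

```json
{
  "name": "may_execute_command",
  "description": "Runs a shell command.",
  "parameters": {
    "type": "object",
    "properties": {
      "command": {
        "type": "string",
        "description": "The command to execute."
      }
    },
    "required": ["command"]
  }
}
```

Note how the `!` suffix on `--command!` marks the parameter as required in the declaration.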
Create a new javascript file in the ./tools/ directory (e.g., may_execute_js_code.js).
/**
* Runs the javascript code in node.js.
* @typedef {Object} Args
* @property {string} code - Javascript code to execute, such as `console.log("hello world")`
* @param {Args} args
*/
exports.main = function main({ code }) {
  return eval(code);
}
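Because the tool is just an exported function, you can smoke-test it directly in Node before wiring it into AIChat. The sketch below inlines a copy of the function for illustration rather than importing the real file:

```javascript
// Inline copy of the tool's main() from the example above.
function main({ code }) {
  return eval(code);
}

// eval() returns the value of the last expression in the string,
// and that return value is what gets handed back to the LLM.
const result = main({ code: "[1, 2, 3].reduce((a, b) => a + b, 0)" });
console.log(result); // 6
```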
Create a new python script in the ./tools/ directory (e.g., may_execute_py_code.py).
def main(code: str):
    """Runs the python code.
    Args:
        code: Python code to execute, such as `print("hello world")`
    """
    return exec(code)
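As in the other languages, the Python tool is an ordinary function, so it can be tried out directly. One thing to keep in mind: `exec()` itself returns `None`, so this tool's output reaches the model through whatever the executed code prints. A minimal sketch:

```python
import io
import contextlib

# Inline copy of the tool's main() from the example above.
def main(code: str):
    """Runs the python code.
    Args:
        code: Python code to execute, such as `print("hello world")`
    """
    return exec(code)

# Capture what the executed snippet prints to stdout.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    main('print(2 + 3)')
print(buf.getvalue().strip())  # prints 5
```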
Agent = Prompt + Tools (Function Calling) + Knowledge (RAG). This is similar to OpenAI's GPTs.
The agent has the following folder structure:
└── agents
└── myagent
├── functions.json # Function JSON declarations (Auto-generated)
├── index.yaml # Agent definition
├── tools.txt # Shared tools from ./tools
└── tools.{sh,js,py} # Agent tools
The agent definition file (index.yaml) defines crucial aspects of your agent:
name: TestAgent
description: This is a test agent
version: 0.1.0
instructions: You are a test AI agent to ...
conversation_starters:
- What can you do?
documents:
- local-file.txt
- local-dir/
- https://example.com/remote-file.txt
Refer to ./agents/todo-{sh,js,py} for examples of how to implement an agent.
This project is under the MIT License. Refer to the LICENSE file for detailed information.