ToolBrain is a lightweight open-source Python library for training agentic systems with effective tool usage and built-in reinforcement learning.
📚 Our website: toolbrain.org, with documentation & tutorials
📚 Watch Introduction Video
Support us by giving ToolBrain a ⭐ on GitHub.
- 🤖 Learning algorithms: Supports GRPO, DPO, and supervised learning.
- 🎯 Flexible rewards: Define your own reward functions or use LLM-as-judge (see the sketch after this list).
- 🔧 Tool management: Scalable retrieval for managing large tool collections.
- 📊 Knowledge distillation: Distill large teacher models into smaller student models for efficiency.
- 🚀 Zero-learn: Automatically generate training tasks.
- ⚡ Efficient training: Supports FP16 finetuning, LoRA, Unsloth, and BitsAndBytes for resource-efficient training.
- 🧠 Multiple agent frameworks: Supports SmolAgent and LangChain, with more coming soon.
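To make the flexible-rewards feature concrete, here is a minimal sketch of a custom reward function. The exact signature that Brain expects from a reward is an assumption here (the shipped reward_exact_match in toolbrain.rewards is the authoritative reference), and reward_numeric_close is a hypothetical name:

def reward_numeric_close(agent_answer: str, gold_answer: str) -> float:
    """Hypothetical reward: full credit for an exact numeric match,
    partial credit within 10% relative error, zero otherwise."""
    try:
        predicted = float(agent_answer.strip())
        target = float(gold_answer.strip())
    except ValueError:
        return 0.0
    if predicted == target:
        return 1.0
    if target != 0 and abs(predicted - target) / abs(target) < 0.1:
        return 0.5
    return 0.0

Such a function would be passed to Brain via reward_func=reward_numeric_close, just as reward_exact_match is in the quick-start below.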
- Python 3.10+
Create a conda environment (optional):
conda create --name toolbrain python=3.12
conda activate toolbrain
Install from PyPI:
pip install toolbrain
Or install from source:
git clone https://github.com/ToolBrain/ToolBrain.git
Enter the cloned folder and run:
pip install .
Run the complete example to see ToolBrain in action (see the examples folder for more advanced usage):
python examples/01_run_hello_world.py
This will:
- Initialize a CodeAgent with simple math tools
- Define a customised reward function
- Run the GRPO algorithm
Here's a minimal example of how to use ToolBrain. The script below demonstrates the simplified ToolBrain API:
- Create a smolagents CodeAgent
- Create a brain with our main class Brain()
- Train the agent with the GRPO algorithm
from smolagents import tool, TransformersModel, CodeAgent

from toolbrain import Brain
from toolbrain.rewards import reward_exact_match


# --- 1. Define Tools and Reward Function (User-defined) ---
@tool
def add(a: int, b: int) -> int:
    """
    Add two integers.

    Args:
        a (int): First addend.
        b (int): Second addend.

    Returns:
        int: Sum of a and b.
    """
    return a + b


# --- 2. Prepare Training Data ---
training_dataset = [
    {
        "query": "Use the add tool to calculate 5 + 7",
        "gold_answer": "12"
    }
]

# --- 3. Create the Agent ---
model = TransformersModel(
    model_id="Qwen/Qwen2.5-0.5B-Instruct",  # use a bigger model for better results
    max_new_tokens=128
)

agent = CodeAgent(
    model=model,
    tools=[add],
    max_steps=1
)

# --- 4. Create the Brain ---
brain = Brain(
    agent,                            # agent instance
    algorithm="GRPO",                 # algorithm choice
    reward_func=reward_exact_match    # reward function; any Python callable can serve as the reward
)

# --- 5. Train the Agent with GRPO Steps ---
brain.train(training_dataset, num_iterations=10)
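Since the training set is a plain list of dicts with query and gold_answer keys, scaling it up is just a matter of appending more examples in the same schema:

# Additional examples follow the same {"query": ..., "gold_answer": ...} schema
training_dataset += [
    {"query": "Use the add tool to calculate 11 + 31", "gold_answer": "42"},
    {"query": "Use the add tool to calculate 2 + 2", "gold_answer": "4"},
]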
The following plot illustrates how ToolBrain enhances the tool usage accuracy of the small Qwen/Qwen2.5-0.5B-Instruct model after just 20 training steps using GRPO.
This project is licensed under the MIT License - see the LICENSE file for details.
Our vision is for ToolBrain to become the universal Reinforcement Learning layer for any agentic framework. Whether you build your agents with LangChain, SmolAgents, LlamaIndex, AutoGen, or a custom solution, you should be able to make them smarter with ToolBrain.
The key to this vision is our modular Adapter architecture. Adding support for a new framework is as simple as implementing a new adapter that translates the agent's internal state into ToolBrain's standard Execution Trace.
We welcome community contributions!
If you are using an agent framework not yet supported, we encourage you to build an adapter for it.
Check out our CONTRIBUTING.md guide and the existing implementations in the toolbrain/adapters/ directory to get started.
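For orientation only, a new adapter might look like the sketch below. The names here (the ExecutionTrace fields, run_and_trace, and the framework-specific result object) are assumptions for illustration; the actual base class and trace schema live in the toolbrain/adapters/ directory:

from dataclasses import dataclass, field

@dataclass
class ExecutionTrace:
    # Assumed shape of ToolBrain's standard trace; see toolbrain/adapters/
    # for the real schema.
    query: str
    steps: list = field(default_factory=list)  # (tool_name, tool_input, tool_output) tuples
    final_answer: str = ""

class MyFrameworkAdapter:
    """Hypothetical adapter: translates MyFramework's run log into an ExecutionTrace."""

    def __init__(self, agent):
        self.agent = agent

    def run_and_trace(self, query: str) -> ExecutionTrace:
        result = self.agent.run(query)      # framework-specific entry point
        trace = ExecutionTrace(query=query)
        for step in result.history:         # framework-specific log format
            trace.steps.append((step.tool, step.input, step.output))
        trace.final_answer = result.output
        return trace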
Team: Quy Minh Le, Minh Sao Khue Luu, Khanh-Tung Tran, Duc-Hai Nguyen, Hoang-Quoc-Viet Pham, Quan Le, Hoang Thanh Lam, and Harry Nguyen.
If you believe in ToolBrain's vision of making agent training accessible to everyone, please consider sharing it with your network!
Please cite our paper using the following BibTeX:
@misc{le2025toolbrainflexiblereinforcementlearning,
  title={ToolBrain: A Flexible Reinforcement Learning Framework for Agentic Tools},
  author={Quy Minh Le and Minh Sao Khue Luu and Khanh-Tung Tran and Duc-Hai Nguyen and Hoang-Quoc-Viet Pham and Quan Le and Hoang Thanh Lam and Hoang D. Nguyen},
  year={2025},
  eprint={2510.00023},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2510.00023},
}
Made with ❤️ by the ToolBrain Team