ToolBrain 🧠


ToolBrain is a lightweight open-source Python library for training agentic systems with effective tool usage and built-in reinforcement learning.
📚 Our website toolbrain.org hosts documentation and tutorials.

📚 Watch Introduction Video

Support us by giving ToolBrain a ⭐ on GitHub.

✨ Key Features

🚀 Getting Started

Prerequisites

  • Python 3.10+

Installation

Create a conda environment (optional):

conda create --name toolbrain python=3.12
conda activate toolbrain

From PyPI:

pip install toolbrain

Or from the source code:

git clone https://github.com/ToolBrain/ToolBrain.git

Then enter the cloned folder and run:

pip install .

Run the Example

Run the complete example to see ToolBrain in action (see the examples folder for more advanced usage examples):

python examples/01_run_hello_world.py

This will:

  • Initialize a CodeAgent with simple math tools
  • Define a customised reward function
  • Run the GRPO algorithm

📖 Usage Example

Here's a minimal example of how to use ToolBrain. This script demonstrates the simplified ToolBrain API:

  1. Create a smolagent CodeAgent
  2. Create a brain with our main class Brain()
  3. Train the agent with the GRPO algorithm

from smolagents import tool, TransformersModel, CodeAgent
from toolbrain import Brain
from toolbrain.rewards import reward_exact_match

# --- 1. Define Tools and Reward Function (User-defined) ---
@tool
def add(a: int, b: int) -> int:
    """
    Add two integers.

    Args:
        a (int): First addend.
        b (int): Second addend.

    Returns:
        int: Sum of a and b.
    """
    return a + b


# --- 2. Prepare Training Data ---
training_dataset = [
    {
        "query": "Use the add tool to calculate 5 + 7",
        "gold_answer": "12"
    }
]


# --- 3. Create the Agent ---
model = TransformersModel(
    model_id="Qwen/Qwen2.5-0.5B-Instruct",  # use a bigger model for better results
    max_new_tokens=128
)

agent = CodeAgent(
    model=model,
    tools=[add],
    max_steps=1
)

# --- 4. Create the Brain ---

brain = Brain(
    agent,                          # Agent instance
    algorithm="GRPO",               # Algorithm choice
    reward_func=reward_exact_match  # Reward function; any Python function can serve as a reward
)

# --- 5. Train the Agent with GRPO ---
brain.train(training_dataset, num_iterations=10)
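The `reward_exact_match` function above ships with ToolBrain, but as the comment notes, any Python function can serve as a reward. The sketch below is a hypothetical custom reward; its `(query, agent_answer, gold_answer)` signature is an assumption made here for illustration, so check `toolbrain.rewards` for the interface ToolBrain actually expects:

```python
# Hypothetical custom reward function. The (query, agent_answer, gold_answer)
# signature is an assumption for illustration; consult toolbrain.rewards
# for the real interface.
def reward_numeric_match(query: str, agent_answer: str, gold_answer: str) -> float:
    """Return 1.0 for an exact match, 0.5 if the gold answer merely
    appears somewhere in the agent's output, and 0.0 otherwise."""
    answer = agent_answer.strip()
    gold = gold_answer.strip()
    if answer == gold:
        return 1.0
    if gold in answer:
        return 0.5
    return 0.0
```

Such a function could then be passed as `reward_func=reward_numeric_match` when constructing the `Brain`, giving the agent partial credit for answers that contain the target value but are not an exact match.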

Results

The following plot illustrates how ToolBrain enhances the tool usage accuracy of the small Qwen/Qwen2.5-0.5B-Instruct model after just 20 training steps using GRPO.

GRPO learning curve

📄 License

This project is licensed under the MIT License; see the LICENSE file for details.

🌍 Community contributions

Our vision is for ToolBrain to become the universal Reinforcement Learning layer for any agentic framework. Whether you build your agents with LangChain, SmolAgents, LlamaIndex, AutoGen, or a custom solution, you should be able to make them smarter with ToolBrain.

The key to this vision is our modular Adapter architecture. Adding support for a new framework is as simple as implementing a new adapter that translates the agent's internal state into ToolBrain's standard Execution Trace.
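To make the adapter idea concrete, here is a minimal sketch. Every name in it (`ExecutionTrace`, `MyFrameworkAdapter`, the method and field names) is a hypothetical stand-in rather than ToolBrain's actual API; the real base class and trace format live in the toolbrain/adapters/ directory:

```python
from dataclasses import dataclass, field

# NOTE: all names in this sketch are hypothetical placeholders; ToolBrain's
# real adapter interface is defined in toolbrain/adapters/ and may differ.

@dataclass
class ExecutionTrace:
    """Standardized record of one agent rollout: the original query,
    each tool call made, and the final answer."""
    query: str
    tool_calls: list = field(default_factory=list)
    final_answer: str = ""

class MyFrameworkAdapter:
    """Translates a hypothetical framework's internal run log into the
    standard trace that the RL algorithm consumes."""

    def extract_trace(self, run_log: dict) -> ExecutionTrace:
        # Keep only the steps that actually invoked a tool.
        return ExecutionTrace(
            query=run_log["input"],
            tool_calls=[s for s in run_log.get("steps", []) if s.get("tool")],
            final_answer=run_log.get("output", ""),
        )
```

The point of the pattern is that only this translation layer is framework-specific: once the agent's internal state is expressed as a standard trace, the same training algorithms apply unchanged.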

We welcome community contributions!
If you are using an agent framework not yet supported, we encourage you to build an adapter for it.
Check out our CONTRIBUTING.md guide and the existing implementations in the toolbrain/adapters/ directory to get started.

Contributors

Quy Minh Le, Minh Sao Khue Luu, Khanh-Tung Tran, Duc-Hai Nguyen, Hoang-Quoc-Viet Pham, Quan Le, Hoang Thanh Lam and Harry Nguyen


🚀 Spread the Word

If you believe in ToolBrain's vision of making agent training accessible to everyone, please consider sharing it with your network!

Share on Twitter Share on LinkedIn Share on Facebook Share on Reddit


References

Please cite our paper using the following BibTeX entry:

@misc{le2025toolbrainflexiblereinforcementlearning,
      title={ToolBrain: A Flexible Reinforcement Learning Framework for Agentic Tools}, 
      author={Quy Minh Le and Minh Sao Khue Luu and Khanh-Tung Tran and Duc-Hai Nguyen and Hoang-Quoc-Viet Pham and Quan Le and Hoang Thanh Lam and Hoang D. Nguyen},
      year={2025},
      eprint={2510.00023},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2510.00023}, 
}

Made with ❤️ by the ToolBrain Team
