
Democratizing Function Calling Capabilities for Open-Source Language Models


📞🦙 CaLLama: Hola "Function" are you there?

CaLLama

This repository is dedicated to advancing the "function-call" features for open-source large language models (LLMs). We believe that the future of AI, specifically AI agents, depends on proper function-calling capabilities. While proprietary models like OpenAI's have these features, it is crucial for the open-source community to have access to high-quality function-calling abilities to democratize AI.

Meta recently released Llama 3, arguably the best open-source LLM available. We have fine-tuned and released a version of Llama 3 that natively supports function calls.

🎯 Solutions

We are focusing on two directions:

  1. We are developing a library focused on function calling that provides a uniform way of working with function calls (tool calls) across all LLMs. The first version of this library will be released soon.
  2. We are fine-tuning small models specifically for function calling; this has already been done for Llama 3 and TinyLlama.
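A core piece of any uniform tool-calling layer is extracting the structured call from raw model text. The sketch below is a hypothetical illustration (not the library's actual API): it assumes the fine-tuned model emits a JSON object of the form `{"name": ..., "arguments": {...}}` somewhere in its completion.

```python
import json
import re

def parse_function_call(completion: str):
    """Return (name, arguments) if the completion contains a JSON tool call,
    otherwise None. Hypothetical sketch, not the library's real parser."""
    match = re.search(r"\{.*\}", completion, re.DOTALL)
    if match is None:
        return None
    try:
        payload = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    if "name" not in payload:
        return None
    return payload["name"], payload.get("arguments", {})

# Example completion as a function-calling model might produce it:
text = 'Calling the tool now: {"name": "get_weather", "arguments": {"city": "Paris"}}'
print(parse_function_call(text))  # ('get_weather', {'city': 'Paris'})
```

A uniform interface like this lets the same downstream agent code work regardless of which backing model produced the completion.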

Usage Methods 🛠️

Models on Hugging Face

🖥️ Colab

  1. To learn how to run the model using the helper class, check out the first Colab notebook.
  2. For a more detailed walkthrough, check out the second Colab notebook.
  3. To use the GGUF version, check out the GGUF Colab notebook.

🛠️ Usage Locally

To use the models in this repository, follow these steps:

  1. Clone the repository:

```shell
git clone https://github.com/unclecode/fllm.git
```

  2. Create a virtual environment and activate it:

```shell
conda create --name env python=3.10
conda activate env
```

  3. Install PyTorch if you haven't already:

```shell
conda install pytorch-cuda=<12.1/11.8> pytorch cudatoolkit xformers -c pytorch -c nvidia -c xformers
```

  4. Install the required dependencies:

```shell
python setup.py
```

  5. Add your HuggingFace token in ".env.text", and then rename the file to ".env".

  6. Run the example code in the examples folder to see the models in action.

You can also refer to the callama.py file in the llms folder to see the LLM chat template.
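The exact chat template the fine-tuned model expects lives in callama.py, but the general shape is a Llama 3 style prompt with the available tools described in the system message. The sketch below is an assumption-laden illustration: the special tokens follow the stock Llama 3 format, and the tool-description wording is invented for the example.

```python
import json

def build_prompt(tools: list, user_message: str) -> str:
    """Build a Llama 3 style chat prompt with tool definitions in the system
    message. Illustrative only; see llms/callama.py for the real template."""
    system = (
        "You have access to the following functions. "
        "Respond with a JSON object to call one:\n"
        + json.dumps(tools, indent=2)
    )
    return (
        "<|begin_of_text|>"
        f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

tools = [{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]
prompt = build_prompt(tools, "What is the weather in Paris?")
print(prompt[:80])
```

The prompt ends with an open assistant header so the model's next tokens are the (possibly tool-calling) reply.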

🦙 Using Ollama

Steps to run the example:

  1. Make sure Ollama is installed and the ollama server is running.
  2. Pull the models from the Ollama hub:

```shell
ollama pull unclecode/llama3callama
ollama pull unclecode/tinycallama
```

  3. Make sure to check the ollama example in this repository.
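Once the server is running and a model has been pulled, it can be queried over Ollama's standard HTTP API on localhost:11434. This is a hedged sketch (not this repo's example code); the payload builder is kept pure so it can be inspected without a running server.

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a non-streaming generate request to a local Ollama server."""
    payload = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running server and a pulled model):
#   print(generate("unclecode/llama3callama",
#                  "Call a function to get the weather in Paris."))
```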

Link to models:

✅ Features TODO List

  • Single function detection
  • Support for various model sizes and quantization levels
  • Available as a LoRA adapter that can be merged with many models
  • Multi-function detection
  • Function binding, allowing the model to detect the order of execution and bind the output of one function to another
  • Fine-tuning models with less than 1B parameters for efficient function calling

🤗 Models

The following models are available on Hugging Face:

📊 Dataset

The models were fine-tuned using a modified version of the ilacai/glaive-function-calling-v2-sharegpt dataset, which can be found at unclecode/glaive-function-calling-llama3.
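The dataset follows the ShareGPT conversation layout, roughly like the record sketched below. The field values here are invented for illustration and are not copied from the dataset.

```python
# Hypothetical ShareGPT-style record (illustrative values only):
sample = {
    "conversations": [
        {"from": "system", "value": "You have access to the function get_weather(city)."},
        {"from": "human", "value": "What's the weather in Paris?"},
        {"from": "gpt", "value": '{"name": "get_weather", "arguments": {"city": "Paris"}}'},
    ]
}

roles = [turn["from"] for turn in sample["conversations"]]
print(roles)  # ['system', 'human', 'gpt']
```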

🤝 Contributing

We welcome contributions from the community. If you are interested in joining this project or have any questions, please open an issue in this repository.

Twitter (X): https://x.com/unclecode

📜 License

These models are released under the Apache 2.0 license.
