Harness LLMs with Multi-Agent Programming
Updated Jun 8, 2024 - Python
Extend LLMs with functions written in Bash, JavaScript, or Python.
A chatbot written in C# using OpenAI's API.
Multi-node production AI stack. Run the best open-source AI easily on your own servers, create your own AI by fine-tuning open-source models, integrate LLMs with APIs, and run gptscript securely on the server.
Composio equips agents with well-crafted tools, empowering them to tackle complex tasks.
An OpenAI function-calling demo that fetches customizable weather information.
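Weather lookup is the canonical example for this pattern: the model is given a tool schema, emits a tool call with JSON arguments, and the application routes that call to a local function. The sketch below illustrates the routing step only; the schema, the `get_current_weather` function, and the canned temperatures are illustrative assumptions, not taken from the demo above, and no API request is made.

```python
import json

# Hypothetical tool schema in the OpenAI "tools" format. The function name,
# parameters, and data below are illustrative, not the demo's actual code.
WEATHER_TOOL = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a location.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    },
}

def get_current_weather(location: str, unit: str = "celsius") -> str:
    """Stand-in implementation; a real demo would call a weather API."""
    fake_data = {"Berlin": 18, "Tokyo": 24}  # canned temperatures in Celsius
    temp = fake_data.get(location, 20)
    if unit == "fahrenheit":
        temp = round(temp * 9 / 5 + 32)
    return json.dumps({"location": location, "temperature": temp, "unit": unit})

def dispatch(tool_call: dict) -> str:
    """Route a model-issued tool call to the matching local function."""
    args = json.loads(tool_call["function"]["arguments"])
    if tool_call["function"]["name"] == "get_current_weather":
        return get_current_weather(**args)
    raise ValueError("unknown tool")

# Simulate the tool_call object a chat-completion response would contain.
call = {"function": {"name": "get_current_weather",
                     "arguments": '{"location": "Tokyo", "unit": "fahrenheit"}'}}
print(dispatch(call))
```

In a real demo, `WEATHER_TOOL` is passed as the `tools` parameter of a chat-completion request, and `dispatch` runs on each tool call in the response before the result is sent back to the model.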
🤯 Lobe Chat - an open-source, modern-design LLMs/AI chat framework. Supports multiple AI providers (OpenAI / Claude 3 / Gemini / Ollama / Bedrock / Azure / Mistral / Perplexity), multi-modals (vision/TTS), and a plugin system. One-click free deployment of your private ChatGPT chat application.
A mix of cookbook recipes for generative AI applications.
An intuitive approach to building with LLMs
llmon-py is a multimodal web UI for Llama 3 8B.
Production-ready AI agent framework.
The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs), allowing users to chat with models, execute structured function calls, and get structured output. It also works with models not fine-tuned for JSON output and function calls.
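Getting structured output from a model that was not fine-tuned for JSON usually means extracting and validating a JSON object from free-form text. The sketch below shows that general pattern under stated assumptions: the `extract_json` helper and the sample reply are hypothetical illustrations, not the llama-cpp-agent API itself.

```python
import json
import re

def extract_json(text: str, required_keys: set) -> dict:
    """Pull the first JSON object out of free-form model text and check
    that it carries the expected keys. Illustrates the general pattern;
    this is not the llama-cpp-agent API."""
    # Grab the span from the first "{" to the last "}"; adequate for
    # replies containing a single flat JSON object.
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    obj = json.loads(match.group(0))
    missing = required_keys - obj.keys()
    if missing:
        raise ValueError(f"missing keys: {missing}")
    return obj

# Hypothetical model reply wrapping JSON in conversational filler.
reply = 'Sure! Here is the result:\n{"city": "Oslo", "population": 700000}\nDone.'
print(extract_json(reply, {"city", "population"}))
```

Frameworks in this space typically go further, e.g. by constraining decoding with a grammar or retrying with an error message when validation fails; the extract-and-validate loop above is the minimal version of that idea.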
Separation of planning concerns in ReAct-style agents. Planner fine-tuning on synthetic trajectories.
🦖 Stateful Serverless Framework for building Geo-distributed Edge AI Infra
All-in-one AI CLI tool that integrates 20+ AI platforms, including OpenAI, Azure-OpenAI, Gemini, Claude, Mistral, Cohere, VertexAI, Bedrock, Ollama, Ernie, Qianwen, Deepseek...
Anthropic Claude API wrapper for Go
GPT-4-level function-calling models for real-world tool-use cases.