
Add "Capabilities" endpoint/API #274

Open
ptgoetz opened this issue Apr 6, 2024 · 2 comments

Comments

ptgoetz (Collaborator) commented Apr 6, 2024

What?

Add a REST API endpoint such as /api/v1/capabilities that returns a nested structure describing what LLMs and Tools the given OpenGPTs API instance supports. This would enable UIs to dynamically show/hide OpenGPTs options like LLMs and Tools.

A hypothetical response to GET /api/v1/capabilities might look something like:

{
    "capabilities": {
        "models": [],
        "tools": []
    }
}

Why?

Currently, the OpenGPTs UI lets you select models and tools that may not be configured. It will happily let you create an assistant with an unconfigured model; when that assistant is used, the backend emits a stack trace and the frontend silently does nothing.

Implementing this endpoint would enable API clients (e.g. UIs) to only present options that the OpenGPTs instance is actually configured to support.

Implementation Considerations

With the current state of the codebase and dependency stack, the path of least resistance is likely to implement such a feature by checking for the existence of LLM/tool-specific environment variables.

A longer-term, more scalable solution might involve moving away from environment-variable-driven configuration to something like a configuration file.
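As a sketch of the environment-variable approach (the provider-to-variable mapping below is illustrative; the actual variable names OpenGPTs reads may differ):

```python
import os

# Hypothetical mapping from provider to the environment variables it needs.
# The real OpenGPTs configuration may use different or additional variables.
REQUIRED_ENV_VARS = {
    "OpenAI": ["OPENAI_API_KEY"],
    "Anthropic": ["ANTHROPIC_API_KEY"],
    "You.com Search": ["YDC_API_KEY"],
}


def enabled_providers() -> dict[str, bool]:
    """Return provider -> enabled, based on whether all required env vars are set."""
    return {
        provider: all(os.environ.get(var) for var in env_vars)
        for provider, env_vars in REQUIRED_ENV_VARS.items()
    }
```

The `/api/v1/capabilities` handler could then consult this map when building its response, flipping `enabled` per provider.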

ptgoetz (Collaborator, Author) commented Apr 30, 2024

Here's a proposed response to /api/v1/capabilities.

The idea is to introduce the concept of an "LLM Provider" that supports one or more models.

Whether a tool or model is "enabled" would, for now, depend on whether the requisite environment variables are set.

The goal is to make it so UIs consuming the OpenGPTs backend could toggle LLMs and tools on and off based on how a given backend is configured.

{
    "capabilities": {
        "llms": [
            {
                "provider": "OpenAI",
                "models": [
                    {
                        "id": "openai_gpt3_turbo",
                        "title": "OpenAI GPT 3.5 Turbo",
                        "supports_tools": true,
                        "supports_streaming": true,
                        "enabled": true
                    },
                    {
                        "id": "openai_gpt4_turbo",
                        "title": "OpenAI GPT 4 Turbo",
                        "supports_tools": true,
                        "supports_streaming": true,
                        "enabled": true
                    }
                ]
            },
            {
                "provider": "Anthropic",
                "models": [
                    {
                        "id": "anthropic_claude_2",
                        "title": "Claude 2",
                        "supports_tools": true,
                        "supports_streaming": true,
                        "enabled": true
                    }
                ]
            },
            {
                "provider": "Amazon Bedrock",
                "models": [
                    {
                        "id": "amazon_bedrock_claude_2",
                        "title": "Claude 2",
                        "supports_tools": true,
                        "supports_streaming": true,
                        "enabled": true
                    }
                ]
            },
            {
                "provider": "Azure",
                "models": [
                    {
                        "id": "azure_gpt4_turbo",
                        "title": "GPT 4 Turbo",
                        "supports_tools": true,
                        "supports_streaming": true,
                        "enabled": false
                    }
                ]
            },
            {
                "provider": "Google",
                "models": [
                    {
                        "id": "google_gemini",
                        "title": "Gemini",
                        "supports_tools": true,
                        "supports_streaming": true,
                        "enabled": false
                    }
                ]
            },
            {
                "provider": "Ollama",
                "models": [
                    {
                        "id": "ollama_llama2",
                        "title": "Ollama - Llama2",
                        "supports_tools": false,
                        "supports_streaming": true,
                        "enabled": true
                    },
                    {
                        "id": "ollama_mistral",
                        "title": "Ollama - Mistral",
                        "supports_tools": false,
                        "supports_streaming": true,
                        "enabled": true
                    },
                    {
                        "id": "ollama_openchat",
                        "title": "Ollama - Openchat",
                        "supports_tools": false,
                        "supports_streaming": true,
                        "enabled": true
                    },
                    {
                        "id": "ollama_orca2",
                        "title": "Ollama - Orca2",
                        "supports_tools": false,
                        "supports_streaming": true,
                        "enabled": true
                    }
                ]
            }
        ],
        "tools": [
            {
                "id": "action_server_by_robocorp",
                "title": "Action Server by Robocorp",
                "description": "Run AI actions with [Robocorp Action Server](https://github.com/robocorp/robocorp).",
                "enabled": true
            },
            {
                "id": "ai_action_runner_by_connery",
                "title": "AI Action Runner by Connery",
                "description": "Connect OpenGPTs to the real world with [Connery](https://github.com/connery-io/connery).",
                "enabled": true
            },
            {
                "id": "ddg_search",
                "title": "DuckDuckGo Search",
                "description": "Search the web with [DuckDuckGo](https://pypi.org/project/duckduckgo-search/).",
                "enabled": true
            },
            {
                "id": "arxiv_search",
                "title": "ArXiv Search",
                "description": "Searches [ArXiv](https://arxiv.org/).",
                "enabled": false
            },
            {
                "id": "you_search",
                "title": "You.com Search",
                "description": "Uses [You.com](https://you.com/) search, with responses optimized for LLMs.",
                "enabled": true
            },
            {
                "id": "sec_filings_kai_ai",
                "title": "SEC Filings (Kay.ai)",
                "description": "Searches through SEC filings using [Kay.ai](https://www.kay.ai/).",
                "enabled": true
            },
            {
                "id": "wikipedia",
                "title": "Wikipedia",
                "description": "Searches [Wikipedia](https://pypi.org/project/wikipedia/).",
                "enabled": true
            }
        ]
    }
}
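One way to model this response in the backend is with Pydantic, which FastAPI already uses for response schemas. A sketch (class names are my own, and it assumes both `llms` and `tools` nest under `capabilities` as in the original proposal; Pydantic v2 API):

```python
from pydantic import BaseModel


class Model(BaseModel):
    id: str
    title: str
    supports_tools: bool
    supports_streaming: bool
    enabled: bool


class LLMProvider(BaseModel):
    provider: str
    models: list[Model]


class Tool(BaseModel):
    id: str
    title: str
    description: str
    enabled: bool


class Capabilities(BaseModel):
    llms: list[LLMProvider]
    tools: list[Tool]


class CapabilitiesResponse(BaseModel):
    capabilities: Capabilities
```

The endpoint could then declare `response_model=CapabilitiesResponse` so the schema shows up in the generated OpenAPI docs.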

hoyleb commented Jun 17, 2024

I added this functionality myself: a tool that points to an API endpoint (which I exposed in the frontend) that lists all tools found in api/tools.py, and another that lists all public GPTs. I created a "building assistant" GPT that uses these tools to figure out whether the functionality the user requested already exists and, if not, tells the user how to build it with the available tools. I actually want to make this "building assistant" the entry point, the first thing a user sees when they come to my self-hosted version of OpenGPTs.

something like:

import sys
from typing import Union, get_args

from app.agent import Tool as AVAILABLE_AGENT_TOOLS


def get_all_tools_info(tool_union):
    tool_classes = get_args(tool_union)
    tool_name_desc = {}
    for tool_class in tool_classes:
        # Create an instance of the tool class (needed to access instance attributes)
        tool_instance = tool_class()
        tool_name = getattr(tool_instance, "name", "No name")
        tool_description = getattr(tool_instance, "description", "No description")
        tool_name_desc[tool_name] = tool_description
    return tool_name_desc


@app.get("/list_tools", description="Gets the tools available")
async def get_tools():
    tool_name_desc = get_all_tools_info(AVAILABLE_AGENT_TOOLS)
    return tool_name_desc
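The pattern above hinges on `typing.get_args` unpacking the tool `Union`. A self-contained illustration with dummy tool classes (the class names and descriptions are illustrative, not from the OpenGPTs codebase):

```python
from typing import Union, get_args


# Dummy tool classes standing in for the real OpenGPTs tools (illustrative only).
class DDGSearch:
    name = "ddg_search"
    description = "Search the web with DuckDuckGo."


class Wikipedia:
    name = "wikipedia"
    description = "Searches Wikipedia."


# Stand-in for the Tool union imported from app.agent above.
ToolUnion = Union[DDGSearch, Wikipedia]


def get_all_tools_info(tool_union):
    # get_args(Union[A, B]) returns the tuple (A, B)
    tool_name_desc = {}
    for tool_class in get_args(tool_union):
        instance = tool_class()
        tool_name_desc[getattr(instance, "name", "No name")] = getattr(
            instance, "description", "No description"
        )
    return tool_name_desc


print(get_all_tools_info(ToolUnion))
# {'ddg_search': 'Search the web with DuckDuckGo.', 'wikipedia': 'Searches Wikipedia.'}
```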

and I did the same for GPTs, so I created a tool that searches the existing GPTs/assistants.

import json


@app.get("/list_gpts", description="Gets the public GPTs")
async def get_gpts():
    available_gpts = await storage.list_public_assistants()
    print("available_gpts", available_gpts, file=sys.stdout)
    agents_desc = {}
    for agent in available_gpts:
        tools_used = agent["config"]["configurable"]["type==agent/tools"]
        if len(tools_used) > 0:
            tools_name_desc = [
                {
                    "tool_handler": t["type"],
                    "tool_name": t["name"],
                    "tool_description": t["description"],
                }
                for t in tools_used
            ]
        else:
            tools_name_desc = "no tool used in this GPT"

        agents_desc[agent["name"]] = {
            "description": agent["config"]["configurable"]["type==agent/retrieval_description"],
            "tools used": tools_name_desc,
            "system prompt": agent["config"]["configurable"]["type==chat_retrieval/system_message"],
        }
    return agents_desc
