[Question]: Does OpenAIAgent support the use of customllm #7721

Closed
1 task done
123zzw opened this issue Sep 19, 2023 · 5 comments
Labels
question Further information is requested

Comments

@123zzw

123zzw commented Sep 19, 2023

Question Validation

  • I have searched both the documentation and discord for an answer.

Question

Hello, the performance of OpenAIAgent is very impressive. I recently ran into a problem: when I use a local model, OpenAIAgent does not seem to work and only treats the local model as a conversation assistant. So I would like to ask whether OpenAIAgent does not support loading local models, i.e. whether OpenAI must be used instead of a custom LLM.
What should I do if I want to use a custom LLM?

@123zzw 123zzw added the question Further information is requested label Sep 19, 2023
@dosubot

dosubot bot commented Sep 19, 2023

🤖

Hello,

Thank you for your question. Based on the context provided, it seems that OpenAIAgent does not support loading local models directly. However, you can use a custom LLM with OpenAIAgent by creating a new class that inherits from the LLM base class and implements the required methods. Then, you can pass an instance of your custom LLM to the OpenAIAgent's constructor or the from_tools class method.

Here is a simplified example of how you can create a custom LLM:

from llama_index.llms.base import LLM

class CustomLLM(LLM):
    def __init__(self, model):
        self.model = model

    def chat(self, messages):
        # Implement your chat logic here: take a sequence of ChatMessage
        # objects and return a ChatResponse.
        pass

    async def achat(self, messages):
        # Implement your async chat logic here (achat must be a coroutine).
        pass

And here is how you can use it with OpenAIAgent:

custom_llm = CustomLLM(my_model)
agent = OpenAIAgent.from_tools(llm=custom_llm, ...)

Please replace my_model with your model and ... with other necessary arguments.
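Putting it together, a rough end-to-end sketch (the multiply tool and my_model are just placeholders for your own tools and local model handle):

from llama_index.agent import OpenAIAgent
from llama_index.tools import FunctionTool

# A trivial placeholder tool; replace with your own functions.
def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result."""
    return a * b

multiply_tool = FunctionTool.from_defaults(fn=multiply)

custom_llm = CustomLLM(my_model)  # my_model: your local model handle (placeholder)
agent = OpenAIAgent.from_tools([multiply_tool], llm=custom_llm, verbose=True)
print(agent.chat("What is 21 times 2?"))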

To load a local model in the LlamaIndex framework, you would use the load_index_from_storage function from the loading.py module. This function loads an index from a given storage context. Here's the process:

  1. Call the load_index_from_storage function with the necessary arguments. The arguments would be a StorageContext object and optionally an index_id. If no index_id is provided, the function assumes there's only a single index in the index store and loads it.
  2. The function then calls the load_indices_from_storage function, which loads all indices from the storage context if no specific index_ids are provided.
  3. If no index is found in the storage context, a ValueError is raised. If more than one index is found but no specific index_id was provided, a ValueError is also raised.
  4. The function then returns the loaded index.

Here's the relevant code:

def load_index_from_storage(
    storage_context: StorageContext,
    index_id: Optional[str] = None,
    **kwargs: Any,
) -> BaseIndex:
    index_ids: Optional[Sequence[str]]
    if index_id is None:
        index_ids = None
    else:
        index_ids = [index_id]

    indices = load_indices_from_storage(storage_context, index_ids=index_ids, **kwargs)

    if len(indices) == 0:
        raise ValueError(
            "No index in storage context, check if you specified the right persist_dir."
        )
    elif len(indices) > 1:
        raise ValueError(
            f"Expected to load a single index, but got {len(indices)} instead. "
            "Please specify index_id."
        )

    return indices[0]

Please note that the specific arguments you need to pass to the load_index_from_storage function would depend on your storage context and the specific index you want to load.
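For reference, a typical call, assuming the index was previously persisted to a ./storage directory (the path is a placeholder):

from llama_index import StorageContext, load_index_from_storage

# Rebuild the storage context from the directory the index was persisted to.
storage_context = StorageContext.from_defaults(persist_dir="./storage")

# Loads the single index found in that storage context
# (pass index_id=... if more than one index was persisted).
index = load_index_from_storage(storage_context)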

I hope this helps! If you have any other questions, feel free to ask.

Best,
Dosu

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.


@123zzw 123zzw changed the title [Question]: [Question]: Does OpenAIAgent support the use of customllm Sep 19, 2023
@logan-markewich
Collaborator

@123zzw the OpenAI agent uses the Function Calling API from OpenAI. It is much more reliable than a typical ReAct agent, since the function/tool selection is baked into the model/API.

You can use it with a custom LLM, but that custom LLM needs to support a similar API. LlamaAPI is an example of this:
https://github.com/jerryjliu/llama_index/blob/1ae5a686490e3f9dc0bdd2c5daa713ea2b9eb993/llama_index/llms/llama_api.py#L81

Specifically, the LLM metadata should indicate that it is a function-calling LLM.
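Roughly, that means your custom LLM's metadata property should advertise function-calling support. A minimal sketch, assuming a llama_index 0.8-era LLM subclass; the field values here are placeholders:

from llama_index.llms.base import LLM, LLMMetadata

class MyFunctionCallingLLM(LLM):
    @property
    def metadata(self) -> LLMMetadata:
        # Advertise function-calling support so OpenAIAgent accepts this LLM.
        return LLMMetadata(
            context_window=4096,  # placeholder values
            num_output=256,
            is_chat_model=True,
            is_function_calling_model=True,
            model_name="my-local-model",
        )

    # chat()/achat() must also understand OpenAI-style function/tool calls.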

@123zzw
Author

123zzw commented Sep 20, 2023

@123zzw the OpenAI agent uses the Function Calling API from OpenAI. It is much more reliable than a typical ReAct agent, since the function/tool selection is baked into the model/API.

You can use it with a custom LLM, but that custom LLM needs to support a similar API. LlamaAPI is an example of this:

https://github.com/jerryjliu/llama_index/blob/1ae5a686490e3f9dc0bdd2c5daa713ea2b9eb993/llama_index/llms/llama_api.py#L81

Specifically, the LLM metadata should indicate that it is a function-calling LLM.

Thank you very much for your reply. I tried llama2-13b-chat locally: when running this demo I swapped the model for llama2-13b-chat, but then the model cannot access my local files and can only hold normal conversations with me that are not based on the files. What is the reason for this? I look forward to your reply.

@logan-markewich
Collaborator

@123zzw Yea, once again, the OpenAI agent only works with function-calling models. Llama2 is not a function-calling model, unless you have some special API on top of it.

I would use a ReAct agent instead:
https://gpt-index.readthedocs.io/en/stable/examples/agent/react_agent_with_query_engine.html
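A minimal sketch of the ReAct route, assuming you have already built a query engine over your local files (e.g. index.as_query_engine()) and have a local_llm instance; the names here are placeholders:

from llama_index.agent import ReActAgent
from llama_index.tools import QueryEngineTool, ToolMetadata

# query_engine: a query engine over your local documents (placeholder)
# local_llm: your Llama2 (or other local) LLM instance (placeholder)
query_tool = QueryEngineTool(
    query_engine=query_engine,
    metadata=ToolMetadata(
        name="local_docs",
        description="Answers questions about my local files.",
    ),
)

agent = ReActAgent.from_tools([query_tool], llm=local_llm, verbose=True)
print(agent.chat("Summarize my local files."))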

@123zzw
Author

123zzw commented Sep 21, 2023

function calling models

Thank you for your detailed explanation. I understand the reason behind it now!
