[Question]: Does OpenAIAgent support the use of customllm #7721
Comments
🤖 Hello,
Thank you for your question. Based on the context provided, it seems that OpenAIAgent does not support loading local models directly. However, you can use a custom LLM with OpenAIAgent by creating a new class that inherits from the LLM base class and implements the required methods. Then, you can pass an instance of your custom LLM to the OpenAIAgent's constructor or to the from_tools method.
Here is a simplified example of how you can create a custom LLM:
from llama_index.llms.base import LLM

class CustomLLM(LLM):
    def __init__(self, model):
        self.model = model

    def chat(self, messages):
        # Implement your chat logic here
        pass

    async def achat(self, messages):
        # Implement your async chat logic here
        pass
And here is how you can use it with OpenAIAgent:
custom_llm = CustomLLM(my_model)
agent = OpenAIAgent.from_tools(llm=custom_llm, ...)
Please replace my_model with your actual model and fill in the chat/achat logic.
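As a caveat, the full LLM interface in llama_index expects more than chat/achat; the documented route for wrapping a local model is to subclass the CustomLLM helper and implement a metadata property plus complete/stream_complete. Below is a rough sketch of that shape; the import paths, base-class name, and decorator vary across llama_index versions, so treat them as assumptions to verify against your installed release, and the stub strings stand in for calls into your local model.
from typing import Any

from llama_index.llms import (
    CompletionResponse,
    CompletionResponseGen,
    CustomLLM,
    LLMMetadata,
)
from llama_index.llms.base import llm_completion_callback


class MyLocalLLM(CustomLLM):
    context_window: int = 3900
    num_output: int = 256
    model_name: str = "my-local-model"  # hypothetical name, for illustration only

    @property
    def metadata(self) -> LLMMetadata:
        # Tell the framework about the model's limits.
        return LLMMetadata(
            context_window=self.context_window,
            num_output=self.num_output,
            model_name=self.model_name,
        )

    @llm_completion_callback()
    def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
        # Replace this stub with a call into your local model.
        return CompletionResponse(text="<local model output>")

    @llm_completion_callback()
    def stream_complete(self, prompt: str, **kwargs: Any) -> CompletionResponseGen:
        # Stream tokens as they are produced; here we just replay a stub string.
        response = ""
        for token in "<local model output>":
            response += token
            yield CompletionResponse(text=response, delta=token)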
To load a locally persisted index in the LlamaIndex framework, you would use the load_index_from_storage function.
Here's the relevant code:
def load_index_from_storage(
    storage_context: StorageContext,
    index_id: Optional[str] = None,
    **kwargs: Any,
) -> BaseIndex:
    index_ids: Optional[Sequence[str]]
    if index_id is None:
        index_ids = None
    else:
        index_ids = [index_id]

    indices = load_indices_from_storage(storage_context, index_ids=index_ids, **kwargs)

    if len(indices) == 0:
        raise ValueError(
            "No index in storage context, check if you specified the right persist_dir."
        )
    elif len(indices) > 1:
        raise ValueError(
            f"Expected to load a single index, but got {len(indices)} instead. "
            "Please specify index_id."
        )

    return indices[0]
Please note that the specific arguments you need to pass to load_index_from_storage may vary depending on how your index was persisted.
I hope this helps! If you have any other questions, feel free to ask.
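For context, a typical call to load a previously persisted index back from disk looks roughly like this; the ./storage directory is an assumption, so use whatever persist_dir you saved the index to.
from llama_index import StorageContext, load_index_from_storage

# Point the storage context at the directory the index was persisted to.
storage_context = StorageContext.from_defaults(persist_dir="./storage")

# Load the single index stored there (pass index_id=... if several were persisted).
index = load_index_from_storage(storage_context)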
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
@123zzw The OpenAI agent uses the Function Calling API from OpenAI. It is much more reliable than a typical ReAct agent, since the function/tool selection is baked into the model/API. You can use it with a custom LLM, but that custom LLM needs to support a similar API; LlamaAPI is an example of this. Specifically, the LLM metadata should indicate that it is a function-calling LLM.
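In concrete terms, the agent inspects the LLM's metadata for a function-calling flag. A minimal sketch of what a custom LLM's metadata property would need to report, assuming the LLMMetadata model bundled with llama_index (field names, default values, and the import path may differ by version):
from llama_index.llms import LLMMetadata

# Metadata a custom LLM would expose (via its `metadata` property) so that
# OpenAIAgent accepts it; the numeric values and name are illustrative only.
metadata = LLMMetadata(
    context_window=4096,
    num_output=256,
    model_name="my-tool-calling-model",  # hypothetical model name
    is_function_calling_model=True,      # the flag the OpenAI agent checks
)
Setting the flag only states what the model claims to support; the underlying API still has to implement OpenAI-style function calling for the agent to actually work.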
Thank you very much for your reply. I tried llama2-13b-chat locally: when running this demo I replaced the model with llama2-13b-chat, but then the model cannot access my local files and can only have ordinary conversations with me that are not based on the files. What is the reason for this? I look forward to your reply.
@123zzw Yea, once again: the OpenAI agent is only supported for function-calling models. Llama 2 is not a function-calling model, unless you have some special API on top of it. I would use a ReAct agent instead.
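For anyone landing here later, a ReAct agent over a local Llama 2 model might be wired up roughly as follows. This is a sketch under assumptions: the HuggingFaceLLM wrapper and its arguments, the ./data directory, the tool name/description, and the "local" embedding shortcut are all illustrative and should be adapted to your setup and llama_index version.
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.agent import ReActAgent
from llama_index.llms import HuggingFaceLLM
from llama_index.tools import QueryEngineTool

# Wrap a local Llama 2 chat checkpoint (arguments are illustrative).
llm = HuggingFaceLLM(
    model_name="meta-llama/Llama-2-13b-chat-hf",
    tokenizer_name="meta-llama/Llama-2-13b-chat-hf",
    context_window=3900,
    max_new_tokens=256,
)

# Build an index over local files, using the local LLM and local embeddings
# so nothing falls back to OpenAI.
service_context = ServiceContext.from_defaults(llm=llm, embed_model="local")
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

# Expose the index's query engine as a tool the agent can call.
doc_tool = QueryEngineTool.from_defaults(
    query_engine=index.as_query_engine(),
    name="local_docs",
    description="Answers questions about the local files in ./data.",
)

# A ReAct agent drives tool use through prompting, so it does not require
# an OpenAI-style function-calling API.
agent = ReActAgent.from_tools([doc_tool], llm=llm, verbose=True)
response = agent.chat("What do my local files say about X?")
print(response)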
Thank you for your detailed explanation. I have understood the reason behind it!
Question Validation
Question
Hello, the performance of OpenAIAgent is very impressive. I recently ran into a problem: when I use a local model, OpenAIAgent does not seem to work and only treats the local model as a plain conversation assistant. So I would like to ask whether OpenAIAgent does not support loading local models, i.e., whether OpenAI must be used rather than a custom LLM.
What should I do if I want to use a custom LLM?