
[Issue]: Obfuscated LLM Fallback in OpenAIWrapper #2855

Open
WaelKarkoub opened this issue Jun 3, 2024 · 0 comments
Labels: bug Something isn't working

Describe the issue

While working on modalities and the different transforms, I realized there is no way to know which LLM configuration is in use before and after the hooks are called. For example, imagine an agent with image-modality support, where all of your transforms assume the agent has this capability. What happens if the API request fails and the OpenAIWrapper falls back to an LLM that doesn't support the image modality? Will the hooks run again? How should the agent's capabilities adjust to the OpenAIWrapper in this scenario?
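
To make the failure mode concrete, here is a minimal sketch using autogen's `OpenAIWrapper`; the model names and keys are placeholders, and the comments describe the fallback behavior in question:

```python
# Minimal sketch of the failure mode (placeholder model names and API keys).
from autogen import OpenAIWrapper

config_list = [
    {"model": "gpt-4o", "api_key": "..."},         # supports image inputs
    {"model": "gpt-3.5-turbo", "api_key": "..."},  # text-only fallback
]

client = OpenAIWrapper(config_list=config_list)

# Hooks/transforms upstream may have prepared image content assuming the
# first model. If that request fails, OpenAIWrapper silently retries with
# the next config, and the hooks are not re-run for the text-only model.
response = client.create(
    messages=[{"role": "user", "content": "Describe the attached image."}],
)
```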

I believe @afourney previously raised this issue, and I'm now facing it as well. Perhaps the agent should select which LLM config to fall back to, rather than leaving that responsibility to the OpenAIWrapper; a rough sketch of what that could look like is below.
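
One possible shape for this, purely hypothetical and not part of autogen today: let the agent pass a selector callback so it can veto or adapt to incompatible fallback configs. The `fallback_selector` parameter and the callback signature below are invented for illustration only:

```python
# Hypothetical agent-driven fallback selection; `fallback_selector` is an
# invented parameter, NOT an existing OpenAIWrapper argument.
from typing import Optional

def select_fallback(failed_config: dict, remaining: list[dict]) -> Optional[dict]:
    """Prefer fallbacks that preserve the image modality; otherwise fail loudly."""
    vision_capable = {"gpt-4o", "gpt-4-turbo"}  # illustrative capability table
    for cfg in remaining:
        if cfg.get("model") in vision_capable:
            return cfg
    return None  # surface the failure instead of silently degrading modality

# client = OpenAIWrapper(config_list=config_list, fallback_selector=select_fallback)
```

With something like this, the agent (which knows what its hooks assumed) decides whether a fallback is acceptable, instead of the OpenAIWrapper deciding invisibly.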

Steps to reproduce

No response

Screenshots and logs

No response

Additional Information

No response

WaelKarkoub added the bug label Jun 3, 2024