
[Bug]: Ignores my specified models and always goes to openai #2889

Open
charltonh opened this issue Jun 7, 2024 · 3 comments
Labels
bug Something isn't working

Comments


charltonh commented Jun 7, 2024

Describe the bug

This is a fresh install on Linux, and there are a couple of problems:

  1. When setting up a new model and model name, the agent can point to that model name, but when you hover over the model name in the dropdown you can see the info coming from another model. It is as if it is still using the OpenAI model. THIS IS A BUG.

  2. OK, so you delete it and re-add it as the agents and workflow instructions say to do, and now it appears to point to the model you really want, in my case https://api.groq.com/openai/v1 with a Groq API key. The model tests out successfully.
    BUT it still appears to be using the OpenAI base_url, because it gives me this error:

Error occurred while processing message: Error code: 404 - {'error': {'message': 'The model llama3-70b-8192 does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}

I tried exporting the OPENAI_API_BASE and OPENAI_API_KEY environment variables to point to groq.com to see if that helped. This is where I learned it is still going to OpenAI no matter what I do, because I then got this error:

openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: gsk_k6LK********************************************ej4u. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

So problem #2 may be something that can be configured, but it is still a problem: the UI says it is using a model that tests successfully, yet when you actually run it, requests go to OpenAI no matter which model is selected.
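For context, the two errors above are consistent with how OpenAI-style Python clients commonly resolve their base URL: an explicit `base_url` argument wins, then the `OPENAI_BASE_URL` environment variable, then the hard-coded OpenAI default. The sketch below illustrates that resolution order; the function name and structure are illustrative, not AutoGen Studio's actual code.

```python
import os

DEFAULT_BASE_URL = "https://api.openai.com/v1"

def resolve_base_url(explicit_base_url=None):
    """Mimic the typical resolution order of OpenAI-compatible clients:
    explicit argument, then the OPENAI_BASE_URL environment variable,
    then the hard-coded OpenAI default."""
    if explicit_base_url:
        return explicit_base_url
    return os.environ.get("OPENAI_BASE_URL", DEFAULT_BASE_URL)

# Make the demo deterministic regardless of the ambient environment.
os.environ.pop("OPENAI_BASE_URL", None)

# If the UI's configured endpoint is never forwarded as an explicit
# argument (the suspected bug), requests silently fall back to OpenAI:
print(resolve_base_url())                                  # https://api.openai.com/v1

# Whereas forwarding the configured endpoint behaves as expected:
print(resolve_base_url("https://api.groq.com/openai/v1"))  # https://api.groq.com/openai/v1
```

This would explain why the model "tests out successfully" in one code path (which forwards the configured URL) while actual runs fall back to api.openai.com.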

Steps to reproduce

No response

Model Used

llama3-70b-8192 on https://api.groq.com/openai/v1

Expected Behavior

Autogen (the agent) should use the model it says it is using, going by the model name/identifier.

Screenshots and logs

No response

Additional Information

python-3.12
autogenstudio 0.56
fresh install

@charltonh charltonh added the bug Something isn't working label Jun 7, 2024

charltonh commented Jun 7, 2024

Apologies, this was meant to be a bug report for autogenstudio, which I thought was a newer beta version of this project.

qingyun-wu (Collaborator) commented

@charltonh, I wonder if this has been addressed?

charltonh commented

Well, sort of.
The instructions need to include setting both the OPENAI_BASE_URL and OPENAI_API_BASE environment variables, because AutoGen Studio doesn't override them even when you are pointing at an entirely different URL and model.
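The workaround described above can be sketched as follows. The URL is Groq's OpenAI-compatible endpoint from this thread; the API key value is a placeholder. Setting both variable names hedges against different openai-python versions reading different names.

```shell
# Point both variable names at the OpenAI-compatible endpoint you actually
# use, since different client versions read different variable names.
export OPENAI_BASE_URL="https://api.groq.com/openai/v1"
export OPENAI_API_BASE="https://api.groq.com/openai/v1"
export OPENAI_API_KEY="gsk_your_groq_key_here"   # placeholder, not a real key

# Then launch AutoGen Studio from the same shell session so it inherits
# these variables (uncomment to run):
# autogenstudio ui --port 8081
```

Launching from a shell where the variables are not exported (or only one of the two names is set) can reproduce the fallback to api.openai.com described earlier in the thread.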
