Currently the OpenAI model parser only works with OpenAI as the provider. Now that there are providers like Azure OpenAI (authentication through OAuth), we need to generalize the OpenAI model parser to support these different types of API providers.
One approach is to construct the OpenAI client in the constructor of the OpenAI model parser; that way, a user can simply extend the class and construct a different client in the new constructor.
For example, in openai.py:

```python
class OpenAIInference(ParameterizedModelParser):
    def __init__(self):
        super().__init__()
        # New code:
        self.client = OpenAI()

    async def run_inference(self):
        ...
        # Change this line to self.client.chat.completions.create(**completion_data)
        response = openai.chat.completions.create(**completion_data)
```
This would allow for SomeAzureOpenAIModelParser.py:

```python
class AzureOpenAIInference(OpenAIInference):
    def __init__(self):
        super().__init__()
        # New code:
        self.client = AzureOpenAI()  # or whatever this should be
```
Also update unit tests as necessary.
Note: this applies to both the Python and TypeScript SDKs.
Note: the provided example solution may not be the best way to solve this, since it would require credentials at construct time, whereas all of the current model parsers require credentials at the run step.
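One possible way to reconcile those two constraints, sketched below: keep the client on the parser but construct it lazily, so credentials are only resolved at the run step. This is a sketch against the class shape from the example above, not the approach the thread settled on; the aiconfig import path is assumed and the run_inference signature is simplified.

```python
from openai import OpenAI

from aiconfig import ParameterizedModelParser  # import path assumed


class OpenAIInference(ParameterizedModelParser):
    def __init__(self):
        super().__init__()
        self._client = None  # no credentials needed at construct time

    @property
    def client(self) -> OpenAI:
        # Built on first use, i.e. during the run step, so registering
        # the parser at initialization time works without credentials.
        if self._client is None:
            self._client = OpenAI()  # reads OPENAI_API_KEY from the environment
        return self._client

    async def run_inference(self, completion_data):  # signature simplified
        response = self.client.chat.completions.create(**completion_data)
        return response
```

A derived class such as AzureOpenAIInference would then only need to override the client property to return a differently configured client.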
This won't work because of the way we register model parsers in config.py at initialization time.
I think it would be better to have an init function that sets up the model parser in the right way.
Also, AnyscaleEndpoint model parser uses the openai client, and the main OpenAI model parser should also use the OpenAI client. This will allow it to be configured in an init function.
The init function can be overridden by a derived class for things like azure openai usage.
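A rough sketch of that shape, assuming a hypothetical initialize_client hook (the method name and call site are illustrative, not an existing AIConfig API; the endpoint and API version are placeholders):

```python
from openai import AzureOpenAI, OpenAI

from aiconfig import ParameterizedModelParser  # import path assumed


class OpenAIInference(ParameterizedModelParser):
    def initialize_client(self) -> None:
        # Hypothetical hook, called after the parser is registered in
        # config.py rather than at construct time.
        self.client = OpenAI()


class AzureOpenAIInference(OpenAIInference):
    def initialize_client(self) -> None:
        # A derived class overrides the hook to swap in another provider.
        self.client = AzureOpenAI(
            azure_endpoint="https://my-resource.openai.azure.com",  # placeholder
            api_version="2024-02-01",  # example API version
        )
```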
Is it possible in the interim to set the base_url of the OpenAI endpoint? We could still use litellm as an external component if needed. See the example from the GitHub issue.
Hi @shuther, thanks for your suggestion. This issue was addressed by #1034 and can be marked as closed now. Providers such as AzureOpenAI, as mentioned in the original issue description, are now more seamlessly integrated with the AIConfig SDK. Users can either initialize the OpenAI client with the base_url as you've suggested or set the relevant environment variables.
As for LiteLLM, it's on our radar as a potential out-of-the-box solution. Before I close this issue, I'd like to ensure we've fully addressed your needs. Are there specific concerns or goals you had in mind regarding the integration of LiteLLM?
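For reference, a minimal sketch of the two configuration paths mentioned above, using the openai Python SDK; the endpoints, deployment name, and API version are placeholders:

```python
from openai import AzureOpenAI, OpenAI

# Option 1: point the standard client at a compatible endpoint.
# base_url can also come from the OPENAI_BASE_URL environment variable.
client = OpenAI(base_url="https://my-gateway.example.com/v1")  # placeholder

# Option 2: use the dedicated Azure client. azure_endpoint, api_key, and
# api_version fall back to AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY,
# and OPENAI_API_VERSION if not passed explicitly.
azure_client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",  # placeholder
    api_version="2024-02-01",
)

response = azure_client.chat.completions.create(
    model="my-gpt-deployment",  # Azure deployment name, placeholder
    messages=[{"role": "user", "content": "Hello"}],
)
```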