FIX (openai): respect custom API endpoint when creating AsyncOpenAI clients #192
base: main
Conversation
… in OpenAI client
@microsoft-github-policy-service agree
Pull Request Overview
This PR ensures that custom OpenAI API endpoints from configuration or environment variables are properly utilized instead of a hard-coded endpoint.
- Added a new method in the llm module to retrieve the API endpoint.
- Updated the AsyncOpenAI client initialization in both llm and embedding modules to use the custom endpoint.
- Adjusted the environment template to align with the expected API endpoint format.
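Based on the description above, the new endpoint helper could look roughly like the following sketch. The function name `get_api_endpoint` comes from the file table below; the `provider_config` parameter and its `endpoint` attribute are assumptions about the config shape, not taken from the PR diff.

```python
import logging
import os

logger = logging.getLogger(__name__)

def get_api_endpoint(provider_config=None):
    """Return the OpenAI API endpoint, preferring configuration over environment."""
    # Prefer the endpoint from the provider configuration, if one is set
    endpoint = getattr(provider_config, "endpoint", None)
    # Fall back to the OPENAI_ENDPOINT environment variable
    if not endpoint:
        endpoint = os.getenv("OPENAI_ENDPOINT")
    if not endpoint:
        error_msg = "OpenAI API endpoint not found in configuration or environment"
        logger.error(error_msg)
        raise ValueError(error_msg)
    return endpoint
```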
Reviewed Changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.
File | Description
---|---
`code/llm/openai.py` | Added `get_api_endpoint` and updated client initialization to respect custom endpoints.
`code/embedding/openai_embedding.py` | Added `get_openai_api_endpoint` and updated client initialization accordingly.
`code/.env.template` | Updated the `OPENAI_ENDPOINT` value to a more general base URL.
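For reference, the updated template entry might look like the fragment below. These are illustrative values only; the actual contents of `code/.env.template` are in the PR diff.

```
# Base URL of the OpenAI-compatible API (illustrative value)
OPENAI_ENDPOINT=https://api.openai.com/v1
```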
Comments suppressed due to low confidence (1)
code/embedding/openai_embedding.py:49
- The docstring incorrectly mentions 'API key' instead of 'API endpoint'. Update the docstring to accurately reflect that this function retrieves the API endpoint.
> `Retrieve the OpenAI API key from configuration.`
```python
# Fallback to environment variable
endpoint = os.getenv("OPENAI_ENDPOINT")
if not endpoint:
    error_msg = "OpenAI API endpoint not found in configuration or environment"
    logger.error(error_msg)
    raise ValueError(error_msg)
```
The configured endpoint from provider_config is overwritten unconditionally by the environment variable. Consider checking if the provider_config.endpoint is available before using os.getenv as a fallback.
Suggested change:

```python
# Fallback to environment variable if provider_config.endpoint is not set
if not endpoint:
    endpoint = os.getenv("OPENAI_ENDPOINT")
if not endpoint:
    error_msg = "OpenAI API endpoint not found in configuration or environment"
    logger.error(error_msg)
    raise ValueError(error_msg)
```
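To see why the review comment matters, here is a minimal stdlib-only sketch contrasting the two behaviors: in the original code the environment variable unconditionally overwrites the configured endpoint, while the suggested version only consults it as a fallback. The function names are illustrative, not from the PR.

```python
import os

def resolve_endpoint_original(config_endpoint):
    # Original behavior: the environment variable unconditionally
    # overwrites whatever the provider configuration supplied.
    endpoint = config_endpoint
    endpoint = os.getenv("OPENAI_ENDPOINT")
    return endpoint

def resolve_endpoint_suggested(config_endpoint):
    # Suggested behavior: only consult the environment variable
    # when the configuration did not provide an endpoint.
    endpoint = config_endpoint
    if not endpoint:
        endpoint = os.getenv("OPENAI_ENDPOINT")
    return endpoint
```

With `OPENAI_ENDPOINT` set in the environment, the original version silently discards a configured endpoint, while the suggested version keeps it and only uses the environment value when the configuration is empty.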
Respect user-defined OpenAI API endpoints for both Embedding and LLM providers.
Endpoints are defined in the project config file.
Calls now use the endpoint configured in `CONFIG` (or `OPENAI_ENDPOINT`) instead of the hard-coded `https://api.openai.com`.