Allow configuring OpenAI client in AzureOpenAI components (Embedders, Generators) #8996

Open
lacebal opened this issue Mar 6, 2025 · 0 comments
Labels
P1 High priority, add to the next sprint

Comments


lacebal commented Mar 6, 2025

Is your feature request related to a problem? Please describe.
When working in a complex enterprise networking setup, it may be necessary to set up custom proxies and custom certificates when connecting to an Azure OpenAI endpoint. Think of custom internal LLM gateways with internal certificates that are accessible only through proxies.

The current configuration parameters fall short on options for this.

Describe the solution you'd like
The AzureOpenAI* classes should allow injecting the underlying OpenAI client and/or httpx client to meet any enterprise networking requirement.

Other AI frameworks (PydanticAI, LangChain, LlamaIndex) allow injecting that configuration, which enables complex enterprise networking setups.

LangChain embeddings example with internal CA support (based on the truststore library) and a proxy setup:

    import ssl
    import httpx
    import truststore
    from langchain_openai import AzureOpenAIEmbeddings

    # Make ssl.create_default_context() trust the OS certificate store (incl. internal CAs)
    truststore.inject_into_ssl()
    ssl_ctx = ssl.create_default_context()

    # Route requests through the corporate proxy; `openai_config` is the caller's own settings object
    http_async_client = httpx.AsyncClient(
        proxy=openai_config.proxy,
        verify=ssl_ctx,
    )
    embeddings = AzureOpenAIEmbeddings(
        http_async_client=http_async_client,
        azure_endpoint=openai_config.api_base,
        openai_api_key=openai_config.api_key,  # type: ignore
        deployment="text-embedding-ada-002",  # type: ignore
        model="text-embedding-ada-002",
    )
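
For illustration, a similar injection point on the Haystack side might look like the sketch below. The `http_client` parameter is purely hypothetical (it is the proposed addition, not an existing `AzureOpenAITextEmbedder` argument), and the proxy/endpoint values are placeholders:

    import ssl
    import httpx
    from haystack.components.embedders import AzureOpenAITextEmbedder

    ssl_ctx = ssl.create_default_context()
    # Proxy URL and endpoint below are placeholders
    http_client = httpx.Client(proxy="http://internal-proxy:3128", verify=ssl_ctx)

    embedder = AzureOpenAITextEmbedder(
        azure_endpoint="https://internal-llm-gateway.example.com",
        azure_deployment="text-embedding-ada-002",
        http_client=http_client,  # hypothetical: the injection point this issue requests
    )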

This solution probably does not work well with component serialization, so some thought would be required.

Describe alternatives you've considered
Setting up global proxies and certificates can be cumbersome in Python, and it is not enough because different endpoints may require different proxies. In our scenario we need to access LLM endpoints through different proxies.
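
For example, a process-wide proxy (e.g. via HTTPS_PROXY) pushes every outgoing connection through the same proxy, whereas per-endpoint proxies need per-client configuration (proxy URLs below are placeholders):

    import httpx

    # Global approach: every httpx client with trust_env=True (the default) picks up
    # HTTPS_PROXY/HTTP_PROXY, so all endpoints share one proxy.
    #   export HTTPS_PROXY=http://corporate-proxy:3128

    # Per-endpoint approach: each client gets its own proxy (and CA bundle if needed)
    azure_client = httpx.Client(proxy="http://proxy-azure.internal:3128")
    bedrock_client = httpx.Client(proxy="http://proxy-aws.internal:3128")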

Passing extra args through to the OpenAI client construction could suffice, but it requires some thought about the interaction with the currently fixed parameters (azure_endpoint, azure_deployment, api_key).
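
A minimal sketch of that alternative, assuming a hypothetical `client_kwargs` dict that the component would forward verbatim to the underlying `openai.AzureOpenAI` constructor (endpoint, key, and API version below are placeholders):

    import httpx
    from openai import AzureOpenAI

    # Hypothetical pass-through dict the component would forward as-is
    client_kwargs = {
        "http_client": httpx.Client(proxy="http://internal-proxy:3128"),
    }

    # Fixed parameters the component already manages today
    client = AzureOpenAI(
        azure_endpoint="https://internal-llm-gateway.example.com",
        api_key="<api-key>",
        api_version="2024-02-01",
        **client_kwargs,
    )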

Exposing the OpenAI client and providing a post-construction lifecycle method executed after from_dict could be a good solution. That way the object would be serialized/deserialized exactly as it is now, and client code could then adjust the client connection parameters with dedicated code.
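
A rough sketch of that idea; the `configure_http_client` hook is purely hypothetical, while `from_dict` is the existing deserialization entry point:

    import httpx
    from haystack.components.embedders import AzureOpenAITextEmbedder

    # `serialized_component` is the dict previously produced by to_dict()
    embedder = AzureOpenAITextEmbedder.from_dict(serialized_component)

    # Hypothetical post-construction hook: client code fixes up the connection
    # parameters after deserialization, without touching the serialized form.
    embedder.configure_http_client(
        httpx.Client(proxy="http://internal-proxy:3128", verify="/etc/ssl/internal-ca.pem")
    )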

Additional context

The same requirement would apply to any cloud provider that allows private endpoints and setups (AWS Bedrock, etc.).

@julian-risch julian-risch added the P1 High priority, add to the next sprint label Mar 7, 2025