Enable System Proxy Support for aiohttp Transport #11616


Merged: 3 commits merged into BerriAI:main on Jun 12, 2025

Conversation

idootop
Contributor

@idootop idootop commented Jun 11, 2025

Enable System Proxy Support for aiohttp Transport

After switching to aiohttp as the default HTTP transport, system proxy configurations (HTTP_PROXY) are no longer automatically detected, which differs from the previous httpx behavior.

This PR enables trust_env by default in aiohttp transport to maintain consistent proxy behavior when reading from environment variables.
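Concretely, with this change in place, setting a proxy in the environment should be enough for requests made through the aiohttp transport to route through it. A minimal usage sketch (the proxy address and model are examples only):

```python
import asyncio
import os

import litellm

# Example only: point this at whatever proxy your environment actually uses.
os.environ["HTTPS_PROXY"] = "http://127.0.0.1:7890"

async def main():
    # With trust_env enabled, the aiohttp transport should pick up the
    # HTTPS_PROXY setting above automatically, as httpx did previously.
    resp = await litellm.acompletion(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "This is a test"}],
    )
    print(resp)

asyncio.run(main())
```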

(screenshot)

Relevant issues

Fixes #11389

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added testing in the tests/litellm/ directory. Adding at least 1 test is a hard requirement - see details
  • I have added a screenshot of my new test passing locally
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem

Type

🐛 Bug Fix

Changes

  • Set trust_env=True by default for aiohttp client
  • Add configuration options to control this behavior:
    • Environment variable: DISABLE_AIOHTTP_TRUST_ENV
    • Code parameter: litellm.disable_aiohttp_trust_env

This change ensures backward compatibility with previous proxy behavior while maintaining flexibility for users who need to disable this feature.
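For reference, the two toggles above can be used as shown below; the session wiring and helper name are only a minimal sketch of the idea, not the exact LiteLLM transport code:

```python
import os

import aiohttp
import litellm

# Opt out in code (equivalent to exporting DISABLE_AIOHTTP_TRUST_ENV):
litellm.disable_aiohttp_trust_env = True

# Sketch of the underlying behavior: trust_env tells aiohttp to read
# HTTP_PROXY/HTTPS_PROXY (and .netrc) from the environment, matching the
# previous httpx behavior. The helper below is illustrative only.
def _aiohttp_trust_env() -> bool:
    return os.getenv("DISABLE_AIOHTTP_TRUST_ENV", "").lower() not in ("1", "true")

async def create_session() -> aiohttp.ClientSession:
    return aiohttp.ClientSession(trust_env=_aiohttp_trust_env())
```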

@CLAassistant

CLAassistant commented Jun 11, 2025

CLA assistant check
All committers have signed the CLA.


Contributor

@ishaan-jaff ishaan-jaff left a comment


Please add a test in tests/test_litellm/llms/custom_httpx/test_http_handler.py
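As an illustration of the behavior such a test should pin down, here is a self-contained pytest sketch; the helper is hypothetical and this is not the actual test that was added:

```python
import os

def aiohttp_trust_env() -> bool:
    # Hypothetical helper mirroring the PR's toggle semantics.
    return os.getenv("DISABLE_AIOHTTP_TRUST_ENV", "").lower() not in ("1", "true")

def test_trust_env_enabled_by_default(monkeypatch):
    monkeypatch.delenv("DISABLE_AIOHTTP_TRUST_ENV", raising=False)
    assert aiohttp_trust_env() is True

def test_trust_env_can_be_disabled(monkeypatch):
    monkeypatch.setenv("DISABLE_AIOHTTP_TRUST_ENV", "true")
    assert aiohttp_trust_env() is False
```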

@idootop
Contributor Author

idootop commented Jun 12, 2025

@ishaan-jaff done

(screenshot)

Contributor

@ishaan-jaff ishaan-jaff left a comment


LGTM

@ishaan-jaff ishaan-jaff merged commit 33c134c into BerriAI:main Jun 12, 2025
5 checks passed
@ishaan-jaff
Contributor

Hi @idootop, our testing pipeline found that trust_env introduces a blocking call. Here is what happens:

When trust_env=True (the default), aiohttp attempts to:

  1. Read environment variables for proxy configuration
  2. Parse the .netrc file for authentication credentials

Both operations involve synchronous file I/O that blocks the event loop.

I want to help here. I think this PR allows you to use HTTPS proxy environment variables too: #11217

I will merge #11217 into main; can you help confirm it works? If not, could you help with an approach that adds this support without introducing blocking sync calls? Blocking sync calls are costly for us.

You can use https://github.com/cbornet/blockbuster to detect the same issue; the full trace of the blocking call is included below.
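For reference, blockbuster is usually wired into a test suite as an autouse pytest fixture. The sketch below follows the project's documented pattern; the imported name is an assumption, so check its README for the exact API in your version:

```python
import pytest
from blockbuster import blockbuster_ctx  # name assumed from blockbuster's docs

@pytest.fixture(autouse=True)
def blockbuster_fixture():
    # Raises BlockingError whenever synchronous I/O (such as the netrc read
    # in the trace below) happens while an event loop is running in a test.
    with blockbuster_ctx() as bb:
        yield bb
```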


self = <openai.AsyncOpenAI object at 0x137947a60>, cast_to = <class 'openai.types.chat.chat_completion.ChatCompletion'>
options = FinalRequestOptions(method='post', url='/chat/completions', params={}, headers={'X-Stainless-Raw-Response': 'true'}, m... 'This is a test'}], 'model': 'gpt-4o-mini', 'stream': True, 'stream_options': {'include_usage': True}}, extra_json={})

    async def request(
        self,
        cast_to: Type[ResponseT],
        options: FinalRequestOptions,
        *,
        stream: bool = False,
        stream_cls: type[_AsyncStreamT] | None = None,
    ) -> ResponseT | _AsyncStreamT:
        if self._platform is None:
            # `get_platform` can make blocking IO calls so we
            # execute it earlier while we are in an async context
            self._platform = await asyncify(get_platform)()
    
        cast_to = self._maybe_override_cast_to(cast_to, options)
    
        # create a copy of the options we were given so that if the
        # options are mutated later & we then retry, the retries are
        # given the original options
        input_options = model_copy(options)
        if input_options.idempotency_key is None and input_options.method.lower() != "get":
            # ensure the idempotency key is reused between requests
            input_options.idempotency_key = self._idempotency_key()
    
        response: httpx.Response | None = None
        max_retries = input_options.get_max_retries(self.max_retries)
    
        retries_taken = 0
        for retries_taken in range(max_retries + 1):
            options = model_copy(input_options)
            options = await self._prepare_options(options)
    
            remaining_retries = max_retries - retries_taken
            request = self._build_request(options, retries_taken=retries_taken)
            await self._prepare_request(request)
    
            kwargs: HttpxSendArgs = {}
            if self.custom_auth is not None:
                kwargs["auth"] = self.custom_auth
    
            log.debug("Sending HTTP Request: %s %s", request.method, request.url)
    
            response = None
            try:
>               response = await self._client.send(
                    request,
                    stream=stream or self._should_stream_response_body(request=request),
                    **kwargs,
                )

/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/_base_client.py:1484: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/httpx/_client.py:1629: in send
    response = await self._send_handling_auth(
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/httpx/_client.py:1657: in _send_handling_auth
    response = await self._send_handling_redirects(
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/httpx/_client.py:1694: in _send_handling_redirects
    response = await self._send_single_request(request)
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/httpx/_client.py:1730: in _send_single_request
    response = await transport.handle_async_request(request)
../../litellm/llms/custom_httpx/aiohttp_transport.py:206: in handle_async_request
    response = await client_session.request(
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/aiohttp/client.py:1353: in __aenter__
    self._resp = await self._coro
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/aiohttp/client.py:623: in _request
    proxy, proxy_auth = get_env_proxy_for_url(url)
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/aiohttp/helpers.py:294: in get_env_proxy_for_url
    proxies_in_env = proxies_from_env()
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/aiohttp/helpers.py:269: in proxies_from_env
    netrc_obj = netrc_from_env()
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/aiohttp/helpers.py:212: in netrc_from_env
    return netrc.netrc(str(netrc_path))
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/netrc.py:31: in __init__
    self._parse(file, fp, default_netrc)
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/netrc.py:43: in _parse
    toplevel = tt = lexer.get_token()
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/shlex.py:109: in get_token
    raw = self.read_token()
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/shlex.py:140: in read_token
    nextchar = self.instream.read(1)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

args = (<_io.TextIOWrapper name='/Users/ishaanjaffer/.netrc' mode='r' encoding='utf-8'>, 1), kwargs = {}
skip_token = <Token used var=<ContextVar name='blockbuster_skip' at 0x1160260c0> at 0x1525863c0>, frame = None, in_test_module = False
frame_info = Traceback(filename='/Library/Frameworks/Python.framework/Versions/3.10/bin/pytest', lineno=8, function='<module>', code_context=['    sys.exit(console_main())\n'], index=0)
in_excluded_module = False, frame_file_name = '/Library/Frameworks/Python.framework/Versions/3.10/bin/pytest', filename = 'aiofile/version.py', functions = {'<module>'}

    def wrapper(*args: Any, **kwargs: Any) -> _T:
        if blockbuster_skip.get(False):
            return func(*args, **kwargs)
        try:
            asyncio.get_running_loop()
        except RuntimeError:
            return func(*args, **kwargs)
        skip_token = blockbuster_skip.set(True)
        try:
            if can_block_predicate(*args, **kwargs):
                return func(*args, **kwargs)
            frame = inspect.currentframe()
            in_test_module = False
            while frame:
                frame_info = inspect.getframeinfo(frame)
                if not in_test_module:
                    in_excluded_module = False
                    for excluded_module in excluded_modules:
                        if frame_info.filename.startswith(excluded_module):
                            in_excluded_module = True
                            break
                    if not in_excluded_module:
                        for module in modules:
                            if frame_info.filename.startswith(module):
                                in_test_module = True
                                break
                frame_file_name = Path(frame_info.filename).as_posix()
                for filename, functions in can_block_functions:
                    if (
                        frame_file_name.endswith(filename)
                        and frame_info.function in functions
                    ):
                        return func(*args, **kwargs)
                frame = frame.f_back
            if not modules or in_test_module:
>               raise BlockingError(func_name)
E               blockbuster.blockbuster.BlockingError: Blocking call to io.TextIOWrapper.read

/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/blockbuster/blockbuster.py:109: BlockingError

The above exception was the direct cause of the following exception:

self = <litellm.llms.openai.openai.OpenAIChatCompletion object at 0x130c7b130>, timeout = 600.0, messages = [{'content': 'This is a test', 'role': 'user'}]
optional_params = {'extra_body': {}}, litellm_params = {'acompletion': True, 'aembedding': None, 'api_base': None, 'api_key': None, ...}
provider_config = <litellm.llms.openai.chat.gpt_transformation.OpenAIGPTConfig object at 0x11505a110>, model = 'gpt-4o-mini'
logging_obj = <litellm.litellm_core_utils.litellm_logging.Logging object at 0x137acd240>
api_key = 'sk-proj-<REDACTED>'
api_base = 'https://api.openai.com/v1', api_version = None, organization = 'org-ikDc4ex8NB5ZzfTf8m5WYVB7', client = None, max_retries = 2, headers = None, drop_params = None
stream_options = None

    async def async_streaming(
        self,
        timeout: Union[float, httpx.Timeout],
        messages: list,
        optional_params: dict,
        litellm_params: dict,
        provider_config: BaseConfig,
        model: str,
        logging_obj: LiteLLMLoggingObj,
        api_key: Optional[str] = None,
        api_base: Optional[str] = None,
        api_version: Optional[str] = None,
        organization: Optional[str] = None,
        client=None,
        max_retries=None,
        headers=None,
        drop_params: Optional[bool] = None,
        stream_options: Optional[dict] = None,
    ):
        response = None
        data = provider_config.transform_request(
            model=model,
            messages=messages,
            optional_params=optional_params,
            litellm_params=litellm_params,
            headers=headers or {},
        )
        data["stream"] = True
        data.update(
            self.get_stream_options(stream_options=stream_options, api_base=api_base)
        )
        for _ in range(2):
            try:
                openai_aclient: AsyncOpenAI = self._get_openai_client(  # type: ignore
                    is_async=True,
                    api_key=api_key,
                    api_base=api_base,
                    api_version=api_version,
                    timeout=timeout,
                    max_retries=max_retries,
                    organization=organization,
                    client=client,
                )
                ## LOGGING
                logging_obj.pre_call(
                    input=data["messages"],
                    api_key=api_key,
                    additional_args={
                        "headers": headers,
                        "api_base": api_base,
                        "acompletion": True,
                        "complete_input_dict": data,
                    },
                )
    
>               headers, response = await self.make_openai_chat_completion_request(
                    openai_aclient=openai_aclient,
                    data=data,
                    timeout=timeout,
                    logging_obj=logging_obj,
                )

../../litellm/llms/openai/openai.py:969: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../../litellm/litellm_core_utils/logging_utils.py:135: in async_wrapper
    result = await func(*args, **kwargs)
../../litellm/llms/openai/openai.py:436: in make_openai_chat_completion_request
    raise e
../../litellm/llms/openai/openai.py:418: in make_openai_chat_completion_request
    await openai_aclient.chat.completions.with_raw_response.create(
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/_legacy_response.py:381: in wrapped
    return cast(LegacyAPIResponse[R], await func(*args, **kwargs))
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/resources/chat/completions/completions.py:2028: in create
    return await self._post(
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/_base_client.py:1742: in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)

@idootop
Contributor Author

idootop commented Jun 15, 2025

Hi @ishaan-jaff. Thanks for the feedback. Unfortunately, #11217 won't work as expected due to several issues:

  • httpx.AsyncClient doesn't have a proxies attribute - you need mounts
  • The mounts returned by litellm.utils.create_proxy_transport_and_mounts() have incompatible keys (e.g., NO_PROXY). You should use httpx.get_environment_proxies() instead (see the sketch after this list)
  • There's a circular import between litellm.llms.custom_httpx.http_handler and litellm.utils
  • Even with correct proxy mounts, this conflicts with existing verify_ssl, ssl_context, and force_ipv4 configurations, since proxy requests won't use the SSL-configured transports unless all mounts include those SSL settings
  • Minor note: your log above leaked your OpenAI key and organization (though I checked and the key appears to be invalid)
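A rough sketch of what the mounts-based approach could look like, assuming get_environment_proxies() is importable from httpx as suggested above (in some httpx versions it lives in httpx._utils):

```python
import httpx

def env_proxy_mounts() -> dict:
    # Map environment proxy patterns (e.g. "http://", "https://", plus
    # no-proxy entries) to transports; None disables proxying for a pattern.
    mounts = {}
    for pattern, proxy_url in httpx.get_environment_proxies().items():
        mounts[pattern] = (
            httpx.AsyncHTTPTransport(proxy=proxy_url) if proxy_url else None
        )
    return mounts

client = httpx.AsyncClient(mounts=env_proxy_mounts())
```

As noted above, any verify_ssl / ssl_context / force_ipv4 settings would also have to be applied to each of these transports for proxied requests to honor them.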

Regarding aiohttp's trust_env behavior: you're absolutely right that it's not async-friendly. It performs blocking file I/O on every request to read proxy credentials from local files. A better approach might be initializing proxy and auth during transport creation, though aiohttp's strategy does make sense from the perspective of always using the latest configuration.
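One way to implement that "initialize at transport creation" idea, sketched with illustrative names (this is not LiteLLM's actual transport code):

```python
import aiohttp
from urllib.request import getproxies  # stdlib; reads the *_PROXY variables

class EnvProxyTransport:
    """Resolve proxy settings once, outside the per-request hot path."""

    def __init__(self) -> None:
        # Any blocking environment/file lookups happen here, at construction
        # time, rather than inside the running event loop on every request.
        env = getproxies()
        self.proxy = env.get("https") or env.get("http")

    async def request(
        self, session: aiohttp.ClientSession, method: str, url: str, **kwargs
    ):
        # trust_env stays off on the session; the pre-resolved proxy is passed
        # explicitly, so no per-request file I/O can block the loop.
        return await session.request(method, url, proxy=self.proxy, **kwargs)
```

The trade-off is the one mentioned above: proxy settings changed after startup would not be picked up until the transport is recreated.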

While these issues could potentially be resolved by properly configuring mounts with SSL settings, I won't be working on this further as I'm not a maintainer of this library.

Good luck with the implementation!

Successfully merging this pull request may close these issues.

[Bug]: Proxy Server env HTTP__PROXY is not taking effect