a3s-code appears to ignore HTTP(S) proxy settings for model requests, causing outbound OpenAI-compatible calls to time out in proxy-required environments #20
Description
Summary
We are integrating a3s-code into a skillsbench environment where outbound network access must go
through an HTTP proxy.
In the same environment, with the same OpenAI-compatible base URL and the same proxy settings:
- codex works
- claude-code works
- a3s-code times out when sending model requests
This strongly suggests that a3s-code is not honoring standard proxy environment variables such as
http_proxy, https_proxy, HTTP_PROXY, and HTTPS_PROXY for its outbound HTTP requests.
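To rule out shell-level mistakes before blaming the client, a quick sanity check can confirm which of the four conventional variables are actually exported. This is a minimal sketch; the proxy URL below is a placeholder, not our real proxy:

```shell
#!/bin/sh
# Placeholder proxy URL for illustration only; substitute the real intranet proxy.
export http_proxy="http://proxy.example:3128"
export https_proxy="http://proxy.example:3128"
export HTTP_PROXY="$http_proxy"
export HTTPS_PROXY="$https_proxy"

# Report which of the four conventional variables child processes will see.
for var in http_proxy https_proxy HTTP_PROXY HTTPS_PROXY; do
  eval "val=\$$var"
  if [ -n "$val" ]; then
    echo "$var=$val"
  else
    echo "$var is unset"
  fi
done
```

In our environment all four variables were set to the same proxy URL before any agent CLI was launched.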
Environment
Host setup:
- Linux host
- Docker with --network host
- outbound network requires an intranet HTTP proxy
- no SSH tunnel proxy was used in this test
Tooling versions observed locally:
- a3s-code: 1.5.7
- os: linux
Provider setup:
- OpenAI-compatible endpoint
- model configured through base_url + api_key
- proxy configured only through standard env vars: http_proxy, https_proxy, HTTP_PROXY, HTTPS_PROXY
Configuration used
Minimal config.hcl:

```hcl
default_model = "openai/gpt-5.4"

providers {
  name     = "openai"
  api_key  = env("OPENAI_API_KEY")
  base_url = env("OPENAI_BASE_URL")

  models {
    id   = "gpt-5.4"
    name = "GPT 5.4"
  }
}
```
Proxy environment was set before running the agent:

```shell
export http_proxy=http://<proxy-host>:3128
export https_proxy=http://<proxy-host>:3128
export HTTP_PROXY=$http_proxy
export HTTPS_PROXY=$https_proxy
```

What we tested
1. Control test: Codex works under the same proxy
We ran a minimal smoke test with the same environment and same OpenAI-compatible gateway.
Command pattern:

```shell
source scripts/env_proxy.sh user.chensicheng
source .env
codex exec --skip-git-repo-check --cd /tmp --model gpt-5.4 --json "Reply with exactly OK and nothing else."
```
Observed result:
- success
- returned OK
- exit code 0
Relevant output:

```json
{"type":"item.completed","item":{"id":"item_1","type":"agent_message","text":"OK"}}
```
2. Control test: Claude Code works under the same proxy
We ran a minimal Claude smoke test in the same environment.
Command pattern:

```shell
source scripts/env_proxy.sh user.chensicheng
source .env
export ANTHROPIC_MODEL="claude-sonnet-4-5@20250929"
claude --bare -p --output-format json --model "$ANTHROPIC_MODEL" "Reply with exactly OK and nothing else."
```
Observed result:
- success
- returned OK
- exit code 0
Relevant output:

```json
{"type":"result","subtype":"success","is_error":false,"result":"OK"}
```
3. a3s-code fails in the same environment
After we fixed unrelated issues in our integration:
- config file upload path was correct
- config.hcl syntax was correct
a3s-code still failed at request time.
Observed runtime error:

```
[DEBUG] HTTP error: Failed to send request to https:///chat/completions
Caused by:
    0: error sending request for url (https:///chat/completions): error trying to connect: tcp connect error: Connection timed out (os error 110)
    1: error trying to connect: tcp connect error: Connection timed out (os error 110)
    2: tcp connect error: Connection timed out (os error 110)
    3: Connection timed out (os error 110)
```
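The timeout is consistent with a direct TCP connection attempt rather than a CONNECT through the proxy. To verify the proxy itself was reachable from the same container, we used a probe along these lines. This is a sketch with a placeholder URL; in practice proxy_url would be "$https_proxy", and the parsing assumes the common http://host:port form:

```shell
#!/bin/sh
# Placeholder proxy URL; in practice this would be "$https_proxy".
proxy_url="http://proxy.example:3128"

# Strip the scheme and any trailing slash, then split host and port.
hostport="${proxy_url#*://}"
hostport="${hostport%/}"
proxy_host="${hostport%%:*}"
proxy_port="${hostport##*:}"
echo "proxy host: $proxy_host"
echo "proxy port: $proxy_port"

# TCP probe (commented out here because it needs the real network):
# nc -z -w 5 "$proxy_host" "$proxy_port" && echo "proxy reachable"
```

In our environment the probe succeeds, so the proxy endpoint itself is not the bottleneck.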
This happened after:
- a3s-code was installed successfully
- config was uploaded successfully
- the same host/proxy combination already worked for other agent CLIs
Why we believe this is a proxy support issue
We ruled out several other causes:
Not a general network failure
In the same environment:
- codex can reach the model provider and complete a prompt
- claude-code can reach its provider and complete a prompt
Not a Docker networking issue
We separately validated our Docker network behavior:
- host networking works
- the container can use the configured host-side proxy
- other agent CLIs succeed in the same containerized/proxied environment
Not a config file parsing issue
We previously hit a separate HCL syntax problem when base_url was unquoted; that has since been fixed.
Earlier parse error:

```
Failed to parse HCL: expected newline or eof
```

That issue is independent of the current timeout.
Not a missing config issue
We also previously hit a missing config upload issue:

```
A3S code config not found: /root/.a3s/config.hcl
```

That was also fixed. After fixing it, the network timeout remained.
Expected behavior
When standard proxy variables are present in the environment, a3s-code should honor them for outbound model API
requests, including OpenAI-compatible providers configured via base_url.
At minimum, one of the following should work:
- a3s-code automatically respects http_proxy / https_proxy / HTTP_PROXY / HTTPS_PROXY
- a3s-code exposes an explicit proxy configuration option
- the docs clearly state that proxy env vars are unsupported, and document the supported alternative
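As an illustration of the second option, an explicit knob might look like the fragment below. The proxy block and its attributes are hypothetical, not an existing a3s-code option; env var names follow the usual convention:

```hcl
# Hypothetical syntax, purely illustrative.
proxy {
  url      = env("HTTPS_PROXY")
  no_proxy = env("NO_PROXY")
}
```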
Actual behavior
a3s-code appears to attempt a direct TCP connection to the model endpoint and times out, even though the
environment requires proxy egress and other CLIs succeed under the same proxy configuration.
Minimal reproduction request
Could you confirm whether a3s-code is supposed to support proxy-based outbound HTTP in environments like this?
If yes, can you point to:
- the supported proxy env vars
- whether they apply to OpenAI-compatible providers
- any additional config needed for Rust/reqwest transport
- whether proxy support is currently missing in the a3s-code HTTP client layer
Suggested improvements
If proxy support is intended, it would help a lot to:
- explicitly support http_proxy, https_proxy, HTTP_PROXY, HTTPS_PROXY
- document proxy support in the README/config docs
- add a small proxy-required integration test
- optionally add debug logging that prints whether a proxy is being used for outbound requests
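The last suggestion can be sketched in a few lines of shell. The egress_mode helper is hypothetical and only illustrates the decision we would like logged: whether an outbound request will use a proxy, based on the conventional variables:

```shell
#!/bin/sh
# Start from a clean slate so the demonstration is deterministic.
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY

# Hypothetical helper: report whether outbound HTTPS traffic would use a proxy.
egress_mode() {
  if [ -n "${HTTPS_PROXY:-}${https_proxy:-}" ]; then
    echo "proxy:${HTTPS_PROXY:-$https_proxy}"
  else
    echo "direct"
  fi
}

https_proxy="http://proxy.example:3128"; export https_proxy
echo "egress: $(egress_mode)"   # egress: proxy:http://proxy.example:3128

unset https_proxy
echo "egress: $(egress_mode)"   # egress: direct
```

A single line like this at request time would have made the current failure mode obvious immediately.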
Additional context
Our integration wraps a3s-code inside a benchmark runner, but the important point is this:
- same machine
- same proxy
- same style of API gateway
- same container/network setup
Control tools succeed, while a3s-code times out on the model request.
That is the reason we believe the issue is specifically in proxy handling for a3s-code.