This is just a quick hack, I don't plan to maintain it, but I'm finding it useful. Feel free to fork.
If you use this software and follow these instructions, you are going to open your local machine to a non-deterministic, non-reliable piece of software and to the wild Internet, with your presence fully announced to any malicious bot out there (if you are familiar with the Dark Forest theory, I think it's a pretty good analogy).
If something bad happens, you have been warned.
Still, if you follow these recommendations:
- you run it within a VM (VirtualBox, Lima, whatever) or on a disposable machine (a Raspberry Pi), sharing only what you need the LLM to know
- you use the source IP filtering mechanism
- you drop the tunnel when you aren't using it
- you obfuscate the URL (security through obscurity gets a lot of bad press, but against untargeted attacks it's very effective)
then I don't think you'd be totally mad for deciding to run it.
What it is: A lightweight GitHub OAuth-protected proxy built with FastMCP to securely expose Model Context Protocol (MCP) tools to ChatGPT.
What problem it solves:
ChatGPT as of today (October 2025) does support MCP in what it calls "Developer Mode" with these restrictions:
- Only remote endpoints (https secured and Internet-reachable). No local (stdio) support.
- The authentication options are:
- None (unsecured endpoints)
- OAuth protected
- Notably missing: "simple" Bearer API keys are not supported (unlike in GPT Actions where they are available)
These restrictions make it difficult to use for personal needs in the way Claude Desktop openly allows.
This script runs a proxy that supports GitHub as an OAuth provider and forwards requests to local (stdio) MCP servers. Only GitHub users that are authenticated and on an allowed user list may use the services.
For enhanced security, it can keep a list of permitted IP ranges. This way, only connections that originate from OpenAI are accepted (OpenAI publishes its CIDR ranges at https://openai.com/chatgpt-actions.json).
And finally, it lets you set a custom URI path, so the endpoint is not easy for bots to discover.
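To illustrate the IP-filtering mechanism, here is a minimal sketch of how a CIDR allowlist check can work with Python's standard `ipaddress` module. Function names and the sample ranges are illustrative only; the proxy's actual implementation may differ.

```python
import ipaddress

def load_ranges(lines):
    """Parse CIDR strings (e.g. lines of ALLOWED_RANGES_FILE) into network objects."""
    return [ipaddress.ip_network(line.strip()) for line in lines if line.strip()]

def is_allowed(client_ip, networks):
    """Return True if client_ip falls inside any allowed CIDR range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in networks)

# Example ranges, for illustration only
nets = load_ranges(["23.102.140.112/28", "40.84.180.224/28"])
print(is_allowed("23.102.140.115", nets))  # True: inside the first /28
print(is_allowed("8.8.8.8", nets))         # False: not in any range
```

A /28 covers 16 addresses, so 23.102.140.115 sits inside 23.102.140.112/28 while 8.8.8.8 does not match any range.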
When the proxy boots it performs an upfront security audit. Three safeguards are evaluated and reported with emojis in the logs:
- ✅/⚠️ OAuth authentication (disabled only when `SKIP_OAUTH` is truthy, e.g. `SKIP_OAUTH=true`)
- ✅/⚠️ IP allowlist (set via `ALLOWED_RANGES_FILE`)
- ✅/⚠️ Obfuscated URL path (a non-default `OBFUSCATED_PATH` with at least 8 characters)
If fewer than two protections are active the server refuses to start and raises an error. Adjust your configuration until at least two checks pass before retrying.
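The "at least two protections" rule can be sketched as follows. This is an illustrative reconstruction of the logic described above, not the proxy's actual code; the argument names are hypothetical.

```python
def audit(oauth_enabled, allowlist_set, path):
    """Evaluate the three safeguards and refuse to start if fewer than two pass."""
    checks = {
        "OAuth authentication": oauth_enabled,
        "IP allowlist": allowlist_set,
        "Obfuscated URL path": path != "shouldberandom" and len(path) >= 8,
    }
    for name, ok in checks.items():
        print(("✅" if ok else "⚠️"), name)
    if sum(checks.values()) < 2:
        raise RuntimeError("Fewer than two protections active; refusing to start.")
    return checks

# Passes: OAuth on and a non-default path of 8+ characters (2 of 3 safeguards)
audit(oauth_enabled=True, allowlist_set=False, path="x9f2-kq81-zz")
```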
There is still the problem of a valid HTTPS certificate and Internet reachability, but there are standardized solutions for that.
The options I know of are Tailscale Funnel, Cloudflare Tunnel, and ngrok.
Tailscale funnel is the one I have tested and, in general terms, only requires:
- Install tailscale
- run `tailscale funnel <port>`
- click accept several times the first time it's run to enable it
That is all. The bad news is that the moment you enable it, you start getting scans looking for vulnerabilities (brute-force hacking attempts), because the creation of the certificate signals its existence. Maybe the other solutions don't have this problem; I don't know.
Assuming you have a GitHub account, go to https://github.com/settings/developers and create a new OAuth App:
- Client ID and Client Secret are generated by GitHub (write them down securely)
- App Name (something like "MCP for Laptop" should suffice)
- Homepage URL. This is where you will be sent after form validation: https://chatgpt.com/connector_platform_oauth_redirect
- Authorization callback URL: Use https://<your-machine>.<your-tailnet>.ts.net/auth/callback
(THIS PART IS AI GENERATED)
Prerequisites:
- Python 3.10 or newer
- A public HTTPS URL to your machine (via Tailscale Funnel, Cloudflare Tunnel, or ngrok)
- A GitHub OAuth App (see section above)
Steps:
- Clone and enter the repo
- git clone <repo-url>
- cd <repo-dir>
- Create and activate a virtual environment
- python3 -m venv .venv
- source .venv/bin/activate (Windows: .venv\Scripts\activate)
- Install dependencies
- pip install -r requirements.txt
- Create your environment file
- cp env.example .env
- Edit .env and fill in values (see Configuration below)
- Prepare your MCP server config
- Copy or edit mcp.json (example in mcp.json.example) so it points to your local stdio MCP servers.
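For reference, an mcp.json might look like the following, assuming the common `mcpServers` layout used by stdio MCP configs (the server name, command, and path are placeholders; follow mcp.json.example for the exact schema this proxy expects):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/share"]
    }
  }
}
```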
Edit the .env file. Important variables:
- INTERNAL_HOST / INTERNAL_PORT: Where the proxy listens locally (default 127.0.0.1:8888)
- EXTERNAL_HOSTNAME: The public hostname clients will use (e.g. myhost.tail123.ts.net)
- GITHUB_CLIENT_ID / GITHUB_CLIENT_SECRET: From your GitHub OAuth App
- GITHUB_USERS: Comma-separated GitHub usernames allowed to access (leave empty to allow any authenticated user)
- BASE_URL_SCHEME: Usually "https"
- OAUTH_REDIRECT_PATH: Usually "/auth/callback"; must match your GitHub OAuth App
- MCP_JSON_PATH: Path to your MCP server config (default ./mcp.json)
- SERVER_NAME: Name ChatGPT will see for the MCP server
- ALLOWED_RANGES_FILE: Optional path to IPv4 CIDR allowlist; if set, only those source IPs are allowed
- OBFUSCATED_PATH: A random-looking path segment added to your public endpoint for obscurity. To satisfy the startup check it must differ from the default "shouldberandom" and be at least 8 characters long.
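A quick way to generate an OBFUSCATED_PATH value that satisfies the startup check is Python's standard `secrets` module:

```python
import secrets

# token_urlsafe(16) yields ~22 URL-safe characters, well past the
# 8-character minimum and never equal to the default "shouldberandom".
obfuscated_path = secrets.token_urlsafe(16)
print(obfuscated_path)
```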
Optional: Restrict source IPs to OpenAI
- Generate a ranges file using the helper script:
- python generate_allowed_ranges_from_openai.py --output allowed-ranges.txt
- Point ALLOWED_RANGES_FILE in .env to that file.
- Ensure your .env and mcp.json are ready
- Start the server:
- python server.py
- On startup you will see emoji-tagged status lines for each security measure; ensure at least two are ✅ or the process will exit.
- The proxy then listens on INTERNAL_HOST:INTERNAL_PORT and exposes an HTTP endpoint path of /{OBFUSCATED_PATH}.
Example local URL: http://127.0.0.1:8888/your-long-random-path
To make it reachable by ChatGPT, put it behind a public HTTPS endpoint (see next section) and ensure EXTERNAL_HOSTNAME points to that hostname.
You can use any of the following:
- Tailscale Funnel
- tailscale funnel <port>
- Cloudflare Tunnel
- cloudflared tunnel --url http://127.0.0.1:<port>
- ngrok
- ngrok http http://127.0.0.1:<port>
Be sure the final public URL hostname matches EXTERNAL_HOSTNAME in your .env and that HTTPS is enabled.
- In ChatGPT, open Settings → Developer Mode → Add MCP server
- Choose OAuth as the authentication type
- Endpoint URL: https://<EXTERNAL_HOSTNAME>/{OBFUSCATED_PATH}
- Follow the OAuth flow; after authorization, the proxy will only allow users in GITHUB_USERS
- The proxy will forward MCP traffic to the local stdio servers described by your mcp.json
OpenAI publishes ChatGPT Actions IP ranges at https://openai.com/chatgpt-actions.json
- Regenerate your allowlist periodically:
- python generate_allowed_ranges_from_openai.py
- Restart the proxy to apply changes
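If you want this to happen automatically, a weekly cron job could regenerate the file (the repo path is a placeholder for wherever you cloned it; you still need to restart the proxy afterwards):

```cron
# Every Sunday at 03:00: refresh the OpenAI allowlist
0 3 * * 0 cd /path/to/repo && .venv/bin/python generate_allowed_ranges_from_openai.py --output allowed-ranges.txt
```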
- 403 / not allowed:
- Check GITHUB_USERS includes your GitHub username
- If ALLOWED_RANGES_FILE is set, confirm your request is coming from an allowed IP (consider your tunnel provider’s X-Forwarded-For behavior)
- OAuth callback mismatch:
- Ensure OAUTH_REDIRECT_PATH matches the GitHub App configuration
- Ensure EXTERNAL_HOSTNAME is correct and publicly reachable via HTTPS
- Cannot reach MCP servers:
- Validate your mcp.json syntax and that the commands run locally via stdio
- Logs too quiet / verbose:
- Set DEBUGLEVEL in .env (e.g., INFO, DEBUG, WARNING)
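On the X-Forwarded-For point in the 403 troubleshooting above: when the proxy sits behind a tunnel, the direct peer is the tunnel endpoint and the client's real IP typically arrives in that header. A sketch of extracting it (the function name is hypothetical, and how much of the header you can trust depends on your tunnel provider):

```python
def client_ip_from_headers(headers, peer_ip):
    """Return the originating client IP, preferring X-Forwarded-For.

    X-Forwarded-For is a comma-separated chain: "client, proxy1, proxy2".
    The leftmost entry is the original client; only trust it when the
    direct peer is your own tunnel endpoint.
    """
    xff = headers.get("X-Forwarded-For")
    if xff:
        return xff.split(",")[0].strip()
    return peer_ip

print(client_ip_from_headers(
    {"X-Forwarded-For": "23.102.140.115, 100.64.0.1"}, "127.0.0.1"))
# → 23.102.140.115
```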
- Deactivate and remove the virtualenv
- Remove the tunnel configuration if any
- Delete the .env if it contains secrets