TLS proxy for * domains



Proxy is an nginx-based reverse proxy with TLS that runs on AWS. It was written to be the backend for a Recurse Center domain service.

Proxy is designed to be a front-end for an unlimited number of webapps all hosted at *. For each request, Proxy serves a wildcard TLS certificate for that domain.
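The wildcard matching that lets a single certificate cover every first-level subdomain can be sketched as follows. This is not code from Proxy (TLS clients perform this check themselves); it is only an illustration of why one wildcard certificate suffices:

```python
# Toy illustration of how a wildcard certificate name covers subdomains.
# Not part of Proxy -- TLS clients do this matching -- but it shows why
# one *.example.com certificate works for every first-level subdomain.
def wildcard_matches(pattern: str, hostname: str) -> bool:
    """Return True if a certificate name like '*.example.com' covers hostname.

    Per RFC 6125, the '*' matches exactly one DNS label.
    """
    if not pattern.startswith("*."):
        return pattern.lower() == hostname.lower()
    label, _, rest = hostname.partition(".")
    return bool(label) and rest.lower() == pattern[2:].lower()

print(wildcard_matches("*.example.com", "app.example.com"))   # True
print(wildcard_matches("*.example.com", "a.b.example.com"))   # False: two labels
print(wildcard_matches("*.example.com", "example.com"))       # False: apex
```

Note that the wildcard does not cover the apex domain itself, which is one reason the apex redirect feature below exists.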

Proxy can also optionally be the front-end for multiple apex redirects. This redirects all HTTP and HTTPS requests from each apex domain to its www equivalent. This is useful if you use Proxy on the same domain as a Heroku app, because Route 53, which Proxy requires, does not support ALIAS records to non-AWS infrastructure.
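The apex redirect behavior can be sketched like this. The domain names and the mapping format are placeholders for illustration, not Proxy's actual configuration:

```python
# Illustrative sketch of the apex -> www redirect. The domains and the
# APEX_REDIRECTS mapping are placeholders, not Proxy's actual config.
from typing import Optional
from urllib.parse import urlsplit, urlunsplit

APEX_REDIRECTS = {"example.com": "www.example.com"}  # apex -> www target

def redirect_target(url: str) -> Optional[str]:
    """Return the HTTPS www URL an apex request should redirect to, or None."""
    parts = urlsplit(url)
    target = APEX_REDIRECTS.get(parts.hostname or "")
    if target is None:
        return None
    # Both HTTP and HTTPS requests land on HTTPS at the www host.
    return urlunsplit(("https", target, parts.path, parts.query, parts.fragment))

print(redirect_target("http://example.com/about?x=1"))
# → https://www.example.com/about?x=1
```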


Features

  • Nightly unattended security updates with zero downtime
  • Easy deploys with near-zero downtime
  • Dynamic updating of host list from an external HTTPS endpoint
  • Secure secret storage with easy secret updating
  • Logs to a syslog server (e.g. Papertrail)
  • Redirect apex domains to www domains

Example config.production.yml

  elb_name: proxy-elb
  region: us-east-1
  ami: ami-06b5810be11add0e2 # Ubuntu 14.04.5 for us-east-1
  instance_type: m3.medium
  instance_count: 2
  key_name: Zach
  security_group: proxy # Used for instances. Should have ports 22 and 443 open.
  PROXY_ENV: production

  # Optional

  # Apex redirects (apex -> www)

  # Remote logging over TLS. All three variables must be set.

How Proxy works

The infrastructure that Proxy runs on consists of a Classic Load Balancer and a configurable number of EC2 instances (we use 2).

Proxy itself is a few pieces of software:

  • A command line tool (bin/proxy) that knows how to boot and configure new EC2 instances, register them with the load balancer, and terminate old ones.
  • Nginx listening on port 443. Requests with X-Forwarded-Proto set to http are redirected to HTTPS, and HTTPS requests are reverse-proxied to the configured hosts.
  • A backend (backend/bin/proxy-backend) that is responsible for loading subdomain -> host mappings from $PROXY_DOMAINS_ENDPOINT, and reloading nginx. By default, this happens every 15 seconds.
  • Provisioning scripts (backend/bin/setup, backend/bin/proxy-install) that are responsible for configuring new EC2 instances.
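The backend's polling cycle can be sketched as follows. The JSON response format, the nginx block layout, and the function names are assumptions for illustration, not proxy-backend's actual code:

```python
# Simplified sketch of one proxy-backend cycle: fetch the subdomain ->
# host mapping, render nginx server blocks, then reload nginx. The JSON
# response format and function names are assumptions.
import json
import urllib.request

def fetch_domains(endpoint: str) -> dict:
    """Fetch {subdomain: upstream_host} mappings from $PROXY_DOMAINS_ENDPOINT."""
    with urllib.request.urlopen(endpoint) as resp:
        return json.load(resp)

def render_nginx_conf(domains: dict) -> str:
    """Render one (greatly simplified) nginx server block per subdomain."""
    blocks = []
    for subdomain, host in sorted(domains.items()):
        blocks.append(
            "server {\n"
            "    listen 443 ssl;\n"
            f"    server_name {subdomain};\n"
            f"    location / {{ proxy_pass {host}; }}\n"
            "}\n"
        )
    return "\n".join(blocks)
```

The real backend would then write the rendered config to disk, run nginx -s reload, and sleep for the poll interval (15 seconds by default) before repeating.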

The ELB sits in front of the two instances, which are deployed in separate availability zones for redundancy. The ELB loads /healthcheck on each instance to make sure that the instances are running.

The ELB has two listeners: HTTP and HTTPS. Both listeners forward to HTTPS on the instances, using Backend Authentication, which consists of a self-signed certificate and associated private key, generated during the deploy process. The public key gets installed on the ELB during deploy, and the ELB only passes traffic to instances that present a certificate with the same public key.

The instances use Upstart to make sure the proxy-backend daemon is always running. Proxy-backend logs to syslog. You can set the optional PROXY_SYSLOG_DRAIN config option to the URL for a remote syslog server, which can collect the logs from all running instances.

The deploy process

The code for Proxy's deploy process is located in lib/proxy/deploy.rb. This is a summary of the process:

  • Generates a self-signed certificate (newly generated each deploy) and dhparams (only generated if necessary)
  • Cleans up any instances left over from a failed, half-finished deploy
  • Adds the public key from the self-signed certificate to the ELB's list of trusted public keys
  • Boots new instances
  • Uploads a tar file consisting of everything in git ls-files, plus the production config and certificate files, to each instance, extracts the tar on the server, and runs backend/bin/setup and backend/bin/proxy-install production
  • Registers the new instances with the ELB and waits until they are in service
  • Terminates the old instances
  • Removes the old trusted public key from the ELB
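The sequence above can be modeled as a toy end-to-end run. Everything here is a placeholder that only tracks the bookkeeping (trusted keys, instance registration); the real work lives in lib/proxy/deploy.rb:

```python
# Toy model of the deploy sequence. All names are placeholders; the real
# logic (certs, tarballs, EC2/ELB API calls) lives in lib/proxy/deploy.rb.
import uuid

class Instance:
    def __init__(self, deploy_id):
        self.deploy_id = deploy_id   # instances are tagged with the deploy UUID
        self.running = True

def deploy(elb, old_instances, instance_count=2):
    deploy_id = str(uuid.uuid4())        # the real tool writes this to .deploy
    new_key = f"pubkey-{deploy_id}"      # stands in for the fresh self-signed cert
    elb["trusted_keys"].append(new_key)  # trust the new public key
    new = [Instance(deploy_id) for _ in range(instance_count)]
    # (upload the tarball, run backend/bin/setup and proxy-install here)
    elb["instances"] = new               # register the new instances
    for inst in old_instances:           # terminate the old ones
        inst.running = False
    elb["trusted_keys"] = [new_key]      # drop the old trusted key
    return new
```

The key property this models is that there is always at least one set of in-service instances registered with the ELB, which is why deploys have near-zero downtime.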

What gets deployed

While bin/proxy uses git ls-files to decide which files get uploaded to the instances, it uses the contents of your working directory, not the contents of the git repository. This means that any local modifications you have will get deployed. This is useful for testing changes to Proxy, but may bite you if you're not careful.

Broken deploys

Each time you run bin/proxy deploy, a random UUID is generated and written to .deploy. This file is removed once the deploy is complete. Each instance that gets deployed is tagged with this UUID.

If bin/proxy sees a .deploy file when it is run, it assumes there was a broken deploy and cleans up by terminating all instances tagged with that UUID.
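The .deploy marker protocol can be sketched as follows; terminate_tagged stands in for the EC2 call that terminates instances carrying a given tag:

```python
# Sketch of the .deploy marker protocol described above. terminate_tagged
# is a placeholder for the EC2 API call that kills instances by tag.
import os
import uuid

DEPLOY_MARKER = ".deploy"

def begin_deploy() -> str:
    """Write a fresh deploy UUID to .deploy and return it."""
    deploy_id = str(uuid.uuid4())
    with open(DEPLOY_MARKER, "w") as f:
        f.write(deploy_id)
    return deploy_id

def clean_up_broken_deploy(terminate_tagged) -> None:
    """If .deploy exists, a previous deploy died partway; clean it up."""
    if os.path.exists(DEPLOY_MARKER):
        with open(DEPLOY_MARKER) as f:
            deploy_id = f.read().strip()
        terminate_tagged(deploy_id)  # terminate instances tagged with that UUID
        os.remove(DEPLOY_MARKER)

def finish_deploy() -> None:
    """Remove the marker once the deploy completes successfully."""
    os.remove(DEPLOY_MARKER)
```

Because the marker is removed only on success, a crash at any point leaves it behind, and the next run knows exactly which instances to terminate.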

Instances are also tagged with the name "proxy-web" so you can easily see which instances are a part of Proxy.

How to deploy Proxy

If this is your first time deploying proxy, you'll have to create a Classic Load Balancer manually. It should be configured to forward both HTTP and HTTPS connections to HTTPS on its instances.

Make sure your AWS credentials are in ~/.aws/credentials. You can configure this with the aws cli: aws configure

Next, create a config.production.yml file in Proxy's root directory (see above for an example).

Then run bin/proxy deploy.


TODO

  • TLS session resumption
  • WebSocket support
  • Config stored in the cloud to support multiple people deploying
    • In event of broken conf, upload to S3 and include link in error msg


Copyright Recurse Center 2017


AGPLv3 or later