A Squid3 caching proxy with SSL enabled in a Docker container, based on https://github.com/toffer/docker-squid3-ssl and meant to be used from within other Docker containers.
- Ubuntu 14.04 LTS (Trusty).
- Squid 3.3.8, built from source with --enable-ssl.
- Automatically generates a self-signed certificate.
- Configured to cache Docker images (the default Squid3 config doesn't handle Docker images very well).
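For reference, "handling Docker images" mostly comes down to letting large layer blobs into the cache and keeping them around. A config along these lines illustrates the idea; the directives below are a sketch, not necessarily the exact rules shipped in this image:

```
# Raise the object size limit so large image layers are cacheable
# (Squid's default maximum_object_size is only 4 MB)
maximum_object_size 1024 MB

# 10 GB on-disk cache (path and size here are illustrative)
cache_dir ufs /var/spool/squid3 10000 16 256

# Cache registry layer downloads aggressively even without cache headers;
# this URL pattern is an assumption, not taken from this image's config
refresh_pattern -i /docker/registry/ 129600 100% 129600 override-expire
```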
Start Squid3 setting its hostname and container name:
$ docker run -d -h proxy.docker.dev --name squid3 fgrehm/squid3-ssl:v20140809
Start another container, linking it to the proxy container and setting the http_proxy env var to point at the linked container:
$ docker run -ti --rm \
--link squid3:proxy.docker.dev \
-e http_proxy="http://proxy.docker.dev:3128" \
ubuntu:trusty bash
You can check that things are working by hitting the same URL twice from within a linked container and checking the X-Cache header that Squid sets in the responses. The second response should show a cache hit:
$ curl -s -i http://httpbin.org/ip | grep 'X-Cache:'
X-Cache: MISS from proxy
$ curl -s -i http://httpbin.org/ip | grep 'X-Cache:'
X-Cache: HIT from proxy
To use the proxy for HTTPS requests, the linked container needs to trust the self-signed certificate generated by the Squid3 server. To set up and trust the CA certificate on an Ubuntu container, try these steps:
# Start Squid3 setting its hostname and container name:
$ docker run -d -h proxy.docker.dev --name squid3 fgrehm/squid3-ssl:v20140809
# Save the certificate into a file
$ docker logs squid3 | sed -n '/BEGIN/,/END/p' > proxy.docker.dev.crt
# Start a new container doing the appropriate setup
$ docker run -ti --rm \
--link squid3:proxy.docker.dev \
-v `pwd`/proxy.docker.dev.crt:/usr/share/ca-certificates/proxy.docker.dev.crt \
-e http_proxy="http://proxy.docker.dev:3128" \
-e https_proxy="http://proxy.docker.dev:3128" \
ubuntu:trusty bash
# From within the container, trust the certificate
$ apt-get update && apt-get install -y ca-certificates curl
$ echo 'proxy.docker.dev.crt' >> /etc/ca-certificates.conf
$ /usr/sbin/update-ca-certificates
If your container gives you a way to "inject" scripts into its init process / entrypoint, you can automate that setup. For example, if you use phusion/baseimage you can create an executable script like the one below:
#!/bin/bash
set -e
# Install ca-certificates if needed
if ! [ -x /usr/sbin/update-ca-certificates ]; then
apt-get update && apt-get install -y ca-certificates
fi
# Trust our certificate
if ! grep -q 'proxy.docker.dev.crt' /etc/ca-certificates.conf; then
echo 'proxy.docker.dev.crt' >> /etc/ca-certificates.conf
/usr/sbin/update-ca-certificates
fi
And run your containers like:
$ docker run -ti --rm \
--link squid3:proxy.docker.dev \
-v `pwd`/proxy.docker.dev.crt:/usr/share/ca-certificates/proxy.docker.dev.crt \
-v `pwd`/proxy-script.sh:/etc/my_init.d/proxy-script.sh \
-e http_proxy="http://proxy.docker.dev:3128" \
-e https_proxy="http://proxy.docker.dev:3128" \
phusion/baseimage /sbin/my_init -- bash -l
This ensures that every time the container is brought up it trusts the proxy certificate, so you don't need to do that by hand.
As with regular HTTP requests, you can check that things are working by hitting the same HTTPS URL twice from within a linked container (after trusting the certificate) and checking the X-Cache header that Squid sets in the responses. The second response should show a cache hit:
$ curl -s -i https://httpbin.org/ip | grep 'X-Cache:'
X-Cache: MISS from proxy
$ curl -s -i https://httpbin.org/ip | grep 'X-Cache:'
X-Cache: HIT from proxy
I manually built the packages using a separate Dockerfile, created a GitHub release with a tarball of the resulting debs, and set up an automated build on Docker Hub.