NOTE: This used to be a gist that continually expanded. It's now a github project because it's considerably easier for other people to edit, fix and expand on Docker using Github. Just click README.md, and then on the "writing pen" icon on the right to edit.
- Why
- I just want a dev environment
- Prerequisites
- Installation
- Containers
- Images
- Registry and Repository
- Dockerfile
- Layers
- Links
- Volumes
- Exposing Ports
- Tips
Why Should I Care (For Developers)
"Docker interests me because it allows simple environment isolation and repeatability. I can create a run-time environment once, package it up, then run it again on any other machine. Furthermore, everything that runs in that environment is isolated from the underlying host (much like a virtual machine). And best of all, everything is fast and simple."
- A Docker Dev Environment in 24 Hours!
- Building a Development Environment With Docker
- Discourse in a Docker Container
You may also like to try the following tools (and add more details here after you try them):
Use Homebrew.
ruby -e "$(curl -fsSL https://raw.github.com/mxcl/homebrew/go)"
This is all MacOS specific.
Install VirtualBox and Vagrant using Brew Cask.
brew tap caskroom/homebrew-cask
brew install brew-cask
brew cask install virtualbox
brew cask install vagrant
I personally don't use boot2docker because I already know how to use Vagrant, and I don't like how boot2docker doesn't give me control over my Vagrant instances (especially the lack of port forwarding). So this is the real way to do it.
We use the open Vagrant boxes provided by Phusion, which have better default settings:
vagrant init phusion/ubuntu-14.04-amd64
vagrant up
vagrant ssh
Once you're in the Vagrant instance, install Docker like any other package:
sudo apt-get update
sudo apt-get install -qy software-properties-common # needed for add-apt-repository etc
sudo apt-get install -qy docker.io
sudo ln -sf /usr/bin/docker.io /usr/local/bin/docker
Then start up a container:
sudo docker run -i -t ubuntu /bin/bash
That's it, you have a running Docker container. Also note that Vagrant 1.6 supports Docker as a built-in provisioner, which can help when configuring images.
I use Oh My Zsh with the Docker plugin for autocompletion of docker commands. YMMV.
Your basic isolated Docker process. Containers are to Virtual Machines as threads are to processes. Or you can think of them as chroots on steroids.
Some common misconceptions it's worth correcting:
- Containers are not transient. `docker run` doesn't do what you think.
- Containers are not limited to running a single command or process. You can use supervisord or runit (see the sketch after this list).
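A minimal sketch of the supervisord approach, assuming an Ubuntu base image and purely illustrative services (sshd and nginx); none of the names below come from the cheat sheet itself:

```
# hypothetical Dockerfile, written via a heredoc for convenience
cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
RUN apt-get update && apt-get install -qy supervisor openssh-server nginx
RUN mkdir -p /var/run/sshd /var/log/supervisor
ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord", "-n"]
EOF

# supervisord keeps both processes running in the foreground
cat > supervisord.conf <<'EOF'
[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D

[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
EOF

docker build -t multi-process .
docker run -d --name multi multi-process
```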
- `docker run` creates a container.
- `docker stop` stops it.
- `docker start` will start it again.
- `docker restart` restarts a container.
- `docker rm` deletes a container.
- `docker kill` sends a SIGKILL to a container. Has issues.
- `docker attach` will connect to a running container.
- `docker wait` blocks until the container stops.
If you want to run and then interact with a container, `docker start` then `docker attach` to get in (or, as of 0.9, nsenter).
If you want a transient container, `docker run -rm` will remove the container after it stops.
If you want to poke around in an image, `docker run -t -i <myimage> <myshell>` to open a tty.
If you want to map a directory on the host to a docker container, `docker run -v $HOSTDIR:$DOCKERDIR` (also see the Volumes section).
If you want to integrate a container with a host process manager, start the daemon with `-r=false` then use `docker start -a`.
If you want to expose container ports through the host, see the Exposing Ports section.
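A rough end-to-end sketch of that lifecycle (the container name sleeper is just an example):

```
docker run -d --name sleeper ubuntu sleep 1000   # create and start a detached container
docker ps                                        # it shows up as running
docker stop sleeper                              # stop it
docker start sleeper                             # start the same container again
docker stop sleeper && docker rm sleeper         # stop it for good and delete it
```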
- `docker ps` shows running containers.
- `docker inspect` looks at all the info on a container (including the IP address).
- `docker logs` gets logs from a container.
- `docker events` gets events from a container.
- `docker port` shows the public-facing port of a container.
- `docker top` shows running processes in a container.
- `docker diff` shows changed files in the container's filesystem.

`docker ps -a` shows running and stopped containers.
There doesn't seem to be a way to use docker directly to import files into a container's filesystem. The closest thing is to mount a host file or directory as a data volume and copy it from inside the container.
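A sketch of that workaround, with made-up paths: mount a host directory as a volume, then copy out of it from inside the container:

```
docker run -v /tmp/import:/import -t -i ubuntu /bin/bash
# then, inside the container:
cp /import/myfile.conf /etc/myfile.conf
```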
- `docker cp` copies files or folders out of a container's filesystem.
- `docker export` turns a container filesystem into a tarball.
The "official" way to enter a docker container while it's running is to use nsenter, which uses libcontainer under the hood. Using an sshd daemon is considered evil.
Unfortunately, nsenter requires some configuration and installation. If your operating system does not include nsenter (usually in a package named util-linux or similar, although it has to be quite a recent version), the easiest way is probably to install it through docker, as described in the first of the following links:
- Installing nsenter using docker
- How to enter a Docker container
- Docker debug with nsenter on boot2docker
nsenter allows you to run any command (e.g. a shell) inside a container that's already running another command (e.g. your database or webserver). This allows you to see all mounted volumes, check on processes, log files etc. inside a running container.
The first installation method described above also installs a small wrapper script named docker-enter, which makes executing a shell inside a running container as easy as `docker-enter CONTAINER`, and any other command via `docker-enter CONTAINER COMMAND`.
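For example, assuming docker-enter is installed and a container named CONTAINER is running:

```
docker-enter CONTAINER            # opens a shell inside the running container
docker-enter CONTAINER ps aux     # runs a single command inside it instead
```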
Images are just templates for docker containers.
- `docker images` shows all images.
- `docker import` creates an image from a tarball.
- `docker build` creates an image from a Dockerfile.
- `docker commit` creates an image from a container.
- `docker rmi` removes an image.
- `docker insert` inserts a file from a URL into an image. (Kind of odd; you'd think images would be immutable after creation.)
- `docker load` loads an image from a tar archive as STDIN, including images and tags (as of 0.7).
- `docker save` saves an image to a tar archive stream to STDOUT with all parent layers, tags & versions (as of 0.7).
`docker import` and `docker commit` only set up the filesystem, not Dockerfile info like `CMD`, `ENTRYPOINT` or `EXPOSE`. See bug.
- `docker history` shows the history of an image.
- `docker tag` tags an image to a name (local or registry).
A repository is a hosted collection of tagged images that together create the file system for a container.
A registry is a host -- a server that stores repositories and provides an HTTP API for managing the uploading and downloading of repositories.
Docker.io hosts its own index to a central registry which contains a large number of repositories.
- `docker login` to login to a registry.
- `docker search` searches the registry for an image.
- `docker pull` pulls an image from the registry to the local machine.
- `docker push` pushes an image to the registry from the local machine.
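Put together, a pull/tag/push round trip looks roughly like this (the myuser account is hypothetical):

```
docker pull ubuntu                  # pull the ubuntu repository from the central registry
docker tag ubuntu myuser/ubuntu     # tag the image under your own repository name
docker login                        # authenticate against the registry
docker push myuser/ubuntu           # push the tagged image back up
```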
The configuration file. Sets up a Docker container when you run `docker build` on it. Vastly preferable to `docker commit`.
Best to look at http://github.com/wsargent/docker-devenv and the best practices / take 2 for more details.
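As a purely illustrative minimal example (not taken from the repos above), a Dockerfile and its build might look like this:

```
# hypothetical Dockerfile, written via a heredoc for convenience
cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
RUN apt-get update && apt-get install -qy curl
CMD ["echo", "hello from my image"]
EOF

docker build -t myuser/hello .   # build an image from the Dockerfile in the current directory
docker run myuser/hello          # prints "hello from my image"
```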
If you use jEdit, I've put up a syntax highlighting module for Dockerfile you can use.
The versioned filesystem in Docker is based on layers. They're like git commits or changesets for filesystems.
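For instance, each instruction in a Dockerfile adds a layer, and you can list an image's layers with `docker history`:

```
docker history ubuntu     # one line per layer: id, creating instruction, size
docker diff CONTAINER     # files added/changed/deleted relative to the container's image layers
```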
Links are how Docker containers talk to each other through TCP/IP ports. Linking into Redis and Atlassian show worked examples. You can also (in 0.11) resolve links by hostname.
NOTE: If you want containers to ONLY communicate with each other through links, start the docker daemon with `-icc=false` to disable inter-container communication.
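For example (the defaults file and service name below depend on how you installed Docker, so treat this as a sketch):

```
# one-off: run the daemon by hand with inter-container communication disabled
sudo docker -d -icc=false
# or persistently, via the Ubuntu package's defaults file
echo 'DOCKER_OPTS="-icc=false"' | sudo tee -a /etc/default/docker
sudo service docker.io restart   # the service may be called docker or docker.io
```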
If you have a container with the name CONTAINER (specified by `docker run --name CONTAINER`) and, in its Dockerfile, it has an exposed port:
EXPOSE 1337
Then if we create another container called LINKED like so:
docker run -d --link CONTAINER:ALIAS --name LINKED user/wordpress
Then the exposed ports and aliases of CONTAINER will show up in LINKED with the following environment variables:
$ALIAS_PORT_1337_TCP_PORT
$ALIAS_PORT_1337_TCP_ADDR
And you can connect to it that way.
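For example, from a shell inside LINKED (using nc purely as an illustrative client; it may not be installed in your image):

```
# the link injects these variables into LINKED's environment
echo "connecting to $ALIAS_PORT_1337_TCP_ADDR:$ALIAS_PORT_1337_TCP_PORT"
nc $ALIAS_PORT_1337_TCP_ADDR $ALIAS_PORT_1337_TCP_PORT
```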
To delete links, use `docker rm --link`.
Docker volumes are free-floating filesystems. They don't have to be connected to a particular container.
Volumes are useful in situations where you can't use links (which are TCP/IP only). For instance, if you need to have two docker instances communicate by leaving stuff on the filesystem.
You can mount them in several docker containers at once, using `docker run --volumes-from`.
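A rough sketch of sharing one volume between several containers (names and paths here are illustrative):

```
docker run --name datastore -v /shared ubuntu true                        # container owning a /shared volume
docker run -t -i --volumes-from datastore --name app1 ubuntu /bin/bash    # sees /shared
docker run -t -i --volumes-from datastore --name app2 ubuntu /bin/bash    # also sees /shared
```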
See advanced volumes for more details.
Exposing ports through the host container is fiddly but doable.
First expose the port in your Dockerfile:
EXPOSE <CONTAINERPORT>
Then map the container port to the host port (only using localhost interface):
docker run -p 127.0.0.1:$HOSTPORT:$CONTAINERPORT --name CONTAINER -t someimage
If you're running Docker in VirtualBox, you then need to forward the port there as well. It can be useful to define something in your Vagrantfile to expose a range of ports so that you can dynamically map them:
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
...
(49000..49900).each do |port|
config.vm.network :forwarded_port, :host => port, :guest => port
end
...
end
If you forget what you mapped the port to on the host container, use `docker port` to show it:
docker port CONTAINER $CONTAINERPORT
Sources:
Get the id of the last-run container:
alias dl='docker ps -l -q'
docker run ubuntu echo hello world
Commit that container as an image called helloworld:
docker commit `dl` helloworld
Commit with a run command baked into the image:
docker commit -run='{"Cmd":["postgres", "-too -many -opts"]}' `dl` postgres
Get a container's IP address:
docker inspect `dl` | grep IPAddress | cut -d '"' -f 4
or, building jq from source to parse docker inspect's JSON output:
wget http://stedolan.github.io/jq/download/source/jq-1.3.tar.gz
tar xzvf jq-1.3.tar.gz
cd jq-1.3
./configure && make && sudo make install
docker inspect `dl` | jq -r '.[0].NetworkSettings.IPAddress'
or (this is unverified)
docker inspect -f '{{ .NetworkSettings.IPAddress }}' <container_name>
Get a fresh container's environment settings:
docker run -rm ubuntu env
Delete containers that stopped weeks ago:
docker ps -a | grep 'weeks ago' | awk '{print $1}' | xargs docker rm
Delete all containers:
docker rm `docker ps -a -q`
Show image dependencies as a graph (needs graphviz):
docker images -viz | dot -Tpng -o docker.png