Containerization: the use of Linux containers to deploy applications. Containers are not new, but their use for easily deploying applications is.
Image: an executable package that includes everything needed to run an application – the code, a runtime, libraries, environment variables, and configuration files. An image is a template (snapshot) from which containers are created, and is defined using a Dockerfile.
Container: a runtime instance of an image – what the image becomes in memory when executed (that is, an image with state, or a user process).
Dockerfile: defines what goes on in the environment inside your container. Access to resources like networking interfaces and disk drives is virtualized inside this environment, which is isolated from the rest of your system, so you need to map ports to the outside world, and be specific about what files you want to “copy in” to that environment. However, after doing that, you can expect that the build of your app defined in this Dockerfile behaves exactly the same wherever it runs.
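As a sketch, a minimal Dockerfile for a PHP app served by Apache could look like this. The php:apache base image is official; the src/ path is illustrative:

```dockerfile
# Start from the official PHP + Apache image.
FROM php:apache
# Copy the app's source into Apache's document root inside the container.
COPY src/ /var/www/html/
# Document the port the container listens on (mapped to the host with -p).
EXPOSE 80
```

The base image already defines the command to run (Apache in the foreground), so this Dockerfile only needs to copy files in.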
Repository: a collection of images – sort of like a GitHub repository, except the code is already built.
Registry: a collection of repositories.
Service: really just “containers in production” – it defines how containers behave there. A service runs only one image, but it codifies the way that image runs: what ports it should use, how many replicas of the container should run so the service has the capacity it needs, and so on.
Compose: a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
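A sketch of such a YAML file, assuming a two-service app (the service names, paths, and ports are illustrative):

```yaml
version: "3"
services:
  product-service:
    build: ./product          # built from a Dockerfile in ./product
    ports:
      - "5001:80"             # host port 5001 -> container port 80
  website:
    image: php:apache         # pulled from Docker Hub, no build step
    volumes:
      - ./website:/var/www/html
    ports:
      - "5000:80"
```

With this file in place, `docker-compose up` builds, pulls, and starts both services together.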
Docker is a tool for running applications in an isolated environment
- Same environment everywhere
- Sandbox projects
- It just works – makes it easy to use any project
…without the overhead of a virtual machine
- Write Dockerfile
- Build an image from the Dockerfile
- Run the image to get containers
Docker images hub: https://hub.docker.com/
Official PHP images: https://hub.docker.com/_/php/
```sh
cd hello

# Build the image, giving it a name (and optionally a tag, name:tag).
docker build -t hello .

# Run the image, forwarding port 80 from the host to port 80 in the container.
docker run -p 80:80 hello

# Go to the page, show the container's output.
open http://localhost/
```
- Persist / share data between containers
- Share data between the host and container (mount local dir as volume)
```sh
# Same container, but this time mounting a volume – the src dir to the
# container's /var/www/html.
# Note: needs the full path, not relative.
docker run -p 80:80 -v "$(pwd)/src:/var/www/html" hello
```
Stopping a container: Ctrl + C …or whenever the container's main process stops. Don't run the main process in the background, or the container will exit immediately.
An image is a template for the environment you want to run.
Run an image -> get a container.
Docker Compose: lets us define all our services in a configuration file and spin up all the containers with one command.
docker-compose.yml: the stuff we would have specified in the docker run command.
```sh
# Build and run all the containers defined in the docker-compose.yml file.
docker-compose up

# Product service.
open http://localhost:5001/
# Website.
open http://localhost:5000/
```
Docker Compose creates a virtual network for all of the containers, where each container's hostname matches its service name.
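A sketch of what that buys us, with hypothetical service names – one service can reach another by its service name, not localhost:

```yaml
services:
  product-service:
    build: ./product
  website:
    build: ./website
    environment:
      # On the Compose network, "product-service" resolves to the other
      # container, so the internal URL uses the service name as hostname.
      PRODUCT_SERVICE_URL: http://product-service/
```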
```sh
# Detached mode.
docker-compose up -d

# See running containers.
docker ps
# CONTAINER ID  IMAGE                          COMMAND                 CREATED             STATUS         PORTS                 NAMES
# 61d7f20c403e  php:apache                     "docker-php-entrypoi…"  About a minute ago  Up 14 seconds  0.0.0.0:5000->80/tcp  dockercompose_website_1
# 2788f427d122  dockercompose_product-service  "python api.py"         44 minutes ago      Up 15 seconds  0.0.0.0:5001->80/tcp  dockercompose_product-service_1

# Stop detached containers.
docker-compose stop
```
Part 1, orientation
```sh
# Execute the "hello-world" Docker image.
docker run hello-world
# To generate this message, Docker took the following steps:
#  1. The Docker client contacted the Docker daemon.
#  2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
#     (amd64)
#  3. The Docker daemon created a new container from that image which runs the
#     executable that produces the output you are currently reading.
#  4. The Docker daemon streamed that output to the Docker client, which sent it
#     to your terminal.

# List Docker containers (running, all, all in quiet mode).
docker container ls
docker container ls --all
docker container ls -aq
```
```sh
cd friendlyhello

# Build the image, giving it a name.
docker build -t friendlyhello .

# List the images in the local registry.
docker image ls
# REPOSITORY      TAG     IMAGE ID      CREATED        SIZE
# friendlyhello   latest  19e1cfe21ac5  6 seconds ago  150MB

# Run the app, mapping your machine's port 4000 to the container's
# published port 80 using -p.
docker run -p 4000:80 friendlyhello

# Run the app in the background (detached mode).
docker run -d -p 4000:80 friendlyhello

# Stop the container (using the right id from docker container ls).
docker container stop 1fa4ab2cf395
```
Share your image
```sh
# Log in to the Docker public registry on your local machine.
docker login

# Associate a local image with a repository on a registry: here, the
# friendlyhello image with the thibaudcolas/get-started repository,
# currently as tag part2.
docker tag friendlyhello thibaudcolas/get-started:part2

# Upload the tagged image to the repository.
docker push thibaudcolas/get-started:part2

# Now runnable from anywhere 🌈
docker run -p 4000:80 thibaudcolas/get-started:part2
```
Scaling a service changes the number of container instances running that piece of software, assigning more computing resources to the service in the process.
```sh
cd getstartedlab

docker swarm init
# Swarm initialized: current node () is now a manager.

# Deploy the stack. Our single service stack runs 5 container instances
# of our deployed image on one host. Let's investigate.
docker stack deploy -c docker-compose.yml getstartedlab

# Get the service ID for the one service in our application.
docker service ls

# List the tasks for your service.
docker service ps getstartedlab_web

# Take down the app and the swarm.
docker stack rm getstartedlab
docker swarm leave --force
```
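The replica count lives in the compose file, under the deploy key – honored by docker stack deploy, ignored by plain docker-compose up. A sketch, assuming the image pushed earlier (the resource limits are illustrative):

```yaml
version: "3"
services:
  web:
    image: thibaudcolas/get-started:part2
    deploy:
      replicas: 5            # run 5 container instances of this image
      resources:
        limits:
          cpus: "0.1"        # cap each replica at 10% of a CPU core
          memory: 50M
    ports:
      - "4000:80"
```

Scaling is then just editing `replicas` and re-running `docker stack deploy` – no need to tear the stack down first.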
TODO, maybe later
Overview of Docker Compose
- Define your app’s environment with a Dockerfile so it can be reproduced anywhere.
- Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
- Run docker-compose up and Compose starts and runs your entire app.
- Start, stop, and rebuild services
- View the status of running services
- Stream the log output of running services
- Run a one-off command on a service
- Multiple isolated environments on a single host
- Use a project name to isolate environments from each other (multiple copies of the same app, apps that share service names, etc.).
- By default, project name is basename of project directory.
- Preserve volume data when containers are created
- Only recreate containers that have changed
- Variables and moving a composition between environments
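One mechanism behind that last point is variable substitution: Compose expands ${VAR} references from the shell environment or an .env file. A sketch with illustrative names:

```yaml
services:
  web:
    # TAG comes from the shell environment or an .env file, so the same
    # compose file can deploy a different image version per environment:
    #   TAG=v1.5 docker-compose up
    image: myapp/web:${TAG}
```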
```sh
cd composetest

# Compose pulls a Redis image, builds an image for your code, and starts the
# services you defined. In this case, the code is statically copied into the
# image at build time.
docker-compose up
open http://0.0.0.0:5000/

# Run one-off commands with docker-compose run.
docker-compose run web env

# Stop everything, removing the containers entirely, with their data.
docker-compose down --volumes
```
```sh
# Create the Django project, in Docker.
docker-compose run web django-admin.py startproject djangocompose .

# Start all the things!
docker-compose up
```