# Docker 201

# Objective

This learning lab explores advanced Docker features for containers, like networking, storage, service stack creation, and native orchestration. We recommend you go through the Docker-101 lab first, so you can build on that foundational knowledge and expand your Docker expertise.

Completion time: 45 minutes

# For best results...
This is a hands-on lab, but each example builds on the previous one and explains *WHY* we are running the commands.
If you just skip from command to command, without reading the text in between, you will not learn much.

# Audience

* DevOps engineers
* Application developers
* IT teams addressing the developer need for Docker and Containers

# Content Notice

The content for this lab comes from multiple sources, mixed in what we think is a meaningful and fun way to learn about Docker.

# Recap
As we covered in the Docker-101 learning lab, Docker is the most popular tool focused on making it easy for developers to build, package, share and deploy their applications. After going through Docker-101 you now know what containers are and how Docker creates and runs them.

So let's get started with the hands-on work!

# Installing Docker

You may install Docker on your favorite platform (Linux, Mac, Windows) by downloading it from [docker.com](http://www.docker.com).

If you decide to use your own environment please remember to add your user to the *docker* group after completing the installation. That way you will not have to `sudo` every docker command.

```
sudo usermod -aG docker <your_user>
```
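
Note that group membership is evaluated at login, so the change may not be visible in your current shell. A quick check, plus a common workaround on most Linux distributions:

```
groups          # verify that "docker" now appears in the list
newgrp docker   # if not, start a new shell with the group applied immediately
```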

Alternatively, you may use an in-browser Docker playground by logging in at [play-with-docker.com](http://play-with-docker.com) with your Docker ID. This gives you access to a full VM running Docker, directly from your web browser, making demos really easy regardless of your device.

![Play With Docker Site](/posts/files/docker-201/assets/images/playwithdocker1.png)


# Basic container management

There are multiple ways to run containers, so let's start by exploring the most basic ones:

* You may run a container, instruct it to run a command once it is up, and then have it exit. Let's run an Ubuntu image and have it *echo* a message. This command will also download and store the required image if it has not been downloaded previously.

```
docker run ubuntu /bin/echo 'hello world'
```

* You can now run a container with the same Ubuntu image as before, but this time let's make it *interactive* (`-it`, for interactive with a terminal) so that you can type your own commands inside *ubuntu* (type `exit` to go back to your host).

```
docker run -it ubuntu /bin/bash
```

![Ubuntu interactive](/posts/files/docker-201/assets/images/ubuntu_interactive.png)

Use `cat /etc/issue` both in the host and in the container to see which Linux distribution each one runs.
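
For example (the exact host output will vary with your distribution):

```
cat /etc/issue                          # on the host: your own distribution
docker run --rm ubuntu cat /etc/issue   # one-shot container: Ubuntu's banner (--rm deletes the container afterwards)
```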

* You may also run a container with the same Ubuntu image as before, but this time in *detached* mode (`-d` option), so that it runs in the background without requiring any interaction from the terminal. Let's make it run a looping script and then follow its logs (Ctrl+C to stop following).

```
docker run --name script -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"
docker logs -f script
```

![Ubuntu logs](/posts/files/docker-201/assets/images/ubuntu_logs.png)

One of the commands you will use most often while working with Docker is the one that shows which containers are currently running:

```
docker ps
```

![Docker ps](/posts/files/docker-201/assets/images/docker_ps.png)

As you can see there is only one Docker container *running* (the one generating logs, from the previous step). The output shows its randomly assigned ID, the image it uses, the command it runs on start, when it was created, its current status, the ports it exposes, and the name we gave it.

But there are other containers that have already *exited*, and you can see both (*running* and *exited*) with the following command:

```
docker ps -a
```

![Docker ps -a](/posts/files/docker-201/assets/images/docker_ps_a.png)

These are the containers we ran in our previous steps: they executed their commands and exited. As you can see in the STATUS column, they are either *exited* or *up*. Also, the containers for which we did not specify a name (with the `--name` parameter) have been assigned random names by the system.

None of the previous containers had ports exposed because there was no need to interact with them externally. Let's now create a web server that serves *http* on TCP port 80. We need to run it in detached mode (`-d`) so that it runs in the background, and we need to map a port on our host to the port where the container serves *http* (`-p 80:80`). In this case we use TCP port 80 both for the host port and for the port the *nginx* container exposes.

```
docker run -d -p 80:80 --name webserver nginx
```

[*Nginx*](https://www.nginx.com) is a reverse proxy that also works as an *http* server. If this image has not been used before, Docker will download and store it locally.

![Nginx run](/posts/files/docker-201/assets/images/nginx_run.png)

As you can see there is no output at all, except for the download progress and a string of characters: the container ID. Being detached, the container keeps running in the background. Let's interact with it by running `curl` against our host (localhost). By default, if nothing else is specified, `curl` requests *http* on TCP port 80.

```
curl localhost
```

![Nginx curl](/posts/files/docker-201/assets/images/nginx_curl.png)

`curl` requests *http* from the server the same way your browser does, and shows you the raw *html* it returns.
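
Two optional experiments, sketched below: `curl -i` also prints the response headers your browser would receive, and the two numbers in `-p host:container` do not need to match (here we use a hypothetical second container, *webserver2*, published on host port 8080):

```
curl -i localhost                                  # same request, now including response headers
docker run -d -p 8080:80 --name webserver2 nginx   # host port 8080 mapped to container port 80
curl localhost:8080                                # reaches the second nginx instance
docker rm -f webserver2                            # remove the extra container when done
```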

Let's modify the default *html* page and check that subsequent *http* requests get the updated content.

```
docker exec -it webserver /bin/bash
apt-get -qq update && apt-get -qq install vim
cd /usr/share/nginx/html
vim index.html
```

![Nginx edit](/posts/files/docker-201/assets/images/nginx_edit.png)

Please edit your *index.html* file by changing its welcome message, under the *h1* headers.

![Nginx html](/posts/files/docker-201/assets/images/nginx_html.png)
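
As a side note, if you would rather not use an interactive editor, a one-line alternative run from the host achieves the same result (this assumes the stock nginx welcome page, whose heading reads *Welcome to nginx!*):

```
docker exec webserver sed -i 's|Welcome to nginx!|Welcome to my container!|' /usr/share/nginx/html/index.html
```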

Save the file, `exit` the container, and `curl` the *http* server again on the default TCP port 80.

```
curl localhost
```

![Nginx second curl](/posts/files/docker-201/assets/images/nginx_curl2.png)

This way you can verify that the *http* server in your container serves updated content as soon as you modify the relevant *html* pages.

Now it is time to do some maintenance on your system. First let's stop our *http* container with `docker stop webserver`. This command sends the container a SIGTERM signal and, if it has not stopped after a grace period, a SIGKILL.
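
A minimal sketch of the stop sequence, including the optional `-t` flag that changes the grace period:

```
docker stop webserver          # SIGTERM first, SIGKILL after the default 10-second grace period
# docker stop -t 30 webserver  # same, but allow up to 30 seconds for a clean shutdown
```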

You may now check that `docker ps` does not show it anymore.

However, that *webserver* container, and others we used during the lab, will still show up if you run `docker ps -a`.

![docker ps](/posts/files/docker-201/assets/images/docker_ps_a2.png)

Docker does not delete resources by default, so all those containers are now *exited*, waiting to be started again, or deleted. In our case we will delete them to clean our system.

You may delete an individual container, like the *webserver* one, using `docker rm webserver`. This command will delete the R/W layers of our container, preserving the R/O layers of the image.

After deleting the container, please check that it no longer shows up in the output of `docker ps -a`.

Alternatively, you could have stopped **AND** deleted your running *webserver* container in a single command with `docker rm -f webserver`.

You could also remove every *exited* container in your system with a single command: `docker rm $(docker ps -aq)`. This works by command substitution: `docker ps -aq` outputs the IDs of all containers in your system, and that list is passed to `docker rm` as its arguments. Note that `docker rm` refuses to delete containers that are still running, so only the *exited* ones are removed.
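
If you prefer to target only *exited* containers explicitly (leaving running ones untouched), `docker ps` supports a status filter:

```
docker rm $(docker ps -aq --filter status=exited)
```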

You may now clean the images stored locally in your system. First take a look at what images you have downloaded with `docker image ls`.

![docker image ls](/posts/files/docker-201/assets/images/docker_image_ls.png)

Delete one of them using `docker rmi ubuntu`. This command deletes the R/O layers that containers based on the Ubuntu image use. Next time you run a container that requires the Ubuntu image, Docker will have to download it again from a registry.

You can also download the image in advance with `docker pull ubuntu`.
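
Recent Docker releases (1.13 and later) also bundle convenience commands for this kind of cleanup; a quick sketch:

```
docker container prune   # remove all stopped containers
docker image prune       # remove dangling (untagged) images
docker image prune -a    # remove all images not used by any container
```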

#### Congratulations! Now you know how to interact with your containers in different ways, and how to manage your own Docker system.

#### Read on to learn different methods we can use to create Docker images for our own applications!
# Building your own images

You can build images from scratch, or base them on existing ones with your own modifications and additional software. And you can do this manually or with a Dockerfile. Let's explore both ways of doing it.

Before digging into the lab please set the following variables to your own values (your docker ID and the name of the app you want to use), so that later on you can easily refer to them:

```
export your_docker_id=<your docker ID>
export your_app_name=<your app name>
```

![Setting your variables](/posts/files/docker-201/assets/images/variables.png)


## Option 1: Manually

You can run an Ubuntu container in interactive mode, and add an application (*myapp.sh*) inside it.

```
docker run -it --name testapp ubuntu /bin/bash
echo -e "#! /bin/bash\necho It works!" > myapp.sh
chmod +x myapp.sh
exit
```

When you exit the container it transitions to *exited*, so it is stopped and can be seen with the `docker ps -a` command.
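
Before committing, you can inspect exactly what changed in the container's R/W layer with `docker diff`, which marks files as added (A), changed (C), or deleted (D):

```
docker diff testapp   # should list the /myapp.sh you just created
```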

You may now save this container's R/W layer as a new R/O image layer, creating a new image, with `docker commit testapp $your_docker_id/$your_app_name`.

![Docker image manually](/posts/files/docker-201/assets/images/docker_manual.png)

Now delete the *exited* container with `docker rm testapp`.

And run the app that you created inside the container:

```
docker run --name testapp $your_docker_id/$your_app_name /myapp.sh
```

![Docker manual run](/posts/files/docker-201/assets/images/docker_manual_run.png)

As you may have noticed, you had to specify the path and name of the application to execute when running the container. Let's make this easier by setting *myapp.sh* as the default command for that image.

```
docker commit --change='CMD ["/myapp.sh"]' testapp $your_docker_id/$your_app_name
```

Now that our image is ready, let's run it in a container without the parameters we used before: `docker run --rm $your_docker_id/$your_app_name`. The `--rm` parameter automatically deletes the container when it exits, so that we do not have to delete it manually afterwards.

![Docker manual run with no params](/posts/files/docker-201/assets/images/docker_manual_run2.png)

## Option 2: With a Dockerfile

As you may have already guessed, this manual method does not scale very well. So we can automate image creation with a special file called a *Dockerfile*.

Let's first create another *myapp.sh* in your local host, with the following content:

```
#! /bin/bash
echo "I am a cow!" | /usr/games/cowsay | /usr/games/lolcat -f
```

Assign permissions to execute it with `chmod +x myapp.sh`.

Now you need to create a file called *Dockerfile* in your local host, with the following content:

```
FROM ubuntu
RUN apt-get update && apt-get install -y cowsay lolcat && rm -rf /var/lib/apt/lists/*
COPY myapp.sh /myapp/myapp.sh
CMD ["/myapp/myapp.sh"]
```

Each instruction in this Dockerfile creates a new R/O layer in our image. As the file describes, our image is based on the standard Ubuntu image; it then installs the software *myapp.sh* requires, copies our *myapp.sh* from the host into the image, and finally sets the default command so that containers run from this image execute *myapp.sh*.

Now you can build your new image for this app, and run a new container for it, with:

```
docker build -t $your_docker_id/$your_app_name .
docker run --rm $your_docker_id/$your_app_name
```

![Dockerfile](/posts/files/docker-201/assets/images/dockerfile1.png)
![Dockerfile](/posts/files/docker-201/assets/images/dockerfile2.png)

You can now see the R/O layers created for each command in the Dockerfile:

```
docker history $your_docker_id/$your_app_name | head -n 4
```

![Docker layers](/posts/files/docker-201/assets/images/docker_history.png)
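
One practical consequence of this layering is build caching: if you change only *myapp.sh* and rebuild, Docker reuses the cached `FROM` and `RUN` layers and only re-executes the `COPY` and `CMD` steps. A quick sketch to see it in action:

```
echo 'echo "Moo again!" | /usr/games/cowsay' >> myapp.sh   # modify the app
docker build -t $your_docker_id/$your_app_name .           # look for "Using cache" on the earlier steps
```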

## Dockerhub

One of the great benefits of containers is that they can be distributed via registries, where images are conveniently stored for download. These registries host public and private repositories, depending on user needs.

You may create a free account at [Dockerhub](https://hub.docker.com), the default container registry for Docker, so you can upload your own images to your own repositories.

Once registered, please log in from your host with `docker login`. That way you will be able to *push* your image to Dockerhub. At that point you can delete your local image and run a container that pulls the remote image from Dockerhub.

```
docker push $your_docker_id/$your_app_name
docker rmi $your_docker_id/$your_app_name
docker run --rm $your_docker_id/$your_app_name
```

![Dockerhub](/posts/files/docker-201/assets/images/dockerhub.png)

(Note: for this last section I have used my own computer instead of the public play-with-docker site, because you need to log in to Dockerhub before pushing your image, and you would rather not provide your credentials in a public environment.)

![PlayWithDocker warning](/posts/files/docker-201/assets/images/pwd_warning.png)
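
As a final housekeeping tip, if you did log in on a shared machine, `docker logout` removes the stored credentials for the registry:

```
docker logout
```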

#### Congratulations! Now you know how to customize and build your own images, both manually and with a Dockerfile. You have also learned how to publish them to Dockerhub. Let's now explore how networking and storage work for containers!