Docker #20
https://birthday.play-with-docker.com/cli-formating Docker organizes commands into a `docker` command hierarchy. There is a `--format` flag: any raw string we type is simply output, and instructions to be parsed are included within `{{ }}`. Variables injected into our template are prefixed with a dot `.`.
We can add more complicated logic with the template. Let's take a look at the environment variable list for the last run container and print each element of the PATH variable on a separate line.
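A sketch of that template logic (using `docker container ls -lq` to grab the ID of the most recently created container):

```shell
# Print each environment variable of the last run container on its own line;
# {{range}} iterates over .Config.Env and {{println}} appends a newline to each.
docker container inspect \
  --format '{{range .Config.Env}}{{println .}}{{end}}' \
  "$(docker container ls -lq)"
```

To then split PATH itself on `:`, the output can be piped through the shell, e.g. `... | grep '^PATH=' | tr ':' '\n'`.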
|
Went through this resource along with the tutorial
|
Docker Context: allows you to connect to remote docker instances.
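For example (the context name and SSH address here are hypothetical):

```shell
# Create a context that points at a remote Docker daemon over SSH
docker context create remote-box --docker "host=ssh://user@203.0.113.10"

# Switch to it; subsequent docker commands now run against the remote daemon
docker context use remote-box

# List contexts; the active one is marked with an asterisk
docker context ls
```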
|
- `docker run app` — Do not automatically restart the container when it exits. This restart policy (`no`) is the default, so this command is equivalent to `docker run --restart=no app`.
- `docker run --restart=on-failure app` — Only restart the container if it exits with a non-zero code.
- `docker run --restart=on-failure:10 app` — Same as above, but limit the number of restart attempts to 10.
- `docker run --restart=unless-stopped app` — Always restart the container, regardless of the exit code. On Docker daemon startup, only start the container if it was already running before.
- `docker run --restart=always app` — Always restart the container, regardless of the exit code. The container will also start when the Docker daemon starts, even if it was in a stopped state before.
|
https://birthday.play-with-docker.com/jenkins-docker-hub Ran a Docker container for a .NET Core app. Created a token in Docker Hub.
|
https://birthday.play-with-docker.com/run-as-user
Q. What line in a Dockerfile changes the user Docker uses to run commands?
Q. What port restrictions exist when running commands as a non-root user?
Q. Where do named volumes get their file permissions?
Docker Hub has lots of popular images that are configured to run as root. This makes them very convenient for developers looking to get started quickly with the fewest complications. However, for security, it’s recommended to run our containers as a non-root user. This exercise will walk you through some of the steps needed to run containers without root.
```dockerfile
FROM nginx:1.16
# 5000 is the user ID, app is the user name
RUN useradd -u 5000 app
# Tell Docker to run subsequent commands as the new user
USER app:app
```
Now configuration and permission issues will appear, because by default nginx writes to root-owned paths. We need to update the nginx configuration of the container. Extract the config from the container:
mkdir -p conf
```shell
docker container run user/nginx:1.16-2 tar -cC /etc/nginx . | tar -xC conf
```
This creates a conf directory that contains nginx.conf and conf.d/default.conf. Now update the config file.
We will use /var/run/nginx/nginx.pid for the pid file, and /var/tmp/nginx/* for the other paths.

```nginx
# user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
# Add the below pid
pid /var/run/nginx/nginx.pid;
events {
    worker_connections 1024;
}
http {
    # add the below paths
    client_body_temp_path /var/tmp/nginx/client_body;
    fastcgi_temp_path /var/tmp/nginx/fastcgi_temp;
    proxy_temp_path /var/tmp/nginx/proxy_temp;
    scgi_temp_path /var/tmp/nginx/scgi_temp;
    uwsgi_temp_path /var/tmp/nginx/uwsgi_temp;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    include /etc/nginx/conf.d/*.conf;
}
```

Updated Dockerfile:

```dockerfile
FROM nginx:1.16
RUN useradd -u 5000 app \
 && mkdir -p /var/run/nginx /var/tmp/nginx \
 && chown -R app:app /usr/share/nginx /var/run/nginx /var/tmp/nginx
COPY conf/nginx.conf /etc/nginx/nginx.conf
USER app:app
```
Let's build the image with the updated configuration:
And now try to run that image:
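The two commands might look like this (the `user/nginx:1.16-2` tag is an assumption, matching the image used in the extraction step above):

```shell
docker build -t user/nginx:1.16-2 .
docker run user/nginx:1.16-2
```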
This will give another permission error: when applications aren’t running as root, they cannot listen on ports below 1024, so our web server listening on ports 80 and 443 will not work. But inside the container we can listen on a higher-numbered port, and even better, with Docker we can still publish a lower-numbered port on the host and map it to that high-numbered port inside the container. To change the port nginx listens on, edit the conf/conf.d/default.conf file and update the listen directive. Updated Dockerfile:

```dockerfile
FROM nginx:1.16
RUN useradd -u 5000 app \
 && mkdir -p /var/run/nginx /var/tmp/nginx \
 && chown -R app:app /usr/share/nginx /var/run/nginx /var/tmp/nginx
COPY conf/nginx.conf /etc/nginx/nginx.conf
USER app:app
COPY conf/conf.d/default.conf /etc/nginx/conf.d/
```

Rebuild the image, then run it detached, with the port mapping, and a container name:
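A sketch of those steps (the tag, container name, and 8080 listen port are assumptions):

```shell
# Rebuild with the updated default.conf
docker build -t user/nginx:1.16-3 .

# Publish host port 80 to the high-numbered port nginx now listens on inside
docker run -d -p 80:8080 --name web user/nginx:1.16-3
```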
We get a permission denied again. We can see that the files on the host and inside the container are owned by the same UID/GID and with the same permissions. There’s no mapping from the host user to the container user when running on Linux. There are a few ways to handle this:
Note that named volumes are only initialized when they are first created, so you only want to use these for persistent data, and not the contents of the image that you want to update with each new image.
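To illustrate (volume and image names are hypothetical):

```shell
# First run: Docker copies the image's files at the mount point into the new volume
docker volume create webdata
docker run -d -v webdata:/usr/share/nginx/html --name web user/nginx:1.16-3

# Later runs with a newer image will NOT refresh webdata; it keeps its old content
```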
That fails without root access. If we try running sudo inside the container, this is what happens:
Images are minimal, shipping only with the needed commands. And in containers sudo is not needed: it would be a security vulnerability (what’s the point of running as a user if that user can sudo to root), and there are better ways to get root inside of a container. The docker container exec command runs our command in the container namespace with the same settings (environment variables, working directory, user) that the docker container run command used to start the container. However, docker container exec gives options to override those settings; have a look at the help output to see how we can change the user:
Try running an apt-get update command inside the container as root instead of our app user.
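That override can be sketched as follows (the container name is an assumption):

```shell
# -u root runs this one command as root, despite the image's USER app:app
docker container exec -u root web apt-get update
```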
When a container is started using /bin/bash, that shell becomes the container's PID 1, and docker attach is used to get inside PID 1 of a container. So docker attach < container-id > will take you into the bash terminal, since it is PID 1 as we mentioned when starting the container. Exiting from the shell will stop the container. |
Remove Docker from Ubuntu
- `dpkg -l | grep -i docker` — list the packages that contain the word “docker”
- Stop and disable the service: `sudo systemctl stop docker`, `sudo systemctl disable docker`, then `sudo rm -rf /etc/systemd/system/docker.service.d`
- Remove all Docker packages: `sudo apt-get purge -y docker-engine docker docker.io docker-ce docker-ce-cli`
- Remove all Docker-related files: `sudo rm -rf /var/lib/docker /etc/docker`
- Remove the group: `sudo groupdel docker`
- Deactivate the network interface and ethernet bridge, and delete them: `ifconfig docker0 down`

Resource |
Image Optimization
|
Docker
Docker Daemon: A constant background process that helps to manage/create Docker images, containers, networks, and storage volumes.
Docker Engine REST API: An API used by applications to interact with the Docker daemon; it can be accessed by an HTTP client.
Docker CLI: A Docker command-line client for interacting with the Docker daemon, a.k.a. the `docker` command.
If we think about it, we can identify some problems with Docker's design:
- Docker runs as a single process, which can become a single point of failure.
- All the child processes are owned by this process.
- If the Docker daemon fails at any point, all the child processes lose their parent and become orphaned.
- Security vulnerabilities.
- All Docker operations need to be performed as root.
To understand why the Docker Daemon is running with root access and how this can become a problem, we first have to understand the Docker architecture (at least on a high level).
Container images are specified with the Dockerfile. The Dockerfile details how to build an image based on your application and resources. Using Docker, we can use the build command to build our container image. Once you have the image of your Dockerfile, you can run it. Upon running the image, a container is created.
The problem with this is that you cannot use Docker directly on your workstation as an ordinary user. Docker is composed of a variety of different tools; in most cases, you will only interact with the Docker CLI. However, running an application with Docker means running the Docker Daemon with root privileges. The daemon binds to a Unix socket instead of a TCP port, and by default that socket is owned by the user root, so other users can only access it using sudo.
The Docker Daemon is responsible for the state of your containers and images, and facilitates any interaction with “the outside world.” The Docker CLI is merely used to translate commands into API calls that are sent to the Docker Daemon. This allows you to use a local or remote Docker Daemon.
Running the Docker Daemon locally, you risk that any process that breaks out of its container will have the same rights as the daemon — that is, root — on the host operating system. This is why you should only trust images that you have written yourself and understand.
Resource:
It’s the Unix socket the Docker daemon listens on by default, and it can be used to communicate with the daemon from within a container.
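For example, the daemon's REST API can be queried directly over that socket (this requires read/write access to /var/run/docker.sock):

```shell
# Ask the Docker Engine API for the daemon's version information
curl --unix-socket /var/run/docker.sock http://localhost/version
```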