Rahul-Chauhan-2212/Docker
Docker

Docker is a container management service. The keywords of Docker are develop, ship and run anywhere. The whole idea of Docker is that developers can easily develop applications, ship them in containers, and then deploy those containers anywhere.

Features of Docker

  • Docker reduces the size of the development footprint by shipping only a small slice of the operating system inside each container.
  • With containers, it becomes easier for teams across different units, such as development, QA and operations, to work seamlessly across applications.
  • You can deploy Docker containers anywhere: on physical machines, on virtual machines, and even in the cloud.
  • Since Docker containers are quite lightweight, they are very easily scalable.

Components of Docker

  • Docker Desktop − It allows one to run Docker containers on Windows and macOS.
  • Docker Engine − It is used for building Docker images and creating Docker containers.
  • Docker Hub − This is the registry which is used to host various Docker images.
  • Docker Compose − This is used to define applications using multiple Docker containers.

Install Docker Desktop on Windows

Download Docker Desktop from the Install Docker on Windows page.

Note: When prompted, select or clear the Use WSL 2 instead of Hyper-V option on the Configuration page, depending on your choice of backend.

If your admin account is different from your user account, you must add the user to the docker-users group. Run Computer Management as an administrator and navigate to Local Users and Groups > Groups > docker-users. Right-click to add the user to the group. Log out and log back in for the changes to take effect.

Some Initial Useful Docker Commands

  • docker --version ---> To get the installed Docker version
  • docker build . ---> To build a Docker image of the app
  • docker run -p hostPort:containerPort imageId ---> To run a Docker image with the container port published on the given host port
  • docker ps ---> To list the running Docker containers
  • docker ps -a ---> To list all Docker containers, stopped or running
  • docker stop containerName ---> To stop a running Docker container

Images vs Containers

Images ---> Templates for containers; they contain the code plus the required tools and runtime environment.
Containers ---> The running "unit of software", i.e. a running instance of an image. Multiple containers can be created from one image.

Pre-Built Images

Docker Hub example: the node image on Docker Hub
cmd ---> docker run -it node

-it ---> opens the Node interactive shell (REPL) inside the container

Snippet -->
Digest: sha256:52bda4c171f379c1dcba5411d18ed83ae6e99c3751cad67a450684efb9491f6b Status: Downloaded newer image for node:latest

Custom Images

Node JS App

Docker File :

  • FROM node ---> base image instruction
  • WORKDIR /app ---> sets the working directory in the image
  • COPY . /app ---> copies all code files to the working directory
  • RUN npm install ---> installs the Node dependencies in the image
  • CMD [ "node", "server.js" ] ---> command that runs the app in the container (executed at container start, not at image build)
Commands to build and run the Node.js app image/container
  • docker build .
  • docker run -p 80:80 imageId

Image Layers

Each command in the Dockerfile creates a cacheable image layer. Instructions should therefore be ordered from least frequently changing to most frequently changing, so that the layer cache is reused as much as possible and image build time decreases.
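Applied to the Node Dockerfile above, this means copying package.json and running npm install before copying the rest of the source, so the dependency layer stays cached across code edits (a sketch; the entry file server.js and port 80 are taken from the sections above):

```dockerfile
FROM node
WORKDIR /app
# dependencies change rarely: copy only the manifest and install first,
# so this layer is cached as long as package.json is unchanged
COPY package.json /app
RUN npm install
# source code changes often: copy it last so edits only invalidate
# the layers from here on
COPY . /app
EXPOSE 80
CMD ["node", "server.js"]
```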

Stopping and Restarting Containers

docker run imageId ---> creates and runs a new container
docker stop containerName ---> stops a running container
docker start containerName ---> restarts a stopped container

Attached and Detached Containers

The docker run command runs in attached mode, which means the command prompt stays attached to the container and any console log shows up in the same terminal. The docker start command runs in detached mode by default.

docker run -p 8080:80 -d imageId
-d is used to run the docker run cmd in detached mode.

To attach to an already running detached container

docker attach containerId
or
docker logs -f containerId

Entering Interactive Mode

example: python-app-docker
docker run imageId ---> fails here, because the app expects user input but no STDIN is attached

docker run -it imageId ---> works: -i keeps STDIN open and -t allocates a pseudo-terminal

docker start -a containerId ---> also does not work correctly: -a only attaches to the output

docker start -a -i containerId ---> works: -i additionally attaches STDIN, so the container runs in interactive mode

Deleting Images and Containers

docker rm containerId
Note: The container must be stopped for this command.
To list Docker images:
docker images

To delete Docker images

docker rmi imageId
Note: An image can only be deleted when it is not referenced by any container, running or stopped. So first delete the containers, then the image.

To Delete all the unused Docker Images

docker image prune -a

Automatically delete the stopped containers

docker run -p 3000:80 -d --rm imageId
Here --rm does the trick: once the container is stopped, it is automatically deleted.

Inspecting Images

docker image inspect imageId

Copy Files to Running Container and vice-versa

docker cp test/ admiring_kalam:/test
docker cp admiring_kalam:/test .
docker cp admiring_kalam:/test/copy.txt .

Naming and Tagging Images and Containers

Containers Naming

docker run -d -p 80:80 --name containerName imageId
--name is used to give a name to the container
Images Naming and Tagging
docker build -t repoName:tag .
Tag an image referenced by ID
To tag a local image with ID “0e5574283393” into the “fedora” repository with “version1.0”:
docker tag 0e5574283393 fedora/httpd:version1.0
Tag an image referenced by Name
To tag a local image with name “httpd” into the “fedora” repository with “version1.0”:
docker tag httpd fedora/httpd:version1.0
Note that since the tag name is not specified, the alias is created for an existing local version httpd:latest.
Tag an image referenced by Name and Tag
To tag a local image with name “httpd” and tag “test” into the “fedora” repository with “version1.0.test”:
docker tag httpd:test fedora/httpd:version1.0.test

Pushing Images to DockerHub

Create Docker repo in DockerHub
rchauhan9102/node-app

docker tag oldName repoName
docker login
docker push reponame
docker logout

Pulling Images from DockerHub

docker pull dockerimagename

Managing Data and Working with Volumes

Image ---> READ ONLY
Container ---> READ/WRITE

Different Kinds of Data

1) Application (code + environment) ---> Fixed (stored in the image)
2) Temporary app data (e.g. entered user input) ---> Variable (stored in the container)
3) Permanent app data (e.g. user accounts) ---> Must not be lost when the container restarts (stored in volumes)

Project: data-volumes-starting-setup. This app creates a file named after the entered title; the content of the file is the feedback text.

Type of External Data Storages

  • Volumes (managed by Docker)
  • Bind Mounts (managed by you)

Volumes

Volumes are folders on your local machine which are mounted into containers.

docker volume ls
1) Anonymous Volumes
When the container shuts down, an anonymous volume is deleted.

2) Named Volumes
When the container shuts down, a named volume is not deleted: the data persists, and once the container starts again the same data from that volume can be reused.
CMD to create named volume:
docker run -d -p 3000:80 --rm --name feedback-app -v feedback:/app/feedback feedback-node:volume

To remove volumes:

  • docker volume rm volumeName
  • docker volume prune

Bind Mounts

Used to persist editable data.
A bind mount maps a folder on your local machine into the container; changes inside that folder on the host are immediately visible in the container.

docker run -d -p 3000:80 --rm --name feedback-app -v feedback:/app/feedback -v "C:/Users/RAHUL CHAUHAN/Documents/Docker Codes/Docker/data-volumes-starting-setup:/app" -v /app/node_modules feedback-node:volume
Volumes Comparison

| Anonymous Volume | Named Volume | Bind Mount |
| --- | --- | --- |
| docker run -v /app/data | docker run -v data:/app/data | docker run -v path/to/code:/app/data |
| Created specifically for a single container | Created in general, not tied to any container | Location on the host file system, not tied to any specific container |
| Survives container shutdown/restart unless --rm is used | Survives container shutdown/restart/removal via the docker CLI | Survives container shutdown/restart/removal, lives on the host file system |
| Cannot be shared across containers | Can be shared across containers | Can be shared across containers |
| Since it is anonymous, it can't be re-used, even for the same image | Can be re-used for the same container (across restarts) | Can be re-used for the same container (across restarts) |
Making Volumes Read-Only

By default, all volumes are read-write, which means a process in the container could change our host machine's code through the bind mount, which we do not want. To prevent this, the ro flag is appended to the bind mount.

docker run -d -p 3000:80 --rm --name feedback-app -v feedback:/app/feedback -v "C:/Users/RAHUL CHAUHAN/Documents/Docker Codes/Docker/data-volumes-starting-setup:/app:ro" -v /app/temp -v /app/node_modules feedback-node:volume
Managing Docker Volumes
docker volume ls
docker volume create volumeName
docker volume rm volumeName
docker volume prune
docker volume inspect volumeName
Ignoring files and folders while building images

.dockerignore file
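For the Node apps above, a typical .dockerignore might look like this (the exact entries depend on the project; these are illustrative):

```
node_modules
.git
Dockerfile
.dockerignore
*.log
```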

ARGuments & ENVironment variables

Docker supports build-time ARGuments and run-time ENVironment variables

  1. ARG

    Available inside the Dockerfile, not accessible in CMD or any application code
    Set on image build (docker build) via --build-arg
    docker build -t feedback-node:dev --build-arg DEFAULT_PORT=8000 .
  2. ENV

    Available inside of a Dockerfile and application code
    Set via ENV in Dockerfile or --env on docker run
    docker run -d -p 3000:8000 --rm --name feedback-app --env PORT=8000 -v feedback:/app/feedback -v "C:/Users/RAHUL CHAUHAN/Documents/Docker Codes/Docker/data-volumes-starting-setup:/app:ro" -v /app/node_modules -v /app/temp feedback-node:env
    docker run -d -p 3000:8000 --rm --name feedback-app --env-file ./.env -v feedback:/app/feedback -v "C:/Users/RAHUL CHAUHAN/Documents/Docker Codes/Docker/data-volumes-starting-setup:/app" -v /app/node_modules -v /app/temp feedback-node:env
    via .env file
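In the Dockerfile, the DEFAULT_PORT build argument and the PORT environment variable used in the commands above could be wired up like this (a sketch):

```dockerfile
FROM node
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
# build-time argument, overridable via --build-arg DEFAULT_PORT=8000
ARG DEFAULT_PORT=80
# run-time environment variable, defaulting to the build argument;
# overridable via --env PORT=... or an --env-file
ENV PORT=$DEFAULT_PORT
EXPOSE $PORT
CMD ["node", "server.js"]
```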

Networking: (Cross-)Container Communication

  1. Container and WWW communication
    Containers can communicate with the WWW without any extra coding or configuration.
  2. Container and local host machine communication
    For container-to-localhost communication, we need to change the server address inside our Docker images: localhost becomes host.docker.internal.
  3. Container-to-container communication
    There can be two approaches:
    • Basic Solution
      Look up the IP address of the container to be reached using docker container inspect containerName,
      then use this IP in the image of the main app.
    • Docker Networks
      Create a Docker network:
      docker network create favorite-network
      Run the container with the --network flag:
      docker run -d --name mongodb --network favorite-network mongo
      Build the new image using the container name mongodb in place of the IP address, and start it with the --network flag:
      docker run -d --rm --name favorites -p 3000:3000 --network favorite-network favorites-node
      In this case, the containers communicate via the Docker network and the container name.
app used: networks-starting-setup
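Inside the favorites app, the MongoDB connection string can then use the container name as the hostname instead of a hard-coded IP (a sketch; the database name is illustrative):

```javascript
// Inside the "favorite-network" Docker network, the container named
// "mongodb" is resolvable by that name, so no IP lookup is needed.
const host = "mongodb"; // container name, not an IP address
const mongoUrl = `mongodb://${host}:27017/favorites`;
console.log(mongoUrl); // mongodb://mongodb:27017/favorites
```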

Building Multi-container Applications with Docker

app used: multi-container-starting-setup

  1. Dockerize mongodb service
    docker run --name mongodb --rm -d -p 27017:27017 mongo
  2. Dockerize Backend service
    docker build -t goals-node .
    docker run --name goals-backend --rm -d -p 80:80 goals-node
  3. Dockerize Frontend service
    docker build -t goals-react .
    docker run --name goals-frontend --rm -d -p 3000:3000 goals-react
  4. Adding Docker networks for efficient cross-container communication
    docker network ls
    docker network create goals
    docker run --name mongodb --rm -d --network goals mongo
    docker run --name goals-backend --rm -d -p 80:80 --network goals goals-node
    docker run --name goals-frontend --rm -d -p 3000:3000 goals-react
  5. Adding Data persistence and authentication to mongodb with volumes
    docker run --name mongodb --rm -d -v data:/data/db --network goals mongo
    docker run --name mongodb -e MONGO_INITDB_ROOT_USERNAME="rahul" -e MONGO_INITDB_ROOT_PASSWORD="rahul" --rm -d -v data:/data/db --network goals mongo
  6. Volumes, Bind Mounts and Polishing to NodeJs Container
    docker run -v C:/MyCodes/Docker/multi-containers-starting-setup/backend:/app -v logs:/app/logs -v /app/node_modules -e MONGODB_USERNAME=rahul -e MONGODB_PASSWORD=rahul --name goals-backend --rm -d -p 80:80 --network goals goals-node
  7. Live Source Code Updates for React Container with Bind Mounts
    docker run --name goals-frontend -v C:/MyCodes/Docker/multi-containers-starting-setup/frontend/src:/app/src --rm -d -p 3000:3000 goals-react

Docker Compose : Elegant Multi-Container Orchestration

Docker Compose is used to run Docker orchestration commands (build, run, start, stop etc.) from one configuration file. The individual docker commands get very long, and for a multi-container setup with tens of containers it is very difficult to run them all by hand; Docker Compose solves this.
Docker Compose creates a default network, and all the containers (services) defined in the docker-compose.yaml file live in that same network. When we stop the containers using docker-compose down, all the containers are removed, so we don't need an equivalent of --rm in the yaml file.

docker-compose.yaml

Docker Compose file Reference
Note:
  1. Docker Compose does not replace Dockerfiles for custom images
  2. Docker Compose also does not replace images and containers concept
  3. Docker Compose is not suited for managing multiple containers on different hosts
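For the multi-container app from the previous section, a docker-compose.yaml along these lines could replace the long docker run commands (service, volume and path names are illustrative and assume the folder layout of multi-containers-starting-setup):

```yaml
version: "3.8"
services:
  mongodb:
    image: mongo
    volumes:
      - data:/data/db
    environment:
      MONGO_INITDB_ROOT_USERNAME: rahul
      MONGO_INITDB_ROOT_PASSWORD: rahul
  backend:
    build: ./backend
    ports:
      - "80:80"
    volumes:
      - logs:/app/logs
      - ./backend:/app
      - /app/node_modules
    environment:
      MONGODB_USERNAME: rahul
      MONGODB_PASSWORD: rahul
    depends_on:
      - mongodb
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    volumes:
      - ./frontend/src:/app/src
    stdin_open: true
    tty: true
    depends_on:
      - backend
volumes:
  data:
  logs:
```

docker-compose up then builds the images, creates the default network, and starts all three services together.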

Docker Compose Up and Down

  • docker-compose --version
  • docker-compose up
  • docker-compose up -d
  • docker-compose down
  • docker-compose down -v
  • docker-compose build
  • docker-compose up --build

Utility Containers

With utility containers we can create projects on our local machine without installing the required tools (e.g. Node) locally.

docker exec -it containerName npm init
docker run -it node npm init
docker run -it -v C:/MyCodes/Docker/utility-containers:/app node-util npm init
docker run -it -v C:/MyCodes/Docker/utility-containers:/app node-util npm install
Utilizing Entry point
docker run -it -v C:/MyCodes/Docker/utility-containers:/app node-util init
docker run -it -v C:/MyCodes/Docker/utility-containers:/app node-util install express --save
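The node-util image used above presumably sets npm as its entry point, so only the sub-command (init, install ...) has to be passed on the command line; a minimal sketch of such a Dockerfile:

```dockerfile
FROM node:14-alpine
WORKDIR /app
# with npm as the ENTRYPOINT, the arguments given to `docker run`
# (e.g. "init" or "install express --save") are appended to npm
ENTRYPOINT ["npm"]
```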
Using Docker Compose for Utility containers
docker-compose run --rm npm init
docker-compose run --rm npm install express --save

Deploying Docker Containers

Development to Production: Things to watch out for

  1. Bind Mounts should not be used in Production
  2. Containerized apps might need a build step (e.g. React, Angular)
  3. Multi Container Projects might need to be split across multiple host/remote machines
  4. Trade-off between control and responsibility might be worth it.

Possible Deployment Approach

Development Machine --> Push --> Container Registry --> Pull --> Remote/Host Machine

Hosting Providers
  • Amazon AWS
  • Microsoft Azure
  • Google Cloud
EC2 Instance
  • Launch EC2 instance
  • Connect EC2 using .ppk or .pem file
  • Make SSH port 22 available in the inbound rules of the security group
Installing Docker on AWS EC2 Instance
  • sudo yum update -y
  • sudo amazon-linux-extras install docker
Pushing our image to Docker Registry(Docker Hub)
  • docker build -t node-dep-example-1 .
  • docker tag node-dep-example-1 rchauhan9102/node-example-1
  • docker push rchauhan9102/node-example-1
Running and Publishing the App on EC2
  • docker run -d --rm -p 80:80 rchauhan9102/node-example-1
  • Make HTTP port 80 available in the inbound rules of the security group
Managing and Updating Container/Image
  • Build New Image Locally
  • Tag the new image
  • Push Image to Docker Registry
  • In EC2, Stop the running container
  • Docker pull the latest image
  • Run the container using the latest image
Disadvantage of Current Approach of Deployment
  • This is a "do it yourself" approach.
  • You have to keep essential software updated.
  • You have to manage the network and security group firewall yourself.
  • SSHing into the EC2 instance can be annoying.
Deployment with AWS ECS: A Managed Docker Container Service
Create ECS service
This creates a managed container service, so we can access our application without any server configuration of the kind EC2 requires.
  • Define Container
  • Define Task Definition
  • Define Service
  • Define Cluster

Load Balancing

Updating Managed Containers
In ECS, create a new task revision.
Deploying a multi-container app in AWS ECS
  • Create ECS Cluster.
  • Create Service.
  • Create a Task; add the containers inside it.
  • Add Load Balancer
  • Add EFS volumes