
docker aws apache gitlab

Docker

Docker is a platform that lets us package an application in an isolated environment containing all the libraries, tools, and code it needs to run. It is essentially a virtualization tool that makes it easy to develop, test, and deploy applications using images and containers: the packages are known as Docker images, and a running image is called a container (see the quick example after the list below).

  • They are isolated
  • They are lightweight, since they don't require a hypervisor the way virtual machines do
  • Multiple containers can run on a single host
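
For example, pulling an image and then starting a container from it (the httpd image is used here purely as an illustration):

docker pull httpd        # downloads the image from Docker Hub
docker run -d httpd      # starts a container from that image in detached mode
docker ps                # the running container appears here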

Docker registry

A Docker registry is a platform for storing, maintaining, and distributing Docker images; Docker Hub and the GitLab Container Registry (used later in this guide) are examples.

Docker commands

  • docker --version // shows the installed Docker version
  • docker pull <image_name> // pulls an image from a Docker registry
  • docker run -it -d <image_name> // runs an image as a container in detached mode ('-d'); '-i' keeps STDIN open so we can interact with the container's shell (without it the container may start and then immediately exit if no foreground process is running), and '-t' (or --tty) allocates a pseudo-TTY (pseudo terminal) so the container handles input and output the way a real terminal would
  • docker ps // lists running containers
  • docker ps -a // lists all containers, both running and exited
  • docker exec -it <container_id> bash (or sh) // opens a shell inside a running container
  • docker stop <container_id> // stops a container gracefully
  • docker kill <container_id> // stops a container immediately
  • docker commit <container_id> <username/image_name> // creates a new image from a container's changes
  • docker login // logs in to the Docker Hub registry (or another registry)
  • docker push <username/image_name> // pushes an image to a Docker registry
  • docker images // lists locally stored images
  • docker rm <container_id> // removes a stopped container
  • docker rmi <image_id> // removes a locally stored image
  • docker rm -f <container_id> // stops and removes a running container
  • docker build <path_to_build_context> // builds an image from a Dockerfile
  • docker history <image_name> // shows how a Docker image evolved, layer by layer
  • docker logout <registry_url> // logs out of a registry
  • docker logs <container_id> // shows a container's logs
  • docker network create <network_name> // creates a network so containers can communicate with each other internally in an isolated, secure environment
  • docker restart <container_id> // restarts a container
  • docker volume create <volume_name> // creates a volume for data persistence
  • docker inspect <container_id> // shows the detailed configuration of a container

Docker port mapping and volumes

Port mapping: It is used to enable external access to an application running inside a container. It maps a port on the host (or another external network interface) to a port inside the container, establishing communication between them.

docker run -p 8080:80 <image_id>

This command maps port 8080 on the host to port 80, where the application listens inside the container.
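
For example, using the official httpd (Apache) image purely as an illustration:

docker run -d -p 8080:80 httpd      # Apache listens on port 80 inside the container
curl http://localhost:8080          # a request to host port 8080 reaches the container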

Volumes: They are used to keep a container's data persistent. A running container maintains state for the applications inside it, but that state is destroyed when the container is removed. When we need data to outlive the container, we use Docker volumes. A Docker volume is an independent file system managed by Docker; it exists on the host system as a normal file or directory where the data is persisted (see the short example after the commands below).

Creating docker volume

docker volume create <volume_name>

Mounting a docker volume to a container

docker run -v <volume_name>:<destination_inside_of_container> image_name

Listing docker volumes

docker volume ls
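
As a quick sketch of persistence (the alpine image and the /data path here are just for illustration):

docker volume create app_data
docker run --rm -v app_data:/data alpine sh -c "echo hello > /data/greeting.txt"
docker run --rm -v app_data:/data alpine cat /data/greeting.txt
# prints "hello": the file written by the first container outlived it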

Dockerfile

A Dockerfile is a simple text file with a set of instructions. These instructions are applied to a base image to create a new Docker image (a full example appears in the Dockerfile section at the end of this README).

Dockerfile commands

  • FROM: defines a base image
  • RUN: executes a command in a new image layer
  • CMD: command to be executed when the container starts
  • ENTRYPOINT: defines the container's executable, similar to CMD but it is not overridden by arguments passed to docker run
  • ADD: copies files/data from a URL (or the build context) into the image
  • ENV: sets environment variables inside the image
  • EXPOSE: defines a port for external communication
  • COPY: copies local files into the image
  • VOLUME: defines which directory in the image should be stored in a Docker volume
  • WORKDIR: defines the working directory for subsequent instructions and commands

Building docker image using Dockerfile

docker build -t <image_name>:<tag> <build_context_path>

Docker compose

It is a Docker tool for defining and managing multiple Docker containers/images at once. It uses docker-compose.yml, a YAML file, to set up the configuration for the different containers. To apply the specified configuration, run the following command:

docker-compose up

Components of docker-compose

Services:
It contains the configuration of multiple Docker containers

services:
  mobile_application:
    image: my_react_app
    # ... Other configurations for mobile application service
  backend:
    image: express_application
    # ... Other configurations for backend service
  db:
    image: mysql
    # ... Other configuration for database service

Here you can define the configuration for multiple containers/services, with each service having its own set of configurations.

Volumes:
Volumes are directories shared between the containers and the host. They are used to retain data even after the container is stopped.

services:
  backend_service:
    image: alpine:latest
    volumes: 
      - my_volume_name:/path/inside/container

volumes:
   my_volume_name: 

In this section you can define a volume name and associate it with a location inside the container where you intend to persist data.
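
Putting services, port mapping and volumes together, a minimal docker-compose.yml for the backend and database services above could look like this sketch (the image names, ports, password and paths are illustrative assumptions):

services:
  backend:
    image: express_application
    ports:
      - "9001:9001"                     # host:container port mapping
    depends_on:
      - db
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: example      # placeholder only, do not use in production
    volumes:
      - db_data:/var/lib/mysql          # persist the MySQL data directory

volumes:
  db_data: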

Gitlab CI/CD

Gitlab

GitLab runner

GitLab Runner is an application that runs CI/CD jobs on a machine and sends the results back to GitLab.

Installation on an AWS Ubuntu machine

  1. Log in to the AWS remote server using SSH
ssh -i <private_key> ubuntu@<server_ip>
  2. Download the GitLab Runner repository script, inspect it, and run it
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh > script.deb.sh
less script.deb.sh

sudo bash script.deb.sh
  3. Now install GitLab Runner
sudo apt install gitlab-runner
  4. Finally, check the status
systemctl status gitlab-runner

After installing GitLab Runner, you need to register it with your GitLab account. Here are the steps:

  1. Within your GitLab project, navigate to Settings > CI/CD > Runners.
  2. Add a new runner specific to your project. Configure the settings as needed, and assign a tag. A tag is a user-defined keyword used to identify GitLab runners uniquely for your project.
  3. Once you've configured the runner, you will receive a GitLab URL and a registration token. You'll use these later when setting up an AWS EC2 instance and registering it as a GitLab runner.
  4. Now, on the Ubuntu machine, execute the following command (a non-interactive variant is sketched after these steps)
sudo gitlab-runner register
  • This command will prompt you for the information you obtained in the previous step:
    • GitLab URL
    • Registration token
  • Additionally, it will ask you to specify an executor, which determines how jobs are run. You can choose from options like shell, docker, ssh, and more, depending on your requirements.
  5. Now that GitLab Runner is ready on your Ubuntu machine, update and configure it as necessary.
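
For reference, the interactive prompts can also be supplied as flags (the values shown are placeholders, and the exact flags can vary slightly between GitLab Runner versions):

sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "<registration_token>" \
  --executor "shell" \
  --description "aws-ec2-runner" \
  --tag-list "aws-deploy"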

Configuring .gitlab-ci.yml file

It's now time to set up CI/CD for your Node.js application and deploy it to AWS using Docker and a Docker registry. In this example, we'll use the GitLab Container Registry. Follow the steps below to set up the yml file for the CI/CD pipeline (a sketch of the resulting .gitlab-ci.yml follows the steps):

  1. Build the Docker image:
  • Define your Dockerfile, which contains the configuration for building your application's Docker image
  • then execute the following to build the image
docker build -t <registry-url>/<project-name>:<image-tag> .
  2. Publish the image to the Docker registry
  • Log in to the registry (docker login <registry-url>) and publish the image to the Docker registry (GitLab Container Registry). You can explore the GitLab Container Registry by going to Packages & Registries > Container Registry in your GitLab project; here, you can view and manage the Docker images stored in the registry.
docker push <registry-url>/<project-name>:<image-tag>
  3. SSH into your AWS EC2 machine, or use the GitLab runner tag to run the deployment job on the AWS machine.
  • Here, remove the old Docker image and container if necessary.
  • Pull the latest Docker image from the GitLab Container Registry:
docker pull <registry-url>/<project-name>:<image-tag>
  • Start a new container with the latest Docker image:
docker run -d -p <host-port>:<container-port> <registry-url>/<project-name>:<image-tag>

By following these steps, you can configure the .gitlab-ci.yml CI/CD file to build and push Docker images to the GitLab Container Registry and then deploy the latest image on your AWS EC2 machine.
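
A minimal .gitlab-ci.yml implementing these steps might look like the sketch below. The CI_REGISTRY* variables are predefined by GitLab; the job names, the container name node_app, the runner tag aws-deploy, and the port mapping are assumptions you should adapt to your project:

stages:
  - build
  - deploy

build_image:
  stage: build
  tags:
    - aws-deploy                      # run on the runner registered earlier
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"

deploy_to_aws:
  stage: deploy
  tags:
    - aws-deploy
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker pull "$CI_REGISTRY_IMAGE:latest"
    - docker rm -f node_app || true   # remove the old container if it exists
    - docker run -d --name node_app -p 80:9001 "$CI_REGISTRY_IMAGE:latest"

Because the runner on the EC2 instance uses the shell executor, the docker commands above run directly on that machine.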

Dockerfile

# Pull the Node 16 base image from the Docker registry
FROM node:16.20
# Create a directory in the image and set it as the working directory
RUN mkdir -p /home/app
WORKDIR /home/app
# Copy everything from the current local directory into the image's working directory
COPY . /home/app
# Install the dependencies for our Node application defined in package.json
RUN npm install

# Expose port 9001 of the application for port mapping with the external network
EXPOSE 9001
# Run "node index.js" when the container starts
CMD ["node", "index.js"]

About

Deploying a Node application to AWS using GitLab CI/CD and serving the application using Apache & Docker
