This project is built from scratch to finish based on a microservices architecture using Spring Boot and Spring Cloud.
- The client sends a request to `customer` to perform `orders`; then I check whether the `product` the customer wants to order exists. If the `product` exists, I save the `orders` and push a message (i.e. a notification) to the Message Queue, from which the `Notification service` can pull the message to send an email to the customer.
- The same goes for `orders`: when the client sends a request to `customer` to perform `payment`, I check whether the `orders` the user wants to pay for exist. If the `orders` exist, I save the `payment` and push a message (i.e. a notification) to the Message Queue, from which the `Notification service` can pull the message to send an email to the customer.
Microservices are an architectural and organizational approach to software development where software is composed of small independent services that communicate over well-defined APIs. These services are owned by small, self-contained teams.
Microservices architectures make applications easier to scale and faster to develop, enabling innovation and accelerating time-to-market for new features.
With monolithic architectures, all processes are tightly coupled and run as a single service. This means that as the application grows, it becomes more difficult to manage and deploy.
With a microservices architecture, an application is built as independent components that run each application process as a service. Because they are independently run, each service can be updated, deployed, and scaled to meet demand for specific functions of an application.
In this project, I will explain the steps I took to build it. Besides that, I have a branch corresponding to each step, so you can check out the code of each branch and look at this diagram to follow along.
- 1. Setup parent module
- 2. Create microservices instances
- 3. Microservices communication using RestTemplate
- 4. Service Discovery
- 5. Microservices communication using OpenFeign
- 6. Distributed Tracing
- 7. API Gateway
- 8. Message Queue
- 9. Package, run microservices with jar file
- 10. Containerize microservices, build, push docker image to local and DockerHub using Jib
- 11. Monitor microservices using Prometheus and Grafana
- 12. Deploy microservices to local Kubernetes using Minikube
- 13. Deploy microservices to AWS EKS (Elastic Kubernetes Service)
- 14. Monitor kubernetes cluster using Prometheus Operator
- 15. CI-CD microservices using GitHub Actions
To set up the parent module, i.e. the parent `pom.xml`, we need to add `dependencyManagement` and `pluginManagement` sections; from these, every sub-module, i.e. each microservice, can pick the dependencies and plugins it needs in its own `pom.xml`.
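As a sketch, a parent `pom.xml` along these lines centralizes the versions so that sub-modules only declare what they use (the group id, module names, and version numbers below are illustrative placeholders, not taken from this project):

```xml
<!-- Sketch of a parent pom.xml; ids and versions are illustrative placeholders. -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>microservices-parent</artifactId>
    <version>1.0.0</version>
    <packaging>pom</packaging>

    <modules>
        <module>customer</module>
        <module>product</module>
    </modules>

    <!-- Sub-modules inherit versions from here and only declare the dependency itself. -->
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-dependencies</artifactId>
                <version>2021.0.3</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>

    <build>
        <!-- Same idea for plugins: declare the version once, pick it up in child poms. -->
        <pluginManagement>
            <plugins>
                <plugin>
                    <groupId>org.springframework.boot</groupId>
                    <artifactId>spring-boot-maven-plugin</artifactId>
                    <version>2.7.0</version>
                </plugin>
            </plugins>
        </pluginManagement>
    </build>
</project>
```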
To create the microservice instances, you need to create a new sub-module, add the dependencies and plugins you want to its `pom.xml`, and build each Spring Boot application as usual, i.e. `Controller`, `Service`, `Repository`, etc.
After creating the microservice instances, I want the microservices to communicate with each other, i.e. send `HTTP requests`. I use `RestTemplate` to perform the requests. `RestTemplate` is the central Spring class for client-side HTTP access.
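A minimal sketch of such a call (the URL, port, and `ProductResponse` DTO are hypothetical, not taken from this project):

```java
// Sketch only: URL, port, and the ProductResponse DTO are hypothetical.
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class ProductClient {

    private final RestTemplate restTemplate = new RestTemplate();

    public ProductResponse getProduct(Long productId) {
        // getForObject performs a GET request and maps the JSON response body onto the DTO
        return restTemplate.getForObject(
                "http://localhost:8081/api/v1/products/{id}",
                ProductResponse.class,
                productId);
    }
}
```

In a real Spring app the `RestTemplate` would normally be defined once as a `@Bean` and injected, rather than constructed inline.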
Service discovery is the mechanism by which applications and microservices locate each other on a network, i.e. by `host:port`, in order to communicate. Spring Cloud provides `Eureka Server` and `Eureka Client` to perform service discovery.
- Step 1: Each microservice registers with the `Eureka Server` as a client, i.e. as an `Eureka Client`.
- Step 2: When a microservice needs to talk to another microservice, it looks it up in the `Eureka Server` to learn the location, i.e. `host:port`.
- Step 3: Microservices can then connect to each other using just the `service name` to perform an `HTTP request`.
Look at the diagram below to understand how service discovery, as provided by `Spring Cloud Netflix Eureka`, works.
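As a sketch, registering a microservice as a Eureka client typically amounts to adding the `spring-cloud-starter-netflix-eureka-client` dependency and pointing it at the server in `application.yml` (the application name and URL below are illustrative assumptions):

```yaml
# application.yml of a microservice (values are illustrative)
spring:
  application:
    name: customer            # the service name other services use to look this instance up
eureka:
  client:
    service-url:
      defaultZone: http://localhost:8761/eureka   # where the Eureka Server is running
```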
A better way for microservices to communicate is to use `OpenFeign` instead of `RestTemplate`. `Spring Cloud OpenFeign` is a declarative REST client for Spring Boot applications.
Declarative REST Client: `Feign` is a declarative web service client that creates a dynamic implementation of an interface decorated with JAX-RS or Spring MVC annotations.
Spring Cloud integrates `Eureka`, `Spring Cloud CircuitBreaker`, and `Spring Cloud LoadBalancer` to provide a load-balanced HTTP client when using `Feign`.
In a microservices architecture, microservices talk to each other via HTTP requests. We need to know exactly how a request flows through the microservices. To do so, we use `Sleuth` and `Zipkin`.
- `Spring Cloud Sleuth`: provides an API for a distributed tracing solution for Spring Cloud. Spring Cloud Sleuth is able to trace your requests and messages so that you can correlate that communication to corresponding log entries.
- `Zipkin`: a distributed tracing system. It helps gather the timing data needed to troubleshoot latency problems in service architectures. If you have a `trace ID` in a log file, you can jump directly to it.
If the `spring-cloud-sleuth-zipkin` dependency is available and `spring.zipkin.baseUrl` is set in the Spring profile, then the app will generate and report Zipkin-compatible traces via HTTP.
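For example, the relevant configuration might look like this (the URL and sampling rate are illustrative assumptions):

```yaml
# application.yml (values are illustrative)
spring:
  zipkin:
    base-url: http://localhost:9411   # where the Zipkin server listens
  sleuth:
    sampler:
      probability: 1.0                # trace every request (use a lower rate in production)
```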
The following Zipkin UI shows the flow of a request, with its Trace ID, when a client calls the system.
`Spring Cloud Gateway`, also known as an `API Gateway` or `Load Balancer`, is a service that allows you to route traffic to different microservices through a single endpoint. When a client sends a request, the request goes through the Load Balancer, and the Load Balancer redirects the request to the right microservice.
The following diagram shows how a request is routed, based on its `path`, from the Load Balancer to the microservices in the system.
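As a sketch, path-based routes like the ones in the diagram can be declared in the gateway's configuration (route ids, service names, and paths below are illustrative assumptions):

```yaml
# application.yml of the gateway (service names and paths are illustrative)
spring:
  cloud:
    gateway:
      routes:
        - id: customer
          uri: lb://customer          # lb:// resolves the service via the discovery server
          predicates:
            - Path=/api/v1/customers/**
        - id: product
          uri: lb://product
          predicates:
            - Path=/api/v1/products/**
```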
A `message queue` is a form of asynchronous service-to-service communication used in serverless and microservices architectures. In this project I used `RabbitMQ` as the message queue.
Benefits of `RabbitMQ`:
- Loose coupling
- Performance
- Asynchronous
- Language Agnostic
- Acknowledge
- Management UI
- Plugin
- Cloud
`RabbitMQ` is the most widely deployed open-source message broker. It supports multiple `messaging protocols`.
`AMQP 0-9-1` (Advanced Message Queuing Protocol) is a `messaging protocol` that enables conforming client applications to communicate with conforming messaging middleware `brokers`.
`Brokers` receive messages from publishers (applications that publish them, also known as producers) and route them to consumers (applications that process them).
When messages are published from `producers` to an `exchange`, the exchange distributes the messages to `queues` according to the `bindings` between the exchange and the queues, each with a `routing-key`. The broker then either delivers the messages to `consumers` subscribed to the queues, or the consumers fetch/pull messages from a queue on demand.
The following diagram shows how a message is routed from `producer` to `consumer` through the `message broker` in the system.
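The exchange/binding/queue flow described above can be illustrated with a small plain-Java simulation. This is only a toy sketch of the direct-exchange routing idea, not the real RabbitMQ client API, and the routing key used is made up:

```java
import java.util.*;

// Toy simulation of AMQP-style direct-exchange routing (illustrative only).
// A binding ties a queue to the exchange under a routing key; a published message
// is copied into every queue whose binding key equals the message's routing key,
// and consumers then pull from their queue.
public class DirectExchange {
    private final Map<String, List<Deque<String>>> bindings = new HashMap<>();

    // Step 1: bind a queue to the exchange with a routing key.
    public void bind(String routingKey, Deque<String> queue) {
        bindings.computeIfAbsent(routingKey, k -> new ArrayList<>()).add(queue);
    }

    // Step 2: publish a message; the exchange routes it to all matching queues.
    public void publish(String routingKey, String message) {
        for (Deque<String> queue : bindings.getOrDefault(routingKey, List.of())) {
            queue.addLast(message);
        }
    }

    public static void main(String[] args) {
        DirectExchange exchange = new DirectExchange();
        Deque<String> notificationQueue = new ArrayDeque<>();

        // The routing key "internal.notification" is a made-up example.
        exchange.bind("internal.notification", notificationQueue);
        exchange.publish("internal.notification", "order placed: send email");
        exchange.publish("some.other.key", "not routed anywhere");

        // Step 3: the consumer (e.g. the notification service) pulls from its queue.
        System.out.println(notificationQueue.pollFirst());
    }
}
```

Messages published with a routing key that has no binding are simply not routed, which mirrors the default behavior of a direct exchange.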
To package and run the microservices as jar files, you need to add two things to each child `pom.xml`:
- `<packaging>jar</packaging>`: packages the project as a `jar` file. A `jar` is a package file format typically used to aggregate many Java class files and associated metadata and resources into one file for distribution.
- `spring-boot-maven-plugin`: allows you to package executable `jar` and `war` archives to run the Spring Boot application.
Most importantly, you also need to define the `spring-boot-maven-plugin` with an execution `goal` of `repackage` in the parent `pom.xml`. From that, each microservice can reference the parent `pom.xml` to repackage the existing `jar` and `war` archives after running the `mvn clean package` or `mvn clean install` command, so that they can be executed from the command line using `java -jar`.
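Put together, the child `pom.xml` additions might look like this sketch (the plugin version is assumed to be inherited from the parent's `pluginManagement`):

```xml
<!-- In each child pom.xml (sketch) -->
<packaging>jar</packaging>

<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <executions>
                <execution>
                    <!-- repackage turns the plain jar into an executable fat jar -->
                    <goals>
                        <goal>repackage</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
```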
`Jib` containerizes your `Maven` or `Gradle` project, building optimized `Docker` and `OCI` images for your Java applications without a `Dockerfile`, without requiring a Docker installation, and without deep mastery of Docker best practices.
In your Maven Java project, add the `jib-maven-plugin` to your `pom.xml`.
You can check out the GitHub and Google links for more information and implementation details.
In my case, I need to push the image both to the `local` Docker daemon and to `DockerHub`, so I created a `Maven Profile` in `pom.xml` to fully control when the build is executed and where the image is stored.
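One way to structure such profiles is sketched below; the profile ids, image names, and plugin version are placeholders, not taken from this project. Jib's `dockerBuild` goal builds into the local Docker daemon, while `build` pushes to a remote registry:

```xml
<!-- Sketch: run with e.g. `mvn clean package -P build-docker-image` -->
<profiles>
    <profile>
        <id>build-docker-image</id>  <!-- push to the local Docker daemon -->
        <build>
            <plugins>
                <plugin>
                    <groupId>com.google.cloud.tools</groupId>
                    <artifactId>jib-maven-plugin</artifactId>
                    <version>3.2.1</version>
                    <configuration>
                        <to>
                            <image>${project.artifactId}:${project.version}</image>
                        </to>
                    </configuration>
                    <executions>
                        <execution>
                            <phase>package</phase>
                            <goals>
                                <goal>dockerBuild</goal>
                            </goals>
                        </execution>
                    </executions>
                </plugin>
            </plugins>
        </build>
    </profile>
    <profile>
        <id>push-docker-image</id>   <!-- push to DockerHub -->
        <build>
            <plugins>
                <plugin>
                    <groupId>com.google.cloud.tools</groupId>
                    <artifactId>jib-maven-plugin</artifactId>
                    <version>3.2.1</version>
                    <configuration>
                        <to>
                            <image>docker.io/your-dockerhub-user/${project.artifactId}:${project.version}</image>
                        </to>
                    </configuration>
                    <executions>
                        <execution>
                            <phase>package</phase>
                            <goals>
                                <goal>build</goal>
                            </goals>
                        </execution>
                    </executions>
                </plugin>
            </plugins>
        </build>
    </profile>
</profiles>
```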
Once you have an `image` for each microservice, you can deploy your microservices to `Docker`, each running as a `container`, with a `docker-compose.yml` file.
Because our microservices run as independent `services`, we need to monitor how many resources each microservice is using, such as `CPU`, `Memory`, `Disk I/O`, etc. All this `information`/`metrics` can be exposed to and collected by `Prometheus`, and the data visualized in `Grafana`.
- `Prometheus`: an open-source systems monitoring toolkit. It collects and stores its metrics as time-series data, i.e. metrics information is stored with the timestamp at which it was recorded. It uses `PromQL`, which lets the user select and aggregate time-series data in real time; the result can be shown as a `graph`.
- `Grafana`: an open-source system that enables you to query, visualize, alert on, and explore your metrics, logs, and traces wherever they are stored.
The following diagram shows how `Prometheus` and `Grafana` work together to monitor the microservices.
First, we need to add a `library` to expose all the metrics to `Prometheus`, and that library is `Micrometer`. In a Spring Boot application, a Prometheus `actuator` endpoint is `auto-configured` in the presence of `Spring Boot Actuator`.
- `Micrometer` is a `library` for `Java` that allows you to capture metrics and expose them to several different tools such as `Prometheus`.
- `Spring Boot Actuator` is mainly used to expose operational information about the running application, such as health, metrics, info, dump, etc., through `HTTP endpoints`.
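With both on the classpath, exposing the endpoint is typically a one-line Actuator setting; as a sketch (the exact endpoint list is an assumption):

```yaml
# application.yml (sketch): expose the Prometheus endpoint via Actuator
management:
  endpoints:
    web:
      exposure:
        include: health,info,prometheus   # metrics then appear at /actuator/prometheus
```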
Then `Prometheus` uses its `prometheus.yml` file to know which microservices are available, based on the configured `targets`, and `scrapes/pulls` all the metrics from them.
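A minimal `prometheus.yml` along these lines does the scraping; the job name, host, port, and interval are placeholder assumptions:

```yaml
# prometheus.yml (sketch; job name, host, and port are placeholders)
scrape_configs:
  - job_name: "customer-service"
    metrics_path: /actuator/prometheus    # where Micrometer exposes the metrics
    scrape_interval: 15s
    static_configs:
      - targets: ["host.docker.internal:8080"]   # the microservice to scrape
```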
After all the metrics are stored in `Prometheus`, `Grafana` picks up `Prometheus` as a `datasource`, and a series of dashboards is created to visualize the metrics.
One thing I discovered is that we can also add `Zipkin` as a `datasource` to see the flow of requests, so you no longer need to open the Zipkin UI if you don't want to.
Kubernetes, also known as K8S, is an `application orchestrator`: an `open-source` system developed by `Google`, written in `Golang`, used for:
- Deploying & managing applications (`pod`, `container`).
- Scaling up & down according to demand.
- Zero-downtime deployments.
- Rollbacks.
- And more.
Cluster:
- A set of nodes.
- A node is a virtual (`VM`) or `physical machine`.
- A node can run in the cloud, such as on AWS, Azure, or Google Cloud.
Kubernetes Cluster Architecture:
The Kubernetes cluster is divided into two kinds of nodes: the `master node` and the `worker nodes`.
- The `master node`, also known as the `Control Plane`, is responsible for managing the cluster (the brain of the cluster), where all the decisions are made.
  - `API Server`: the frontend of the Kubernetes Control Plane and the `main entry point`.
    - All `communication`, both `external` and `internal`, goes through the `API Server`.
    - Exposes a `RESTful API` on port 443.
  - `Cluster store`:
    - Stores the `configuration` and `state` of the entire cluster.
    - A distributed key-value data store.
    - `etcd`: the single source of truth (data).
  - `Scheduler`:
    - Watches for `new workloads/pods` and `assigns` them to a `node` based on several scheduling factors:
      - Is the node `healthy`?
      - Does it have `enough resources`?
      - Is the `port available`?
      - etc.
  - `Controller Manager`:
    - Manages the control loops (the controller of controllers). A `control loop` is a non-terminating loop that regulates the `state` of the system by `watching` the shared `state` `(Desired State & Current State)` of the cluster through the `API Server`.
  - `Cloud Controller Manager`:
    - Responsible for interacting with the underlying cloud provider, such as `AWS`, `Azure`, or `Google Cloud`, e.g. to create a `Load Balancer`.
- The `worker nodes` are responsible for running the applications.
  - `Kubelet`: the `main agent` that runs on every single node.
    - Receives `Pod` definitions from the `API Server`.
    - Interacts with the `container runtime` to run the `containers` of the `Pod`.
    - Reports `Node` and `Pod` state to the `Master Node` through the `API Server`.
  - `Container runtime`:
    - Responsible for running `containers`.
    - `Pulls images` from container registries such as `DockerHub`, `ECR`, `ACR`, or `GCR`.
    - `Starts` and `stops` containers.
  - `Kube-proxy`: an `agent` that runs on every node.
    - Responsible for local cluster `networking`.
    - Each `node` gets its own `unique IP address`.
    - Implements part of the Kubernetes `Service` concept.
    - Maintains `network rules` to allow communication to `pods` from `inside` and `outside` the cluster.
    - Redirects traffic to the `Pod` that matches the `Service` via `label` and `selector`.
`Minikube`:
- Great community
- Add-ons and lots of features
- Great documentation
`Kubectl`:
- The Kubernetes command-line tool.
- Interacts with the cluster from your local machine.
- Runs commands against your cluster, such as deploy, inspect, edit, debug, view logs, etc.
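The resources that `kubectl` applies are plain YAML manifests; a minimal sketch of a Deployment plus its Service might look like this (names, image, ports, and replica count are placeholder assumptions):

```yaml
# Sketch of a manifest applied with `kubectl apply -f`; values are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer
spec:
  replicas: 2                      # scale up/down by changing this
  selector:
    matchLabels:
      app: customer
  template:
    metadata:
      labels:
        app: customer
    spec:
      containers:
        - name: customer
          image: your-dockerhub-user/customer:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: customer
spec:
  selector:
    app: customer                  # kube-proxy routes traffic to pods matching this label
  ports:
    - port: 80
      targetPort: 8080
```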
AWS EKS (Elastic Kubernetes Service) is a managed service that you can use to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes.
Before getting started, you need to install the AWS CLI.
AWS CLI – A command line tool for working with AWS services, including Amazon EKS.
Steps to deploy to AWS EKS:
- Create the cluster.
- Add a node group.
- Update the `kube-config` so that `kubectl` can connect to your cluster in the cloud, with the command below:
  - `aws eks update-kubeconfig --region <your region> --name <cluster name>`
- Use `kubectl` to apply all the k8s resources, i.e. deployment, service, secret, configmap, etc.
One thing to note here is that you need to:
- Have a `database` in AWS RDS and get its `endpoint`, `username`, and `password`, then put this information into the `spring profile`.
- Configure the `Security Group` so that AWS EKS can connect to AWS RDS.
The Prometheus Operator
provides Kubernetes native deployment and management of Prometheus
and related monitoring components. The purpose of this project is to simplify and automate the configuration of a Prometheus based monitoring stack for Kubernetes clusters.
The Prometheus operator includes, but is not limited to, the following features:
- `Kubernetes Custom Resources`: Use Kubernetes custom resources to deploy and manage Prometheus, Alertmanager, and related components.
- `Simplified Deployment Configuration`: Configure the fundamentals of Prometheus like versions, persistence, retention policies, and replicas from a native Kubernetes resource.
- `Prometheus Target Configuration`: Automatically generate monitoring target configurations based on familiar Kubernetes label queries; no need to learn a Prometheus-specific configuration language.
Prometheus Operator vs. kube-prometheus vs. community helm chart
- `Prometheus Operator`: The Prometheus Operator uses Kubernetes custom resources to simplify the deployment and configuration of `Prometheus`, `Alertmanager`, and related monitoring components.
- `kube-prometheus`: kube-prometheus provides example configurations for a complete cluster monitoring stack based on `Prometheus` and the `Prometheus Operator`. This includes deployment of multiple `Prometheus` and `Alertmanager` instances, metrics exporters such as the `node_exporter` for gathering `node metrics`, scrape target configuration linking Prometheus to various metrics `endpoints`, and example alerting rules for notification of potential issues in the cluster.
- `helm chart`: The prometheus-community/kube-prometheus-stack helm chart provides a similar feature set to kube-prometheus. This chart is maintained by the Prometheus community.
Custom Resource Definitions: A core feature of the `Prometheus Operator` is to monitor the `Kubernetes API` server for changes to specific objects and ensure that the current `Prometheus deployments` match these objects. The Operator acts on the following custom resource definitions (CRDs):
- `Prometheus`: defines a desired Prometheus deployment.
- `Alertmanager`: defines a desired Alertmanager deployment.
- `ThanosRuler`: defines a desired Thanos Ruler deployment.
- `ServiceMonitor`: declaratively specifies how groups of `Kubernetes services` should be `monitored`. The Operator automatically generates Prometheus scrape configuration based on the current state of the objects in the `API server`.
- `PodMonitor`: declaratively specifies how groups of pods should be monitored. The Operator automatically generates Prometheus scrape configuration based on the current state of the objects in the API server.
- `Probe`: declaratively specifies how groups of ingresses or static targets should be monitored. The Operator automatically generates Prometheus scrape configuration based on the definition.
- `PrometheusRule`: defines a desired set of Prometheus alerting and/or recording rules. The Operator generates a rule file, which can be used by Prometheus instances.
- `AlertmanagerConfig`: declaratively specifies subsections of the Alertmanager configuration, allowing routing of alerts to custom receivers, and setting inhibit rules.
In this step I chose `kube-prometheus` to monitor my Kubernetes cluster. `kube-prometheus` collects `Kubernetes manifests`, `Grafana` dashboards, and `Prometheus` rules, combined with documentation and scripts, to provide easy-to-operate, end-to-end Kubernetes cluster monitoring with Prometheus using the `Prometheus Operator`.
Components included in this package:
- The Prometheus Operator
- Highly available Prometheus
- Highly available Alertmanager
- Prometheus node-exporter
- Prometheus Adapter for Kubernetes Metrics APIs
- kube-state-metrics
- Grafana
This stack is meant for cluster monitoring, so it is pre-configured to collect metrics from all Kubernetes components. In addition to that it delivers a default set of dashboards and alerting rules. Many of the useful dashboards
and alerts come from the kubernetes-mixin project, similar to this project it provides composable jsonnet as a library for users to customize to their needs.
Continuous integration (CI) and continuous delivery (CD), also known as CI/CD, embodies a culture, operating principles, and a set of practices that application development teams use to deliver code changes more frequently and reliably.
Continuous integration
is the practice of integrating all your code changes into the main branch of a shared source code repository early and often, automatically testing each change when you commit or merge them, and automatically kicking off a build.
Continuous delivery
picks up where continuous integration ends, and automates application delivery to selected environments, including production, development, and testing environments. Continuous delivery is an automated way to push code changes to these environments any time.
GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform that allows you to automate your build, test, and deployment pipeline. You can create workflows that build and test every pull request to your repository, or deploy merged pull requests to production.
The following diagram shows how the CI-CD workflows run with GitHub Actions.
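As a sketch, a minimal workflow file for such a pipeline could look like this; the file name, branch names, and Java version are assumptions, not taken from this project:

```yaml
# .github/workflows/ci.yml (sketch; branches and Java version are assumptions)
name: ci
on:
  push:
    branches: [main]
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-java@v3
        with:
          distribution: temurin
          java-version: "17"
      - name: Build and test all modules
        run: mvn -B clean verify
```

A delivery job (e.g. building images with Jib and applying the k8s manifests) would typically be added as a second job gated on the build succeeding.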