We created the project with the following Gradle command: gradle init
Please have a look at the following references:
Docker is a platform for developing, shipping, and running applications in containers. It simplifies application deployment by providing a consistent environment across different systems. With Docker, developers can build, test, and deploy applications quickly and efficiently, improving collaboration and productivity.
Docker architecture consists of six main components: server, client, container, image, volume, and registry.

- Docker Server typically refers to the Docker Engine, which is the core component of the Docker platform responsible for creating and managing containers. It provides an API and command-line interface (CLI) for interacting with containers, allowing users to build, run, and manage containerized applications. The Docker Engine runs as a daemon process on the host operating system and manages container lifecycle, networking, storage, and other container-related tasks.
- The Docker Client is a command-line interface (CLI) tool used to interact with the Docker Engine, allowing users to build, run, and manage Docker containers. It sends commands to the Docker Engine via its REST API, enabling users to perform various tasks such as creating, starting, stopping, and inspecting containers, as well as managing Docker images, networks, volumes, and more. The Docker Client provides a simple and intuitive interface for developers and administrators to work with Docker containers and related resources.
- A Docker Container is a lightweight, standalone, and executable package that contains everything needed to run a software application, including code, runtime, system libraries, and dependencies. Containers provide a consistent and isolated environment for running applications, allowing them to be easily deployed across different systems without changes. Docker containers are portable, scalable, and efficient, making them ideal for packaging, shipping, and deploying applications in various environments, from development to production.
- A Docker Image is a preconfigured template that specifies what should be included in a Docker container. Usually, images are downloaded from registries like Docker Hub. However, it’s also possible to create a custom image with the help of a Dockerfile (see the sketch after this list).
- The Docker Registry is a central repository that stores and manages Docker images. It is a server-based system that lets users store and share Docker images with others, making it easy to distribute and deploy applications. The most notable Docker registry is Docker Hub.
- Docker Volumes are a feature that allows data to persist beyond the lifespan of a container. They provide a way to manage and store data separately from the container’s filesystem, enabling data sharing between containers and between the host machine and containers. Volumes are commonly used for storing databases, log files, configuration files, and other persistent data in Dockerized applications. They offer flexibility, scalability, and performance benefits, making them essential for building reliable and scalable container-based systems.
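A custom image for a Spring Boot service could be built from a Dockerfile like the following hedged sketch (the base image and jar path are assumptions, not taken from this project):

```dockerfile
# Build a custom image on top of an official JRE base image
FROM eclipse-temurin:17-jre
# Copy the fat jar produced by the Gradle build into the image
COPY build/libs/app.jar app.jar
# Start the application when the container starts
ENTRYPOINT ["java", "-jar", "/app.jar"]
```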
The official Spring guide says we shouldn’t use Kotlin data classes with Spring Data JPA:
"Here we don’t use data classes with val properties because JPA is not designed to work with immutable classes or the methods generated automatically by data classes. If you are using other Spring Data flavor, most of them are designed to support such constructs so you should use classes like data class User (val login: String, …) when using Spring Data MongoDB, Spring Data JDBC, etc."
The goal of the Spring Data repository abstraction is to significantly reduce the amount of boilerplate code required to implement data access layers for various persistence stores. For more details, have a look at the Spring Data reference documentation.
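To illustrate how little code a repository requires, here is a hedged sketch building on the hypothetical User entity above (the derived query method is our example, parsed by Spring Data from the method name):

```kotlin
import org.springframework.data.repository.CrudRepository

// Spring Data generates the implementation at runtime; CRUD methods and the
// derived query below come for free, with no SQL or boilerplate code.
interface UserRepository : CrudRepository<User, Long> {
    fun findByLogin(login: String): User?
}
```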
We added the No-arg plugin, which generates an additional zero-argument constructor for classes with a specific annotation. This allows the Java Persistence API (JPA) to instantiate a class even though it doesn’t have a zero-parameter constructor from the Kotlin or Java point of view.
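In build.gradle.kts this could look as follows (a hedged sketch; the plugin version is an assumption). The kotlin("plugin.jpa") plugin is a wrapper around the No-arg plugin, preconfigured for the @Entity, @Embeddable, and @MappedSuperclass annotations:

```kotlin
plugins {
    kotlin("jvm") version "1.9.24"
    // Wrapper around the no-arg plugin: generates a synthetic zero-argument
    // constructor for classes carrying the JPA annotations listed above
    kotlin("plugin.jpa") version "1.9.24"
}
```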
Because automatic index creation is turned off by default in Spring Data MongoDB, we have to add the following configuration: spring.data.mongodb.auto-index-creation=true
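With that property set, index declarations on mapped classes are created at startup. A hedged sketch (class and field names are illustrative):

```kotlin
import org.springframework.data.mongodb.core.index.Indexed
import org.springframework.data.mongodb.core.mapping.Document

@Document
class Product(
    // With auto-index-creation=true, this unique index is created automatically
    @Indexed(unique = true)
    var productId: Int = 0,
    var name: String = ""
)
```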
We also have to add the following configuration so that Hibernate creates the database schema at startup (any existing schema is dropped first, so this setting is only suitable for development and testing): spring.jpa.hibernate.ddl-auto=create
Testcontainers is a Java library that provides lightweight, disposable containers for integration testing. It allows developers to define and manage Docker containers directly within their test code, making it easy to set up isolated environments for testing against external dependencies such as databases, message brokers, and web services. Testcontainers automates the container lifecycle, handling container creation, startup, and teardown, ensuring consistent and reliable test environments. It supports a wide range of containerized technologies and integrates seamlessly with popular testing frameworks like JUnit and TestNG. Overall, Testcontainers simplifies the process of writing and running integration tests by providing fast, reproducible, and portable test environments.
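A minimal JUnit 5 sketch (assuming the testcontainers-junit-jupiter and MongoDB modules are on the test classpath; the image tag is an assumption):

```kotlin
import org.junit.jupiter.api.Assertions.assertTrue
import org.junit.jupiter.api.Test
import org.testcontainers.containers.MongoDBContainer
import org.testcontainers.junit.jupiter.Container
import org.testcontainers.junit.jupiter.Testcontainers

@Testcontainers
class MongoContainerTest {

    companion object {
        // Started once before the tests and torn down automatically afterwards
        @Container
        @JvmStatic
        val mongo = MongoDBContainer("mongo:6.0")
    }

    @Test
    fun `container is running`() {
        // The replica-set connection string can be passed on to Spring Data
        println(mongo.replicaSetUrl)
        assertTrue(mongo.isRunning)
    }
}
```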
According to 'The Reactive Manifesto', a reactive system must be Responsive, Resilient, Elastic, and Message Driven.
Reactive Programming is a programming paradigm that allows developers to build applications that react to changes in data streams and handle asynchronous events efficiently. It emphasizes declarative and event-driven programming models, making it easier to manage complex asynchronous operations.
In summary, "asynchronous streams of data with non-blocking back pressure" describes a programming model where data flows asynchronously between components, with operations performed in a non-blocking fashion and back pressure applied to control the flow of data and prevent downstream components from being overwhelmed. This model is commonly used in reactive programming frameworks and systems to build scalable, responsive, and resilient applications.
When one component is struggling to keep up, the system as a whole needs to respond in a sensible way. It is unacceptable for the component under stress to fail catastrophically or to drop messages in an uncontrolled fashion. Since it can’t cope and it can’t fail, it should communicate the fact that it is under stress to upstream components and so get them to reduce the load. This back-pressure is an important feedback mechanism that allows systems to gracefully respond to load rather than collapse under it. The back-pressure may bubble all the way up to the user, at which point responsiveness may degrade, but this mechanism will ensure that the system is resilient under load, and will provide information that may allow the system itself to apply other resources to help distribute the load; see Elasticity.
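Project Reactor makes this feedback loop visible. A hedged sketch (reactor-core on the classpath is assumed):

```kotlin
import reactor.core.publisher.Flux

fun main() {
    Flux.range(1, 100)
        .log()          // logs the request(n) signals flowing upstream
        .limitRate(10)  // downstream demand is propagated upstream in small batches (initially 10)
        .subscribe { println("consumed $it") }
}
```

The log output shows request(n) signals travelling upstream: the producer only emits what downstream has asked for, which is exactly the back-pressure mechanism described above.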
In reactive programming, non-blocking (or non-blocking I/O) refers to a programming paradigm where operations do not block the execution thread while waiting for a result. Instead, they allow the thread to continue with other tasks or to be returned to a thread pool for further use.
In a non-blocking system, when a piece of code initiates an I/O operation (such as reading from a file, querying a database, or making an HTTP request), it doesn’t wait for the operation to complete before moving on to the next task. Instead, it registers a callback or a promise (depending on the programming model) and allows the thread to continue with other work.
When the I/O operation completes, the system invokes the callback or fulfills the promise, and the associated code is executed. This way, the thread doesn’t sit idle, waiting for the I/O operation to finish, which can lead to more efficient use of system resources and better scalability.
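A hedged sketch of this callback style using Reactor's Mono (the delay simulates a non-blocking I/O call; names are illustrative):

```kotlin
import reactor.core.publisher.Mono
import java.time.Duration

fun main() {
    Mono.delay(Duration.ofMillis(500))                 // simulated non-blocking I/O
        .map { "result" }
        .subscribe { println("I/O completed: $it") }   // callback runs when the "I/O" finishes

    // The main thread is not blocked while the operation is in flight
    println("main thread keeps working...")
    Thread.sleep(1000)                                 // keep the JVM alive for the demo
}
```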
Reactive Systems rely on asynchronous message-passing to establish a boundary between components that ensures loose coupling, isolation and location transparency. This boundary also provides the means to delegate failures as messages. Employing explicit message-passing enables load management, elasticity, and flow control by shaping and monitoring the message queues in the system and applying back-pressure when necessary. Location transparent messaging as a means of communication makes it possible for the management of failure to work with the same constructs and semantics across a cluster or within a single host. Non-blocking communication allows recipients to only consume resources while active, leading to less system overhead.
In reactive systems, an "upstream" refers to the source of data or events in a reactive data flow. It represents the initial producer or publisher of data, which emits data items or events that are then processed by downstream components.
In reactive systems, a "downstream" component refers to a consumer or subscriber of data or events in a reactive data flow. It represents the components that consume or react to the data emitted by upstream components.
In reactive systems, a "hot stream" component refers to a source of data or events that emits items regardless of whether there are any subscribers. Hot streams are continuously producing data, and subscribers can join the stream at any time to receive the emitted items. Unlike cold streams, hot streams do not start producing data upon subscription.
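The difference is easy to demonstrate with a Reactor sink (a hedged sketch; reactor-core 3.4+ is assumed):

```kotlin
import reactor.core.publisher.Sinks

fun main() {
    // A hot source: emissions happen regardless of subscribers
    val sink = Sinks.many().multicast().directBestEffort<Int>()
    val hot = sink.asFlux()

    sink.tryEmitNext(1)                               // no subscriber yet: dropped
    hot.subscribe { println("subscriber A: $it") }
    sink.tryEmitNext(2)                               // seen by A only
    hot.subscribe { println("subscriber B: $it") }
    sink.tryEmitNext(3)                               // seen by both A and B
}
```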
Even though sending asynchronous messages is preferred over synchronous API calls, it comes with challenges of its own. We can use Spring Cloud Stream to handle some of them (a configuration sketch follows the list):

- Consumer groups
- Retries and dead-letter queues
- Guaranteed order and partitions
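A hedged configuration sketch in properties format (the binding names products-in-0/products-out-0 and the Kafka binder are assumptions):

```properties
# Consumer group: only one instance in the group processes each message
spring.cloud.stream.bindings.products-in-0.destination=products
spring.cloud.stream.bindings.products-in-0.group=productsGroup
# Retries: re-deliver a failing message up to three times
spring.cloud.stream.bindings.products-in-0.consumer.max-attempts=3
# Dead-letter queue: move messages that still fail to a DLQ (Kafka binder property)
spring.cloud.stream.kafka.bindings.products-in-0.consumer.enable-dlq=true
# Partitions: messages with the same key go to the same partition, guaranteeing their order
spring.cloud.stream.bindings.products-out-0.producer.partition-key-expression=headers['partitionKey']
spring.cloud.stream.bindings.products-out-0.producer.partition-count=2
```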
Unfortunately, Kafka doesn’t come with any graphical tools for inspecting topics, partitions, and the messages placed within them. Instead, we can run CLI commands in the Kafka Docker container.
We can use the following command for connecting to the Kafka container:
docker exec -it microservices_kafka_1 /bin/bash
If we want to see a list of the topics, we can run the following command:
/usr/bin/kafka-topics --bootstrap-server localhost:9092 --list
To see the partitions in a specific topic, for example, the products topic, run the following command:
/usr/bin/kafka-topics --describe --bootstrap-server localhost:9092 --topic products
To see all the messages in a specific topic, for example, the products topic, run the following command:
/usr/bin/kafka-console-consumer --bootstrap-server localhost:9092 --topic products --from-beginning
To see all the messages in a specific partition, for example, partition 1 in the products topic, run the following command:
/usr/bin/kafka-console-consumer --bootstrap-server localhost:9092 --topic products --from-beginning --partition 1
Service discovery is probably the most important support function required to make a landscape of cooperating microservices production-ready. A discovery service can be used to keep track of existing microservices and their instances. The first discovery service that Spring Cloud supported was Netflix Eureka. Keeping track of microservice instances involves the following challenges:
- New instances can start up at any point in time
- Existing instances can stop responding and eventually crash at any point in time
- Some of the failing instances might be okay after a while and should start to receive traffic again, while others will not and should be removed from the service registry
- Some microservice instances might take some time to start up; that is, just because they can receive HTTP requests doesn’t mean that traffic should be routed to them
- Unintended network partitioning and other network-related errors can occur at any time
Netflix Eureka handles these challenges as follows:

- Whenever a microservice instance starts up - for example, the Review service - it registers itself to one of the Eureka servers.
- On a regular basis, each microservice instance sends a heartbeat message to the Eureka server, telling it that the microservice instance is okay and is ready to receive requests.
- Clients - for example, the Product Composite service - use a client library that regularly asks the Eureka service for information about available services. Moreover, each client keeps a local cache of the registered services.
- When a client needs to send a request to another microservice, it already has a list of available instances in its local cache and can pick one of them without asking the discovery server. Typically, available instances are chosen in a round-robin fashion; that is, they are called one after another before the first one is called once more.
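On the caller side, a hedged Kotlin sketch (assuming spring-cloud-starter-netflix-eureka-client and WebFlux on the classpath; the service name "review" is an example):

```kotlin
import org.springframework.cloud.client.loadbalancer.LoadBalanced
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.web.reactive.function.client.WebClient

@Configuration
class WebClientConfig {

    // A load-balanced WebClient resolves logical service names such as
    // "http://review" against the instances cached from Eureka,
    // round-robin by default.
    @Bean
    @LoadBalanced
    fun loadBalancedWebClientBuilder(): WebClient.Builder = WebClient.builder()
}
```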
In a system landscape of microservices, it is in many cases desirable to expose some of the microservices to the outside of the system landscape and hide the remaining microservices from external access. The exposed microservices must be protected against requests from malicious clients.
Initially, Spring Cloud used Netflix Zuul v1 as its edge server. Since the Spring Cloud Greenwich release, it has been recommended to use Spring Cloud Gateway instead. Spring Cloud Gateway comes with similar support for critical features, such as URL path-based routing and the protection of endpoints via the use of OAuth 2.0 and OpenID Connect (OIDC).
One important difference between Netflix Zuul v1 and Spring Cloud Gateway is that Spring Cloud Gateway is based on non-blocking APIs that use Spring 5, Project Reactor, and Spring Boot 2, while Netflix Zuul v1 is based on blocking APIs.
An edge server needs to be able to do the following (a routing sketch follows the list):

- Hide internal services that should not be exposed outside their context; that is, only route requests to microservices that are configured to allow external requests.
- Expose external services and protect them from malicious requests; that is, use standard protocols and best practices such as OAuth, OIDC, JWT tokens, and API keys to ensure that the clients are trustworthy.
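A hedged routing sketch in properties format (the route id and service name product-composite are assumptions):

```properties
# Route external requests matching the path to the product-composite service,
# resolved via the discovery server (lb:// = client-side load balancing)
spring.cloud.gateway.routes[0].id=product-composite
spring.cloud.gateway.routes[0].uri=lb://product-composite
spring.cloud.gateway.routes[0].predicates[0]=Path=/product-composite/**
```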