Saga pattern with Event sourcing.
Since 2022 (development has stopped; the project reached a state that was enough for me).
First of all, this is a sample project; you need to know that it is not intended for production use.
The patterns used in this project are self-made, so there is no other tooling involved except RabbitMQ.
Patterns that I used:
1- Transactional outbox
2- Saga pattern
3- Event sourcing
Transactions can be published as events, and delivering them reliably is really important. In some scenarios another service can be down for a couple of minutes or hours. In that case you can lose data even if you use a message broker, because the broker can be down as well. (Durability matters.)
The first approach that comes to mind is a CDC (Change Data Capture) tool such as Debezium. For this project, however, I created an outbox table and implemented the logic myself. You can find a reference for the transactional outbox pattern at www.microservices.io.
Each event published to the message broker is also stored in the database.
After storing the event information, I added a scheduler job. (I used a database lock mechanism to stay consistent and avoid duplicate sends.)
@Scheduled(fixedRate = 60000)
@Transactional(readOnly = true)
public void resendMissingOrders() {
    // If an order was received but 10 minutes have passed with no response
    // from the other services, publish it again.
    List<Order> lOrders = orderRepository.findByStatusAndCreatedIsLessThan(
            OrderStatus.ORDER_RECEIVED, LocalDateTime.now().minusMinutes(10));
    applicationEventPublisher.publishEvent(new CreateOrdersEvent(lOrders));
}
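The outbox idea behind that scheduler can be sketched without any framework: the business data and the outgoing event are written together, and the scheduler re-sends rows that never got an acknowledgement. Below is a minimal in-memory sketch of that mechanism; all class and method names here are illustrative, not the project's actual code.

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of the transactional-outbox idea (hypothetical names).
class OutboxSketch {
    record OutboxRecord(String transactionId, String payload, Instant created, boolean acknowledged) {}

    private final Map<String, OutboxRecord> outbox = new ConcurrentHashMap<>();

    // Step 1: store the outgoing event together with the business data
    // (in the real project this happens in the same database transaction).
    void saveOrderWithEvent(String transactionId, String payload) {
        outbox.put(transactionId, new OutboxRecord(transactionId, payload, Instant.now(), false));
    }

    // A downstream service responded: mark the row as done.
    void acknowledge(String transactionId) {
        outbox.computeIfPresent(transactionId,
                (id, r) -> new OutboxRecord(id, r.payload(), r.created(), true));
    }

    // Step 2: the scheduler picks unacknowledged rows older than the timeout and re-sends them.
    List<OutboxRecord> findStale(Instant olderThan) {
        List<OutboxRecord> stale = new ArrayList<>();
        for (OutboxRecord r : outbox.values()) {
            if (!r.acknowledged() && r.created().isBefore(olderThan)) {
                stale.add(r);
            }
        }
        return stale;
    }
}
```

The key property is that losing the broker between step 1 and step 2 loses nothing: the row stays unacknowledged and is simply picked up again on the next scheduler run.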
Only the order service uses the transactional outbox pattern. As you can tell from the name of the pattern, it is the outgoing transactions that need to be made safe.
Actually there are two types of saga pattern:
* Choreography-based saga
* Orchestration-based saga
I designed a choreography-based saga, with RabbitMQ providing the event-driven architecture.
There are some REST APIs as well, but the internal communication of the system runs over RabbitMQ. Postman scripts are inside the repository; you can use them too.
For example:
1- Order Service APIs.
POST /saga/v1/order # The client sends an order request to start everything.
GET /saga/v1/order/transaction/{transactionId} # When an order is started, a transaction id is generated; the user can fetch the result with this endpoint.
2- Payment Service APIs
GET /saga/v1/payment/{paymentId} # The client can query payment information with the paymentId returned when the order is placed.
3- Stock Service APIs
GET /saga/v1/stock # This endpoint checks the stock quantity of products. Predefined products are created at runtime (mock data).
4- Event Store Service APIs.
This is the newest addition; its main purpose is to keep events and payloads in the database for event sourcing.
POST /v1/event/rollback/transaction/{transactionId} # An interesting feature: whether Order, Payment, and Stock are all completed or only some of them are (an inconsistent state), this reverts the completed steps back to their old state.
GET /v1/event/transaction/{transactionId} # The client can query event names for a given transaction. (Not all events are listed, because the internal ones are not useful.)
Let's return to the saga pattern and the most annoying part of transaction-management patterns: writing rollback code. It increases development and maintenance time, but it is really useful. Suppose you need to delete a specific transaction from the system; then you only need to send a single request to the event store microservice to undo all completed steps.
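The rollback logic in a choreography-based saga boils down to one rule: undo the completed steps in reverse order of completion. A small framework-free sketch of that rule, with hypothetical names not taken from the project:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Illustrative sketch: each completed saga step registers a compensating action,
// and rollback() replays those actions in reverse order (all names hypothetical).
class CompensationSketch {
    private final Deque<Runnable> compensations = new ArrayDeque<>();
    final List<String> log = new ArrayList<>();

    void completeStep(String name, Runnable undo) {
        log.add(name + "_COMPLETED");
        compensations.push(undo); // the last completed step is undone first
    }

    void rollback() {
        while (!compensations.isEmpty()) {
            compensations.pop().run();
        }
    }
}
```

Writing one compensating action per step is exactly the development overhead mentioned above, but once the actions exist, the rollback itself is mechanical.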
The event sourcing pattern addresses the problem of "how to atomically update the database and send messages to a message broker?" My implementation is a little different: I created an event store microservice that keeps events and payloads. It does not orchestrate anything, but it has functionality to revert to an old state, create a new order, and so on. (There is no subscription to the event store.)
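At its core, such an event store is an append-only log of event names and payloads keyed by transaction id, which is enough to back a query like GET /v1/event/transaction/{transactionId}. A minimal sketch with hypothetical names, not the project's actual entities:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative append-only event store keyed by transactionId (hypothetical names).
class EventStoreSketch {
    record StoredEvent(String transactionId, String eventName, String payload) {}

    private final Map<String, List<StoredEvent>> byTransaction = new ConcurrentHashMap<>();

    // Append-only: events are never updated in place, only added in order.
    void append(String transactionId, String eventName, String payload) {
        byTransaction
                .computeIfAbsent(transactionId, id -> new ArrayList<>())
                .add(new StoredEvent(transactionId, eventName, payload));
    }

    // Backs a query such as GET /v1/event/transaction/{transactionId}.
    List<String> eventNames(String transactionId) {
        return byTransaction.getOrDefault(transactionId, List.of())
                .stream().map(StoredEvent::eventName).toList();
    }
}
```

Because the stored payloads describe every completed step, the same log is what makes the rollback endpoint possible: the service can walk the events for a transaction and emit a compensating action for each one.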
There are two more patterns that I used here:
* Database per service
* Domain event (actually it is mixed with status fields, but I still used them)
git clone git@github.com:grkn/Order_Payment_Stock_Saga_Pattern.git
RabbitMQ is required as the message broker.
Run the command below in the top-level project folder to start it.
docker-compose up
Open your IDE
Import the project
Run all four applications. They are not dependent on each other.
POST http://localhost:8089/saga/v1/order
body :
[
  {
    "name" : "order1",
    "quantity": 2
  },
  {
    "name" : "order2",
    "quantity": 2
  },
  {
    "name" : "order3",
    "quantity": 1
  }
]
GET http://localhost:8089/saga/v1/order/transaction/{transactionId}
Response :
{
  "transactionId": "d7c4d073-061d-4f04-8ff6-d1d0de4e1976",
  "name": "order1",
  "quantity": 4,
  "status": "ORDER_FAILED",
  "paymentId": "7260a2ac-233f-441a-aa74-51a9e331afbb"
}
GET http://localhost:8088/saga/v1/payment/{paymentId}
Response :
{
  "status": "PAYMENT_COMPLETED",
  "totalPrice": 531.0,
  "transactionId": "506cc494-3594-46a4-8f52-780f7668513d"
}
Of course there are some failure cases: the buy operation is mocked and sometimes returns an error.
Stock is also limited, so you may not be able to buy the necessary amount; check it with the stock endpoint.
GET http://localhost:8087/saga/v1/stock
Response :
[
  {
    "name": "order1",
    "quantity": 13
  },
  {
    "name": "order2",
    "quantity": 16
  },
  {
    "name": "order3",
    "quantity": 15
  },
  {
    "name": "order4",
    "quantity": 11
  },
  {
    "name": "order5",
    "quantity": 10
  }
]
Each service has its own H2 database, which writes its data to a file under the user folder.
You can easily check statuses from the database.
A Postman collection is included. It is not auto-configured, so simply adjust it to your needs.
The events and status fields are listed below.
ORDER_RECEIVED, ORDER_COMPLETED, ORDER_PENDING, ORDER_FAILED, ORDER_STOCK_COMPLETED
PAYMENT_REQUESTED, PAYMENT_PENDING, PAYMENT_COMPLETED, PAYMENT_FAILED, PAYMENT_AVAILABLE
STOCK_REQUESTED, STOCK_COMPLETED, STOCK_FAILED, STOCK_PENDING
Each service listens to its related queue.
Exchange -> sagaExchange
Order service -> orderQueue
Payment service -> paymentQueue
Stock service -> stockQueue
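The topology above can be mimicked in plain Java to make the routing concrete: one exchange holds bindings from routing keys to queues, and publishing a message delivers it only to the bound queue. This is a toy in-memory model that mirrors the README's names, not Spring AMQP or real RabbitMQ classes (the routing keys are assumptions for illustration):

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Toy in-memory model of sagaExchange -> queue bindings (illustrative only;
// the real broker is RabbitMQ).
class SagaExchangeSketch {
    private final Map<String, Queue<String>> queues = new HashMap<>();
    private final Map<String, String> bindings = new HashMap<>(); // routingKey -> queueName

    void bind(String routingKey, String queueName) {
        queues.putIfAbsent(queueName, new ArrayDeque<>());
        bindings.put(routingKey, queueName);
    }

    // Publish to the exchange: the routing key decides which queue receives the message.
    void publish(String routingKey, String message) {
        String queueName = bindings.get(routingKey);
        if (queueName != null) {
            queues.get(queueName).add(message);
        }
    }

    // Each service consumes only from its own queue.
    String poll(String queueName) {
        Queue<String> q = queues.get(queueName);
        return q == null ? null : q.poll();
    }
}
```

The point of the single shared exchange is that a service never addresses another service directly; it publishes an event with a routing key and the broker decides which queue, and therefore which consumer, receives it.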
I will continue to implement new features as needed.
I will refactor the code over time to keep it clean and readable.
I will keep dependencies up to date.
I have implemented several patterns, and I saw that the most troublesome part of event-driven architecture is KEEPING TRACK OF DOMAIN EVENTS (you need to create at least a minimal set of domain events to ensure your project is working).
I did not use any design patterns, but the code is simpler this way. At some point I will apply several design patterns to separate concerns according to the business logic.
As a result, it was fun to see that it really works: transactions are stable, and even if one of them fails, you can trust the event store service to roll back.
If you try this project, note that all events are logged; you can see that the business logic is simple but the event management is complicated.