This project uses DDD with Hexagonal Architecture, CQRS and Event Sourcing.

The workflow is as follows: the write side sends commands through the commandBus to the command handlers in order to alter the state of the application. Succeeded commands generate resulting events, which are stored in the event store. Finally, the event handlers subscribe to these events and generate the projection / read store.

The only source of truth in an Event Sourcing system is the event store; the data in the read store is simply a derivative of the events generated by the write side. This means the read and write sides can use totally different data structures, and we can replay the events from the event store whenever we want to regenerate the denormalised data in whatever shape we need.
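The replay idea above can be sketched in a few lines of TypeScript. This is a minimal, in-memory illustration (the event and field names are assumptions for the sketch, not the project's actual types): folding the full event stream regenerates the read model from scratch.

```typescript
// Hypothetical event types for the sketch; not the project's real definitions.
type ProductEvent =
  | { type: "ProductCreated"; id: string; name: string }
  | { type: "ProductRenamed"; id: string; name: string };

interface ProductView { id: string; name: string }

// Replay the whole event stream to regenerate the denormalised read model.
function project(events: ProductEvent[]): Map<string, ProductView> {
  const view = new Map<string, ProductView>();
  for (const e of events) {
    switch (e.type) {
      case "ProductCreated":
        view.set(e.id, { id: e.id, name: e.name });
        break;
      case "ProductRenamed": {
        const p = view.get(e.id);
        if (p) p.name = e.name;
        break;
      }
    }
  }
  return view;
}

const events: ProductEvent[] = [
  { type: "ProductCreated", id: "p1", name: "Keyboard" },
  { type: "ProductRenamed", id: "p1", name: "Mechanical Keyboard" },
];
const readModel = project(events);
```

Because the projection is a pure function of the event stream, it can be thrown away and rebuilt in a completely different shape at any time.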
In this example, we use MongoDB for both the event store and the read store: the collection `product_events` stores the events, and the collection `product` projects the last state of the aggregates as the read store model.
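To make the split concrete, here is a hedged sketch of what documents in the two collections might look like. The field names (`aggregateId`, `version`, `occurredOn`, `payload`) are assumptions for illustration, not the project's actual schema.

```typescript
// Assumed shape of a document in the `product_events` collection.
interface StoredEvent {
  aggregateId: string;                 // which aggregate this event belongs to
  type: string;                        // event name, e.g. "product_created"
  version: number;                     // per-aggregate sequence number for ordering
  occurredOn: string;                  // ISO timestamp
  payload: Record<string, unknown>;    // event-specific data
}

const createdEvent: StoredEvent = {
  aggregateId: "p1",
  type: "product_created",
  version: 1,
  occurredOn: new Date(0).toISOString(),
  payload: { name: "Keyboard", price: 90 },
};

// The corresponding last-state document projected into the `product`
// read collection: a flat snapshot, convenient for queries.
const productDoc = { _id: "p1", name: "Keyboard", price: 90, version: 1 };
```

The event document is append-only history; the `product` document is disposable and can always be recomputed from `product_events`.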
The commands are sent by the frontend to the commandBus, which selects the appropriate command handlers for them. The command handlers then prepare the Aggregate Root and apply the relevant business logic to it. If a command succeeds, it results in events which are sent through the eventBus to the event handlers. In this example, the eventBus is implemented using RabbitMQ.
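The dispatch step described above can be sketched as a tiny command bus. This is an illustrative skeleton only; the class and method names (`CommandBus`, `register`, `dispatch`) are assumptions, not the project's actual API.

```typescript
// Minimal sketch of command dispatch; names are illustrative.
interface Command { readonly type: string }
interface DomainEvent { readonly type: string; readonly aggregateId: string }
type Handler = (c: Command) => DomainEvent[];

class CommandBus {
  private handlers = new Map<string, Handler>();

  register(type: string, h: Handler): void {
    this.handlers.set(type, h);
  }

  // Routes a command to its handler; a succeeded command yields events
  // that would then be published on the eventBus.
  dispatch(c: Command): DomainEvent[] {
    const h = this.handlers.get(c.type);
    if (!h) throw new Error(`No handler registered for ${c.type}`);
    return h(c);
  }
}

interface CreateProduct extends Command {
  type: "CreateProduct";
  id: string;
  name: string;
}

const bus = new CommandBus();
bus.register("CreateProduct", (c) => {
  const cmd = c as CreateProduct;
  // The real handler would prepare the Aggregate Root and run
  // business logic here before emitting events.
  return [{ type: "product_created", aggregateId: cmd.id }];
});

const resulting = bus.dispatch(
  { type: "CreateProduct", id: "p1", name: "Keyboard" } as CreateProduct,
);
```

In the real application the returned events would be appended to the event store and published on the message broker rather than returned to the caller.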
- Node.js
- TypeScript
- MongoDB with MongoDB native driver as event store and projections
- node-dependency-injection as an IoC container
- Express (via Inversify Express Utils) as an API framework
- RabbitMQ/Kafka as message broker
- Angular as UI
- Elasticsearch as second repository for backoffice app
This project follows the structure of standard CQRS & Event Sourcing applications available on GitHub. It is highly inspired by CodelyTV's DDD in TypeScript course series (CodelyTV Repo).
Below is the list of components in this project:

- Domain Model (Aggregate Root): contains the business logic required for the application.
- Commands: command classes reflect the intention of the users to alter the state of the application.
- CommandHandlers: the command processors, managed by the CommandBus. A command handler prepares the Aggregate Root and applies business logic to it.
- CommandBus: the command management object which receives incoming commands and selects the appropriate handlers for them.
- Events: the resulting objects describing the changes generated by succeeded commands, which are sent through the EventBus.
- Event Store: the storage for events. This is the only source of truth of the system (the sequence of events generated by the application).
- EventBus: the bus carrying the events which event handlers subscribe to. In this example, Kafka is used to implement this bus.
- Event Handlers: the event processors. These can be projectors or denormalisers that generate the data for the read side on the read storage.
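The components above fit together around the Aggregate Root, which records events rather than mutating state silently. Here is a hedged sketch of that pattern; `ProductAggregate`, `rename`, and `pullEvents` are names invented for the illustration, not the project's real classes.

```typescript
// Illustrative Aggregate Root: business rules guard state changes,
// and every accepted change is recorded as an event.
interface DomainEvent {
  type: string;
  payload: Record<string, unknown>;
}

class ProductAggregate {
  private uncommitted: DomainEvent[] = [];
  private name = "";

  rename(newName: string): void {
    // Business rule enforced by the aggregate, not by the handler.
    if (newName.trim() === "") throw new Error("name must not be empty");
    this.record({ type: "product_renamed", payload: { name: newName } });
  }

  private record(e: DomainEvent): void {
    this.apply(e);
    this.uncommitted.push(e); // later persisted and published on the eventBus
  }

  private apply(e: DomainEvent): void {
    if (e.type === "product_renamed") this.name = e.payload.name as string;
  }

  // The command handler pulls the recorded events after running logic.
  pullEvents(): DomainEvent[] {
    const out = this.uncommitted;
    this.uncommitted = [];
    return out;
  }
}

const agg = new ProductAggregate();
agg.rename("Keyboard");
const pulled = agg.pullEvents();
```

A command handler would call methods like `rename`, then pull the recorded events, append them to the event store, and publish them for the event handlers to project.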
cp .env_template .env
Deploying Docker containers:
docker-compose up
Backend:
npm run test
Acceptance:
docker-compose -f docker-compose.test.yml up --abort-on-container-exit --force-recreate
UI:
cd src/apps/shop/frontend
npm run test
Useful Kafka commands:
docker exec -it shop-kafka /bin/bash
kafka-topics --bootstrap-server localhost:9092 --list
kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic product_created --partition 0
kafka-console-producer --broker-list localhost:9092 --topic product_created
CMAK Config:
- Go to http://localhost:9000/addCluster
- Username: username, Password: password (as defined in docker-compose.yml)
- Cluster name: Shop
- Cluster Zookeeper Hosts: zookeeper:2181
- Save