This project contains a solution for tracking request processing time, publishing the tracked data to Kafka, and inserting the Kafka messages into the database. There is also a simple UI for viewing the logs in a chart.
It runs 5 services:
- Apache ZooKeeper
- Apache Kafka
- ASP.NET 5 Web API
- Go Project: Kafka Consumer and Database Updater
- Postgres Database
To run all of the services:

```sh
docker-compose up --build
```

or, once the images are already built:

```sh
docker-compose up
```

request.sh is provided to automate the requests, but it needs a shell script runner and curl.
```sh
./request.sh
```

You can use the http://localhost:1923/ address to watch the live dashboard.
Also, at the top of the dashboard you will see the different HTTP method names. You can click them to filter which methods are shown in the chart.
ASP.NET 5 Web API is the web server project.
The web server has two endpoints:
- /api/products [GET, POST, PUT, DELETE]
- /health/api/products [GET]
The /api/products endpoint is for creating dummy data to log. This endpoint has a Delayer and a TimeTracker: every request waits a random amount of time before the endpoint processes it, and the elapsed time to response is logged by the TimeTracker.
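The middleware itself lives in the ASP.NET project, but the pattern is easy to sketch. Here is a rough Go equivalent, where the delay bound, log format, and handler wiring are illustrative assumptions rather than the project's actual values:

```go
package main

import (
	"log"
	"math/rand"
	"net/http"
	"time"
)

// delayAndTrack mimics the Delayer + TimeTracker pair: it waits a random
// amount of time before the handler runs, then logs the total elapsed time.
// The 500 ms upper bound is an assumption for illustration only.
func delayAndTrack(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		time.Sleep(time.Duration(rand.Intn(500)) * time.Millisecond) // Delayer
		next.ServeHTTP(w, r)
		// TimeTracker: the real project publishes this measurement to Kafka;
		// here it is only logged to stdout.
		log.Printf("method=%s elapsedTime=%dms", r.Method, time.Since(start).Milliseconds())
	})
}

func main() {
	products := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusNoContent) // every method returns 204
	})
	http.Handle("/api/products", delayAndTrack(products))
	log.Fatal(http.ListenAndServe(":1923", nil))
}
```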
The /api/products endpoint supports the GET, POST, PUT, and DELETE methods. You can use the http://localhost:1923/api/products address for your requests.
All of the above methods return 204.
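For example, a minimal request from Go (the JSON body here is a placeholder; request.sh exercises the same endpoint with curl):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// POST a dummy payload; the endpoint answers 204 regardless of the body.
	resp, err := http.Post("http://localhost:1923/api/products",
		"application/json", strings.NewReader(`{"name": "dummy"}`))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.StatusCode) // expected: 204
}
```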
The /health/api/products endpoint presents the following data about requests to /api/products in the last hour:
- HTTP Method
- Elapsed time to response
- When it was logged
This endpoint supports only the GET method. You can consume the data with a GET request to the http://localhost:1923/health/api/products/ address.
- 204: If no request was made to /api/products in the last hour, it returns 204.
- 200: Returns JSON data with information about the requests made to /api/products in the last hour:
```json
[
  {
    "method": "GET",
    "elapsedTime": 375,
    "timestamp": 1616368665
  }
]
```
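A minimal sketch of consuming this endpoint from Go, assuming the field names shown above (the struct and its name are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// responseLog mirrors one element of the health endpoint's JSON array.
type responseLog struct {
	Method      string `json:"method"`
	ElapsedTime int64  `json:"elapsedTime"`
	Timestamp   int64  `json:"timestamp"`
}

func main() {
	resp, err := http.Get("http://localhost:1923/health/api/products/")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// 204 means nothing was logged in the last hour.
	if resp.StatusCode == http.StatusNoContent {
		fmt.Println("no requests logged in the last hour")
		return
	}

	var logs []responseLog
	if err := json.NewDecoder(resp.Body).Decode(&logs); err != nil {
		log.Fatal(err)
	}
	for _, l := range logs {
		fmt.Printf("%s took %dms at %d\n", l.Method, l.ElapsedTime, l.Timestamp)
	}
}
```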
The Go project consumes the Kafka messages and writes them to the database.
It collects messages from the response_log topic as go-consumer and accumulates them in a fixed-size string array called kafkaMessages. The array's size is set by the maxMessageCountToAccumulate variable, and the consumer keeps collecting messages until receivedMessageCount reaches maxMessageCountToAccumulate.
When receivedMessageCount >= maxMessageCountToAccumulate, the write-to-database method runs and inserts all of the collected messages into the database in one transaction. After that, receivedMessageCount is reset to zero, the consumer resumes collecting from the last offset, and the cycle repeats.
As I mentioned above, receivedMessageCount must reach maxMessageCountToAccumulate before the database transaction starts; the received messages are then inserted in a single transaction.
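A condensed sketch of that loop, keeping the names used above; the Kafka client (segmentio/kafka-go), broker address, batch size, connection string, and table schema are all assumptions, since they are not pinned down here:

```go
package main

import (
	"context"
	"database/sql"
	"log"

	"github.com/segmentio/kafka-go"

	_ "github.com/lib/pq" // registers the "postgres" driver
)

const maxMessageCountToAccumulate = 100 // assumed batch size

func main() {
	reader := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"kafka:9092"}, // assumed broker address
		GroupID: "go-consumer",
		Topic:   "response_log",
	})
	defer reader.Close()

	// Assumed DSN; the real values come from the docker-compose environment.
	db, err := sql.Open("postgres", "postgres://user:pass@postgres:5432/logs?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}

	kafkaMessages := make([]string, maxMessageCountToAccumulate)
	receivedMessageCount := 0

	for {
		m, err := reader.ReadMessage(context.Background())
		if err != nil {
			log.Fatal(err)
		}
		kafkaMessages[receivedMessageCount] = string(m.Value)
		receivedMessageCount++

		// Once the batch is full, flush everything in one transaction.
		if receivedMessageCount >= maxMessageCountToAccumulate {
			if err := writeToDatabase(db, kafkaMessages); err != nil {
				log.Fatal(err)
			}
			receivedMessageCount = 0 // start accumulating the next batch
		}
	}
}

// writeToDatabase inserts all accumulated messages in a single transaction.
// The table and column names are placeholders.
func writeToDatabase(db *sql.DB, messages []string) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	for _, msg := range messages {
		if _, err := tx.Exec(`INSERT INTO response_logs (payload) VALUES ($1)`, msg); err != nil {
			tx.Rollback()
			return err
		}
	}
	return tx.Commit()
}
```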
