- HTTP REST API to create/retrieve/search "work to do": `pp_backend_api`.
- Postgres DB via Docker with a SQL schema: `pp_storage`.
- `pp_lib`: a Rust library sharing the source code for the business logic.
- CLI utility (`cli_01`) to interact with the HTTP API and with the DB directly via `pp_lib`.
- CLI utility (`cli_02`) to open a TCP socket and perform a manual HTTP call to `pp_backend_api`.
- AMQP RabbitMQ queue to publish/subscribe, i.e. produce/consume messages.
- `task_producer`: a schedule (CLI) to produce messages as "work demand" via AMQP.
- A backend schedule, `task_consumer`, to consume the AMQP queue: it maps a "work demand" to a "work to do" in the DB, stores "events" in a DB table to track the execution of the work, and finally updates the work row in the DB with the results of the calculations.

All the operations can be performed with a dedicated target in the `Makefile`.
The "work to do" is adding together numbers up to an upper-bound threshold.
```json
{
  "id": 21,
  "work_code": "api-bjq8euwsEA",
  "add_up_to": 4,
  "done": false,
  "created_on": 1634115736,
  "updated_on": 1634115736
}
```
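The calculation itself is a plain summation. A minimal sketch in Rust (that the bound is inclusive and the sum starts at zero are assumptions of this sketch, not confirmed details of `pp_lib`):

```rust
/// Sum all integers from 0 up to and including `bound` -- the "work to do".
/// Inclusive bound and zero start are assumptions for illustration.
fn add_up_to(bound: u64) -> u64 {
    (0..=bound).sum()
}

fn main() {
    // For the sample row above: 0 + 1 + 2 + 3 + 4 = 10.
    println!("add_up_to(4) = {}", add_up_to(4));
}
```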
- Rust structure: `Work`.
- The `work_code` field has a prefix of `api-*`.
- The `done` field is `false` and is supposed to be updated when a hypothetical backend schedule picks up work to do from the database table `works` with a time-range filter (e.g. the last 6 hours).
- Both the `work_code` suffix and the `add_up_to` field are randomly generated before the row is added to the database.
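Based on the JSON sample and the field notes above, the `Work` row could be modeled as below. This is a dependency-free sketch: the real `pp_lib` struct presumably derives serde/DB traits, and the random suffix and bound are passed in here rather than generated, to keep the example self-contained.

```rust
/// Sketch of the `Work` row; field names mirror the JSON sample above.
#[derive(Debug, Clone, PartialEq)]
struct Work {
    id: i64,
    work_code: String, // e.g. "api-bjq8euwsEA": prefix + random suffix
    add_up_to: u64,    // randomly generated upper bound
    done: bool,        // false until a schedule processes the row
    created_on: u64,   // Unix timestamps
    updated_on: u64,
}

impl Work {
    /// Build a new API-created row. Taking `suffix`, `add_up_to`, and `now`
    /// as parameters (instead of generating them) is a simplification.
    fn new_api(id: i64, suffix: &str, add_up_to: u64, now: u64) -> Self {
        Work {
            id,
            work_code: format!("api-{}", suffix),
            add_up_to,
            done: false,
            created_on: now,
            updated_on: now,
        }
    }
}

fn main() {
    let work = Work::new_api(21, "bjq8euwsEA", 4, 1634115736);
    println!("{:?}", work);
}
```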
```json
{
  "add_up_to": 2,
  "done": false
}
```
- Rust structure: `WorkDemand`.
- This is translated into the Rust structure `Work` by the `task_consumer`.
- The `work_code` for a message pulled from the queue has a prefix of `consumer-*`.
- The `done` field is updated to `true` once the calculations have been performed by the `task_consumer`.
The calculated rows can then be searched for and retrieved via the HTTP API.
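The demand-to-work mapping performed by `task_consumer` can be sketched as follows. The struct fields, the `result` value, and the function signature are assumptions for illustration, not the real `pp_lib` API; the actual consumer also records "events" in the DB, which this sketch omits.

```rust
/// Hypothetical shape of a message pulled from the AMQP queue.
struct WorkDemand {
    add_up_to: u64,
    done: bool,
}

/// Hypothetical shape of the finished DB row (simplified).
struct Work {
    work_code: String,
    add_up_to: u64,
    result: u64, // assumed column for the stored calculation result
    done: bool,
}

/// Map a queued demand to a completed work row: tag the code with the
/// `consumer-` prefix, run the summation, and mark the row done.
fn consume(demand: WorkDemand, suffix: &str) -> Work {
    debug_assert!(!demand.done, "queued demands arrive not yet done");
    let result = (0..=demand.add_up_to).sum();
    Work {
        work_code: format!("consumer-{}", suffix),
        add_up_to: demand.add_up_to,
        result,
        done: true,
    }
}

fn main() {
    // Feed in the sample demand above; "abc123" stands in for a random suffix.
    let work = consume(WorkDemand { add_up_to: 2, done: false }, "abc123");
    println!("{} -> {} (done: {})", work.work_code, work.result, work.done);
}
```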