
WhyNeet/globuflow


Globuflow is an automation system

Globuflow is a 3PL automation system for businesses. This version of the application implements the following components:

  • A working workflow execution engine that communicates with the system through events.
  • CRUD endpoints with (nearly) proper access control.
  • Authentication and authorization module with Spring Security.
  • Some validation constraints.
  • Realtime communication module (SSE).
  • A nice-looking frontend built with React; state management via React Query and @xstate/store.
  • Stripe integrated into the system for charging clients.
  • Dockerfile.
  • A landing page using Astro.

More on the pipeline execution engine

It is a custom-built workflow automation module that asynchronously executes user-defined workflows. This version does not include any validation constraints (neither syntactic nor semantic).

  • It accepts a pipeline and creates an execution whenever a signal is received by a trigger.
  • It fetches the first node to execute.
  • It executes each node using an executor I define (executors are components in the Spring Boot application).
  • It checks the node output:
    • If the node returned a waiting state, the engine stores the execution context and waits until a relevant event arrives.
    • If the node returned a failed state, the entire pipeline fails and the execution ends.
    • If the node returned a success state, the engine proceeds to the next node.
  • What happens when a node is waiting? The engine listens for any pipeline-related events and routes them accordingly -- it will fetch the pipeline execution details, reconstruct the context, and run an event handler on the waiting node. The handler may then return some state.
  • Finally, when the node returns success state and there are no further nodes to execute (nothing connected after the node), the pipeline shifts into completed state. It will no longer change.
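The step list above amounts to a simple control loop. Here is a minimal plain-Java sketch of it (type and method names are illustrative, not the engine's actual API):

```java
import java.util.List;

public class EngineLoopSketch {
    enum NodeStatus { SUCCESS, FAILED, WAITING }

    record NodeResult(NodeStatus status, String output) {}

    interface Node {
        NodeResult execute(String previousOutput);
    }

    /** Runs nodes in order until one fails, starts waiting, or the list ends. */
    static String run(List<Node> nodes) {
        String previousOutput = null;
        for (Node node : nodes) {
            NodeResult result = node.execute(previousOutput);
            switch (result.status()) {
                case FAILED -> { return "FAILED"; }      // the whole pipeline fails
                case WAITING -> { return "WAITING"; }    // persist context, await an event
                case SUCCESS -> previousOutput = result.output(); // feed the next node
            }
        }
        return "COMPLETED"; // no further nodes connected: terminal state
    }

    public static void main(String[] args) {
        List<Node> pipeline = List.of(
            prev -> new NodeResult(NodeStatus.SUCCESS, "a"),
            prev -> new NodeResult(NodeStatus.SUCCESS, prev + "b")
        );
        System.out.println(run(pipeline)); // prints COMPLETED
    }
}
```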

More details on the implementation of the engine

Basically, it is a custom-built async execution engine with event-driven state reconstruction.
First, a node has to be triggered. It has an execute method (yes, not the best naming here) that will be called when the engine decides to start working on the node. To be more precise, there is a node executor registry that collects all executors marked with @Component annotation and exposes them to the pipeline orchestration service. An appropriate node executor is selected for a specific node type.
Next, the method will return NodeResult. This object contains a status field -- it indicates whether the node failed on trigger, completed immediately, or must switch into async execution mode. In the last case, the engine will receive the instruction and save current node execution state to the database (actually, it saves the node state every time it changes).
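A plausible shape for NodeResult, based on the description above (the status names, factory methods, and output field are guesses, not the actual class):

```java
import java.util.Map;

public class NodeResultSketch {
    // The three outcomes described: failed on trigger, completed immediately,
    // or switched into async (waiting) execution mode.
    enum NodeStatus { COMPLETED, FAILED, WAITING }

    record NodeResult(NodeStatus status, Map<String, Object> output) {
        static NodeResult completed(Map<String, Object> output) {
            return new NodeResult(NodeStatus.COMPLETED, output);
        }
        static NodeResult failed()  { return new NodeResult(NodeStatus.FAILED, Map.of()); }
        static NodeResult waiting() { return new NodeResult(NodeStatus.WAITING, Map.of()); }
    }

    // The engine persists node state on every change; a waiting result
    // additionally tells it to park the execution until an event arrives.
    static String react(NodeResult result) {
        return switch (result.status()) {
            case COMPLETED -> "advance to next node";
            case FAILED -> "fail the pipeline";
            case WAITING -> "persist context and wait for an event";
        };
    }

    public static void main(String[] args) {
        System.out.println(react(NodeResult.waiting()));
        // prints persist context and wait for an event
    }
}
```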
The engine also listens to application events published via ApplicationEventPublisher -- each event contains pipeline ID, as well as some data. The engine then fetches the pipeline execution state, finds the node in waiting state, reconstructs the execution state, and runs an event handler defined by the node executor. To be more specific, execution state is comprised of three main components: execution-related data (ID, etc.), previous node output, current node output. All of these are recovered from the database and inserted into a new execution context, which is later passed down to the node executor. The event handler also returns NodeResult. What happens next -- same thing as after running execute method.
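The event path can be sketched as follows. The context holds the three components named above -- execution data, previous node output, current node output -- with all names being illustrative rather than the engine's real API:

```java
import java.util.Map;

public class EventReplaySketch {
    // The three pieces the engine recovers from the database.
    record ExecutionContext(long executionId,
                            Map<String, Object> previousNodeOutput,
                            Map<String, Object> currentNodeOutput) {}

    interface WaitingNodeHandler {
        // Stand-in for the node executor's event handler; returns the node's new status.
        String handleEvent(ExecutionContext ctx, Map<String, Object> eventData);
    }

    // Sketch of what happens when a pipeline-related event arrives:
    // reload persisted state, rebuild the context, run the handler.
    static String onPipelineEvent(long executionId,
                                  Map<String, Object> persistedPreviousOutput,
                                  Map<String, Object> persistedCurrentOutput,
                                  Map<String, Object> eventData,
                                  WaitingNodeHandler handler) {
        ExecutionContext ctx = new ExecutionContext(
                executionId, persistedPreviousOutput, persistedCurrentOutput);
        return handler.handleEvent(ctx, eventData);
    }

    public static void main(String[] args) {
        String status = onPipelineEvent(
                1L, Map.of("orderId", 42), Map.of(), Map.of("signal", "paid"),
                (ctx, event) -> "paid".equals(event.get("signal")) ? "SUCCESS" : "WAITING");
        System.out.println(status); // prints SUCCESS
    }
}
```

In the real system the event would arrive via Spring's ApplicationEventPublisher and carry the pipeline ID, which is how the engine knows which execution state to fetch.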
There are, of course, certain limitations to the approach described above. For example, if multiple nodes are in a waiting state, it is impossible to determine which of them should receive the event. In this version of the system, however, multiple nodes can never be waiting at once, since there are no control flow primitives available to the user; the implementation therefore functions without issues.

So what work has been done?

  • First and foremost -- architectural planning: from inventing the core feature to planning its implementation in detail and finally implementing it.
  • All the CRUD endpoints.
  • Spring Security setup with Redis-stored sessions and remember-me tokens.
  • Role-based access control (though not polished) between customers, manufacturers, and warehouse managers.
  • Attempts to use GraalVM for the project (you may see some runtime hints). Though the attempt worked, it was rolled back in favor of Amazon Corretto.
  • A frontend node-based editor using ReactFlow and data fetching through React Query.
  • An SSE endpoint that communicates updates to clients.
  • Multiple deployments of the platform on AWS -- first on EC2, then on Fargate.
  • Onboarding wizard, plus Stripe integrated for charging customers.
  • A landing page using Astro.

About

A workflow-based 3PL automation platform. Functions like n8n for logistical pipelines. Implements a custom asynchronous workflow execution engine.
