Overview

The idea came from the business need to execute workflows in a highly scalable way, and to keep workflows dynamic in the sense that no code deployment is needed to create or change them. Durable Functions workflows are usually hard-coded by developers, so every workflow change must go through the normal development life cycle (dev, test, production). What if the users designing the workflows are not developers? What if the business needs to change workflows quickly, without code changes? These are the problems that Microflow addresses: it keeps the strengths of serverless Durable Functions while making workflows dynamic and easy to change. A workflow can be designed outside of Microflow and passed in as JSON for execution, so no code changes or deployments are needed to modify any aspect of it.

Microflow can be deployed to the Azure Functions Consumption or Premium plans, to an Azure App Service, or to Kubernetes. For ultimate scalability, deploy Microflow to its own serverless hosting plan, and deploy each of your microservices to its own plan. Microflow can orchestrate any API endpoint across hosting plans, other clouds, or any HTTP endpoint.
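As a rough illustration of what "designed outside of Microflow and passed in as JSON" means, a minimal workflow sketch is shown below. The property names and URLs (WorkflowName, Steps, StepNumber, CalloutUrl, SubSteps, example.com) are illustrative assumptions, not the definitive Microflow schema; see the project's sample workflows for the exact format.

    {
      "WorkflowName": "OrderProcessing",
      "Steps": [
        { "StepNumber": 1, "CalloutUrl": "https://example.com/validate-order", "SubSteps": [ 2, 3 ] },
        { "StepNumber": 2, "CalloutUrl": "https://example.com/reserve-stock",  "SubSteps": [ 4 ] },
        { "StepNumber": 3, "CalloutUrl": "https://example.com/charge-payment", "SubSteps": [ 4 ] },
        { "StepNumber": 4, "CalloutUrl": "https://example.com/dispatch",       "SubSteps": [] }
      ]
    }

In this sketch, steps 2 and 3 are children of step 1 and run in parallel once it completes, while step 4 is a child of both 2 and 3, so by default it waits for both parents before executing. This mirrors the parent-child-sibling dependencies listed below.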

Microflow functionality:

  • dynamic JSON workflows that are kept separate from Microflow and can be changed outside of it
  • auto-scale out to 200 small virtual machines on the Consumption plan, or to 100 virtual machines (4-core CPU, 14 GB memory) on the Premium plan
  • parent-child-sibling-parent dependencies: complex inter-step dependencies with parallel, optimized execution; parent steps execute in parallel
  • very complex workflows can be created (be careful not to create endless loops - in the future, validation will be built in to check for this)
  • workflows can run as single instances (like a risk model that should always run as 1 instance), or can run as multiple parallel overlapping instances (like eCommerce orders)
  • run in split mode or standalone mode: split mode separates the admin API calls from the workflow execution code for optimal scaling performance, but requires both the Microflow and MicroflowApi function apps to be deployed; standalone mode includes the admin API calls in the Microflow app, so only the Microflow app needs to be deployed, at the cost of a slightly larger code footprint to scale
  • custom logic such as response interpretation can be included in Microflow, but best practice is to implement these response proxies as functions outside of Microflow that call back to Microflow
  • easily manage step configs with merge fields (see the step config sketch after this list)
  • do batch processing by looping the workflow execution with Microflow's "Loop" setting; set up variation sets, one variation set per loop/batch
  • set outgoing HTTP calls to be inline (wait for the response at the point of the outgoing call), or wait asynchronously by setting WebhookAction (wait for an external action call to the webhook)
  • set AsynchronousPollingEnabled (per step) to true and the step will poll for completion before moving on, so other Microflow workflows can be embedded and waited for; or set it to false for fire-and-forget: call another Microflow workflow and continue immediately with the next step, without waiting for completion
  • a global id called "GlobalKey" is included in the logs (orchestrations, steps, errors) and can be used to tie together cross-workflow calls; this key is passed along for logging purposes when a Microflow workflow calls/embeds other Microflow workflows
  • workflows can be controlled via API calls to stop, pause, and run them, either per workflow or based on a GlobalKey
  • timeouts can be set per step for both inline and webhook calls
  • retry policies can be set for each step and there can also be a default retry policy for the entire workflow
  • StopOnActionFailed can be set per step to control what happens on failure (no success call to the webhook): Microflow will either stop the workflow execution, or log the failure and continue with the workflow
  • a scale group can be set per step to throttle the maximum number of concurrent step instances
  • HTTP response data received from the callout URL can be posted to the next step as input data by setting the forwarding parent step's ForwardPostData property to true
  • by default a child step waits for all parent steps to complete and then executes once - this behavior can be changed by setting the child step's WaitForAllParents property to false, in which case the child step executes every time a parent step completes
  • save and retrieve Microflow workflow JSON
  • view in-step progress counts; this is most useful when running multiple concurrent instances
  • stateful and durable! Microflow leverages Durable Functions, so even if the VM crashes, Microflow will continue from where it left off before the crash once a new VM becomes available (Azure handles this in the background)
  • leverage the Azure serverless hosting plans: the Consumption and Premium plans
  • Microflow can run anywhere on Kubernetes when the Azure serverless environment is not available
  • Microflow is lightweight and will auto-scale when there is a usage spike
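To make the per-step settings above more concrete, here is a sketch of what a single step's config might look like. Only the property names already mentioned in this list (WebhookAction, AsynchronousPollingEnabled, StopOnActionFailed, ForwardPostData, WaitForAllParents) come from this page; the rest (CalloutUrl, CalloutTimeoutSeconds, RetryOptions, ScaleGroupId and their shapes) are assumptions for illustration only, so treat this as a sketch rather than the definitive schema.

    {
      "StepNumber": 2,
      "CalloutUrl": "https://example.com/reserve-stock",
      "CalloutTimeoutSeconds": 300,
      "WebhookAction": "stock-reserved",
      "AsynchronousPollingEnabled": false,
      "WaitForAllParents": true,
      "StopOnActionFailed": true,
      "ForwardPostData": true,
      "ScaleGroupId": "stock-api",
      "RetryOptions": { "DelaySeconds": 5, "MaxRetries": 3, "BackoffCoefficient": 2 },
      "SubSteps": [ 4 ]
    }

Read this way, the step waits for an external action call to its webhook rather than an inline response, stops the workflow if the action reports a failure, forwards the received data to its child step as input, and is throttled by the "stock-api" scale group.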