[workflow] Terminology, Scope and Specification Core simplification #129

Closed

salaboy opened this issue Dec 27, 2019 · 23 comments
Labels: workflow spec
@salaboy
Contributor

salaboy commented Dec 27, 2019

As stated in #127, one of the main challenges for people looking at the Workflow Spec inside the CNCF Serverless WG is the scope of the Workflow Sub Working Group and the terminology used in the Spec.

This issue proposes a set of changes to the terminology and scope of the Specification to help clarify the scope and purpose of the language itself.

Terminology change proposal

Moving away from State Machine/Automata terminology (Finite-state machine - Wikipedia) will bring clarity and set the right expectations for the users of the language:

  • State Machine terminology creates expectations of formality that this specification is not trying to meet. This is causing confusion, as the terminology is mixed all over the place and the main construct is called "State".
  • Using workflow terminology such as "Task" will clearly specify what is expected of each element inside the workflow definition. This means that a Workflow is in charge of coordinating a set of Tasks in a certain order.
    • A Task represents a unit of work at runtime.
    • Tasks can be specialized with different runtime behaviors, such as:
      • Operation Task: logically encapsulates one or more Function calls
      • Fork/Join Task: chooses between different paths of the Workflow
      • SubFlow: initiates a new instance of a Workflow
      • Event Consumer: waits for a certain type of event
      • Event Producer: produces a certain type of event
      • Each of these subtypes brings a different runtime behavior
    • Tasks must contain the minimal information needed to identify them, such as ID, Name, and Type; subtypes can extend this information as needed.
    • Cloud Events must be first-class citizens inside the spec, meaning that the relationships between Tasks and Cloud Events need to be clearly defined
      • A Task for event emitting/consuming can be clearly defined using Cloud Events definition references
    • Transitions between Tasks, and from Events to Tasks, need to be clearly specified; the term "transition" is well understood and should be used as it is currently defined.

The language itself doesn't specify how a Workflow will execute these tasks; that is left for each implementation to decide.
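To make the proposed terminology concrete, here is a rough, purely illustrative sketch of what a Task-based workflow definition could look like (the YAML shape and all field names are hypothetical, not part of the current spec):

    id: applicant-workflow
    name: Process Applicant
    tasks:
      - id: wait-for-application
        type: EVENT_CONSUMER          # waits for a Cloud Event of the given type
        eventType: org.example.application.received
        transition: review-application
      - id: review-application
        type: OPERATION               # logically encapsulates one or more function calls
        functions:
          - reviewApplicationFn
        transition: notify-result
      - id: notify-result
        type: EVENT_PRODUCER          # produces a Cloud Event when reached
        eventType: org.example.application.reviewed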

Simplification of concepts at the core of the specification proposal

As part of the terminology change, we need to make sure that the terms are not overloaded to an extent where the spec is confusing. This will require changes in the description of the Tasks (previously States) to reduce their responsibility to the minimum.
As examples for these simplifications, we can start with core constructs such as:

  • Operation Task: An Operation Task logically encapsulates one or more function calls. The Operation Task assumes successful function calls, and when all the specified functions have been called correctly, the task is finished. As previously defined, the user can fine-tune how these functions are called, in sequence or in parallel.
  • Event Consumer Task: An Event Consumer Task waits for one or more events with the specified event type and filters. Event Consumers can specify filters that use event metadata to automatically discard events. If more than one event is specified, the Event Consumer Task will wait for all the specified events; once all of them are present, the Task finishes.
  • Fork/Join Task: Fork/Join Tasks are used for flow control. Depending on the type, the Fork Task can fork the current execution into two or more paths of execution, and the Join Task joins multiple paths into a single one.
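For instance, a hypothetical Operation Task encapsulating two function calls could be sketched as follows (the actionMode field and the other keys are illustrative assumptions, only showing the sequential vs. parallel idea):

    id: enrich-customer
    type: OPERATION
    actionMode: SEQUENTIAL      # SEQUENTIAL calls functions in order; PARALLEL would call them concurrently
    functions:
      - lookupCustomerFn        # the task finishes once all listed functions
      - scoreCustomerFn         # have been called successfully
    transition: next-task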

Data Flow considerations and scope

Data Flow, data evaluations, and expressions inside workflows should be kept separate from orchestration as much as possible. Complex data handling should be kept as a sub-specification. This will help us keep the orchestration language simple. Having said this, we need to make sure that the workflow language is extensible enough to support complex Data Flow extensions in the future.

As proposed in the spec, Workflow Instances can work with a JSON payload, where tasks can add and change data that can be used to call subsequent functions down the line and to keep the instance's contextual data. This context is shared and accessible to all the tasks in the workflow. Expressions can use the data inside this JSON payload to make decisions based on the available values.

For the sake of simplicity, Tasks do not impose inputs and outputs at the CNCF Workflow Language level; once again, this can be incorporated later as an appendix to the spec.
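A rough sketch of how the shared JSON payload and an expression could interact (the payload shape and the expression syntax are assumptions used only for illustration):

    # Workflow instance context (shared JSON payload):
    # { "customer": { "id": "abc-123", "score": 87 } }

    id: decide-approval
    type: FORK_JOIN
    choices:
      - expression: "$.customer.score > 80"    # hypothetical expression over the shared context
        transition: approve-task
      - expression: "$.customer.score <= 80"
        transition: manual-review-task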

Pull Request

I am happy to provide a Pull Request with the proposed changes here, but even when sending the PR I would love to get feedback from all the parties involved to make sure that we are all on the same page.
After checking with different people, I believe that these proposed changes will bring clarity and provide a consistent core that we can mature over time, without confusing newcomers who find the current state of the spec too complex and confusing.

@tsurdilo @cathyhongzhang @mswiderski @berndruecker @manuelstein Feedback is highly appreciated.

@tsurdilo
Collaborator

tsurdilo commented Dec 30, 2019

@salaboy thanks for writing this up! Here are some things I believe we should consider:

  1. State machine vs workflow: as you mention, this specification does not define execution; that is left for each implementation to decide. This, to me, is the key reason why I think we should not worry much about this. Calling a node "state" or "task" imo does not make a difference and, quite frankly, does not decide nor govern any implementation decisions.
     We are trying to create a specification here which can be used by, for example:

     • AWS Step Functions -- state machine (state)
     • Azure Logic Apps -- workflow (action)
     • Fission -- workflow (task)
     ...
     so it has to cover both the state machine and workflow impls. Within workflow-based impls themselves there are different concepts of a unit of work, which brings me to the next point:

  2. "A Task represents a unit of work at runtime" -- in our specification a unit of work is called an "action", which I believe is the right way to see a unit of work for our case.
     States/Tasks themselves are not units of work; they define particular control/flow logic during execution. They also define logic for error handling and compensation, among other things, for the actions (units of work) they execute.

  3. Regarding the tasks you present:

     • Operation Task -- currently Operation State
     • Fork/Join Task -- currently Switch State
     • SubFlow -- same
     • Event Consumer -- currently Event State
     • Event Producer -- I would say currently both the Event and Operation States, as each can have actions that call serverless functions that can themselves produce events. I think the idea of having a state/task for event production is interesting to consider, though.

  4. "Tasks must contain the minimal information needed to identify them such as ID, Name, Type. Then subtypes can extend this information as needed." -- this is already present, as all states currently have a core set of parameters (see the sketch after this list). In the Kogito impl, for example, we have a "default state" from which all other states inherit those core params.
     I will update the JSON schema to define this here as well. And I would definitely welcome a PR which explains this more clearly!

  5. "Cloud Events must be first-class citizens inside the spec, meaning that Tasks and Cloud Events relationships need to be clearly defined." I believe this is already the case, as the triggerDefs consumed by event states are clearly defined to be Cloud Event types.
     I would definitely welcome a PR which explains this more clearly!
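Relating to point 4 above, a sketch of the kind of core parameters every state/task could share, with subtypes extending them (these exact keys are illustrative and not quoted from the current JSON schema):

    # Common parameters every state/task could inherit
    id: unique-identifier
    name: Human readable name
    type: OPERATION            # or EVENT, SWITCH, SUBFLOW, ...
    start: false               # whether this is the workflow's starting point
    transition: next-id        # where to go when this state/task completes
    # Subtype-specific parameters (actions, event filters, ...) extend this core set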

Regarding the "simplification section" of the writeup:

  1. Again, please note that the goal of this specification is not only to develop new implementations but also to cover existing, known, and used ones as well. As such, starting off with the mentioned 3 states as a "minimum" is not ideal imo. I am working on documents to show that what we currently have in the specification fully covers the features of AWS, Fission, Azure Logic, and Netflix Conductor. In addition to that, I believe it would be much better suited to define a reasonable "conformance document", similar to what BPMN2 has, so that implementations can, for example, implement a subset of states/tasks and still conform to the specification. This to me would be a +1 and not removing existing states.. sorry :)

Regarding the "Data Flow" section -- I think this is where I am failing to understand the reasoning behind the proposal the most. This specification is trying to target both stateless and stateful serverless orchestration scenarios.
I think that, yes, it would be a good idea to have a separate document for data flow so implementors have a clearly defined single document for that part. However, in my view and opinion, data flow is a core concept of this specification, if for nothing else than its support for stateful orchestration.

What I wanted to ask you: "Event Producer" was mentioned as one of the core tasks to consider, clearly a task that produces events that include "data"; how is it then that workflow data is considered an extension point itself?
Almost all logic and flow decisions during orchestration, especially in stateful ones, are based on a context (data), so I'm just trying to understand this section better, if you could explain it.
Removing something just because it is maybe difficult to understand is not reason enough to remove it, especially if it's such a big part of the core concept of all this. Please let me know.

Please don't take my comments as all negative. I think there are a lot of things to learn from this, especially that we do need to simplify things that are currently present. Also, a big takeaway for me is that there is still a ton of work on making sure everything is clearly defined, and we need to add a lot more examples and use cases, as well as comparisons and conversions from other workflow JSON formats.
Also, defining "spec conformance" would, I think, be a real key part of clearing up a lot of confusion and allowing implementors to work at a much smaller scale.

I am a little wary, however, to consider this specification "too hard to understand" or "too complex". I believe that the people saying that do have the capabilities to voice that themselves via issues or GitHub requests, and I really urge those people to speak up and give us their opinions, as they are so valuable. If, however, the same people just look at what we currently have and don't voice their opinion and contribute, from my experience they are not going to use this spec even if we went ahead and did a complete change.

@salaboy
Contributor Author

salaboy commented Dec 30, 2019

@tsurdilo thanks a lot for taking the time to write such a detailed answer.
I think that from this discussion we can create more scoped issues. I will try to answer some of your comments and organize those answers under different topics; then we can decide whether each of these is an issue or not:

1 - State Machine VS Workflow (Terminology & Scope):
Here you mentioned real implementations such as "AWS Step Functions", "Azure Logic", and "Fission". I believe that at this level (specification), names really matter. I also feel that you are not married to any names, please correct me if I am wrong. From your answer, I get the feeling that you are trying to cover a set of existing implementations completely with the spec; that is a huge task, and specifications usually do quite the opposite. Specifications usually try to convey overlapping definitions and leave the details of each implementation out. Imagine if the standard SQL language needed to cover all the Oracle, DB2, and PostgreSQL specifics; the end result would be a nightmare (complexity-wise and full of inconsistencies), and I am afraid that we will head in the same direction if we try to cover all the implementations that you mention in full. Regarding "Task vs State", I am adding the workflow spec from OpenStack, which also uses "Tasks": OpenStack Docs: Mistral Workflows. Are you completely against renaming this concept? For me it solves the cognitive dissonance with "Finite State Machines", and that is quite a big issue.

Regarding my proposal to rename:

  • Operation task — Currently Operation State
  • Fork/Join Task — Currently Switch State
  • SubFlow - same
  • Event Consumer — Currently Event State

I purposely tried to keep it one-to-one, but at the same time I wanted to highlight the constructs that we should focus on for the first iteration of the spec. I have the feeling that we should focus on these core concepts and leave all other possibilities out for now. WDYT?

2 - “A Task represents a unit of work at runtime”
I got your point on this one, but it is still confusing to me. The workflow construct itself has "Tasks", and the units of work are bundled inside. Now, not all "Tasks" have function calls, just the "Operation Task". This definitely adds complexity, and people who want workflows defining a sequence of functions to be called are now faced with three concepts -- "Task", "Operation Task" and "Action" -- that they need to understand in order to create the sequence. In my proposal, I was thinking about using hierarchy (SubFlows) to solve this problem. Meaning that we can create very low-level workflows that have one task per function call (no complexity or conceptual overhead) and then use SubFlows to group low-level workflows together. Having said this, I totally get the idea of "Tasks" as wrappers that deal with retry and exception handling, and now I wonder if we should move that responsibility to the Workflow entity itself. Once again, I do understand that this is a major change, but if it makes things simpler, I am more than happy to invest time in it.

Summing up, my proposal here would be to have:

- Workflow (Deal with Retry of its tasks and exception handling)
  - Tasks
    - Action Task -> 1 Function Call 
    - Other Tasks (not function calls: Fork/Join, Event Consumer, Event Producer, SubFlow) 

Here, the Workflow acts as a controller for all its Tasks, and we promote using SubFlows (hierarchy) to build higher-level models, only if needed (and "only if needed" is the important part).
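A hypothetical YAML rendering of that hierarchy, just to visualize the idea (names and structure are illustrative, not a spec proposal in themselves):

    workflow:
      id: order-handling
      retry:                          # retry/exception handling owned by the Workflow itself
        maxAttempts: 3
      onError: compensate-order       # hypothetical workflow-level error handler
      tasks:
        - id: charge-card
          type: ACTION                # one task -> one function call
          function: chargeCardFn
        - id: fulfil-order
          type: SUBFLOW               # hierarchy: delegates to a lower-level workflow
          workflowId: fulfilment-workflow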

3 - Simplification
I still believe that "the goal of this specification is not only to develop new implementations but also to cover existing known and used ones as well" is the wrong scope. This is too big to cover and, in my opinion, we need to scope it down. I tried to cover how in points 1 and 2. If more details are needed, I am more than happy to elaborate.
On the simplification angle, and based on the current state of the spec, I would push to keep it simple and treat extra states/Tasks as extensions. Creating a conformance document with core elements only is basically creating the scope that we need for the first iteration.

4 - DataFlow Scope
Regarding DataFlow, let me be clear: I don't want to remove all DataFlow-related features. I just want to scope what DataFlow means for the spec in the first iteration.
My proposal would be like this:

  • Workflow
    • Global Context (variables) in JSON format (per workflow instance)
    • Tasks can access this Global Context and add, remove, or modify variables in it. This means that the Fork/Join Task can use this context to make decisions, and it also means that Action/Operation Tasks can grab this context and send it to a function.
    • Function Calls or Event Producers can specify a metadata/header field to make it more explicit which field(s) are relevant for that function or event. But all context is shared for each interaction, at least from the spec's point of view.

I think you can tell from this proposal that I am trying to make things simple; by doing this I am aware that we will not cover all use cases, but we will make it simple to use and understand.

Notice as well how we don't need strict binding between inputs/outputs for each Task; we can just give implementations hints on how to produce such information based on metadata/headers.
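Sketching what that could look like (the metadata/header mechanism and the field names are assumptions used only to illustrate the proposal):

    # Global context of a workflow instance (JSON, shared by all tasks):
    # { "order": { "id": "o-42", "total": 120.5 }, "customer": { "email": "a@b.example" } }

    id: send-invoice
    type: ACTION
    function: sendInvoiceFn
    metadata:
      # hypothetical hint telling the implementation which context fields are
      # most relevant for this call; the whole context is still shared
      relevantFields: [ "order.id", "order.total", "customer.email" ]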

I wonder how conceptually far away we are on this point. What compromises can we make to keep it simple and usable?

Issues / PRs

I propose to have an issue for each of these topics so people interested in just one section can comment on each of these buckets. This can also help us drive changes if we consider them worthwhile after discussing them.

Do these issues make sense:

  • State Machine VS Workflow (Terminology & Scope)
  • Simplification (Core Concepts + Conformance)
  • DataFlow Scope

I've tried to summarize the feedback that I got at KubeCon San Diego, and I am also trying to help people raise their voices by adding these long threads. I don't think people will comment unless we clearly explain where we are and where we are going.

@tsurdilo
Collaborator

tsurdilo commented Dec 30, 2019

@salaboy:

  • yes, definitely not married to the naming convention. If we feel that "task" is better suited, please go for it.

  • regarding scope -- the reason I don't believe it is the wrong scope is that the current specification definitions already cover the AWS States Language (for the most part) and even have features which I believe extend it. The same goes for Conductor, which has a much smaller scope and is fully covered by our existing definitions (same for Fission, but we just need to write up the YAML support section). Our current definitions also cover Azure Logic Apps (again with minor differences that need to be worked out at some point if we feel they are needed). It will help when I am able to finish the comparison docs that are in the works, showing how easy it is for users of those workflow impls to switch to this vendor-neutral spec (at the language level).
    Because of this, I believe there is no need to talk about a "first iteration" etc., because what we have is already way past that point. Why I think your writeup is very important is that you push for simplification, which is great; so let's simplify our existing features, but without trying to remove functionality. WDYT?

I think the current specification is still far from optimal, especially on the writeup part, and we lack more examples etc., but I do believe it is:

  • not complex regarding execution flow (we have only 6 states :)
  • not complex regarding data flow (simple filters)

Of course I agree with you that it should be as simple as possible, so yeah, we should invest in your proposal of a "core" vs "extended" concept; on the other side we have super complex things like BPMN2 which deal with that via the conformance definition. Which one is better, I don't know.

There is no strict binding currently; the output of a task is passed as input to the next as a JSON object, and we use JSONPath for data filtering. I believe this even has an advantage over things like BPMN2 where, for example, a lot more strict input/output bindings have to be defined.
I definitely agree with you that we need a workflow context. That's something AWS has that we don't cover, but note that context is mostly used for execution information, and implementations could add that with or without us defining it specifically. Also, we currently do have workflow data input/output, which is nice.
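To illustrate the idea of passing one task's JSON output to the next and narrowing it with a JSONPath filter (the property names below are indicative only and may not match the exact keys used in the spec):

    # Output of the previous state (JSON object):
    # { "customer": { "id": "abc-123", "address": { "city": "Berlin" } }, "audit": [ ... ] }

    name: ShipOrder
    type: OPERATION
    filter:
      input: "$.customer"      # hypothetical JSONPath selecting what flows into this state
      output: "$.shipment"     # and what is kept from its result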

Sorry, last thing regarding the mentioned possible confusion of task, operation task, action, function, etc. My opinion on this is that we have to look at why people would even use a serverless workflow; this is described in the intro section currently. If I wanted to execute 10 functions in a sequence, I would never use a workflow solution for that to begin with.
So each task in the workflow has to go beyond just calling a function imo. It has to have additional flow/logic features (depending on the task type). It also has to include cross-cutting concerns such as error handling, compensation, etc. It has to include control logic, for example to define the transition to the next part of the workflow.

The actual function call is currently encapsulated inside an action because the results of a function might have to be filtered and checked for validity/errors in order to achieve correct business logic.
The current specification defines an action filter that is triggered at key points of action execution. Actions also include timeout and retry definitions.
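A rough sketch of an action wrapping a function call with a result filter, timeout, and retry, as described (property names are approximations rather than quotes from the spec document):

    action:
      functionRef:
        name: validatePaymentFn
      timeout: PT30S              # assumed ISO-8601 duration format
      retry:
        maxAttempts: 3
        interval: PT5S
      actionDataFilter:
        results: "$.payment.status"   # hypothetical filter applied to the function result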

In addition, an action may do more than just call a serverless function -- business logic might require utilizing existing integration services that may or may not be "serverless" in some cases. So I think we should try, via extensibility as you mentioned, to cover those cases as well.

We can try to lower the complexity of the JSON or try not to have too much nesting, but having a task that's only responsible for calling a function imo defeats the purpose of a workflow solution (when I can write that as a one-liner without a workflow). WDYT?

Regarding the overall data flow discussion: if you look at the current specification, it is completely optional already. The workflow data input/output is optional. The different types of filters are again optional. You don't have to use the workflow data to pass as parameters to function calls.
So again, I think what you want to see is already in place, but maybe just not worded as clearly as it should be, and bringing that to our attention is, I think, what you want to achieve, which is really nice.

Thanks again, and really nice conversation!

@salaboy
Contributor Author

salaboy commented Jan 3, 2020

@tsurdilo once again thanks for the detailed answer; here are my comments and action list.

  • Renaming to Task: I will work on a PR for the renaming. Then we can evaluate whether it actually works or not.

  • Regarding: "so let's simplify our existing features, but without trying to remove functionality. WDYT?" I am OK with not removing stuff as long as features are not confusing or overlapping with each other just to cover more existing implementations. In such situations, I would be in favor of cutting features to keep strong conceptual consistency between them. In other words, this will be a feature-by-feature evaluation. And I don't see the point of keeping the discussion at this very abstract level without concrete examples, so no actions for me on these ones, besides looking at each feature individually and coming up with a list of inconsistencies that I think we need to remove.

  • Regarding: "There is no strict binding currently; the output of a task is passed as input to the next as a JSON object, and we use JSONPath for data filtering. I believe this even has an advantage over things like BPMN2 where, for example, a lot more strict input/output bindings have to be defined. I definitely agree with you that we need a workflow context. That's something AWS has that we don't cover, but note that context is mostly used for execution information, and implementations could add that with or without us defining it specifically. Also, we currently do have workflow data input/output, which is nice."
    This is where it gets confusing: if we have a workflow context, we don't really need to move data from the output of one task to the input of the next one. I would be in favor of just having a workflow context. WDYT? As you mention, AWS already has that concept, and I think it will simplify a lot how people think about data in the context of the workflow.

  • Regarding wording and confusion about the scope and amount of features in tasks: I do understand most of your points; we need to be careful with adding too much control logic, as we can create a monster that is too difficult to understand. Again, there is no point in keeping the discussion abstract; it will be much better to have concrete examples to discuss. I will take an action to create some examples for the pain points that I consider to be major, and we can discuss them separately.

I encourage people from the community to get in touch if they have comments about these topics. For that reason, I would like to keep this issue open for a while, until we decide that we have fixed some of these problems and we are all on the same page.

@cathyhongzhang
Collaborator

This is an interesting talk, which will help our discussion on this, I think.
https://www.youtube.com/watch?v=75MRve4nv8s

@cathyhongzhang
Collaborator

Task is a concrete piece of work to be done. State is a stage in the workflow and conveys more info than a task; for example, you can define the trigger event, how to do retries, and how information is passed, in addition to the task. Some people think a serverless workflow/graph can be naturally modeled as a state machine. Regardless of that, a state is not necessarily associated with a finite state machine, imo. I think using task will be more confusing. Replacing event state with event consumer will cause more confusion, since in CloudEvents it is stated that events might also be routed to consumers, like embedded devices... People can argue that the workflow system is an event consumer, or the serverless function is the event consumer, or the switch state is also an event consumer. I agree with most of Tihomir's comments.

@tsurdilo
Collaborator

tsurdilo commented Jan 4, 2020

"I don't see the point of keep discussing at this very abstract level without concrete examples" -- agreed; that's why I added more examples and will keep adding more of them. That's the best way, I think, to see what may be confusing or needs better explanation.

"This is where it gets confusing.. if we have a workflow context we don't really need to move data from the output of one task to the input of the next one. I would be in favor of just having a workflow context. WDYT?" -- I am honestly not in favor of this. It may work for small workflows, but for anything production-ready, having a single global context holding on to data is not ideal and imo will add to the confusion rather than help. We might look at having something similar to BPMN2, where you have process "global" data and subprocess/state/task "private" data.

Regarding context specifically, there is a difference between "workflow context" and "execution context". I think what we are talking about here is the "workflow context", which can hold "global data" that each state can access. For stateful orchestration, implementations can define an execution context which can hold workflow data that should be persisted; it may include the workflow context (it probably should :) ). This is something that we can definitely define imo.
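One way to picture the distinction (all of this is a hypothetical shape, not an agreed definition): workflow-level "global" data visible to every state, state/task-level "private" data, and a separate implementation-owned execution context:

    workflowContext:              # "global" data every state can access
      order:
        id: o-42
        status: OPEN
    states:
      - name: ReserveStock
        privateData:              # data visible only to this state/task
          reservationAttempts: 0
    # The execution context (not shown) would be implementation-defined, hold
    # persisted execution information, and could embed the workflowContext.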

Also agree that we need concrete things (via examples please -- spec-examples.md) that can be used to see where we can make our improvements.

Same thing, please, for describing why some features are confusing, as I still don't see how this spec is confusing at all. It is fairly small, and compared to BPMN2, and even AWS, it is much, much simpler to use and understand imo.

@tsurdilo
Collaborator

tsurdilo commented Jan 6, 2020

Also just wanted to add -- I think documentation will be much cleaner when the Serverless Workflow spec gets its own GitHub repo and we can do a proper wiki... and maybe one day we get a site like https://cloudevents.io :)

@ruromero
Collaborator

ruromero commented Jan 8, 2020

  • I agree with Tihomir on not having the data exclusively in a global context but having it transformed between states/stages/tasks 😄 Consider parallel tasks or subflows; this can be hard to control. Moving data from one state to the next is what makes this spec so interesting and simple IMO.
  • I would also personally prefer to avoid the use of the word state in favour of stage. By using state it is easy to think about state machines, which are driven by events where each event implies a state change, and this is clearly not the case for a workflow. As already mentioned by Salaboy.
    HTH

@salaboy
Contributor Author

salaboy commented Jan 8, 2020

@cathyhongzhang thanks for your input.. a very good talk about AWS Step Functions. Do you think that Step is clearer than Task?
We now have Manuel's comment here: #127, also mentioning that Task would be clearer.

@salaboy
Contributor Author

salaboy commented Jan 8, 2020

@cathyhongzhang can you help us get a separate repo for the group? Can we open another issue for that, so we know whether we are making progress on it or not?

@cathyhongzhang
Collaborator

@salaboy Yes, I think step is cleaner than task. Or we can use stage, as suggested by Ruben.

@cathyhongzhang
Collaborator

@salaboy I am not sure if I have the permission to get a separate repo. I will check with Doug on that.

@tsurdilo
Collaborator

tsurdilo commented Jan 10, 2020

@salaboy after thinking about this for a while, I have to say that, contrary to the "popular opinion", I now lean toward your proposed "task" over "state":

  • the English definition of "task" fits better with what "state" is currently trying to represent: a piece of work, a "job", or a "chore".
  • the definition of "state" is "particular condition that someone or something is in at a specific time";
    this is, to me, better suited for describing things like "the current state of the workflow" or "the current state of a task", which relates to data and is very well suited for stateful orchestration (note again the word "state" there :)).

As far as "stage" goes, the definition says "a point, period, or step in a process or development" -- this does not sound right to me, as again it is somewhat related to time and is, to me, more related to entities like users than a "thing" like a workflow. Comparing just saying "ParallelTask" or "ParallelStage"... task sounds much better imo :)

So @ALL I think we should maybe cast a vote for this. @salaboy could you create a poll or something for it? Or is there maybe a better idea on how to get everyone's vote in?
I thought originally that it did not matter much, as different existing solutions use both, but I do see that "state" can be more confusing even when talking about it -- "a state of what?". wdyt?

@salaboy
Contributor Author

salaboy commented Feb 4, 2020

@tsurdilo I am happy with a poll... I've just sent a request to the repo admin to install a poll app for GitHub issues.
In general, I tend to agree with "state" for the things you mention, but we cannot generalize the term to Event, Choice, and other types of behavior that don't fit the concept of "state". Hence, if there is at least one that doesn't fit, the whole collection shouldn't be called "states". That is, in my opinion, the main source of confusion.

@cathyhongzhang
Collaborator

I think step (as used by AWS) or stage (as suggested by Ruben) is better than task. A task is a piece of work that needs to be done. The "state" entities we define in the workflow carry more meaning than a task. If the terminology "state" is not optimal, then "step" or "stage" is a better name. Let's discuss this in the weekly meeting to get input from more people.

@cathyhongzhang
Collaborator

@salaboy To whom have you sent the request for getting a separate repo? Actually, Doug already replied to me saying that CNCF did not approve a separate repo for workflow.

@wanghq

wanghq commented Feb 4, 2020

Here is my 2 cents:

I definitely don't like using state machine and state to represent the workflow, as I mentioned in #127 (comment). I think just replacing state machine with workflow, and state with step, will make things easier to understand. Another data point is that this spec is not limited to stateful orchestration; what does "state" mean if it's a stateless workflow?

Just like how Step Functions defines its states, a Step is an abstraction and can have different concrete steps, such as a task step, parallel step, foreach step...
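For illustration, step-based naming could look something like this (field names are purely hypothetical, shown only to contrast with "state"):

    steps:
      - name: fetch-data
        type: task              # a concrete "task step" calling a function
        function: fetchDataFn
      - name: fan-out
        type: parallel          # a "parallel step" with several branches
        branches: [ branch-a, branch-b ]
      - name: process-items
        type: foreach           # a "foreach step" iterating over a collection
        items: "$.records"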

Regarding stage, I feel it's a bit similar to state but without using the word state (which is good). When we model a workflow, do we use verbs or adjectives? If the answer is verbs, is step better than stage? Also, I personally use it less often than step, e.g., execute the workflow step by step (not stage by stage).

@salaboy
Contributor Author

salaboy commented Feb 5, 2020

@wanghq I totally agree with your comment; step would be better. We need something far more generic than state to represent the other types of behavior that we have, such as Event and Choice. I am not really worried about other people using the same nomenclature. Stage doesn't make any sense to me at all, as it brings the notion of an aggregation of things that happened in the past.
100% agree with making sure that we standardize on 'Workflow' terminology, as that is the name of the sub-group.

@tsurdilo
Collaborator

tsurdilo commented Feb 5, 2020

on a side note :) I just had one of those "holy @#!& I'm old" moments after reading "step by step", and the first thing that came to mind was New Kids on the Block.....

@tsurdilo
Collaborator

tsurdilo commented Feb 8, 2020

"I think step (as used by AWS)..."
AWS uses "states"; it's even called the "Amazon States Language" :)
https://states-language.net/spec.html

tsurdilo added the "workflow spec" label on Mar 11, 2020
@tsurdilo
Collaborator

#243

@tsurdilo
Collaborator

Will close this issue, as the naming convention for control flow blocks is handled by PR #243.
