2 changes: 2 additions & 0 deletions _topic_maps/_topic_map.yml
@@ -13,6 +13,8 @@ Topics:
File: about-knative-eventing
- Name: OpenShift Serverless Functions overview
File: serverless-functions-about
- Name: OpenShift Serverless Logic overview
File: serverless-logic-overview
# Support
- Name: OpenShift Serverless support
File: serverless-support
28 changes: 28 additions & 0 deletions about/serverless-logic-overview.adoc
@@ -0,0 +1,28 @@
:_content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="serverless-logic-overview"]
= OpenShift Serverless Logic overview
:context: serverless-logic-overview

toc::[]

OpenShift Serverless Logic enables developers to define declarative workflow models that orchestrate event-driven, serverless applications.

You can write the workflow models in YAML or JSON format, which makes them well suited for developing and deploying serverless applications in cloud or container environments.
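
The following minimal workflow model, written in JSON format, is a sketch only; the `greeting` workflow name, the single `inject` state, and the `specVersion` value are illustrative and show the general shape of a workflow definition:

.Example of a minimal workflow model in JSON format
[source,json]
----
{
  "id": "greeting",
  "version": "1.0",
  "specVersion": "0.8",
  "name": "Greeting workflow",
  "start": "GreetInEnglish",
  "states": [
    {
      "name": "GreetInEnglish",
      "type": "inject",
      "data": {
        "greeting": "Hello"
      },
      "end": true
    }
  ]
}
----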

To deploy the workflows in your {ocp-product-title} cluster, you can use the OpenShift Serverless Logic Operator.

// Add additional resources if any

The following sections provide an overview of the various OpenShift Serverless Logic concepts.

// modules present in this assembly

include::modules/serverless-logic-overview-events.adoc[leveloffset=+1]
include::modules/serverless-logic-overview-callbacks.adoc[leveloffset=+1]
include::modules/serverless-logic-overview-jq-expressions.adoc[leveloffset=+1]
include::modules/serverless-logic-overview-error-handling.adoc[leveloffset=+1]
include::modules/serverless-logic-overview-input-output-schema.adoc[leveloffset=+1]
include::modules/serverless-logic-overview-custom-functions.adoc[leveloffset=+1]
include::modules/serverless-logic-overview-timeouts.adoc[leveloffset=+1]
include::modules/serverless-logic-overview-parallelism.adoc[leveloffset=+1]
53 changes: 53 additions & 0 deletions modules/serverless-logic-overview-callbacks.adoc
@@ -0,0 +1,53 @@
// Module included in the following assemblies:
// * about/serverless-logic-overview.adoc


:_content-type: CONCEPT
[id="serverless-logic-overview-callbacks_{context}"]
= Callbacks

The Callback state performs an action and waits for an event that is produced as a result of the action before resuming the workflow. The action performed by a Callback state is an asynchronous external service invocation. Therefore, the Callback state is suitable for performing `fire&wait-for-result` operations.

From a workflow perspective, an asynchronous service invocation means that control is returned to the caller immediately, without waiting for the action to be completed. After the action is completed, a `CloudEvent` is published to resume the workflow.

.Example of Callback state in JSON format
[source,json]
----
{
  "name": "CheckCredit",
  "type": "callback",
  "action": {
    "functionRef": {
      "refName": "callCreditCheckMicroservice",
      "arguments": {
        "customer": "${ .customer }"
      }
    }
  },
  "eventRef": "CreditCheckCompletedEvent",
  "timeouts": {
    "stateExecTimeout": "PT15M"
  },
  "transition": "EvaluateDecision"
}
----

.Example of Callback state in YAML format
[source,yaml]
----
name: CheckCredit
type: callback
action:
  functionRef:
    refName: callCreditCheckMicroservice
    arguments:
      customer: "${ .customer }"
eventRef: CreditCheckCompletedEvent
timeouts:
  stateExecTimeout: PT15M
transition: EvaluateDecision
----

The `action` property defines a function call that triggers an external activity or service. After the action executes, the Callback state waits for a `CloudEvent`, which indicates the completion of the action by the called service.

After the completion callback event is received, the Callback state completes its execution and transitions to the next defined workflow state or completes workflow execution if it is an end state.
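
The `eventRef` property refers to an event that is declared in the top-level `events` definition of the workflow. The following sketch shows how the `CreditCheckCompletedEvent` event from the previous examples might be declared; the `source` and `type` values are assumptions for illustration only:

.Example of a consumed event definition in JSON format
[source,json]
----
{
  "events": [
    {
      "name": "CreditCheckCompletedEvent",
      "source": "/credit/check",
      "type": "creditCheckCompleteType",
      "kind": "consumed"
    }
  ]
}
----
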
174 changes: 174 additions & 0 deletions modules/serverless-logic-overview-custom-functions.adoc
@@ -0,0 +1,174 @@
// Module included in the following assemblies:
// * about/serverless-logic-overview.adoc


:_content-type: CONCEPT
[id="serverless-logic-overview-custom-functions_{context}"]
= Custom functions

OpenShift Serverless Logic supports the `custom` function type, which enables the implementation to extend the function definition capability. In combination with the `operation` string, the custom type gives you access to a list of predefined function types.

[NOTE]
====
Custom function types might not be portable across other runtime implementations.
====

[id="sysout-custom-function_{context}"]
== Sysout custom function

You can use the `sysout` function for logging, as shown in the following example:

.Example of `sysout` function definition
[source,json]
----
{
  "functions": [
    {
      "name": "logInfo",
      "type": "custom",
      "operation": "sysout:INFO"
    }
  ]
}
----

The string after the `:` is optional and is used to indicate the log level. The possible values are `TRACE`, `DEBUG`, `INFO`, `WARN`, and `ERROR`. If the value is not present, `INFO` is the default.

In the `state` definition, you can call the same `sysout` function as shown in the following example:

.Example of a `sysout` function reference within a state
[source,json]
----
{
  "states": [
    {
      "name": "myState",
      "type": "operation",
      "actions": [
        {
          "name": "printAction",
          "functionRef": {
            "refName": "logInfo",
            "arguments": {
              "message": "\"Workflow model is \\(.)\""
            }
          }
        }
      ]
    }
  ]
}
----

In the previous example, the `message` argument can be a jq expression or a jq string using interpolation.
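
For comparison, the following sketch passes a plain jq expression instead of an interpolated string; the `.customer` field is hypothetical, and its value is logged as-is:

.Example of a `sysout` function call with a plain jq expression argument
[source,json]
----
{
  "name": "printCustomerAction",
  "functionRef": {
    "refName": "logInfo",
    "arguments": {
      "message": ".customer"
    }
  }
}
----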

[id="java-custom-function_{context}"]
== Java custom function

OpenShift Serverless Logic supports invoking `java` functions within the Apache Maven project in which you define your workflow service.

The following example shows the declaration of a `java` function:

.Example of a `java` function declaration
[source,json]
----
{
  "functions": [
    {
      "name": "myFunction", <1>
      "type": "custom", <2>
      "operation": "service:java:com.acme.MyInterfaceOrClass::myMethod" <3>
    }
  ]
}
----

<1> `myFunction` is the function name.
<2> `custom` is the function type.
<3> `service:java:com.acme.MyInterfaceOrClass::myMethod` is the custom operation definition. In the custom operation definition, `service` is the reserved operation keyword, followed by the `java` keyword. `com.acme.MyInterfaceOrClass` is the FQCN (Fully Qualified Class Name) of the interface or implementation class, followed by the method name `myMethod`.
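
The following sketch shows how such a `java` function might be referenced from a state action; the `number` argument name is an assumption and must match what `myMethod` expects:

.Example of a `java` function reference within a state
[source,json]
----
{
  "name": "InvokeJavaFunction",
  "type": "operation",
  "actions": [
    {
      "functionRef": {
        "refName": "myFunction",
        "arguments": {
          "number": "${ .number }"
        }
      }
    }
  ]
}
----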

//[id="camel-custom-function_{context}"] (I have commented out this section, as we have discussed with the Dev team to add this post-release)
//== Camel custom function
//OpenShift Serverless Logic supports the Camel Routes functions within an Apache Maven project, in which you define your workflow service.
//The following example shows the declaration of a `Camel` function:

//.Example of a `Camel` function declaration
//[source,json]
//----
//{
// "functions": [
// {
// "name": "myCamelEndpoint", <1>
// "type": "custom", <2>
// "operation": "camel:direct:myendpoint" <3>
// }
// ]
//}
//----

//<1> `myCamelEndpoint` is the function name.
//<2> `custom` is the function type.
//<3> `camel:direct:myendpoint` is the custom operation definition. In this definition, `camel` is the reserved keyword followed by the direct endpoint, and `myendpoint` is the endpoint URI name found in the route within your project.

[id="knative-custom-function_{context}"]
== Knative custom function

OpenShift Serverless Logic provides an implementation of a custom function through the `knative-serving` add-on to invoke Knative services. This add-on allows you to define a static URI that identifies a Knative service and is used to perform HTTP requests. The Knative service defined in the URI is looked up in the current Knative cluster and translated into a valid URL.

The following example uses a deployed Knative service:

[source,bash]
----
$ kn service list
NAME URL LATEST AGE CONDITIONS READY REASON
custom-function-knative-service http://custom-function-knative-service.default.10.109.169.193.sslip.io custom-function-knative-service-00001 3h16m 3 OK / 3 True
----

You can declare an OpenShift Serverless Logic custom function using the Knative service name, as shown in the following example:

[source,json]
----
"functions": [
{
"name": "greet", <1>
"type": "custom", <2>
"operation": "knative:services.v1.serving.knative.dev/custom-function-knative-service?path=/plainJsonFunction", <3>
}
]
----

<1> `greet` is the function name.
<2> `custom` is the function type.
<3> In `operation`, you set the coordinates of the Knative service.

[NOTE]
====
This function sends a `POST` request. If you do not specify a path, OpenShift Serverless Logic uses the root path (`/`). You can also send `GET` requests by setting `method=GET` in the operation. In this case, the arguments are forwarded as a query string.
====
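
The following sketch shows how the `greet` function might be invoked from a state; the state name and the `name` argument are assumptions, and the argument is sent as the JSON body of the `POST` request:

.Example of a Knative custom function reference within a state
[source,json]
----
{
  "name": "GreetState",
  "type": "operation",
  "actions": [
    {
      "functionRef": {
        "refName": "greet",
        "arguments": {
          "name": "${ .name }"
        }
      }
    }
  ],
  "end": true
}
----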

[id="rest-custom-function_{context}"]
== REST custom function

OpenShift Serverless Logic offers the `rest` custom type as a shortcut. In the function definition, you use the `operation` string to specify the HTTP URI to be invoked and the HTTP method (`get`, `post`, `patch`, or `put`) to be used. When the function is invoked, you pass the request arguments as you do when using an OpenAPI function.

The following example shows the declaration of a `rest` function:

[source,json]
----
{
  "functions": [
    {
      "name": "multiplyAllByAndSum", <1>
      "type": "custom", <2>
      "operation": "rest:post:/numbers/{multiplier}/multiplyByAndSum" <3>
    }
  ]
}
----

<1> `multiplyAllByAndSum` is the function name.
<2> `custom` is the function type.
<3> `rest:post:/numbers/{multiplier}/multiplyByAndSum` is the custom operation definition. In the custom operation definition, `rest` is the reserved operation keyword that indicates this is a REST call, `post` is the HTTP method, and `/numbers/{multiplier}/multiplyByAndSum` is the relative endpoint.

When using relative endpoints, you must specify the host as a property. The format of the host property is `kogito.sw.functions.<function_name>.host`. In this example, `kogito.sw.functions.multiplyAllByAndSum.host` is the host property key. If needed, you can override the default port (80) by specifying the `kogito.sw.functions.multiplyAllByAndSum.port` property.

This endpoint expects a JSON object whose `numbers` field is an array of integers as the request body. It multiplies each item in the array by `multiplier` and returns the sum of all the multiplied items.
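
As a sketch, assuming that the workflow data contains a `numbers` array and a `multiplier` value, the function might be invoked from a state action as follows; the state name is hypothetical, and the `multiplier` argument is expected to fill the `{multiplier}` path parameter:

.Example of a `rest` function reference within a state
[source,json]
----
{
  "name": "MultiplyAndSum",
  "type": "operation",
  "actions": [
    {
      "functionRef": {
        "refName": "multiplyAllByAndSum",
        "arguments": {
          "multiplier": "${ .multiplier }",
          "numbers": "${ .numbers }"
        }
      }
    }
  ],
  "end": true
}
----
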
62 changes: 62 additions & 0 deletions modules/serverless-logic-overview-error-handling.adoc
@@ -0,0 +1,62 @@
// Module included in the following assemblies:
// * about/serverless-logic-overview.adoc


:_content-type: CONCEPT
[id="serverless-logic-overview-error-handling_{context}"]
= Error handling

OpenShift Serverless Logic allows you to define explicit error handling. Inside your workflow model, you can define what must happen when errors occur, rather than delegating error handling to a generic entity. Explicit error handling enables you to handle the errors that might occur during the interactions between the workflow and external systems. When an error occurs, it changes the regular workflow sequence: the workflow state transitions to an alternative state that can potentially handle the error, instead of transitioning to the predefined state.

Each workflow state can define error handling, which relates only to errors that might arise during its execution. Error handling defined in one state cannot handle errors that occur during the execution of another state.

Unknown errors that arise during workflow state execution and that are not explicitly handled within the workflow definition are reported by the runtime implementation, and they halt the workflow execution.

[id="error-definition_{context}"]
== Error definition

An error definition in a workflow is composed of the `name` and `code` parameters. The `name` parameter is a short, natural-language description of an error, such as `wrong parameter`. The `code` parameter helps the implementation to identify the error.

The `code` parameter is mandatory, and the engine uses different strategies to map the provided value to an exception encountered at runtime. The available strategies include the fully qualified class name (FQCN), the error message, and the status code.
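
For example, the following sketch shows an error definition whose `code` maps an exception by its fully qualified class name instead of an HTTP status code; the exception class shown is only an illustration:

.Example of an error definition that uses an FQCN code
[source,json]
----
{
  "name": "Unexpected null value",
  "code": "java.lang.NullPointerException"
}
----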

During workflow execution, you must handle the known workflow errors in the top-level `errors` property of the workflow. This property can be either a `string` type, meaning it references a reusable JSON or YAML definition file that includes the error definitions, or an `array` type, in which you define these checked errors inline in your workflow definition.

The following examples show definitions for both types:

.Example of referencing a reusable error definition file in a workflow definition in JSON format
[source,json]
----
{
  "errors": "file://documents/reusable/errors.json"
}
----

.Example of referencing a reusable error definition file in a workflow definition in YAML format
[source,yaml]
----
errors: file://documents/reusable/errors.json
----

.Example of defining workflow errors inline in a workflow definition in JSON format
[source,json]
----
{
  "errors": [
    {
      "name": "Service not found error",
      "code": "404",
      "description": "Server has not found anything matching the provided service endpoint information"
    }
  ]
}
----

.Example of defining workflow errors inline in a workflow definition in YAML format
[source,yaml]
----
errors:
- name: Service not found error
  code: '404'
  description: Server has not found anything matching the provided service endpoint information
----
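
After an error is defined, a workflow state can reference it by name in its `onErrors` property. The following sketch is illustrative only; the state, function, and transition names are assumptions:

.Example of a state that handles a defined error
[source,json]
----
{
  "name": "InvokeService",
  "type": "operation",
  "actions": [
    {
      "functionRef": {
        "refName": "callService"
      }
    }
  ],
  "onErrors": [
    {
      "errorRef": "Service not found error",
      "transition": "HandleNotFound"
    }
  ],
  "transition": "NextState"
}
----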