Merge branch 'issue_3307' of https://github.com/hhunter-ms/docs into issue_3307
hhunter-ms committed May 23, 2023
2 parents 76b0cb1 + e9d574e commit 66d562f
Showing 17 changed files with 233 additions and 96 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/website-v1-11.yml
@@ -18,7 +18,7 @@ jobs:
- uses: actions/checkout@v2
with:
submodules: recursive
fetch-depth: 0
fetch-depth: 0
- name: Setup Docsy
run: cd daprdocs && git submodule update --init --recursive && sudo npm install -D --save autoprefixer && sudo npm install -D --save postcss-cli
- name: Build And Deploy
@@ -37,7 +37,7 @@ jobs:
app_location: "/daprdocs" # App source code path
api_location: "api" # Api source code path - optional
output_location: "public" # Built app content directory - optional
app_build_command: "hugo"
app_build_command: "git config --global --add safe.directory /github/workspace && hugo"
###### End of Repository/Build Configurations ######

close_pull_request_job:
@@ -45,9 +45,11 @@ Manage your workflow using HTTP calls. The example below plugs in the properties
To start your workflow with an ID `12345678`, run:

```bash
POST http://localhost:3500/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/12345678/start
POST http://localhost:3500/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678
```

Note that workflow instance IDs can only contain alphanumeric characters, underscores, and dashes.
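
As a concrete sketch, the same request issued with curl (assuming the sidecar's default HTTP port 3500; the order payload shown is only illustrative):

```bash
# Start an OrderProcessingWorkflow instance with the ID 12345678 and an example input payload.
curl -i -X POST "http://localhost:3500/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678" \
  -H "Content-Type: application/json" \
  -d '{"Name": "Paperclips", "Quantity": 1, "TotalCost": 9.95}'
```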

### Terminate workflow

To terminate your workflow with an ID `12345678`, run:
@@ -61,7 +63,7 @@ POST http://localhost:3500/v1.0-alpha1/workflows/dapr/12345678/terminate
To fetch workflow information (outputs and inputs) with an ID `12345678`, run:

```bash
GET http://localhost:3500/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/12345678
GET http://localhost:3500/v1.0-alpha1/workflows/dapr/12345678
```
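
For example, with curl (assuming the default sidecar port 3500):

```bash
# Fetch the status, inputs, and outputs of workflow instance 12345678.
curl -s "http://localhost:3500/v1.0-alpha1/workflows/dapr/12345678"
```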

Learn more about these HTTP calls in the [workflow API reference guide]({{< ref workflow_api.md >}}).
@@ -52,20 +52,24 @@ Each workflow instance managed by the engine is represented as one or more spans

There are two types of actors that are internally registered within the Dapr sidecar in support of the workflow engine:

- `dapr.internal.wfengine.workflow`
- `dapr.internal.wfengine.activity`
- `dapr.internal.{namespace}.{appID}.workflow`
- `dapr.internal.{namespace}.{appID}.activity`

The `{namespace}` value is the Dapr namespace and defaults to `default` if no namespace is configured. The `{appID}` value is the app's ID. For example, if you have a workflow app named "wfapp", then the type of the workflow actor would be `dapr.internal.default.wfapp.workflow` and the type of the activity actor would be `dapr.internal.default.wfapp.activity`.

The following diagram demonstrates how internal workflow actors operate in a Kubernetes scenario:

<img src="/images/workflow-overview/workflow-execution.png" alt="Diagram demonstrating internally registered actors across a cluster" />

Just like user-defined actors, internal workflow actors are distributed across the cluster by the actor placement service. They also maintain their own state and make use of reminders. However, unlike actors that live in application code, these _internal_ actors are embedded into the Dapr sidecar. Application code is completely unaware that these actors exist.

There are two types of actors registered by the Dapr sidecar for workflow: the _workflow_ actor and the _activity_ actor. The next sections will go into more details on each.
{{% alert title="Note" color="primary" %}}
The internal workflow actor types are only registered after an app has registered a workflow using a Dapr Workflow SDK. If an app never registers a workflow, then the internal workflow actors are never registered.
{{% /alert %}}

### Workflow actors

A new instance of the `dapr.internal.wfengine.workflow` actor is activated for every workflow instance that gets created. The ID of the _workflow_ actor is the ID of the workflow. This internal actor stores the state of the workflow as it progresses and determines the node on which the workflow code executes via the actor placement service.
Workflow actors are responsible for managing the state and placement of all workflows running in the app. A new instance of the workflow actor is activated for every workflow instance that gets created. The ID of the workflow actor is the ID of the workflow. This internal actor stores the state of the workflow as it progresses and determines the node on which the workflow code executes via the actor placement service.

Each workflow actor saves its state using the following keys in the configured state store:

@@ -94,17 +98,13 @@ To summarize:

### Activity actors

A new instance of the `dapr.internal.wfengine.activity` actor is activated for every activity task that gets scheduled by a workflow. The ID of the _activity_ actor is the ID of the workflow combined with a sequence number (sequence numbers start with 0). For example, if a workflow has an ID of `876bf371`, the third activity scheduled by that workflow will have the ID `876bf371#2`, where `2` is its sequence number.
Activity actors are responsible for managing the state and placement of all workflow activity invocations. A new instance of the activity actor is activated for every activity task that gets scheduled by a workflow. The ID of the activity actor is the ID of the workflow combined with a sequence number (sequence numbers start with 0). For example, if a workflow has an ID of `876bf371`, the third activity scheduled by that workflow will have the ID `876bf371::2`, where `2` is its sequence number.

Each activity actor stores a single key into the state store:

| Key | Description |
| --- | ----------- |
| `activityreq-N` | The key contains the activity invocation payload, which includes the serialized activity input data. The `N` value is a 64-bit unsigned integer that represents the _generation_ of the workflow, a concept which is outside the scope of this documentation. |

{{% alert title="Warning" color="warning" %}}
In the [Alpha release of the Dapr Workflow engine]({{< ref support-preview-features.md >}}), activity actor state will remain in the state store even after the activity task has completed. Scheduling a large number of workflow activities could result in unbounded storage usage. In a future release, data retention policies will be introduced that can automatically purge the state store of completed activity state.
{{% /alert %}}
| `activityState` | The key contains the activity invocation payload, which includes the serialized activity input data. This key is deleted automatically after the activity invocation has completed. |

The following diagram illustrates the typical lifecycle of an activity actor.

@@ -78,7 +78,7 @@ In the fan-out/fan-in design pattern, you execute multiple tasks simultaneously

<img src="/images/workflow-overview/workflows-fanin-fanout.png" width=800 alt="Diagram showing how the fan-out/fan-in workflow pattern works">

In addition to the challenges mentioned in [the previous pattern]({{< ref "workflow-overview.md#task-chaining" >}}), there are several important questions to consider when implementing the fan-out/fan-in pattern manually:
In addition to the challenges mentioned in [the previous pattern]({{< ref "workflow-patterns.md#task-chaining" >}}), there are several important questions to consider when implementing the fan-out/fan-in pattern manually:

- How do you control the degree of parallelism?
- How do you know when to trigger subsequent aggregation steps?
@@ -11,7 +11,7 @@ This article provides guidance on running Dapr with Podman on a Windows/Linux/ma
## Prerequisites

- [Dapr CLI]({{< ref install-dapr-cli.md >}})
- [Podman](https://podman.io/getting-started/installation.html)
- [Podman](https://podman.io/docs/tutorials/installation)

## Initialize Dapr environment

66 changes: 21 additions & 45 deletions daprdocs/content/en/reference/api/workflow_api.md
@@ -10,29 +10,25 @@ Dapr provides users with the ability to interact with workflows and comes with a

## Start workflow request

Start a workflow instance with the given name and instance ID.
Start a workflow instance with the given name and, optionally, an instance ID.

```bash
POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<workflowName>/<instanceId>/start
POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<workflowName>/start[?instanceId=<instanceId>]
```

Note that workflow instance IDs can only contain alphanumeric characters, underscores, and dashes.

### URL parameters

Parameter | Description
--------- | -----------
`workflowComponentName` | Current default is `dapr` for Dapr Workflows
`workflowName` | Identify the workflow type
`instanceId` | Unique value created for each run of a specific workflow
`instanceId` | (Optional) Unique value created for each run of a specific workflow

### Request content

In the request you can pass along relevant input information that will be passed to the workflow:

```json
{
"input": // argument(s) to pass to the workflow which can be any valid JSON data type (such as objects, strings, numbers, arrays, etc.)
}
```
Any request content will be passed to the workflow as input. The Dapr API passes the content as-is without attempting to interpret it.
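
For example, a start request carrying a JSON document as the workflow input could be sketched with curl as follows (the component name `dapr`, the workflow name, the instance ID, and the payload are placeholders):

```bash
# The request body is handed to the workflow unmodified as its input.
curl -X POST "http://localhost:3500/v1.0-alpha1/workflows/dapr/MyWorkflow/start?instanceId=my-instance-01" \
  -H "Content-Type: application/json" \
  -d '{"item": "widget", "quantity": 3}'
```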

### HTTP response codes

@@ -48,9 +44,7 @@ The API call will provide a response similar to this:

```json
{
  "WFInfo": {
    "instance_id": "SampleWorkflow"
  }
"instanceID": "12345678"
}
```

@@ -59,15 +53,14 @@ The API call will provide a response similar to this:
Terminate a running workflow instance with the given name and instance ID.

```bash
POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<instanceId>/terminate
POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<instanceId>/terminate
```

### URL parameters

Parameter | Description
--------- | -----------
`workflowComponentName` | Current default is `dapr` for Dapr Workflows
`workflowName` | Identify the workflow type
`instanceId` | Unique value created for each run of a specific workflow

### HTTP response codes
@@ -80,62 +73,45 @@ Code | Description

### Response content

The API call will provide a response similar to this:

```bash
HTTP/1.1 202 Accepted
Server: fasthttp
Date: Thu, 12 Jan 2023 21:31:16 GMT
Traceparent: 00-e3dedffedbeb9efbde9fbed3f8e2d8-5f38960d43d24e98-01
Connection: close 
```
This API does not return any content.
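
For example, a terminate call sketched with curl (the component name `dapr` and instance ID `12345678` are placeholders; `-i` prints the response status line since there is no body):

```bash
# Request termination of workflow instance 12345678; the response has no body.
curl -i -X POST "http://localhost:3500/v1.0-alpha1/workflows/dapr/12345678/terminate"
```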

### Get workflow request

Get information about a given workflow instance.

```bash
GET http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<workflowName>/<instanceId>
GET http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<instanceId>
```

### URL parameters

Parameter | Description
--------- | -----------
`workflowComponentName` | Current default is `dapr` for Dapr Workflows
`workflowName` | Identify the workflow type
`instanceId` | Unique value created for each run of a specific workflow

### HTTP response codes

Code | Description
---- | -----------
`202` | Accepted
`200` | OK
`400` | Request was malformed
`500` | Request formatted correctly, error in dapr code or underlying component

### Response content

The API call will provide a response similar to this:

```bash
HTTP/1.1 202 Accepted
Server: fasthttp
Date: Thu, 12 Jan 2023 21:31:16 GMT
Content-Type: application/json
Content-Length: 139
Traceparent: 00-e3dedffedbeb9efbde9fbed3f8e2d8-5f38960d43d24e98-01
Connection: close 
```

The API call will provide a JSON response similar to this:

```json
{
  "WFInfo": {
    "instance_id": "SampleWorkflow"
  },
  "start_time": "2023-01-12T21:31:13Z",
  "metadata": {
    "status": "Running",
    "task_queue": "WorkflowSampleQueue"
  }
  "createdAt": "2023-01-12T21:31:13Z",
  "instanceID": "12345678",
  "lastUpdatedAt": "2023-01-12T21:31:13Z",
  "properties": {
    "property1": "value1",
    "property2": "value2"
  },
  "runtimeStatus": "RUNNING"
}
```
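
As a quick sketch, the runtime status can be pulled out of that response with curl and `jq` (assuming `jq` is installed and `12345678` is an existing instance):

```bash
# Query workflow instance 12345678 and print only its runtime status.
curl -s "http://localhost:3500/v1.0-alpha1/workflows/dapr/12345678" | jq -r '.runtimeStatus'
```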

@@ -26,6 +26,8 @@ spec:
    value: /Users/somepath/client.pem # OPTIONAL <path to client cert> or <pem encoded string>
  - name: MTLSClientKey
    value: /Users/somepath/client.key # OPTIONAL <path to client key> or <pem encoded string>
  - name: MTLSRenegotiation
    value: RenegotiateOnceAsClient # OPTIONAL one of: RenegotiateNever, RenegotiateOnceAsClient, RenegotiateFreelyAsClient
  - name: securityToken # OPTIONAL <token to include as a header on HTTP requests>
    secretKeyRef:
      name: mysecret
@@ -42,6 +44,7 @@ spec:
| MTLSRootCA | N | Output |Path to root ca certificate or pem encoded string |
| MTLSClientCert | N | Output |Path to client certificate or pem encoded string |
| MTLSClientKey | N | Output |Path client private key or pem encoded string |
| MTLSRenegotiation | N | Output |Type of TLS renegotiation to be used. One of: `RenegotiateNever`, `RenegotiateOnceAsClient`, `RenegotiateFreelyAsClient` |
| securityToken | N | Output |The value of a token to be added to an HTTP request as a header. Used together with `securityTokenHeader` |
| securityTokenHeader | N | Output |The name of the header for `securityToken` on an HTTP request |

@@ -317,6 +320,13 @@ These fields can be passed as a file path or as a pem encoded string.
- If the pem encoded string is provided, the string is used as is.
When these fields are configured, the Dapr sidecar uses the provided certificate to authenticate itself with the server during the TLS handshake process.

If the remote server is enforcing TLS renegotiation, you also need to set the metadata field `MTLSRenegotiation`. This field accepts one of the following options:
- `RenegotiateNever`
- `RenegotiateOnceAsClient`
- `RenegotiateFreelyAsClient`

For more details, see [the Go `RenegotiationSupport` documentation](https://pkg.go.dev/crypto/tls#RenegotiationSupport).
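
Once the certificates (and, if required, `MTLSRenegotiation`) are configured on the component, invoking the binding through the Dapr sidecar is unchanged. A minimal sketch, assuming a component named `httpbinding` and the default sidecar HTTP port 3500:

```bash
# Invoke the HTTP output binding; the sidecar handles the mTLS handshake with the remote server.
curl -X POST http://localhost:3500/v1.0/bindings/httpbinding \
  -H "Content-Type: application/json" \
  -d '{"operation": "get"}'
```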

### When to use:
You can use this when the server with which the HTTP binding is configured to communicate requires mTLS or client TLS authentication.

@@ -70,6 +70,11 @@ The above example uses secrets as plain strings. It is recommended to use a secr
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you're using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you **must not** provide AWS access-key, secret-key, and tokens in the definition of the component spec you're using.
{{% /alert %}}


### S3 bucket creation
{{< tabs "Minio" "LocalStack" "AWS" >}}

{{% codetab %}}
### Using with Minio

[Minio](https://min.io/) is a service that exposes local storage as S3-compatible object storage, and it's a popular alternative to S3, especially in development environments. You can use the S3 binding with Minio too, with some configuration tweaks:
@@ -78,6 +83,70 @@ When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernet
3. The value for `region` is not important; you can set it to `us-east-1`.
4. Depending on your environment, you may need to set `disableSSL` to `true` if you're connecting to Minio using a non-secure connection (using the `http://` protocol). If you are using a secure connection (`https://` protocol) but with a self-signed certificate, you may need to set `insecureSSL` to `true`.

{{% /codetab %}}

{{% codetab %}}
For local development, you can use the [LocalStack project](https://github.com/localstack/localstack) to emulate AWS S3. Follow [these instructions](https://github.com/localstack/localstack#running) to run LocalStack.

To run LocalStack locally from the command line using Docker, use a `docker-compose.yaml` similar to the following:

```yaml
version: "3.8"

services:
  localstack:
    container_name: "cont-aws-s3"
    image: localstack/localstack:1.4.0
    ports:
      - "127.0.0.1:4566:4566"
    environment:
      - DEBUG=1
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "<PATH>/init-aws.sh:/etc/localstack/init/ready.d/init-aws.sh" # init hook
      - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
```

To use the S3 component, you need an existing bucket. The example above uses a [LocalStack Initialization Hook](https://docs.localstack.cloud/references/init-hooks/) to set up the bucket; a minimal version of such a hook is sketched below.
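
A minimal `init-aws.sh` sketch that creates the bucket referenced in the component below, using the `awslocal` wrapper that ships with the LocalStack image (the bucket name is only an example):

```bash
#!/bin/bash
# Runs inside the LocalStack container once it reports ready;
# creates the bucket that the S3 binding expects to already exist.
awslocal s3 mb s3://conformance-test-docker
```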

To use LocalStack with your S3 binding, you need to provide the `endpoint` configuration in the component metadata. The `endpoint` is unnecessary when running against production AWS.


```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: aws-s3
  namespace: default
spec:
  type: bindings.aws.s3
  version: v1
  metadata:
    - name: bucket
      value: conformance-test-docker
    - name: endpoint
      value: "http://localhost:4566"
    - name: accessKey
      value: "my-access"
    - name: secretKey
      value: "my-secret"
    - name: region
      value: "us-east-1"
```
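
With the component above loaded by the sidecar, a quick way to verify the setup is to invoke the binding directly. A sketch of a `create` operation (the key and data values are arbitrary examples):

```bash
# Upload a small text object to the LocalStack-backed bucket through the Dapr S3 binding.
curl -X POST http://localhost:3500/v1.0/bindings/aws-s3 \
  -H "Content-Type: application/json" \
  -d '{"operation": "create", "data": "hello from Dapr", "metadata": {"key": "test-object.txt"}}'
```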

{{% /codetab %}}

{{% codetab %}}

To use the S3 component, you need an existing bucket. Follow the [AWS documentation for creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html).

{{% /codetab %}}



{{< /tabs >}}

## Binding support

This component supports **output binding** with the following operations:
