From d7d52eac139b2693efa6862366394f6ea69acc90 Mon Sep 17 00:00:00 2001 From: akkie Date: Tue, 18 May 2021 14:43:41 +0200 Subject: [PATCH 01/22] Update Zeebe to version 1.0 --- .../supported-bindings/_index.md | 2 +- .../supported-bindings/zeebe-command.md | 108 +++++++++--------- .../supported-bindings/zeebe-jobworker.md | 6 +- 3 files changed, 56 insertions(+), 60 deletions(-) diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/_index.md b/daprdocs/content/en/reference/components-reference/supported-bindings/_index.md index e253a9c2b9c..ba2ca36294a 100644 --- a/daprdocs/content/en/reference/components-reference/supported-bindings/_index.md +++ b/daprdocs/content/en/reference/components-reference/supported-bindings/_index.md @@ -81,7 +81,7 @@ Table captions: | [Azure SignalR]({{< ref signalr.md >}}) | | ✅ | Alpha | v1 | 1.0 | | [Azure Storage Queues]({{< ref storagequeues.md >}}) | ✅ | ✅ | GA | v1 | 1.0 | -### Zeebe (Camunda) +### Zeebe (Camunda Cloud) | Name | Input
Binding | Output
Binding | Status | Component version | Since | |------|:----------------:|:-----------------:|--------| --------- | ---------- | diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/zeebe-command.md b/daprdocs/content/en/reference/components-reference/supported-bindings/zeebe-command.md index b7d6e9fecf6..f511d65ce55 100644 --- a/daprdocs/content/en/reference/components-reference/supported-bindings/zeebe-command.md +++ b/daprdocs/content/en/reference/components-reference/supported-bindings/zeebe-command.md @@ -45,7 +45,7 @@ spec: This component supports **output binding** with the following operations: - `topology` -- `deploy-workflow` +- `deploy-process` - `create-instance` - `cancel-instance` - `set-variables` @@ -66,7 +66,7 @@ https://stage.docs.zeebe.io/reference/grpc.html The `topology` operation obtains the current topology of the cluster the gateway is part of. -To perform a `topology` operation, invoke the Zeebe command binding with a `POST` method and the following JSON body: +To perform a `topology` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body: ```json { @@ -120,28 +120,25 @@ The response values are: - `replicationFactor` - configured replication factor for this cluster - `gatewayVersion` - gateway version -#### deploy-workflow +#### deploy-process -The `deploy-workflow` operation deploys a single workflow to Zeebe. +The `deploy-process` operation deploys a single process to Zeebe. -To perform a `deploy-workflow` operation, invoke the Zeebe command binding with a `POST` method and the following JSON body: +To perform a `deploy-process` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body: ```json { "data": "YOUR_FILE_CONTENT", "metadata": { - "fileName": "products-process.bpmn", - "fileType": "bpmn" + "fileName": "products-process.bpmn" }, - "operation": "deploy-workflow" + "operation": "deploy-process" } ``` The metadata parameters are: -- `fileName` - the name of the workflow file -- `fileType` - (optional) the type of the file 'bpmn' or 'file'. 
If no type was given, the default will be recognized based on the file extension - 'bpmn' for file extension .bpmn, for all other files it will be set to 'file' +- `fileName` - the name of the process file ##### Response @@ -150,11 +147,11 @@ The binding returns a JSON with the following response: ```json { "key": 2251799813687320, - "workflows": [ + "processes": [ { "bpmnProcessId": "products-process", "version": 3, - "workflowKey": 2251799813685895, + "processDefinitionKey": 2251799813685895, "resourceName": "products-process.bpmn" } ] @@ -164,23 +161,23 @@ The binding returns a JSON with the following response: The response values are: - `key` - the unique key identifying the deployment -- `workflows` - a list of deployed workflows +- `processes` - a list of deployed processes - `bpmnProcessId` - the bpmn process ID, as parsed during deployment; together with the version forms a unique identifier for a specific - workflow definition + process definition - `version` - the assigned process version - - `workflowKey` - the assigned key, which acts as a unique identifier for this workflow - - `resourceName` - the resource name from which this workflow was parsed + - `processDefinitionKey` - the assigned key, which acts as a unique identifier for this process + - `resourceName` - the resource name from which this process was parsed #### create-instance -The `create-instance` operation creates and starts an instance of the specified workflow. The workflow definition to use to create the instance can be -specified either using its unique key (as returned by the `deploy-workflow` operation), or using the BPMN process ID and a version. +The `create-instance` operation creates and starts an instance of the specified process. The process definition to use to create the instance can be +specified either using its unique key (as returned by the `deploy-process` operation), or using the BPMN process ID and a version. -Note that only workflows with none start events can be started through this command. +Note that only processes with none start events can be started through this command. ##### By BPMN process ID -To perform a `create-instance` operation, invoke the Zeebe command binding with a `POST` method and the following JSON body: +To perform a `create-instance` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body: ```json { @@ -198,22 +195,22 @@ To perform a `create-instance` operation, invoke the Zeebe command binding with The data parameters are: -- `bpmnProcessId` - the BPMN process ID of the workflow definition to instantiate +- `bpmnProcessId` - the BPMN process ID of the process definition to instantiate - `version` - (optional, default: latest version) the version of the process to instantiate - `variables` - (optional) JSON document that will instantiate the variables for the root variable scope of the - workflow instance; it must be a JSON object, as variables will be mapped in a + process instance; it must be a JSON object, as variables will be mapped in a key-value fashion. e.g. { "a": 1, "b": 2 } will create two variables, named "a" and "b" respectively, with their associated values. 
[{ "a": 1, "b": 2 }] would not be a valid argument, as the root of the JSON document is an array and not an object -##### By workflow key +##### By process definition key -To perform a `create-instance` operation, invoke the Zeebe command binding with a `POST` method and the following JSON body: +To perform a `create-instance` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body: ```json { "data": { - "workflowKey": 2251799813685895, + "processDefinitionKey": 2251799813685895, "variables": { "productId": "some-product-id", "productName": "some-product-name", @@ -226,44 +223,43 @@ To perform a `create-instance` operation, invoke the Zeebe command binding with The data parameters are: -- `workflowKey` - the unique key identifying the workflow definition to instantiate -- `variables` - (optional) JSON document that will instantiate the variables for the root variable scope of the - workflow instance; it must be a JSON object, as variables will be mapped in a +- `processDefinitionKey` - the unique key identifying the process definition to instantiate +- `variables` - (optional) JSON document that will instantiate the variables for the root variable scope of the + process instance; it must be a JSON object, as variables will be mapped in a key-value fashion. e.g. { "a": 1, "b": 2 } will create two variables, named "a" and "b" respectively, with their associated values. [{ "a": 1, "b": 2 }] would not be a valid argument, as the root of the JSON document is an array and not an object - ##### Response The binding returns a JSON with the following response: ```json { - "workflowKey": 2251799813685895, + "processDefinitionKey": 2251799813685895, "bpmnProcessId": "products-process", "version": 3, - "workflowInstanceKey": 2251799813687851 + "processInstanceKey": 2251799813687851 } ``` The response values are: -- `workflowKey` - the key of the workflow definition which was used to create the workflow instance -- `bpmnProcessId` - the BPMN process ID of the workflow definition which was used to create the workflow instance -- `version` - the version of the workflow definition which was used to create the workflow instance -- `workflowInstanceKey` - the unique identifier of the created workflow instance +- `processDefinitionKey` - the key of the process definition which was used to create the process instance +- `bpmnProcessId` - the BPMN process ID of the process definition which was used to create the process instance +- `version` - the version of the process definition which was used to create the process instance +- `processInstanceKey` - the unique identifier of the created process instance #### cancel-instance -The `cancel-instance` operation cancels a running workflow instance. +The `cancel-instance` operation cancels a running process instance. -To perform a `cancel-instance` operation, invoke the Zeebe command binding with a `POST` method and the following JSON body: +To perform a `cancel-instance` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body: ```json { "data": { - "workflowInstanceKey": 2251799813687851 + "processInstanceKey": 2251799813687851 }, "metadata": {}, "operation": "cancel-instance" @@ -272,7 +268,7 @@ To perform a `cancel-instance` operation, invoke the Zeebe command binding with The data parameters are: -- `workflowInstanceKey` - the workflow instance key +- `processInstanceKey` - the process instance key ##### Response @@ -280,9 +276,9 @@ The binding does not return a response body. 
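To see how these operations are called at runtime, here is a minimal sketch that goes through Dapr's generic bindings HTTP endpoint (`POST /v1.0/bindings/<name>`). It is illustrative only: the component name (`zeebe-command` here) and the default Dapr HTTP port of 3500 are assumptions, so substitute the component name and port from your own setup.

```bash
# Hypothetical invocation of the cancel-instance operation documented above.
# Assumes a binding component named "zeebe-command" and the default Dapr
# HTTP port of 3500; adjust both for your environment.
curl -X POST http://localhost:3500/v1.0/bindings/zeebe-command \
  -H "Content-Type: application/json" \
  -d '{
        "data": { "processInstanceKey": 2251799813687851 },
        "metadata": {},
        "operation": "cancel-instance"
      }'
```

The same pattern applies to every operation in this section; only the `data` payload and the `operation` value change.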
#### set-variables

-The `set-variables` operation creates or updates variables for an element instance (e.g. workflow instance, flow element instance).
+The `set-variables` operation creates or updates variables for an element instance (e.g. process instance, flow element instance).

-To perform a `set-variables` operation, invoke the Zeebe command binding with a `POST` method and the following JSON body:
+To perform a `set-variables` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:

```json
{
@@ -301,7 +297,7 @@ To perform a `set-variables` operation, invoke the Zeebe command binding with a

The data parameters are:

-- `elementInstanceKey` - the unique identifier of a particular element; can be the workflow instance key (as
+- `elementInstanceKey` - the unique identifier of a particular element; can be the process instance key (as
  obtained during instance creation), or a given element, such as a service task (see elementInstanceKey on the job message)
- `local` - (optional, default: `false`) if true, the variables will be merged strictly into the local scope (as indicated by
  elementInstanceKey); this means the variables are not propagated to upper scopes.
@@ -330,7 +326,7 @@ The response values are:

The `resolve-incident` operation resolves an incident.

-To perform a `resolve-incident` operation, invoke the Zeebe command binding with a `POST` method and the following JSON body:
+To perform a `resolve-incident` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:

```json
{
@@ -354,7 +350,7 @@ The binding does not return a response body.

The `publish-message` operation publishes a single message. Messages are published to specific partitions computed from their correlation keys.

-To perform a `publish-message` operation, invoke the Zeebe command binding with a `POST` method and the following JSON body:
+To perform a `publish-message` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:

```json
{
@@ -398,7 +394,7 @@ The response values are:

The `activate-jobs` operation iterates through all known partitions in a round-robin fashion, activates up to the requested maximum number of jobs, and streams them back to the client as they are activated.
-To perform a `activate-jobs` operation, invoke the Zeebe command binding with a `POST` method and the following JSON body:
+To perform an `activate-jobs` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:

```json
{
@@ -443,12 +439,12 @@ The response values are:

- `key` - the key, a unique identifier for the job
- `type` - the type of the job (should match what was requested)
-- `workflowInstanceKey` - the job's workflow instance key
-- `bpmnProcessId` - the bpmn process ID of the job workflow definition
-- `workflowDefinitionVersion` - the version of the job workflow definition
-- `workflowKey` - the key of the job workflow definition
+- `processInstanceKey` - the job's process instance key
+- `bpmnProcessId` - the bpmn process ID of the job process definition
+- `processDefinitionVersion` - the version of the job process definition
+- `processDefinitionKey` - the key of the job process definition
- `elementId` - the associated task element ID
-- `elementInstanceKey` - the unique key identifying the associated task, unique within the scope of the workflow instance
+- `elementInstanceKey` - the unique key identifying the associated task, unique within the scope of the process instance
- `customHeaders` - a set of custom headers defined during modelling; returned as a serialized JSON document
- `worker` - the name of the worker which activated this job
- `retries` - the number of retries left for this job (should always be positive)
@@ -459,7 +455,7 @@

The `complete-job` operation completes a job with the given payload, which allows completing the associated service task.

-To perform a `complete-job` operation, invoke the Zeebe command binding with a `POST` method and the following JSON body:
+To perform a `complete-job` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:

```json
{
@@ -491,7 +487,7 @@ The `fail-job` operation marks the job as failed; if the retries argument is pos
worker could try again to process it. If it is zero or negative however, an incident will be raised, tagged with the given errorMessage, and the job will not be activatable until the incident is resolved.

-To perform a `fail-job` operation, invoke the Zeebe command binding with a `POST` method and the following JSON body:
+To perform a `fail-job` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:

```json
{
@@ -521,7 +517,7 @@

The `update-job-retries` operation updates the number of retries a job has left. This is mostly useful for jobs that have run out of retries, should the underlying problem be solved.

-To perform a `update-job-retries` operation, invoke the Zeebe command binding with a `POST` method and the following JSON body:
+To perform an `update-job-retries` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:

```json
{
@@ -546,9 +542,9 @@

#### throw-error

The `throw-error` operation throws an error to indicate that a business error has occurred while processing the job. The error is identified
-by an error code and is handled by an error catch event in the workflow with the same error code.
+by an error code and is handled by an error catch event in the process with the same error code.
-To perform a `throw-error` operation, invoke the Zeebe command binding with a `POST` method and the following JSON body:
+To perform a `throw-error` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:

```json
{
diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/zeebe-jobworker.md b/daprdocs/content/en/reference/components-reference/supported-bindings/zeebe-jobworker.md
index 2cce0ec00b7..4a2a98d96fa 100644
--- a/daprdocs/content/en/reference/components-reference/supported-bindings/zeebe-jobworker.md
+++ b/daprdocs/content/en/reference/components-reference/supported-bindings/zeebe-jobworker.md
@@ -73,10 +73,10 @@ This component supports **input** binding interfaces.

### Input binding

-The Zeebe workflow engine handles the workflow state as also workflow variables which can be passed
-on workflow instantiation or which can be updated or created during workflow execution. These variables
+The Zeebe process engine handles the process state as well as process variables which can be passed
+on process instantiation or which can be updated or created during process execution. These variables
can be passed to a registered job worker by defining the variable names as a comma-separated list in
-the `fetchVariables` metadata field. The workflow engine will then pass these variables with its current
+the `fetchVariables` metadata field. The process engine will then pass these variables with their current
values to the job worker implementation.

If the binding will register three variables `productId`, `productName` and `productKey` then the service will

From 3e0f702fa2d2af8918cce4c265561abbdf73e26c Mon Sep 17 00:00:00 2001
From: Bernd Verst
Date: Wed, 26 May 2021 18:59:40 -0700
Subject: [PATCH 02/22] Supported Release Info and Upgrade Path for v1.2 (#1494)

* Supported Release Info and Upgrade Path for v1.2
* Update support-release-policy.md
* Update daprdocs/content/en/operations/support/support-release-policy.md

Co-authored-by: Aaron Crawfis
---
 .../operations/support/support-release-policy.md | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/daprdocs/content/en/operations/support/support-release-policy.md b/daprdocs/content/en/operations/support/support-release-policy.md
index 9c3e689ecfa..20594c89dd3 100644
--- a/daprdocs/content/en/operations/support/support-release-policy.md
+++ b/daprdocs/content/en/operations/support/support-release-policy.md
@@ -31,11 +31,12 @@ The table below shows the versions of Dapr releases that have been tested togeth

| Release date | Runtime | CLI | SDKs | Dashboard | Status |
|--------------------|:--------:|:--------|---------|---------|---------|
-| Feb 17th 2021 | 1.0.0
| 1.0.0 | Java 1.0.0
Go 1.0.0
PHP 1.0.0
Python 1.0.0
.NET 1.0.0 | 0.6.0 | Supported | -| Mar 4th 2021 | 1.0.1
| 1.0.1 | Java 1.0.2
Go 1.0.0
PHP 1.0.0
Python 1.0.0
.NET 1.0.0 | 0.6.0 | Supported | +| Feb 17th 2021 | 1.0.0
| 1.0.0 | Java 1.0.0
Go 1.0.0
PHP 1.0.0
Python 1.0.0
.NET 1.0.0 | 0.6.0 | Unsupported | +| Mar 4th 2021 | 1.0.1
| 1.0.1 | Java 1.0.2
Go 1.0.0
PHP 1.0.0
Python 1.0.0
.NET 1.0.0 | 0.6.0 | Unsupported | | Apr 1st 2021 | 1.1.0
| 1.1.0 | Java 1.0.2
Go 1.1.0
PHP 1.0.0
Python 1.1.0
.NET 1.1.0 | 0.6.0 | Supported | -| Apr 6th 2021 | 1.1.1
| 1.1.0 | Java 1.0.2
Go 1.1.0
PHP 1.0.0
Python 1.1.0
.NET 1.1.0 | 0.6.0 | Supported (current) | -| Apr 16th 2021 | 1.1.2
| 1.1.0 | Java 1.0.2
Go 1.1.0
PHP 1.0.0
Python 1.1.0
.NET 1.1.0 | 0.6.0 | Supported (current) | +| Apr 6th 2021 | 1.1.1
| 1.1.0 | Java 1.0.2
Go 1.1.0
PHP 1.0.0
Python 1.1.0
.NET 1.1.0 | 0.6.0 | Supported | +| Apr 16th 2021 | 1.1.2
| 1.1.0 | Java 1.0.2
Go 1.1.0
PHP 1.0.0
Python 1.1.0
.NET 1.1.0 | 0.6.0 | Supported | +| May 26th 2021 | 1.2.0
| 1.2.0 | Java 1.1.0
Go 1.1.0
PHP 1.1.0
Python 1.1.0
.NET 1.2.0 | 0.6.0 | Supported (current) |

## Upgrade paths
After the 1.0 release of the runtime there may be situations where it is necessary to explicitly upgrade through an additional release to reach the desired target. For example an upgrade from v1.0 to v1.2 may need to pass through v1.1

General guidance on upgrading can be found for [self hosted mode]({{ Date: Thu, 27 May 2021 12:16:09 +0200 Subject: [PATCH 03/22] Add documentation for the job related metadata and custom headers

---
 .../supported-bindings/zeebe-jobworker.md | 33 +++++++++++++++++--
 1 file changed, 30 insertions(+), 3 deletions(-)

diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/zeebe-jobworker.md b/daprdocs/content/en/reference/components-reference/supported-bindings/zeebe-jobworker.md
index 4a2a98d96fa..2904f67c263 100644
--- a/daprdocs/content/en/reference/components-reference/supported-bindings/zeebe-jobworker.md
+++ b/daprdocs/content/en/reference/components-reference/supported-bindings/zeebe-jobworker.md
@@ -73,14 +73,16 @@ This component supports **input** binding interfaces.

### Input binding

+#### Variables
+
The Zeebe process engine handles the process state as well as process variables which can be passed
on process instantiation or which can be updated or created during process execution. These variables
can be passed to a registered job worker by defining the variable names as a comma-separated list in
the `fetchVariables` metadata field. The process engine will then pass these variables with their current
values to the job worker implementation.

-If the binding will register three variables `productId`, `productName` and `productKey` then the service will
-be called with the following JSON:
+If the binding registers three variables `productId`, `productName` and `productKey` then the worker will
+be called with the following JSON body:

```json
{
@@ -90,10 +92,35 @@ be called with the following JSON:
 }
```

+Note: if the `fetchVariables` metadata field is not passed, all process variables will be passed to the worker.
+
+#### Headers
+
+The Zeebe process engine has the ability to pass custom task headers to a job worker. These headers can be defined for every
+[service task](https://stage.docs.zeebe.io/bpmn-workflows/service-tasks/service-tasks.html). Task headers will be passed
+by the binding as metadata (HTTP headers) to the job worker.
+
+The binding will also pass the following job-related variables as metadata. The values will be passed as strings. The table also contains the original data type so that it can be converted back to the equivalent data type in the programming language used for the worker.

| Metadata | Data type | Description |
|------------------------------------|-----------|-------------------------------------------------------------------------------------------------|
| X-Zeebe-Job-Key | int64 | The key, a unique identifier for the job |
| X-Zeebe-Job-Type | string | The type of the job (should match what was requested) |
| X-Zeebe-Process-Instance-Key | int64 | The job's process instance key |
| X-Zeebe-Bpmn-Process-Id | string | The bpmn process ID of the job process definition |
| X-Zeebe-Process-Definition-Version | int32 | The version of the job process definition |
| X-Zeebe-Process-Definition-Key | int64 | The key of the job process definition |
| X-Zeebe-Element-Id | string | The associated task element ID |
| X-Zeebe-Element-Instance-Key | int64 | The unique key identifying the associated task, unique within the scope of the process instance |
| X-Zeebe-Worker | string | The name of the worker which activated this job |
| X-Zeebe-Retries | int32 | The number of retries left for this job (should always be positive) |
| X-Zeebe-Deadline | int64 | When the job can be activated again, sent as a UNIX epoch timestamp |

## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
-- [Bindings API reference]({{< ref bindings_api.md >}})
+- [Bindings API reference]({{< ref bindings_api.md >}})
\ No newline at end of file

From 48e28c3f5146c1679d128bc4699f372ba5a59ed5 Mon Sep 17 00:00:00 2001
From: Mark Fussell
Date: Fri, 28 May 2021 10:12:24 -0700
Subject: [PATCH 04/22] Adding K8s versions table (#1521)

* Adding table of kubernetes versions
* Updating intro

---
 .../hosting/kubernetes/kubernetes-overview.md | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-overview.md b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-overview.md
index 781050e2625..d6e9cde72c9 100644
--- a/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-overview.md
+++ b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-overview.md
@@ -8,7 +8,7 @@ description: "Overview of how to get Dapr running on your Kubernetes cluster"

## Dapr on Kubernetes

-Dapr can be configured to run on any Kubernetes cluster. To achieve this, Dapr begins by deploying the `dapr-sidecar-injector`, `dapr-operator`, `dapr-placement`, and `dapr-sentry` Kubernetes services. These provide first-class integration to make running applications with Dapr easy.
+Dapr can be configured to run on any supported version of Kubernetes. To achieve this, Dapr begins by deploying the `dapr-sidecar-injector`, `dapr-operator`, `dapr-placement`, and `dapr-sentry` Kubernetes services. These provide first-class integration to make running applications with Dapr easy.
- **dapr-operator:** Manages [component]({{< ref components >}}) updates and Kubernetes services endpoints for Dapr (state stores, pub/subs, etc.)
- **dapr-sidecar-injector:** Injects Dapr into [annotated](#adding-dapr-to-a-kubernetes-deployment) deployment pods and adds the environment variables `DAPR_HTTP_PORT` and `DAPR_GRPC_PORT` to enable user-defined applications to easily communicate with Dapr without hard-coding Dapr port values.
- **dapr-placement:** Used for [actors]({{< ref actors >}}) only.
Creates mapping tables that map actor instances to pods @@ -36,6 +36,14 @@ Deploying and running a Dapr enabled application into your Kubernetes cluster is You can see some examples [here](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes) in the Kubernetes getting started quickstart. +## Supported versions +Dapr is tested and supported on the following versions of Kubernetes. + +| Supported Kubernetes versions | +|-----------------------| +| 1.17.x and above | + + ## Related links - [Deploy Dapr to a Kubernetes cluster]({{< ref kubernetes-deploy >}}) From 6180edfc2f432fd303db5f0c72951d68ed784433 Mon Sep 17 00:00:00 2001 From: Zonciu Liang Date: Thu, 3 Jun 2021 08:24:29 +0800 Subject: [PATCH 05/22] Fix incorrect postgresql connection string example (#1524) Co-authored-by: Aaron Crawfis --- .../supported-state-stores/setup-postgresql.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql.md index 8aefce32108..3d4a3c5c87d 100644 --- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql.md +++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql.md @@ -34,7 +34,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr | Field | Required | Details | Example | |--------------------|:--------:|---------|---------| -| connectionString | Y | The connection string for PostgreSQL | `"User ID=root;Password=myPassword;Host=localhost;Port=5432"` +| connectionString | Y | The connection string for PostgreSQL | `"host=localhost user=postgres password=example port=5432 connect_timeout=10 database=dapr_test"` | actorStateStore | N | Consider this state store for actors. Defaults to `"false"` | `"true"`, `"false"` From 8f8952372cce300b5d4f1b07ab7e002bb3e38f39 Mon Sep 17 00:00:00 2001 From: Simon Leet <31784195+CodeMonkeyLeet@users.noreply.github.com> Date: Wed, 2 Jun 2021 17:28:05 -0700 Subject: [PATCH 06/22] Update docs on using Codespaces with Dapr repos (#1522) * Update docs on using Codespaces with Dapr repos * Move codespaces.md under the Contributing topic * Update daprdocs/content/en/contributing/codespaces.md Co-authored-by: Aaron Crawfis --- .../content/en/contributing/codespaces.md | 53 +++++++++++++++++++ .../en/contributing/contributing-overview.md | 1 + .../ides/codespaces.md | 32 ----------- 3 files changed, 54 insertions(+), 32 deletions(-) create mode 100644 daprdocs/content/en/contributing/codespaces.md delete mode 100644 daprdocs/content/en/developing-applications/ides/codespaces.md diff --git a/daprdocs/content/en/contributing/codespaces.md b/daprdocs/content/en/contributing/codespaces.md new file mode 100644 index 00000000000..62585e50ff8 --- /dev/null +++ b/daprdocs/content/en/contributing/codespaces.md @@ -0,0 +1,53 @@ +--- +type: docs +title: "Contributing with GitHub Codespaces" +linkTitle: "GitHub Codespaces" +weight: 2500 +description: "How to work with Dapr repos in GitHub Codespaces" +aliases: + - "/developing-applications/ides/codespaces/" +--- + +[GitHub Codespaces](https://github.com/features/codespaces) are the easiest way to get up and running for contributing to a Dapr repo. In as little as a single click, you can have an environment with all of the prerequisites ready to go in your browser. 
+ +{{% alert title="Private Beta" color="warning" %}} +GitHub Codespaces is currently in a private beta. Sign up [here](https://github.com/features/codespaces/signup). +{{% /alert %}} + +## Features + +- **Click and Run**: Get a dedicated and sandboxed environment with all of the required frameworks and packages ready to go. +- **Usage-based Billing**: Only pay for the time you spend developing in the Codespace. Environments are spun down automatically when not in use. +- **Portable**: Run in your browser or in Visual Studio Code + +## Open a Dapr repo in a Codespace + +To open a Dapr repository in a Codespace simply select "Code" from the repo homepage and "Open with Codespaces": + +Screenshot of creating a Dapr Codespace + +If you haven't already forked the repo, creating the Codespace will also create a fork for you and use it inside the Codespace. + +### Supported repos + +- [Dapr](https://github.com/dapr/dapr) +- [Components-contrib](https://github.com/dapr/components-contrib) +- [Python SDK](https://github.com/dapr/python-sdk) + +### Developing Dapr Components in a Codespace + +Developing a new Dapr component requires working with both the [components-contrib](https://github.com/dapr/components-contrib) and [dapr](https://github.com/dapr/dapr) repos together under the `$GOPATH` tree for testing purposes. To facilitate this, the `/go/src/github.com/dapr` folder in the components-contrib Codespace will already be set up with your fork of components-contrib, and a clone of the dapr repo as described in the [component development documentation](https://github.com/dapr/components-contrib/blob/master/docs/developing-component.md). A few things to note in this configuration: + +- The components-contrib and dapr repos only define Codespaces for the Linux amd64 environment at the moment. +- The `/go/src/github.com/dapr/components-contrib` folder is a soft link to Codespace's default `/workspace/components-contrib` folder, so changes in one will be automatically reflected in the other. +- Since the `/go/src/github.com/dapr/dapr` folder uses a clone of the official dapr repo rather than a fork, you will not be able to make a pull request from changes made in that folder directly. You can use the dapr Codespace separately for that PR, or if you would like to use the same Codespace for the dapr changes as well, you should remap the dapr repo origin to your fork in the components-contrib Codespace. For example, to use a dapr fork under `my-git-alias`: + +```bash +cd /go/src/github.com/dapr/dapr +git remote set-url origin https://github.com/my-git-alias/dapr +git fetch +git reset --hard +``` + +## Related links +- [GitHub documentation](https://docs.github.com/en/github/developing-online-with-codespaces/about-codespaces) diff --git a/daprdocs/content/en/contributing/contributing-overview.md b/daprdocs/content/en/contributing/contributing-overview.md index 0a042eb8397..6a8a404b300 100644 --- a/daprdocs/content/en/contributing/contributing-overview.md +++ b/daprdocs/content/en/contributing/contributing-overview.md @@ -49,6 +49,7 @@ All contributions come through pull requests. To submit a proposed change, follo 1. Make sure there's an issue (bug or proposal) raised, which sets the expectations for the contribution you are about to make. 1. Fork the relevant repo and create a new branch + - Some Dapr repos support [Codespaces]({{< ref codespaces.md >}}) to provide an instant environment for you to build and test your changes. 1. Create your change - Code changes require tests 1. 
Update relevant documentation for the change diff --git a/daprdocs/content/en/developing-applications/ides/codespaces.md b/daprdocs/content/en/developing-applications/ides/codespaces.md deleted file mode 100644 index af4de0e7d58..00000000000 --- a/daprdocs/content/en/developing-applications/ides/codespaces.md +++ /dev/null @@ -1,32 +0,0 @@ ---- -type: docs -title: "Developing with GitHub Codespaces" -linkTitle: "GitHub Codespaces" -weight: 3000 -description: "How to get up and running with Dapr in a GitHub Codespace" ---- - -[GitHub Codespaces](https://github.com/features/codespaces) are the easiest way to get up and running in a Dapr environment. In as little as a single click you have the environment, packages, code, samples, and documentation all ready to go in your browser. - -{{% alert title="Private Beta" color="warning" %}} -GitHub Codespaces is currently in a private beta. Sign up [here](https://github.com/features/codespaces/signup). -{{% /alert %}} - -## Features - -- **Click and Run**: Get a dedicated and sandboxed environment with all of the required frameworks and packages ready to go. -- **Usage-based Billing**: Only pay for the time you spend developing in the Codespace. Environments are spun down automatically when not in use. -- **Portable**: Run in your browser or in Visual Studio Code - -## Open a Dapr repo in a Codespace - -To open a Dapr repository in a Codespace simply select "Code" from the repo homepage and "Open with Codespaces": - -Screenshot of creating a Dapr Codespace - -### Supported repos - -- [Python SDK](https://github.com/dapr/python-sdk) - -## Related links -- [GitHub documentation](https://docs.github.com/en/github/developing-online-with-codespaces/about-codespaces) \ No newline at end of file From e6b27718c18b92e8b4686f57630926b9ccb7871d Mon Sep 17 00:00:00 2001 From: Maarten Mulders Date: Thu, 3 Jun 2021 02:32:29 +0200 Subject: [PATCH 07/22] Fix two typos (#1526) Co-authored-by: Aaron Crawfis --- .../components-reference/supported-pubsub/setup-rabbitmq.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-rabbitmq.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-rabbitmq.md index 6342c3f0123..b16a9392e71 100644 --- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-rabbitmq.md +++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-rabbitmq.md @@ -48,11 +48,11 @@ The above example uses secrets as plain strings. It is recommended to use a secr |--------------------|:--------:|---------|---------| | host | Y | Connection-string for the rabbitmq host | `amqp://user:pass@localhost:5672` | durable | N | Whether or not to use [durable](https://www.rabbitmq.com/queues.html#durability) queues. Defaults to `"false"` | `"true"`, `"false"` -| deletedWhenUnused | N | Whether or not the queue sohuld be configured to [auto-delete](https://www.rabbitmq.com/queues.html) Defaults to `"true"` | `"true"`, `"false"` +| deletedWhenUnused | N | Whether or not the queue should be configured to [auto-delete](https://www.rabbitmq.com/queues.html) Defaults to `"true"` | `"true"`, `"false"` | autoAck | N | Whether or not the queue consumer should [auto-ack](https://www.rabbitmq.com/confirms.html) messages. Defaults to `"false"` | `"true"`, `"false"` | deliveryMode | N | Persistence mode when publishing messages. Defaults to `"0"`. 
RabbitMQ treats `"2"` as persistent, all other numbers as non-persistent | `"0"`, `"2"` | requeueInFailure | N | Whether or not to requeue when sending a [negative acknolwedgement](https://www.rabbitmq.com/nack.html) in case of a failure. Defaults to `"false"` | `"true"`, `"false"` -| prefetchCount | N | Number of messages to [prefecth](https://www.rabbitmq.com/consumer-prefetch.html). Consider changing this to a non-zero value for production environments. Defaults to `"0"`, which means that all available messages will be pre-fetched. | `"2"` +| prefetchCount | N | Number of messages to [prefetch](https://www.rabbitmq.com/consumer-prefetch.html). Consider changing this to a non-zero value for production environments. Defaults to `"0"`, which means that all available messages will be pre-fetched. | `"2"` | reconnectWait | N | How long to wait (in seconds) before reconnecting if a connection failure occurs | `"0"` | concurrencyMode | N | `parallel` is the default, and allows processing multiple messages in parallel (limited by the `app-max-concurrency` annotation, if configured). Set to `single` to disable parallel processing. In most situations there's no reason to change this. | `parallel`, `single` From d2ccd781b329951650c9000d847ee118a117fadb Mon Sep 17 00:00:00 2001 From: Newbe36524 Date: Thu, 3 Jun 2021 08:36:17 +0800 Subject: [PATCH 08/22] Update chinese content (#1527) Co-authored-by: Aaron Crawfis --- translations/docs-zh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/translations/docs-zh b/translations/docs-zh index a567aaeaafa..794330f6cab 160000 --- a/translations/docs-zh +++ b/translations/docs-zh @@ -1 +1 @@ -Subproject commit a567aaeaafa09450e37960dc218e6875cffd7815 +Subproject commit 794330f6cab2db8e09053bb7bf19233eb3237538 From 993cf5e2a2a91c6800810cb34f7678b5c2664648 Mon Sep 17 00:00:00 2001 From: Steven Jenkins De Haro <20492442+StevenJDH@users.noreply.github.com> Date: Thu, 3 Jun 2021 02:41:17 +0200 Subject: [PATCH 09/22] Updated to fix deprecated helm chart location (#1528) The `https://kubernetes-charts.storage.googleapis.com/` location is no longer used, so this change updates this, the command to install, and the missing update step that will cause the install to fail if an update was never done after adding the location. 
Co-authored-by: Aaron Crawfis --- .../components-reference/supported-bindings/eventgrid.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/eventgrid.md b/daprdocs/content/en/reference/components-reference/supported-bindings/eventgrid.md index 3c46350291e..87413973525 100644 --- a/daprdocs/content/en/reference/components-reference/supported-bindings/eventgrid.md +++ b/daprdocs/content/en/reference/components-reference/supported-bindings/eventgrid.md @@ -130,8 +130,9 @@ controller: Then install NGINX ingress controller to your Kubernetes cluster with Helm 3 using the annotations ```bash -helm repo add stable https://kubernetes-charts.storage.googleapis.com/ -helm install nginx stable/nginx-ingress -f ./dapr-annotations.yaml -n default +helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx +helm repo update +helm install nginx-ingress ingress-nginx/ingress-nginx -f ./dapr-annotations.yaml -n default # Get the public IP for the ingress controller kubectl get svc -l component=controller -o jsonpath='Public IP is: {.items[0].status.loadBalancer.ingress[0].ip}{"\n"}' ``` From a573434a2960685160362e778cfd57be8437827c Mon Sep 17 00:00:00 2001 From: Abdulaziz Elsheikh Date: Thu, 3 Jun 2021 01:43:27 +0100 Subject: [PATCH 10/22] nr_consul_typo fixed malformed yaml (#1532) Co-authored-by: Aaron Crawfis --- .../supported-name-resolution/setup-nr-consul.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/daprdocs/content/en/reference/components-reference/supported-name-resolution/setup-nr-consul.md b/daprdocs/content/en/reference/components-reference/supported-name-resolution/setup-nr-consul.md index fd2b3876b9a..ab5b44a84f4 100644 --- a/daprdocs/content/en/reference/components-reference/supported-name-resolution/setup-nr-consul.md +++ b/daprdocs/content/en/reference/components-reference/supported-name-resolution/setup-nr-consul.md @@ -84,11 +84,11 @@ spec: checks: - name: "Dapr Health Status" checkID: "daprHealth:${APP_ID}" - interval: "15s", + interval: "15s" http: "http://${HOST_ADDRESS}:${DAPR_HTTP_PORT}/v1.0/healthz" - name: "Service Health Status" checkID: "serviceHealth:${APP_ID}" - interval: "15s", + interval: "15s" http: "http://${HOST_ADDRESS}:${APP_PORT}/health" tags: - "dapr" @@ -129,7 +129,7 @@ spec: check: name: "Dapr Health Status" checkID: "daprHealth:${APP_ID}" - interval: "15s", + interval: "15s" http: "http://${HOST_ADDRESS}:${DAPR_HTTP_PORT}/v1.0/healthz" meta: DAPR_METRICS_PORT: "${DAPR_METRICS_PORT}" From 6cb10b7de6b6cd57f44ba88b9405277a8b4ec24e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Antonio=20Fiuman=C3=B2?= Date: Mon, 7 Jun 2021 18:15:59 +0100 Subject: [PATCH 11/22] Fix typo in azure-keyvault-managed-identity.md (#1541) --- .../supported-secret-stores/azure-keyvault-managed-identity.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/daprdocs/content/en/reference/components-reference/supported-secret-stores/azure-keyvault-managed-identity.md b/daprdocs/content/en/reference/components-reference/supported-secret-stores/azure-keyvault-managed-identity.md index f73f4ada5bb..55d4abd88ff 100644 --- a/daprdocs/content/en/reference/components-reference/supported-secret-stores/azure-keyvault-managed-identity.md +++ b/daprdocs/content/en/reference/components-reference/supported-secret-stores/azure-keyvault-managed-identity.md @@ -13,7 +13,7 @@ To setup Azure Key Vault secret store with Managed Identies create a component o In Kubernetes mode, 
you store the certificate for the service principal into the Kubernetes Secret Store and then enable Azure Key Vault secret store with this certificate in Kubernetes secretstore.

-The component yaml uses the name of your key vault and the Cliend ID of the managed identity to setup the secret store.
+The component yaml uses the name of your key vault and the Client ID of the managed identity to set up the secret store.

```yaml
apiVersion: dapr.io/v1alpha1

From a941868ca1b4590c690f84bc9a949ba097fce2fa Mon Sep 17 00:00:00 2001
From: li1234yun
Date: Tue, 8 Jun 2021 11:43:10 +0800
Subject: [PATCH 12/22] Fix custom middleware sample code interface implementation error (#1539)

Fix custom middleware sample code interface implementation error, interface function declare error.

Co-authored-by: Aaron Crawfis
---
 .../middleware/middleware-overview.md | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/daprdocs/content/en/developing-applications/middleware/middleware-overview.md b/daprdocs/content/en/developing-applications/middleware/middleware-overview.md
index 8bc5df72b2d..751bac28f76 100644
--- a/daprdocs/content/en/developing-applications/middleware/middleware-overview.md
+++ b/daprdocs/content/en/developing-applications/middleware/middleware-overview.md
@@ -49,7 +49,7 @@ spec:

## Writing a custom middleware

-Dapr uses [FastHTTP](https://github.com/valyala/fasthttp) to implement its HTTP server. Hence, your HTTP middleware needs to be written as a FastHTTP handler. Your middleware needs to implement a middleware interface, which defines a **GetHandler** method that returns a **fasthttp.RequestHandler**:
+Dapr uses [FastHTTP](https://github.com/valyala/fasthttp) to implement its HTTP server. Hence, your HTTP middleware needs to be written as a FastHTTP handler. Your middleware needs to implement a middleware interface, which defines a **GetHandler** method that returns **fasthttp.RequestHandler** and **error**:

```go
type Middleware interface {
@@ -60,14 +60,16 @@ type Middleware interface {

Your handler implementation can include any inbound logic, outbound logic, or both:

```go
+
-func GetHandler(metadata Metadata) fasthttp.RequestHandler {
+func (m *customMiddleware) GetHandler(metadata Metadata) (func(fasthttp.RequestHandler) fasthttp.RequestHandler, error) {
+	var err error
 	return func(h fasthttp.RequestHandler) fasthttp.RequestHandler {
 		return func(ctx *fasthttp.RequestCtx) {
 			// inbound logic
 			h(ctx) // call the downstream handler
 			// outbound logic
 		}
-	}
+	}, err
 }
```

From c73245eea6c171457d79d4dc21cb5897a7f49dd9 Mon Sep 17 00:00:00 2001
From: greenie-msft <56556602+greenie-msft@users.noreply.github.com>
Date: Wed, 9 Jun 2021 14:38:10 -0700
Subject: [PATCH 13/22] Fix the file name of secrets json (#1546)

---
 .../building-blocks/secrets/howto-secrets.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/daprdocs/content/en/developing-applications/building-blocks/secrets/howto-secrets.md b/daprdocs/content/en/developing-applications/building-blocks/secrets/howto-secrets.md
index 77d421e2065..de8e1d22193 100644
--- a/daprdocs/content/en/developing-applications/building-blocks/secrets/howto-secrets.md
+++ b/daprdocs/content/en/developing-applications/building-blocks/secrets/howto-secrets.md
@@ -14,7 +14,7 @@ Before retrieving secrets in your application's code, you must have a secret sto

>Note: The component used in this example is not secured and is not recommended for production deployments. You can find other alternatives [here]({{}}).
-Create a file named `secrets.json` with the following contents: +Create a file named `mysecrets.json` with the following contents: ```json { From ae5b22256cb3fc58932926a345603af5c2432033 Mon Sep 17 00:00:00 2001 From: voipengineer Date: Mon, 14 Jun 2021 11:36:08 -0600 Subject: [PATCH 14/22] Tech writing touch-ups (#1555) --- .../content/en/concepts/observability-concept.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/daprdocs/content/en/concepts/observability-concept.md b/daprdocs/content/en/concepts/observability-concept.md index 33c5761e72b..afb37f138d8 100644 --- a/daprdocs/content/en/concepts/observability-concept.md +++ b/daprdocs/content/en/concepts/observability-concept.md @@ -7,14 +7,14 @@ description: > Monitor applications through tracing, metrics, logs and health --- -When building an applications, understanding how the system is behaving is an important part of operating it - this includes having the ability to observe the internal calls of an application, gauging its performance and becoming aware of problems as soon as they occur. This is challenging for any system but even more so for a distributed system comprised of multiple microservices where a flow, made of several calls, may start in one microservices but continue in another. Observability is critical in production environments but also useful during development to understand bottlenecks, improve performance and perform basic debugging across the span of microservices. +When building an application, understanding how the system is behaving is an important part of operating it - this includes having the ability to observe the internal calls of an application, gauging its performance and becoming aware of problems as soon as they occur. This is challenging for any system, but even more so for a distributed system comprised of multiple microservices where a flow, made of several calls, may start in one microservices but continue in another. Observability is critical in production environments, but also useful during development to understand bottlenecks, improve performance and perform basic debugging across the span of microservices. -While some data points about an application can be gathered from the underlying infrastructure (e.g. memory consumption, CPU usage), other meaningful information must be collected from an "application aware" layer - one that can show how an important series of calls is executed across microservices. This usually means a developer must add some code to instrument an application for this purpose. Often, instrumentation code is simply meant to send collected data such as traces and metrics to an external monitoring tool or service that can help store, visualize and analyze all this information. +While some data points about an application can be gathered from the underlying infrastructure (e.g. memory consumption, CPU usage), other meaningful information must be collected from an "application-aware" layer - one that can show how an important series of calls is executed across microservices. This usually means a developer must add some code to instrument an application for this purpose. Often, instrumentation code is simply meant to send collected data such as traces and metrics to an external monitoring tool or service that can help store, visualize and analyze all this information. 
-Having to maintain this code, which is not part of the core logic of the application, is another burden on the developer, sometimes requiring understanding monitoring tools APIs, using additional SDKs etc. This instrumentation may also add to the portability challenges of an application which may require different instrumentation depending on where the application is deployed. For example, different cloud providers offer different monitoring solutions and an on-prem deployment might require an on-prem solution. +Having to maintain this code, which is not part of the core logic of the application, is another burden on the developer, sometimes requiring understanding the monitoring tools' APIs, using additional SDKs etc. This instrumentation may also add to the portability challenges of an application, which may require different instrumentation depending on where the application is deployed. For example, different cloud providers offer different monitoring solutions and an on-prem deployment might require an on-prem solution. ## Observability for your application with Dapr -When building an application which is leveraging Dapr building blocks to perform service-to-service calls and pub/sub messaging, Dapr offers an advantage in respect to [distributed tracing]({{}}) because this inter-service communication flows through the Dapr sidecar, the sidecar is in a unique position to offload the burden of application level instrumentation. +When building an application which leverages Dapr building blocks to perform service-to-service calls and pub/sub messaging, Dapr offers an advantage with respect to [distributed tracing]({{}}). Because this inter-service communication flows through the Dapr sidecar, the sidecar is in a unique position to offload the burden of application-level instrumentation. ### Distributed tracing Dapr can be [configured to emit tracing data]({{}}), and because Dapr does so using widely adopted protocols such as the [Zipkin](https://zipkin.io) protocol, it can be easily integrated with multiple [monitoring backends]({{}}). @@ -27,10 +27,10 @@ Dapr can also be configured to work with the [OpenTelemetry Collector]({{ ### Tracing context -Dapr uses [W3C tracing]({{}}) specification for tracing context and can generate and propagate the context header itself or propagate user provided context headers. +Dapr uses [W3C tracing]({{}}) specification for tracing context and can generate and propagate the context header itself or propagate user-provided context headers. ## Observability for the Dapr sidecar and system services -As for other parts of your system, you will want to be able to observe Dapr itself and collect metrics and logs emitted by the Dapr sidecar that runs along each microservice as well as the Dapr related services in your environment such as the control plane services that are deployed for a Dapr enabled Kubernetes cluster. +As for other parts of your system, you will want to be able to observe Dapr itself and collect metrics and logs emitted by the Dapr sidecar that runs along each microservice, as well as the Dapr-related services in your environment such as the control plane services that are deployed for a Dapr-enabled Kubernetes cluster. Dapr sidecar metrics, logs and health checks @@ -38,7 +38,7 @@ As for other parts of your system, you will want to be able to observe Dapr itse Dapr generates [logs]({{}}) to provide visibility into sidecar operation and to help users identify issues and perform debugging. 
Log events contain warning, error, info, and debug messages produced by Dapr system services. Dapr can also be configured to send logs to collectors such as [Fluentd]({{< ref fluentd.md >}}) and [Azure Monitor]({{< ref azure-monitor.md >}}) so they can be easily searched, analyzed and provide insights. ### Metrics -Metrics are the series of measured values and counts that are collected and stored over time. [Dapr metrics]({{}}) provide monitoring capabilities to understand the behavior of the Dapr sidecar and system services. For example, the metrics between a Dapr sidecar and the user application show call latency, traffic failures, error rates of requests etc. Dapr [system services metrics](https://github.com/dapr/dapr/blob/master/docs/development/dapr-metrics.md) show sidecar injection failures, health of the system services including CPU usage, number of actor placements made etc. +Metrics are the series of measured values and counts that are collected and stored over time. [Dapr metrics]({{}}) provide monitoring capabilities to understand the behavior of the Dapr sidecar and system services. For example, the metrics between a Dapr sidecar and the user application show call latency, traffic failures, error rates of requests, etc. Dapr [system services metrics](https://github.com/dapr/dapr/blob/master/docs/development/dapr-metrics.md) show sidecar injection failures and the health of system services, including CPU usage, number of actor placements made, etc. ### Health checks The Dapr sidecar exposes an HTTP endpoint for [health checks]({{}}). With this API, user code or hosting environments can probe the Dapr sidecar to determine its status and identify issues with sidecar readiness. From 35137816b4670500ea5df13c83e1bd035974dc8b Mon Sep 17 00:00:00 2001 From: voipengineer Date: Mon, 14 Jun 2021 11:42:07 -0600 Subject: [PATCH 15/22] Tech writing touch-ups (#1556) Co-authored-by: Aaron Crawfis --- .../content/en/concepts/security-concept.md | 40 +++++++++---------- 1 file changed, 19 insertions(+), 21 deletions(-) diff --git a/daprdocs/content/en/concepts/security-concept.md b/daprdocs/content/en/concepts/security-concept.md index c72c9d47537..480b3ba3c56 100644 --- a/daprdocs/content/en/concepts/security-concept.md +++ b/daprdocs/content/en/concepts/security-concept.md @@ -20,7 +20,7 @@ Dapr enables mTLS and all the features described in this document in your applic ## Sidecar-to-app communication -The Dapr sidecar runs close to the application through **localhost**, and is recommended to run under the same network boundary as the app. While many cloud-native systems today consider the pod level (on Kubernetes, for example) as a trusted security boundary, Dapr provides user with API level authentication using tokens. This feature guarantees that even on localhost, only an authenticated caller may call into Dapr. +The Dapr sidecar runs close to the application through **localhost**, and is recommended to run under the same network boundary as the app. While many cloud-native systems today consider the pod level (on Kubernetes, for example) as a trusted security boundary, Dapr provides the user with API level authentication using tokens. This feature guarantees that even on localhost, only an authenticated caller may call into Dapr. ## Sidecar-to-sidecar communication @@ -29,20 +29,20 @@ To achieve this, Dapr leverages a system service named `Sentry` which acts as a Dapr also manages workload certificate rotation, and does so with zero downtime to the application. 
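For operators who want to inspect this machinery on a running Kubernetes cluster, the Dapr CLI provides an `mtls` command group. The following is a minimal sketch, assuming the Dapr CLI is installed and `kubectl` is pointed at a cluster with Dapr deployed:

```bash
# Check whether mTLS is currently enabled on the cluster
dapr mtls -k

# Show when the root certificate expires
dapr mtls expiry

# Export the root and issuer certificates to a local directory for inspection
dapr mtls export -o ./certs
```

These commands are read-only; enabling, disabling, or rotating certificates follows the configuration steps referenced below.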
-Sentry, the CA service, automatically creates and persists self signed root certificates valid for one year, unless existing root certs have been provided by the user. +Sentry, the CA service, automatically creates and persists self-signed root certificates valid for one year, unless existing root certs have been provided by the user. -When root certs are replaced (secret in Kubernetes mode and filesystem for self hosted mode), the Sentry picks them up and re-builds the trust chain without needing to restart, with zero downtime to Sentry. +When root certs are replaced (secret in Kubernetes mode and filesystem for self-hosted mode), the Sentry picks them up and rebuilds the trust chain without needing to restart, with zero downtime to Sentry. When a new Dapr sidecar initializes, it first checks if mTLS is enabled. If it is, an ECDSA private key and certificate signing request are generated and sent to Sentry via a gRPC interface. The communication between the Dapr sidecar and Sentry is authenticated using the trust chain cert, which is injected into each Dapr instance by the Dapr Sidecar Injector system service. -In a Kubernetes cluster, the secret that holds the root certificates is scoped to the namespace in which the Dapr components are deployed to and is only accessible by the Dapr system pods. +In a Kubernetes cluster, the secret that holds the root certificates is scoped to the namespace in which the Dapr components are deployed and is only accessible by the Dapr system pods. Dapr also supports strong identities when deployed on Kubernetes, relying on a pod's Service Account token which is sent as part of the certificate signing request (CSR) to Sentry. By default, a workload cert is valid for 24 hours and the clock skew is set to 15 minutes. Mutual TLS can be turned off/on by editing the default configuration that is deployed with Dapr via the `spec.mtls.enabled` field. -This can be done for both Kubernetes and self hosted modes. Details for how to do this can be found [here]({{< ref mtls.md >}}). +This can be done for both Kubernetes and self-hosted modes. Details for how to do this can be found [here]({{< ref mtls.md >}}). ### mTLS self hosted The diagram below shows how the Sentry system service issues certificates for applications based on the root/issuer certificate that is provided by an operator or generated by the Sentry service as stored in a file @@ -58,13 +58,13 @@ The diagram below shows how the Sentry system service issues certificates for ap In addition to automatic mTLS between Dapr sidecars, Dapr offers mandatory mTLS between the Dapr sidecar and the Dapr system services, namely the Sentry service (Certificate Authority), Placement service (actor placement) and the Kubernetes Operator. -When mTLS is enabled, Sentry writes the root and issuer certificates to a Kubernetes secret that is scoped to the namespace where the control plane is installed. In self hosted mode, Sentry writes the certificates to a configurable filesystem path. +When mTLS is enabled, Sentry writes the root and issuer certificates to a Kubernetes secret that is scoped to the namespace where the control plane is installed. In self-hosted mode, Sentry writes the certificates to a configurable file system path. -In Kubernetes, when the Dapr system services start, they automatically mount the secret containing the root and issuer certs and use those to secure the gRPC server that is used by the Dapr sidecar. 
+In Kubernetes, when Dapr system services start, they automatically mount the secret containing the root and issuer certs and use those to secure the gRPC server that is used by the Dapr sidecar. -In self hosted mode, each system service can be mounted to a filesystem path to get the credentials. +In self-hosted mode, each system service can be mounted to a filesystem path to get the credentials. -When the Dapr sidecar initializes, it authenticates with the system pods using the mounted leaf certificates and issuer private key. these are mounted as environment variables on the sidecar container. +When the Dapr sidecar initializes, it authenticates with the system pods using the mounted leaf certificates and issuer private key. These are mounted as environment variables on the sidecar container. ### mTLS to system services in Kubernetes The diagram below shows secure communication between the Dapr sidecar and the Dapr Sentry (Certificate Authority), Placement (actor placement) and the Kubernetes Operator system services @@ -77,13 +77,11 @@ Dapr components are namespaced. That means a Dapr runtime sidecar instance can o Dapr components uses Dapr's built-in secret management capability to manage secrets. See the [secret store overview]({{}}) for more details. -In addition, Dapr offers application-level scoping for components by allowing users to specify which applications can consume given components.For more information about application level scoping, see [here]({{}}). +In addition, Dapr offers application-level scoping for components by allowing users to specify which applications can consume given components. For more information about application level scoping, see [here]({{}}). ## Network security -You can adopt common network security technologies such as network security groups (NSGs), demilitarized zones (DMZs) and firewalls to provide layers of protections over your networked resources. - -For example, unless configured to talk to an external binding target, Dapr sidecars don’t open connections to the internet. And most binding implementations use outbound connections only. You can design your firewall rules to allow outbound connections only through designated ports. +You can adopt common network security technologies such as network security groups (NSGs), demilitarized zones (DMZs) and firewalls to provide layers of protection over your networked resources. For example, unless configured to talk to an external binding target, Dapr sidecars don’t open connections to the internet. And most binding implementations use outbound connections only. You can design your firewall rules to allow outbound connections only through designated ports. ## Bindings security @@ -95,7 +93,7 @@ Dapr doesn't transform the state data from applications. This means Dapr doesn't Dapr does not store any data at rest. -Dapr uses the configured authentication method to authenticate with the underlying state store. And many state store implementations use official client libraries that generally use secured communication channels with the servers. +Dapr uses the configured authentication method to authenticate with the underlying state store. Many state store implementations use official client libraries that generally use secured communication channels with the servers. 
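Tying the component namespacing and application-level scoping above together, a sketch of a component restricted to two app IDs (the Redis address and the app IDs `app1`/`app2` are placeholders; `scopes` is the field that limits which applications may load the component):

```bash
kubectl apply -f - <<EOF
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
  namespace: production
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: redis-master.production.svc.cluster.local:6379
scopes:
- app1
- app2
EOF
```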
## Management security @@ -104,7 +102,7 @@ When deploying on Kubernetes, you can use regular [Kubernetes RBAC]( https://kub When deploying on Azure Kubernetes Service (AKS), you can use [Azure Active Directory (AD) service principals]( https://docs.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals) to control access to management activities and resource management. ## Threat model -Threat modeling is a process by which potential threats, such as structural vulnerabilities or the absence of appropriate safeguards, can be identified, enumerated, and mitigations can be prioritized. The Dapr threat model is below. +Threat modeling is a process by which potential threats, such as structural vulnerabilities or the absence of appropriate safeguards, can be identified and enumerated, and mitigations can be prioritized. The Dapr threat model is below. Dapr threat model @@ -112,10 +110,10 @@ Threat modeling is a process by which potential threats, such as structural vuln ### February 2021 -In February 2021, Dapr has gone through a 2nd security audit targetting it's 1.0 release by Cure53. +In February 2021, Dapr went through a 2nd security audit targeting it's 1.0 release by Cure53. The test focused on the following: -* Dapr runtime code base evaluation since last audit +* Dapr runtime codebase evaluation since last audit * Access control lists * Secrets management * Penetration testing @@ -128,12 +126,12 @@ As of February 16th 2021, Dapr has 0 criticals, 0 highs, 0 mediums, 2 lows, 2 in ### June 2020 -In June 2020, Dapr has undergone a security audit from Cure53, a CNCF approved cybersecurity firm. +In June 2020, Dapr underwent a security audit from Cure53, a CNCF-approved cybersecurity firm. The test focused on the following: -* Dapr runtime code base evaluation -* Dapr components code base evaluation -* Dapr CLI code base evaluation +* Dapr runtime codebase evaluation +* Dapr components codebase evaluation +* Dapr CLI codebase evaluation * Privilege escalation * Traffic spoofing * Secrets management From be06bfecdc782007baebbad6884ec5b505f3c829 Mon Sep 17 00:00:00 2001 From: voipengineer Date: Mon, 14 Jun 2021 11:46:10 -0600 Subject: [PATCH 16/22] Tech writing touch-ups (#1557) Co-authored-by: Aaron Crawfis --- daprdocs/content/en/concepts/service-mesh.md | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/daprdocs/content/en/concepts/service-mesh.md b/daprdocs/content/en/concepts/service-mesh.md index bee67a399d3..a5554515b93 100644 --- a/daprdocs/content/en/concepts/service-mesh.md +++ b/daprdocs/content/en/concepts/service-mesh.md @@ -4,15 +4,15 @@ title: "Dapr and service meshes" linkTitle: "Service meshes" weight: 700 description: > - How Dapr compares to, and works with service meshes + How Dapr compares to, and works with, service meshes --- -Dapr uses a sidecar architecture, running as a separate process alongside the application and includes features such as, service invocation, network security and distributed tracing. This often raises the question - how does Dapr compare to service mesh solutions such as Linkerd, Istio and Open Service Mesh (OSM)? +Dapr uses a sidecar architecture, running as a separate process alongside the application and includes features such as service invocation, network security, and distributed tracing. This often raises the question: how does Dapr compare to service mesh solutions such as Linkerd, Istio and Open Service Mesh (OSM)? 
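As a rough sketch of that process model in self-hosted mode (the app ID, port, and start command are placeholders for your own service):

```bash
# The CLI starts a Dapr sidecar process next to the application;
# the two then talk to each other over localhost.
dapr run --app-id myapp --app-port 3000 --dapr-http-port 3500 -- node app.js
```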
## How Dapr and service meshes compare -While Dapr and service meshes do offer some overlapping capabilities, **Dapr is not a service mesh** where a service mesh, is defined as a *networking* service mesh. Unlike a service mesh which is focused on networking concerns, Dapr is focused on providing building blocks that make it easier for developers to build applications as microservices. Dapr is developer-centric versus service meshes being infrastructure-centric. +While Dapr and service meshes do offer some overlapping capabilities, **Dapr is not a service mesh**, where a service mesh is defined as a *networking* service mesh. Unlike a service mesh which is focused on networking concerns, Dapr is focused on providing building blocks that make it easier for developers to build applications as microservices. Dapr is developer-centric, versus service meshes which are infrastructure-centric. -In most cases, developers do not need to be aware that the application they are building will be deployed in an environment which includes a service mesh since a service mesh intercepts network traffic. Service meshes are mostly managed and deployed by system operators. However, Dapr building block APIs are intended to be used by developers explicitly in their code. +In most cases, developers do not need to be aware that the application they are building will be deployed in an environment which includes a service mesh, since a service mesh intercepts network traffic. Service meshes are mostly managed and deployed by system operators, whereas Dapr building block APIs are intended to be used by developers explicitly in their code. Some common capabilities that Dapr shares with service meshes include: - Secure service-to-service communication with mTLS encryption @@ -20,9 +20,9 @@ Some common capabilities that Dapr shares with service meshes include: - Service-to-service distributed tracing - Resiliency through retries - Importantly Dapr provides service discovery and invocation via names which is a developer centric concern. This means that through Dapr's service invocation API, developers call a method on a service name, whereas service meshes deal with network concepts such as IPs and DNS addresses. However, Dapr does not provide capabilities for traffic behavior such as routing or traffic splitting. Traffic routing is often addressed with ingress proxies to an application and does not have to use a service mesh. In addition, Dapr does provides other application level building blocks for state management, pub/sub messaging, actors and more. + Importantly, Dapr provides service discovery and invocation via names, which is a developer-centric concern. This means that through Dapr's service invocation API, developers call a method on a service name, whereas service meshes deal with network concepts such as IP addresses and DNS addresses. However, Dapr does not provide capabilities for traffic behavior such as routing or traffic splitting. Traffic routing is often addressed with ingress proxies to an application and does not have to use a service mesh. In addition, Dapr provides other application-level building blocks for state management, pub/sub messaging, actors, and more. -Another difference between Dapr and service meshes is with observability (tracing and metrics). Service meshes operate at the network level and trace the network calls between services. 
Dapr does this with service invocation, however Dapr also provides observability (tracing and metrics) over pub/sub calls using trace ids written into the Cloud Events envelope. This means that the metrics and tracing with Dapr is more extensive than with a service mesh for applications that use both service-to-service invocation and pub/sub to communicate. +Another difference between Dapr and service meshes is observability (tracing and metrics). Service meshes operate at the network level and trace the network calls between services. Dapr does this with service invocation. Moreover, Dapr also provides observability (tracing and metrics) over pub/sub calls using trace IDs written into the Cloud Events envelope. This means that metrics and tracing with Dapr is more extensive than with a service mesh for applications that use both service-to-service invocation and pub/sub to communicate. The illustration below captures the overlapping features and unique capabilities that Dapr and service meshes offer: @@ -35,11 +35,11 @@ Watch these recordings from the Dapr community calls showing presentations on ru - General overview and a demo of [Dapr and Linkerd](https://youtu.be/xxU68ewRmz8?t=142) - Demo of running [Dapr and Istio](https://youtu.be/ngIDOQApx8g?t=335) -## When to choose using Dapr, a service mesh or both -Should you be using Dapr, a service mesh or both? The answer depends on your requirements. If, for example, you are looking to use Dapr for one or more building blocks such as state management or pub/sub and considering using a service mesh just for network security or observability, you may find that Dapr is a good fit and a service mesh is not required. +## When to choose using Dapr, a service mesh, or both +Should you be using Dapr, a service mesh, or both? The answer depends on your requirements. If, for example, you are looking to use Dapr for one or more building blocks such as state management or pub/sub, and you are considering using a service mesh just for network security or observability, you may find that Dapr is a good fit and that a service mesh is not required. -Typically you would use a service mesh with Dapr where there is a corporate policy that traffic on the network needs to be encrypted regardless for all applications. For example, you may be using Dapr in only part of your application and other services and processes that are not using Dapr in your application also need encrypted traffic. In this scenario a service mesh is the better option and most likely you should use mTLS and distributed tracing on the service mesh and disable this on Dapr. +Typically you would use a service mesh with Dapr where there is a corporate policy that traffic on the network must be encrypted for all applications. For example, you may be using Dapr in only part of your application, and other services and processes that are not using Dapr in your application also need their traffic encrypted. In this scenario a service mesh is the better option, and most likely you should use mTLS and distributed tracing on the service mesh and disable this on Dapr. If you need traffic splitting for A/B testing scenarios you would benefit from using a service mesh, since Dapr does not provide these capabilities. -In some cases, where you require capabilities that are unique to both you will find it useful to leverage both Dapr and a service mesh - as mentioned above, there is no limitation for using them together. 
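A quick sketch of the developer-facing calls contrasted in this section (the app ID `orders`, the pub/sub component name `pubsub`, and the topic name are placeholders):

```bash
# Name-based service invocation: the caller addresses "orders" by app ID
# and the sidecar resolves it to an actual address.
curl -X POST http://localhost:3500/v1.0/invoke/orders/method/ship \
  -H "Content-Type: application/json" \
  -d '{"orderId": "42"}'

# Publishing through the sidecar: Dapr wraps the payload in a CloudEvents
# envelope, which is where the trace IDs mentioned above are carried.
curl -X POST http://localhost:3500/v1.0/publish/pubsub/orders \
  -H "Content-Type: application/json" \
  -d '{"orderId": "42"}'
```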
+In some cases, where you require capabilities that are unique to both, you will find it useful to leverage both Dapr and a service mesh; as mentioned above, there is no limitation to using them together. From fff6256541ce7bc0827af4165044a296083d0653 Mon Sep 17 00:00:00 2001 From: voipengineer Date: Mon, 14 Jun 2021 11:52:32 -0600 Subject: [PATCH 17/22] Tech writing touch-ups (#1558) Co-authored-by: Aaron Crawfis --- daprdocs/content/en/concepts/terminology.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/daprdocs/content/en/concepts/terminology.md b/daprdocs/content/en/concepts/terminology.md index 87731d1ba71..5e97b8f7a51 100644 --- a/daprdocs/content/en/concepts/terminology.md +++ b/daprdocs/content/en/concepts/terminology.md @@ -10,12 +10,12 @@ This page details all of the common terms you may come across in the Dapr docs. | Term | Definition | More information | |:-----|------------|------------------| -| App/Application | A running service/binary, usually that you as the user create and run. +| App/Application | A running service/binary, usually one that you as the user create and run. | Building block | An API that Dapr provides to users to help in the creation of microservices and applications. | [Dapr building blocks]({{< ref building-blocks-concept.md >}}) | Component | Modular types of functionality that are used either individually or with a collection of other components, by a Dapr building block. | [Dapr components]({{< ref components-concept.md >}}) -| Configuration | A YAML file declaring all of the settings for Dapr sidecars or the Dapr control plane. It is here where you can configure control plane mTLS settings, or the tracing, and middleware settings for an application instance. | [Dapr configuration]({{< ref configuration-concept.md >}}) +| Configuration | A YAML file declaring all of the settings for Dapr sidecars or the Dapr control plane. This is where you can configure control plane mTLS settings, or the tracing and middleware settings for an application instance. | [Dapr configuration]({{< ref configuration-concept.md >}}) | Dapr | Distributed Application Runtime. | [Dapr overview]({{< ref overview.md >}}) -| Dapr control plane | A collection of services that are part of a Dapr installation on a hosting platform such as a Kubernetes cluster. Allow Dapr enabled applications to run on that platform and handles Dapr capabilities such as actor placement, Dapr sidecar injection or certificate issuance/rollover. | [Self-hosted overview]({{< ref self-hosted-overview >}})
[Kubernetes overview]({{< ref kubernetes-overview >}}) -| Self-hosted | Windows/macOS/Linux machine(s) where you can run your applications with Dapr. Dapr provides capabilities to run on machines in "self-hosted" mode. | [Self-hosted mode]({{< ref self-hosted-overview.md >}}) -| Service | A running application or binary. Can be used to refer to your application, or a Dapr application. +| Dapr control plane | A collection of services that are part of a Dapr installation on a hosting platform such as a Kubernetes cluster. This allows Dapr-enabled applications to run on the platform and handles Dapr capabilities such as actor placement, Dapr sidecar injection, or certificate issuance/rollover. | [Self-hosted overview]({{< ref self-hosted-overview >}})
[Kubernetes overview]({{< ref kubernetes-overview >}}) +| Self-hosted | Windows/macOS/Linux machine(s) where you can run your applications with Dapr. Dapr provides the capability to run on machines in "self-hosted" mode. | [Self-hosted mode]({{< ref self-hosted-overview.md >}}) +| Service | A running application or binary. This can refer to your application or to a Dapr application. | Sidecar | A program that runs alongside your application as a separate process or container. | [Sidecar pattern](https://docs.microsoft.com/en-us/azure/architecture/patterns/sidecar) From 77062392e344bc0848dadf5c63ac1438b4fa4be0 Mon Sep 17 00:00:00 2001 From: voipengineer Date: Mon, 14 Jun 2021 12:05:55 -0600 Subject: [PATCH 18/22] Tech writing touch-ups (#1560) Co-authored-by: Aaron Crawfis --- daprdocs/content/en/concepts/overview.md | 32 ++++++++++++------------ 1 file changed, 16 insertions(+), 16 deletions(-) diff --git a/daprdocs/content/en/concepts/overview.md b/daprdocs/content/en/concepts/overview.md index 08a669b3721..d0785771510 100644 --- a/daprdocs/content/en/concepts/overview.md +++ b/daprdocs/content/en/concepts/overview.md @@ -17,29 +17,29 @@ Dapr is a portable, event-driven runtime that makes it easy for any developer to Today we are experiencing a wave of cloud adoption. Developers are comfortable with web + database application architectures (for example classic 3-tier designs) but not with microservice application architectures which are inherently distributed. It’s hard to become a distributed systems expert, nor should you have to. Developers want to focus on business logic, while leaning on the platforms to imbue their applications with scale, resiliency, maintainability, elasticity and the other attributes of cloud-native architectures. -This is where Dapr comes in. Dapr codifies the *best practices* for building microservice applications into open, independent, building blocks that enable you to build portable applications with the language and framework of your choice. Each building block is completely independent and you can use one, some, or all of them in your application. +This is where Dapr comes in. Dapr codifies the *best practices* for building microservice applications into open, independent building blocks that enable you to build portable applications with the language and framework of your choice. Each building block is completely independent and you can use one, some, or all of them in your application. -In addition Dapr is platform agnostic meaning you can run your applications locally, on any Kubernetes cluster, and other hosting environments that Dapr integrates with. This enables you to build microservice applications that can run on the cloud and edge. +In addition, Dapr is platform agnostic, meaning you can run your applications locally, on any Kubernetes cluster, and in other hosting environments that Dapr integrates with. This enables you to build microservice applications that can run on the cloud and edge. -Using Dapr you can easily build microservice applications using any language, any framework, and run them anywhere. +Using Dapr you can easily build microservice applications using any language and any framework, and run them anywhere. ## Microservice building blocks for cloud and edge -There are many considerations when architecting microservices applications. Dapr provides best practices for common capabilities when building microservice applications that developers can use in a standard way and deploy to any environment. 
It does this by providing distributed system building blocks.
+There are many considerations when architecting microservices applications. Dapr provides best practices for common capabilities when building microservice applications that developers can use in a standard way, and deploy to any environment. It does this by providing distributed system building blocks.

-Each of these building blocks is independent, meaning that you can use one, some or all of them in your application. Today, the following building blocks are available:
+Each of these building blocks is independent, meaning that you can use one, some, or all of them in your application. Today, the following building blocks are available:

| Building Block | Description |
|----------------|-------------|
-| [**Service-to-service invocation**]({{}}) | Resilient service-to-service invocation enables method calls, including retries, on remote services wherever they are located in the supported hosting environment.
-| [**State management**]({{}}) | With state management for storing key/value pairs, long running, highly available, stateful services can be easily written alongside stateless services in your application. The state store is pluggable and can include Azure CosmosDB, Azure SQL Server, PostgreSQL, AWS DynamoDB or Redis among others.
-| [**Publish and subscribe**]({{}}) | Publishing events and subscribing to topics between services enables event-driven architectures to simplify horizontal scalability and make them resilient to failure. Dapr provides at least once message delivery guarantee.
+| [**Service-to-service invocation**]({{}}) | Resilient service-to-service invocation enables method calls, including retries, on remote services, wherever they are located in the supported hosting environment.
+| [**State management**]({{}}) | With state management for storing key/value pairs, long-running, highly available, stateful services can be easily written alongside stateless services in your application. The state store is pluggable and can include Azure CosmosDB, Azure SQL Server, PostgreSQL, AWS DynamoDB or Redis, among others.
+| [**Publish and subscribe**]({{}}) | Publishing events and subscribing to topics between services enables event-driven architectures to simplify horizontal scalability and make them resilient to failure. Dapr provides at-least-once message delivery guarantee.
| [**Resource bindings**]({{}}) | Resource bindings with triggers builds further on event-driven architectures for scale and resiliency by receiving and sending events to and from any external source such as databases, queues, file systems, etc.
-| [**Actors**]({{}}) | A pattern for stateful and stateless objects that make concurrency simple with method and state encapsulation. Dapr provides many capabilities in its actor runtime including concurrency, state, life-cycle management for actor activation/deactivation and timers and reminders to wake-up actors.
-| [**Observability**]({{}}) | Dapr emit metrics, logs, and traces to debug and monitor both Dapr and user applications. Dapr supports distributed tracing to easily diagnose and serve inter-service calls in production using the W3C Trace Context standard and Open Telemetry to send to different monitoring tools.
-| [**Secrets**]({{}}) | Dapr provides secrets management and integrates with public cloud and local secret stores to retrieve the secrets for use in application code.
+| [**Actors**]({{}}) | A pattern for stateful and stateless objects that makes concurrency simple, with method and state encapsulation. Dapr provides many capabilities in its actor runtime, including concurrency, state, and life-cycle management for actor activation/deactivation, and timers and reminders to wake up actors. +| [**Observability**]({{}}) | Dapr emits metrics, logs, and traces to debug and monitor both Dapr and user applications. Dapr supports distributed tracing to easily diagnose and serve inter-service calls in production using the W3C Trace Context standard and Open Telemetry to send to different monitoring tools. +| [**Secrets**]({{}}) | Dapr provides secrets management, and integrates with public-cloud and local-secret stores to retrieve the secrets for use in application code. ## Sidecar architecture @@ -55,7 +55,7 @@ Dapr can be hosted in multiple environments, including self-hosted on a Windows/ In [self-hosted mode]({{< ref self-hosted-overview.md >}}) Dapr runs as a separate sidecar process which your service code can call via HTTP or gRPC. Each running service has a Dapr runtime process (or sidecar) which is configured to use state stores, pub/sub, binding components and the other building blocks. -You can use the [Dapr CLI](https://github.com/dapr/cli#launch-dapr-and-your-app) to run a Dapr enabled application on your local machine. Try this out with the [getting started samples]({{< ref getting-started >}}). +You can use the [Dapr CLI](https://github.com/dapr/cli#launch-dapr-and-your-app) to run a Dapr-enabled application on your local machine. Try this out with the [getting started samples]({{< ref getting-started >}}). Architecture diagram of Dapr in self-hosted mode @@ -63,11 +63,11 @@ You can use the [Dapr CLI](https://github.com/dapr/cli#launch-dapr-and-your-app) In container hosting environments such as Kubernetes, Dapr runs as a sidecar container with the application container in the same pod. -The `dapr-sidecar-injector` and `dapr-operator` services provide first class integration to launch Dapr as a sidecar container in the same pod as the service container and provide notifications of Dapr component updates provisioned into the cluster. +The `dapr-sidecar-injector` and `dapr-operator` services provide first-class integration to launch Dapr as a sidecar container in the same pod as the service container and provide notifications of Dapr component updates provisioned in the cluster. -The `dapr-sentry` service is a certificate authority that enables mutual TLS between Dapr sidecar instances for secure data encryption. For more information on the `Sentry` service read the [security overview]({{< ref "security-concept.md#dapr-to-dapr-communication" >}}) +The `dapr-sentry` service is a certificate authority that enables mutual TLS between Dapr sidecar instances for secure data encryption. For more information on the `Sentry` service, read the [security overview]({{< ref "security-concept.md#dapr-to-dapr-communication" >}}) -Deploying and running a Dapr enabled application into your Kubernetes cluster is as simple as adding a few annotations to the deployment schemes. Visit the [Dapr on Kubernetes docs]({{< ref kubernetes >}}) +Deploying and running a Dapr-enabled application into your Kubernetes cluster is as simple as adding a few annotations to the deployment schemes. 
Visit the [Dapr on Kubernetes docs]({{< ref kubernetes >}}) Architecture diagram of Dapr in Kubernetes mode @@ -87,7 +87,7 @@ To make using Dapr more natural for different languages, it also includes [langu - .NET - PHP -These SDKs expose the functionality of the Dapr building blocks through a typed language API, rather than calling the http/gRPC API. This enables you to write a combination of stateless and stateful functions and actors all in the language of their choice. And because these SDKs share the Dapr runtime, you get cross-language actor and functions support. +These SDKs expose the functionality of the Dapr building blocks through a typed language API, rather than calling the http/gRPC API. This enables you to write a combination of stateless and stateful functions and actors all in the language of your choice. And because these SDKs share the Dapr runtime, you get cross-language actor and function support. ### Developer frameworks From 978fa111be135a27bed7377dedee6aa440821e05 Mon Sep 17 00:00:00 2001 From: voipengineer Date: Mon, 14 Jun 2021 12:08:49 -0600 Subject: [PATCH 19/22] Tech writing touch-ups (#1559) Co-authored-by: Aaron Crawfis --- daprdocs/content/en/concepts/faq.md | 22 ++++++++++------------ 1 file changed, 10 insertions(+), 12 deletions(-) diff --git a/daprdocs/content/en/concepts/faq.md b/daprdocs/content/en/concepts/faq.md index c02644aecdd..fbe8a3219a2 100644 --- a/daprdocs/content/en/concepts/faq.md +++ b/daprdocs/content/en/concepts/faq.md @@ -7,7 +7,7 @@ description: "Common questions asked about Dapr" --- ## How does Dapr compare to service meshes such as Istio, Linkerd or OSM? -Dapr is not a service mesh. While service meshes focus on fine grained network control, Dapr is focused on helping developers build distributed applications. Both Dapr and service meshes use the sidecar pattern and run alongside the application and they do have some overlapping features but also offer unique benefits. For more information please read the [Dapr & service meshes]({{}}) concept page. +Dapr is not a service mesh. While service meshes focus on fine-grained network control, Dapr is focused on helping developers build distributed applications. Both Dapr and service meshes use the sidecar pattern and run alongside the application. They do have some overlapping features, but also offer unique benefits. For more information please read the [Dapr & service meshes]({{}}) concept page. ## Performance Benchmarks The Dapr project is focused on performance due to the inherent discussion of Dapr being a sidecar to your application. See [here]({{< ref perf-service-invocation.md >}}) for updated performance numbers. @@ -16,26 +16,24 @@ The Dapr project is focused on performance due to the inherent discussion of Dap ### What is the relationship between Dapr, Orleans and Service Fabric Reliable Actors? -The actors in Dapr are based on the same virtual actor concept that [Orleans](https://www.microsoft.com/research/project/orleans-virtual-actors/) started, meaning that they are activated when called and deactivated after a period of time. If you are familiar with Orleans, Dapr C# actors will be familiar. Dapr C# actors are based on [Service Fabric Reliable Actors](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-actors-introduction) (which also came from Orleans) and enable you to take Reliable Actors in Service Fabric and migrate them to other hosting platforms such as Kubernetes or other on-premise environments. -Also Dapr is about more than just actors. 
It provides you with a set of best practice building blocks to build into any microservices application. See [Dapr overview]({{< ref overview.md >}}).
+The actors in Dapr are based on the same virtual actor concept that [Orleans](https://www.microsoft.com/research/project/orleans-virtual-actors/) started, meaning that they are activated when called and deactivated after a period of time. If you are familiar with Orleans, Dapr C# actors will be familiar. Dapr C# actors are based on [Service Fabric Reliable Actors](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-actors-introduction) (which also came from Orleans) and enable you to take Reliable Actors in Service Fabric and migrate them to other hosting platforms such as Kubernetes or other on-premises environments.
+Moreover, Dapr is about more than just actors. It provides you with a set of best-practice building blocks to build into any microservices application. See [Dapr overview]({{< ref overview.md >}}).

-### Differences between Dapr from an actor framework
+### Differences between Dapr and an actor framework

-Virtual actors capabilities are one of the building blocks that Dapr provides in its runtime. With Dapr because it is programming language agnostic with an http/gRPC API, the actors can be called from any language. This allows actors written in one language to invoke actors written in a different language.
+Virtual actor capabilities are one of the building blocks that Dapr provides in its runtime. With Dapr, because it is programming-language agnostic with an http/gRPC API, the actors can be called from any language. This allows actors written in one language to invoke actors written in a different language.

-Creating a new actor follows a local call like `http://localhost:3500/v1.0/actors/<actorType>/<actorId>/…`, for example `http://localhost:3500/v1.0/actors/myactor/50/method/getData` to call the `getData` method on the newly created `myactor` with id `50`.
+Creating a new actor follows a local call like `http://localhost:3500/v1.0/actors/<actorType>/<actorId>/…`. For example, `http://localhost:3500/v1.0/actors/myactor/50/method/getData` calls the `getData` method on the newly created `myactor` with id `50`.

-The Dapr runtime SDKs have language specific actor frameworks. The .NET SDK for example has C# actors. The goal is for all the Dapr language SDKs to have an actor framework. Currently .NET, Java and Python SDK have actor frameworks.
+The Dapr runtime SDKs have language-specific actor frameworks. For example, the .NET SDK has C# actors. The goal is for all the Dapr language SDKs to have an actor framework. Currently .NET, Java and Python SDK have actor frameworks.

## Developer language SDKs and frameworks

-### Does Dapr have any SDKs if I want to work with a particular programming language or framework?
+### Does Dapr have any SDKs I can use if I want to work with a particular programming language or framework?

-To make using Dapr more natural for different languages, it includes [language specific SDKs]({{}}) for Go, Java, JavaScript, .NET, Python, PHP, Rust and C++.
+To make using Dapr more natural for different languages, it includes [language specific SDKs]({{}}) for Go, Java, JavaScript, .NET, Python, PHP, Rust and C++. These SDKs expose the functionality in the Dapr building blocks, such as saving state, publishing an event or creating an actor, through a typed language API rather than calling the http/gRPC API. 
This enables you to write a combination of stateless and stateful functions and actors all in the language of your choice. And because these SDKs share the Dapr runtime, you get cross-language actor and functions support. -These SDKs expose the functionality in the Dapr building blocks, such as saving state, publishing an event or creating an actor, through a typed, language API rather than calling the http/gRPC API. This enables you to write a combination of stateless and stateful functions and actors all in the language of their choice. And because these SDKs share the Dapr runtime, you get cross-language actor and functions support. - -### What frameworks does Dapr integrated with? +### What frameworks does Dapr integrate with? Dapr can be integrated with any developer framework. For example, in the Dapr .NET SDK you can find ASP.NET Core integration, which brings stateful routing controllers that respond to pub/sub events from other services. Dapr is integrated with the following frameworks; From e0669765bec5a0ad7402a05f715cf17af81a0f05 Mon Sep 17 00:00:00 2001 From: Aaron Crawfis Date: Mon, 14 Jun 2021 19:01:47 -0700 Subject: [PATCH 20/22] Ignore intellij link that isn't resolvable (#1564) --- daprdocs/content/en/developing-applications/ides/intellij.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/daprdocs/content/en/developing-applications/ides/intellij.md b/daprdocs/content/en/developing-applications/ides/intellij.md index 108770a104d..80bc755e552 100644 --- a/daprdocs/content/en/developing-applications/ides/intellij.md +++ b/daprdocs/content/en/developing-applications/ides/intellij.md @@ -146,4 +146,8 @@ Happy debugging! ## Related links + + - [Change](https://intellij-support.jetbrains.com/hc/en-us/articles/206544519-Directories-used-by-the-IDE-to-store-settings-caches-plugins-and-logs) in IntelliJ configuration directory location + + \ No newline at end of file From e71fa4906e83f25ba0a40788851a821631276202 Mon Sep 17 00:00:00 2001 From: Aaron Crawfis Date: Mon, 14 Jun 2021 19:06:12 -0700 Subject: [PATCH 21/22] Update issue templates (#1563) * Update issue templates * Add needs-triage --- .github/ISSUE_TEMPLATE/new-content-needed.md | 11 +++++++++-- .github/ISSUE_TEMPLATE/typo.md | 4 ++-- .github/ISSUE_TEMPLATE/website-issue.md | 4 ++-- .../ISSUE_TEMPLATE/wrong-information-code-steps.md | 4 ++-- 4 files changed, 15 insertions(+), 8 deletions(-) diff --git a/.github/ISSUE_TEMPLATE/new-content-needed.md b/.github/ISSUE_TEMPLATE/new-content-needed.md index 20564eb2a7c..5a1ec2bafc1 100644 --- a/.github/ISSUE_TEMPLATE/new-content-needed.md +++ b/.github/ISSUE_TEMPLATE/new-content-needed.md @@ -1,8 +1,8 @@ --- name: New Content Needed about: Topic is missing and needs to be written -title: "[CONTENT]" -labels: content/missing-information +title: '' +labels: needs-triage,content/missing-information assignees: '' --- @@ -16,5 +16,12 @@ assignees: '' **Where should the new material be placed?** +**The associated pull request from dapr/dapr, dapr/components-contrib, or other Dapr code repos** + + **Additional context** diff --git a/.github/ISSUE_TEMPLATE/typo.md b/.github/ISSUE_TEMPLATE/typo.md index 1a7a32fd95f..9ad74454f26 100644 --- a/.github/ISSUE_TEMPLATE/typo.md +++ b/.github/ISSUE_TEMPLATE/typo.md @@ -1,8 +1,8 @@ --- name: Typo about: Report incorrect language/small updates to fix readability -title: "[TYPO]" -labels: content/typo +title: '' +labels: needs-triage,content/typo assignees: '' --- diff --git a/.github/ISSUE_TEMPLATE/website-issue.md 
b/.github/ISSUE_TEMPLATE/website-issue.md index 12afec0a33c..3f23d75cd69 100644 --- a/.github/ISSUE_TEMPLATE/website-issue.md +++ b/.github/ISSUE_TEMPLATE/website-issue.md @@ -1,8 +1,8 @@ --- name: Website Issue about: The website is broken or not working correctly. -title: "[WEBSITE]" -labels: website/functionality +title: '' +labels: needs-triage,website/functionality assignees: AaronCrawfis --- diff --git a/.github/ISSUE_TEMPLATE/wrong-information-code-steps.md b/.github/ISSUE_TEMPLATE/wrong-information-code-steps.md index 16896b6711c..150371fe400 100644 --- a/.github/ISSUE_TEMPLATE/wrong-information-code-steps.md +++ b/.github/ISSUE_TEMPLATE/wrong-information-code-steps.md @@ -1,8 +1,8 @@ --- name: Wrong Information/Code/Steps about: Something in the docs is incorrect -title: "[CONTENT]" -labels: P1, content/incorrect-information +title: '' +labels: needs-triage,content/incorrect-information assignees: '' --- From 5c6c31b0fc78b27f6aca1974cabf0f6c6c5539bf Mon Sep 17 00:00:00 2001 From: Evan Simkowitz Date: Mon, 14 Jun 2021 19:14:15 -0700 Subject: [PATCH 22/22] Updating PubSub documentation to remove slave wording (#1565) * Updating PubSub documentation to remove slave Bitnami has updated their Redis Helm chart to change redis-slave to redis-replicas. I am updating the documentation for PubSub to reflect this change and avoid confusion for any readers. * Removing more instances of Redis slave naming Co-authored-by: Aaron Crawfis --- .../content/en/getting-started/configure-state-pubsub.md | 6 +++--- .../operations/components/setup-pubsub/pubsub-namespaces.md | 2 +- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/daprdocs/content/en/getting-started/configure-state-pubsub.md b/daprdocs/content/en/getting-started/configure-state-pubsub.md index e1f943cf0af..84faafa3bfb 100644 --- a/daprdocs/content/en/getting-started/configure-state-pubsub.md +++ b/daprdocs/content/en/getting-started/configure-state-pubsub.md @@ -52,8 +52,8 @@ You can use [Helm](https://helm.sh/) to quickly create a Redis instance in our K $ kubectl get pods NAME READY STATUS RESTARTS AGE redis-master-0 1/1 Running 0 69s - redis-slave-0 1/1 Running 0 69s - redis-slave-1 1/1 Running 0 22s + redis-replicas-0 1/1 Running 0 69s + redis-replicas-1 1/1 Running 0 22s ``` Note that the hostname is `redis-master.default.svc.cluster.local:6379`, and a Kubernetes secret, `redis`, is created automatically. @@ -228,4 +228,4 @@ kubectl apply -f redis-pubsub.yaml {{< /tabs >}} ## Next steps -- [Try out a Dapr quickstart]({{< ref quickstarts.md >}}) \ No newline at end of file +- [Try out a Dapr quickstart]({{< ref quickstarts.md >}}) diff --git a/daprdocs/content/en/operations/components/setup-pubsub/pubsub-namespaces.md b/daprdocs/content/en/operations/components/setup-pubsub/pubsub-namespaces.md index bd042668441..b70fc73ba47 100644 --- a/daprdocs/content/en/operations/components/setup-pubsub/pubsub-namespaces.md +++ b/daprdocs/content/en/operations/components/setup-pubsub/pubsub-namespaces.md @@ -24,7 +24,7 @@ The table below shows which resources are deployed to which namespaces: | Resource | namespace-a | namespace-b | |------------------------ |-------------|-------------| | Redis master | X | | -| Redis slave | X | | +| Redis replicas | X | | | Dapr's PubSub component | X | X | | Node subscriber | X | | | Python subscriber | X | |
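For context on the pod listing shown in this patch, a sketch of how such a Redis instance is typically created (the release name `redis` follows the docs' convention; newer Bitnami chart versions create `redis-replicas-*` pods where older ones created `redis-slave-*`):

```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install redis bitnami/redis
# Once ready, expect a redis-master-0 pod plus redis-replicas-* pods.
kubectl get pods
```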