diff --git a/README.md b/README.md
index f4eb19db9e4..65072e23eb3 100644
--- a/README.md
+++ b/README.md
@@ -16,8 +16,8 @@ The following branches are currently maintained:
| Branch | Website | Description |
| ------------------------------------------------------------ | -------------------------- | ------------------------------------------------------------------------------------------------ |
-| [v1.14](https://github.com/dapr/docs) (primary) | https://docs.dapr.io | Latest Dapr release documentation. Typo fixes, clarifications, and most documentation goes here. |
-| [v1.15](https://github.com/dapr/docs/tree/v1.15) (pre-release) | https://v1-15.docs.dapr.io/ | Pre-release documentation. Doc updates that are only applicable to v1.15+ go here. |
+| [v1.15](https://github.com/dapr/docs) (primary) | https://docs.dapr.io | Latest Dapr release documentation. Typo fixes, clarifications, and most documentation goes here. |
+| [v1.16](https://github.com/dapr/docs/tree/v1.16) (pre-release) | https://v1-16.docs.dapr.io/ | Pre-release documentation. Doc updates that are only applicable to v1.16+ go here. |
For more information visit the [Dapr branch structure](https://docs.dapr.io/contributing/docs-contrib/contributing-docs/#branch-guidance) document.
diff --git a/daprdocs/config.toml b/daprdocs/config.toml
index 5cbcde9a6d4..a5fe94c745b 100644
--- a/daprdocs/config.toml
+++ b/daprdocs/config.toml
@@ -109,7 +109,7 @@ id = "G-60C6Q1ETC1"
lang = "en"
[[module.mounts]]
source = "../sdkdocs/rust/daprdocs/content/en/rust-sdk-contributing"
- target = "content/contributing/sdks-contrib"
+ target = "content/contributing/sdk-contrib/"
lang = "en"
[[module.mounts]]
@@ -143,7 +143,11 @@ id = "G-60C6Q1ETC1"
[[module.mounts]]
source = "../translations/docs-zh/translated_content/zh_CN/sdks_js"
target = "content/developing-applications/sdks/js"
- lang = "zh-hans"
+ lang = "zh-hans"
+ [[module.mounts]]
+ source = "../translations/docs-zh/translated_content/zh_CN/sdks_rust"
+ target = "content/developing-applications/sdks/rust"
+ lang = "zh-hans"
[[module.mounts]]
source = "../translations/docs-zh/translated_content/zh_CN/pluggable-components/dotnet"
target = "content/developing-applications/develop-components/pluggable-components/pluggable-components-sdks/pluggable-components-dotnet"
@@ -210,7 +214,7 @@ url_latest_version = "https://docs.dapr.io"
[[params.versions]]
version = "v1.15 (latest)"
url = "https://docs.dapr.io"
- [[params.versions]]
+[[params.versions]]
version = "v1.14"
url = "https://v1-14.docs.dapr.io"
[[params.versions]]
@@ -277,4 +281,4 @@ no = 'Sorry to hear that. Please }}) | `/v1.0/invoke` | Service invocation enables applications to communicate with each other through well-known endpoints in the form of http or gRPC messages. Dapr provides an endpoint that acts as a combination of a reverse proxy with built-in service discovery, while leveraging built-in distributed tracing and error handling.
| [**Publish and subscribe**]({{< ref "pubsub-overview.md" >}}) | `/v1.0/publish` `/v1.0/subscribe`| Pub/Sub is a loosely coupled messaging pattern where senders (or publishers) publish messages to a topic, to which subscribers subscribe. Dapr supports the pub/sub pattern between applications.
-| [**Workflows**]({{< ref "workflow-overview.md" >}}) | `/v1.0/workflow` | The Workflow API enables you to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows or workflow components. The Workflow API can be combined with other Dapr API building blocks. For example, a workflow can call another service with service invocation or retrieve secrets, providing flexibility and portability.
+| [**Workflows**]({{< ref "workflow-overview.md" >}}) | `/v1.0/workflow` | The Workflow API enables you to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows. The Workflow API can be combined with other Dapr API building blocks. For example, a workflow can call another service with service invocation or retrieve secrets, providing flexibility and portability.
| [**State management**]({{< ref "state-management-overview.md" >}}) | `/v1.0/state` | Application state is anything an application wants to preserve beyond a single session. Dapr provides a key/value-based state and query APIs with pluggable state stores for persistence.
| [**Bindings**]({{< ref "bindings-overview.md" >}}) | `/v1.0/bindings` | A binding provides a bi-directional connection to an external cloud/on-premise service or system. Dapr allows you to invoke the external service through the Dapr binding API, and it allows your application to be triggered by events sent by the connected service.
| [**Actors**]({{< ref "actors-overview.md" >}}) | `/v1.0/actors` | An actor is an isolated, independent unit of compute and state with single-threaded execution. Dapr provides an actor implementation based on the virtual actor pattern which provides a single-threaded programming model and where actors are garbage collected when not in use.
diff --git a/daprdocs/content/en/concepts/components-concept.md b/daprdocs/content/en/concepts/components-concept.md
index 27c64796968..77b7e7f3abd 100644
--- a/daprdocs/content/en/concepts/components-concept.md
+++ b/daprdocs/content/en/concepts/components-concept.md
@@ -78,13 +78,6 @@ Pub/sub broker components are message brokers that can pass messages to/from ser
- [List of pub/sub brokers]({{< ref supported-pubsub >}})
- [Pub/sub broker implementations](https://github.com/dapr/components-contrib/tree/master/pubsub)
-### Workflows
-
-A [workflow]({{< ref workflow-overview.md >}}) is custom application logic that defines a reliable business process or data flow. Workflow components are workflow runtimes (or engines) that run the business logic written for that workflow and store their state into a state store.
-
-
-
### State stores
State store components are data stores (databases, files, memory) that store key-value pairs as part of the [state management]({{< ref "state-management-overview.md" >}}) building block.
diff --git a/daprdocs/content/en/concepts/dapr-services/scheduler.md b/daprdocs/content/en/concepts/dapr-services/scheduler.md
index 2fba4ba713a..a0d00aa19ef 100644
--- a/daprdocs/content/en/concepts/dapr-services/scheduler.md
+++ b/daprdocs/content/en/concepts/dapr-services/scheduler.md
@@ -5,28 +5,136 @@ linkTitle: "Scheduler"
description: "Overview of the Dapr scheduler service"
---
-The Dapr Scheduler service is used to schedule jobs, running in [self-hosted mode]({{< ref self-hosted >}}) or on [Kubernetes]({{< ref kubernetes >}}).
+The Dapr Scheduler service is used to schedule different types of jobs, running in [self-hosted mode]({{< ref self-hosted >}}) or on [Kubernetes]({{< ref kubernetes >}}):
+
+- Jobs created through the Jobs API
+- Actor reminder jobs (used by the actor reminders)
+- Actor reminder jobs created by the Workflow API (which uses actor reminders)
-The diagram below shows how the Scheduler service is used via the jobs API when called from your application. All the jobs that are tracked by the Scheduler service are stored in an embedded Etcd database.
+From Dapr v1.15, the Scheduler service is used by default to schedule actor reminders as well as actor reminders for the Workflow API.
+
+There is no concept of a leader Scheduler instance; all Scheduler service replicas are peers. All replicas receive jobs to be scheduled for execution, and the jobs are allocated between the available Scheduler service replicas to load balance the trigger events.
+
+The diagram below shows how the Scheduler service is used via the jobs API when called from your application. All the jobs that are tracked by the Scheduler service are stored in an embedded etcd database.
-## Actor reminders
+## Actor Reminders
Prior to Dapr v1.15, [actor reminders]({{< ref "actors-timers-reminders.md#actor-reminders" >}}) were run using the Placement service. Now, by default, the [`SchedulerReminders` feature flag]({{< ref "support-preview-features.md#current-preview-features" >}}) is set to `true`, and all new actor reminders you create are run using the Scheduler service to make them more scalable.
-When you deploy Dapr v1.15, any _existing_ actor reminders are migrated from the Placement service to the Scheduler service as a one time operation for each actor type. You can prevent this migration by setting the `SchedulerReminders` flag to `false` in application configuration file for the actor type.
+When you deploy Dapr v1.15, any _existing_ actor reminders are automatically migrated from the actor state store to the Scheduler service as a one-time operation for each actor type. Each replica migrates only the reminders whose actor type and ID are associated with that host. This means that all the reminders associated with an actor type are migrated only once every replica implementing that type has been upgraded to 1.15. There is _no_ loss of reminder triggers during the migration. However, you can prevent this migration and keep the existing actor reminders running from the actor state store by setting the `SchedulerReminders` flag to `false` in the application configuration file for the actor type.
+
+To confirm that the migration was successful, check the Dapr sidecar logs for the following:
+
+```sh
+Running actor reminder migration from state store to scheduler
+```
+coupled with
+```sh
+Migrated X reminders from state store to scheduler successfully
+```
+or
+```sh
+Skipping migration, no missing scheduler reminders found
+```
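If your sidecar logs are collected centrally, a single filter surfaces all three messages at once. A sketch (the sample log lines are inlined purely for illustration; in practice you would pipe in real sidecar output, e.g. from `kubectl logs`):

```shell
# Filter migration-related messages out of sidecar log output.
# The printf below stands in for a real log stream.
printf '%s\n' \
  'level=info msg="Running actor reminder migration from state store to scheduler"' \
  'level=info msg="Migrated 12 reminders from state store to scheduler successfully"' \
  'level=info msg="some unrelated log line"' \
  | grep -E 'reminder migration|Migrated [0-9]+ reminders|Skipping migration'
```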
+
+## Job locality
+
+### Default job behavior
+
+By default, when the Scheduler service triggers jobs, they are sent back to a single replica for the same app ID that scheduled the job in a randomly load balanced manner. This provides basic load balancing across your application's replicas, which is suitable for most use cases where strict locality isn't required.
+
+### Using actor reminders for perfect locality
+
+For users who require perfect job locality (having jobs triggered on the exact same host that created them), actor reminders provide a solution. To enforce perfect locality for a job:
+
+1. Create an actor type with a random UUID that is unique to the specific replica
+2. Use this actor type to create an actor reminder
+
+This approach ensures that the job will always be triggered on the same host which created it, rather than being randomly distributed among replicas.
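As a minimal sketch of step 1 (the names are illustrative; the reminder itself would then be registered through the actor API or your SDK's actor runtime):

```shell
# Generate a replica-unique actor type name once, at process startup (Linux).
REPLICA_UUID=$(cat /proc/sys/kernel/random/uuid)
ACTOR_TYPE="locality-actor-${REPLICA_UUID}"

# Reminders registered under this actor type are always triggered on this
# replica, because no other replica hosts the type.
echo "Register reminders under actor type: ${ACTOR_TYPE}"
```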
+
+## Job triggering
+
+### Job failure policy and staging queue
+
+When the Scheduler service triggers a job and the app returns a client-side error, the job is retried by default at a 1 second interval, up to a maximum of 3 retries.
+
+For non-client-side errors, for example, when a job cannot be sent to an available Dapr sidecar at trigger time, the job is placed in a staging queue within the Scheduler service. Jobs remain in this queue until a suitable sidecar instance becomes available, at which point they are automatically sent to the appropriate Dapr sidecar instance.
## Self-hosted mode
The Scheduler service Docker container is started automatically as part of `dapr init`. It can also be run manually as a process if you are running in [slim-init mode]({{< ref self-hosted-no-docker.md >}}).
+The Scheduler can be run in both high availability (HA) and non-HA modes in self-hosted deployments. However, non-HA mode is not recommended for production use. If switching between non-HA and HA modes, the existing data directory must be removed, which results in loss of jobs and actor reminders. [Run a back-up]({{< ref "#back-up-and-restore-scheduler-data" >}}) before making this change to avoid losing data.
+
## Kubernetes mode
-The Scheduler service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. You can run Scheduler in high availability (HA) mode. [Learn more about setting HA mode in your Kubernetes service.]({{< ref "kubernetes-production.md#individual-service-ha-helm-configuration" >}})
+The Scheduler service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. Scheduler always runs in high availability (HA) mode in Kubernetes deployments. Scaling the Scheduler service replicas up or down is not possible without incurring data loss due to the nature of the embedded data store. [Learn more about setting HA mode in your Kubernetes service.]({{< ref "kubernetes-production.md#individual-service-ha-helm-configuration" >}})
+
+When a Kubernetes namespace is deleted, all the jobs and actor reminders corresponding to that namespace are deleted.
+
+## Back up and restore Scheduler data
+
+In production environments, it's recommended to perform periodic backups of the Scheduler's etcd data at an interval that aligns with your recovery point objectives.
+
+### Port forward for backup operations
+
+To perform backup and restore operations, you'll need to access the embedded etcd instance. This requires port forwarding to expose the etcd client port (2379).
+
+#### Kubernetes example
+
+Here's how to port forward and connect to the etcd instance:
+
+```shell
+kubectl port-forward svc/dapr-scheduler-server 2379:2379 -n dapr-system
+```
+
+#### Docker Compose example
+
+Here's how to expose the etcd ports in a Docker Compose configuration for standalone mode:
+
+```yaml
+scheduler-1:
+ image: "diagrid/dapr/scheduler:dev110-linux-arm64"
+ command: ["./scheduler",
+ "--etcd-data-dir", "/var/run/dapr/scheduler",
+ "--replica-count", "3",
+ "--id","scheduler-1",
+ "--initial-cluster", "scheduler-1=http://scheduler-1:2380,scheduler-0=http://scheduler-0:2380,scheduler-2=http://scheduler-2:2380",
+ "--etcd-client-ports", "scheduler-0=2379,scheduler-1=2379,scheduler-2=2379",
+ "--etcd-client-http-ports", "scheduler-0=2330,scheduler-1=2330,scheduler-2=2330",
+ "--log-level=debug"
+ ]
+ ports:
+ - 2379:2379
+ volumes:
+ - ./dapr_scheduler/1:/var/run/dapr/scheduler
+ networks:
+ - network
+```
+
+When running in HA mode, you only need to expose the ports for one scheduler instance to perform backup operations.
+
+### Performing backup and restore
+
+Once you have access to the etcd ports, you can follow the [official etcd backup and restore documentation](https://etcd.io/docs/v3.5/op-guide/recovery/) to perform backup and restore operations. The process involves using standard etcd commands to create snapshots and restore from them.
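As a sketch, the snapshot workflow with the standard etcd tooling looks roughly like this (it assumes `etcdctl` and `etcdutl` are installed and port 2379 has already been forwarded from a Scheduler replica; the file paths are illustrative):

```shell
# Save a snapshot of the Scheduler's embedded etcd over the forwarded port.
backup() {
  ETCDCTL_API=3 etcdctl --endpoints=localhost:2379 snapshot save scheduler-backup.db
}

# Restore the snapshot into a fresh data directory, which a Scheduler replica
# can then be pointed at (e.g. via --etcd-data-dir in self-hosted mode).
restore() {
  etcdutl snapshot restore scheduler-backup.db --data-dir ./scheduler-restored
}

echo "helpers defined: backup, restore"
```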
+
+## Monitoring Scheduler's etcd metrics
+
+Port forward the Scheduler instance and view etcd's metrics with the following:
+
+```shell
+curl -s http://localhost:2379/metrics
+```
+
+Fine tune the embedded etcd to your needs by [reviewing and configuring the Scheduler's etcd flags as needed](https://github.com/dapr/dapr/blob/master/charts/dapr/README.md#dapr-scheduler-options).
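For example, a Helm `values.yaml` fragment tuning compaction and the space quota might look like the following sketch (option names taken from the chart README; verify them against your chart version before use):

```yaml
dapr_scheduler:
  etcdSpaceQuota: "4Gi"
  etcdCompactionMode: "periodic"
  etcdCompactionRetention: "24h"
```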
+
+## Disabling the Scheduler service
+
+If you are not using any features that require the Scheduler service (the Jobs API, actor reminders, or workflows), you can disable it by setting `global.scheduler.enabled=false` when installing Dapr with Helm.
For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).
## Related links
-[Learn more about the Jobs API.]({{< ref jobs_api.md >}})
\ No newline at end of file
+[Learn more about the Jobs API.]({{< ref jobs_api.md >}})
diff --git a/daprdocs/content/en/concepts/faq/faq.md b/daprdocs/content/en/concepts/faq/faq.md
index 34d37823f40..ce59e92c778 100644
--- a/daprdocs/content/en/concepts/faq/faq.md
+++ b/daprdocs/content/en/concepts/faq/faq.md
@@ -27,11 +27,11 @@ Creating a new actor follows a local call like `http://localhost:3500/v1.0/actor
The Dapr runtime SDKs have language-specific actor frameworks. For example, the .NET SDK has C# actors. The goal is for all the Dapr language SDKs to have an actor framework. Currently the .NET, Java, Go, and Python SDKs have actor frameworks.
-### Does Dapr have any SDKs I can use if I want to work with a particular programming language or framework?
+## Does Dapr have any SDKs I can use if I want to work with a particular programming language or framework?
To make using Dapr more natural for different languages, it includes [language specific SDKs]({{< ref sdks >}}) for Go, Java, JavaScript, .NET, Python, PHP, Rust and C++. These SDKs expose the functionality in the Dapr building blocks, such as saving state, publishing an event or creating an actor, through a typed language API rather than calling the http/gRPC API. This enables you to write a combination of stateless and stateful functions and actors all in the language of your choice. And because these SDKs share the Dapr runtime, you get cross-language actor and functions support.
-### What frameworks does Dapr integrate with?
+## What frameworks does Dapr integrate with?
Dapr can be integrated with any developer framework. For example, in the Dapr .NET SDK you can find ASP.NET Core integration, which brings stateful routing controllers that respond to pub/sub events from other services.
Dapr is integrated with the following frameworks:
diff --git a/daprdocs/content/en/concepts/overview.md b/daprdocs/content/en/concepts/overview.md
index 7613042ff93..7de1b13b92c 100644
--- a/daprdocs/content/en/concepts/overview.md
+++ b/daprdocs/content/en/concepts/overview.md
@@ -46,7 +46,7 @@ Each of these building block APIs is independent, meaning that you can use any n
|----------------|-------------|
| [**Service-to-service invocation**]({{< ref "service-invocation-overview.md" >}}) | Resilient service-to-service invocation enables method calls, including retries, on remote services, wherever they are located in the supported hosting environment.
| [**Publish and subscribe**]({{< ref "pubsub-overview.md" >}}) | Publishing events and subscribing to topics between services enables event-driven architectures to simplify horizontal scalability and make them resilient to failure. Dapr provides at-least-once message delivery guarantee, message TTL, consumer groups and other advance features.
-| [**Workflows**]({{< ref "workflow-overview.md" >}}) | The workflow API can be combined with other Dapr building blocks to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows or workflow components.
+| [**Workflows**]({{< ref "workflow-overview.md" >}}) | The workflow API can be combined with other Dapr building blocks to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows.
| [**State management**]({{< ref "state-management-overview.md" >}}) | With state management for storing and querying key/value pairs, long-running, highly available, stateful services can be easily written alongside stateless services in your application. The state store is pluggable and examples include AWS DynamoDB, Azure Cosmos DB, Azure SQL Server, GCP Firebase, PostgreSQL or Redis, among others.
| [**Resource bindings**]({{< ref "bindings-overview.md" >}}) | Resource bindings with triggers builds further on event-driven architectures for scale and resiliency by receiving and sending events to and from any external source such as databases, queues, file systems, etc.
| [**Actors**]({{< ref "actors-overview.md" >}}) | A pattern for stateful and stateless objects that makes concurrency simple, with method and state encapsulation. Dapr provides many capabilities in its actor runtime, including concurrency, state, and life-cycle management for actor activation/deactivation, and timers and reminders to wake up actors.
@@ -76,7 +76,7 @@ Dapr exposes its HTTP and gRPC APIs as a sidecar architecture, either as a conta
## Hosting environments
Dapr can be hosted in multiple environments, including:
-- Self-hosted on a Windows/Linux/macOS machine for local development
+- Self-hosted on a Windows/Linux/macOS machine for local development and in production
- On Kubernetes or clusters of physical or virtual machines in production
### Self-hosted local development
diff --git a/daprdocs/content/en/developing-applications/building-blocks/actors/actors-features-concepts.md b/daprdocs/content/en/developing-applications/building-blocks/actors/actors-features-concepts.md
index e486b3243ec..da52612980b 100644
--- a/daprdocs/content/en/developing-applications/building-blocks/actors/actors-features-concepts.md
+++ b/daprdocs/content/en/developing-applications/building-blocks/actors/actors-features-concepts.md
@@ -57,7 +57,7 @@ This simplifies some choices, but also carries some consideration:
## Actor communication
-You can interact with Dapr to invoke the actor method by calling HTTP/gRPC endpoint.
+You can interact with Dapr to invoke the actor method by calling the HTTP endpoint.
```bash
POST/GET/PUT/DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/<method>
diff --git a/daprdocs/content/en/developing-applications/building-blocks/actors/actors-timers-reminders.md b/daprdocs/content/en/developing-applications/building-blocks/actors/actors-timers-reminders.md
index 8664045632c..cc78f521d71 100644
--- a/daprdocs/content/en/developing-applications/building-blocks/actors/actors-timers-reminders.md
+++ b/daprdocs/content/en/developing-applications/building-blocks/actors/actors-timers-reminders.md
@@ -108,7 +108,7 @@ Refer [api spec]({{< ref "actors_api.md#invoke-timer" >}}) for more details.
## Actor reminders
{{% alert title="Note" color="primary" %}}
-In Dapr v1.15, actor reminders are stored by default in the [Scheduler service]({{< ref "scheduler.md#actor-reminders" >}}).
+In Dapr v1.15, actor reminders are stored by default in the [Scheduler service]({{< ref "scheduler.md#actor-reminders" >}}). When upgrading to Dapr v1.15, all existing reminders are automatically migrated to the Scheduler service as a one-time operation for each actor type, with no loss of reminders.
{{% /alert %}}
Reminders are a mechanism to trigger *persistent* callbacks on an actor at specified times. Their functionality is similar to timers. But unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them or the actor is explicitly deleted or the number in invocations is exhausted. Specifically, reminders are triggered across actor deactivations and failovers because the Dapr actor runtime persists the information about the actors' reminders using Dapr actor state provider.
@@ -137,6 +137,10 @@ You can remove the actor reminder by calling
DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```
+If an actor reminder is triggered and the app does not return a 2xx status code to the runtime (for example, because of a connection issue),
+the actor reminder is retried up to three times with a backoff interval of one second between each attempt. There may be
+additional retries attempted in accordance with any optionally applied [actor resiliency policy]({{< ref "override-default-retries.md" >}}).
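For illustration, an actor resiliency policy overriding those defaults might be sketched as follows (the actor type and policy names are hypothetical; check the resiliency policy reference for the exact schema):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: reminder-retries
spec:
  policies:
    retries:
      reminderRetry:
        policy: constant
        duration: 5s
        maxRetries: 5
  targets:
    actors:
      MyActorType:
        retry: reminderRetry
```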
+
Refer [api spec]({{< ref "actors_api.md#invoke-reminder" >}}) for more details.
## Error handling
diff --git a/daprdocs/content/en/developing-applications/building-blocks/conversation/conversation-overview.md b/daprdocs/content/en/developing-applications/building-blocks/conversation/conversation-overview.md
index f7621517e6e..38cce1067a2 100644
--- a/daprdocs/content/en/developing-applications/building-blocks/conversation/conversation-overview.md
+++ b/daprdocs/content/en/developing-applications/building-blocks/conversation/conversation-overview.md
@@ -10,18 +10,46 @@ description: "Overview of the conversation API building block"
The conversation API is currently in [alpha]({{< ref "certification-lifecycle.md#certification-levels" >}}).
{{% /alert %}}
+Dapr's conversation API reduces the complexity of securely and reliably interacting with Large Language Models (LLMs) at scale. Whether you're a developer who doesn't have the necessary native SDKs or a polyglot shop that just wants to focus on the prompt aspects of LLM interactions, the conversation API provides one consistent API entry point to talk to underlying LLM providers.
-Using the Dapr conversation API, you can reduce the complexity of interacting with Large Language Models (LLMs) and enable critical performance and security functionality with features like prompt caching and personally identifiable information (PII) data obfuscation.
+
+
+In addition to enabling critical performance and security functionality (like [prompt caching]({{< ref "#prompt-caching" >}}) and [PII scrubbing]({{< ref "#personally-identifiable-information-pii-obfuscation" >}})), you can also pair the conversation API with Dapr functionalities, like:
+- Resiliency circuit breakers and retries to circumvent limit and token errors, or
+- Middleware to authenticate requests coming to and from the LLM
+
+Dapr provides observability by issuing metrics for your LLM interactions.
## Features
+The following features are available out of the box for [all the supported conversation components]({{< ref supported-conversation >}}).
+
### Prompt caching
-To significantly reduce latency and cost, frequent prompts are stored in a cache to be reused, instead of reprocessing the information for every new request. Prompt caching optimizes performance by storing and reusing prompts that are often repeated across multiple API calls.
+Prompt caching optimizes performance by storing and reusing prompts that are often repeated across multiple API calls. To significantly reduce latency and cost, Dapr stores frequent prompts in a local cache to be reused within your cluster, pod, or other environment, instead of reprocessing the information for every new request.
### Personally identifiable information (PII) obfuscation
-The PII obfuscation feature identifies and removes any PII from a conversation response. This feature protects your privacy by eliminating sensitive details like names, addresses, phone numbers, or other details that could be used to identify an individual.
+The PII obfuscation feature identifies and removes any form of sensitive user information from a conversation response. Simply enable PII obfuscation on input and output data to protect your privacy and scrub sensitive details that could be used to identify an individual.
+
+The PII scrubber obfuscates the following user information:
+- Phone number
+- Email address
+- IP address
+- Street address
+- Credit cards
+- Social Security number
+- ISBN
+- Media Access Control (MAC) address
+- Secure Hash Algorithm 1 (SHA-1) hex
+- SHA-256 hex
+- MD5 hex
+
+## Demo
+
+Watch the demo presented during [Diagrid's Dapr v1.15 celebration](https://www.diagrid.io/videos/dapr-1-15-deep-dive) to see how the conversation API works using the .NET SDK.
+
+
## Try out conversation
@@ -31,7 +59,7 @@ Want to put the Dapr conversation API to the test? Walk through the following qu
| Quickstart/tutorial | Description |
| ------------------- | ----------- |
-| [Conversation quickstart](todo) | . |
+| [Conversation quickstart]({{< ref conversation-quickstart.md >}}) | Learn how to interact with Large Language Models (LLMs) using the conversation API. |
### Start using the conversation API directly in your app
@@ -40,4 +68,4 @@ Want to skip the quickstarts? Not a problem. You can try out the conversation bu
## Next steps
- [How-To: Converse with an LLM using the conversation API]({{< ref howto-conversation-layer.md >}})
-- [Conversation API components]({{< ref supported-conversation >}})
\ No newline at end of file
+- [Conversation API components]({{< ref supported-conversation >}})
diff --git a/daprdocs/content/en/developing-applications/building-blocks/conversation/howto-conversation-layer.md b/daprdocs/content/en/developing-applications/building-blocks/conversation/howto-conversation-layer.md
index 3a6266f4d8c..7e7fd0fb478 100644
--- a/daprdocs/content/en/developing-applications/building-blocks/conversation/howto-conversation-layer.md
+++ b/daprdocs/content/en/developing-applications/building-blocks/conversation/howto-conversation-layer.md
@@ -14,6 +14,7 @@ Let's get started using the [conversation API]({{< ref conversation-overview.md
- Set up one of the available Dapr components (echo) that work with the conversation API.
- Add the conversation client to your application.
+- Run the connection using `dapr run`.
## Set up the conversation component
@@ -33,8 +34,29 @@ spec:
version: v1
```
+### Use the OpenAI component
+
+To interface with a real LLM, use one of the other [supported conversation components]({{< ref "supported-conversation" >}}), including OpenAI, Hugging Face, Anthropic, DeepSeek, and more.
+
+For example, to swap out the `echo` mock component with an `OpenAI` component, replace the `conversation.yaml` file with the following. You'll need to copy your API key into the component file.
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+ name: openai
+spec:
+ type: conversation.openai
+ metadata:
+ - name: key
+ value: <YOUR_OPENAI_API_KEY>
+ - name: model
+ value: gpt-4-turbo
+```
+
## Connect the conversation client
+The following examples use the Dapr SDK client, which calls the conversation API through the Dapr sidecar. You can find the full SDK examples in [the related links]({{< ref "#related-links" >}}).
{{< tabs ".NET" "Go" "Rust" >}}
@@ -42,8 +64,30 @@ spec:
{{% codetab %}}
-```dotnet
-todo
+```csharp
+using Dapr.AI.Conversation;
+using Dapr.AI.Conversation.Extensions;
+
+var builder = WebApplication.CreateBuilder(args);
+
+builder.Services.AddDaprConversationClient();
+
+var app = builder.Build();
+
+var conversationClient = app.Services.GetRequiredService<DaprConversationClient>();
+var response = await conversationClient.ConverseAsync("conversation",
+    new List<DaprConversationInput>
+    {
+        new DaprConversationInput(
+            "Please write a witty haiku about the Dapr distributed programming framework at dapr.io",
+            DaprConversationRole.Generic)
+    });
+
+Console.WriteLine("Received the following from the LLM:");
+foreach (var resp in response.Outputs)
+{
+ Console.WriteLine($"\t{resp.Result}");
+}
```
{{% /codetab %}}
@@ -68,12 +112,12 @@ func main() {
}
input := dapr.ConversationInput{
- Message: "hello world",
- // Role: nil, // Optional
- // ScrubPII: nil, // Optional
+ Content: "Please write a witty haiku about the Dapr distributed programming framework at dapr.io",
+ // Role: "", // Optional
+ // ScrubPII: false, // Optional
}
- fmt.Printf("conversation input: %s\n", input.Message)
+ fmt.Printf("conversation input: %s\n", input.Content)
var conversationComponent = "echo"
@@ -110,14 +154,14 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut client = DaprClient::connect(address).await?;
- let input = ConversationInputBuilder::new("hello world").build();
+ let input = ConversationInputBuilder::new("Please write a witty haiku about the Dapr distributed programming framework at dapr.io").build();
let conversation_component = "echo";
let request =
ConversationRequestBuilder::new(conversation_component, vec![input.clone()]).build();
- println!("conversation input: {:?}", input.message);
+ println!("conversation input: {:?}", input.content);
let response = client.converse_alpha1(request).await?;
@@ -130,6 +174,94 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
{{< /tabs >}}
+## Run the conversation connection
+
+Start the application and Dapr sidecar using the `dapr run` command. For example, in this scenario we run `dapr run` on an application with the app ID `conversation`, pointing to the conversation YAML file in the `./config` directory.
+
+{{< tabs ".NET" "Go" "Rust" >}}
+
+
+{{% codetab %}}
+
+```bash
+dapr run --app-id conversation --dapr-grpc-port 50001 --log-level debug --resources-path ./config -- dotnet run
+```
+
+{{% /codetab %}}
+
+
+{{% codetab %}}
+
+```bash
+dapr run --app-id conversation --dapr-grpc-port 50001 --log-level debug --resources-path ./config -- go run ./main.go
+```
+
+**Expected output**
+
+```
+ - '== APP == conversation output: Please write a witty haiku about the Dapr distributed programming framework at dapr.io'
+```
+
+{{% /codetab %}}
+
+
+{{% codetab %}}
+
+```bash
+dapr run --app-id=conversation --resources-path ./config --dapr-grpc-port 3500 -- cargo run --example conversation
+```
+
+**Expected output**
+
+```
+ - 'conversation input: Please write a witty haiku about the Dapr distributed programming framework at dapr.io'
+ - 'conversation output: Please write a witty haiku about the Dapr distributed programming framework at dapr.io'
+```
+
+{{% /codetab %}}
+
+{{< /tabs >}}
+
+## Advanced features
+
+The conversation API supports the following features:
+
+1. **Prompt caching:** Allows developers to cache prompts in Dapr, leading to much faster response times and reducing costs on egress and on inserting the prompt into the LLM provider's cache.
+
+1. **PII scrubbing:** Allows for the obfuscation of data going in and out of the LLM.
+
+To learn how to enable these features, see the [conversation API reference guide]({{< ref conversation_api.md >}}).
+
+## Related links
+
+Try out the conversation API using the full examples provided in the supported SDK repos.
+
+
+{{< tabs ".NET" "Go" "Rust" >}}
+
+
+{{% codetab %}}
+
+[Dapr conversation example with the .NET SDK](https://github.com/dapr/dotnet-sdk/tree/master/examples/AI/ConversationalAI)
+
+{{% /codetab %}}
+
+
+{{% codetab %}}
+
+[Dapr conversation example with the Go SDK](https://github.com/dapr/go-sdk/tree/main/examples/conversation)
+
+{{% /codetab %}}
+
+
+{{% codetab %}}
+
+[Dapr conversation example with the Rust SDK](https://github.com/dapr/rust-sdk/tree/main/examples/src/conversation)
+
+{{% /codetab %}}
+
+{{< /tabs >}}
+
## Next steps
diff --git a/daprdocs/content/en/developing-applications/building-blocks/jobs/howto-schedule-and-handle-triggered-jobs.md b/daprdocs/content/en/developing-applications/building-blocks/jobs/howto-schedule-and-handle-triggered-jobs.md
index 0a5dba3b10c..3624cfb5af9 100644
--- a/daprdocs/content/en/developing-applications/building-blocks/jobs/howto-schedule-and-handle-triggered-jobs.md
+++ b/daprdocs/content/en/developing-applications/building-blocks/jobs/howto-schedule-and-handle-triggered-jobs.md
@@ -2,7 +2,7 @@
type: docs
title: "How-To: Schedule and handle triggered jobs"
linkTitle: "How-To: Schedule and handle triggered jobs"
-weight: 2000
+weight: 5000
description: "Learn how to use the jobs API to schedule and handle triggered jobs"
---
@@ -20,7 +20,103 @@ When you [run `dapr init` in either self-hosted mode or on Kubernetes]({{< ref i
In your code, set up and schedule jobs within your application.
-{{< tabs "Go" >}}
+{{< tabs ".NET" "Go" >}}
+
+{{% codetab %}}
+
+
+
+The following .NET SDK code sample schedules the job named `prod-db-backup`. The job data contains information
+about the database that you want to back up regularly. Over the course of this example, you'll:
+- Define types used in the rest of the example
+- Register an endpoint during application startup that handles all job trigger invocations on the service
+- Register the job with Dapr
+
+In the following example, you'll create records that you'll serialize and register alongside the job so the information
+is available when the job is triggered in the future:
+- The name of the backup task (`db-backup`)
+- The backup task's `Metadata`, including:
+ - The database name (`DBName`)
+ - The database location (`BackupLocation`)
+
+Create an ASP.NET Core project and add the latest version of `Dapr.Jobs` from NuGet.
+
+> **Note:** While it's not strictly necessary
+for your project to use the `Microsoft.NET.Sdk.Web` SDK to create jobs, as of the time this documentation is authored,
+only the service that schedules a job receives trigger invocations for it. Because those invocations expect an endpoint
+that can handle the job trigger, which requires the `Microsoft.NET.Sdk.Web` SDK, it's recommended that you
+use an ASP.NET Core project for this purpose.
+
+Start by defining types to persist our backup job data and apply our own JSON property name attributes to the properties
+so they're consistent with other language examples.
+
+```cs
+//Define the types that we'll represent the job data with
+internal sealed record BackupJobData([property: JsonPropertyName("task")] string Task, [property: JsonPropertyName("metadata")] BackupMetadata Metadata);
+internal sealed record BackupMetadata([property: JsonPropertyName("DBName")] string DatabaseName, [property: JsonPropertyName("BackupLocation")] string BackupLocation);
+```
+
+Next, set up a handler as part of your application setup that will be called anytime a job is triggered on your
+application. It's the responsibility of this handler to identify how jobs should be processed based on the job name provided.
+
+This works by registering a handler with ASP.NET Core at `/job/<job-name>`, where `<job-name>` is parameterized and
+passed into this handler delegate, meeting Dapr's expectation that an endpoint is available to handle triggered named jobs.
+
+Populate your `Program.cs` file with the following:
+
+```cs
+using System.Text;
+using System.Text.Json;
+using Dapr.Jobs;
+using Dapr.Jobs.Extensions;
+using Dapr.Jobs.Models;
+using Dapr.Jobs.Models.Responses;
+
+var builder = WebApplication.CreateBuilder(args);
+builder.Services.AddDaprJobsClient();
+var app = builder.Build();
+
+//Registers an endpoint to receive and process triggered jobs
+var cancellationTokenSource = new CancellationTokenSource(TimeSpan.FromSeconds(5));
+app.MapDaprScheduledJobHandler(async (string jobName, ReadOnlyMemory<byte> jobPayload, ILogger? logger, CancellationToken cancellationToken) => {
+ logger?.LogInformation("Received trigger invocation for job '{jobName}'", jobName);
+ switch (jobName)
+ {
+ case "prod-db-backup":
+ // Deserialize the job payload metadata
+ var jobData = JsonSerializer.Deserialize<BackupJobData>(jobPayload.Span);
+
+ // Process the backup operation - we assume this is implemented elsewhere in your code
+ await BackupDatabaseAsync(jobData, cancellationToken);
+ break;
+ }
+}, cancellationTokenSource.Token);
+
+await app.RunAsync();
+```
+
+Finally, the job itself needs to be registered with Dapr so it can be triggered at a later point in time. You can do this
+by injecting a `DaprJobsClient` into a class and executing as part of an inbound operation to your application, but for
+this example's purposes, it'll go at the bottom of the `Program.cs` file you started above. Because you'll be using the
+`DaprJobsClient` you registered with dependency injection, start by creating a scope so you can access it.
+
+```cs
+//Create a scope so we can access the registered DaprJobsClient
+await using var scope = app.Services.CreateAsyncScope();
+var daprJobsClient = scope.ServiceProvider.GetRequiredService<DaprJobsClient>();
+
+//Create the payload we wish to present alongside our future job triggers
+var jobData = new BackupJobData("db-backup", new BackupMetadata("my-prod-db", "/backup-dir"));
+
+//Serialize our payload to UTF-8 bytes
+var serializedJobData = JsonSerializer.SerializeToUtf8Bytes(jobData);
+
+//Schedule our backup job to run every minute, but only repeat 10 times
+await daprJobsClient.ScheduleJobAsync("prod-db-backup", DaprJobSchedule.FromDuration(TimeSpan.FromMinutes(1)),
+ serializedJobData, repeats: 10);
+```
+
+{{% /codetab %}}
{{% codetab %}}
@@ -92,66 +188,8 @@ In this example, at trigger time, which is `@every 1s` according to the `Schedul
}
```
-At the trigger time, the `prodDBBackupHandler` function is called, executing the desired business logic for this job at trigger time. For example:
-
-#### HTTP
-
-When you create a job using Dapr's Jobs API, Dapr will automatically assume there is an endpoint available at
-`/job/`. For instance, if you schedule a job named `test`, Dapr expects your application to listen for job
-events at `/job/test`. Ensure your application has a handler set up for this endpoint to process the job when it is
-triggered. For example:
-
-*Note: The following example is in Go but applies to any programming language.*
-
-```go
-
-func main() {
- ...
- http.HandleFunc("/job/", handleJob)
- http.HandleFunc("/job/", specificJob)
- ...
-}
-
-func specificJob(w http.ResponseWriter, r *http.Request) {
- // Handle specific triggered job
-}
-
-func handleJob(w http.ResponseWriter, r *http.Request) {
- // Handle the triggered jobs
-}
-```
-
-#### gRPC
-
-When a job reaches its scheduled trigger time, the triggered job is sent back to the application via the following
-callback function:
-
-*Note: The following example is in Go but applies to any programming language with gRPC support.*
-
-```go
-import rtv1 "github.com/dapr/dapr/pkg/proto/runtime/v1"
-...
-func (s *JobService) OnJobEventAlpha1(ctx context.Context, in *rtv1.JobEventRequest) (*rtv1.JobEventResponse, error) {
- // Handle the triggered job
-}
-```
-
-This function processes the triggered jobs within the context of your gRPC server. When you set up the server, ensure that
-you register the callback server, which will invoke this function when a job is triggered:
-
-```go
-...
-js := &JobService{}
-rtv1.RegisterAppCallbackAlphaServer(server, js)
-```
-
-In this setup, you have full control over how triggered jobs are received and processed, as they are routed directly
-through this gRPC method.
-
-#### SDKs
-
-For SDK users, handling triggered jobs is simpler. When a job is triggered, Dapr will automatically route the job to the
-event handler you set up during the server initialization. For example, in Go, you'd register the event handler like this:
+When a job is triggered, Dapr will automatically route the job to the event handler you set up during the server
+initialization. For example, in Go, you'd register the event handler like this:
```go
...
diff --git a/daprdocs/content/en/developing-applications/building-blocks/jobs/jobs-features-concepts.md b/daprdocs/content/en/developing-applications/building-blocks/jobs/jobs-features-concepts.md
new file mode 100644
index 00000000000..0d528f2c03b
--- /dev/null
+++ b/daprdocs/content/en/developing-applications/building-blocks/jobs/jobs-features-concepts.md
@@ -0,0 +1,121 @@
+---
+type: docs
+title: "Features and concepts"
+linkTitle: "Features and concepts"
+weight: 2000
+description: "Learn more about the Dapr Jobs features and concepts"
+---
+
+Now that you've learned about the [jobs building block]({{< ref jobs-overview.md >}}) at a high level, let's deep dive
+into the features and concepts included with Dapr Jobs and the various SDKs. Dapr Jobs:
+- Provides a robust and scalable API for scheduling operations to be triggered in the future.
+- Exposes several capabilities which are common across all supported languages.
+
+
+
+## Job identity
+
+All jobs are registered with a case-sensitive job name. These names are intended to be unique across all services
+interfacing with the Dapr runtime. The name is used as an identifier when creating and modifying the job as well as
+to indicate which job a triggered invocation is associated with.
+
+Only one job can be associated with a given name at any time. Any attempt to create a new job using the same name
+as an existing job overwrites the existing job.
+
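A minimal, hypothetical sketch of the overwrite rule above; the registry class and its methods are illustrative, not part of any Dapr SDK:

```python
# Illustrative in-memory model of the "one job per name" rule:
# registering a job under an existing name silently replaces it,
# and names are case-sensitive.
class JobRegistry:
    def __init__(self):
        self._jobs = {}  # case-sensitive name -> job definition

    def schedule(self, name, schedule):
        # Same name: the previous registration is overwritten.
        self._jobs[name] = {"schedule": schedule}

    def get(self, name):
        return self._jobs.get(name)


registry = JobRegistry()
registry.schedule("prod-db-backup", "@every 1m")
registry.schedule("prod-db-backup", "@daily")  # replaces the first registration
```

Even if every replica of an app registers the same job on startup, only the most recent registration survives.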
+## Scheduling jobs
+A job can be scheduled using any of the following mechanisms:
+- Intervals using Cron expressions, duration values, or period expressions
+- Specific dates and times
+
+For all time-based schedules, if a timestamp is provided with a time zone via the RFC3339 specification, that
+time zone is used. When not provided, the time zone used by the server running Dapr is used.
+In other words, do **not** assume that times run in UTC time zone, unless otherwise specified when scheduling
+the job.
+
+### Schedule using a Cron expression
+When scheduling a job to execute on a specific interval using a Cron expression, the expression is written using 6
+fields spanning the values specified in the table below:
+
+| seconds | minutes | hours | day of month | month | day of week |
+| -- | -- | -- | -- | -- | -- |
+| 0-59 | 0-59 | 0-23 | 1-31 | 1-12/jan-dec | 0-6/sun-sat |
+
+#### Example 1
+`"0 30 * * * *"` triggers every hour on the half-hour mark.
+
+#### Example 2
+`"0 15 3 * * *"` triggers every day at 03:15.
+
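The field ranges in the table can be sketched as a small validator. This is an illustrative helper, not the parser Dapr uses; it checks only plain numeric fields and `*`, omitting ranges, steps, and month/day names:

```python
# Bounds for the 6 cron fields, in order:
# seconds, minutes, hours, day of month, month, day of week
FIELD_RANGES = [(0, 59), (0, 59), (0, 23), (1, 31), (1, 12), (0, 6)]

def is_valid_cron(expr: str) -> bool:
    fields = expr.split()
    if len(fields) != 6:
        return False
    for value, (lo, hi) in zip(fields, FIELD_RANGES):
        if value == "*":
            continue  # wildcard is always allowed
        if not value.isdigit() or not lo <= int(value) <= hi:
            return False
    return True
```

Both examples above pass this check, while a 5-field (standard crontab) expression does not, since the jobs schedule format adds a leading seconds field.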
+### Schedule using a duration value
+You can schedule jobs using [a Go duration string](https://pkg.go.dev/time#ParseDuration), in which
+a string consists of a (possibly) signed sequence of decimal numbers, each with an optional fraction and a unit suffix.
+Valid time units are `"ns"`, `"us"`, `"ms"`, `"s"`, `"m"`, or `"h"`.
+
+#### Example 1
+`"2h45m"` triggers every 2 hours and 45 minutes.
+
+#### Example 2
+`"37m25s"` triggers every 37 minutes and 25 seconds.
+
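Under those rules, a Go-style duration string can be sketched in Python as follows. This illustrative helper handles only unsigned integer components; Go's `time.ParseDuration` also accepts signs and fractions:

```python
import re

# Seconds per unit, matching the valid Go duration suffixes.
_UNIT_SECONDS = {"ns": 1e-9, "us": 1e-6, "ms": 1e-3, "s": 1, "m": 60, "h": 3600}

def parse_duration_seconds(s: str) -> float:
    # Each component is an integer followed by a unit suffix, e.g. "2h", "45m".
    parts = re.findall(r"(\d+)(ns|us|ms|s|m|h)", s)
    # Reject inputs with leftover characters the pattern didn't consume.
    if not parts or "".join(n + u for n, u in parts) != s:
        raise ValueError(f"invalid duration: {s!r}")
    return sum(int(n) * _UNIT_SECONDS[u] for n, u in parts)
```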
+### Schedule using a period expression
+The following period expressions are supported. The "@every" expression also accepts a [Go duration string](https://pkg.go.dev/time#ParseDuration).
+
+| Entry | Description | Equivalent Cron expression |
+| -- | -- | -- |
+| @every <duration> | Run every <duration> (e.g. "@every 1h30m") | N/A |
+| @yearly (or @annually) | Run once a year, midnight, January 1st | 0 0 0 1 1 * |
+| @monthly | Run once a month, midnight, first of month | 0 0 0 1 * * |
+| @weekly | Run once a week, midnight on Sunday | 0 0 0 * * 0 |
+| @daily or @midnight | Run once a day at midnight | 0 0 0 * * * |
+| @hourly | Run once an hour at the beginning of the hour | 0 0 * * * * |
+
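The shortcut-to-cron mapping in the table can be captured directly. The helper name is ours, and `@every` is excluded because it is duration-based rather than cron-based:

```python
# Period shortcuts and their cron equivalents, per the table above.
PERIOD_TO_CRON = {
    "@yearly":   "0 0 0 1 1 *",
    "@annually": "0 0 0 1 1 *",
    "@monthly":  "0 0 0 1 * *",
    "@weekly":   "0 0 0 * * 0",
    "@daily":    "0 0 0 * * *",
    "@midnight": "0 0 0 * * *",
    "@hourly":   "0 0 * * * *",
}

def to_cron(schedule: str) -> str:
    if schedule.startswith("@every"):
        raise ValueError("@every has no cron equivalent; it takes a duration")
    return PERIOD_TO_CRON[schedule]
```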
+### Schedule using a specific date/time
+A job can also be scheduled to run at a particular point in time by providing a date using the
+[RFC3339 specification](https://www.rfc-editor.org/rfc/rfc3339).
+
+#### Example 1
+`"2025-12-09T16:09:53+00:00"` indicates that the job should run on December 9, 2025 at 4:09:53 PM UTC.
+
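As a quick check, such a timestamp parses directly with Python's standard library, preserving the offset in line with the time-zone rule described earlier (the provided zone wins; otherwise the server's zone is used):

```python
from datetime import datetime

# The RFC 3339 timestamp from the example above, including its UTC offset.
due_time = datetime.fromisoformat("2025-12-09T16:09:53+00:00")
```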
+## Scheduled triggers
+When a scheduled Dapr job is triggered, the runtime sends a message back to the service that scheduled the job using
+either the HTTP or gRPC approach, depending on which is registered with Dapr when the service starts.
+
+### gRPC
+When a job reaches its scheduled trigger time, the triggered job is sent back to the application via the following
+callback function:
+
+> **Note:** The following example is in Go, but applies to any programming language with gRPC support.
+
+```go
+import rtv1 "github.com/dapr/dapr/pkg/proto/runtime/v1"
+...
+func (s *JobService) OnJobEventAlpha1(ctx context.Context, in *rtv1.JobEventRequest) (*rtv1.JobEventResponse, error) {
+ // Handle the triggered job
+}
+```
+
+This function processes the triggered jobs within the context of your gRPC server. When you set up the server, ensure that
+you register the callback server, which invokes this function when a job is triggered:
+
+```go
+...
+js := &JobService{}
+rtv1.RegisterAppCallbackAlphaServer(server, js)
+```
+
+In this setup, you have full control over how triggered jobs are received and processed, as they are routed directly
+through this gRPC method.
+
+### HTTP
+If a gRPC server isn't registered with Dapr when the application starts up, Dapr instead triggers jobs by making a
+POST request to the endpoint `/job/<job-name>`. The body includes the following information about the job:
+- `Schedule`: When the job triggers occur
+- `RepeatCount`: An optional value indicating how often the job should repeat
+- `DueTime`: An optional point in time representing either the one time when the job should execute (if not recurring)
+or the not-before time from which the schedule should take effect
+- `Ttl`: An optional value indicating when the job should expire
+- `Payload`: A collection of bytes containing data originally stored when the job was scheduled
+
+The `DueTime` and `Ttl` fields reflect an RFC3339 timestamp value in the time zone provided when the job was
+originally scheduled. If no time zone was provided, these values reflect the time zone used by the server running
+Dapr.
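A hedged sketch of unpacking such a trigger body: the top-level field names follow the list above, while the helper name and sample payload are assumptions for illustration.

```python
import json
from datetime import datetime

def parse_job_trigger(body: bytes) -> dict:
    """Unpack the fields of a job trigger POST body (illustrative only)."""
    job = json.loads(body)
    trigger = {
        "schedule": job.get("Schedule"),
        "repeat_count": job.get("RepeatCount"),
        "payload": job.get("Payload"),  # opaque bytes stored at scheduling time
    }
    # DueTime / Ttl, when present, are RFC 3339 timestamps in the time zone
    # used at scheduling time (or the Dapr server's zone if none was given).
    for field in ("DueTime", "Ttl"):
        value = job.get(field)
        trigger[field.lower()] = datetime.fromisoformat(value) if value else None
    return trigger

# Hypothetical trigger body for a job like "prod-db-backup".
sample = json.dumps({
    "Schedule": "@every 1m",
    "RepeatCount": 10,
    "DueTime": "2025-12-09T16:09:53+00:00",
    "Payload": "eyJ0YXNrIjoiZGItYmFja3VwIn0=",
}).encode()
trigger = parse_job_trigger(sample)
```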
\ No newline at end of file
diff --git a/daprdocs/content/en/developing-applications/building-blocks/jobs/jobs-overview.md b/daprdocs/content/en/developing-applications/building-blocks/jobs/jobs-overview.md
index 63f90c102f6..688b30b0420 100644
--- a/daprdocs/content/en/developing-applications/building-blocks/jobs/jobs-overview.md
+++ b/daprdocs/content/en/developing-applications/building-blocks/jobs/jobs-overview.md
@@ -8,7 +8,7 @@ description: "Overview of the jobs API building block"
Many applications require job scheduling, or the need to take an action in the future. The jobs API is an orchestrator for scheduling these future jobs, either at a specific time or for a specific interval.
-Not only does the jobs API help you with scheduling jobs, but internally, Dapr uses the scheduler service to schedule actor reminders.
+Not only does the jobs API help you with scheduling jobs, but internally, Dapr uses the Scheduler service to schedule actor reminders.
Jobs in Dapr consist of:
- [The jobs API building block]({{< ref jobs_api.md >}})
@@ -57,7 +57,9 @@ The jobs API provides several features to make it easy for you to schedule jobs.
### Schedule jobs across multiple replicas
-The Scheduler service enables the scheduling of jobs to scale across multiple replicas, while guaranteeing that a job is only triggered by 1 scheduler service instance.
+When you create a job, it replaces any existing job with the same name. This means that every time a job is created, it resets the count and only keeps 1 record in the embedded etcd for that job. Therefore, you don't need to worry about multiple jobs being created and firing off — only the most recent job is recorded and executed, even if all your apps schedule the same job on startup.
+
+The Scheduler service enables the scheduling of jobs to scale across multiple replicas, while guaranteeing that a job is only triggered by 1 Scheduler service instance.
## Try out the jobs API
diff --git a/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-cloudevents.md b/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-cloudevents.md
index ca14d145eae..72d8e8a256f 100644
--- a/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-cloudevents.md
+++ b/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-cloudevents.md
@@ -108,6 +108,26 @@ with DaprClient() as client:
topic_name='orders',
publish_metadata={'cloudevent.id': 'd99b228f-6c73-4e78-8c4d-3f80a043d317', 'cloudevent.source': 'payment'}
)
+
+ # or
+
+ cloud_event = {
+ 'specversion': '1.0',
+ 'type': 'com.example.event',
+ 'source': 'payment',
+ 'id': 'd99b228f-6c73-4e78-8c4d-3f80a043d317',
+ 'data': {'orderId': i},
+ 'datacontenttype': 'application/json',
+ ...
+ }
+
+ # Set the data content type to 'application/cloudevents+json'
+ result = client.publish_event(
+ pubsub_name='order_pub_sub',
+ topic_name='orders',
+ data=json.dumps(cloud_event),
+ data_content_type='application/cloudevents+json',
+ )
```
{{% /codetab %}}
diff --git a/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-deadletter.md b/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-deadletter.md
index 8085c1c47af..7f00d0c410a 100644
--- a/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-deadletter.md
+++ b/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-deadletter.md
@@ -70,7 +70,7 @@ app.get('/dapr/subscribe', (_req, res) => {
## Retries and dead letter topics
By default, when a dead letter topic is set, any failing message immediately goes to the dead letter topic. As a result, it is recommended to always have a retry policy set when using dead letter topics in a subscription.
-To enable the retry of a message before sending it to the dead letter topic, apply a [retry resiliency policy]({{< ref "policies.md#retries" >}}) to the pub/sub component.
+To enable the retry of a message before sending it to the dead letter topic, apply a [retry resiliency policy]({{< ref "retries-overview.md" >}}) to the pub/sub component.
This example shows how to set a constant retry policy named `pubsubRetry`, with 10 maximum delivery attempts applied every 5 seconds for the `pubsub` pub/sub component.
diff --git a/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-raw.md b/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-raw.md
index 6e518fa963a..5b4fe2c1b2b 100644
--- a/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-raw.md
+++ b/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-raw.md
@@ -20,7 +20,7 @@ Not using CloudEvents disables support for tracing, event deduplication per mess
To disable CloudEvent wrapping, set the `rawPayload` metadata to `true` as part of the publishing request. This allows subscribers to receive these messages without having to parse the CloudEvent schema.
-{{< tabs curl "Python SDK" "PHP SDK">}}
+{{< tabs curl ".NET" "Python" "PHP">}}
{{% codetab %}}
```bash
@@ -28,6 +28,43 @@ curl -X "POST" http://localhost:3500/v1.0/publish/pubsub/TOPIC_A?metadata.rawPay
```
{{% /codetab %}}
+{{% codetab %}}
+
+```csharp
+using Dapr.Client;
+
+var builder = WebApplication.CreateBuilder(args);
+builder.Services.AddControllers().AddDapr();
+
+var app = builder.Build();
+
+app.MapPost("/publish", async (DaprClient daprClient) =>
+{
+ var message = new Message(
+ Guid.NewGuid().ToString(),
+ $"Hello at {DateTime.UtcNow}",
+ DateTime.UtcNow
+ );
+
+ await daprClient.PublishEventAsync(
+ "pubsub", // pubsub name
+ "messages", // topic name
+ message, // message data
+ new Dictionary<string, string>
+ {
+ { "rawPayload", "true" },
+ { "content-type", "application/json" }
+ }
+ );
+
+ return Results.Ok(message);
+});
+
+app.Run();
+
+//Example message type assumed for this sample; adjust to your own payload shape
+internal sealed record Message(string Id, string Text, DateTime Timestamp);
+```
+
+{{% /codetab %}}
+
{{% codetab %}}
```python
from dapr.clients import DaprClient
@@ -74,9 +111,52 @@ Dapr apps are also able to subscribe to raw events coming from existing pub/sub
### Programmatically subscribe to raw events
-When subscribing programmatically, add the additional metadata entry for `rawPayload` so the Dapr sidecar automatically wraps the payloads into a CloudEvent that is compatible with current Dapr SDKs.
+When subscribing programmatically, add the additional metadata entry for `rawPayload` to allow the subscriber to receive a message that is not wrapped by a CloudEvent. For .NET, this metadata entry is called `isRawPayload`.
+
+{{< tabs ".NET" "Python" "PHP" >}}
+
+{{% codetab %}}
+
+```csharp
+using System.Text.Json;
+using System.Text.Json.Serialization;
+
+var builder = WebApplication.CreateBuilder(args);
+var app = builder.Build();
+
+app.MapGet("/dapr/subscribe", () =>
+{
+ var subscriptions = new[]
+ {
+ new
+ {
+ pubsubname = "pubsub",
+ topic = "messages",
+ route = "/messages",
+ metadata = new Dictionary<string, string>
+ {
+ { "isRawPayload", "true" },
+ { "content-type", "application/json" }
+ }
+ }
+ };
+ return Results.Ok(subscriptions);
+});
+
+app.MapPost("/messages", async (HttpContext context) =>
+{
+ using var reader = new StreamReader(context.Request.Body);
+ var json = await reader.ReadToEndAsync();
+
+ Console.WriteLine($"Raw message received: {json}");
-{{< tabs "Python" "PHP SDK" >}}
+ return Results.Ok();
+});
+
+app.Run();
+```
+
+{{% /codetab %}}
{{% codetab %}}
@@ -151,7 +231,7 @@ spec:
default: /dsstatus
pubsubname: pubsub
metadata:
- rawPayload: "true"
+ isRawPayload: "true"
scopes:
- app1
- app2
@@ -161,4 +241,5 @@ scopes:
- Learn more about [publishing and subscribing messages]({{< ref pubsub-overview.md >}})
- List of [pub/sub components]({{< ref supported-pubsub >}})
-- Read the [API reference]({{< ref pubsub_api.md >}})
\ No newline at end of file
+- Read the [API reference]({{< ref pubsub_api.md >}})
+- Read the .NET sample on how to [consume Kafka messages without CloudEvents](https://github.com/dapr/samples/pubsub-raw-payload)
\ No newline at end of file
diff --git a/daprdocs/content/en/developing-applications/building-blocks/pubsub/subscription-methods.md b/daprdocs/content/en/developing-applications/building-blocks/pubsub/subscription-methods.md
index 62ed2811ebe..5c31057ee32 100644
--- a/daprdocs/content/en/developing-applications/building-blocks/pubsub/subscription-methods.md
+++ b/daprdocs/content/en/developing-applications/building-blocks/pubsub/subscription-methods.md
@@ -203,7 +203,112 @@ As messages are sent to the given message handler code, there is no concept of r
The example below shows the different ways to stream subscribe to a topic.
-{{< tabs Go>}}
+{{< tabs Python Go >}}
+
+{{% codetab %}}
+
+You can use the `subscribe` method, which returns a `Subscription` object and allows you to pull messages from the stream by calling the `next_message` method. This runs in the main thread and may block it while waiting for messages.
+
+```python
+import time
+
+from dapr.clients import DaprClient
+from dapr.clients.grpc.subscription import StreamInactiveError
+
+counter = 0
+
+
+def process_message(message):
+ global counter
+ counter += 1
+ # Process the message here
+ print(f'Processing message: {message.data()} from {message.topic()}...')
+ return 'success'
+
+
+def main():
+ with DaprClient() as client:
+ global counter
+
+ subscription = client.subscribe(
+ pubsub_name='pubsub', topic='orders', dead_letter_topic='orders_dead'
+ )
+
+ try:
+ while counter < 5:
+ try:
+ message = subscription.next_message()
+
+ except StreamInactiveError as e:
+ print('Stream is inactive. Retrying...')
+ time.sleep(1)
+ continue
+ if message is None:
+ print('No message received within timeout period.')
+ continue
+
+ # Process the message
+ response_status = process_message(message)
+
+ if response_status == 'success':
+ subscription.respond_success(message)
+ elif response_status == 'retry':
+ subscription.respond_retry(message)
+ elif response_status == 'drop':
+ subscription.respond_drop(message)
+
+ finally:
+ print("Closing subscription...")
+ subscription.close()
+
+
+if __name__ == '__main__':
+ main()
+
+```
+
+You can also use the `subscribe_with_handler` method, which accepts a callback function executed for each message received from the stream. This runs in a separate thread, so it doesn't block the main thread.
+
+```python
+import time
+
+from dapr.clients import DaprClient
+from dapr.clients.grpc._response import TopicEventResponse
+
+counter = 0
+
+
+def process_message(message):
+ # Process the message here
+ global counter
+ counter += 1
+ print(f'Processing message: {message.data()} from {message.topic()}...')
+ return TopicEventResponse('success')
+
+
+def main():
+ with DaprClient() as client:
+ # This will start a new thread that will listen for messages
+ # and process them in the `process_message` function
+ close_fn = client.subscribe_with_handler(
+ pubsub_name='pubsub', topic='orders', handler_fn=process_message,
+ dead_letter_topic='orders_dead'
+ )
+
+ while counter < 5:
+ time.sleep(1)
+
+ print("Closing subscription...")
+ close_fn()
+
+
+if __name__ == '__main__':
+ main()
+```
+
+[Learn more about streaming subscriptions using the Python SDK client.]({{< ref "python-client.md#streaming-message-subscription" >}})
+
+{{% /codetab %}}
{{% codetab %}}
diff --git a/daprdocs/content/en/developing-applications/building-blocks/service-invocation/howto-invoke-services-grpc.md b/daprdocs/content/en/developing-applications/building-blocks/service-invocation/howto-invoke-services-grpc.md
index 71679ff519a..adf36ab1fc6 100644
--- a/daprdocs/content/en/developing-applications/building-blocks/service-invocation/howto-invoke-services-grpc.md
+++ b/daprdocs/content/en/developing-applications/building-blocks/service-invocation/howto-invoke-services-grpc.md
@@ -309,6 +309,8 @@ context.AddMetadata("dapr-stream", "true");
### Streaming gRPCs and Resiliency
+> Currently, resiliency policies are not supported for service invocation via gRPC.
+
When proxying streaming gRPCs, due to their long-lived nature, [resiliency]({{< ref "resiliency-overview.md" >}}) policies are applied on the "initial handshake" only. As a consequence:
- If the stream is interrupted after the initial handshake, it will not be automatically re-established by Dapr. Your application will be notified that the stream has ended, and will need to recreate it.
diff --git a/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-architecture.md b/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-architecture.md
index 186ea32643f..e19d7331b88 100644
--- a/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-architecture.md
+++ b/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-architecture.md
@@ -6,7 +6,9 @@ weight: 4000
description: "The Dapr Workflow engine architecture"
---
-[Dapr Workflows]({{< ref "workflow-overview.md" >}}) allow developers to define workflows using ordinary code in a variety of programming languages. The workflow engine runs inside of the Dapr sidecar and orchestrates workflow code deployed as part of your application. This article describes:
+[Dapr Workflows]({{< ref "workflow-overview.md" >}}) allow developers to define workflows using ordinary code in a variety of programming languages. The workflow engine runs inside of the Dapr sidecar and orchestrates workflow code deployed as part of your application. Dapr Workflows are built on top of Dapr Actors, which provide durability and scalability for workflow execution.
+
+This article describes:
- The architecture of the Dapr Workflow engine
- How the workflow engine interacts with application code
@@ -72,7 +74,7 @@ The internal workflow actor types are only registered after an app has registere
### Workflow actors
-Workflow actors are responsible for managing the state and placement of all workflows running in the app. A new instance of the workflow actor is activated for every workflow instance that gets created. The ID of the workflow actor is the ID of the workflow. This internal actor stores the state of the workflow as it progresses and determines the node on which the workflow code executes via the actor placement service.
+There are 2 different types of actors used with workflows: workflow actors and activity actors. Workflow actors are responsible for managing the state and placement of all workflows running in the app. A new instance of the workflow actor is activated for every workflow instance that gets created. The ID of the workflow actor is the ID of the workflow. This internal actor stores the state of the workflow as it progresses and determines the node on which the workflow code executes via the actor placement service.
Each workflow actor saves its state using the following keys in the configured state store:
@@ -84,7 +86,7 @@ Each workflow actor saves its state using the following keys in the configured s
| `metadata` | Contains meta information about the workflow as a JSON blob and includes details such as the length of the inbox, the length of the history, and a 64-bit integer representing the workflow generation (for cases where the instance ID gets reused). The length information is used to determine which keys need to be read or written to when loading or saving workflow state updates. |
{{% alert title="Warning" color="warning" %}}
-In the [Alpha release of the Dapr Workflow engine]({{< ref support-preview-features.md >}}), workflow actor state will remain in the state store even after a workflow has completed. Creating a large number of workflows could result in unbounded storage usage. In a future release, data retention policies will be introduced that can automatically purge the state store of old workflow state.
+Workflow actor state remains in the state store even after a workflow has completed. Creating a large number of workflows could result in unbounded storage usage. To address this either purge workflows using their ID or directly delete entries in the workflow DB store.
{{% /alert %}}
The following diagram illustrates the typical lifecycle of a workflow actor.
@@ -122,7 +124,7 @@ Activity actors are short-lived:
### Reminder usage and execution guarantees
-The Dapr Workflow ensures workflow fault-tolerance by using [actor reminders]({{< ref "howto-actors.md#actor-timers-and-reminders" >}}) to recover from transient system failures. Prior to invoking application workflow code, the workflow or activity actor will create a new reminder. If the application code executes without interruption, the reminder is deleted. However, if the node or the sidecar hosting the associated workflow or activity crashes, the reminder will reactivate the corresponding actor and the execution will be retried.
+The Dapr Workflow ensures workflow fault-tolerance by using [actor reminders]({{< ref "../actors/actors-timers-reminders.md#actor-reminders" >}}) to recover from transient system failures. Prior to invoking application workflow code, the workflow or activity actor will create a new reminder. If the application code executes without interruption, the reminder is deleted. However, if the node or the sidecar hosting the associated workflow or activity crashes, the reminder will reactivate the corresponding actor and the execution will be retried.
diff --git a/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-features-concepts.md b/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-features-concepts.md
index 7ee3b500dae..985c1cc11de 100644
--- a/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-features-concepts.md
+++ b/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-features-concepts.md
@@ -195,7 +195,7 @@ string randomString = GetRandomString();
// DON'T DO THIS!
Instant currentTime = Instant.now();
UUID newIdentifier = UUID.randomUUID();
-string randomString = GetRandomString();
+String randomString = getRandomString();
```
{{% /codetab %}}
@@ -242,7 +242,7 @@ string randomString = await context.CallActivityAsync("GetRandomString")
```java
// Do this!!
Instant currentTime = context.getCurrentInstant();
-Guid newIdentifier = context.NewGuid();
+UUID newIdentifier = context.newUuid();
String randomString = context.callActivity(GetRandomString.class.getName(), String.class).await();
```
@@ -338,7 +338,7 @@ Do this:
```csharp
// Do this!!
-string configuation = workflowInput.Configuration; // imaginary workflow input argument
+string configuration = workflowInput.Configuration; // imaginary workflow input argument
string data = await context.CallActivityAsync("MakeHttpCall", "https://example.com/api/data");
```
@@ -348,7 +348,7 @@ string data = await context.CallActivityAsync("MakeHttpCall", "https://e
```java
// Do this!!
-String configuation = ctx.getInput(InputType.class).getConfiguration(); // imaginary workflow input argument
+String configuration = ctx.getInput(InputType.class).getConfiguration(); // imaginary workflow input argument
String data = ctx.callActivity(MakeHttpCall.class, "https://example.com/api/data", String.class).await();
```
@@ -358,7 +358,7 @@ String data = ctx.callActivity(MakeHttpCall.class, "https://example.com/api/data
```javascript
// Do this!!
-const configuation = workflowInput.getConfiguration(); // imaginary workflow input argument
+const configuration = workflowInput.getConfiguration(); // imaginary workflow input argument
const data = yield ctx.callActivity(makeHttpCall, "https://example.com/api/data");
```
diff --git a/daprdocs/content/en/developing-applications/debugging/bridge-to-kubernetes.md b/daprdocs/content/en/developing-applications/debugging/bridge-to-kubernetes.md
deleted file mode 100644
index fc77d65c947..00000000000
--- a/daprdocs/content/en/developing-applications/debugging/bridge-to-kubernetes.md
+++ /dev/null
@@ -1,28 +0,0 @@
----
-type: docs
-title: "Bridge to Kubernetes support for Dapr services"
-linkTitle: "Bridge to Kubernetes"
-weight: 300
-description: "Debug Dapr apps locally which still connected to your Kubernetes cluster"
----
-
-Bridge to Kubernetes allows you to run and debug code on your development computer, while still connected to your Kubernetes cluster with the rest of your application or services. This type of debugging is often called *local tunnel debugging*.
-
-{{< button text="Learn more about Bridge to Kubernetes" link="https://aka.ms/bridge-vscode-dapr" >}}
-
-## Debug Dapr apps
-
-Bridge to Kubernetes supports debugging Dapr apps on your machine, while still having them interact with the services and applications running on your Kubernetes cluster. This example showcases Bridge to Kubernetes enabling a developer to debug the [distributed calculator quickstart](https://github.com/dapr/quickstarts/tree/master/tutorials/distributed-calculator):
-
-
-
-
-
-{{% alert title="Isolation mode" color="warning" %}}
-[Isolation mode](https://aka.ms/bridge-isolation-vscode-dapr) is currently not supported with Dapr apps. Make sure to launch Bridge to Kubernetes mode without isolation.
-{{% /alert %}}
-
-## Further reading
-
-- [Bridge to Kubernetes documentation](https://code.visualstudio.com/docs/containers/bridge-to-kubernetes)
-- [VSCode integration]({{< ref vscode >}})
\ No newline at end of file
diff --git a/daprdocs/content/en/developing-applications/develop-components/_index.md b/daprdocs/content/en/developing-applications/develop-components/_index.md
index cb9f7e8a851..981506e6d3f 100644
--- a/daprdocs/content/en/developing-applications/develop-components/_index.md
+++ b/daprdocs/content/en/developing-applications/develop-components/_index.md
@@ -2,6 +2,6 @@
type: docs
title: "Components"
linkTitle: "Components"
-weight: 30
+weight: 70
description: "Learn more about developing Dapr's pluggable and middleware components"
---
diff --git a/daprdocs/content/en/developing-applications/error-codes/_index.md b/daprdocs/content/en/developing-applications/error-codes/_index.md
new file mode 100644
index 00000000000..0ea1f5846b4
--- /dev/null
+++ b/daprdocs/content/en/developing-applications/error-codes/_index.md
@@ -0,0 +1,8 @@
+---
+type: docs
+title: "Error codes"
+linkTitle: "Error codes"
+weight: 30
+description: "Error codes and messages you may encounter while using Dapr"
+---
+
diff --git a/daprdocs/content/en/developing-applications/error-codes/error-codes-reference.md b/daprdocs/content/en/developing-applications/error-codes/error-codes-reference.md
new file mode 100644
index 00000000000..494a123ef50
--- /dev/null
+++ b/daprdocs/content/en/developing-applications/error-codes/error-codes-reference.md
@@ -0,0 +1,206 @@
+---
+type: docs
+title: "Error codes reference guide"
+linkTitle: "Reference"
+description: "List of gRPC and HTTP error codes in Dapr and their descriptions"
+weight: 20
+---
+
+The following tables list the error codes returned by the Dapr runtime.
+The error codes are returned in the response body of an HTTP request or in the `ErrorInfo` section of a gRPC status response, if one is present.
+An effort is underway to enrich all gRPC error responses according to the [Richer Error Model]({{< ref "grpc-error-codes.md#richer-grpc-error-model" >}}). Error codes without a corresponding gRPC code indicate those errors have not yet been updated to this model.
+
+### Actors API
+
+| HTTP Code | gRPC Code | Description |
+| ---------------------------------- | --------- | ----------------------------------------------------------------------- |
+| `ERR_ACTOR_INSTANCE_MISSING` | | Missing actor instance |
+| `ERR_ACTOR_INVOKE_METHOD` | | Error invoking actor method |
+| `ERR_ACTOR_RUNTIME_NOT_FOUND` | | Actor runtime not found |
+| `ERR_ACTOR_STATE_GET` | | Error getting actor state |
+| `ERR_ACTOR_STATE_TRANSACTION_SAVE` | | Error saving actor transaction |
+| `ERR_ACTOR_REMINDER_CREATE` | | Error creating actor reminder |
+| `ERR_ACTOR_REMINDER_DELETE` | | Error deleting actor reminder |
+| `ERR_ACTOR_REMINDER_GET` | | Error getting actor reminder |
+| `ERR_ACTOR_REMINDER_NON_HOSTED` | | Reminder operation on non-hosted actor type |
+| `ERR_ACTOR_TIMER_CREATE` | | Error creating actor timer |
+| `ERR_ACTOR_NO_APP_CHANNEL` | | App channel not initialized |
+| `ERR_ACTOR_STACK_DEPTH` | | Maximum actor call stack depth exceeded |
+| `ERR_ACTOR_NO_PLACEMENT` | | Placement service not configured |
+| `ERR_ACTOR_RUNTIME_CLOSED` | | Actor runtime is closed |
+| `ERR_ACTOR_NAMESPACE_REQUIRED` | | Actors must have a namespace configured when running in Kubernetes mode |
+| `ERR_ACTOR_NO_ADDRESS` | | No address found for actor |
+
+
+### Workflows API
+
+| HTTP Code | gRPC Code | Description |
+| ---------------------------------- | --------- | --------------------------------------------------------------------------------------- |
+| `ERR_GET_WORKFLOW` | | Error getting workflow |
+| `ERR_START_WORKFLOW` | | Error starting workflow |
+| `ERR_PAUSE_WORKFLOW` | | Error pausing workflow |
+| `ERR_RESUME_WORKFLOW` | | Error resuming workflow |
+| `ERR_TERMINATE_WORKFLOW` | | Error terminating workflow |
+| `ERR_PURGE_WORKFLOW` | | Error purging workflow |
+| `ERR_RAISE_EVENT_WORKFLOW` | | Error raising event in workflow |
+| `ERR_WORKFLOW_COMPONENT_MISSING` | | Missing workflow component |
+| `ERR_WORKFLOW_COMPONENT_NOT_FOUND` | | Workflow component not found |
+| `ERR_WORKFLOW_EVENT_NAME_MISSING` | | Missing workflow event name |
+| `ERR_WORKFLOW_NAME_MISSING` | | Workflow name not configured |
+| `ERR_INSTANCE_ID_INVALID` | | Invalid workflow instance ID. (Only alphanumeric and underscore characters are allowed) |
+| `ERR_INSTANCE_ID_NOT_FOUND` | | Workflow instance ID not found |
+| `ERR_INSTANCE_ID_PROVIDED_MISSING` | | Missing workflow instance ID |
+| `ERR_INSTANCE_ID_TOO_LONG` | | Workflow instance ID too long |
+
+
+### State management API
+
+| HTTP Code | gRPC Code | Description |
+| --------------------------------------- | --------------------------------------- | ----------------------------------------- |
+| `ERR_STATE_TRANSACTION` | | Error in state transaction |
+| `ERR_STATE_SAVE` | | Error saving state |
+| `ERR_STATE_GET` | | Error getting state |
+| `ERR_STATE_DELETE` | | Error deleting state |
+| `ERR_STATE_BULK_DELETE` | | Error deleting state in bulk |
+| `ERR_STATE_BULK_GET` | | Error getting state in bulk |
+| `ERR_NOT_SUPPORTED_STATE_OPERATION` | | Operation not supported in transaction |
+| `ERR_STATE_QUERY` | `DAPR_STATE_QUERY_FAILED` | Error querying state |
+| `ERR_STATE_STORE_NOT_FOUND` | `DAPR_STATE_NOT_FOUND` | State store not found |
+| `ERR_STATE_STORE_NOT_CONFIGURED` | `DAPR_STATE_NOT_CONFIGURED` | State store not configured |
+| `ERR_STATE_STORE_NOT_SUPPORTED` | `DAPR_STATE_TRANSACTIONS_NOT_SUPPORTED` | State store does not support transactions |
+| `ERR_STATE_STORE_NOT_SUPPORTED` | `DAPR_STATE_QUERYING_NOT_SUPPORTED` | State store does not support querying |
+| `ERR_STATE_STORE_TOO_MANY_TRANSACTIONS` | `DAPR_STATE_TOO_MANY_TRANSACTIONS` | Too many operations per transaction |
+| `ERR_MALFORMED_REQUEST` | `DAPR_STATE_ILLEGAL_KEY` | Invalid key |
+
+
+### Configuration API
+
+| HTTP Code | gRPC Code | Description |
+| ---------------------------------------- | --------- | -------------------------------------- |
+| `ERR_CONFIGURATION_GET` | | Error getting configuration |
+| `ERR_CONFIGURATION_STORE_NOT_CONFIGURED` | | Configuration store not configured |
+| `ERR_CONFIGURATION_STORE_NOT_FOUND` | | Configuration store not found |
+| `ERR_CONFIGURATION_SUBSCRIBE` | | Error subscribing to configuration |
+| `ERR_CONFIGURATION_UNSUBSCRIBE` | | Error unsubscribing from configuration |
+
+
+### Crypto API
+
+| HTTP Code | gRPC Code | Description |
+| ------------------------------------- | --------- | ------------------------------- |
+| `ERR_CRYPTO` | | Error in crypto operation |
+| `ERR_CRYPTO_KEY` | | Error retrieving crypto key |
+| `ERR_CRYPTO_PROVIDER_NOT_FOUND` | | Crypto provider not found |
+| `ERR_CRYPTO_PROVIDERS_NOT_CONFIGURED` | | Crypto providers not configured |
+
+
+### Secrets API
+
+| HTTP Code | gRPC Code | Description |
+| ---------------------------------- | --------- | --------------------------- |
+| `ERR_SECRET_GET` | | Error getting secret |
+| `ERR_SECRET_STORE_NOT_FOUND` | | Secret store not found |
+| `ERR_SECRET_STORES_NOT_CONFIGURED` | | Secret store not configured |
+| `ERR_PERMISSION_DENIED` | | Permission denied by policy |
+
+
+### Pub/Sub and messaging errors
+
+| HTTP Code | gRPC Code | Description |
+| ----------------------------- | -------------------------------------- | -------------------------------------- |
+| `ERR_PUBSUB_EMPTY` | `DAPR_PUBSUB_NAME_EMPTY` | Pubsub name is empty |
+| `ERR_PUBSUB_NOT_FOUND` | `DAPR_PUBSUB_NOT_FOUND` | Pubsub not found |
+| `ERR_PUBSUB_NOT_FOUND` | `DAPR_PUBSUB_TEST_NOT_FOUND` | Pubsub not found |
+| `ERR_PUBSUB_NOT_CONFIGURED` | `DAPR_PUBSUB_NOT_CONFIGURED` | Pubsub not configured |
+| `ERR_TOPIC_NAME_EMPTY` | `DAPR_PUBSUB_TOPIC_NAME_EMPTY` | Topic name is empty |
+| `ERR_PUBSUB_FORBIDDEN` | `DAPR_PUBSUB_FORBIDDEN` | Access to topic forbidden for APP ID |
+| `ERR_PUBSUB_PUBLISH_MESSAGE` | `DAPR_PUBSUB_PUBLISH_MESSAGE` | Error publishing message |
+| `ERR_PUBSUB_REQUEST_METADATA` | `DAPR_PUBSUB_METADATA_DESERIALIZATION` | Error deserializing metadata |
+| `ERR_PUBSUB_CLOUD_EVENTS_SER` | `DAPR_PUBSUB_CLOUD_EVENT_CREATION` | Error creating CloudEvent |
+| `ERR_PUBSUB_EVENTS_SER` | `DAPR_PUBSUB_MARSHAL_ENVELOPE` | Error marshalling Cloud Event envelope |
+| `ERR_PUBSUB_EVENTS_SER` | `DAPR_PUBSUB_MARSHAL_EVENTS` | Error marshalling events to bytes |
+| `ERR_PUBSUB_EVENTS_SER` | `DAPR_PUBSUB_UNMARSHAL_EVENTS` | Error unmarshalling events |
+| `ERR_PUBLISH_OUTBOX` | | Error publishing message to outbox |
+
+
+### Conversation API
+
+| HTTP Code | gRPC Code | Description |
+| --------------------------------- | --------- | --------------------------------------------- |
+| `ERR_CONVERSATION_INVALID_PARMS` | | Invalid parameters for conversation component |
+| `ERR_CONVERSATION_INVOKE` | | Error invoking conversation |
+| `ERR_CONVERSATION_MISSING_INPUTS` | | Missing inputs for conversation |
+| `ERR_CONVERSATION_NOT_FOUND` | | Conversation not found |
+
+
+### Service Invocation / Direct Messaging API
+
+| HTTP Code | gRPC Code | Description |
+| ------------------- | --------- | ---------------------- |
+| `ERR_DIRECT_INVOKE` | | Error invoking service |
+
+
+### Bindings API
+
+| HTTP Code | gRPC Code | Description |
+| --------------------------- | --------- | ----------------------------- |
+| `ERR_INVOKE_OUTPUT_BINDING` | | Error invoking output binding |
+
+
+### Distributed Lock API
+
+| HTTP Code | gRPC Code | Description |
+| ------------------------------- | --------- | ------------------------- |
+| `ERR_LOCK_STORE_NOT_CONFIGURED` | | Lock store not configured |
+| `ERR_LOCK_STORE_NOT_FOUND` | | Lock store not found |
+| `ERR_TRY_LOCK` | | Error acquiring lock |
+| `ERR_UNLOCK` | | Error releasing lock |
+
+
+### Healthz
+
+| HTTP Code | gRPC Code | Description |
+| ------------------------------- | --------- | --------------------------- |
+| `ERR_HEALTH_NOT_READY` | | Dapr not ready |
+| `ERR_HEALTH_APPID_NOT_MATCH` | | Dapr App ID does not match |
+| `ERR_OUTBOUND_HEALTH_NOT_READY` | | Dapr outbound not ready |
+
+
+### Common
+
+| HTTP Code | gRPC Code | Description |
+| ---------------------------- | --------- | -------------------------- |
+| `ERR_API_UNIMPLEMENTED` | | API not implemented |
+| `ERR_APP_CHANNEL_NIL` | | App channel is nil |
+| `ERR_BAD_REQUEST` | | Bad request |
+| `ERR_BODY_READ` | | Error reading request body |
+| `ERR_INTERNAL` | | Internal error |
+| `ERR_MALFORMED_REQUEST` | | Malformed request |
+| `ERR_MALFORMED_REQUEST_DATA` | | Malformed request data |
+| `ERR_MALFORMED_RESPONSE` | | Malformed response |
+
+
+### Scheduler/Jobs API
+
+| HTTP Code | gRPC Code | Description |
+| ------------------------------- | ------------------------------- | -------------------------------------- |
+| `DAPR_SCHEDULER_SCHEDULE_JOB` | `DAPR_SCHEDULER_SCHEDULE_JOB` | Error scheduling job |
+| `DAPR_SCHEDULER_JOB_NAME` | `DAPR_SCHEDULER_JOB_NAME` | Job name should only be set in the URL |
+| `DAPR_SCHEDULER_JOB_NAME_EMPTY` | `DAPR_SCHEDULER_JOB_NAME_EMPTY` | Job name is empty |
+| `DAPR_SCHEDULER_GET_JOB` | `DAPR_SCHEDULER_GET_JOB` | Error getting job |
+| `DAPR_SCHEDULER_LIST_JOBS` | `DAPR_SCHEDULER_LIST_JOBS` | Error listing jobs |
+| `DAPR_SCHEDULER_DELETE_JOB` | `DAPR_SCHEDULER_DELETE_JOB` | Error deleting job |
+| `DAPR_SCHEDULER_EMPTY` | `DAPR_SCHEDULER_EMPTY` | Required argument is empty |
+| `DAPR_SCHEDULER_SCHEDULE_EMPTY` | `DAPR_SCHEDULER_SCHEDULE_EMPTY` | No schedule provided for job |
+
+
+### Generic
+
+| HTTP Code | gRPC Code | Description |
+| --------- | --------- | ------------- |
+| `ERROR` | `ERROR` | Generic error |
+
+## Next steps
+
+- [Handling HTTP error codes]({{< ref http-error-codes.md >}})
+- [Handling gRPC error codes]({{< ref grpc-error-codes.md >}})
\ No newline at end of file
diff --git a/daprdocs/content/en/developing-applications/error-codes/errors-overview.md b/daprdocs/content/en/developing-applications/error-codes/errors-overview.md
new file mode 100644
index 00000000000..762413fb7f7
--- /dev/null
+++ b/daprdocs/content/en/developing-applications/error-codes/errors-overview.md
@@ -0,0 +1,62 @@
+---
+type: docs
+title: "Errors overview"
+linkTitle: "Overview"
+weight: 10
+description: "Overview of Dapr errors"
+---
+
+An error code is a numeric or alphanumeric code that indicates the nature of an error and, when possible, why it occurred.
+
+Dapr error codes are standardized strings for more than 80 common errors across HTTP and gRPC requests when using the Dapr APIs. These codes are both:
+- Returned in the JSON response body of the request.
+- When enabled, logged at debug level in the runtime.
+  - If you're running in Kubernetes, error codes are logged in the sidecar.
+  - If you're running in self-hosted mode, you can enable and view debug logs.
+
+## Error format
+
+Dapr error codes consist of a prefix, a category, and a shorthand for the error itself. For example:
+
+| Prefix | Category | Error shorthand |
+| ------ | -------- | --------------- |
+| ERR_ | PUBSUB_ | NOT_FOUND |
+
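Purely as an illustration of this structure (this helper is hypothetical, not part of any Dapr SDK), an error code string can be split back into its parts:

```python
# Hypothetical helper, for illustration only: splits a Dapr error code
# into its prefix, category, and error shorthand.
def split_error_code(code: str) -> tuple:
    prefix, category, *rest = code.split("_")
    return prefix, category, "_".join(rest)

print(split_error_code("ERR_PUBSUB_NOT_FOUND"))  # ('ERR', 'PUBSUB', 'NOT_FOUND')
```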
+Some of the most common errors returned include:
+
+- `ERR_ACTOR_TIMER_CREATE`
+- `ERR_PURGE_WORKFLOW`
+- `ERR_STATE_STORE_NOT_FOUND`
+- `ERR_HEALTH_NOT_READY`
+
+> **Note:** [See a full list of error codes in Dapr.]({{< ref error-codes-reference.md >}})
+
+An error returned for a state store not found might look like the following:
+
+```json
+{
+ "error": "Bad Request",
+ "error_msg": "{\"errorCode\":\"ERR_STATE_STORE_NOT_FOUND\",\"message\":\"state store is not found\",\"details\":[{\"@type\":\"type.googleapis.com/google.rpc.ErrorInfo\",\"domain\":\"dapr.io\",\"metadata\":{\"appID\":\"nodeapp\"},\"reason\":\"DAPR_STATE_NOT_FOUND\"}]}",
+ "status": 400
+}
+```
+
+The returned error includes:
+- The error code: `ERR_STATE_STORE_NOT_FOUND`
+- The error message describing the issue: `state store is not found`
+- The app ID in which the error occurred: `nodeapp`
+- The reason for the error: `DAPR_STATE_NOT_FOUND`
+
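A client can recover these fields by parsing the nested `error_msg` string a second time, since it is itself a JSON document. A minimal sketch in Python, assuming the response shape shown above:

```python
import json

# Response body as returned by Dapr (shape shown above)
response_body = {
    "error": "Bad Request",
    "error_msg": '{"errorCode":"ERR_STATE_STORE_NOT_FOUND","message":"state store is not found","details":[{"@type":"type.googleapis.com/google.rpc.ErrorInfo","domain":"dapr.io","metadata":{"appID":"nodeapp"},"reason":"DAPR_STATE_NOT_FOUND"}]}',
    "status": 400,
}

# error_msg is itself a JSON document, so it needs a second parse
err = json.loads(response_body["error_msg"])
error_code = err["errorCode"]                    # "ERR_STATE_STORE_NOT_FOUND"
reason = err["details"][0]["reason"]             # "DAPR_STATE_NOT_FOUND"
app_id = err["details"][0]["metadata"]["appID"]  # "nodeapp"
```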
+## Dapr error code metrics
+
+Metrics help you see exactly when errors occur within the runtime. Error code metrics are collected using the `error_code_total` metric. This metric is disabled by default. You can [enable it using the `recordErrorCodes` field in your configuration file]({{< ref "metrics-overview.md#configuring-metrics-for-error-codes" >}}).
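As a sketch, a Configuration resource enabling error code metrics might look like the following (verify the field names against the linked metrics documentation for your Dapr version):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  metrics:
    enabled: true
    recordErrorCodes: true
```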
+
+## Demo
+
+Watch a demo presented during [Diagrid's Dapr v1.15 celebration](https://www.diagrid.io/videos/dapr-1-15-deep-dive) to see how to enable error code metrics and handle error codes returned in the runtime.
+
+
+
+## Next step
+
+{{< button text="See a list of all Dapr error codes" page="error-codes-reference" >}}
\ No newline at end of file
diff --git a/daprdocs/content/en/reference/errors/_index.md b/daprdocs/content/en/developing-applications/error-codes/grpc-error-codes.md
similarity index 93%
rename from daprdocs/content/en/reference/errors/_index.md
rename to daprdocs/content/en/developing-applications/error-codes/grpc-error-codes.md
index 35f685f7491..1d343cce59d 100644
--- a/daprdocs/content/en/reference/errors/_index.md
+++ b/daprdocs/content/en/developing-applications/error-codes/grpc-error-codes.md
@@ -1,20 +1,18 @@
---
type: docs
-title: Dapr errors
-linkTitle: "Dapr errors"
-weight: 700
-description: "Information on Dapr errors and how to handle them"
+title: Handling gRPC error codes
+linkTitle: "gRPC"
+weight: 40
+description: "Information on Dapr gRPC errors and how to handle them"
---
-## Error handling: Understanding errors model and reporting
-
Initially, errors followed the [Standard gRPC error model](https://grpc.io/docs/guides/error/#standard-error-model). However, to provide more detailed and informative error messages, an enhanced error model has been defined which aligns with the gRPC [Richer error model](https://grpc.io/docs/guides/error/#richer-error-model).
{{% alert title="Note" color="primary" %}}
Not all Dapr errors have been converted to the richer gRPC error model.
{{% /alert %}}
-### Standard gRPC Error Model
+## Standard gRPC Error Model
The [Standard gRPC error model](https://grpc.io/docs/guides/error/#standard-error-model) is an approach to error reporting in gRPC. Each error response includes an error code and an error message. The error codes are standardized and reflect common error conditions.
@@ -25,7 +23,7 @@ ERROR:
Message: input key/keyPrefix 'bad||keyname' can't contain '||'
```
-### Richer gRPC Error Model
+## Richer gRPC Error Model
The [Richer gRPC error model](https://grpc.io/docs/guides/error/#richer-error-model) extends the standard error model by providing additional context and details about the error. This model includes the standard error `code` and `message`, along with a `details` section that can contain various types of information, such as `ErrorInfo`, `ResourceInfo`, and `BadRequest` details.
diff --git a/daprdocs/content/en/developing-applications/error-codes/http-error-codes.md b/daprdocs/content/en/developing-applications/error-codes/http-error-codes.md
new file mode 100644
index 00000000000..1b069ebaf9d
--- /dev/null
+++ b/daprdocs/content/en/developing-applications/error-codes/http-error-codes.md
@@ -0,0 +1,21 @@
+---
+type: docs
+title: "Handling HTTP error codes"
+linkTitle: "HTTP"
+description: "Detailed reference of the Dapr HTTP error codes and how to handle them"
+weight: 30
+---
+
+For HTTP calls made to the Dapr runtime, when an error is encountered, an error JSON is returned in the response body. The JSON contains an error code and a descriptive error message.
+
+```json
+{
+ "errorCode": "ERR_STATE_GET",
+ "message": "Requested state key does not exist in state store."
+}
+```
+
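A caller can branch on the `errorCode` field rather than on the human-readable message, which may change between releases. A minimal sketch, assuming the response body shown above:

```python
import json

# Hypothetical raw response body from a Dapr HTTP API call
body = '{"errorCode": "ERR_STATE_GET", "message": "Requested state key does not exist in state store."}'

err = json.loads(body)
if err["errorCode"] == "ERR_STATE_GET":
    # For example, treat a missing key as "no value" instead of a hard failure
    value = None
else:
    raise RuntimeError(f'{err["errorCode"]}: {err["message"]}')
```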
+## Related
+
+- [Error code reference list]({{< ref error-codes-reference.md >}})
+- [Handling gRPC error codes]({{< ref grpc-error-codes.md >}})
\ No newline at end of file
diff --git a/daprdocs/content/en/developing-applications/integrations/AWS/authenticating-aws.md b/daprdocs/content/en/developing-applications/integrations/AWS/authenticating-aws.md
index 94757e86bb1..029279c261f 100644
--- a/daprdocs/content/en/developing-applications/integrations/AWS/authenticating-aws.md
+++ b/daprdocs/content/en/developing-applications/integrations/AWS/authenticating-aws.md
@@ -80,10 +80,16 @@ In production scenarios, it is recommended to use a solution such as:
If running on AWS EKS, you can [link an IAM role to a Kubernetes service account](https://docs.aws.amazon.com/eks/latest/userguide/create-service-account-iam-policy-and-role.html), which your pod can use.
-All of these solutions solve the same problem: They allow the Dapr runtime process (or sidecar) to retrive credentials dynamically, so that explicit credentials aren't needed. This provides several benefits, such as automated key rotation, and avoiding having to manage secrets.
+All of these solutions solve the same problem: They allow the Dapr runtime process (or sidecar) to retrieve credentials dynamically, so that explicit credentials aren't needed. This provides several benefits, such as automated key rotation, and avoiding having to manage secrets.
Both Kiam and Kube2IAM work by intercepting calls to the [instance metadata service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html).
+### Setting Up Dapr with AWS EKS Pod Identity
+
+EKS Pod Identities provide the ability to manage credentials for your applications, similar to the way that Amazon EC2 instance profiles provide credentials to Amazon EC2 instances. Instead of creating and distributing your AWS credentials to the containers or using the Amazon EC2 instance’s role, you associate an IAM role with a Kubernetes service account and configure your Pods to use the service account.
+
+To see a comprehensive example on how to authorize pod access to AWS Secrets Manager from EKS using AWS EKS Pod Identity, [follow the sample in this repository](https://github.com/dapr/samples/tree/master/dapr-eks-podidentity).
+
### Use an instance profile when running in stand-alone mode on AWS EC2
If running Dapr directly on an AWS EC2 instance in stand-alone mode, you can use instance profiles.
@@ -130,7 +136,6 @@ On Windows, the environment variable needs to be set before starting the `dapr`
{{< /tabs >}}
-
### Authenticate to AWS if using AWS SSO based profiles
If you authenticate to AWS using [AWS SSO](https://aws.amazon.com/single-sign-on/), some AWS SDKs (including the Go SDK) don't yet support this natively. There are several utilities you can use to "bridge the gap" between AWS SSO-based credentials and "legacy" credentials, such as:
@@ -157,7 +162,7 @@ AWS_PROFILE=myprofile awshelper daprd...
{{% codetab %}}
-On Windows, the environment variable needs to be set before starting the `awshelper` command, doing it inline (like in Linxu/MacOS) is not supported.
+On Windows, the environment variable needs to be set before starting the `awshelper` command; doing it inline (like in Linux/MacOS) is not supported.
{{% /codetab %}}
@@ -169,4 +174,7 @@ On Windows, the environment variable needs to be set before starting the `awshel
## Related links
-For more information, see [how the AWS SDK (which Dapr uses) handles credentials](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials).
+- For more information, see [how the AWS SDK (which Dapr uses) handles credentials](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials).
+- [EKS Pod Identity Documentation](https://docs.aws.amazon.com/eks/latest/userguide/pod-identities.html)
+- [Set up an Elastic Kubernetes Service (EKS) cluster](https://docs.dapr.io/operations/hosting/kubernetes/cluster/setup-eks/)
diff --git a/daprdocs/content/en/developing-applications/integrations/Diagrid/test-containers.md b/daprdocs/content/en/developing-applications/integrations/Diagrid/test-containers.md
deleted file mode 100644
index 1eabef6bf61..00000000000
--- a/daprdocs/content/en/developing-applications/integrations/Diagrid/test-containers.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-type: docs
-title: "How to: Integrate using Testcontainers Dapr Module"
-linkTitle: "Dapr Testcontainers"
-weight: 3000
-description: "Use the Dapr Testcontainer module from your Java application"
----
-
-You can use the Testcontainers Dapr Module provided by Diagrid to set up Dapr locally for your Java applications. Simply add the following dependency to your Maven project:
-
-```xml
-
- io.diagrid.dapr
- testcontainers-dapr
- 0.10.x
-
-```
-
-[If you're using Spring Boot, you can also use the Spring Boot Starter.](https://github.com/diagridio/spring-boot-starter-dapr)
-
-{{< button text="Use the Testcontainers Dapr Module" link="https://github.com/diagridio/testcontainers-dapr" >}}
\ No newline at end of file
diff --git a/daprdocs/content/en/developing-applications/integrations/gRPC-integration.md b/daprdocs/content/en/developing-applications/local-development/gRPC-integration.md
similarity index 99%
rename from daprdocs/content/en/developing-applications/integrations/gRPC-integration.md
rename to daprdocs/content/en/developing-applications/local-development/gRPC-integration.md
index 6b05dfa6076..bd0eea99230 100644
--- a/daprdocs/content/en/developing-applications/integrations/gRPC-integration.md
+++ b/daprdocs/content/en/developing-applications/local-development/gRPC-integration.md
@@ -2,7 +2,7 @@
type: docs
title: "How to: Use the gRPC interface in your Dapr application"
linkTitle: "gRPC interface"
-weight: 6000
+weight: 400
description: "Use the Dapr gRPC API in your application"
---
diff --git a/daprdocs/content/en/developing-applications/local-development/multi-app-dapr-run/multi-app-overview.md b/daprdocs/content/en/developing-applications/local-development/multi-app-dapr-run/multi-app-overview.md
index 4e48d8a09e4..1cb8cc0b93d 100644
--- a/daprdocs/content/en/developing-applications/local-development/multi-app-dapr-run/multi-app-overview.md
+++ b/daprdocs/content/en/developing-applications/local-development/multi-app-dapr-run/multi-app-overview.md
@@ -124,6 +124,7 @@ apps:
appDirPath: ./nodeapp/
appPort: 3000
containerImage: ghcr.io/dapr/samples/hello-k8s-node:latest
+ containerImagePullPolicy: Always
createService: true
env:
APP_PORT: 3000
@@ -134,6 +135,7 @@ apps:
> **Note:**
> - If the `containerImage` field is not specified, `dapr run -k -f` produces an error.
+> - The `containerImagePullPolicy: Always` setting means that a new container image is always downloaded for this app.
> - The `createService` field defines a basic service in Kubernetes (ClusterIP or LoadBalancer) that targets the `--app-port` specified in the template. If `createService` isn't specified, the application is not accessible from outside the cluster.
For a more in-depth example and explanation of the template properties, see [Multi-app template]({{< ref multi-app-template.md >}}).
@@ -169,4 +171,4 @@ Watch [this video for an overview on Multi-App Run in Kubernetes](https://youtu.
- [Learn the Multi-App Run template file structure and its properties]({{< ref multi-app-template.md >}})
- [Try out the self-hosted Multi-App Run template with the Service Invocation quickstart]({{< ref serviceinvocation-quickstart.md >}})
-- [Try out the Kubernetes Multi-App Run template with the `hello-kubernetes` tutorial](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes)
\ No newline at end of file
+- [Try out the Kubernetes Multi-App Run template with the `hello-kubernetes` tutorial](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes)
diff --git a/daprdocs/content/en/developing-applications/local-development/multi-app-dapr-run/multi-app-template.md b/daprdocs/content/en/developing-applications/local-development/multi-app-dapr-run/multi-app-template.md
index 606ae9fbe8c..ba7e3d17753 100644
--- a/daprdocs/content/en/developing-applications/local-development/multi-app-dapr-run/multi-app-template.md
+++ b/daprdocs/content/en/developing-applications/local-development/multi-app-dapr-run/multi-app-template.md
@@ -203,6 +203,7 @@ apps:
appLogDestination: file # (optional), can be file, console or fileAndConsole. default is fileAndConsole.
daprdLogDestination: file # (optional), can be file, console or fileAndConsole. default is file.
containerImage: ghcr.io/dapr/samples/hello-k8s-node:latest # (optional) URI of the container image to be used when deploying to Kubernetes dev/test environment.
+ containerImagePullPolicy: IfNotPresent # (optional), the container image is downloaded if one is not present locally, otherwise the local one is used.
createService: true # (optional) Create a Kubernetes service for the application when deploying to dev/test environment.
- appID: backend # optional
appDirPath: .dapr/backend/ # REQUIRED
@@ -285,39 +286,39 @@ The properties for the Multi-App Run template align with the `dapr run -k` CLI f
{{< table "table table-white table-striped table-bordered" >}}
-| Properties | Required | Details | Example |
-|--------------------------|:--------:|--------|---------|
-| `appDirPath` | Y | Path to the your application code | `./webapp/`, `./backend/` |
-| `appID` | N | Application's app ID. If not provided, will be derived from `appDirPath` | `webapp`, `backend` |
-| `appChannelAddress` | N | The network address the application listens on. Can be left to the default value by convention. | `127.0.0.1` | `localhost` |
-| `appProtocol` | N | The protocol Dapr uses to talk to the application. | `http`, `grpc` |
-| `appPort` | N | The port your application is listening on | `8080`, `3000` |
-| `daprHTTPPort` | N | Dapr HTTP port | |
-| `daprGRPCPort` | N | Dapr GRPC port | |
-| `daprInternalGRPCPort` | N | gRPC port for the Dapr Internal API to listen on; used when parsing the value from a local DNS component | |
-| `metricsPort` | N | The port that Dapr sends its metrics information to | |
-| `unixDomainSocket` | N | Path to a unix domain socket dir mount. If specified, communication with the Dapr sidecar uses unix domain sockets for lower latency and greater throughput when compared to using TCP ports. Not available on Windows. | `/tmp/test-socket` |
-| `profilePort` | N | The port for the profile server to listen on | |
-| `enableProfiling` | N | Enable profiling via an HTTP endpoint | |
-| `apiListenAddresses` | N | Dapr API listen addresses | |
-| `logLevel` | N | The log verbosity. | |
-| `appMaxConcurrency` | N | The concurrency level of the application; default is unlimited | |
-| `placementHostAddress` | N | | |
-| `appSSL` | N | Enable https when Dapr invokes the application | |
-| `daprHTTPMaxRequestSize` | N | Max size of the request body in MB. | |
-| `daprHTTPReadBufferSize` | N | Max size of the HTTP read buffer in KB. This also limits the maximum size of HTTP headers. The default 4 KB | |
-| `enableAppHealthCheck` | N | Enable the app health check on the application | `true`, `false` |
-| `appHealthCheckPath` | N | Path to the health check file | `/healthz` |
-| `appHealthProbeInterval` | N | Interval to probe for the health of the app in seconds
- | |
-| `appHealthProbeTimeout` | N | Timeout for app health probes in milliseconds | |
-| `appHealthThreshold` | N | Number of consecutive failures for the app to be considered unhealthy | |
-| `enableApiLogging` | N | Enable the logging of all API calls from application to Dapr | |
-| `env` | N | Map to environment variable; environment variables applied per application will overwrite environment variables shared across applications | `DEBUG`, `DAPR_HOST_ADD` |
-| `appLogDestination` | N | Log destination for outputting app logs; Its value can be file, console or fileAndConsole. Default is fileAndConsole | `file`, `console`, `fileAndConsole` |
-| `daprdLogDestination` | N | Log destination for outputting daprd logs; Its value can be file, console or fileAndConsole. Default is file | `file`, `console`, `fileAndConsole` |
-| `containerImage`| N | URI of the container image to be used when deploying to Kubernetes dev/test environment. | `ghcr.io/dapr/samples/hello-k8s-python:latest`
-| `createService`| N | Create a Kubernetes service for the application when deploying to dev/test environment. | `true`, `false` |
+| Properties | Required | Details | Example |
+|----------------------------|:--------:|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------|
+| `appDirPath`               | Y        | Path to your application code                                                                                                                                                                                           | `./webapp/`, `./backend/`                      |
+| `appID` | N | Application's app ID. If not provided, will be derived from `appDirPath` | `webapp`, `backend` |
+| `appChannelAddress`        | N        | The network address the application listens on. Can be left to the default value by convention.                                                                                                                         | `127.0.0.1`, `localhost`                       |
+| `appProtocol` | N | The protocol Dapr uses to talk to the application. | `http`, `grpc` |
+| `appPort` | N | The port your application is listening on | `8080`, `3000` |
+| `daprHTTPPort` | N | Dapr HTTP port | |
+| `daprGRPCPort` | N | Dapr GRPC port | |
+| `daprInternalGRPCPort` | N | gRPC port for the Dapr Internal API to listen on; used when parsing the value from a local DNS component | |
+| `metricsPort` | N | The port that Dapr sends its metrics information to | |
+| `unixDomainSocket` | N | Path to a unix domain socket dir mount. If specified, communication with the Dapr sidecar uses unix domain sockets for lower latency and greater throughput when compared to using TCP ports. Not available on Windows. | `/tmp/test-socket` |
+| `profilePort` | N | The port for the profile server to listen on | |
+| `enableProfiling` | N | Enable profiling via an HTTP endpoint | |
+| `apiListenAddresses` | N | Dapr API listen addresses | |
+| `logLevel` | N | The log verbosity. | |
+| `appMaxConcurrency` | N | The concurrency level of the application; default is unlimited | |
+| `placementHostAddress` | N | | |
+| `appSSL` | N | Enable https when Dapr invokes the application | |
+| `daprHTTPMaxRequestSize` | N | Max size of the request body in MB. | |
+| `daprHTTPReadBufferSize` | N | Max size of the HTTP read buffer in KB. This also limits the maximum size of HTTP headers. The default 4 KB | |
+| `enableAppHealthCheck` | N | Enable the app health check on the application | `true`, `false` |
+| `appHealthCheckPath`       | N        | Path used for health checks on the application                                                                                                                                                                          | `/healthz`                                     |
+| `appHealthProbeInterval` | N | Interval to probe for the health of the app in seconds | |
+| `appHealthProbeTimeout` | N | Timeout for app health probes in milliseconds | |
+| `appHealthThreshold` | N | Number of consecutive failures for the app to be considered unhealthy | |
+| `enableApiLogging` | N | Enable the logging of all API calls from application to Dapr | |
+| `env` | N | Map to environment variable; environment variables applied per application will overwrite environment variables shared across applications | `DEBUG`, `DAPR_HOST_ADD` |
+| `appLogDestination` | N | Log destination for outputting app logs; Its value can be file, console or fileAndConsole. Default is fileAndConsole | `file`, `console`, `fileAndConsole` |
+| `daprdLogDestination` | N | Log destination for outputting daprd logs; Its value can be file, console or fileAndConsole. Default is file | `file`, `console`, `fileAndConsole` |
+| `containerImage` | N | URI of the container image to be used when deploying to Kubernetes dev/test environment. | `ghcr.io/dapr/samples/hello-k8s-python:latest` |
+| `containerImagePullPolicy` | N        | The container image pull policy (defaults to `Always`).                                                                                                                                                                 | `Always`, `IfNotPresent`, `Never`              |
+| `createService` | N | Create a Kubernetes service for the application when deploying to dev/test environment. | `true`, `false` |
{{< /table >}}
diff --git a/daprdocs/content/en/developing-applications/sdks/sdk-serialization.md b/daprdocs/content/en/developing-applications/local-development/sdk-serialization.md
similarity index 99%
rename from daprdocs/content/en/developing-applications/sdks/sdk-serialization.md
rename to daprdocs/content/en/developing-applications/local-development/sdk-serialization.md
index 0457f057d1c..4e22d8b58cb 100644
--- a/daprdocs/content/en/developing-applications/sdks/sdk-serialization.md
+++ b/daprdocs/content/en/developing-applications/local-development/sdk-serialization.md
@@ -1,9 +1,9 @@
---
type: docs
title: "Serialization in Dapr's SDKs"
-linkTitle: "Serialization"
+linkTitle: "SDK Serialization"
description: "How Dapr serializes data within the SDKs"
-weight: 2000
+weight: 400
aliases:
- '/developing-applications/sdks/serialization/'
---
diff --git a/daprdocs/content/en/getting-started/quickstarts/_index.md b/daprdocs/content/en/getting-started/quickstarts/_index.md
index dbb02df4db9..d1cd2a45e4e 100644
--- a/daprdocs/content/en/getting-started/quickstarts/_index.md
+++ b/daprdocs/content/en/getting-started/quickstarts/_index.md
@@ -10,7 +10,7 @@ no_list: true
Hit the ground running with our Dapr quickstarts, complete with code samples aimed to get you started quickly with Dapr.
{{% alert title="Note" color="primary" %}}
- We are actively working on adding to our quickstart library. In the meantime, you can explore Dapr through our [tutorials]({{< ref "getting-started/tutorials/_index.md" >}}).
+ Each release, the quickstart library has new examples added for the APIs and SDKs. You can also explore Dapr through the [tutorials]({{< ref "getting-started/tutorials/_index.md" >}}).
{{% /alert %}}
@@ -33,4 +33,4 @@ Hit the ground running with our Dapr quickstarts, complete with code samples aim
| [Resiliency]({{< ref resiliency >}}) | Define and apply fault-tolerance policies to your Dapr API requests. |
| [Cryptography]({{< ref cryptography-quickstart.md >}}) | Encrypt and decrypt data using Dapr's cryptographic APIs. |
| [Jobs]({{< ref jobs-quickstart.md >}}) | Schedule, retrieve, and delete jobs using Dapr's jobs APIs. |
-
+| [Conversation]({{< ref conversation-quickstart.md >}}) | Securely and reliably interact with Large Language Models (LLMs). |
\ No newline at end of file
diff --git a/daprdocs/content/en/getting-started/quickstarts/actors-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/actors-quickstart.md
index 3b7ad206891..fef7c9de93c 100644
--- a/daprdocs/content/en/getting-started/quickstarts/actors-quickstart.md
+++ b/daprdocs/content/en/getting-started/quickstarts/actors-quickstart.md
@@ -20,8 +20,8 @@ As a quick overview of the .NET actors quickstart:
1. Using a `SmartDevice.Service` microservice, you host:
- Two `SmokeDetectorActor` smoke alarm objects
- A `ControllerActor` object that commands and controls the smart devices
-1. Using a `SmartDevice.Client` console app, the client app interacts with each actor, or the controller, to perform actions in aggregate.
-1. The `SmartDevice.Interfaces` contains the shared interfaces and data types used by both the service and client apps.
+2. Using a `SmartDevice.Client` console app, the client app interacts with each actor, or the controller, to perform actions in aggregate.
+3. The `SmartDevice.Interfaces` project contains the shared interfaces and data types used by both the service and client apps.
@@ -30,10 +30,13 @@ As a quick overview of the .NET actors quickstart:
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
-- [.NET SDK or .NET 6 SDK installed](https://dotnet.microsoft.com/download).
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
+- [.NET 6](https://dotnet.microsoft.com/download/dotnet/6.0), [.NET 8](https://dotnet.microsoft.com/download/dotnet/8.0) or [.NET 9](https://dotnet.microsoft.com/download/dotnet/9.0) installed
+
+**NOTE:** .NET 6 is the minimum supported version of .NET for the Dapr .NET SDK packages in this release. Only .NET 8 and .NET 9
+will be supported in Dapr v1.16 and later releases.
### Step 1: Set up the environment
diff --git a/daprdocs/content/en/getting-started/quickstarts/bindings-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/bindings-quickstart.md
index 68c1c61e27a..3dc076e0a6b 100644
--- a/daprdocs/content/en/getting-started/quickstarts/bindings-quickstart.md
+++ b/daprdocs/content/en/getting-started/quickstarts/bindings-quickstart.md
@@ -443,10 +443,13 @@ In the YAML file:
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
-- [.NET SDK or .NET 6 SDK installed](https://dotnet.microsoft.com/download).
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
+- [.NET 6](https://dotnet.microsoft.com/download/dotnet/6.0), [.NET 8](https://dotnet.microsoft.com/download/dotnet/8.0) or [.NET 9](https://dotnet.microsoft.com/download/dotnet/9.0) installed
+
+**NOTE:** .NET 6 is the minimum supported version of .NET for the Dapr .NET SDK packages in this release. Only .NET 8 and .NET 9
+will be supported in Dapr v1.16 and later releases.
### Step 1: Set up the environment
diff --git a/daprdocs/content/en/getting-started/quickstarts/configuration-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/configuration-quickstart.md
index 1c11084b01d..bd4f44a2a8d 100644
--- a/daprdocs/content/en/getting-started/quickstarts/configuration-quickstart.md
+++ b/daprdocs/content/en/getting-started/quickstarts/configuration-quickstart.md
@@ -272,10 +272,13 @@ setTimeout(() => {
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
-- [.NET SDK or .NET 6 SDK installed](https://dotnet.microsoft.com/download).
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
+- [.NET 6](https://dotnet.microsoft.com/download/dotnet/6.0), [.NET 8](https://dotnet.microsoft.com/download/dotnet/8.0) or [.NET 9](https://dotnet.microsoft.com/download/dotnet/9.0) installed
+
+**NOTE:** .NET 6 is the minimum supported version of .NET for the Dapr .NET SDK packages in this release. Only .NET 8 and .NET 9
+will be supported in Dapr v1.16 and later releases.
### Step 1: Set up the environment
diff --git a/daprdocs/content/en/getting-started/quickstarts/conversation-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/conversation-quickstart.md
new file mode 100644
index 00000000000..8abed6f58d3
--- /dev/null
+++ b/daprdocs/content/en/getting-started/quickstarts/conversation-quickstart.md
@@ -0,0 +1,782 @@
+---
+type: docs
+title: "Quickstart: Conversation"
+linkTitle: Conversation
+weight: 90
+description: Get started with the Dapr conversation building block
+---
+
+{{% alert title="Alpha" color="warning" %}}
+The conversation building block is currently in **alpha**.
+{{% /alert %}}
+
+Let's take a look at how the [Dapr conversation building block]({{< ref conversation-overview.md >}}) makes interacting with Large Language Models (LLMs) easier. In this quickstart, you use the echo component to communicate with the mock LLM and ask it for a poem about Dapr.
+
+You can try out this conversation quickstart by either:
+
+- [Running the application in this sample with the Multi-App Run template file]({{< ref "#run-the-app-with-the-template-file" >}}), or
+- [Running the application without the template]({{< ref "#run-the-app-without-the-template" >}})
+
+{{% alert title="Note" color="primary" %}}
+Currently, only the HTTP quickstart sample is available in Python and JavaScript.
+{{% /alert %}}
+
+## Run the app with the template file
+
+{{< tabs Python JavaScript ".NET" Go >}}
+
+
+{{% codetab %}}
+
+
+### Step 1: Pre-requisites
+
+For this example, you will need:
+
+- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
+- [Python 3.7+ installed](https://www.python.org/downloads/).
+
+- [Docker Desktop](https://www.docker.com/products/docker-desktop)
+
+
+### Step 2: Set up the environment
+
+Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/conversation).
+
+```bash
+git clone https://github.com/dapr/quickstarts.git
+```
+
+From the root of the Quickstarts directory, navigate into the conversation directory:
+
+```bash
+cd conversation/python/http/conversation
+```
+
+Install the dependencies:
+
+```bash
+pip3 install -r requirements.txt
+```
+
+### Step 3: Launch the conversation service
+
+Navigate back to the `http` directory and start the conversation service with the following command:
+
+```bash
+dapr run -f .
+```
+
+**Expected output**
+
+```
+== APP - conversation == Input sent: What is dapr?
+== APP - conversation == Output response: What is dapr?
+```
+
+### What happened?
+
+When you ran `dapr init` during Dapr install, the [`dapr.yaml` Multi-App Run template file]({{< ref "#dapryaml-multi-app-run-template-file" >}}) was generated in the `.dapr/components` directory.
+
+Running `dapr run -f .` in this Quickstart started [`app.py`]({{< ref "#apppy-conversation-app" >}}).
+
+#### `dapr.yaml` Multi-App Run template file
+
+Running the [Multi-App Run template file]({{< ref multi-app-dapr-run >}}) with `dapr run -f .` starts all applications in your project. This Quickstart has only one application, so the `dapr.yaml` file contains the following:
+
+```yml
+version: 1
+common:
+ resourcesPath: ../../components/
+apps:
+ - appID: conversation
+ appDirPath: ./conversation/
+ command: ["python3", "app.py"]
+```
+
+#### Echo mock LLM component
+
+In the [`conversation/components`](https://github.com/dapr/quickstarts/tree/master/conversation/components) directory of the quickstart, the [`conversation.yml` file](https://github.com/dapr/quickstarts/tree/master/conversation/components/conversation.yml) configures the echo LLM component.
+
+```yml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+ name: echo
+spec:
+ type: conversation.echo
+ version: v1
+```
+
+To interface with a real LLM, swap out the mock component with one of [the supported conversation components]({{< ref "supported-conversation" >}}). For example, to use an OpenAI component, see the [example in the conversation how-to guide]({{< ref "howto-conversation-layer.md#use-the-openai-component" >}})
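+
+As a rough illustration (not part of this quickstart), a component for a real LLM follows the same shape as the echo component above. The sketch below assumes the OpenAI conversation component; the exact metadata fields are documented in the how-to guide, and the values are placeholders:
+
+```yml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+  name: openai
+spec:
+  type: conversation.openai
+  version: v1
+  metadata:
+    - name: key
+      value: <OPENAI_API_KEY>
+    - name: model
+      value: gpt-4-turbo
+```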
+
+#### `app.py` conversation app
+
+In the application code:
+- The app sends an input "What is dapr?" to the echo mock LLM component.
+- The mock LLM echoes "What is dapr?".
+
+```python
+import logging
+import requests
+import os
+
+logging.basicConfig(level=logging.INFO)
+
+base_url = os.getenv('BASE_URL', 'http://localhost') + ':' + os.getenv(
+ 'DAPR_HTTP_PORT', '3500')
+
+CONVERSATION_COMPONENT_NAME = 'echo'
+
+input = {
+ 'name': 'echo',
+ 'inputs': [{'message':'What is dapr?'}],
+ 'parameters': {},
+ 'metadata': {}
+ }
+
+# Send input to conversation endpoint
+result = requests.post(
+ url='%s/v1.0-alpha1/conversation/%s/converse' % (base_url, CONVERSATION_COMPONENT_NAME),
+ json=input
+)
+
+logging.info('Input sent: What is dapr?')
+
+# Parse conversation output
+data = result.json()
+output = data["outputs"][0]["result"]
+
+logging.info('Output response: ' + output)
+```
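+
+For reference, the response body that the parsing code above reads has roughly this shape (inferred from the field access `data["outputs"][0]["result"]`; with the echo component, the result mirrors the input):
+
+```json
+{
+  "outputs": [
+    {
+      "result": "What is dapr?"
+    }
+  ]
+}
+```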
+
+{{% /codetab %}}
+
+
+{{% codetab %}}
+
+
+### Step 1: Pre-requisites
+
+For this example, you will need:
+
+- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
+- [Latest Node.js installed](https://nodejs.org/).
+
+- [Docker Desktop](https://www.docker.com/products/docker-desktop)
+
+
+### Step 2: Set up the environment
+
+Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/conversation).
+
+```bash
+git clone https://github.com/dapr/quickstarts.git
+```
+
+From the root of the Quickstarts directory, navigate into the conversation directory:
+
+```bash
+cd conversation/javascript/http/conversation
+```
+
+Install the dependencies:
+
+```bash
+npm install
+```
+
+### Step 3: Launch the conversation service
+
+Navigate back to the `http` directory and start the conversation service with the following command:
+
+```bash
+dapr run -f .
+```
+
+**Expected output**
+
+```
+== APP - conversation == Input sent: What is dapr?
+== APP - conversation == Output response: What is dapr?
+```
+
+### What happened?
+
+When you ran `dapr init` during Dapr install, the [`dapr.yaml` Multi-App Run template file]({{< ref "#dapryaml-multi-app-run-template-file" >}}) was generated in the `.dapr/components` directory.
+
+Running `dapr run -f .` in this Quickstart started [`index.js`]({{< ref "#indexjs-conversation-app" >}}).
+
+#### `dapr.yaml` Multi-App Run template file
+
+Running the [Multi-App Run template file]({{< ref multi-app-dapr-run >}}) with `dapr run -f .` starts all applications in your project. This Quickstart has only one application, so the `dapr.yaml` file contains the following:
+
+```yml
+version: 1
+common:
+ resourcesPath: ../../components/
+apps:
+ - appID: conversation
+ appDirPath: ./conversation/
+ daprHTTPPort: 3502
+ command: ["npm", "run", "start"]
+```
+
+#### Echo mock LLM component
+
+In the [`conversation/components`](https://github.com/dapr/quickstarts/tree/master/conversation/components) directory of the quickstart, the [`conversation.yml` file](https://github.com/dapr/quickstarts/tree/master/conversation/components/conversation.yml) configures the echo LLM component.
+
+```yml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+ name: echo
+spec:
+ type: conversation.echo
+ version: v1
+```
+
+To interface with a real LLM, swap out the mock component with one of [the supported conversation components]({{< ref "supported-conversation" >}}). For example, to use an OpenAI component, see the [example in the conversation how-to guide]({{< ref "howto-conversation-layer.md#use-the-openai-component" >}})
+
+#### `index.js` conversation app
+
+In the application code:
+- The app sends an input "What is dapr?" to the echo mock LLM component.
+- The mock LLM echoes "What is dapr?".
+
+```javascript
+const conversationComponentName = "echo";
+
+async function main() {
+ const daprHost = process.env.DAPR_HOST || "http://localhost";
+ const daprHttpPort = process.env.DAPR_HTTP_PORT || "3500";
+
+ const inputBody = {
+ name: "echo",
+ inputs: [{ message: "What is dapr?" }],
+ parameters: {},
+ metadata: {},
+ };
+
+ const reqURL = `${daprHost}:${daprHttpPort}/v1.0-alpha1/conversation/${conversationComponentName}/converse`;
+
+ try {
+ const response = await fetch(reqURL, {
+ method: "POST",
+ headers: {
+ "Content-Type": "application/json",
+ },
+ body: JSON.stringify(inputBody),
+ });
+
+ console.log("Input sent: What is dapr?");
+
+ const data = await response.json();
+ const result = data.outputs[0].result;
+ console.log("Output response:", result);
+ } catch (error) {
+ console.error("Error:", error.message);
+ process.exit(1);
+ }
+}
+
+main().catch((error) => {
+ console.error("Unhandled error:", error);
+ process.exit(1);
+});
+```
+
+{{% /codetab %}}
+
+
+{{% codetab %}}
+
+
+### Step 1: Pre-requisites
+
+For this example, you will need:
+
+- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
+- [.NET 8+ SDK installed](https://dotnet.microsoft.com/download).
+
+- [Docker Desktop](https://www.docker.com/products/docker-desktop)
+
+
+### Step 2: Set up the environment
+
+Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/conversation).
+
+```bash
+git clone https://github.com/dapr/quickstarts.git
+```
+
+From the root of the Quickstarts directory, navigate into the conversation directory:
+
+```bash
+cd conversation/csharp/sdk
+```
+
+### Step 3: Launch the conversation service
+
+Start the conversation service with the following command:
+
+```bash
+dapr run -f .
+```
+
+**Expected output**
+
+```
+== APP - conversation == Input sent: What is dapr?
+== APP - conversation == Output response: What is dapr?
+```
+
+### What happened?
+
+When you ran `dapr init` during Dapr install, the [`dapr.yaml` Multi-App Run template file]({{< ref "#dapryaml-multi-app-run-template-file" >}}) was generated in the `.dapr/components` directory.
+
+Running `dapr run -f .` in this Quickstart started the [conversation Program.cs]({{< ref "#programcs-conversation-app" >}}).
+
+#### `dapr.yaml` Multi-App Run template file
+
+Running the [Multi-App Run template file]({{< ref multi-app-dapr-run >}}) with `dapr run -f .` starts all applications in your project. This Quickstart has only one application, so the `dapr.yaml` file contains the following:
+
+```yml
+version: 1
+common:
+ resourcesPath: ../../components/
+apps:
+ - appDirPath: ./conversation/
+ appID: conversation
+ daprHTTPPort: 3500
+ command: ["dotnet", "run"]
+```
+
+#### Echo mock LLM component
+
+In [`conversation/components`](https://github.com/dapr/quickstarts/tree/master/conversation/components), the [`conversation.yml` file](https://github.com/dapr/quickstarts/tree/master/conversation/components/conversation.yml) configures the echo mock LLM component.
+
+```yml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+ name: echo
+spec:
+ type: conversation.echo
+ version: v1
+```
+
+To interface with a real LLM, swap out the mock component with one of [the supported conversation components]({{< ref "supported-conversation" >}}). For example, to use an OpenAI component, see the [example in the conversation how-to guide]({{< ref "howto-conversation-layer.md#use-the-openai-component" >}})
+
+#### `Program.cs` conversation app
+
+In the application code:
+- The app sends an input "What is dapr?" to the echo mock LLM component.
+- The mock LLM echoes "What is dapr?".
+
+```csharp
+using Dapr.AI.Conversation;
+using Dapr.AI.Conversation.Extensions;
+
+class Program
+{
+ private const string ConversationComponentName = "echo";
+
+ static async Task Main(string[] args)
+ {
+ const string prompt = "What is dapr?";
+
+ var builder = WebApplication.CreateBuilder(args);
+ builder.Services.AddDaprConversationClient();
+ var app = builder.Build();
+
+ //Instantiate Dapr Conversation Client
+        var conversationClient = app.Services.GetRequiredService<DaprConversationClient>();
+
+ try
+ {
+ // Send a request to the echo mock LLM component
+ var response = await conversationClient.ConverseAsync(ConversationComponentName, [new(prompt, DaprConversationRole.Generic)]);
+ Console.WriteLine("Input sent: " + prompt);
+
+ if (response != null)
+ {
+ Console.Write("Output response:");
+ foreach (var resp in response.Outputs)
+ {
+ Console.WriteLine($" {resp.Result}");
+ }
+ }
+ }
+ catch (Exception ex)
+ {
+ Console.WriteLine("Error: " + ex.Message);
+ }
+ }
+}
+```
+
+{{% /codetab %}}
+
+
+{{% codetab %}}
+
+
+### Step 1: Pre-requisites
+
+For this example, you will need:
+
+- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
+- [Latest version of Go](https://go.dev/dl/).
+
+- [Docker Desktop](https://www.docker.com/products/docker-desktop)
+
+
+### Step 2: Set up the environment
+
+Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/conversation).
+
+```bash
+git clone https://github.com/dapr/quickstarts.git
+```
+
+From the root of the Quickstarts directory, navigate into the conversation directory:
+
+```bash
+cd conversation/go/sdk
+```
+
+### Step 3: Launch the conversation service
+
+Start the conversation service with the following command:
+
+```bash
+dapr run -f .
+```
+
+**Expected output**
+
+```
+== APP - conversation == Input sent: What is dapr?
+== APP - conversation == Output response: What is dapr?
+```
+
+### What happened?
+
+When you ran `dapr init` during Dapr install, the [`dapr.yaml` Multi-App Run template file]({{< ref "#dapryaml-multi-app-run-template-file" >}}) was generated in the `.dapr/components` directory.
+
+Running `dapr run -f .` in this Quickstart started [`conversation.go`]({{< ref "#conversationgo-conversation-app" >}}).
+
+#### `dapr.yaml` Multi-App Run template file
+
+Running the [Multi-App Run template file]({{< ref multi-app-dapr-run >}}) with `dapr run -f .` starts all applications in your project. This Quickstart has only one application, so the `dapr.yaml` file contains the following:
+
+```yml
+version: 1
+common:
+ resourcesPath: ../../components/
+apps:
+ - appDirPath: ./conversation/
+ appID: conversation
+ daprHTTPPort: 3501
+ command: ["go", "run", "."]
+```
+
+#### Echo mock LLM component
+
+In the [`conversation/components`](https://github.com/dapr/quickstarts/tree/master/conversation/components) directory of the quickstart, the [`conversation.yml` file](https://github.com/dapr/quickstarts/tree/master/conversation/components/conversation.yml) configures the echo LLM component.
+
+```yml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+ name: echo
+spec:
+ type: conversation.echo
+ version: v1
+```
+
+To interface with a real LLM, swap out the mock component with one of [the supported conversation components]({{< ref "supported-conversation" >}}). For example, to use an OpenAI component, see the [example in the conversation how-to guide]({{< ref "howto-conversation-layer.md#use-the-openai-component" >}})
+
+#### `conversation.go` conversation app
+
+In the application code:
+- The app sends an input "What is dapr?" to the echo mock LLM component.
+- The mock LLM echoes "What is dapr?".
+
+```go
+package main
+
+import (
+ "context"
+ "fmt"
+ "log"
+
+ dapr "github.com/dapr/go-sdk/client"
+)
+
+func main() {
+ client, err := dapr.NewClient()
+ if err != nil {
+ panic(err)
+ }
+
+ input := dapr.ConversationInput{
+ Message: "What is dapr?",
+ // Role: nil, // Optional
+ // ScrubPII: nil, // Optional
+ }
+
+ fmt.Println("Input sent:", input.Message)
+
+ var conversationComponent = "echo"
+
+ request := dapr.NewConversationRequest(conversationComponent, []dapr.ConversationInput{input})
+
+ resp, err := client.ConverseAlpha1(context.Background(), request)
+ if err != nil {
+ log.Fatalf("err: %v", err)
+ }
+
+ fmt.Println("Output response:", resp.Outputs[0].Result)
+}
+```
+
+{{% /codetab %}}
+
+{{< /tabs >}}
+
+## Run the app without the template
+
+{{< tabs Python JavaScript ".NET" Go >}}
+
+
+{{% codetab %}}
+
+
+### Step 1: Pre-requisites
+
+For this example, you will need:
+
+- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
+- [Python 3.7+ installed](https://www.python.org/downloads/).
+
+- [Docker Desktop](https://www.docker.com/products/docker-desktop)
+
+
+### Step 2: Set up the environment
+
+Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/conversation).
+
+```bash
+git clone https://github.com/dapr/quickstarts.git
+```
+
+From the root of the Quickstarts directory, navigate into the conversation directory:
+
+```bash
+cd conversation/python/http/conversation
+```
+
+Install the dependencies:
+
+```bash
+pip3 install -r requirements.txt
+```
+
+### Step 3: Launch the conversation service
+
+Navigate back to the `http` directory and start the conversation service with the following command:
+
+```bash
+dapr run --app-id conversation --resources-path ../../../components -- python3 app.py
+```
+
+> **Note**: Since `python3` is not recognized as a command on Windows, you may need to use `python app.py` instead of `python3 app.py`.
+
+**Expected output**
+
+```
+== APP - conversation == Input sent: What is dapr?
+== APP - conversation == Output response: What is dapr?
+```
+
+{{% /codetab %}}
+
+
+{{% codetab %}}
+
+
+### Step 1: Pre-requisites
+
+For this example, you will need:
+
+- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
+- [Latest Node.js installed](https://nodejs.org/).
+
+- [Docker Desktop](https://www.docker.com/products/docker-desktop)
+
+
+### Step 2: Set up the environment
+
+Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/conversation).
+
+```bash
+git clone https://github.com/dapr/quickstarts.git
+```
+
+From the root of the Quickstarts directory, navigate into the conversation directory:
+
+```bash
+cd conversation/javascript/http/conversation
+```
+
+Install the dependencies:
+
+```bash
+npm install
+```
+
+### Step 3: Launch the conversation service
+
+Navigate back to the `http` directory and start the conversation service with the following command:
+
+```bash
+dapr run --app-id conversation --resources-path ../../../components/ -- npm run start
+```
+
+**Expected output**
+
+```
+== APP - conversation == Input sent: What is dapr?
+== APP - conversation == Output response: What is dapr?
+```
+
+{{% /codetab %}}
+
+
+{{% codetab %}}
+
+
+### Step 1: Pre-requisites
+
+For this example, you will need:
+
+- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
+- [.NET 8+ SDK installed](https://dotnet.microsoft.com/download).
+
+- [Docker Desktop](https://www.docker.com/products/docker-desktop)
+
+
+### Step 2: Set up the environment
+
+Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/conversation).
+
+```bash
+git clone https://github.com/dapr/quickstarts.git
+```
+
+From the root of the Quickstarts directory, navigate into the conversation directory:
+
+```bash
+cd conversation/csharp/sdk/conversation
+```
+
+Install the dependencies:
+
+```bash
+dotnet build
+```
+
+### Step 3: Launch the conversation service
+
+Start the conversation service with the following command:
+
+```bash
+dapr run --app-id conversation --resources-path ../../../components/ -- dotnet run
+```
+
+**Expected output**
+
+```
+== APP - conversation == Input sent: What is dapr?
+== APP - conversation == Output response: What is dapr?
+```
+
+{{% /codetab %}}
+
+
+{{% codetab %}}
+
+
+### Step 1: Pre-requisites
+
+For this example, you will need:
+
+- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
+- [Latest version of Go](https://go.dev/dl/).
+- [Docker Desktop](https://www.docker.com/products/docker-desktop)
+
+
+
+### Step 2: Set up the environment
+
+Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/conversation).
+
+```bash
+git clone https://github.com/dapr/quickstarts.git
+```
+
+From the root of the Quickstarts directory, navigate into the conversation directory:
+
+```bash
+cd conversation/go/sdk/conversation
+```
+
+Install the dependencies:
+
+```bash
+go build .
+```
+
+### Step 3: Launch the conversation service
+
+Start the conversation service with the following command:
+
+```bash
+dapr run --app-id conversation --resources-path ../../../components/ -- go run .
+```
+
+**Expected output**
+
+```
+== APP - conversation == Input sent: What is dapr?
+== APP - conversation == Output response: What is dapr?
+```
+
+{{% /codetab %}}
+
+{{< /tabs >}}
+
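+Whichever SDK tab you choose, the quickstart ultimately calls the same Dapr conversation API over HTTP. As a rough sketch of the request shape (the port, alpha endpoint version, and `echo` component name below are illustrative assumptions, not values taken from the quickstart):
+
+```python
+import json
+
+# Assumed values for illustration -- adjust to your Dapr setup.
+DAPR_HTTP_PORT = 3500  # default Dapr sidecar HTTP port
+COMPONENT = "echo"     # conversation component name
+
+# Build the request URL and payload that the quickstart apps send.
+url = f"http://localhost:{DAPR_HTTP_PORT}/v1.0-alpha1/conversation/{COMPONENT}/converse"
+payload = {"inputs": [{"content": "What is dapr?"}]}
+
+print(url)
+print(json.dumps(payload))
+```
+
+The `echo` component simply returns the input, which is why the expected output above shows a response identical to the prompt.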
+## Demo
+
+Watch the demo presented during [Diagrid's Dapr v1.15 celebration](https://www.diagrid.io/videos/dapr-1-15-deep-dive) to see how the conversation API works using the .NET SDK.
+
+
+
+## Tell us what you think!
+
+We're continuously working to improve our Quickstart examples and value your feedback. Did you find this Quickstart helpful? Do you have suggestions for improvement?
+
+Join the discussion in our [Discord channel](https://discord.com/channels/778680217417809931/953427615916638238).
+
+## Next steps
+
+- HTTP samples of this quickstart:
+ - [Python](https://github.com/dapr/quickstarts/tree/master/conversation/python/http)
+ - [JavaScript](https://github.com/dapr/quickstarts/tree/master/conversation/javascript/http)
+ - [.NET](https://github.com/dapr/quickstarts/tree/master/conversation/csharp/http)
+ - [Go](https://github.com/dapr/quickstarts/tree/master/conversation/go/http)
+- Learn more about [the conversation building block]({{< ref conversation-overview.md >}})
+
+{{< button text="Explore Dapr tutorials >>" page="getting-started/tutorials/_index.md" >}}
diff --git a/daprdocs/content/en/getting-started/quickstarts/pubsub-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/pubsub-quickstart.md
index af95c1c3401..4dc5543f3e4 100644
--- a/daprdocs/content/en/getting-started/quickstarts/pubsub-quickstart.md
+++ b/daprdocs/content/en/getting-started/quickstarts/pubsub-quickstart.md
@@ -358,10 +358,13 @@ console.log("Published data: " + JSON.stringify(order));
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
-- [.NET SDK or .NET 6 SDK installed](https://dotnet.microsoft.com/download).
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
+- [.NET 6](https://dotnet.microsoft.com/download/dotnet/6.0), [.NET 8](https://dotnet.microsoft.com/download/dotnet/8.0), or [.NET 9](https://dotnet.microsoft.com/download/dotnet/9.0) installed
+
+**NOTE:** .NET 6 is the minimum supported version of .NET for the Dapr .NET SDK packages in this release. Only .NET 8 and .NET 9
+will be supported in Dapr v1.16 and later releases.
### Step 2: Set up the environment
diff --git a/daprdocs/content/en/getting-started/quickstarts/secrets-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/secrets-quickstart.md
index da4c09763c1..5a799b32496 100644
--- a/daprdocs/content/en/getting-started/quickstarts/secrets-quickstart.md
+++ b/daprdocs/content/en/getting-started/quickstarts/secrets-quickstart.md
@@ -247,10 +247,13 @@ Order-processor output:
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
-- [.NET SDK or .NET 6 SDK installed](https://dotnet.microsoft.com/download).
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
+- [.NET 6](https://dotnet.microsoft.com/download/dotnet/6.0), [.NET 8](https://dotnet.microsoft.com/download/dotnet/8.0), or [.NET 9](https://dotnet.microsoft.com/download/dotnet/9.0) installed
+
+**NOTE:** .NET 6 is the minimum supported version of .NET for the Dapr .NET SDK packages in this release. Only .NET 8 and .NET 9
+will be supported in Dapr v1.16 and later releases.
### Step 1: Set up the environment
diff --git a/daprdocs/content/en/getting-started/quickstarts/serviceinvocation-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/serviceinvocation-quickstart.md
index 4bd2b237b71..f925ca75210 100644
--- a/daprdocs/content/en/getting-started/quickstarts/serviceinvocation-quickstart.md
+++ b/daprdocs/content/en/getting-started/quickstarts/serviceinvocation-quickstart.md
@@ -315,10 +315,13 @@ console.log("Order passed: " + res.config.data);
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
-- [.NET SDK or .NET 7 SDK installed](https://dotnet.microsoft.com/download).
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
+- [.NET 6](https://dotnet.microsoft.com/download/dotnet/6.0), [.NET 8](https://dotnet.microsoft.com/download/dotnet/8.0), or [.NET 9](https://dotnet.microsoft.com/download/dotnet/9.0) installed
+
+**NOTE:** .NET 6 is the minimum supported version of .NET for the Dapr .NET SDK packages in this release. Only .NET 8 and .NET 9
+will be supported in Dapr v1.16 and later releases.
### Step 2: Set up the environment
@@ -439,13 +442,11 @@ app.MapPost("/orders", (Order order) =>
In the Program.cs file for the `checkout` service, you'll notice there's no need to rewrite your app code to use Dapr's service invocation. You can enable service invocation by simply adding the `dapr-app-id` header, which specifies the ID of the target service.
```csharp
-var client = new HttpClient();
-client.DefaultRequestHeaders.Accept.Add(new System.Net.Http.Headers.MediaTypeWithQualityHeaderValue("application/json"));
+var client = DaprClient.CreateInvokeHttpClient(appId: "order-processor");
+var cts = new CancellationTokenSource();
-client.DefaultRequestHeaders.Add("dapr-app-id", "order-processor");
-
-var response = await client.PostAsync($"{baseURL}/orders", content);
- Console.WriteLine("Order passed: " + order);
+var response = await client.PostAsJsonAsync("/orders", order, cts.Token);
+Console.WriteLine("Order passed: " + order);
```
{{% /codetab %}}
@@ -1089,13 +1090,11 @@ dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 -- dotnet r
In the Program.cs file for the `checkout` service, you'll notice there's no need to rewrite your app code to use Dapr's service invocation. You can enable service invocation by simply adding the `dapr-app-id` header, which specifies the ID of the target service.
```csharp
-var client = new HttpClient();
-client.DefaultRequestHeaders.Accept.Add(new System.Net.Http.Headers.MediaTypeWithQualityHeaderValue("application/json"));
-
-client.DefaultRequestHeaders.Add("dapr-app-id", "order-processor");
+var client = DaprClient.CreateInvokeHttpClient(appId: "order-processor");
+var cts = new CancellationTokenSource();
-var response = await client.PostAsync($"{baseURL}/orders", content);
- Console.WriteLine("Order passed: " + order);
+var response = await client.PostAsJsonAsync("/orders", order, cts.Token);
+Console.WriteLine("Order passed: " + order);
```
### Step 5: Use with Multi-App Run
diff --git a/daprdocs/content/en/getting-started/quickstarts/statemanagement-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/statemanagement-quickstart.md
index ff92119df3e..bde517c44af 100644
--- a/daprdocs/content/en/getting-started/quickstarts/statemanagement-quickstart.md
+++ b/daprdocs/content/en/getting-started/quickstarts/statemanagement-quickstart.md
@@ -288,10 +288,13 @@ In the YAML file:
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
-- [.NET SDK or .NET 6 SDK installed](https://dotnet.microsoft.com/download).
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
+- [.NET 6](https://dotnet.microsoft.com/download/dotnet/6.0), [.NET 8](https://dotnet.microsoft.com/download/dotnet/8.0), or [.NET 9](https://dotnet.microsoft.com/download/dotnet/9.0) installed
+
+**NOTE:** .NET 6 is the minimum supported version of .NET for the Dapr .NET SDK packages in this release. Only .NET 8 and .NET 9
+will be supported in Dapr v1.16 and later releases.
### Step 1: Set up the environment
diff --git a/daprdocs/content/en/getting-started/quickstarts/workflow-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/workflow-quickstart.md
index 3cf04fac94e..5f50c6a9909 100644
--- a/daprdocs/content/en/getting-started/quickstarts/workflow-quickstart.md
+++ b/daprdocs/content/en/getting-started/quickstarts/workflow-quickstart.md
@@ -20,6 +20,21 @@ In this guide, you'll:
+The workflow contains the following activities:
+
+- `NotifyActivity`: Utilizes a logger to print out messages throughout the workflow.
+- `VerifyInventoryActivity`: Checks the state store to ensure that there is enough inventory for the purchase.
+- `RequestApprovalActivity`: Requests approval for orders over a certain cost threshold.
+- `ProcessPaymentActivity`: Processes and authorizes the payment.
+- `UpdateInventoryActivity`: Removes the requested items from the state store and updates the store with the new remaining inventory value.
+
+The workflow also contains business logic:
+- The workflow will not proceed with the payment if there is insufficient inventory.
+- The workflow will call the `RequestApprovalActivity` and wait for an external approval event when the total cost of the order is greater than 5000.
+- If the order is not approved, or the approval times out, the workflow will not proceed with the payment.
+
+
+
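+The ordering rules above can be sketched as a plain function, independent of the Dapr runtime. This is a hypothetical simplification for illustration only (the function name and return strings below are not part of the quickstart):
+
+```python
+APPROVAL_THRESHOLD = 5000  # orders costing more than this require approval
+
+def decide(total_cost, stock, quantity, approved=None):
+    """Return the workflow outcome for an order, per the rules above."""
+    if stock < quantity:
+        return "rejected: insufficient inventory"
+    if total_cost > APPROVAL_THRESHOLD:
+        if not approved:
+            # Covers both an explicit rejection and an approval timeout.
+            return "rejected: not approved or timed out"
+    return "processed"
+
+print(decide(5000, 10, 1))  # prints "processed": cost is not above the threshold
+```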
Select your preferred language-specific Dapr SDK before proceeding with the Quickstart.
{{< tabs "Python" "JavaScript" ".NET" "Java" Go >}}
@@ -31,10 +46,10 @@ The `order-processor` console app starts and manages the `order_processing_workf
- `notify_activity`: Utilizes a logger to print out messages throughout the workflow. These messages notify you when:
- You have insufficient inventory
- Your payment couldn't be processed, etc.
-- `process_payment_activity`: Processes and authorizes the payment.
- `verify_inventory_activity`: Checks the state store to ensure there is enough inventory present for purchase.
+- `request_approval_activity`: Requests approval for orders over a certain cost threshold.
+- `process_payment_activity`: Processes and authorizes the payment.
- `update_inventory_activity`: Removes the requested items from the state store and updates the store with the new remaining inventory value.
-- `request_approval_activity`: Seeks approval from the manager if payment is greater than 50,000 USD.
### Step 1: Pre-requisites
@@ -86,22 +101,50 @@ This starts the `order-processor` app with unique workflow ID and runs the workf
Expected output:
```bash
-== APP == Starting order workflow, purchasing 10 of cars
-== APP == 2023-06-06 09:35:52.945 durabletask-worker INFO: Successfully connected to 127.0.0.1:65406. Waiting for work items...
-== APP == INFO:NotifyActivity:Received order f4e1926e-3721-478d-be8a-f5bebd1995da for 10 cars at $150000 !
-== APP == INFO:VerifyInventoryActivity:Verifying inventory for order f4e1926e-3721-478d-be8a-f5bebd1995da of 10 cars
-== APP == INFO:VerifyInventoryActivity:There are 100 Cars available for purchase
-== APP == INFO:RequestApprovalActivity:Requesting approval for payment of 165000 USD for 10 cars
-== APP == 2023-06-06 09:36:05.969 durabletask-worker INFO: f4e1926e-3721-478d-be8a-f5bebd1995da Event raised: manager_approval
-== APP == INFO:NotifyActivity:Payment for order f4e1926e-3721-478d-be8a-f5bebd1995da has been approved!
-== APP == INFO:ProcessPaymentActivity:Processing payment: f4e1926e-3721-478d-be8a-f5bebd1995da for 10 cars at 150000 USD
-== APP == INFO:ProcessPaymentActivity:Payment for request ID f4e1926e-3721-478d-be8a-f5bebd1995da processed successfully
-== APP == INFO:UpdateInventoryActivity:Checking inventory for order f4e1926e-3721-478d-be8a-f5bebd1995da for 10 cars
-== APP == INFO:UpdateInventoryActivity:There are now 90 cars left in stock
-== APP == INFO:NotifyActivity:Order f4e1926e-3721-478d-be8a-f5bebd1995da has completed!
-== APP == 2023-06-06 09:36:06.106 durabletask-worker INFO: f4e1926e-3721-478d-be8a-f5bebd1995da: Orchestration completed with status: COMPLETED
-== APP == Workflow completed! Result: Completed
-== APP == Purchase of item is Completed
+== APP - order-processor == *** Welcome to the Dapr Workflow console app sample!
+== APP - order-processor == *** Using this app, you can place orders that start workflows.
+== APP - order-processor == 2025-02-13 11:44:11.357 durabletask-worker INFO: Starting gRPC worker that connects to dns:127.0.0.1:38891
+== APP - order-processor == 2025-02-13 11:44:11.361 durabletask-worker INFO: Successfully connected to dns:127.0.0.1:38891. Waiting for work items...
+== APP - order-processor == INFO:NotifyActivity:Received order 6830cb00174544a0b062ba818e14fddc for 1 cars at $5000 !
+== APP - order-processor == 2025-02-13 11:44:14.157 durabletask-worker INFO: 6830cb00174544a0b062ba818e14fddc: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
+== APP - order-processor == INFO:VerifyInventoryActivity:Verifying inventory for order 6830cb00174544a0b062ba818e14fddc of 1 cars
+== APP - order-processor == INFO:VerifyInventoryActivity:There are 10 Cars available for purchase
+== APP - order-processor == 2025-02-13 11:44:14.171 durabletask-worker INFO: 6830cb00174544a0b062ba818e14fddc: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
+== APP - order-processor == INFO:ProcessPaymentActivity:Processing payment: 6830cb00174544a0b062ba818e14fddc for 1 cars at 5000 USD
+== APP - order-processor == INFO:ProcessPaymentActivity:Payment for request ID 6830cb00174544a0b062ba818e14fddc processed successfully
+== APP - order-processor == 2025-02-13 11:44:14.177 durabletask-worker INFO: 6830cb00174544a0b062ba818e14fddc: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
+== APP - order-processor == INFO:UpdateInventoryActivity:Checking inventory for order 6830cb00174544a0b062ba818e14fddc for 1 cars
+== APP - order-processor == INFO:UpdateInventoryActivity:There are now 9 cars left in stock
+== APP - order-processor == 2025-02-13 11:44:14.189 durabletask-worker INFO: 6830cb00174544a0b062ba818e14fddc: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
+== APP - order-processor == INFO:NotifyActivity:Order 6830cb00174544a0b062ba818e14fddc has completed!
+== APP - order-processor == 2025-02-13 11:44:14.195 durabletask-worker INFO: 6830cb00174544a0b062ba818e14fddc: Orchestration completed with status: COMPLETED
+== APP - order-processor == item: InventoryItem(item_name=Paperclip, per_item_cost=5, quantity=100)
+== APP - order-processor == item: InventoryItem(item_name=Cars, per_item_cost=5000, quantity=10)
+== APP - order-processor == item: InventoryItem(item_name=Computers, per_item_cost=500, quantity=100)
+== APP - order-processor == ==========Begin the purchase of item:==========
+== APP - order-processor == Starting order workflow, purchasing 1 of cars
+== APP - order-processor == 2025-02-13 11:44:16.363 durabletask-client INFO: Starting new 'order_processing_workflow' instance with ID = 'fc8a507e4a2246d2917d3ad4e3111240'.
+== APP - order-processor == 2025-02-13 11:44:16.366 durabletask-client INFO: Waiting 30s for instance 'fc8a507e4a2246d2917d3ad4e3111240' to complete.
+== APP - order-processor == 2025-02-13 11:44:16.366 durabletask-worker INFO: fc8a507e4a2246d2917d3ad4e3111240: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
+== APP - order-processor == INFO:NotifyActivity:Received order fc8a507e4a2246d2917d3ad4e3111240 for 1 cars at $5000 !
+== APP - order-processor == 2025-02-13 11:44:16.373 durabletask-worker INFO: fc8a507e4a2246d2917d3ad4e3111240: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
+== APP - order-processor == INFO:VerifyInventoryActivity:Verifying inventory for order fc8a507e4a2246d2917d3ad4e3111240 of 1 cars
+== APP - order-processor == INFO:VerifyInventoryActivity:There are 10 Cars available for purchase
+== APP - order-processor == 2025-02-13 11:44:16.383 durabletask-worker INFO: fc8a507e4a2246d2917d3ad4e3111240: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
+== APP - order-processor == INFO:ProcessPaymentActivity:Processing payment: fc8a507e4a2246d2917d3ad4e3111240 for 1 cars at 5000 USD
+== APP - order-processor == INFO:ProcessPaymentActivity:Payment for request ID fc8a507e4a2246d2917d3ad4e3111240 processed successfully
+== APP - order-processor == 2025-02-13 11:44:16.390 durabletask-worker INFO: fc8a507e4a2246d2917d3ad4e3111240: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
+== APP - order-processor == INFO:UpdateInventoryActivity:Checking inventory for order fc8a507e4a2246d2917d3ad4e3111240 for 1 cars
+== APP - order-processor == INFO:UpdateInventoryActivity:There are now 9 cars left in stock
+== APP - order-processor == 2025-02-13 11:44:16.403 durabletask-worker INFO: fc8a507e4a2246d2917d3ad4e3111240: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
+== APP - order-processor == INFO:NotifyActivity:Order fc8a507e4a2246d2917d3ad4e3111240 has completed!
+== APP - order-processor == 2025-02-13 11:44:16.411 durabletask-worker INFO: fc8a507e4a2246d2917d3ad4e3111240: Orchestration completed with status: COMPLETED
+== APP - order-processor == 2025-02-13 11:44:16.425 durabletask-client INFO: Instance 'fc8a507e4a2246d2917d3ad4e3111240' completed.
+== APP - order-processor == 2025-02-13 11:44:16.425 durabletask-worker INFO: Stopping gRPC worker...
+== APP - order-processor == 2025-02-13 11:44:16.426 durabletask-worker INFO: Disconnected from dns:127.0.0.1:38891
+== APP - order-processor == 2025-02-13 11:44:16.426 durabletask-worker INFO: No longer listening for work items
+== APP - order-processor == 2025-02-13 11:44:16.426 durabletask-worker INFO: Worker shutdown completed
+== APP - order-processor == Workflow completed! Result: {"processed": true, "__durabletask_autoobject__": true}
```
### (Optional) Step 4: View in Zipkin
@@ -120,14 +163,15 @@ View the workflow trace spans in the Zipkin web UI (typically at `http://localho
When you ran `dapr run -f .`:
-1. A unique order ID for the workflow is generated (in the above example, `f4e1926e-3721-478d-be8a-f5bebd1995da`) and the workflow is scheduled.
-1. The `NotifyActivity` workflow activity sends a notification saying an order for 10 cars has been received.
-1. The `ReserveInventoryActivity` workflow activity checks the inventory data, determines if you can supply the ordered item, and responds with the number of cars in stock.
-1. Your workflow starts and notifies you of its status.
-1. The `ProcessPaymentActivity` workflow activity begins processing payment for order `f4e1926e-3721-478d-be8a-f5bebd1995da` and confirms if successful.
-1. The `UpdateInventoryActivity` workflow activity updates the inventory with the current available cars after the order has been processed.
-1. The `NotifyActivity` workflow activity sends a notification saying that order `f4e1926e-3721-478d-be8a-f5bebd1995da` has completed.
-1. The workflow terminates as completed.
+1. An OrderPayload is created containing one car.
+2. A unique order ID for the workflow is generated (in the above example, `fc8a507e4a2246d2917d3ad4e3111240`) and the workflow is scheduled.
+3. The `notify_activity` workflow activity sends a notification saying an order for one car has been received.
+4. The `verify_inventory_activity` workflow activity checks the inventory data, determines if you can supply the ordered item, and responds with the number of cars in stock. The inventory is sufficient so the workflow continues.
+5. The total cost of the order is 5000, so the workflow will not call the `request_approval_activity` activity.
+6. The `process_payment_activity` workflow activity begins processing payment for order `fc8a507e4a2246d2917d3ad4e3111240` and confirms if successful.
+7. The `update_inventory_activity` workflow activity updates the inventory with the current available cars after the order has been processed.
+8. The `notify_activity` workflow activity sends a notification saying that order `fc8a507e4a2246d2917d3ad4e3111240` has completed.
+9. The workflow terminates as completed and the OrderResult is set to processed.
#### `order-processor/app.py`
@@ -139,70 +183,75 @@ In the application's program file:
- The workflow and the workflow activities it invokes are registered
```python
+from datetime import datetime
+from time import sleep
+
+from dapr.clients import DaprClient
+from dapr.conf import settings
+from dapr.ext.workflow import DaprWorkflowClient, WorkflowStatus
+
+from workflow import wfr, order_processing_workflow
+from model import InventoryItem, OrderPayload
+
+store_name = "statestore"
+workflow_name = "order_processing_workflow"
+default_item_name = "cars"
+
class WorkflowConsoleApp:
def main(self):
- # Register workflow and activities
- workflowRuntime = WorkflowRuntime(settings.DAPR_RUNTIME_HOST, settings.DAPR_GRPC_PORT)
- workflowRuntime.register_workflow(order_processing_workflow)
- workflowRuntime.register_activity(notify_activity)
- workflowRuntime.register_activity(requst_approval_activity)
- workflowRuntime.register_activity(verify_inventory_activity)
- workflowRuntime.register_activity(process_payment_activity)
- workflowRuntime.register_activity(update_inventory_activity)
- workflowRuntime.start()
+ print("*** Welcome to the Dapr Workflow console app sample!", flush=True)
+ print("*** Using this app, you can place orders that start workflows.", flush=True)
+
+ wfr.start()
+ # Wait for the sidecar to become available
+ sleep(5)
+
+ wfClient = DaprWorkflowClient()
+
+ baseInventory = {
+ "paperclip": InventoryItem("Paperclip", 5, 100),
+ "cars": InventoryItem("Cars", 5000, 10),
+ "computers": InventoryItem("Computers", 500, 100),
+ }
+
+
+ daprClient = DaprClient(address=f'{settings.DAPR_RUNTIME_HOST}:{settings.DAPR_GRPC_PORT}')
+ self.restock_inventory(daprClient, baseInventory)
print("==========Begin the purchase of item:==========", flush=True)
item_name = default_item_name
- order_quantity = 10
-
+ order_quantity = 1
total_cost = int(order_quantity) * baseInventory[item_name].per_item_cost
order = OrderPayload(item_name=item_name, quantity=int(order_quantity), total_cost=total_cost)
- # Start Workflow
print(f'Starting order workflow, purchasing {order_quantity} of {item_name}', flush=True)
- start_resp = daprClient.start_workflow(workflow_component=workflow_component,
- workflow_name=workflow_name,
- input=order)
- _id = start_resp.instance_id
-
- def prompt_for_approval(daprClient: DaprClient):
- daprClient.raise_workflow_event(instance_id=_id, workflow_component=workflow_component,
- event_name="manager_approval", event_data={'approval': True})
-
- approval_seeked = False
- start_time = datetime.now()
- while True:
- time_delta = datetime.now() - start_time
- state = daprClient.get_workflow(instance_id=_id, workflow_component=workflow_component)
+ instance_id = wfClient.schedule_new_workflow(
+ workflow=order_processing_workflow, input=order.to_json())
+
+ try:
+ state = wfClient.wait_for_workflow_completion(instance_id=instance_id, timeout_in_seconds=30)
if not state:
- print("Workflow not found!") # not expected
- elif state.runtime_status == "Completed" or\
- state.runtime_status == "Failed" or\
- state.runtime_status == "Terminated":
- print(f'Workflow completed! Result: {state.runtime_status}', flush=True)
- break
- if time_delta.total_seconds() >= 10:
- state = daprClient.get_workflow(instance_id=_id, workflow_component=workflow_component)
- if total_cost > 50000 and (
- state.runtime_status != "Completed" or
- state.runtime_status != "Failed" or
- state.runtime_status != "Terminated"
- ) and not approval_seeked:
- approval_seeked = True
- threading.Thread(target=prompt_for_approval(daprClient), daemon=True).start()
-
- print("Purchase of item is ", state.runtime_status, flush=True)
+ print("Workflow not found!")
+ elif state.runtime_status.name == 'COMPLETED':
+ print(f'Workflow completed! Result: {state.serialized_output}')
+ else:
+ print(f'Workflow failed! Status: {state.runtime_status.name}') # not expected
+ except TimeoutError:
+ print('*** Workflow timed out!')
+
+ wfr.shutdown()
def restock_inventory(self, daprClient: DaprClient, baseInventory):
for key, item in baseInventory.items():
print(f'item: {item}')
item_str = f'{{"name": "{item.item_name}", "quantity": {item.quantity},\
"per_item_cost": {item.per_item_cost}}}'
- daprClient.save_state("statestore-actors", key, item_str)
+ daprClient.save_state(store_name, key, item_str)
if __name__ == '__main__':
app = WorkflowConsoleApp()
app.main()
+
```
#### `order-processor/workflow.py`
@@ -210,12 +259,31 @@ if __name__ == '__main__':
In `workflow.py`, the workflow is defined as a class with all of its associated tasks (determined by workflow activities).
```python
- def order_processing_workflow(ctx: DaprWorkflowContext, order_payload_str: OrderPayload):
+from datetime import timedelta
+import logging
+import json
+
+from dapr.ext.workflow import DaprWorkflowContext, WorkflowActivityContext, WorkflowRuntime, when_any
+from dapr.clients import DaprClient
+from dapr.conf import settings
+
+from model import InventoryItem, Notification, InventoryRequest, OrderPayload, OrderResult,\
+ PaymentRequest, InventoryResult
+
+store_name = "statestore"
+
+wfr = WorkflowRuntime()
+
+logging.basicConfig(level=logging.INFO)
+
+
+@wfr.workflow(name="order_processing_workflow")
+def order_processing_workflow(ctx: DaprWorkflowContext, order_payload_str: str):
"""Defines the order processing workflow.
When the order is received, the inventory is checked to see if there is enough inventory to
fulfill the order. If there is enough inventory, the payment is processed and the inventory is
updated. If there is not enough inventory, the order is rejected.
- If the total order is greater than $50,000, the order is sent to a manager for approval.
+ If the total order is greater than $5,000, the order is sent to a manager for approval.
"""
order_id = ctx.instance_id
order_payload=json.loads(order_payload_str)
@@ -233,23 +301,20 @@ In `workflow.py`, the workflow is defined as a class with all of its associated
+f'{order_payload["item_name"]}'+'!'))
return OrderResult(processed=False)
- if order_payload["total_cost"] > 50000:
- yield ctx.call_activity(requst_approval_activity, input=order_payload)
- approval_task = ctx.wait_for_external_event("manager_approval")
- timeout_event = ctx.create_timer(timedelta(seconds=200))
+ if order_payload["total_cost"] > 5000:
+ yield ctx.call_activity(request_approval_activity, input=order_payload)
+ approval_task = ctx.wait_for_external_event("approval_event")
+ timeout_event = ctx.create_timer(timedelta(seconds=30))
winner = yield when_any([approval_task, timeout_event])
if winner == timeout_event:
yield ctx.call_activity(notify_activity,
- input=Notification(message='Payment for order '+order_id
- +' has been cancelled due to timeout!'))
+ input=Notification(message='Order '+order_id
+ +' has been cancelled due to approval timeout.'))
return OrderResult(processed=False)
approval_result = yield approval_task
- if approval_result["approval"]:
+ if approval_result == False:
yield ctx.call_activity(notify_activity, input=Notification(
- message=f'Payment for order {order_id} has been approved!'))
- else:
- yield ctx.call_activity(notify_activity, input=Notification(
- message=f'Payment for order {order_id} has been rejected!'))
+ message=f'Order {order_id} was not approved'))
return OrderResult(processed=False)
yield ctx.call_activity(process_payment_activity, input=PaymentRequest(
@@ -269,7 +334,86 @@ In `workflow.py`, the workflow is defined as a class with all of its associated
yield ctx.call_activity(notify_activity, input=Notification(
message=f'Order {order_id} has completed!'))
- return OrderResult(processed=True)
+ return OrderResult(processed=True)
+
+@wfr.activity(name="notify_activity")
+def notify_activity(ctx: WorkflowActivityContext, input: Notification):
+ """Defines Notify Activity. This is used by the workflow to send out a notification"""
+ # Create a logger
+ logger = logging.getLogger('NotifyActivity')
+ logger.info(input.message)
+
+
+@wfr.activity(name="process_payment_activity")
+def process_payment_activity(ctx: WorkflowActivityContext, input: PaymentRequest):
+ """Defines Process Payment Activity.This is used by the workflow to process a payment"""
+ logger = logging.getLogger('ProcessPaymentActivity')
+ logger.info('Processing payment: '+f'{input.request_id}'+' for '
+ +f'{input.quantity}' +' ' +f'{input.item_being_purchased}'+' at '+f'{input.amount}'
+ +' USD')
+ logger.info(f'Payment for request ID {input.request_id} processed successfully')
+
+
+@wfr.activity(name="verify_inventory_activity")
+def verify_inventory_activity(ctx: WorkflowActivityContext,
+ input: InventoryRequest) -> InventoryResult:
+ """Defines Verify Inventory Activity. This is used by the workflow to verify if inventory
+ is available for the order"""
+ logger = logging.getLogger('VerifyInventoryActivity')
+
+ logger.info('Verifying inventory for order '+f'{input.request_id}'+' of '
+ +f'{input.quantity}' +' ' +f'{input.item_name}')
+ with DaprClient(f'{settings.DAPR_RUNTIME_HOST}:{settings.DAPR_GRPC_PORT}') as client:
+ result = client.get_state(store_name, input.item_name)
+ if result.data is None:
+ return InventoryResult(False, None)
+ res_json=json.loads(str(result.data.decode('utf-8')))
+ logger.info(f'There are {res_json["quantity"]} {res_json["name"]} available for purchase')
+ inventory_item = InventoryItem(item_name=input.item_name,
+ per_item_cost=res_json['per_item_cost'],
+ quantity=res_json['quantity'])
+
+ if res_json['quantity'] >= input.quantity:
+ return InventoryResult(True, inventory_item)
+ return InventoryResult(False, None)
+
+
+
+@wfr.activity(name="update_inventory_activity")
+def update_inventory_activity(ctx: WorkflowActivityContext,
+ input: PaymentRequest) -> InventoryResult:
+ """Defines Update Inventory Activity. This is used by the workflow to check if inventory
+ is sufficient to fulfill the order and updates inventory by reducing order quantity from
+ inventory."""
+ logger = logging.getLogger('UpdateInventoryActivity')
+
+ logger.info('Checking inventory for order ' +f'{input.request_id}'+' for '
+ +f'{input.quantity}' +' ' +f'{input.item_being_purchased}')
+ with DaprClient(f'{settings.DAPR_RUNTIME_HOST}:{settings.DAPR_GRPC_PORT}') as client:
+ result = client.get_state(store_name, input.item_being_purchased)
+ res_json=json.loads(str(result.data.decode('utf-8')))
+ new_quantity = res_json['quantity'] - input.quantity
+ per_item_cost = res_json['per_item_cost']
+ if new_quantity < 0:
+ raise ValueError('Inventory update for request ID '+f'{input.item_being_purchased}'
+ +' could not be processed. Insufficient inventory.')
+ new_val = f'{{"name": "{input.item_being_purchased}", "quantity": {str(new_quantity)}, "per_item_cost": {str(per_item_cost)}}}'
+ client.save_state(store_name, input.item_being_purchased, new_val)
+ logger.info(f'There are now {new_quantity} {input.item_being_purchased} left in stock')
+
+
+
+@wfr.activity(name="request_approval_activity")
+def request_approval_activity(ctx: WorkflowActivityContext,
+ input: OrderPayload):
+ """Defines Request Approval Activity. This is used by the workflow to request approval
+ for payment of an order. This activity is used only if the order total cost is greater than
+    a particular threshold."""
+ logger = logging.getLogger('RequestApprovalActivity')
+
+    logger.info(f'Requesting approval for payment of {input["total_cost"]} USD for {input["quantity"]} {input["item_name"]}')
+
```
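In `update_inventory_activity` above, the new state value is assembled with a hand-written f-string. A less fragile sketch of the same payload uses `json.dumps`, which handles quoting and escaping automatically (the helper name is illustrative; the field names mirror the sample's state schema):

```python
import json

def build_inventory_state(name: str, quantity: int, per_item_cost: int) -> str:
    """Serialize an inventory record in the same shape the activity stores."""
    return json.dumps({"name": name, "quantity": quantity, "per_item_cost": per_item_cost})

# Round-trips cleanly even if the item name contains quotes or other
# characters that would break manual string interpolation.
state_value = build_inventory_state("car", 9, 5000)
```

The resulting string can be passed to `client.save_state` exactly like the f-string version.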
{{% /codetab %}}
@@ -279,8 +423,8 @@ In `workflow.py`, the workflow is defined as a class with all of its associated
The `order-processor` console app starts and manages the lifecycle of an order processing workflow that stores and retrieves data in a state store. The workflow consists of five workflow activities, or tasks:
- `notifyActivity`: Utilizes a logger to print out messages throughout the workflow. These messages notify the user when there is insufficient inventory, their payment couldn't be processed, and more.
-- `reserveInventoryActivity`: Checks the state store to ensure that there is enough inventory present for purchase.
-- `requestApprovalActivity`: Requests approval for orders over a certain threshold
+- `verifyInventoryActivity`: Checks the state store to ensure that there is enough inventory present for purchase.
+- `requestApprovalActivity`: Requests approval for orders over a certain threshold.
- `processPaymentActivity`: Processes and authorizes the payment.
- `updateInventoryActivity`: Updates the state store with the new remaining inventory value.
@@ -329,66 +473,67 @@ This starts the `order-processor` app with unique workflow ID and runs the workf
Expected output:
```log
-== APP - workflowApp == == APP == Orchestration scheduled with ID: 0c332155-1e02-453a-a333-28cfc7777642
-== APP - workflowApp == == APP == Waiting 30 seconds for instance 0c332155-1e02-453a-a333-28cfc7777642 to complete...
-== APP - workflowApp == == APP == Received "Orchestrator Request" work item with instance id '0c332155-1e02-453a-a333-28cfc7777642'
-== APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Rebuilding local state with 0 history event...
-== APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Processing 2 new history event(s): [ORCHESTRATORSTARTED=1, EXECUTIONSTARTED=1]
-== APP - workflowApp == == APP == Processing order 0c332155-1e02-453a-a333-28cfc7777642...
-== APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Waiting for 1 task(s) and 0 event(s) to complete...
-== APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Returning 1 action(s)
-== APP - workflowApp == == APP == Received "Activity Request" work item
-== APP - workflowApp == == APP == Received order 0c332155-1e02-453a-a333-28cfc7777642 for 10 item1 at a total cost of 100
-== APP - workflowApp == == APP == Activity notifyActivity completed with output undefined (0 chars)
-== APP - workflowApp == == APP == Received "Orchestrator Request" work item with instance id '0c332155-1e02-453a-a333-28cfc7777642'
-== APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Rebuilding local state with 3 history event...
-== APP - workflowApp == == APP == Processing order 0c332155-1e02-453a-a333-28cfc7777642...
-== APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Processing 2 new history event(s): [ORCHESTRATORSTARTED=1, TASKCOMPLETED=1]
-== APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Waiting for 1 task(s) and 0 event(s) to complete...
-== APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Returning 1 action(s)
-== APP - workflowApp == == APP == Received "Activity Request" work item
-== APP - workflowApp == == APP == Reserving inventory for 0c332155-1e02-453a-a333-28cfc7777642 of 10 item1
-== APP - workflowApp == == APP == 2024-02-16T03:15:59.498Z INFO [HTTPClient, HTTPClient] Sidecar Started
-== APP - workflowApp == == APP == There are 100 item1 in stock
-== APP - workflowApp == == APP == Activity reserveInventoryActivity completed with output {"success":true,"inventoryItem":{"perItemCost":100,"quantity":100,"itemName":"item1"}} (86 chars)
-== APP - workflowApp == == APP == Received "Orchestrator Request" work item with instance id '0c332155-1e02-453a-a333-28cfc7777642'
-== APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Rebuilding local state with 6 history event...
-== APP - workflowApp == == APP == Processing order 0c332155-1e02-453a-a333-28cfc7777642...
-== APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Processing 2 new history event(s): [ORCHESTRATORSTARTED=1, TASKCOMPLETED=1]
-== APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Waiting for 1 task(s) and 0 event(s) to complete...
-== APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Returning 1 action(s)
-== APP - workflowApp == == APP == Received "Activity Request" work item
-== APP - workflowApp == == APP == Processing payment for order item1
-== APP - workflowApp == == APP == Payment of 100 for 10 item1 processed successfully
-== APP - workflowApp == == APP == Activity processPaymentActivity completed with output true (4 chars)
-== APP - workflowApp == == APP == Received "Orchestrator Request" work item with instance id '0c332155-1e02-453a-a333-28cfc7777642'
-== APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Rebuilding local state with 9 history event...
-== APP - workflowApp == == APP == Processing order 0c332155-1e02-453a-a333-28cfc7777642...
-== APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Processing 2 new history event(s): [ORCHESTRATORSTARTED=1, TASKCOMPLETED=1]
-== APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Waiting for 1 task(s) and 0 event(s) to complete...
-== APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Returning 1 action(s)
-== APP - workflowApp == == APP == Received "Activity Request" work item
-== APP - workflowApp == == APP == Updating inventory for 0c332155-1e02-453a-a333-28cfc7777642 of 10 item1
-== APP - workflowApp == == APP == Inventory updated for 0c332155-1e02-453a-a333-28cfc7777642, there are now 90 item1 in stock
-== APP - workflowApp == == APP == Activity updateInventoryActivity completed with output {"success":true,"inventoryItem":{"perItemCost":100,"quantity":90,"itemName":"item1"}} (85 chars)
-== APP - workflowApp == == APP == Received "Orchestrator Request" work item with instance id '0c332155-1e02-453a-a333-28cfc7777642'
-== APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Rebuilding local state with 12 history event...
-== APP - workflowApp == == APP == Processing order 0c332155-1e02-453a-a333-28cfc7777642...
-== APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Processing 2 new history event(s): [ORCHESTRATORSTARTED=1, TASKCOMPLETED=1]
-== APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Waiting for 1 task(s) and 0 event(s) to complete...
-== APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Returning 1 action(s)
-== APP - workflowApp == == APP == Received "Activity Request" work item
-== APP - workflowApp == == APP == order 0c332155-1e02-453a-a333-28cfc7777642 processed successfully!
-== APP - workflowApp == == APP == Activity notifyActivity completed with output undefined (0 chars)
-== APP - workflowApp == == APP == Received "Orchestrator Request" work item with instance id '0c332155-1e02-453a-a333-28cfc7777642'
-== APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Rebuilding local state with 15 history event...
-== APP - workflowApp == == APP == Processing order 0c332155-1e02-453a-a333-28cfc7777642...
-== APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Processing 2 new history event(s): [ORCHESTRATORSTARTED=1, TASKCOMPLETED=1]
-== APP - workflowApp == == APP == Order 0c332155-1e02-453a-a333-28cfc7777642 processed successfully!
-== APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Orchestration completed with status COMPLETED
-== APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Returning 1 action(s)
-== APP - workflowApp == time="2024-02-15T21:15:59.5589687-06:00" level=info msg="0c332155-1e02-453a-a333-28cfc7777642: 'orderProcessingWorkflow' completed with a COMPLETED status." app_id=activity-sequence-workflow instance=kaibocai-devbox scope=wfengine.backend type=log ver=1.12.4
-== APP - workflowApp == == APP == Instance 0c332155-1e02-453a-a333-28cfc7777642 completed
+== APP - order-processor == Starting new orderProcessingWorkflow instance with ID = f5087775-779c-4e73-ac77-08edfcb375f4
+== APP - order-processor == Orchestration scheduled with ID: f5087775-779c-4e73-ac77-08edfcb375f4
+== APP - order-processor == Waiting 30 seconds for instance f5087775-779c-4e73-ac77-08edfcb375f4 to complete...
+== APP - order-processor == Received "Orchestrator Request" work item with instance id 'f5087775-779c-4e73-ac77-08edfcb375f4'
+== APP - order-processor == f5087775-779c-4e73-ac77-08edfcb375f4: Rebuilding local state with 0 history event...
+== APP - order-processor == f5087775-779c-4e73-ac77-08edfcb375f4: Processing 2 new history event(s): [ORCHESTRATORSTARTED=1, EXECUTIONSTARTED=1]
+== APP - order-processor == Processing order f5087775-779c-4e73-ac77-08edfcb375f4...
+== APP - order-processor == f5087775-779c-4e73-ac77-08edfcb375f4: Waiting for 1 task(s) and 0 event(s) to complete...
+== APP - order-processor == f5087775-779c-4e73-ac77-08edfcb375f4: Returning 1 action(s)
+== APP - order-processor == Received "Activity Request" work item
+== APP - order-processor == Received order f5087775-779c-4e73-ac77-08edfcb375f4 for 1 car at a total cost of 5000
+== APP - order-processor == Activity notifyActivity completed with output undefined (0 chars)
+== APP - order-processor == Received "Orchestrator Request" work item with instance id 'f5087775-779c-4e73-ac77-08edfcb375f4'
+== APP - order-processor == f5087775-779c-4e73-ac77-08edfcb375f4: Rebuilding local state with 3 history event...
+== APP - order-processor == Processing order f5087775-779c-4e73-ac77-08edfcb375f4...
+== APP - order-processor == f5087775-779c-4e73-ac77-08edfcb375f4: Processing 2 new history event(s): [ORCHESTRATORSTARTED=1, TASKCOMPLETED=1]
+== APP - order-processor == f5087775-779c-4e73-ac77-08edfcb375f4: Waiting for 1 task(s) and 0 event(s) to complete...
+== APP - order-processor == f5087775-779c-4e73-ac77-08edfcb375f4: Returning 1 action(s)
+== APP - order-processor == Received "Activity Request" work item
+== APP - order-processor == Verifying inventory for f5087775-779c-4e73-ac77-08edfcb375f4 of 1 car
+== APP - order-processor == 2025-02-13T10:33:21.622Z INFO [HTTPClient, HTTPClient] Sidecar Started
+== APP - order-processor == There are 10 car in stock
+== APP - order-processor == Activity verifyInventoryActivity completed with output {"success":true,"inventoryItem":{"itemName":"car","perItemCost":5000,"quantity":10}} (84 chars)
+== APP - order-processor == Received "Orchestrator Request" work item with instance id 'f5087775-779c-4e73-ac77-08edfcb375f4'
+== APP - order-processor == f5087775-779c-4e73-ac77-08edfcb375f4: Rebuilding local state with 6 history event...
+== APP - order-processor == Processing order f5087775-779c-4e73-ac77-08edfcb375f4...
+== APP - order-processor == f5087775-779c-4e73-ac77-08edfcb375f4: Processing 2 new history event(s): [ORCHESTRATORSTARTED=1, TASKCOMPLETED=1]
+== APP - order-processor == f5087775-779c-4e73-ac77-08edfcb375f4: Waiting for 1 task(s) and 0 event(s) to complete...
+== APP - order-processor == f5087775-779c-4e73-ac77-08edfcb375f4: Returning 1 action(s)
+== APP - order-processor == Received "Activity Request" work item
+== APP - order-processor == Processing payment for order car
+== APP - order-processor == Payment of 5000 for 1 car processed successfully
+== APP - order-processor == Activity processPaymentActivity completed with output true (4 chars)
+== APP - order-processor == Received "Orchestrator Request" work item with instance id 'f5087775-779c-4e73-ac77-08edfcb375f4'
+== APP - order-processor == f5087775-779c-4e73-ac77-08edfcb375f4: Rebuilding local state with 9 history event...
+== APP - order-processor == Processing order f5087775-779c-4e73-ac77-08edfcb375f4...
+== APP - order-processor == f5087775-779c-4e73-ac77-08edfcb375f4: Processing 2 new history event(s): [ORCHESTRATORSTARTED=1, TASKCOMPLETED=1]
+== APP - order-processor == f5087775-779c-4e73-ac77-08edfcb375f4: Waiting for 1 task(s) and 0 event(s) to complete...
+== APP - order-processor == f5087775-779c-4e73-ac77-08edfcb375f4: Returning 1 action(s)
+== APP - order-processor == Received "Activity Request" work item
+== APP - order-processor == Updating inventory for f5087775-779c-4e73-ac77-08edfcb375f4 of 1 car
+== APP - order-processor == Inventory updated for f5087775-779c-4e73-ac77-08edfcb375f4, there are now 9 car in stock
+== APP - order-processor == Activity updateInventoryActivity completed with output {"success":true,"inventoryItem":{"itemName":"car","perItemCost":5000,"quantity":9}} (83 chars)
+== APP - order-processor == Received "Orchestrator Request" work item with instance id 'f5087775-779c-4e73-ac77-08edfcb375f4'
+== APP - order-processor == f5087775-779c-4e73-ac77-08edfcb375f4: Rebuilding local state with 12 history event...
+== APP - order-processor == Processing order f5087775-779c-4e73-ac77-08edfcb375f4...
+== APP - order-processor == f5087775-779c-4e73-ac77-08edfcb375f4: Processing 2 new history event(s): [ORCHESTRATORSTARTED=1, TASKCOMPLETED=1]
+== APP - order-processor == f5087775-779c-4e73-ac77-08edfcb375f4: Waiting for 1 task(s) and 0 event(s) to complete...
+== APP - order-processor == f5087775-779c-4e73-ac77-08edfcb375f4: Returning 1 action(s)
+== APP - order-processor == Received "Activity Request" work item
+== APP - order-processor == order f5087775-779c-4e73-ac77-08edfcb375f4 processed successfully!
+== APP - order-processor == Activity notifyActivity completed with output undefined (0 chars)
+== APP - order-processor == Received "Orchestrator Request" work item with instance id 'f5087775-779c-4e73-ac77-08edfcb375f4'
+== APP - order-processor == f5087775-779c-4e73-ac77-08edfcb375f4: Rebuilding local state with 15 history event...
+== APP - order-processor == Processing order f5087775-779c-4e73-ac77-08edfcb375f4...
+== APP - order-processor == f5087775-779c-4e73-ac77-08edfcb375f4: Processing 2 new history event(s): [ORCHESTRATORSTARTED=1, TASKCOMPLETED=1]
+== APP - order-processor == Order f5087775-779c-4e73-ac77-08edfcb375f4 processed successfully!
+== APP - order-processor == f5087775-779c-4e73-ac77-08edfcb375f4: Orchestration completed with status COMPLETED
+== APP - order-processor == f5087775-779c-4e73-ac77-08edfcb375f4: Returning 1 action(s)
+== APP - order-processor == Instance f5087775-779c-4e73-ac77-08edfcb375f4 completed
+== APP - order-processor == Orchestration completed! Result: {"processed":true}
```
### (Optional) Step 4: View in Zipkin
@@ -407,16 +552,17 @@ View the workflow trace spans in the Zipkin web UI (typically at `http://localho
When you ran `dapr run -f .`:
-1. A unique order ID for the workflow is generated (in the above example, `0c332155-1e02-453a-a333-28cfc7777642`) and the workflow is scheduled.
-1. The `notifyActivity` workflow activity sends a notification saying an order for 10 cars has been received.
-1. The `reserveInventoryActivity` workflow activity checks the inventory data, determines if you can supply the ordered item, and responds with the number of cars in stock.
-1. Your workflow starts and notifies you of its status.
-1. The `processPaymentActivity` workflow activity begins processing payment for order `0c332155-1e02-453a-a333-28cfc7777642` and confirms if successful.
-1. The `updateInventoryActivity` workflow activity updates the inventory with the current available cars after the order has been processed.
-1. The `notifyActivity` workflow activity sends a notification saying that order `0c332155-1e02-453a-a333-28cfc7777642` has completed.
-1. The workflow terminates as completed.
+1. A unique order ID for the workflow is generated (in the above example, `f5087775-779c-4e73-ac77-08edfcb375f4`) and the workflow is scheduled.
+2. The `notifyActivity` workflow activity sends a notification saying an order for 1 car has been received.
+3. The `verifyInventoryActivity` workflow activity checks the inventory data, determines if you can supply the ordered item, and responds with the number of cars in stock.
+4. Your workflow starts and notifies you of its status.
+5. The `requestApprovalActivity` workflow activity requests approval for order `f5087775-779c-4e73-ac77-08edfcb375f4` when the order total exceeds the approval threshold.
+6. The `processPaymentActivity` workflow activity begins processing payment for order `f5087775-779c-4e73-ac77-08edfcb375f4` and confirms if successful.
+7. The `updateInventoryActivity` workflow activity updates the inventory with the current available cars after the order has been processed.
+8. The `notifyActivity` workflow activity sends a notification saying that order `f5087775-779c-4e73-ac77-08edfcb375f4` has been processed successfully.
+9. The workflow terminates as completed.
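Note that the approval step only fires when the order total strictly exceeds the workflow's approval threshold; the sample order of 1 car totals exactly 5000, which does not pass the workflow's `totalCost > 5000` check, so no approval activity appears in the expected output above. A minimal sketch of that gate (threshold value taken from the sample workflow; the function name is illustrative):

```python
APPROVAL_THRESHOLD = 5000  # mirrors the sample workflow's `totalCost > 5000` check

def needs_approval(total_cost: float) -> bool:
    """Orders strictly above the threshold require an approval event before payment."""
    return total_cost > APPROVAL_THRESHOLD

# A 5000 total does not trigger approval; 5001 would.
```

Raising the order quantity or cost in the sample payload is enough to exercise the approval path.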
-#### `order-processor/workflowApp.ts`
+#### `order-processor/app.ts`
In the application file:
@@ -426,19 +572,29 @@ In the application file:
- The workflow and the workflow activities it invokes are registered
```javascript
-import { DaprWorkflowClient, WorkflowRuntime, DaprClient } from "@dapr/dapr-dev";
+import { DaprWorkflowClient, WorkflowRuntime, DaprClient, CommunicationProtocolEnum } from "@dapr/dapr";
import { InventoryItem, OrderPayload } from "./model";
-import { notifyActivity, orderProcessingWorkflow, processPaymentActivity, requestApprovalActivity, reserveInventoryActivity, updateInventoryActivity } from "./orderProcessingWorkflow";
+import { notifyActivity, orderProcessingWorkflow, processPaymentActivity, requestApprovalActivity, verifyInventoryActivity, updateInventoryActivity } from "./orderProcessingWorkflow";
+
+const workflowWorker = new WorkflowRuntime();
async function start() {
// Update the gRPC client and worker to use a local address and port
const workflowClient = new DaprWorkflowClient();
- const workflowWorker = new WorkflowRuntime();
- const daprClient = new DaprClient();
+
+ const daprHost = process.env.DAPR_HOST ?? "127.0.0.1";
+ const daprPort = process.env.DAPR_GRPC_PORT ?? "50001";
+
+ const daprClient = new DaprClient({
+ daprHost,
+ daprPort,
+ communicationProtocol: CommunicationProtocolEnum.GRPC,
+ });
+
const storeName = "statestore";
- const inventory = new InventoryItem("item1", 100, 100);
+ const inventory = new InventoryItem("car", 5000, 10);
const key = inventory.itemName;
await daprClient.state.save(storeName, [
@@ -448,12 +604,12 @@ async function start() {
}
]);
- const order = new OrderPayload("item1", 100, 10);
+ const order = new OrderPayload("car", 5000, 1);
workflowWorker
.registerWorkflow(orderProcessingWorkflow)
.registerActivity(notifyActivity)
- .registerActivity(reserveInventoryActivity)
+ .registerActivity(verifyInventoryActivity)
.registerActivity(requestApprovalActivity)
.registerActivity(processPaymentActivity)
.registerActivity(updateInventoryActivity);
@@ -480,16 +636,162 @@ async function start() {
throw error;
}
- await workflowWorker.stop();
await workflowClient.stop();
}
+process.on('SIGTERM', () => {
+ workflowWorker.stop();
+})
+
start().catch((e) => {
console.error(e);
process.exit(1);
});
```
+#### `order-processor/orderProcessingWorkflow.ts`
+
+In `orderProcessingWorkflow.ts`, the workflow is defined as a generator function along with all of its associated tasks (the workflow activities).
+
+```javascript
+import { Task, WorkflowActivityContext, WorkflowContext, TWorkflow, DaprClient } from "@dapr/dapr";
+import { InventoryItem, InventoryRequest, InventoryResult, OrderNotification, OrderPayload, OrderPaymentRequest, OrderResult } from "./model";
+
+const daprClient = new DaprClient();
+const storeName = "statestore";
+
+// Defines Notify Activity. This is used by the workflow to send out a notification
+export const notifyActivity = async (_: WorkflowActivityContext, orderNotification: OrderNotification) => {
+ console.log(orderNotification.message);
+ return;
+};
+
+// Defines Verify Inventory Activity. This is used by the workflow to verify if inventory is available for the order
+export const verifyInventoryActivity = async (_: WorkflowActivityContext, inventoryRequest: InventoryRequest) => {
+ console.log(`Verifying inventory for ${inventoryRequest.requestId} of ${inventoryRequest.quantity} ${inventoryRequest.itemName}`);
+ const result = await daprClient.state.get(storeName, inventoryRequest.itemName);
+ if (result == undefined || result == null) {
+ return new InventoryResult(false, undefined);
+ }
+ const inventoryItem = result as InventoryItem;
+ console.log(`There are ${inventoryItem.quantity} ${inventoryItem.itemName} in stock`);
+
+ if (inventoryItem.quantity >= inventoryRequest.quantity) {
+ return new InventoryResult(true, inventoryItem)
+ }
+ return new InventoryResult(false, undefined);
+}
+
+export const requestApprovalActivity = async (_: WorkflowActivityContext, orderPayLoad: OrderPayload) => {
+ console.log(`Requesting approval for order ${orderPayLoad.itemName}`);
+ return true;
+}
+
+export const processPaymentActivity = async (_: WorkflowActivityContext, orderPaymentRequest: OrderPaymentRequest) => {
+ console.log(`Processing payment for order ${orderPaymentRequest.itemBeingPurchased}`);
+ console.log(`Payment of ${orderPaymentRequest.amount} for ${orderPaymentRequest.quantity} ${orderPaymentRequest.itemBeingPurchased} processed successfully`);
+ return true;
+}
+
+export const updateInventoryActivity = async (_: WorkflowActivityContext, inventoryRequest: InventoryRequest) => {
+ console.log(`Updating inventory for ${inventoryRequest.requestId} of ${inventoryRequest.quantity} ${inventoryRequest.itemName}`);
+ const result = await daprClient.state.get(storeName, inventoryRequest.itemName);
+ if (result == undefined || result == null) {
+ return new InventoryResult(false, undefined);
+ }
+ const inventoryItem = result as InventoryItem;
+ inventoryItem.quantity = inventoryItem.quantity - inventoryRequest.quantity;
+ if (inventoryItem.quantity < 0) {
+ console.log(`Insufficient inventory for ${inventoryRequest.requestId} of ${inventoryRequest.quantity} ${inventoryRequest.itemName}`);
+ return new InventoryResult(false, undefined);
+ }
+ await daprClient.state.save(storeName, [
+ {
+ key: inventoryRequest.itemName,
+ value: inventoryItem,
+ }
+ ]);
+ console.log(`Inventory updated for ${inventoryRequest.requestId}, there are now ${inventoryItem.quantity} ${inventoryItem.itemName} in stock`);
+ return new InventoryResult(true, inventoryItem);
+}
+
+export const orderProcessingWorkflow: TWorkflow = async function* (ctx: WorkflowContext, orderPayLoad: OrderPayload): any {
+ const orderId = ctx.getWorkflowInstanceId();
+ console.log(`Processing order ${orderId}...`);
+
+ const orderNotification: OrderNotification = {
+ message: `Received order ${orderId} for ${orderPayLoad.quantity} ${orderPayLoad.itemName} at a total cost of ${orderPayLoad.totalCost}`,
+ };
+ yield ctx.callActivity(notifyActivity, orderNotification);
+
+ const inventoryRequest = new InventoryRequest(orderId, orderPayLoad.itemName, orderPayLoad.quantity);
+ const inventoryResult = yield ctx.callActivity(verifyInventoryActivity, inventoryRequest);
+
+ if (!inventoryResult.success) {
+ const orderNotification: OrderNotification = {
+ message: `Insufficient inventory for order ${orderId}`,
+ };
+ yield ctx.callActivity(notifyActivity, orderNotification);
+ return new OrderResult(false);
+ }
+
+ if (orderPayLoad.totalCost > 5000) {
+ yield ctx.callActivity(requestApprovalActivity, orderPayLoad);
+
+ const tasks: Task[] = [];
+ const approvalEvent = ctx.waitForExternalEvent("approval_event");
+ tasks.push(approvalEvent);
+ const timeOutEvent = ctx.createTimer(30);
+ tasks.push(timeOutEvent);
+    const winner = yield ctx.whenAny(tasks);
+
+ if (winner == timeOutEvent) {
+ const orderNotification: OrderNotification = {
+ message: `Order ${orderId} has been cancelled due to approval timeout.`,
+ };
+ yield ctx.callActivity(notifyActivity, orderNotification);
+ return new OrderResult(false);
+ }
+ const approvalResult = approvalEvent.getResult();
+ if (!approvalResult) {
+ const orderNotification: OrderNotification = {
+ message: `Order ${orderId} was not approved.`,
+ };
+ yield ctx.callActivity(notifyActivity, orderNotification);
+ return new OrderResult(false);
+ }
+ }
+
+ const orderPaymentRequest = new OrderPaymentRequest(orderId, orderPayLoad.itemName, orderPayLoad.totalCost, orderPayLoad.quantity);
+ const paymentResult = yield ctx.callActivity(processPaymentActivity, orderPaymentRequest);
+
+ if (!paymentResult) {
+ const orderNotification: OrderNotification = {
+ message: `Payment for order ${orderId} failed`,
+ };
+ yield ctx.callActivity(notifyActivity, orderNotification);
+ return new OrderResult(false);
+ }
+
+ const updatedResult = yield ctx.callActivity(updateInventoryActivity, inventoryRequest);
+ if (!updatedResult.success) {
+ const orderNotification: OrderNotification = {
+ message: `Failed to update inventory for order ${orderId}`,
+ };
+ yield ctx.callActivity(notifyActivity, orderNotification);
+ return new OrderResult(false);
+ }
+
+ const orderCompletedNotification: OrderNotification = {
+ message: `order ${orderId} processed successfully!`,
+ };
+ yield ctx.callActivity(notifyActivity, orderCompletedNotification);
+
+ console.log(`Order ${orderId} processed successfully!`);
+ return new OrderResult(true);
+}
+```
+
{{% /codetab %}}
@@ -498,19 +800,23 @@ start().catch((e) => {
The `order-processor` console app starts and manages the lifecycle of an order processing workflow that stores and retrieves data in a state store. The workflow consists of five workflow activities, or tasks:
- `NotifyActivity`: Utilizes a logger to print out messages throughout the workflow
-- `ReserveInventoryActivity`: Checks the state store to ensure that there is enough inventory for the purchase
-- `ProcessPaymentActivity`: Processes and authorizes the payment
-- `UpdateInventoryActivity`: Removes the requested items from the state store and updates the store with the new remaining inventory value
+- `VerifyInventoryActivity`: Checks the state store to ensure that there is enough inventory for the purchase.
+- `RequestApprovalActivity`: Requests approval for orders over a certain threshold.
+- `ProcessPaymentActivity`: Processes and authorizes the payment.
+- `UpdateInventoryActivity`: Removes the requested items from the state store and updates the store with the new remaining inventory value.
### Step 1: Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
-- [.NET SDK or .NET 6 SDK installed](https://dotnet.microsoft.com/download).
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
+- [.NET 7](https://dotnet.microsoft.com/download/dotnet/7.0), [.NET 8](https://dotnet.microsoft.com/download/dotnet/8.0) or [.NET 9](https://dotnet.microsoft.com/download/dotnet/9.0) installed
+
+**NOTE:** .NET 7 is the minimum .NET version supported by Dapr Workflow in Dapr v1.15. Only .NET 8 and .NET 9 will be supported in Dapr v1.16 and later releases.
### Step 2: Set up the environment
@@ -552,31 +858,157 @@ This starts the `order-processor` app with unique workflow ID and runs the workf
Expected output:
```
-== APP == Starting workflow 6d2abcc9 purchasing 10 Cars
-
-== APP == info: Microsoft.DurableTask.Client.Grpc.GrpcDurableTaskClient[40]
-== APP == Scheduling new OrderProcessingWorkflow orchestration with instance ID '6d2abcc9' and 47 bytes of input data.
-== APP == info: WorkflowConsoleApp.Activities.NotifyActivity[0]
-== APP == Received order 6d2abcc9 for 10 Cars at $15000
-== APP == info: WorkflowConsoleApp.Activities.ReserveInventoryActivity[0]
-== APP == Reserving inventory for order 6d2abcc9 of 10 Cars
-== APP == info: WorkflowConsoleApp.Activities.ReserveInventoryActivity[0]
-== APP == There are: 100, Cars available for purchase
-
-== APP == Your workflow has started. Here is the status of the workflow: Dapr.Workflow.WorkflowState
-
-== APP == info: WorkflowConsoleApp.Activities.ProcessPaymentActivity[0]
-== APP == Processing payment: 6d2abcc9 for 10 Cars at $15000
-== APP == info: WorkflowConsoleApp.Activities.ProcessPaymentActivity[0]
-== APP == Payment for request ID '6d2abcc9' processed successfully
-== APP == info: WorkflowConsoleApp.Activities.UpdateInventoryActivity[0]
-== APP == Checking Inventory for: Order# 6d2abcc9 for 10 Cars
-== APP == info: WorkflowConsoleApp.Activities.UpdateInventoryActivity[0]
-== APP == There are now: 90 Cars left in stock
-== APP == info: WorkflowConsoleApp.Activities.NotifyActivity[0]
-== APP == Order 6d2abcc9 has completed!
-
-== APP == Workflow Status: Completed
+== APP - order-processor == Starting workflow 571a6e25 purchasing 1 Cars
+== APP - order-processor == info: Microsoft.DurableTask.Client.Grpc.GrpcDurableTaskClient[40]
+== APP - order-processor == Scheduling new OrderProcessingWorkflow orchestration with instance ID '571a6e25' and 45 bytes of input data.
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[100]
+== APP - order-processor == Start processing HTTP request POST http://localhost:37355/TaskHubSidecarService/StartInstance
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[100]
+== APP - order-processor == Sending HTTP request POST http://localhost:37355/TaskHubSidecarService/StartInstance
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[101]
+== APP - order-processor == Received HTTP response headers after 3045.9209ms - 200
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[101]
+== APP - order-processor == End processing HTTP request after 3046.0945ms - 200
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[101]
+== APP - order-processor == Received HTTP response headers after 3016.1346ms - 200
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[101]
+== APP - order-processor == End processing HTTP request after 3016.3572ms - 200
+== APP - order-processor == info: Microsoft.DurableTask.Client.Grpc.GrpcDurableTaskClient[42]
+== APP - order-processor == Waiting for instance '571a6e25' to start.
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[100]
+== APP - order-processor == Start processing HTTP request POST http://localhost:37355/TaskHubSidecarService/WaitForInstanceStart
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[100]
+== APP - order-processor == Sending HTTP request POST http://localhost:37355/TaskHubSidecarService/WaitForInstanceStart
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[100]
+== APP - order-processor == Start processing HTTP request POST http://localhost:37355/TaskHubSidecarService/CompleteOrchestratorTask
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[100]
+== APP - order-processor == Sending HTTP request POST http://localhost:37355/TaskHubSidecarService/CompleteOrchestratorTask
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[101]
+== APP - order-processor == Received HTTP response headers after 2.9095ms - 200
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[101]
+== APP - order-processor == End processing HTTP request after 3.0445ms - 200
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[101]
+== APP - order-processor == Received HTTP response headers after 99.446ms - 200
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[101]
+== APP - order-processor == End processing HTTP request after 99.5407ms - 200
+== APP - order-processor == Your workflow has started. Here is the status of the workflow: Running
+== APP - order-processor == info: Microsoft.DurableTask.Client.Grpc.GrpcDurableTaskClient[43]
+== APP - order-processor == Waiting for instance '571a6e25' to complete, fail, or terminate.
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[100]
+== APP - order-processor == Start processing HTTP request POST http://localhost:37355/TaskHubSidecarService/WaitForInstanceCompletion
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[100]
+== APP - order-processor == Sending HTTP request POST http://localhost:37355/TaskHubSidecarService/WaitForInstanceCompletion
+== APP - order-processor == info: WorkflowConsoleApp.Activities.NotifyActivity[1985924262]
+== APP - order-processor == Presenting notification Notification { Message = Received order 571a6e25 for 1 Cars at $5000 }
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[100]
+== APP - order-processor == Start processing HTTP request POST http://localhost:37355/TaskHubSidecarService/CompleteActivityTask
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[100]
+== APP - order-processor == Sending HTTP request POST http://localhost:37355/TaskHubSidecarService/CompleteActivityTask
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[101]
+== APP - order-processor == Received HTTP response headers after 1.6785ms - 200
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[101]
+== APP - order-processor == End processing HTTP request after 1.7869ms - 200
+== APP - order-processor == info: WorkflowConsoleApp.Workflows.OrderProcessingWorkflow[2013970020]
+== APP - order-processor == Received request ID '571a6e25' for 1 Cars at $5000
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[100]
+== APP - order-processor == Start processing HTTP request POST http://localhost:37355/TaskHubSidecarService/CompleteOrchestratorTask
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[100]
+== APP - order-processor == Sending HTTP request POST http://localhost:37355/TaskHubSidecarService/CompleteOrchestratorTask
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[101]
+== APP - order-processor == Received HTTP response headers after 1.1947ms - 200
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[101]
+== APP - order-processor == End processing HTTP request after 1.3293ms - 200
+== APP - order-processor == info: WorkflowConsoleApp.Activities.VerifyInventoryActivity[1478802116]
+== APP - order-processor == Reserving inventory for order request ID '571a6e25' of 1 Cars
+== APP - order-processor == info: WorkflowConsoleApp.Activities.VerifyInventoryActivity[1130866279]
+== APP - order-processor == There are: 10 Cars available for purchase
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[100]
+== APP - order-processor == Start processing HTTP request POST http://localhost:37355/TaskHubSidecarService/CompleteActivityTask
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[100]
+== APP - order-processor == Sending HTTP request POST http://localhost:37355/TaskHubSidecarService/CompleteActivityTask
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[101]
+== APP - order-processor == Received HTTP response headers after 1.8534ms - 200
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[101]
+== APP - order-processor == End processing HTTP request after 2.0077ms - 200
+== APP - order-processor == info: WorkflowConsoleApp.Workflows.OrderProcessingWorkflow[1162731597]
+== APP - order-processor == Checked inventory for request ID 'InventoryRequest { RequestId = 571a6e25, ItemName = Cars, Quantity = 1 }'
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[100]
+== APP - order-processor == Start processing HTTP request POST http://localhost:37355/TaskHubSidecarService/CompleteOrchestratorTask
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[100]
+== APP - order-processor == Sending HTTP request POST http://localhost:37355/TaskHubSidecarService/CompleteOrchestratorTask
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[101]
+== APP - order-processor == Received HTTP response headers after 1.1851ms - 200
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[101]
+== APP - order-processor == End processing HTTP request after 1.3742ms - 200
+== APP - order-processor == info: WorkflowConsoleApp.Activities.ProcessPaymentActivity[340284070]
+== APP - order-processor == Processing payment: request ID '571a6e25' for 1 Cars at $5000
+== APP - order-processor == info: WorkflowConsoleApp.Activities.ProcessPaymentActivity[1851315765]
+== APP - order-processor == Payment for request ID '571a6e25' processed successfully
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[100]
+== APP - order-processor == Start processing HTTP request POST http://localhost:37355/TaskHubSidecarService/CompleteActivityTask
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[100]
+== APP - order-processor == Sending HTTP request POST http://localhost:37355/TaskHubSidecarService/CompleteActivityTask
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[101]
+== APP - order-processor == Received HTTP response headers after 0.8249ms - 200
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[101]
+== APP - order-processor == End processing HTTP request after 0.9595ms - 200
+== APP - order-processor == info: WorkflowConsoleApp.Workflows.OrderProcessingWorkflow[340284070]
+== APP - order-processor == Processed payment request as there's sufficient inventory to proceed: PaymentRequest { RequestId = 571a6e25, ItemBeingPurchased = Cars, Amount = 1, Currency = 5000 }
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[100]
+== APP - order-processor == Start processing HTTP request POST http://localhost:37355/TaskHubSidecarService/CompleteOrchestratorTask
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[100]
+== APP - order-processor == Sending HTTP request POST http://localhost:37355/TaskHubSidecarService/CompleteOrchestratorTask
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[101]
+== APP - order-processor == Received HTTP response headers after 0.4457ms - 200
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[101]
+== APP - order-processor == End processing HTTP request after 0.5267ms - 200
+== APP - order-processor == info: WorkflowConsoleApp.Activities.UpdateInventoryActivity[2144991393]
+== APP - order-processor == Checking inventory for request ID '571a6e25' for 1 Cars
+== APP - order-processor == info: WorkflowConsoleApp.Activities.UpdateInventoryActivity[1901852920]
+== APP - order-processor == There are now 9 Cars left in stock
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[100]
+== APP - order-processor == Start processing HTTP request POST http://localhost:37355/TaskHubSidecarService/CompleteActivityTask
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[100]
+== APP - order-processor == Sending HTTP request POST http://localhost:37355/TaskHubSidecarService/CompleteActivityTask
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[101]
+== APP - order-processor == Received HTTP response headers after 0.6012ms - 200
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[101]
+== APP - order-processor == End processing HTTP request after 0.7097ms - 200
+== APP - order-processor == info: WorkflowConsoleApp.Workflows.OrderProcessingWorkflow[96138418]
+== APP - order-processor == Updating available inventory for PaymentRequest { RequestId = 571a6e25, ItemBeingPurchased = Cars, Amount = 1, Currency = 5000 }
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[100]
+== APP - order-processor == Start processing HTTP request POST http://localhost:37355/TaskHubSidecarService/CompleteOrchestratorTask
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[100]
+== APP - order-processor == Sending HTTP request POST http://localhost:37355/TaskHubSidecarService/CompleteOrchestratorTask
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[101]
+== APP - order-processor == Received HTTP response headers after 0.469ms - 200
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[101]
+== APP - order-processor == End processing HTTP request after 0.5431ms - 200
+== APP - order-processor == info: WorkflowConsoleApp.Activities.NotifyActivity[1985924262]
+== APP - order-processor == Presenting notification Notification { Message = Order 571a6e25 has completed! }
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[100]
+== APP - order-processor == Start processing HTTP request POST http://localhost:37355/TaskHubSidecarService/CompleteActivityTask
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[100]
+== APP - order-processor == Sending HTTP request POST http://localhost:37355/TaskHubSidecarService/CompleteActivityTask
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[101]
+== APP - order-processor == Received HTTP response headers after 0.494ms - 200
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[101]
+== APP - order-processor == End processing HTTP request after 0.5685ms - 200
+== APP - order-processor == info: WorkflowConsoleApp.Workflows.OrderProcessingWorkflow[510392223]
+== APP - order-processor == Order 571a6e25 has completed
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[100]
+== APP - order-processor == Start processing HTTP request POST http://localhost:37355/TaskHubSidecarService/CompleteOrchestratorTask
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[100]
+== APP - order-processor == Sending HTTP request POST http://localhost:37355/TaskHubSidecarService/CompleteOrchestratorTask
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[101]
+== APP - order-processor == Received HTTP response headers after 1.6353ms - 200
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[101]
+== APP - order-processor == End processing HTTP request after 1.7546ms - 200
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.ClientHandler[101]
+== APP - order-processor == Received HTTP response headers after 15807.213ms - 200
+== APP - order-processor == info: System.Net.Http.HttpClient.Default.LogicalHandler[101]
+== APP - order-processor == End processing HTTP request after 15807.3675ms - 200
+== APP - order-processor == Workflow Status: Completed
```
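While the workflow runs (and after it completes), the instance can also be queried directly from the Dapr sidecar over its workflow HTTP API. A hedged sketch — the HTTP port `3500` and the instance ID are placeholders, and the exact API path may vary across Dapr versions:

```
GET http://localhost:3500/v1.0/workflows/dapr/571a6e25
```

The response includes the instance's `runtimeStatus` (for example `RUNNING` or `COMPLETED`), matching the status lines printed in the output above.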
### (Optional) Step 4: View in Zipkin
@@ -595,14 +1027,15 @@ View the workflow trace spans in the Zipkin web UI (typically at `http://localho
When you ran `dapr run -f .`:
-1. A unique order ID for the workflow is generated (in the above example, `6d2abcc9`) and the workflow is scheduled.
-1. The `NotifyActivity` workflow activity sends a notification saying an order for 10 cars has been received.
-1. The `ReserveInventoryActivity` workflow activity checks the inventory data, determines if you can supply the ordered item, and responds with the number of cars in stock.
-1. Your workflow starts and notifies you of its status.
-1. The `ProcessPaymentActivity` workflow activity begins processing payment for order `6d2abcc9` and confirms if successful.
-1. The `UpdateInventoryActivity` workflow activity updates the inventory with the current available cars after the order has been processed.
-1. The `NotifyActivity` workflow activity sends a notification saying that order `6d2abcc9` has completed.
-1. The workflow terminates as completed.
+1. An `OrderPayload` is created containing one car.
+2. A unique order ID for the workflow is generated (in the above example, `571a6e25`) and the workflow is scheduled.
+3. The `NotifyActivity` workflow activity sends a notification saying an order for one car has been received.
+4. The `VerifyInventoryActivity` workflow activity checks the inventory data, determines if you can supply the ordered item, and responds with the number of cars in stock. The inventory is sufficient so the workflow continues.
+5. The total cost of the order is 5000, so the workflow will not call the `RequestApprovalActivity` activity.
+6. The `ProcessPaymentActivity` workflow activity begins processing payment for order `571a6e25` and confirms if successful.
+7. The `UpdateInventoryActivity` workflow activity updates the inventory with the current available cars after the order has been processed.
+8. The `NotifyActivity` workflow activity sends a notification saying that order `571a6e25` has completed.
+9. The workflow terminates as completed and the OrderResult is set to processed.
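In step 5, the approval path is skipped because the total cost does not exceed 5000. Had it been higher, the workflow would pause at `WaitForExternalEventAsync` until an `ApprovalEvent` arrived (or the 30-second timeout elapsed). As a hypothetical sketch of how such an event could be delivered through the sidecar's workflow HTTP API — the port, instance ID, and payload shape here are assumptions, not part of this quickstart:

```
POST http://localhost:3500/v1.0/workflows/dapr/571a6e25/raiseEvent/ApprovalEvent
Content-Type: application/json

{"IsApproved": true}
```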
#### `order-processor/Program.cs`
@@ -616,9 +1049,18 @@ In the application's program file:
```csharp
using Dapr.Client;
using Dapr.Workflow;
-//...
+using Microsoft.Extensions.Hosting;
+using Microsoft.Extensions.DependencyInjection;
+using WorkflowConsoleApp.Activities;
+using WorkflowConsoleApp.Models;
+using WorkflowConsoleApp.Workflows;
+
+const string storeName = "statestore";
+// The workflow host is a background service that connects to the sidecar over gRPC
+var builder = Host.CreateDefaultBuilder(args).ConfigureServices(services =>
{
+ services.AddDaprClient();
services.AddDaprWorkflow(options =>
{
// Note that it's also possible to register a lambda function as the workflow
@@ -627,111 +1069,171 @@ using Dapr.Workflow;
// These are the activities that get invoked by the workflow(s).
options.RegisterActivity<NotifyActivity>();
- options.RegisterActivity<ReserveInventoryActivity>();
+ options.RegisterActivity<VerifyInventoryActivity>();
+ options.RegisterActivity<RequestApprovalActivity>();
options.RegisterActivity<ProcessPaymentActivity>();
options.RegisterActivity<UpdateInventoryActivity>();
});
-};
+});
+
+// Start the app - this is the point where we connect to the Dapr sidecar
+using var host = builder.Build();
+host.Start();
-//...
+var daprClient = host.Services.GetRequiredService<DaprClient>();
+var workflowClient = host.Services.GetRequiredService<DaprWorkflowClient>();
// Generate a unique ID for the workflow
-string orderId = Guid.NewGuid().ToString()[..8];
-string itemToPurchase = "Cars";
-int ammountToPurchase = 10;
+var orderId = Guid.NewGuid().ToString()[..8];
+const string itemToPurchase = "Cars";
+const int amountToPurchase = 1;
+
+// Populate the store with items
+RestockInventory(itemToPurchase);
// Construct the order
-OrderPayload orderInfo = new OrderPayload(itemToPurchase, 15000, ammountToPurchase);
+var orderInfo = new OrderPayload(itemToPurchase, 5000, amountToPurchase);
// Start the workflow
-Console.WriteLine("Starting workflow {0} purchasing {1} {2}", orderId, ammountToPurchase, itemToPurchase);
+Console.WriteLine($"Starting workflow {orderId} purchasing {amountToPurchase} {itemToPurchase}");
-await daprWorkflowClient.ScheduleNewWorkflowAsync(
+await workflowClient.ScheduleNewWorkflowAsync(
name: nameof(OrderProcessingWorkflow),
- input: orderInfo,
- instanceId: orderId);
+ instanceId: orderId,
+ input: orderInfo);
// Wait for the workflow to start and confirm the input
-WorkflowState state = await daprWorkflowClient.WaitForWorkflowStartAsync(
+var state = await workflowClient.WaitForWorkflowStartAsync(
instanceId: orderId);
-Console.WriteLine($"{nameof(OrderProcessingWorkflow)} (ID = {orderId}) started successfully with {state.ReadInputAs<OrderPayload>()}");
+Console.WriteLine($"Your workflow has started. Here is the status of the workflow: {Enum.GetName(typeof(WorkflowRuntimeStatus), state.RuntimeStatus)}");
// Wait for the workflow to complete
-using var ctx = new CancellationTokenSource(TimeSpan.FromSeconds(5));
-state = await daprClient.WaitForWorkflowCompletionAsync(
- instanceId: orderId,
- cancellation: ctx.Token);
+state = await workflowClient.WaitForWorkflowCompletionAsync(
+ instanceId: orderId);
+
+Console.WriteLine("Workflow Status: {0}", Enum.GetName(typeof(WorkflowRuntimeStatus), state.RuntimeStatus));
+return;
+
+void RestockInventory(string itemToPurchase)
+{
+ daprClient.SaveStateAsync(storeName, itemToPurchase, new OrderPayload(Name: itemToPurchase, TotalCost: 50000, Quantity: 10));
+}
-Console.WriteLine("Workflow Status: {0}", state.ReadCustomStatusAs());
```
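`Program.cs` and the workflow class reference several record types from `WorkflowConsoleApp.Models` that are not shown in this document. A hypothetical reconstruction, inferred from the constructor calls and the log output above (the actual files in the quickstart's `Models` directory may differ):

```csharp
namespace WorkflowConsoleApp.Models;

// Hypothetical sketches inferred from usage; the field names for InventoryRequest,
// PaymentRequest, and Notification match the log output shown earlier.
public record OrderPayload(string Name, double TotalCost, int Quantity);
public record OrderResult(bool Processed);
public record Notification(string Message);
public record InventoryRequest(string RequestId, string ItemName, int Quantity);
public record InventoryResult(bool Success);
public record PaymentRequest(string RequestId, string ItemBeingPurchased, int Amount, double Currency);
public record ApprovalRequest(string RequestId, string ItemBeingPurchased, int Amount, double Currency);
public record ApprovalResponse(bool IsApproved);
```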
#### `order-processor/Workflows/OrderProcessingWorkflow.cs`
-In `OrderProcessingWorkflow.cs`, the workflow is defined as a class with all of its associated tasks (determined by workflow activities).
+In `OrderProcessingWorkflow.cs`, the workflow is defined as a class with all of its associated tasks (determined by workflow activities in separate files).
```csharp
+namespace WorkflowConsoleApp.Workflows;
+
+using Microsoft.Extensions.Logging;
+using System.Threading.Tasks;
using Dapr.Workflow;
-//...
+using DurableTask.Core.Exceptions;
+using Activities;
+using Models;
-class OrderProcessingWorkflow : Workflow<OrderPayload, OrderResult>
+internal sealed partial class OrderProcessingWorkflow : Workflow<OrderPayload, OrderResult>
+{
+ public override async Task<OrderResult> RunAsync(WorkflowContext context, OrderPayload order)
{
- public override async Task<OrderResult> RunAsync(WorkflowContext context, OrderPayload order)
+ var logger = context.CreateReplaySafeLogger<OrderProcessingWorkflow>();
+ var orderId = context.InstanceId;
+
+ // Notify the user that an order has come through
+ await context.CallActivityAsync(nameof(NotifyActivity),
+ new Notification($"Received order {orderId} for {order.Quantity} {order.Name} at ${order.TotalCost}"));
+ LogOrderReceived(logger, orderId, order.Quantity, order.Name, order.TotalCost);
+
+ // Determine if there is enough of the item available for purchase by checking the inventory
+ var inventoryRequest = new InventoryRequest(RequestId: orderId, order.Name, order.Quantity);
+ var result = await context.CallActivityAsync<InventoryResult>(
+ nameof(VerifyInventoryActivity), inventoryRequest);
+ LogCheckInventory(logger, inventoryRequest);
+
+ // If there is insufficient inventory, fail and let the user know
+ if (!result.Success)
{
- string orderId = context.InstanceId;
-
- // Notify the user that an order has come through
- await context.CallActivityAsync(
- nameof(NotifyActivity),
- new Notification($"Received order {orderId} for {order.Quantity} {order.Name} at ${order.TotalCost}"));
-
- string requestId = context.InstanceId;
+ // End the workflow here since we don't have sufficient inventory
+ await context.CallActivityAsync(nameof(NotifyActivity),
+ new Notification($"Insufficient inventory for {order.Name}"));
+ LogInsufficientInventory(logger, order.Name);
+ return new OrderResult(Processed: false);
+ }
- // Determine if there is enough of the item available for purchase by checking the inventory
- InventoryResult result = await context.CallActivityAsync<InventoryResult>(
- nameof(ReserveInventoryActivity),
- new InventoryRequest(RequestId: orderId, order.Name, order.Quantity));
+ if (order.TotalCost > 5000)
+ {
+ await context.CallActivityAsync(nameof(RequestApprovalActivity),
+ new ApprovalRequest(orderId, order.Name, order.Quantity, order.TotalCost));
- // If there is insufficient inventory, fail and let the user know
- if (!result.Success)
+ var approvalResponse = await context.WaitForExternalEventAsync<ApprovalResponse>(
+ eventName: "ApprovalEvent",
+ timeout: TimeSpan.FromSeconds(30));
+ if (!approvalResponse.IsApproved)
{
- // End the workflow here since we don't have sufficient inventory
- await context.CallActivityAsync(
- nameof(NotifyActivity),
- new Notification($"Insufficient inventory for {order.Name}"));
+ await context.CallActivityAsync(nameof(NotifyActivity),
+ new Notification($"Order {orderId} was not approved"));
+ LogOrderNotApproved(logger, orderId);
return new OrderResult(Processed: false);
}
+ }
- // There is enough inventory available so the user can purchase the item(s). Process their payment
- await context.CallActivityAsync(
- nameof(ProcessPaymentActivity),
- new PaymentRequest(RequestId: orderId, order.Name, order.Quantity, order.TotalCost));
+ // There is enough inventory available so the user can purchase the item(s). Process their payment
+ var processPaymentRequest = new PaymentRequest(RequestId: orderId, order.Name, order.Quantity, order.TotalCost);
+ await context.CallActivityAsync(nameof(ProcessPaymentActivity), processPaymentRequest);
+ LogPaymentProcessing(logger, processPaymentRequest);
- try
- {
- // There is enough inventory available so the user can purchase the item(s). Process their payment
- await context.CallActivityAsync(
- nameof(UpdateInventoryActivity),
- new PaymentRequest(RequestId: orderId, order.Name, order.Quantity, order.TotalCost));
- }
- catch (WorkflowTaskFailedException)
- {
- // Let them know their payment was processed
- await context.CallActivityAsync(
- nameof(NotifyActivity),
- new Notification($"Order {orderId} Failed! You are now getting a refund"));
- return new OrderResult(Processed: false);
- }
+ try
+ {
+ // Update the available inventory
+ var paymentRequest = new PaymentRequest(RequestId: orderId, order.Name, order.Quantity, order.TotalCost);
+ await context.CallActivityAsync(nameof(UpdateInventoryActivity), paymentRequest);
+ LogInventoryUpdate(logger, paymentRequest);
+ }
+ catch (TaskFailedException)
+ {
+ // Let them know their payment was processed, but there's insufficient inventory, so they're getting a refund
+ await context.CallActivityAsync(nameof(NotifyActivity),
+ new Notification($"Order {orderId} Failed! You are now getting a refund"));
+ LogRefund(logger, orderId);
+ return new OrderResult(Processed: false);
+ }
- // Let them know their payment was processed
- await context.CallActivityAsync(
- nameof(NotifyActivity),
- new Notification($"Order {orderId} has completed!"));
+ // Let them know their payment was processed
+ await context.CallActivityAsync(nameof(NotifyActivity), new Notification($"Order {orderId} has completed!"));
+ LogSuccessfulOrder(logger, orderId);
- // End the workflow with a success result
- return new OrderResult(Processed: true);
- }
+ // End the workflow with a success result
+ return new OrderResult(Processed: true);
}
+
+ [LoggerMessage(LogLevel.Information, "Received request ID '{request}' for {quantity} {name} at ${totalCost}")]
+ static partial void LogOrderReceived(ILogger logger, string request, int quantity, string name, double totalCost);
+
+ [LoggerMessage(LogLevel.Information, "Checked inventory for request ID '{request}'")]
+ static partial void LogCheckInventory(ILogger logger, InventoryRequest request);
+
+ [LoggerMessage(LogLevel.Information, "Insufficient inventory for order {orderName}")]
+ static partial void LogInsufficientInventory(ILogger logger, string orderName);
+
+ [LoggerMessage(LogLevel.Information, "Order {orderName} was not approved")]
+ static partial void LogOrderNotApproved(ILogger logger, string orderName);
+
+ [LoggerMessage(LogLevel.Information, "Processed payment request as there's sufficient inventory to proceed: {request}")]
+ static partial void LogPaymentProcessing(ILogger logger, PaymentRequest request);
+
+ [LoggerMessage(LogLevel.Information, "Updating available inventory for {request}")]
+ static partial void LogInventoryUpdate(ILogger logger, PaymentRequest request);
+
+ [LoggerMessage(LogLevel.Information, "Order {orderId} failed due to insufficient inventory - processing refund")]
+ static partial void LogRefund(ILogger logger, string orderId);
+
+ [LoggerMessage(LogLevel.Information, "Order {orderId} has completed")]
+ static partial void LogSuccessfulOrder(ILogger logger, string orderId);
+}
```
#### `order-processor/Activities` directory
@@ -739,7 +1241,8 @@ class OrderProcessingWorkflow : Workflow
-The `Activities` directory holds the four workflow activities used by the workflow, defined in the following files:
+The `Activities` directory holds the five workflow activities used by the workflow, defined in the following files:
- `NotifyActivity.cs`
-- `ReserveInventoryActivity.cs`
+- `VerifyInventoryActivity.cs`
+- `RequestApprovalActivity.cs`
- `ProcessPaymentActivity.cs`
- `UpdateInventoryActivity.cs`
@@ -756,11 +1259,11 @@ Watch [this video to walk through the Dapr Workflow .NET demo](https://youtu.be/
-The `order-processor` console app starts and manages the lifecycle of an order processing workflow that stores and retrieves data in a state store. The workflow consists of four workflow activities, or tasks:
+The `order-processor` console app starts and manages the lifecycle of an order processing workflow that stores and retrieves data in a state store. The workflow consists of five workflow activities, or tasks:
-- `NotifyActivity`: Utilizes a logger to print out messages throughout the workflow
-- `RequestApprovalActivity`: Requests approval for processing payment
-- `ReserveInventoryActivity`: Checks the state store to ensure that there is enough inventory for the purchase
-- `ProcessPaymentActivity`: Processes and authorizes the payment
-- `UpdateInventoryActivity`: Removes the requested items from the state store and updates the store with the new remaining inventory value
+- `NotifyActivity`: Utilizes a logger to print out messages throughout the workflow.
+- `RequestApprovalActivity`: Requests approval for orders over a certain cost threshold.
+- `VerifyInventoryActivity`: Checks the state store to ensure that there is enough inventory for the purchase.
+- `ProcessPaymentActivity`: Processes and authorizes the payment.
+- `UpdateInventoryActivity`: Removes the requested items from the state store and updates the store with the new remaining inventory value.
### Step 1: Pre-requisites
@@ -816,34 +1319,39 @@ This starts the `order-processor` app with unique workflow ID and runs the workf
Expected output:
```
-== APP == *** Welcome to the Dapr Workflow console app sample!
-== APP == *** Using this app, you can place orders that start workflows.
-== APP == Start workflow runtime
-== APP == Sep 20, 2023 3:23:05 PM com.microsoft.durabletask.DurableTaskGrpcWorker startAndBlock
-== APP == INFO: Durable Task worker is connecting to sidecar at 127.0.0.1:50001.
-
-== APP == ==========Begin the purchase of item:==========
-== APP == Starting order workflow, purchasing 10 of cars
-
-== APP == scheduled new workflow instance of OrderProcessingWorkflow with instance ID: edceba90-9c45-4be8-ad40-60d16e060797
-== APP == [Thread-0] INFO io.dapr.workflows.WorkflowContext - Starting Workflow: io.dapr.quickstarts.workflows.OrderProcessingWorkflow
-== APP == [Thread-0] INFO io.dapr.workflows.WorkflowContext - Instance ID(order ID): edceba90-9c45-4be8-ad40-60d16e060797
-== APP == [Thread-0] INFO io.dapr.workflows.WorkflowContext - Current Orchestration Time: 2023-09-20T19:23:09.755Z
-== APP == [Thread-0] INFO io.dapr.workflows.WorkflowContext - Received Order: OrderPayload [itemName=cars, totalCost=150000, quantity=10]
-== APP == [Thread-0] INFO io.dapr.quickstarts.workflows.activities.NotifyActivity - Received Order: OrderPayload [itemName=cars, totalCost=150000, quantity=10]
-== APP == workflow instance edceba90-9c45-4be8-ad40-60d16e060797 started
-== APP == [Thread-0] INFO io.dapr.quickstarts.workflows.activities.ReserveInventoryActivity - Reserving inventory for order 'edceba90-9c45-4be8-ad40-60d16e060797' of 10 cars
-== APP == [Thread-0] INFO io.dapr.quickstarts.workflows.activities.ReserveInventoryActivity - There are 100 cars available for purchase
-== APP == [Thread-0] INFO io.dapr.quickstarts.workflows.activities.ReserveInventoryActivity - Reserved inventory for order 'edceba90-9c45-4be8-ad40-60d16e060797' of 10 cars
-== APP == [Thread-0] INFO io.dapr.quickstarts.workflows.activities.RequestApprovalActivity - Requesting approval for order: OrderPayload [itemName=cars, totalCost=150000, quantity=10]
-== APP == [Thread-0] INFO io.dapr.quickstarts.workflows.activities.RequestApprovalActivity - Approved requesting approval for order: OrderPayload [itemName=cars, totalCost=150000, quantity=10]
-== APP == [Thread-0] INFO io.dapr.quickstarts.workflows.activities.ProcessPaymentActivity - Processing payment: edceba90-9c45-4be8-ad40-60d16e060797 for 10 cars at $150000
-== APP == [Thread-0] INFO io.dapr.quickstarts.workflows.activities.ProcessPaymentActivity - Payment for request ID 'edceba90-9c45-4be8-ad40-60d16e060797' processed successfully
-== APP == [Thread-0] INFO io.dapr.quickstarts.workflows.activities.UpdateInventoryActivity - Updating inventory for order 'edceba90-9c45-4be8-ad40-60d16e060797' of 10 cars
-== APP == [Thread-0] INFO io.dapr.quickstarts.workflows.activities.UpdateInventoryActivity - Updated inventory for order 'edceba90-9c45-4be8-ad40-60d16e060797': there are now 90 cars left in stock
-== APP == [Thread-0] INFO io.dapr.quickstarts.workflows.activities.NotifyActivity - Order completed! : edceba90-9c45-4be8-ad40-60d16e060797
-
-== APP == workflow instance edceba90-9c45-4be8-ad40-60d16e060797 completed, out is: {"processed":true}
+== APP - order-processor == *** Welcome to the Dapr Workflow console app sample!
+== APP - order-processor == *** Using this app, you can place orders that start workflows.
+== APP - order-processor == [main] INFO io.dapr.workflows.runtime.WorkflowRuntimeBuilder - Registered Workflow: OrderProcessingWorkflow
+== APP - order-processor == [main] INFO io.dapr.workflows.runtime.WorkflowRuntimeBuilder - Registered Activity: NotifyActivity
+== APP - order-processor == [main] INFO io.dapr.workflows.runtime.WorkflowRuntimeBuilder - Registered Activity: ProcessPaymentActivity
+== APP - order-processor == [main] INFO io.dapr.workflows.runtime.WorkflowRuntimeBuilder - Registered Activity: RequestApprovalActivity
+== APP - order-processor == [main] INFO io.dapr.workflows.runtime.WorkflowRuntimeBuilder - Registered Activity: VerifyInventoryActivity
+== APP - order-processor == [main] INFO io.dapr.workflows.runtime.WorkflowRuntimeBuilder - Registered Activity: UpdateInventoryActivity
+== APP - order-processor == [main] INFO io.dapr.workflows.runtime.WorkflowRuntimeBuilder - List of registered workflows: [io.dapr.quickstarts.workflows.OrderProcessingWorkflow]
+== APP - order-processor == [main] INFO io.dapr.workflows.runtime.WorkflowRuntimeBuilder - List of registered activites: [io.dapr.quickstarts.workflows.activities.NotifyActivity, io.dapr.quickstarts.workflows.activities.UpdateInventoryActivity, io.dapr.quickstarts.workflows.activities.ProcessPaymentActivity, io.dapr.quickstarts.workflows.activities.RequestApprovalActivity, io.dapr.quickstarts.workflows.activities.VerifyInventoryActivity]
+== APP - order-processor == [main] INFO io.dapr.workflows.runtime.WorkflowRuntimeBuilder - Successfully built dapr workflow runtime
+== APP - order-processor == Start workflow runtime
+== APP - order-processor == Feb 12, 2025 2:44:13 PM com.microsoft.durabletask.DurableTaskGrpcWorker startAndBlock
+== APP - order-processor == INFO: Durable Task worker is connecting to sidecar at 127.0.0.1:39261.
+== APP - order-processor == ==========Begin the purchase of item:==========
+== APP - order-processor == Starting order workflow, purchasing 1 of cars
+== APP - order-processor == scheduled new workflow instance of OrderProcessingWorkflow with instance ID: d1bf548b-c854-44af-978e-90c61ed88e3c
+== APP - order-processor == [Thread-0] INFO io.dapr.workflows.WorkflowContext - Starting Workflow: io.dapr.quickstarts.workflows.OrderProcessingWorkflow
+== APP - order-processor == [Thread-0] INFO io.dapr.workflows.WorkflowContext - Instance ID(order ID): d1bf548b-c854-44af-978e-90c61ed88e3c
+== APP - order-processor == [Thread-0] INFO io.dapr.workflows.WorkflowContext - Current Orchestration Time: 2025-02-12T14:44:18.154Z
+== APP - order-processor == [Thread-0] INFO io.dapr.workflows.WorkflowContext - Received Order: OrderPayload [itemName=cars, totalCost=5000, quantity=1]
+== APP - order-processor == [Thread-0] INFO io.dapr.quickstarts.workflows.activities.NotifyActivity - Received Order: OrderPayload [itemName=cars, totalCost=5000, quantity=1]
+== APP - order-processor == workflow instance d1bf548b-c854-44af-978e-90c61ed88e3c started
+== APP - order-processor == [Thread-0] INFO io.dapr.quickstarts.workflows.activities.VerifyInventoryActivity - Verifying inventory for order 'd1bf548b-c854-44af-978e-90c61ed88e3c' of 1 cars
+== APP - order-processor == [Thread-0] INFO io.dapr.quickstarts.workflows.activities.VerifyInventoryActivity - There are 10 cars available for purchase
+== APP - order-processor == [Thread-0] INFO io.dapr.quickstarts.workflows.activities.VerifyInventoryActivity - Verified inventory for order 'd1bf548b-c854-44af-978e-90c61ed88e3c' of 1 cars
+== APP - order-processor == [Thread-0] INFO io.dapr.quickstarts.workflows.activities.ProcessPaymentActivity - Processing payment: d1bf548b-c854-44af-978e-90c61ed88e3c for 1 cars at $5000
+== APP - order-processor == [Thread-0] INFO io.dapr.quickstarts.workflows.activities.ProcessPaymentActivity - Payment for request ID 'd1bf548b-c854-44af-978e-90c61ed88e3c' processed successfully
+== APP - order-processor == [Thread-0] INFO io.dapr.quickstarts.workflows.activities.UpdateInventoryActivity - Updating inventory for order 'd1bf548b-c854-44af-978e-90c61ed88e3c' of 1 cars
+== APP - order-processor == [Thread-0] INFO io.dapr.quickstarts.workflows.activities.UpdateInventoryActivity - Updated inventory for order 'd1bf548b-c854-44af-978e-90c61ed88e3c': there are now 9 cars left in stock
+== APP - order-processor == there are now 9 cars left in stock
+== APP - order-processor == [Thread-0] INFO io.dapr.quickstarts.workflows.activities.NotifyActivity - Order completed! : d1bf548b-c854-44af-978e-90c61ed88e3c
+== APP - order-processor == workflow instance completed, out is: {"processed":true}
```
### (Optional) Step 4: View in Zipkin
@@ -862,14 +1370,15 @@ View the workflow trace spans in the Zipkin web UI (typically at `http://localho
When you ran `dapr run -f .`:
-1. A unique order ID for the workflow is generated (in the above example, `edceba90-9c45-4be8-ad40-60d16e060797`) and the workflow is scheduled.
-1. The `NotifyActivity` workflow activity sends a notification saying an order for 10 cars has been received.
-1. The `ReserveInventoryActivity` workflow activity checks the inventory data, determines if you can supply the ordered item, and responds with the number of cars in stock.
-1. Once approved, your workflow starts and notifies you of its status.
-1. The `ProcessPaymentActivity` workflow activity begins processing payment for order `edceba90-9c45-4be8-ad40-60d16e060797` and confirms if successful.
-1. The `UpdateInventoryActivity` workflow activity updates the inventory with the current available cars after the order has been processed.
-1. The `NotifyActivity` workflow activity sends a notification saying that order `edceba90-9c45-4be8-ad40-60d16e060797` has completed.
-1. The workflow terminates as completed.
+1. An `OrderPayload` is created containing one car.
+2. A unique order ID for the workflow is generated (in the above example, `d1bf548b-c854-44af-978e-90c61ed88e3c`) and the workflow is scheduled.
+3. The `NotifyActivity` workflow activity sends a notification saying an order for one car has been received.
+4. The `VerifyInventoryActivity` workflow activity checks the inventory data, determines if you can supply the ordered item, and responds with the number of cars in stock. The inventory is sufficient, so the workflow continues.
+5. The total cost of the order is 5000, which does not exceed the approval threshold, so the workflow does not call the `RequestApprovalActivity` activity.
+6. The `ProcessPaymentActivity` workflow activity begins processing payment for order `d1bf548b-c854-44af-978e-90c61ed88e3c` and confirms if successful.
+7. The `UpdateInventoryActivity` workflow activity updates the inventory with the current available cars after the order has been processed.
+8. The `NotifyActivity` workflow activity sends a notification saying that order `d1bf548b-c854-44af-978e-90c61ed88e3c` has completed.
+9. The workflow terminates as completed and `orderResult` is set to processed.
#### `order-processor/WorkflowConsoleApp.java`
@@ -881,15 +1390,34 @@ In the application's program file:
```java
package io.dapr.quickstarts.workflows;
+
+import java.time.Duration;
+import java.util.concurrent.TimeoutException;
+
import io.dapr.client.DaprClient;
import io.dapr.client.DaprClientBuilder;
+import io.dapr.quickstarts.workflows.activities.NotifyActivity;
+import io.dapr.quickstarts.workflows.activities.ProcessPaymentActivity;
+import io.dapr.quickstarts.workflows.activities.RequestApprovalActivity;
+import io.dapr.quickstarts.workflows.activities.VerifyInventoryActivity;
+import io.dapr.quickstarts.workflows.activities.UpdateInventoryActivity;
+import io.dapr.quickstarts.workflows.models.InventoryItem;
+import io.dapr.quickstarts.workflows.models.OrderPayload;
import io.dapr.workflows.client.DaprWorkflowClient;
+import io.dapr.workflows.client.WorkflowInstanceStatus;
+import io.dapr.workflows.runtime.WorkflowRuntime;
+import io.dapr.workflows.runtime.WorkflowRuntimeBuilder;
public class WorkflowConsoleApp {
- private static final String STATE_STORE_NAME = "statestore-actors";
+ private static final String STATE_STORE_NAME = "statestore";
- // ...
+ /**
+ * The main method of this console app.
+ *
+ * @param args The port the app will listen on.
+ * @throws Exception An Exception.
+ */
public static void main(String[] args) throws Exception {
System.out.println("*** Welcome to the Dapr Workflow console app sample!");
System.out.println("*** Using this app, you can place orders that start workflows.");
@@ -901,10 +1429,10 @@ public class WorkflowConsoleApp {
builder.registerActivity(NotifyActivity.class);
builder.registerActivity(ProcessPaymentActivity.class);
builder.registerActivity(RequestApprovalActivity.class);
- builder.registerActivity(ReserveInventoryActivity.class);
+ builder.registerActivity(VerifyInventoryActivity.class);
builder.registerActivity(UpdateInventoryActivity.class);
- // Build the workflow runtime
+ // Build and then start the workflow runtime pulling and executing tasks
try (WorkflowRuntime runtime = builder.build()) {
System.out.println("Start workflow runtime");
runtime.start(false);
@@ -919,7 +1447,6 @@ public class WorkflowConsoleApp {
}
- // Start the workflow runtime, pulling and executing tasks
private static void executeWorkflow(DaprWorkflowClient workflowClient, InventoryItem inventory) {
System.out.println("==========Begin the purchase of item:==========");
String itemName = inventory.getName();
@@ -935,7 +1462,6 @@ public class WorkflowConsoleApp {
System.out.printf("scheduled new workflow instance of OrderProcessingWorkflow with instance ID: %s%n",
instanceId);
- // Check workflow instance start status
try {
workflowClient.waitForInstanceStart(instanceId, Duration.ofSeconds(10), false);
System.out.printf("workflow instance %s started%n", instanceId);
@@ -944,13 +1470,12 @@ public class WorkflowConsoleApp {
return;
}
- // Check workflow instance complete status
try {
WorkflowInstanceStatus workflowStatus = workflowClient.waitForInstanceCompletion(instanceId,
Duration.ofSeconds(30),
true);
if (workflowStatus != null) {
- System.out.printf("workflow instance %s completed, out is: %s %n", instanceId,
+ System.out.printf("workflow instance completed, out is: %s%n",
workflowStatus.getSerializedOutput());
} else {
System.out.printf("workflow instance %s not found%n", instanceId);
@@ -962,19 +1487,19 @@ public class WorkflowConsoleApp {
}
private static InventoryItem prepareInventoryAndOrder() {
- // prepare 100 cars in inventory
+ // prepare 10 cars in inventory
InventoryItem inventory = new InventoryItem();
inventory.setName("cars");
- inventory.setPerItemCost(15000);
- inventory.setQuantity(100);
+ inventory.setPerItemCost(50000);
+ inventory.setQuantity(10);
DaprClient daprClient = new DaprClientBuilder().build();
restockInventory(daprClient, inventory);
    // prepare order for 1 car
InventoryItem order = new InventoryItem();
order.setName("cars");
- order.setPerItemCost(15000);
- order.setQuantity(10);
+ order.setPerItemCost(5000);
+ order.setQuantity(1);
return order;
}
@@ -983,6 +1508,7 @@ public class WorkflowConsoleApp {
daprClient.saveState(STATE_STORE_NAME, key, inventory).block();
}
}
+
```
#### `OrderProcessingWorkflow.java`
@@ -991,7 +1517,24 @@ In `OrderProcessingWorkflow.java`, the workflow is defined as a class with all o
```java
package io.dapr.quickstarts.workflows;
+
+import java.time.Duration;
+import org.slf4j.Logger;
+
+import io.dapr.quickstarts.workflows.activities.NotifyActivity;
+import io.dapr.quickstarts.workflows.activities.ProcessPaymentActivity;
+import io.dapr.quickstarts.workflows.activities.RequestApprovalActivity;
+import io.dapr.quickstarts.workflows.activities.VerifyInventoryActivity;
+import io.dapr.quickstarts.workflows.activities.UpdateInventoryActivity;
+import io.dapr.quickstarts.workflows.models.ApprovalResponse;
+import io.dapr.quickstarts.workflows.models.InventoryRequest;
+import io.dapr.quickstarts.workflows.models.InventoryResult;
+import io.dapr.quickstarts.workflows.models.Notification;
+import io.dapr.quickstarts.workflows.models.OrderPayload;
+import io.dapr.quickstarts.workflows.models.OrderResult;
+import io.dapr.quickstarts.workflows.models.PaymentRequest;
import io.dapr.workflows.Workflow;
+import io.dapr.workflows.WorkflowStub;
public class OrderProcessingWorkflow extends Workflow {
@@ -1020,7 +1563,7 @@ public class OrderProcessingWorkflow extends Workflow {
inventoryRequest.setRequestId(orderId);
inventoryRequest.setItemName(order.getItemName());
inventoryRequest.setQuantity(order.getQuantity());
- InventoryResult inventoryResult = ctx.callActivity(ReserveInventoryActivity.class.getName(),
+ InventoryResult inventoryResult = ctx.callActivity(VerifyInventoryActivity.class.getName(),
inventoryRequest, InventoryResult.class).await();
// If there is insufficient inventory, fail and let the user know
@@ -1033,9 +1576,11 @@ public class OrderProcessingWorkflow extends Workflow {
// Require orders over a certain threshold to be approved
if (order.getTotalCost() > 5000) {
- ApprovalResult approvalResult = ctx.callActivity(RequestApprovalActivity.class.getName(),
- order, ApprovalResult.class).await();
- if (approvalResult != ApprovalResult.Approved) {
+ ctx.callActivity(RequestApprovalActivity.class.getName(), order).await();
+
+ ApprovalResponse approvalResponse = ctx.waitForExternalEvent("approvalEvent",
+ Duration.ofSeconds(30), ApprovalResponse.class).await();
+ if (!approvalResponse.isApproved()) {
notification.setMessage("Order " + order.getItemName() + " was not approved.");
ctx.callActivity(NotifyActivity.class.getName(), notification).await();
ctx.complete(orderResult);
@@ -1092,7 +1637,7 @@ public class OrderProcessingWorkflow extends Workflow {
The `Activities` directory holds the four workflow activities used by the workflow, defined in the following files:
- [`NotifyActivity.java`](https://github.com/dapr/quickstarts/tree/master/workflows/java/sdk/order-processor/src/main/java/io/dapr/quickstarts/workflows/activities/NotifyActivity.java)
- [`RequestApprovalActivity`](https://github.com/dapr/quickstarts/tree/master/workflows/java/sdk/order-processor/src/main/java/io/dapr/quickstarts/workflows/activities/RequestApprovalActivity.java)
-- [`ReserveInventoryActivity`](https://github.com/dapr/quickstarts/tree/master/workflows/java/sdk/order-processor/src/main/java/io/dapr/quickstarts/workflows/activities/ReserveInventoryActivity.java)
+- [`VerifyInventoryActivity`](https://github.com/dapr/quickstarts/tree/master/workflows/java/sdk/order-processor/src/main/java/io/dapr/quickstarts/workflows/activities/VerifyInventoryActivity.java)
- [`ProcessPaymentActivity`](https://github.com/dapr/quickstarts/tree/master/workflows/java/sdk/order-processor/src/main/java/io/dapr/quickstarts/workflows/activities/ProcessPaymentActivity.java)
- [`UpdateInventoryActivity`](https://github.com/dapr/quickstarts/tree/master/workflows/java/sdk/order-processor/src/main/java/io/dapr/quickstarts/workflows/activities/UpdateInventoryActivity.java)
@@ -1106,10 +1651,10 @@ The `order-processor` console app starts and manages the `OrderProcessingWorkflo
- `NotifyActivity`: Utilizes a logger to print out messages throughout the workflow. These messages notify you when:
- You have insufficient inventory
- Your payment couldn't be processed, etc.
-- `ProcessPaymentActivity`: Processes and authorizes the payment.
- `VerifyInventoryActivity`: Checks the state store to ensure there is enough inventory present for purchase.
+- `RequestApprovalActivity`: Requests approval for orders over a certain cost threshold.
+- `ProcessPaymentActivity`: Processes and authorizes the payment.
- `UpdateInventoryActivity`: Removes the requested items from the state store and updates the store with the new remaining inventory value.
-- `RequestApprovalActivity`: Seeks approval from the manager if payment is greater than 50,000 USD.
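The heart of `UpdateInventoryActivity` is a guarded decrement of the stored quantity. A minimal standalone sketch of that rule is below — the `decrementStock` helper name is illustrative (the quickstart does this inline, with a state-store round trip that is omitted here):

```go
package main

import "fmt"

// InventoryItem mirrors the quickstart's model.
type InventoryItem struct {
	ItemName    string
	PerItemCost int
	Quantity    int
}

// decrementStock applies the same rule as UpdateInventoryActivity:
// reject the order if it would drive the stored quantity below zero,
// otherwise return the updated item.
func decrementStock(item InventoryItem, ordered int) (InventoryItem, error) {
	newQuantity := item.Quantity - ordered
	if newQuantity < 0 {
		return item, fmt.Errorf("insufficient inventory for: %s", item.ItemName)
	}
	item.Quantity = newQuantity
	return item, nil
}

func main() {
	cars := InventoryItem{ItemName: "cars", PerItemCost: 5000, Quantity: 10}
	updated, err := decrementStock(cars, 1)
	fmt.Println(updated.Quantity, err) // 9 <nil>
}
```

In the real activity, the updated item is then marshaled back to JSON and saved to the state store, which is why the quickstart's log shows "There are now 9 cars left in stock".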
### Step 1: Pre-requisites
@@ -1150,23 +1695,22 @@ Expected output:
```bash
== APP - order-processor == *** Welcome to the Dapr Workflow console app sample!
== APP - order-processor == *** Using this app, you can place orders that start workflows.
-== APP - order-processor == dapr client initializing for: 127.0.0.1:50056
+== APP - order-processor == dapr client initializing for: 127.0.0.1:46533
+== APP - order-processor == INFO: 2025/02/13 13:18:33 connecting work item listener stream
+== APP - order-processor == 2025/02/13 13:18:33 work item listener started
+== APP - order-processor == INFO: 2025/02/13 13:18:33 starting background processor
== APP - order-processor == adding base stock item: paperclip
-== APP - order-processor == 2024/02/01 12:59:52 work item listener started
-== APP - order-processor == INFO: 2024/02/01 12:59:52 starting background processor
== APP - order-processor == adding base stock item: cars
== APP - order-processor == adding base stock item: computers
== APP - order-processor == ==========Begin the purchase of item:==========
-== APP - order-processor == NotifyActivity: Received order 48ee83b7-5d80-48d5-97f9-6b372f5480a5 for 10 cars - $150000
-== APP - order-processor == VerifyInventoryActivity: Verifying inventory for order 48ee83b7-5d80-48d5-97f9-6b372f5480a5 of 10 cars
-== APP - order-processor == VerifyInventoryActivity: There are 100 cars available for purchase
-== APP - order-processor == RequestApprovalActivity: Requesting approval for payment of 150000USD for 10 cars
-== APP - order-processor == NotifyActivity: Payment for order 48ee83b7-5d80-48d5-97f9-6b372f5480a5 has been approved!
-== APP - order-processor == ProcessPaymentActivity: 48ee83b7-5d80-48d5-97f9-6b372f5480a5 for 10 - cars (150000USD)
-== APP - order-processor == UpdateInventoryActivity: Checking Inventory for order 48ee83b7-5d80-48d5-97f9-6b372f5480a5 for 10 * cars
-== APP - order-processor == UpdateInventoryActivity: There are now 90 cars left in stock
-== APP - order-processor == NotifyActivity: Order 48ee83b7-5d80-48d5-97f9-6b372f5480a5 has completed!
-== APP - order-processor == Workflow completed - result: COMPLETED
+== APP - order-processor == NotifyActivity: Received order b4cb2687-1af0-4f8d-9659-eb6389c07ade for 1 cars - $5000
+== APP - order-processor == VerifyInventoryActivity: Verifying inventory for order b4cb2687-1af0-4f8d-9659-eb6389c07ade of 1 cars
+== APP - order-processor == VerifyInventoryActivity: There are 10 cars available for purchase
+== APP - order-processor == ProcessPaymentActivity: b4cb2687-1af0-4f8d-9659-eb6389c07ade for 1 - cars (5000USD)
+== APP - order-processor == UpdateInventoryActivity: Checking Inventory for order b4cb2687-1af0-4f8d-9659-eb6389c07ade for 1 * cars
+== APP - order-processor == UpdateInventoryActivity: There are now 9 cars left in stock
+== APP - order-processor == NotifyActivity: Order b4cb2687-1af0-4f8d-9659-eb6389c07ade has completed!
+== APP - order-processor == workflow status: COMPLETED
== APP - order-processor == Purchase of item is complete
```
@@ -1192,14 +1736,15 @@ View the workflow trace spans in the Zipkin web UI (typically at `http://localho
When you ran `dapr run`:
-1. A unique order ID for the workflow is generated (in the above example, `48ee83b7-5d80-48d5-97f9-6b372f5480a5`) and the workflow is scheduled.
-1. The `NotifyActivity` workflow activity sends a notification saying an order for 10 cars has been received.
-1. The `ReserveInventoryActivity` workflow activity checks the inventory data, determines if you can supply the ordered item, and responds with the number of cars in stock.
-1. Your workflow starts and notifies you of its status.
-1. The `ProcessPaymentActivity` workflow activity begins processing payment for order `48ee83b7-5d80-48d5-97f9-6b372f5480a5` and confirms if successful.
-1. The `UpdateInventoryActivity` workflow activity updates the inventory with the current available cars after the order has been processed.
-1. The `NotifyActivity` workflow activity sends a notification saying that order `48ee83b7-5d80-48d5-97f9-6b372f5480a5` has completed.
-1. The workflow terminates as completed.
+1. An `OrderPayload` is created containing one car.
+2. A unique order ID for the workflow is generated (in the above example, `b4cb2687-1af0-4f8d-9659-eb6389c07ade`) and the workflow is scheduled.
+3. The `NotifyActivity` workflow activity sends a notification saying an order for one car has been received.
+4. The `VerifyInventoryActivity` workflow activity checks the inventory data, determines if you can supply the ordered item, and responds with the number of cars in stock. The inventory is sufficient, so the workflow continues.
+5. The total cost of the order is 5000, which does not exceed the approval threshold, so the workflow does not call the `RequestApprovalActivity` activity.
+6. The `ProcessPaymentActivity` workflow activity begins processing payment for order `b4cb2687-1af0-4f8d-9659-eb6389c07ade` and confirms if successful.
+7. The `UpdateInventoryActivity` workflow activity updates the inventory with the current available cars after the order has been processed.
+8. The `NotifyActivity` workflow activity sends a notification saying that order `b4cb2687-1af0-4f8d-9659-eb6389c07ade` has completed.
+9. The workflow terminates as completed and the `OrderResult` is set to processed.
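The approval branch in step 5 hinges on a single comparison. A minimal sketch of that check is shown here — the `needsApproval` helper name is illustrative, not part of the quickstart, which performs the comparison inline in the workflow:

```go
package main

import "fmt"

// needsApproval mirrors the workflow's threshold rule: only orders whose
// total cost exceeds 5000 trigger RequestApprovalActivity and the wait
// for the external manager_approval event.
func needsApproval(totalCost int) bool {
	return totalCost > 5000
}

func main() {
	fmt.Println(needsApproval(5000))  // one car at $5000: false, approval skipped
	fmt.Println(needsApproval(50000)) // ten cars at $5000 each: true, approval required
}
```

Because the quickstart orders exactly one car at a per-item cost of 5000, the total lands exactly on the threshold and the approval path is never exercised.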
#### `order-processor/main.go`
@@ -1211,13 +1756,35 @@ In the application's program file:
- The workflow and the workflow activities it invokes are registered
```go
+package main
+
+import (
+ "context"
+ "encoding/json"
+ "fmt"
+ "log"
+ "time"
+
+ "github.com/dapr/go-sdk/client"
+ "github.com/dapr/go-sdk/workflow"
+)
+
+var (
+ stateStoreName = "statestore"
+ workflowComponent = "dapr"
+ workflowName = "OrderProcessingWorkflow"
+ defaultItemName = "cars"
+)
+
func main() {
fmt.Println("*** Welcome to the Dapr Workflow console app sample!")
fmt.Println("*** Using this app, you can place orders that start workflows.")
- // ...
+ w, err := workflow.NewWorker()
+ if err != nil {
+ log.Fatalf("failed to start worker: %v", err)
+ }
- // Register workflow and activities
if err := w.RegisterWorkflow(OrderProcessingWorkflow); err != nil {
log.Fatal(err)
}
@@ -1237,7 +1804,6 @@ func main() {
log.Fatal(err)
}
- // Build and start workflow runtime, pulling and executing tasks
if err := w.Start(); err != nil {
log.Fatal(err)
}
@@ -1251,10 +1817,9 @@ func main() {
log.Fatalf("failed to initialise workflow client: %v", err)
}
- // Check inventory
inventory := []InventoryItem{
{ItemName: "paperclip", PerItemCost: 5, Quantity: 100},
- {ItemName: "cars", PerItemCost: 15000, Quantity: 100},
+ {ItemName: "cars", PerItemCost: 5000, Quantity: 10},
{ItemName: "computers", PerItemCost: 500, Quantity: 100},
}
if err := restockInventory(daprClient, inventory); err != nil {
@@ -1264,7 +1829,7 @@ func main() {
fmt.Println("==========Begin the purchase of item:==========")
itemName := defaultItemName
- orderQuantity := 10
+ orderQuantity := 1
totalCost := inventory[1].PerItemCost * orderQuantity
@@ -1274,54 +1839,28 @@ func main() {
TotalCost: totalCost,
}
- // Start workflow events, like receiving order, verifying inventory, and processing payment
id, err := wfClient.ScheduleNewWorkflow(context.Background(), workflowName, workflow.WithInput(orderPayload))
if err != nil {
log.Fatalf("failed to start workflow: %v", err)
}
- // ...
-
- // Notification that workflow has completed or failed
- for {
- timeDelta := time.Since(startTime)
- metadata, err := wfClient.FetchWorkflowMetadata(context.Background(), id)
- if err != nil {
- log.Fatalf("failed to fetch workflow: %v", err)
- }
- if (metadata.RuntimeStatus == workflow.StatusCompleted) || (metadata.RuntimeStatus == workflow.StatusFailed) || (metadata.RuntimeStatus == workflow.StatusTerminated) {
- fmt.Printf("Workflow completed - result: %v\n", metadata.RuntimeStatus.String())
- break
- }
- if timeDelta.Seconds() >= 10 {
- metadata, err := wfClient.FetchWorkflowMetadata(context.Background(), id)
- if err != nil {
- log.Fatalf("failed to fetch workflow: %v", err)
- }
- if totalCost > 50000 && !approvalSought && ((metadata.RuntimeStatus != workflow.StatusCompleted) || (metadata.RuntimeStatus != workflow.StatusFailed) || (metadata.RuntimeStatus != workflow.StatusTerminated)) {
- approvalSought = true
- promptForApproval(id)
- }
- }
- // Sleep to not DoS the dapr dev instance
- time.Sleep(time.Second)
+ waitCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+ _, err = wfClient.WaitForWorkflowCompletion(waitCtx, id)
+ cancel()
+ if err != nil {
+ log.Fatalf("failed to wait for workflow: %v", err)
}
- fmt.Println("Purchase of item is complete")
-}
-
-// Request approval (RequestApprovalActivity)
-func promptForApproval(id string) {
- wfClient, err := workflow.NewClient()
+ respFetch, err := wfClient.FetchWorkflowMetadata(context.Background(), id, workflow.WithFetchPayloads(true))
if err != nil {
- log.Fatalf("failed to initialise wfClient: %v", err)
- }
- if err := wfClient.RaiseEvent(context.Background(), id, "manager_approval"); err != nil {
- log.Fatal(err)
+ log.Fatalf("failed to get workflow: %v", err)
}
+
+ fmt.Printf("workflow status: %v\n", respFetch.RuntimeStatus)
+
+ fmt.Println("Purchase of item is complete")
}
-// Update inventory for remaining stock (UpdateInventoryActivity)
func restockInventory(daprClient client.Client, inventory []InventoryItem) error {
for _, item := range inventory {
itemSerialized, err := json.Marshal(item)
@@ -1335,9 +1874,204 @@ func restockInventory(daprClient client.Client, inventory []InventoryItem) error
}
return nil
}
+
```
-Meanwhile, the `OrderProcessingWorkflow` and its activities are defined as methods in [`workflow.go`](https://github.com/dapr/quickstarts/workflows/go/sdk/order-processor/workflow.go)
+#### `order-processor/workflow.go`
+
+```go
+package main
+
+import (
+ "context"
+ "encoding/json"
+ "fmt"
+ "log"
+ "time"
+
+ "github.com/dapr/go-sdk/client"
+ "github.com/dapr/go-sdk/workflow"
+)
+
+// OrderProcessingWorkflow is the main workflow for orchestrating activities in the order process.
+func OrderProcessingWorkflow(ctx *workflow.WorkflowContext) (any, error) {
+ orderID := ctx.InstanceID()
+ var orderPayload OrderPayload
+ if err := ctx.GetInput(&orderPayload); err != nil {
+ return nil, err
+ }
+ err := ctx.CallActivity(NotifyActivity, workflow.ActivityInput(Notification{
+ Message: fmt.Sprintf("Received order %s for %d %s - $%d", orderID, orderPayload.Quantity, orderPayload.ItemName, orderPayload.TotalCost),
+ })).Await(nil)
+ if err != nil {
+ return OrderResult{Processed: false}, err
+ }
+
+ var verifyInventoryResult InventoryResult
+ if err := ctx.CallActivity(VerifyInventoryActivity, workflow.ActivityInput(InventoryRequest{
+ RequestID: orderID,
+ ItemName: orderPayload.ItemName,
+ Quantity: orderPayload.Quantity,
+ })).Await(&verifyInventoryResult); err != nil {
+ return OrderResult{Processed: false}, err
+ }
+
+ if !verifyInventoryResult.Success {
+ notification := Notification{Message: fmt.Sprintf("Insufficient inventory for %s", orderPayload.ItemName)}
+ err := ctx.CallActivity(NotifyActivity, workflow.ActivityInput(notification)).Await(nil)
+ return OrderResult{Processed: false}, err
+ }
+
+ if orderPayload.TotalCost > 5000 {
+ var approvalRequired ApprovalRequired
+ if err := ctx.CallActivity(RequestApprovalActivity, workflow.ActivityInput(orderPayload)).Await(&approvalRequired); err != nil {
+ return OrderResult{Processed: false}, err
+ }
+ if err := ctx.WaitForExternalEvent("manager_approval", time.Second*200).Await(nil); err != nil {
+ return OrderResult{Processed: false}, err
+ }
+ // TODO: Confirm timeout flow - this will be in the form of an error.
+ if approvalRequired.Approval {
+ if err := ctx.CallActivity(NotifyActivity, workflow.ActivityInput(Notification{Message: fmt.Sprintf("Payment for order %s has been approved!", orderID)})).Await(nil); err != nil {
+ log.Printf("failed to notify of a successful order: %v\n", err)
+ }
+ } else {
+ if err := ctx.CallActivity(NotifyActivity, workflow.ActivityInput(Notification{Message: fmt.Sprintf("Payment for order %s has been rejected!", orderID)})).Await(nil); err != nil {
+ log.Printf("failed to notify of an unsuccessful order :%v\n", err)
+ }
+ return OrderResult{Processed: false}, err
+ }
+ }
+ err = ctx.CallActivity(ProcessPaymentActivity, workflow.ActivityInput(PaymentRequest{
+ RequestID: orderID,
+ ItemBeingPurchased: orderPayload.ItemName,
+ Amount: orderPayload.TotalCost,
+ Quantity: orderPayload.Quantity,
+ })).Await(nil)
+ if err != nil {
+ if err := ctx.CallActivity(NotifyActivity, workflow.ActivityInput(Notification{Message: fmt.Sprintf("Order %s failed!", orderID)})).Await(nil); err != nil {
+ log.Printf("failed to notify of a failed order: %v", err)
+ }
+ return OrderResult{Processed: false}, err
+ }
+
+ err = ctx.CallActivity(UpdateInventoryActivity, workflow.ActivityInput(PaymentRequest{
+ RequestID: orderID,
+ ItemBeingPurchased: orderPayload.ItemName,
+ Amount: orderPayload.TotalCost,
+ Quantity: orderPayload.Quantity,
+ })).Await(nil)
+ if err != nil {
+ if err := ctx.CallActivity(NotifyActivity, workflow.ActivityInput(Notification{Message: fmt.Sprintf("Order %s failed!", orderID)})).Await(nil); err != nil {
+ log.Printf("failed to notify of a failed order: %v", err)
+ }
+ return OrderResult{Processed: false}, err
+ }
+
+ if err := ctx.CallActivity(NotifyActivity, workflow.ActivityInput(Notification{Message: fmt.Sprintf("Order %s has completed!", orderID)})).Await(nil); err != nil {
+ log.Printf("failed to notify of a successful order: %v", err)
+ }
+ return OrderResult{Processed: true}, err
+}
+
+// NotifyActivity outputs a notification message
+func NotifyActivity(ctx workflow.ActivityContext) (any, error) {
+ var input Notification
+ if err := ctx.GetInput(&input); err != nil {
+ return "", err
+ }
+ fmt.Printf("NotifyActivity: %s\n", input.Message)
+ return nil, nil
+}
+
+// ProcessPaymentActivity is used to process a payment
+func ProcessPaymentActivity(ctx workflow.ActivityContext) (any, error) {
+ var input PaymentRequest
+ if err := ctx.GetInput(&input); err != nil {
+ return "", err
+ }
+ fmt.Printf("ProcessPaymentActivity: %s for %d - %s (%dUSD)\n", input.RequestID, input.Quantity, input.ItemBeingPurchased, input.Amount)
+ return nil, nil
+}
+
+// VerifyInventoryActivity is used to verify if an item is available in the inventory
+func VerifyInventoryActivity(ctx workflow.ActivityContext) (any, error) {
+ var input InventoryRequest
+ if err := ctx.GetInput(&input); err != nil {
+ return nil, err
+ }
+ fmt.Printf("VerifyInventoryActivity: Verifying inventory for order %s of %d %s\n", input.RequestID, input.Quantity, input.ItemName)
+ dClient, err := client.NewClient()
+ if err != nil {
+ return nil, err
+ }
+ item, err := dClient.GetState(context.Background(), stateStoreName, input.ItemName, nil)
+ if err != nil {
+ return nil, err
+ }
+ if item == nil {
+ return InventoryResult{
+ Success: false,
+ InventoryItem: InventoryItem{},
+ }, nil
+ }
+ var result InventoryItem
+ if err := json.Unmarshal(item.Value, &result); err != nil {
+ log.Fatalf("failed to parse inventory result %v", err)
+ }
+ fmt.Printf("VerifyInventoryActivity: There are %d %s available for purchase\n", result.Quantity, result.ItemName)
+ if result.Quantity >= input.Quantity {
+ return InventoryResult{Success: true, InventoryItem: result}, nil
+ }
+ return InventoryResult{Success: false, InventoryItem: InventoryItem{}}, nil
+}
+
+// UpdateInventoryActivity modifies the inventory.
+func UpdateInventoryActivity(ctx workflow.ActivityContext) (any, error) {
+ var input PaymentRequest
+ if err := ctx.GetInput(&input); err != nil {
+ return nil, err
+ }
+ fmt.Printf("UpdateInventoryActivity: Checking Inventory for order %s for %d * %s\n", input.RequestID, input.Quantity, input.ItemBeingPurchased)
+ dClient, err := client.NewClient()
+ if err != nil {
+ return nil, err
+ }
+ item, err := dClient.GetState(context.Background(), stateStoreName, input.ItemBeingPurchased, nil)
+ if err != nil {
+ return nil, err
+ }
+ var result InventoryItem
+ err = json.Unmarshal(item.Value, &result)
+ if err != nil {
+ return nil, err
+ }
+ newQuantity := result.Quantity - input.Quantity
+ if newQuantity < 0 {
+ return nil, fmt.Errorf("insufficient inventory for: %s", input.ItemBeingPurchased)
+ }
+ result.Quantity = newQuantity
+ newState, err := json.Marshal(result)
+ if err != nil {
+ log.Fatalf("failed to marshal new state: %v", err)
+ }
+ dClient.SaveState(context.Background(), stateStoreName, input.ItemBeingPurchased, newState, nil)
+ fmt.Printf("UpdateInventoryActivity: There are now %d %s left in stock\n", result.Quantity, result.ItemName)
+ return InventoryResult{Success: true, InventoryItem: result}, nil
+}
+
+// RequestApprovalActivity requests approval for the order
+func RequestApprovalActivity(ctx workflow.ActivityContext) (any, error) {
+ var input OrderPayload
+ if err := ctx.GetInput(&input); err != nil {
+ return nil, err
+ }
+ fmt.Printf("RequestApprovalActivity: Requesting approval for payment of %dUSD for %d %s\n", input.TotalCost, input.Quantity, input.ItemName)
+ return ApprovalRequired{Approval: true}, nil
+}
+
+```
{{% /codetab %}}
diff --git a/daprdocs/content/en/operations/configuration/configuration-overview.md b/daprdocs/content/en/operations/configuration/configuration-overview.md
index 7225fc11f2f..5a528a22433 100644
--- a/daprdocs/content/en/operations/configuration/configuration-overview.md
+++ b/daprdocs/content/en/operations/configuration/configuration-overview.md
@@ -145,9 +145,12 @@ metrics:
- /payments/{paymentID}/refund
- /payments/{paymentID}/details
excludeVerbs: false
+ recordErrorCodes: true
```
-In the examples above, the path filter `/orders/{orderID}/items/{itemID}` would return _a single metric count_ matching all the `orderID`s and all the `itemID`s, rather than multiple metrics for each `itemID`. For more information, see [HTTP metrics path matching]({{< ref "metrics-overview.md#http-metrics-path-matching" >}})
+In the examples above, the path filter `/orders/{orderID}/items/{itemID}` would return _a single metric count_ matching all the `orderID`s and all the `itemID`s, rather than multiple metrics for each `itemID`. For more information, see [HTTP metrics path matching]({{< ref "metrics-overview.md#http-metrics-path-matching" >}}).
+
+The above example also enables [recording error code metrics]({{< ref "metrics-overview.md#configuring-metrics-for-error-codes" >}}), which is disabled by default.
The following table lists the properties for metrics:
diff --git a/daprdocs/content/en/operations/hosting/kubernetes/cluster/setup-eks.md b/daprdocs/content/en/operations/hosting/kubernetes/cluster/setup-eks.md
index 6a87484cc36..b7e8a0f8153 100644
--- a/daprdocs/content/en/operations/hosting/kubernetes/cluster/setup-eks.md
+++ b/daprdocs/content/en/operations/hosting/kubernetes/cluster/setup-eks.md
@@ -66,7 +66,7 @@ This guide walks you through installing an Elastic Kubernetes Service (EKS) clus
1. Create the cluster by running the following command:
```bash
- eksctl create cluster -f cluster.yaml
+ eksctl create cluster -f cluster-config.yaml
```
1. Verify the kubectl context:
diff --git a/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-overview.md b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-overview.md
index 7ad299dbe94..6abda8f987a 100644
--- a/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-overview.md
+++ b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-overview.md
@@ -8,12 +8,13 @@ description: "Overview of how to get Dapr running on your Kubernetes cluster"
Dapr can be configured to run on any supported versions of Kubernetes. To achieve this, Dapr begins by deploying the following Kubernetes services, which provide first-class integration to make running applications with Dapr easy.
-| Kubernetes services | Description |
-| ------------------- | ----------- |
-| `dapr-operator` | Manages [component]({{< ref components >}}) updates and Kubernetes services endpoints for Dapr (state stores, pub/subs, etc.) |
+| Kubernetes services | Description |
+|-------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `dapr-operator` | Manages [component]({{< ref components >}}) updates and Kubernetes services endpoints for Dapr (state stores, pub/subs, etc.) |
| `dapr-sidecar-injector` | Injects Dapr into [annotated](#adding-dapr-to-a-kubernetes-deployment) deployment pods and adds the environment variables `DAPR_HTTP_PORT` and `DAPR_GRPC_PORT` to enable user-defined applications to easily communicate with Dapr without hard-coding Dapr port values. |
-| `dapr-placement` | Used for [actors]({{< ref actors >}}) only. Creates mapping tables that map actor instances to pods |
-| `dapr-sentry` | Manages mTLS between services and acts as a certificate authority. For more information read the [security overview]({{< ref "security-concept.md" >}}) |
+| `dapr-placement` | Used for [actors]({{< ref actors >}}) only. Creates mapping tables that map actor instances to pods |
+| `dapr-sentry` | Manages mTLS between services and acts as a certificate authority. For more information read the [security overview]({{< ref "security-concept.md" >}}) |
+| `dapr-scheduler` | Provides distributed job scheduling capabilities used by the Jobs API, Workflow API, and Actor Reminders |
@@ -61,4 +62,3 @@ For information about:
- [Upgrade Dapr on a Kubernetes cluster]({{< ref kubernetes-upgrade >}})
- [Production guidelines for Dapr on Kubernetes]({{< ref kubernetes-production.md >}})
- [Dapr Kubernetes Quickstart](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes)
-- [Use Bridge to Kubernetes to debug Dapr apps locally, while connected to your Kubernetes cluster]({{< ref bridge-to-kubernetes >}})
diff --git a/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-production.md b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-production.md
index 1151137efab..fa64b4386d1 100644
--- a/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-production.md
+++ b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-production.md
@@ -120,13 +120,15 @@ In some scenarios, nodes may have memory and/or cpu pressure and the Dapr contro
for eviction. To prevent this, you can set a critical priority class name for the Dapr control plane pods. This ensures that
the Dapr control plane pods are not evicted unless all other pods with lower priority are evicted.
+It's particularly important to protect the Dapr control plane components from eviction, especially the Scheduler service. When Scheduler pods are rescheduled or restarted, inflight jobs can be highly disrupted and may fire duplicate times. To prevent such disruptions, ensure the Dapr control plane components have a higher priority class than your application workloads.
+
Learn more about [Protecting Mission-Critical Pods](https://kubernetes.io/blog/2023/01/12/protect-mission-critical-pods-priorityclass/).
There are two built-in critical priority classes in Kubernetes:
- `system-cluster-critical`
- `system-node-critical` (highest priority)
-It's recommended to set the `priorityClassName` to `system-cluster-critical` for the Dapr control plane pods.
+It's recommended to set the `priorityClassName` to `system-cluster-critical` for the Dapr control plane pods. If you have your own custom priority classes for your applications, ensure they have a lower priority value than the one assigned to the Dapr control plane to maintain system stability and prevent disruption of core Dapr services.
For a new Dapr control plane deployment, the `system-cluster-critical` priority class mode can be set via the helm value `global.priorityClassName`.
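+
+As a sketch, this value can be supplied through a Helm values file (the `dapr-system` namespace below is an assumption; use whatever namespace hosts your control plane):
+
+```yaml
+# values.yaml (sketch): assign a critical priority class to the Dapr control plane
+global:
+  priorityClassName: system-cluster-critical
+```
+
+Apply it with `helm upgrade --install dapr dapr/dapr -n dapr-system -f values.yaml`.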
@@ -155,7 +157,6 @@ spec:
values: [system-cluster-critical]
```
-
## Deploy Dapr with Helm
[Visit the full guide on deploying Dapr with Helm]({{< ref "kubernetes-deploy.md#install-with-helm-advanced" >}}).
diff --git a/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-docker.md b/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-docker.md
index 78f0e2c7522..700acc7767e 100644
--- a/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-docker.md
+++ b/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-docker.md
@@ -149,7 +149,7 @@ services:
- type: tmpfs
target: /data
tmpfs:
- size: "10000"
+ size: "64m"
networks:
hello-dapr: null
diff --git a/daprdocs/content/en/operations/observability/metrics/metrics-overview.md b/daprdocs/content/en/operations/observability/metrics/metrics-overview.md
index 23fea29e6db..1df663ab705 100644
--- a/daprdocs/content/en/operations/observability/metrics/metrics-overview.md
+++ b/daprdocs/content/en/operations/observability/metrics/metrics-overview.md
@@ -72,7 +72,7 @@ spec:
## Configuring metrics for error codes
-You can enable additional metrics for [Dapr API error codes](https://docs.dapr.io/reference/api/error_codes/) by setting `spec.metrics.recordErrorCodes` to `true`. Dapr APIs which communicate back to their caller may return standardized error codes. As described in the [Dapr development docs](https://github.com/dapr/dapr/blob/master/docs/development/dapr-metrics.md), a new metric called `error_code_total` is recorded, which allows monitoring of error codes triggered by application, code, and category. See [the `errorcodes` package](https://github.com/dapr/dapr/blob/master/pkg/messages/errorcodes/errorcodes.go) for specific codes and categories.
+You can enable additional metrics for [Dapr API error codes](https://docs.dapr.io/reference/api/error_codes/) by setting `spec.metrics.recordErrorCodes` to `true`. Dapr APIs which communicate back to their caller may return standardized error codes. [A new metric called `error_code_total` is recorded]({{< ref errors-overview.md >}}), which allows monitoring of error codes triggered by application, code, and category. See [the `errorcodes` package](https://github.com/dapr/dapr/blob/master/pkg/messages/errorcodes/errorcodes.go) for specific codes and categories.
Example configuration:
```yaml
diff --git a/daprdocs/content/en/operations/observability/tracing/tracing-overview.md b/daprdocs/content/en/operations/observability/tracing/tracing-overview.md
index 603e5d12173..a5194a73086 100644
--- a/daprdocs/content/en/operations/observability/tracing/tracing-overview.md
+++ b/daprdocs/content/en/operations/observability/tracing/tracing-overview.md
@@ -63,7 +63,7 @@ You must propagate the headers from `service A` to `service B`. For example: `In
##### Pub/sub messages
-Dapr generates the trace headers in the published message topic. These trace headers are propagated to any services listening on that topic.
+Dapr generates the trace headers in the published message topic. These trace headers are propagated to any services listening on that topic. For `rawPayload` messages, you can specify the `traceparent` header yourself to propagate the tracing information.
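+
+As an illustrative sketch (the pubsub component name `pubsub` and topic `orders` are hypothetical), a raw-payload publish over the Dapr HTTP API can carry the header explicitly:
+
+```
+POST http://localhost:3500/v1.0/publish/pubsub/orders?metadata.rawPayload=true
+Content-Type: application/json
+traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
+
+{"orderId": 1}
+```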
#### Propagating multiple different service calls
diff --git a/daprdocs/content/en/operations/resiliency/policies.md b/daprdocs/content/en/operations/resiliency/policies.md
index c7b40c3b88f..e69de29bb2d 100644
--- a/daprdocs/content/en/operations/resiliency/policies.md
+++ b/daprdocs/content/en/operations/resiliency/policies.md
@@ -1,330 +0,0 @@
----
-type: docs
-title: "Resiliency policies"
-linkTitle: "Policies"
-weight: 200
-description: "Configure resiliency policies for timeouts, retries, and circuit breakers"
----
-
-Define timeouts, retries, and circuit breaker policies under `policies`. Each policy is given a name so you can refer to them from the `targets` section in the resiliency spec.
-
-> Note: Dapr offers default retries for specific APIs. [See here]({{< ref "#overriding-default-retries" >}}) to learn how you can overwrite default retry logic with user defined retry policies.
-
-## Timeouts
-
-Timeouts are optional policies that can be used to early-terminate long-running operations. If you've exceeded a timeout duration:
-
-- The operation in progress is terminated (if possible).
-- An error is returned.
-
-Valid values are of the form accepted by Go's [time.ParseDuration](https://pkg.go.dev/time#ParseDuration), for example: `15s`, `2m`, `1h30m`. Timeouts have no set maximum value.
-
-Example:
-
-```yaml
-spec:
- policies:
- # Timeouts are simple named durations.
- timeouts:
- general: 5s
- important: 60s
- largeResponse: 10s
-```
-
-If you don't specify a timeout value, the policy does not enforce a time and defaults to whatever you set up per the request client.
-
-## Retries
-
-With `retries`, you can define a retry strategy for failed operations, including requests failed due to triggering a defined timeout or circuit breaker policy.
-
-{{% alert title="Pub/sub component retries vs inbound resiliency" color="warning" %}}
-Each [pub/sub component]({{< ref supported-pubsub >}}) has its own built-in retry behaviors. Explicity applying a Dapr resiliency policy doesn't override these implicit retry policies. Rather, the resiliency policy augments the built-in retry, which can cause repetitive clustering of messages.
-{{% /alert %}}
-
-The following retry options are configurable:
-
-| Retry option | Description |
-| ------------ | ----------- |
-| `policy` | Determines the back-off and retry interval strategy. Valid values are `constant` and `exponential`. Defaults to `constant`. |
-| `duration` | Determines the time interval between retries. Only applies to the `constant` policy. Valid values are of the form `200ms`, `15s`, `2m`, etc. Defaults to `5s`.|
-| `maxInterval` | Determines the maximum interval between retries to which the `exponential` back-off policy can grow. Additional retries always occur after a duration of `maxInterval`. Defaults to `60s`. Valid values are of the form `5s`, `1m`, `1m30s`, etc |
-| `maxRetries` | The maximum number of retries to attempt. `-1` denotes an unlimited number of retries, while `0` means the request will not be retried (essentially behaving as if the retry policy were not set). Defaults to `-1`. |
-| `matching.httpStatusCodes` | Optional: a comma-separated string of HTTP status codes or code ranges to retry. Status codes not listed are not retried. Valid values: 100-599, [Reference](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status) Format: `` or range `-` Example: "429,501-503" Default: empty string `""` or field is not set. Retries on all HTTP errors. |
-| `matching.gRPCStatusCodes` | Optional: a comma-separated string of gRPC status codes or code ranges to retry. Status codes not listed are not retried. Valid values: 0-16, [Reference](https://grpc.io/docs/guides/status-codes/) Format: `` or range `-` Example: "1,501-503" Default: empty string `""` or field is not set. Retries on all gRPC errors. |
-
-
-{{% alert title="httpStatusCodes and gRPCStatusCodes format" color="warning" %}}
-The field values should follow the format as specified in the field description or in the "Example 2" below.
-An incorrectly formatted value will produce an error log ("Could not read resiliency policy") and `daprd` startup sequence will proceed.
-{{% /alert %}}
-
-
-The exponential back-off window uses the following formula:
-
-```
-BackOffDuration = PreviousBackOffDuration * (Random value from 0.5 to 1.5) * 1.5
-if BackOffDuration > maxInterval {
- BackoffDuration = maxInterval
-}
-```
-
-Example:
-
-```yaml
-spec:
- policies:
- # Retries are named templates for retry configurations and are instantiated for life of the operation.
- retries:
- pubsubRetry:
- policy: constant
- duration: 5s
- maxRetries: 10
-
- retryForever:
- policy: exponential
- maxInterval: 15s
- maxRetries: -1 # Retry indefinitely
-```
-
-Example 2:
-
-```yaml
-spec:
- policies:
- retries:
- retry5xxOnly:
- policy: constant
- duration: 5s
- maxRetries: 3
- matching:
- httpStatusCodes: "429,500-599" # retry the HTTP status codes in this range. All others are not retried.
- gRPCStatusCodes: "1-4,8-11,13,14" # retry gRPC status codes in these ranges and separate single codes.
-```
-
-## Circuit Breakers
-
-Circuit Breaker (CB) policies are used when other applications/services/components are experiencing elevated failure rates. CBs monitor the requests and shut off all traffic to the impacted service when a certain criteria is met ("open" state). By doing this, CBs give the service time to recover from their outage instead of flooding it with events. The CB can also allow partial traffic through to see if the system has healed ("half-open" state). Once requests resume being successful, the CB gets into "closed" state and allows traffic to completely resume.
-
-| Retry option | Description |
-| ------------ | ----------- |
-| `maxRequests` | The maximum number of requests allowed to pass through when the CB is half-open (recovering from failure). Defaults to `1`. |
-| `interval` | The cyclical period of time used by the CB to clear its internal counts. If set to 0 seconds, this never clears. Defaults to `0s`. |
-| `timeout` | The period of the open state (directly after failure) until the CB switches to half-open. Defaults to `60s`. |
-| `trip` | A [Common Expression Language (CEL)](https://github.com/google/cel-spec) statement that is evaluated by the CB. When the statement evaluates to true, the CB trips and becomes open. Defaults to `consecutiveFailures > 5`. |
-
-Example:
-
-```yaml
-spec:
- policies:
- circuitBreakers:
- pubsubCB:
- maxRequests: 1
- interval: 8s
- timeout: 45s
- trip: consecutiveFailures > 8
-```
-
-## Overriding default retries
-
-Dapr provides default retries for any unsuccessful request, such as failures and transient errors. Within a resiliency spec, you have the option to override Dapr's default retry logic by defining policies with reserved, named keywords. For example, defining a policy with the name `DaprBuiltInServiceRetries`, overrides the default retries for failures between sidecars via service-to-service requests. Policy overrides are not applied to specific targets.
-
-> Note: Although you can override default values with more robust retries, you cannot override with lesser values than the provided default value, or completely remove default retries. This prevents unexpected downtime.
-
-Below is a table that describes Dapr's default retries and the policy keywords to override them:
-
-| Capability | Override Keyword | Default Retry Behavior | Description |
-| ------------------ | ------------------------- | ------------------------------ | ----------------------------------------------------------------------------------------------------------- |
-| Service Invocation | DaprBuiltInServiceRetries | Per call retries are performed with a backoff interval of 1 second, up to a threshold of 3 times. | Sidecar-to-sidecar requests (a service invocation method call) that fail and result in a gRPC code `Unavailable` or `Unauthenticated` |
-| Actors | DaprBuiltInActorRetries | Per call retries are performed with a backoff interval of 1 second, up to a threshold of 3 times. | Sidecar-to-sidecar requests (an actor method call) that fail and result in a gRPC code `Unavailable` or `Unauthenticated` |
-| Actor Reminders | DaprBuiltInActorReminderRetries | Per call retries are performed with an exponential backoff with an initial interval of 500ms, up to a maximum of 60s for a duration of 15mins | Requests that fail to persist an actor reminder to a state store |
-| Initialization Retries | DaprBuiltInInitializationRetries | Per call retries are performed 3 times with an exponential backoff, an initial interval of 500ms and for a duration of 10s | Failures when making a request to an application to retrieve a given spec. For example, failure to retrieve a subscription, component or resiliency specification |
-
-
-The resiliency spec example below shows overriding the default retries for _all_ service invocation requests by using the reserved, named keyword 'DaprBuiltInServiceRetries'.
-
-Also defined is a retry policy called 'retryForever' that is only applied to the appB target. appB uses the 'retryForever' retry policy, while all other application service invocation retry failures use the overridden 'DaprBuiltInServiceRetries' default policy.
-
-```yaml
-spec:
- policies:
- retries:
- DaprBuiltInServiceRetries: # Overrides default retry behavior for service-to-service calls
- policy: constant
- duration: 5s
- maxRetries: 10
-
- retryForever: # A user defined retry policy replaces default retries. Targets rely solely on the applied policy.
- policy: exponential
- maxInterval: 15s
- maxRetries: -1 # Retry indefinitely
-
- targets:
- apps:
- appB: # app-id of the target service
- retry: retryForever
-```
-
-## Setting default policies
-
-In resiliency you can set default policies, which have a broad scope. This is done through reserved keywords that let Dapr know when to apply the policy. There are 3 default policy types:
-
-- `DefaultRetryPolicy`
-- `DefaultTimeoutPolicy`
-- `DefaultCircuitBreakerPolicy`
-
-If these policies are defined, they are used for every operation to a service, application, or component. They can also be modified to be more specific through the appending of additional keywords. The specific policies follow the following pattern, `Default%sRetryPolicy`, `Default%sTimeoutPolicy`, and `Default%sCircuitBreakerPolicy`. Where the `%s` is replaced by a target of the policy.
-
-Below is a table of all possible default policy keywords and how they translate into a policy name.
-
-| Keyword | Target Operation | Example Policy Name |
-| -------------------------------- | ---------------------------------------------------- | ----------------------------------------------------------- |
-| `App` | Service invocation. | `DefaultAppRetryPolicy` |
-| `Actor` | Actor invocation. | `DefaultActorTimeoutPolicy` |
-| `Component` | All component operations. | `DefaultComponentCircuitBreakerPolicy` |
-| `ComponentInbound` | All inbound component operations. | `DefaultComponentInboundRetryPolicy` |
-| `ComponentOutbound` | All outbound component operations. | `DefaultComponentOutboundTimeoutPolicy` |
-| `StatestoreComponentOutbound` | All statestore component operations. | `DefaultStatestoreComponentOutboundCircuitBreakerPolicy` |
-| `PubsubComponentOutbound` | All outbound pubusub (publish) component operations. | `DefaultPubsubComponentOutboundRetryPolicy` |
-| `PubsubComponentInbound` | All inbound pubsub (subscribe) component operations. | `DefaultPubsubComponentInboundTimeoutPolicy` |
-| `BindingComponentOutbound` | All outbound binding (invoke) component operations. | `DefaultBindingComponentOutboundCircuitBreakerPolicy` |
-| `BindingComponentInbound` | All inbound binding (read) component operations. | `DefaultBindingComponentInboundRetryPolicy` |
-| `SecretstoreComponentOutbound` | All secretstore component operations. | `DefaultSecretstoreComponentTimeoutPolicy` |
-| `ConfigurationComponentOutbound` | All configuration component operations. | `DefaultConfigurationComponentOutboundCircuitBreakerPolicy` |
-| `LockComponentOutbound` | All lock component operations. | `DefaultLockComponentOutboundRetryPolicy` |
-
-### Policy hierarchy resolution
-
-Default policies are applied if the operation being executed matches the policy type and if there is no more specific policy targeting it. For each target type (app, actor, and component), the policy with the highest priority is a Named Policy, one that targets that construct specifically.
-
-If none exists, the policies are applied from most specific to most broad.
-
-#### How default policies and built-in retries work together
-
-In the case of the [built-in retries]({{< ref "policies.md#Override Default Retries" >}}), default policies do not stop the built-in retry policies from running. Both are used together but only under specific circumstances.
-
-For service and actor invocation, the built-in retries deal specifically with issues connecting to the remote sidecar (when needed). As these are important to the stability of the Dapr runtime, they are not disabled **unless** a named policy is specifically referenced for an operation. In some instances, there may be additional retries from both the built-in retry and the default retry policy, but this prevents an overly weak default policy from reducing the sidecar's availability/success rate.
-
-Policy resolution hierarchy for applications, from most specific to most broad:
-
-1. Named Policies in App Targets
-2. Default App Policies / Built-In Service Retries
-3. Default Policies / Built-In Service Retries
-
-Policy resolution hierarchy for actors, from most specific to most broad:
-
-1. Named Policies in Actor Targets
-2. Default Actor Policies / Built-In Actor Retries
-3. Default Policies / Built-In Actor Retries
-
-Policy resolution hierarchy for components, from most specific to most broad:
-
-1. Named Policies in Component Targets
-2. Default Component Type + Component Direction Policies / Built-In Actor Reminder Retries (if applicable)
-3. Default Component Direction Policies / Built-In Actor Reminder Retries (if applicable)
-4. Default Component Policies / Built-In Actor Reminder Retries (if applicable)
-5. Default Policies / Built-In Actor Reminder Retries (if applicable)
-
-As an example, take the following solution consisting of three applications, three components and two actor types:
-
-Applications:
-
-- AppA
-- AppB
-- AppC
-
-Components:
-
-- Redis Pubsub: pubsub
-- Redis statestore: statestore
-- CosmosDB Statestore: actorstore
-
-Actors:
-
-- EventActor
-- SummaryActor
-
-Below is policy that uses both default and named policies as applies these to the targets.
-
-```yaml
-spec:
- policies:
- retries:
- # Global Retry Policy
- DefaultRetryPolicy:
- policy: constant
- duration: 1s
- maxRetries: 3
-
- # Global Retry Policy for Apps
- DefaultAppRetryPolicy:
- policy: constant
- duration: 100ms
- maxRetries: 5
-
- # Global Retry Policy for Apps
- DefaultActorRetryPolicy:
- policy: exponential
- maxInterval: 15s
- maxRetries: 10
-
- # Global Retry Policy for Inbound Component operations
- DefaultComponentInboundRetryPolicy:
- policy: constant
- duration: 5s
- maxRetries: 5
-
- # Global Retry Policy for Statestores
- DefaultStatestoreComponentOutboundRetryPolicy:
- policy: exponential
- maxInterval: 60s
- maxRetries: -1
-
- # Named policy
- fastRetries:
- policy: constant
- duration: 10ms
- maxRetries: 3
-
- # Named policy
- retryForever:
- policy: exponential
- maxInterval: 10s
- maxRetries: -1
-
- targets:
- apps:
- appA:
- retry: fastRetries
-
- appB:
- retry: retryForever
-
- actors:
- EventActor:
- retry: retryForever
-
- components:
- actorstore:
- retry: fastRetries
-```
-
-The table below is a break down of which policies are applied when attempting to call the various targets in this solution.
-
-| Target | Policy Used |
-| ------------------ | ----------------------------------------------- |
-| AppA | fastRetries |
-| AppB | retryForever |
-| AppC | DefaultAppRetryPolicy / DaprBuiltInActorRetries |
-| pubsub - Publish | DefaultRetryPolicy |
-| pubsub - Subscribe | DefaultComponentInboundRetryPolicy |
-| statestore | DefaultStatestoreComponentOutboundRetryPolicy |
-| actorstore | fastRetries |
-| EventActor | retryForever |
-| SummaryActor | DefaultActorRetryPolicy |
-
-## Next steps
-
-Try out one of the Resiliency quickstarts:
-- [Resiliency: Service-to-service]({{< ref resiliency-serviceinvo-quickstart.md >}})
-- [Resiliency: State Management]({{< ref resiliency-state-quickstart.md >}})
diff --git a/daprdocs/content/en/operations/resiliency/policies/_index.md b/daprdocs/content/en/operations/resiliency/policies/_index.md
new file mode 100644
index 00000000000..40cbd084b26
--- /dev/null
+++ b/daprdocs/content/en/operations/resiliency/policies/_index.md
@@ -0,0 +1,9 @@
+---
+type: docs
+title: "Resiliency policies"
+linkTitle: "Policies"
+weight: 200
+description: "Configure resiliency policies for timeouts, retries, and circuit breakers"
+---
+
+Define timeouts, retries, and circuit breaker policies under `policies`. Each policy is given a name so you can refer to them from the [`targets` section in the resiliency spec]({{< ref targets.md >}}).
diff --git a/daprdocs/content/en/operations/resiliency/policies/circuit-breakers.md b/daprdocs/content/en/operations/resiliency/policies/circuit-breakers.md
new file mode 100644
index 00000000000..afa4168126f
--- /dev/null
+++ b/daprdocs/content/en/operations/resiliency/policies/circuit-breakers.md
@@ -0,0 +1,49 @@
+---
+type: docs
+title: "Circuit breaker resiliency policies"
+linkTitle: "Circuit breakers"
+weight: 30
+description: "Configure resiliency policies for circuit breakers"
+---
+
+Circuit breaker policies are used when other applications/services/components are experiencing elevated failure rates. Circuit breakers reduce load by monitoring requests and shutting off all traffic to the impacted service when certain criteria are met.
+
+After a certain number of requests fail, circuit breakers "trip" or open to prevent cascading failures. By doing this, circuit breakers give the service time to recover from its outage instead of flooding it with events.
+
+The circuit breaker can also enter a "half-open" state, allowing partial traffic through to see if the system has healed.
+
+Once requests succeed again, the circuit breaker returns to the "closed" state and allows traffic to resume completely.
+
+## Circuit breaker policy format
+
+```yaml
+spec:
+ policies:
+ circuitBreakers:
+ pubsubCB:
+ maxRequests: 1
+ interval: 8s
+ timeout: 45s
+ trip: consecutiveFailures > 8
+```
+
+## Spec metadata
+
+| Option | Description |
+| ------------ | ----------- |
+| `maxRequests` | The maximum number of requests allowed to pass through when the circuit breaker is half-open (recovering from failure). Defaults to `1`. |
+| `interval` | The cyclical period of time used by the circuit breaker to clear its internal counts. If set to 0 seconds, this never clears. Defaults to `0s`. |
+| `timeout` | The period of the open state (directly after failure) until the circuit breaker switches to half-open. Defaults to `60s`. |
+| `trip` | A [Common Expression Language (CEL)](https://github.com/google/cel-spec) statement that is evaluated by the circuit breaker. When the statement evaluates to true, the circuit breaker trips and becomes open. Defaults to `consecutiveFailures > 5`. Other possible variables are `requests` and `totalFailures`, where `requests` represents the number of either successful or failed calls before the circuit opens and `totalFailures` represents the total (not necessarily consecutive) number of failed attempts before the circuit opens. Example: `requests > 5` and `totalFailures > 3`.|
+
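+For example, a sketch of a breaker (the name `totalFailuresCB` is illustrative) that opens on total, not necessarily consecutive, failures:
+
+```yaml
+spec:
+  policies:
+    circuitBreakers:
+      totalFailuresCB:
+        maxRequests: 1
+        timeout: 30s
+        trip: totalFailures > 3
+```
+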
+## Next steps
+- [Learn more about default resiliency policies]({{< ref default-policies.md >}})
+- Learn more about:
+ - [Retry policies]({{< ref retries-overview.md >}})
+ - [Timeout policies]({{< ref timeouts.md >}})
+
+## Related links
+
+Try out one of the Resiliency quickstarts:
+- [Resiliency: Service-to-service]({{< ref resiliency-serviceinvo-quickstart.md >}})
+- [Resiliency: State Management]({{< ref resiliency-state-quickstart.md >}})
diff --git a/daprdocs/content/en/operations/resiliency/policies/default-policies.md b/daprdocs/content/en/operations/resiliency/policies/default-policies.md
new file mode 100644
index 00000000000..2d8f622f11b
--- /dev/null
+++ b/daprdocs/content/en/operations/resiliency/policies/default-policies.md
@@ -0,0 +1,173 @@
+---
+type: docs
+title: "Default resiliency policies"
+linkTitle: "Default policies"
+weight: 40
+description: "Learn more about the default resiliency policies for timeouts, retries, and circuit breakers"
+---
+
+In resiliency, you can set default policies, which have a broad scope. This is done through reserved keywords that let Dapr know when to apply the policy. There are 3 default policy types:
+
+- `DefaultRetryPolicy`
+- `DefaultTimeoutPolicy`
+- `DefaultCircuitBreakerPolicy`
+
+If these policies are defined, they are used for every operation to a service, application, or component. They can also be made more specific by appending additional keywords, following the pattern `Default%sRetryPolicy`, `Default%sTimeoutPolicy`, and `Default%sCircuitBreakerPolicy`, where `%s` is replaced by the policy's target.
+
+Below is a table of all possible default policy keywords and how they translate into a policy name.
+
+| Keyword | Target Operation | Example Policy Name |
+| -------------------------------- | ---------------------------------------------------- | ----------------------------------------------------------- |
+| `App` | Service invocation. | `DefaultAppRetryPolicy` |
+| `Actor` | Actor invocation. | `DefaultActorTimeoutPolicy` |
+| `Component` | All component operations. | `DefaultComponentCircuitBreakerPolicy` |
+| `ComponentInbound` | All inbound component operations. | `DefaultComponentInboundRetryPolicy` |
+| `ComponentOutbound` | All outbound component operations. | `DefaultComponentOutboundTimeoutPolicy` |
+| `StatestoreComponentOutbound` | All statestore component operations. | `DefaultStatestoreComponentOutboundCircuitBreakerPolicy` |
+| `PubsubComponentOutbound`        | All outbound pubsub (publish) component operations.  | `DefaultPubsubComponentOutboundRetryPolicy`                 |
+| `PubsubComponentInbound` | All inbound pubsub (subscribe) component operations. | `DefaultPubsubComponentInboundTimeoutPolicy` |
+| `BindingComponentOutbound` | All outbound binding (invoke) component operations. | `DefaultBindingComponentOutboundCircuitBreakerPolicy` |
+| `BindingComponentInbound` | All inbound binding (read) component operations. | `DefaultBindingComponentInboundRetryPolicy` |
+| `SecretstoreComponentOutbound` | All secretstore component operations. | `DefaultSecretstoreComponentTimeoutPolicy` |
+| `ConfigurationComponentOutbound` | All configuration component operations. | `DefaultConfigurationComponentOutboundCircuitBreakerPolicy` |
+| `LockComponentOutbound` | All lock component operations. | `DefaultLockComponentOutboundRetryPolicy` |
+
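+For example, a default retry policy scoped to service invocation uses the `App` keyword from the table above (a sketch with illustrative values):
+
+```yaml
+spec:
+  policies:
+    retries:
+      DefaultAppRetryPolicy:
+        policy: constant
+        duration: 100ms
+        maxRetries: 5
+```
+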
+## Policy hierarchy resolution
+
+Default policies are applied if the operation being executed matches the policy type and if there is no more specific policy targeting it. For each target type (app, actor, and component), the policy with the highest priority is a Named Policy, one that targets that construct specifically.
+
+If none exists, the policies are applied from most specific to most broad.
+
+## How default policies and built-in retries work together
+
+In the case of the [built-in retries]({{< ref override-default-retries.md >}}), default policies do not stop the built-in retry policies from running. Both are used together but only under specific circumstances.
+
+For service and actor invocation, the built-in retries deal specifically with issues connecting to the remote sidecar (when needed). As these are important to the stability of the Dapr runtime, they are not disabled **unless** a named policy is specifically referenced for an operation. In some instances, there may be additional retries from both the built-in retry and the default retry policy, but this prevents an overly weak default policy from reducing the sidecar's availability/success rate.
+
+Policy resolution hierarchy for applications, from most specific to most broad:
+
+1. Named Policies in App Targets
+2. Default App Policies / Built-In Service Retries
+3. Default Policies / Built-In Service Retries
+
+Policy resolution hierarchy for actors, from most specific to most broad:
+
+1. Named Policies in Actor Targets
+2. Default Actor Policies / Built-In Actor Retries
+3. Default Policies / Built-In Actor Retries
+
+Policy resolution hierarchy for components, from most specific to most broad:
+
+1. Named Policies in Component Targets
+2. Default Component Type + Component Direction Policies / Built-In Actor Reminder Retries (if applicable)
+3. Default Component Direction Policies / Built-In Actor Reminder Retries (if applicable)
+4. Default Component Policies / Built-In Actor Reminder Retries (if applicable)
+5. Default Policies / Built-In Actor Reminder Retries (if applicable)
+
+As an example, take the following solution consisting of three applications, three components, and two actor types:
+
+Applications:
+
+- AppA
+- AppB
+- AppC
+
+Components:
+
+- Redis Pubsub: pubsub
+- Redis statestore: statestore
+- CosmosDB Statestore: actorstore
+
+Actors:
+
+- EventActor
+- SummaryActor
+
+Below is a resiliency policy that uses both default and named policies, and applies them to the targets.
+
+```yaml
+spec:
+ policies:
+ retries:
+ # Global Retry Policy
+ DefaultRetryPolicy:
+ policy: constant
+ duration: 1s
+ maxRetries: 3
+
+ # Global Retry Policy for Apps
+ DefaultAppRetryPolicy:
+ policy: constant
+ duration: 100ms
+ maxRetries: 5
+
+      # Global Retry Policy for Actors
+ DefaultActorRetryPolicy:
+ policy: exponential
+ maxInterval: 15s
+ maxRetries: 10
+
+ # Global Retry Policy for Inbound Component operations
+ DefaultComponentInboundRetryPolicy:
+ policy: constant
+ duration: 5s
+ maxRetries: 5
+
+ # Global Retry Policy for Statestores
+ DefaultStatestoreComponentOutboundRetryPolicy:
+ policy: exponential
+ maxInterval: 60s
+ maxRetries: -1
+
+ # Named policy
+ fastRetries:
+ policy: constant
+ duration: 10ms
+ maxRetries: 3
+
+ # Named policy
+ retryForever:
+ policy: exponential
+ maxInterval: 10s
+ maxRetries: -1
+
+ targets:
+ apps:
+ appA:
+ retry: fastRetries
+
+ appB:
+ retry: retryForever
+
+ actors:
+ EventActor:
+ retry: retryForever
+
+ components:
+ actorstore:
+ retry: fastRetries
+```
+
+The table below breaks down which policies are applied when attempting to call the various targets in this solution.
+
+| Target | Policy Used |
+| ------------------ | ----------------------------------------------- |
+| AppA | fastRetries |
+| AppB | retryForever |
+| AppC               | DefaultAppRetryPolicy / DaprBuiltInServiceRetries |
+| pubsub - Publish | DefaultRetryPolicy |
+| pubsub - Subscribe | DefaultComponentInboundRetryPolicy |
+| statestore | DefaultStatestoreComponentOutboundRetryPolicy |
+| actorstore | fastRetries |
+| EventActor | retryForever |
+| SummaryActor | DefaultActorRetryPolicy |
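
The resolution results in this table follow the hierarchies listed earlier. As an illustrative sketch only (not Dapr's actual implementation; the policy and app names are the hypothetical ones from this example), the app-target resolution order can be expressed as:

```python
def resolve_app_retry_policy(app_id, named_app_policies, default_policies):
    """Resolve which retry policy applies to a service-invocation target,
    walking from most specific to most broad (illustrative sketch only)."""
    # 1. A named policy in an app target wins outright.
    if app_id in named_app_policies:
        return named_app_policies[app_id]
    # 2. Otherwise, fall back to the app-level default policy.
    if "DefaultAppRetryPolicy" in default_policies:
        return "DefaultAppRetryPolicy"
    # 3. Finally, the global default policy (alongside built-in service retries).
    if "DefaultRetryPolicy" in default_policies:
        return "DefaultRetryPolicy"
    return None  # only the built-in service retries apply


named = {"appA": "fastRetries", "appB": "retryForever"}
defaults = {"DefaultRetryPolicy", "DefaultAppRetryPolicy"}
print(resolve_app_retry_policy("appC", named, defaults))  # AppC -> DefaultAppRetryPolicy
```

The same walk applies per hierarchy for actors and components, with the component walk also considering type and direction.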
+
+## Next steps
+
+[Learn how to override default retry policies.]({{< ref override-default-retries.md >}})
+
+## Related links
+
+Try out one of the Resiliency quickstarts:
+- [Resiliency: Service-to-service]({{< ref resiliency-serviceinvo-quickstart.md >}})
+- [Resiliency: State Management]({{< ref resiliency-state-quickstart.md >}})
\ No newline at end of file
diff --git a/daprdocs/content/en/operations/resiliency/policies/retries/_index.md b/daprdocs/content/en/operations/resiliency/policies/retries/_index.md
new file mode 100644
index 00000000000..8e0f5b27964
--- /dev/null
+++ b/daprdocs/content/en/operations/resiliency/policies/retries/_index.md
@@ -0,0 +1,7 @@
+---
+type: docs
+title: "Retry and back-off resiliency policies"
+linkTitle: "Retries"
+weight: 20
+description: "Configure resiliency policies for retries and back-offs"
+---
\ No newline at end of file
diff --git a/daprdocs/content/en/operations/resiliency/policies/retries/override-default-retries.md b/daprdocs/content/en/operations/resiliency/policies/retries/override-default-retries.md
new file mode 100644
index 00000000000..949c251f01d
--- /dev/null
+++ b/daprdocs/content/en/operations/resiliency/policies/retries/override-default-retries.md
@@ -0,0 +1,51 @@
+---
+type: docs
+title: "Override default retry resiliency policies"
+linkTitle: "Override default retries"
+weight: 20
+description: "Learn how to override the default retry resiliency policies for specific APIs"
+---
+
+Dapr provides [default retries]({{< ref default-policies.md >}}) for any unsuccessful request, such as failures and transient errors. Within a resiliency spec, you have the option to override Dapr's default retry logic by defining policies with reserved, named keywords. For example, defining a policy with the name `DaprBuiltInServiceRetries` overrides the default retries for failures between sidecars via service-to-service requests. Policy overrides are not applied to specific targets.
+
+> Note: Although you can override the default retries with more robust values, you cannot set values lower than the provided defaults or remove the default retries entirely. This prevents unexpected downtime.
+
+Below is a table that describes Dapr's default retries and the policy keywords to override them:
+
+| Capability | Override Keyword | Default Retry Behavior | Description |
+| ------------------ | ------------------------- | ------------------------------ | ----------------------------------------------------------------------------------------------------------- |
+| Service Invocation | DaprBuiltInServiceRetries | Per call retries are performed with a backoff interval of 1 second, up to a threshold of 3 times. | Sidecar-to-sidecar requests (a service invocation method call) that fail and result in a gRPC code `Unavailable` or `Unauthenticated` |
+| Actors | DaprBuiltInActorRetries | Per call retries are performed with a backoff interval of 1 second, up to a threshold of 3 times. | Sidecar-to-sidecar requests (an actor method call) that fail and result in a gRPC code `Unavailable` or `Unauthenticated` |
+| Actor Reminders | DaprBuiltInActorReminderRetries | Per call retries are performed with an exponential backoff with an initial interval of 500ms, up to a maximum of 60s for a duration of 15mins | Requests that fail to persist an actor reminder to a state store |
+| Initialization Retries | DaprBuiltInInitializationRetries | Per call retries are performed 3 times with an exponential backoff, an initial interval of 500ms and for a duration of 10s | Failures when making a request to an application to retrieve a given spec. For example, failure to retrieve a subscription, component or resiliency specification |
+
+
+The resiliency spec example below shows overriding the default retries for _all_ service invocation requests by using the reserved, named keyword `DaprBuiltInServiceRetries`.
+
+Also defined is a retry policy called `retryForever` that is only applied to the `appB` target. `appB` uses the `retryForever` retry policy, while all other application service invocation retry failures use the overridden `DaprBuiltInServiceRetries` default policy.
+
+```yaml
+spec:
+ policies:
+ retries:
+ DaprBuiltInServiceRetries: # Overrides default retry behavior for service-to-service calls
+ policy: constant
+ duration: 5s
+ maxRetries: 10
+
+ retryForever: # A user defined retry policy replaces default retries. Targets rely solely on the applied policy.
+ policy: exponential
+ maxInterval: 15s
+ maxRetries: -1 # Retry indefinitely
+
+ targets:
+ apps:
+ appB: # app-id of the target service
+ retry: retryForever
+```
+
+## Related links
+
+Try out one of the Resiliency quickstarts:
+- [Resiliency: Service-to-service]({{< ref resiliency-serviceinvo-quickstart.md >}})
+- [Resiliency: State Management]({{< ref resiliency-state-quickstart.md >}})
diff --git a/daprdocs/content/en/operations/resiliency/policies/retries/retries-overview.md b/daprdocs/content/en/operations/resiliency/policies/retries/retries-overview.md
new file mode 100644
index 00000000000..121f5ba027c
--- /dev/null
+++ b/daprdocs/content/en/operations/resiliency/policies/retries/retries-overview.md
@@ -0,0 +1,153 @@
+---
+type: docs
+title: "Retry resiliency policies"
+linkTitle: "Overview"
+weight: 10
+description: "Configure resiliency policies for retries"
+---
+
+Requests can fail due to transient errors, like encountering network congestion, reroutes to overloaded instances, and more. Sometimes, requests can fail due to other resiliency policies set in place, like triggering a defined timeout or circuit breaker policy.
+
+In these cases, configuring `retries` can either:
+- Send the same request to a different instance, or
+- Retry sending the request after the condition has cleared.
+
+Retries and timeouts work together, with timeouts ensuring your system fails fast when needed, and retries recovering from temporary glitches.
+
+Dapr provides [default resiliency policies]({{< ref default-policies.md >}}), which you can [override with user-defined retry policies]({{< ref override-default-retries.md >}}).
+
+{{% alert title="Important" color="warning" %}}
+Each [pub/sub component]({{< ref supported-pubsub >}}) has its own built-in retry behaviors. Explicitly applying a Dapr resiliency policy doesn't override these implicit retry policies. Rather, the resiliency policy augments the built-in retry, which can cause repetitive clustering of messages.
+{{% /alert %}}
+
+## Retry policy format
+
+**Example 1**
+
+```yaml
+spec:
+ policies:
+  # Retries are named templates for retry configurations and are instantiated for the life of the operation.
+ retries:
+ pubsubRetry:
+ policy: constant
+ duration: 5s
+ maxRetries: 10
+
+ retryForever:
+ policy: exponential
+ maxInterval: 15s
+ maxRetries: -1 # Retry indefinitely
+```
+
+**Example 2**
+
+```yaml
+spec:
+ policies:
+ retries:
+ retry5xxOnly:
+ policy: constant
+ duration: 5s
+ maxRetries: 3
+ matching:
+ httpStatusCodes: "429,500-599" # retry the HTTP status codes in this range. All others are not retried.
+ gRPCStatusCodes: "1-4,8-11,13,14" # retry gRPC status codes in these ranges and separate single codes.
+```
+
+## Spec metadata
+
+The following retry options are configurable:
+
+| Retry option | Description |
+| ------------ | ----------- |
+| `policy` | Determines the back-off and retry interval strategy. Valid values are `constant` and `exponential`. Defaults to `constant`. |
+| `duration` | Determines the time interval between retries. Only applies to the `constant` policy. Valid values are of the form `200ms`, `15s`, `2m`, etc. Defaults to `5s`.|
+| `maxInterval` | Determines the maximum interval between retries to which the [`exponential` back-off policy](#exponential-back-off-policy) can grow. Additional retries always occur after a duration of `maxInterval`. Defaults to `60s`. Valid values are of the form `5s`, `1m`, `1m30s`, etc |
+| `maxRetries` | The maximum number of retries to attempt. `-1` denotes an unlimited number of retries, while `0` means the request will not be retried (essentially behaving as if the retry policy were not set). Defaults to `-1`. |
+| `matching.httpStatusCodes` | Optional: a comma-separated string of [HTTP status codes or code ranges to retry](#retry-status-codes). Status codes not listed are not retried. Valid values: 100-599 ([reference](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status)). Format: `<code>` or range `<start>-<end>`. Example: `"429,501-503"`. Default: empty string `""` (field not set), which retries on all HTTP errors. |
+| `matching.gRPCStatusCodes` | Optional: a comma-separated string of [gRPC status codes or code ranges to retry](#retry-status-codes). Status codes not listed are not retried. Valid values: 0-16 ([reference](https://grpc.io/docs/guides/status-codes/)). Format: `<code>` or range `<start>-<end>`. Example: `"4,8,14"`. Default: empty string `""` (field not set), which retries on all gRPC errors. |
+
+
+## Exponential back-off policy
+
+The exponential back-off window uses the following formula:
+
+```
+BackOffDuration = PreviousBackOffDuration * (Random value from 0.5 to 1.5) * 1.5
+if BackOffDuration > maxInterval {
+  BackOffDuration = maxInterval
+}
+```
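
To get a feel for how the back-off window grows, the formula can be simulated (a sketch of the formula above, not Dapr's source code; the 0.5s initial window is an assumption for illustration):

```python
import random


def backoff_sequence(initial_s, max_interval_s, retries):
    """Simulate the back-off wait durations (in seconds) produced by the formula:
    next = prev * (random value from 0.5 to 1.5) * 1.5, capped at maxInterval."""
    waits = []
    current = initial_s
    for _ in range(retries):
        waits.append(current)
        current = min(current * random.uniform(0.5, 1.5) * 1.5, max_interval_s)
    return waits


# Example: 0.5s initial window, 60s maxInterval, 10 retries.
waits = backoff_sequence(0.5, 60.0, 10)
```

On average each wait grows by a factor of 1.5, with jitter spreading retries out so that concurrent callers don't retry in lockstep.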
+
+## Retry status codes
+
+When applications span multiple services, especially in dynamic environments like Kubernetes, services can disappear for all kinds of reasons and network calls can start hanging. Status codes provide a glimpse into your operations and where they may have failed in production.
+
+### HTTP
+
+The following table includes some examples of HTTP status codes you may receive and whether you should or should not retry certain operations.
+
+| HTTP Status Code | Retry Recommended? | Description |
+| ------------------------- | ---------------------- | ---------------------------- |
+| 404 Not Found | ❌ No | The resource doesn't exist. |
+| 400 Bad Request | ❌ No | Your request is invalid. |
+| 401 Unauthorized | ❌ No | Try getting new credentials. |
+| 408 Request Timeout | ✅ Yes | The server timed out waiting for the request. |
+| 429 Too Many Requests     | ✅ Yes                 | Retry after waiting; respect the `Retry-After` header, if present. |
+| 500 Internal Server Error | ✅ Yes | The server encountered an unexpected condition. |
+| 502 Bad Gateway | ✅ Yes | A gateway or proxy received an invalid response. |
+| 503 Service Unavailable | ✅ Yes | Service might recover. |
+| 504 Gateway Timeout | ✅ Yes | Temporary network issue. |
+
+### gRPC
+
+The following table includes some examples of gRPC status codes you may receive and whether you should or should not retry certain operations.
+
+| gRPC Status Code | Retry Recommended? | Description |
+| ------------------------- | ----------------------- | ---------------------------- |
+| Code 1 CANCELLED          | ❌ No                   | The operation was cancelled, typically by the caller. |
+| Code 3 INVALID_ARGUMENT   | ❌ No                   | The client specified an invalid argument. |
+| Code 4 DEADLINE_EXCEEDED  | ✅ Yes                  | The deadline expired before the operation completed. Retry with backoff. |
+| Code 5 NOT_FOUND          | ❌ No                   | The requested entity was not found. |
+| Code 8 RESOURCE_EXHAUSTED | ✅ Yes                  | A resource has been exhausted. Retry with backoff. |
+| Code 14 UNAVAILABLE       | ✅ Yes                  | The service is currently unavailable. Retry with backoff. |
+
+### Retry filter based on status codes
+
+The retry filter enables granular control over retry policies by allowing users to specify HTTP and gRPC status codes or ranges for which retries should apply.
+
+```yml
+spec:
+ policies:
+ retries:
+ retry5xxOnly:
+ # ...
+ matching:
+ httpStatusCodes: "429,500-599" # retry the HTTP status codes in this range. All others are not retried.
+ gRPCStatusCodes: "4,8-11,13,14" # retry gRPC status codes in these ranges and separate single codes.
+```
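
As a rough sketch of how such a comma-separated code/range string can be interpreted (an illustration of the matching semantics described above, not Dapr's implementation):

```python
def parse_code_ranges(spec):
    """Parse a comma-separated list of codes and ranges, e.g. "429,500-599",
    into a list of inclusive (low, high) tuples."""
    ranges = []
    for part in spec.split(","):
        if "-" in part:
            low, high = part.split("-")
            ranges.append((int(low), int(high)))
        else:
            code = int(part)
            ranges.append((code, code))
    return ranges


def should_retry(status_code, spec):
    """Return True if the status code falls within any configured range."""
    return any(low <= status_code <= high for low, high in parse_code_ranges(spec))


# With httpStatusCodes "429,500-599": 503 is retried, 404 is not.
```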
+
+{{% alert title="Note" color="primary" %}}
+Field values for status codes must follow the format specified above. An incorrectly formatted value produces an error log ("Could not read resiliency policy"), and the `daprd` startup sequence proceeds.
+{{% /alert %}}
+
+## Demo
+
+Watch a demo presented during [Diagrid's Dapr v1.15 celebration](https://www.diagrid.io/videos/dapr-1-15-deep-dive) to see how to set retry status code filters using Diagrid Conductor.
+
+
+
+## Next steps
+
+- [Learn how to override default retry policies for specific APIs.]({{< ref override-default-retries.md >}})
+- [Learn how to target your retry policies from the resiliency spec.]({{< ref targets.md >}})
+- Learn more about:
+ - [Timeout policies]({{< ref timeouts.md >}})
+ - [Circuit breaker policies]({{< ref circuit-breakers.md >}})
+
+## Related links
+
+Try out one of the Resiliency quickstarts:
+- [Resiliency: Service-to-service]({{< ref resiliency-serviceinvo-quickstart.md >}})
+- [Resiliency: State Management]({{< ref resiliency-state-quickstart.md >}})
diff --git a/daprdocs/content/en/operations/resiliency/policies/timeouts.md b/daprdocs/content/en/operations/resiliency/policies/timeouts.md
new file mode 100644
index 00000000000..619be3db553
--- /dev/null
+++ b/daprdocs/content/en/operations/resiliency/policies/timeouts.md
@@ -0,0 +1,50 @@
+---
+type: docs
+title: "Timeout resiliency policies"
+linkTitle: "Timeouts"
+weight: 10
+description: "Configure resiliency policies for timeouts"
+---
+
+Network calls can fail for many reasons, causing your application to wait indefinitely for responses. By setting a timeout duration, you can cut off those unresponsive services, freeing up resources to handle new requests.
+
+Timeouts are optional policies that can be used to early-terminate long-running operations. Set a realistic timeout duration that reflects actual response times in production. If you've exceeded a timeout duration:
+
+- The operation in progress is terminated (if possible).
+- An error is returned.
+
+## Timeout policy format
+
+```yaml
+spec:
+ policies:
+ # Timeouts are simple named durations.
+ timeouts:
+ timeoutName: timeout1
+ general: 5s
+ important: 60s
+ largeResponse: 10s
+```
+
+### Spec metadata
+
+| Field | Details | Example |
+| ----- | ------- | ------- |
+| timeoutName | Name of the timeout policy | `timeout1` |
+| general | Time duration for timeouts marked as "general". Uses Go's [time.ParseDuration](https://pkg.go.dev/time#ParseDuration) format. No set maximum value. | `15s`, `2m`, `1h30m` |
+| important | Time duration for timeouts marked as "important". Uses Go's [time.ParseDuration](https://pkg.go.dev/time#ParseDuration) format. No set maximum value. | `15s`, `2m`, `1h30m` |
+| largeResponse | Time duration for timeouts awaiting a large response. Uses Go's [time.ParseDuration](https://pkg.go.dev/time#ParseDuration) format. No set maximum value. | `15s`, `2m`, `1h30m` |
+
+> If you don't specify a timeout value, the policy does not enforce a time limit and defaults to whatever timeout the request client sets.
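
A named timeout is then referenced from a target. For example (a sketch following the spec format above; `appB` is a hypothetical app-id):

```yaml
spec:
  policies:
    timeouts:
      general: 5s
  targets:
    apps:
      appB:             # hypothetical app-id of the target service
        timeout: general
```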
+
+## Next steps
+
+- [Learn more about default resiliency policies]({{< ref default-policies.md >}})
+- Learn more about:
+ - [Retry policies]({{< ref retries-overview.md >}})
+ - [Circuit breaker policies]({{< ref circuit-breakers.md >}})
+
+## Related links
+
+Try out one of the Resiliency quickstarts:
+- [Resiliency: Service-to-service]({{< ref resiliency-serviceinvo-quickstart.md >}})
+- [Resiliency: State Management]({{< ref resiliency-state-quickstart.md >}})
diff --git a/daprdocs/content/en/operations/resiliency/resiliency-overview.md b/daprdocs/content/en/operations/resiliency/resiliency-overview.md
index e7564757a88..c92a2c94fff 100644
--- a/daprdocs/content/en/operations/resiliency/resiliency-overview.md
+++ b/daprdocs/content/en/operations/resiliency/resiliency-overview.md
@@ -6,25 +6,32 @@ weight: 100
description: "Configure Dapr retries, timeouts, and circuit breakers"
---
-Dapr provides a capability for defining and applying fault tolerance resiliency policies via a [resiliency spec]({{< ref "resiliency-overview.md#complete-example-policy" >}}). Resiliency specs are saved in the same location as components specs and are applied when the Dapr sidecar starts. The sidecar determines how to apply resiliency policies to your Dapr API calls. In self-hosted mode, the resiliency spec must be named `resiliency.yaml`. In Kubernetes Dapr finds the named resiliency specs used by your application. Within the resiliency spec, you can define policies for popular resiliency patterns, such as:
-
-- [Timeouts]({{< ref "policies.md#timeouts" >}})
-- [Retries/back-offs]({{< ref "policies.md#retries" >}})
-- [Circuit breakers]({{< ref "policies.md#circuit-breakers" >}})
-
-Policies can then be applied to [targets]({{< ref "targets.md" >}}), which include:
-
-- [Apps]({{< ref "targets.md#apps" >}}) via service invocation
-- [Components]({{< ref "targets.md#components" >}})
-- [Actors]({{< ref "targets.md#actors" >}})
-
-Additionally, resiliency policies can be [scoped to specific apps]({{< ref "component-scopes.md#application-access-to-components-with-scopes" >}}).
-
-## Demo video
+Dapr provides the capability for defining and applying fault tolerance resiliency policies via a [resiliency spec]({{< ref "resiliency-overview.md#complete-example-policy" >}}). Resiliency specs are saved in the same location as components specs and are applied when the Dapr sidecar starts. The sidecar determines how to apply resiliency policies to your Dapr API calls.
+- **In self-hosted mode:** The resiliency spec must be named `resiliency.yaml`.
+- **In Kubernetes:** Dapr finds the named resiliency specs used by your application.
+
+## Policies
+
+You can configure Dapr resiliency policies with the following parts:
+- Metadata defining where the policy applies (like namespace and scope)
+- Policies specifying the resiliency name and behaviors, like:
+ - [Timeouts]({{< ref timeouts.md >}})
+ - [Retries]({{< ref retries-overview.md >}})
+ - [Circuit breakers]({{< ref circuit-breakers.md >}})
+- Targets determining which interactions these policies act on, including:
+ - [Apps]({{< ref "targets.md#apps" >}}) via service invocation
+ - [Components]({{< ref "targets.md#components" >}})
+ - [Actors]({{< ref "targets.md#actors" >}})
+
+Once defined, you can apply this configuration to your local Dapr components directory, or to your Kubernetes cluster using:
+
+```bash
+kubectl apply -f .yaml
+```
-Learn more about [how to write resilient microservices with Dapr](https://youtu.be/uC-4Q5KFq98?si=JSUlCtcUNZLBM9rW).
+Additionally, you can scope resiliency policies [to specific apps]({{< ref "component-scopes.md#application-access-to-components-with-scopes" >}}).
-
+> See [known limitations](#limitations).
## Resiliency policy structure
@@ -166,7 +173,11 @@ spec:
circuitBreaker: pubsubCB
```
-## Related links
+## Limitations
+
+- **Service invocation via gRPC:** Currently, resiliency policies are not supported for service invocation via gRPC.
+
+## Demos
Watch this video for how to use [resiliency](https://www.youtube.com/watch?t=184&v=7D6HOU3Ms6g&feature=youtu.be):
@@ -174,11 +185,20 @@ Watch this video for how to use [resiliency](https://www.youtube.com/watch?t=184
+Learn more about [how to write resilient microservices with Dapr](https://youtu.be/uC-4Q5KFq98?si=JSUlCtcUNZLBM9rW).
+
+
+
## Next steps
Learn more about resiliency policies and targets:
- - [Policies]({{< ref "policies.md" >}})
+ - Policies
+ - [Timeouts]({{< ref "timeouts.md" >}})
+ - [Retries]({{< ref "retries-overview.md" >}})
+ - [Circuit breakers]({{< ref circuit-breakers.md >}})
- [Targets]({{< ref "targets.md" >}})
+
+## Related links
Try out one of the Resiliency quickstarts:
- [Resiliency: Service-to-service]({{< ref resiliency-serviceinvo-quickstart.md >}})
- [Resiliency: State Management]({{< ref resiliency-state-quickstart.md >}})
\ No newline at end of file
diff --git a/daprdocs/content/en/operations/security/mtls.md b/daprdocs/content/en/operations/security/mtls.md
index 0acdbcb5170..b471783c031 100644
--- a/daprdocs/content/en/operations/security/mtls.md
+++ b/daprdocs/content/en/operations/security/mtls.md
@@ -231,6 +231,8 @@ kubectl rollout restart -n deployment/dapr-sentry
```bash
kubectl rollout restart deploy/dapr-operator -n
kubectl rollout restart statefulsets/dapr-placement-server -n
+kubectl rollout restart deploy/dapr-sidecar-injector -n
+kubectl rollout restart deploy/dapr-scheduler-server -n
```
4. Restart your Dapr applications to pick up the latest trust bundle.
@@ -332,12 +334,13 @@ Example:
dapr status -k
NAME NAMESPACE HEALTHY STATUS REPLICAS VERSION AGE CREATED
- dapr-sentry dapr-system True Running 1 1.7.0 17d 2022-03-15 09:29.45
- dapr-dashboard dapr-system True Running 1 0.9.0 17d 2022-03-15 09:29.45
- dapr-sidecar-injector dapr-system True Running 1 1.7.0 17d 2022-03-15 09:29.45
- dapr-operator dapr-system True Running 1 1.7.0 17d 2022-03-15 09:29.45
- dapr-placement-server dapr-system True Running 1 1.7.0 17d 2022-03-15 09:29.45
-⚠ Dapr root certificate of your Kubernetes cluster expires in 2 days. Expiry date: Mon, 04 Apr 2022 15:01:03 UTC.
+ dapr-operator dapr-system True Running 1 1.15.0 4m 2025-02-19 17:36.26
+ dapr-placement-server dapr-system True Running 1 1.15.0 4m 2025-02-19 17:36.27
+ dapr-dashboard dapr-system True Running 1 0.15.0 4m 2025-02-19 17:36.27
+ dapr-sentry dapr-system True Running 1 1.15.0 4m 2025-02-19 17:36.26
+ dapr-scheduler-server dapr-system True Running 3 1.15.0 4m 2025-02-19 17:36.27
+ dapr-sidecar-injector dapr-system True Running 1 1.15.0 4m 2025-02-19 17:36.26
+⚠ Dapr root certificate of your Kubernetes cluster expires in 2 days. Expiry date: Mon, 04 Apr 2025 15:01:03 UTC.
Please see docs.dapr.io for certificate renewal instructions to avoid service interruptions.
```
diff --git a/daprdocs/content/en/operations/support/support-preview-features.md b/daprdocs/content/en/operations/support/support-preview-features.md
index 07ae1b9a679..1eaf253c393 100644
--- a/daprdocs/content/en/operations/support/support-preview-features.md
+++ b/daprdocs/content/en/operations/support/support-preview-features.md
@@ -17,7 +17,6 @@ For CLI there is no explicit opt-in, just the version that this was first made a
| --- | --- | --- | --- | --- |
| **Pluggable components** | Allows creating self-hosted gRPC-based components written in any language that supports gRPC. The following component APIs are supported: State stores, Pub/sub, Bindings | N/A | [Pluggable components concept]({{[}})| v1.9 |
| **Multi-App Run for Kubernetes** | Configure multiple Dapr applications from a single configuration file and run from a single command on Kubernetes | `dapr run -k -f` | [Multi-App Run]({{< ref multi-app-dapr-run.md >}}) | v1.12 |
-| **Workflows** | Author workflows as code to automate and orchestrate tasks within your application, like messaging, state management, and failure handling | N/A | [Workflows concept]({{< ref "components-concept#workflows" >}})| v1.10 |
| **Cryptography** | Encrypt or decrypt data without having to manage secrets keys | N/A | [Cryptography concept]({{< ref "components-concept#cryptography" >}})| v1.11 |
| **Actor State TTL** | Allow actors to save records to state stores with Time To Live (TTL) set to automatically clean up old data. In its current implementation, actor state with TTL may not be reflected correctly by clients, read [Actor State Transactions]({{< ref actors_api.md >}}) for more information. | `ActorStateTTL` | [Actor State Transactions]({{< ref actors_api.md >}}) | v1.11 |
| **Component Hot Reloading** | Allows for Dapr-loaded components to be "hot reloaded". A component spec is reloaded when it is created/updated/deleted in Kubernetes or on file when running in self-hosted mode. Ignores changes to actor state stores and workflow backends. | `HotReload`| [Hot Reloading]({{< ref components-concept.md >}}) | v1.13 |
diff --git a/daprdocs/content/en/operations/support/support-release-policy.md b/daprdocs/content/en/operations/support/support-release-policy.md
index fbba03b5f14..cb8705451e5 100644
--- a/daprdocs/content/en/operations/support/support-release-policy.md
+++ b/daprdocs/content/en/operations/support/support-release-policy.md
@@ -45,11 +45,12 @@ The table below shows the versions of Dapr releases that have been tested togeth
| Release date | Runtime | CLI | SDKs | Dashboard | Status | Release notes |
|--------------------|:--------:|:--------|---------|---------|---------|------------|
-| September 16th 2024 | 1.14.4 | 1.14.1 | Java 1.12.0 Go 1.11.0 PHP 1.2.0 Python 1.14.0 .NET 1.14.0 JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.4) |
+| February 27th 2025 | 1.15.0 | 1.15.0 | Java 1.14.0 Go 1.12.0 PHP 1.2.0 Python 1.15.0 .NET 1.15.0 JS 3.5.0 Rust 0.16 | 0.15.0 | Supported (current) | [v1.15.0 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.0) |
+| September 16th 2024 | 1.14.4 | 1.14.1 | Java 1.12.0 Go 1.11.0 PHP 1.2.0 Python 1.14.0 .NET 1.14.0 JS 3.3.1 | 0.15.0 | Supported | [v1.14.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.4) |
| September 13th 2024 | 1.14.3 | 1.14.1 | Java 1.12.0 Go 1.11.0 PHP 1.2.0 Python 1.14.0 .NET 1.14.0 JS 3.3.1 | 0.15.0 | ⚠️ Recalled | [v1.14.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.3) |
-| September 6th 2024 | 1.14.2 | 1.14.1 | Java 1.12.0 Go 1.11.0 PHP 1.2.0 Python 1.14.0 .NET 1.14.0 JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.2 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.2) |
-| August 14th 2024 | 1.14.1 | 1.14.1 | Java 1.12.0 Go 1.11.0 PHP 1.2.0 Python 1.14.0 .NET 1.14.0 JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.1 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.1) |
-| August 14th 2024 | 1.14.0 | 1.14.0 | Java 1.12.0 Go 1.11.0 PHP 1.2.0 Python 1.14.0 .NET 1.14.0 JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.0 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.0) |
+| September 6th 2024 | 1.14.2 | 1.14.1 | Java 1.12.0 Go 1.11.0 PHP 1.2.0 Python 1.14.0 .NET 1.14.0 JS 3.3.1 | 0.15.0 | Supported | [v1.14.2 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.2) |
+| August 14th 2024 | 1.14.1 | 1.14.1 | Java 1.12.0 Go 1.11.0 PHP 1.2.0 Python 1.14.0 .NET 1.14.0 JS 3.3.1 | 0.15.0 | Supported | [v1.14.1 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.1) |
+| August 14th 2024 | 1.14.0 | 1.14.0 | Java 1.12.0 Go 1.11.0 PHP 1.2.0 Python 1.14.0 .NET 1.14.0 JS 3.3.1 | 0.15.0 | Supported | [v1.14.0 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.0) |
| May 29th 2024 | 1.13.4 | 1.13.0 | Java 1.11.0 Go 1.10.0 PHP 1.2.0 Python 1.13.0 .NET 1.13.0 JS 3.3.0 | 0.14.0 | Supported | [v1.13.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.4) |
| May 21st 2024 | 1.13.3 | 1.13.0 | Java 1.11.0 Go 1.10.0 PHP 1.2.0 Python 1.13.0 .NET 1.13.0 JS 3.3.0 | 0.14.0 | Supported | [v1.13.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.3) |
| April 3rd 2024 | 1.13.2 | 1.13.0 | Java 1.11.0 Go 1.10.0 PHP 1.2.0 Python 1.13.0 .NET 1.13.0 JS 3.3.0 | 0.14.0 | Supported | [v1.13.2 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.2) |
@@ -143,7 +144,8 @@ General guidance on upgrading can be found for [self hosted mode]({{< ref self-h
| 1.11.0 to 1.11.4 | N/A | 1.12.4 |
| 1.12.0 to 1.12.4 | N/A | 1.13.5 |
| 1.13.0 to 1.13.5 | N/A | 1.14.0 |
-| 1.14.0 to 1.14.2 | N/A | 1.14.2 |
+| 1.14.0 to 1.14.4 | N/A | 1.14.4 |
+| 1.15.0 | N/A | 1.15.0 |
## Upgrade on Hosting platforms
diff --git a/daprdocs/content/en/reference/api/conversation_api.md b/daprdocs/content/en/reference/api/conversation_api.md
index 366625006de..44fa52d286a 100644
--- a/daprdocs/content/en/reference/api/conversation_api.md
+++ b/daprdocs/content/en/reference/api/conversation_api.md
@@ -17,7 +17,7 @@ Dapr provides an API to interact with Large Language Models (LLMs) and enables c
This endpoint lets you converse with LLMs.
```
-POST /v1.0-alpha1/conversation/]/converse
+POST http://localhost:/v1.0-alpha1/conversation//converse
```
### URL parameters
@@ -30,17 +30,34 @@ POST /v1.0-alpha1/conversation//converse
| Field | Description |
| --------- | ----------- |
-| `conversationContext` | |
-| `inputs` | |
-| `parameters` | |
+| `inputs` | Inputs for the conversation. Multiple inputs at one time are supported. Required |
+| `cacheTTL` | A time-to-live value for a prompt cache to expire. Uses Golang duration format. Optional |
+| `scrubPII` | A boolean value to enable obfuscation of sensitive information returning from the LLM. Optional |
+| `temperature` | A float value to control the temperature of the model. Used to optimize for consistency and creativity. Optional |
+| `metadata` | [Metadata](#metadata) passed to conversation components. Optional |
+#### Input body
-### Request content
+| Field | Description |
+| --------- | ----------- |
+| `content` | The message content to send to the LLM. Required |
+| `role` | The role for the LLM to assume. Possible values: 'user', 'tool', 'assistant' |
+| `scrubPII` | A boolean value to enable obfuscation of sensitive information present in the content field. Optional |
+
+### Request content example
```json
REQUEST = {
- "inputs": ["what is Dapr", "Why use Dapr"],
- "parameters": {},
+ "inputs": [
+ {
+ "content": "What is Dapr?",
+ "role": "user", // Optional
+      "scrubPII": "true" // Optional. Will obfuscate any sensitive information found in the content field
+    }
+  ],
+ "cacheTTL": "10m", // Optional
+  "scrubPII": "true", // Optional. Will obfuscate any sensitive information returned by the LLM
+ "temperature": 0.5 // Optional. Optimizes for consistency (0) or creativity (1)
}
```
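The `cacheTTL` field uses Go's duration syntax (for example, `10m` or `1h30m`). As an illustrative sketch (the parser and variable names below are hypothetical, not part of any Dapr SDK), a client could validate the duration before building the request body:

```python
import json
import re

def parse_go_duration(s: str) -> float:
    """Return seconds for a Go-style duration string such as "10m" or "1h30m"."""
    units = {"ns": 1e-9, "us": 1e-6, "ms": 1e-3, "s": 1, "m": 60, "h": 3600}
    parts = re.findall(r"(\d+(?:\.\d+)?)(ns|us|ms|s|m|h)", s)
    # Reject strings with leftover characters the regex did not consume.
    if not parts or "".join(n + u for n, u in parts) != s:
        raise ValueError(f"invalid Go duration: {s!r}")
    return sum(float(n) * units[u] for n, u in parts)

request = {
    "inputs": [{"content": "What is Dapr?", "role": "user", "scrubPII": "true"}],
    "cacheTTL": "10m",
    "scrubPII": "true",
    "temperature": 0.5,
}

parse_go_duration(request["cacheTTL"])  # 600.0 seconds
body = json.dumps(request)  # the JSON document sent as the POST body
```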
@@ -50,7 +67,7 @@ Code | Description
---- | -----------
`202` | Accepted
`400` | Request was malformed
-`500` | Request formatted correctly, error in dapr code or underlying component
+`500` | Request formatted correctly, error in Dapr code or underlying component
### Response content
@@ -71,4 +88,5 @@ RESPONSE = {
## Next steps
-[Conversation API overview]({{< ref conversation-overview.md >}})
\ No newline at end of file
+- [Conversation API overview]({{< ref conversation-overview.md >}})
+- [Supported conversation components]({{< ref supported-conversation >}})
\ No newline at end of file
diff --git a/daprdocs/content/en/reference/api/error_codes.md b/daprdocs/content/en/reference/api/error_codes.md
deleted file mode 100644
index c098521ccb5..00000000000
--- a/daprdocs/content/en/reference/api/error_codes.md
+++ /dev/null
@@ -1,156 +0,0 @@
----
-type: docs
-title: "Error codes returned by APIs"
-linkTitle: "Error codes"
-description: "Detailed reference of the Dapr API error codes"
-weight: 1400
----
-
-For http calls made to Dapr runtime, when an error is encountered, an error json is returned in http response body. The json contains an error code and an descriptive error message, e.g.
-
-```
-{
- "errorCode": "ERR_STATE_GET",
- "message": "Requested state key does not exist in state store."
-}
-```
-
-The following tables list the error codes returned by Dapr runtime:
-
-### Actors API
-
-| Error Code | Description |
-| -------------------------------- | ------------------------------------------ |
-| ERR_ACTOR_INSTANCE_MISSING | Error when an actor instance is missing. |
-| ERR_ACTOR_RUNTIME_NOT_FOUND | Error the actor instance. |
-| ERR_ACTOR_REMINDER_CREATE | Error creating a reminder for an actor. |
-| ERR_ACTOR_REMINDER_DELETE | Error deleting a reminder for an actor. |
-| ERR_ACTOR_TIMER_CREATE | Error creating a timer for an actor. |
-| ERR_ACTOR_TIMER_DELETE | Error deleting a timer for an actor. |
-| ERR_ACTOR_REMINDER_GET | Error getting a reminder for an actor. |
-| ERR_ACTOR_INVOKE_METHOD | Error invoking a method on an actor. |
-| ERR_ACTOR_STATE_DELETE | Error deleting the state for an actor. |
-| ERR_ACTOR_STATE_GET | Error getting the state for an actor. |
-| ERR_ACTOR_STATE_TRANSACTION_SAVE | Error storing actor state transactionally. |
-| ERR_ACTOR_REMINDER_NON_HOSTED | Error setting reminder for an actor. |
-
-### Workflows API
-
-| Error Code | Description |
-| -------------------------------- | ----------------------------------------------------------- |
-| ERR_GET_WORKFLOW | Error getting workflow. |
-| ERR_START_WORKFLOW | Error starting the workflow. |
-| ERR_PAUSE_WORKFLOW | Error pausing the workflow. |
-| ERR_RESUME_WORKFLOW | Error resuming the workflow. |
-| ERR_TERMINATE_WORKFLOW | Error terminating the workflow. |
-| ERR_PURGE_WORKFLOW | Error purging workflow. |
-| ERR_RAISE_EVENT_WORKFLOW | Error raising an event within the workflow. |
-| ERR_WORKFLOW_COMPONENT_MISSING | Error when a workflow component is missing a configuration. |
-| ERR_WORKFLOW_COMPONENT_NOT_FOUND | Error when a workflow component is not found. |
-| ERR_WORKFLOW_EVENT_NAME_MISSING | Error when the event name for a workflow is missing. |
-| ERR_WORKFLOW_NAME_MISSING | Error when the workflow name is missing. |
-| ERR_INSTANCE_ID_INVALID | Error invalid workflow instance ID provided. |
-| ERR_INSTANCE_ID_NOT_FOUND | Error workflow instance ID not found. |
-| ERR_INSTANCE_ID_PROVIDED_MISSING | Error workflow instance ID was provided but missing. |
-| ERR_INSTANCE_ID_TOO_LONG | Error workflow instance ID exceeds allowable length. |
-
-### State Management API
-
-| Error Code | Description |
-| ------------------------------------- | ------------------------------------------------------------------------- |
-| ERR_STATE_STORE_NOT_FOUND | Error referencing a state store not found. |
-| ERR_STATE_STORES_NOT_CONFIGURED | Error no state stores configured. |
-| ERR_NOT_SUPPORTED_STATE_OPERATION | Error transaction requested on a state store with no transaction support. |
-| ERR_STATE_GET | Error getting a state for state store. |
-| ERR_STATE_DELETE | Error deleting a state from state store. |
-| ERR_STATE_SAVE | Error saving a state in state store. |
-| ERR_STATE_TRANSACTION | Error encountered during state transaction. |
-| ERR_STATE_BULK_GET | Error performing bulk retrieval of state entries. |
-| ERR_STATE_QUERY | Error querying the state store. |
-| ERR_STATE_STORE_NOT_CONFIGURED | Error state store is not configured. |
-| ERR_STATE_STORE_NOT_SUPPORTED | Error state store is not supported. |
-| ERR_STATE_STORE_TOO_MANY_TRANSACTIONS | Error exceeded maximum allowable transactions. |
-
-### Configuration API
-
-| Error Code | Description |
-| -------------------------------------- | -------------------------------------------- |
-| ERR_CONFIGURATION_GET | Error retrieving configuration. |
-| ERR_CONFIGURATION_STORE_NOT_CONFIGURED | Error configuration store is not configured. |
-| ERR_CONFIGURATION_STORE_NOT_FOUND | Error configuration store not found. |
-| ERR_CONFIGURATION_SUBSCRIBE | Error subscribing to a configuration. |
-| ERR_CONFIGURATION_UNSUBSCRIBE | Error unsubscribing from a configuration. |
-
-### Crypto API
-
-| Error Code | Description |
-| ----------------------------------- | ------------------------------------------ |
-| ERR_CRYPTO | General crypto building block error. |
-| ERR_CRYPTO_KEY | Error related to a crypto key. |
-| ERR_CRYPTO_PROVIDER_NOT_FOUND | Error specified crypto provider not found. |
-| ERR_CRYPTO_PROVIDERS_NOT_CONFIGURED | Error no crypto providers configured. |
-
-### Secrets API
-
-| Error Code | Description |
-| -------------------------------- | ---------------------------------------------------- |
-| ERR_SECRET_STORES_NOT_CONFIGURED | Error that no secret store is configured. |
-| ERR_SECRET_STORE_NOT_FOUND | Error that specified secret store is not found. |
-| ERR_SECRET_GET | Error retrieving the specified secret. |
-| ERR_PERMISSION_DENIED | Error access denied due to insufficient permissions. |
-
-### Pub/Sub API
-
-| Error Code | Description |
-| --------------------------- | -------------------------------------------------------- |
-| ERR_PUBSUB_NOT_FOUND | Error referencing the Pub/Sub component in Dapr runtime. |
-| ERR_PUBSUB_PUBLISH_MESSAGE | Error publishing a message. |
-| ERR_PUBSUB_FORBIDDEN | Error message forbidden by access controls. |
-| ERR_PUBSUB_CLOUD_EVENTS_SER | Error serializing Pub/Sub event envelope. |
-| ERR_PUBSUB_EMPTY | Error empty Pub/Sub. |
-| ERR_PUBSUB_NOT_CONFIGURED | Error Pub/Sub component is not configured. |
-| ERR_PUBSUB_REQUEST_METADATA | Error with metadata in Pub/Sub request. |
-| ERR_PUBSUB_EVENTS_SER | Error serializing Pub/Sub events. |
-| ERR_PUBLISH_OUTBOX | Error publishing message to the outbox. |
-| ERR_TOPIC_NAME_EMPTY | Error topic name for Pub/Sub message is empty. |
-
-### Conversation API
-
-| Error Code | Description |
-| ------------------------------- | ----------------------------------------------- |
-| ERR_INVOKE_OUTPUT_BINDING | Error invoking an output binding. |
-| ERR_DIRECT_INVOKE | Error in direct invocation. |
-| ERR_CONVERSATION_INVALID_PARMS | Error invalid parameters for conversation. |
-| ERR_CONVERSATION_INVOKE | Error invoking the conversation. |
-| ERR_CONVERSATION_MISSING_INPUTS | Error missing required inputs for conversation. |
-| ERR_CONVERSATION_NOT_FOUND | Error conversation not found. |
-
-### Distributed Lock API
-
-| Error Code | Description |
-| ----------------------------- | ----------------------------------- |
-| ERR_TRY_LOCK | Error attempting to acquire a lock. |
-| ERR_UNLOCK | Error attempting to release a lock. |
-| ERR_LOCK_STORE_NOT_CONFIGURED | Error lock store is not configured. |
-| ERR_LOCK_STORE_NOT_FOUND | Error lock store not found. |
-
-### Healthz
-
-| Error Code | Description |
-| ----------------------------- | --------------------------------------------------------------- |
-| ERR_HEALTH_NOT_READY | Error that Dapr is not ready. |
-| ERR_HEALTH_APPID_NOT_MATCH | Error the app-id does not match expected value in health check. |
-| ERR_OUTBOUND_HEALTH_NOT_READY | Error outbound connection health is not ready. |
-
-### Common
-
-| Error Code | Description |
-| -------------------------- | ------------------------------------------------ |
-| ERR_API_UNIMPLEMENTED | Error API is not implemented. |
-| ERR_APP_CHANNEL_NIL | Error application channel is nil. |
-| ERR_BAD_REQUEST | Error client request is badly formed or invalid. |
-| ERR_BODY_READ | Error reading body. |
-| ERR_INTERNAL | Internal server error encountered. |
-| ERR_MALFORMED_REQUEST | Error with a malformed request. |
-| ERR_MALFORMED_REQUEST_DATA | Error request data is malformed. |
-| ERR_MALFORMED_RESPONSE | Error response data is malformed. |
diff --git a/daprdocs/content/en/reference/api/jobs_api.md b/daprdocs/content/en/reference/api/jobs_api.md
index 45459867684..bb635e3c759 100644
--- a/daprdocs/content/en/reference/api/jobs_api.md
+++ b/daprdocs/content/en/reference/api/jobs_api.md
@@ -13,11 +13,11 @@ The jobs API is currently in alpha.
With the jobs API, you can schedule jobs and tasks in the future.
> The HTTP APIs are intended for development and testing only. For production scenarios, the use of the SDKs is strongly
-> recommended as they implement the gRPC APIs providing higher performance and capability than the HTTP APIs.
+> recommended as they implement the gRPC APIs providing higher performance and capability than the HTTP APIs. This is because HTTP requires JSON marshalling, which can be expensive, while gRPC transmits and stores the data as-is, making it more performant.
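The marshalling overhead can be illustrated with a small sketch. This is a general JSON-versus-raw-bytes comparison, not Dapr's actual wire format:

```python
import base64
import json

payload = bytes(range(256)) * 4  # 1 KiB of opaque binary data

# An HTTP/JSON envelope must base64-encode binary data and re-parse it on
# the receiving side: roughly a 33% size increase plus encode/decode work.
envelope = json.dumps({"data": base64.b64encode(payload).decode()})
roundtrip = base64.b64decode(json.loads(envelope)["data"])

# A raw binary frame (as gRPC uses) would carry the bytes unchanged.
assert roundtrip == payload
assert len(envelope) > len(payload)
```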
## Schedule a job
-Schedule a job with a name.
+Schedule a job with a name. Jobs are scheduled based on the clock of the server where the Scheduler service is running; the timestamp is not converted to UTC. You can include a timezone offset in the RFC3339 timestamp to specify which timezone the job should adhere to. If no timezone is provided, the server's local time is used.
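For example, Python's standard library can produce RFC3339 timestamps both with and without an explicit offset (illustrative only; the field names are not shown here):

```python
from datetime import datetime, timedelta, timezone

# A timestamp with an explicit -08:00 offset: the job adheres to that zone.
pacific = timezone(timedelta(hours=-8))
with_zone = datetime(2025, 3, 1, 9, 30, tzinfo=pacific).isoformat()
# "2025-03-01T09:30:00-08:00"

# A timestamp without an offset: the Scheduler server's local clock applies.
without_zone = datetime(2025, 3, 1, 9, 30).isoformat()
# "2025-03-01T09:30:00"
```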
```
POST http://localhost:3500/v1.0-alpha1/jobs/
diff --git a/daprdocs/content/en/reference/api/metadata_api.md b/daprdocs/content/en/reference/api/metadata_api.md
index 29629705a52..af0e8ebb12c 100644
--- a/daprdocs/content/en/reference/api/metadata_api.md
+++ b/daprdocs/content/en/reference/api/metadata_api.md
@@ -37,6 +37,9 @@ A list of features enabled via Configuration spec (including build-time override
### App connection details
The metadata API returns information related to Dapr's connection to the app. This includes the app port, protocol, host, max concurrency, along with health check details.
+### Scheduler connection details
+Information related to the connection to one or more scheduler hosts.
+
### Attributes
The metadata API allows you to store additional attribute information in the format of key-value pairs. These are ephemeral in-memory and are not persisted if a sidecar is reloaded. This information should be added at the time of a sidecar creation (for example, after the application has started).
@@ -82,6 +85,7 @@ components | [Metadata API Response Component](#metadataapiresponsec
httpEndpoints | [Metadata API Response HttpEndpoint](#metadataapiresponsehttpendpoint)[] | A json encoded array of loaded HttpEndpoints metadata.
subscriptions | [Metadata API Response Subscription](#metadataapiresponsesubscription)[] | A json encoded array of pub/sub subscriptions metadata.
appConnectionProperties| [Metadata API Response AppConnectionProperties](#metadataapiresponseappconnectionproperties) | A json encoded object of app connection properties.
+scheduler | [Metadata API Response Scheduler](#metadataapiresponsescheduler) | A json encoded object of scheduler connection properties.
**Metadata API Response Registered Actor**
@@ -142,6 +146,12 @@ healthProbeInterval | string | Time between each health probe, in go duration fo
healthProbeTimeout | string | Timeout for each health probe, in go duration format.
healthThreshold | integer | Max number of failed health probes before the app is considered unhealthy.
+**Metadata API Response Scheduler**
+
+Name | Type | Description
+---- | ---- | -----------
+connected_addresses | string[] | List of strings representing the addresses of the connected scheduler hosts.
+
### Examples
@@ -215,6 +225,13 @@ curl http://localhost:3500/v1.0/metadata
"healthProbeTimeout": "500ms",
"healthThreshold": 3
}
+ },
+ "scheduler": {
+ "connected_addresses": [
+ "10.244.0.47:50006",
+ "10.244.0.48:50006",
+ "10.244.0.49:50006"
+ ]
}
}
```
@@ -338,6 +355,13 @@ Get the metadata information to confirm your custom attribute was added:
"healthProbeTimeout": "500ms",
"healthThreshold": 3
}
+ },
+ "scheduler": {
+ "connected_addresses": [
+ "10.244.0.47:50006",
+ "10.244.0.48:50006",
+ "10.244.0.49:50006"
+ ]
}
}
```
diff --git a/daprdocs/content/en/reference/api/workflow_api.md b/daprdocs/content/en/reference/api/workflow_api.md
index c9dddaa618e..5a3ff3dd712 100644
--- a/daprdocs/content/en/reference/api/workflow_api.md
+++ b/daprdocs/content/en/reference/api/workflow_api.md
@@ -6,7 +6,7 @@ description: "Detailed documentation on the workflow API"
weight: 300
---
-Dapr provides users with the ability to interact with workflows and comes with a built-in `dapr` component.
+Dapr provides users with the ability to interact with workflows through its built-in workflow engine, which is implemented using Dapr Actors. This workflow engine is accessed using the name `dapr` in API calls as the `workflowComponentName`.
## Start workflow request
@@ -36,7 +36,7 @@ Code | Description
---- | -----------
`202` | Accepted
`400` | Request was malformed
-`500` | Request formatted correctly, error in dapr code or underlying component
+`500` | Request formatted correctly, error in Dapr code
### Response content
@@ -76,7 +76,7 @@ Code | Description
---- | -----------
`202` | Accepted
`400` | Request was malformed
-`500` | Request formatted correctly, error in dapr code or underlying component
+`500` | Request formatted correctly, error in Dapr code
### Response content
@@ -163,7 +163,7 @@ Code | Description
---- | -----------
`202` | Accepted
`400` | Request was malformed
-`500` | Error in Dapr code or underlying component
+`500` | Error in Dapr code
### Response content
@@ -194,7 +194,7 @@ Code | Description
---- | -----------
`202` | Accepted
`400` | Request was malformed
-`500` | Error in Dapr code or underlying component
+`500` | Error in Dapr code
### Response content
@@ -221,7 +221,7 @@ Code | Description
---- | -----------
`200` | OK
`400` | Request was malformed
-`500` | Request formatted correctly, error in dapr code or underlying component
+`500` | Error in Dapr code
### Response content
@@ -244,30 +244,6 @@ Parameter | Description
--------- | -----------
`runtimeStatus` | The status of the workflow instance. Values include: `"RUNNING"`, `"COMPLETED"`, `"CONTINUED_AS_NEW"`, `"FAILED"`, `"CANCELED"`, `"TERMINATED"`, `"PENDING"`, `"SUSPENDED"`
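A client polling for completion needs to know when to stop. As an illustrative helper, the grouping of statuses into terminal and non-terminal below is an assumption, not something this reference defines:

```python
# Assumption: these four statuses are terminal (no further progress possible);
# the remaining statuses (RUNNING, PENDING, SUSPENDED, CONTINUED_AS_NEW) are not.
TERMINAL_STATUSES = {"COMPLETED", "FAILED", "CANCELED", "TERMINATED"}

def is_finished(runtime_status: str) -> bool:
    """True when a polled workflow instance will make no further progress."""
    return runtime_status in TERMINAL_STATUSES
```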
-## Component format
-
-A Dapr `workflow.yaml` component file has the following structure:
-
-```yaml
-apiVersion: dapr.io/v1alpha1
-kind: Component
-metadata:
- name:
-spec:
- type: workflow.
- version: v1.0-alpha1
- metadata:
- - name:
- value:
- ```
-
-| Setting | Description |
-| ------- | ----------- |
-| `metadata.name` | The name of the workflow component. |
-| `spec/metadata` | Additional metadata parameters specified by workflow component |
-
-However, Dapr comes with a built-in `dapr` workflow component that is built on Dapr Actors. No component file is required to use the built-in Dapr workflow component.
-
## Next Steps
- [Workflow API overview]({{< ref workflow-overview.md >}})
diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/eventhubs.md b/daprdocs/content/en/reference/components-reference/supported-bindings/eventhubs.md
index be8536f7267..08215f0e1d9 100644
--- a/daprdocs/content/en/reference/components-reference/supported-bindings/eventhubs.md
+++ b/daprdocs/content/en/reference/components-reference/supported-bindings/eventhubs.md
@@ -58,6 +58,8 @@ spec:
- name: storageConnectionString
value: "DefaultEndpointsProtocol=https;AccountName=;AccountKey="
# Optional metadata
+ - name: getAllMessageProperties
+ value: "true"
- name: direction
value: "input, output"
```
@@ -84,6 +86,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `storageAccountKey` | Y* | Input | Storage account key for the checkpoint store account. * When using Microsoft Entra ID, it's possible to omit this if the service principal has access to the storage account too. | `"112233445566778899"`
| `storageConnectionString` | Y* | Input | Connection string for the checkpoint store, alternative to specifying `storageAccountKey` | `"DefaultEndpointsProtocol=https;AccountName=myeventhubstorage;AccountKey="`
| `storageContainerName` | Y | Input | Storage container name for the storage account name. | `"myeventhubstoragecontainer"`
+| `getAllMessageProperties` | N | Input | When set to `true`, retrieves all user/app/custom properties from the Event Hub message and forwards them in the returned event metadata. Default setting is `"false"`. | `"true"`, `"false"`
| `direction` | N | Input/Output | The direction of the binding. | `"input"`, `"output"`, `"input, output"`
### Microsoft Entra ID authentication
diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/sftp.md b/daprdocs/content/en/reference/components-reference/supported-bindings/sftp.md
index a0e356e54b2..2b70e456d0b 100644
--- a/daprdocs/content/en/reference/components-reference/supported-bindings/sftp.md
+++ b/daprdocs/content/en/reference/components-reference/supported-bindings/sftp.md
@@ -9,7 +9,7 @@ aliases:
## Component format
-To set up the SFTP binding, create a component of type `bindings.sftp`. See [this guide]({{ ref bindings-overview.md }}) on how to create and apply a binding configuration.
+To set up the SFTP binding, create a component of type `bindings.sftp`. See [this guide]({{< ref bindings-overview.md >}}) on how to create and apply a binding configuration.
```yaml
apiVersion: dapr.io/v1alpha1
diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/zeebe-command.md b/daprdocs/content/en/reference/components-reference/supported-bindings/zeebe-command.md
index 780bfaebefe..76def5a6b42 100644
--- a/daprdocs/content/en/reference/components-reference/supported-bindings/zeebe-command.md
+++ b/daprdocs/content/en/reference/components-reference/supported-bindings/zeebe-command.md
@@ -675,7 +675,12 @@ To perform a `throw-error` operation, invoke the Zeebe command binding with a `P
"data": {
"jobKey": 2251799813686172,
"errorCode": "product-fetch-error",
- "errorMessage": "The product could not be fetched"
+ "errorMessage": "The product could not be fetched",
+ "variables": {
+ "productId": "some-product-id",
+ "productName": "some-product-name",
+ "productKey": "some-product-key"
+ }
},
"operation": "throw-error"
}
@@ -686,6 +691,11 @@ The data parameters are:
- `jobKey` - the unique job identifier, as obtained when activating the job
- `errorCode` - the error code that will be matched with an error catch event
- `errorMessage` - (optional) an error message that provides additional context
+- `variables` - (optional) a JSON document that will instantiate the variables at the local scope of the
+  job's associated task. It must be a JSON object, as variables are mapped in a
+  key-value fashion. For example, { "a": 1, "b": 2 } creates two variables, named "a" and
+  "b" respectively, with their associated values. [{ "a": 1, "b": 2 }] would not be a
+  valid argument, as the root of the JSON document is an array and not an object.
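This object-at-the-root constraint can be checked client-side before invoking the binding; a minimal sketch (the helper name is hypothetical):

```python
import json

def validate_variables(doc: str) -> dict:
    """Reject documents whose root is not a JSON object."""
    parsed = json.loads(doc)
    if not isinstance(parsed, dict):
        raise ValueError("variables must be a JSON object at the root")
    return parsed

validate_variables('{ "a": 1, "b": 2 }')      # ok: creates variables "a" and "b"
# validate_variables('[{ "a": 1, "b": 2 }]')  # raises ValueError: root is an array
```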
##### Response
diff --git a/daprdocs/content/en/reference/components-reference/supported-conversation/aws-bedrock.md b/daprdocs/content/en/reference/components-reference/supported-conversation/aws-bedrock.md
index 759e370134d..d1b5f2dd128 100644
--- a/daprdocs/content/en/reference/components-reference/supported-conversation/aws-bedrock.md
+++ b/daprdocs/content/en/reference/components-reference/supported-conversation/aws-bedrock.md
@@ -37,6 +37,10 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `model` | N | The LLM to use. Defaults to Bedrock's default provider model from Amazon. | `amazon.titan-text-express-v1` |
| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | `10m` |
+## Authenticating AWS
+
+Instead of using a `key` parameter, AWS Bedrock authenticates using Dapr's standard method of IAM or static credentials. [Learn more about authenticating with AWS.]({{< ref authenticating-aws.md >}})
+
## Related links
- [Conversation API overview]({{< ref conversation-overview.md >}})
\ No newline at end of file
diff --git a/daprdocs/content/en/reference/components-reference/supported-conversation/deepseek.md b/daprdocs/content/en/reference/components-reference/supported-conversation/deepseek.md
new file mode 100644
index 00000000000..e09148dafcc
--- /dev/null
+++ b/daprdocs/content/en/reference/components-reference/supported-conversation/deepseek.md
@@ -0,0 +1,39 @@
+---
+type: docs
+title: "DeepSeek"
+linkTitle: "DeepSeek"
+description: Detailed information on the DeepSeek conversation component
+---
+
+## Component format
+
+A Dapr `conversation.yaml` component file has the following structure:
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+ name: deepseek
+spec:
+ type: conversation.deepseek
+ metadata:
+ - name: key
+ value: mykey
+ - name: maxTokens
+ value: 2048
+```
+
+{{% alert title="Warning" color="warning" %}}
+The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}).
+{{% /alert %}}
+
+## Spec metadata fields
+
+| Field | Required | Details | Example |
+|--------------------|:--------:|---------|---------|
+| `key` | Y | API key for DeepSeek. | `mykey` |
+| `maxTokens` | N | The max amount of tokens for each request. | `2048` |
+
+## Related links
+
+- [Conversation API overview]({{< ref conversation-overview.md >}})
\ No newline at end of file
diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/_index.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/_index.md
index 2e2962d6855..fdaca4eca90 100644
--- a/daprdocs/content/en/reference/components-reference/supported-pubsub/_index.md
+++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/_index.md
@@ -12,7 +12,7 @@ no_list: true
The following table lists publish and subscribe brokers supported by the Dapr pub/sub building block. [Learn how to set up different brokers for Dapr publish and subscribe.]({{< ref setup-pubsub.md >}})
{{% alert title="Pub/sub component retries vs inbound resiliency" color="warning" %}}
-Each pub/sub component has its own built-in retry behaviors. Before explicity applying a [Dapr resiliency policy]({{< ref "policies.md" >}}), make sure you understand the implicit retry policy of the pub/sub component you're using. Instead of overriding these built-in retries, Dapr resiliency augments them, which can cause repetitive clustering of messages.
+Each pub/sub component has its own built-in retry behaviors, unique to the message broker solution and unrelated to Dapr. Before explicitly applying a [Dapr resiliency policy]({{< ref "resiliency-overview.md" >}}), make sure you understand the implicit retry policy of the pub/sub component you're using. Instead of overriding these built-in retries, Dapr resiliency augments them, which can cause repetitive clustering of messages.
{{% /alert %}}
diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md
index 503500ca8e2..203d81a7633 100644
--- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md
+++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md
@@ -459,8 +459,8 @@ Apache Kafka supports the following bulk metadata options:
| Configuration | Default |
|----------|---------|
-| `maxBulkAwaitDurationMs` | `10000` (10s) |
-| `maxBulkSubCount` | `80` |
+| `maxAwaitDurationMs` | `10000` (10s) |
+| `maxMessagesCount` | `80` |
## Per-call metadata fields
@@ -540,6 +540,7 @@ app.include_router(router)
```
{{% /codetab %}}
+
{{< /tabs >}}
## Receiving message headers with special characters
diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-eventhubs.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-eventhubs.md
index 73db174a0da..b6132ca8f39 100644
--- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-eventhubs.md
+++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-eventhubs.md
@@ -198,6 +198,44 @@ Entity management is only possible when using [Microsoft Entra ID Authentication
> Dapr passes the name of the consumer group to the Event Hub, so this is not supplied in the metadata.
+## Receiving custom properties
+
+By default, Dapr does not forward [custom properties](https://learn.microsoft.com/azure/event-hubs/add-custom-data-event). However, by setting the subscription metadata `requireAllProperties` to `"true"`, you can receive custom properties as HTTP headers.
+
+```yaml
+apiVersion: dapr.io/v2alpha1
+kind: Subscription
+metadata:
+ name: order-pub-sub
+spec:
+ topic: orders
+ routes:
+ default: /checkout
+ pubsubname: order-pub-sub
+ metadata:
+ requireAllProperties: "true"
+```
+
+The same can be achieved using the Dapr SDK:
+
+{{< tabs ".NET" >}}
+
+{{% codetab %}}
+
+```csharp
+[Topic("order-pub-sub", "orders")]
+[TopicMetadata("requireAllProperties", "true")]
+[HttpPost("checkout")]
+public ActionResult Checkout(Order order, [FromHeader] int priority)
+{
+ return Ok();
+}
+```
+
+{{% /codetab %}}
+
+{{< /tabs >}}
+
## Subscribing to Azure IoT Hub Events
Azure IoT Hub provides an [endpoint that is compatible with Event Hubs](https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-messages-read-builtin#read-from-the-built-in-endpoint), so the Azure Event Hubs pubsub component can also be used to subscribe to Azure IoT Hub events.
diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-mqtt.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-mqtt.md
index 8c4b20e2d8c..454c6ac41a2 100644
--- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-mqtt.md
+++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-mqtt.md
@@ -54,13 +54,13 @@ The above example uses secrets as plain strings. It is recommended to use a secr
The MQTT pub/sub component has no built-in support for retry strategies. This means that the sidecar sends a message to the service only once. If the service marks the message as not processed, the message won't be acknowledged back to the broker. Only if the broker resends the message would it be retried.
-To make Dapr use more spohisticated retry policies, you can apply a [retry resiliency policy]({{< ref "policies.md#retries" >}}) to the MQTT pub/sub component.
+To make Dapr use more sophisticated retry policies, you can apply a [retry resiliency policy]({{< ref "retries-overview.md" >}}) to the MQTT pub/sub component.
There is a crucial difference between the two ways of retries:
1. Re-delivery of unacknowledged messages is completely dependent on the broker. Dapr does not guarantee it. Some brokers like [emqx](https://www.emqx.io/), [vernemq](https://vernemq.com/) etc. support it, but it is not part of the [MQTT3 spec](http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html#_Toc398718103).
-2. Using a [retry resiliency policy]({{< ref "policies.md#retries" >}}) makes the same Dapr sidecar retry redelivering the messages. So it is the same Dapr sidecar and the same app receiving the same message.
+2. Using a [retry resiliency policy]({{< ref "retries-overview.md" >}}) makes the same Dapr sidecar retry delivering the messages. So it is the same Dapr sidecar and the same app receiving the same message.
### Communication using TLS
diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-pulsar.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-pulsar.md
index 3e94c2fc725..5dc9261a7dd 100644
--- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-pulsar.md
+++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-pulsar.md
@@ -167,7 +167,7 @@ spec:
### Enabling message delivery retries
-The Pulsar pub/sub component has no built-in support for retry strategies. This means that sidecar sends a message to the service only once and is not retried in case of failures. To make Dapr use more spohisticated retry policies, you can apply a [retry resiliency policy]({{< ref "policies.md#retries" >}}) to the Pulsar pub/sub component. Note that it will be the same Dapr sidecar retrying the redelivery the message to the same app instance and not other instances.
+The Pulsar pub/sub component has no built-in support for retry strategies. This means that the sidecar sends a message to the service only once, and it is not retried in case of failures. To make Dapr use more sophisticated retry policies, you can apply a [retry resiliency policy]({{< ref "retries-overview.md" >}}) to the Pulsar pub/sub component. Note that it will be the same Dapr sidecar retrying delivery of the message to the same app instance and not other instances.
### Delay queue
diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-rabbitmq.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-rabbitmq.md
index bf8ac3f271f..f6569a8f88e 100644
--- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-rabbitmq.md
+++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-rabbitmq.md
@@ -166,7 +166,7 @@ Note that while the `caCert` and `clientCert` values may not be secrets, they ca
The RabbitMQ pub/sub component has no built-in support for retry strategies. This means that the sidecar sends a message to the service only once. When the service returns a result, the message will be marked as consumed regardless of whether it was processed correctly or not. Note that this is common among all Dapr PubSub components and not just RabbitMQ.
Dapr can try redelivering a message a second time, when `autoAck` is set to `false` and `requeueInFailure` is set to `true`.
-To make Dapr use more sophisticated retry policies, you can apply a [retry resiliency policy]({{< ref "policies.md#retries" >}}) to the RabbitMQ pub/sub component.
+To make Dapr use more sophisticated retry policies, you can apply a [retry resiliency policy]({{< ref "retries-overview.md" >}}) to the RabbitMQ pub/sub component.
There is a crucial difference between the two ways to retry messages:
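The `autoAck`/`requeueInFailure` redelivery behavior described above is configured on the component itself; a hedged sketch, with the component name and connection string as placeholders:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: rabbitmq-pubsub    # illustrative name
spec:
  type: pubsub.rabbitmq
  version: v1
  metadata:
  - name: connectionString
    value: "amqp://localhost:5672"   # placeholder
  - name: autoAck
    value: "false"   # require explicit ack so failed messages are not dropped
  - name: requeueInFailure
    value: "true"    # requeue the message when processing fails
```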
diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v1.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v1.md
index 53e4c0e75d1..3a53c1117a5 100644
--- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v1.md
+++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v1.md
@@ -52,7 +52,7 @@ spec:
# Controls the default mode for executing queries. (optional)
#- name: queryExecMode
# value: ""
- # Uncomment this if you wish to use PostgreSQL as a state store for actors (optional)
+ # Uncomment this if you wish to use PostgreSQL as a state store for actors or workflows (optional)
#- name: actorStateStore
# value: "true"
```
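With the comment above uncommented, a PostgreSQL state store usable by actors and workflows would look roughly like this (the connection string is a placeholder):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.postgresql
  version: v1
  metadata:
  - name: connectionString
    value: "host=localhost user=postgres password=example port=5432 database=dapr"  # placeholder
  - name: actorStateStore
    value: "true"   # required for actor and workflow state
```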
diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v2.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v2.md
index d4e21f17ba8..db5d7eddfef 100644
--- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v2.md
+++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v2.md
@@ -52,7 +52,7 @@ spec:
# Controls the default mode for executing queries. (optional)
#- name: queryExecMode
# value: ""
- # Uncomment this if you wish to use PostgreSQL as a state store for actors (optional)
+ # Uncomment this if you wish to use PostgreSQL as a state store for actors or workflows (optional)
#- name: actorStateStore
# value: "true"
```
diff --git a/daprdocs/content/en/reference/components-reference/supported-workflow-backend/_index.md b/daprdocs/content/en/reference/components-reference/supported-workflow-backend/_index.md
deleted file mode 100644
index 43838d711e2..00000000000
--- a/daprdocs/content/en/reference/components-reference/supported-workflow-backend/_index.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-type: docs
-title: "Workflow backend component specs"
-linkTitle: "Workflow backend"
-weight: 2000
-description: The supported workflow backend that orchestrate workflow and save workflow state
-no_list: true
----
-
-{{< partial "components/description.html" >}}
diff --git a/daprdocs/content/en/reference/components-reference/supported-workflow-backend/actor-workflow-backend.md b/daprdocs/content/en/reference/components-reference/supported-workflow-backend/actor-workflow-backend.md
deleted file mode 100644
index b1eead5631f..00000000000
--- a/daprdocs/content/en/reference/components-reference/supported-workflow-backend/actor-workflow-backend.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-type: docs
-title: "Actor workflow backend"
-linkTitle: "Actor workflow backend"
-description: Detailed information on the Actor workflow backend component
----
-
-## Component format
-
-The Actor workflow backend is the default backend in Dapr. If no workflow backend is explicitly defined, the Actor backend will be used automatically.
-
-You don't need to define any components to use the Actor workflow backend. It's ready to use out-of-the-box.
-
-However, if you wish to explicitly define the Actor workflow backend as a component, you can do so, as shown in the example below.
-
-```yaml
-apiVersion: dapr.io/v1alpha1
-kind: Component
-metadata:
- name: actorbackend
-spec:
- type: workflowbackend.actor
- version: v1
-```
diff --git a/daprdocs/content/en/reference/resource-specs/configuration-schema.md b/daprdocs/content/en/reference/resource-specs/configuration-schema.md
index b52228c16cf..e5caac79219 100644
--- a/daprdocs/content/en/reference/resource-specs/configuration-schema.md
+++ b/daprdocs/content/en/reference/resource-specs/configuration-schema.md
@@ -36,6 +36,7 @@ spec:
labels:
- name:
regex: {}
+ recordErrorCodes:
latencyDistributionBuckets:
-
-
diff --git a/daprdocs/content/en/reference/resource-specs/resiliency-schema.md b/daprdocs/content/en/reference/resource-specs/resiliency-schema.md
index d307b70b4d4..c7bc15553ff 100644
--- a/daprdocs/content/en/reference/resource-specs/resiliency-schema.md
+++ b/daprdocs/content/en/reference/resource-specs/resiliency-schema.md
@@ -64,7 +64,7 @@ targets: # Required
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
-| policies | Y | The configuration of resiliency policies, including: `timeouts` `retries` `circuitBreakers` [See more examples with all of the built-in policies]({{< ref policies.md >}}) | timeout: `general` retry: `retryForever` circuit breaker: `simpleCB` |
+| policies | Y | The configuration of resiliency policies, including: `timeouts` `retries` `circuitBreakers` [See more examples with all of the built-in policies]({{< ref resiliency-overview.md >}}) | timeout: `general` retry: `retryForever` circuit breaker: `simpleCB` |
| targets | Y | The configuration for the applications, actors, or components that use the resiliency policies. [See more examples in the resiliency targets guide]({{< ref targets.md >}}) | `apps` `components` `actors` |
diff --git a/daprdocs/data/components/conversation/generic.yaml b/daprdocs/data/components/conversation/generic.yaml
index 26cf8431ce3..b8961c86829 100644
--- a/daprdocs/data/components/conversation/generic.yaml
+++ b/daprdocs/data/components/conversation/generic.yaml
@@ -18,3 +18,8 @@
state: Alpha
version: v1
since: "1.15"
+- component: DeepSeek
+ link: deepseek
+ state: Alpha
+ version: v1
+ since: "1.15"
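The new DeepSeek entry only registers the component in the docs data; a component manifest for it would presumably follow the shape of the sibling conversation components. The metadata field names `key` and `model` here are assumptions inferred from those siblings, not confirmed by this change:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: deepseek
spec:
  type: conversation.deepseek
  version: v1
  metadata:
  - name: key      # assumed field name; API key placeholder
    value: "mykey"
  - name: model    # assumed field name
    value: "deepseek-chat"
```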
diff --git a/daprdocs/layouts/partials/head.html b/daprdocs/layouts/partials/head.html
new file mode 100644
index 00000000000..92fac408193
--- /dev/null
+++ b/daprdocs/layouts/partials/head.html
@@ -0,0 +1,45 @@
+
+
+{{ hugo.Generator }}
+{{ range .AlternativeOutputFormats -}}
+
+{{ end -}}
+
+{{ $outputFormat := partial "outputformat.html" . -}}
+{{ if and hugo.IsProduction (ne $outputFormat "print") -}}
+
+{{ else -}}
+
+{{ end -}}
+
+{{ partialCached "favicons.html" . }}
+
+ {{- if .IsHome -}}
+ {{ .Site.Title -}}
+ {{ else -}}
+ {{ with .Title }}{{ . }} | {{ end -}}
+ {{ .Site.Title -}}
+ {{ end -}}
+
+{{ $desc := .Page.Description | default (.Page.Content | safeHTML | truncate 150) -}}
+
+{{ template "_internal/opengraph.html" . -}}
+{{ template "_internal/schema.html" . -}}
+{{ template "_internal/twitter_cards.html" . -}}
+{{ partialCached "head-css.html" . "asdf" -}}
+
+{{ if .Site.Params.offlineSearch -}}
+
+{{ end -}}
+
+{{ if .Site.Params.prism_syntax_highlighting -}}
+
+{{ end -}}
+
+{{ partial "hooks/head-end.html" . -}}
diff --git a/daprdocs/layouts/shortcodes/dapr-latest-version.html b/daprdocs/layouts/shortcodes/dapr-latest-version.html
index 79be5626137..ee56053a0fd 100644
--- a/daprdocs/layouts/shortcodes/dapr-latest-version.html
+++ b/daprdocs/layouts/shortcodes/dapr-latest-version.html
@@ -1 +1 @@
-{{- if .Get "short" }}1.14{{ else if .Get "long" }}1.14.4{{ else if .Get "cli" }}1.14.1{{ else }}1.14.1{{ end -}}
+{{- if .Get "short" }}1.15{{ else if .Get "long" }}1.15.0{{ else if .Get "cli" }}1.15.0{{ else }}1.15.0{{ end -}}
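As a usage note, this shortcode is invoked from page content to render the bumped version strings; a sketch of the calling convention, matching the parameter names the shortcode checks:

```
{{% dapr-latest-version short="true" %}}
{{% dapr-latest-version long="true" %}}
{{% dapr-latest-version cli="true" %}}
```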
diff --git a/daprdocs/package-lock.json b/daprdocs/package-lock.json
index 6bcdae97208..e7dec5f674f 100644
--- a/daprdocs/package-lock.json
+++ b/daprdocs/package-lock.json
@@ -720,9 +720,15 @@
}
},
"node_modules/nanoid": {
- "version": "3.3.2",
- "resolved": "https://registry.npmjs.org/nanoid/-/nanoid-3.3.2.tgz",
- "integrity": "sha512-CuHBogktKwpm5g2sRgv83jEy2ijFzBwMoYA60orPDR7ynsLijJDqgsi4RDGj3OJpy3Ieb+LYwiRmIOGyytgITA==",
+ "version": "3.3.8",
+ "resolved": "https://registry.npmjs.org/nanoid/-/nanoid-3.3.8.tgz",
+ "integrity": "sha512-WNLf5Sd8oZxOm+TzppcYk8gVOgP+l58xNy58D0nbUnOxOWRWvlcCV4kUF7ltmI6PsrLl/BgKEyS4mqsGChFN0w==",
+ "funding": [
+ {
+ "type": "github",
+ "url": "https://github.com/sponsors/ai"
+ }
+ ],
"bin": {
"nanoid": "bin/nanoid.cjs"
},
diff --git a/daprdocs/static/docs/open-telemetry-collector/open-telemetry-collector-jaeger.yaml b/daprdocs/static/docs/open-telemetry-collector/open-telemetry-collector-jaeger.yaml
index d8c0fe2934e..dac90954277 100644
--- a/daprdocs/static/docs/open-telemetry-collector/open-telemetry-collector-jaeger.yaml
+++ b/daprdocs/static/docs/open-telemetry-collector/open-telemetry-collector-jaeger.yaml
@@ -19,8 +19,8 @@ data:
zpages:
endpoint: :55679
exporters:
- logging:
- loglevel: debug
+ debug:
+ verbosity: detailed
# Depending on where you want to export your trace, use the
# correct OpenTelemetry trace exporter here.
#
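The `logging` exporter was deprecated in recent OpenTelemetry Collector releases in favor of `debug`; for the rename above to take effect, the service pipeline must reference the new exporter name as well. A minimal sketch, assuming a traces pipeline like the one in the surrounding file:

```yaml
exporters:
  debug:
    verbosity: detailed   # replaces the deprecated `logging` exporter
service:
  pipelines:
    traces:
      exporters: [debug]  # the renamed exporter must be referenced here too
```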
diff --git a/daprdocs/static/images/concepts-components.png b/daprdocs/static/images/concepts-components.png
index c22c50f2355..62515c4da3e 100644
Binary files a/daprdocs/static/images/concepts-components.png and b/daprdocs/static/images/concepts-components.png differ
diff --git a/daprdocs/static/images/conversation-overview.png b/daprdocs/static/images/conversation-overview.png
new file mode 100644
index 00000000000..757faea4081
Binary files /dev/null and b/daprdocs/static/images/conversation-overview.png differ
diff --git a/daprdocs/static/images/resiliency_inbound.png b/daprdocs/static/images/resiliency_inbound.png
index f3ba94de7ed..43ddce30e8c 100644
Binary files a/daprdocs/static/images/resiliency_inbound.png and b/daprdocs/static/images/resiliency_inbound.png differ
diff --git a/daprdocs/static/images/resiliency_outbound.png b/daprdocs/static/images/resiliency_outbound.png
index 73c7e0bbeed..e7e810c3cf8 100644
Binary files a/daprdocs/static/images/resiliency_outbound.png and b/daprdocs/static/images/resiliency_outbound.png differ
diff --git a/daprdocs/static/images/resiliency_pubsub.png b/daprdocs/static/images/resiliency_pubsub.png
index d5a6c990429..50cf7982b12 100644
Binary files a/daprdocs/static/images/resiliency_pubsub.png and b/daprdocs/static/images/resiliency_pubsub.png differ
diff --git a/daprdocs/static/images/resiliency_svc_invocation.png b/daprdocs/static/images/resiliency_svc_invocation.png
index a46316b24c5..b0c23e16291 100644
Binary files a/daprdocs/static/images/resiliency_svc_invocation.png and b/daprdocs/static/images/resiliency_svc_invocation.png differ
diff --git a/daprdocs/static/images/workflow-quickstart-controlflow.png b/daprdocs/static/images/workflow-quickstart-controlflow.png
new file mode 100644
index 00000000000..b4fac3a602b
Binary files /dev/null and b/daprdocs/static/images/workflow-quickstart-controlflow.png differ
diff --git a/daprdocs/static/images/workflow-quickstart-overview.png b/daprdocs/static/images/workflow-quickstart-overview.png
index 7a8ea3e2292..099999724cd 100644
Binary files a/daprdocs/static/images/workflow-quickstart-overview.png and b/daprdocs/static/images/workflow-quickstart-overview.png differ
diff --git a/daprdocs/static/presentations/dapr-slidedeck.pptx.zip b/daprdocs/static/presentations/dapr-slidedeck.pptx.zip
index 985bf939f98..81690292685 100644
Binary files a/daprdocs/static/presentations/dapr-slidedeck.pptx.zip and b/daprdocs/static/presentations/dapr-slidedeck.pptx.zip differ
diff --git a/sdkdocs/dotnet b/sdkdocs/dotnet
index 03038fa5196..52f08517802 160000
--- a/sdkdocs/dotnet
+++ b/sdkdocs/dotnet
@@ -1 +1 @@
-Subproject commit 03038fa519670b583eabcef1417eacd55c3e44c8
+Subproject commit 52f0851780202f71ac4c7fbbcd5c5fb7d674db5a
diff --git a/sdkdocs/go b/sdkdocs/go
index dd9a2d5a3c4..c81a381811f 160000
--- a/sdkdocs/go
+++ b/sdkdocs/go
@@ -1 +1 @@
-Subproject commit dd9a2d5a3c4481b8a6bda032df8f44f5eaedb370
+Subproject commit c81a381811fbd24b038319bbec07b60c215f8e63
diff --git a/sdkdocs/java b/sdkdocs/java
index 0b7a051b79c..22d9874ae05 160000
--- a/sdkdocs/java
+++ b/sdkdocs/java
@@ -1 +1 @@
-Subproject commit 0b7a051b79c7a394e9bd4f57bd40778fb5f29897
+Subproject commit 22d9874ae05c2adaf1eea9fe45e1e6f40c30fb04
diff --git a/sdkdocs/js b/sdkdocs/js
index 76866c878a6..f1dba55586b 160000
--- a/sdkdocs/js
+++ b/sdkdocs/js
@@ -1 +1 @@
-Subproject commit 76866c878a6e79bb889c83f3930172ddb20f1624
+Subproject commit f1dba55586bb734e55de98098284f9139d6e5304
diff --git a/sdkdocs/python b/sdkdocs/python
index 6e90e84b166..fc4980daaa4 160000
--- a/sdkdocs/python
+++ b/sdkdocs/python
@@ -1 +1 @@
-Subproject commit 6e90e84b166ac7ea603b78894e9e1b92dc456014
+Subproject commit fc4980daaa4802bfb2590f133c332b934b196205
diff --git a/sdkdocs/rust b/sdkdocs/rust
index 4abf5aa6504..4e2d3160324 160000
--- a/sdkdocs/rust
+++ b/sdkdocs/rust
@@ -1 +1 @@
-Subproject commit 4abf5aa6504f7c0b0018d20f8dc038a486a67e3a
+Subproject commit 4e2d3160324f9c5968415acf206c039837df9a63
diff --git a/translations/docs-zh b/translations/docs-zh
index 864b558a7c2..8bc9e26a7f2 160000
--- a/translations/docs-zh
+++ b/translations/docs-zh
@@ -1 +1 @@
-Subproject commit 864b558a7c253f037f4c8bd21a579a5dab5e1456
+Subproject commit 8bc9e26a7f2be45602974c96df024fdd2c1539e3