diff --git a/README.md b/README.md index 8bd9986a..4cbfc40c 100644 --- a/README.md +++ b/README.md @@ -84,10 +84,6 @@ Some examples require extra dependencies. See each sample's directory for specif ## Test -Running the tests requires `poe` to be installed. +To run the tests: - uv tool install poethepoet - -Once you have `poe` installed you can run: - - poe test + uv run poe test diff --git a/activity_worker/README.md b/activity_worker/README.md index 0d904313..068d7e99 100644 --- a/activity_worker/README.md +++ b/activity_worker/README.md @@ -6,8 +6,8 @@ First run the Go workflow worker by running this in the `go_workflow` directory go run . -Then in another terminal, run the sample from this directory: +Then in another terminal, run the sample from the root directory: - uv run activity_worker.py + uv run activity_worker/activity_worker.py The Python code will invoke the Go workflow which will execute the Python activity and return. \ No newline at end of file diff --git a/bedrock/basic/README.md b/bedrock/basic/README.md index 88cae861..4f3f7430 100644 --- a/bedrock/basic/README.md +++ b/bedrock/basic/README.md @@ -2,9 +2,9 @@ A basic Bedrock workflow. Starts a workflow with a prompt, generates a response and ends the workflow. -To run, first see `samples-python` [README.md](../../README.md), and `bedrock` [README.md](../README.md) for prerequisites specific to this sample. Once set up, run the following from this directory: +To run, first see `samples-python` [README.md](../../README.md), and `bedrock` [README.md](../README.md) for prerequisites specific to this sample. Once set up, run the following from the root directory: -1. Run the worker: `uv run run_worker.py` +1. Run the worker: `uv run bedrock/basic/run_worker.py` 2. In another terminal run the client with a prompt: - e.g. `uv run send_message.py 'What animals are marsupials?'` \ No newline at end of file + e.g. `uv run bedrock/basic/send_message.py 'What animals are marsupials?'` \ No newline at end of file diff --git a/bedrock/entity/README.md b/bedrock/entity/README.md index 5f53840d..3f8e90da 100644 --- a/bedrock/entity/README.md +++ b/bedrock/entity/README.md @@ -2,18 +2,18 @@ Multi-Turn Chat using an Entity Workflow. The workflow runs forever unless explicitly ended. The workflow continues as new after a configurable number of chat turns to keep the prompt size small and the Temporal event history small. Each continued-as-new workflow receives a summary of the conversation history so far for context. -To run, first see `samples-python` [README.md](../../README.md), and `bedrock` [README.md](../README.md) for prerequisites specific to this sample. Once set up, run the following from this directory: +To run, first see `samples-python` [README.md](../../README.md), and `bedrock` [README.md](../README.md) for prerequisites specific to this sample. Once set up, run the following from the root directory: -1. Run the worker: `uv run run_worker.py` +1. Run the worker: `uv run bedrock/entity/run_worker.py` 2. In another terminal run the client with a prompt. - Example: `uv run send_message.py 'What animals are marsupials?'` + Example: `uv run bedrock/entity/send_message.py 'What animals are marsupials?'` 3. View the worker's output for the response. 4. Give followup prompts by signaling the workflow. - Example: `uv run send_message.py 'Do they lay eggs?'` + Example: `uv run bedrock/entity/send_message.py 'Do they lay eggs?'` 5. Get the conversation history summary by querying the workflow. - Example: `uv run get_history.py` -6. 
To end the chat session, run `uv run end_chat.py`
+   Example: `uv run bedrock/entity/get_history.py`
+6. To end the chat session, run `uv run bedrock/entity/end_chat.py`
diff --git a/bedrock/signals_and_queries/README.md b/bedrock/signals_and_queries/README.md
index edbca795..9eb5df8d 100644
--- a/bedrock/signals_and_queries/README.md
+++ b/bedrock/signals_and_queries/README.md
@@ -2,18 +2,18 @@

 Adding signals & queries to the [basic Bedrock sample](../basic). Starts a workflow with a prompt, allows follow-up prompts to be given using Temporal signals, and allows the conversation history to be queried using Temporal queries.

-To run, first see `samples-python` [README.md](../../README.md), and `bedrock` [README.md](../README.md) for prerequisites specific to this sample. Once set up, run the following from this directory:
+To run, first see `samples-python` [README.md](../../README.md), and `bedrock` [README.md](../README.md) for prerequisites specific to this sample. Once set up, run the following from the root directory:

-1. Run the worker: `uv run run_worker.py`
+1. Run the worker: `uv run bedrock/signals_and_queries/run_worker.py`
 2. In another terminal run the client with a prompt.
-   Example: `uv run send_message.py 'What animals are marsupials?'`
+   Example: `uv run bedrock/signals_and_queries/send_message.py 'What animals are marsupials?'`
 3. View the worker's output for the response.
 4. Give follow-up prompts by signaling the workflow.
-   Example: `uv run send_message.py 'Do they lay eggs?'`
+   Example: `uv run bedrock/signals_and_queries/send_message.py 'Do they lay eggs?'`
 5. Get the conversation history by querying the workflow.
-   Example: `uv run get_history.py`
+   Example: `uv run bedrock/signals_and_queries/get_history.py`
 6. The workflow will time out after inactivity.
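The signal/query wiring these Bedrock samples rely on is ordinary Temporal Python: `@workflow.signal` methods mutate workflow state, a `@workflow.query` method reads it, and the main coroutine reacts via `workflow.wait_condition`. A minimal sketch with hypothetical names (not the samples' exact code):

```python
from temporalio import workflow


@workflow.defn
class ChatWorkflow:
    def __init__(self) -> None:
        self.history: list[str] = []
        self.chat_ended = False

    @workflow.run
    async def run(self) -> list[str]:
        # Park until a signal flips the flag, then return the transcript.
        await workflow.wait_condition(lambda: self.chat_ended)
        return self.history

    @workflow.signal
    async def user_prompt(self, prompt: str) -> None:
        # Signals mutate state; the model call itself belongs in an activity.
        self.history.append(prompt)

    @workflow.signal
    async def end_chat(self) -> None:
        self.chat_ended = True

    @workflow.query
    def get_conversation_history(self) -> list[str]:
        # Queries must be read-only and synchronous.
        return self.history
```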
diff --git a/cloud_export_to_parquet/README.md b/cloud_export_to_parquet/README.md
index fe6b3db9..99c9a196 100644
--- a/cloud_export_to_parquet/README.md
+++ b/cloud_export_to_parquet/README.md
@@ -8,16 +8,16 @@ Please make sure your Python is 3.9 or above. For this sample, run:

 Before you start, please modify the workflow input in `create_schedule.py` with your S3 bucket and namespace. Also make sure you have the right AWS permissions set up in your environment to allow this workflow to read and write to your S3 bucket.

-To run, first see [README.md](../README.md) for prerequisites. Then, run the following from this directory to start the worker:
+To run, first see [README.md](../README.md) for prerequisites. Then, run the following from the root directory to start the worker:

 ```bash
-uv run run_worker.py
+uv run cloud_export_to_parquet/run_worker.py
 ```

 This will start the worker. Then, in another terminal, run the following to execute the schedule:

 ```bash
-uv run create_schedule.py
+uv run cloud_export_to_parquet/create_schedule.py
 ```

 The workflow should convert the exported file in your input S3 bucket to Parquet in your specified location.
diff --git a/context_propagation/README.md b/context_propagation/README.md
index e1027292..fc79f80b 100644
--- a/context_propagation/README.md
+++ b/context_propagation/README.md
@@ -3,14 +3,14 @@

 This sample shows how to use an interceptor to propagate contextual information through workflows and activities. For this example, [contextvars](https://docs.python.org/3/library/contextvars.html) holds the contextual information.

-To run, first see [README.md](../README.md) for prerequisites. Then, run the following from this directory to start the
+To run, first see [README.md](../README.md) for prerequisites. Then, run the following from the root directory to start the
 worker:

-    uv run worker.py
+    uv run context_propagation/worker.py

 This will start the worker. Then, in another terminal, run the following to execute the workflow:

-    uv run starter.py
+    uv run context_propagation/starter.py

 The starter terminal should complete with the hello result and the worker terminal should show the logs with the propagated user ID contextual information flowing through the workflows/activities.
\ No newline at end of file
diff --git a/custom_converter/README.md b/custom_converter/README.md
index 8e82e483..4ded4198 100644
--- a/custom_converter/README.md
+++ b/custom_converter/README.md
@@ -2,14 +2,14 @@

 This sample shows how to make a custom payload converter for a type not natively supported by Temporal.

-To run, first see [README.md](../README.md) for prerequisites. Then, run the following from this directory to start the
+To run, first see [README.md](../README.md) for prerequisites. Then, run the following from the root directory to start the
 worker:

-    uv run worker.py
+    uv run custom_converter/worker.py

 This will start the worker. Then, in another terminal, run the following to execute the workflow:

-    uv run starter.py
+    uv run custom_converter/starter.py

 The workflow should complete with the hello result. If the custom converter was not set for the custom input and output classes, we would get an error on the client side and on the worker side.
\ No newline at end of file
diff --git a/custom_decorator/README.md b/custom_decorator/README.md
index cfc59111..24cd516d 100644
--- a/custom_decorator/README.md
+++ b/custom_decorator/README.md
@@ -3,14 +3,14 @@

 This sample shows how a custom decorator can help with Temporal code reuse. Specifically, this makes a `@auto_heartbeater` decorator that automatically configures an activity to heartbeat twice as frequently as the heartbeat timeout is set to.

-To run, first see [README.md](../README.md) for prerequisites. Then, run the following from this directory to start the
+To run, first see [README.md](../README.md) for prerequisites. Then, run the following from the root directory to start the
 worker:

-    uv run worker.py
+    uv run custom_decorator/worker.py

 This will start the worker. Then, in another terminal, run the following to execute the workflow:

-    uv run starter.py
+    uv run custom_decorator/starter.py

 The workflow will be started, and then after 5 seconds will be sent a signal to cancel its forever-running activity. The activity has a heartbeat timeout set to 2s, so since it has the `@auto_heartbeater` decorator set, it will heartbeat
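The decorator that README describes is small enough to sketch: read the heartbeat timeout from `activity.info()` and run a background task that beats at half that interval. An approximation of the sample's idea, assuming async activities (not necessarily its exact code):

```python
import asyncio
import functools

from temporalio import activity


def auto_heartbeater(fn):
    # Wrap an async activity so it heartbeats twice as frequently as its
    # heartbeat timeout (i.e., every timeout/2 seconds).
    @functools.wraps(fn)
    async def wrapper(*args, **kwargs):
        heartbeat_task = None
        timeout = activity.info().heartbeat_timeout
        if timeout:
            heartbeat_task = asyncio.create_task(
                _heartbeat_every(timeout.total_seconds() / 2)
            )
        try:
            return await fn(*args, **kwargs)
        finally:
            if heartbeat_task:
                heartbeat_task.cancel()

    return wrapper


async def _heartbeat_every(delay: float) -> None:
    # Runs until cancelled by the wrapper above.
    while True:
        await asyncio.sleep(delay)
        activity.heartbeat()
```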
diff --git a/dsl/README.md b/dsl/README.md
index d5051132..5eca4fa4 100644
--- a/dsl/README.md
+++ b/dsl/README.md
@@ -8,22 +8,22 @@ For this sample, the optional `dsl` dependency group must be included. To includ

     uv sync --group dsl

-To run, first see [README.md](../README.md) for prerequisites. Then, run the following from this directory to start the
+To run, first see [README.md](../README.md) for prerequisites. Then, run the following from the root directory to start the
 worker:

-    uv run worker.py
+    uv run dsl/worker.py

 This will start the worker. Then, in another terminal, run the following to execute a workflow of steps defined in
-[workflow1.yaml](workflow1.yaml):
+[workflow1.yaml](dsl/workflow1.yaml):

-    uv run starter.py workflow1.yaml
+    uv run dsl/starter.py dsl/workflow1.yaml

 This will run the workflow and show the final variables that the workflow returns. Looking in the worker terminal, each step executed will be visible.

-Similarly, we can do the same for the more advanced [workflow2.yaml](workflow2.yaml) file:
+Similarly, we can do the same for the more advanced [workflow2.yaml](dsl/workflow2.yaml) file:

-    uv run starter.py workflow2.yaml
+    uv run dsl/starter.py dsl/workflow2.yaml

 This sample shows how one can write a workflow to interpret arbitrary steps from a user-provided DSL. Many DSL models are more advanced and more closely tailored to specific business logic needs.
\ No newline at end of file
diff --git a/encryption/README.md b/encryption/README.md
index 09828793..c323e38b 100644
--- a/encryption/README.md
+++ b/encryption/README.md
@@ -9,14 +9,14 @@ For this sample, the optional `encryption` dependency group must be included. To

     uv sync --group encryption

-To run, first see [README.md](../README.md) for prerequisites. Then, run the following from this directory to start the
+To run, first see [README.md](../README.md) for prerequisites. Then, run the following from the root directory to start the
 worker:

-    uv run worker.py
+    uv run encryption/worker.py

 This will start the worker. Then, in another terminal, run the following to execute the workflow:

-    uv run starter.py
+    uv run encryption/starter.py

 The workflow should complete with the hello result. To view the workflow, use [temporal](https://docs.temporal.io/cli):

@@ -31,7 +31,7 @@ Note what the result looks like (with wrapping removed):

 This is because the data is encrypted and not visible. To make data visible to external Temporal tools like `temporal` and the UI, start a codec server in another terminal:

-    uv run codec_server.py
+    uv run encryption/codec_server.py

 Now with that running, run `temporal` again with the codec endpoint:
diff --git a/gevent_async/README.md b/gevent_async/README.md
index d1d9f048..0c66a54b 100644
--- a/gevent_async/README.md
+++ b/gevent_async/README.md
@@ -15,14 +15,14 @@ For this sample, the optional `gevent` dependency group must be included. To inc

     uv sync --group gevent

 To run the sample, first see [README.md](../README.md) for prerequisites such as having a localhost Temporal server
-running. Then, run the following from this directory to start the worker:
+running. Then, run the following from the root directory to start the worker:

-    uv run worker.py
+    uv run gevent_async/worker.py

 This will start the worker. The worker has a workflow and two activities, one `asyncio` based and one gevent based. Now
-in another terminal, run the following from this directory to execute the workflow:
+in another terminal, run the following to execute the workflow:

-    uv run starter.py
+    uv run gevent_async/starter.py

 The workflow should run and complete with the hello result. Note on the worker terminal there will be logs of the workflow and activity executions.
\ No newline at end of file
diff --git a/hello/README.md b/hello/README.md
index fba44461..f5dde687 100644
--- a/hello/README.md
+++ b/hello/README.md
@@ -2,16 +2,16 @@ These samples show basic workflow and activity features.

-To run, first see [README.md](../README.md) for prerequisites. 
Then, run the following from this directory to run the +To run, first see [README.md](../README.md) for prerequisites. Then, run the following from the root directory to run the `hello_activity.py` sample: - uv run hello_activity.py + uv run hello/hello_activity.py The result will be: Result: Hello, World! -Replace `hello_activity.py` in the command with any other example filename to run it instead. +Replace `hello/hello_activity.py` in the command with any other example filename (with the `hello/` prefix) to run it instead. ## Samples diff --git a/langchain/README.md b/langchain/README.md index f11161e9..506a379d 100644 --- a/langchain/README.md +++ b/langchain/README.md @@ -10,14 +10,14 @@ Export your [OpenAI API key](https://platform.openai.com/api-keys) as an environ export OPENAI_API_KEY='...' -To run, first see [README.md](../README.md) for prerequisites. Then, run the following from this directory to start the +To run, first see [README.md](../README.md) for prerequisites. Then, run the following from the root directory to start the worker: - uv run worker.py + uv run langchain/worker.py This will start the worker. Then, in another terminal, run the following to execute a workflow: - uv run starter.py + uv run langchain/starter.py Then, in another terminal, run the following command to translate a phrase: diff --git a/message_passing/introduction/README.md b/message_passing/introduction/README.md index 7a2bd351..e4e15b1c 100644 --- a/message_passing/introduction/README.md +++ b/message_passing/introduction/README.md @@ -6,13 +6,13 @@ See https://docs.temporal.io/develop/python/message-passing. To run, first see the main [README.md](../../README.md) for prerequisites. -Then create two terminals and `cd` to this directory. +Then create two terminals. Run the worker in one terminal: - uv run worker.py + uv run message_passing/introduction/worker.py And execute the workflow in the other terminal: - uv run starter.py + uv run message_passing/introduction/starter.py diff --git a/message_passing/safe_message_handlers/README.md b/message_passing/safe_message_handlers/README.md index 069d6bca..f8e4db8a 100644 --- a/message_passing/safe_message_handlers/README.md +++ b/message_passing/safe_message_handlers/README.md @@ -10,12 +10,12 @@ This sample shows off important techniques for handling signals and updates, aka To run, first see [README.md](../../README.md) for prerequisites. -Then, run the following from this directory to run the worker: -\ - uv run worker.py +Then, run the following from the root directory to run the worker: + + uv run message_passing/safe_message_handlers/worker.py Then, in another terminal, run the following to execute the workflow: - uv run starter.py + uv run message_passing/safe_message_handlers/starter.py This will start a worker to run your workflow and activities, then start a ClusterManagerWorkflow and put it through its paces. diff --git a/message_passing/update_with_start/lazy_initialization/README.md b/message_passing/update_with_start/lazy_initialization/README.md index 0dbe1844..46f4e813 100644 --- a/message_passing/update_with_start/lazy_initialization/README.md +++ b/message_passing/update_with_start/lazy_initialization/README.md @@ -6,13 +6,13 @@ update-with-start is used to add items to the cart, receiving back the updated c To run, first see the main [README.md](../../../README.md) for prerequisites. 
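On the workflow side, the cart boils down to an update handler that mutates state and returns the new state to the caller; the client-side update-with-start call lives in the sample's `starter.py`. A sketch with hypothetical names (not the sample's exact code):

```python
from temporalio import workflow


@workflow.defn
class ShoppingCartWorkflow:
    def __init__(self) -> None:
        self.items: list[tuple[str, int]] = []
        self.order_submitted = False

    @workflow.run
    async def run(self) -> list[tuple[str, int]]:
        await workflow.wait_condition(lambda: self.order_submitted)
        return self.items

    @workflow.update
    async def add_item(self, sku: str, quantity: int) -> int:
        # Updates (unlike signals) return a value, so the client that
        # issued update-with-start gets the new cart size back.
        self.items.append((sku, quantity))
        return len(self.items)

    @workflow.signal
    def checkout(self) -> None:
        self.order_submitted = True
```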
-Then run the following from this directory: +Then run the following from the root directory: - uv run worker.py + uv run message_passing/update_with_start/lazy_initialization/worker.py Then, in another terminal: - uv run starter.py + uv run message_passing/update_with_start/lazy_initialization/starter.py This will start a worker to run your workflow and activities, then simulate a backend application receiving requests to add items to a shopping cart, before finalizing the order. diff --git a/message_passing/waiting_for_handlers/README.md b/message_passing/waiting_for_handlers/README.md index 238c1fb8..9c9e3f7c 100644 --- a/message_passing/waiting_for_handlers/README.md +++ b/message_passing/waiting_for_handlers/README.md @@ -12,15 +12,15 @@ usually wait for the handlers to finish immediately before the call to continue_as_new(); that's not illustrated in this sample. -To run, open two terminals and `cd` to this directory in them. +To run, open two terminals. Run the worker in one terminal: - uv run worker.py + uv run message_passing/waiting_for_handlers/worker.py And run the workflow-starter code in the other terminal: - uv run starter.py + uv run message_passing/waiting_for_handlers/starter.py Here's the output you'll see: diff --git a/message_passing/waiting_for_handlers_and_compensation/README.md b/message_passing/waiting_for_handlers_and_compensation/README.md index 2ef5fe1f..d23d484e 100644 --- a/message_passing/waiting_for_handlers_and_compensation/README.md +++ b/message_passing/waiting_for_handlers_and_compensation/README.md @@ -9,15 +9,15 @@ This sample demonstrates how to do the following: For a simpler sample showing how to do (1) without (2), see [safe_message_handlers](../safe_message_handlers/README.md). -To run, open two terminals and `cd` to this directory in them. +To run, open two terminals. Run the worker in one terminal: - uv run worker.py + uv run message_passing/waiting_for_handlers_and_compensation/worker.py And run the workflow-starter code in the other terminal: - uv run starter.py + uv run message_passing/waiting_for_handlers_and_compensation/starter.py Here's the output you'll see: diff --git a/open_telemetry/README.md b/open_telemetry/README.md index 06df7326..e95deaca 100644 --- a/open_telemetry/README.md +++ b/open_telemetry/README.md @@ -10,13 +10,13 @@ To run, first see [README.md](../README.md) for prerequisites. Then run the foll docker compose up -Now, from this directory, start the worker in its own terminal: +Now, start the worker in its own terminal: - uv run worker.py + uv run open_telemetry/worker.py Then, in another terminal, run the following to execute the workflow: - uv run starter.py + uv run open_telemetry/starter.py The workflow should complete with the hello result. diff --git a/patching/README.md b/patching/README.md index ac7c55c2..24744075 100644 --- a/patching/README.md +++ b/patching/README.md @@ -8,15 +8,15 @@ To run, first see [README.md](../README.md) for prerequisites. Then follow the p This stage is for existing running workflows. To simulate our initial workflow, run the worker in a separate terminal: - uv run worker.py --workflow initial + uv run patching/worker.py --workflow initial Now we can start this workflow: - uv run starter.py --start-workflow initial-workflow-id + uv run patching/starter.py --start-workflow initial-workflow-id This will output "Started workflow with ID initial-workflow-id and ...". 
Now query this workflow:

-    uv run starter.py --query-workflow initial-workflow-id
+    uv run patching/starter.py --query-workflow initial-workflow-id

 This will output "Query result for ID initial-workflow-id: pre-patch".

@@ -25,21 +25,21 @@ This will output "Query result for ID initial-workflow-id: pre-patch".

 This stage is for when old and new workflows need to run at the same time. To simulate our patched workflow, stop the worker from before and start it again with the patched workflow:

-    uv run worker.py --workflow patched
+    uv run patching/worker.py --workflow patched

 Now let's start another workflow with this patched code:

-    uv run starter.py --start-workflow patched-workflow-id
+    uv run patching/starter.py --start-workflow patched-workflow-id

 This will output "Started workflow with ID patched-workflow-id and ...". Now query the old workflow that's still running:

-    uv run starter.py --query-workflow initial-workflow-id
+    uv run patching/starter.py --query-workflow initial-workflow-id

 This will output "Query result for ID initial-workflow-id: pre-patch" since it is pre-patch. But if we execute a query against the new code:

-    uv run starter.py --query-workflow patched-workflow-id
+    uv run patching/starter.py --query-workflow patched-workflow-id

 We get "Query result for ID patched-workflow-id: post-patch". This is how old workflow code can take old paths and new workflow code can take new paths.

@@ -50,22 +50,22 @@ Once we know that all workflows that started with the initial code from "Stage 1
 the patch so we can deprecate it. To use the patch deprecated workflow, stop the worker from before and start it again with:

-    uv run worker.py --workflow patch-deprecated
+    uv run patching/worker.py --workflow patch-deprecated

 All workflows in "Stage 2" and any new workflows will work. Now let's start another workflow with this patch deprecated code:

-    uv run starter.py --start-workflow patch-deprecated-workflow-id
+    uv run patching/starter.py --start-workflow patch-deprecated-workflow-id

 This will output "Started workflow with ID patch-deprecated-workflow-id and ...". Now query the patched workflow that's still running:

-    uv run starter.py --query-workflow patched-workflow-id
+    uv run patching/starter.py --query-workflow patched-workflow-id

 This will output "Query result for ID patched-workflow-id: post-patch". And if we execute a query against the latest workflow:

-    uv run starter.py --query-workflow patch-deprecated-workflow-id
+    uv run patching/starter.py --query-workflow patch-deprecated-workflow-id

 As expected, this will output "Query result for ID patch-deprecated-workflow-id: post-patch".

@@ -75,22 +75,22 @@ Once we know we don't even have any workflows running on "Stage 2" or before (i.
 both code paths), we can just remove the patch deprecation altogether. To use the patch complete workflow, stop the worker from before and start it again with:

-    uv run worker.py --workflow patch-complete
+    uv run patching/worker.py --workflow patch-complete

 All workflows in "Stage 3" and any new workflows will work. Now let's start another workflow with this patch complete code:

-    uv run starter.py --start-workflow patch-complete-workflow-id
+    uv run patching/starter.py --start-workflow patch-complete-workflow-id

 This will output "Started workflow with ID patch-complete-workflow-id and ...". 
Now query the patch deprecated workflow that's still running:

-    uv run starter.py --query-workflow patch-deprecated-workflow-id
+    uv run patching/starter.py --query-workflow patch-deprecated-workflow-id

 This will output "Query result for ID patch-deprecated-workflow-id: post-patch". And if we execute a query against the latest workflow:

-    uv run starter.py --query-workflow patch-complete-workflow-id
+    uv run patching/starter.py --query-workflow patch-complete-workflow-id

 As expected, this will output "Query result for ID patch-complete-workflow-id: post-patch".
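All four stages revolve around `workflow.patched()` and `workflow.deprecate_patch()`. Roughly, and simplified from the sample, the Stage 2 ("patched") workflow looks like this:

```python
from temporalio import workflow


@workflow.defn
class MyWorkflow:
    def __init__(self) -> None:
        self._result = "<unset>"

    @workflow.run
    async def run(self) -> None:
        if workflow.patched("my-patch"):
            # New executions record the patch marker and take this branch;
            # histories from pre-patch workers replay the else branch.
            self._result = "post-patch"
        else:
            self._result = "pre-patch"
        # Stay running so the starter can query old and new workflows.
        await workflow.wait_condition(lambda: False)

    @workflow.query
    def result(self) -> str:
        return self._result
```

Stage 3 replaces the conditional with `workflow.deprecate_patch("my-patch")` plus only the post-patch code, and Stage 4 drops the patch call entirely.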
diff --git a/polling/frequent/README.md b/polling/frequent/README.md
index d7fafc21..9968e18c 100644
--- a/polling/frequent/README.md
+++ b/polling/frequent/README.md
@@ -6,13 +6,13 @@ To ensure that polling Activity is restarted in a timely manner, we make sure th

 To run, first see [README.md](../../README.md) for prerequisites.

-Then, run the following from this directory to run the sample:
+Then, run the following from the root directory to run the sample:

-    uv run run_worker.py
+    uv run polling/frequent/run_worker.py

 Then, in another terminal, run the following to execute the workflow:

-    uv run run_frequent.py
+    uv run polling/frequent/run_frequent.py

 The Workflow will continue to poll the service and heartbeat on every iteration until it succeeds.
diff --git a/polling/infrequent/README.md b/polling/infrequent/README.md
index b6afeb56..7e86ea43 100644
--- a/polling/infrequent/README.md
+++ b/polling/infrequent/README.md
@@ -11,13 +11,13 @@ This will enable the Activity to be retried exactly on the set interval.

 To run, first see [README.md](../../README.md) for prerequisites.

-Then, run the following from this directory to run the sample:
+Then, run the following from the root directory to run the sample:

-    uv run run_worker.py
+    uv run polling/infrequent/run_worker.py

 Then, in another terminal, run the following to execute the workflow:

-    uv run run_infrequent.py
+    uv run polling/infrequent/run_infrequent.py

 Since the test service simulates being _down_ for four polling attempts and then returns _OK_ on the fifth poll attempt, the Workflow will perform four Activity retries with a 60-second poll interval, and then return the service result on the successful fifth attempt.
diff --git a/polling/periodic_sequence/README.md b/polling/periodic_sequence/README.md
index d632862d..8c5cb950 100644
--- a/polling/periodic_sequence/README.md
+++ b/polling/periodic_sequence/README.md
@@ -6,13 +6,13 @@ This is a rare scenario where polling requires execution of a Sequence of Activi

 To run, first see [README.md](../../README.md) for prerequisites.

-Then, run the following from this directory to run the sample:
+Then, run the following from the root directory to run the sample:

-    uv run run_worker.py
+    uv run polling/periodic_sequence/run_worker.py

 Then, in another terminal, run the following to execute the workflow:

-    uv run run_periodic.py
+    uv run polling/periodic_sequence/run_periodic.py

 This will start a Workflow and Child Workflow to periodically poll an Activity.
diff --git a/prometheus/README.md b/prometheus/README.md
index 091d5171..c0b5eb8f 100644
--- a/prometheus/README.md
+++ b/prometheus/README.md
@@ -2,16 +2,16 @@ This sample shows how to have SDK Prometheus metrics made available via HTTP.

-To run, first see [README.md](../README.md) for prerequisites. Then, run the following from this directory to start the
+To run, first see [README.md](../README.md) for prerequisites. Then, run the following from the root directory to start the
 worker:

-    uv run worker.py
+    uv run prometheus/worker.py

 This will start the worker and the metrics will be visible for this process at http://127.0.0.1:9000/metrics.

 Then, in another terminal, run the following to execute a workflow:

-    uv run starter.py
+    uv run prometheus/starter.py

 After executing the workflow, the process will stay open so the metrics of this separate process can be accessed at http://127.0.0.1:9001/metrics.
\ No newline at end of file
diff --git a/pydantic_converter/README.md b/pydantic_converter/README.md
index bdbf0329..e215e735 100644
--- a/pydantic_converter/README.md
+++ b/pydantic_converter/README.md
@@ -6,14 +6,14 @@ For this sample, the optional `pydantic_converter` dependency group must be incl

     uv sync --group pydantic-converter

-To run, first see [README.md](../README.md) for prerequisites. Then, run the following from this directory to start the
+To run, first see [README.md](../README.md) for prerequisites. Then, run the following from the root directory to start the
 worker:

-    uv run worker.py
+    uv run pydantic_converter/worker.py

 This will start the worker. Then, in another terminal, run the following to execute the workflow:

-    uv run starter.py
+    uv run pydantic_converter/starter.py

 In the worker terminal, the workflow and its activity will log that it received the Pydantic models. In the starter terminal, the Pydantic models in the workflow result will be logged.
diff --git a/pydantic_converter_v1/README.md b/pydantic_converter_v1/README.md
index a38b00fc..526e6930 100644
--- a/pydantic_converter_v1/README.md
+++ b/pydantic_converter_v1/README.md
@@ -9,14 +9,14 @@ To install, run:

     uv run pip uninstall pydantic pydantic-core
     uv run pip install pydantic==1.10

-To run, first see the root [README.md](../README.md) for prerequisites. Then, run the following from this directory to start the
+To run, first see the root [README.md](../README.md) for prerequisites. Then, run the following from the root directory to start the
 worker:

-    uv run worker.py
+    uv run pydantic_converter_v1/worker.py

 This will start the worker. Then, in another terminal, run the following to execute the workflow:

-    uv run starter.py
+    uv run pydantic_converter_v1/starter.py

 In the worker terminal, the workflow and its activity will log that it received the Pydantic models. In the starter terminal, the Pydantic models in the workflow result will be logged.
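Both Pydantic samples come down to the same wiring: the worker and the starter must share a Pydantic-aware data converter so models round-trip through payloads on both sides. A minimal sketch for the v2 sample, assuming a recent `temporalio` SDK that ships `temporalio.contrib.pydantic` (the v1 sample instead builds its own converter):

```python
import asyncio

from temporalio.client import Client
from temporalio.contrib.pydantic import pydantic_data_converter


async def connect() -> Client:
    # Use the same converter in worker.py and starter.py; otherwise one
    # side will fail to decode the other's Pydantic model payloads.
    return await Client.connect(
        "localhost:7233", data_converter=pydantic_data_converter
    )


if __name__ == "__main__":
    asyncio.run(connect())
```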
diff --git a/pyproject.toml b/pyproject.toml
index 434e108a..3d114dd2 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -25,6 +25,7 @@ dev = [
     "pyright>=1.1.394",
     "types-pyyaml>=6.0.12.20241230,<7",
     "pytest-pretty>=1.3.0",
+    "poethepoet>=0.36.0",
 ]
 bedrock = ["boto3>=1.34.92,<2"]
 dsl = [
diff --git a/replay/README.md b/replay/README.md
index 4b220f9d..e9fc9ccf 100644
--- a/replay/README.md
+++ b/replay/README.md
@@ -3,18 +3,18 @@

 This sample shows you how you can verify changes to workflow code are compatible with existing workflow histories.

-To run, first see [README.md](../README.md) for prerequisites. Then, run the following from this directory to start the
+To run, first see [README.md](../README.md) for prerequisites. Then, run the following from the root directory to start the
 worker:

-    uv run worker.py
+    uv run replay/worker.py

 This will start the worker. Then, in another terminal, run the following to execute a workflow:

-    uv run starter.py
+    uv run replay/starter.py

 Next, run the replayer:

-    uv run replayer.py
+    uv run replay/replayer.py

 Which should produce some output like:
diff --git a/resource_pool/README.md b/resource_pool/README.md
index 143de0f3..fe48e1ab 100644
--- a/resource_pool/README.md
+++ b/resource_pool/README.md
@@ -4,13 +4,13 @@ This sample shows how to use a long-lived `ResourcePoolWorkflow` to allocate `re

 Each `ResourceUserWorkflow` runs several activities while it has ownership of a resource. Note that `ResourcePoolWorkflow` is making resource allocation decisions based on in-memory state.

-Run the following from this directory to start the worker:
+Run the following from the root directory to start the worker:

-    uv run worker.py
+    uv run resource_pool/worker.py

 This will start the worker. Then, in another terminal, run the following to execute several `ResourceUserWorkflows`:

-    uv run starter.py
+    uv run resource_pool/starter.py

 You should see output indicating that the `ResourcePoolWorkflow` serialized access to each resource.
diff --git a/schedules/README.md b/schedules/README.md
index 8327d769..e5eed069 100644
--- a/schedules/README.md
+++ b/schedules/README.md
@@ -2,20 +2,20 @@

 These samples show how to schedule a Workflow Execution and control certain actions.

-To run, first see [README.md](../README.md) for prerequisites. Then, run the following from this directory to run the `schedules/` sample:
+To run, first see [README.md](../README.md) for prerequisites. Then, run the following from the root directory to run the `schedules/` sample:

-    uv run run_worker.py
-    uv run start_schedule.py
+    uv run schedules/run_worker.py
+    uv run schedules/start_schedule.py

-Replace `start_schedule.py` in the command with any other example filename to run it instead.
+Replace `schedules/start_schedule.py` in the command with any other example filename to run it instead.

-    uv run backfill_schedule.py
-    uv run delete_schedule.py
-    uv run describe_schedule.py
-    uv run list_schedule.py
-    uv run pause_schedule.py
-    python run python trigger_schedule.py
-    uv run update_schedule.py
+    uv run schedules/backfill_schedule.py
+    uv run schedules/delete_schedule.py
+    uv run schedules/describe_schedule.py
+    uv run schedules/list_schedule.py
+    uv run schedules/pause_schedule.py
+    uv run schedules/trigger_schedule.py
+    uv run schedules/update_schedule.py

 - create: Creates a new Schedule. Newly created Schedules return a Schedule ID to be used in other Schedule commands.
 - backfill: Backfills the Schedule by going through the specified time periods as if they passed right now.
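All of these scripts drive the client's high-level Schedule API. A minimal create-side sketch; the IDs, task queue, and workflow name below are illustrative assumptions, not necessarily the sample's exact values:

```python
import asyncio
from datetime import timedelta

from temporalio.client import (
    Client,
    Schedule,
    ScheduleActionStartWorkflow,
    ScheduleIntervalSpec,
    ScheduleSpec,
)


async def main() -> None:
    client = await Client.connect("localhost:7233")
    await client.create_schedule(
        "workflow-schedule-id",
        Schedule(
            action=ScheduleActionStartWorkflow(
                "YourSchedulesWorkflow",  # workflow type name (assumed)
                id="schedules-workflow-id",
                task_queue="schedules-task-queue",
            ),
            spec=ScheduleSpec(
                # Fire every 2 minutes; ScheduleSpec also supports
                # calendar and cron-style specifications.
                intervals=[ScheduleIntervalSpec(every=timedelta(minutes=2))]
            ),
        ),
    )


if __name__ == "__main__":
    asyncio.run(main())
```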
diff --git a/sentry/README.md b/sentry/README.md
index 30667d5e..33a7b535 100644
--- a/sentry/README.md
+++ b/sentry/README.md
@@ -7,13 +7,13 @@ For this sample, the optional `sentry` dependency group must be included. To inc

     uv sync --group sentry

 To run, first see [README.md](../README.md) for prerequisites. Set the `SENTRY_DSN` environment variable to the Sentry DSN.
-Then, run the following from this directory to start the worker:
+Then, run the following from the root directory to start the worker:

-    uv run worker.py
+    uv run sentry/worker.py

 This will start the worker. Then, in another terminal, run the following to execute the workflow:

-    uv run starter.py
+    uv run sentry/starter.py

 The workflow should complete with the hello result. If you alter the workflow or the activity to raise an `ApplicationError` instead, it should appear in Sentry.
\ No newline at end of file
diff --git a/sleep_for_days/README.md b/sleep_for_days/README.md
index 766aefc9..69302cc7 100644
--- a/sleep_for_days/README.md
+++ b/sleep_for_days/README.md
@@ -4,15 +4,15 @@ This sample demonstrates how to create a Temporal workflow that runs forever, se

 To run, first see the main [README.md](../README.md) for prerequisites.

-Then create two terminals and `cd` to this directory.
+Then create two terminals.

 Run the worker in one terminal:

-    uv run worker.py
+    uv run sleep_for_days/worker.py

 And execute the workflow in the other terminal:

-    uv run starter.py
+    uv run sleep_for_days/starter.py

 This sample will run indefinitely until you send a signal to `complete`. See how to send a signal via Temporal CLI [here](https://docs.temporal.io/cli/workflow#signal).
diff --git a/trio_async/README.md b/trio_async/README.md
index 01891215..4c9770fb 100644
--- a/trio_async/README.md
+++ b/trio_async/README.md
@@ -9,14 +9,14 @@ For this sample, the optional `trio_async` dependency group must be included. To

     uv sync --group trio_async

-To run, first see [README.md](../README.md) for prerequisites. Then, run the following from this directory to start the
+To run, first see [README.md](../README.md) for prerequisites. Then, run the following from the root directory to start the
 worker:

-    uv run worker.py
+    uv run trio_async/worker.py

 This will start the worker. Then, in another terminal, run the following to execute the workflow:

-    uv run starter.py
+    uv run trio_async/starter.py

 The starter should complete with:
diff --git a/updatable_timer/README.md b/updatable_timer/README.md
index 04960265..3b738db8 100644
--- a/updatable_timer/README.md
+++ b/updatable_timer/README.md
@@ -11,7 +11,7 @@ The sample is composed of the three executables:

 First start the Worker:

 ```bash
-uv run worker.py
+uv run updatable_timer/worker.py
 ```

 Check the output of the Worker window. The expected output is:
@@ -22,7 +22,7 @@ Worker started, ctrl+c to exit

 Then in a different terminal window start the Workflow Execution:

 ```bash
-uv run starter.py
+uv run updatable_timer/starter.py
 ```
 Check the output of the Worker window. The expected output is:
 ```
 Workflow started: run_id=...
 ```

@@ -32,7 +32,7 @@
 Then run the updater as many times as you want to change the timer to 10 seconds from now:

 ```bash
-uv run wake_up_time_updater.py
+uv run updatable_timer/wake_up_time_updater.py
 ```

 Check the output of the worker window. 
The expected output is: diff --git a/uv.lock b/uv.lock index e1851c2d..e48faddb 100644 --- a/uv.lock +++ b/uv.lock @@ -1766,6 +1766,15 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/fa/cb/6c32f8fadefa4314b740fbe8f74f6a02423bd1549e7c930826df35ac3c1b/pandas-2.3.1-cp39-cp39-win_amd64.whl", hash = "sha256:b4b0de34dc8499c2db34000ef8baad684cfa4cbd836ecee05f323ebfba348c7d", size = 11357186, upload-time = "2025-07-07T19:20:01.475Z" }, ] +[[package]] +name = "pastel" +version = "0.2.1" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/76/f1/4594f5e0fcddb6953e5b8fe00da8c317b8b41b547e2b3ae2da7512943c62/pastel-0.2.1.tar.gz", hash = "sha256:e6581ac04e973cac858828c6202c1e1e81fee1dc7de7683f3e1ffe0bfd8a573d", size = 7555, upload-time = "2020-09-16T19:21:12.43Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/aa/18/a8444036c6dd65ba3624c63b734d3ba95ba63ace513078e1580590075d21/pastel-0.2.1-py2.py3-none-any.whl", hash = "sha256:4349225fcdf6c2bb34d483e523475de5bb04a5c10ef711263452cb37d7dd4364", size = 5955, upload-time = "2020-09-16T19:21:11.409Z" }, +] + [[package]] name = "pathspec" version = "0.12.1" @@ -1793,6 +1802,20 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/54/20/4d324d65cc6d9205fabedc306948156824eb9f0ee1633355a8f7ec5c66bf/pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746", size = 20538, upload-time = "2025-05-15T12:30:06.134Z" }, ] +[[package]] +name = "poethepoet" +version = "0.36.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "pastel" }, + { name = "pyyaml" }, + { name = "tomli", marker = "python_full_version < '3.11'" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/cf/ac/311c8a492dc887f0b7a54d0ec3324cb2f9538b7b78ea06e5f7ae1f167e52/poethepoet-0.36.0.tar.gz", hash = "sha256:2217b49cb4e4c64af0b42ff8c4814b17f02e107d38bc461542517348ede25663", size = 66854, upload-time = "2025-06-29T19:54:50.444Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/03/29/dedb3a6b7e17ea723143b834a2da428a7d743c80d5cd4d22ed28b5e8c441/poethepoet-0.36.0-py3-none-any.whl", hash = "sha256:693e3c1eae9f6731d3613c3c0c40f747d3c5c68a375beda42e590a63c5623308", size = 88031, upload-time = "2025-06-29T19:54:48.884Z" }, +] + [[package]] name = "propcache" version = "0.3.2" @@ -2786,6 +2809,7 @@ dev = [ { name = "frozenlist" }, { name = "isort" }, { name = "mypy" }, + { name = "poethepoet" }, { name = "pyright" }, { name = "pytest" }, { name = "pytest-asyncio" }, @@ -2851,6 +2875,7 @@ dev = [ { name = "frozenlist", specifier = ">=1.4.0,<2" }, { name = "isort", specifier = ">=5.10.1,<6" }, { name = "mypy", specifier = ">=1.4.1,<2" }, + { name = "poethepoet", specifier = ">=0.36.0" }, { name = "pyright", specifier = ">=1.1.394" }, { name = "pytest", specifier = ">=7.1.2,<8" }, { name = "pytest-asyncio", specifier = ">=0.18.3,<0.19" }, diff --git a/worker_specific_task_queues/README.md b/worker_specific_task_queues/README.md index 01f72890..6fb17e6e 100644 --- a/worker_specific_task_queues/README.md +++ b/worker_specific_task_queues/README.md @@ -21,14 +21,14 @@ Activities have been artificially slowed with `time.sleep(3)` to simulate doing ### Running This Sample -To run, first see [README.md](../README.md) for prerequisites. Then, run the following from this directory to start the +To run, first see [README.md](../README.md) for prerequisites. 
Then, run the following from the root directory to start the
 worker:

-    uv run worker.py
+    uv run worker_specific_task_queues/worker.py

 This will start the worker. Then, in another terminal, run the following to execute the workflow:

-    uv run starter.py
+    uv run worker_specific_task_queues/starter.py

 #### Example output:
diff --git a/worker_versioning/README.md b/worker_versioning/README.md
index 21af863f..2fd44bc4 100644
--- a/worker_versioning/README.md
+++ b/worker_versioning/README.md
@@ -3,9 +3,9 @@

 This sample shows you how you can use the [Worker Versioning](https://docs.temporal.io/workers#worker-versioning) feature to deploy incompatible changes to workflow code more easily.

-To run, first see [README.md](../README.md) for prerequisites. Then, run the following from this directory:
+To run, first see [README.md](../README.md) for prerequisites. Then, run the following from the root directory:

-    uv run example.py
+    uv run worker_versioning/example.py

 This will add some Build IDs to a Task Queue, and will also run Workers with those versions to show how you can add versions, mark them as compatible (or not) with one another, and run Workers at specific versions. You'll