diff --git a/README.md b/README.md index 309ec357..c5d4985b 100644 --- a/README.md +++ b/README.md @@ -14,91 +14,20 @@ Show support for the Conductor OSS. Please help spread the awareness by starrin [![GitHub stars](https://img.shields.io/github/stars/conductor-oss/conductor.svg?style=social&label=Star&maxAge=)](https://GitHub.com/conductor-oss/conductor/) -## Content - - - - -- [Install Conductor Python SDK](#install-conductor-python-sdk) - - [Get Conductor Python SDK](#get-conductor-python-sdk) -- [Hello World Application Using Conductor](#hello-world-application-using-conductor) - - [Step 1: Create Workflow](#step-1-create-workflow) - - [Creating Workflows by Code](#creating-workflows-by-code) - - [(Alternatively) Creating Workflows in JSON](#alternatively-creating-workflows-in-json) - - [Step 2: Write Task Worker](#step-2-write-task-worker) - - [Step 3: Write _Hello World_ Application](#step-3-write-_hello-world_-application) -- [Running Workflows on Conductor Standalone (Installed Locally)](#running-workflows-on-conductor-standalone-installed-locally) - - [Setup Environment Variable](#setup-environment-variable) - - [Start Conductor Server](#start-conductor-server) - - [Execute Hello World Application](#execute-hello-world-application) -- [Running Workflows on Orkes Conductor](#running-workflows-on-orkes-conductor) -- [Learn More about Conductor Python SDK](#learn-more-about-conductor-python-sdk) -- [Create and Run Conductor Workers](#create-and-run-conductor-workers) -- [Writing Workers](#writing-workers) - - [Implementing Workers](#implementing-workers) - - [Managing Workers in Application](#managing-workers-in-application) - - [Design Principles for Workers](#design-principles-for-workers) - - [System Task Workers](#system-task-workers) - - [Wait Task](#wait-task) - - [Using Code to Create Wait Task](#using-code-to-create-wait-task) - - [JSON Configuration](#json-configuration) - - [HTTP Task](#http-task) - - [Using Code to Create HTTP Task](#using-code-to-create-http-task) - - [JSON Configuration](#json-configuration-1) - - [Javascript Executor Task](#javascript-executor-task) - - [Using Code to Create Inline Task](#using-code-to-create-inline-task) - - [JSON Configuration](#json-configuration-2) - - [JSON Processing using JQ](#json-processing-using-jq) - - [Using Code to Create JSON JQ Transform Task](#using-code-to-create-json-jq-transform-task) - - [JSON Configuration](#json-configuration-3) - - [Worker vs. 
Microservice/HTTP Endpoints](#worker-vs-microservicehttp-endpoints) - - [Deploying Workers in Production](#deploying-workers-in-production) -- [Create Conductor Workflows](#create-conductor-workflows) - - [Conductor Workflows](#conductor-workflows) - - [Creating Workflows](#creating-workflows) - - [Execute Dynamic Workflows Using Code](#execute-dynamic-workflows-using-code) - - [Kitchen-Sink Workflow](#kitchen-sink-workflow) - - [Executing Workflows](#executing-workflows) - - [Execute Workflow Asynchronously](#execute-workflow-asynchronously) - - [Execute Workflow Synchronously](#execute-workflow-synchronously) - - [Managing Workflow Executions](#managing-workflow-executions) - - [Get Execution Status](#get-execution-status) - - [Update Workflow State Variables](#update-workflow-state-variables) - - [Terminate Running Workflows](#terminate-running-workflows) - - [Retry Failed Workflows](#retry-failed-workflows) - - [Restart Workflows](#restart-workflows) - - [Rerun Workflow from a Specific Task](#rerun-workflow-from-a-specific-task) - - [Pause Running Workflow](#pause-running-workflow) - - [Resume Paused Workflow](#resume-paused-workflow) - - [Searching for Workflows](#searching-for-workflows) - - [Handling Failures, Retries and Rate Limits](#handling-failures-retries-and-rate-limits) - - [Retries](#retries) - - [Rate Limits](#rate-limits) - - [Task Registration](#task-registration) - - [Update Task Definition:](#update-task-definition) -- [Using Conductor in Your Application](#using-conductor-in-your-application) - - [Adding Conductor SDK to Your Application](#adding-conductor-sdk-to-your-application) - - [Testing Workflows](#testing-workflows) - - [Example Unit Testing Application](#example-unit-testing-application) - - [Workflow Deployments Using CI/CD](#workflow-deployments-using-cicd) - - [Versioning Workflows](#versioning-workflows) -- [Development](#development) - - [Client Regeneration](#client-regeneration) - - [Sync Client Regeneration](#sync-client-regeneration) - - [Async Client Regeneration](#async-client-regeneration) - - - -## Install Conductor Python SDK - -Before installing Conductor Python SDK, it is a good practice to set up a dedicated virtual environment as follows: +## Conductor-OSS vs. Orkes Conductor -```shell -virtualenv conductor -source conductor/bin/activate -``` +Conductor-OSS is the open-source version of the Conductor orchestration platform, maintained by the community and available for self-hosting. It provides a robust, extensible framework for building and managing workflows, ideal for developers who want full control over their deployment and customization. + +Orkes Conductor, built on top of Conductor-OSS, is a fully-managed, cloud-hosted service provided by Orkes. It offers additional features such as a user-friendly UI, enterprise-grade security, scalability, and support, making it suitable for organizations seeking a turnkey solution without managing infrastructure. + +## Quick Start -### Get Conductor Python SDK +- [Installation](#installation) +- [Configuration](#configuration) +- [Hello World Example](#hello-world-example) +- [Documentation](#documentation) + +## Installation The SDK requires Python 3.9+. To install the SDK, use the following command: @@ -106,859 +35,155 @@ The SDK requires Python 3.9+. 
To install the SDK, use the following command: python3 -m pip install conductor-python ``` -## Hello World Application Using Conductor +For development setup, it's recommended to use a virtual environment: -In this section, we will create a simple "Hello World" application that executes a "greetings" workflow managed by Conductor. +```shell +virtualenv conductor +source conductor/bin/activate +python3 -m pip install conductor-python +``` -### Step 1: Create Workflow +## Configuration -#### Creating Workflows by Code +### Basic Configuration -Create [greetings_workflow.py](examples/helloworld/greetings_workflow.py) with the following: +The SDK connects to `http://localhost:8080/api` by default. For other configurations: ```python -from conductor.client.workflow.conductor_workflow import ConductorWorkflow -from conductor.client.workflow.executor.workflow_executor import WorkflowExecutor -from greetings_worker import greet +from conductor.client.configuration.configuration import Configuration -def greetings_workflow(workflow_executor: WorkflowExecutor) -> ConductorWorkflow: - name = 'greetings' - workflow = ConductorWorkflow(name=name, executor=workflow_executor) - workflow.version = 1 - workflow >> greet(task_ref_name='greet_ref', name=workflow.input('name')) - return workflow +# Default configuration (localhost:8080) +config = Configuration() +# Custom server URL +config = Configuration(server_api_url="https://your-conductor-server.com/api") +# With authentication (for Orkes Conductor) +from conductor.shared.configuration.settings.authentication_settings import AuthenticationSettings +config = Configuration( + server_api_url="https://your-cluster.orkesconductor.io/api", + authentication_settings=AuthenticationSettings( + key_id="your_key", + key_secret="your_secret" + ) +) ``` -#### (Alternatively) Creating Workflows in JSON - -Create `greetings_workflow.json` with the following: - -```json -{ - "name": "greetings", - "description": "Sample greetings workflow", - "version": 1, - "tasks": [ - { - "name": "greet", - "taskReferenceName": "greet_ref", - "type": "SIMPLE", - "inputParameters": { - "name": "${workflow.input.name}" - } - } - ], - "timeoutPolicy": "TIME_OUT_WF", - "timeoutSeconds": 60 -} -``` +### Environment Variables + +You can also configure using environment variables: -Workflows must be registered to the Conductor server. Use the API to register the greetings workflow from the JSON file above: ```shell -curl -X POST -H "Content-Type:application/json" \ -http://localhost:8080/api/metadata/workflow -d @greetings_workflow.json +export CONDUCTOR_SERVER_URL=https://your-conductor-server.com/api +export CONDUCTOR_AUTH_KEY=your_key +export CONDUCTOR_AUTH_SECRET=your_secret ``` -> [!note] -> To use the Conductor API, the Conductor server must be up and running (see [Running over Conductor standalone (installed locally)](#running-over-conductor-standalone-installed-locally)). -### Step 2: Write Task Worker +## Hello World Example -Using Python, a worker represents a function with the worker_task decorator. Create [greetings_worker.py](examples/helloworld/greetings_worker.py) file as illustrated below: +Create a simple "Hello World" application that executes a "greetings" workflow: -> [!note] -> A single workflow can have task workers written in different languages and deployed anywhere, making your workflow polyglot and distributed! +### 1. 
Create a Worker ```python from conductor.client.worker.worker_task import worker_task - @worker_task(task_definition_name='greet') def greet(name: str) -> str: return f'Hello {name}' - ``` -Now, we are ready to write our main application, which will execute our workflow. - -### Step 3: Write _Hello World_ Application -Let's add [helloworld.py](examples/helloworld/helloworld.py) with a `main` method: +### 2. Create a Workflow ```python -from conductor.client.automator.task_handler import TaskHandler -from conductor.client.configuration.configuration import Configuration from conductor.client.workflow.conductor_workflow import ConductorWorkflow from conductor.client.workflow.executor.workflow_executor import WorkflowExecutor -from greetings_workflow import greetings_workflow - +from greetings_worker import greet -def register_workflow(workflow_executor: WorkflowExecutor) -> ConductorWorkflow: - workflow = greetings_workflow(workflow_executor=workflow_executor) - workflow.register(True) +def greetings_workflow(workflow_executor: WorkflowExecutor) -> ConductorWorkflow: + name = 'greetings' + workflow = ConductorWorkflow(name=name, executor=workflow_executor) + workflow.version = 1 + workflow >> greet(task_ref_name='greet_ref', name=workflow.input('name')) return workflow - - -def main(): - # The app is connected to http://localhost:8080/api by default - api_config = Configuration() - - workflow_executor = WorkflowExecutor(configuration=api_config) - - # Registering the workflow (Required only when the app is executed the first time) - workflow = register_workflow(workflow_executor) - - # Starting the worker polling mechanism - task_handler = TaskHandler(configuration=api_config) - task_handler.start_processes() - - workflow_run = workflow_executor.execute(name=workflow.name, version=workflow.version, - workflow_input={'name': 'Orkes'}) - - print(f'\nworkflow result: {workflow_run.output["result"]}\n') - print(f'see the workflow execution here: {api_config.ui_host}/execution/{workflow_run.workflow_id}\n') - task_handler.stop_processes() - - -if __name__ == '__main__': - main() -``` -## Running Workflows on Conductor Standalone (Installed Locally) - -### Setup Environment Variable - -Set the following environment variable to point the SDK to the Conductor Server API endpoint: - -```shell -export CONDUCTOR_SERVER_URL=http://localhost:8080/api -``` -### Start Conductor Server - -To start the Conductor server in a standalone mode from a Docker image, type the command below: - -```shell -docker run --init -p 8080:8080 -p 5000:5000 conductoross/conductor-standalone:3.15.0 -``` -To ensure the server has started successfully, open Conductor UI on http://localhost:5000. - -### Execute Hello World Application - -To run the application, type the following command: - -``` -python helloworld.py -``` - -Now, the workflow is executed, and its execution status can be viewed from Conductor UI (http://localhost:5000). - -Navigate to the **Executions** tab to view the workflow execution. - -Screenshot 2024-03-18 at 12 30 07 - -## Running Workflows on Orkes Conductor - -For running the workflow in Orkes Conductor, - -- Update the Conductor server URL to your cluster name. - -```shell -export CONDUCTOR_SERVER_URL=https://[cluster-name].orkesconductor.io/api -``` - -- If you want to run the workflow on the Orkes Conductor Playground, set the Conductor Server variable as follows: - -```shell -export CONDUCTOR_SERVER_URL=https://play.orkes.io/api -``` - -- Orkes Conductor requires authentication. 
[Obtain the key and secret from the Conductor server](https://orkes.io/content/how-to-videos/access-key-and-secret) and set the following environment variables. - -```shell -export CONDUCTOR_AUTH_KEY=your_key -export CONDUCTOR_AUTH_SECRET=your_key_secret -``` - -Run the application and view the execution status from Conductor's UI Console. - -> [!NOTE] -> That's it - you just created and executed your first distributed Python app! - -## Learn More about Conductor Python SDK - -There are three main ways you can use Conductor when building durable, resilient, distributed applications. - -1. Write service workers that implement business logic to accomplish a specific goal - such as initiating payment transfer, getting user information from the database, etc. -2. Create Conductor workflows that implement application state - A typical workflow implements the saga pattern. -3. Use Conductor SDK and APIs to manage workflows from your application. - -## Create and Run Conductor Workers - -## Writing Workers - -A Workflow task represents a unit of business logic that achieves a specific goal, such as checking inventory, initiating payment transfer, etc. A worker implements a task in the workflow. - - -### Implementing Workers - -The workers can be implemented by writing a simple Python function and annotating the function with the `@worker_task`. Conductor workers are services (similar to microservices) that follow the [Single Responsibility Principle](https://en.wikipedia.org/wiki/Single_responsibility_principle). - -Workers can be hosted along with the workflow or run in a distributed environment where a single workflow uses workers deployed and running in different machines/VMs/containers. Whether to keep all the workers in the same application or run them as a distributed application is a design and architectural choice. Conductor is well suited for both kinds of scenarios. - -You can create or convert any existing Python function to a distributed worker by adding `@worker_task` annotation to it. Here is a simple worker that takes `name` as input and returns greetings: - -```python -from conductor.client.worker.worker_task import worker_task - -@worker_task(task_definition_name='greetings') -def greetings(name: str) -> str: - return f'Hello, {name}' -``` - -A worker can take inputs which are primitives - `str`, `int`, `float`, `bool` etc. or can be complex data classes. - -Here is an example worker that uses `dataclass` as part of the worker input. - -```python -from conductor.client.worker.worker_task import worker_task -from dataclasses import dataclass - -@dataclass -class OrderInfo: - order_id: int - sku: str - quantity: int - sku_price: float - - -@worker_task(task_definition_name='process_order') -def process_order(order_info: OrderInfo) -> str: - return f'order: {order_info.order_id}' - ``` -### Managing Workers in Application - -Workers use a polling mechanism (with a long poll) to check for any available tasks from the server periodically. The startup and shutdown of workers are handled by the `conductor.client.automator.task_handler.TaskHandler` class. +### 3. 
Run the Application ```python from conductor.client.automator.task_handler import TaskHandler from conductor.client.configuration.configuration import Configuration +from conductor.client.workflow.executor.workflow_executor import WorkflowExecutor +from greetings_workflow import greetings_workflow def main(): - # points to http://localhost:8080/api by default + # Connect to Conductor server api_config = Configuration() - - task_handler = TaskHandler( - workers=[], - configuration=api_config, - scan_for_annotated_workers=True, - import_modules=['greetings'] # import workers from this module - leave empty if all the workers are in the same module - ) + workflow_executor = WorkflowExecutor(configuration=api_config) - # start worker polling - task_handler.start_processes() - - # Call to stop the workers when the application is ready to shutdown - task_handler.stop_processes() - - -if __name__ == '__main__': - main() - -``` - -### Design Principles for Workers - -Each worker embodies the design pattern and follows certain basic principles: - -1. Workers are stateless and do not implement a workflow-specific logic. -2. Each worker executes a particular task and produces well-defined output given specific inputs. -3. Workers are meant to be idempotent (Should handle cases where the partially executed task, due to timeouts, etc, gets rescheduled). -4. Workers do not implement the logic to handle retries, etc., that is taken care of by the Conductor server. - -#### System Task Workers - -A system task worker is a pre-built, general-purpose worker in your Conductor server distribution. - -System tasks automate repeated tasks such as calling an HTTP endpoint, executing lightweight ECMA-compliant javascript code, publishing to an event broker, etc. - -#### Wait Task - -> [!tip] -> Wait is a powerful way to have your system wait for a specific trigger, such as an external event, a particular date/time, or duration, such as 2 hours, without having to manage threads, background processes, or jobs. - -##### Using Code to Create Wait Task - -```python -from conductor.client.workflow.task.wait_task import WaitTask - -# waits for 2 seconds before scheduling the next task -wait_for_two_sec = WaitTask(task_ref_name='wait_for_2_sec', wait_for_seconds=2) - -# wait until end of jan -wait_till_jan = WaitTask(task_ref_name='wait_till_jsn', wait_until='2024-01-31 00:00 UTC') - -# waits until an API call or an event is triggered -wait_for_signal = WaitTask(task_ref_name='wait_till_jan_end') - -``` -##### JSON Configuration - -```json -{ - "name": "wait", - "taskReferenceName": "wait_till_jan_end", - "type": "WAIT", - "inputParameters": { - "until": "2024-01-31 00:00 UTC" - } -} -``` -#### HTTP Task - -Make a request to an HTTP(S) endpoint. The task allows for GET, PUT, POST, DELETE, HEAD, and PATCH requests. - -##### Using Code to Create HTTP Task - -```python -from conductor.client.workflow.task.http_task import HttpTask - -HttpTask(task_ref_name='call_remote_api', http_input={ - 'uri': 'https://orkes-api-tester.orkesconductor.com/api' - }) -``` - -##### JSON Configuration - -```json -{ - "name": "http_task", - "taskReferenceName": "http_task_ref", - "type" : "HTTP", - "uri": "https://orkes-api-tester.orkesconductor.com/api", - "method": "GET" -} -``` - -#### Javascript Executor Task - -Execute ECMA-compliant Javascript code. It is useful when writing a script for data mapping, calculations, etc. 
- -##### Using Code to Create Inline Task - -```python -from conductor.client.workflow.task.javascript_task import JavascriptTask - -say_hello_js = """ -function greetings() { - return { - "text": "hello " + $.name - } -} -greetings(); -""" - -js = JavascriptTask(task_ref_name='hello_script', script=say_hello_js, bindings={'name': '${workflow.input.name}'}) -``` -##### JSON Configuration - -```json -{ - "name": "inline_task", - "taskReferenceName": "inline_task_ref", - "type": "INLINE", - "inputParameters": { - "expression": " function greetings() {\n return {\n \"text\": \"hello \" + $.name\n }\n }\n greetings();", - "evaluatorType": "graaljs", - "name": "${workflow.input.name}" - } -} -``` - -#### JSON Processing using JQ - -[Jq](https://jqlang.github.io/jq/) is like sed for JSON data - you can slice, filter, map, and transform structured data with the same ease that sed, awk, grep, and friends let you play with text. - -##### Using Code to Create JSON JQ Transform Task - -```python -from conductor.client.workflow.task.json_jq_task import JsonJQTask - -jq_script = """ -{ key3: (.key1.value1 + .key2.value2) } -""" - -jq = JsonJQTask(task_ref_name='jq_process', script=jq_script) -``` -##### JSON Configuration - -```json -{ - "name": "json_transform_task", - "taskReferenceName": "json_transform_task_ref", - "type": "JSON_JQ_TRANSFORM", - "inputParameters": { - "key1": "k1", - "key2": "k2", - "queryExpression": "{ key3: (.key1.value1 + .key2.value2) }", - } -} -``` - -### Worker vs. Microservice/HTTP Endpoints - -> [!tip] -> Workers are a lightweight alternative to exposing an HTTP endpoint and orchestrating using HTTP tasks. Using workers is a recommended approach if you do not need to expose the service over HTTP or gRPC endpoints. - -There are several advantages to this approach: - -1. **No need for an API management layer** : Given there are no exposed endpoints and workers are self-load-balancing. -2. **Reduced infrastructure footprint** : No need for an API gateway/load balancer. -3. All the communication is initiated by workers using polling - avoiding the need to open up any incoming TCP ports. -4. Workers **self-regulate** when busy; they only poll as much as they can handle. Backpressure handling is done out of the box. -5. Workers can be scaled up/down quickly based on the demand by increasing the number of processes. - -### Deploying Workers in Production - -Conductor workers can run in the cloud-native environment or on-prem and can easily be deployed like any other Python application. Workers can run a containerized environment, VMs, or bare metal like you would deploy your other Python applications. - -## Create Conductor Workflows - -### Conductor Workflows - -Workflow can be defined as the collection of tasks and operators that specify the order and execution of the defined tasks. This orchestration occurs in a hybrid ecosystem that encircles serverless functions, microservices, and monolithic applications. - -This section will dive deeper into creating and executing Conductor workflows using Python SDK. - - -### Creating Workflows - -Conductor lets you create the workflows using either Python or JSON as the configuration. - -Using Python as code to define and execute workflows lets you build extremely powerful, dynamic workflows and run them on Conductor. - -When the workflows are relatively static, they can be designed using the Orkes UI (available when using Orkes Conductor) and APIs or SDKs to register and run the workflows. 
- -Both the code and configuration approaches are equally powerful and similar in nature to how you treat Infrastructure as Code. - -#### Execute Dynamic Workflows Using Code - -For cases where the workflows cannot be created statically ahead of time, Conductor is a powerful dynamic workflow execution platform that lets you create very complex workflows in code and execute them. It is useful when the workflow is unique for each execution. - -```python -from conductor.client.automator.task_handler import TaskHandler -from conductor.client.configuration.configuration import Configuration -from conductor.client.orkes_clients import OrkesClients -from conductor.client.worker.worker_task import worker_task -from conductor.client.workflow.conductor_workflow import ConductorWorkflow - -#@worker_task annotation denotes that this is a worker -@worker_task(task_definition_name='get_user_email') -def get_user_email(userid: str) -> str: - return f'{userid}@example.com' - -#@worker_task annotation denotes that this is a worker -@worker_task(task_definition_name='send_email') -def send_email(email: str, subject: str, body: str): - print(f'sending email to {email} with subject {subject} and body {body}') - - -def main(): - - # defaults to reading the configuration using following env variables - # CONDUCTOR_SERVER_URL : conductor server e.g. https://play.orkes.io/api - # CONDUCTOR_AUTH_KEY : API Authentication Key - # CONDUCTOR_AUTH_SECRET: API Auth Secret - api_config = Configuration() - - task_handler = TaskHandler(configuration=api_config) - #Start Polling + # Register and create workflow + workflow = greetings_workflow(workflow_executor) + workflow.register(True) + + # Start workers + task_handler = TaskHandler(configuration=api_config) task_handler.start_processes() - - clients = OrkesClients(configuration=api_config) - workflow_executor = clients.get_workflow_executor() - workflow = ConductorWorkflow(name='dynamic_workflow', version=1, executor=workflow_executor) - get_email = get_user_email(task_ref_name='get_user_email_ref', userid=workflow.input('userid')) - sendmail = send_email(task_ref_name='send_email_ref', email=get_email.output('result'), subject='Hello from Orkes', - body='Test Email') - #Order of task execution - workflow >> get_email >> sendmail - - # Configure the output of the workflow - workflow.output_parameters(output_parameters={ - 'email': get_email.output('result') - }) - #Run the workflow - result = workflow.execute(workflow_input={'userid': 'user_a'}) - print(f'\nworkflow output: {result.output}\n') - #Stop Polling + + # Execute workflow + workflow_run = workflow_executor.execute( + name=workflow.name, + version=workflow.version, + workflow_input={'name': 'Orkes'} + ) + + print(f'Workflow result: {workflow_run.output["result"]}') task_handler.stop_processes() - if __name__ == '__main__': main() - ``` -```shell ->> python3 dynamic_workflow.py +### 4. Start Conductor Server -2024-02-03 19:54:35,700 [32853] conductor.client.automator.task_handler INFO created worker with name=get_user_email and domain=None -2024-02-03 19:54:35,781 [32853] conductor.client.automator.task_handler INFO created worker with name=send_email and domain=None -2024-02-03 19:54:35,859 [32853] conductor.client.automator.task_handler INFO TaskHandler initialized -2024-02-03 19:54:35,859 [32853] conductor.client.automator.task_handler INFO Starting worker processes... 
-2024-02-03 19:54:35,861 [32853] conductor.client.automator.task_runner INFO Polling task get_user_email with domain None with polling interval 0.1 -2024-02-03 19:54:35,861 [32853] conductor.client.automator.task_handler INFO Started 2 TaskRunner process -2024-02-03 19:54:35,862 [32853] conductor.client.automator.task_handler INFO Started all processes -2024-02-03 19:54:35,862 [32853] conductor.client.automator.task_runner INFO Polling task send_email with domain None with polling interval 0.1 -sending email to user_a@example.com with subject Hello from Orkes and body Test Email - -workflow output: {'email': 'user_a@example.com'} - -2024-02-03 19:54:36,309 [32853] conductor.client.automator.task_handler INFO Stopped worker processes... -``` -See [dynamic_workflow.py](examples/dynamic_workflow.py) for a fully functional example. - -#### Kitchen-Sink Workflow - -For a more complex workflow example with all the supported features, see [kitchensink.py](examples/kitchensink.py). - -### Executing Workflows - -The [WorkflowClient](src/conductor/client/workflow_client.py) interface provides all the APIs required to work with workflow executions. - -```python -from conductor.client.configuration.configuration import Configuration -from conductor.client.orkes_clients import OrkesClients - -api_config = Configuration() -clients = OrkesClients(configuration=api_config) -workflow_client = clients.get_workflow_client() -``` -#### Execute Workflow Asynchronously - -Useful when workflows are long-running. - -```python -from conductor.client.http.models import StartWorkflowRequest - -request = StartWorkflowRequest() -request.name = 'hello' -request.version = 1 -request.input = {'name': 'Orkes'} -# workflow id is the unique execution id associated with this execution -workflow_id = workflow_client.start_workflow(request) -``` -#### Execute Workflow Synchronously - -Applicable when workflows complete very quickly - usually under 20-30 seconds. - -```python -from conductor.client.http.models import StartWorkflowRequest - -request = StartWorkflowRequest() -request.name = 'hello' -request.version = 1 -request.input = {'name': 'Orkes'} - -workflow_run = workflow_client.execute_workflow( - start_workflow_request=request, - wait_for_seconds=12) -``` - - -### Managing Workflow Executions -> [!note] -> See [workflow_ops.py](examples/workflow_ops.py) for a fully working application that demonstrates working with the workflow executions and sending signals to the workflow to manage its state. - -Workflows represent the application state. With Conductor, you can query the workflow execution state anytime during its lifecycle. You can also send signals to the workflow that determines the outcome of the workflow state. - -[WorkflowClient](src/conductor/client/workflow_client.py) is the client interface used to manage workflow executions. - -```python -from conductor.client.configuration.configuration import Configuration -from conductor.client.orkes_clients import OrkesClients - -api_config = Configuration() -clients = OrkesClients(configuration=api_config) -workflow_client = clients.get_workflow_client() -``` - -### Get Execution Status - -The following method lets you query the status of the workflow execution given the id. When the `include_tasks` is set, the response also includes all the completed and in-progress tasks. 
- -```python -get_workflow(workflow_id: str, include_tasks: Optional[bool] = True) -> Workflow -``` - -### Update Workflow State Variables - -Variables inside a workflow are the equivalent of global variables in a program. - -```python -update_variables(self, workflow_id: str, variables: Dict[str, object] = {}) -``` - -### Terminate Running Workflows - -Used to terminate a running workflow. Any pending tasks are canceled, and no further work is scheduled for this workflow upon termination. A failure workflow will be triggered but can be avoided if `trigger_failure_workflow` is set to False. - -```python -terminate_workflow(self, workflow_id: str, reason: Optional[str] = None, trigger_failure_workflow: bool = False) -``` - -### Retry Failed Workflows - -If the workflow has failed due to one of the task failures after exhausting the retries for the task, the workflow can still be resumed by calling the retry. - -```python -retry_workflow(self, workflow_id: str, resume_subworkflow_tasks: Optional[bool] = False) -``` - -When a sub-workflow inside a workflow has failed, there are two options: - -1. Re-trigger the sub-workflow from the start (Default behavior). -2. Resume the sub-workflow from the failed task (set `resume_subworkflow_tasks` to True). - -### Restart Workflows - -A workflow in the terminal state (COMPLETED, TERMINATED, FAILED) can be restarted from the beginning. Useful when retrying from the last failed task is insufficient, and the whole workflow must be started again. - -```python -restart_workflow(self, workflow_id: str, use_latest_def: Optional[bool] = False) -``` - -### Rerun Workflow from a Specific Task - -In the cases where a workflow needs to be restarted from a specific task rather than from the beginning, rerun provides that option. When issuing the rerun command to the workflow, you can specify the task ID from where the workflow should be restarted (as opposed to from the beginning), and optionally, the workflow's input can also be changed. - -```python -rerun_workflow(self, workflow_id: str, rerun_workflow_request: RerunWorkflowRequest) -``` - -> [!tip] -> Rerun is one of the most powerful features Conductor has, giving you unparalleled control over the workflow restart. -> - -### Pause Running Workflow - -A running workflow can be put to a PAUSED status. A paused workflow lets the currently running tasks complete but does not schedule any new tasks until resumed. - -```python -pause_workflow(self, workflow_id: str) -``` - -### Resume Paused Workflow - -Resume operation resumes the currently paused workflow, immediately evaluating its state and scheduling the next set of tasks. - -```python -resume_workflow(self, workflow_id: str) -``` - -### Searching for Workflows - -Workflow executions are retained until removed from the Conductor. This gives complete visibility into all the executions an application has - regardless of the number of executions. Conductor has a powerful search API that allows you to search for workflow executions. - -```python -search(self, start, size, free_text: str = '*', query: str = None) -> ScrollableSearchResultWorkflowSummary -``` - -* **free_text**: Free text search to look for specific words in the workflow and task input/output. -* **query** SQL-like query to search against specific fields in the workflow. - -Here are the supported fields for **query**: - -| Field | Description | -|-------------|-----------------| -| status |The status of the workflow. | -| correlationId |The ID to correlate the workflow execution to other executions. 
| -| workflowType |The name of the workflow. | - | version |The version of the workflow. | -|startTime|The start time of the workflow is in milliseconds.| - - -### Handling Failures, Retries and Rate Limits - -Conductor lets you embrace failures rather than worry about the complexities introduced in the system to handle failures. - -All the aspects of handling failures, retries, rate limits, etc., are driven by the configuration that can be updated in real time without re-deploying your application. - -#### Retries - -Each task in the Conductor workflow can be configured to handle failures with retries, along with the retry policy (linear, fixed, exponential backoff) and maximum number of retry attempts allowed. - -See [Error Handling](https://orkes.io/content/error-handling) for more details. - -#### Rate Limits - -What happens when a task is operating on a critical resource that can only handle a few requests at a time? Tasks can be configured to have a fixed concurrency (X request at a time) or a rate (Y tasks/time window). - - -#### Task Registration - -```python -from conductor.client.configuration.configuration import Configuration -from conductor.client.http.models import TaskDef -from conductor.client.orkes_clients import OrkesClients - - -def main(): - api_config = Configuration() - clients = OrkesClients(configuration=api_config) - metadata_client = clients.get_metadata_client() - - task_def = TaskDef() - task_def.name = 'task_with_retries' - task_def.retry_count = 3 - task_def.retry_logic = 'LINEAR_BACKOFF' - task_def.retry_delay_seconds = 1 - - # only allow 3 tasks at a time to be in the IN_PROGRESS status - task_def.concurrent_exec_limit = 3 - - # timeout the task if not polled within 60 seconds of scheduling - task_def.poll_timeout_seconds = 60 - - # timeout the task if the task does not COMPLETE in 2 minutes - task_def.timeout_seconds = 120 - - # for the long running tasks, timeout if the task does not get updated in COMPLETED or IN_PROGRESS status in - # 60 seconds after the last update - task_def.response_timeout_seconds = 60 - - # only allow 100 executions in a 10-second window! -- Note, this is complementary to concurrent_exec_limit - task_def.rate_limit_per_frequency = 100 - task_def.rate_limit_frequency_in_seconds = 10 - - metadata_client.register_task_def(task_def=task_def) -``` - - -```json -{ - "name": "task_with_retries", - - "retryCount": 3, - "retryLogic": "LINEAR_BACKOFF", - "retryDelaySeconds": 1, - "backoffScaleFactor": 1, - - "timeoutSeconds": 120, - "responseTimeoutSeconds": 60, - "pollTimeoutSeconds": 60, - "timeoutPolicy": "TIME_OUT_WF", - - "concurrentExecLimit": 3, - - "rateLimitPerFrequency": 0, - "rateLimitFrequencyInSeconds": 1 -} -``` - -#### Update Task Definition: +For local development, start Conductor using Docker: ```shell -POST /api/metadata/taskdef -d @task_def.json -``` - -See [task_configure.py](examples/task_configure.py) for a detailed working app. - -## Using Conductor in Your Application - -Conductor SDKs are lightweight and can easily be added to your existing or new Python app. This section will dive deeper into integrating Conductor in your application. - -### Adding Conductor SDK to Your Application - -Conductor Python SDKs are published on PyPi @ https://pypi.org/project/conductor-python/: - -```shell -pip3 install conductor-python -``` - -### Testing Workflows - -Conductor SDK for Python provides a complete feature testing framework for your workflow-based applications. 
The framework works well with any testing framework you prefer without imposing any specific framework. - -The Conductor server provides a test endpoint `POST /api/workflow/test` that allows you to post a workflow along with the test execution data to evaluate the workflow. - -The goal of the test framework is as follows: - -1. Ability to test the various branches of the workflow. -2. Confirm the workflow execution and tasks given a fixed set of inputs and outputs. -3. Validate that the workflow completes or fails given specific inputs. - -Here are example assertions from the test: - -```python - -... -test_request = WorkflowTestRequest(name=wf.name, version=wf.version, - task_ref_to_mock_output=task_ref_to_mock_output, - workflow_def=wf.to_workflow_def()) -run = workflow_client.test_workflow(test_request=test_request) - -print(f'completed the test run') -print(f'status: {run.status}') -self.assertEqual(run.status, 'COMPLETED') - -... - +docker run --init -p 8080:8080 -p 5000:5000 conductoross/conductor-standalone:3.15.0 ``` -> [!note] -> Workflow workers are your regular Python functions and can be tested with any available testing framework. - -#### Example Unit Testing Application - -See [test_workflows.py](examples/test_workflows.py) for a fully functional example of how to test a moderately complex workflow with branches. - -### Workflow Deployments Using CI/CD - -> [!tip] -> Treat your workflow definitions just like your code. Suppose you are defining the workflows using UI. In that case, we recommend checking the JSON configuration into the version control and using your development workflow for CI/CD to promote the workflow definitions across various environments such as Dev, Test, and Prod. - -Here is a recommended approach when defining workflows using JSON: - -* Treat your workflow metadata as code. -* Check in the workflow and task definitions along with the application code. -* Use `POST /api/metadata/*` endpoints or MetadataClient (`from conductor.client.metadata_client import MetadataClient`) to register/update workflows as part of the deployment process. -* Version your workflows. If there is a significant change, change the version field of the workflow. See versioning workflows below for more details. - - -### Versioning Workflows - -A powerful feature of Conductor is the ability to version workflows. You should increment the version of the workflow when there is a significant change to the definition. You can run multiple versions of the workflow at the same time. When starting a new workflow execution, use the `version` field to specify which version to use. When omitted, the latest (highest-numbered) version is used. - -* Versioning allows safely testing changes by doing canary testing in production or A/B testing across multiple versions before rolling out. -* A version can also be deleted, effectively allowing for "rollback" if required. - - -## Development +View the workflow execution in the Conductor UI at http://localhost:5000. -### Client Regeneration +## Documentation -When updating to a new Orkes version, you may need to regenerate the client code to support new APIs and features. 
The SDK provides comprehensive guides for regenerating both sync and async clients: +For detailed information on specific topics, see the following documentation: -#### Sync Client Regeneration +### Core Concepts +- **[Workers](docs/worker/README.md)** - Creating and managing Conductor workers +- **[Workflows](docs/workflow/README.md)** - Building and executing Conductor workflows +- **[Configuration](docs/configuration/)** - Advanced configuration options + - [SSL/TLS Configuration](docs/configuration/ssl-tls.md) - Secure connections and certificates + - [Proxy Configuration](docs/configuration/proxy.md) - Network proxy setup -For the synchronous client (`conductor.client`), see the [Client Regeneration Guide](src/conductor/client/CLIENT_REGENERATION_GUIDE.md) which covers: +### Development & Testing +- **[Testing](docs/testing/README.md)** - Testing workflows and workers +- **[Development](docs/development/README.md)** - Development setup and client regeneration +- **[Examples](docs/examples/)** - Complete working examples -- Creating swagger.json files for new Orkes versions -- Generating client code using Swagger Codegen -- Replacing models and API clients in the codegen folder -- Creating adapters and updating the proxy package -- Running backward compatibility, serialization, and integration tests +### Production & Deployment +- **[Production](docs/production/)** - Production deployment guidelines +- **[Metadata](docs/metadata/README.md)** - Workflow and task metadata management +- **[Authorization](docs/authorization/README.md)** - Authentication and authorization +- **[Secrets](docs/secret/README.md)** - Secret management +- **[Scheduling](docs/schedule/README.md)** - Workflow scheduling -#### Async Client Regeneration +### Advanced Topics +- **[Advanced](docs/advanced/)** - Advanced features and patterns -For the asynchronous client (`conductor.asyncio_client`), see the [Async Client Regeneration Guide](src/conductor/asyncio_client/ASYNC_CLIENT_REGENERATION_GUIDE.md) which covers: +## Examples -- Creating swagger.json files for new Orkes versions -- Generating async client code using OpenAPI Generator -- Replacing models and API clients in the http folder -- Creating adapters for backward compatibility -- Running async-specific tests and handling breaking changes +Check out the [examples directory](examples/) for complete working examples: -Both guides include detailed troubleshooting sections, best practices, and step-by-step instructions to ensure a smooth regeneration process while maintaining backward compatibility. +- [Hello World](examples/helloworld/) - Basic workflow example +- [Dynamic Workflow](examples/dynamic_workflow.py) - Dynamic workflow creation +- [Kitchen Sink](examples/kitchensink.py) - Comprehensive workflow features +- [Async Examples](examples/async/) - Asynchronous client examples diff --git a/docs/configuration/README.md b/docs/configuration/README.md new file mode 100644 index 00000000..d10af77e --- /dev/null +++ b/docs/configuration/README.md @@ -0,0 +1,99 @@ +# Configuration + +This section covers various configuration options for the Conductor Python SDK. + +## Table of Contents + +- [Basic Configuration](../../README.md#configuration) - Basic configuration setup +- [SSL/TLS Configuration](ssl-tls.md) - Secure connections and certificates +- [Proxy Configuration](proxy.md) - Network proxy setup + +## Overview + +The Conductor Python SDK provides flexible configuration options to work with different environments and security requirements. 
Configuration can be done through: + +1. **Code Configuration** - Direct configuration in your application code +2. **Environment Variables** - Configuration through environment variables +3. **Configuration Files** - External configuration files (future enhancement) + +## Quick Start + +```python +from conductor.client.configuration.configuration import Configuration + +# Basic configuration +config = Configuration() + +# Custom server URL +config = Configuration(server_api_url="https://your-server.com/api") + +# With authentication +from conductor.shared.configuration.settings.authentication_settings import AuthenticationSettings +config = Configuration( + server_api_url="https://your-server.com/api", + authentication_settings=AuthenticationSettings( + key_id="your_key", + key_secret="your_secret" + ) +) +``` + +## Environment Variables + +| Variable | Description | Default | +|----------|-------------|---------| +| `CONDUCTOR_SERVER_URL` | Conductor server API URL | `http://localhost:8080/api` | +| `CONDUCTOR_AUTH_KEY` | Authentication key | None | +| `CONDUCTOR_AUTH_SECRET` | Authentication secret | None | +| `CONDUCTOR_PROXY` | Proxy URL | None | +| `CONDUCTOR_PROXY_HEADERS` | Proxy headers (JSON) | None | +| `CONDUCTOR_SSL_CA_CERT` | CA certificate path | None | +| `CONDUCTOR_CERT_FILE` | Client certificate path | None | +| `CONDUCTOR_KEY_FILE` | Client private key path | None | + +## Configuration Examples + +### Local Development + +```python +config = Configuration() # Uses http://localhost:8080/api +``` + +### Production with Authentication + +```python +config = Configuration( + server_api_url="https://your-cluster.orkesconductor.io/api", + authentication_settings=AuthenticationSettings( + key_id="your_key", + key_secret="your_secret" + ) +) +``` + +### With Proxy + +```python +config = Configuration( + server_api_url="https://your-server.com/api", + proxy="http://proxy.company.com:8080" +) +``` + +### With SSL/TLS + +```python +config = Configuration( + server_api_url="https://your-server.com/api", + ssl_ca_cert="/path/to/ca-cert.pem", + cert_file="/path/to/client-cert.pem", + key_file="/path/to/client-key.pem" +) +``` + +## Advanced Configuration + +For more detailed configuration options, see: + +- [SSL/TLS Configuration](ssl-tls.md) - Complete SSL/TLS setup guide +- [Proxy Configuration](proxy.md) - Network proxy configuration guide diff --git a/docs/configuration/proxy.md b/docs/configuration/proxy.md new file mode 100644 index 00000000..357ed3fd --- /dev/null +++ b/docs/configuration/proxy.md @@ -0,0 +1,288 @@ +# Proxy Configuration + +The Conductor Python SDK supports proxy configuration for both synchronous and asynchronous clients. This is useful when your application needs to route traffic through corporate firewalls, load balancers, or other network intermediaries. 
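+
+As a minimal end-to-end sketch (the server URL and proxy host below are placeholders), routing all SDK traffic through a corporate HTTP proxy takes a single constructor parameter; the sections that follow cover authentication, SOCKS, environment variables, and async clients:
+
+```python
+from conductor.client.configuration.configuration import Configuration
+
+# Route all Conductor API calls through the corporate proxy
+config = Configuration(
+    server_api_url="https://your-conductor-server.com/api",
+    proxy="http://proxy.company.com:8080"
+)
+```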
+ +## Table of Contents + +- [Supported Proxy Types](#supported-proxy-types) +- [Client Proxy Configuration](#client-proxy-configuration) +- [Environment Variable Configuration](#environment-variable-configuration) +- [Advanced Proxy Configuration](#advanced-proxy-configuration) +- [Troubleshooting](#troubleshooting) + +## Supported Proxy Types + +- **HTTP Proxy**: `http://proxy.example.com:8080` +- **HTTPS Proxy**: `https://proxy.example.com:8443` +- **SOCKS4 Proxy**: `socks4://proxy.example.com:1080` +- **SOCKS5 Proxy**: `socks5://proxy.example.com:1080` +- **Proxy with Authentication**: `http://username:password@proxy.example.com:8080` + +> [!NOTE] +> For SOCKS proxy support, install the additional dependency: `pip install httpx[socks]` + +## Client Proxy Configuration + +### Basic HTTP Proxy Configuration + +```python +from conductor.client.configuration.configuration import Configuration +from conductor.shared.configuration.settings.authentication_settings import AuthenticationSettings + +# Basic HTTP proxy configuration +config = Configuration( + server_api_url="https://api.orkes.io/api", + authentication_settings=AuthenticationSettings( + key_id="your_key_id", + key_secret="your_key_secret" + ), + proxy="http://proxy.company.com:8080" +) +``` + +### HTTPS Proxy with Authentication Headers + +```python +# HTTPS proxy with authentication headers +config = Configuration( + server_api_url="https://api.orkes.io/api", + authentication_settings=AuthenticationSettings( + key_id="your_key_id", + key_secret="your_key_secret" + ), + proxy="https://secure-proxy.company.com:8443", + proxy_headers={ + "Proxy-Authorization": "Basic dXNlcm5hbWU6cGFzc3dvcmQ=", + "X-Proxy-Client": "conductor-python-sdk" + } +) +``` + +### SOCKS Proxy Configuration + +```python +# SOCKS5 proxy configuration +config = Configuration( + server_api_url="https://api.orkes.io/api", + proxy="socks5://proxy.company.com:1080" +) + +# SOCKS5 proxy with authentication +config = Configuration( + server_api_url="https://api.orkes.io/api", + proxy="socks5://username:password@proxy.company.com:1080" +) +``` + +## Environment Variable Configuration + +You can configure proxy settings using Conductor-specific environment variables: + +```shell +# Basic proxy configuration +export CONDUCTOR_PROXY=http://proxy.company.com:8080 + +# Proxy with authentication headers (JSON format) +export CONDUCTOR_PROXY_HEADERS='{"Proxy-Authorization": "Basic dXNlcm5hbWU6cGFzc3dvcmQ=", "X-Proxy-Client": "conductor-python-sdk"}' + +# Or single header value +export CONDUCTOR_PROXY_HEADERS="Basic dXNlcm5hbWU6cGFzc3dvcmQ=" +``` + +**Priority Order:** +1. Explicit proxy parameters in Configuration constructor +2. 
`CONDUCTOR_PROXY` and `CONDUCTOR_PROXY_HEADERS` environment variables + +### Example Usage with Environment Variables + +```python +# Set environment variables +import os +os.environ['CONDUCTOR_PROXY'] = 'http://proxy.company.com:8080' +os.environ['CONDUCTOR_PROXY_HEADERS'] = '{"Proxy-Authorization": "Basic dXNlcm5hbWU6cGFzc3dvcmQ="}' + +# Configuration will automatically use proxy from environment +from conductor.client.configuration.configuration import Configuration +config = Configuration(server_api_url="https://api.orkes.io/api") +# Proxy is automatically configured from CONDUCTOR_PROXY environment variable +``` + +## Advanced Proxy Configuration + +### Custom HTTP Client with Proxy + +```python +import httpx +from conductor.client.configuration.configuration import Configuration + +# Create custom HTTP client with proxy +custom_client = httpx.Client( + proxies={ + "http://": "http://proxy.company.com:8080", + "https://": "http://proxy.company.com:8080" + }, + timeout=httpx.Timeout(120.0), + follow_redirects=True, + limits=httpx.Limits(max_keepalive_connections=20, max_connections=100), +) + +config = Configuration( + server_api_url="https://api.orkes.io/api", + http_connection=custom_client +) +``` + +### Proxy with Custom Headers + +```python +import httpx +from conductor.client.configuration.configuration import Configuration + +# Create custom HTTP client with proxy and headers +custom_client = httpx.Client( + proxies={ + "http://": "http://proxy.company.com:8080", + "https://": "http://proxy.company.com:8080" + }, + headers={ + "Proxy-Authorization": "Basic dXNlcm5hbWU6cGFzc3dvcmQ=", + "X-Proxy-Client": "conductor-python-sdk", + "User-Agent": "Conductor-Python-SDK/1.0" + } +) + +config = Configuration( + server_api_url="https://api.orkes.io/api", + http_connection=custom_client +) +``` + +### SOCKS Proxy with Authentication + +```python +import httpx +from conductor.client.configuration.configuration import Configuration + +# SOCKS5 proxy with authentication +custom_client = httpx.Client( + proxies={ + "http://": "socks5://username:password@proxy.company.com:1080", + "https://": "socks5://username:password@proxy.company.com:1080" + } +) + +config = Configuration( + server_api_url="https://api.orkes.io/api", + http_connection=custom_client +) +``` + +### Async Client Proxy Configuration + +```python +import asyncio +import httpx +from conductor.asyncio_client.configuration import Configuration +from conductor.asyncio_client.adapters import ApiClient + +async def main(): + # Create async HTTP client with proxy + async_client = httpx.AsyncClient( + proxies={ + "http://": "http://proxy.company.com:8080", + "https://": "http://proxy.company.com:8080" + } + ) + + config = Configuration( + server_url="https://api.orkes.io/api", + http_connection=async_client + ) + + async with ApiClient(config) as api_client: + # Use the client with proxy configuration + pass + +asyncio.run(main()) +``` + +## Troubleshooting + +### Common Proxy Issues + +1. **Connection refused** + - Check if the proxy server is running + - Verify the proxy URL and port + - Check firewall settings + +2. **Authentication failed** + - Verify username and password + - Check if the proxy requires specific authentication method + - Ensure credentials are properly encoded + +3. **SOCKS proxy not working** + - Install httpx with SOCKS support: `pip install httpx[socks]` + - Check if the SOCKS proxy server is accessible + - Verify SOCKS version (4 or 5) + +4. 
**SSL/TLS issues through proxy**
+   - Some proxies don't support HTTPS properly
+   - Try using an HTTP proxy for HTTPS traffic
+   - Check the proxy server's SSL configuration

### Debug Proxy Configuration

```python
+import httpx
+import logging
+
+# Enable debug logging
+logging.basicConfig(level=logging.DEBUG)
+
+# Test proxy connection
+def test_proxy_connection(proxy_url):
+    try:
+        with httpx.Client(proxies={"http://": proxy_url, "https://": proxy_url}) as client:
+            response = client.get("http://httpbin.org/ip")
+            print(f"Proxy test successful: {response.json()}")
+    except Exception as e:
+        print(f"Proxy test failed: {e}")
+
+# Test your proxy
+test_proxy_connection("http://proxy.company.com:8080")
+```
+
+### Proxy Environment Variables
+
+```bash
+# Set the standard proxy environment variables for testing
+export HTTP_PROXY=http://proxy.company.com:8080
+export HTTPS_PROXY=http://proxy.company.com:8080
+export NO_PROXY=localhost,127.0.0.1
+
+# Test with curl
+curl -I https://api.orkes.io/api
+```
+
+### Proxy Authentication
+
+```python
+import base64
+
+from conductor.client.configuration.configuration import Configuration
+
+# Build the Basic auth header expected by the proxy
+username = "your_username"
+password = "your_password"
+credentials = f"{username}:{password}"
+encoded_credentials = base64.b64encode(credentials.encode()).decode()
+
+proxy_headers = {
+    "Proxy-Authorization": f"Basic {encoded_credentials}"
+}
+
+config = Configuration(
+    server_api_url="https://api.orkes.io/api",
+    proxy="http://proxy.company.com:8080",
+    proxy_headers=proxy_headers
+)
+```
diff --git a/docs/configuration/ssl-tls.md b/docs/configuration/ssl-tls.md
new file mode 100644
index 00000000..e6f4bf9c
--- /dev/null
+++ b/docs/configuration/ssl-tls.md
@@ -0,0 +1,262 @@
+# SSL/TLS Configuration
+
+The Conductor Python SDK supports comprehensive SSL/TLS configuration for both synchronous and asynchronous clients. This allows you to configure secure connections with custom certificates, client authentication, and various SSL verification options.
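+
+As a quick preview of the basic case (paths and URLs are placeholders; the same parameters are detailed in the sections below), trusting a private CA for server verification is a one-parameter change on the sync client:
+
+```python
+from conductor.client.configuration.configuration import Configuration
+
+# Trust a custom CA when verifying the server certificate
+config = Configuration(
+    base_url="https://play.orkes.io",
+    ssl_ca_cert="/path/to/ca-certificate.pem"
+)
+```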
+ +## Table of Contents + +- [Synchronous Client SSL Configuration](#synchronous-client-ssl-configuration) +- [Asynchronous Client SSL Configuration](#asynchronous-client-ssl-configuration) +- [Environment Variable Configuration](#environment-variable-configuration) +- [Configuration Parameters](#configuration-parameters) +- [Example Files](#example-files) +- [Security Best Practices](#security-best-practices) +- [Troubleshooting SSL Issues](#troubleshooting-ssl-issues) + +## Synchronous Client SSL Configuration + +### Basic SSL Configuration + +```python +from conductor.client.configuration.configuration import Configuration +from conductor.client.orkes_clients import OrkesClients + +# Basic SSL configuration with custom CA certificate +config = Configuration( + base_url="https://play.orkes.io", + ssl_ca_cert="/path/to/ca-certificate.pem", +) + +# Create clients with SSL configuration +clients = OrkesClients(configuration=config) +workflow_client = clients.get_workflow_client() +``` + +### SSL with Certificate Data + +```python +# SSL with custom CA certificate data (PEM string) +config = Configuration( + base_url="https://play.orkes.io", + ca_cert_data="""-----BEGIN CERTIFICATE----- +MIIDXTCCAkWgAwIBAgIJAKoK/Ovj8EUMA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV +BAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRlcm5ldCBX +aWRnaXRzIFB0eSBMdGQwHhcNMTYwMjEyMTQ0NDQ2WhcNMjYwMjEwMTQ0NDQ2WjBF +-----END CERTIFICATE-----""", +) +``` + +### SSL with Client Certificate Authentication + +```python +# SSL with client certificate authentication +config = Configuration( + base_url="https://play.orkes.io", + ssl_ca_cert="/path/to/ca-certificate.pem", + cert_file="/path/to/client-certificate.pem", + key_file="/path/to/client-key.pem", +) +``` + +### SSL with Disabled Verification (Not Recommended for Production) + +```python +# SSL with completely disabled verification (NOT RECOMMENDED for production) +config = Configuration( + base_url="https://play.orkes.io", +) +config.verify_ssl = False +``` + +### Advanced SSL Configuration with httpx + +```python +import httpx +import ssl + +# Create custom SSL context +ssl_context = ssl.create_default_context() +ssl_context.load_verify_locations("/path/to/ca-certificate.pem") +ssl_context.load_cert_chain( + certfile="/path/to/client-certificate.pem", + keyfile="/path/to/client-key.pem" +) + +# Create custom httpx client with SSL context +custom_client = httpx.Client( + verify=ssl_context, + timeout=httpx.Timeout(120.0), + follow_redirects=True, + limits=httpx.Limits(max_keepalive_connections=20, max_connections=100), +) + +config = Configuration(base_url="https://play.orkes.io") +config.http_connection = custom_client +``` + +## Asynchronous Client SSL Configuration + +### Basic Async SSL Configuration + +```python +import asyncio +from conductor.asyncio_client.configuration import Configuration +from conductor.asyncio_client.adapters import ApiClient +from conductor.asyncio_client.orkes.orkes_clients import OrkesClients + +# Basic SSL configuration with custom CA certificate +config = Configuration( + server_url="https://play.orkes.io/api", + ssl_ca_cert="/path/to/ca-certificate.pem", +) + +async def main(): + async with ApiClient(config) as api_client: + orkes_clients = OrkesClients(api_client, config) + workflow_client = orkes_clients.get_workflow_client() + + # Use the client with SSL configuration + workflows = await workflow_client.search_workflows() + print(f"Found {len(workflows)} workflows") + +asyncio.run(main()) +``` + +### Async SSL with Certificate Data + 
+```python +# SSL with custom CA certificate data (PEM string) +config = Configuration( + server_url="https://play.orkes.io/api", + ca_cert_data="""-----BEGIN CERTIFICATE----- +MIIDXTCCAkWgAwIBAgIJAKoK/Ovj8EUMA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV +BAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRlcm5ldCBX +aWRnaXRzIFB0eSBMdGQwHhcNMTYwMjEyMTQ0NDQ2WhcNMjYwMjEwMTQ0NDQ2WjBF +-----END CERTIFICATE-----""", +) +``` + +### Async SSL with Custom SSL Context + +```python +import ssl + +# Create custom SSL context +ssl_context = ssl.create_default_context() +ssl_context.load_verify_locations("/path/to/ca-certificate.pem") +ssl_context.load_cert_chain( + certfile="/path/to/client-certificate.pem", + keyfile="/path/to/client-key.pem" +) +ssl_context.check_hostname = True +ssl_context.verify_mode = ssl.CERT_REQUIRED + +# Use with async client +config = Configuration( + server_url="https://play.orkes.io/api", + ssl_ca_cert="/path/to/ca-certificate.pem", +) +``` + +## Environment Variable Configuration + +You can configure SSL settings using environment variables: + +```bash +# Basic SSL configuration +export CONDUCTOR_SERVER_URL="https://play.orkes.io/api" +export CONDUCTOR_SSL_CA_CERT="/path/to/ca-certificate.pem" + +# Client certificate authentication +export CONDUCTOR_CERT_FILE="/path/to/client-certificate.pem" +export CONDUCTOR_KEY_FILE="/path/to/client-key.pem" +``` + +```python +# Configuration will automatically pick up environment variables +from conductor.client.configuration.configuration import Configuration + +config = Configuration() # SSL settings loaded from environment +``` + +## Configuration Parameters + +| Parameter | Type | Description | +|-----------|------|-------------| +| `ssl_ca_cert` | str | Path to CA certificate file | +| `ca_cert_data` | str/bytes | CA certificate data as PEM string or DER bytes | +| `cert_file` | str | Path to client certificate file | +| `key_file` | str | Path to client private key file | +| `verify_ssl` | bool | Enable/disable SSL verification (default: True) | +| `assert_hostname` | str | Custom hostname for SSL verification | + +## Example Files + +For complete working examples, see: +- [Sync SSL Example](../../examples/sync_ssl_example.py) - Comprehensive sync client SSL configuration +- [Async SSL Example](../../examples/async/async_ssl_example.py) - Comprehensive async client SSL configuration + +## Security Best Practices + +1. **Always use HTTPS in production** - Never use HTTP for production environments +2. **Verify SSL certificates** - Keep `verify_ssl=True` in production +3. **Use strong cipher suites** - Ensure your server supports modern TLS versions +4. **Rotate certificates regularly** - Implement certificate rotation policies +5. **Use certificate pinning** - For high-security environments, consider certificate pinning +6. **Monitor certificate expiration** - Set up alerts for certificate expiration +7. **Use proper key management** - Store private keys securely + +## Troubleshooting SSL Issues + +### Common SSL Issues + +1. **Certificate verification failed** + - Check if the CA certificate is correct + - Verify the certificate chain is complete + - Ensure the certificate hasn't expired + +2. **Hostname verification failed** + - Check if the hostname matches the certificate + - Use `assert_hostname` parameter if needed + +3. 
**Connection timeout** + - Check network connectivity + - Verify firewall settings + - Check if the server is accessible + +### Debug SSL Connections + +```python +import ssl +import logging + +# Enable SSL debugging +logging.basicConfig(level=logging.DEBUG) +ssl_context = ssl.create_default_context() +ssl_context.check_hostname = False # Only for debugging +ssl_context.verify_mode = ssl.CERT_NONE # Only for debugging + +# Use with configuration +config = Configuration( + base_url="https://your-server.com", + ssl_ca_cert="/path/to/ca-cert.pem" +) +``` + +### Testing SSL Configuration + +```python +import ssl +import socket + +def test_ssl_connection(hostname, port, ca_cert_path): + context = ssl.create_default_context() + context.load_verify_locations(ca_cert_path) + + with socket.create_connection((hostname, port)) as sock: + with context.wrap_socket(sock, server_hostname=hostname) as ssock: + print(f"SSL connection successful: {ssock.version()}") + print(f"Certificate: {ssock.getpeercert()}") + +# Test your SSL configuration +test_ssl_connection("your-server.com", 443, "/path/to/ca-cert.pem") +``` diff --git a/docs/development/README.md b/docs/development/README.md new file mode 100644 index 00000000..3182cdf2 --- /dev/null +++ b/docs/development/README.md @@ -0,0 +1,318 @@ +# Development + +This section covers development setup, client regeneration, and contributing to the Conductor Python SDK. + +## Table of Contents + +- [Development Setup](#development-setup) +- [Client Regeneration](#client-regeneration) +- [Testing](#testing) +- [Contributing](#contributing) + +## Development Setup + +### Prerequisites + +- Python 3.9+ +- Git +- Docker (for running Conductor server locally) + +### Local Development Environment + +1. **Clone the repository** + ```bash + git clone https://github.com/conductor-oss/python-sdk.git + cd python-sdk + ``` + +2. **Create a virtual environment** + ```bash + python3 -m venv conductor-dev + source conductor-dev/bin/activate # On Windows: conductor-dev\Scripts\activate + ``` + +3. **Install development dependencies** + ```bash + pip install -r requirements.dev.txt + pip install -e . + ``` + +4. **Start Conductor server locally** + ```bash + docker run --init -p 8080:8080 -p 5000:5000 conductoross/conductor-standalone:3.15.0 + ``` + +5. **Run tests** + ```bash + pytest tests/ + ``` + +## Client Regeneration + +When updating to a new Orkes version, you may need to regenerate the client code to support new APIs and features. 
The SDK provides comprehensive guides for regenerating both sync and async clients: + +### Sync Client Regeneration + +For the synchronous client (`conductor.client`), see the [Client Regeneration Guide](../../src/conductor/client/CLIENT_REGENERATION_GUIDE.md) which covers: + +- Creating swagger.json files for new Orkes versions +- Generating client code using Swagger Codegen +- Replacing models and API clients in the codegen folder +- Creating adapters and updating the proxy package +- Running backward compatibility, serialization, and integration tests + +### Async Client Regeneration + +For the asynchronous client (`conductor.asyncio_client`), see the [Async Client Regeneration Guide](../../src/conductor/asyncio_client/ASYNC_CLIENT_REGENERATION_GUIDE.md) which covers: + +- Creating swagger.json files for new Orkes versions +- Generating async client code using OpenAPI Generator +- Replacing models and API clients in the http folder +- Creating adapters for backward compatibility +- Running async-specific tests and handling breaking changes + +Both guides include detailed troubleshooting sections, best practices, and step-by-step instructions to ensure a smooth regeneration process while maintaining backward compatibility. + +### Quick Regeneration Steps + +1. **Generate swagger.json** + ```bash + # For sync client + python scripts/generate_swagger.py --version 3.15.0 --output src/conductor/client/swagger.json + + # For async client + python scripts/generate_swagger.py --version 3.15.0 --output src/conductor/asyncio_client/swagger.json + ``` + +2. **Generate client code** + ```bash + # Sync client + swagger-codegen generate -i src/conductor/client/swagger.json -l python -o src/conductor/client/codegen/ + + # Async client + openapi-generator generate -i src/conductor/asyncio_client/swagger.json -g python -o src/conductor/asyncio_client/http/ + ``` + +3. **Update adapters and run tests** + ```bash + python scripts/update_adapters.py + pytest tests/ + ``` + +## Testing + +### Running Tests + +```bash +# Run all tests +pytest + +# Run specific test categories +pytest tests/unit/ +pytest tests/integration/ +pytest tests/backwardcompatibility/ + +# Run with coverage +pytest --cov=conductor --cov-report=html + +# Run specific test file +pytest tests/unit/test_workflow.py + +# Run with verbose output +pytest -v +``` + +### Test Categories + +- **Unit Tests** (`tests/unit/`): Test individual components in isolation +- **Integration Tests** (`tests/integration/`): Test integration with Conductor server +- **Backward Compatibility Tests** (`tests/backwardcompatibility/`): Ensure API compatibility +- **Serialization Tests** (`tests/serdesertest/`): Test data serialization/deserialization + +### Writing Tests + +Follow the repository's testing guidelines: + +1. **Use functions instead of classes** for test cases +2. **Remove comments and docstrings** from test code +3. **Follow the repository's style guides** +4. 
**Use descriptive test names** + +Example test structure: + +```python +def test_workflow_creation(): + workflow_executor = WorkflowExecutor(configuration=Configuration()) + workflow = ConductorWorkflow(name='test_workflow', executor=workflow_executor) + assert workflow.name == 'test_workflow' + +def test_worker_task_execution(): + @worker_task(task_definition_name='test_task') + def test_task(input_data: str) -> str: + return f"processed: {input_data}" + + result = test_task("test_input") + assert result == "processed: test_input" +``` + +### Test Configuration + +Create a `conftest.py` file for shared test configuration: + +```python +import pytest +from conductor.client.configuration.configuration import Configuration + +@pytest.fixture +def test_config(): + return Configuration(server_api_url="http://localhost:8080/api") + +@pytest.fixture +def workflow_executor(test_config): + from conductor.client.workflow.executor.workflow_executor import WorkflowExecutor + return WorkflowExecutor(configuration=test_config) +``` + +## Contributing + +### Code Style + +- Follow PEP 8 guidelines +- Use type hints where appropriate +- Write clear, self-documenting code +- Add docstrings for public APIs + +### Pull Request Process + +1. **Fork the repository** +2. **Create a feature branch** + ```bash + git checkout -b feature/your-feature-name + ``` + +3. **Make your changes** + - Write tests for new functionality + - Update documentation if needed + - Ensure all tests pass + +4. **Commit your changes** + ```bash + git commit -m "Add feature: brief description" + ``` + +5. **Push to your fork** + ```bash + git push origin feature/your-feature-name + ``` + +6. **Create a Pull Request** + - Provide a clear description of changes + - Reference any related issues + - Ensure CI checks pass + +### Development Workflow + +1. **Start Conductor server** + ```bash + docker run --init -p 8080:8080 -p 5000:5000 conductoross/conductor-standalone:3.15.0 + ``` + +2. **Run tests before committing** + ```bash + pytest tests/ + ``` + +3. **Check code formatting** + ```bash + black src/ tests/ + isort src/ tests/ + ``` + +4. **Run linting** + ```bash + flake8 src/ tests/ + mypy src/ + ``` + +### Debugging + +#### Enable Debug Logging + +```python +import logging +logging.basicConfig(level=logging.DEBUG) + +# Your code here +``` + +#### Debug Conductor Server Connection + +```python +from conductor.client.configuration.configuration import Configuration +import httpx + +# Test server connectivity +config = Configuration() +try: + response = httpx.get(f"{config.server_api_url}/health") + print(f"Server status: {response.status_code}") +except Exception as e: + print(f"Connection failed: {e}") +``` + +#### Debug Workflow Execution + +```python +# Enable workflow execution logging +import logging +logging.getLogger("conductor.client.workflow").setLevel(logging.DEBUG) + +# Your workflow code +``` + +### Release Process + +1. **Update version numbers** + - `setup.py` + - `pyproject.toml` + - `src/conductor/__init__.py` + +2. **Update changelog** + - Document new features + - List bug fixes + - Note breaking changes + +3. **Create release tag** + ```bash + git tag -a v1.0.0 -m "Release version 1.0.0" + git push origin v1.0.0 + ``` + +4. **Build and publish** + ```bash + python -m build + twine upload dist/* + ``` + +### Troubleshooting + +#### Common Issues + +1. **Import errors** + - Check if virtual environment is activated + - Verify package installation: `pip list | grep conductor` + +2. 
**Connection errors** + - Ensure Conductor server is running + - Check server URL configuration + - Verify network connectivity + +3. **Test failures** + - Check test environment setup + - Verify test data and fixtures + - Review test logs for specific errors + +#### Getting Help + +- Check existing [GitHub Issues](https://github.com/conductor-oss/python-sdk/issues) +- Create a new issue with detailed information diff --git a/docs/examples/README.md b/docs/examples/README.md new file mode 100644 index 00000000..fb6d3d48 --- /dev/null +++ b/docs/examples/README.md @@ -0,0 +1,131 @@ +# Examples + +This section contains complete working examples demonstrating various features of the Conductor Python SDK. + +## Table of Contents + +- [Hello World](hello-world/) - Basic workflow example +- [Dynamic Workflow](../examples/dynamic_workflow.py) - Dynamic workflow creation +- [Kitchen Sink](../examples/kitchensink.py) - Comprehensive workflow features +- [Async Examples](../examples/async/) - Asynchronous client examples + +## Quick Start Examples + +### Basic Worker and Workflow + +```python +from conductor.client.worker.worker_task import worker_task +from conductor.client.workflow.conductor_workflow import ConductorWorkflow +from conductor.client.workflow.executor.workflow_executor import WorkflowExecutor +from conductor.client.automator.task_handler import TaskHandler +from conductor.client.configuration.configuration import Configuration + +@worker_task(task_definition_name='greet') +def greet(name: str) -> str: + return f'Hello {name}' + +def main(): + config = Configuration() + workflow_executor = WorkflowExecutor(configuration=config) + + workflow = ConductorWorkflow(name='greetings', executor=workflow_executor) + workflow.version = 1 + workflow >> greet(task_ref_name='greet_ref', name=workflow.input('name')) + + workflow.register(True) + + task_handler = TaskHandler(configuration=config) + task_handler.start_processes() + + result = workflow_executor.execute( + name=workflow.name, + version=workflow.version, + workflow_input={'name': 'World'} + ) + + print(f'Result: {result.output["result"]}') + task_handler.stop_processes() + +if __name__ == '__main__': + main() +``` + +## Example Categories + +### Basic Examples +- **Hello World** - Simple worker and workflow +- **Dynamic Workflow** - Creating workflows programmatically +- **Kitchen Sink** - All supported features + +### Advanced Examples +- **Async Examples** - Asynchronous client usage +- **SSL Examples** - Secure connections +- **Proxy Examples** - Network proxy configuration + +### Integration Examples +- **Orkes Examples** - Orkes Conductor specific features +- **Multi-agent Examples** - Complex multi-agent workflows +- **AI Integration** - AI and machine learning workflows + +## Running Examples + +1. **Start Conductor Server** + ```bash + docker run --init -p 8080:8080 -p 5000:5000 conductoross/conductor-standalone:3.15.0 + ``` + +2. **Run an Example** + ```bash + python examples/helloworld/helloworld.py + ``` + +3. **View in UI** + Open http://localhost:5000 to see workflow execution + +## Example Structure + +``` +examples/ +├── helloworld/ # Basic examples +│ ├── helloworld.py +│ ├── greetings_workflow.py +│ └── greetings_worker.py +├── async/ # Async examples +│ ├── async_ssl_example.py +│ └── async_proxy_example.py +├── orkes/ # Orkes specific examples +│ ├── open_ai_chat_gpt.py +│ └── multiagent_chat.py +└── dynamic_workflow.py # Dynamic workflow example +``` + +## Contributing Examples + +When adding new examples: + +1. 
**Follow the naming convention** - Use descriptive names +2. **Include documentation** - Add comments explaining the example +3. **Test thoroughly** - Ensure examples work with latest SDK +4. **Update this README** - Add new examples to the table of contents + +## Troubleshooting Examples + +### Common Issues + +1. **Connection refused** + - Ensure Conductor server is running + - Check server URL configuration + +2. **Import errors** + - Verify SDK installation + - Check Python path + +3. **Authentication errors** + - Verify API keys for Orkes examples + - Check authentication configuration + +### Getting Help + +- Check the [main documentation](../README.md) +- Review [configuration guides](configuration/) +- Open an issue on GitHub diff --git a/docs/worker/README.md b/docs/worker/README.md index b8ce84c5..ad8ac9a6 100644 --- a/docs/worker/README.md +++ b/docs/worker/README.md @@ -1,376 +1,303 @@ -# Worker +# Conductor Workers -Considering real use cases, the goal is to run multiple workers in parallel. Due to some limitations with Python, a multiprocessing architecture was chosen in order to enable real parallelization. +A Workflow task represents a unit of business logic that achieves a specific goal, such as checking inventory, initiating payment transfer, etc. A worker implements a task in the workflow. -You can write your workers independently and append them to a list. The `TaskHandler` class will spawn a unique and independent process for each worker, making sure it will behave as expected, by running an infinite loop like this: -* Poll for a `Task` at Conductor Server -* Generate `TaskResult` from given `Task` -* Update given `Task` with `TaskResult` at Conductor Server +## Table of Contents -## Write workers +- [Implementing Workers](#implementing-workers) +- [Managing Workers in Application](#managing-workers-in-application) +- [Design Principles for Workers](#design-principles-for-workers) +- [System Task Workers](#system-task-workers) +- [Worker vs. Microservice/HTTP Endpoints](#worker-vs-microservicehttp-endpoints) +- [Deploying Workers in Production](#deploying-workers-in-production) -Currently, there are three ways of writing a Python worker: -1. [Worker as a function](#worker-as-a-function) -2. [Worker as a class](#worker-as-a-class) -3. [Worker as an annotation](#worker-as-an-annotation) +## Implementing Workers +The workers can be implemented by writing a simple Python function and annotating the function with the `@worker_task`. Conductor workers are services (similar to microservices) that follow the [Single Responsibility Principle](https://en.wikipedia.org/wiki/Single_responsibility_principle). -### Worker as a function +Workers can be hosted along with the workflow or run in a distributed environment where a single workflow uses workers deployed and running in different machines/VMs/containers. Whether to keep all the workers in the same application or run them as a distributed application is a design and architectural choice. Conductor is well suited for both kinds of scenarios. -The function should follow this signature: +You can create or convert any existing Python function to a distributed worker by adding `@worker_task` annotation to it. 
Here is a simple worker that takes `name` as input and returns greetings: ```python -ExecuteTaskFunction = Callable[ - [ - Union[Task, object] - ], - Union[TaskResult, object] -] +from conductor.client.worker.worker_task import worker_task + +@worker_task(task_definition_name='greetings') +def greetings(name: str) -> str: + return f'Hello, {name}' ``` -In other words: -* Input must be either a `Task` or an `object` - * If it isn't a `Task`, the assumption is - you're expecting to receive the `Task.input_data` as the object -* Output must be either a `TaskResult` or an `object` - * If it isn't a `TaskResult`, the assumption is - you're expecting to use the object as the `TaskResult.output_data` +A worker can take inputs which are primitives - `str`, `int`, `float`, `bool` etc. or can be complex data classes. -Quick example below: +Here is an example worker that uses `dataclass` as part of the worker input. ```python -from conductor.client.http.models import Task, TaskResult -from conductor.shared.http.enums import TaskResultStatus +from conductor.client.worker.worker_task import worker_task +from dataclasses import dataclass + +@dataclass +class OrderInfo: + order_id: int + sku: str + quantity: int + sku_price: float + + +@worker_task(task_definition_name='process_order') +def process_order(order_info: OrderInfo) -> str: + return f'order: {order_info.order_id}' +``` +## Managing Workers in Application -def execute(task: Task) -> TaskResult: - task_result = TaskResult( - task_id=task.task_id, - workflow_instance_id=task.workflow_instance_id, - worker_id='your_custom_id' - ) - task_result.add_output_data('worker_style', 'function') - task_result.status = TaskResultStatus.COMPLETED - return task_result -``` +Workers use a polling mechanism (with a long poll) to check for any available tasks from the server periodically. The startup and shutdown of workers are handled by the `conductor.client.automator.task_handler.TaskHandler` class. + +```python +from conductor.client.automator.task_handler import TaskHandler +from conductor.client.configuration.configuration import Configuration -In the case you like more details, you can take a look at all possible combinations of workers [here](../../tests/integration/resources/worker/python/python_worker.py) +def main(): + # points to http://localhost:8080/api by default + api_config = Configuration() -### Worker as a class + task_handler = TaskHandler( + workers=[], + configuration=api_config, + scan_for_annotated_workers=True, + import_modules=['greetings'] # import workers from this module - leave empty if all the workers are in the same module + ) + + # start worker polling + task_handler.start_processes() -The class must implement `WorkerInterface` class, which requires an `execute` method. The remaining ones are inherited, but can be easily overridden. 
Example with a custom polling interval: + # Call to stop the workers when the application is ready to shutdown + task_handler.stop_processes() -```python -from conductor.client.http.models import Task, TaskResult -from conductor.shared.http.enums import TaskResultStatus -from conductor.client.worker.worker_interface import WorkerInterface - -class SimplePythonWorker(WorkerInterface): - def execute(self, task: Task) -> TaskResult: - task_result = self.get_task_result_from_task(task) - task_result.add_output_data('worker_style', 'class') - task_result.add_output_data('secret_number', 1234) - task_result.add_output_data('is_it_true', False) - task_result.status = TaskResultStatus.COMPLETED - return task_result - - def get_polling_interval_in_seconds(self) -> float: - # poll every 500ms - return 0.5 + +if __name__ == '__main__': + main() ``` -### Worker as an annotation -A worker can also be invoked by adding a WorkerTask decorator as shown in the below example. -As long as the annotated worker is in any file inside the root folder of your worker application, it will be picked up by the TaskHandler, see [Run Workers](#run-workers) +## Design Principles for Workers -The arguments that can be passed when defining the decorated worker are: -1. task_definition_name: The task definition name of the condcutor task that needs to be polled for. -2. domain: Optional routing domain of the worker to execute tasks with a specific domain -3. worker_id: An optional worker id used to identify the polling worker -4. poll_interval: Polling interval in seconds. Defaulted to 1 second if not passed. +Each worker embodies the design pattern and follows certain basic principles: -```python -from conductor.client.worker.worker_task import WorkerTask +1. Workers are stateless and do not implement a workflow-specific logic. +2. Each worker executes a particular task and produces well-defined output given specific inputs. +3. Workers are meant to be idempotent (Should handle cases where the partially executed task, due to timeouts, etc, gets rescheduled). +4. Workers do not implement the logic to handle retries, etc., that is taken care of by the Conductor server. -@WorkerTask(task_definition_name='python_annotated_task', worker_id='decorated', poll_interval=200.0) -def python_annotated_task(input) -> object: - return {'message': 'python is so cool :)'} -``` +## System Task Workers -## Run Workers +A system task worker is a pre-built, general-purpose worker in your Conductor server distribution. -Now you can run your workers by calling a `TaskHandler`, example: +System tasks automate repeated tasks such as calling an HTTP endpoint, executing lightweight ECMA-compliant javascript code, publishing to an event broker, etc. + +### Wait Task + +> [!tip] +> Wait is a powerful way to have your system wait for a specific trigger, such as an external event, a particular date/time, or duration, such as 2 hours, without having to manage threads, background processes, or jobs. 
+ +#### Using Code to Create Wait Task ```python -from conductor.shared.configuration.settings.authentication_settings import AuthenticationSettings -from conductor.client.configuration.configuration import Configuration -from conductor.client.automator.task_handler import TaskHandler -from conductor.client.worker.worker import Worker - -#### Add these lines if running on a mac#### -from multiprocessing import set_start_method - -set_start_method('fork') -############################################ - -SERVER_API_URL = 'http://localhost:8080/api' -KEY_ID = '' -KEY_SECRET = '' - -configuration = Configuration( - server_api_url=SERVER_API_URL, - debug=True, - authentication_settings=AuthenticationSettings( - key_id=KEY_ID, - key_secret=KEY_SECRET - ), -) - -workers = [ - SimplePythonWorker( - task_definition_name='python_task_example' - ), - Worker( - task_definition_name='python_execute_function_task', - execute_function=execute, - poll_interval=250, - domain='test' - ) -] +from conductor.client.workflow.task.wait_task import WaitTask -# If there are decorated workers in your application, scan_for_annotated_workers should be set -# default value of scan_for_annotated_workers is False -with TaskHandler(workers, configuration, scan_for_annotated_workers=True) as task_handler: - task_handler.start_processes() +# waits for 2 seconds before scheduling the next task +wait_for_two_sec = WaitTask(task_ref_name='wait_for_2_sec', wait_for_seconds=2) + +# wait until end of jan +wait_till_jan = WaitTask(task_ref_name='wait_till_jsn', wait_until='2024-01-31 00:00 UTC') + +# waits until an API call or an event is triggered +wait_for_signal = WaitTask(task_ref_name='wait_till_jan_end') ``` -If you paste the above code in a file called main.py, you can launch the workers by running: -```shell -python3 main.py +#### JSON Configuration + +```json +{ + "name": "wait", + "taskReferenceName": "wait_till_jan_end", + "type": "WAIT", + "inputParameters": { + "until": "2024-01-31 00:00 UTC" + } +} ``` -## Task Domains -Workers can be configured to start polling for work that is tagged by a task domain. See more on domains [here](https://orkes.io/content/developer-guides/task-to-domain). +### HTTP Task + +Make a request to an HTTP(S) endpoint. The task allows for GET, PUT, POST, DELETE, HEAD, and PATCH requests. +#### Using Code to Create HTTP Task ```python -from conductor.client.worker.worker_task import WorkerTask +from conductor.client.workflow.task.http_task import HttpTask -@WorkerTask(task_definition_name='python_annotated_task', domain='cool') -def python_annotated_task(input) -> object: - return {'message': 'python is so cool :)'} +HttpTask(task_ref_name='call_remote_api', http_input={ + 'uri': 'https://orkes-api-tester.orkesconductor.com/api' + }) ``` -The above code would run a worker polling for task of type, *python_annotated_task*, but only for workflows that have a task to domain mapping specified with domain for this task as _cool_. +#### JSON Configuration ```json -"taskToDomain": { - "python_annotated_task": "cool" +{ + "name": "http_task", + "taskReferenceName": "http_task_ref", + "type" : "HTTP", + "uri": "https://orkes-api-tester.orkesconductor.com/api", + "method": "GET" } ``` -## Worker Configuration +### Javascript Executor Task -### Using Config File +Execute ECMA-compliant Javascript code. It is useful when writing a script for data mapping, calculations, etc. -You can choose to pass an _worker.ini_ file for specifying worker arguments like domain and polling_interval. 
This allows for configuring your workers dynamically and hence provides the flexbility along with cleaner worker code. This file has to be in the same directory as the main.py of your worker application. +#### Using Code to Create Inline Task -#### Format -``` -[task_definition_name] -domain = -polling_interval = -``` - -#### Generic Properties -There is an option for specifying common set of properties which apply to all workers by putting them in the _DEFAULT_ section. All workers who don't have a domain or/and polling_interval specified will default to these values. +```python +from conductor.client.workflow.task.javascript_task import JavascriptTask -``` -[DEFAULT] -domain = -polling_interval = -``` +say_hello_js = """ +function greetings() { + return { + "text": "hello " + $.name + } +} +greetings(); +""" -#### Example File +js = JavascriptTask(task_ref_name='hello_script', script=say_hello_js, bindings={'name': '${workflow.input.name}'}) ``` -[DEFAULT] -domain = nice -polling_interval = 2000 -[python_annotated_task_1] -domain = cool -polling_interval = 500 +#### JSON Configuration -[python_annotated_task_2] -domain = hot -polling_interval = 300 +```json +{ + "name": "inline_task", + "taskReferenceName": "inline_task_ref", + "type": "INLINE", + "inputParameters": { + "expression": " function greetings() {\n return {\n \"text\": \"hello \" + $.name\n }\n }\n greetings();", + "evaluatorType": "graaljs", + "name": "${workflow.input.name}" + } +} ``` -With the presence of the above config file, you don't need to specify domain and poll_interval for any of the worker task types. +### JSON Processing using JQ -##### Without config -```python -from conductor.client.worker.worker_task import WorkerTask +[Jq](https://jqlang.github.io/jq/) is like sed for JSON data - you can slice, filter, map, and transform structured data with the same ease that sed, awk, grep, and friends let you play with text. 
-@WorkerTask(task_definition_name='python_annotated_task_1', domain='cool', poll_interval=500.0) -def python_annotated_task(input) -> object: - return {'message': 'python is so cool :)'} +#### Using Code to Create JSON JQ Transform Task -@WorkerTask(task_definition_name='python_annotated_task_2', domain='hot', poll_interval=300.0) -def python_annotated_task_2(input) -> object: - return {'message': 'python is so hot :)'} +```python +from conductor.client.workflow.task.json_jq_task import JsonJQTask -@WorkerTask(task_definition_name='python_annotated_task_3', domain='nice', poll_interval=2000.0) -def python_annotated_task_3(input) -> object: - return {'message': 'python is so nice :)'} +jq_script = """ +{ key3: (.key1.value1 + .key2.value2) } +""" -@WorkerTask(task_definition_name='python_annotated_task_4', domain='nice', poll_interval=2000.0) -def python_annotated_task_4(input) -> object: - return {'message': 'python is very nice :)'} +jq = JsonJQTask(task_ref_name='jq_process', script=jq_script) ``` -##### With config -```python -from conductor.client.worker.worker_task import WorkerTask - -@WorkerTask(task_definition_name='python_annotated_task_1') -def python_annotated_task(input) -> object: - return {'message': 'python is so cool :)'} +#### JSON Configuration -@WorkerTask(task_definition_name='python_annotated_task_2') -def python_annotated_task_2(input) -> object: - return {'message': 'python is so hot :)'} +```json +{ + "name": "json_transform_task", + "taskReferenceName": "json_transform_task_ref", + "type": "JSON_JQ_TRANSFORM", + "inputParameters": { + "key1": "k1", + "key2": "k2", + "queryExpression": "{ key3: (.key1.value1 + .key2.value2) }", + } +} +``` -@WorkerTask(task_definition_name='python_annotated_task_3') -def python_annotated_task_3(input) -> object: - return {'message': 'python is so nice :)'} +## Worker vs. Microservice/HTTP Endpoints -@WorkerTask(task_definition_name='python_annotated_task_4') -def python_annotated_task_4(input) -> object: - return {'message': 'python is very nice :)'} +> [!tip] +> Workers are a lightweight alternative to exposing an HTTP endpoint and orchestrating using HTTP tasks. Using workers is a recommended approach if you do not need to expose the service over HTTP or gRPC endpoints. -``` +There are several advantages to this approach: -### Using Environment Variables +1. **No need for an API management layer** : Given there are no exposed endpoints and workers are self-load-balancing. +2. **Reduced infrastructure footprint** : No need for an API gateway/load balancer. +3. All the communication is initiated by workers using polling - avoiding the need to open up any incoming TCP ports. +4. Workers **self-regulate** when busy; they only poll as much as they can handle. Backpressure handling is done out of the box. +5. Workers can be scaled up/down quickly based on the demand by increasing the number of processes. -Workers can also be configured at run time by using environment variables which override configuration files as well. +## Deploying Workers in Production -#### Format -``` -conductor_worker_polling_interval= -conductor_worker_domain= -conductor_worker__polling_interval= -conductor_worker__domain= -``` +Conductor workers can run in the cloud-native environment or on-prem and can easily be deployed like any other Python application. Workers can run a containerized environment, VMs, or bare metal like you would deploy your other Python applications. 
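In practice, a container runs one small entrypoint module that starts the polling loop. The sketch below assumes your `@worker_task` functions live in a hypothetical `my_workers` module and that the file is saved as the `worker_app.py` referenced by the example Dockerfile further down; `join_processes` is used here to keep the container process alive, though the exact lifecycle calls may vary by SDK version.

```python
from conductor.client.automator.task_handler import TaskHandler
from conductor.client.configuration.configuration import Configuration


def main():
    # Reads CONDUCTOR_SERVER_URL, CONDUCTOR_AUTH_KEY and CONDUCTOR_AUTH_SECRET
    # from the environment when no arguments are passed
    api_config = Configuration()

    task_handler = TaskHandler(
        workers=[],
        configuration=api_config,
        scan_for_annotated_workers=True,
        import_modules=['my_workers'],  # hypothetical module with @worker_task functions
    )

    # Spawn one polling process per worker, then block until they exit
    task_handler.start_processes()
    task_handler.join_processes()


if __name__ == '__main__':
    main()
```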
-#### Example -``` -conductor_worker_polling_interval=2000 -conductor_worker_domain=nice -conductor_worker_python_annotated_task_1_polling_interval=500 -conductor_worker_python_annotated_task_1_domain=cool -conductor_worker_python_annotated_task_2_polling_interval=300 -conductor_worker_python_annotated_task_2_domain=hot -``` +### Best Practices -### Order of Precedence -If the worker configuration is initialized using multiple mechanisms mentioned above then the following order of priority -will be considered from highest to lowest: -1. Environment Variables -2. Config File -3. Worker Constructor Arguments +1. **Resource Management**: Monitor CPU and memory usage of workers +2. **Scaling**: Use container orchestration platforms like Kubernetes for automatic scaling +3. **Health Checks**: Implement health check endpoints for worker monitoring +4. **Logging**: Use structured logging for better debugging and monitoring +5. **Error Handling**: Implement proper error handling and retry logic +6. **Configuration**: Use environment variables for configuration management -See [Using Conductor Playground](https://orkes.io/content/docs/getting-started/playground/using-conductor-playground) for more details on how to use Playground environment for testing. +### Example Dockerfile -## Performance -If you're looking for better performance (i.e. more workers of the same type) - you can simply append more instances of the same worker, like this: +```dockerfile +FROM python:3.9-slim -```python -workers = [ - SimplePythonWorker( - task_definition_name='python_task_example' - ), - SimplePythonWorker( - task_definition_name='python_task_example' - ), - SimplePythonWorker( - task_definition_name='python_task_example' - ), - ... -] -``` +WORKDIR /app -```python -workers = [ - Worker( - task_definition_name='python_task_example', - execute_function=execute, - poll_interval=0.25, - ), - Worker( - task_definition_name='python_task_example', - execute_function=execute, - poll_interval=0.25, - ), - Worker( - task_definition_name='python_task_example', - execute_function=execute, - poll_interval=0.25, - ) - ... -] -``` +COPY requirements.txt . +RUN pip install -r requirements.txt -## C/C++ Support -Python is great, but at times you need to call into native C/C++ code. -Here is an example how you can do that with Conductor SDK. - -### 1. Export your C++ functions as `extern "C"`: - * C++ function example (sum two integers) - ```cpp - #include - - extern "C" int32_t get_sum(const int32_t A, const int32_t B) { - return A + B; - } - ``` -### 2. Compile and share its library: - * C++ file name: `simple_cpp_lib.cpp` - * Library output name goal: `lib.so` - ```shell - g++ -c -fPIC simple_cpp_lib.cpp -o simple_cpp_lib.o - g++ -shared -Wl,-install_name,lib.so -o lib.so simple_cpp_lib.o - ``` - -### 3. Use the C++ library in your python worker -You can use the Python library to call native code written in C++. Here is an example that calls native C++ library -from the Python worker. -See [simple_cpp_lib.cpp](src/example/worker/cpp/simple_cpp_lib.cpp) -and [simple_cpp_worker.py](src/example/worker/cpp/simple_cpp_worker.py) for complete working example. +COPY . . 
-```python -from conductor.client.http.models import Task, TaskResult -from conductor.shared.http.enums import TaskResultStatus -from conductor.client.worker.worker_interface import WorkerInterface -from ctypes import cdll - -class CppWrapper: - def __init__(self, file_path='./lib.so'): - self.cpp_lib = cdll.LoadLibrary(file_path) - - def get_sum(self, X: int, Y: int) -> int: - return self.cpp_lib.get_sum(X, Y) - - -class SimpleCppWorker(WorkerInterface): - cpp_wrapper = CppWrapper() - - def execute(self, task: Task) -> TaskResult: - execution_result = self.cpp_wrapper.get_sum(1, 2) - task_result = self.get_task_result_from_task(task) - task_result.add_output_data( - 'sum', execution_result - ) - task_result.status = TaskResultStatus.COMPLETED - return task_result +CMD ["python", "worker_app.py"] ``` -### Next: [Create workflows using Code](../workflow/README.md) +### Example Kubernetes Deployment + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: conductor-worker +spec: + replicas: 3 + selector: + matchLabels: + app: conductor-worker + template: + metadata: + labels: + app: conductor-worker + spec: + containers: + - name: worker + image: your-registry/conductor-worker:latest + env: + - name: CONDUCTOR_SERVER_URL + value: "https://your-conductor-server.com/api" + - name: CONDUCTOR_AUTH_KEY + valueFrom: + secretKeyRef: + name: conductor-secrets + key: auth-key + - name: CONDUCTOR_AUTH_SECRET + valueFrom: + secretKeyRef: + name: conductor-secrets + key: auth-secret + resources: + requests: + memory: "256Mi" + cpu: "250m" + limits: + memory: "512Mi" + cpu: "500m" +``` \ No newline at end of file diff --git a/docs/workflow/README.md b/docs/workflow/README.md index 4a620f60..a17a5197 100644 --- a/docs/workflow/README.md +++ b/docs/workflow/README.md @@ -1,125 +1,393 @@ -# Workflow Management +# Conductor Workflows -## Workflow Client +Workflow can be defined as the collection of tasks and operators that specify the order and execution of the defined tasks. This orchestration occurs in a hybrid ecosystem that encircles serverless functions, microservices, and monolithic applications. -### Initialization +## Table of Contents + +- [Creating Workflows](#creating-workflows) +- [Executing Workflows](#executing-workflows) +- [Managing Workflow Executions](#managing-workflow-executions) +- [Handling Failures, Retries and Rate Limits](#handling-failures-retries-and-rate-limits) +- [Using Conductor in Your Application](#using-conductor-in-your-application) + +## Creating Workflows + +Conductor lets you create the workflows using either Python or JSON as the configuration. + +Using Python as code to define and execute workflows lets you build extremely powerful, dynamic workflows and run them on Conductor. + +When the workflows are relatively static, they can be designed using the Orkes UI (available when using Orkes Conductor) and APIs or SDKs to register and run the workflows. + +Both the code and configuration approaches are equally powerful and similar in nature to how you treat Infrastructure as Code. + +### Execute Dynamic Workflows Using Code + +For cases where the workflows cannot be created statically ahead of time, Conductor is a powerful dynamic workflow execution platform that lets you create very complex workflows in code and execute them. It is useful when the workflow is unique for each execution. 
```python +from conductor.client.automator.task_handler import TaskHandler from conductor.client.configuration.configuration import Configuration -from conductor.shared.configuration.settings.authentication_settings import AuthenticationSettings -from conductor.client.orkes.orkes_workflow_client import OrkesWorkflowClient +from conductor.client.orkes_clients import OrkesClients +from conductor.client.worker.worker_task import worker_task +from conductor.client.workflow.conductor_workflow import ConductorWorkflow + +#@worker_task annotation denotes that this is a worker +@worker_task(task_definition_name='get_user_email') +def get_user_email(userid: str) -> str: + return f'{userid}@example.com' + +#@worker_task annotation denotes that this is a worker +@worker_task(task_definition_name='send_email') +def send_email(email: str, subject: str, body: str): + print(f'sending email to {email} with subject {subject} and body {body}') + + +def main(): + + # defaults to reading the configuration using following env variables + # CONDUCTOR_SERVER_URL : conductor server e.g. https://play.orkes.io/api + # CONDUCTOR_AUTH_KEY : API Authentication Key + # CONDUCTOR_AUTH_SECRET: API Auth Secret + api_config = Configuration() + + task_handler = TaskHandler(configuration=api_config) + #Start Polling + task_handler.start_processes() + + clients = OrkesClients(configuration=api_config) + workflow_executor = clients.get_workflow_executor() + workflow = ConductorWorkflow(name='dynamic_workflow', version=1, executor=workflow_executor) + get_email = get_user_email(task_ref_name='get_user_email_ref', userid=workflow.input('userid')) + sendmail = send_email(task_ref_name='send_email_ref', email=get_email.output('result'), subject='Hello from Orkes', + body='Test Email') + #Order of task execution + workflow >> get_email >> sendmail + + # Configure the output of the workflow + workflow.output_parameters(output_parameters={ + 'email': get_email.output('result') + }) + #Run the workflow + result = workflow.execute(workflow_input={'userid': 'user_a'}) + print(f'\nworkflow output: {result.output}\n') + #Stop Polling + task_handler.stop_processes() + + +if __name__ == '__main__': + main() +``` + +See [dynamic_workflow.py](../../examples/dynamic_workflow.py) for a fully functional example. + +### Kitchen-Sink Workflow + +For a more complex workflow example with all the supported features, see [kitchensink.py](../../examples/kitchensink.py). + +## Executing Workflows + +The [WorkflowClient](../../src/conductor/client/workflow_client.py) interface provides all the APIs required to work with workflow executions. + +```python +from conductor.client.configuration.configuration import Configuration +from conductor.client.orkes_clients import OrkesClients + +api_config = Configuration() +clients = OrkesClients(configuration=api_config) +workflow_client = clients.get_workflow_client() +``` + +### Execute Workflow Asynchronously + +Useful when workflows are long-running. + +```python +from conductor.client.http.models import StartWorkflowRequest + +request = StartWorkflowRequest() +request.name = 'hello' +request.version = 1 +request.input = {'name': 'Orkes'} +# workflow id is the unique execution id associated with this execution +workflow_id = workflow_client.start_workflow(request) +``` + +### Execute Workflow Synchronously + +Applicable when workflows complete very quickly - usually under 20-30 seconds. 
-configuration = Configuration( - server_api_url=SERVER_API_URL, - debug=False, - authentication_settings=AuthenticationSettings(key_id=KEY_ID, key_secret=KEY_SECRET) -) +```python +from conductor.client.http.models import StartWorkflowRequest + +request = StartWorkflowRequest() +request.name = 'hello' +request.version = 1 +request.input = {'name': 'Orkes'} -workflow_client = OrkesWorkflowClient(configuration) +workflow_run = workflow_client.execute_workflow( + start_workflow_request=request, + wait_for_seconds=12) ``` -### Start Workflow Execution +## Managing Workflow Executions + +> [!note] +> See [workflow_ops.py](../../examples/workflow_ops.py) for a fully working application that demonstrates working with the workflow executions and sending signals to the workflow to manage its state. -#### Start using StartWorkflowRequest +Workflows represent the application state. With Conductor, you can query the workflow execution state anytime during its lifecycle. You can also send signals to the workflow that determines the outcome of the workflow state. + +[WorkflowClient](../../src/conductor/client/workflow_client.py) is the client interface used to manage workflow executions. ```python -workflow = ConductorWorkflow( - executor=self.workflow_executor, - name="WORKFLOW_NAME", - description='Test Create Workflow', - version=1 -) -workflow.input_parameters(["a", "b"]) -workflow >> SimpleTask("simple_task", "simple_task_ref") -workflowDef = workflow.to_workflow_def() +from conductor.client.configuration.configuration import Configuration +from conductor.client.orkes_clients import OrkesClients -startWorkflowRequest = StartWorkflowRequest( - name="WORKFLOW_NAME", - version=1, - workflow_def=workflowDef, - input={"a": 15, "b": 3} -) -workflow_id = workflow_client.start_workflow(startWorkflowRequest) +api_config = Configuration() +clients = OrkesClients(configuration=api_config) +workflow_client = clients.get_workflow_client() ``` -#### Start using Workflow Name +### Get Execution Status + +The following method lets you query the status of the workflow execution given the id. When the `include_tasks` is set, the response also includes all the completed and in-progress tasks. ```python -wfInput = {"a": 5, "b": "+", "c": [7, 8]} -workflow_id = workflow_client.start_workflow_by_name("WORKFLOW_NAME", wfInput) +get_workflow(workflow_id: str, include_tasks: Optional[bool] = True) -> Workflow ``` -#### Execute workflow synchronously -Starts a workflow and waits until the workflow completes or the waitUntilTask completes. +### Update Workflow State Variables + +Variables inside a workflow are the equivalent of global variables in a program. ```python -wfInput = {"a": 5, "b": "+", "c": [7, 8]} -requestId = "request_id" -version = 1 -waitUntilTaskRef = "simple_task_ref" # Optional -workflow_id = workflow_client.execute_workflow( - startWorkflowRequest, requestId, "WORKFLOW_NAME", version, waitUntilTaskRef -) +update_variables(self, workflow_id: str, variables: Dict[str, object] = {}) ``` -### Fetch a workflow execution +### Terminate Running Workflows -#### Exclude tasks +Used to terminate a running workflow. Any pending tasks are canceled, and no further work is scheduled for this workflow upon termination. A failure workflow will be triggered but can be avoided if `trigger_failure_workflow` is set to False. 
```python -workflow = workflow_client.get_workflow(workflow_id, False) +terminate_workflow(self, workflow_id: str, reason: Optional[str] = None, trigger_failure_workflow: bool = False) ``` -#### Include tasks +### Retry Failed Workflows + +If the workflow has failed due to one of the task failures after exhausting the retries for the task, the workflow can still be resumed by calling the retry. ```python -workflow = workflow_client.get_workflow(workflow_id, True) +retry_workflow(self, workflow_id: str, resume_subworkflow_tasks: Optional[bool] = False) ``` -### Workflow Execution Management +When a sub-workflow inside a workflow has failed, there are two options: + +1. Re-trigger the sub-workflow from the start (Default behavior). +2. Resume the sub-workflow from the failed task (set `resume_subworkflow_tasks` to True). -### Pause workflow +### Restart Workflows + +A workflow in the terminal state (COMPLETED, TERMINATED, FAILED) can be restarted from the beginning. Useful when retrying from the last failed task is insufficient, and the whole workflow must be started again. ```python -workflow_client.pause_workflow(workflow_id) +restart_workflow(self, workflow_id: str, use_latest_def: Optional[bool] = False) ``` -### Resume workflow +### Rerun Workflow from a Specific Task + +In the cases where a workflow needs to be restarted from a specific task rather than from the beginning, rerun provides that option. When issuing the rerun command to the workflow, you can specify the task ID from where the workflow should be restarted (as opposed to from the beginning), and optionally, the workflow's input can also be changed. ```python -workflow_client.resume_workflow(workflow_id) +rerun_workflow(self, workflow_id: str, rerun_workflow_request: RerunWorkflowRequest) ``` -### Terminate workflow +> [!tip] +> Rerun is one of the most powerful features Conductor has, giving you unparalleled control over the workflow restart. + +### Pause Running Workflow + +A running workflow can be put to a PAUSED status. A paused workflow lets the currently running tasks complete but does not schedule any new tasks until resumed. ```python -workflow_client.terminate_workflow(workflow_id, "Termination reason") +pause_workflow(self, workflow_id: str) ``` -### Restart workflow -This operation has no effect when called on a workflow that is in a non-terminal state. If useLatestDef is set, the restarted workflow uses the latest workflow definition. +### Resume Paused Workflow + +Resume operation resumes the currently paused workflow, immediately evaluating its state and scheduling the next set of tasks. ```python -workflow_client.restart_workflow(workflow_id, use_latest_def=True) +resume_workflow(self, workflow_id: str) ``` -### Retry failed workflow -When called, the task in the failed state is scheduled again, and the workflow moves to RUNNING status. If resumeSubworkflowTasks is set and the last failed task was a sub-workflow, the server restarts the sub-workflow from the failed task. If set to false, the sub-workflow is re-executed. +### Searching for Workflows + +Workflow executions are retained until removed from the Conductor. This gives complete visibility into all the executions an application has - regardless of the number of executions. Conductor has a powerful search API that allows you to search for workflow executions. 
```python
search(self, start, size, free_text: str = '*', query: str = None) -> ScrollableSearchResultWorkflowSummary
```

* **free_text**: Free text search to look for specific words in the workflow and task input/output.
* **query**: SQL-like query to search against specific fields in the workflow.

Here are the supported fields for **query**:

| Field         | Description                                                     |
|---------------|-----------------------------------------------------------------|
| status        | The status of the workflow.                                     |
| correlationId | The ID to correlate the workflow execution to other executions. |
| workflowType  | The name of the workflow.                                       |
| version       | The version of the workflow.                                    |
| startTime     | The start time of the workflow, in milliseconds.                |

## Handling Failures, Retries and Rate Limits

Conductor lets you embrace failures rather than worry about the complexity of handling them in your system.

All the aspects of handling failures, retries, rate limits, etc., are driven by configuration that can be updated in real time, without re-deploying your application.

### Retries

Each task in a Conductor workflow can be configured to handle failures with retries, along with the retry policy (linear, fixed, exponential backoff) and the maximum number of retry attempts allowed.

See [Error Handling](https://orkes.io/content/error-handling) for more details.

### Rate Limits

What happens when a task is operating on a critical resource that can only handle a few requests at a time? Tasks can be configured to have a fixed concurrency (X requests at a time) or a rate (Y tasks per time window).

### Task Registration

```python
from conductor.client.configuration.configuration import Configuration
from conductor.client.http.models import TaskDef
from conductor.client.orkes_clients import OrkesClients


def main():
    api_config = Configuration()
    clients = OrkesClients(configuration=api_config)
    metadata_client = clients.get_metadata_client()

    task_def = TaskDef()
    task_def.name = 'task_with_retries'
    task_def.retry_count = 3
    task_def.retry_logic = 'LINEAR_BACKOFF'
    task_def.retry_delay_seconds = 1

    # only allow 3 tasks at a time to be in the IN_PROGRESS status
    task_def.concurrent_exec_limit = 3

    # timeout the task if not polled within 60 seconds of scheduling
    task_def.poll_timeout_seconds = 60

    # timeout the task if the task does not COMPLETE in 2 minutes
    task_def.timeout_seconds = 120

    # for long-running tasks, timeout if the task is not updated to the COMPLETED or IN_PROGRESS
    # status within 60 seconds after the last update
    task_def.response_timeout_seconds = 60

    # only allow 100 executions in a 10-second window!
    # Note: this is complementary to concurrent_exec_limit
    task_def.rate_limit_per_frequency = 100
    task_def.rate_limit_frequency_in_seconds = 10

    metadata_client.register_task_def(task_def=task_def)
```

```json
{
  "name": "task_with_retries",

  "retryCount": 3,
  "retryLogic": "LINEAR_BACKOFF",
  "retryDelaySeconds": 1,
  "backoffScaleFactor": 1,

  "timeoutSeconds": 120,
  "responseTimeoutSeconds": 60,
  "pollTimeoutSeconds": 60,
  "timeoutPolicy": "TIME_OUT_WF",

  "concurrentExecLimit": 3,

  "rateLimitPerFrequency": 100,
  "rateLimitFrequencyInSeconds": 10
}
```

#### Update Task Definition:

```shell
POST /api/metadata/taskdef -d @task_def.json
```

See [task_configure.py](../../examples/task_configure.py) for a detailed working app.

## Using Conductor in Your Application

Conductor SDKs are lightweight and can easily be added to your existing or new Python app. This section dives deeper into integrating Conductor in your application.

### Adding Conductor SDK to Your Application

Conductor Python SDKs are published on PyPI at https://pypi.org/project/conductor-python/:

```shell
pip3 install conductor-python
```

### Testing Workflows

The Conductor SDK for Python provides a complete testing framework for your workflow-based applications. It works with any testing framework you prefer, without imposing a specific one.

The Conductor server provides a test endpoint `POST /api/workflow/test` that allows you to post a workflow along with the test execution data to evaluate the workflow.

The goals of the test framework are as follows:

1. Ability to test the various branches of the workflow.
2. Confirm the workflow execution and tasks given a fixed set of inputs and outputs.
3. Validate that the workflow completes or fails given specific inputs.

Here are example assertions from the test:

```python

...
test_request = WorkflowTestRequest(name=wf.name, version=wf.version,
                                   task_ref_to_mock_output=task_ref_to_mock_output,
                                   workflow_def=wf.to_workflow_def())
run = workflow_client.test_workflow(test_request=test_request)

print('completed the test run')
print(f'status: {run.status}')
self.assertEqual(run.status, 'COMPLETED')

...

```

> [!note]
> Workflow workers are your regular Python functions and can be tested with any available testing framework.

#### Example Unit Testing Application

See [test_workflows.py](../../examples/test_workflows.py) for a fully functional example of how to test a moderately complex workflow with branches.

### Workflow Deployments Using CI/CD

> [!tip]
> Treat your workflow definitions just like your code. If you define workflows using the UI, we recommend checking the JSON configuration into version control and using your regular CI/CD workflow to promote the workflow definitions across environments such as Dev, Test, and Prod.

Here is a recommended approach when defining workflows using JSON:

* Treat your workflow metadata as code.
* Check in the workflow and task definitions along with the application code.
* Use `POST /api/metadata/*` endpoints or MetadataClient (`from conductor.client.metadata_client import MetadataClient`) to register/update workflows as part of the deployment process, as sketched below.
* Version your workflows. If there is a significant change, change the version field of the workflow. See versioning workflows below for more details.
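For example, a deployment step that registers (or updates) a workflow definition from a JSON file checked into the repository might look like the following sketch. The file path is hypothetical, and the way the JSON is deserialized into `WorkflowDef` (as well as the exact `register_workflow_def` signature) may vary by SDK version:

```python
import json

from conductor.client.configuration.configuration import Configuration
from conductor.client.http.models import WorkflowDef
from conductor.client.orkes_clients import OrkesClients


def register_workflow_definitions():
    api_config = Configuration()
    metadata_client = OrkesClients(configuration=api_config).get_metadata_client()

    # Hypothetical path: the workflow JSON lives in version control next to the code.
    # Assumes the JSON keys match the WorkflowDef constructor's field names; adapt
    # the loading step to your serialization format if they do not.
    with open('workflows/greetings_workflow.json') as f:
        workflow_def = WorkflowDef(**json.load(f))

    # overwrite=True updates the definition when this name/version already exists
    metadata_client.register_workflow_def(workflow_def=workflow_def, overwrite=True)


if __name__ == '__main__':
    register_workflow_definitions()
```

Running a step like this in the deployment pipeline keeps the registered definitions in lockstep with the code that was reviewed and checked in.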
+
+### Versioning Workflows
+
+A powerful feature of Conductor is the ability to version workflows. Increment the version of a workflow whenever there is a significant change to its definition; multiple versions of the same workflow can run at the same time. When starting a new workflow execution, use the `version` field to specify which version to use. When omitted, the latest (highest-numbered) version is used.
+
+* Versioning lets you test changes safely, for example with canary releases in production or A/B tests across multiple versions, before rolling them out fully.
+* A version can also be deleted, effectively allowing for a "rollback" if required.
\ No newline at end of file
diff --git a/examples/async/async_proxy_example.py b/examples/async/async_proxy_example.py
new file mode 100644
index 00000000..a01f8ac3
--- /dev/null
+++ b/examples/async/async_proxy_example.py
@@ -0,0 +1,77 @@
+import asyncio
+import os
+from conductor.asyncio_client.adapters import ApiClient
+from conductor.asyncio_client.configuration import Configuration
+from conductor.asyncio_client.orkes.orkes_clients import OrkesClients
+
+
+async def main():
+    """
+    Example of configuring the async client with proxy settings.
+    """
+
+    # Method 1: Configure proxy via Configuration constructor parameters
+
+    # Basic proxy configuration
+    config = Configuration(
+        server_url="https://play.orkes.io/api",  # Or your Conductor server URL
+        proxy="http://your-proxy.com:8080",  # Your proxy server
+        proxy_headers={
+            "Authorization": "Bearer your-proxy-token",  # Optional proxy auth
+            "User-Agent": "Conductor-Python-Async-SDK/1.0"
+        }
+    )
+
+    # Method 2: Configure proxy via environment variables
+
+    # Set environment variables (you would typically do this in your shell or .env file)
+    os.environ["CONDUCTOR_SERVER_URL"] = "https://play.orkes.io/api"
+    os.environ["CONDUCTOR_PROXY"] = "http://your-proxy.com:8080"
+    os.environ["CONDUCTOR_PROXY_HEADERS"] = '{"Authorization": "Bearer your-proxy-token"}'
+
+    # Configuration will automatically pick up the environment variables
+    config_env = Configuration()
+
+    # Different proxy types
+
+    # HTTP proxy
+    http_config = Configuration(
+        server_url="https://play.orkes.io/api",
+        proxy="http://your-proxy.com:8080"
+    )
+
+    # HTTPS proxy
+    https_config = Configuration(
+        server_url="https://play.orkes.io/api",
+        proxy="https://your-proxy.com:8080"
+    )
+
+    # SOCKS5 proxy
+    socks5_config = Configuration(
+        server_url="https://play.orkes.io/api",
+        proxy="socks5://your-proxy.com:1080"
+    )
+
+    # SOCKS4 proxy
+    socks4_config = Configuration(
+        server_url="https://play.orkes.io/api",
+        proxy="socks4://your-proxy.com:1080"
+    )
+
+    # Usage
+
+    # Create an API client with the proxy configuration
+    async with ApiClient(config) as api_client:
+        # Create OrkesClients with the API client
+        orkes_clients = OrkesClients(api_client, config)
+        workflow_client = orkes_clients.get_workflow_client()
+
+        # Example: search workflows (the request goes through the proxy)
+        # Note: this only works if you have valid credentials and the proxy is accessible
+        workflows = await workflow_client.search_workflows()
+        print(f"Found {len(workflows)} workflows")
+
+
+if __name__ == "__main__":
+    asyncio.run(main())
diff --git a/examples/async/async_ssl_example.py b/examples/async/async_ssl_example.py
new file mode 100644
index 00000000..a422e100
--- /dev/null
+++ b/examples/async/async_ssl_example.py
@@ -0,0 +1,106 @@
+import asyncio
+import os
+from conductor.asyncio_client.adapters import ApiClient
+from conductor.asyncio_client.configuration import Configuration
+from conductor.asyncio_client.orkes.orkes_clients import OrkesClients
+
+
+async def main():
+    """
+    Example of configuring the async client with SSL settings.
+    """
+
+    # Method 1: Configure SSL via Configuration constructor parameters
+
+    # Basic SSL configuration with a custom CA certificate
+    config = Configuration(
+        server_url="https://play.orkes.io/api",  # Or your Conductor server URL
+        ssl_ca_cert="/path/to/ca-certificate.pem",  # Path to CA certificate file
+    )
+
+    # Method 2: Configure SSL via environment variables
+
+    # Set environment variables (you would typically do this in your shell or .env file)
+    os.environ["CONDUCTOR_SERVER_URL"] = "https://play.orkes.io/api"
+    os.environ["CONDUCTOR_SSL_CA_CERT"] = "/path/to/ca-certificate.pem"
+    os.environ["CONDUCTOR_VERIFY_SSL"] = "true"
+
+    # Configuration will automatically pick up the environment variables
+    config_env = Configuration()
+
+    # Different SSL configurations
+
+    # SSL with a custom CA certificate file
+    ssl_ca_file_config = Configuration(
+        server_url="https://play.orkes.io/api",
+        ssl_ca_cert="/path/to/ca-certificate.pem",
+    )
+
+    # SSL with custom CA certificate data (the PEM body must not be indented)
+    ssl_ca_data_config = Configuration(
+        server_url="https://play.orkes.io/api",
+        ca_cert_data="""-----BEGIN CERTIFICATE-----
+MIIDXTCCAkWgAwIBAgIJAKoK/Ovj8EUMA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV
+BAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRlcm5ldCBX
+aWRnaXRzIFB0eSBMdGQwHhcNMTYwMjEyMTQ0NDQ2WhcNMjYwMjEwMTQ0NDQ2WjBF
+-----END CERTIFICATE-----""",
+    )
+
+    # SSL with client certificate authentication
+    client_cert_config = Configuration(
+        server_url="https://play.orkes.io/api",
+        ssl_ca_cert="/path/to/ca-certificate.pem",
+        cert_file="/path/to/client-certificate.pem",
+        key_file="/path/to/client-key.pem",
+    )
+
+    # NOTE: hostname verification and Server Name Indication (SNI) are not
+    # separate Configuration parameters; control them through a custom SSL
+    # context as shown below.
+
+    # SSL with completely disabled verification (NOT RECOMMENDED for production)
+    no_ssl_verify_config = Configuration(
+        server_url="https://play.orkes.io/api",
+        verify_ssl=False,
+    )
+
+    # SSL with a custom SSL context (advanced usage)
+    import ssl
+
+    # Create a custom SSL context (illustrative: wire it into your HTTP
+    # transport as appropriate for the async client)
+    ssl_context = ssl.create_default_context()
+    ssl_context.load_verify_locations("/path/to/ca-certificate.pem")
+    ssl_context.load_cert_chain(
+        certfile="/path/to/client-certificate.pem", keyfile="/path/to/client-key.pem"
+    )
+    ssl_context.check_hostname = True
+    ssl_context.verify_mode = ssl.CERT_REQUIRED
+
+    # Usage
+
+    # Create an API client with the SSL configuration
+    async with ApiClient(config) as api_client:
+        # Create OrkesClients with the API client
+        orkes_clients = OrkesClients(api_client, config)
+        workflow_client = orkes_clients.get_workflow_client()
+
+        # Example: search workflows (this will use the SSL configuration)
+        # Note: this only works if you have valid credentials and SSL certificates
+        try:
+            workflows = await workflow_client.search_workflows()
+            print(f"Found {len(workflows)} workflows")
+        except Exception as e:
+            print(f"SSL connection failed: {e}")
+            print("Make sure your SSL certificates are valid and accessible")
+
+
+if __name__ == "__main__":
+    asyncio.run(main())
diff --git a/examples/sync_proxy_example.py b/examples/sync_proxy_example.py
new file mode 100644
index 00000000..ca4fceba
--- /dev/null
+++ b/examples/sync_proxy_example.py
@@ -0,0 +1,78 @@
+#!/usr/bin/env python3
+"""
+Simple example demonstrating sync client proxy configuration.
+
+This example shows how to configure the Conductor Python SDK sync client
+to work through a proxy server.
+"""
+
+import os
+from conductor.client.configuration.configuration import Configuration
+from conductor.client.orkes_clients import OrkesClients
+
+
+def main():
+    """
+    Example of configuring the sync client with proxy settings.
+    """
+
+    # Method 1: Configure proxy via Configuration constructor parameters
+
+    # Basic proxy configuration
+    config = Configuration(
+        base_url="https://play.orkes.io",  # Or your Conductor server URL
+        proxy="http://your-proxy.com:8080",  # Your proxy server
+        proxy_headers={
+            "Authorization": "Bearer your-proxy-token",  # Optional proxy auth
+            "User-Agent": "Conductor-Python-SDK/1.0",
+        },
+    )
+
+    # Create clients with the proxy configuration
+    clients = OrkesClients(configuration=config)
+    workflow_client = clients.get_workflow_client()
+    task_client = clients.get_task_client()
+
+    # Method 2: Configure proxy via environment variables
+
+    # Set environment variables (you would typically do this in your shell or .env file)
+    os.environ["CONDUCTOR_SERVER_URL"] = "https://play.orkes.io/api"
+    os.environ["CONDUCTOR_PROXY"] = "http://your-proxy.com:8080"
+    os.environ["CONDUCTOR_PROXY_HEADERS"] = (
+        '{"Authorization": "Bearer your-proxy-token"}'
+    )
+
+    # Configuration will automatically pick up the environment variables
+    config_env = Configuration()
+
+    # Different proxy types
+
+    # HTTP proxy
+    http_config = Configuration(
+        base_url="https://play.orkes.io", proxy="http://your-proxy.com:8080"
+    )
+
+    # HTTPS proxy
+    https_config = Configuration(
+        base_url="https://play.orkes.io", proxy="https://your-proxy.com:8080"
+    )
+
+    # SOCKS5 proxy
+    socks5_config = Configuration(
+        base_url="https://play.orkes.io", proxy="socks5://your-proxy.com:1080"
+    )
+
+    # SOCKS4 proxy
+    socks4_config = Configuration(
+        base_url="https://play.orkes.io", proxy="socks4://your-proxy.com:1080"
+    )
+
+    # Example: search workflows (the request goes through the proxy)
+    # Note: this only works if you have valid credentials and the proxy is accessible.
+    # search(start, size, ...) returns a ScrollableSearchResultWorkflowSummary.
+    workflows = workflow_client.search(start=0, size=10)
+    print(f"Found {len(workflows.results)} workflows")
+
+
+if __name__ == "__main__":
+    main()
diff --git a/examples/sync_ssl_example.py b/examples/sync_ssl_example.py
new file mode 100644
index 00000000..2cfc3237
--- /dev/null
+++ b/examples/sync_ssl_example.py
@@ -0,0 +1,163 @@
+#!/usr/bin/env python3
+"""
+Simple example demonstrating sync client SSL configuration.
+
+This example shows how to configure the Conductor Python SDK sync client
+with various SSL/TLS settings for secure connections.
+"""
+
+import os
+from conductor.client.configuration.configuration import Configuration
+from conductor.client.orkes_clients import OrkesClients
+
+
+def main():
+    """
+    Example of configuring the sync client with SSL settings.
+ """ + + # Method 1: Configure SSL via Configuration constructor parameters + + # Basic SSL configuration with custom CA certificate + config = Configuration( + base_url="https://play.orkes.io", + ssl_ca_cert="/path/to/ca-certificate.pem", + ) + + # Create clients with SSL configuration + clients = OrkesClients(configuration=config) + workflow_client = clients.get_workflow_client() + task_client = clients.get_task_client() + + # Method 2: Configure SSL via environment variables + + # Set environment variables (you would typically do this in your shell or .env file) + os.environ["CONDUCTOR_SERVER_URL"] = "https://play.orkes.io/api" + os.environ["CONDUCTOR_SSL_CA_CERT"] = "/path/to/ca-certificate.pem" + os.environ["CONDUCTOR_VERIFY_SSL"] = "true" + + # Configuration will automatically pick up environment variables + config_env = Configuration() + + # Different SSL configurations + + # SSL with custom CA certificate file + ssl_ca_file_config = Configuration( + base_url="https://play.orkes.io", + ssl_ca_cert="/path/to/ca-certificate.pem", + ) + + # SSL with custom CA certificate data (PEM string) + ssl_ca_data_config = Configuration( + base_url="https://play.orkes.io", + ca_cert_data="""-----BEGIN CERTIFICATE----- +MIIDXTCCAkWgAwIBAgIJAKoK/Ovj8EUMA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV +BAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRlcm5ldCBX +aWRnaXRzIFB0eSBMdGQwHhcNMTYwMjEyMTQ0NDQ2WhcNMjYwMjEwMTQ0NDQ2WjBF +-----END CERTIFICATE-----""", + ) + + # SSL with client certificate authentication + client_cert_config = Configuration( + base_url="https://play.orkes.io", + ssl_ca_cert="/path/to/ca-certificate.pem", + cert_file="/path/to/client-certificate.pem", + key_file="/path/to/client-key.pem", + ) + + # SSL with disabled hostname verification + no_hostname_verify_config = Configuration( + base_url="https://play.orkes.io", + ssl_ca_cert="/path/to/ca-certificate.pem", + ) + + # SSL with completely disabled verification (NOT RECOMMENDED for production) + no_ssl_verify_config = Configuration( + base_url="https://play.orkes.io", + ) + # Disable SSL verification entirely + no_ssl_verify_config.verify_ssl = False + + # SSL with httpx-specific configurations + import httpx + import ssl + + # httpx client with custom SSL settings + httpx_ssl_client = httpx.Client( + verify="/path/to/ca-certificate.pem", # CA certificate file + cert=( + "/path/to/client-certificate.pem", + "/path/to/client-key.pem", + ), # Client cert + timeout=httpx.Timeout(120.0), + follow_redirects=True, + ) + + httpx_ssl_config = Configuration( + base_url="https://play.orkes.io", + ) + httpx_ssl_config.http_connection = httpx_ssl_client + + # httpx client with disabled SSL verification + httpx_no_ssl_client = httpx.Client( + verify=False, # Disable SSL verification + timeout=httpx.Timeout(120.0), + follow_redirects=True, + ) + + httpx_no_ssl_config = Configuration( + base_url="https://play.orkes.io", + ) + httpx_no_ssl_config.http_connection = httpx_no_ssl_client + + # SSL with custom SSL context (advanced usage) + + # Create custom SSL context + ssl_context = ssl.create_default_context() + ssl_context.load_verify_locations("/path/to/ca-certificate.pem") + ssl_context.load_cert_chain( + certfile="/path/to/client-certificate.pem", keyfile="/path/to/client-key.pem" + ) + + # Create custom httpx client with SSL context + custom_client = httpx.Client( + verify=ssl_context, + timeout=httpx.Timeout(120.0), + follow_redirects=True, + limits=httpx.Limits(max_keepalive_connections=20, max_connections=100), + ) + + custom_ssl_config = 
Configuration( + base_url="https://play.orkes.io", + ssl_ca_cert="/path/to/ca-certificate.pem", + ) + custom_ssl_config.http_connection = custom_client + + # Note: The sync client uses httpx instead of requests + # All SSL configurations are handled through the Configuration class + # or by providing a custom httpx.Client instance via http_connection + + # Example: Get workflow definitions (this will use SSL configuration) + # Note: This will only work if you have valid credentials and SSL certificates + + try: + workflows = workflow_client.search() + print(f"Found {len(workflows)} workflows") + except Exception as e: + print(f"SSL connection failed: {e}") + print("Make sure your SSL certificates are valid and accessible") + + # Example usage with different SSL configurations: + # You can use any of the configurations above by passing them to OrkesClients + + # Example with client certificate authentication: + # clients_with_cert = OrkesClients(configuration=client_cert_config) + # workflow_client_cert = clients_with_cert.get_workflow_client() + + # Example with custom httpx client: + # clients_with_httpx = OrkesClients(configuration=httpx_ssl_config) + # workflow_client_httpx = clients_with_httpx.get_workflow_client() + + +if __name__ == "__main__": + main() diff --git a/poetry.lock b/poetry.lock index c25f7ac0..b80ca9ad 100644 --- a/poetry.lock +++ b/poetry.lock @@ -1448,14 +1448,14 @@ dev = ["pre-commit", "pytest-asyncio", "tox"] [[package]] name = "python-dateutil" -version = "2.8.2" +version = "2.9.0.post0" description = "Extensions to the standard Python datetime module" optional = false python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7" groups = ["main"] files = [ - {file = "python-dateutil-2.8.2.tar.gz", hash = "sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86"}, - {file = "python_dateutil-2.8.2-py2.py3-none-any.whl", hash = "sha256:961d03dc3453ebbc59dbdea9e4e11c5651520a876d0f4db161e8674aae935da9"}, + {file = "python-dateutil-2.9.0.post0.tar.gz", hash = "sha256:37dd54208da7e1cd875388217d5e00ebd4179249f90fb72437e91a35459a0ad3"}, + {file = "python_dateutil-2.9.0.post0-py2.py3-none-any.whl", hash = "sha256:a8b2bc7bffae282281c8140a97d3aa9c14da0b136dfe83f850eea9a5f7470427"}, ] [package.dependencies] @@ -1701,14 +1701,14 @@ markers = {dev = "python_version < \"3.11\""} [[package]] name = "typing-inspection" -version = "0.4.1" +version = "0.4.2" description = "Runtime typing introspection tools" optional = false python-versions = ">=3.9" groups = ["main"] files = [ - {file = "typing_inspection-0.4.1-py3-none-any.whl", hash = "sha256:389055682238f53b04f7badcb49b989835495a96700ced5dab2d8feae4b26f51"}, - {file = "typing_inspection-0.4.1.tar.gz", hash = "sha256:6ae134cc0203c33377d43188d4064e9b357dba58cff3185f22924610e70a9d28"}, + {file = "typing_inspection-0.4.2-py3-none-any.whl", hash = "sha256:4ed1cacbdc298c220f1bd249ed5287caa16f34d44ef4e9c3d0cbad5b521545e7"}, + {file = "typing_inspection-0.4.2.tar.gz", hash = "sha256:ba561c48a67c5958007083d386c3295464928b01faa735ab8547c5692e87f464"}, ] [package.dependencies] @@ -1755,91 +1755,93 @@ test = ["covdefaults (>=2.3)", "coverage (>=7.2.7)", "coverage-enable-subprocess [[package]] name = "wrapt" -version = "1.17.2" +version = "1.17.3" description = "Module for decorators, wrappers and monkey patching." 
optional = false python-versions = ">=3.8" groups = ["main"] files = [ - {file = "wrapt-1.17.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:3d57c572081fed831ad2d26fd430d565b76aa277ed1d30ff4d40670b1c0dd984"}, - {file = "wrapt-1.17.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b5e251054542ae57ac7f3fba5d10bfff615b6c2fb09abeb37d2f1463f841ae22"}, - {file = "wrapt-1.17.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:80dd7db6a7cb57ffbc279c4394246414ec99537ae81ffd702443335a61dbf3a7"}, - {file = "wrapt-1.17.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0a6e821770cf99cc586d33833b2ff32faebdbe886bd6322395606cf55153246c"}, - {file = "wrapt-1.17.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b60fb58b90c6d63779cb0c0c54eeb38941bae3ecf7a73c764c52c88c2dcb9d72"}, - {file = "wrapt-1.17.2-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b870b5df5b71d8c3359d21be8f0d6c485fa0ebdb6477dda51a1ea54a9b558061"}, - {file = "wrapt-1.17.2-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:4011d137b9955791f9084749cba9a367c68d50ab8d11d64c50ba1688c9b457f2"}, - {file = "wrapt-1.17.2-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:1473400e5b2733e58b396a04eb7f35f541e1fb976d0c0724d0223dd607e0f74c"}, - {file = "wrapt-1.17.2-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:3cedbfa9c940fdad3e6e941db7138e26ce8aad38ab5fe9dcfadfed9db7a54e62"}, - {file = "wrapt-1.17.2-cp310-cp310-win32.whl", hash = "sha256:582530701bff1dec6779efa00c516496968edd851fba224fbd86e46cc6b73563"}, - {file = "wrapt-1.17.2-cp310-cp310-win_amd64.whl", hash = "sha256:58705da316756681ad3c9c73fd15499aa4d8c69f9fd38dc8a35e06c12468582f"}, - {file = "wrapt-1.17.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:ff04ef6eec3eee8a5efef2401495967a916feaa353643defcc03fc74fe213b58"}, - {file = "wrapt-1.17.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:4db983e7bca53819efdbd64590ee96c9213894272c776966ca6306b73e4affda"}, - {file = "wrapt-1.17.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:9abc77a4ce4c6f2a3168ff34b1da9b0f311a8f1cfd694ec96b0603dff1c79438"}, - {file = "wrapt-1.17.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0b929ac182f5ace000d459c59c2c9c33047e20e935f8e39371fa6e3b85d56f4a"}, - {file = "wrapt-1.17.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f09b286faeff3c750a879d336fb6d8713206fc97af3adc14def0cdd349df6000"}, - {file = "wrapt-1.17.2-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1a7ed2d9d039bd41e889f6fb9364554052ca21ce823580f6a07c4ec245c1f5d6"}, - {file = "wrapt-1.17.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:129a150f5c445165ff941fc02ee27df65940fcb8a22a61828b1853c98763a64b"}, - {file = "wrapt-1.17.2-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:1fb5699e4464afe5c7e65fa51d4f99e0b2eadcc176e4aa33600a3df7801d6662"}, - {file = "wrapt-1.17.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:9a2bce789a5ea90e51a02dfcc39e31b7f1e662bc3317979aa7e5538e3a034f72"}, - {file = "wrapt-1.17.2-cp311-cp311-win32.whl", hash = "sha256:4afd5814270fdf6380616b321fd31435a462019d834f83c8611a0ce7484c7317"}, - {file = "wrapt-1.17.2-cp311-cp311-win_amd64.whl", hash = "sha256:acc130bc0375999da18e3d19e5a86403667ac0c4042a094fefb7eec8ebac7cf3"}, - {file = 
"wrapt-1.17.2-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:d5e2439eecc762cd85e7bd37161d4714aa03a33c5ba884e26c81559817ca0925"}, - {file = "wrapt-1.17.2-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:3fc7cb4c1c744f8c05cd5f9438a3caa6ab94ce8344e952d7c45a8ed59dd88392"}, - {file = "wrapt-1.17.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8fdbdb757d5390f7c675e558fd3186d590973244fab0c5fe63d373ade3e99d40"}, - {file = "wrapt-1.17.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5bb1d0dbf99411f3d871deb6faa9aabb9d4e744d67dcaaa05399af89d847a91d"}, - {file = "wrapt-1.17.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d18a4865f46b8579d44e4fe1e2bcbc6472ad83d98e22a26c963d46e4c125ef0b"}, - {file = "wrapt-1.17.2-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc570b5f14a79734437cb7b0500376b6b791153314986074486e0b0fa8d71d98"}, - {file = "wrapt-1.17.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:6d9187b01bebc3875bac9b087948a2bccefe464a7d8f627cf6e48b1bbae30f82"}, - {file = "wrapt-1.17.2-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:9e8659775f1adf02eb1e6f109751268e493c73716ca5761f8acb695e52a756ae"}, - {file = "wrapt-1.17.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:e8b2816ebef96d83657b56306152a93909a83f23994f4b30ad4573b00bd11bb9"}, - {file = "wrapt-1.17.2-cp312-cp312-win32.whl", hash = "sha256:468090021f391fe0056ad3e807e3d9034e0fd01adcd3bdfba977b6fdf4213ea9"}, - {file = "wrapt-1.17.2-cp312-cp312-win_amd64.whl", hash = "sha256:ec89ed91f2fa8e3f52ae53cd3cf640d6feff92ba90d62236a81e4e563ac0e991"}, - {file = "wrapt-1.17.2-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:6ed6ffac43aecfe6d86ec5b74b06a5be33d5bb9243d055141e8cabb12aa08125"}, - {file = "wrapt-1.17.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:35621ae4c00e056adb0009f8e86e28eb4a41a4bfa8f9bfa9fca7d343fe94f998"}, - {file = "wrapt-1.17.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:a604bf7a053f8362d27eb9fefd2097f82600b856d5abe996d623babd067b1ab5"}, - {file = "wrapt-1.17.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5cbabee4f083b6b4cd282f5b817a867cf0b1028c54d445b7ec7cfe6505057cf8"}, - {file = "wrapt-1.17.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:49703ce2ddc220df165bd2962f8e03b84c89fee2d65e1c24a7defff6f988f4d6"}, - {file = "wrapt-1.17.2-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8112e52c5822fc4253f3901b676c55ddf288614dc7011634e2719718eaa187dc"}, - {file = "wrapt-1.17.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:9fee687dce376205d9a494e9c121e27183b2a3df18037f89d69bd7b35bcf59e2"}, - {file = "wrapt-1.17.2-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:18983c537e04d11cf027fbb60a1e8dfd5190e2b60cc27bc0808e653e7b218d1b"}, - {file = "wrapt-1.17.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:703919b1633412ab54bcf920ab388735832fdcb9f9a00ae49387f0fe67dad504"}, - {file = "wrapt-1.17.2-cp313-cp313-win32.whl", hash = "sha256:abbb9e76177c35d4e8568e58650aa6926040d6a9f6f03435b7a522bf1c487f9a"}, - {file = "wrapt-1.17.2-cp313-cp313-win_amd64.whl", hash = "sha256:69606d7bb691b50a4240ce6b22ebb319c1cfb164e5f6569835058196e0f3a845"}, - {file = "wrapt-1.17.2-cp313-cp313t-macosx_10_13_universal2.whl", hash = 
"sha256:4a721d3c943dae44f8e243b380cb645a709ba5bd35d3ad27bc2ed947e9c68192"}, - {file = "wrapt-1.17.2-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:766d8bbefcb9e00c3ac3b000d9acc51f1b399513f44d77dfe0eb026ad7c9a19b"}, - {file = "wrapt-1.17.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:e496a8ce2c256da1eb98bd15803a79bee00fc351f5dfb9ea82594a3f058309e0"}, - {file = "wrapt-1.17.2-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:40d615e4fe22f4ad3528448c193b218e077656ca9ccb22ce2cb20db730f8d306"}, - {file = "wrapt-1.17.2-cp313-cp313t-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a5aaeff38654462bc4b09023918b7f21790efb807f54c000a39d41d69cf552cb"}, - {file = "wrapt-1.17.2-cp313-cp313t-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9a7d15bbd2bc99e92e39f49a04653062ee6085c0e18b3b7512a4f2fe91f2d681"}, - {file = "wrapt-1.17.2-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:e3890b508a23299083e065f435a492b5435eba6e304a7114d2f919d400888cc6"}, - {file = "wrapt-1.17.2-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:8c8b293cd65ad716d13d8dd3624e42e5a19cc2a2f1acc74b30c2c13f15cb61a6"}, - {file = "wrapt-1.17.2-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:4c82b8785d98cdd9fed4cac84d765d234ed3251bd6afe34cb7ac523cb93e8b4f"}, - {file = "wrapt-1.17.2-cp313-cp313t-win32.whl", hash = "sha256:13e6afb7fe71fe7485a4550a8844cc9ffbe263c0f1a1eea569bc7091d4898555"}, - {file = "wrapt-1.17.2-cp313-cp313t-win_amd64.whl", hash = "sha256:eaf675418ed6b3b31c7a989fd007fa7c3be66ce14e5c3b27336383604c9da85c"}, - {file = "wrapt-1.17.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:5c803c401ea1c1c18de70a06a6f79fcc9c5acfc79133e9869e730ad7f8ad8ef9"}, - {file = "wrapt-1.17.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f917c1180fdb8623c2b75a99192f4025e412597c50b2ac870f156de8fb101119"}, - {file = "wrapt-1.17.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:ecc840861360ba9d176d413a5489b9a0aff6d6303d7e733e2c4623cfa26904a6"}, - {file = "wrapt-1.17.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bb87745b2e6dc56361bfde481d5a378dc314b252a98d7dd19a651a3fa58f24a9"}, - {file = "wrapt-1.17.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:58455b79ec2661c3600e65c0a716955adc2410f7383755d537584b0de41b1d8a"}, - {file = "wrapt-1.17.2-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b4e42a40a5e164cbfdb7b386c966a588b1047558a990981ace551ed7e12ca9c2"}, - {file = "wrapt-1.17.2-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:91bd7d1773e64019f9288b7a5101f3ae50d3d8e6b1de7edee9c2ccc1d32f0c0a"}, - {file = "wrapt-1.17.2-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:bb90fb8bda722a1b9d48ac1e6c38f923ea757b3baf8ebd0c82e09c5c1a0e7a04"}, - {file = "wrapt-1.17.2-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:08e7ce672e35efa54c5024936e559469436f8b8096253404faeb54d2a878416f"}, - {file = "wrapt-1.17.2-cp38-cp38-win32.whl", hash = "sha256:410a92fefd2e0e10d26210e1dfb4a876ddaf8439ef60d6434f21ef8d87efc5b7"}, - {file = "wrapt-1.17.2-cp38-cp38-win_amd64.whl", hash = "sha256:95c658736ec15602da0ed73f312d410117723914a5c91a14ee4cdd72f1d790b3"}, - {file = "wrapt-1.17.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:99039fa9e6306880572915728d7f6c24a86ec57b0a83f6b2491e1d8ab0235b9a"}, - {file = "wrapt-1.17.2-cp39-cp39-macosx_10_9_x86_64.whl", hash 
= "sha256:2696993ee1eebd20b8e4ee4356483c4cb696066ddc24bd70bcbb80fa56ff9061"}, - {file = "wrapt-1.17.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:612dff5db80beef9e649c6d803a8d50c409082f1fedc9dbcdfde2983b2025b82"}, - {file = "wrapt-1.17.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:62c2caa1585c82b3f7a7ab56afef7b3602021d6da34fbc1cf234ff139fed3cd9"}, - {file = "wrapt-1.17.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c958bcfd59bacc2d0249dcfe575e71da54f9dcf4a8bdf89c4cb9a68a1170d73f"}, - {file = "wrapt-1.17.2-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fc78a84e2dfbc27afe4b2bd7c80c8db9bca75cc5b85df52bfe634596a1da846b"}, - {file = "wrapt-1.17.2-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:ba0f0eb61ef00ea10e00eb53a9129501f52385c44853dbd6c4ad3f403603083f"}, - {file = "wrapt-1.17.2-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:1e1fe0e6ab7775fd842bc39e86f6dcfc4507ab0ffe206093e76d61cde37225c8"}, - {file = "wrapt-1.17.2-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:c86563182421896d73858e08e1db93afdd2b947a70064b813d515d66549e15f9"}, - {file = "wrapt-1.17.2-cp39-cp39-win32.whl", hash = "sha256:f393cda562f79828f38a819f4788641ac7c4085f30f1ce1a68672baa686482bb"}, - {file = "wrapt-1.17.2-cp39-cp39-win_amd64.whl", hash = "sha256:36ccae62f64235cf8ddb682073a60519426fdd4725524ae38874adf72b5f2aeb"}, - {file = "wrapt-1.17.2-py3-none-any.whl", hash = "sha256:b18f2d1533a71f069c7f82d524a52599053d4c7166e9dd374ae2136b7f40f7c8"}, - {file = "wrapt-1.17.2.tar.gz", hash = "sha256:41388e9d4d1522446fe79d3213196bd9e3b301a336965b9e27ca2788ebd122f3"}, + {file = "wrapt-1.17.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:88bbae4d40d5a46142e70d58bf664a89b6b4befaea7b2ecc14e03cedb8e06c04"}, + {file = "wrapt-1.17.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e6b13af258d6a9ad602d57d889f83b9d5543acd471eee12eb51f5b01f8eb1bc2"}, + {file = "wrapt-1.17.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:fd341868a4b6714a5962c1af0bd44f7c404ef78720c7de4892901e540417111c"}, + {file = "wrapt-1.17.3-cp310-cp310-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:f9b2601381be482f70e5d1051a5965c25fb3625455a2bf520b5a077b22afb775"}, + {file = "wrapt-1.17.3-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:343e44b2a8e60e06a7e0d29c1671a0d9951f59174f3709962b5143f60a2a98bd"}, + {file = "wrapt-1.17.3-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:33486899acd2d7d3066156b03465b949da3fd41a5da6e394ec49d271baefcf05"}, + {file = "wrapt-1.17.3-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:e6f40a8aa5a92f150bdb3e1c44b7e98fb7113955b2e5394122fa5532fec4b418"}, + {file = "wrapt-1.17.3-cp310-cp310-win32.whl", hash = "sha256:a36692b8491d30a8c75f1dfee65bef119d6f39ea84ee04d9f9311f83c5ad9390"}, + {file = "wrapt-1.17.3-cp310-cp310-win_amd64.whl", hash = "sha256:afd964fd43b10c12213574db492cb8f73b2f0826c8df07a68288f8f19af2ebe6"}, + {file = "wrapt-1.17.3-cp310-cp310-win_arm64.whl", hash = "sha256:af338aa93554be859173c39c85243970dc6a289fa907402289eeae7543e1ae18"}, + {file = "wrapt-1.17.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:273a736c4645e63ac582c60a56b0acb529ef07f78e08dc6bfadf6a46b19c0da7"}, + {file = "wrapt-1.17.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:5531d911795e3f935a9c23eb1c8c03c211661a5060aab167065896bbf62a5f85"}, + {file = 
"wrapt-1.17.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:0610b46293c59a3adbae3dee552b648b984176f8562ee0dba099a56cfbe4df1f"}, + {file = "wrapt-1.17.3-cp311-cp311-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:b32888aad8b6e68f83a8fdccbf3165f5469702a7544472bdf41f582970ed3311"}, + {file = "wrapt-1.17.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8cccf4f81371f257440c88faed6b74f1053eef90807b77e31ca057b2db74edb1"}, + {file = "wrapt-1.17.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:d8a210b158a34164de8bb68b0e7780041a903d7b00c87e906fb69928bf7890d5"}, + {file = "wrapt-1.17.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:79573c24a46ce11aab457b472efd8d125e5a51da2d1d24387666cd85f54c05b2"}, + {file = "wrapt-1.17.3-cp311-cp311-win32.whl", hash = "sha256:c31eebe420a9a5d2887b13000b043ff6ca27c452a9a22fa71f35f118e8d4bf89"}, + {file = "wrapt-1.17.3-cp311-cp311-win_amd64.whl", hash = "sha256:0b1831115c97f0663cb77aa27d381237e73ad4f721391a9bfb2fe8bc25fa6e77"}, + {file = "wrapt-1.17.3-cp311-cp311-win_arm64.whl", hash = "sha256:5a7b3c1ee8265eb4c8f1b7d29943f195c00673f5ab60c192eba2d4a7eae5f46a"}, + {file = "wrapt-1.17.3-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:ab232e7fdb44cdfbf55fc3afa31bcdb0d8980b9b95c38b6405df2acb672af0e0"}, + {file = "wrapt-1.17.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:9baa544e6acc91130e926e8c802a17f3b16fbea0fd441b5a60f5cf2cc5c3deba"}, + {file = "wrapt-1.17.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:6b538e31eca1a7ea4605e44f81a48aa24c4632a277431a6ed3f328835901f4fd"}, + {file = "wrapt-1.17.3-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:042ec3bb8f319c147b1301f2393bc19dba6e176b7da446853406d041c36c7828"}, + {file = "wrapt-1.17.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:3af60380ba0b7b5aeb329bc4e402acd25bd877e98b3727b0135cb5c2efdaefe9"}, + {file = "wrapt-1.17.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:0b02e424deef65c9f7326d8c19220a2c9040c51dc165cddb732f16198c168396"}, + {file = "wrapt-1.17.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:74afa28374a3c3a11b3b5e5fca0ae03bef8450d6aa3ab3a1e2c30e3a75d023dc"}, + {file = "wrapt-1.17.3-cp312-cp312-win32.whl", hash = "sha256:4da9f45279fff3543c371d5ababc57a0384f70be244de7759c85a7f989cb4ebe"}, + {file = "wrapt-1.17.3-cp312-cp312-win_amd64.whl", hash = "sha256:e71d5c6ebac14875668a1e90baf2ea0ef5b7ac7918355850c0908ae82bcb297c"}, + {file = "wrapt-1.17.3-cp312-cp312-win_arm64.whl", hash = "sha256:604d076c55e2fdd4c1c03d06dc1a31b95130010517b5019db15365ec4a405fc6"}, + {file = "wrapt-1.17.3-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:a47681378a0439215912ef542c45a783484d4dd82bac412b71e59cf9c0e1cea0"}, + {file = "wrapt-1.17.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:54a30837587c6ee3cd1a4d1c2ec5d24e77984d44e2f34547e2323ddb4e22eb77"}, + {file = "wrapt-1.17.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:16ecf15d6af39246fe33e507105d67e4b81d8f8d2c6598ff7e3ca1b8a37213f7"}, + {file = "wrapt-1.17.3-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:6fd1ad24dc235e4ab88cda009e19bf347aabb975e44fd5c2fb22a3f6e4141277"}, + {file = "wrapt-1.17.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0ed61b7c2d49cee3c027372df5809a59d60cf1b6c2f81ee980a091f3afed6a2d"}, + {file = 
"wrapt-1.17.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:423ed5420ad5f5529db9ce89eac09c8a2f97da18eb1c870237e84c5a5c2d60aa"}, + {file = "wrapt-1.17.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:e01375f275f010fcbf7f643b4279896d04e571889b8a5b3f848423d91bf07050"}, + {file = "wrapt-1.17.3-cp313-cp313-win32.whl", hash = "sha256:53e5e39ff71b3fc484df8a522c933ea2b7cdd0d5d15ae82e5b23fde87d44cbd8"}, + {file = "wrapt-1.17.3-cp313-cp313-win_amd64.whl", hash = "sha256:1f0b2f40cf341ee8cc1a97d51ff50dddb9fcc73241b9143ec74b30fc4f44f6cb"}, + {file = "wrapt-1.17.3-cp313-cp313-win_arm64.whl", hash = "sha256:7425ac3c54430f5fc5e7b6f41d41e704db073309acfc09305816bc6a0b26bb16"}, + {file = "wrapt-1.17.3-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:cf30f6e3c077c8e6a9a7809c94551203c8843e74ba0c960f4a98cd80d4665d39"}, + {file = "wrapt-1.17.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:e228514a06843cae89621384cfe3a80418f3c04aadf8a3b14e46a7be704e4235"}, + {file = "wrapt-1.17.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:5ea5eb3c0c071862997d6f3e02af1d055f381b1d25b286b9d6644b79db77657c"}, + {file = "wrapt-1.17.3-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:281262213373b6d5e4bb4353bc36d1ba4084e6d6b5d242863721ef2bf2c2930b"}, + {file = "wrapt-1.17.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:dc4a8d2b25efb6681ecacad42fca8859f88092d8732b170de6a5dddd80a1c8fa"}, + {file = "wrapt-1.17.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:373342dd05b1d07d752cecbec0c41817231f29f3a89aa8b8843f7b95992ed0c7"}, + {file = "wrapt-1.17.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:d40770d7c0fd5cbed9d84b2c3f2e156431a12c9a37dc6284060fb4bec0b7ffd4"}, + {file = "wrapt-1.17.3-cp314-cp314-win32.whl", hash = "sha256:fbd3c8319de8e1dc79d346929cd71d523622da527cca14e0c1d257e31c2b8b10"}, + {file = "wrapt-1.17.3-cp314-cp314-win_amd64.whl", hash = "sha256:e1a4120ae5705f673727d3253de3ed0e016f7cd78dc463db1b31e2463e1f3cf6"}, + {file = "wrapt-1.17.3-cp314-cp314-win_arm64.whl", hash = "sha256:507553480670cab08a800b9463bdb881b2edeed77dc677b0a5915e6106e91a58"}, + {file = "wrapt-1.17.3-cp314-cp314t-macosx_10_13_universal2.whl", hash = "sha256:ed7c635ae45cfbc1a7371f708727bf74690daedc49b4dba310590ca0bd28aa8a"}, + {file = "wrapt-1.17.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:249f88ed15503f6492a71f01442abddd73856a0032ae860de6d75ca62eed8067"}, + {file = "wrapt-1.17.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:5a03a38adec8066d5a37bea22f2ba6bbf39fcdefbe2d91419ab864c3fb515454"}, + {file = "wrapt-1.17.3-cp314-cp314t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:5d4478d72eb61c36e5b446e375bbc49ed002430d17cdec3cecb36993398e1a9e"}, + {file = "wrapt-1.17.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:223db574bb38637e8230eb14b185565023ab624474df94d2af18f1cdb625216f"}, + {file = "wrapt-1.17.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:e405adefb53a435f01efa7ccdec012c016b5a1d3f35459990afc39b6be4d5056"}, + {file = "wrapt-1.17.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:88547535b787a6c9ce4086917b6e1d291aa8ed914fdd3a838b3539dc95c12804"}, + {file = "wrapt-1.17.3-cp314-cp314t-win32.whl", hash = "sha256:41b1d2bc74c2cac6f9074df52b2efbef2b30bdfe5f40cb78f8ca22963bc62977"}, + {file = "wrapt-1.17.3-cp314-cp314t-win_amd64.whl", hash = 
"sha256:73d496de46cd2cdbdbcce4ae4bcdb4afb6a11234a1df9c085249d55166b95116"}, + {file = "wrapt-1.17.3-cp314-cp314t-win_arm64.whl", hash = "sha256:f38e60678850c42461d4202739f9bf1e3a737c7ad283638251e79cc49effb6b6"}, + {file = "wrapt-1.17.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:70d86fa5197b8947a2fa70260b48e400bf2ccacdcab97bb7de47e3d1e6312225"}, + {file = "wrapt-1.17.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:df7d30371a2accfe4013e90445f6388c570f103d61019b6b7c57e0265250072a"}, + {file = "wrapt-1.17.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:caea3e9c79d5f0d2c6d9ab96111601797ea5da8e6d0723f77eabb0d4068d2b2f"}, + {file = "wrapt-1.17.3-cp38-cp38-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:758895b01d546812d1f42204bd443b8c433c44d090248bf22689df673ccafe00"}, + {file = "wrapt-1.17.3-cp38-cp38-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:02b551d101f31694fc785e58e0720ef7d9a10c4e62c1c9358ce6f63f23e30a56"}, + {file = "wrapt-1.17.3-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:656873859b3b50eeebe6db8b1455e99d90c26ab058db8e427046dbc35c3140a5"}, + {file = "wrapt-1.17.3-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:a9a2203361a6e6404f80b99234fe7fb37d1fc73487b5a78dc1aa5b97201e0f22"}, + {file = "wrapt-1.17.3-cp38-cp38-win32.whl", hash = "sha256:55cbbc356c2842f39bcc553cf695932e8b30e30e797f961860afb308e6b1bb7c"}, + {file = "wrapt-1.17.3-cp38-cp38-win_amd64.whl", hash = "sha256:ad85e269fe54d506b240d2d7b9f5f2057c2aa9a2ea5b32c66f8902f768117ed2"}, + {file = "wrapt-1.17.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:30ce38e66630599e1193798285706903110d4f057aab3168a34b7fdc85569afc"}, + {file = "wrapt-1.17.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:65d1d00fbfb3ea5f20add88bbc0f815150dbbde3b026e6c24759466c8b5a9ef9"}, + {file = "wrapt-1.17.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a7c06742645f914f26c7f1fa47b8bc4c91d222f76ee20116c43d5ef0912bba2d"}, + {file = "wrapt-1.17.3-cp39-cp39-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:7e18f01b0c3e4a07fe6dfdb00e29049ba17eadbc5e7609a2a3a4af83ab7d710a"}, + {file = "wrapt-1.17.3-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0f5f51a6466667a5a356e6381d362d259125b57f059103dd9fdc8c0cf1d14139"}, + {file = "wrapt-1.17.3-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:59923aa12d0157f6b82d686c3fd8e1166fa8cdfb3e17b42ce3b6147ff81528df"}, + {file = "wrapt-1.17.3-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:46acc57b331e0b3bcb3e1ca3b421d65637915cfcd65eb783cb2f78a511193f9b"}, + {file = "wrapt-1.17.3-cp39-cp39-win32.whl", hash = "sha256:3e62d15d3cfa26e3d0788094de7b64efa75f3a53875cdbccdf78547aed547a81"}, + {file = "wrapt-1.17.3-cp39-cp39-win_amd64.whl", hash = "sha256:1f23fa283f51c890eda8e34e4937079114c74b4c81d2b2f1f1d94948f5cc3d7f"}, + {file = "wrapt-1.17.3-cp39-cp39-win_arm64.whl", hash = "sha256:24c2ed34dc222ed754247a2702b1e1e89fdbaa4016f324b4b8f1a802d4ffe87f"}, + {file = "wrapt-1.17.3-py3-none-any.whl", hash = "sha256:7171ae35d2c33d326ac19dd8facb1e82e5fd04ef8c6c0e394d7af55a55051c22"}, + {file = "wrapt-1.17.3.tar.gz", hash = "sha256:f66eb08feaa410fe4eebd17f2a2c8e2e46d3476e9f8c783daa8e09e0faa666d0"}, ] [[package]] diff --git a/pyproject.toml b/pyproject.toml index 4e8a56f3..972415f4 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -188,4 +188,4 @@ markers = [ "v4_1_73: mark test to run for version 4.1.73", "v5_2_6: mark test to run for version 
5.2.6", "v3_21_16: mark test to run for version 3.21.16" -] +] \ No newline at end of file diff --git a/src/conductor/asyncio_client/configuration/configuration.py b/src/conductor/asyncio_client/configuration/configuration.py index 8177242f..7094d1d7 100644 --- a/src/conductor/asyncio_client/configuration/configuration.py +++ b/src/conductor/asyncio_client/configuration/configuration.py @@ -1,5 +1,6 @@ from __future__ import annotations +import json import logging import os from typing import Any, Dict, Optional, Union @@ -76,6 +77,11 @@ def __init__( ssl_ca_cert: Optional[str] = None, retries: Optional[int] = None, ca_cert_data: Optional[Union[str, bytes]] = None, + cert_file: Optional[str] = None, + key_file: Optional[str] = None, + verify_ssl: Optional[bool] = None, + proxy: Optional[str] = None, + proxy_headers: Optional[Dict[str, str]] = None, **kwargs: Any, ): """ @@ -99,6 +105,14 @@ def __init__( Polling interval in seconds. If not provided, reads from CONDUCTOR_WORKER_POLL_INTERVAL_SECONDS env var. **kwargs : Any Additional parameters passed to HttpConfiguration. + + Environment Variables: + --------------------- + CONDUCTOR_SERVER_URL: Server URL (e.g., http://localhost:8080/api) + CONDUCTOR_AUTH_KEY: Authentication key ID + CONDUCTOR_AUTH_SECRET: Authentication key secret + CONDUCTOR_PROXY: Proxy URL for HTTP requests + CONDUCTOR_PROXY_HEADERS: Proxy headers as JSON string or single header value """ # Resolve server URL from parameter or environment variable @@ -141,27 +155,58 @@ def __init__( if self.__ui_host is None: self.__ui_host = self.server_url.replace("/api", "") + # Proxy configuration - can be set via parameter or environment variable + self.proxy = proxy or os.getenv("CONDUCTOR_PROXY") + # Proxy headers - can be set via parameter or environment variable + self.proxy_headers = proxy_headers + if not self.proxy_headers and os.getenv("CONDUCTOR_PROXY_HEADERS"): + try: + self.proxy_headers = json.loads(os.getenv("CONDUCTOR_PROXY_HEADERS")) + except (json.JSONDecodeError, TypeError): + # If JSON parsing fails, treat as a single header value + self.proxy_headers = { + "Authorization": os.getenv("CONDUCTOR_PROXY_HEADERS") + } + self.logger_format = "%(asctime)s %(name)-12s %(levelname)-8s %(message)s" # Create the underlying HTTP configuration - self._http_config = HttpConfiguration( - host=self.server_url, - api_key=api_key, - api_key_prefix=api_key_prefix, - username=username, - password=password, - access_token=access_token, - server_index=server_index, - server_variables=server_variables, - server_operation_index=server_operation_index, - server_operation_variables=server_operation_variables, - ignore_operation_servers=ignore_operation_servers, - ssl_ca_cert=ssl_ca_cert, - retries=retries, - ca_cert_data=ca_cert_data, - debug=debug, - **kwargs, - ) + http_config_kwargs = { + "host": self.server_url, + "api_key": api_key, + "api_key_prefix": api_key_prefix, + "username": username, + "password": password, + "access_token": access_token, + "server_index": server_index, + "server_variables": server_variables, + "server_operation_index": server_operation_index, + "server_operation_variables": server_operation_variables, + "ignore_operation_servers": ignore_operation_servers, + "ssl_ca_cert": ssl_ca_cert or os.getenv("CONDUCTOR_SSL_CA_CERT"), + "retries": retries, + "ca_cert_data": ca_cert_data or os.getenv("CONDUCTOR_SSL_CA_CERT_DATA"), + "debug": debug, + } + + # Add SSL parameters if they exist in HttpConfiguration + if cert_file or os.getenv("CONDUCTOR_CERT_FILE"): + 
http_config_kwargs["cert_file"] = cert_file or os.getenv("CONDUCTOR_CERT_FILE") + if key_file or os.getenv("CONDUCTOR_KEY_FILE"): + http_config_kwargs["key_file"] = key_file or os.getenv("CONDUCTOR_KEY_FILE") + if verify_ssl is not None: + http_config_kwargs["verify_ssl"] = verify_ssl + elif os.getenv("CONDUCTOR_VERIFY_SSL"): + http_config_kwargs["verify_ssl"] = self._get_env_bool("CONDUCTOR_VERIFY_SSL", True) + + http_config_kwargs.update(kwargs) + self._http_config = HttpConfiguration(**http_config_kwargs) + + # Set proxy configuration on the HTTP config + if self.proxy: + self._http_config.proxy = self.proxy + if self.proxy_headers: + self._http_config.proxy_headers = self.proxy_headers # Debug switch and logging setup self.__debug = debug @@ -203,6 +248,13 @@ def _get_env_int(self, env_var: str, default: int) -> int: self.logger.warning("Invalid float value for %s: %s", env_var, value) return default + def _get_env_bool(self, env_var: str, default: bool) -> bool: + """Get boolean value from environment variable with default fallback.""" + value = os.getenv(env_var) + if value is not None: + return value.lower() in ("true", "1") + return default + def get_worker_property_value( self, property_name: str, task_type: Optional[str] = None ) -> Optional[Any]: @@ -522,5 +574,7 @@ def ui_host(self): def __getattr__(self, name: str) -> Any: """Delegate attribute access to underlying HTTP configuration.""" if "_http_config" not in self.__dict__ or self._http_config is None: - raise AttributeError(f"'{self.__class__.__name__}' object has no attribute '{name}'") + raise AttributeError( + f"'{self.__class__.__name__}' object has no attribute '{name}'" + ) return getattr(self._http_config, name) diff --git a/src/conductor/client/adapters/rest_adapter.py b/src/conductor/client/adapters/rest_adapter.py index d491b332..d06ac214 100644 --- a/src/conductor/client/adapters/rest_adapter.py +++ b/src/conductor/client/adapters/rest_adapter.py @@ -1,9 +1,10 @@ import io import logging -from typing import Optional, Dict, Any, Union, Tuple +import ssl +from typing import Any, Dict, Optional, Tuple, Union import httpx -from httpx import Response, RequestError, HTTPStatusError, TimeoutException +from httpx import HTTPStatusError, RequestError, Response, TimeoutException from conductor.client.codegen.rest import ( ApiException, @@ -23,11 +24,13 @@ def __init__(self, response: Response): self.reason = response.reason_phrase self.resp = response self.headers = response.headers - + # Log HTTP protocol version - http_version = getattr(response, 'http_version', 'Unknown') - logger.debug(f"HTTP response received - Status: {self.status}, Protocol: {http_version}") - + http_version = getattr(response, "http_version", "Unknown") + logger.debug( + f"HTTP response received - Status: {self.status}, Protocol: {http_version}" + ) + # Log HTTP/2 usage if http_version == "HTTP/2": logger.info(f"HTTP/2 connection established - URL: {response.url}") @@ -53,12 +56,12 @@ def data(self) -> bytes: def text(self) -> str: """Get response data as text.""" return self.resp.text - + @property def http_version(self) -> str: """Get the HTTP protocol version used.""" - return getattr(self.resp, 'http_version', 'Unknown') - + return getattr(self.resp, "http_version", "Unknown") + def is_http2(self) -> bool: """Check if HTTP/2 was used for this response.""" return self.http_version == "HTTP/2" @@ -67,33 +70,78 @@ def is_http2(self) -> bool: class RESTClientObjectAdapter(RESTClientObject): """HTTP client adapter using httpx instead of 
requests.""" - def __init__(self, connection: Optional[httpx.Client] = None): - """Initialize the REST client with httpx.""" - # Don't call super().__init__() to avoid requests initialization - self.connection = connection or httpx.Client( - timeout=httpx.Timeout(300.0), - follow_redirects=True, - limits=httpx.Limits(max_keepalive_connections=20, max_connections=100), - http2=True # added explicit configuration - ) + def __init__(self, connection: Optional[httpx.Client] = None, configuration=None): + """ + Initialize the REST client with httpx. + + Args: + connection: Pre-configured httpx.Client instance. If provided, + proxy settings from configuration will be ignored. + configuration: Configuration object containing proxy settings. + Expected attributes: proxy (str), proxy_headers (dict) + """ + if connection is not None: + self.connection = connection + else: + client_kwargs = { + "timeout": httpx.Timeout(120.0), + "follow_redirects": True, + "limits": httpx.Limits( + max_keepalive_connections=20, max_connections=100 + ), + "http2": True + } + + if ( + configuration + and hasattr(configuration, "proxy") + and configuration.proxy + ): + client_kwargs["proxy"] = configuration.proxy + if ( + configuration + and hasattr(configuration, "proxy_headers") + and configuration.proxy_headers + ): + client_kwargs["proxy_headers"] = configuration.proxy_headers + + if configuration: + ssl_context = ssl.create_default_context( + cafile=configuration.ssl_ca_cert, + cadata=configuration.ca_cert_data, + ) + if configuration.cert_file: + ssl_context.load_cert_chain( + configuration.cert_file, keyfile=configuration.key_file + ) + + if not configuration.verify_ssl: + ssl_context.check_hostname = False + ssl_context.verify_mode = ssl.CERT_NONE + + client_kwargs["verify"] = ssl_context + + self.connection = httpx.Client(**client_kwargs) def close(self): """Close the HTTP client connection.""" if hasattr(self, "connection") and self.connection: self.connection.close() - + def check_http2_support(self, url: str) -> bool: """Check if the server supports HTTP/2 by making a test request.""" try: logger.info(f"Checking HTTP/2 support for: {url}") response = self.GET(url) is_http2 = response.is_http2() - + if is_http2: logger.info(f"✓ HTTP/2 supported by {url}") else: - logger.info(f"✗ HTTP/2 not supported by {url}, using {response.http_version}") - + logger.info( + f"✗ HTTP/2 not supported by {url}, using {response.http_version}" + ) + return is_http2 except Exception as e: logger.error(f"Failed to check HTTP/2 support for {url}: {e}") @@ -150,7 +198,7 @@ def request( try: # Log the request attempt logger.debug(f"Making HTTP request - Method: {method}, URL: {url}") - + # Prepare request parameters request_kwargs = { "method": method, diff --git a/src/conductor/client/configuration/configuration.py b/src/conductor/client/configuration/configuration.py index 7c873c91..c38dcfb1 100644 --- a/src/conductor/client/configuration/configuration.py +++ b/src/conductor/client/configuration/configuration.py @@ -1,9 +1,11 @@ from __future__ import annotations +import json + import logging import os import time -from typing import Optional +from typing import Optional, Dict, Union from conductor.shared.configuration.settings.authentication_settings import ( AuthenticationSettings, @@ -20,10 +22,36 @@ def __init__( authentication_settings: AuthenticationSettings = None, server_api_url: Optional[str] = None, auth_token_ttl_min: int = 45, + proxy: Optional[str] = None, + proxy_headers: Optional[Dict[str, str]] = None, 
polling_interval: Optional[float] = None, domain: Optional[str] = None, polling_interval_seconds: Optional[float] = None, + ssl_ca_cert: Optional[str] = None, + ca_cert_data: Optional[Union[str, bytes]] = None, + cert_file: Optional[str] = None, + key_file: Optional[str] = None, + verify_ssl: Optional[bool] = None, ): + """ + Initialize Conductor client configuration. + + Args: + base_url: Base URL of the Conductor server (will append /api) + debug: Enable debug logging + authentication_settings: Authentication configuration for Orkes + server_api_url: Full API URL (overrides base_url) + auth_token_ttl_min: Authentication token time-to-live in minutes + proxy: Proxy URL for HTTP requests (supports http, https, socks4, socks5) + proxy_headers: Headers to send with proxy requests (e.g., authentication) + + Environment Variables: + CONDUCTOR_SERVER_URL: Server URL (e.g., http://localhost:8080/api) + CONDUCTOR_AUTH_KEY: Authentication key ID + CONDUCTOR_AUTH_SECRET: Authentication key secret + CONDUCTOR_PROXY: Proxy URL for HTTP requests + CONDUCTOR_PROXY_HEADERS: Proxy headers as JSON string or single header value + """ if server_api_url is not None: self.host = server_api_url elif base_url is not None: @@ -60,18 +88,33 @@ def __init__( # SSL/TLS verification # Set this to false to skip verifying SSL certificate when calling API # from https server. - self.verify_ssl = True + if verify_ssl is not None: + self.verify_ssl = verify_ssl + else: + self.verify_ssl = self._get_env_bool("CONDUCTOR_VERIFY_SSL", True) # Set this to customize the certificate file to verify the peer. - self.ssl_ca_cert = None + self.ssl_ca_cert = ssl_ca_cert or os.getenv("CONDUCTOR_SSL_CA_CERT") + # Set this to verify the peer using PEM (str) or DER (bytes) certificate data. + self.ca_cert_data = ca_cert_data or os.getenv("CONDUCTOR_SSL_CA_CERT_DATA") # client certificate file - self.cert_file = None + self.cert_file = cert_file or os.getenv("CONDUCTOR_CERT_FILE") # client key file - self.key_file = None + self.key_file = key_file or os.getenv("CONDUCTOR_KEY_FILE") # Set this to True/False to enable/disable SSL hostname verification. 
self.assert_hostname = None - # Proxy URL - self.proxy = None + # Proxy configuration - can be set via parameter or environment variable + self.proxy = proxy or os.getenv("CONDUCTOR_PROXY") + # Proxy headers - can be set via parameter or environment variable + self.proxy_headers = proxy_headers + if not self.proxy_headers and os.getenv("CONDUCTOR_PROXY_HEADERS"): + try: + self.proxy_headers = json.loads(os.getenv("CONDUCTOR_PROXY_HEADERS")) + except (json.JSONDecodeError, TypeError): + # If JSON parsing fails, treat as a single header value + self.proxy_headers = { + "Authorization": os.getenv("CONDUCTOR_PROXY_HEADERS") + } # Safe chars for path_param self.safe_chars_for_path_param = "" @@ -185,6 +228,13 @@ def _get_env_float(self, env_var: str, default: float) -> float: pass return default + def _get_env_bool(self, env_var: str, default: bool) -> bool: + """Get boolean value from environment variable with default fallback.""" + value = os.getenv(env_var) + if value is not None: + return value.lower() in ("true", "1") + return default + def get_poll_interval_seconds(self): return self.polling_interval_seconds diff --git a/tests/integration/test_conductor_oss_workflow_integration.py b/tests/integration/test_conductor_oss_workflow_integration.py index cf1b99c2..4563100d 100644 --- a/tests/integration/test_conductor_oss_workflow_integration.py +++ b/tests/integration/test_conductor_oss_workflow_integration.py @@ -3,7 +3,6 @@ import uuid import pytest -import httpx from conductor.client.http.models.rerun_workflow_request import ( RerunWorkflowRequestAdapter as RerunWorkflowRequest, @@ -42,12 +41,6 @@ class TestConductorOssWorkflowIntegration: def configuration(self) -> Configuration: """Create configuration for Conductor OSS.""" config = Configuration() - config.http_connection = httpx.Client( - timeout=httpx.Timeout(600.0), - follow_redirects=True, - limits=httpx.Limits(max_keepalive_connections=1, max_connections=1), - http2=True - ) config.debug = os.getenv("CONDUCTOR_DEBUG", "false").lower() == "true" config.apply_logging_config() return config diff --git a/tests/integration/test_orkes_authorization_client_integration.py b/tests/integration/test_orkes_authorization_client_integration.py index 9be1e1dc..2a8f4ea8 100644 --- a/tests/integration/test_orkes_authorization_client_integration.py +++ b/tests/integration/test_orkes_authorization_client_integration.py @@ -2,7 +2,6 @@ import uuid import pytest -import httpx from conductor.client.http.models.create_or_update_application_request import \ CreateOrUpdateApplicationRequestAdapter as CreateOrUpdateApplicationRequest @@ -42,12 +41,6 @@ class TestOrkesAuthorizationClientIntegration: def configuration(self) -> Configuration: """Create configuration from environment variables.""" config = Configuration() - config.http_connection = httpx.Client( - timeout=httpx.Timeout(600.0), - follow_redirects=True, - limits=httpx.Limits(max_keepalive_connections=1, max_connections=1), - http2=True - ) config.debug = os.getenv("CONDUCTOR_DEBUG", "false").lower() == "true" config.apply_logging_config() return config diff --git a/tests/integration/test_orkes_integration_client_integration.py b/tests/integration/test_orkes_integration_client_integration.py index cfcb1929..ca7f83d5 100644 --- a/tests/integration/test_orkes_integration_client_integration.py +++ b/tests/integration/test_orkes_integration_client_integration.py @@ -3,7 +3,6 @@ import uuid import threading import time -import httpx from conductor.client.configuration.configuration import Configuration 
diff --git a/tests/integration/test_conductor_oss_workflow_integration.py b/tests/integration/test_conductor_oss_workflow_integration.py
index cf1b99c2..4563100d 100644
--- a/tests/integration/test_conductor_oss_workflow_integration.py
+++ b/tests/integration/test_conductor_oss_workflow_integration.py
@@ -3,7 +3,6 @@
 import uuid
 
 import pytest
-import httpx
 
 from conductor.client.http.models.rerun_workflow_request import (
     RerunWorkflowRequestAdapter as RerunWorkflowRequest,
@@ -42,12 +41,6 @@ class TestConductorOssWorkflowIntegration:
     def configuration(self) -> Configuration:
         """Create configuration for Conductor OSS."""
         config = Configuration()
-        config.http_connection = httpx.Client(
-            timeout=httpx.Timeout(600.0),
-            follow_redirects=True,
-            limits=httpx.Limits(max_keepalive_connections=1, max_connections=1),
-            http2=True
-        )
         config.debug = os.getenv("CONDUCTOR_DEBUG", "false").lower() == "true"
         config.apply_logging_config()
         return config
diff --git a/tests/integration/test_orkes_authorization_client_integration.py b/tests/integration/test_orkes_authorization_client_integration.py
index 9be1e1dc..2a8f4ea8 100644
--- a/tests/integration/test_orkes_authorization_client_integration.py
+++ b/tests/integration/test_orkes_authorization_client_integration.py
@@ -2,7 +2,6 @@
 import uuid
 
 import pytest
-import httpx
 
 from conductor.client.http.models.create_or_update_application_request import \
     CreateOrUpdateApplicationRequestAdapter as CreateOrUpdateApplicationRequest
@@ -42,12 +41,6 @@ class TestOrkesAuthorizationClientIntegration:
     def configuration(self) -> Configuration:
         """Create configuration from environment variables."""
         config = Configuration()
-        config.http_connection = httpx.Client(
-            timeout=httpx.Timeout(600.0),
-            follow_redirects=True,
-            limits=httpx.Limits(max_keepalive_connections=1, max_connections=1),
-            http2=True
-        )
         config.debug = os.getenv("CONDUCTOR_DEBUG", "false").lower() == "true"
         config.apply_logging_config()
         return config
diff --git a/tests/integration/test_orkes_integration_client_integration.py b/tests/integration/test_orkes_integration_client_integration.py
index cfcb1929..ca7f83d5 100644
--- a/tests/integration/test_orkes_integration_client_integration.py
+++ b/tests/integration/test_orkes_integration_client_integration.py
@@ -3,7 +3,6 @@
 import uuid
 import threading
 import time
-import httpx
 
 from conductor.client.configuration.configuration import Configuration
 from conductor.client.orkes.orkes_integration_client import OrkesIntegrationClient
@@ -33,12 +32,6 @@ class TestOrkesIntegrationClientIntegration:
     @pytest.fixture(scope="class")
     def configuration(self) -> Configuration:
         config = Configuration()
-        config.http_connection = httpx.Client(
-            timeout=httpx.Timeout(600.0),
-            follow_redirects=True,
-            limits=httpx.Limits(max_keepalive_connections=1, max_connections=1),
-            http2=True
-        )
         config.debug = os.getenv("CONDUCTOR_DEBUG", "false").lower() == "true"
         config.apply_logging_config()
         return config
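These fixtures now rely on the SDK's default HTTP client. If a test or application genuinely needs custom transport settings, the removed pattern can still be applied explicitly; a sketch reusing the deleted values (the timeout and connection limits are illustrative, and `http2=True` requires the optional `httpx[http2]` extra):

```python
import httpx

from conductor.client.configuration.configuration import Configuration

config = Configuration()
# Override the default transport with a hand-tuned httpx client.
config.http_connection = httpx.Client(
    timeout=httpx.Timeout(600.0),
    follow_redirects=True,
    limits=httpx.Limits(max_keepalive_connections=1, max_connections=1),
    http2=True,  # needs the optional h2 dependency
)
```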
diff --git a/tests/unit/asyncio_client/test_configuration.py b/tests/unit/asyncio_client/test_configuration.py
index db4f427a..a48f329e 100644
--- a/tests/unit/asyncio_client/test_configuration.py
+++ b/tests/unit/asyncio_client/test_configuration.py
@@ -50,7 +50,6 @@ def test_initialization_with_env_vars(monkeypatch):
     assert config.domain == "env_domain"
     assert config.polling_interval_seconds == 10
 
-
 def test_initialization_env_vars_override_params(monkeypatch):
     monkeypatch.setenv("CONDUCTOR_SERVER_URL", "https://env.com/api")
     monkeypatch.setenv("CONDUCTOR_AUTH_KEY", "env_key")
@@ -147,7 +146,6 @@ def test_get_worker_property_value_poll_interval_seconds():
     result = config.get_worker_property_value("poll_interval_seconds", "mytask")
     assert result == 0
 
-
 def test_convert_property_value_polling_interval():
     config = Configuration()
     result = config._convert_property_value("polling_interval", "250")
@@ -422,3 +420,16 @@ def test_get_poll_interval_task_type_provided_but_value_none():
     with patch.dict(os.environ, {"CONDUCTOR_WORKER_MYTASK_POLLING_INTERVAL": ""}):
         result = config.get_poll_interval("mytask")
         assert result == 100
+
+
+def test_proxy_from_parameter():
+    proxy_url = "http://proxy.company.com:8080"
+    config = Configuration(proxy=proxy_url)
+    assert config.proxy == proxy_url
+
+
+def test_proxy_from_env(monkeypatch):
+    proxy_url = "http://proxy.company.com:8080"
+    monkeypatch.setenv("CONDUCTOR_PROXY", proxy_url)
+    config = Configuration()
+    assert config.proxy == proxy_url
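The same proxy settings can come from the environment, which the new tests in both the asyncio and sync suites cover. A minimal sketch; the proxy URL and token are placeholders, and `CONDUCTOR_PROXY_HEADERS` should be a JSON object (a non-JSON value falls back to a single `Authorization` header):

```python
import json
import os

from conductor.client.configuration.configuration import Configuration

os.environ["CONDUCTOR_PROXY"] = "http://proxy.company.com:8080"
os.environ["CONDUCTOR_PROXY_HEADERS"] = json.dumps({"Authorization": "Bearer token123"})

config = Configuration()
assert config.proxy == "http://proxy.company.com:8080"
assert config.proxy_headers == {"Authorization": "Bearer token123"}
```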
diff --git a/tests/unit/configuration/test_configuration.py b/tests/unit/configuration/test_configuration.py
index ae710d85..3e95923c 100644
--- a/tests/unit/configuration/test_configuration.py
+++ b/tests/unit/configuration/test_configuration.py
@@ -1,4 +1,5 @@
 import base64
+import json
 
 from conductor.client.configuration.configuration import Configuration
 from conductor.client.http.api_client import ApiClient
@@ -21,9 +22,7 @@ def test_initialization_with_server_api_url():
 
 
 def test_initialization_with_basic_auth_server_api_url():
-    configuration = Configuration(
-        server_api_url="https://user:password@play.orkes.io/api"
-    )
+    configuration = Configuration(server_api_url="https://user:password@play.orkes.io/api")
     basic_auth = "user:password"
     expected_host = f"https://{basic_auth}@play.orkes.io/api"
     assert configuration.host == expected_host
@@ -33,3 +32,177 @@ def test_initialization_with_basic_auth_server_api_url():
         "Accept-Encoding": "gzip",
         "authorization": token,
     }
+
+
+def test_ssl_ca_cert_initialization():
+    configuration = Configuration(
+        base_url="https://internal.conductor.dev", ssl_ca_cert="/path/to/ca-cert.pem"
+    )
+    assert configuration.ssl_ca_cert == "/path/to/ca-cert.pem"
+    assert configuration.ca_cert_data is None
+    assert configuration.verify_ssl is True
+
+
+def test_ca_cert_data_initialization_with_string():
+    cert_data = "-----BEGIN CERTIFICATE-----\nMIIBIjANBgkqhkiG9w0B...\n-----END CERTIFICATE-----"
+    configuration = Configuration(base_url="https://example.com", ca_cert_data=cert_data)
+    assert configuration.ca_cert_data == cert_data
+    assert configuration.ssl_ca_cert is None
+
+
+def test_ca_cert_data_initialization_with_bytes():
+    cert_data = b"-----BEGIN CERTIFICATE-----\nMIIBIjANBgkqhkiG9w0B...\n-----END CERTIFICATE-----"
+    configuration = Configuration(base_url="https://internal.conductor.dev", ca_cert_data=cert_data)
+    assert configuration.ca_cert_data == cert_data
+    assert configuration.ssl_ca_cert is None
+
+
+def test_ssl_options_combined():
+    cert_data = "-----BEGIN CERTIFICATE-----\nMIIBIjANBgkqhkiG9w0B...\n-----END CERTIFICATE-----"
+    configuration = Configuration(
+        base_url="https://internal.conductor.dev",
+        ssl_ca_cert="/path/to/ca-cert.pem",
+        ca_cert_data=cert_data,
+    )
+    assert configuration.ssl_ca_cert == "/path/to/ca-cert.pem"
+    assert configuration.ca_cert_data == cert_data
+
+
+def test_ssl_defaults():
+    configuration = Configuration(base_url="https://internal.conductor.dev")
+    assert configuration.verify_ssl is True
+    assert configuration.ssl_ca_cert is None
+    assert configuration.ca_cert_data is None
+    assert configuration.cert_file is None
+    assert configuration.key_file is None
+    assert configuration.assert_hostname is None
+
+
+def test_cert_file_from_env(monkeypatch):
+    monkeypatch.setenv("CONDUCTOR_CERT_FILE", "/path/to/client-cert.pem")
+    configuration = Configuration(base_url="https://internal.conductor.dev")
+    assert configuration.cert_file == "/path/to/client-cert.pem"
+
+
+def test_key_file_from_env(monkeypatch):
+    monkeypatch.setenv("CONDUCTOR_KEY_FILE", "/path/to/client-key.pem")
+    configuration = Configuration(base_url="https://internal.conductor.dev")
+    assert configuration.key_file == "/path/to/client-key.pem"
+
+
+def test_verify_ssl_from_env_true(monkeypatch):
+    monkeypatch.setenv("CONDUCTOR_VERIFY_SSL", "true")
+    configuration = Configuration(base_url="https://internal.conductor.dev")
+    assert configuration.verify_ssl is True
+
+
+def test_verify_ssl_from_env_false(monkeypatch):
+    monkeypatch.setenv("CONDUCTOR_VERIFY_SSL", "false")
+    configuration = Configuration(base_url="https://internal.conductor.dev")
+    assert configuration.verify_ssl is False
+
+
+def test_ssl_ca_cert_data_from_env(monkeypatch):
+    cert_data = "-----BEGIN CERTIFICATE-----\nMIIBIjANBgkqhkiG9w0B...\n-----END CERTIFICATE-----"
+    monkeypatch.setenv("CONDUCTOR_SSL_CA_CERT_DATA", cert_data)
+    configuration = Configuration(base_url="https://internal.conductor.dev")
+    assert configuration.ca_cert_data == cert_data
+
+
+def test_ssl_ca_cert_from_env(monkeypatch):
+    monkeypatch.setenv("CONDUCTOR_SSL_CA_CERT", "/path/to/ca-cert.pem")
+    configuration = Configuration(base_url="https://internal.conductor.dev")
+    assert configuration.ssl_ca_cert == "/path/to/ca-cert.pem"
+
+
+def test_proxy_headers_from_parameter():
+    proxy_headers = {"Authorization": "Bearer token123", "X-Custom": "value"}
+    configuration = Configuration(proxy_headers=proxy_headers)
+    assert configuration.proxy_headers == proxy_headers
+
+
+def test_proxy_headers_from_env_valid_json(monkeypatch):
+    proxy_headers_json = '{"Authorization": "Bearer token123", "X-Custom": "value"}'
+    monkeypatch.setenv("CONDUCTOR_PROXY_HEADERS", proxy_headers_json)
+    configuration = Configuration()
+    expected_headers = {"Authorization": "Bearer token123", "X-Custom": "value"}
+    assert configuration.proxy_headers == expected_headers
+
+
+def test_proxy_headers_from_env_invalid_json_fallback(monkeypatch):
+    invalid_json = "invalid-json-string"
+    monkeypatch.setenv("CONDUCTOR_PROXY_HEADERS", invalid_json)
+    configuration = Configuration()
+    expected_headers = {"Authorization": "invalid-json-string"}
+    assert configuration.proxy_headers == expected_headers
+
+
+def test_proxy_headers_from_env_none_value_fallback(monkeypatch):
+    monkeypatch.setenv("CONDUCTOR_PROXY_HEADERS", "None")
+    configuration = Configuration()
+    expected_headers = {"Authorization": "None"}
+    assert configuration.proxy_headers == expected_headers
+
+
+def test_proxy_headers_from_env_empty_string_no_processing(monkeypatch):
+    monkeypatch.setenv("CONDUCTOR_PROXY_HEADERS", "")
+    configuration = Configuration()
+    assert configuration.proxy_headers is None
+
+
+def test_proxy_headers_from_env_malformed_json_fallback(monkeypatch):
+    malformed_json = '{"Authorization": "Bearer token", "X-Custom":}'
+    monkeypatch.setenv("CONDUCTOR_PROXY_HEADERS", malformed_json)
+    configuration = Configuration()
+    expected_headers = {"Authorization": malformed_json}
+    assert configuration.proxy_headers == expected_headers
+
+
+def test_proxy_headers_no_env_var():
+    configuration = Configuration()
+    assert configuration.proxy_headers is None
+
+
+def test_proxy_headers_parameter_overrides_env(monkeypatch):
+    proxy_headers_param = {"Authorization": "Bearer param-token"}
+    proxy_headers_env = '{"Authorization": "Bearer env-token"}'
+    monkeypatch.setenv("CONDUCTOR_PROXY_HEADERS", proxy_headers_env)
+    configuration = Configuration(proxy_headers=proxy_headers_param)
+    assert configuration.proxy_headers == proxy_headers_param
+
+
+def test_proxy_headers_complex_json(monkeypatch):
+    complex_headers = {
+        "Authorization": "Bearer token123",
+        "X-API-Key": "api-key-456",
+        "X-Custom-Header": "custom-value",
+        "User-Agent": "ConductorClient/1.0",
+    }
+    proxy_headers_json = json.dumps(complex_headers)
+    monkeypatch.setenv("CONDUCTOR_PROXY_HEADERS", proxy_headers_json)
+    configuration = Configuration()
+    assert configuration.proxy_headers == complex_headers
+
+
+def test_proxy_headers_json_with_special_chars(monkeypatch):
+    special_headers = {
+        "Authorization": "Bearer token with spaces and special chars!@#$%",
+        "X-Header": "value with \"quotes\" and 'apostrophes'",
+    }
+    proxy_headers_json = json.dumps(special_headers)
+    monkeypatch.setenv("CONDUCTOR_PROXY_HEADERS", proxy_headers_json)
+    configuration = Configuration()
+    assert configuration.proxy_headers == special_headers
+
+
+def test_proxy_from_parameter():
+    proxy_url = "http://proxy.company.com:8080"
+    configuration = Configuration(proxy=proxy_url)
+    assert configuration.proxy == proxy_url
+
+
+def test_proxy_from_env(monkeypatch):
+    proxy_url = "http://proxy.company.com:8080"
+    monkeypatch.setenv("CONDUCTOR_PROXY", proxy_url)
+    configuration = Configuration()
+    assert configuration.proxy == proxy_url
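Taken together, the TLS options exercised above look roughly like this in practice. A sketch grounded in the tests; every path and the PEM body are placeholders:

```python
import os

from conductor.client.configuration.configuration import Configuration

# CA bundle from a file path, passed straight to the constructor.
config = Configuration(
    base_url="https://internal.conductor.dev",
    ssl_ca_cert="/path/to/ca-cert.pem",
)

# Or supply the CA certificate inline, as a str or bytes PEM blob.
pem = "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----"
config = Configuration(base_url="https://internal.conductor.dev", ca_cert_data=pem)

# Client certificates and verification behavior come from the environment.
os.environ["CONDUCTOR_CERT_FILE"] = "/path/to/client-cert.pem"
os.environ["CONDUCTOR_KEY_FILE"] = "/path/to/client-key.pem"
os.environ["CONDUCTOR_VERIFY_SSL"] = "false"  # only "true"/"1" count as truthy
config = Configuration(base_url="https://internal.conductor.dev")
assert config.verify_ssl is False
```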