Use new IDL package for query consistency level changes #6791


Merged

Conversation


@c-warren c-warren commented Apr 7, 2025

What changed?

cadence-idl has been updated to support specifying QueryConsistencyLevel on the GetWorkflowExecutionHistory and DescribeWorkflowExecution RPCs. This PR pulls in those changes and adds support for the new fields to the thrift and proto mapper functions.

Why?

This finishes piping data from a request through to the implementation, so that a QueryConsistencyLevel can be specified in the frontend. Client updates are still required before consumers can use the new fields.

How did you test it?

Updated unit tests to validate the mapper functions.

To sanity check with integration tests:

  1. Updated the docker-compose-multiclusters setup to create the Cassandra schema for each cluster before the service nodes start
  2. Updated the cadence-cli to support the new consistency parameter
  3. Ran shell scripts against each cluster (active, passive) that checked:
    • passing strong consistency
    • passing nothing
    • passing normal consistency
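The test matrix above can be sketched as a small shell harness. This is a hypothetical illustration only: `run_check` stands in for the real cadence CLI invocation, and the flag-passing detail is assumed, not taken from the PR.

```shell
#!/bin/bash
# Hypothetical sketch of the per-cluster sanity checks described above.
# `run_check` is a stand-in for the real cadence CLI call; only the
# cluster names and consistency cases mirror the actual test matrix.
run_check() {
  local cluster="$1" consistency="$2"
  # The real script would invoke the CLI against the given cluster here;
  # this sketch just records which case was exercised.
  echo "checked cluster=${cluster} consistency=${consistency:-default}"
}

for cluster in active passive; do
  run_check "${cluster}" strong   # passing strong consistency
  run_check "${cluster}" ""       # passing nothing (server default)
  run_check "${cluster}" normal   # passing normal consistency
done
```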

Potential risks

Serialization failures on inbound DescribeWorkflowExecution or GetWorkflowExecutionHistory requests.

Release notes

cadence-frontend now supports passing a QueryConsistencyLevel to the DescribeWorkflowExecution and GetWorkflowExecutionHistory RPCs.

Documentation Changes

N/A

@@ -11,6 +11,26 @@ services:
interval: 15s
timeout: 30s
retries: 10
# Setup schemas for primary and secondary clusters
cadence-schema-setup:
image: ubercadence/server:cdnc-12748
Member

do we have this image uploaded?

Contributor Author

Damn, forgot to change this one back. Will update!

@@ -0,0 +1,102 @@
#!/bin/bash

set -x
Member

nit: `set -e` is usually worthwhile too. I realise this might be copied from somewhere else, but the lack of that flag is usually confusing.
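As a quick self-contained illustration of the reviewer's point (not part of the PR): without `set -e` a failing command is silently ignored, while with it the script aborts at the first failure.

```shell
#!/bin/bash
# Contrast the two behaviors by running each variant in a child shell.
# Without -e: `false` fails but the script keeps going and echoes.
without_e=$(bash -c 'false; echo "kept going"')
# With -e: the child shell exits at `false`, so nothing is echoed.
with_e=$(bash -c 'set -e; false; echo "kept going"' || true)

echo "without set -e: '${without_e}'"   # 'kept going'
echo "with set -e:    '${with_e}'"      # '' (aborted before the echo)
```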

timeout=300 # 5 minutes timeout
counter=0

while [ $counter -lt $timeout ]; do
Member

nit: generally suggest preferring [[
https://google.github.io/styleguide/shellguide.html#s6.3-tests

doesn't matter overly much though
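A sketch of the wait loop rewritten with `[[ ... ]]` as suggested. The timeout is shortened and the service probe replaced by a bare counter purely for illustration; only the loop shape comes from the diff above.

```shell
#!/bin/bash
set -e

timeout=5   # the real script uses 300 (a 5 minute timeout)
counter=0

# [[ ... ]] is the bash-native conditional: no word splitting on unquoted
# variables and saner operators than the POSIX [ builtin.
while [[ ${counter} -lt ${timeout} ]]; do
  # The real script would probe the service here and break on success.
  counter=$((counter + 1))
done

echo "gave up after ${counter} iterations"
```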

Member

@davidporter-id-au davidporter-id-au left a comment

+1 to Neil's comment, otherwise pretty much lgtm

@@ -11,6 +11,26 @@ services:
interval: 15s
timeout: 30s
retries: 10
# Setup schemas for primary and secondary clusters
Member

Thanks for fixing this specific multicluster compose setup. We also have a new replication simulation to test various scenarios in a 2-cluster environment. It would be great to enhance it to validate the new strong-consistency APIs. Ref: #7007

@c-warren c-warren merged commit e1267de into cadence-workflow:master Jul 24, 2025
25 checks passed
@c-warren c-warren deleted the CDNC-12748/updateconversionfunctions branch July 26, 2025 00:11

4 participants