
Conversation

@guidobrei (Member) commented Sep 17, 2025

This PR

  • fixes a potential deadlock in FlagSyncService
  • fixes synchronization issues between getMetadata and syncFlags
  • replaces the gRPC retry policy (used to prevent a busy loop on connection errors) with a simple sleep

Related Issues

Fixes #1583

Notes

We've identified two related issues in the current implementation:

  1. It can wait forever for a message on the blocking queue: incomingQueue.take() blocks until some message is put into the queue, which may never happen.
  2. If getMetadata fails, we want to cancel the stream. However, this never worked properly because the two parts listened on different cancellation tokens.

The new implementation eliminates problem 1) by removing the incomingQueue entirely and mitigates 2) by sharing a cancellation token.
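As a rough illustration of both fixes, here is a minimal sketch of the new flow; the helper names (getMetadata, syncFlags, the observer's done field) are assumed for illustration and are not the exact production code:

// Sketch: one cancellable gRPC context (io.grpc.Context) is shared by the metadata call
// and the flag stream, and an awaitable "done" signal replaces the blocking incomingQueue.take().
Context.CancellableContext sharedContext = Context.current().withCancellation();
SyncStreamObserver streamObserver = new SyncStreamObserver();
try {
    sharedContext.run(() -> {
        GetMetadataResponse metadata = getMetadata(); // hypothetical helper; may fail
        syncFlags(streamObserver, metadata);          // hypothetical helper; starts the async stream
    });
    streamObserver.done.await();  // woken by onError/onCompleted instead of blocking on a queue
} finally {
    sharedContext.cancel(null);   // cancelling the shared token tears down both RPCs together
}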

Follow-up Tasks

Still open is the removal of the deprecated getMetadata call (open-feature/flagd#1584).

How to test

The flagd testbed covers FlagSyncService.

@gemini-code-assist (bot) left a comment

Summary of Changes

Hello @guidobrei, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly refactors the FlagSyncService to enhance the stability and correctness of stream observation. The primary goal is to prevent a potential deadlock scenario and improve the synchronization of stream cancellation. By removing a blocking queue and implementing a unified cancellation context, the service now handles metadata retrieval and flag synchronization more robustly, ensuring that stream failures or metadata issues lead to proper stream restarts.

Highlights

  • Deadlock Prevention: Eliminated a potential deadlock in FlagSyncService by removing the incomingQueue and its blocking take() operation.
  • Stream Synchronization: Resolved synchronization issues between getMetadata and syncFlags by sharing a single Context.CancellableContext, ensuring consistent cancellation behavior.
  • Refactored Stream Observation: Introduced a dedicated SyncStreamObserver class to manage gRPC stream events directly, simplifying the stream handling logic.

@gemini-code-assist (bot) left a comment

Code Review

This pull request effectively addresses a potential deadlock in FlagSyncService by removing the blocking incomingQueue.take() call and refactors the stream handling logic to be more robust. The fix for the synchronization issue between getMetadata and syncFlags by sharing a CancellableContext is also a good improvement. My review includes a few suggestions to further improve the new implementation, mainly around error handling and thread interruption best practices. Specifically, I've pointed out an opportunity to restore the error message in the error payload, adjust a logging level, and properly handle InterruptedException.

@guidobrei (Member, Author)

/gemini summary
/gemini review

@gemini-code-assist (bot)

Hello guidobrei,

Thank you for submitting this pull request and for requesting a review. I've analyzed the changes and here's a summary and a detailed review.

Pull Request Summary

This pull request, titled "fix(flagd): Fix stream observation in FlagSyncService", addresses two critical issues within the FlagSyncService: a potential deadlock and synchronization problems between getMetadata and syncFlags. The core of the solution involves a significant refactoring of the stream observation mechanism. The incomingQueue (a LinkedBlockingQueue) has been removed, eliminating a source of potential blocking. Instead, a new SyncStreamObserver class is introduced, which directly handles stream events (onNext, onError, onCompleted) and uses an Awaitable for synchronization. The getMetadata logic is now encapsulated in a dedicated method, and its interaction with the stream's lifecycle is improved to ensure proper cancellation. These changes aim to enhance the robustness and reliability of the flag synchronization process.
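For readers unfamiliar with the pattern, here is a compressed sketch of the observer described above; the Awaitable helper and the field/accessor names follow this summary and may not match the merged code exactly:

// Sketch only: Awaitable (await()/wakeup()) is the project's helper, StreamObserver comes from
// io.grpc.stub, and the protobuf accessors follow the flagd sync schema described above.
static class SyncStreamObserver implements StreamObserver<SyncFlagsResponse> {
    final Awaitable done = new Awaitable();
    private final Struct fallbackSyncContext; // metadata retrieved earlier via getMetadata

    SyncStreamObserver(Struct fallbackSyncContext) {
        this.fallbackSyncContext = fallbackSyncContext;
    }

    @Override
    public void onNext(SyncFlagsResponse response) {
        // Prefer the per-message sync context, falling back to the earlier metadata.
        Struct syncContext = response.hasSyncContext() ? response.getSyncContext() : fallbackSyncContext;
        // ...enqueue the flag payload plus syncContext for the storage layer...
    }

    @Override
    public void onError(Throwable throwable) {
        // Log the failure, enqueue an error event, and unblock observeSyncStream so it can restart.
        done.wakeup();
    }

    @Override
    public void onCompleted() {
        done.wakeup(); // stream ended normally; the outer loop decides whether to reconnect
    }
}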

Detailed Review

I've reviewed the changes based on correctness, efficiency, and maintainability, adhering to the Google Java Style Guide for any stylistic suggestions.

Overall Feedback

This is a well-structured and impactful change. The refactoring effectively addresses the identified issues, leading to a more robust and maintainable stream observation mechanism. The separation of concerns into getMetadata and the SyncStreamObserver is a significant improvement.

Review Comments

File: providers/flagd/src/main/java/dev/openfeature/contrib/providers/flagd/resolver/process/storage/connector/sync/SyncStreamQueueSource.java

  • Correctness & Maintainability (Imports and Field Removal - Diff 1, 2, 3)

    • The removal of QueueingStreamObserver, StreamResponseModel, Context, CancellableContext, and the incomingQueue field is a direct and effective way to resolve the identified deadlock and synchronization issues. The new Awaitable and StreamObserver imports are appropriate for the revised asynchronous pattern.
  • Correctness & Maintainability (Refactor of observeSyncStream and new methods - Diff 4)

    • The restructuring of observeSyncStream is excellent. It clearly separates the concerns of metadata retrieval and flag synchronization, making the logic easier to follow and reason about.
    • The new getMetadata() method correctly handles the syncMetadataDisabled case and applies the deadline. Using GetMetadataRequest.getDefaultInstance() is appropriate here.
    • The syncFlags(observer) call followed by streamObserver.done.await() provides a clear and effective synchronization point, replacing the previous blocking queue mechanism.
  • Correctness & Maintainability (New SyncStreamObserver class - Diff 5)

    • The introduction of SyncStreamObserver as a nested static class is a strong design choice. It encapsulates the stream handling logic, improving modularity and readability.
    • The use of Awaitable for signaling completion is a clean and efficient way to manage the lifecycle of the stream observation.
    • The onNext method correctly prioritizes syncFlagsResponse.hasSyncContext() for syncContext, falling back to the metadata retrieved earlier. This ensures the most relevant context is used.
    • The onError and onCompleted methods properly log events and ensure done.wakeup() is called, preventing indefinite blocking in the main observeSyncStream loop.

File: providers/flagd/src/test/java/dev/openfeature/contrib/providers/flagd/resolver/process/storage/connector/sync/SyncStreamQueueSourceTest.java

  • Correctness (Test Updates - Diff 1, 2, 3)
    • The test file has been correctly updated to reflect the changes in the main class, specifically regarding the removal of QueueingStreamObserver and the adoption of StreamObserver. This indicates that the tests are being maintained alongside the production code, which is crucial for ensuring correctness.

Conclusion

The changes in this pull request are well-thought-out and effectively address the described problems. The refactoring improves the code's correctness, efficiency, and maintainability. I recommend merging this PR once any minor suggestions are addressed.

@toddbaert (Member)

This looks like a good simplification. I will thoroughly review on Monday 🙏

- localStub.syncFlags(syncRequest.build(), new QueueingStreamObserver<SyncFlagsResponse>(incomingQueue));
+ localStub.syncFlags(syncRequest.build(), streamObserver);

streamObserver.done.await();
Contributor

Do we also have to wake up done when the shutdown method is called?

Member

No, because when we shut down, we call shutdownNow on the underlying channel; when this happens, the onError handler runs immediately and wakes it up.

Member Author

Exactly, that was the intention.
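In other words, the shutdown path relies on the channel teardown to unblock the waiter; roughly (illustrative names):

public void shutdown() {
    channel.shutdownNow(); // aborts the in-flight syncFlags call
    // -> the stream's onError fires immediately and calls done.wakeup(),
    //    so observeSyncStream's done.await() returns without an explicit wakeup here.
}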

@toddbaert (Member)

@guidobrei I pushed a spotless fix. Doing some manual testing now.

@toddbaert (Member) commented Sep 23, 2025

@guidobrei There are some e2e tests around context enrichment that have been consistently failing since your change:

Error:  Errors: 
Error:    RunInProcessTest.Resolve value independent of context » General Initialization timeout exceeded; did not complete within the 2000 ms deadline.
Error:  dev.openfeature.contrib.providers.flagd.e2e.RunInProcessTest.Use enriched context
[INFO]   Run 1: PASS
Error:    Run 2: RunInProcessTest.Use enriched context » General Initialization timeout exceeded; did not complete within the 2000 ms deadline.
[INFO] 
Error:  dev.openfeature.contrib.providers.flagd.e2e.RunInProcessTest.Use enriched context on connection error for IN-PROCESS
[INFO]   Run 1: PASS
Error:    Run 2: RunInProcessTest.Use enriched context on connection error for IN-PROCESS » General Initialization timeout exceeded; did not complete within the 2000 ms deadline.

Other than this, things look great. I did some manual testing as well and reconnect, events, etc works fine!

EDIT: I think I found the reason here.

Comment on lines 120 to 121
enqueueError(String.format("Error in getMetadata request: %s", message));
continue;
@toddbaert (Member) commented Sep 23, 2025

99% sure this is causing our test failure. @aepfli introduced this option in flagd to help us test the migration from the old getMetadata call to the new syncContext payload - in some e2e tests, flagd runs with the old getMetadata disabled... which is causing an error here immediately, even though we have the new syncContext.

We need to "save" this exception state, and only throw if we DON'T get the new alternative syncContext.

Does that make sense?

@aepfli please verify what I said 🙏

Member Author

Either this or we check for gRPC Status code 12 UNIMPLEMENTED, or 5 NOT_FOUND.

Relying on the status code would be the easier implementation.
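A minimal sketch of that status-code fallback (illustrative, not the final code), assuming the metadata call surfaces an io.grpc.StatusRuntimeException:

GetMetadataResponse metadata = null;
try {
    metadata = getMetadata(); // hypothetical helper
} catch (StatusRuntimeException e) {
    Status.Code code = e.getStatus().getCode();
    if (code != Status.Code.UNIMPLEMENTED && code != Status.Code.NOT_FOUND) {
        throw e; // a real failure: restart the stream as before
    }
    // getMetadata is disabled/unsupported on the flagd side; rely on the
    // syncContext delivered via the syncFlags stream instead of failing.
}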

@aepfli (Member) commented Sep 23, 2025

I think checking the error code is also a good idea; based on https://github.com/open-feature/flagd/blob/fcd19b310b319c9837b41c19d6f082ac750cb81d/flagd/pkg/service/flag-sync/handler.go#L111 it is UNIMPLEMENTED.

Member

You can get UNIMPLEMENTED for other cases, like missing implementations - so I don't think that will work.

Member Author

But isn't this what we want to have? If the implementation is missing or disabled, we use whatever we get via the syncFlags() stream.

Member Author

Added an example of how this could look in this commit.

Member

Yes I think you're right! lol sorry, I confused myself.

Member

@guidobrei The test is still failing, but if I make the catch block more general, it works. I think either the status code can't be checked the way you're checking it, or the exception is not of the expected type.

Member Author

Today I learned: 0d6c840
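One way to make such a check robust when the thrown type is not a plain StatusRuntimeException (for example, a wrapped cause) is io.grpc.Status.fromThrowable. A hypothetical sketch, not necessarily what 0d6c840 does:

Status.Code code = Status.fromThrowable(metaEx).getCode(); // walks the cause chain for a gRPC status
boolean metadataUnsupported =
        code == Status.Code.UNIMPLEMENTED || code == Status.Code.NOT_FOUND;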

@guidobrei guidobrei force-pushed the fix/stream-queue-source-race-condition branch from ee843fc to f378e87 Compare September 24, 2025 19:26
@SuppressWarnings({"unchecked", "rawtypes"})
private static final Map<String, ?> DEFAULT_RETRY_POLICY = new HashMap() {
{
// 1 + 2 + 4
@chrfwow (Contributor) commented Sep 25, 2025

What does that comment mean? The number 7 does not show up in the values below

));
put("retryPolicy", new HashMap(DEFAULT_RETRY_POLICY) {
{
// 1 + 2 + 4 + 5 + 5 + 5 + 5 + 5 + 5 + 5
@chrfwow (Contributor) commented Sep 25, 2025

What does that comment mean? The result does not show up in the values below

* deadline, and definitionally should not result in a tight loop
* (it's a timeout).
*/
Code.CANCELLED.toString(),
@chrfwow (Contributor) commented Sep 25, 2025

Why do we have so many more status codes here than in the other policy?

Member Author

We use the gRPC retry policy in syncFlags so that we don't end up in a tight reconnect loop that tries to reconnect instantly.
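For context, a gRPC retry policy is supplied as a service-config map, and the "1 + 2 + 4 (+ 5 ...)" comments appear to list the successive backoff intervals in seconds given the initial backoff, multiplier, and cap. A sketch with purely illustrative values (not the PR's actual numbers):

Map<String, Object> retryPolicy = new HashMap<>();
retryPolicy.put("maxAttempts", 4.0);                 // initial call + 3 retries
retryPolicy.put("initialBackoff", "1s");             // backoffs of roughly 1s, 2s, 4s => "1 + 2 + 4"
retryPolicy.put("backoffMultiplier", 2.0);
retryPolicy.put("maxBackoff", "5s");                 // later attempts are capped at 5s each
retryPolicy.put("retryableStatusCodes", Arrays.asList("UNAVAILABLE"));

Map<String, Object> methodConfig = new HashMap<>();
methodConfig.put("name", Arrays.asList(new HashMap<String, Object>())); // apply to all methods
methodConfig.put("retryPolicy", retryPolicy);

Map<String, Object> serviceConfig = new HashMap<>();
serviceConfig.put("methodConfig", Arrays.asList(methodConfig));
// channelBuilder.defaultServiceConfig(serviceConfig).enableRetry();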

@guidobrei (Member, Author) commented Oct 3, 2025

We've done some manual testing with @aepfli and @chrfwow. The retry policy seems to be broken when connecting via Envoy and Envoy loses the connection to flagd.

Warning

In this situation we end up in a tight retry loop. This needs to be fixed.

  • Fix retry policies

@toddbaert (Member)

We've done some manual testing with @aepfli and @chrfwow. The retry policy seems to be broken when connecting via Envoy and Envoy loses the connection to flagd.

Warning

In this situation we end up in a tight retry loop. This needs to be fixed.

  • Fix retry policies

It must be a result of the retry policy changes, right? Should we just avoid changes there for now?

I'm not super happy with it, but I think it's distinct from the deadlock bug you've found.

@guidobrei (Member, Author)

It must be a result of the retry policy changes, right? Should we just avoid changes there for now?

If we don't change the retry policy, we can't check for UNIMPLEMENTED, because we would retry and eventually fail with DEADLINE_EXCEEDED.

guidobrei and others added 19 commits October 14, 2025 09:06
… added Stream cancellation

Signed-off-by: Guido Breitenhuber <guido.breitenhuber@dynatrace.com>
Signed-off-by: Guido Breitenhuber <guido.breitenhuber@dynatrace.com>
Signed-off-by: Guido Breitenhuber <guido.breitenhuber@dynatrace.com>
Signed-off-by: Guido Breitenhuber <guido.breitenhuber@dynatrace.com>
Signed-off-by: Guido Breitenhuber <guido.breitenhuber@dynatrace.com>
Signed-off-by: Guido Breitenhuber <guido.breitenhuber@dynatrace.com>
Signed-off-by: Guido Breitenhuber <guido.breitenhuber@dynatrace.com>
Signed-off-by: Guido Breitenhuber <guido.breitenhuber@dynatrace.com>
Signed-off-by: Guido Breitenhuber <guido.breitenhuber@dynatrace.com>
Signed-off-by: Todd Baert <todd.baert@dynatrace.com>
Co-authored-by: Todd Baert <todd.baert@dynatrace.com>
Signed-off-by: Guido Breitenhuber <guido.breitenhuber@dynatrace.com>
Signed-off-by: Guido Breitenhuber <guido.breitenhuber@dynatrace.com>
Signed-off-by: Guido Breitenhuber <guido.breitenhuber@dynatrace.com>
Signed-off-by: Guido Breitenhuber <guido.breitenhuber@dynatrace.com>
Signed-off-by: Guido Breitenhuber <guido.breitenhuber@dynatrace.com>
Signed-off-by: Guido Breitenhuber <guido.breitenhuber@dynatrace.com>
Signed-off-by: Guido Breitenhuber <guido.breitenhuber@dynatrace.com>
Signed-off-by: Guido Breitenhuber <guido.breitenhuber@dynatrace.com>
Signed-off-by: Todd Baert <todd.baert@dynatrace.com>
@toddbaert toddbaert force-pushed the fix/stream-queue-source-race-condition branch from 5a1ded7 to 95914e6 Compare October 14, 2025 13:06
*/
@Builder.Default
private int retryBackoffMaxMs =
fallBackToEnvOrDefault(Config.FLAGD_RETRY_BACKOFF_MAX_MS_VAR_NAME, Config.DEFAULT_MAX_RETRY_BACKOFF);
Member Author

We're mixing ms and s here.

Is static final int DEFAULT_MAX_RETRY_BACKOFF = 5; a time value or a count?

Member

It's a time. I will make the default's name clearer. It should also be 12000, as per https://flagd.dev/providers/rust/?h=flagd_retry_backoff_max_ms#configuration-options. Nice catch; this was leftover from my initial version, before I realized we had an unimplemented var for this.
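A hypothetical sketch of that rename, making the millisecond unit explicit (the 12000 value comes from the linked configuration docs; the actual constant name may differ):

// Hypothetical constant name and value.
static final int DEFAULT_RETRY_BACKOFF_MAX_MS = 12_000;

@Builder.Default
private int retryBackoffMaxMs =
        fallBackToEnvOrDefault(Config.FLAGD_RETRY_BACKOFF_MAX_MS_VAR_NAME, Config.DEFAULT_RETRY_BACKOFF_MAX_MS);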

@toddbaert toddbaert force-pushed the fix/stream-queue-source-race-condition branch from 132801f to 455b362 Compare October 14, 2025 13:48
@toddbaert toddbaert merged commit 791f38c into open-feature:main Oct 14, 2025
5 checks passed
@toddbaert toddbaert deleted the fix/stream-queue-source-race-condition branch October 14, 2025 14:56
String message = metaEx.getMessage();
log.debug("Metadata request error: {}, will restart", message, metaEx);
enqueueError(String.format("Error in getMetadata request: %s", message));
Thread.sleep(this.maxBackoffMs);
Member

This sleep helps prevent tight loops during retries, which can be a problem if an upstream connection in a proxy is down.
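A small sketch of how that backoff sleep might handle interruption, per the earlier review note about InterruptedException (field names are illustrative):

try {
    Thread.sleep(this.maxBackoffMs);    // simple backoff instead of a gRPC retry policy
} catch (InterruptedException ie) {
    Thread.currentThread().interrupt(); // restore the interrupt flag so shutdown is honored
    return;                             // stop retrying when the provider is shutting down
}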



Development

Successfully merging this pull request may close these issues.

[flagd] Reconnect Functionality Fails Under Certain Conditions, Transitioning System to Error State

8 participants