Deadlock with OmmConsumer #218
Comments
Update: I increased the batch size to 5000, which means our item list gets through in 3 batched requests. I am still running into the same issue, though not every single time.
@charanyarajagopalan, Thank you for reporting this issue. We will investigate it and fix it in future releases.
Update: Switched to the USER_DISPATCH operational model instead of API_DISPATCH. This prevents message dispatch from being triggered while the item streams are being created, and the deadlock no longer happens.
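For anyone hitting the same problem, here is a minimal sketch of what the USER_DISPATCH setup looks like (the host, user name, and 60-second run window are placeholder assumptions, not part of the original report):

```java
import com.refinitiv.ema.access.EmaFactory;
import com.refinitiv.ema.access.OmmConsumer;
import com.refinitiv.ema.access.OmmConsumerConfig;

public class UserDispatchConsumer {
    public static void main(String[] args) {
        OmmConsumer consumer = null;
        try {
            // USER_DISPATCH: callbacks are delivered only when the application
            // calls dispatch(), never from an internal API thread.
            consumer = EmaFactory.createOmmConsumer(
                EmaFactory.createOmmConsumerConfig()
                    .host("localhost:14002")   // placeholder
                    .username("user")          // placeholder
                    .operationModel(OmmConsumerConfig.OperationModel.USER_DISPATCH));

            // ... open the item streams here with consumer.registerClient(...) ...

            // The application thread now drives message delivery, so nothing is
            // dispatched while the request-opening loop above is still running.
            long end = System.currentTimeMillis() + 60_000;
            while (System.currentTimeMillis() < end) {
                consumer.dispatch(1000); // wait up to 1s for queued events
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (consumer != null) consumer.uninitialize();
        }
    }
}
```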
@charanyarajagopalan Thank you for the additional details!
@L-Karchevska thanks, https://github.com/charanyarajagopalan/issue-218-repro
Hello, I can replicate the same kind of deadlock with EMA Java 3.6.7 L2 by modifying the Consumer ex100_MP_Streaming example to subscribe to 10K invalid RICs as follows:
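A rough sketch of the modification (the host, service name, and INVALID_RIC naming are illustrative placeholders, not the exact code used; the rest follows the shape of ex100_MP_Streaming):

```java
import com.refinitiv.ema.access.AckMsg;
import com.refinitiv.ema.access.EmaFactory;
import com.refinitiv.ema.access.GenericMsg;
import com.refinitiv.ema.access.Msg;
import com.refinitiv.ema.access.OmmConsumer;
import com.refinitiv.ema.access.OmmConsumerClient;
import com.refinitiv.ema.access.OmmConsumerEvent;
import com.refinitiv.ema.access.RefreshMsg;
import com.refinitiv.ema.access.StatusMsg;
import com.refinitiv.ema.access.UpdateMsg;

class AppClient implements OmmConsumerClient {
    public void onRefreshMsg(RefreshMsg refreshMsg, OmmConsumerEvent event) { System.out.println(refreshMsg); }
    public void onUpdateMsg(UpdateMsg updateMsg, OmmConsumerEvent event)    { System.out.println(updateMsg); }
    public void onStatusMsg(StatusMsg statusMsg, OmmConsumerEvent event)    { System.out.println(statusMsg); }
    public void onGenericMsg(GenericMsg genericMsg, OmmConsumerEvent event) {}
    public void onAckMsg(AckMsg ackMsg, OmmConsumerEvent event) {}
    public void onAllMsg(Msg msg, OmmConsumerEvent event) {}
}

public class Consumer {
    public static void main(String[] args) {
        OmmConsumer consumer = null;
        try {
            AppClient appClient = new AppClient();
            consumer = EmaFactory.createOmmConsumer(
                EmaFactory.createOmmConsumerConfig().host("localhost:14002").username("user"));

            // Request 10,000 RICs that do not exist, so every stream should come
            // back with a "record could not be found" status message.
            for (int i = 0; i < 10_000; i++) {
                consumer.registerClient(
                    EmaFactory.createReqMsg().serviceName("DIRECT_FEED").name("INVALID_RIC_" + i),
                    appClient);
            }

            Thread.sleep(60000); // let the API thread deliver the status callbacks
            System.out.println("Done");
        } catch (Exception e) {
            System.out.println(e.getMessage());
        } finally {
            if (consumer != null) consumer.uninitialize();
        }
    }
}
```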
Result: The example shows a single "The record could not be found" status and then stops. The "Done" message is never printed. The stack trace shows a pattern similar to the client's trace above:
I did a quick test with EMA Java 3.6.0 L1 and the same code, and it works fine: the example shows 10K status messages with "The record could not be found" and then the "Done" message.
@charanyarajagopalan This is addressed with tag Real-Time-SDK-2.0.8.L1. Please let us know if there are further concerns. |
To give some background, we are creating an EMA consumer and then opening item streams (100 items batched in each ReqMsg) with that consumer. We simply stop seeing new messages a few minutes after the consumer is established, without any exceptions or errors. This happens while the loop that opens the item streams is still running. Debugging shows that a deadlock is potentially occurring, as illustrated by the stack traces below. Is there some kind of rate limit on how quickly batched requests should be created (we have a single Machine ID and hence a single EMA consumer)? The version used is 3.6.7.1.
deadlock-trace.txt
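For context, a minimal sketch of the kind of batched request loop described above (the service name, item list, and batch size are placeholders based on the description, not the reporter's actual code):

```java
import java.util.List;

import com.refinitiv.ema.access.ElementList;
import com.refinitiv.ema.access.EmaFactory;
import com.refinitiv.ema.access.OmmArray;
import com.refinitiv.ema.access.OmmConsumer;
import com.refinitiv.ema.access.OmmConsumerClient;
import com.refinitiv.ema.rdm.EmaRdm;

public class BatchRequests {
    /** Opens one batch request per group of items ("DIRECT_FEED" is a placeholder service). */
    static void openBatches(OmmConsumer consumer, OmmConsumerClient client,
                            List<String> items, int batchSize) {
        for (int start = 0; start < items.size(); start += batchSize) {
            OmmArray batchItems = EmaFactory.createOmmArray();
            for (String ric : items.subList(start, Math.min(start + batchSize, items.size()))) {
                batchItems.add(EmaFactory.createOmmArrayEntry().ascii(ric));
            }
            ElementList payload = EmaFactory.createElementList();
            payload.add(EmaFactory.createElementEntry().array(EmaRdm.ENAME_BATCH_ITEM_LIST, batchItems));

            // Each call opens one batched request of up to 'batchSize' items.
            // Under API_DISPATCH, status/refresh callbacks can fire on the API
            // thread while this loop is still registering further batches.
            consumer.registerClient(
                EmaFactory.createReqMsg().serviceName("DIRECT_FEED").payload(payload), client);
        }
    }
}
```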