[query] [map] AggregationMemberBounceTest.aggregationReturnsCorrectResultWhenBouncing #10776
mmedenjak changed the title from `AggregationMemberBounceTest.aggregationReturnsCorrectResultWhenBouncing` to `[query] [map] AggregationMemberBounceTest.aggregationReturnsCorrectResultWhenBouncing` on Jul 11, 2017.
mmedenjak pushed a commit (merged) to mmedenjak/hazelcast that referenced this issue on Sep 28, 2017:

Since the migration finalizations can be invoked concurrently, the owned partitions might be reloaded concurrently. This means the set of owned partitions might first be set to a newer version and then to an older one, leaving the member with an incorrect set of owned partitions.

This affects the query engine when it performs queries off the partition thread, because every member reports its own set of owned partitions, which in this case is incorrect. If the query engine receives the results from the actual partition owner later than from the "lying" partition owner, they are discarded. This can cause the query engine to return incorrect results until the partitions are reloaded again on another migration.

The fix reloads the partitions in a CAS loop, ensuring that the newest partition state is always the one applied. Also added some type parameters and improved javadoc.

Fixes: hazelcast#10107 hazelcast#9870 hazelcast#10776
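The CAS-loop fix described above can be sketched roughly as follows. This is an illustrative reduction, not Hazelcast's actual internals: the class and method names are hypothetical, and a versioned snapshot stands in for the real partition state. The key idea is that a stale snapshot can never overwrite a newer one, no matter in which order concurrent reloads complete.

```java
import java.util.Set;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch of applying a reloaded partition set in a CAS loop.
public class OwnedPartitionsHolder {

    // Immutable, versioned snapshot of the partitions this member owns.
    static final class OwnedPartitions {
        final int version;
        final Set<Integer> partitionIds;

        OwnedPartitions(int version, Set<Integer> partitionIds) {
            this.version = version;
            this.partitionIds = partitionIds;
        }
    }

    private final AtomicReference<OwnedPartitions> owned =
            new AtomicReference<>(new OwnedPartitions(0, Set.of()));

    /**
     * Applies the given snapshot only if it is newer than the stored one.
     * Returns true if applied, false if the snapshot was stale.
     */
    boolean update(int version, Set<Integer> partitionIds) {
        for (;;) {
            OwnedPartitions current = owned.get();
            if (version <= current.version) {
                return false; // stale reload; never go backwards
            }
            OwnedPartitions next = new OwnedPartitions(version, partitionIds);
            if (owned.compareAndSet(current, next)) {
                return true; // applied atomically
            }
            // Lost the race to a concurrent reload: re-read and re-check.
        }
    }

    int currentVersion() {
        return owned.get().version;
    }
}
```

With a plain field assignment, two concurrent finalizations could apply versions 2 then 1, leaving stale state; the version check inside the CAS loop rejects the older write instead.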
Failing test report: https://hazelcast-l337.ci.cloudbees.com/view/Hazelcast/job/Hazelcast-3.x-nightly/com.hazelcast$hazelcast/1383/testReport/junit/com.hazelcast.aggregation/AggregationMemberBounceTest/aggregationReturnsCorrectResultWhenBouncing/