
Fix flaky RemoteIndexRecoveryIT testRerouteRecovery test #9580 #11918

Merged
1 commit merged into opensearch-project:main on Jan 31, 2024

Conversation

ashking94
Member

Description

This PR fixes the flakiness in the testRerouteRecovery of RemoteIndexRecoveryIT.

As seen in the stack traces, the failure happens for two reasons:

  1. Index not found
org.opensearch.remotestore.RemoteIndexRecoveryIT > testRerouteRecovery FAILED
    [test-idx-1/gUdo_ZgeQSeTKz75x2BNLA] IndexNotFoundException[no such index [test-idx-1]]
        at __randomizedtesting.SeedInfo.seed([216A80F8C219BD32:8CC133ACF741DF70]:0)
        at app//org.opensearch.indices.IndicesService.indexServiceSafe(IndicesService.java:689)
        at app//org.opensearch.indices.recovery.IndexRecoveryIT.lambda$testRerouteRecovery$1(IndexRecoveryIT.java:528)
  2. Assertion failure
java.lang.AssertionError: 
Expected: <1>
     but: was <0>
	at __randomizedtesting.SeedInfo.seed([A66685B0FBDCA162:BCD36E4CE84C320]:0)
	at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18)
	at org.junit.Assert.assertThat(Assert.java:964)
	at org.junit.Assert.assertThat(Assert.java:930)
	at org.opensearch.indices.recovery.IndexRecoveryIT.lambda$testRerouteRecovery$1(IndexRecoveryIT.java:528)

The first failure happens because cluster state publication cleans up the index shard on the old primary node between two assertion checks. The current assertBusy uses exponential backoff between checks, and within one of these gaps the asserted condition briefly becomes true before the index shard is cleared from the old node. Because the backoff interval grows large, the check misses the short window in which the condition holds. The fix is to run the assertion checks at a fixed, short interval.
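
For illustration, a fixed-interval variant of the busy-assert loop could look roughly like the sketch below. This is a hypothetical, self-contained helper written for this description, not necessarily the exact code merged in this PR; the point is only that the sleep between polls stays constant instead of backing off.

```java
import java.util.concurrent.TimeUnit;

// Minimal sketch of a fixed-interval busy assertion. Unlike an exponential-backoff
// assertBusy, the sleep between polls stays constant, so a condition that is only
// true for a short window is far less likely to be skipped over.
final class FixedIntervalAssert {
    static void assertBusyWithFixedInterval(Runnable assertion, long timeoutMillis, long intervalMillis)
        throws InterruptedException {
        final long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
        while (true) {
            try {
                assertion.run();              // assertion held on this poll
                return;
            } catch (AssertionError e) {
                if (System.nanoTime() >= deadline) {
                    throw e;                  // out of time: surface the last failure
                }
                Thread.sleep(intervalMillis); // fixed sleep, no backoff
            }
        }
    }
}
```

Here timeoutMillis and intervalMillis are illustrative parameters; the important property is the constant polling cadence.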

The second failure occurs for the same reason, except that by the time the check runs, peer recovery has already completed and changed the recovery state of the index shard.

Related Issues

Resolves #9580

Check List

  • New functionality includes testing.
    • All tests pass
  • New functionality has been documented.
    • New functionality has javadoc added
  • Failing checks are inspected and point to the corresponding known issue(s) (See: Troubleshooting Failing Builds)
  • Commits are signed per the DCO using --signoff
  • Commit changes are listed out in CHANGELOG.md file (See: Changelog)
  • Public documentation issue/PR created

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.

@github-actions bot added the :test, >test-failure, bug, flaky-test, Indexing:Replication, and Storage labels on Jan 18, 2024
@ashking94
Member Author

Running testRerouteRecovery for both RemoteIndexRecoveryIT and IndexRecoveryIT for at least 1K iterations. Without the fix, the test currently fails around the 20th iteration.

Contributor

Compatibility status:

Checks if related components are compatible with change ff8fc99

Incompatible components

Incompatible components: [https://github.com/opensearch-project/observability.git, https://github.com/opensearch-project/sql.git, https://github.com/opensearch-project/custom-codecs.git, https://github.com/opensearch-project/cross-cluster-replication.git, https://github.com/opensearch-project/asynchronous-search.git, https://github.com/opensearch-project/reporting.git, https://github.com/opensearch-project/performance-analyzer.git, https://github.com/opensearch-project/performance-analyzer-rca.git]

Skipped components

Compatible components

Compatible components: [https://github.com/opensearch-project/security-analytics.git, https://github.com/opensearch-project/notifications.git, https://github.com/opensearch-project/opensearch-oci-object-storage.git, https://github.com/opensearch-project/job-scheduler.git, https://github.com/opensearch-project/security.git, https://github.com/opensearch-project/neural-search.git, https://github.com/opensearch-project/geospatial.git, https://github.com/opensearch-project/ml-commons.git, https://github.com/opensearch-project/common-utils.git, https://github.com/opensearch-project/anomaly-detection.git, https://github.com/opensearch-project/index-management.git, https://github.com/opensearch-project/alerting.git, https://github.com/opensearch-project/k-nn.git]

Contributor

❌ Gradle check result for ff8fc99: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@ashking94
Member Author

❌ Gradle check result for ff8fc99: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

Flaky test - #9891

Contributor

❕ Gradle check result for ff8fc99: UNSTABLE

  • TEST FAILURES:
      1 org.opensearch.remotestore.RemoteIndexPrimaryRelocationIT.testPrimaryRelocationWhileIndexing

Please review all flaky tests that succeeded after retry and create an issue if one does not already exist to track the flaky failure.


codecov bot commented Jan 18, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison is base (c132db9) 71.33% compared to head (ff8fc99) 71.41%.
Report is 8 commits behind head on main.

Additional details and impacted files
@@             Coverage Diff              @@
##               main   #11918      +/-   ##
============================================
+ Coverage     71.33%   71.41%   +0.08%     
- Complexity    59300    59328      +28     
============================================
  Files          4921     4921              
  Lines        278989   278989              
  Branches      40543    40543              
============================================
+ Hits         199014   199252     +238     
+ Misses        63444    63120     -324     
- Partials      16531    16617      +86     

☔ View full report in Codecov by Sentry.

@ashking94
Member Author

RemoteIndexPrimaryRelocationIT

Flaky test - #9191

@ashking94
Member Author

The test has been run for more than 1K iterations without any failure yet.

@andrross
Member

However, due to high interval, it misses hitting the assertion true condition. The fix for this issue is to have the assertion checks done at fixed interval.

Are you actually fixing the race condition here, or just making it much less likely? How can we make the test pass in a way that isn't dependent on the timing of the assertions?

@ashking94
Member Author

ashking94 commented Jan 19, 2024

However, due to high interval, it misses hitting the assertion true condition. The fix for this issue is to have the assertion checks done at fixed interval.

Are you actually fixing the race condition here, or just making it much less likely? How can we make the test pass in a way that isn't dependent on the timing of the assertions?

There is no underlying problem here. The test asserts an intermediate state that exists only transiently, for a very short period. The same problem exists in the underlying document replication test as well.

Once the recovery process completes, a shard-started action is triggered, which causes the index shard to be cleared from the source node.
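
To make the timing concrete, here is a small standalone toy (plain Java, not OpenSearch test code; all names and durations are invented for illustration). The "one recovery in flight" state lasts only a few tens of milliseconds, so a poller whose backoff grows past that window can miss it entirely, while a short fixed-interval poll observes it reliably.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy simulation of the race: the transient state (value == 1) exists for ~50 ms
// before the "shard started" cleanup resets it to 0. A fixed 10 ms poll catches it.
public final class TransientStateDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger ongoingRecoveries = new AtomicInteger(0);

        Thread cluster = new Thread(() -> {
            try {
                Thread.sleep(200);        // recovery starts after ~200 ms
                ongoingRecoveries.set(1); // transient state: one recovery in flight
                Thread.sleep(50);         // ...which lasts only ~50 ms
                ongoingRecoveries.set(0); // shard started: shard cleaned up on old primary
            } catch (InterruptedException ignored) {
            }
        });
        cluster.start();

        boolean observed = false;
        for (long waited = 0; waited < 1_000 && !observed; waited += 10) {
            observed = ongoingRecoveries.get() == 1; // fixed 10 ms polling cadence
            Thread.sleep(10);
        }
        cluster.join();
        System.out.println("transient state observed with fixed 10 ms polling: " + observed);
    }
}
```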

@gbbafna merged commit c6cebc7 into opensearch-project:main on Jan 31, 2024
110 of 113 checks passed
@gbbafna deleted the 9580 branch on January 31, 2024 at 11:14
@ashking94 added the backport 2.x (Backport to 2.x branch) label on Feb 9, 2024
opensearch-trigger-bot bot pushed a commit that referenced this pull request Feb 9, 2024
Signed-off-by: Ashish Singh <ssashish@amazon.com>
(cherry picked from commit c6cebc7)
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
sachinpkale pushed a commit that referenced this pull request Feb 10, 2024
… (#12275)

(cherry picked from commit c6cebc7)

Signed-off-by: Ashish Singh <ssashish@amazon.com>
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
peteralfonsi pushed a commit to peteralfonsi/OpenSearch that referenced this pull request Mar 1, 2024
rayshrey pushed a commit to rayshrey/OpenSearch that referenced this pull request Mar 18, 2024
shiv0408 pushed a commit to Gaurav614/OpenSearch that referenced this pull request Apr 25, 2024
…roject#9580 (opensearch-project#11918)

Signed-off-by: Ashish Singh <ssashish@amazon.com>
Signed-off-by: Shivansh Arora <hishiv@amazon.com>
Labels
backport 2.x, bug, flaky-test, Indexing:Replication, skip-changelog, Storage, :test, >test-failure
Projects
None yet
Development

Successfully merging this pull request may close these issues.

[CI]flaky test faiure - org.opensearch.remotestore.RemoteIndexRecoveryIT.testRerouteRecovery
3 participants