Recursively Delete Unreferenced Index Directories #42189

Merged

Changes from all commits (67 commits)
09ec6f3
Recursively Delete Unreferenced Index Directories
original-brownbear May 16, 2019
20a6720
more bulk
original-brownbear May 17, 2019
273a165
add docs
original-brownbear May 17, 2019
aba91e1
Merge remote-tracking branch 'elastic/master' into allow-listing-dire…
original-brownbear May 17, 2019
ac3c496
Merge remote-tracking branch 'elastic/master' into allow-listing-dire…
original-brownbear May 17, 2019
e593e0c
cleanup after snapshotting as well
original-brownbear May 18, 2019
de61d35
fix checkstyle
original-brownbear May 18, 2019
22a668e
Merge remote-tracking branch 'elastic/master' into allow-listing-dire…
original-brownbear May 18, 2019
3d7da2e
nicer
original-brownbear May 18, 2019
826e513
start marker logic
original-brownbear May 19, 2019
96dabb7
Merge remote-tracking branch 'elastic/master' into allow-listing-dire…
original-brownbear May 19, 2019
c0e025e
clean
original-brownbear May 19, 2019
e99e5c0
prevent index dir blobs from being listed
original-brownbear May 19, 2019
0e66c19
add assertion about proper S3 behavior
original-brownbear May 19, 2019
e4aa770
fix azure blob children listing
original-brownbear May 19, 2019
a2bfa0f
fix azure
original-brownbear May 19, 2019
98dc56c
nicer
original-brownbear May 19, 2019
372fd26
catch all the exceptions
original-brownbear May 19, 2019
5eb036b
cleanup
original-brownbear May 19, 2019
3d1e26b
smaller changeset
original-brownbear May 20, 2019
cb2a196
nicer
original-brownbear May 20, 2019
2ca8c4c
nicer
original-brownbear May 20, 2019
908cfeb
dry up test infra
original-brownbear May 20, 2019
2d0f368
Merge remote-tracking branch 'elastic/master' into allow-listing-dire…
original-brownbear May 20, 2019
d72cbcd
test cleanup on hdfs repository
original-brownbear May 20, 2019
b9b4c32
Merge remote-tracking branch 'elastic/master' into allow-listing-dire…
original-brownbear May 20, 2019
8a82885
Merge remote-tracking branch 'elastic/master' into allow-listing-dire…
original-brownbear May 22, 2019
ae4c9a8
Merge remote-tracking branch 'elastic/master' into allow-listing-dire…
original-brownbear May 23, 2019
b8441ac
azure 3rd party cleanup tests
original-brownbear May 23, 2019
f50a895
Merge remote-tracking branch 'elastic/master' into allow-listing-dire…
original-brownbear May 23, 2019
014f5e0
gcs third party tests
original-brownbear May 23, 2019
f825d3f
s3 third party tests
original-brownbear May 23, 2019
2e0bd14
better comment
original-brownbear May 23, 2019
1c33d31
add todo about eventual consistency on S3
original-brownbear May 23, 2019
bb3fb73
Merge remote-tracking branch 'elastic/master' into allow-listing-dire…
original-brownbear May 23, 2019
def2d77
much drier
original-brownbear May 24, 2019
d6fa0fe
much drier consistency checks
original-brownbear May 24, 2019
e7a3c2b
Fix consistency tests to account for S3 eventual consistency
original-brownbear May 24, 2019
bb95f7f
Merge remote-tracking branch 'elastic/master' into allow-listing-dire…
original-brownbear May 24, 2019
9cac1a4
S3 resiliency tests work with minio
original-brownbear May 24, 2019
c8394fe
add comment on Minio hack
original-brownbear May 24, 2019
b338a91
shorter diff
original-brownbear May 24, 2019
a8bf18e
Merge remote-tracking branch 'elastic/master' into allow-listing-dire…
original-brownbear May 28, 2019
be8b479
Merge remote-tracking branch 'elastic/master' into allow-listing-dire…
original-brownbear May 28, 2019
58fcfe2
Merge remote-tracking branch 'elastic/master' into allow-listing-dire…
original-brownbear Jun 5, 2019
1e39e9b
shorter diff
original-brownbear Jun 5, 2019
5b5ddb2
Merge remote-tracking branch 'elastic/master' into allow-listing-dire…
original-brownbear Jun 11, 2019
dc62d64
Merge remote-tracking branch 'elastic/master' into allow-listing-dire…
original-brownbear Jun 11, 2019
c52a394
CR: removed redundant test
original-brownbear Jun 11, 2019
dea1a07
Merge remote-tracking branch 'elastic/master' into allow-listing-dire…
original-brownbear Jun 11, 2019
43bb00a
Merge remote-tracking branch 'elastic/master' into allow-listing-dire…
original-brownbear Jun 17, 2019
af47bb8
CR: add repo consistency check globally
original-brownbear Jun 17, 2019
47766e1
Checkstyle
original-brownbear Jun 17, 2019
81aab9b
Merge remote-tracking branch 'elastic/master' into allow-listing-dire…
original-brownbear Jun 17, 2019
deb73e3
Merge remote-tracking branch 'elastic/master' into allow-listing-dire…
original-brownbear Jun 18, 2019
b8a7067
CR: remove . in ex messages
original-brownbear Jun 18, 2019
c613d89
remove now redundant directory deletes
original-brownbear Jun 18, 2019
d841af4
Merge remote-tracking branch 'elastic/master' into allow-listing-dire…
original-brownbear Jun 18, 2019
e13d799
Merge remote-tracking branch 'elastic/master' into allow-listing-dire…
original-brownbear Jun 18, 2019
f384e9c
Fix test issues with MockRepository
original-brownbear Jun 18, 2019
79478cb
Merge remote-tracking branch 'elastic/master' into allow-listing-dire…
original-brownbear Jun 20, 2019
bc1497e
CR: comments
original-brownbear Jun 20, 2019
421553b
CR: move container creation
original-brownbear Jun 20, 2019
b3478b5
Merge remote-tracking branch 'elastic/master' into allow-listing-dire…
original-brownbear Jun 21, 2019
bbb9632
Remute test
original-brownbear Jun 21, 2019
26df577
CR: comments
original-brownbear Jun 21, 2019
7617660
Merge remote-tracking branch 'elastic/master' into allow-listing-dire…
original-brownbear Jun 21, 2019
@@ -27,9 +27,13 @@
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.repositories.RepositoriesService;
import org.elasticsearch.repositories.RepositoryException;
import org.elasticsearch.repositories.blobstore.BlobStoreRepository;
import org.elasticsearch.repositories.blobstore.BlobStoreTestUtil;
import org.elasticsearch.snapshots.SnapshotState;
import org.elasticsearch.test.ESSingleNodeTestCase;
import org.elasticsearch.threadpool.ThreadPool;

import java.util.Collection;

@@ -145,6 +149,9 @@ public void testSimpleWorkflow() {
ClusterState clusterState = client.admin().cluster().prepareState().get().getState();
assertThat(clusterState.getMetaData().hasIndex("test-idx-1"), equalTo(true));
assertThat(clusterState.getMetaData().hasIndex("test-idx-2"), equalTo(false));
final BlobStoreRepository repo =
(BlobStoreRepository) getInstanceFromNode(RepositoriesService.class).repository("test-repo");
BlobStoreTestUtil.assertConsistency(repo, repo.threadPool().executor(ThreadPool.Names.GENERIC));
}

public void testMissingUri() {

@@ -26,10 +26,12 @@
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.repositories.AbstractThirdPartyRepositoryTestCase;
import org.elasticsearch.repositories.blobstore.BlobStoreRepository;
import org.elasticsearch.test.StreamsUtils;

import java.io.IOException;
import java.util.Collection;
import java.util.concurrent.Executor;
import java.util.Map;
import java.util.concurrent.TimeUnit;

@@ -76,6 +78,20 @@ protected void createRepository(String repoName) {
}

@Override
protected boolean assertCorruptionVisible(BlobStoreRepository repo, Executor genericExec) throws Exception {
// S3 is only eventually consistent for the list operations used by these assertions, so we retry for 10 minutes,
// assuming that listing operations become consistent within that window.
assertBusy(() -> assertTrue(super.assertCorruptionVisible(repo, genericExec)), 10L, TimeUnit.MINUTES);
return true;
}

@Override
protected void assertConsistentRepository(BlobStoreRepository repo, Executor executor) throws Exception {
// S3 is only eventually consistent for the list operations used by these assertions, so we retry for 10 minutes,
// assuming that listing operations become consistent within that window.
assertBusy(() -> super.assertConsistentRepository(repo, executor), 10L, TimeUnit.MINUTES);
}

protected void assertBlobsByPrefix(BlobPath path, String prefix, Map<String, BlobMetaData> blobs) throws Exception {
// AWS S3 is eventually consistent so we retry for 10 minutes assuming a list operation will never take longer than that
// to become consistent.

@@ -58,7 +58,6 @@
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.util.set.Sets;
import org.elasticsearch.common.xcontent.LoggingDeprecationHandler;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.common.xcontent.XContentFactory;
@@ -393,46 +392,68 @@ public void deleteSnapshot(SnapshotId snapshotId, long repositoryStateId, Action
logger.warn(() -> new ParameterizedMessage("cannot read snapshot file [{}]", snapshotId), ex);
}
// Delete snapshot from the index file, since it is the maintainer of truth of active snapshots
final RepositoryData repositoryData;
final RepositoryData updatedRepositoryData;
final Map<String, BlobContainer> foundIndices;
try {
repositoryData = getRepositoryData();
final RepositoryData repositoryData = getRepositoryData();
updatedRepositoryData = repositoryData.removeSnapshot(snapshotId);
// Cache the indices that were found before writing out the new index-N blob so that a stuck master will never
// delete an index that was created by another master node after writing this index-N blob.

original-brownbear (Member Author) commented on May 17, 2019:

I think listing before writing the updated index-N makes this approach completely safe even on an eventually consistent blob store. If an index UUID (the snapshot UUID, not the cluster state one!) goes out of scope for a given index-N, it will never be reused in a future index-M (M > N), so we don't have to care about stuck master nodes coming back to haunt us (a minimal sketch of this ordering follows this file's diff).
We could, however, optimize this a little by writing a SUCCESS-N blob after finishing all operations that went into a given index-N blob, and only running the listing if we suspect dangling indices because an operation that wrote a given index-N didn't finish fully.
The downside is that this gets a little complicated once you account for master failover scenarios, and it may not be worth it performance-wise now that we have bulk deletes and could parallelize the recursive delete as well (I just didn't do it here for readability on a first pass).

foundIndices = blobStore().blobContainer(basePath().add("indices")).children();
writeIndexGen(updatedRepositoryData, repositoryStateId);
} catch (Exception ex) {
listener.onFailure(new RepositoryException(metadata.name(), "failed to delete snapshot [" + snapshotId + "]", ex));
return;
}
final SnapshotInfo finalSnapshotInfo = snapshot;
final Collection<IndexId> unreferencedIndices = Sets.newHashSet(repositoryData.getIndices().values());
unreferencedIndices.removeAll(updatedRepositoryData.getIndices().values());
try {
blobContainer().deleteBlobsIgnoringIfNotExists(
Arrays.asList(snapshotFormat.blobName(snapshotId.getUUID()), globalMetaDataFormat.blobName(snapshotId.getUUID())));
} catch (IOException e) {
logger.warn(() -> new ParameterizedMessage("[{}] Unable to delete global metadata files", snapshotId), e);
}
final var survivingIndices = updatedRepositoryData.getIndices();
deleteIndices(
Optional.ofNullable(finalSnapshotInfo)
.map(info -> info.indices().stream().map(repositoryData::resolveIndexId).collect(Collectors.toList()))
original-brownbear (Member Author) commented:

No need to run partial deletes on indices that went fully out of scope now. This would be great to have for #41581 because, if we delayed writing the metadata in the current implementation, we would run into the messy spot of not knowing about the shards when we fail to write the index metadata, so the current implementation couldn't delete the stale blobs from a partial upload of a new index.

.map(info -> info.indices().stream().filter(survivingIndices::containsKey)
.map(updatedRepositoryData::resolveIndexId).collect(Collectors.toList()))
.orElse(Collections.emptyList()),
snapshotId,
ActionListener.map(listener, v -> {
try {
blobStore().blobContainer(indicesPath()).deleteBlobsIgnoringIfNotExists(
unreferencedIndices.stream().map(IndexId::getId).collect(Collectors.toList()));
} catch (IOException e) {
logger.warn(() ->
new ParameterizedMessage(
"[{}] indices {} are no longer part of any snapshots in the repository, " +
"but failed to clean up their index folders.", metadata.name(), unreferencedIndices), e);
}
cleanupStaleIndices(foundIndices, survivingIndices);
return null;
})
);
}
}

private void cleanupStaleIndices(Map<String, BlobContainer> foundIndices, Map<String, IndexId> survivingIndices) {
try {
final Set<String> survivingIndexIds = survivingIndices.values().stream()
.map(IndexId::getId).collect(Collectors.toSet());
for (Map.Entry<String, BlobContainer> indexEntry : foundIndices.entrySet()) {
final String indexSnId = indexEntry.getKey();
try {
if (survivingIndexIds.contains(indexSnId) == false) {
logger.debug("[{}] Found stale index [{}]. Cleaning it up", metadata.name(), indexSnId);
indexEntry.getValue().delete();
logger.debug("[{}] Cleaned up stale index [{}]", metadata.name(), indexSnId);
}
} catch (IOException e) {
logger.warn(() -> new ParameterizedMessage(
"[{}] index {} is no longer part of any snapshots in the repository, " +
"but failed to clean up their index folders", metadata.name(), indexSnId), e);
}
}
} catch (Exception e) {
// TODO: We shouldn't be blanket catching and suppressing all exceptions here and instead handle them safely upstream.
A reviewer (Contributor) commented:

add an assert false here?

// Currently this catch exists as a stop gap solution to tackle unexpected runtime exceptions from implementations
// bubbling up and breaking the snapshot functionality.
assert false : e;
logger.warn(new ParameterizedMessage("[{}] Exception during cleanup of stale indices", metadata.name()), e);
}
}

private void deleteIndices(List<IndexId> indices, SnapshotId snapshotId, ActionListener<Void> listener) {
if (indices.isEmpty()) {
listener.onResponse(null);
@@ -494,9 +515,9 @@ public SnapshotInfo finalizeSnapshot(final SnapshotId snapshotId,
startTime, failure, System.currentTimeMillis(), totalShards, shardFailures,
includeGlobalState, userMetadata);
try {
final RepositoryData updatedRepositoryData = getRepositoryData().addSnapshot(snapshotId, blobStoreSnapshot.state(), indices);
snapshotFormat.write(blobStoreSnapshot, blobContainer(), snapshotId.getUUID());
final RepositoryData repositoryData = getRepositoryData();
writeIndexGen(repositoryData.addSnapshot(snapshotId, blobStoreSnapshot.state(), indices), repositoryStateId);
writeIndexGen(updatedRepositoryData, repositoryStateId);
} catch (FileAlreadyExistsException ex) {
// if another master was elected and took over finalizing the snapshot, it is possible
// that both nodes try to finalize the snapshot and write to the same blobs, so we just
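
The discussion above hinges on the ordering inside deleteSnapshot: list the existing index directories, then write the updated index-N, then delete only what the new index-N no longer references. Below is a minimal, self-contained sketch of that ordering against an in-memory stand-in for the blob store; the class and helper names (StaleIndexCleanupSketch, indicesContainer, survivingIndexIds) are hypothetical and not part of Elasticsearch.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

/**
 * Hypothetical sketch (not Elasticsearch code) of the ordering used in deleteSnapshot:
 * 1. list the children of the "indices" container BEFORE writing the updated index-N,
 * 2. write the new index-N that only references surviving indices,
 * 3. delete the directories from step 1 that the new index-N no longer references.
 * Because the listing happens before the write, a stuck master resuming later can only
 * delete directories that already existed before its index-N was superseded.
 */
public class StaleIndexCleanupSketch {

    public static void main(String[] args) {
        // Stand-in for BlobContainer.children() on the "indices" path: index UUID -> blob names.
        Map<String, List<String>> indicesContainer = new HashMap<>();
        indicesContainer.put("uuid-live", List.of("meta-snap1.dat"));
        indicesContainer.put("uuid-stale", List.of("meta-deleted-snap.dat"));

        // Step 1: capture the listing before touching index-N.
        Set<String> foundIndices = new HashSet<>(indicesContainer.keySet());

        // Step 2: "write" the updated index-N; here we just record which index ids survive it.
        Set<String> survivingIndexIds = Set.of("uuid-live");

        // Step 3: delete everything found in step 1 that the new index-N no longer references.
        Set<String> stale = foundIndices.stream()
            .filter(indexId -> survivingIndexIds.contains(indexId) == false)
            .collect(Collectors.toSet());
        stale.forEach(indicesContainer::remove);

        System.out.println("deleted stale index directories: " + stale);
        System.out.println("remaining index directories: " + indicesContainer.keySet());
    }
}
```

The SUCCESS-N marker mentioned in the comment would only change when step 1 runs (skip the listing when the previous generation is known to have completed cleanly); it would not change this ordering.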

@@ -20,11 +20,13 @@

import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;
import org.elasticsearch.cluster.SnapshotsInProgress;
import org.elasticsearch.cluster.metadata.RepositoryMetaData;
import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.repositories.RepositoriesService;
import org.elasticsearch.repositories.blobstore.BlobStoreTestUtil;
import org.elasticsearch.snapshots.mockstore.MockRepository;
import org.elasticsearch.test.ESIntegTestCase;
import org.junit.After;
@@ -65,6 +67,32 @@ public void assertConsistentHistoryInLuceneIndex() throws Exception {
internalCluster().assertConsistentHistoryBetweenTranslogAndLuceneIndex();
}

private String skipRepoConsistencyCheckReason;

@After
public void assertRepoConsistency() {
if (skipRepoConsistencyCheckReason == null) {
client().admin().cluster().prepareGetRepositories().get().repositories()
.stream()
.map(RepositoryMetaData::name)
.forEach(name -> {
final List<SnapshotInfo> snapshots = client().admin().cluster().prepareGetSnapshots(name).get().getSnapshots(name);
// Delete one random snapshot to trigger repository cleanup.
if (snapshots.isEmpty() == false) {
client().admin().cluster().prepareDeleteSnapshot(name, randomFrom(snapshots).snapshotId().getName()).get();
}
BlobStoreTestUtil.assertRepoConsistency(internalCluster(), name);
});
} else {
logger.info("--> skipped repo consistency checks because [{}]", skipRepoConsistencyCheckReason);
}
A reviewer (Contributor) commented:

perhaps log the reason in the else branch here

}

protected void disableRepoConsistencyCheck(String reason) {
assertNotNull(reason);
skipRepoConsistencyCheckReason = reason;
}

public static long getFailureCount(String repository) {
long failureCount = 0;
for (RepositoriesService repositoriesService :

@@ -722,6 +722,7 @@ public boolean clearData(String nodeName) {
}

public void testRegistrationFailure() {
disableRepoConsistencyCheck("This test does not create any data in the repository");
logger.info("--> start first node");
internalCluster().startNode();
logger.info("--> start second node");
@@ -741,6 +742,7 @@
}

public void testThatSensitiveRepositorySettingsAreNotExposed() throws Exception {
disableRepoConsistencyCheck("This test does not create any data in the repository");
Settings nodeSettings = Settings.EMPTY;
logger.info("--> start two nodes");
internalCluster().startNodes(2, nodeSettings);

@@ -141,8 +141,8 @@ public void testWhenMetadataAreLoaded() throws Exception {
// Deleting a snapshot does not load the global metadata state but loads each index metadata
assertAcked(client().admin().cluster().prepareDeleteSnapshot("repository", "snap").get());
assertGlobalMetadataLoads("snap", 1);
assertIndexMetadataLoads("snap", "docs", 5);
assertIndexMetadataLoads("snap", "others", 4);
assertIndexMetadataLoads("snap", "docs", 4);
original-brownbear (Member Author) commented:

Now we don't need to read the meta blobs to get an index's shard count before deleting, since we get it from the child container listing instead (see the sketch after this file's diff).

assertIndexMetadataLoads("snap", "others", 3);
}

private void assertGlobalMetadataLoads(final String snapshot, final int times) {
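
As the comment above notes, the lower expected counts come from resolving an index's shard directories via the child listing rather than reading the index metadata blob first. A rough, hypothetical sketch of the difference, using stand-in types rather than the actual BlobStoreRepository API:

```java
import java.util.HashSet;
import java.util.Set;

/**
 * Hypothetical sketch (stand-in types, not the real BlobStoreRepository API) of why one fewer
 * index metadata load is expected per deleted index: the shard directories to remove now come
 * from the child listing of the index container instead of from the shard count stored in the
 * index metadata blob.
 */
public class ShardDirectoryCleanupSketch {

    /** Minimal in-memory stand-in for an index's blob container. */
    static class IndexContainer {
        final Set<String> children = new HashSet<>(Set.of("0", "1"));
        int metadataLoads = 0;

        int readShardCountFromMetaBlob() {
            metadataLoads++; // this is the read the test no longer expects
            return 2;
        }
    }

    // Old approach: one metadata read per index just to learn how many shard directories exist.
    static void deleteShardsUsingMetadata(IndexContainer index) {
        int shardCount = index.readShardCountFromMetaBlob();
        for (int shard = 0; shard < shardCount; shard++) {
            index.children.remove(Integer.toString(shard));
        }
    }

    // New approach: delete whatever children the listing reports; no metadata read required.
    static void deleteShardsUsingChildListing(IndexContainer index) {
        index.children.clear();
    }

    public static void main(String[] args) {
        IndexContainer viaMetadata = new IndexContainer();
        deleteShardsUsingMetadata(viaMetadata);

        IndexContainer viaListing = new IndexContainer();
        deleteShardsUsingChildListing(viaListing);

        System.out.println("metadata loads, old approach: " + viaMetadata.metadataLoads); // 1
        System.out.println("metadata loads, new approach: " + viaListing.metadataLoads);  // 0
    }
}
```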

@@ -184,6 +184,8 @@ public void testRepositoryAckTimeout() throws Exception {
}

public void testRepositoryVerification() throws Exception {
disableRepoConsistencyCheck("This test does not create any data in the repository.");

Client client = client();

Settings settings = Settings.builder()

@@ -362,7 +362,7 @@ public void testSingleGetAfterRestore() throws Exception {
assertThat(client.prepareGet(restoredIndexName, typeName, docId).get().isExists(), equalTo(true));
}

public void testFreshIndexUUID() {
public void testFreshIndexUUID() throws InterruptedException {
Client client = client();

logger.info("--> creating repository");
@@ -540,7 +540,6 @@ public void testRestoreAliases() throws Exception {
logger.info("--> check that aliases are not restored and existing aliases still exist");
assertAliasesMissing(client.admin().indices().prepareAliasesExist("alias-123", "alias-1").get());
assertAliasesExist(client.admin().indices().prepareAliasesExist("alias-3").get());

}

public void testRestoreTemplates() throws Exception {
@@ -593,7 +592,6 @@ public void testRestoreTemplates() throws Exception {
logger.info("--> check that template is restored");
getIndexTemplatesResponse = client().admin().indices().prepareGetTemplates().get();
assertIndexTemplateExists(getIndexTemplatesResponse, "test-template");

}

public void testIncludeGlobalState() throws Exception {
Expand Down Expand Up @@ -780,10 +778,10 @@ public void testIncludeGlobalState() throws Exception {
assertFalse(client().admin().cluster().prepareGetPipeline("barbaz").get().isFound());
assertNull(client().admin().cluster().prepareGetStoredScript("foobar").get().getSource());
assertThat(client.prepareSearch("test-idx").setSize(0).get().getHits().getTotalHits().value, equalTo(100L));

}

public void testSnapshotFileFailureDuringSnapshot() {
public void testSnapshotFileFailureDuringSnapshot() throws InterruptedException {
disableRepoConsistencyCheck("This test uses a purposely broken repository so it would fail consistency checks");
Client client = client();

logger.info("--> creating repository");
@@ -910,6 +908,8 @@ public void testDataFileFailureDuringSnapshot() throws Exception {
}

public void testDataFileFailureDuringRestore() throws Exception {
disableRepoConsistencyCheck("This test intentionally leaves a broken repository");

Path repositoryLocation = randomRepoPath();
Client client = client();
logger.info("--> creating repository");
@@ -973,6 +973,8 @@ public void testDataFileFailureDuringRestore() throws Exception {
}

public void testDataFileCorruptionDuringRestore() throws Exception {
disableRepoConsistencyCheck("This test intentionally leaves a broken repository");

Path repositoryLocation = randomRepoPath();
Client client = client();
logger.info("--> creating repository");
@@ -1237,7 +1239,6 @@ public void testDeletionOfFailingToRecoverIndexShouldStopRestore() throws Except
assertThat(restoreSnapshotResponse.getRestoreInfo().failedShards(), equalTo(0));
SearchResponse countResponse = client.prepareSearch("test-idx").setSize(0).get();
assertThat(countResponse.getHits().getTotalHits().value, equalTo(100L));

}

public void testUnallocatedShards() throws Exception {
Expand Down Expand Up @@ -1786,8 +1787,6 @@ public void testRenameOnRestore() throws Exception {
.setIndices("test-idx-1").setRenamePattern("test-idx").setRenameReplacement("alias")
.setWaitForCompletion(true).setIncludeAliases(false).execute().actionGet();
assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), greaterThan(0));


}

public void testMoveShardWhileSnapshotting() throws Exception {
Expand Down Expand Up @@ -1854,6 +1853,7 @@ public void testMoveShardWhileSnapshotting() throws Exception {
}

public void testDeleteRepositoryWhileSnapshotting() throws Exception {
disableRepoConsistencyCheck("This test uses a purposely broken repository so it would fail consistency checks");
Client client = client();
Path repositoryLocation = randomRepoPath();
logger.info("--> creating repository");
@@ -2412,7 +2412,6 @@ public void testChangeSettingsOnRestore() throws Exception {

assertHitCount(client.prepareSearch("test-idx").setSize(0).setQuery(matchQuery("field1", "Foo")).get(), numdocs);
assertHitCount(client.prepareSearch("test-idx").setSize(0).setQuery(matchQuery("field1", "bar")).get(), numdocs);

}

public void testRecreateBlocksOnRestore() throws Exception {
Expand Down Expand Up @@ -2506,6 +2505,8 @@ public void testRecreateBlocksOnRestore() throws Exception {
}

public void testCloseOrDeleteIndexDuringSnapshot() throws Exception {
disableRepoConsistencyCheck("This test intentionally leaves a broken repository");

Client client = client();

boolean allowPartial = randomBoolean();
@@ -2827,6 +2828,8 @@ private boolean waitForRelocationsToStart(final String index, TimeValue timeout)
}

public void testSnapshotName() throws Exception {
disableRepoConsistencyCheck("This test does not create any data in the repository");

final Client client = client();

logger.info("--> creating repository");
@@ -2848,6 +2851,8 @@ public void testSnapshotName() throws Exception {
}

public void testListCorruptedSnapshot() throws Exception {
disableRepoConsistencyCheck("This test intentionally leaves a broken repository");

Client client = client();
Path repo = randomRepoPath();
logger.info("--> creating repository at {}", repo.toAbsolutePath());
@@ -3418,6 +3423,9 @@ public void testSnapshotCanceledOnRemovedShard() throws Exception {
}

public void testSnapshotSucceedsAfterSnapshotFailure() throws Exception {
// TODO: Fix repo cleanup logic to handle these leaked snap-file and only exclude test-repo (the mock repo) here.
disableRepoConsistencyCheck(
"This test uses a purposely broken repository implementation that results in leaking snap-{uuid}.dat files");
logger.info("--> creating repository");
final Path repoPath = randomRepoPath();
final Client client = client();