
Disable auto gen id optimization #9468

Closed · wants to merge 3 commits
@@ -212,7 +212,7 @@ public IndexShard(ShardId shardId, @IndexSettings Settings indexSettings, IndexS
/* create engine config */

this.config = new EngineConfig(shardId,
- indexSettings.getAsBoolean(EngineConfig.INDEX_OPTIMIZE_AUTOGENERATED_ID_SETTING, true),
+ indexSettings.getAsBoolean(EngineConfig.INDEX_OPTIMIZE_AUTOGENERATED_ID_SETTING, false),
Contributor:

I'd want to remove this entirely, but this change is supposed to go into 1.4 as well AFAIK, so I guess that is fine.

Contributor (Author):

I tried to make it as minimally invasive as possible for 1.4 (see the opt-in sketch after this hunk).

threadPool,indexingService,indexSettingsService, warmer, store, deletionPolicy, translog, mergePolicyProvider, mergeScheduler,
analysisService.defaultIndexAnalyzer(), similarityService.similarity(), codecService, failedEngineListener);
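
For context on the discussion above: with this change the optimization is off by default, but an index can still opt back in through the setting read here. A minimal sketch, assuming EngineConfig.INDEX_OPTIMIZE_AUTOGENERATED_ID_SETTING resolves to the usual "index.optimize_auto_generated_id" key; the class name OptimizeAutoGeneratedIdOptIn is made up for illustration:

import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;

public class OptimizeAutoGeneratedIdOptIn {

    // Builds index-level settings that re-enable the auto-generated-id
    // optimization, which this PR turns off by default. The setting key is an
    // assumption based on the constant used in the diff above.
    public static Settings optInSettings() {
        return ImmutableSettings.settingsBuilder()
                .put("index.optimize_auto_generated_id", true)
                .build();
    }
}

Such settings could be supplied at index creation, so clusters that relied on the old behavior would keep it explicitly rather than implicitly.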

@@ -30,7 +30,9 @@
import org.apache.lucene.document.NumericDocValuesField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.*;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.MockDirectoryWrapper;
import org.elasticsearch.ExceptionsHelper;
@@ -52,6 +54,7 @@
import org.elasticsearch.index.engine.*;
import org.elasticsearch.index.indexing.ShardIndexingService;
import org.elasticsearch.index.indexing.slowlog.ShardSlowLogIndexingService;
import org.elasticsearch.index.mapper.DocumentMapper;
import org.elasticsearch.index.mapper.ParseContext.Document;
import org.elasticsearch.index.mapper.ParsedDocument;
import org.elasticsearch.index.mapper.internal.SourceFieldMapper;
@@ -231,7 +234,7 @@ protected Engine createEngine(IndexSettingsService indexSettingsService, Store s

public EngineConfig config(IndexSettingsService indexSettingsService, Store store, Translog translog, MergeSchedulerProvider mergeSchedulerProvider) {
IndexWriterConfig iwc = newIndexWriterConfig();
- EngineConfig config = new EngineConfig(shardId, true, threadPool, new ShardIndexingService(shardId, EMPTY_SETTINGS, new ShardSlowLogIndexingService(shardId, EMPTY_SETTINGS, indexSettingsService)), indexSettingsService
+ EngineConfig config = new EngineConfig(shardId, false /* by default the optimization for auto-generated ids is disabled */, threadPool, new ShardIndexingService(shardId, EMPTY_SETTINGS, new ShardSlowLogIndexingService(shardId, EMPTY_SETTINGS, indexSettingsService)), indexSettingsService
, null, store, createSnapshotDeletionPolicy(), translog, createMergePolicy(), mergeSchedulerProvider,
iwc.getAnalyzer(), iwc.getSimilarity() , new CodecService(shardId.index()), new Engine.FailedEngineListener() {
@Override
@@ -1559,4 +1562,88 @@ public void testSettings() {

}
}

@Test
public void testRetryWithAutogeneratedIdWorksAndNoDuplicateDocs() throws IOException {

ParsedDocument doc = testParsedDocument("1", "1", "test", null, -1, -1, testDocument(), B_1, false);
boolean canHaveDuplicates = false;
Contributor:

This shouldn't have any effect anymore, right? Do we want to randomize it, with a comment that it's being removed?

Contributor:

Oh, I see; you have two tests. Maybe we can fold them into one, set canHaveDuplicates randomly, and then set it to the reverse?

Contributor (Author):

It looked like the two tests did exactly the same thing, but now that I fixed the test that is no longer the case. The behavior differs depending on the order. Please take a look and let me know if you still want to fold them.

Contributor:

Yeah, I see now. We'll clean it up later.

boolean autoGeneratedId = true;

Engine.Create index = new Engine.Create(null, analyzer, newUid("1"), doc, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), canHaveDuplicates, autoGeneratedId);
engine.create(index);
assertThat(index.version(), equalTo(1l));

index = new Engine.Create(null, analyzer, newUid("1"), doc, index.version(), index.versionType().versionTypeForReplicationAndRecovery(), REPLICA, System.nanoTime(), canHaveDuplicates, autoGeneratedId);
replicaEngine.create(index);
assertThat(index.version(), equalTo(1l));

canHaveDuplicates = true;
Contributor:

Same comment here. Randomize?

index = new Engine.Create(null, analyzer, newUid("1"), doc, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), canHaveDuplicates, autoGeneratedId);
engine.create(index);
assertThat(index.version(), equalTo(1l));
engine.refresh("test", true);
Engine.Searcher searcher = engine.acquireSearcher("test");
TopDocs topDocs = searcher.searcher().search(new MatchAllDocsQuery(), 10);
assertThat(topDocs.totalHits, equalTo(1));

index = new Engine.Create(null, analyzer, newUid("1"), doc, index.version(), index.versionType().versionTypeForReplicationAndRecovery(), REPLICA, System.nanoTime(), canHaveDuplicates, autoGeneratedId);
try {
Contributor:

We can use ElasticsearchAssertions.assertThrows for slightly cleaner code.

Contributor (Author):

assertThrows currently only accepts ActionFuture. We can change that, but I think that should probably be a different PR (see the sketch after this test).

replicaEngine.create(index);
fail();
} catch (VersionConflictEngineException e) {
// we ignore version conflicts on replicas, see TransportShardReplicationOperationAction.ignoreReplicaException
}
replicaEngine.refresh("test", true);
Engine.Searcher replicaSearcher = replicaEngine.acquireSearcher("test");
topDocs = replicaSearcher.searcher().search(new MatchAllDocsQuery(), 10);
assertThat(topDocs.totalHits, equalTo(1));
searcher.close();
replicaSearcher.close();
}
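
On the assertThrows point above: a generalized overload that accepts an arbitrary block instead of an ActionFuture could look roughly like the sketch below. This is hypothetical, not part of ElasticsearchAssertions as of this PR; it would let the try/fail/catch pattern in these tests collapse into a single call.

// Hypothetical helper, sketched for the discussion above; ElasticsearchAssertions
// does not offer this overload as of this PR.
static void assertThrows(Class<? extends Throwable> expected, Runnable block) {
    try {
        block.run();
    } catch (Throwable t) {
        if (expected.isInstance(t)) {
            return; // got the expected exception, nothing more to check
        }
        throw new AssertionError("expected " + expected.getSimpleName()
                + " but got " + t.getClass().getSimpleName(), t);
    }
    throw new AssertionError("expected " + expected.getSimpleName() + " but nothing was thrown");
}

// Usage (Java 7 anonymous class; the engine calls throw unchecked exceptions):
// assertThrows(VersionConflictEngineException.class, new Runnable() {
//     @Override
//     public void run() {
//         replicaEngine.create(index);
//     }
// });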

@Test
public void testRetryWithAutogeneratedIdsAndWrongOrderWorksAndNoDuplicateDocs() throws IOException {

ParsedDocument doc = testParsedDocument("1", "1", "test", null, -1, -1, testDocument(), B_1, false);
boolean canHaveDuplicates = true;
boolean autoGeneratedId = true;

Engine.Create firstIndexRequest = new Engine.Create(null, analyzer, newUid("1"), doc, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), canHaveDuplicates, autoGeneratedId);
engine.create(firstIndexRequest);
assertThat(firstIndexRequest.version(), equalTo(1l));

Engine.Create firstIndexRequestReplica = new Engine.Create(null, analyzer, newUid("1"), doc, firstIndexRequest.version(), firstIndexRequest.versionType().versionTypeForReplicationAndRecovery(), REPLICA, System.nanoTime(), canHaveDuplicates, autoGeneratedId);
replicaEngine.create(firstIndexRequestReplica);
assertThat(firstIndexRequestReplica.version(), equalTo(1l));

canHaveDuplicates = false;
Engine.Create secondIndexRequest = new Engine.Create(null, analyzer, newUid("1"), doc, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), canHaveDuplicates, autoGeneratedId);
Contributor:

Same here; ElasticsearchAssertions.assertThrows will help.

try {
engine.create(secondIndexRequest);
fail();
} catch (DocumentAlreadyExistsException e) {
// we can ignore the exception. In case this happens because the retry request arrived first then this error will not be sent back anyway.
// in any other case this is an actual error
}
engine.refresh("test", true);
Engine.Searcher searcher = engine.acquireSearcher("test");
TopDocs topDocs = searcher.searcher().search(new MatchAllDocsQuery(), 10);
assertThat(topDocs.totalHits, equalTo(1));

Engine.Create secondIndexRequestReplica = new Engine.Create(null, analyzer, newUid("1"), doc, firstIndexRequest.version(), firstIndexRequest.versionType().versionTypeForReplicationAndRecovery(), REPLICA, System.nanoTime(), canHaveDuplicates, autoGeneratedId);
try {
replicaEngine.create(secondIndexRequestReplica);
fail();
} catch (VersionConflictEngineException e) {
// we ignore version conflicts on replicas, see TransportShardReplicationOperationAction.ignoreReplicaException.
}
replicaEngine.refresh("test", true);
Engine.Searcher replicaSearcher = replicaEngine.acquireSearcher("test");
topDocs = replicaSearcher.searcher().search(new MatchAllDocsQuery(), 10);
assertThat(topDocs.totalHits, equalTo(1));
searcher.close();
replicaSearcher.close();
}

}
@@ -67,7 +67,6 @@ protected Settings nodeSettings(int nodeOrdinal) {
* see https://github.com/elasticsearch/elasticsearch/issues/8788
*/
@Test
- @LuceneTestCase.AwaitsFix(bugUrl = "https://github.com/elasticsearch/elasticsearch/issues/8788")
public void testRetryDueToExceptionOnNetworkLayer() throws ExecutionException, InterruptedException, IOException {
final AtomicBoolean exceptionThrown = new AtomicBoolean(false);
int numDocs = scaledRandomIntBetween(100, 1000);