[HUDI-3453] Fix HoodieBackedTableMetadata concurrent reading issue #5091

Merged · 16 commits · Sep 9, 2022
@@ -65,10 +65,18 @@
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;

import static org.apache.hudi.common.model.WriteOperationType.INSERT;
import static org.apache.hudi.common.model.WriteOperationType.UPSERT;

import static java.util.Arrays.asList;
import static java.util.Collections.emptyList;
import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertFalse;
@@ -92,6 +100,52 @@ public void testTableOperations() throws Exception {
verifyBaseMetadataTable();
}

@Test
Contributor: @codope let's remove this test altogether; as a rule of thumb, we should avoid adding any non-deterministic tests.

public void testMultiReaderForHoodieBackedTableMetadata() throws Exception {
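// Intent (HUDI-3453): hammer a single shared HoodieBackedTableMetadata instance with
// many concurrent getAllFilesInPartition calls and assert that none of them fails.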
final int taskNumber = 100;
HoodieTableType tableType = HoodieTableType.COPY_ON_WRITE;
init(tableType);
testTable.doWriteOperation("000001", INSERT, emptyList(), asList("p1"), 1);
HoodieBackedTableMetadata tableMetadata = new HoodieBackedTableMetadata(context, writeConfig.getMetadataConfig(), writeConfig.getBasePath(), writeConfig.getSpillableMapBasePath(), false);
assertTrue(tableMetadata.enabled());
List<String> metadataPartitions = tableMetadata.getAllPartitionPaths();
String partition = metadataPartitions.get(0);
String finalPartition = basePath + "/" + partition;
ArrayList<String> duplicatedPartitions = new ArrayList<>(taskNumber);
Contributor: Please add a comment explaining the intent of the test so that it's apparent and doesn't require deciphering the test to understand, at least at a high level.

Contributor Author: Added!

for (int i = 0; i < taskNumber; i++) {
duplicatedPartitions.add(finalPartition);
}
ExecutorService executors = Executors.newFixedThreadPool(taskNumber);
AtomicBoolean flag = new AtomicBoolean(false);
AtomicInteger count = new AtomicInteger(0);
AtomicInteger filesNumber = new AtomicInteger(0);

for (String part : duplicatedPartitions) {
executors.submit(new Runnable() {
@Override
public void run() {
try {
count.incrementAndGet();
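// Busy-wait until all workers have arrived so the subsequent reads overlap.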
while (true) {
if (count.get() == taskNumber) {
break;
}
}
Contributor: Should we add a CountDownLatch here so that all threads call tableMetadata.getAllFilesInPartition() around the same time? That way the test is deterministic.
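For illustration, a minimal self-contained sketch of that suggestion (the class name, pool sizing, and the elided getAllFilesInPartition call are placeholders, not code from this PR): every worker counts the latch down and then awaits it, so all workers are released at effectively the same moment.

  import java.util.concurrent.CountDownLatch;
  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;
  import java.util.concurrent.TimeUnit;

  public class LatchSketch {
    public static void main(String[] args) throws InterruptedException {
      final int taskNumber = 100;
      ExecutorService executors = Executors.newFixedThreadPool(taskNumber);
      CountDownLatch startGate = new CountDownLatch(taskNumber);
      for (int i = 0; i < taskNumber; i++) {
        executors.submit(() -> {
          startGate.countDown();          // announce this worker is ready
          try {
            startGate.await();            // released only once all workers are ready
            // tableMetadata.getAllFilesInPartition(new Path(part)) would run here
          } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
          }
        });
      }
      executors.shutdown();
      executors.awaitTermination(1, TimeUnit.MINUTES); // the meaningful timeout suggested below
    }
  }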

FileStatus[] files = tableMetadata.getAllFilesInPartition(new Path(part));
filesNumber.addAndGet(files.length);
assertEquals(1, files.length);
} catch (Exception e) {
flag.set(true);
}
}
});
}
executors.shutdown();
executors.awaitTermination(24, TimeUnit.HOURS);
Contributor: Let's set a meaningful timeout here (1 min should be more than enough to complete).

Contributor Author: Changed!

assertFalse(flag.get());
assertEquals(taskNumber, filesNumber.get());
}

private void doWriteInsertAndUpsert(HoodieTestTable testTable) throws Exception {
doWriteInsertAndUpsert(testTable, "0000001", "0000002", false);
}
@@ -18,9 +18,6 @@

package org.apache.hudi.metadata;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.fs.Path;
import org.apache.hudi.avro.HoodieAvroUtils;
import org.apache.hudi.avro.model.HoodieMetadataRecord;
import org.apache.hudi.avro.model.HoodieRestoreMetadata;
@@ -53,6 +50,10 @@
import org.apache.hudi.exception.TableNotFoundException;
import org.apache.hudi.io.storage.HoodieFileReader;
import org.apache.hudi.io.storage.HoodieFileReaderFactory;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.fs.Path;
import org.apache.log4j.LogManager;
import org.apache.log4j.Logger;

@@ -229,7 +230,7 @@ public List<Pair<String, Option<HoodieRecord<HoodieMetadataPayload>>>> getRecord
throw new HoodieIOException("Error merging records from metadata table for " + sortedKeys.size() + " key : ", ioe);
} finally {
if (!reuse) {
close(Pair.of(partitionFileSlicePair.getLeft(), partitionFileSlicePair.getRight().getFileId()));
Contributor: Let's also clean up the close method (doesn't seem like we need it).

Contributor Author: Aha, it is also used in:

  private void closePartitionReaders() {
    for (Pair<String, String> partitionFileSlicePair : partitionReaders.keySet()) {
      close(partitionFileSlicePair);
    }
    partitionReaders.clear();
  }

Contributor: Yeah, we should rewrite it to use closeReader instead:

  private void closePartitionReaders() {
    for (Pair<...> pair : partitionReaders.values()) {
      closeReader(pair);
    }
    partitionReaders.clear();
  }

closeReader(readers);
}
}
});
@@ -397,7 +398,12 @@ private Map<Pair<String, FileSlice>, List<String>> getPartitionFileSliceToKeysMa
* @return File reader and the record scanner pair for the requested file slice
*/
private Pair<HoodieFileReader, HoodieMetadataMergedLogRecordReader> getOrCreateReaders(String partitionName, FileSlice slice) {
return partitionReaders.computeIfAbsent(Pair.of(partitionName, slice.getFileId()), k -> openReaders(partitionName, slice));
if (reuse) {
  return partitionReaders.computeIfAbsent(Pair.of(partitionName, slice.getFileId()), k -> openReaders(partitionName, slice));
} else {
  return openReaders(partitionName, slice);
}
}
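For context, a hedged, self-contained sketch of the caching pattern this change adopts (Reader and ReaderCache are hypothetical stand-ins, not Hudi API): with reuse enabled, ConcurrentHashMap.computeIfAbsent atomically opens and caches at most one reader per key for all threads to share; with reuse disabled, each call opens a private reader that only its caller closes, so no thread can close a reader another thread is still using.

  import java.util.Map;
  import java.util.concurrent.ConcurrentHashMap;

  // Hypothetical stand-in for the (file reader, log record reader) pair.
  class Reader {
    Reader(String key) { /* open file handles for key */ }
    void close() { /* release file handles */ }
  }

  class ReaderCache {
    private final Map<String, Reader> cache = new ConcurrentHashMap<>();
    private final boolean reuse;

    ReaderCache(boolean reuse) {
      this.reuse = reuse;
    }

    Reader getOrCreate(String key) {
      // reuse == true: one shared reader per key, created atomically under contention.
      // reuse == false: a fresh private reader per call, owned and closed by the caller.
      return reuse ? cache.computeIfAbsent(key, Reader::new) : new Reader(key);
    }
  }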

private Pair<HoodieFileReader, HoodieMetadataMergedLogRecordReader> openReaders(String partitionName, FileSlice slice) {