
Conversation

@FangYongs
Contributor

In this PR we create a flink-table-store-micro-benchmarks module in flink-table-store-benchmark and add merge tree reader/writer benchmarks:

  1. In MergeTreeReaderBenchmark we first create a writer to write about 50*50000 records to the store, then measure the latency of a scan in the reader.
  2. In MergeTreeWriterBenchmark we measure the throughput of writes to the merge tree with compaction.
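The two measurements above (scan latency, write throughput) are driven by the JMH harness; stripped of JMH, the general shape of such a micro-benchmark can be sketched as below. This is only an illustration — the in-memory list stands in for the real merge-tree writer and reader, which are not shown here:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only (not the actual JMH benchmark): time a batch of
// writes to derive throughput, then time a full scan to derive latency.
public class MicroBenchSketch {
    public static void main(String[] args) {
        List<Integer> store = new ArrayList<>();
        int records = 50 * 50_000; // same record count as in the PR description

        long t0 = System.nanoTime();
        for (int i = 0; i < records; i++) {
            store.add(i); // stand-in for writer.write(kv)
        }
        long writeNanos = System.nanoTime() - t0;
        double throughput = records / (writeNanos / 1e9); // records per second

        long t1 = System.nanoTime();
        long sum = 0;
        for (int v : store) { // stand-in for a full reader scan
            sum += v;
        }
        long scanNanos = System.nanoTime() - t1;

        System.out.println("records=" + store.size()
                + " throughput(rec/s)=" + (long) throughput
                + " scanLatency(ms)=" + scanNanos / 1_000_000);
    }
}
```

A hand-rolled loop like this ignores JVM warm-up and dead-code elimination; JMH handles both, which is why the PR uses it.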

@Benchmark
public void write(KeyValueData data) throws Exception {
    // Note: sequenceNumber++ is not atomic, so this assumes single-threaded execution.
    kv.replace(data.key, sequenceNumber++, data.kind, data.value);
    writer.write(kv);
}
Contributor


It seems that there is a concurrency safety problem here: sequenceNumber++ is not atomic. Maybe we should prohibit the use of multiple threads in the benchmark.
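The race the reviewer points at is the unsynchronized sequenceNumber++ in the write benchmark. A minimal sketch of a thread-safe alternative using java.util.concurrent.atomic.AtomicLong follows; the class and method names are illustrative, not the table-store API:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the thread-safety concern: a plain `long` incremented with
// `sequenceNumber++` can lose updates under concurrent access, while
// AtomicLong.getAndIncrement() counts every increment exactly once.
public class SequenceNumberDemo {
    private static final AtomicLong sequenceNumber = new AtomicLong();

    static long nextSequenceNumber() {
        return sequenceNumber.getAndIncrement();
    }

    public static void main(String[] args) throws Exception {
        int threads = 4;
        int perThread = 10_000;
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    nextSequenceNumber();
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        // With AtomicLong, no increments are lost across the 4 threads.
        System.out.println(sequenceNumber.get()); // prints 40000
    }
}
```

Alternatively, a JMH benchmark can be pinned to a single thread with the @Threads(1) annotation, which is closer to what the reviewer suggests and avoids synchronization overhead in the measured path.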

@FangYongs force-pushed the FLINK_29702_mergetree_reader_writer_micro_benchmarks branch from 1c89611 to b5c0995 on October 25, 2022 01:27
@FangYongs
Contributor Author

FangYongs commented Oct 25, 2022

Hi @JingsongLi, I have split the reader/writer factory for compaction in the micro benchmarks and fixed the concurrency problem. Please review again, thanks!

Contributor

@JingsongLi JingsongLi left a comment


Looks good to me!

@JingsongLi JingsongLi merged commit 8af95e6 into apache:master Oct 26, 2022