Optimize fast compaction memory#12340

Merged
OneSizeFitsQuorum merged 1 commit into apache:master from choubenson:optimizeFastCompactionMemory on Apr 19, 2024
Conversation


@choubenson (Contributor) commented Apr 15, 2024

This PR optimizes the memory usage of FastCompaction in two ways:

  • Release the compressed chunk data buffer once the chunk has been deserialized into the page queue.
  • Release the compressed page data buffer once the page has been deserialized into points.
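The PR itself does not include the code inline; the idea behind both bullets can be sketched as follows (class and method names here are hypothetical illustrations, not the actual IoTDB FastCompaction classes). The point is to drop the reference to the compressed buffer as soon as the decompressed form exists, rather than holding both until the reader is closed:

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the buffer-release optimization.
class ChunkReaderSketch {
    // Holds the compressed chunk bytes after they are read from the TsFile.
    private ByteBuffer compressedChunkData;

    ChunkReaderSketch(byte[] raw) {
        this.compressedChunkData = ByteBuffer.wrap(raw);
    }

    /** Deserialize all pages into a queue, then release the compressed buffer. */
    Deque<byte[]> deserializeToPageQueue() {
        Deque<byte[]> pageQueue = new ArrayDeque<>();
        // Decompress pages out of compressedChunkData into the queue
        // (a single page here; real chunks may contain several).
        pageQueue.add(decompress(compressedChunkData));
        // Key change: null the reference as soon as the pages are
        // materialized, so the GC can reclaim the compressed bytes
        // while compaction continues working on the decompressed pages.
        compressedChunkData = null;
        return pageQueue;
    }

    // Stand-in for real decompression (e.g. GZIP in the experiment below).
    private byte[] decompress(ByteBuffer buf) {
        byte[] out = new byte[buf.remaining()];
        buf.get(out);
        return out;
    }

    boolean compressedBufferReleased() {
        return compressedChunkData == null;
    }
}
```

The same pattern applies one level down for the page-to-points step: once a page's points are decoded, its compressed page buffer can be dropped.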

Experiment

Experimental Scenario:
There is one sequence file and two unsequence files, all of which completely overlap. Each file contains 5 devices, each with 80 time series. Each time series has a single chunk, each chunk has a single page, and each page holds 250,000 data points. Every time series uses GZIP compression and PLAIN encoding. The experiment simulates a three-way compaction, with 80 sub-threads compacting the 80 time series concurrently.

Experimental Procedure:
Using binary search, gradually adjust the system's MAX_HEAP_SIZE parameter until the difference between the upper and lower bounds is less than 2 MB, then take that bound as the memory usage of the compaction algorithm.
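The measurement procedure above is a standard binary search over heap sizes; a minimal sketch (the predicate standing in for "run the compaction at this heap size and see whether it completes without OOM" is hypothetical):

```java
import java.util.function.IntPredicate;

// Hypothetical sketch of the MAX_HEAP_SIZE binary search used to measure
// the compaction algorithm's memory footprint.
class HeapSizeSearch {
    /**
     * Returns the smallest heap size (in MB) for which runSucceeds is true,
     * narrowing the bounds until they are less than 2 MB apart.
     */
    static int minimalHeap(int lowMb, int highMb, IntPredicate runSucceeds) {
        // Invariant: the run fails at lowMb and succeeds at highMb.
        while (highMb - lowMb >= 2) {
            int mid = (lowMb + highMb) / 2;
            if (runSucceeds.test(mid)) {
                highMb = mid;   // compaction fits: try a smaller heap
            } else {
                lowMb = mid;    // out of memory: need a larger heap
            }
        }
        return highMb;
    }
}
```

For example, if a compaction run needs at least 1187 MB to succeed, `minimalHeap(0, 2048, mb -> mb >= 1187)` converges to 1187.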

Experimental Results:
Before optimization, the FastCompaction algorithm occupies 1327 MB; after optimization, it occupies 1187 MB.
