Here are the sizes of a LinkBench data load with 1.5B max ids.
Without row checksum: 570959236
With row checksum enabled: 705439184
A 23.6% space increase is too much. This is because the current row checksum format adds 9 bytes to the value of each index entry: a 1-byte flag, a 4-byte CRC32 of the key, and a 4-byte CRC32 of the value.
How about optimizations like the ones below?
Add a new checksum format using CRC8 instead of CRC32. This reduces the per-row overhead from 9 bytes to 3 bytes.
Add a rocksdb_checksum_row_pct global variable to control what percentage of rows are checksummed. For example, setting it to 10 enables checksums for only 10% of rows, which cuts the checksum space overhead by 90%. This approach may miss some corruptions, but in practice a corruption bug affects many more than 10 rows, so sooner or later the corruption would still be detected.
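The two ideas above could be sketched as follows. This is only an illustration under assumed details, not the MyRocks implementation: `crc8` here uses the common polynomial 0x07 with a zero initial value, and `should_checksum_row` samples rows deterministically by hashing a hypothetical row id so a given row is consistently covered or not.

```cpp
#include <cstdint>
#include <cstddef>

// Sketch of a CRC-8 (polynomial 0x07, init 0). Storing a 1-byte CRC of the
// key and a 1-byte CRC of the value, plus the 1-byte flag, gives the 3-byte
// per-row overhead proposed above.
static uint8_t crc8(const uint8_t* data, size_t len) {
  uint8_t crc = 0;
  for (size_t i = 0; i < len; ++i) {
    crc ^= data[i];
    for (int bit = 0; bit < 8; ++bit)
      crc = (crc & 0x80) ? static_cast<uint8_t>((crc << 1) ^ 0x07)
                         : static_cast<uint8_t>(crc << 1);
  }
  return crc;
}

// Sketch of rocksdb_checksum_row_pct: checksum roughly `pct` percent of
// rows. Hashing the row id (Knuth multiplicative hash, an assumption here)
// keeps the decision stable across writes of the same row.
static bool should_checksum_row(uint64_t row_id, unsigned pct) {
  return ((row_id * 2654435761ULL) >> 32) % 100 < pct;
}
```

With pct=100 every row is checksummed (current behavior); with pct=0 none are, and intermediate values trade detection coverage for space.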
Issue by yoshinorim
Monday Jul 27, 2015 at 14:49 GMT
Originally opened as MySQLOnRocksDB#93