To reduce memory usage while dumping a very large amount of data (e.g. a 20TB physical disk) to a single segment, the chunk offset table should be split into multiple tables that are written periodically into the Zff container. Currently, the full table is cached in memory while dumping the data; a single table entry needs 2*8 bytes, so with a chunk size of 32kB this requires about 500MB of memory per TB of data.
At the end of the segment, there should be an additional table containing the offsets of these chunk-offset tables.
Because these tables are sorted maps (HashMaps or BTreeMaps), they can vary in size.
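A minimal sketch of the idea in Rust (not the actual zff API; `MAX_ENTRIES_PER_TABLE`, `flush_offset_table`, `write_segment` and the on-disk encoding are illustrative assumptions): partial chunk-offset tables are flushed into the segment once they reach a threshold, and the offsets of all flushed tables are written as a final lookup table at the end of the segment.

```rust
use std::collections::BTreeMap;
use std::io::{Seek, Write};

/// Maximum number of entries held in memory before a partial
/// chunk-offset table is flushed to the segment (hypothetical value).
const MAX_ENTRIES_PER_TABLE: usize = 1_000_000;

/// Writes a (chunk number -> offset) map into the segment and returns the
/// offset at which the table starts. The encoding is illustrative only:
/// entry count followed by little-endian u64 pairs.
fn flush_offset_table<W: Write + Seek>(
    writer: &mut W,
    table: &BTreeMap<u64, u64>,
) -> std::io::Result<u64> {
    let table_offset = writer.stream_position()?;
    writer.write_all(&(table.len() as u64).to_le_bytes())?;
    for (chunk_no, offset) in table {
        writer.write_all(&chunk_no.to_le_bytes())?;
        writer.write_all(&offset.to_le_bytes())?;
    }
    Ok(table_offset)
}

/// Dump loop sketch: chunks are written, their offsets are collected in a
/// small in-memory table, and the table is flushed whenever it grows
/// beyond the threshold. The offsets of all flushed tables are kept in
/// `table_offsets` and written as a final lookup table at segment end.
fn write_segment<W: Write + Seek>(
    writer: &mut W,
    chunks: impl Iterator<Item = Vec<u8>>,
) -> std::io::Result<()> {
    let mut current_table: BTreeMap<u64, u64> = BTreeMap::new();
    // first chunk number of each partial table -> offset of that table
    let mut table_offsets: BTreeMap<u64, u64> = BTreeMap::new();

    for (chunk_no, chunk) in chunks.enumerate() {
        let chunk_offset = writer.stream_position()?;
        writer.write_all(&chunk)?;
        current_table.insert(chunk_no as u64, chunk_offset);

        if current_table.len() >= MAX_ENTRIES_PER_TABLE {
            let first_chunk_no = *current_table.keys().next().unwrap();
            let table_offset = flush_offset_table(writer, &current_table)?;
            table_offsets.insert(first_chunk_no, table_offset);
            current_table.clear();
        }
    }

    // Flush the last (possibly smaller) table.
    if !current_table.is_empty() {
        let first_chunk_no = *current_table.keys().next().unwrap();
        let table_offset = flush_offset_table(writer, &current_table)?;
        table_offsets.insert(first_chunk_no, table_offset);
    }

    // Additional table at the end of the segment: offsets of all
    // chunk-offset tables, so a reader can locate them without
    // caching everything in memory.
    flush_offset_table(writer, &table_offsets)?;
    Ok(())
}
```

With this layout, peak memory for the offset data is bounded by `MAX_ENTRIES_PER_TABLE` plus one entry per flushed table, instead of one entry per chunk of the whole dump.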