This repository has been archived by the owner on Oct 18, 2023. It is now read-only.

Commit
bottomless: increase the max batch size to 10000
The reasoning is as follows: 10000 uncompressed frames weigh ~40MiB.
Gzip is expected to compress them into a ~20MiB file, while xz
can compress them down to ~800KiB. The previous limit of 500 frames
would make xz create a ~50KiB file, which is less than the minimum
128KiB that S3-like services charge for when writing to an object store.
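The sizing argument above can be checked with quick arithmetic. The sketch below assumes the default SQLite/libSQL page size of 4096 bytes per WAL frame payload, and treats the compression ratios implied by the commit message (~2x for gzip, ~50x for xz) as illustrative assumptions rather than measurements:

```rust
// Back-of-the-envelope check of the commit's sizing argument.
// Assumption: one frame carries one 4096-byte page payload.
const FRAME_BYTES: u64 = 4096;

// Uncompressed size of a batch of `frames` frames, in bytes.
fn batch_bytes(frames: u64) -> u64 {
    frames * FRAME_BYTES
}

fn main() {
    let new_batch = batch_bytes(10_000); // ~40 MiB uncompressed
    let old_batch = batch_bytes(500);    // ~2 MiB uncompressed

    println!("new batch: ~{} MiB uncompressed", new_batch / (1024 * 1024));
    println!("old batch: ~{} KiB uncompressed", old_batch / 1024);

    // With the ~50x xz ratio implied by the commit message (40 MiB -> ~800 KiB),
    // the old 500-frame batch would shrink to roughly 40 KiB, below the 128 KiB
    // minimum billable object size the message mentions.
    println!("old batch after ~50x xz: ~{} KiB", old_batch / 50 / 1024);
}
```

Under these assumptions the old batch compresses to roughly 40KiB, consistent with the "50KiB" figure in the commit message.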
psarna committed Oct 16, 2023
1 parent 4db9ff9 commit 7512319
Showing 1 changed file with 1 addition and 1 deletion.
bottomless/src/replicator.rs (1 addition, 1 deletion):

```diff
@@ -171,7 +171,7 @@ impl Options {
         let secret_access_key = env_var("LIBSQL_BOTTOMLESS_AWS_SECRET_ACCESS_KEY").ok();
         let region = env_var("LIBSQL_BOTTOMLESS_AWS_DEFAULT_REGION").ok();
         let max_frames_per_batch =
-            env_var_or("LIBSQL_BOTTOMLESS_BATCH_MAX_FRAMES", 500).parse::<usize>()?;
+            env_var_or("LIBSQL_BOTTOMLESS_BATCH_MAX_FRAMES", 10000).parse::<usize>()?;
         let s3_upload_max_parallelism =
             env_var_or("LIBSQL_BOTTOMLESS_S3_PARALLEL_MAX", 32).parse::<usize>()?;
         let restore_transaction_page_swap_after =
```
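The configuration pattern shown in the diff is an env-var-with-default lookup followed by a parse. A minimal self-contained sketch of that pattern, with a hypothetical re-creation of the `env_var_or` helper (its real signature in the repository may differ):

```rust
use std::env;

// Hypothetical stand-in for the env_var_or helper used in the diff:
// read an environment variable, falling back to a default's string form.
fn env_var_or(key: &str, default: impl ToString) -> String {
    env::var(key).unwrap_or_else(|_| default.to_string())
}

fn main() -> Result<(), std::num::ParseIntError> {
    // Mirrors the changed line: default batch size of 10000 frames,
    // overridable via LIBSQL_BOTTOMLESS_BATCH_MAX_FRAMES.
    let max_frames_per_batch =
        env_var_or("LIBSQL_BOTTOMLESS_BATCH_MAX_FRAMES", 10000).parse::<usize>()?;
    println!("max_frames_per_batch = {}", max_frames_per_batch);
    Ok(())
}
```

Note that with this pattern a malformed override (e.g. `LIBSQL_BOTTOMLESS_BATCH_MAX_FRAMES=abc`) surfaces as a parse error at startup rather than silently falling back to the default.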
