kv: use header.TargetBytes for ExportRequest #69435
Comments
cc @aliher1911

This would be addressed by #70763, which can disable pagination to subsequent ranges with small limits.
…ortRequest

This change passes the value of `kv.bulk_sst.max_allowed_overage` as an argument in the ExportRequest instead of reading it at the time of evaluation. This allows tenants to configure this setting to a desired value instead of always using the system tenant's cluster setting value.

Informs: cockroachdb#69435

Release note: None

cc @cockroachdb/disaster-recovery
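The pattern described above can be sketched as follows. This is a minimal, self-contained illustration, not the real CockroachDB code: the struct fields, the `makeExportRequest` helper, and the numeric defaults are all hypothetical stand-ins for the actual `kvpb` types and cluster-setting plumbing.

```go
package main

import "fmt"

// ExportRequest is a simplified, hypothetical stand-in for the real
// request proto; only the fields relevant to this change are shown.
type ExportRequest struct {
	TargetSize int64 // desired SST size (kv.bulk_sst.target_size)
	MaxOverage int64 // allowed overage (kv.bulk_sst.max_allowed_overage)
}

// makeExportRequest captures the tenant's own setting values at request
// construction time, so evaluation on the KV side no longer needs to
// consult the system tenant's cluster settings.
func makeExportRequest(targetSize, maxOverage int64) ExportRequest {
	return ExportRequest{TargetSize: targetSize, MaxOverage: maxOverage}
}

func main() {
	// Illustrative values only: 16 MiB target, 64 MiB allowed overage.
	req := makeExportRequest(16<<20, 64<<20)
	fmt.Println(req.TargetSize, req.MaxOverage)
}
```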
Previously, ExportRequest would set its `TargetFileSize` field to the target size of each SST it expected as part of the response. Additionally, it set `header.TargetBytes` to a sentinel value of 1 to force every ExportRequest to paginate regardless of whether the generated SST was of the target file size or not.

This change teaches ExportRequest to stop setting the `TargetFileSize` field and instead exclusively use `header.TargetBytes` to control the target size of the SST returned as part of the ExportResponse. To prevent DistSender from sending an ExportRequest to a subsequent range with a small, remaining TargetBytes, we also set `header.ReturnOnRangeBoundary` to true.

Aside from a more intuitive use of DistSender limits, there are no changes expected to the pagination of ExportRequests sent during a backup. We will not aggregate ExportRequests across range boundaries and will continue to generate SST files of a size controlled by `kv.bulk_sst.target_size`.

For mixed-version compatibility, if not all nodes in the cluster are running a 23.1 binary we will fall back to the legacy behaviour described above.

Informs: cockroachdb#69435

Release note: None
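The two header configurations described above can be sketched as follows. The `Header` struct and `exportHeader` helper are hypothetical, simplified stand-ins for the real `kvpb.Header` and the actual version-gating code; only the fields under discussion are modeled.

```go
package main

import "fmt"

// Header is a simplified stand-in for kvpb.Header, keeping only the
// two fields this change is concerned with.
type Header struct {
	TargetBytes           int64
	ReturnOnRangeBoundary bool
}

// exportHeader mirrors the behaviour described above. On a cluster
// where all nodes run the 23.1 binary, the target SST size goes into
// TargetBytes, and ReturnOnRangeBoundary stops DistSender from carrying
// a small remaining byte budget into the next range. Otherwise we fall
// back to the legacy sentinel TargetBytes=1, which forces pagination on
// every request (with TargetFileSize set separately on the request).
func exportHeader(targetSize int64, allNodes231 bool) Header {
	if allNodes231 {
		return Header{TargetBytes: targetSize, ReturnOnRangeBoundary: true}
	}
	return Header{TargetBytes: 1} // legacy: paginate after every request
}

func main() {
	fmt.Println(exportHeader(16<<20, true))  // new behaviour
	fmt.Println(exportHeader(16<<20, false)) // legacy fallback
}
```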
Now that we don't have cloud-file writing anymore, it is confusing that ExportRequest has a `TargetFileSize` while its header also has `header.TargetBytes`, and both need to be set, I believe, to paginate correctly?

Instead, should we have all callers just put their desired response size in `header.TargetBytes` and deprecate the field in ExportRequest? I think we'd then just always set `reply.NumBytes` to `header.TargetBytes`, since we always want, I think, for a caller to immediately get whatever file we produced instead of distsender trying to go back for more (but not much more, since it'll be TargetBytes - NumBytes) and waiting to stitch them into a multi-file reply for the caller.

Jira issue: CRDB-9595
Epic CRDB-19061
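The proposal in the issue body can be sketched as follows. This is an illustrative toy, assuming simplified stand-ins for the real `kvpb` request and response types; `evalExport` and its parameters are hypothetical names, not the actual evaluation code.

```go
package main

import "fmt"

// Simplified stand-ins for the real kvpb types.
type Header struct{ TargetBytes int64 }
type ExportResponse struct{ NumBytes int64 }

// evalExport sketches the idea above: report NumBytes as the entire
// TargetBytes budget so that DistSender treats the limit as exhausted
// and hands the produced SST straight back to the caller, rather than
// resuming on the next range with the small remaining budget
// (TargetBytes - NumBytes) and stitching a multi-file reply.
func evalExport(h Header, producedSSTSize int64) ExportResponse {
	_ = producedSSTSize // the produced SST is returned regardless of size
	return ExportResponse{NumBytes: h.TargetBytes}
}

func main() {
	// Even though the SST produced is only 4 MiB, the response consumes
	// the full 16 MiB budget, so pagination stops here.
	resp := evalExport(Header{TargetBytes: 16 << 20}, 4<<20)
	fmt.Println(resp.NumBytes)
}
```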