Serialization backwards compatibility w/ buffer size
This adds proper server-side conditional handling for buffer sizes. In
the original patch, [#2115](https://github.com/TileDB-Inc/TileDB/pull/2115),
support was added for sending the original user-requested buffer sizes.
Compatibility was maintained for clients, but the server path missed
handling older clients in the deserialization path.
Shelnutt2 committed Mar 12, 2021
1 parent 727e876 commit 297af6f
Showing 2 changed files with 27 additions and 9 deletions.
2 changes: 1 addition & 1 deletion HISTORY.md
@@ -18,7 +18,7 @@
## Bug fixes

* Corrected a bug where sparse cells may be incorrectly returned using string dimensions. [#2125](https://github.com/TileDB-Inc/TileDB/pull/2125)
- * Always use original buffer size in serialized read queries serverside. [#2115](https://github.com/TileDB-Inc/TileDB/pull/2115)
+ * Always use original buffer size in serialized read queries serverside. [#2115](https://github.com/TileDB-Inc/TileDB/pull/2115) [#2128](https://github.com/TileDB-Inc/TileDB/pull/2128)
* Fix segfault in serialized queries when partition is unsplittable [#2120](https://github.com/TileDB-Inc/TileDB/pull/2120)

## API additions
34 changes: 26 additions & 8 deletions tiledb/sm/serialization/query.cc
@@ -678,7 +678,7 @@ Status query_from_capnp(
// We use the query_buffer directly in order to get the original buffer
// sizes This avoid a problem where an incomplete query will change the
// users buffer size to the smaller results and we end up not being able
- // to correctly calcuate if the new results can fit into the users buffer
+ // to correctly calculate if the new results can fit into the users buffer
if (var_size) {
if (!nullable) {
existing_offset_buffer = static_cast<uint64_t*>(query_buffer.buffer_);
@@ -707,7 +707,7 @@ Status query_from_capnp(
}
}
} else {
- // For writes we need to use get_buffer
+ // For writes we need to use get_buffer and clientside
if (var_size) {
if (!nullable) {
RETURN_NOT_OK(query->get_buffer(
@@ -904,12 +904,30 @@ Status query_from_capnp(
// submit's result size not the original user set buffer size. To work
// around this we revert the server to always use the full original user
// requested buffer sizes.
- attr_state->fixed_len_size =
-     buffer_header.getOriginalFixedLenBufferSizeInBytes();
- attr_state->var_len_size =
-     buffer_header.getOriginalVarLenBufferSizeInBytes();
- attr_state->validity_len_size =
-     buffer_header.getOriginalValidityLenBufferSizeInBytes();
+ // We check for > 0 for fallback for clients older than 2.2.5
+ if (buffer_header.getOriginalFixedLenBufferSizeInBytes() > 0) {
+   attr_state->fixed_len_size =
+       buffer_header.getOriginalFixedLenBufferSizeInBytes();
+ } else {
+   attr_state->fixed_len_size =
+       buffer_header.getFixedLenBufferSizeInBytes();
+ }
+
+ if (buffer_header.getOriginalVarLenBufferSizeInBytes() > 0) {
+   attr_state->var_len_size =
+       buffer_header.getOriginalVarLenBufferSizeInBytes();
+ } else {
+   attr_state->var_len_size = buffer_header.getVarLenBufferSizeInBytes();
+ }
+
+ if (buffer_header.getOriginalValidityLenBufferSizeInBytes() > 0) {
+   attr_state->validity_len_size =
+       buffer_header.getOriginalValidityLenBufferSizeInBytes();
+ } else {
+   attr_state->validity_len_size =
+       buffer_header.getValidityLenBufferSizeInBytes();
+ }

attr_state->fixed_len_data.swap(offsets_buff);
attr_state->var_len_data.swap(varlen_buff);
attr_state->validity_len_data.swap(validitylen_buff);
