RCORE-1969 Fix bundled encrypted Realm regression #7535
Conversation
src/realm/alloc_slab.cpp
// the file could have been produced on a device with a different
// page size than our own so don't expect the size to be aligned
size = round_up_to_page_size(size);
This leaves the readers with a view that is larger than what is backed by the file. I think this should be okay, because eventually we will resize the file and call attach_file()
again. However, I'm not 100% sure about what other side effects this may have between this point and that.
Based on the test failures, setting the size to be larger than the file size isn't valid on Windows.
Turns out that the windows errors were because I was (incorrectly) rounding up even for non-encrypted files, and this caused our Windows builders to run out of disk space.
I looked into it further and haven't found anything to dispute rounding up here. In other places, the memory mapping is always rounded up to a page size regardless of the file size. We should be fine as long as we don't read from that memory, which shouldn't happen (if it did, that would indicate a different problem).

The other issue I could think of is where the rounded-up section becomes a valid block due to a different process extending the file. But that isn't an issue here because we are the initiator process under the interprocess control lock, and we do the file extension ourselves immediately afterwards if necessary.
Pull Request Test Coverage Report for Build james.stone_519 (Coveralls)
    return static_cast<size_t>(size);
}

// This variable exists such that page_size() can return the page size without having to make any system calls.
// It could also have been a static local variable, but Valgrind/Helgrind gives a false error on that.
- size_t cached_page_size = get_page_size();
+ std::atomic<size_t> cached_page_size = get_page_size();
Probably doesn't matter, but page_size() should do relaxed reads of this.
@@ -718,16 +718,9 @@ bool SlabAlloc::align_filesize_for_mmap(ref_type top_ref, Config& cfg)
// check if online compaction allows us to shrink the file:
if (top_ref) {
    // Get the expected file size by looking up logical file size stored in top array
    constexpr size_t max_top_size = (Group::s_file_size_ndx + 1) * 8 + sizeof(Header);
Not sure why the Header is part of this calculation? Plus, multiplication by 8 only gives the true size if the top array width is maxed out. This is fine for a "maximum" but causes problems later on because we actually use this as the size to read from the file.
I assume the use of sizeof(Header) (24) should have actually been NodeHeader::header_size (8).
size_t top_offset = top_ref - top_page_base;
size_t map_size = std::min(max_top_size + top_offset, size - top_page_base);
File::Map<char> map_top(m_file, top_page_base, File::access_ReadOnly, map_size, 0, m_write_observer);
realm::util::encryption_read_barrier(map_top, top_offset, max_top_size);
Here we were requesting the max_top_size, which may be more than the encrypted mappings hold (or the file, for that matter) if the top_ref happened to be very close to the end of the file. If the "top ref + size" crossed a page_size, then the File::Map won't have the mappings set up, and we'll hit an assertion. If the "top ref + size" crossed the end of the file, then we'd get a DecryptionFailed exception.

All of this is rather tricky to calculate accurately because we'd first have to map/decrypt the top_ref header, and then, based on the array width, calculate the number of bytes to the Group::s_file_size_ndx position and then map/decrypt that before we can read the data. Fortunately, all of this is already handled by the standard code to read from an array, so we can just use that existing machinery. This also means that we'd use our existing mappings rather than creating a new one here just for this.
… different page size
…ulate a different page_size we need to remove the assumption that mmap addresses are page-aligned.
Force-pushed d7ee5e7 to 38c340c (Compare)
src/realm/util/file_mapper.cpp
@@ -48,6 +48,7 @@
#include <sstream>
#include <regex>
#include <thread>
+ #include <iostream>
Doesn't appear to be used.
Amazing!
Fixes #7322
This is also suspected to fix: #7252, #7319, #6752
There was a regression due to the changes in #6807. Those changes moved the file expansion out of SlabAlloc::attach_file, but this had the side effect that the actual file size is used when updating the reader view rather than a multiple of the page_size(). Users have reported this regression since Swift v10.42.0, which matches the release of the change in core version v13.17.1.
I had originally thought that we did not support encrypted bundled Realms across devices of different page size, but that does not seem to be a limitation that we have, as the encrypted file layout is actually not reliant on the page size. The page size only affects the in-memory buffers. I'm marking this as a "fix" because users reported that shipping bundled encrypted Realms used to work, though I don't know that we ever actually committed to supporting that. We certainly didn't have any tests for it, and the test that I added turned up a few other related issues.
☑️ ToDos
- bindgen/spec.yml, if public C++ API changed