implement CAutoFile via std::fstream #3277
Conversation
I see no problems with the implementation, but I would suggest changing
to
Making this change will require removing the fstream file member variable; consequently you will have to replace all instances of "file." with "fstream::". This allows the underlying fstream to be used in spite of being wrapped by a CAutoFile, if and only if it is referenced as an fstream and not as a CAutoFile. In the end you could add it to an array of fstreams, while ensuring explicit CAutoFile references are forced to use the RAII function versions.
@brandondahler Thanks for that suggestion, I'm going to update this pull. Do you have an idea how I could add back FileCommit() while staying compatible with std::fstream? I guess a ::flush() is not enough...
- implement CAutoFile via std::fstream and use it in the code
- unify log/error messages for serializing/deserializing exceptions
- add debug and benchmark messages for writing/reading block/undo files
- remove boost::path member from CAddrDB class
- add new helper functions GetBlockFile() and GetUndoFile()
Automatic sanity-testing: PASSED, see http://jenkins.bluematt.me/pull-tester/5ee9baec85bcc972853af954967ee6b24bdeff2c for binaries and test log.
@Diapolo I've researched this for some other changes to move everything to fstream. It appears that there is no easy way to get the fileno from a C++ fstream, and likewise there's no way to directly request that the OS flush the data to disk. The best bet that I've found is doing:
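The snippet itself is not preserved in this thread. One workaround along the lines being discussed (my own hedged sketch, POSIX-only, with an assumed helper name) is to flush the stream and then fsync a separately opened descriptor to the same path:

```cpp
// Sketch: std::fstream exposes no file descriptor, so there is no direct
// fsync() on the stream. Workaround: flush the stream's buffer to the OS,
// then open a second descriptor to the same path and fsync that.
// POSIX-only; FlushToDisk is a hypothetical helper name.
#include <fstream>
#include <string>
#include <fcntl.h>
#include <unistd.h>

bool FlushToDisk(std::fstream& stream, const std::string& path)
{
    stream.flush();                         // stream buffer -> OS cache
    if (!stream.good())
        return false;
    int fd = open(path.c_str(), O_WRONLY);  // second descriptor, same file
    if (fd < 0)
        return false;
    bool ok = (fsync(fd) == 0);             // OS cache -> disk
    close(fd);
    return ok;
}
```

Note the cost: an extra open()/close() per commit, and no guarantee the two handles refer to the same file if the path is renamed in between.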
That being said, I question the need to request/force the OS to flush the cache buffers to disk. It seems like we are trying to creep into the operating system's responsibilities instead of accepting that the OS may or may not follow our requests at the time that we make them.
Being able to flush block files to disk is essential for consistency of on-disk structures (you don't want to make a database update that refers to a block being on disk before knowing that block is actually on disk).
This does not look like an improvement. |
I have a slight preference for the low-level file functions over fstream implementations here, because their warts are a little bit more apparent to me than those of particular C++ libraries. And this change did not help with fixing the MacOSX corruption either.
AFAIK no one ever tried this anyway... @brandondahler Isn't |
If considered useful, I'm open to feedback and comments. I've been using this for ages in my local build and never had any problems with it. I know that currently there is no "flush to disk" when downloading blocks, but perhaps this pull can help investigate the Mac corruption problems.