
implement CAutoFile via std::fstream #3277

Closed
wants to merge 1 commit into from

Conversation

Diapolo

@Diapolo Diapolo commented Nov 18, 2013

  • implement CAutoFile via std::fstream and use it in the code
  • unify log/error messages for serializing/deserializing exceptions
  • add debug and benchmark messages for writing/reading block/undo files
  • remove boost::path member from CAddrDB class
  • add new helper functions GetBlockFile() and GetUndoFile()

If considered useful, I'm open to feedback and comments. I've been using this for ages with my local build and have never had any problems with it. I know that there is currently no "flush to disk" when downloading blocks, but perhaps this pull can help with investigating the Mac corruption problems.

@brandondahler
Contributor

I see no problems with the implementation, but I would suggest changing

class CAutoFile

to

class CAutoFile : protected std::fstream

Making this change will require removing the fstream file member variable, which in turn means replacing all instances of "file." with "fstream::".

This allows the underlying fstream to be used in spite of being wrapped by a CAutoFile, if and only if it is referenced as an fstream and not a CAutoFile. In the end you could add it to an array of fstreams, while ensuring that explicit CAutoFile references are forced to use the RAII function versions.
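The suggested structure might look roughly like the following. This is a hypothetical sketch, not Bitcoin Core's actual class: the `stream()` accessor and `write_bytes()` helper are illustrative names, and it assumes the C++11 `std::fstream` string constructor.

```cpp
#include <fstream>
#include <stdexcept>
#include <string>

// Sketch of the suggested change: CAutoFile inherits protected from
// std::fstream, so the underlying stream can still be handed out where a
// caller explicitly asks for one, while plain CAutoFile references only
// see the RAII wrapper's interface.
class CAutoFile : protected std::fstream
{
public:
    CAutoFile(const std::string& path, std::ios_base::openmode mode)
        : std::fstream(path, mode)
    {
        if (!is_open())
            throw std::runtime_error("CAutoFile: failed to open " + path);
    }

    // std::fstream's destructor closes the file: RAII for free.

    // Expose the underlying stream only on explicit request.
    std::fstream& stream() { return *this; }

    // Example of a wrapped serialization-style helper.
    void write_bytes(const char* data, std::streamsize n)
    {
        std::fstream::write(data, n);
        if (!good())
            throw std::runtime_error("CAutoFile: write failed");
    }
};
```

With this shape, code that needs raw stream access must go through `stream()` deliberately, while everything else is confined to the checked helpers.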

@Diapolo
Author

Diapolo commented Dec 1, 2013

@brandondahler Thanks for that suggestion, I'm going to update this pull. Have you got a good idea for how I could add back FileCommit() while staying compatible with std::fstream? I guess a ::flush() is not enough...

@BitcoinPullTester

Automatic sanity-testing: PASSED, see http://jenkins.bluematt.me/pull-tester/5ee9baec85bcc972853af954967ee6b24bdeff2c for binaries and test log.
This test script verifies pulls every time they are updated. It, however, dies sometimes and fails to test properly. If you are waiting on a test, please check timestamps to verify that the test.log is moving at http://jenkins.bluematt.me/pull-tester/current/
Contact BlueMatt on freenode if something looks broken.

@brandondahler
Contributor

@Diapolo I've researched this for some other changes to move everything to fstream. It appears that there is no easy way to get the fileno from a C++ fstream, and likewise there's no way to directly request that the OS flush the data to disk.

The best bet that I've found is doing:

rdbuf()->pubsync()

That being said, I question the need to request/force the OS to flush the cache buffers to disk. It seems like we are trying to creep into the operating system's responsibilities instead of accepting that the OS may or may not follow our requests at the time that we make them.
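For context, a minimal sketch of what pubsync() actually buys you, assuming a standard-conforming implementation: it flushes the stream's own buffer into the operating system (comparable to fflush on a FILE*), but it does not guarantee the OS commits the data to the physical disk. A durable commit needs a file descriptor for fsync/FlushFileBuffers, which std::fstream does not expose. The function name below is illustrative.

```cpp
#include <fstream>

// Flush the C++ stream buffer out to the operating system.
// Note: this is NOT a durable commit; after this returns, the data may
// still sit in the OS page cache rather than on disk.
void flush_to_os(std::fstream& f)
{
    // rdbuf() returns a pointer to the stream's basic_filebuf;
    // pubsync() returns -1 on failure, 0 on success.
    if (f.rdbuf()->pubsync() != 0)
        throw std::ios_base::failure("pubsync failed");
}
```

This is also why pubsync() applies to writing, not just reading: for an output stream it drains the put area of the stream buffer.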

@sipa
Member

sipa commented Dec 1, 2013

Being able to flush block files to disk is essential for consistency of on-disk structures (you don't want to make a database update that refers to a block being on disk before knowing that block is actually on disk).

@jgarzik
Contributor

jgarzik commented Dec 13, 2013

This does not look like an improvement.

@laanwj
Member

laanwj commented Dec 13, 2013

I have a slight preference for the low-level file functions over the fstream implementations here, because their warts are a little more apparent to me than those of particular C++ libraries.

And this change did not help with fixing the MacOSX corruption either.

@laanwj laanwj closed this Dec 13, 2013
@Diapolo
Author

Diapolo commented Dec 13, 2013

AFAIK no one ever tried this anyway...

@brandondahler Isn't rdbuf()->pubsync() for reading from files?

@Diapolo Diapolo deleted the use_fstream branch January 30, 2014 08:22
Bushstar pushed a commit to Bushstar/omnicore that referenced this pull request Apr 8, 2020
…nTestFramework (bitcoin#3277)

* [tests] Change feature_block.py to use BitcoinTestFramework

* [tests] Fix flake8 warnings in feature_block.py

* [tests] Tidy up feature_block.py

- move all helper methods to the end
- remove block, create_tx and create_and_sign_tx shortcuts
- remove --runbarelyexpensive option, since it defaults to True and it's
unlikely that anyone ever runs the test with this option set to false.

* [tests] Add logging to feature_block.py

* [tests] Improve assert message when wait_until() fails

* Merge bitcoin#13048: [tests] Fix feature_block flakiness

c1d7420 [tests] Fix feature_block flakiness (John Newbery)

Pull request description:

  feature_block.py occasionally fails on Travis. I believe this is due to
  a race condition when reconnecting to bitcoind after a subtest that
  expects disconnection. If the test runs ahead and sends the INV for the
  subsequent test before we've received the initial sync getheaders, then
  we may end up sending two headers messages - one as a response to the
  initial sync getheaders and one in response to the INV getheaders. If
  both of those headers fail validation with a DoS score of 50 or higher,
  then we'll unexpectedly be disconnected.

  There is only one validation failure that has a DoS score between 50 and
  100, which is high-hash. That's why the test is failing immediately
  after the "Reject a block with invalid work" subtest.

  Fix is to wait for the initial getheaders from the peer before we
  start populating our blockstore. That way we won't have any invalid
  headers to respond to it with.

Tree-SHA512: dc17d795fcfaf0f8c0bf1e9732b5e11fbc8febbfafba4c231b7c13a5404a2c297dcd703a7a75bc7f353c893e12efc87f424f2201abd47ba5268af32d4d2e841f

* Temporarily rename MAX_BLOCK_SIZE -> MAX_BLOCK_BASE_SIZE

We'll undo this after the next commit. This avoids many merge conflicts and
makes reviewing easier.

* Rename MAX_BLOCK_BASE_SIZE back to MAX_BLOCK_SIZE

* Use DoS score of 100 for bad-blk-sigops

This was accidentally changed to 10 while backporting bitcoin#7287 and causes
test failures in p2p-fullblocktest.py

* Use allowOptimisticSend=true when sending reject messages

This fixes test failures in p2p-fullblocktest.py which expects reject
messages to be sent/received before connections get closed.

* Fix p2p-fullblocktest.py

- CBlock and friends are still in test_framework.mininode
- "-whitelist" causes connections to not be dropped, which in turn causes
  sync_blocks with reconnect=True to fail
- "bad-cb-amount" does not cause a ban in Dash, so reconnect must be False
- Dash already bans when a header is received which is a child of an invalid
  header, causing block requests to never happen

* Backport missing changes from bitcoin#13003

bitcoin#13003 was backported out of order which causes missed changes.

* Bump p2p-fullblocktest timeouts

* Increase RPC timeout in p2p-fullblocktest.py

Co-authored-by: John Newbery <jonnynewbs@gmail.com>
Co-authored-by: MarcoFalke <falke.marco@gmail.com>
@bitcoin bitcoin locked as resolved and limited conversation to collaborators Sep 8, 2021