Don't allow getutxo answer to go over the size of the send buffer. #4770

Closed
wants to merge 1 commit

Conversation

mikehearn
Contributor

Limit the possible answer size for getutxo.

This does not adjust MAX_INV_SZ because it's not necessary for the fix; requesting normal outputs even up to MAX_INV_SZ does not run out of space in the send buffer. To make a giant response you need a giant output, so there might be some use case for doing a big query; at any rate, if the send buffer limit is too big, that's a general issue we should fix elsewhere.

In the end I didn't truncate the answer; I just return false, which hits Misbehaving(). The reason is, again, that no real app will ever hit this case: you need to deliberately pick a giant output and a giant number of repeated queries. So Misbehaving() is a reasonable solution.
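
A minimal sketch of the approach described above, not the actual patch: the reply size is accumulated per answered output and the whole request is refused once it would exceed a cap, so the caller can penalize the peer via Misbehaving() rather than truncating. MAX_REPLY_BYTES, CCoinSketch and BuildGetUTXOsReply are illustrative assumptions, not names from the codebase.

#include <cstddef>
#include <vector>

// Sketch only: accumulate the serialized size of each answered output and
// refuse the whole request once the reply would exceed the cap, so the caller
// can call Misbehaving() instead of building an oversized reply.
static const size_t MAX_REPLY_BYTES = 1000000; // hypothetical cap

struct CCoinSketch {
    std::vector<unsigned char> vchSerialized; // stand-in for a serialized CCoin
    size_t GetSerializedSize() const { return vchSerialized.size(); }
};

bool BuildGetUTXOsReply(const std::vector<CCoinSketch>& vMatches,
                        std::vector<CCoinSketch>& vReply)
{
    size_t bytes_used = 0;
    for (const CCoinSketch& coin : vMatches) {
        bytes_used += coin.GetSerializedSize();
        if (bytes_used > MAX_REPLY_BYTES)
            return false; // caller hits Misbehaving() and drops the request
        vReply.push_back(coin);
    }
    return true;
}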

@jgarzik
Contributor

jgarzik commented Aug 27, 2014

Where are the tests that would have caught this in the first place?
Have those tests been updated?

@luke-jr
Member

luke-jr commented Aug 27, 2014

Any reason to allow >1? Seems like a good opportunity to discourage address reuse abuses.

@mikehearn
Contributor Author

Jeff, once again you are asking questions that were already answered in the description of the original patch. Please do read that document, but I'll repeat the relevant section here:

Testing

I attempted to write unit tests for this, but Core has no infrastructure for building test chains .... the miner_tests.cpp code does it but at the cost of not allowing any other unit test to do so, as it doesn't reset or clean up the global state afterwards! I tried to fix this and ended up down a giant rabbit hole.

So instead I've tested it with a local test app I wrote, which also exercises the client side part in bitcoinj.

The good news is, Sergio is making a start on fixing this, so in future we'll be able to write unit tests that build test chains. The bad news is we still lack test infrastructure for testing the network stack - there are zero tests that cover ProcessMessages and as a result it's difficult to write them. If you'd like to help fill that need that'd be fantastic.

Additionally, what dhill reported does not trigger a crash or any other kind of bug that could be easily found using unit testing. It doesn't even seem to trigger a lot of network usage; I guess the data gets truncated somewhere inside the network stack. I repeated his test against my own bitcoind and it survived just fine. What it does is cause a spike in memory usage, so dhill ran such queries many times in parallel until his host ran out of memory.

But Bitcoin Core does not have a fixed memory budget or any kind of limit - it's just "whatever the host happens to have at the time". I'm not even sure there's any way to write a unit test that checks memory usage, and if there were, it would raise the question of what memory usage limit we should select.

So testing this is not as trivial as it looks. It comes back to the entire anti-DoS question we've been debating for years. A full resource management infrastructure and ability to set resource budgets would be great, and it'd allow us to define this as a bug and test for it.

@BitcoinPullTester

Automatic sanity-testing: PASSED, see http://jenkins.bluematt.me/pull-tester/p4770_dfb40dd0cdb8f3e65a3422c55353550c45130930/ for binaries and test log.
This test script verifies pulls every time they are updated. It, however, dies sometimes and fails to test properly. If you are waiting on a test, please check timestamps to verify that the test.log is moving at http://jenkins.bluematt.me/pull-tester/current/
Contact BlueMatt on freenode if something looks broken.

@jgarzik
Contributor

jgarzik commented Aug 27, 2014

That's a dodge. Key phrase: "that would have caught this".

It is clearly possible to test that a P2P message does not return oversized output.

@mikehearn
Contributor Author

Well, like I said, at the moment I'm not sure how to write a test for it and there are no examples to guide me. I tried to test ProcessGetUTXOs independently (that's why it's a separate function) but until Sergio's work lands there's no way to build test chains, so I didn't succeed.

Once the right infrastructure for writing this kind of unit test exists, I'd be happy to do so. I did write functionality tests in the pull tester.

@@ -3577,6 +3585,9 @@ bool ProcessGetUTXOs(const vector<COutPoint> &vOutPoints, bool fCheckMemPool, ve
coin.nHeight = coins.nHeight;
coin.out = coins.vout.at(vOutPoints[i].n);
assert(!coin.out.IsNull());
bytes_used += coin.GetSizeOf();
Member


You can use:

bytes_used += ::GetSerializeSize(coin, SER_NETWORK, PROTOCOL_VERSION);

instead of defining a custom size computation.
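
Applied to the hunk above, the suggestion would read roughly as follows; the overflow check and the constant name are assumptions based on the PR description, not lines from the actual patch:

coin.nHeight = coins.nHeight;
coin.out = coins.vout.at(vOutPoints[i].n);
assert(!coin.out.IsNull());
// Measure the reply with the generic serializer rather than a custom helper.
bytes_used += ::GetSerializeSize(coin, SER_NETWORK, PROTOCOL_VERSION);
if (bytes_used > MAX_GETUTXOS_REPLY_BYTES) // assumed cap, not from the patch
    return false; // the caller would then call Misbehaving()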

@btcdrak
Contributor

btcdrak commented Aug 27, 2014

Seems like yet another reason to revert #4351 pending proper investigation and Sergio's patch to enable tests to be written. There is clearly zero urgency for this, and it needs more time and consideration to find the right solution (or even to decide whether it should be a feature of the P2P protocol at all). There is no reason you can't write this for bitcoinj after all...

@jgarzik
Contributor

jgarzik commented Sep 14, 2014

Closing due to #4351 revert. Presumably this gets rolled into the main getutxos implementation.

@jgarzik jgarzik closed this Sep 14, 2014
@bitcoin bitcoin locked as resolved and limited conversation to collaborators Sep 8, 2021