
Method of deciding amino overhead of tx in ReapMaxBytesMaxGas is wrong #2789

Closed
yutianwu opened this issue Nov 9, 2018 · 6 comments
Labels: C:consensus (Component: Consensus) · C:mempool (Component: Mempool) · T:bug (Type: Bug, Confirmed) · T:security (Type: Security)

Comments


yutianwu commented Nov 9, 2018

I wrote a test for the amino overhead of a tx:

func TestAmino(t *testing.T) {
	txs := []types.Tx{
		[]byte{1, 2, 3},
		[]byte{1, 2, 3},
	}
	var totalBytes int64 = 0
	for _, tx := range txs {
		aminoOverhead := int64(amino.UvarintSize(uint64(len(tx))))

		totalBytes += aminoOverhead + int64(len(tx))
	}

	bz, _ := cdc.MarshalBinary(txs)
	println("len: ", len(txs), "\nexpected length: ", totalBytes, "\nactual length: ", len(bz), "\ndiff: ", (int64(len(bz)) - totalBytes))
}
/*
=== RUN   TestAmino
len:  2 
expected length:  8 
actual length:  11 
diff:  3
--- PASS: TestAmino (0.00s)
PASS
*/

The underlying reason is that amino.UvarintSize does not account for encodeFieldNumberAndTyp3, which is called when a list/slice is marshaled. So one extra byte is added for every tx when the real txs are marshaled, and that's why there is a diff in the test.

It can cause consensus failure because, with the amino overhead of the txs underestimated, the actual block size can exceed the max block size.
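The 3-byte diff above is consistent with one extra field-tag byte per tx (2 bytes) plus the overall length prefix that MarshalBinary prepends (1 byte). A minimal sketch of a per-tx overhead computation that also counts the field tag could look like this (the helper name is illustrative, not existing code):

func txAminoOverhead(tx types.Tx) int64 {
	// 1 byte for the field number & wire type that amino prepends to each
	// element of the tx slice, plus the uvarint length prefix of the tx.
	return 1 + int64(amino.UvarintSize(uint64(len(tx))))
}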


liamsi commented Nov 9, 2018

Thanks for raising this issue!

Not sure I understand this correctly, but amino.UvarintSize isn't meant to compute the amino overhead! It just computes the encoded size of an unsigned varint (and not the whole size of a marshalled message like a list/slice). Can you elaborate on how this would make consensus fail? We do not use amino.UvarintSize to compute the amino overhead for transactions.

If you want to properly compute the amino overhead, you would need to fill the underlying type with the largest possible data and encode it. The length of the resulting byte array is the max allowed size. See, for example, how we check against / compute the max allowed header size here:

func TestMaxHeaderBytes(t *testing.T) {
	// Construct a UTF-8 string of MaxChainIDLen length using the supplementary
	// characters.
	// Each supplementary character takes 4 bytes.
	// http://www.i18nguy.com/unicode/supplementary-test.html
	maxChainID := ""
	for i := 0; i < MaxChainIDLen; i++ {
		maxChainID += "𠜎"
	}

	// time is varint encoded so need to pick the max.
	// year int, month Month, day, hour, min, sec, nsec int, loc *Location
	timestamp := time.Date(math.MaxInt64, 0, 0, 0, 0, 0, math.MaxInt64, time.UTC)

	h := Header{
		Version:            version.Consensus{math.MaxInt64, math.MaxInt64},
		ChainID:            maxChainID,
		Height:             math.MaxInt64,
		Time:               timestamp,
		NumTxs:             math.MaxInt64,
		TotalTxs:           math.MaxInt64,
		LastBlockID:        makeBlockID(make([]byte, tmhash.Size), math.MaxInt64, make([]byte, tmhash.Size)),
		LastCommitHash:     tmhash.Sum([]byte("last_commit_hash")),
		DataHash:           tmhash.Sum([]byte("data_hash")),
		ValidatorsHash:     tmhash.Sum([]byte("validators_hash")),
		NextValidatorsHash: tmhash.Sum([]byte("next_validators_hash")),
		ConsensusHash:      tmhash.Sum([]byte("consensus_hash")),
		AppHash:            tmhash.Sum([]byte("app_hash")),
		LastResultsHash:    tmhash.Sum([]byte("last_results_hash")),
		EvidenceHash:       tmhash.Sum([]byte("evidence_hash")),
		ProposerAddress:    crypto.AddressHash([]byte("proposer_address")),
	}

	bz, err := cdc.MarshalBinaryLengthPrefixed(h)
	require.NoError(t, err)

	assert.EqualValues(t, MaxHeaderBytes, len(bz))
}

As transactions in general are arbitrary bytes, the above approach won't work though. The only way to compute the actual overhead of an arbitrary list of byte slices is to encode it and check its length.
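For arbitrary txs, a rough sketch of that measurement, reusing the cdc codec from the test above (the helper name is made up for illustration):

func realAminoOverhead(txs []types.Tx) (int64, error) {
	// Encode the whole slice and subtract the raw payload bytes;
	// whatever remains is the actual amino overhead.
	bz, err := cdc.MarshalBinary(txs)
	if err != nil {
		return 0, err
	}
	var raw int64
	for _, tx := range txs {
		raw += int64(len(tx))
	}
	return int64(len(bz)) - raw, nil
}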


yutianwu commented Nov 9, 2018

The line below calculates the amino overhead of one tx, but it's wrong: the result is less than the real size.
The consequence is that the produced block can be larger than maxBlockSize.

aminoOverhead := int64(amino.UvarintSize(uint64(len(memTx.tx))))

Here is the error log I added for testing:

I[11-09|09:00:14.917] Add block part error                         module=consensus err="Read overflow, maxSize is 512000 but this amino binary object is 513824 bytes." height=3272 csRound=112 blockRound=112
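For context, here is a simplified model of the size check in ReapMaxBytesMaxGas (not the actual code, just the shape of the loop): each tx's estimated size is one byte short, so totalBytes can stay under maxBytes while the encoded block ends up larger.

func reapBySize(candidates []types.Tx, maxBytes int64) []types.Tx {
	var totalBytes int64
	var reaped []types.Tx
	for _, tx := range candidates {
		// Undercounts by one byte per tx: the field-tag byte is missing.
		aminoOverhead := int64(amino.UvarintSize(uint64(len(tx))))
		if maxBytes > -1 && totalBytes+int64(len(tx))+aminoOverhead > maxBytes {
			return reaped
		}
		totalBytes += int64(len(tx)) + aminoOverhead
		reaped = append(reaped, tx)
	}
	return reaped
}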


liamsi commented Nov 9, 2018

Thanks a lot! I'll have a look.


liamsi commented Nov 9, 2018

You are right! Thanks again! It is a bit weird that amino encodes the array of transactions as if it were contained in a wrapping parent struct (which adds the field number & type). The field number is always 1 (as this is the default). When they are encoded in a Block this will also hold (as Tx is the 1st field in Data). I'll add a quickfix and one or two tests for computing the amino overhead here.
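To make that concrete, the per-element wire layout for a 3-byte tx presumably looks like this (assuming the proto3-style encoding amino follows for a repeated bytes field; the test name is made up for illustration):

func TestTxFieldTagOverhead(t *testing.T) {
	bz, err := cdc.MarshalBinary([]types.Tx{{1, 2, 3}})
	require.NoError(t, err)
	// Expected layout:
	//   0x05            overall length prefix added by MarshalBinary
	//   0x0a            field number 1, wire type 2 (length-delimited)
	//   0x03            uvarint length of the tx
	//   0x01 0x02 0x03  the tx bytes
	// i.e. 2 bytes of per-tx overhead, not the 1 byte that
	// amino.UvarintSize(uint64(len(tx))) accounts for.
	assert.Equal(t, []byte{0x05, 0x0a, 0x03, 0x01, 0x02, 0x03}, bz)
}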

ebuchman added the T:bug (Type: Bug, Confirmed), C:mempool (Component: Mempool), C:consensus (Component: Consensus), T:breaking (Type: Breaking Change), and T:security (Type: Security) labels, and removed the T:breaking label, on Nov 11, 2018
ebuchman commented

Yikes. Thanks for reporting this. Feel free to report this kind of thing to the bug bounty: https://hackerone.com/tendermint/

ebuchman commented

Oh I see you did that. Excellent!!
