Various serialization simplifications and optimizations #9039
Changes from all commits: 50e8a9c, c2c5d42, fad9b66, 657e05a, 5284721, a603925, a2929a2, 25a211a, d59a518
```diff
@@ -17,6 +17,9 @@
 #include <leveldb/db.h>
 #include <leveldb/write_batch.h>

+static const size_t DBWRAPPER_PREALLOC_KEY_SIZE = 64;
+static const size_t DBWRAPPER_PREALLOC_VALUE_SIZE = 1024;
+
 class dbwrapper_error : public std::runtime_error
 {
 public:
```

Reviewer (on the preallocation constants): I see no reason not to go higher here? Keys should ~always be 35 bytes (ish, but 64 is fine), but values could be higher. We might as well also prealloc more than 1K for values, since they're occasionally larger and the memory usage isn't an issue. Did you benchmark this?

Author: I did benchmark the change overall, and the "Flush to disk" time went down by around 5% from just this. I did not benchmark any other sizes, but my expectation is that as long as the key size is over 35 and the value size over 100 or so, there will not be any measurable difference, as these cases occur so infrequently, and a vector resize isn't the end of the world.
```diff
@@ -60,12 +63,12 @@ class CDBBatch
     void Write(const K& key, const V& value)
     {
         CDataStream ssKey(SER_DISK, CLIENT_VERSION);
-        ssKey.reserve(ssKey.GetSerializeSize(key));
+        ssKey.reserve(DBWRAPPER_PREALLOC_KEY_SIZE);
         ssKey << key;
         leveldb::Slice slKey(&ssKey[0], ssKey.size());

         CDataStream ssValue(SER_DISK, CLIENT_VERSION);
-        ssValue.reserve(ssValue.GetSerializeSize(value));
+        ssValue.reserve(DBWRAPPER_PREALLOC_VALUE_SIZE);
         ssValue << value;
         ssValue.Xor(dbwrapper_private::GetObfuscateKey(parent));
         leveldb::Slice slValue(&ssValue[0], ssValue.size());
```
```diff
@@ -77,7 +80,7 @@ class CDBBatch
     void Erase(const K& key)
     {
         CDataStream ssKey(SER_DISK, CLIENT_VERSION);
-        ssKey.reserve(ssKey.GetSerializeSize(key));
+        ssKey.reserve(DBWRAPPER_PREALLOC_KEY_SIZE);
         ssKey << key;
         leveldb::Slice slKey(&ssKey[0], ssKey.size());
```
```diff
@@ -107,7 +110,7 @@ class CDBIterator
     template<typename K> void Seek(const K& key) {
         CDataStream ssKey(SER_DISK, CLIENT_VERSION);
-        ssKey.reserve(ssKey.GetSerializeSize(key));
+        ssKey.reserve(DBWRAPPER_PREALLOC_KEY_SIZE);
         ssKey << key;
         leveldb::Slice slKey(&ssKey[0], ssKey.size());
         piter->Seek(slKey);
```
```diff
@@ -200,7 +203,7 @@ class CDBWrapper
     bool Read(const K& key, V& value) const
     {
         CDataStream ssKey(SER_DISK, CLIENT_VERSION);
-        ssKey.reserve(ssKey.GetSerializeSize(key));
+        ssKey.reserve(DBWRAPPER_PREALLOC_KEY_SIZE);
         ssKey << key;
         leveldb::Slice slKey(&ssKey[0], ssKey.size());
```
```diff
@@ -234,7 +237,7 @@ class CDBWrapper
     bool Exists(const K& key) const
    {
         CDataStream ssKey(SER_DISK, CLIENT_VERSION);
-        ssKey.reserve(ssKey.GetSerializeSize(key));
+        ssKey.reserve(DBWRAPPER_PREALLOC_KEY_SIZE);
         ssKey << key;
         leveldb::Slice slKey(&ssKey[0], ssKey.size());
```
Reviewer: Strictly speaking this is also a performance regression. Do we care about the time taken to iterate over the nMaskSize bits (i.e. do we ever care about the performance of CCoins serialization that much)?

Author: I actually think this will be optimized out. It's a loop with a known count in CCoins::Serialize, where each iteration results in a counter increment by one. Also, yes, I don't think we care.