Conversation
Force-pushed from 3526d0a to 4149b34.
src/stats/stats.cpp
Outdated
is this semicolon unintentional?
Hi Jonas, I understand the concern in your description for the addMempoolSample() stat.
Is the 'flat' encoding strictly needed? Or is there some other concern with it?
Cheers, Isle

@isle2983 Welcome to GitHub. For your questions/inputs:
Force-pushed from 6bb319d to c76840b.
src/stats/rpc_stats.cpp
Outdated
Would it make sense to add a line break after each sample's values?
Or just add the names of the columns as another entry in the dict.
Otherwise I fail to see how this rpc call is useful.
I have been playing around making my own changes on top of these commits (isle2983:getmempoolstats), mostly just to get some hands-on time with the code and bring my C++ up to par. Anyway, I made the RPC output of the samples full JSON. The JSON 'pretty' print through bitcoin-cli is definitely unwieldy; however, the computational overhead of the wrangling doesn't seem so bad. The 1.7MM of stat data is from collecting just overnight. With that data, I can pull it off the node, parse and convert the JSON into CSV with a Python script, and plot it in gnuplot in under a second. Not sure what the comparable figure is for the running Qt GUI branch, but this doesn't seem too bad on the face of it. Also, if getting this info from the node to the UI quickly is a concern, perhaps a denser, binary-like format is worth considering; one could imagine it being more efficient than even the 'flat' format, depending on the sophistication.

Thanks @isle2983 for the testing, benchmarks and improvements. I also thought again about copying the samples vector before filtering it. I came to the conclusion that it's not worth generating a memory peak (by copying the whole samples vector) in order to allow a faster release of the LOCK. The filtering should be very fast because it only compares some uint32 values and constructs a new vector from a pair of from-/to-iterators (which should also perform fast).
Force-pushed from f88d10b to 7d1849f.
Needed rebase.
Force-pushed from 7d1849f to 4856b31.
Rebased.
Force-pushed from 4856b31 to 5722e27.
Assigning "would be nice to have" for 0.14 per meeting today.
Just saw @gmaxwell's comment on #8550 (which I completely agree with), and it reminded me to look at that PR and this one. Sorry for not getting involved sooner, but I really like the idea. Unfortunately I can think of many, many stats (dozens and dozens) that we might want to collect, both to potentially show to users in whiz-bangy GUIs and to help developers and businesses understand unusual behavior on the network. If we envision that there might be 1 KB of different stats data, then rather than just saving sample data points and trimming when they hit the memory limit, maybe we should be smart about saving them along different time frames. For instance, we could have second, minute, hour, and day sampling intervals; we could save 1000 points or more on each and still have quite reasonable memory usage, but they could be auto-trimmed. So if you wanted to look at data from hours ago, you couldn't look at it on the second time frame...

@morcos: thanks for the comment. Yes, I completely agree. I think this is a first step, and the current design allows features like the ones you mentioned.
@jonasschnelli Well I guess what I was thinking was that one general framework might fit all stats. You log it with whatever frequency you want. And it's stored in up to 4 different histories (by second, minute, hour, day) and each of those is trimmed to some limit (say 1000 or 2000 data points each). Is there any type of stat that such a general framework might not work well with? |
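The framework morcos describes could be sketched roughly as follows. This is a hypothetical illustration (names like `MultiResHistory` and the simplified trimming rule are not from this PR): one sample stream fans out into fixed-capacity per-interval histories (second/minute/hour/day), each auto-trimmed from the front when full.

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>
#include <vector>

struct Sample {
    int64_t time;   // unix timestamp of the sample
    uint64_t value; // the stat being tracked
};

class MultiResHistory {
public:
    MultiResHistory() {
        // one history per interval (seconds): second, minute, hour, day
        for (int64_t ival : {1, 60, 3600, 86400})
            m_levels.push_back({ival, /*last_time=*/0, {}});
    }
    void Add(int64_t now, uint64_t value) {
        for (Level& lvl : m_levels) {
            if (now - lvl.last_time < lvl.interval) continue; // not due yet
            lvl.last_time = now;
            lvl.samples.push_back({now, value});
            if (lvl.samples.size() > MAX_SAMPLES)
                lvl.samples.pop_front(); // auto-trim the oldest point
        }
    }
    size_t Size(size_t level) const { return m_levels[level].samples.size(); }

private:
    static constexpr size_t MAX_SAMPLES = 1000; // per-level cap, as suggested
    struct Level {
        int64_t interval;  // seconds between retained samples
        int64_t last_time; // time of the last retained sample
        std::deque<Sample> samples;
    };
    std::vector<Level> m_levels;
};
```

The per-level cap answers the memory question directly: four levels of 1000 samples each stay bounded regardless of uptime, while the coarser levels preserve long-range history.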
src/stats/stats_mempool.cpp
Outdated
fallbackMaxSamplesPerPercision -> fallbackMaxSamplesPerPrecision
Force-pushed from aeb8090 to 400eb31.
Fixed @paveljanik's points and nits.
src/stats/rpc_stats.cpp
Outdated
Wshadow emits:
stats/rpc_stats.cpp:78:42: warning: declaration shadows a variable in the global namespace [-Wshadow]
Fixed.
Force-pushed from 62b33f8 to c412d0a.
ryanofsky left a comment:
Few more comments
src/stats/rpc_stats.cpp
Outdated
In commit "Add mempool statistics collector"
It seems like it would be more user friendly and extensible if this were an object instead of an array, so the elements could be named. It's a little awkward to have to remember that the first element in the array is the time delta, the second element is the transaction count, etc.
Thread #8501 (comment)
Any thoughts on this? Named fields seems more natural than tuples in a JSON data structure.
I tend to agree. I would also use the timestamp directly instead of an offset.
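To illustrate the named-field encoding under discussion, here is a minimal sketch. The field names and values are hypothetical (loosely based on samples shown elsewhere in the thread), and the hand-rolled serializer stands in for whatever JSON machinery the PR actually uses:

```cpp
#include <cstdint>
#include <cstdio>
#include <string>

// One mempool sample with an absolute timestamp (as preferred above)
// instead of a delta. Field names are illustrative only.
struct MempoolSample {
    int64_t time;          // absolute unix timestamp
    uint32_t tx_count;     // transactions in the mempool
    uint64_t dyn_mem_usage; // dynamic memory usage in bytes
    uint64_t min_fee_per_k; // min relay fee per KB
};

// Self-describing object encoding: readers no longer need to remember
// positional meaning, at the cost of repeating the keys per sample.
std::string EncodeNamed(const MempoolSample& s) {
    char buf[160];
    std::snprintf(buf, sizeof(buf),
        "{\"time\":%lld,\"tx_count\":%u,\"dyn_mem_usage\":%llu,\"min_fee_per_k\":%llu}",
        (long long)s.time, s.tx_count,
        (unsigned long long)s.dyn_mem_usage,
        (unsigned long long)s.min_fee_per_k);
    return buf;
}
```

Compared to the flat `[11, 1, 1008, 0]` tuples, the object form is larger on the wire but self-describing, which is the trade-off this thread is weighing.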
src/validation.cpp
Outdated
In commit "Add mempool statistics collector"
Would be good to move declaration closer to where this is being set (at least as close as possible).
src/stats/stats.cpp
Outdated
In commit "Add mempool statistics collector"
Most of the other settings seem to be in seconds instead of milliseconds. Seems like it would be good to switch this one to seconds for consistency, and to be able to get rid of the division by 1000 in CStatsMempool::addMempoolSamples.
Thread #8501 (comment)
Would be nice to use either seconds or milliseconds internally in the stats classes instead of both.
src/stats/stats_mempool.cpp
Outdated
In commit "Add mempool statistics collector"
Seems like it would be more efficient to switch to a circular buffer, or use std::deque instead of deleting from the beginning of the vector.
Thread #8501 (comment)
Still worth considering alternatives to erasing from the beginning of a vector, I think.
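A minimal sketch of the two trimming strategies (hypothetical helpers, not the PR's CStatsMempool code): erasing from a vector's front shifts every remaining element, while a std::deque pops its front in O(1).

```cpp
#include <cstddef>
#include <deque>
#include <vector>

// O(n) per trim: vector::erase at the front moves every surviving
// element left to close the gap.
template <typename T>
void TrimVector(std::vector<T>& v, size_t max_size) {
    if (v.size() > max_size)
        v.erase(v.begin(), v.begin() + (v.size() - max_size));
}

// O(1) per popped element: deque drops front blocks without
// shifting the remaining samples.
template <typename T>
void TrimDeque(std::deque<T>& d, size_t max_size) {
    while (d.size() > max_size)
        d.pop_front();
}
```

For a fixed maximum sample count, a circular buffer would avoid even the deque's block allocations, but std::deque is the smallest change from the current vector-based code.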
src/stats/stats_mempool.h
Outdated
In commit "Add mempool statistics collector"
All three of these vectors have the same length (number of precision levels). Seems like it would be cleaner to have one vector containing a struct with all the information that needs to be stored for a given precision level. It would also make the c++ data structure more consistent with the json data structure returned from rpc.
That would be possible, though I think that the current approach makes the code more readable.
Do you think having parallel lists is more readable in c++ and having one list is more readable in json? Also, why are the first two vectors labeled perPrecision, and the third one not? I guess I think something like the following would be less ambiguous and duplicative:
struct PrecisionLevel {
    std::vector<CStatsMempoolSample> samples;
    size_t max_num_samples;
    int64_t last_sample_time;
};
std::vector<PrecisionLevel> m_precision_levels;
Could also rename cs_mempool_stats to cs_precision_levels to make it clearer what the lock is protecting.
src/stats/stats_mempool.cpp
Outdated
In commit "Add mempool statistics collector"
I don't understand what this is doing. It seems unnecessary, and strange because it is comparing the counter against numbers of seconds without taking collectInterval into account. If you will keep this, I think it'd be good to add a comment with an example of interval and counter values that explains how this is supposed to work.
I have added this to avoid an overflow of intervalCounter. I thought resetting it once we exceed the highest possible interval seemed sane, but I agree that a size_t overflow will very likely never happen... would you recommend removing it?
would you recommend to remove it?
I'd say remove it unless there's a way to reset it that makes sense (I'm sure there is one but nothing immediately comes to mind).
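One reset rule that would make sense is sketched below. This is a suggestion, not code from the PR: wrap the counter at a common multiple of all precision intervals, so that every `counter % interval` phase test behaves identically after the wrap.

```cpp
#include <cstdint>
#include <numeric> // std::lcm (C++17)
#include <vector>

// Advance the tick counter, wrapping at the least common multiple of
// all per-precision interval multipliers. Because the wrap point is a
// multiple of every interval, no level's sampling phase is disturbed.
uint64_t NextTick(uint64_t counter, const std::vector<uint64_t>& intervals) {
    uint64_t wrap = 1;
    for (uint64_t i : intervals)
        wrap = std::lcm(wrap, i); // period after which all phases realign
    return (counter + 1) % wrap;  // bounded, so it can never overflow
}
```

With the PR's 2s collect interval and 2s/60s/1800s precision levels, the multipliers would be 1, 30, and 900 ticks, giving a wrap period of only 900 ticks (30 minutes).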
Thread #8501 (comment)
Should follow up by removing this or adding comments suggested above.
Rebased on my repo.
Force-pushed from b3f506e to 915d873.
Force-pushed from 915d873 to 7af0ea4.
Rebased (rebased @luke-jr's version).
# test mempool stats
time.sleep(15)
mps = self.nodes[0].getmempoolstats()
assert_equal(len(mps), 3) #fixed amount of percision levels
s/percision/precision
src/stats/rpc_stats.cpp
Outdated
Thread #8501 (comment)
Any thoughts on this? Named fields seems more natural than tuples in a JSON data structure.
src/stats/stats.cpp
Outdated
Thread #8501 (comment)
Would be nice to use either seconds or milliseconds internally in the stats classes instead of both.
src/stats/stats_mempool.cpp
Outdated
Thread #8501 (comment)
Still worth considering alternatives to erasing from the beginning of a vector, I think.
const bool DEFAULT_STATISTICS_ENABLED = false;
const static unsigned int STATS_COLLECT_INTERVAL = 2000; // 2 secs

CStats* CStats::sharedInstance = NULL;
Maybe use unique_ptr
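A sketch of the unique_ptr variant (a skeleton only, assuming the same lazy-initialized singleton pattern; the real CStats has more members): ownership becomes explicit and the instance is torn down automatically at exit.

```cpp
#include <memory>

class CStats {
public:
    // Lazily create and return the shared instance.
    static CStats* DefaultStats() {
        if (!sharedInstance) sharedInstance.reset(new CStats());
        return sharedInstance.get();
    }

private:
    CStats() = default;
    static std::unique_ptr<CStats> sharedInstance; // was: raw CStats* = NULL
};

// unique_ptr default-constructs to nullptr; no explicit NULL needed.
std::unique_ptr<CStats> CStats::sharedInstance;
```

Callers keep using the raw pointer returned by DefaultStats(), so the change is contained to the definition and the reset-on-first-use line.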
src/stats/stats_mempool.cpp
Outdated
Thread #8501 (comment)
Should follow up by removing this or adding comments suggested above.
void startCollecting(CScheduler& scheduler);

/* set the target for the maximum memory consumption (in bytes) */
void setMaxMemoryUsageTarget(size_t maxMem);
Should this be private since it is called by parameterInteraction?
Either way, it seems like this needs a comment about when it is safe to be called. E.g. it seems like this will not always work properly if it is called after startCollecting because it won't schedule the callback. But maybe it will work to change the size if it was previously greater than 0?
{
private:
    static CStats* sharedInstance; //!< singleton instance
    std::atomic<bool> statsEnabled;
Why have this as a separate variable when it seems to just be true when maxStatsMemory != 0? Maybe eliminate this, or else add comments documenting the intended interaction between the two variables.
/* SIGNALS
 * ======= */
boost::signals2::signal<void(void)> MempoolStatsDidChange; //mempool signal
Nobody seems to be listening to this signal. Should add a comment explaining who the intended listeners are, if keeping this.
MempoolSamplesVector samples = CStats::DefaultStats()->mempoolCollector->getSamplesForPrecision(i, timeFrom);

// use "flat" json encoding for performance reasons
Should expand comment, unclear what it's referring to.
Sjors left a comment:
I think a table would be more human readable. To also make it easier for other software to parse, you could make the sample interval an argument and add the timestamp to each sample:
bitcoin-cli getmempoolstats 2
timestamp | tx_count | dynamic_mem_usage | min_fee_per_k
------------------------------------------------------------
1515419437 | 1042 | 1797344 | 0
1515419439 | 1050 | 1837344 | 0
...
You could add another argument for the output format (CSV, JSON, etc).
Multiple precision intervals seem redundant, or at least something that could be added later, if users want long-term charts without storing too much detail.
std::string CStats::getHelpString(bool showDebug)
{
    std::string strUsage = HelpMessageGroup(_("Statistic options:"));
    strUsage += HelpMessageOpt("-statsenable=", strprintf("Enable statistics (default: %u)", DEFAULT_STATISTICS_ENABLED));
Should there be an RPC method to turn stats collecting on? If not, maybe the getmempoolstats help message can inform the user to launch bitcoind with -statsenable=1?
src/stats/rpc_stats.cpp
Outdated
I tend to agree. I would also use the timestamp directly instead of an offset.
There hasn't been much activity lately and the patch still needs rebase, so I am closing this for now. Please let me know when you want to continue working on this, so the pull request can be re-opened.
This PR adds a statistics collector class which aims to collect various types of statistics up to a configurable maximum memory target. At the moment, only mempool statistics are collected.
Motivation
Adding more statistics and visualization to the GUI would increase its usefulness. To do so, we need stats that are collected even when the visualization is not visible (example: the GUI network graph only draws data while it's visible, which makes it kinda unusable).
How it works
This PR adds a simple stats manager that polls stats via a repeating CScheduler task. The samples are not guaranteed to arrive at a constant interval; each sample contains a time delta to the previous one (uint16_t).
API
-statsenable (default: disabled)
-statsmaxmemorytarget (default: 10MB) maximum memory target to use for statistics
getmempoolstats returns, e.g.:
[ { "percision_interval": 2, "time_from": 1494252401, "samples": [ [ 11, 1, 1008, 0 ], .... ] }, ... ]
Features
-> CScheduler-driven sample collecting (current interval 2000ms)
-> Relevant mempool data (size, memory usage, min fee) gets written to an atomic cache (no additional locking required)
-> Multiple precision levels (currently three: 2s, 60s, 1800s)
-> Memory target that determines how many samples fit within the given budget
-> Samples only store a 2-byte time delta to the previous sample, saving some memory
-> Flexible design; adding more data points should be simple (bandwidth, utxo-stats, etc.)
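The memory-target computation described above might look roughly like this. The field layout and the even split across precision levels are assumptions for illustration, not necessarily the PR's exact scheme:

```cpp
#include <cstddef>
#include <cstdint>

// Assumed sample layout: a 2-byte delta to the previous sample plus
// the three mempool data points (names match the thread's discussion).
struct CStatsMempoolSample {
    uint16_t timeDelta;     // delta to the previous sample
    uint32_t txCount;       // mempool transaction count
    uint64_t dynMemUsage;   // dynamic memory usage
    uint64_t minFeePerK;    // min fee per KB
};

// How many samples fit per precision level under a byte budget,
// assuming the budget is split evenly across the levels.
size_t MaxSamplesPerLevel(size_t maxMemBytes, size_t numLevels, size_t sampleSize) {
    return maxMemBytes / numLevels / sampleSize;
}
```

Under the default 10MB target with three precision levels and a 24-byte padded sample, each level could hold on the order of 145,000 samples, i.e. several days of history even at the 2-second interval.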