Logging performance and status measurements #1763

Closed
ion1 opened this issue Sep 28, 2015 · 5 comments
Labels
need/community-input Needs input from the wider community

Comments

@ion1

ion1 commented Sep 28, 2015

To have built-in logging of measurements for generating graphs such as the ones in #1750, we could simply keep an on-disk circular array of e.g. 8640 floating-point samples (24 hours of data at one sample per 10 seconds), along with a pointer to the last update, its timestamp, and the number of samples in the array. If appending a new sample would overwrite the oldest sample, first free the oldest sample by decrementing the sample count.
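A minimal sketch of such a circular sample log in Go (the type and field names and the fixed capacity are illustrative; on-disk persistence, e.g. mmap or periodic writes, is left out):

```go
package metriclog

import "time"

// capacity is the illustrative fixed size from the proposal:
// 24 hours of data at one sample per 10 seconds.
const capacity = 8640

// SampleLog is a fixed-size circular array of samples plus the
// bookkeeping described above: the index and timestamp of the last
// update and the number of valid samples.
type SampleLog struct {
	samples [capacity]float64
	last    int       // index of the most recent sample
	lastAt  time.Time // timestamp of the most recent sample
	count   int       // number of valid samples, at most capacity
}

// Append stores a new sample. If it would overwrite the oldest sample,
// the oldest one is freed first by decrementing the sample count.
func (l *SampleLog) Append(v float64, at time.Time) {
	if l.count == capacity {
		l.count-- // free the oldest sample
	}
	l.last = (l.last + 1) % capacity
	l.samples[l.last] = v
	l.lastAt = at
	l.count++
}
```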

If we want to be fancier and store measurements from a longer time period without growing the log linearly, we could instead, upon an append to a full array, free the two oldest samples and append their average to another array. That second array would also hold 8640 samples, covering 48 hours of data at one sample per 20 seconds. Given a chain of such arrays, the total log size grows as roughly O(log₂ time).
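A rough sketch of the downsampling chain, building on the SampleLog type from the previous sketch; level management is simplified and the timestamping of averaged pairs is elided:

```go
// PopOldest removes and returns the oldest sample in the array;
// ok is false when the array is empty.
func (l *SampleLog) PopOldest() (v float64, ok bool) {
	if l.count == 0 {
		return 0, false
	}
	oldest := (l.last - l.count + 1 + capacity) % capacity
	l.count--
	return l.samples[oldest], true
}

// DownsampledLog chains SampleLogs so that the total size grows as
// roughly O(log₂ time): level 0 holds raw samples, level 1 holds
// averages of pairs, level 2 averages of those, and so on.
type DownsampledLog struct {
	levels []*SampleLog
}

// Append adds a sample to the finest level. Whenever a level is full,
// its two oldest samples are averaged and the average is pushed to the
// next (coarser) level, which is created on demand.
func (d *DownsampledLog) Append(v float64, at time.Time) {
	if len(d.levels) == 0 {
		d.levels = append(d.levels, &SampleLog{})
	}
	for i := 0; ; i++ {
		l := d.levels[i]
		if l.count < capacity {
			l.Append(v, at)
			return
		}
		a, _ := l.PopOldest()
		b, _ := l.PopOldest()
		l.Append(v, at)
		// Carry the averaged pair to the coarser level.
		v = (a + b) / 2
		if i+1 == len(d.levels) {
			d.levels = append(d.levels, &SampleLog{})
		}
	}
}
```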

A chain of arrays as described above would be maintained for every value to be measured.

This log could be plotted and analyzed with tools such as RRDtool, Gnuplot, Flot, IPFS GUIs, etc.

If we implemented even fancier things such as having multiple downsampled array chains for different downsampling functions (average, min, max, …), we would basically have reimplemented RRDtool’s logging part and at that point could just depend on it. But if we’re happy with a simple circular array or a repeatedly downsampled chain of circular arrays, we could implement it with very little code and no dependencies.

@ghost

ghost commented Sep 28, 2015

We're exposing a Prometheus scraping endpoint at :5001/debug/metrics/prometheus and running an internal Grafana dashboard.

The only IPFS-specific metric so far is ipfs_p2p_peers_total: 8b164f9 and 9c30b85
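For reference, registering a metric in that style with the Prometheus Go client looks roughly like this (an illustrative sketch, not the actual code from those commits; the metric name and collection logic are assumptions):

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// peersTotal is an illustrative gauge in the style of ipfs_p2p_peers_total.
var peersTotal = prometheus.NewGauge(prometheus.GaugeOpts{
	Name: "ipfs_p2p_peers_total",
	Help: "Number of currently connected peers.",
})

func main() {
	prometheus.MustRegister(peersTotal)
	peersTotal.Set(42) // the daemon would update this from the real swarm state

	// Serve metrics under the same path the daemon uses on its API port.
	http.Handle("/debug/metrics/prometheus", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":5001", nil))
}
```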

@jbenet
Member

jbenet commented Sep 29, 2015

Yeah, we should add more metrics to that \o/

@ion1 want to try adding the metrics you need for your nifty dup graphs? (We could turn certain metrics on with certain config options.)

@ion1
Author

ion1 commented Sep 30, 2015

I would certainly like to contribute code, but I’m afraid I can’t say if or when I’ll have the energy to study the Go language and the go-ipfs codebase.

@whyrusleeping
Member

metrics! cc @Kubuxu

@em-ly em-ly added the need/community-input Needs input from the wider community label Aug 25, 2016
@eingenito
Contributor

This issue has been superseded by #5783, #5604 and ongoing work on gateway performance monitoring.
