Is LevelDB a good fit for an RRD? #55
Thinking about implementing a backend for https://github.com/etsy/statsd/ using levelup. Are there any insights anyone can provide me with :) ?
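For context, a statsd backend is just a module that statsd loads at startup and that subscribes to the flush event, receiving the aggregated metrics on every flush interval. A minimal sketch of what a levelup-backed backend could look like (the key scheme and the `levelPath` config name are assumptions, not a published module):

    var level = require('level');

    var db;

    // statsd backend contract: export init(startupTime, config, events).
    exports.init = function (startupTime, config, events) {
      db = level(config.levelPath || './stats.db');

      // statsd emits 'flush' with (timestamp, metrics) every flush interval.
      events.on('flush', function (timestamp, metrics) {
        // Zero-pad the timestamp so keys sort chronologically.
        var key = String(timestamp).padStart(13, '0');
        db.put(key, JSON.stringify(metrics), function (err) {
          if (err) console.error('leveldb flush write failed:', err);
        });
      });

      return true; // signal successful initialization
    };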
cc @wolfeidau
The main issue with the statsd + graphite integration is that they are painfully hard to set up.
@mcollina exactly what I had in mind, pure node statsd, though I think we'd need to make it more like an RRD to make it usable...
what does RRD stand for?
Round robin database http://en.wikipedia.org/wiki/RRDtool ?
If you want to expire stuff, you can try https://github.com/rvagg/node-level-ttl, expiring values older than X.
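For reference, a minimal sketch of what that looks like with level-ttl (the database path and TTL value are arbitrary; `checkFrequency` controls how often expired keys are swept):

    var level = require('level');
    var ttl = require('level-ttl');

    // Wrap the db so put/batch accept a ttl option (in milliseconds).
    var db = ttl(level('./stats.db'), { checkFrequency: 10 * 1000 });

    // This entry is deleted automatically ~24 hours after being written.
    db.put('stats:1371822514', JSON.stringify({ count: 98 }),
      { ttl: 24 * 60 * 60 * 1000 },
      function (err) { if (err) throw err; });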
I had a shot at using leveldb for storing data similar to how RRD and graphite whisper files work, and ran into a few challenges.

First, some background: RRD doesn't store the raw time series data, it stores a rolling series of values based on things like average, mean and percentile. RRD pre-allocates the buckets for the data being stored, say averages at 1-minute intervals for a week, 1-hour intervals for a month, and 1-day intervals for a year. When time series data is fed to RRD, it updates these buckets to reflect the changing average across the time periods.

So, back to leveldb: in my case I employed a rather simplistic sort of continuous map-reduce job where data was fed in and rolled into the aggregates based on a trigger. This trigger had quite a bit of work to do; it would update each of the windows I had specified. The resulting implementation's main flaw was that it just stored way too much data, mainly because of how level map-reduce works. I moved on to hacking on another implementation using the raw triggers and my own state table, however this again had issues with data volume and how much I churned through leveldb.

That said, all is not lost: there are people using log-structured data stores for this kind of data, I just haven't had a chance to search for papers or ideas on how to adapt this type of data to leveldb.
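To make the pre-allocation idea concrete, here is a rough sketch (my own illustration, not @wolfeidau's actual implementation) of folding samples into a fixed set of round-robin buckets, so storage stays bounded no matter how much data flows in. The key layout and window sizes are assumptions:

    var level = require('level');
    var db = level('./rrd.db');

    // RRD-style fixed windows: interval size and number of round-robin slots.
    var windows = [
      { name: '1min',  interval: 60 * 1000,           slots: 7 * 24 * 60 }, // a week of minutes
      { name: '1hour', interval: 60 * 60 * 1000,      slots: 30 * 24 },     // a month of hours
      { name: '1day',  interval: 24 * 60 * 60 * 1000, slots: 365 }          // a year of days
    ];

    // Fold one sample into the current bucket of every window.
    function update(metric, value, time, cb) {
      var pending = windows.length;
      var batch = [];
      windows.forEach(function (w) {
        var slot = Math.floor(time / w.interval) % w.slots; // wraps: round robin
        var bucketStart = Math.floor(time / w.interval) * w.interval;
        var key = metric + '!' + w.name + '!' + slot;
        db.get(key, function (err, raw) {
          // Ignore NotFound errors; treat a missing key as an empty bucket.
          var b = raw ? JSON.parse(raw) : null;
          // A stale bucket left over from a previous cycle is reset, not appended to.
          if (!b || b.start !== bucketStart) b = { start: bucketStart, sum: 0, count: 0 };
          b.sum += value;
          b.count += 1; // average = sum / count, recomputed on read
          batch.push({ type: 'put', key: key, value: JSON.stringify(b) });
          if (--pending === 0) db.batch(batch, cb);
        });
      });
    }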
@mcollina just using the TTL isn't enough for a round robin database.

Ahhh @wolfeidau, thanks for the writeup. I did think of using map reduce, though I had a feeling there'd be a better way that involves less recomputation. I suspect it may involve some statistical optimisation, which would require some math-smarts. Nevertheless, here's my LevelDB RRD design:

A Round Robin Database is essentially a circular buffer; let's say our circular buffer can store 1MB of data. We need to fit this data not in an array, but in a set of sorted key-value pairs. So, if we use the right key naming convention, the data will be sorted from oldest to newest (a possible scheme is sketched after the example entry below). Maybe an extra key. Note: each entry will look something like:

{
"counters": {
"statsd.bad_lines_seen": 0,
"statsd.packets_received": 98,
"bucket": 26
},
"timers": {},
"gauges": {
"gaugor": 303
},
"timer_data": {},
"counter_rates": {
"statsd.bad_lines_seen": 0,
"statsd.packets_received": 9.8,
"bucket": 2.6
},
"sets": [
[
"5"
]
],
"pctThreshold": [
90
]
}

So the compression step would be: as the size reaches our arbitrary limit, we'll stream off as much of the oldest data (the top of the stream) as required to fit in the new entries, and statistically combine the old values into a single value (the data would need to include the range somehow). This would cause every batch of data to trigger this "compression" process, and I'm not sure how well this would perform. Thoughts?
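For illustration, one key naming convention that gives the described oldest-to-newest ordering is a zero-padded timestamp prefix (this exact format is an assumption, not part of the original design). With it, a read stream from the top of the keyspace yields the oldest entries first, which is exactly what the compression step wants to consume:

    var level = require('level');
    var db = level('./rrd.db');

    // Zero-padded epoch milliseconds: lexicographic order == chronological order.
    function timeKey(ms) {
      return 'stats!' + String(ms).padStart(15, '0');
    }

    // `entry` is a flush payload like the example above.
    var entry = { counters: { bucket: 26 }, gauges: { gaugor: 303 } };
    db.put(timeKey(Date.now()), JSON.stringify(entry), function (err) {
      if (err) throw err;
    });

    // "Top of the stream" = oldest data: pull the first N entries for compression.
    db.createReadStream({ gt: 'stats!', lt: 'stats!\xff', limit: 100 })
      .on('data', function (kv) {
        // kv.key / kv.value: oldest entries, candidates for statistical rollup.
      });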
Also note, this is just a rough outline; we would need to make the compression algorithm smarter so we're not only combining the oldest data. Instead we need to combine the data by specific time periods. What we want, for example, is to use 33% of capacity for data from now to -1 month, another 33% for -1 month to -6 months, and the remainder for -6 months back to the beginning of time, with configuration to set these thresholds and time periods. Also, it would be handy to be able to set the amount of granularity for each compression step (a rough sketch of such tiers follows below).
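A minimal sketch of how those retention tiers might be expressed in configuration, and of why each stored value needs to carry its range along (the config shape and the combine helper are hypothetical, purely for illustration):

    // Hypothetical retention tiers: each covers an age range and rolls data
    // up to a coarser resolution; shares mirror the 33/33/34 split above.
    var MONTH = 30 * 24 * 3600 * 1000;
    var tiers = [
      { maxAge: 1 * MONTH, share: 0.33, resolution: 60 * 1000 },        // now .. -1 month, per-minute
      { maxAge: 6 * MONTH, share: 0.33, resolution: 3600 * 1000 },      // -1 .. -6 months, per-hour
      { maxAge: Infinity,  share: 0.34, resolution: 24 * 3600 * 1000 }  // older, per-day
    ];

    // Pick the tier (and therefore the rollup resolution) for an entry's age.
    function tierFor(ageMs) {
      for (var i = 0; i < tiers.length; i++) {
        if (ageMs <= tiers[i].maxAge) return tiers[i];
      }
    }

    // Combining two rollups is only possible if each carries its sample
    // count and time range; this is the "include the range somehow" above.
    function combine(a, b) {
      return {
        sum: a.sum + b.sum,
        count: a.count + b.count,
        from: Math.min(a.from, b.from),
        to: Math.max(a.to, b.to)
      };
    }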
Probably won't get time to start this for a few weeks, so if anyone else does, please post the link to the repo here 😄 |
Blast from the past! I'll close this now; there are a lot of tools nowadays that do this: InfluxDB, Prometheus, etc.
Fair enough. I realized I was jumping the gun on closing a lot of issues. Changed my mind and re-opened and moved to