
Downsampling/decaying howto? #686

Closed
nwmcsween opened this issue Sep 6, 2018 · 2 comments

@nwmcsween commented Sep 6, 2018

Is there a simple way to do something like drop_chunks, but instead of throwing away the data, have it munged through some sort of downsampler (e.g. 1s intervals to 10s)? And is it possible to cascade it (1s to 10s to 100s to ...)?

mfreed added the question label Sep 14, 2018

@mfreed (Member) commented Sep 14, 2018

Hi @nwmcsween, the common approach folks take is to use a scheduled UPSERT to aggregate data from one hypertable into a second: https://docs.timescale.com/v1.0/using-timescaledb/writing-data#upsert
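
A minimal sketch of such an UPSERT, assuming a raw 1-second hypertable `metrics_1s(time, device_id, value)` and a rollup hypertable `metrics_10s(bucket, device_id, avg_value, n)` with a unique index on `(bucket, device_id)` (all names here are illustrative, not from the docs):

```sql
-- Roll 1s raw data up into 10s buckets; the ON CONFLICT clause turns this
-- into an UPSERT, so re-running it recomputes buckets that already exist.
INSERT INTO metrics_10s (bucket, device_id, avg_value, n)
SELECT time_bucket('10 seconds', time) AS bucket,
       device_id,
       avg(value),
       count(*)
FROM metrics_1s
GROUP BY bucket, device_id
ON CONFLICT (bucket, device_id)
DO UPDATE SET avg_value = EXCLUDED.avg_value,
              n         = EXCLUDED.n;
```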

With our new support for background tasks, we'll be adding policy-driven aggregations in the future. For now, you can easily schedule these externally to the database, similar to drop_chunks: https://docs.timescale.com/v1.0/using-timescaledb/data-retention

The nice thing about using an UPSERT is that you can more easily handle late data. I.e., every 5 seconds, do a pass over the last 10 seconds so that you recompute any of the secondly data that arrived late.
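
A sketch of that "last 10 seconds" pass, against the same hypothetical tables as above; the WHERE clause is aligned to a bucket boundary so each affected bucket is recomputed in full rather than from a partial slice:

```sql
-- Re-aggregate only the buckets overlapping the last 10 seconds, so 1s rows
-- that arrived late get folded back into their 10s bucket on the next pass.
INSERT INTO metrics_10s (bucket, device_id, avg_value, n)
SELECT time_bucket('10 seconds', time) AS bucket,
       device_id,
       avg(value),
       count(*)
FROM metrics_1s
WHERE time >= time_bucket('10 seconds', now() - INTERVAL '10 seconds')
GROUP BY bucket, device_id
ON CONFLICT (bucket, device_id)
DO UPDATE SET avg_value = EXCLUDED.avg_value,
              n         = EXCLUDED.n;
```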

