Implement ASAP() smoother #5094
Comments
This wouldn't be doable due to the PromQL execution model; I'd suggest requesting this in Grafana, which has the required data.
Just to follow up, this was the Grafana response:
You can try something with the
I guess it would be possible to implement something similar in PromQL where you pass in a large-enough window to cover all use cases, and the algorithm then decides, within that window, which section to use for averaging, depending on the shape of the rest of the data under that window. But it would be inefficient to redo that general shape computation for every eval timestamp.
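To make that idea concrete, here is a rough Python sketch of such a window search (my own illustration, not the paper's optimized algorithm: ASAP picks the moving-average window that minimizes roughness while preserving the series' kurtosis, so genuine spikes survive):

```python
import statistics

def kurtosis(xs):
    """Sample kurtosis; ASAP uses it as a proxy for whether spikes survive smoothing."""
    m = statistics.fmean(xs)
    var = statistics.fmean([(x - m) ** 2 for x in xs])
    if var == 0:
        return 0.0
    return statistics.fmean([(x - m) ** 4 for x in xs]) / var ** 2

def roughness(xs):
    """Standard deviation of first differences; lower means visually smoother."""
    return statistics.pstdev([b - a for a, b in zip(xs, xs[1:])])

def moving_average(xs, w):
    return [statistics.fmean(xs[i:i + w]) for i in range(len(xs) - w + 1)]

def asap_window(xs, max_window):
    """Brute-force stand-in for ASAP's search: the window size whose moving
    average is smoothest while keeping at least the original kurtosis."""
    best_w, best_r = 1, roughness(xs)
    orig_kurt = kurtosis(xs)
    for w in range(2, max_window + 1):
        sm = moving_average(xs, w)
        if kurtosis(sm) >= orig_kurt and roughness(sm) < best_r:
            best_w, best_r = w, roughness(sm)
    return best_w
```

The real algorithm prunes this search using autocorrelation peaks; the grid search above is only meant to show the shape of the computation that would have to be repeated per eval timestamp.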
I recall playing with an implementation like that (copy/pasted from elsewhere). Skimming the paper, there is a "Streaming ASAP" variant (section 4.5), which reads along the lines of what @juliusv suggests.
Please note that porting Datadog's proprietary JavaScript code to Go is probably not legal. It might be better to start from the PDF alone, if that's allowed.
I poked at the code provided along with the paper at https://github.com/stanford-futuredata/ASAP, which is Apache 2.0. Edit: just to be 100% clear, I have not looked at Datadog's code.
By the way, predict_linear is also a function that looks back in time and takes a duration as a parameter; maybe some inspiration can be taken from there.
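For reference, predict_linear(v[d], t) does a least-squares fit over the samples in the lookback range and extrapolates t seconds ahead. A minimal Python rendition of that lookback-plus-parameter pattern (timestamps here are plain seconds; the real implementation works relative to the eval timestamp):

```python
def predict_linear(samples, t):
    """samples: list of (timestamp, value) pairs inside the lookback window.
    Fits y = intercept + slope * ts by least squares and extrapolates
    t seconds past the newest sample."""
    n = len(samples)
    sx = sum(ts for ts, _ in samples)
    sy = sum(v for _, v in samples)
    sxx = sum(ts * ts for ts, _ in samples)
    sxy = sum(ts * v for ts, v in samples)
    denom = n * sxx - sx * sx
    if denom == 0:  # fewer than two distinct timestamps: slope undefined
        return None
    slope = (n * sxy - sx * sy) / denom
    intercept = (sy - slope * sx) / n
    newest = samples[-1][0]
    return intercept + slope * (newest + t)
```

An ASAP-style function could follow the same signature shape: a range selector for the data plus a scalar parameter (here a maximum window) steering the computation.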
We are currently doing a bug scrub. This is beyond the scope of Prometheus & PromQL at the moment, but we will discuss these kinds of use cases at the next dev summit. Depending on the outcome, we will either implement this or close the issue.
From my experience with using ASAP, I definitely think it would be a win for Prometheus. As to implementation, I recall that the streaming variant isn't far from Prometheus's running-average code (though it would probably need a bigger window to produce the desired output).
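The building block both of those share is a fixed-window running mean that updates in O(1) per incoming sample. A hedged sketch (my own, not Prometheus code):

```python
from collections import deque

class RunningMean:
    """Moving average over the last `window` samples, updated incrementally.
    A streaming smoother would keep one of these per candidate window size."""
    def __init__(self, window):
        self.window = window
        self.buf = deque()
        self.total = 0.0

    def push(self, x):
        """Add one sample and return the current windowed mean."""
        self.buf.append(x)
        self.total += x
        if len(self.buf) > self.window:
            self.total -= self.buf.popleft()  # evict the oldest sample
        return self.total / len(self.buf)
```

Keeping a running sum instead of re-averaging the buffer is what makes the streaming variant cheap enough to consider per eval timestamp.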
Noice!
(Sent by email on Jul 24, 2020 in reply to a comment by Julien Pivotto that included an ASAP screenshot: https://user-images.githubusercontent.com/291750/88426339-4bc16780-cdf1-11ea-8568-4bcc9e16f706.png)
Do we plan to implement this?
This would not be a simple function; it would amount to creating a brand-new language.
Or a new option in the
That could in theory work, but it still tends toward creating a brand-new language.
I feel this could be done without a new language: a list of post-processors applied in sequence. The input to the API could be something like
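As illustration only (every name below is invented, not an actual Prometheus API), the proposal amounts to a table of named post-processors applied left to right to the query result:

```python
# Hypothetical post-processing pipeline; names and signatures are made up
# for illustration, not part of any real Prometheus interface.
POST_PROCESSORS = {
    "abs": lambda xs: [abs(x) for x in xs],
    "clamp_min": lambda xs, lo=0.0: [max(x, lo) for x in xs],
}

def apply_pipeline(values, steps):
    """steps: list like [("abs", {}), ("clamp_min", {"lo": 1.0})],
    applied left to right to the evaluated series values."""
    for name, kwargs in steps:
        values = POST_PROCESSORS[name](values, **kwargs)
    return values
```

An ASAP smoother would then be just one more entry in that table, rather than new query-language syntax.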
That smells suspiciously like a new language.
My opinion is that the ASAP smoother should run on ALL samples (not limited to stepped samples).
Hello from the bug scrub. Note that we have recently decided to be more open to experimental PromQL functions by hiding them behind the |
From using Datadog, I've come to find their autosmooth() function very handy when dealing with "spiky" data series. Their implementation (blog post) is inspired by the ASAP smoother (Automatic Smoothing for Attention Prioritization; homepage, PDF paper).
While it is not perfect for every use case, I have found it to work extremely well compared to hand-tuning some "average over X minutes" window until a pattern actually emerges, and it still doesn't hide genuine spikes.
Example from the ASAP homepage: