For the ASAP method, what is the difference between downsampling 1000 original data points to 1000 points versus 900 original data points to 900? #16
Comments
I would like to know the same. @janjakubnanista, is this intended behavior?
@tommedema @peterj35 Hi there! Thank you for your question; this can indeed be a little confusing.

ASAP works a bit differently than the other methods. With LTD, for example, the number of points you get back is exactly the number you passed in as the resolution. ASAP, on the other hand, treats the resolution only as a guide and returns an optimal* number of points for the specified resolution.

*) "Optimal" here depends on both the data and the resolution. ASAP returns as few points as possible while preserving the data's characteristics (using kurtosis as the measure). It does so by searching for parameters for an SMA (simple moving average) smoothing function and then applying it to the data. So ASAP is essentially an automatic SMA, if you wish.

Try passing data of different "shapes" to the function and you will see better what I mean :) Does that answer your question? Let me know!
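To make the "automatic SMA" idea concrete, here is a simplified sketch of the mechanism described above. This is my own illustration, not the library's actual implementation: real ASAP also checks roughness and exploits periodicity when searching for the window, and the function names here (`sma`, `kurtosis`, `autoSma`) are hypothetical.

```typescript
// Simple moving average with a sliding window of the given size.
function sma(data: number[], window: number): number[] {
  const out: number[] = [];
  for (let i = 0; i + window <= data.length; i++) {
    let sum = 0;
    for (let j = i; j < i + window; j++) sum += data[j];
    out.push(sum / window);
  }
  return out;
}

// Sample kurtosis: measures how "spiky" the series is.
function kurtosis(data: number[]): number {
  const n = data.length;
  const mean = data.reduce((a, b) => a + b, 0) / n;
  let m2 = 0;
  let m4 = 0;
  for (const x of data) {
    const d = x - mean;
    m2 += d * d;
    m4 += d * d * d * d;
  }
  m2 /= n;
  m4 /= n;
  return m4 / (m2 * m2);
}

// Pick the largest SMA window whose smoothed output still preserves the
// original kurtosis, i.e. smooth as aggressively as possible without
// flattening the spikes. (A simplification of ASAP's actual search.)
function autoSma(data: number[], maxWindow: number): number[] {
  const target = kurtosis(data);
  let best = data;
  for (let w = 2; w <= maxWindow; w++) {
    const smoothed = sma(data, w);
    if (kurtosis(smoothed) >= target) best = smoothed;
  }
  return best;
}
```

Note that the output length is `data.length - window + 1`, which is why the number of returned points depends on the data rather than matching the requested resolution exactly.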
@janjakubnanista We understand that, but then why would ASAP with 900 input data points, essentially no downsampling and only smoothing, result in a straight line? Clearly it does not preserve the data characteristics here. Yet with 1000 input data points, which is almost identical input and likewise no downsampling, the characteristics are preserved. It seems like something is going wrong in the algorithm.
To ask this differently: we always want the smooth waves shown in the first screenshot (1000 data points). How would we achieve that? We don't want the curve to flatten into a straight line.
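One possible workaround, sketched below, is to skip the automatic window search entirely and apply a plain simple moving average with a window you control, e.g. a fixed fraction of the series length. That way 900- and 1000-point inputs are smoothed identically in relative terms. This is a hypothetical helper of mine, not the library's ASAP, and the 2% window fraction is an arbitrary choice:

```typescript
// Plain SMA with a caller-chosen window, computed with a running sum.
function movingAverage(data: number[], window: number): number[] {
  if (window < 1 || window > data.length) throw new RangeError("bad window");
  const out: number[] = [];
  let sum = 0;
  for (let i = 0; i < data.length; i++) {
    sum += data[i];
    if (i >= window) sum -= data[i - window];
    if (i >= window - 1) out.push(sum / window);
  }
  return out;
}

// Window as a fixed fraction of the input length, so the amount of
// smoothing scales consistently with the series size.
const smooth = (data: number[]) =>
  movingAverage(data, Math.max(1, Math.round(data.length * 0.02)));
```

Because the window is deterministic, the output can never collapse to a straight line the way an overly aggressive automatic search can.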
Sorry for my delayed responses! I need to look at the code but I don't have a computer right now; I'm in the middle of moving countries :) If you give me a week I should be ready. Is that okay @tommedema @peterj35?
Of course, thank you
I would add this information to the README, since some users may want an exact output size and they should be aware that ASAP will not respect the specified size.
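For callers that do need an exact output size, a README note could also point at a post-processing step. Here is a minimal sketch (a hypothetical helper, not part of the library) that linearly resamples any smoother's output to a fixed length:

```typescript
// Linearly resample a series to exactly targetLength points, for callers
// that need a smoother's variable-length output at a fixed size.
function resample(data: number[], targetLength: number): number[] {
  if (targetLength === 1) return [data[0]];
  const out: number[] = [];
  const step = (data.length - 1) / (targetLength - 1);
  for (let i = 0; i < targetLength; i++) {
    const pos = i * step;
    const lo = Math.floor(pos);
    const hi = Math.min(lo + 1, data.length - 1);
    const t = pos - lo;
    out.push(data[lo] * (1 - t) + data[hi] * t);
  }
  return out;
}
```

Applied to ASAP's output, this keeps the smoothed shape while guaranteeing the length the caller asked for.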
Hello, we were playing around with the periodic data demo and had a question regarding the ASAP downsampling method.
When I "downsample" 1000 original points to 1000 downsampled points, I get a graph that looks like this: (screenshot omitted)
I would say that the ASAP trendline matches the original data fairly accurately: it applies a little smoothing, but the original data is pretty much intact.
However, when I downsample 900 original points to 900 downsampled points, I get a graph that looks like this: (screenshot omitted)
Although in theory mapping 1000 -> 1000 or 900 -> 900 should make no difference in the amount of detail preserved, in practice there is quite a bit of difference: the 900 -> 900 run lost significantly more detail than the 1000 -> 1000 run.
We were wondering whether this is somehow a characteristic of the ASAP downsampling method, or a potential bug. We would like to consistently get results like the 1000 -> 1000 run, preserving roughly that level of detail after downsampling.