[Bug]: ERROR: cannot create continuous aggregate with incompatible bucket width #5277
Comments
ERROR: cannot create continuous aggregate with incompatible bucket width
I think the time intervals for "b" and "c" are different. Check this out:
The time interval for both "b" and "c" in this modified code is set to 74752 milliseconds, ensuring that both time intervals are identical and preventing the error "ERROR: cannot create continuous aggregate with incompatible bucket width".
Thanks for the quick reply. The time intervals for "b" and "c" need to be different, because I want "c" to down-sample "b". Here are the time bucket intervals I chose: 0 (raw hypertable), 9344 ms ("b"), and 74752 ms ("c").
Thanks for the bug report @natebowang. I can reproduce the issue.
Previously we used date_part("epoch", interval) and integer division internally to determine whether the top cagg's interval is a multiple of its parent's. This led to precision loss and wrong results in the case of intervals with sub-second components. Fixed by using the `ts_interval_value_to_internal` function to convert intervals to an appropriate integer representation for division. Fixes #5277 (cherry picked from commit 5a3cacd)
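The precision loss described in the commit message can be illustrated in plain SQL (a minimal sketch of the arithmetic; the actual check lives in the extension's C code):

```sql
-- date_part('epoch', ...) returns fractional seconds for sub-second intervals.
SELECT date_part('epoch', INTERVAL '9344 ms');   -- 9.344
SELECT date_part('epoch', INTERVAL '74752 ms');  -- 74.752

-- Truncating those values to integers (9 and 74) before the divisibility
-- check gives 74 % 9 = 2, so the bucket widths look incompatible,
-- even though 74752 ms is exactly 8 times 9344 ms:
SELECT 74752 % 9344;  -- 0
```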
What type of bug is this?
Unexpected error
What subsystems and features are affected?
Continuous aggregate
What happened?
I want to create continuous aggregate c on top of another continuous aggregate b:
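The original statement was not captured in this extract; a sketch of what it would look like, assuming "b" already exists as a continuous aggregate with a 9344 ms bucket (column names hypothetical):

```sql
CREATE MATERIALIZED VIEW c WITH (timescaledb.continuous) AS
SELECT time_bucket(INTERVAL '74752 ms', bucket) AS bucket,
       avg(avg_value) AS avg_value
FROM b
GROUP BY 1
WITH NO DATA;
```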
And I got this error:
ERROR: cannot create continuous aggregate with incompatible bucket width
But 74752 ms is 8 times 9344 ms. How can I create "c"? Thanks.
TimescaleDB version affected
2.9.2
PostgreSQL version used
14.6
What operating system did you use?
Docker image timescale/timescaledb:latest-pg14 on Archlinux host
What installation method did you use?
Docker
What platform did you run on?
On prem/Self-hosted
Relevant log output and stack trace
How can we reproduce the bug?
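The original reproduction steps were not captured in this extract; a minimal sketch that triggers the error on 2.9.2 with a hierarchical continuous aggregate using sub-second bucket widths (table and column names hypothetical):

```sql
CREATE TABLE metrics (ts TIMESTAMPTZ NOT NULL, value DOUBLE PRECISION);
SELECT create_hypertable('metrics', 'ts');

-- First-level continuous aggregate with a sub-second-granularity bucket.
CREATE MATERIALIZED VIEW b WITH (timescaledb.continuous) AS
SELECT time_bucket(INTERVAL '9344 ms', ts) AS bucket,
       avg(value) AS avg_value
FROM metrics
GROUP BY 1
WITH NO DATA;

-- Second-level aggregate; 74752 ms = 8 x 9344 ms, yet this fails:
CREATE MATERIALIZED VIEW c WITH (timescaledb.continuous) AS
SELECT time_bucket(INTERVAL '74752 ms', bucket) AS bucket,
       avg(avg_value) AS avg_value
FROM b
GROUP BY 1
WITH NO DATA;
-- ERROR: cannot create continuous aggregate with incompatible bucket width
```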