[Bug]: Assertion failure in ts_hypertable_has_compression_table #4795
another instance? https://github.com/timescale/timescaledb/actions/runs/3431435306/jobs/5719565367
Still happening: https://github.com/timescale/timescaledb/actions/runs/3642792492/jobs/6150310637
Also reproduces on a different query:
A cache is used for the duration of a query to speed up planning time, but when destroying the cache, an error in the destruction would leave the cache pointer pointing to a partially or fully destroyed cache, which would give incorrect cache hits for the next query. This commit changes the order of the assignment so that the cache pointer is cleared before destroying the cache. This will allocate a new cache for the next query, even if the previous destruction had an error. Fixes timescale#4795
The planner uses a hypertable cache and a baserel cache. Since the planner calls can be recursive, they both need to handle recursion. The hypertable cache uses a stack of caches and will typically push a pointer to the same cache onto the stack. Once an invalidation event for the hypertable meta-table comes in, it replaces the current hypertable cache with a new version. In the end you might have a cache stack like [H1, H1, H2], where an invalidation event occurred, causing a new hypertable cache H2 to be created.
The baserel cache is created once for all recursions and destroyed when exiting the top-most recursive call of the planner. After some investigation, this is the hypothesis to go by:
There is indeed entanglement between the hypertable cache and the baserel cache, but there are additional problems with the baserel cache: since it uses the reloid as the cache key, it will not return correct results when chunks occur multiple times in a plan, as they could require different classifications.
This problem was fixed here: #4870
When popping the hypertable cache stack, it might happen that the hypertable cache was invalidated between the push and the pop. In that case, the baserel cache can contain invalid entries pointing to the now-popped hypertable cache, so we reset the baserel cache. Fixes #4795
This is hit occasionally in sanitizer tests.
Failed run with coredump:
Query that triggers (not guaranteed):
Stacktrace:
Extra context: