[Bug]: Segfault when trying to insert into multiple compressed chunks at once #4778
Comments
Hi @antekresic, let me try to understand more about what and where the corruption is.
sb230132 added a commit to sb230132/timescaledb that referenced this issue on Oct 10, 2022:

INSERT into a compressed hypertable with more open chunks than ts_guc_max_open_chunks_per_insert causes a segmentation fault. A new row that needs to be inserted into a compressed chunk has to be compressed first. The memory required to compress the row is allocated from the RowCompressor::per_row_ctx memory context. Once the row is compressed, ExecInsert() is called, and memory from that same context is allocated and freed instead of using the "Executor State" context. This corrupts memory. Fixes: timescale#4778
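The commit message describes a lifetime mismatch between the short-lived per-row compression context and the executor. In PostgreSQL terms, the usual memory-context discipline looks roughly like the following sketch (a hedged, non-compilable outline, not the actual TimescaleDB patch; compress_row() and the variable names are hypothetical):

```c
/* Hedged sketch of the memory-context discipline, not the actual fix.
 * compress_row() and the variable names are hypothetical. */

/* Do the per-row compression work in the short-lived context ... */
MemoryContext oldctx = MemoryContextSwitchTo(row_compressor->per_row_ctx);
HeapTuple compressed = compress_row(row_compressor, slot);
MemoryContextSwitchTo(oldctx);

/* ... but copy the result into the executor's per-query context
 * before ExecInsert() sees it, so that resetting per_row_ctx between
 * rows cannot invalidate memory the executor still holds. */
oldctx = MemoryContextSwitchTo(estate->es_query_cxt);
compressed = heap_copytuple(compressed);
MemoryContextSwitchTo(oldctx);
```

MemoryContextSwitchTo() and heap_copytuple() are real PostgreSQL APIs; heap_copytuple() copies the tuple into the current memory context, which is what moves it out of the per-row context's lifetime.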
sb230132 added further commits to sb230132/timescaledb with the same commit message that referenced this issue on Oct 10, Oct 11, Oct 12, and Oct 13, 2022.
sb230132 added a commit that referenced this issue on Oct 13, 2022, with the same commit message (Fixes: #4778).
SachinSetiya pushed commits with the same commit message that referenced this issue on Nov 16 and Nov 28, 2022.
What type of bug is this?
Crash
What subsystems and features are affected?
Data ingestion
What happened?
While trying to insert into a hypertable where all the chunks were already compressed, I get a segfault. Interestingly, this only happens if there are a lot of compressed chunks that you are trying to insert into.
I expected an error saying that I cannot insert into compressed chunks, or something similar.
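The crash described above matches a classic memory-lifetime bug: data allocated in a short-lived per-row context is still referenced after that context is reset. A toy Python simulation of the pattern (all names are hypothetical stand-ins, not TimescaleDB code):

```python
class MemoryContext:
    """Toy stand-in for a PostgreSQL memory context: a bag of
    allocations that can all be invalidated at once by reset()."""

    def __init__(self, name):
        self.name = name
        self.allocations = []

    def alloc(self, value):
        cell = {"value": value, "valid": True}
        self.allocations.append(cell)
        return cell

    def reset(self):
        # Everything allocated in this context becomes garbage at once.
        for cell in self.allocations:
            cell["valid"] = False
        self.allocations.clear()


def read(cell):
    # Dereferencing freed memory is the simulated segfault.
    if not cell["valid"]:
        raise RuntimeError("use after free (simulated segfault)")
    return cell["value"]


per_row_ctx = MemoryContext("per_row_ctx")
executor_ctx = MemoryContext("es_query_cxt")

# Buggy pattern: the compressed row lives in the per-row context,
# which is reset before the next row while the executor still holds it.
crashed = False
row = per_row_ctx.alloc("compressed row 1")
per_row_ctx.reset()          # next row begins
try:
    read(row)                # executor touches stale memory
except RuntimeError:
    crashed = True

# Fixed pattern: copy the row into the longer-lived executor context
# before the per-row context is reset.
row = per_row_ctx.alloc("compressed row 2")
row = executor_ctx.alloc(read(row))   # copy out while still valid
per_row_ctx.reset()
print(crashed, read(row))
```

In the buggy path the read after reset() fails; in the fixed path the copy survives the reset, mirroring why the later fix moves the allocation out of RowCompressor::per_row_ctx.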
TimescaleDB version affected
2.8.0
PostgreSQL version used
14.2
What operating system did you use?
Arch
What installation method did you use?
Source
What platform did you run on?
On prem/Self-hosted
Relevant log output and stack trace
How can we reproduce the bug?