At the moment compaction is not safe to perform on a lot of data: essentially it only works for a static schema. Ideally it should work more like downsampling in the processing pipeline, in that it needs to multiply reference segments wherever a new segment boundary crosses an existing one (see the sketch below).
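As a minimal sketch of that boundary-splitting idea: assume each reference segment covers a half-open row range, and that any new boundary falling strictly inside an existing reference forces a split so every piece lands wholly inside one new segment. `SegmentRef` and `split_refs_at_boundaries` are hypothetical names for illustration, not the real API.

```python
from dataclasses import dataclass, replace
from typing import List

# Hypothetical type: a reference pointing at a stored segment,
# covering the half-open row range [start, end).
@dataclass(frozen=True)
class SegmentRef:
    segment_id: int
    start: int  # first row covered (inclusive)
    end: int    # one past the last row covered (exclusive)

def split_refs_at_boundaries(refs: List[SegmentRef],
                             boundaries: List[int]) -> List[SegmentRef]:
    """Multiply reference segments so no reference crosses a new boundary.

    Each boundary b strictly inside a reference [start, end) splits it
    into [start, b) and [b, end); both pieces keep the same segment_id.
    """
    out: List[SegmentRef] = []
    for ref in refs:
        cuts = sorted(b for b in boundaries if ref.start < b < ref.end)
        edges = [ref.start] + cuts + [ref.end]
        for lo, hi in zip(edges, edges[1:]):
            out.append(replace(ref, start=lo, end=hi))
    return out

# Example: one reference over rows [0, 250) crossed by new boundaries at
# 100 and 200 becomes three references to the same underlying segment.
refs = [SegmentRef(segment_id=7, start=0, end=250)]
print(split_refs_at_boundaries(refs, boundaries=[100, 200]))
# [SegmentRef(7, 0, 100), SegmentRef(7, 100, 200), SegmentRef(7, 200, 250)]
```

No data is rewritten here; only the references are multiplied, which is what makes the operation safe to apply to existing versions.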
Long term it would be desirable to be able to physically compact existing versions in a way that leaves them logically the same. However, we would need great confidence in its robustness before taking the step of altering existing data, so it needs a really good test suite, including some declarative testing, to flush out the edge cases.
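One possible shape for that declarative testing: a table of cases, each declaring a starting segment layout and the compaction to apply, with a single invariant checked for all of them, namely that the data reads back identically after physical compaction. `build_version`, `read_all`, and `compact` are hypothetical stand-ins for the real API, not existing functions.

```python
import pytest

# Declarative cases: (description, existing segment row-ranges,
# target rows per segment after compaction).
CASES = [
    ("aligned",       [(0, 100), (100, 200)],  200),
    ("crossing",      [(0, 150), (150, 250)],  100),
    ("tiny-segments", [(0, 1), (1, 2), (2, 3)], 100),
]

@pytest.mark.parametrize("name,layout,rows_per_segment", CASES)
def test_compaction_is_logically_identity(name, layout, rows_per_segment):
    version = build_version(layout)              # hypothetical helper
    before = read_all(version)                   # hypothetical read API
    compact(version, rows_per_segment=rows_per_segment)
    after = read_all(version)
    assert after.equals(before), f"compaction changed data in case {name!r}"
```

Because each case is pure data, edge cases (boundary-aligned segments, degenerate one-row segments, crossing boundaries) can be added cheaply without writing new test logic.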