@mgautierfr What is the status here? We have just released version 1.1.7 of the Gutenberg scraper, which introduces zstd compression in place of lzma. It seems that, for this scraper at least, we have/had a regression impacting lzma compression. Considering that lzma is going to be deprecated at some point, I believe fixing a bug in lzma compression is not a priority. The question left is: do we have an additional problem? A problem which impacts zstd as well? What is its exact impact?
The commit a7201ce wrongly tries to allocate a bigger buffer for the output data (a7201ce#diff-f89546dd4147a65d6ae025e38c7d09ce535c435db7f27984dc5256653d5a5c65R224-R245).
Before, we were allocating in case of BUFFER_ERROR only when stream.avail_out == 0; now we allocate on every BUFFER_ERROR. This may be the cause of the failing Gutenberg creation with a std::bad_alloc (https://farm.openzim.org/pipeline/bc15fab2d29dc287d8fdec06/debug) (to confirm).
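To make the difference concrete, here is a minimal sketch (not the actual libzim code) of the two buffer-growth strategies described above. CompStream, growOutputBuffer and the handle* functions are hypothetical names used only for illustration, assuming the compression loop grows its output buffer when the backend reports a buffer error.

#include <vector>
#include <cstddef>

// Hypothetical stream state, mirroring the avail_in/avail_out fields of
// lzma_stream / ZSTD streaming contexts.
struct CompStream {
  std::size_t avail_in = 0;   // input bytes still to consume
  std::size_t avail_out = 0;  // free space left in the output buffer
};

// Hypothetical helper that doubles the output buffer.
// Each call that actually grows the buffer can eventually throw
// std::bad_alloc if it is triggered over and over.
void growOutputBuffer(std::vector<char>& out, CompStream& stream) {
  const std::size_t oldSize = out.size();
  out.resize(oldSize * 2);
  stream.avail_out += oldSize;  // the new space is available for output
}

// Old behaviour (sketch): on BUFFER_ERROR, grow only when the output
// buffer is really full. A BUFFER_ERROR with avail_out > 0 means
// "no progress possible for another reason", not "need more space".
void handleBufErrorOld(std::vector<char>& out, CompStream& stream) {
  if (stream.avail_out == 0) {
    growOutputBuffer(out, stream);
  }
}

// New behaviour introduced by a7201ce (sketch): grow on every
// BUFFER_ERROR. If the error is reported for another reason, the buffer
// keeps doubling until allocation fails with std::bad_alloc.
void handleBufErrorNew(std::vector<char>& out, CompStream& stream) {
  growOutputBuffer(out, stream);
}

Under that assumption, the unconditional growth in the second variant would explain an eventually failing allocation on large inputs, which matches the std::bad_alloc seen in the pipeline log linked above.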