Deflate plugin crashes (an optimised) VM when the "deflateBlock" primitive fails #193

Open
theseion opened this issue Jan 16, 2018 · 1 comment

So, I seem to have identified the problem. According to the RFC on Deflate (https://tools.ietf.org/html/rfc1951#page-5), the size of a non-compressible block is limited to (2^16) - 1 bytes:

"A compressed data set consists of a series of blocks, corresponding
to successive blocks of input data. The block sizes are arbitrary,
except that non-compressible blocks are limited to 65,535 bytes."
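To make the origin of that cap concrete: a stored block's length is carried in a 16-bit LEN field, so no single stored block can describe more than 65,535 bytes. The sketch below (Python used purely for illustration, via the standard zlib module; the issue itself concerns the Pharo Deflate plugin) forces stored blocks with compression level 0 and parses the first block header to show the LEN/NLEN layout.

```python
import zlib

# Raw deflate stream (wbits=-15); level 0 forces "stored" (non-compressed) blocks.
data = bytes(200_000)  # level 0 stores input uncompressed regardless of content
comp = zlib.compressobj(level=0, method=zlib.DEFLATED, wbits=-15)
out = comp.compress(data) + comp.flush()

# A stored block starts with a 3-bit header (BFINAL bit, BTYPE=00), padded to a
# byte boundary, followed by LEN and NLEN (one's complement of LEN) as 16-bit
# little-endian integers. LEN being 16 bits is where the 65,535-byte cap comes from.
btype = (out[0] >> 1) & 0x03
length = out[1] | (out[2] << 8)
nlen = out[3] | (out[4] << 8)

assert btype == 0                 # stored block
assert length <= 0xFFFF           # cannot exceed 65,535 by construction
assert nlen == length ^ 0xFFFF    # NLEN is the complement of LEN
assert zlib.decompress(out, wbits=-15) == data  # multiple blocks round-trip
```

Note that zlib happily encodes 200,000 bytes here; it simply emits several stored blocks in sequence, each within the 16-bit limit.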

Stephan's case exceeds this limit, which leads to an error when validating the match in DeflateStream>>validateMatchAt:from:to:. I assume that the compiler optimization removes the trap for the validation failure in this case (note that I simply traced the problem with the Smalltalk implementation, by replacing the primitive send with a super send).

With Fuel loaded (Pharo 7, 64-bit) the following produces a reproducible validation error. The error is misleading: the actual problem is that the maximum block size was exceeded (I think because of the 16-bit mask used in the implementation):

(FLGZipStrategy newWithTarget: FLByteArrayStreamStrategy new) writeStreamDo: [ :stream |
    FLSerializer newDefault
        serialize: ((1 to: (2 raisedTo: 16) - 82) collect: [ :i | 1 ]) asByteArray
        on: stream ]

I'm not sure what should be done in this case, but it appears to me that the remaining data could simply be carried over into further blocks, which is at least what the implementation I found via GCC does (https://gcc.gnu.org/ml/gcc/2009-09/txt00001.txt):

/* Stored blocks are limited to 0xffff bytes, pending_buf is limited
 * to pending_buf_size, and each stored block has a 5 byte header:
 */
ulg max_block_size = 0xffff;
ulg max_start;

if (max_block_size > s->pending_buf_size - 5) {
    max_block_size = s->pending_buf_size - 5;
}
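The splitting behaviour that the C code above caps per block can be sketched end to end. The following is a minimal, hypothetical stored-block writer in Python (again only for illustration, not the plugin's actual code): it cuts the input at the 65,535-byte boundary, emits one stored block per chunk, and marks only the last block as final, which a standard raw-deflate decoder accepts.

```python
import zlib

MAX_STORED = 0xFFFF  # 16-bit LEN field limits each stored block to 65,535 bytes

def store_deflate(data: bytes) -> bytes:
    """Emit raw-deflate stored blocks, splitting input at the 65,535-byte cap."""
    out = bytearray()
    chunks = [data[i:i + MAX_STORED] for i in range(0, len(data), MAX_STORED)] or [b""]
    for i, chunk in enumerate(chunks):
        bfinal = 1 if i == len(chunks) - 1 else 0
        out.append(bfinal)                                   # BFINAL bit, BTYPE=00 (stored)
        out += len(chunk).to_bytes(2, "little")              # LEN
        out += (len(chunk) ^ 0xFFFF).to_bytes(2, "little")   # NLEN (complement of LEN)
        out += chunk
    return bytes(out)

payload = bytes(range(256)) * 400  # 102,400 bytes -> spans two stored blocks
assert zlib.decompress(store_deflate(payload), wbits=-15) == payload
```

This suggests the fix is not to append extra data to an oversized block but to keep emitting additional blocks until the input is exhausted, as zlib does.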

According to http://forum.world.st/Re-Pharo-project-GZipWriteStream-crashing-my-VM-when-serializing-td4177725.html this could be a problem with optimizations. Specifically, with optimizations turned off, the primitive still fails but does not crash the VM.
