Browser crash on large file decompression #12
Comments
900MB is definitely pushing it. I have a feeling you're hitting this bug; it was recently fixed in Canary. But even so, 900MB is pretty big. In the future, I plan on implementing typed arrays. Perhaps that would help.
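For context, the memory win from typed arrays comes from storing raw bytes instead of boxed JS numbers. A minimal sketch; the `toTypedArray` helper is purely illustrative, not part of this library:

```javascript
// A plain JS array of byte values stores each entry as a full JS number,
// while a Uint8Array stores one raw byte per entry, so large buffers
// shrink considerably. Hypothetical helper for illustration only.
function toTypedArray(str) {
  const bytes = new Uint8Array(str.length);
  for (let i = 0; i < str.length; i++) {
    bytes[i] = str.charCodeAt(i) & 0xff; // keep the low byte of each char code
  }
  return bytes;
}
```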
I'll run some more tests and see if I can find a solid correlation between memory usage and crashing. If so, I'll try splitting it up and let you know how it goes. Oh, and typed arrays would be awesome, I'm sure. I'm currently storing the compressed data with PouchDB, and the system works great.
I'm hitting this bug with a file much smaller than 900MB. My uncompressed data is 66MB, and I hit the bug whether I compress it with
I'm surprised that a file that small is crashing. I've been able to decompress files about that size. But it's hard to determine how V8 is using memory. I suggest you try splitting up the data as a workaround. Hopefully we'll be able to implement a streaming API soon to make big files work.
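Splitting the data could look something like this; `compress` and `decompress` below stand in for whatever calls this library actually exposes, so treat it as a sketch rather than working integration code:

```javascript
const CHUNK_SIZE = 1 << 20; // ~1M characters per chunk

// Slice the input so no single compress/decompress call has to hold the
// whole payload at once.
function splitIntoChunks(str, size = CHUNK_SIZE) {
  const chunks = [];
  for (let i = 0; i < str.length; i += size) {
    chunks.push(str.slice(i, i + size));
  }
  return chunks;
}

// Compress each chunk independently, then decompress and rejoin to
// recover the original string.
function roundTrip(str, compress, decompress) {
  return splitIntoChunks(str).map(compress).map(decompress).join('');
}
```

Each chunk compresses independently, so you lose a little ratio versus compressing the whole thing, but peak memory stays bounded by the chunk size.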
In Chrome, I was able to compress a 34MB JSON file (with some stringification) down to 5MB...
That's friggin' awesome!!!
But upon trying to decompress the same file, it crashes at around 60% every time. I'm pretty sure it's running on a Web Worker, and at the moment that worker pushes my Chrome tab to about 120% CPU and 900MB of RAM. This is all at compression level 1, too.
This may be expected behavior and I'm pushing the envelope here on size ;)
Or not?
I just wanted to get your opinion on large dataset compression.
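For what it's worth, the ratio reported above (34MB down to 5MB, roughly 7:1) is plausible for stringified JSON. A tiny sketch of how one might measure it; `compress` again is a placeholder for this library's actual call:

```javascript
// Stringify the object, compress the result, and report how many times
// smaller the compressed output is than the JSON text.
function compressionRatio(obj, compress) {
  const json = JSON.stringify(obj);
  return json.length / compress(json).length;
}
```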