Context
In a browser web worker, I'm zipping a directory containing 2,000 files totalling 2GB. About 80% of the files are under 100KB, roughly ten are 10-50MB, and the rest fall in between. Compression takes just under two minutes (initial synchronous implementation) and produces a 1.8GB zip file.
The problem
The renderer process's memory usage grows to over 2GB during compression.
Since each output chunk is streamed to fast storage (the origin-private file system via syncAccessHandle), this isn't expected: chunks should be read, compressed, and written without any data lingering in memory.
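The intended write path can be sketched as follows. The commented-out setup uses the real OPFS APIs (`createSyncAccessHandle` is only available in a dedicated worker); `appendChunk` is a hypothetical helper shown here to illustrate the chunk-at-a-time write pattern, not part of the issue's code.

```javascript
// Browser-only setup, in a dedicated worker:
//   const root = await navigator.storage.getDirectory();
//   const file = await root.getFileHandle('out.zip', { create: true });
//   const handle = await file.createSyncAccessHandle();

// Hypothetical helper: appends one compressed chunk at `offset` and returns
// the new end-of-file offset. `handle` needs a sync-access-handle-like
// write(buffer, { at }) method that returns the number of bytes written.
function appendChunk(handle, chunk, offset) {
  const written = handle.write(chunk, { at: offset });
  return offset + written;
}
```

Each chunk is written synchronously and nothing else holds a reference to it, so in principle it should be collectable as soon as `appendChunk` returns.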
Looking at the worker's allocation timeline in DevTools at a random point a few seconds into the compression, I can see 500MB of `JSArrayBuffer` data being retained. Most buffers are of size 98,304 (`Uint8Array`) or 2,097,152 (`Uint16Array`), and they are retained by `Deflate` objects held in the `u` array of `Zip`; they are buffers and other structures used for compression. It doesn't seem necessary for these to stay in memory once a file has been compressed.
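The retention pattern can be modelled without the library itself: each `add()` pushes a per-file entry (with its compressor state in `d`) onto the internal `u` array, and nothing clears those entries until the archive is finalized. A toy model, where the field names mirror the internals described above but the ~2MB buffer merely stands in for the real Deflate hash/window structures:

```javascript
// Toy model: every added file keeps its compressor state reachable
// through zip.u, so retained memory grows with the number of files.
class ToyZip {
  constructor() { this.u = []; }
  add(name) {
    // Stand-in for per-file Deflate state (~2MB of internal buffers).
    const entry = { name, d: { buf: new Uint8Array(2 * 1024 * 1024) } };
    this.u.push(entry);
    return entry;
  }
  retainedBytes() {
    return this.u.reduce((n, f) => n + (f.d ? f.d.buf.length : 0), 0);
  }
}
```

With 2,000 files, state on this order of magnitude per entry adds up to the hundreds of megabytes seen in the allocation timeline; nulling an entry's `d` makes its buffers collectable.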
Workaround
Discard all references to the `d` (`Deflate`) object after the final compressed chunk has been emitted:
```js
const ondata = compressionStream.ondata;
compressionStream.ondata = (error, data, final) => {
  ondata(error, data, final);
  if (final) {
    compressionStream.d = null;
    zip.u.at(-1).d = null; // Object created in `zip.add()`
  }
};
```
With this in place, my scenario uses 100-500MB of renderer memory, depending on when Chrome garbage-collects.
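Wrapped into a helper, the workaround can be applied to each file as it is added. `releaseAfterFinal` is a hypothetical name; the `ondata(error, data, final)` signature and the `d`/`u` fields are those described above, and the stubs in the usage below stand in for the real objects:

```javascript
// Wraps a file's ondata callback so that its Deflate state (`d`) is
// dropped once the final compressed chunk for that file is emitted.
// `zip.u[zip.u.length - 1]` is the entry just created by zip.add().
function releaseAfterFinal(zip, compressionStream) {
  const entry = zip.u[zip.u.length - 1];
  const ondata = compressionStream.ondata;
  compressionStream.ondata = (error, data, final) => {
    ondata(error, data, final); // keep the original behaviour
    if (final) {
      compressionStream.d = null;
      entry.d = null;
    }
  };
}
```

Because the original callback runs first, the final chunk is still written before the references are cleared, so output is unaffected.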