Description
I am experimenting with reading and writing a very large file (27 MB) with sandbox enabled, where the file service is implemented via remote IPC (the same as when you are connected to a remote such as SSH).
For some unrelated reason, our call into IFileService.readFileStream ends up producing one small 256 KB chunk followed by one large chunk containing the rest, instead of N chunks of 256 KB each (which is what we get when sandbox is not enabled). We then pass these chunks into the PieceTreeTextBufferBuilder#acceptChunk method.
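To make the difference concrete, here is a small illustration (not VS Code source; `splitIntoChunks` is a hypothetical helper) of the two chunkings described above. A 27 MiB file split evenly at 256 KiB yields 108 uniform chunks, while the sandboxed IPC path observed here yields just a small head chunk plus one very large tail:

```typescript
// Hypothetical sketch of the two chunk layouts described in this issue.
const CHUNK_SIZE = 256 * 1024; // 256 KiB

// Split a buffer into fixed-size chunks, as the non-sandbox stream does.
function splitIntoChunks(buf: Uint8Array, chunkSize: number = CHUNK_SIZE): Uint8Array[] {
  const chunks: Uint8Array[] = [];
  for (let offset = 0; offset < buf.length; offset += chunkSize) {
    chunks.push(buf.subarray(offset, Math.min(offset + chunkSize, buf.length)));
  }
  return chunks;
}

const file = new Uint8Array(27 * 1024 * 1024); // stand-in for the 27 MB file

// Non-sandbox behaviour: 27 MiB / 256 KiB = 108 uniform chunks.
const even = splitIntoChunks(file);
console.log(even.length); // → 108

// Observed sandbox behaviour: one small head, one ~26.75 MiB tail.
const uneven = [file.subarray(0, CHUNK_SIZE), file.subarray(CHUNK_SIZE)];
console.log(uneven.length, uneven[1].length);
```

Each element of the resulting array corresponds to one acceptChunk call, so the tail chunk becomes a single very large piece in the tree.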
Later, when I trigger a save, we use the snapshot method to get a Readable we can iterate over to write to disk. I end up with the same 2 chunks as before, so we perform one small write and one very large write instead of a series of smaller writes.
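A minimal sketch of why the write path mirrors the read path, assuming the snapshot follows the pull-based Readable shape (read() returns the next chunk, or null when exhausted) and replays the original pieces unchanged; `snapshotOf` and `writeAll` are illustrative names, not VS Code APIs:

```typescript
// Pull-based readable, matching the shape described in this issue.
interface Readable<T> {
  read(): T | null;
}

// Wrap an array of chunks as a Readable, like a snapshot that
// replays the buffer's pieces as-is.
function snapshotOf(chunks: string[]): Readable<string> {
  let i = 0;
  return { read: () => (i < chunks.length ? chunks[i++] : null) };
}

// Drain the snapshot; each read() result becomes one write, so the
// chunk sizes from load flow straight through to the disk writes.
function writeAll(src: Readable<string>, write: (s: string) => void): number {
  let writes = 0;
  for (let chunk = src.read(); chunk !== null; chunk = src.read()) {
    write(chunk);
    writes++;
  }
  return writes;
}
```

Under this assumption, a buffer built from 2 uneven chunks produces exactly 2 uneven writes on save, which is the behaviour observed above.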
Is this the intended behaviour, or does the piece tree have its own logic for managing chunks? Is it potentially dangerous to have such a large chunk, or is that fine?
Steps to reproduce:
- produce a large text file with 1M lines
- run with `scripts/code.sh --__sandbox`
- open the file and set a breakpoint where we create the piece tree