This repository has been archived by the owner on Apr 8, 2020. It is now read-only.
Serialize node invocationInfo JSON directly to stream to avoid running out of memory #314
Fixes an OutOfMemoryException thrown from the GetBytes method when sending invocation info to the node process as it is first created for the SocketNodeInstance. The failure occurred when serializing more than 30 MB of JSON text under 32-bit IIS Express (with a .NET Framework 4.5.2 app).
This fix serializes the invocationInfo as JSON directly to the stream, rather than double-handling it as an intermediate string, which reduces the memory footprint of the application. It uses Newtonsoft.Json's JsonTextWriter to handle serialization to the stream.
The underlying stream is kept open while the writers are disposed by passing 'true' as the last (leaveOpen) parameter of the StreamWriter constructor (this overload also requires specifying the buffer size, so I've copied the size used in a method below), and by setting the CloseOutput property of the JsonWriter to false.
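The pattern described above could be sketched roughly as follows (the method and parameter names here are illustrative, not the repository's exact API, and the 1024-byte buffer size is an assumption standing in for the size copied from the existing method):

```csharp
using System.IO;
using System.Text;
using Newtonsoft.Json;

static class InvocationInfoWriter
{
    // Hypothetical helper: serializes an object as JSON directly to the
    // stream, with no intermediate string allocation.
    public static void WriteJson(Stream stream, object invocationInfo)
    {
        // leaveOpen: true keeps the underlying stream open after the
        // StreamWriter is disposed; this overload requires an explicit
        // encoding and buffer size (1024 is a placeholder value here).
        using (var streamWriter = new StreamWriter(
            stream, new UTF8Encoding(false), 1024, leaveOpen: true))
        using (var jsonWriter = new JsonTextWriter(streamWriter))
        {
            // Also prevent the JsonWriter from closing the output on dispose.
            jsonWriter.CloseOutput = false;

            new JsonSerializer().Serialize(jsonWriter, invocationInfo);
            jsonWriter.Flush();
        }
        // The stream remains usable here for any subsequent writes.
    }
}
```

Disposing the writers (rather than only flushing them) ensures buffered data reaches the stream, while the leaveOpen/CloseOutput combination stops the disposal from cascading into the socket stream itself.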
The cancellationToken is no longer passed through because the method is no longer async; JsonSerializer.Serialize doesn't appear to have an async variant. I'm guessing this could have a slight performance impact, so maybe there's a better way of doing this?