
Enforce HttpClient limits on GetFromJsonAsync #79386

Merged
merged 5 commits into dotnet:main on Jan 10, 2023

Conversation

MihaZupan
Member

Enforces HttpClient.Timeout and HttpClient.MaxResponseContentBufferSize on the whole response (including the content transfer) to match HttpClient.Get{String/ByteArray}Async.
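As a rough illustration of the two limits this PR wires up (the client configuration, URL, and DTO type here are hypothetical, not from the PR):

```csharp
using System.Net.Http.Json;

var client = new HttpClient
{
    // With this change, the timeout covers the whole operation,
    // including reading the content, matching GetStringAsync/GetByteArrayAsync.
    Timeout = TimeSpan.FromSeconds(30),

    // Likewise, the buffer-size limit is now enforced while the
    // JSON content is being read, not just when buffering headers.
    MaxResponseContentBufferSize = 1024 * 1024 // 1 MB
};

// GetFromJsonAsync deserializes the response content stream;
// both limits above now apply to that transfer as well.
MyDto? dto = await client.GetFromJsonAsync<MyDto>("https://example.com/api/data");

record MyDto(string Name);
```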

@MihaZupan MihaZupan added this to the 8.0.0 milestone Dec 8, 2022
@MihaZupan MihaZupan requested a review from a team December 8, 2022 07:46
@ghost

ghost commented Dec 8, 2022

Tagging subscribers to this area: @dotnet/ncl
See info in area-owners.md if you want to be subscribed.

Issue Details
Labels: area-System.Net.Http
Milestone: 8.0.0

{
using Stream contentStream = await HttpContentJsonExtensions.GetContentStreamAsync(content, linkedCTS?.Token ?? cancellationToken).ConfigureAwait(false);

var lengthLimitStream = new LengthLimitReadStream(contentStream, (int)client.MaxResponseContentBufferSize);
Member


Can we avoid allocating this wrapper stream if MaxResponseContentBufferSize is the default int.MaxValue? Not just about this allocation, but about the increased cost on every individual read operation.
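A minimal sketch of what this suggestion amounts to (`LengthLimitReadStream` is the wrapper from this PR; the exact shape of the check is illustrative, not the merged code):

```csharp
Stream readStream = contentStream;

// Only pay for the wrapper (and its per-read limit check) when the
// caller has actually lowered the limit from the default int.MaxValue.
if (client.MaxResponseContentBufferSize < int.MaxValue)
{
    readStream = new LengthLimitReadStream(contentStream, (int)client.MaxResponseContentBufferSize);
}
```

Note that the author argues below that the wrapper should be kept even at the default limit, to stop the deserializer from reading more than 2 GB.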

Member Author

@MihaZupan MihaZupan Dec 8, 2022


Sure.
Looks like the DeleteAsync overload doesn't use ResponseHeadersRead, so we can avoid all of this overhead there as well.

While on the topic, wrapper streams like these are the primary reason why I opened #77935. The per-call allocation overhead disappears with zero-byte reads.
Another similar example is the WebSockets telemetry in YARP, where we intercept a read stream like this, but it adds almost no overhead because YARP does zero-byte reads by default.

Member Author


Looking at this again, I don't think we should avoid this wrapper - we'd still want to prevent the deserializer from reading more than 2 GB in that case.

Member


The per-call allocation overhead disappears with zero-byte reads.

How so?

Member Author


The read in the wrapper stream is implemented like so:

ValueTask<int> readTask = _innerStream.ReadAsync(buffer, cancellationToken);

if (buffer.IsEmpty)
{
    // Zero-byte read: pass the task through untouched; its result
    // can't affect the length limit.
    return readTask;
}

if (readTask.IsCompletedSuccessfully)
{
    // Data was already buffered (e.g. right after a zero-byte read),
    // so the limit check runs synchronously with no allocation.
    int read = readTask.Result;
    CheckLengthLimit(read);
    return new ValueTask<int>(read);
}

return Core(readTask); // Async path: allocates the awaiting state machine

Assuming the caller is using zero-byte reads, you will alternate between

  • The buffer.IsEmpty branch for the zero-byte read where we'll just pass through the task as we're not interested in the result (it won't affect the read limit)
  • The readTask.IsCompletedSuccessfully branch after a zero-byte read as data is available immediately

You'll only hit the allocating path if the underlying stream doesn't support zero-byte reads, or if you hit an exception.

The WebSocket telemetry stream I mentioned is using exactly the same pattern. The result of the zero-byte read doesn't affect us as there isn't any data to parse.
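The alternation described above can be sketched from the caller's side like this (the `stream` and `Process` names are hypothetical; the pattern is the point):

```csharp
// Hypothetical caller using zero-byte reads against the wrapper stream.
byte[] buffer = new byte[8192];
while (true)
{
    // Zero-byte read: completes once data is available, without tying up
    // a buffer. In the wrapper this hits the buffer.IsEmpty pass-through.
    await stream.ReadAsync(Memory<byte>.Empty, cancellationToken);

    // Data is now available, so this read typically completes
    // synchronously and takes the non-allocating fast path.
    int read = await stream.ReadAsync(buffer, cancellationToken);
    if (read == 0)
    {
        break; // End of stream
    }

    Process(buffer.AsSpan(0, read));
}
```

Only when the underlying stream doesn't support zero-byte reads (or a read faults) does the allocating `Core` path run.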

@MihaZupan MihaZupan merged commit 47779c9 into dotnet:main Jan 10, 2023
@ghost ghost locked as resolved and limited conversation to collaborators Feb 9, 2023