Revise the doc for Streaming Response #3847
Conversation
Codecov Report

```
@@ Coverage Diff @@
## master #3847 +/- ##
============================================
- Coverage     73.24%   73.23%   -0.01%
- Complexity    15034    15075      +41
============================================
  Files          1322     1326       +4
  Lines         57853    58003     +150
  Branches       7340     7356      +16
============================================
+ Hits          42374    42479     +105
- Misses        11748    11781      +33
- Partials       3731     3743      +12
```

Continue to review full report at Codecov.
Excellent! @freevie
Left some minor comments. 😄
Thanks a lot for rephrasing the sentences.
They are much clearer now.
```diff
-See [Let’s Play with Reactive Streams on Armeria - 1](https://engineering.linecorp.com/en/blog/reactive-streams-armeria-1/)
-to understand backpressure and the situation when an `OutOfMemoryError` is raised.
+Waiting for the chunk to be written is to avoid loading into memory when the client is not ready to receive. This is called __back pressure__.
+See [Let’s Play with Reactive Streams on Armeria - 1](https://engineering.linecorp.com/en/blog/reactive-streams-armeria-1/) to learn back pressure and what happens when an `OutOfMemoryError` is raised.
```
How about

```suggestion
See [Let’s Play with Reactive Streams on Armeria - 1](https://engineering.linecorp.com/en/blog/reactive-streams-armeria-1/) to learn back pressure and when an `OutOfMemoryError` is raised.
```
Because I wanted to say that an `OutOfMemoryError` is raised if we don't use back pressure. We all know what happens when an `OutOfMemoryError` is raised. 😄
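The point about running out of memory without back pressure can be sketched without any HTTP machinery: a producer that ignores the consumer's pace accumulates an unbounded backlog. The class below and its numbers are purely illustrative, not part of Armeria.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrates why back pressure matters: without it, a fast producer queues
// chunks faster than a slow consumer drains them, so the in-memory buffer
// (and eventually the heap) grows without bound.
public class UnboundedBufferSketch {
    // Simulates ticks in which the producer enqueues `producedPerTick` chunks
    // and the consumer dequeues `consumedPerTick`; returns the peak backlog.
    public static int peakQueueSize(int chunks, int producedPerTick, int consumedPerTick) {
        Deque<byte[]> buffer = new ArrayDeque<>();
        int produced = 0;
        int peak = 0;
        while (produced < chunks || !buffer.isEmpty()) {
            // Producer ignores the consumer's pace (no back pressure).
            for (int i = 0; i < producedPerTick && produced < chunks; i++, produced++) {
                buffer.add(new byte[8192]); // one 8 KiB chunk
            }
            for (int i = 0; i < consumedPerTick && !buffer.isEmpty(); i++) {
                buffer.poll();
            }
            peak = Math.max(peak, buffer.size());
        }
        return peak;
    }

    public static void main(String[] args) {
        // Producing 4 chunks per tick while consuming 1: the backlog keeps growing.
        System.out.println(peakQueueSize(1000, 4, 1));
        // Paced production (1 in, 1 out) keeps the backlog at zero.
        System.out.println(peakQueueSize(1000, 1, 1));
    }
}
```

With back pressure, the producer waits for the consumer before creating the next chunk, so the backlog stays bounded regardless of how large the file is.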
```
## Sending a streaming response using <type://HttpResponseWriter>
```
So it's not a good idea to use a link in the subtitle?
Some sites have clickable headings that point to themselves so users can copy a link straight from the URL field of a browser. Considering that many tools these days add links to headings automatically by SSG (like the Armeria site providing a permalink), adding links to a heading doesn't seem like good practice. Although Armeria overcomes this and generates an anchor that works perfectly well, having the term "type" attached to the HttpResponseWriter may be confusing in terms of accessibility.
That totally makes sense. We have a link for the type in the section anyway. 😄
```diff
@@ -64,15 +58,14 @@ sb.service("/big_file.dat", (ctx, req) -> {
 ...
 private HttpData produceChunk(int index) {
-    // Divide the file by the pre-defined chunk size(e.g. 8192 bytes)
-    // and read it using index.
+    // and read it using index.
```
revert?
```diff
-the backpressure by ourselves. Let's start it by implementing the simplified version of <type://HttpFile>.
+To send large data other than files such as database, you need to implement back pressure yourself. Let's start off with implementing a minimal <type://HttpFile>.
```
```
Prepare to send a streaming response with <type://HttpResponseWriter> and <type://HttpResponse#streaming()>.
```
I wanted to say that `HttpResponseWriter` is returned when you call `HttpResponse.streaming()`, which is:

```java
HttpResponseWriter writer = HttpResponse.streaming();
```

So how about:

```suggestion
Prepare to send a streaming response with <type://HttpResponseWriter> returned from <type://HttpResponse#streaming()>.
```
returned by?
Oh, that's correct. 😄
```diff
-so the server will get the `OutOfMemoryError`. To solve it, we have to implement backpressure using
-<type://StreamWriter#whenConsumed()>:
+With the code above, the server would still encounter `OutOfMemoryError`. We still need to take care of preventing loading data chunks into memory before a chunk is sent to the client. To solve the problem, implement back pressure with <type://StreamWriter#whenConsumed()>:
```
```suggestion
With the code above, the server would encounter `OutOfMemoryError`. We still need to take care of preventing loading data chunks into memory before a chunk is sent to the client. To solve the problem, implement back pressure with <type://StreamWriter#whenConsumed()>:
```

Perhaps, it's better to use `still` only once?
```diff
-which is written to the <type://HttpResponseWriter>, is finally written to the socket. So, you can add
-the next task by adding a callback (`thenRun()` in the example). We produced the next chunk using callback
-in the example.
+<type://StreamWriter#whenConsumed()> returns a `CompletableFuture` that is complete when the chunk written to the <type://HttpResponseWriter> is finally written to the socket. This enables you to add the next task by adding a callback (`thenRun()` in the code example). The next task in the example is set to producing the next chunk.
```
```suggestion
<type://StreamWriter#whenConsumed()> returns a `CompletableFuture` that is complete when the chunk written to the <type://HttpResponseWriter> is finally written to the socket. This enables you to add the next task by adding a callback (`thenRun()` in the code example). The next task in the example is set to produce the next chunk.
```
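The `whenConsumed()`/`thenRun()` loop discussed here can be sketched in plain Java. `MockWriter` below is a hypothetical stand-in (not Armeria's API) for a writer whose `write()` returns a future that completes once the chunk has been consumed; scheduling the next chunk only inside that callback is the back pressure loop.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class BackpressureSketch {
    // Hypothetical stand-in for HttpResponseWriter: write() returns a future
    // that completes when the chunk has been flushed to the "socket".
    static class MockWriter {
        final List<String> flushed = new ArrayList<>();

        CompletableFuture<Void> write(String chunk) {
            flushed.add(chunk); // pretend the socket consumes it immediately
            return CompletableFuture.completedFuture(null);
        }
    }

    static final int TOTAL_CHUNKS = 3;

    // Produce chunk `index`, and schedule the next one only after the current
    // chunk has been consumed -- no chunk is created before it is needed.
    static void streamChunk(MockWriter writer, int index) {
        if (index >= TOTAL_CHUNKS) {
            return; // all chunks sent; a real writer would be closed here
        }
        writer.write("chunk-" + index)
              .thenRun(() -> streamChunk(writer, index + 1));
    }

    public static void main(String[] args) {
        MockWriter writer = new MockWriter();
        streamChunk(writer, 0);
        System.out.println(writer.flushed); // prints [chunk-0, chunk-1, chunk-2]
    }
}
```

Because each `write()` call happens inside the completion callback of the previous one, at most one chunk is in flight at a time, which is exactly the property the documentation change describes.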
```diff
-You can also implement backpressure with other libraries, such as [Reactor](https://projectreactor.io) and
-[RxJava](https://github.com/ReactiveX/RxJava). With the implementation, you can simply return it using
-<type://HttpResponse#of(ResponseHeaders,Publisher)>:
+So far, we have implemented a simple version of <type://HttpFile>. Now, we can implement a streaming response with back pressure for any type of source (e.g. database) by simply changing the `produceChunk()` method to fetch data from the source.
```
```suggestion
So far, we have implemented a simple version of <type://HttpFile>. Now, we can implement a streaming response with back pressure for any type of source (e.g. database) by simply changing the `produceChunk()` method to fetch data from the source.
```
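The removed doc text above mentioned returning a reactive `Publisher` via `HttpResponse.of(ResponseHeaders,Publisher)`. As a library-neutral sketch of the contract such a publisher must honor, here is a minimal `java.util.concurrent.Flow` publisher paired with a subscriber that requests one chunk at a time; in practice Reactor's `Flux` or RxJava's `Flowable` would fill the publisher role, and the class names below are purely illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Flow;

public class PublisherSketch {
    // A minimal synchronous publisher of pre-built chunks. It only emits as
    // many items as the subscriber has requested -- that is back pressure.
    static class ChunkPublisher implements Flow.Publisher<String> {
        final List<String> chunks;

        ChunkPublisher(List<String> chunks) { this.chunks = chunks; }

        @Override
        public void subscribe(Flow.Subscriber<? super String> sub) {
            sub.onSubscribe(new Flow.Subscription() {
                int next = 0;
                boolean done = false;

                @Override
                public void request(long n) {
                    // Emit at most n items, then wait for the next request.
                    while (n-- > 0 && next < chunks.size()) {
                        sub.onNext(chunks.get(next++));
                    }
                    if (!done && next == chunks.size()) {
                        done = true;
                        sub.onComplete();
                    }
                }

                @Override
                public void cancel() { next = chunks.size(); }
            });
        }
    }

    // A subscriber that pulls chunks one at a time, mimicking a writer that
    // only asks for the next chunk after the previous one was handled.
    public static List<String> consume(Flow.Publisher<String> pub) {
        List<String> received = new ArrayList<>();
        pub.subscribe(new Flow.Subscriber<String>() {
            Flow.Subscription subscription;

            @Override
            public void onSubscribe(Flow.Subscription s) {
                subscription = s;
                s.request(1); // pull the first chunk
            }

            @Override
            public void onNext(String item) {
                received.add(item);
                subscription.request(1); // back pressure: one at a time
            }

            @Override
            public void onError(Throwable t) { }

            @Override
            public void onComplete() { }
        });
        return received;
    }

    public static void main(String[] args) {
        System.out.println(consume(new ChunkPublisher(List.of("a", "b", "c"))));
    }
}
```

The key design point mirrored from Reactive Streams: the subscriber, not the publisher, decides when the next item is produced, so the producer can never get ahead of the consumer.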
Co-authored-by: minux <songmw725@gmail.com>
…streaming() Show the relationship between the two; the former is returned by the latter.
Mention "still" just once.
@minwoox
@freevie That's perfect. 😄
Source: #3397
Motivation:

To provide a better reading experience to Armeria users.

Modifications:

- Removed the link (`<type:..`) from a section title

Result: