
Revise the doc for Streaming Response #3847

Merged: 9 commits, Oct 5, 2021

Conversation

freevie
Contributor

@freevie freevie commented Sep 30, 2021

Source: #3397

Motivation:

To provide a better reading experience to Armeria users.

Modifications:

  • Paraphrased sentences
  • Removed monospace + link styling (<type:..) from a section title

Result:

  • Hopefully, readers will be able to use streaming responses effectively in their services.

@codecov

codecov bot commented Sep 30, 2021

Codecov Report

Merging #3847 (1ad1e22) into master (edde869) will decrease coverage by 0.00%.
The diff coverage is n/a.


@@             Coverage Diff              @@
##             master    #3847      +/-   ##
============================================
- Coverage     73.24%   73.23%   -0.01%     
- Complexity    15034    15075      +41     
============================================
  Files          1322     1326       +4     
  Lines         57853    58003     +150     
  Branches       7340     7356      +16     
============================================
+ Hits          42374    42479     +105     
- Misses        11748    11781      +33     
- Partials       3731     3743      +12     
Impacted Files Coverage Δ
...meria/common/stream/RegularFixedStreamMessage.java 82.89% <0.00%> (-2.64%) ⬇️
.../com/linecorp/armeria/server/file/FileService.java 82.43% <0.00%> (-2.60%) ⬇️
...rmeria/common/stream/ConcatArrayStreamMessage.java 77.77% <0.00%> (-2.23%) ⬇️
.../linecorp/armeria/client/DefaultClientFactory.java 72.00% <0.00%> (-2.00%) ⬇️
...dpoint/healthcheck/PartialHealthCheckStrategy.java 95.65% <0.00%> (-1.45%) ⬇️
...armeria/internal/common/CancellationScheduler.java 78.17% <0.00%> (-1.20%) ⬇️
...inecorp/armeria/server/HttpResponseSubscriber.java 79.41% <0.00%> (-1.04%) ⬇️
...p/armeria/internal/common/eureka/InstanceInfo.java 51.75% <0.00%> (-0.88%) ⬇️
...ecorp/armeria/internal/common/ArmeriaHttpUtil.java 88.32% <0.00%> (-0.11%) ⬇️
.../java/com/linecorp/armeria/common/HttpRequest.java 80.73% <0.00%> (ø)
... and 32 more

Continue to review full report at Codecov.

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update edde869...1ad1e22. Read the comment docs.

Contributor

@ikhoon ikhoon left a comment


Excellent! @freevie

@ikhoon ikhoon added this to the 1.12.0 milestone Oct 1, 2021
Member

@minwoox minwoox left a comment


Left some minor comments. 😄
Thanks a lot for rephrasing the sentences.
They are much clearer now.

site/src/pages/docs/advanced-streaming-backpressure.mdx (Outdated)
See [Let’s Play with Reactive Streams on Armeria - 1](https://engineering.linecorp.com/en/blog/reactive-streams-armeria-1/)
to understand backpressure and the situation when an `OutOfMemoryError` is raised.
Waiting for the chunk to be written is to avoid loading into memory when the client is not ready to receive. This is called __back pressure__.
See [Let’s Play with Reactive Streams on Armeria - 1](https://engineering.linecorp.com/en/blog/reactive-streams-armeria-1/) to learn back pressure and what happens when an `OutOfMemoryError` is raised.
Member


How about

Suggested change
See [Let’s Play with Reactive Streams on Armeria - 1](https://engineering.linecorp.com/en/blog/reactive-streams-armeria-1/) to learn back pressure and what happens when an `OutOfMemoryError` is raised.
See [Let’s Play with Reactive Streams on Armeria - 1](https://engineering.linecorp.com/en/blog/reactive-streams-armeria-1/) to learn back pressure and when an `OutOfMemoryError` is raised.

Because I wanted to say that an OutOfMemoryError is raised if we don't use back pressure.
We all know what happens when an OutOfMemoryError is raised. 😄

Contributor Author


@minwoox ,
I hope this fix is what you expected:
1ad1e22


## Sending a streaming response using <type://HttpResponseWriter>
Member


So it's not a good idea to use a link in the subtitle?

Contributor Author


Some sites have clickable headings that point to themselves so users can copy a link straight from the URL field of a browser. Considering that many tools these days (such as the SSG behind the Armeria site, which provides a permalink) add anchors to headings automatically, adding links to a heading doesn't seem like good practice. Although Armeria handles this and generates an anchor that works perfectly well, having the term "type" attached to the HttpResponseWriter may be confusing in terms of accessibility.

Member


That totally makes sense. We have a link for the type in the section anyway. 😄

@@ -64,15 +58,14 @@ sb.service("/big_file.dat", (ctx, req) -> {
...
private HttpData produceChunk(int index) {
// Divide the file by the pre-defined chunk size(e.g. 8192 bytes)
// and read it using index.
// and read it using index.
Member


revert?

the backpressure by ourselves. Let's start it by implementing the simplified version of <type://HttpFile>.
To send large data other than files such as database, you need to implement back pressure yourself. Let's start off with implementing a minimal <type://HttpFile>.

Prepare to send a streaming response with <type://HttpResponseWriter> and <type://HttpResponse#streaming()>.
Member


I wanted to say that HttpResponseWriter is returned when you call HttpResponse.streaming() which is:
HttpResponseWriter writer = HttpResponse.streaming();
So how about:

Suggested change
Prepare to send a streaming response with <type://HttpResponseWriter> and <type://HttpResponse#streaming()>.
Prepare to send a streaming response with <type://HttpResponseWriter> returned from <type://HttpResponse#streaming()>.

Member


returned by?

Member


Oh, that's correct. 😄

so the server will get the `OutOfMemoryError`. To solve it, we have to implement backpressure using
<type://StreamWriter#whenConsumed()>:

With the code above, the server would still encounter `OutOf MemoryError`. We still need to take care of preventing loading data chunks into memory before a chunk is sent to the client. To solve the problem, implement back pressure with <type://StreamWriter#whenConsumed()>:
Member


Suggested change
With the code above, the server would still encounter `OutOf MemoryError`. We still need to take care of preventing loading data chunks into memory before a chunk is sent to the client. To solve the problem, implement back pressure with <type://StreamWriter#whenConsumed()>:
With the code above, the server would encounter `OutOfMemoryError`. We still need to take care of preventing loading data chunks into memory before a chunk is sent to the client. To solve the problem, implement back pressure with <type://StreamWriter#whenConsumed()>:

Perhaps it's better to use "still" only once?

which is written to the <type://HttpResponseWriter>, is finally written to the socket. So, you can add
the next task by adding a callback (`thenRun()` in the example). We produced the next chunk using callback
in the example.
<type://StreamWriter#whenConsumed()> returns a `CompletableFuture` that is complete when the chunk written to the <type://HttpResponseWriter> is finally written to the socket. This enables you to add the next task by adding a callback (`thenRun()` in the code example). The next task in the example is set to producing the next chunk.
Member


Suggested change
<type://StreamWriter#whenConsumed()> returns a `CompletableFuture` that is complete when the chunk written to the <type://HttpResponseWriter> is finally written to the socket. This enables you to add the next task by adding a callback (`thenRun()` in the code example). The next task in the example is set to producing the next chunk.
<type://StreamWriter#whenConsumed()> returns a `CompletableFuture` that is complete when the chunk written to the <type://HttpResponseWriter> is finally written to the socket. This enables you to add the next task by adding a callback (`thenRun()` in the code example). The next task in the example is set to produce the next chunk.
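The write-then-wait loop this suggestion describes can be sketched with plain `CompletableFuture` chaining. This is a simplified stand-in, not the real Armeria API: `writeAndAwaitConsumption` simulates `writer.write(chunk)` followed by `writer.whenConsumed()` (completing immediately rather than when the socket drains), and `produceChunk` mirrors the hypothetical helper in the doc example.

```java
import java.util.concurrent.CompletableFuture;

class BackPressureSketch {
    static final StringBuilder socket = new StringBuilder(); // stand-in for the network

    // Stand-in for writer.write(chunk) + writer.whenConsumed(): in this
    // sketch the chunk is "consumed" immediately, so the returned future
    // is already complete.
    static CompletableFuture<Void> writeAndAwaitConsumption(String chunk) {
        socket.append(chunk);
        return CompletableFuture.completedFuture(null);
    }

    // Hypothetical chunk producer, mirroring produceChunk(index) in the doc.
    static String produceChunk(int index) {
        return "chunk" + index + ";";
    }

    static void streamFrom(int index, int totalChunks) {
        if (index >= totalChunks) {
            return; // writer.close() in the real code
        }
        // Back pressure: the next chunk is produced only after the previous
        // one has been consumed, via a thenRun() callback.
        writeAndAwaitConsumption(produceChunk(index))
                .thenRun(() -> streamFrom(index + 1, totalChunks));
    }

    public static void main(String[] args) {
        streamFrom(0, 3);
        System.out.println(socket); // chunk0;chunk1;chunk2;
    }
}
```

The key point of the suggested wording survives the simplification: no chunk is loaded into memory until the future for the previous write completes.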

You can also implement backpressure with other libraries, such as [Reactor](https://projectreactor.io) and
[RxJava](https://github.com/ReactiveX/RxJava). With the implementation, you can simply return it using
<type://HttpResponse#of(ResponseHeaders,Publisher)>:
So far, we have implemented a simple version of <type://HttpFile>. Now, we can implement a streaming response with back pressure for any type of source (e.g. database) by simply changing the `produceChunk()` method to fetch data from the source.
Member


Suggested change
So far, we have implemented a simple version of <type://HttpFile>. Now, we can implement a streaming response with back pressure for any type of source (e.g. database) by simply changing the `produceChunk()` method to fetch data from the source.
So far, we have implemented a simple version of <type://HttpFile>. Now, we can implement a streaming response with back pressure for any type of source (e.g. database) by simply changing the `produceChunk()` method to fetch data from the source.

Co-authored-by: minux <songmw725@gmail.com>
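The "any type of source" point in the revised text can be illustrated the same way: only `produceChunk()` changes. In this hedged sketch the "database" is just an in-memory iterator standing in for a cursor, and the writer is again simulated with a plain `CompletableFuture` rather than the real Armeria types.

```java
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.CompletableFuture;

class AnySourceSketch {
    static final StringBuilder socket = new StringBuilder(); // stand-in for the network

    // Simulated writer: "consumption" completes immediately in this sketch.
    static CompletableFuture<Void> writeAndAwaitConsumption(String chunk) {
        socket.append(chunk).append('\n');
        return CompletableFuture.completedFuture(null);
    }

    // The only part that changes per source: here chunks come from an
    // iterator standing in for a database cursor.
    static String produceChunk(Iterator<String> cursor) {
        return cursor.next();
    }

    static void stream(Iterator<String> cursor) {
        if (!cursor.hasNext()) {
            return; // writer.close() in the real code
        }
        writeAndAwaitConsumption(produceChunk(cursor))
                .thenRun(() -> stream(cursor));
    }

    public static void main(String[] args) {
        stream(List.of("row-1", "row-2").iterator());
        System.out.print(socket);
    }
}
```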
Member

@trustin trustin left a comment


Thanks, @freevie 🙇

@minwoox, please merge once all your comments are addressed. I guess you might want to fix them by yourself to accelerate the process. 😉

@freevie
Contributor Author

freevie commented Oct 5, 2021

@minwoox ,
I believe I've applied all your feedback. Please let me know if I left anything out, messed up, or misunderstood your intention.

@minwoox
Member

minwoox commented Oct 5, 2021

@freevie That's perfect. 😄

@minwoox minwoox merged commit 0613e51 into line:master Oct 5, 2021