
Akka HTTP, docs: Explain toStrict in more detail #206

ktoso opened this issue Sep 9, 2016 · 2 comments

@ktoso ktoso commented Sep 9, 2016

Issue by ktoso
Tuesday Sep 29, 2015 at 18:00 GMT
Originally opened as akka/akka#18599

Based on Gitter feedback: we mention toStrict briefly on the model's page, but people keep asking "how do I get a String out?". We should answer that question AND explain why it may not be the best idea, then ease them into thinking about streaming.

Is this the easiest way to get the body of an HTTP response in a human-readable format? We are using the host-level client, so all responses should be strictly populated.
It seems awfully involved to convert the byte stream to a strict String. Surely there must be an easier way?

ktoso 19:54
okay, here we go then 😄

  1. being host-level does not mean that the response entity is immediately available
  2. Sink.head only takes the first chunk; if the content is long, that's not what you want — you want all the chunks concatenated into the string
  3. try to avoid Await; instead just `.map { data => ... }` on the future
  4. there's a helper 😄 wait...
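Points 2 and 3 above can be sketched as follows. This is a minimal sketch assuming Akka HTTP's `scaladsl` API (and an implicit materializer provided by the `ActorSystem`); `bodyAsString` is a hypothetical helper name, and the response would come from the host-level client pool:

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.model.HttpResponse
import akka.util.ByteString
import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem("example")
import system.dispatcher

def bodyAsString(response: HttpResponse): Future[String] =
  response.entity.dataBytes                // Source[ByteString, _]: the raw chunks
    .runFold(ByteString.empty)(_ ++ _)     // concatenate ALL chunks, not just the first
    .map(_.utf8String)                     // map the Future instead of Await-ing it
```

Folding with `runFold` avoids the `Sink.head` pitfall (dropping everything after the first chunk), and mapping the resulting `Future` keeps the code non-blocking.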

The docs would say more or less this:

ktoso 19:56
you want to make the entity "strict", read about the data model here:
Streaming entity types (i.e. all but Strict) cannot be shared or serialized. To create a strict, sharable copy of an entity or message use HttpEntity.toStrict or HttpMessage.toStrict which returns a Future of the object with the body data collected into a ByteString.
right, but the server may respond with the data in multiple chunks
by using streaming we win: we can process the parts as soon as they arrive
if you really want to make the entity strict, see the quote above
it's a good question though; people will be asking this, so I'll open a ticket and improve the docs for this specific use case
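The helper hinted at above is `toStrict` itself. A minimal sketch, assuming the same Akka HTTP setup as elsewhere in this thread (the timeout value and helper name are illustrative, not prescribed):

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.model.HttpResponse
import scala.concurrent.Future
import scala.concurrent.duration._

implicit val system: ActorSystem = ActorSystem("example")
import system.dispatcher

// toStrict buffers the whole entity into memory within the given timeout,
// yielding a Future[HttpEntity.Strict] whose data is a single ByteString.
def strictBody(response: HttpResponse): Future[String] =
  response.entity.toStrict(3.seconds).map(_.data.utf8String)
```

Note the trade-off the thread describes: this loads the entire body into memory at once, which is convenient for small responses but defeats streaming for large ones.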

@ktoso ktoso added this to the http-backlog milestone Sep 9, 2016


@pelepelin pelepelin commented Dec 7, 2016

I've added a SO question here
Some of the answers to my questions should probably go into this documentation section as well.



@ktoso ktoso commented Dec 8, 2016

Most of your questions are answered in the docs:

I'll reply there eventually too.
