Client As Kleisli - Client Redesign Proposal #1772


ChristopherDavenport
Member

type Client[F[_]] = Kleisli[Stream[F, ?], Request[F], Response[F]]

With Syntax Class For Operations

case DisposableResponse(response, dispose) =>
  response.copy(body = response.body.onFinalize(dispose))

def toHttpService: HttpService[F] =
  Kleisli { req: Request[F] => OptionT.liftF(fetch(req)(_.pure[F])) }
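A minimal, self-contained sketch of the proposed encoding, with a hand-rolled Kleisli and List standing in for fs2.Stream (the real proposal uses cats.data.Kleisli and fs2.Stream; Request, Response, fetchStatus, and stub here are illustrative stand-ins, not http4s API):

```scala
// Hand-rolled Kleisli: a function A => F[B] wrapped in a value.
final case class Kleisli[F[_], A, B](run: A => F[B])

// Stand-in for fs2.Stream[F, A]; List keeps the sketch runnable.
type StreamF[A] = List[A]

final case class Request(uri: String)
final case class Response(status: Int, body: String)

// The proposed client type: a Kleisli from Request into a stream of Responses.
type Client = Kleisli[StreamF, Request, Response]

// Syntax-class style operation: run the client and inspect the first response.
def fetchStatus(client: Client)(req: Request): Option[Int] =
  client.run(req).headOption.map(_.status)

// A stubbed client, as one might build for tests.
val stub: Client = Kleisli(req => List(Response(200, s"echo ${req.uri}")))
```

Because the client is just a Kleisli, it composes with any middleware written against `Kleisli[G, Request[F], Response[F]]`, which is the sharing-with-server argument made below.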
Contributor

Alright, this I find interesting: if reading the body of the request after fetch results in an error, I think toHttpService might not be expressible in terms of fetch.

Member

It's more problematic than that: the response is going to be closed when the _.pure[F] runs, so we're returning a dead response body. We can't go from Stream to F without closing the resource.
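A toy model of why Stream => F must finalize (Connection, compileToValue, and readBody are illustrative names, not http4s or fs2 API): compiling a bracketed computation down to a plain value runs the finalizer on the way out, so a resource returned from the bracket is already dead.

```scala
// A resource whose body can only be read while it is open.
final class Connection {
  var open = true
  def close(): Unit = open = false
  def readBody(): String =
    if (open) "payload" else sys.error("connection already closed")
}

// "Compile" a bracketed computation to a pure value: acquire, use, release.
// This mirrors what going from Stream[F, A] to F[A] has to do.
def compileToValue[A](use: Connection => A): A = {
  val conn = new Connection
  try use(conn) finally conn.close()
}

// Reading inside the bracket is fine:
val body: String = compileToValue(_.readBody())

// But returning the resource itself yields a dead response body,
// exactly the _.pure[F] problem described above:
val leaked: Connection = compileToValue(c => c)
```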

@rossabaker
Member

There has been a lot of talk on Gitter. Not everybody follows there, so I'll summarize here.


1. Status quo

  • easy Client.toHttpApp
  • no opportunity to share middleware between client and server
  • FollowRedirect and Retry are awkward
  • Incorrect use of .open leaks
  • presents ad hoc DisposableResponse type
  • tricky to create F[Response[F]] from a resource acquired and released by (F[A], A => F[Unit])
  • impossible to create F[Response[F]] from a resource bracketed in Stream[F, A]
  • one failure mode in F

2. type HttpClient[F] = Http[Disposable[F, ?], F]

  • easy Client.toHttpApp
  • can share polymorphic middleware between client and server if they don't rerun the client
  • FollowRedirect and Retry are awkward
  • Incorrect use of .run leaks
  • presents ad hoc Disposable monad transformer
  • easy(?) to create F[Response[F]] from a resource acquired and released by (F[A], A => F[Unit]).
  • impossible to create F[Response[F]] from a resource bracketed in Stream[F, A]
  • one failure mode in F

3. type HttpApp[F] = HttpClient[F] = Http[Stream[F, ?], F]

  • easy Client.toHttpApp
  • impossible to interpret into Http[F, F]
  • can share polymorphic middleware between client and server if they don't rerun the client
  • FollowRedirect and Retry are elegant
  • No leaks
  • No ad hoc monads
  • easy to create F[Response[F]] from a resource acquired and released by (F[A], A => F[Unit]).
  • easy to create F[Response[F]] from a resource bracketed in Stream[F, A]
  • three failure modes in Stream[F, ?]: 0 outputs, >1 output, error
  • more overhead for server, @jmcardon is measuring

I think 1 is strictly dominated by 2: it echoes the MaybeResponse mistake instead of being a monad.

2 vs. 3: discuss. If still tied, we will thumb wrestle at the next NE Scala Symposium.
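The three failure modes called out for option 3 can be sketched concretely: interpreting a stream of responses down to a single response has to decide what zero outputs, more than one output, and a raised error each mean. A minimal model using List and Either in place of fs2.Stream and F (RunError and runToSingle are hypothetical names for illustration):

```scala
// The three ways a Stream[F, Response] can fail to be one response.
sealed trait RunError
case object NoResponse extends RunError
case object MultipleResponses extends RunError
final case class Raised(error: Throwable) extends RunError

// Either[Throwable, List[A]] models a stream that either errored or
// emitted some outputs; runToSingle is the Stream => F interpretation.
def runToSingle[A](outputs: Either[Throwable, List[A]]): Either[RunError, A] =
  outputs match {
    case Left(t)         => Left(Raised(t))
    case Right(Nil)      => Left(NoResponse)
    case Right(a :: Nil) => Right(a)
    case Right(_)        => Left(MultipleResponses)
  }
```

Options 1 and 2 never face the first two cases, because F carries exactly one failure mode; that is the trade against option 3's elegant middleware.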

@jmcardon
Contributor

Bench Results here

I put them up on my website for fun.

@rossabaker
Member

So it looks like we're taking about a 3% hit, with some extra variance, in a tough scenario.

This makes it possible to write a few things, easier to write a couple more, and easy to represent some incorrect things.

Discuss.

@rossabaker
Member

Another quick and dirty benchmark. Probably not as interesting as @jmcardon's.

[info] RenderBench.httpApp         avgt   60  1191.201 ±  15.814  ns/op
[info] RenderBench.httpStream      avgt   60  8563.606 ±  59.079  ns/op
[info] RenderBench.httpStreamEval  avgt   60  8914.895 ± 242.725  ns/op

This is proportionally a whole lot, marginally not a lot at all.

@rossabaker
Member

Having studied gatling a bit more, the 3% comment above is misleading. We need to rerun the benchmarks.

If we're at a real impasse here, it's possible to have the servers support both modes of operation. I think this is confusing and would rather not, but there is a compelling case for each.

@rossabaker
Member

I think this is superseded by #2102?

@aeons
Member

aeons commented Nov 5, 2018

Yes, looks like it. Ok to close for me.

@rossabaker
Member

Please reopen if we're wrong.

@rossabaker rossabaker closed this Nov 5, 2018