
Create Dispatcher within BlazeServerBuilder #4137

Merged: 3 commits into http4s:cats-effect-3 on Jan 6, 2021

Conversation

RaasAhsan (Member):

Partially addresses #4134 for blaze-server


```scala
for {
  scheduler <- tickWheelResource
  dispatcher <- Resource.eval(Dispatcher[F].allocated.map(_._1))
```
RaasAhsan (Member, Author):

Ok, so I'm having to allocate this because something weird is happening with the finalizers, which is breaking tests. Basically, even after the Dispatcher is finalized (which happens after the server finalizes), something is still trying to submit effects to the Dispatcher, throwing an exception and putting the selector threads in a bad state.

I don't really have insight into what blaze is doing here, but this is a temporary fix.
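The failure mode can be reproduced in miniature without blaze or cats-effect. Below is a minimal sketch (all names hypothetical; this is not the real `cats.effect.std.Dispatcher`) of a dispatcher that rejects submissions after shutdown, which is exactly what a selector thread hits when it tries to submit an effect after the Dispatcher's finalizer has already run:

```scala
import java.util.concurrent.atomic.AtomicBoolean

// Toy stand-in for a dispatcher: accepts thunks until it is closed,
// then throws, mirroring the "dispatcher already shutdown"
// IllegalStateException in the stack trace below.
final class ToyDispatcher {
  private val closed = new AtomicBoolean(false)

  def unsafeRun(thunk: () => Unit): Unit =
    if (closed.get()) throw new IllegalStateException("dispatcher already shutdown")
    else thunk()

  def close(): Unit = closed.set(true)
}

object DispatcherShutdownRace {
  def main(args: Array[String]): Unit = {
    val dispatcher = new ToyDispatcher
    dispatcher.unsafeRun(() => println("request handled"))
    dispatcher.close() // the dispatcher's finalizer runs after the server's
    // a late callback from a still-shutting-down pipeline now fails:
    try dispatcher.unsafeRun(() => println("never reached"))
    catch {
      case e: IllegalStateException =>
        println(s"late submission failed: ${e.getMessage}")
    }
  }
}
```

Using `allocated` and discarding the finalizer sidesteps this by never closing the dispatcher, at the cost of leaking it for the lifetime of the JVM.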

RaasAhsan (Member, Author):

FWIW, here's a stack trace:

23:29:22.304 [blaze-selector-19] ERROR o.http4s.blazecore.IdleTimeoutStage - Exception caught when attempting inbound command
java.lang.IllegalStateException: dispatcher already shutdown
	at cats.effect.std.Dispatcher$$anon$1.unsafeToFutureCancelable(Dispatcher.scala:208)
	at cats.effect.std.Dispatcher.unsafeToFuture(Dispatcher.scala:35)
	at cats.effect.std.Dispatcher.unsafeToFuture$(Dispatcher.scala:34)
	at cats.effect.std.Dispatcher$$anon$1.unsafeToFuture(Dispatcher.scala:108)
	at cats.effect.std.Dispatcher$$anon$1.$anonfun$unsafeToFutureCancelable$3(Dispatcher.scala:120)
	at cats.effect.std.Dispatcher$$anon$1.loop$3(Dispatcher.scala:192)
	at cats.effect.std.Dispatcher$$anon$1.$anonfun$unsafeToFutureCancelable$6(Dispatcher.scala:196)
	at org.http4s.server.blaze.Http1ServerStage.$anonfun$cancel$1(Http1ServerStage.scala:329)
	at org.http4s.server.blaze.Http1ServerStage.$anonfun$cancel$1$adapted(Http1ServerStage.scala:328)
	at scala.Option.foreach(Option.scala:437)
	at org.http4s.server.blaze.Http1ServerStage.cancel(Http1ServerStage.scala:328)
	at org.http4s.server.blaze.Http1ServerStage.stageShutdown(Http1ServerStage.scala:320)
	at org.http4s.blaze.pipeline.Stage.inboundCommand(Stages.scala:76)
	at org.http4s.blaze.pipeline.Stage.inboundCommand$(Stages.scala:73)
	at org.http4s.server.blaze.Http1ServerStage.inboundCommand(Http1ServerStage.scala:85)
	at org.http4s.blaze.pipeline.Head.sendInboundCommand(Stages.scala:258)
	at org.http4s.blaze.pipeline.Head.sendInboundCommand$(Stages.scala:254)
	at org.http4s.blazecore.IdleTimeoutStage.sendInboundCommand(IdleTimeoutStage.scala:28)
	at org.http4s.blaze.pipeline.Head.inboundCommand(Stages.scala:276)
	at org.http4s.blaze.pipeline.Head.inboundCommand$(Stages.scala:274)
	at org.http4s.blazecore.IdleTimeoutStage.inboundCommand(IdleTimeoutStage.scala:28)
	at org.http4s.blaze.pipeline.Head.sendInboundCommand(Stages.scala:258)
	at org.http4s.blaze.pipeline.Head.sendInboundCommand$(Stages.scala:254)
	at org.http4s.blaze.channel.ChannelHead.sendInboundCommand(ChannelHead.scala:15)
	at org.http4s.blaze.channel.nio1.NIO1HeadStage.org$http4s$blaze$channel$nio1$NIO1HeadStage$$doClose$1(NIO1HeadStage.scala:307)
	at org.http4s.blaze.channel.nio1.NIO1HeadStage$$anon$3.run(NIO1HeadStage.scala:322)
	at org.http4s.blaze.channel.nio1.SelectorLoop.executeTask(SelectorLoop.scala:100)
	at org.http4s.blaze.channel.nio1.NIO1HeadStage.doClosePipeline(NIO1HeadStage.scala:310)
	at org.http4s.blaze.pipeline.HeadStage.closePipeline(Stages.scala:327)
	at org.http4s.blaze.pipeline.HeadStage.closePipeline$(Stages.scala:325)
	at org.http4s.blaze.channel.ChannelHead.closePipeline(ChannelHead.scala:15)
	at org.http4s.blaze.pipeline.Tail.closePipeline(Stages.scala:88)
	at org.http4s.blaze.pipeline.Tail.closePipeline$(Stages.scala:84)
	at org.http4s.blazecore.IdleTimeoutStage.closePipeline(IdleTimeoutStage.scala:28)
	at org.http4s.blaze.pipeline.Head.sendInboundCommand(Stages.scala:262)
	at org.http4s.blaze.pipeline.Head.sendInboundCommand$(Stages.scala:254)
	at org.http4s.blazecore.IdleTimeoutStage.sendInboundCommand(IdleTimeoutStage.scala:28)
	at org.http4s.blaze.pipeline.Head.inboundCommand(Stages.scala:276)
	at org.http4s.blaze.pipeline.Head.inboundCommand$(Stages.scala:274)
	at org.http4s.blazecore.IdleTimeoutStage.inboundCommand(IdleTimeoutStage.scala:28)
	at org.http4s.blaze.pipeline.Head.sendInboundCommand(Stages.scala:258)
	at org.http4s.blaze.pipeline.Head.sendInboundCommand$(Stages.scala:254)
	at org.http4s.blaze.channel.ChannelHead.sendInboundCommand(ChannelHead.scala:15)
	at org.http4s.blaze.channel.nio1.NIO1HeadStage.org$http4s$blaze$channel$nio1$NIO1HeadStage$$doClose$1(NIO1HeadStage.scala:307)

Member:

And that wasn't necessary when the dispatcher was passed in?

This seems like an artifact of blaze's still graceless shutdowns, but I'm surprised to see it surface here and not before.

RaasAhsan (Member, Author):

Well, the tests just allocated a Dispatcher because that was the easiest 😅 And I imagine Mike didn't see an issue because his app just keeps the server running. If the JVM is then forced to exit, you certainly wouldn't see something like this.

RaasAhsan (Member, Author) on Jan 4, 2021:

If there's no chance that we can get a graceful shutdown, then I think the realistic solution here is to keep the allocated, add a comment, and punt on it :/ If the blaze server does shut down, we'll be left with a fiber that's semantically blocked forever.

I'm interested in seeing whether other backends encounter the same issue.

EDIT: Maybe another solution is to just swallow those dispatcher exceptions? Not sure how good an idea that is without familiarity with blaze.

Member:

Ah, yes, those are good points.

I think a graceful shutdown is possible and desirable in blaze, but it's going to require some surgery, and I suspect the result starts to look increasingly like Ember.

I don't know how blaze would swallow that exception without doing message parsing, but swallowing it from Http1ServerStage.cancel seems not the worst thing in the world.
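The "swallow it from `Http1ServerStage.cancel`" option discussed above can be sketched like this. All names are hypothetical, not the actual `Http1ServerStage` code; `submit` stands in for the real unsafe dispatcher submission, which throws `IllegalStateException` once the dispatcher is finalized:

```scala
// Hedged sketch: guard a dispatcher submission made from a cancel path,
// so that a late inbound command arriving after the Dispatcher has shut
// down cannot put the selector thread in a bad state.
object CancelGuard {
  def cancelQuietly(submit: () => Unit): Unit =
    try submit()
    catch {
      case _: IllegalStateException =>
        () // dispatcher already shut down: nothing left to cancel
    }

  def main(args: Array[String]): Unit = {
    cancelQuietly(() => println("cancel effect submitted"))
    // simulates a submission after the dispatcher's finalizer has run:
    cancelQuietly(() => throw new IllegalStateException("dispatcher already shutdown"))
    println("selector thread survives")
  }
}
```

The trade-off is that a genuine post-shutdown cancellation is silently dropped, which is arguably acceptable given the server is already tearing down.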

@RaasAhsan RaasAhsan marked this pull request as ready for review January 4, 2021 07:04
@rossabaker rossabaker merged commit 51ea94a into http4s:cats-effect-3 Jan 6, 2021