port jetty client #4165
Conversation
Thanks!
I guess most of these handlers need to be synchronous for ordering? It would be bad to have `onContent` running as a future and maybe enqueueing more content out of order? If that's true:
- I'm not sure we still need a `loggingAsyncCallback`. They'll throw exceptions synchronously anyway, so we'll probably get two in the logs?
- Are any of these really blocking calls? The `unsafeRunSync` makes more sense to me than the `F.blocking`.
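
For illustration, here is a minimal sketch (not the http4s code; the handler, queue, and chunk types are made up) of the ordering concern, assuming cats-effect 3's `Dispatcher` and `Queue`:

```scala
import cats.effect.IO
import cats.effect.std.{Dispatcher, Queue}

object OrderingSketch {
  // Hypothetical per-chunk handler, invoked by Jetty on its own thread.

  // Fire-and-forget: returns before the offer runs, so two rapid calls
  // give no guarantee about which chunk lands in the queue first.
  def onContentForget(d: Dispatcher[IO], q: Queue[IO, Int])(chunk: Int): Unit =
    d.unsafeRunAndForget(q.offer(chunk))

  // Synchronous: the Jetty thread waits for the offer, so the next call
  // only happens once this chunk is enqueued.
  def onContentSync(d: Dispatcher[IO], q: Queue[IO, Int])(chunk: Int): Unit =
    d.unsafeRunSync(q.offer(chunk))
}
```

With `unsafeRunAndForget` the Jetty thread returns before the offer completes, so consecutive `onContent` calls can race; with `unsafeRunSync` each chunk is enqueued before Jetty can deliver the next one.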
Ahh no, thanks! I just misunderstood the old code and was on the wrong track due to how this read:
-  private def enqueue(item: Item)(cb: Either[Throwable, Unit] => IO[Unit]): Unit =
-    queue.enqueue1(item).runAsync(cb).unsafeRunSync()
+  private def enqueue(item: Item)(cb: Either[Throwable, Unit] => F[Unit]): Unit =
+    D.unsafeRunAndForget(queue.offer(item.some).attempt.flatMap(cb))
Sorry, I don't think I explained myself well. The `F.blocking` doesn't seem necessary, but I think they still need to be synchronous.

For example, this is going to start a `Future`. There's no backpressure on Jetty, so it might invoke `onContent` again with the next chunk, which will submit another `Future`. There won't be any guarantee that the first `offer` happens before the second, right? We don't want long-running actions, but we need to wait for them to complete before Jetty sends us the next event... I think?
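
One way to get actual backpressure, sketched here on the assumption that Jetty 9.4's `Response.AsyncContentListener` (the three-argument `onContent` with a `Callback`) is available; the names are illustrative, not what this PR does:

```scala
import java.nio.ByteBuffer
import cats.effect.IO
import cats.effect.std.Dispatcher
import org.eclipse.jetty.client.api.Response
import org.eclipse.jetty.util.Callback

// handleChunk must consume or copy the buffer before it completes, since the
// buffer is only valid until Jetty's callback is succeeded.
final class BackpressuredListener(d: Dispatcher[IO], handleChunk: ByteBuffer => IO[Unit])
    extends Response.AsyncContentListener {

  override def onContent(response: Response, content: ByteBuffer, callback: Callback): Unit =
    // Succeed Jetty's callback only after the chunk has been handled, so Jetty
    // does not deliver the next chunk (and the Jetty thread is never blocked).
    d.unsafeRunAndForget(
      handleChunk(content).attempt.flatMap {
        case Right(_) => IO(callback.succeeded())
        case Left(t)  => IO(callback.failed(t))
      }
    )
}
```

Jetty holds off the next chunk until the callback is succeeded, so the offer is guaranteed to complete first without blocking the Jetty thread.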
> I think they still need to be synchronous.
That would be different from the original code, right? The logic does make sense to me, but I just want to make sure that the change in behaviour is intended.

Additionally, the `onHeaders` callback wouldn't need any delaying at all if we immediately run the effect, so it simplifies to just `cb(r)`.
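
A hypothetical sketch of that simplification (the signature is a stand-in, not the actual http4s `ResponseListener` code):

```scala
import cats.effect.IO
import cats.effect.std.Dispatcher

object OnHeadersSketch {
  // If the effect is run on the spot, the handler body is just cb(r):
  // there is nothing left to defer.
  def onHeadersHandler(d: Dispatcher[IO], cb: Either[Throwable, Unit] => IO[Unit])(
      r: Either[Throwable, Unit]): Unit =
    d.unsafeRunSync(cb(r))
}
```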
I think you're right about prior behavior. I have a gut feeling this is a race condition that we've been getting away with, but I have no documentation or proof.
Yeah, we're being naughty here, and I think it's also a bug in 0.21.
/**
* Callback method invoked when the response content has been received, parsed and there is demand.
* This method may be invoked multiple times, and the {@code content} buffer
* must be consumed (or copied) before returning from this method.
*
* @param response the response containing the response line data and the headers
* @param content the content bytes received
*/
void onContent(Response response, ByteBuffer content);
I think this is wrong for the reasons above, but it's as good as what we have. I'd rather get it compiling, and fix the discovered bug as part of #4180.
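
For reference, a minimal sketch of the quoted contract, assuming the synchronous `Response.ContentListener` above and fs2 3.x; the listener and the `enqueue` function are illustrative only:

```scala
import java.nio.ByteBuffer
import fs2.Chunk
import org.eclipse.jetty.client.api.Response

final class CopyingContentListener(enqueue: Chunk[Byte] => Unit)
    extends Response.ContentListener {

  override def onContent(response: Response, content: ByteBuffer): Unit = {
    // Consume the buffer before returning, as the javadoc requires...
    val bytes = new Array[Byte](content.remaining())
    content.get(bytes)
    // ...so that the copy can safely outlive this callback.
    enqueue(Chunk.array(bytes))
  }
}
```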
fixes #4089