
Conversation

@dvdplm (Contributor) commented May 25, 2019

Ref. #331

This PR implements the idea put forward by @tomusdrw: add an additional oneshot channel so the Server knows when the serving future has shut down, which lets us write an implementation of Server::wait() that can be interrupted by an external signal.

(Also fixes a few compiler warnings.)
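
For reference, here is a minimal sketch of the pattern, assuming the futures 0.1 API this crate uses; the struct and field names are illustrative only, not the actual jsonrpc-http-server types:

```rust
// Sketch only: a `done` oneshot lets `wait()` block until the serving side
// reports that shutdown has finished, while `close()` can interrupt it from
// another thread. Names are hypothetical.
use std::thread;
use futures::sync::oneshot;
use futures::Future;

struct Server {
    /// Tells the serving thread to stop.
    close: Option<oneshot::Sender<()>>,
    /// Completed by the serving thread once it has fully shut down.
    done: Option<oneshot::Receiver<()>>,
}

impl Server {
    fn start() -> Server {
        let (close_tx, close_rx) = oneshot::channel::<()>();
        let (done_tx, done_rx) = oneshot::channel::<()>();
        thread::spawn(move || {
            // Stand-in for the serving future: run until asked to close...
            let _ = close_rx.wait();
            // ...then report that shutdown is complete.
            let _ = done_tx.send(());
        });
        Server { close: Some(close_tx), done: Some(done_rx) }
    }

    /// Blocks until the serving side signals `done`.
    fn wait(mut self) {
        if let Some(done) = self.done.take() {
            let _ = done.wait();
        }
    }

    /// External shutdown signal; causes a blocked `wait()` to return.
    fn close(&mut self) {
        if let Some(close) = self.close.take() {
            let _ = close.send(());
        }
    }
}

fn main() {
    let mut server = Server::start();
    server.close();
    server.wait();
}
```

The point of the extra channel is that `wait()` blocks on the `done` receiver rather than joining the serving future directly, so a `close()` call from another thread resolves it.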

dvdplm requested a review from tomusdrw on May 25, 2019
dvdplm self-assigned this on May 27, 2019
dvdplm added the feat label on May 27, 2019
@dvdplm (Contributor, Author) commented May 27, 2019

The CI failure is an odd one (and not related to this PR afaict). I could reproduce it once locally with Rust stable but never with nightly. After a cargo clean the failure went away. Not sure how to debug it, tbh.

@tomusdrw (Contributor) left a comment

Looks good!

dvdplm and others added 2 commits May 27, 2019 14:32
Co-Authored-By: Tomasz Drwięga <tomusdrw@users.noreply.github.com>
@dvdplm (Contributor, Author) commented Jun 1, 2019

@tomusdrw good to go?

@tomusdrw (Contributor) left a comment

lgtm!

tomusdrw merged commit 1bbd403 into master on Jun 3, 2019
tomusdrw deleted the dp/feature/http-close-handle-from-carllin branch on June 3, 2019 at 09:13
```rust
executor.wait();
if let Some(receivers) = self.done.take() {
    for receiver in receivers {
        let _ = receiver.wait();
```
@carllin commented Aug 21, 2019

@dvdplm Is this wait sufficient to guarantee that all the threads owned by the server have been cleaned up by the time wait returns? From what I understand it guarantees that the serving threads have been cleaned up here: https://github.com/paritytech/jsonrpc/blob/master/http/src/lib.rs#L565, but the tokio threads owned by the RpcEventLoop aren't guaranteed to have exited until this other wait returns from within the Executor->RpcEventLoop here: https://github.com/paritytech/jsonrpc/blob/master/server-utils/src/reactor.rs#L75
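
To illustrate the concern with a self-contained sketch (plain threads and a futures 0.1 oneshot stand in for the real types; everything here is made up, not the actual jsonrpc code):

```rust
// Sketch: `wait()` resolving on the `done` channel only tells us the serving
// side has shut down; tasks already running on the shared executor can keep
// going after that.
use std::thread;
use std::time::Duration;
use futures::sync::oneshot;
use futures::Future;

fn main() {
    let (done_tx, done_rx) = oneshot::channel::<()>();

    // Stand-in for the RpcEventLoop / executor: a long-running task that
    // outlives the serving threads.
    let event_loop_task = thread::spawn(|| {
        thread::sleep(Duration::from_millis(100));
        println!("executor task still finishing after wait() returned");
    });

    // Stand-in for the serving threads: they stop accepting and signal `done`.
    thread::spawn(move || {
        let _ = done_tx.send(());
    });

    // This is what the `done` receivers cover: the serving side is finished...
    let _ = done_rx.wait();
    println!("wait() returned");

    // ...but nothing so far has waited for the executor's tasks; only this
    // explicit join does.
    let _ = event_loop_task.join();
}
```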

@tomusdrw (Contributor) replied

I don't think we can wait for the Executor to finish - it could have been spawned externally or shared with other tasks. I guess the issue might be that wait() only actually waits until the moment when we stop accepting new requests, not until we stop processing them.
Do you have a specific issue that this behaviour causes for you?

@carllin commented Aug 21, 2019

@tomusdrw, @dvdplm thanks for the reply! I am running into issues writing tests for start/restart behavior of the server. I have specific cleanup behavior that runs on things like filesystem objects in the Drop implementation of certain structs owned by Executor tasks. Because shutting down the server doesn't guarantee that the spawned Executor tasks have finished, on restart of the server there's potential for data races depending on when those Executor tasks finish and those objects get dropped.
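
A hypothetical repro of the shape of the problem (nothing below is real code; the struct, path, and timings are made up):

```rust
// Sketch of the restart race: a task left running on the executor owns a
// value whose Drop cleans up a filesystem path, and a restarted server can
// race with that cleanup.
use std::fs;
use std::path::PathBuf;
use std::thread;
use std::time::Duration;

struct ScratchDir {
    path: PathBuf,
}

impl Drop for ScratchDir {
    fn drop(&mut self) {
        // Cleanup that must not overlap with a freshly restarted server
        // re-creating the same path.
        let _ = fs::remove_dir_all(&self.path);
    }
}

fn main() {
    let path = PathBuf::from("restart-race-demo");
    fs::create_dir_all(&path).unwrap();

    // Stand-in for an RPC task still running on the executor after
    // Server::wait() has already returned.
    let old_task = {
        let path = path.clone();
        thread::spawn(move || {
            let _guard = ScratchDir { path };
            thread::sleep(Duration::from_millis(50)); // still "processing"
            // `_guard` drops here and deletes the directory, possibly after
            // the "restarted" server below has re-created it.
        })
    };

    // Stand-in for the restarted server: it sets the directory up again, but
    // the old task's Drop may remove it out from under the new instance.
    fs::create_dir_all(&path).unwrap();

    let _ = old_task.join();
}
```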

@carllin replied

@tomusdrw, @dvdplm should I file an issue for this, to ensure wait() waits until all requests have finished processing?

@tomusdrw (Contributor) replied

Please do, but I think it's a tricky one. We need to distinguish between an externally provided executor (where we don't really want to wait until it ends) and one that we've spawned ourselves.
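
Something along these lines, perhaps (hypothetical names, not the actual server-utils types):

```rust
// Sketch: only an event loop the server spawned itself can safely be waited
// on; an externally provided executor may be running unrelated work.
use std::thread;

/// Stand-in for an event loop the server owns.
struct OwnedEventLoop {
    handle: thread::JoinHandle<()>,
}

/// Stand-in for a handle to an executor supplied by the caller.
struct ExternalExecutorHandle;

enum ServerExecutor {
    Spawned(OwnedEventLoop),
    External(ExternalExecutorHandle),
}

impl ServerExecutor {
    fn close_and_wait(self) {
        match self {
            // We own this loop: joining it guarantees every task it ran
            // has finished.
            ServerExecutor::Spawned(event_loop) => {
                let _ = event_loop.handle.join();
            }
            // Shared with other work: we must not block on it, so in-flight
            // RPC tasks may still be running when this returns.
            ServerExecutor::External(_) => {}
        }
    }
}

fn main() {
    let owned = ServerExecutor::Spawned(OwnedEventLoop {
        handle: thread::spawn(|| { /* the event loop would run here */ }),
    });
    owned.close_and_wait();

    let shared = ServerExecutor::External(ExternalExecutorHandle);
    shared.close_and_wait();
}
```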
