
Run handlers in a different thread than io #23

Open
wants to merge 7 commits into master

Conversation

KillTheMule

Hey!

I've run into a race condition with neovim-lib: if we receive a Request, the handler for that request blocks the IO thread (I'm calling it the IO thread; it's the one spawned by dispatch_thread). That's a problem for 2 related reasons:

  1. The handler itself might run requests to neovim, but the responses can't be handled by the IO thread, since it's blocked by the handler.
  2. If the main thread is busy doing something else when the request arrives, it might itself request some data from neovim, but it will never receive the response, since the IO thread is blocked by the handler.

Both times, the result is a timeout. I ran into 2. because the request during a test would arrive faster than my main thread could reach its event loop; when it then requested the current buffer to set things up, it would never get an answer, because the earlier request was already blocking the IO thread.

So here's a rough shot at fixing this. The IO thread sends Requests/Notifications to another thread, which handles them, while the IO thread stays free to receive more events, in particular responses. The new thread needs to take ownership of the closed-over variables (notably, the handler itself), which means we need to change the signature of the handler functions to take String instead of &str. That's not a problem per se (we weren't doing anything with the method name anyway other than passing it to the handler), but it is a breaking change.
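The pattern can be sketched with std-only types, a channel, and a second thread (names like `Message` and `spawn_handler` are mine for illustration; the real code would carry rmpv values and the user's handler, not strings):

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical stand-in for what the IO thread would forward.
enum Message {
    Request { method: String },
    Notification { method: String },
}

// The handler thread owns the receiving end (and, in the real PR, the
// handler itself) -- which is why `method` must be an owned String
// rather than a &str borrowed by the IO thread.
fn spawn_handler(rx: mpsc::Receiver<Message>) -> thread::JoinHandle<Vec<String>> {
    thread::spawn(move || {
        let mut handled = Vec::new();
        for msg in rx {
            match msg {
                Message::Request { method } => handled.push(format!("request: {}", method)),
                Message::Notification { method } => handled.push(format!("notification: {}", method)),
            }
        }
        handled
    })
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let guard = spawn_handler(rx);

    // The IO loop just forwards each event and immediately goes back to
    // reading the socket, so responses can still be dispatched.
    tx.send(Message::Request { method: "nvim_get_current_buf".into() }).unwrap();
    tx.send(Message::Notification { method: "CursorMovedI".into() }).unwrap();
    drop(tx); // closing the channel lets the handler thread exit

    let handled = guard.join().unwrap();
    assert_eq!(handled.len(), 2);
}
```

Keeping the `JoinHandle` around (here `guard`) mirrors the PR's idea of saving the new thread's handle in the dispatch_guard.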

Also, I've opted to save the new thread's handle in the dispatch_guard. I've not fully polished this yet; let me know what you think and I'll clean up the unwraps.

CC @bfredl @vhakulinen

@bfredl

bfredl commented Feb 3, 2019

Can this handle multiple levels of nesting, like a stack of 5 calls in interleaved directions?

The pynvim client runs each handler in a lightweight thread (greenlet) to support these kinds of situations. It might be worthwhile to use one rust async task per handler once those become available, though that would require another set of API wrappers, I suppose.

@KillTheMule
Author

Can this handle multiple levels of nesting, like a stack of 5 calls in interleaved directions?

Not fully. Responses are handled at all times. Requests and notifications are handled sequentially, with the handler thread blocking while a request/notification is handled. So if you have a request/notification whose handler can only finish once another request/notification has been handled, you've got yourself a deadlock again (not sure if the terminology is right). Sounds like something one wouldn't do anyway, maybe?

Could you point out the corresponding code in the python client? Seems like we really want that once async/await becomes available in Rust...

@daa84
Owner

daa84 commented Feb 8, 2019

Hello!
Thanks for the PR! Can this be solved in the ChannelHandler, or in some new struct? I'd prefer to keep this code unchanged and just add some functions around it.
Do I understand right that this solves the same problem as is currently solved by

sender: mpsc::Sender<(String, Vec<Value>)>,

?

@KillTheMule
Author

KillTheMule commented Feb 8, 2019

Do I understand right that this solves the same problem as is currently solved by

Somewhat: the ChannelHandler solves this problem for notifications, but not for requests, and it's much more pressing for requests. But I think it's possible to solve this in the ChannelHandler; I'll try to come up with something!

I spoke too soon. Whatever we do in any handler, the way an RpcRequest is handled in dispatch_thread means that any other IO is impossible until handle_request has returned. There are probably different ways to structure this, but solving the problem without changing the handling in dispatch_thread is impossible.

Two thoughts (e: Still sort of valid, but ignore them until we've sorted out the above problem):

  1. It was extremely easy for me to trigger this once I'd implemented an RpcRequest. My event handling isn't complicated by any means, so I suppose others could fall into this trap quickly as well. If we can solve this in the ChannelHandler, it will be pretty much invisible to everybody else, I guess. Maybe Handler/RequestHandler should not be public, so others need to go through the provided handlers that don't exhibit this problem? I've not thought this through, but I kinda feel the need to keep others from falling into the same trap :)

  2. The handler functions should take name as a String rather than a &str, for semantics and to avoid unnecessary clones. It doesn't need to be part of this PR, so I'll try to keep it out, but would you be open to changing this?

handler.handle_notify(&method, params);
}
Ok(model::RpcMessage::RpcResponse { .. }) => {
error!("Handler threat does not handle responses!");
Contributor


Typo? "Handler thread..."

Author


Yup :D

@boxofrox
Contributor

boxofrox commented Feb 8, 2019

@KillTheMule do you have a minimal viable example demonstrating the deadlock you're trying to avoid?

@KillTheMule
Author

No, I don't; my own plugin shows this, but it's not really minimal. It's not hard to see, though: just imagine a RequestHandler that itself makes a request, i.e. calls nvim_get_current_buf and then waits for the response. That will lock up.

Just the same, imagine a plugin that calls nvim_get_current_buf during startup, but a request is sent to it before that. The request gets into the queue before the response, and you've got yourself the same thing.
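A toy model of the lock-up, using only std channels (no neovim-lib): the handler runs *on* the only thread that reads incoming messages, so when it waits for a response, nothing can ever deliver one, and a timeout is the best you can get. The names here are mine, for illustration.

```rust
use std::sync::mpsc;
use std::time::Duration;

// `rx` plays the role of the response channel that the (now blocked)
// IO thread would normally feed.
fn handler_waiting_for_response(
    rx: &mpsc::Receiver<String>,
) -> Result<String, mpsc::RecvTimeoutError> {
    // e.g. the handler called nvim_get_current_buf and now blocks:
    rx.recv_timeout(Duration::from_millis(50))
}

fn main() {
    let (_tx, rx) = mpsc::channel();
    // Nothing ever sends on `_tx` -- the sender would be the IO thread,
    // which is busy running this very handler. Result: a timeout.
    let res = handler_waiting_for_response(&rx);
    assert!(res.is_err());
}
```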

@boxofrox
Contributor

boxofrox commented Feb 8, 2019

So vim sends a request to the rust plugin, and the rust plugin tries to call nvim_get_current_buf before sending back a response?

I think I can tinker with that to understand the situation better. Thanks.

@KillTheMule
Author

KillTheMule commented Feb 8, 2019

You could go scorched earth: take, e.g., your handler for CursorMovedI and put in a call to nvim_get_current_buf. That should lock up.

(e) Ah no, that doesn't work, because it's "just" a notification, so the IO thread does not wait for a result.

@KillTheMule
Author

Is there a way forward with this? I'm using this in a vendored copy of neovim-lib for my plugin without issues (not a huge usage scenario, granted). I'd put in more work if needed, but I'd need a bit of guidance on if/how it should be done.

Or is there something unclear? Should I put work into producing a MWE?

@daa84
Owner

daa84 commented Feb 21, 2019

Sorry for the long response time.
I get the idea of what the problem is.
Not sure, maybe this is not important, but I prefer to let the user avoid creating an additional thread if it's not needed. So a solution could be to pass the writer (or some structure containing the writer) to the handler so the user can write directly to it, and then provide a default implementation with a thread.
Not sure it needs to be done this way...
Also, do notifications also pass data to the request thread? Maybe using two handlers is better, to not pass the same data from one thread to another.

@KillTheMule
Author

No worries :)

Not sure, maybe this is not important, but I prefer to let the user avoid creating an additional thread if it's not needed.

I understand. OTOH, we're already using a different thread for the IO, so maybe it's not that bad.

So a solution could be to pass the writer (or some structure containing the writer) to the handler so the user can write directly to it, and then provide a default implementation with a thread.

So like, pass a buffer to the handler, have the handler pass that buffer to the main thread and return immediately, then the request gets handled without blocking the IO thread... but how does the data written into the buffer get returned to neovim? I don't fully understand how to do this, but it gave me an idea: how about we pass a closure to the handler, which the handler then invokes on the Value that needs to be returned? The closure basically only needs to consist of

let writer = &mut *writer.lock().unwrap();
model::encode(writer, response).expect("Error sending RPC response");

Seems doable, I'll try to get that to work over the weekend!
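The closure idea above could look roughly like this std-only sketch (`Value` and `handle_request` are stand-ins I made up; the real closure would lock the shared writer and call model::encode, as in the two lines quoted above):

```rust
use std::sync::mpsc;

// Stand-in for rmpv::Value in the real library.
type Value = String;

// Hypothetical handler signature: instead of returning a result (and
// forcing the IO thread to wait for it), the handler receives a
// `respond` closure and may call it whenever the answer is ready --
// even from another thread.
fn handle_request<F>(method: &str, respond: F)
where
    F: FnOnce(Result<Value, Value>) + Send + 'static,
{
    // Here the handler replies immediately; it could just as well move
    // `respond` into a worker thread and answer later.
    respond(Ok(format!("handled {}", method)));
}

fn main() {
    // For demonstration the "writer" is just a channel; the dispatcher
    // would instead build the closure around the locked writer.
    let (tx, rx) = mpsc::channel();
    handle_request("nvim_get_current_buf", move |res| {
        tx.send(res).unwrap();
    });
    assert_eq!(rx.recv().unwrap(), Ok("handled nvim_get_current_buf".to_string()));
}
```

The `FnOnce(Result<Value, Value>)` shape also gives the handler a natural place to report an error back to neovim instead of a value.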

Also, do notifications also pass data to the request thread?

I also put the notification handler on the other thread, on the off chance someone writes a notification handler that tries to get data from nvim before returning control. Might not even be possible...

Maybe using two handlers is better, to not pass the same data from one thread to another.

I don't understand this, sorry.

Overall though, this might be doable with async/await once that lands on stable, but I'm not sure right now how, or if it's a good idea.

@vhakulinen
Contributor

vhakulinen commented Feb 21, 2019

What if, instead of requiring the API user to dispatch the IO thread through the start_event_loop* functions, we allow the user to get handles to the reader and the writer (or expose functions read_message and write_message)? Then it's up to the user to handle the IO properly. This could also avoid any extra threads in cases where they might not be needed.

@KillTheMule
Author

KillTheMule commented Feb 23, 2019

Ok, here's a different angle: have the handler take a closure argument into which it's supposed to stick the return value or a possible error. I'm not sure it's optimal (using std::error::Error is bad for interaction with failure), but it avoids spawning another thread while solving the problem at hand.

@vhakulinen I don't think exposing things this low-level is appropriate; it would put quite a bit more of a burden on plugin authors. Plus, I'm not really sure why everyone wants to avoid threads so much. You don't want to pile them up too much, alright, but it's one of the nice features of Rust that one can use threads easily, so let's take advantage of that :)
