Implement proper client request support #134

Merged
ebkalderon merged 23 commits into master from implement-client-request-support on Mar 4, 2020

Conversation

@ebkalderon (Owner) commented Mar 3, 2020

Added

  • Implement routing of client responses back to the Printer (now renamed Client).
  • Implement support for the following client requests:
    • window/showMessageRequest
    • workspace/configuration
    • workspace/workspaceFolders

Changed

  • Rename Printer to Client.
  • Update LanguageServer trait method type signatures to reference the renamed Client type.
  • Increase minimum supported Rust version to 1.40.0.
  • Rename Client::send_notification() to Client::send_custom_notification(). This is to avoid a name clash with a new internal method and to be more explicit during usage.
  • Convert workspace/applyEdit request to async fn which returns an ApplyWorkspaceEditResponse.
  • Improve compatibility notes for client/server requests added in a later LSP specification version.

Fixed

  • Clean up some incorrect or outdated doc comments.

Closes #13 and closes #128.

CC @icsaszar

Commit notes from this branch:

  • Once hooked up, this should allow us to route responses from clients back to the `Printer` for handling.
  • Since the `Printer` itself doesn't do much beyond checking whether the server is initialized, perhaps we can combine `Router` and `Printer` into a single struct.
  • This should more accurately reflect what this object does, since it no longer just prints notifications but also routes requests and responses.
  • This is to be more consistent with the body of `send_notification()`.
  • This change ensures that we only ever insert the request ID into `pending_requests` if it was actually successfully sent over the channel.
  • This is because `dashmap` indirectly makes use of standard library features that require a newer version of Rust. We also revert the changes from #111 to see if the `rustfmt` inconsistencies have been fixed in Rust 1.40.
  • Having `MessageSender` be a newtype is less important than `MessageStream` because it is not intended to be exposed to downstream users as a public API, and removing the manual `Sink` implementation cleans up quite a bit of unneeded boilerplate.
  • This should adhere to the upstream LSP specification, which defines the JSON-RPC error code `-32002` to mean "server not initialized" (see the sketch after this list).
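For reference, such an error built with jsonrpc_core types might look like the following. This is an illustrative sketch, not code from this PR; only the code value -32002 and its meaning come from the note above.

use jsonrpc_core::{Error, ErrorCode};

// Illustrative only: an error object carrying the LSP-defined code -32002,
// which the specification reserves for "server not initialized".
fn not_initialized_error() -> Error {
    Error {
        code: ErrorCode::ServerError(-32002),
        message: "Server not initialized".to_string(),
        data: None,
    }
}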
@ebkalderon ebkalderon self-assigned this Mar 3, 2020
@ebkalderon (Owner, Author)

Sorry for the massive pull request, @icsaszar! Feel free to review this at your leisure.

@ebkalderon (Owner, Author)

Local tests of nix-language-server ported to use this version of tower-lsp show that window/showMessageRequest and workspace/applyEdit now work! 🎉

However, please note that this branch hasn't yet been tested with multiple concurrent client requests, so it hasn't been proven that awaiting several client responses at once does not block the executor.

@ebkalderon (Owner, Author)

The request/response routing works roughly as follows (a code sketch follows the list):

  1. The appropriate method is called on the Client, e.g. show_message_request(), and the response is awaited inside the LanguageServer trait method body.
  2. Inside of show_message_request(), the next available request ID is atomically incremented.
  3. The JSON-RPC request is created and is serialized to text.
  4. The string is sent over an async channel to the MessageStream and is printed to stdout.
  5. There exists a lockless concurrent hashmap as a field of Client which maps currently pending request IDs to an Option<jsonrpc_core::Output>. An entry of type None is inserted, keyed by the request ID.
  6. The method enters an infinite loop which polls the hashmap and attempts to remove the entry value if it has changed to Some(_). If the entry value is not yet Some(_), we call tokio::task::yield_now() to yield control back to the async runtime.
  7. There exists a background job that is constantly pulling client responses sent from LspService::call() and inserting them into the pending request hashmap as Some(response). This job was spawned when the Client was first constructed.
  8. Once the appropriate request entry for show_message_request() has become Some(_), the entry is removed from the hashmap.
  9. If the response was Output::Success, we use serde_json::from_value() to transform the result field into a strongly typed response and return it.
  10. If the response was Output::Failure, we return Err(response.error).
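As a minimal sketch of that send path (the struct fields, method signature, and error handling here are assumptions for illustration, not the actual tower-lsp internals; only the flow of atomic ID allocation, the DashMap of pending requests, and the yield_now() polling loop mirrors the steps above):

// Hypothetical sketch of steps 2-10 above; names and signatures are assumptions.
use std::sync::atomic::{AtomicU64, Ordering};

use dashmap::DashMap;
use futures::channel::mpsc::Sender;
use futures::SinkExt;
use jsonrpc_core::{Error, Output};
use serde::de::DeserializeOwned;
use serde_json::Value;

struct Client {
    sender: Sender<String>,                         // outgoing text, printed to stdout elsewhere
    request_id: AtomicU64,                          // step 2: next available request ID
    pending_requests: DashMap<u64, Option<Output>>, // step 5: lockless map of in-flight requests
}

impl Client {
    async fn send_request<R: DeserializeOwned>(&self, method: &str, params: Value) -> Result<R, Error> {
        // Step 2: atomically claim the next request ID.
        let id = self.request_id.fetch_add(1, Ordering::SeqCst);

        // Step 3: build the JSON-RPC request and serialize it to text.
        let request = serde_json::json!({ "jsonrpc": "2.0", "id": id, "method": method, "params": params });

        // Step 4: send the string over the channel; only register the request if the send succeeded.
        let mut sender = self.sender.clone();
        if sender.send(request.to_string()).await.is_err() {
            return Err(Error::internal_error());
        }

        // Step 5: insert an empty entry keyed by the request ID.
        self.pending_requests.insert(id, None);

        // Step 6: poll the map until the background job (step 7) fills in the response,
        // yielding back to the runtime between attempts. Step 8: remove the filled entry.
        let output = loop {
            if let Some((_, Some(output))) = self.pending_requests.remove_if(&id, |_, v| v.is_some()) {
                break output;
            }
            tokio::task::yield_now().await;
        };

        // Steps 9-10: return the typed result on success, or the error on failure.
        match output {
            Output::Success(s) => serde_json::from_value(s.result).map_err(|_| Error::parse_error()),
            Output::Failure(f) => Err(f.error),
        }
    }
}

The oneshot-based design discussed further down removes the need for the polling loop in step 6.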

@ebkalderon (Owner, Author) commented Mar 3, 2020

The lockless concurrent hashmap was chosen over an RwLock<HashMap> for the sake of better performance. dashmap, the implementation used in this pull request, demonstrates excellent insert and get performance relative to std::sync::RwLock, but it isn't known how well a tokio::sync::RwLock would stack up against it.

We could set up a benchmark for both implementations and count the messages per second with a certain number of concurrent client requests all firing at once, if we were interested in finding out. But note that this might require a little extra tweaking since std::collections::HashMap does not provide an equivalent method to DashMap::remove_if().
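For comparison, a rough equivalent of DashMap::remove_if() on top of a tokio::sync::RwLock<HashMap> might look like this (a hypothetical helper, not part of this PR):

use std::collections::HashMap;
use std::hash::Hash;

use tokio::sync::RwLock;

// Hypothetical stand-in for DashMap::remove_if() using a locking map.
async fn remove_if<K, V, F>(map: &RwLock<HashMap<K, V>>, key: &K, predicate: F) -> Option<V>
where
    K: Hash + Eq,
    F: FnOnce(&K, &V) -> bool,
{
    // Hold the write lock across the whole check-then-remove sequence so no
    // other task can touch the entry in between.
    let mut guard = map.write().await;
    if guard.get(key).map_or(false, |value| predicate(key, value)) {
        guard.remove(key)
    } else {
        None
    }
}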

@ebkalderon ebkalderon force-pushed the implement-client-request-support branch from d6f5df7 to fc5ac6a on March 3, 2020 15:10
@ebkalderon ebkalderon changed the title Implement complete client request support Implement proper client request support Mar 3, 2020
@icsaszar (Contributor) commented Mar 3, 2020

Very nice and clean implementation! The PR might seem big and scary, but as presented by the diff viewer in CLion, everything was easy to follow.

The only thing that comes to my mind is using a tokio::sync::oneshot channel instead of polling the map.

The changes in the client would look something like this:

async fn send_request<R>(&self, params: R::Params) -> Result<R::Result> {
    // ...

    let (tx, rx) = tokio::sync::oneshot::channel();
    self.pending_requests.insert(id, tx);
    let response = rx.await.unwrap();

    // ...
}
pub(super) fn new(
    sender: Sender<String>,
    mut receiver: Receiver<Output>,
    initialized: Arc<AtomicBool>,
) -> Self {
    let pending_requests = Arc::new(DashMap::default());

    let pending = pending_requests.clone();
    tokio::spawn(async move {
        while let Some(response) = receiver.next().await {
            match response.id().clone() {
                Id::Num(id) => {
                    // DashMap::remove() returns the (key, value) pair if the entry exists.
                    if let Some((_, tx)) = pending.remove(&id) {
                        let _ = tx.send(response);
                    } else {
                        error!("received response from client with no matching request");
                    }
                }
                _ => error!("received response from client with non-numeric ID"),
            }
        }
    });

    // ...
}

But since there's currently no objective way to compare the two options (performance vs. memory usage), my suggestion would be to go forward with the current version, and maybe open an issue for this suggestion alongside another issue to investigate testing and benchmarking.

@ebkalderon (Owner, Author)

Thanks for the review! I like your oneshot::channel() based design a lot. It might perform even better than the current implementation, since I assume oneshot::channel() is making more efficient use of the Tokio executor internally than my manual calls to tokio::task::yield_now() inside of a loop. The code is also quite a bit easier to read as a result. I think I'll investigate the performance implications of this in a subsequent PR.

I don't really know how efficient tokio::sync::oneshot::channel() is when compared with futures::channel::oneshot::channel(), and I think that might need to be the subject of another set of benchmarks if we're curious to find out.

@ebkalderon ebkalderon merged commit 2843da6 into master Mar 4, 2020
@ebkalderon ebkalderon deleted the implement-client-request-support branch March 4, 2020 04:05
ebkalderon added a commit that referenced this pull request Mar 4, 2020
This changes the `pending_requests` map to use a oneshot async channel
to simplify the code and better capitalize on the async runtime.

The previous implementation used a `loop` to poll the `pending_requests`
hashmap and explicitly yield execution back to the executor by calling
`tokio::task::yield_now()` if the client's response hasn't been received
yet by the server.

This should hopefully be a bit more efficient, given that
`futures::channel::oneshot::channel()` presumably makes better use of the
underlying runtime's task scheduler. This assumption hasn't been tested
with dedicated benchmarks, but at least the code is much easier to
reason about.

See original suggestion here:
#134 (comment)

We also add a private `RequestMap` type alias to help `rustc` infer the
type of the `pending_requests` map inside the `tokio::spawn()` async
block.
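A rough sketch of the shape this commit message describes, with hypothetical function names (only the RequestMap alias and the oneshot-based flow come from the text above):

use std::sync::Arc;

use dashmap::DashMap;
use futures::channel::oneshot;
use jsonrpc_core::Output;

// Private alias described above: it helps rustc infer the map's type inside
// the spawned async block.
type RequestMap = DashMap<u64, oneshot::Sender<Output>>;

// Inside the client's send path: register a oneshot sender keyed by the
// request ID, then await the receiver instead of polling the map in a loop.
async fn await_response(pending: &Arc<RequestMap>, id: u64) -> Option<Output> {
    let (tx, rx) = oneshot::channel();
    pending.insert(id, tx);
    rx.await.ok()
}

Compared to the polling loop, the awaiting task is simply woken by the runtime once the background job sends the response.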
Successfully merging this pull request may close these issues:

  • Refactor Printer
  • Add support for client requests