x/tools/gopls: add support for multiple concurrent clients #34111
Vim users often have multiple instances of Vim running at the same time, and starting/exiting is a natural part of the workflow. Currently there is a one-to-one mapping between Vim and gopls (at least when using govim as the plugin). It would be really useful to be able to share the same cache and avoid waiting for warmup each time.
In its default state, gopls speaks the LSP as a header-framed JSON stream on stdin/stdout. In this mode it can only serve a single client, as stdin/stdout cannot be multiplexed.
It also has the `-listen` and `-remote` flags.
The protocol spoken between a `-remote` and a `-listen` gopls is not defined, and never will be: we support it only as a means of intercommunication, not as an API surface. This is because, to achieve some of its goals, it will need significant extensions to the protocol, and may mutate some of the data on the way through. Part of the reason is that it should be feasible to run the server on a separate machine, where it may not have access to the same file system, or may know the files by different absolute paths. These features require a reliable way of translating paths, and also the remote file extension so the true server can ask the intermediary gopls for file contents. Some form of caching and cancellation within the intermediary may also be necessary.
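The path-translation concern mentioned above might look roughly like this in practice. This is a sketch with hypothetical names and an assumed pair of workspace roots, not the actual gopls code:

```go
package main

import (
	"fmt"
	"strings"
)

// translateURI rewrites a file URI from the client's view of the
// workspace to the server's, e.g. when the shared server runs on a
// machine that mounts the same project at a different absolute path.
func translateURI(uri, clientRoot, serverRoot string) string {
	const scheme = "file://"
	path, ok := strings.CutPrefix(uri, scheme)
	if !ok {
		return uri // not a file URI; pass through untouched
	}
	if rel, ok := strings.CutPrefix(path, clientRoot); ok {
		return scheme + serverRoot + rel
	}
	return uri
}

func main() {
	fmt.Println(translateURI(
		"file:///home/alice/project/main.go", // as the editor sees it
		"/home/alice/project",
		"/mnt/build/project", // as the remote server sees it
	))
	// prints file:///mnt/build/project/main.go
}
```

An intermediary would have to apply a mapping like this consistently in both directions, in every message that carries a URI, which is part of why the wire protocol cannot be a stable API surface.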
The current state is that we use this mode only for debugging. It only gets fixed when we need it to debug a problem, and even then it does not get properly fixed. It mostly works, but there are issues: for example, straight proxying of the shutdown message causes the shared server to quit when any one of its clients does.
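One way to avoid that shutdown-proxying bug would be for the intermediary to handle lifecycle messages itself rather than forwarding them blindly. A minimal sketch (the `shutdown`/`exit` method names come from the LSP; everything else here is hypothetical):

```go
package main

import "fmt"

// forwardFunc sends a request on to the shared server.
type forwardFunc func(method string)

// dispatch routes one client request: lifecycle messages are handled
// locally so that one client exiting cannot kill the shared server.
func dispatch(method string, forward forwardFunc) string {
	switch method {
	case "shutdown":
		// Acknowledge locally; tear down only this client's session.
		return "handled locally"
	case "exit":
		// Close this client's connection, not the shared process.
		return "handled locally"
	default:
		forward(method)
		return "forwarded"
	}
}

func main() {
	forward := func(method string) { fmt.Println("-> shared gopls:", method) }
	fmt.Println(dispatch("textDocument/hover", forward))
	fmt.Println(dispatch("shutdown", forward))
}
```

The real fix is more involved (per-client session state on the shared side), but the principle is the same: the intermediary must own the client lifecycle.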
There are also design issues still to settle:
- should we support some kind of "discovery" protocol?
- should we have a mode where we start a server if one is not running, but connect to it if it is?
- when all the clients go away, should the server shut down again?
- how do we manage the cache so we don't explode because of lots of clients over a very long time?
- how do we prevent one client from starving the others?
- how do we manage the config and environment of the server correctly?
Thanks for this detail.
For editors like Vim, Emacs, etc., where users end up starting multiple instances on the same machine, having a single shared instance of gopls would be a significant improvement.
Given that, do you have plans to fully support this mode?
Yes, but I have no time to do anything about it right now.
I am just being really careful to make sure people do not think I will get round to it any time soon, I don't want someone waiting on it and getting frustrated that I am not doing it!
This is also an area where contributions would be welcome, although it would be a very high touch contribution as I have a lot to say about how it is done :)
When debugging multiple instances of gopls simultaneously, it is useful to be able to inspect stateful debugging information for each server instance, such as the location of logfiles and server startup information. This CL adds an additional section to the /info HTTP handler that formats additional information related to the gopls instance handling the request. Updates golang/go#34111 Change-Id: I6cb8073800ce52b0645f1898461a19e1ac980d2b Reviewed-on: https://go-review.googlesource.com/c/tools/+/214803 Reviewed-by: Rebecca Stambler <email@example.com> Run-TryBot: Robert Findley <firstname.lastname@example.org> TryBot-Result: Gobot Gobot <email@example.com>
The passed-in Context is not used, and creates the illusion of a startup dependency problem: existing code is careful to pass in the context containing the correct Client instance. This allows passing in a source.Session, rather than a source.Cache, into lsp server constructors. Updates golang/go#34111 Change-Id: I081ad6fa800b846b63e04d7164577e3a32966704 Reviewed-on: https://go-review.googlesource.com/c/tools/+/215740 Run-TryBot: Robert Findley <firstname.lastname@example.org> TryBot-Result: Gobot Gobot <email@example.com> Reviewed-by: Rebecca Stambler <firstname.lastname@example.org> Reviewed-by: Ian Cottrell <email@example.com>
A new test is added to verify that contextual logs are reflected back to the LSP client. In the future when we are considering servers with multiple clients, this test will be used to verify that client log exporting is scoped to the specific client session. Updates golang/go#34111. Change-Id: I29044e5355e25b81a759d064929520345230ea82 Reviewed-on: https://go-review.googlesource.com/c/tools/+/215739 Reviewed-by: Rebecca Stambler <firstname.lastname@example.org>
Update: I've got a number of things working locally, but some of them required significant refactoring that needs to be evaluated before merging. I thought it would be helpful to comment here describing what I'm working on, in case anyone has opinions, questions, or advice.
Here's what I’ve identified as the primary goals for the shared gopls implementation:
Ease of use
To connect to a shared gopls instance, we'll use the existing `-remote` flag.
This will allow the following usage patterns:
I'm not married to the
For future discussion, I'll refer to the thin client gopls process (the one started with the `-remote` flag) as the forwarder gopls.
The Server Shutdown Problem
One major problem with starting a shared gopls process automatically is server shutdown: the shared gopls will be a child process of whichever forwarder gopls process started it, and will die when that forwarder process exits. For certain workflows this might be a big problem, for example for users who only run short-lived Vim processes. I can think of four potential solutions for this:
Of these, I don't think (1) or (4) are reasonable solutions in isolation. We can't expect every user to manage their own gopls daemon, and we can't expect every LSP plugin to gracefully handle the LSP server process crashing. Notably VS Code gives up if the language server crashes five times, so if a shared gopls instance is to be used by VS Code, we shouldn’t be intentionally crashing the forwarder.
(1) and (3) both result in the loss of the gopls cache, so after a restart users would have to again pay the initial price of warming the cache. On large projects this can be painful, and since users won't be aware of which forwarder owned the shared gopls, it will be confusing when this happens. However, I will note that so far while working in x/tools with a shared gopls instance, I hardly notice when it restarts.
(2) would be the ideal solution, as it results in the least amount of lost state, but I think it simply won't be possible in many execution environments. I could be wrong, though: I need to do more research on daemonization.
My current plan is to start by supporting (1) and (4) so that we can all begin experimenting with using a shared gopls instance, and then work on (2) or (3) (or both, or <a better idea>) toward the end of this project.
I'm lifting the LSP forwarding to the jsonrpc2 layer. What is currently TCP forwarding will instead be two jsonrpc2 connections talking to each other. This is done both so that we can instrument the forwarder gopls the same way we instrument the shared gopls, and so that we can insert a handshake across the jsonrpc2 stream connecting the forwarder to shared gopls, before starting to forward the LSP. In doing so, we allow the forwarder and shared gopls to exchange information that can be used in debugging. For example, the forwarder gopls can know the location of shared logs or the shared gopls debug port.
Doing this will require some refactoring of the jsonrpc2 API.
I'm going to do a bit of refactoring of
@leitzler pointed out on slack: it would be good to also support unix domain sockets as an IPC mechanism between forwarder and shared gopls instance (thanks for the suggestion!). I agree, but I think we will always need to support TCP as well. One use case that has been discussed is running gopls in docker, in which case exposing a TCP listener is simplest.
For now, I'm going to focus on TCP. I can add support for unix domain sockets later, or perhaps it would be a good opportunity for others to contribute.
@findleyr thanks for the awesome write-up.
Regarding "loss of the gopls cache"
In addition to the cost of re-warming the diagnostics/analysis cache, another pain point is the re-warming of the unimported cache. This can be particularly costly if you have a large module cache.
Regarding option 2
Do we really want to shut it down when there are no more forwarder gopls instances connected? Because if I open Vim, do some work, then quit, the shared gopls will be shut down, meaning that if I re-open Vim it will need to start from cold again, which I think defeats the point of what we're trying to solve here, unless I misunderstood? I'd say keep the shared gopls instance running forever and provide a means via a forwarder (a flag, like
The option space
Per our Slack chat, I completely agree it's worth getting something landed so we can start playing/experimenting. I'm minded to think that option 2 is really the only solution in the medium to long term: if a user is working in an environment where we can't daemonize, then I think it's probably fair to fall back to the current, i.e. non-remote, behaviour. I don't think we'd want to avoid doing something with daemonization simply because we can't support 100% of cases, because we will get a huge return for those cases where we can (he says, selfishly).
@myitcv thanks for the feedback. I think in your case the better solution might be to explicitly manage the daemon (option 4). Even if it were possible for gopls to automatically start a daemon (which won't be the case in most environments), it would be bad to silently leave behind a process that consumes so much memory.
It will always be possible to manage the daemon yourself, but it would be great if this isn't necessary for most users to get the benefit of shared state.