Containerized ALS terminates itself unexpectedly #598
Here is the log:
I've found that it's pretty hard to ensure the local paths of a project match the remote paths in the container's bind mount. So I end up with a bunch of timeouts, where the LSP server in the container is asked to find a file outside its file system. My startup looks like this:

```shell
docker run --rm -i \
  --mount type=bind,src=/home/me/src/ansible-role-gitlab,dst=/root/project \
  dc/lsp-docker \
  ansible-language-server --stdio
```

And I'm getting events like this:
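One way to sidestep that mismatch (my assumption, reusing the image name and path from the command above) is to make `src` and `dst` identical, so every path the editor sends also exists inside the container:

```shell
# Sketch: bind-mount the project at the SAME absolute path inside the
# container. The command is built as a string here just so it can be
# inspected; dc/lsp-docker is the image name used above.
proj=/home/me/src/ansible-role-gitlab
run_cmd="docker run --rm -i \
  --mount type=bind,src=${proj},dst=${proj} \
  --workdir ${proj} \
  dc/lsp-docker \
  ansible-language-server --stdio"
echo "${run_cmd}"
```

With this layout, a `file:///home/me/src/ansible-role-gitlab/...` URI from the client resolves to the same file on both sides of the mount.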
It's really not worth trying to connect to ALS running in a container unless vim has something like docker-tramp.el. TRAMP lets you transparently work with remote files as though they were local; docker-tramp lets you start a container and then have Emacs run processes in it as though everything were local. My notes are here, but they're a bit scattered: Emacs: Using lsp-docker from eglot.

I was getting that error at several points. Basically it means the container process is shutting down immediately, which can happen for a variety of reasons. For me, it happened when I rebuilt the container but the workdir didn't exist.
I have the same issue. Did you end up finding a solution, @har7an?
That's only true if you mount host paths under different directories inside your container. I generally launch containers with the git repo of the current file bind-mounted (keeping the PWD of the parent process), or, if there is no git repo, fall back to mounting either the PWD or all of HOME, depending on how I expect to use it. But whatever I do, I keep the paths intact (i.e. something like …).

In the meantime, I have found the error but forgot to report back (thanks for the bump @vRoussel). It appears that the Ansible LSP is built on top of a TypeScript LSP template offered by Microsoft. This template (and apparently all LSPs derived from it) has a very annoying property: it receives the PID of whoever started it as a startup parameter and, if it cannot find that PID, kills itself shortly afterwards (see neovim/neovim#14504 for a discussion in neovim). Since the process runs in a container, it has, by default, no access to the host's PID namespace. So it doesn't find the PID and stops right there.

There are (at least) two solutions to this problem:

- Break sandboxing: as the name implies, this shouldn't be your favored option, but it will get you going if all else fails. Run the LSP container with …
- Override the …
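A minimal sketch of the first option (assuming podman; docker accepts the same `--pid=host` flag, and the image name and path are only the examples from this thread):

```shell
# Sketch: run the container in the host's PID namespace so the LSP
# server can find the PID of the editor that launched it. This
# deliberately weakens container isolation, so prefer other fixes.
proj="${HOME}/src/ansible-role-gitlab"
pid_cmd="podman run --rm -i --pid=host \
  --mount type=bind,src=${proj},dst=${proj} \
  dc/lsp-docker \
  ansible-language-server --stdio"
echo "${pid_cmd}"
```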
Summary
TL;DR: When using ALS packaged into a container with neovim, the client briefly attaches to the current buffer, but the server terminates itself shortly afterwards with an exit code of 1.
Here is the Containerfile I use to build the container:
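(The Containerfile itself didn't survive the copy of this issue. Purely as an illustration, and not the author's original file, a minimal one might look like the sketch below; `@ansible/ansible-language-server` is the upstream npm package name.)

```dockerfile
# Hypothetical sketch, NOT the original Containerfile from the issue.
FROM docker.io/library/node:lts-alpine
# Install the language server globally from npm.
RUN npm install -g @ansible/ansible-language-server
ENTRYPOINT ["ansible-language-server"]
```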
nvim then attaches to the container with the `--stdio` parameter. I'm calling the container through the following shell wrapper:

I have tried various parameter combinations for podman, but none of them had much of an effect. I also tried building the container above with specific versions of ALS starting from 1.1.0, but all show the same behavior.

When removing `--rm` from the container wrapper, the container remains on my PC and I can see the log (which shows pretty much exactly what's inside the nvim logs below). I also see that the container quits with an exit code of 1, but I see no reason for that. The last message written to stdout is always the "rpc.receive" with a very long "data" array full of numbers.

I'm pretty sure the combination of running the LSP as a podman container from a toolbx container isn't the problem here, because I do the same thing for other LSPs (in particular Python, Lua and R) and they work just fine. I also ruled out my nvim configuration as the culprit by using a "minimal" config supplied by the nvim-lspconfig project, which showed the same behavior.
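The shell wrapper mentioned above wasn't captured in this copy either. A hypothetical reconstruction (assuming podman; the image tag `localhost/als:latest` is my invention, not from the issue) might be:

```shell
# Hypothetical reconstruction, NOT the author's original wrapper.
# Writes a small script that mounts the current project at the same
# path inside the container and hands stdio straight through to ALS.
cat > als-wrapper.sh <<'EOF'
#!/bin/sh
exec podman run --rm -i \
  --mount "type=bind,src=${PWD},dst=${PWD}" \
  --workdir "${PWD}" \
  localhost/als:latest \
  ansible-language-server --stdio
EOF
chmod +x als-wrapper.sh
```

An editor can then be pointed at `./als-wrapper.sh` as the LSP command; `exec` makes the container process replace the wrapper so stdio and signals pass through cleanly.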
Since I don't use VSCode, I attached the nvim trace log below instead. I hope this helps! I'll happily provide additional info if required.
Extension version
N/A
VS Code version
N/A
Ansible Version
OS / Environment
Relevant log output