A "marketplace" of language servers #76

Closed
mickaelistria opened this issue Sep 29, 2016 · 41 comments
@mickaelistria

As all the clients are supposed to work with any language server, it would be interesting if they could all share a common "marketplace" where they could search for and fetch implementations of the language server protocol. Example: I'm in Eclipse IDE, opening a .cs file; the language server "marketplace" would say "Do you want to use OmniSharp?", I press "Yes, thanks", and then OmniSharp is downloaded, installed and configured in my client so the .cs file editor gets features coming from this language server.

@angelozerr
Contributor

I think the JSON server, CSS server, etc. should be installed as npm modules. After that you could just do:

npm install json-language

TypeScript 2.0 provides this kind of feature to install TypeScript definitions (@types). See https://blogs.msdn.microsoft.com/typescript/2016/06/15/the-future-of-declaration-files/

@mickaelistria
Author

What about servers that aren't npm modules, such as https://github.com/felixfbecker/php-language-server ?

@angelozerr
Contributor

IMHO, it should provide an install via npm.

@masaeedu
Contributor

@angelozerr I don't think this is a good idea. There will be a great deal of grief if you try to contort every language's build and packaging ecosystems into the node module format. The fact that a language server implementation does not need to be implemented in node (or forced into its packaging format) is one of the strengths of a common language server protocol.

I think the direction Che has gone with this is a good approach. Containers can be used as the packaging format for language servers, and e.g. the Docker hub could be a "marketplace" of language server implementations. You can treat any language server as a black box exposed over a socket. Since all major development platforms (Win 10/OSX/Linux) support Docker already, cross-platform compatibility should not be a major issue.
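As a rough illustration of the "black box exposed over a socket" idea, a minimal sketch of the client side could look like the following (the image name and port are hypothetical placeholders, not a published image, and a real client would handle startup timing and errors):

```typescript
// Minimal sketch: treat a containerized language server as a black box
// reachable over TCP. The image name "example/php-language-server" and
// port 2088 are hypothetical placeholders, not a published image.
import { spawn } from "child_process";
import * as net from "net";

function startContainer(image: string, port: number) {
  // Publish the server's port on localhost; the container is the only thing
  // the client needs to know how to start.
  return spawn("docker", ["run", "--rm", "-p", `${port}:${port}`, image], {
    stdio: "inherit",
  });
}

function connectToServer(port: number): Promise<net.Socket> {
  return new Promise((resolve, reject) => {
    const socket = net.connect(port, "127.0.0.1", () => resolve(socket));
    socket.on("error", reject);
  });
}

async function main(): Promise<void> {
  startContainer("example/php-language-server", 2088);
  // A real client would retry until the container is actually listening.
  const socket = await connectToServer(2088);
  // From here on, the client speaks JSON-RPC / LSP over the socket,
  // regardless of what language the server itself is written in.
  socket.end();
}

main();
```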

Another alternative is to distribute language server implementations alongside scripts that can stand up a JSON-RPC service over each platform's equivalent of named pipes. This would eliminate some of the overhead of containers, but forces a lot of work onto the developers of language server implementations.

@bruno-medeiros

@mickaelistria This is a good idea, but I would take it a step further: instead of having a "marketplace" where an LSP client would fetch different servers itself, why not have a "meta" LS that would be able to transparently serve requests for multiple languages? This has the obvious effect of making client implementation easier: instead of each client having to learn how to deal with the marketplace, only the meta LS has to.

Furthermore, if we want language servers to be able to handle cross-language functionality, the correct way to do this would be to abstract all the language interactions under a single LSP server.

@dbaeumer
Member

@egamma @seanmcbreen fyi.

@felixfbecker

felixfbecker commented Oct 17, 2016

@angelozerr That doesn't make much sense. The package manager / registry to be used is dependent on the language. The package manager's main job is to manage dependencies. A language server in TypeScript / JavaScript has a lot of dependencies through NPM, so it makes sense to install it through NPM. But a language server written in PHP has a lot of dependencies through Composer/Packagist, so it only makes sense to publish it as a Composer package, which I did. The VS Code extension is written in TypeScript, but installs the language server through Composer, which works totally fine.

I could publish this as an NPM package, but then I would have to publish it with dependencies included - and that would prevent soft versioning, which is a dealbreaker. It would be a nightmare to maintain because I would have to republish for every small dependency update.

The same goes for an LS in Java, which will have dependencies through Maven, or a C# LS that will have dependencies through NuGet.

The use case outlined in the original post doesn't work because it's not only about installation - different LSs also have different ways to be run. A PHP LS needs to be run with PHP, a Java LS with Java, a Node LS with Node. This is all the job of the editor extension that integrates the LS with the editor. VS Code already supports this concept of "recommended extensions".

So I'm clearly -1 on a uniform way to install.
But of course it would be nice to have a searchable, dedicated list of LS implementations instead of a wiki page, one that links to the package on the respective registry.

@bruno-medeiros

@felixfbecker you're confusing runtime dependencies with compile/dev-time dependencies. I don't think @angelozerr's suggestion was to mirror the whole compile/dev-time dependencies of each LS in NPM. (If that was the suggestion, then yeah, it's bad, at least for LSs that are not already NPM-based.)

Instead NPM could be used to provide "binary" or distribution packages of each LS. So a Java-based LS would ship a jar with all dependencies in one single NPM package. An LS based in C/C++, Go, Rust or any compiled language would ship an NPM package with executables, etc.

I'm not familiar with NPM so I don't know how suitable it would be for this task, but I understand the problem at hand: we'd want a package manager that is not targeted at a specific language (i.e., development time), and yet is cross-platform. Otherwise a "meta-LS" (or an LSP client trying to work with multiple LSs) would have to know how to install and update each specific LS, which would not be extensible.

@TylerJewell

Codenvy and Eclipse Che have started on an open source registry for language servers. We have the first elements of that embedded within Eclipse Che now. The basic elements include:

  1. Registration and listing of language servers
  2. Advertisement of language servers accessible over HTTP & through containers using volumes-from for mounts
  3. Independent packaging of language servers - the only dependency is a launch script; each language server deploys and runs itself using its own internal dependencies
  4. Language server updates

We have a long way to go - but have proposed an open source registry implementation to be hosted with the Eclipse Foundation.

@felixfbecker

@bruno-medeiros a JavaScript or PHP LS does not have "binaries". You run the PHP file with php, it is an interpreted language, just like JavaScript. It depends on other PHP scripts at runtime.
I think having a dedicated registry (maybe just a GitHub repo with some JSON files) would be better suited.

@mickaelistria
Author

Linux people will prefer rpm/deb, OSX people will want things via brew, Windows people will prefer whatever they're used to, Android people will want .apk...
IMHO, the registry needs to allow multiple ways to deliver a language server and not be tied to a specific one. Clients should be able to decide whatever format they prefer, and the marketplace would expose the formats each language server is delivered in.
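To make that concrete, a format-agnostic registry entry could look roughly like the sketch below (every field name is invented for illustration; this is not an existing schema):

```typescript
// Hypothetical shape of a marketplace entry; all names are invented for
// illustration and do not describe any existing registry.
interface LanguageServerEntry {
  name: string;                          // e.g. "php-language-server"
  languages: string[];                   // language ids served, e.g. ["php"]
  transports: ("stdio" | "socket" | "pipe")[];
  // A server can be delivered in several formats; a client picks the one it
  // knows how to handle and ignores the rest.
  distributions: {
    kind: "npm" | "composer" | "maven" | "docker" | "deb" | "rpm" | "brew" | "archive";
    reference: string;                   // package name, image name, or download URL
  }[];
}
```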

@felixfbecker

felixfbecker commented Oct 17, 2016

I am also for having wrapper extensions that take care of the LS dependencies and launching it. For example, vscode-php-intellisense is written in TypeScript, has dependencies from NPM and the PHP LS as a Composer dependency. It takes care of spawning the LS with PHP, checks if you have the right version of PHP installed, displays nice error messages, and uses a socket on Windows instead of STDIO. It is published to the VS marketplace with included dependencies.

It is the task of the editor to "recommend" these extensions. We can still have a human-searchable registry of LS, but we basically have that here: https://github.com/Microsoft/language-server-protocol/wiki/Protocol-Implementations. You click on the link and follow the installation instructions.
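A condensed sketch of what such a wrapper extension's activation could look like with the vscode-languageclient API (the server path is a placeholder, and the Windows socket handling and error messages mentioned above are omitted):

```typescript
// Sketch of a wrapper extension; the server path is a placeholder and
// Windows-specific socket handling is left out for brevity.
import * as path from "path";
import { ExtensionContext } from "vscode";
import { LanguageClient, LanguageClientOptions, ServerOptions } from "vscode-languageclient";

export function activate(context: ExtensionContext) {
  // The wrapper ships the language server as a Composer dependency and runs
  // it with the user's PHP installation, talking LSP over stdio.
  const serverPath = context.asAbsolutePath(path.join("vendor", "bin", "php-language-server"));
  const serverOptions: ServerOptions = { command: "php", args: [serverPath] };
  const clientOptions: LanguageClientOptions = {
    documentSelector: [{ scheme: "file", language: "php" }],
  };
  const client = new LanguageClient("php", "PHP Language Server", serverOptions, clientOptions);
  context.subscriptions.push(client.start());
}
```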

@bruno-medeiros

@bruno-medeiros a JavaScript or PHP LS does not have "binaries". You run the PHP file with php, it is an interpreted language, just like JavaScript. It depends on other PHP scripts at runtime.
I think having a dedicated registry (maybe just a GitHub repo with some JSON files) would be better suited.

Yeah, I know - but my point is that you could still have a single NPM package that transparently installs/manages the PHP LS (possibly using Composer, but that would be an internal detail).

@felixfbecker

@bruno-medeiros So you use a JavaScript package manager to install a package that then installs other packages, in a postinstall hook, through another package manager that you need to have installed on your PC? Because then why not install it with that other package manager directly? Alternatively: you use a JavaScript dependency manager to install a package that has no JavaScript dependencies but tons of PHP dependencies, distributed in the package (and therefore version-locked). Neither seems right to me.

LSs will have runtime dependencies anyway. A Java LS will require that you have Java installed, so why not require Maven as well (which a Java developer likely has installed anyway). A PHP LS will require PHP installed, so why not require Composer as well (which a PHP developer will likely have installed anyway).

@TylerJewell

Inside of Che - with our repository, the workspaces where a language server will need to run vary by user. We cannot predict or mandate the underlying operating system, or any dependencies. So we have implemented a language server packaging approach where the only dependency is bash. It's crude, but it's the lowest common denominator. Then, within the language server package, the packager must provide the ability for the language server to install everything else that it needs.

Since our workspaces are defined by Docker, if a user provides the underlying dependencies within the Docker image that is loaded, then our packager will not repeat the download. And this is why our codenvy base images are so big - we provide all dependencies for each type of platform.

But overall, for language servers to truly be portable across platforms, we should not require any dependencies. Even the bash dependency is too much and we are thinking about POSIX shell.

@felixfbecker

felixfbecker commented Oct 17, 2016

@TylerJewell You shouldn't expect a language server to provide an install script that is catered to Che's use case. That install/launch script would only be used by Che... VS Code for example has no use for a bash install script or a Dockerfile. You should rather use wrapper extensions like VS Code does, and that could then contain a Dockerfile and a launch script.

@TylerJewell

The language server does not provide the install script. The language server "packager" does. In the Che world, agents are dynamically added to / removed from an existing workspace, which is already running. So the packager is the bash wrapper around the language server, and when the agent is deployed into an already-running workspace, it is activated to do its magic.

It is complicated, as the bash script has to handle different OS issues. One thing that we'd like to look at in the future is to have our agents running in their own containers side by side with a workspace, but so far performance for this configuration hasn't been the best vs. having the agent running natively within the workspace container itself.

@bruno-medeiros

Because then why not install it with that other package manager directly?

Because the other package manager (Composer in that case) is language specific, and in this case "you" would need specific knowledge of how to work with Composer.

Note that we are not talking about a situation where a PHP developer wants to install a PHP LS and nothing else. In that scenario, yes, it makes more sense for the developer to install the PHP LS using Composer directly, with no need for an extra layer of abstraction. But this is not the case we are talking about.

We are talking about having an LSP client be able to install arbitrary LSs. For that to work, the client must not have specific knowledge about each LS - otherwise it would not be extensible. How would the client handle an LS for, say, a new language that doesn't exist yet but is developed in the future, has its own package manager, etc.?

There would have to be some sort of language-independent abstraction for installing and updating LSs. This could be NPM packages. It could be a bash script like @TylerJewell mentioned for Che - although that does not meet the requirement of being cross-platform (it's POSIX only, as mentioned). It works for the Che world, but wouldn't necessarily work beyond that.

@TylerJewell

Yes, correct - even for Che, being POSIX shell poses problems if the workspace container is a Windows container, which is now possible with Windows Server 2016. However, every byte of memory is something that we are very concerned about, and we had looked at npm and other package managers as a way to deliver language servers, and their footprints were all quite taxing.

Imagine a situation where we are running a single physical node and we are trying to load up 20 workspaces on a single Azure server. And each of those workspaces has varying agents for language servers and other stuff that we provide such as debuggers, ssh, and a web terminal. The overhead requirements add up quickly - so shell was a quick way for us to have a low impact to start.

@felixfbecker

felixfbecker commented Oct 17, 2016

@bruno-medeiros

We are talking about having an LSP client be able to install arbitrary LSs. For that to work, the client must not have specific knowledge about each LS - otherwise it would not be extensible.

Or you have intermediate installers for each LS that have knowledge about how to install the LS, and also knowledge about how to integrate into the desired environment. The LS itself should not be tailored to a client. The VS Code extension will simply spawn the LS with PHP and ask the user to install PHP himself; a Che LS package might also have a Dockerfile that sets up an environment. But the LS should not include a specific launch script or Dockerfile, it should only document how to install and run it.

@masaeedu
Contributor

A language server is a black box that speaks a particular protocol. It may have library deps, OS configuration requirements, need binaries on PATH, expect configuration files, etc. The only sane way to distribute all of this to a user is in an isolated container or VM, as Che does.

The ability to set up a container (or even a container system) does not require bash, and can be implemented e.g. in PowerShell or an x-plat binary. E.g. on Windows you'd need docker run -p lang-server-php and on Ubuntu much the same.

@felixfbecker

@masaeedu That totally depends on the type of client. For Che, a Dockerfile makes sense. For VS Code, it makes much more sense to simply tell the user "hey, if you want to use the PHP LS, you need PHP installed". Setup is the responsibility of the client (or wrapper extensions), it's out of scope for the LS. The LS should only document the requirements.

@TylerJewell

Multiple people have suggested that a language server provider should just provide a Dockerfile and be done. While we have investigated this packaging approach, it poses limitations in a distributed environment.

Please assume that the project code that a language server will operate against needs to be accessible by the language server. In a Che distributed world, each workspace is its own set of containers. If the language server that the code depends upon is a separate container, then this poses some constraints on the underlying orchestration system to have the containers co-located on the same physical nodes (or at least for the orchestrator to ensure that the LS container and the ws container have the same shared mount point over a NAS) AND for certain project files to be accessible through shared volume mounts, which in some cases creates I/O performance problems.

So, let's please consider that packaging / installation of language servers may need to be done both in-process - inside an existing container - and as a separate container.

@felixfbecker

felixfbecker commented Oct 17, 2016

That's exactly the kind of complicated problems an LS should not be concerned with, but the client / wrapper extension.

@masaeedu
Contributor

masaeedu commented Oct 17, 2016

@felixfbecker I don't understand why a Dockerfile is Che-specific. Like I said before, a language server is a black box that speaks a particular language protocol. The goal is to start from nothing and have a language server running on an accessible port/named pipe/socket. Docker provides a cross-platform approach to doing so. You deploy a file-for-file identical server that speaks the language protocol for a particular language. You don't need to worry about whether the PHP binary is installed within the container or whether it has the right extensions or whatever. You simply implement the client code for communicating with the server against a port on localhost.

@bruno-medeiros

The LS itself should not be tailored to a client. The VS Code extension will simply spawn the LS with PHP and ask the user to install PHP himself; a Che LS package might also have a Dockerfile that sets up an environment. But the LS should not include a specific launch script or Dockerfile, it should only document how to install and run it.

I'm not saying otherwise. It's best that the LS only concern itself with the LSP protocol, and yes, installation is beyond the scope of LSP. However, the suggestion is that there could be an additional protocol, an additional abstraction layer (the "marketplace" abstraction) that would handle installation/update/startup of arbitrary LSs. This would be separate from the main LSP protocol; rather, it would just be something additional that could be developed on top of each existing LS.

@felixfbecker

@masaeedu Because no editor like VS Code, Visual Studio, Atom etc. will spin up a container just for the LS - that would take way too many resources. In that way the Dockerfile is Che-specific, because it is only used by Che.

@masaeedu
Contributor

@TylerJewell The solution to this problem is volumes, which again work x-plat. While it is true this poses latency problems if you're running your containers in AWS europe, a reasonable fix for this is to simply run the containers on the user's machine.

@felixfbecker

@bruno-medeiros That's what I am saying - setup/installation/startup needs to be in a separate registry with separate packages, and that registry must be per-client, because setup will be different for different clients, as I explained in my previous comment. For VS Code this registry already exists, it's the VS marketplace. Che just needs something similar.

@masaeedu
Contributor

Because no editor like VS Code, Visual Studio, Atom etc. will spin up a container just for the LS

@felixfbecker Why not? We are discussing possible ways to package language servers, not discussing what is currently implemented. When you say "would take up way too many resources", this needs to be backed up with tests. Running 20-30 containers on my laptop does not result in a noticeable slowdown, and starting up a busybox container once I have the image downloaded takes a few milliseconds.

@TylerJewell

@masaeedu - It's diverging from the intention of the thread, but with the experience we have of running 500,000 workspaces on a distributed basis - we can confidently say that shared volumes are not the way to solve Docker container distribution problems :).

@felixfbecker - Che has its own registry. We call it the language server registry. We mandate our own parameters for installation + configuration, but we expect total reuse of the language server. It would be really nice, though, if we could consolidate on packaging / installation philosophies over time so that language server providers can offer a single (or two) packaging philosophies. It'll speed adoption.

@bruno-medeiros

For VS Code this registry already exists, it's the VS marketplace. Che just needs something similar.

Yeah, but those are all client-specific registries. Eclipse would need its own registry too, as would vim/emacs and every other editor/IDE out there. And each registry would need an extension for each LS... Obviously that sucks and does not really scale. The interesting project is to have a "marketplace" registry that is client-agnostic, much like LSP itself is client-agnostic.

@masaeedu
Contributor

@TylerJewell Here's the thing though, the typical user is not running 500,000 workspaces on their local machine, or using a number of language-servers at that scale simultaneously. It is possible that Che would need some Che-specific configuration in addition to the Dockerfile to make this scale well, but for typical editors (sublime, vim, VS Code, Atom, etc. etc.), a Docker container with the source volume mounted is performant enough for day-to-day use.

@bruno-medeiros

However, every byte of memory is something that we are very concerned about, and we had looked at npm and other package managers as a way to deliver language servers, and their footprints were all quite taxing

@TylerJewell Interesting, I didn't consider the performance aspect. Have you looked at Cargo, Rust's package manager? It's cross-platform, compiled natively (thus generally faster), and also not GC-based (so it has a smaller memory footprint). True, its intended purpose is to manage Rust source code and its dependencies, but maybe it could be used just for distribution purposes?

@TylerJewell

@masaeedu - the use cases for the language servers that I have seen are:

  1. LS / workspace in a distributed system running many workspaces - the Che model.
  2. Shared LS - in a distributed system where there are many clients sharing a single LS for different underlying code projects. This is the Orion model, which has its own problems, but they envision having many shared volumes.
  3. LS / workspace in a local system running a single workspace.

I think it's good to state the uses of language servers in the wild. Any proposed or anti-proposed solution should be aware that this is happening.

@bruno-medeiros - I have not studied Cargo in any depth, other than doing a hello world with it about a year ago. I think for something to replace shell / bash for us, our prerequisite would be that it works on every distro of Linux, Mac, and Windows. If Cargo compiles natively to these platforms, then that is certainly an option. I don't think we have considered natively compiled options yet, but it's worth a good dialog.

@mickaelistria
Author

In addition to defining how to provision the language server, a marketplace would also need to define how to connect to the provisioned server. As stdio, pipes and sockets are already in use, and HTTP or other network-based transports are likely to appear, this is also non-trivial.
@TylerJewell Do you know how Che defines the connection? I guess Che has some conventions in this regard, or does it already have some relatively reusable format to describe that?
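To illustrate the point, the connection metadata such a registry entry might have to carry could look roughly like this (a sketch only; the names are invented):

```typescript
// Sketch of the connection metadata a marketplace entry might need to carry;
// the field names are invented for illustration.
type ConnectionDescriptor =
  | { kind: "stdio"; command: string; args: string[] }   // launched locally, LSP over stdin/stdout
  | { kind: "pipe"; command: string; pipeName: string }  // named pipe / unix domain socket
  | { kind: "socket"; command?: string; port: number }   // TCP socket, possibly into a container
  | { kind: "http"; endpoint: string };                  // remote / SaaS-style transport
```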

@TylerJewell

@mickaelistria - good question. I am not sure of the answer. I do know that within Codenvy, our on-prem distributed system, we have implemented an HTTP server for hosting language server installations. When a workspace is deployed via Docker, the workspace checks for a volume mount to get the language server, and if that fails, it connects to the hosted Codenvy server to download the language server.

Once the language server is installed and started, there is some way that our editor detects its usage. I suspect that we have deployed a server-side API that allows the editor to query the workspace to see the status of language servers, and then once the server is active, it hands over to the editor. If you want specifics, I can ask Eugene Vidolob and Anatoliy Bazko.

@masaeedu
Contributor

@TylerJewell Have you considered using an orchestration tool like Ansible/Chef/Puppet to specify the language server configuration instead of bash scripts? That way the configuration can be applied outside of Docker containers, and depending on what surface of the configuration API you use, can even run it directly on Windows boxes.

@TylerJewell

We have thought about Puppet - we have a lot of experience with running distributed Puppet systems and are aware of what it is capable of doing. But we do all of our configuration work inside of Docker containers now, so to apply that sort of configuration, we'd do it in a container outside of the workspace container and then inject it in. We have explored doing that, and also potentially having the Codenvy / Che server host specialized configurations that are downloaded based upon the OS distribution where the language server is intended to run. Some configuration like this is possible. We will keep experimenting, but we will look for something that gives us the most cross-platform, highly distributed flexibility, and allows language server packagers to package any language server as simply as possible.

@mickaelistria
Author

I've given this another round of thought and, in any case, I'm quite skeptical about the possibility and value of creating a marketplace of language servers that goes beyond the wiki page.
Integrating a language server will always require some integration/packaging effort from someone working on the target tool, according to the tool's specificities (Che will prefer Docker images, whereas Eclipse IDE will prefer locally installed tools with STDIO, whereas some language servers offering analysis in the Cloud as SaaS will definitely require a remote network stream...). I find the idea of an interoperable marketplace of LSs very expensive to set up, hard to make fully universal, and really not something that would help end-users (who don't care about language servers, as they're an implementation detail of a language support).
The wiki page is IMHO a very good and sufficient approach.
As I'm the initial reporter of this issue, I'm tempted to close it; but I'm leaving it open to keep potential discussions going. I wouldn't mind if the project owners close it for me ;)

@dbaeumer
Member

dbaeumer commented Feb 8, 2017

@mickaelistria I agree with your statement. I will close the issue. If someone feels strongly about this, please comment here and I am willing to reopen.

@dbaeumer dbaeumer closed this as completed Feb 8, 2017
@vscodebot vscodebot bot locked and limited conversation to collaborators Nov 21, 2017