
FR: Add jj api command to establish structured interprocess communication for seamless IDE integration #3219

Open
ak0rz opened this issue Mar 5, 2024 · 9 comments
Labels
enhancement New feature or request

Comments

ak0rz commented Mar 5, 2024

First of all, thank you very much for your outstanding work. I'm supposed to leave G soon, and one of the things that I'll definitely miss is the Fig. :)

Luckily, I found this project and I'm very excited about its future!

Is your feature request related to a problem? Please describe.

There are no integrations available for any mainstream IDE on the market, and article authors in the media describe this as one of the biggest drawbacks, ultimately hurting the adoption rate.

Describe the solution you'd like

I propose adding a jj api command which provides a way for IDEs and libraries to communicate by sending structured data. Given a written API specification, jj api could start serving (gRPC?) requests over stdin/stdout with the calling process, including sending events back to the IDE itself. Another benefit of doing it that way is that we can have a single requirement for IDE extension authors: having jj in the PATH. There would be no need to package native libraries or to dynamically link against an unknown version of a library that must be installed somehow.
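To illustrate the stdin/stdout idea, here is a minimal sketch of length-prefixed message framing, the kind of scheme such a channel could use to delimit structured messages on a raw byte stream (the function names `encode_frame`/`decode_frames` are hypothetical, not part of jj):

```python
import struct

def encode_frame(payload: bytes) -> bytes:
    # Prefix each message with a 4-byte big-endian length, a common
    # framing scheme for structured IPC over a byte stream.
    return struct.pack(">I", len(payload)) + payload

def decode_frames(stream: bytes):
    # Yield each framed payload from a buffer of concatenated frames.
    offset = 0
    while offset < len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)
        offset += 4
        yield stream[offset:offset + length]
        offset += length
```

A client would write frames to the jj api process's stdin and read response frames from its stdout; the payload itself could be a serialized gRPC or JSON message.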

Describe alternatives you've considered

Building the integration on the CLI itself involves painful construction of arguments and parsing of outputs, which can change from version to version.

Integration with the native library is even harder, because most Electron-based IDEs (like VSCode) cannot use native Node modules in extensions (see microsoft/vscode#658). Using the native library in a third-party tool is also harsh, because it would then work with the storage directly, making changes to the storage format much harder due to the constant commitment to backward compatibility at the heart of the library.

Having an independent server binary running for each workspace is also an option, which could open a way to connect to jj instances on remote hosts. For now that seems like overkill, but if the use case appears it would be very easy to implement using the same code as jj api would.


joyously commented Mar 5, 2024

See issue #3034

PhilipMetzger added the "enhancement" label Mar 5, 2024
khionu (Collaborator) commented Mar 14, 2024

Using jj as a library would make bindings a blocker to creating integrations. I think that should definitely be an option, but being able to simply use the CLI would be more accessible.

An alternative would be to print the output of each command in a serialized format, e.g. JSON. It would likely be the same internal overhead as a dedicated command, but since the difference would be a single flag, it would be much more straightforward for implementers to get precisely the behaviour they want. If there's information that integrations need or want that wouldn't be natural to include in any command's --json output, that could be a good point to figure out whether we should be including it as part of the human-readable output.
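As a sketch of what consuming such a flag might look like: if a hypothetical `--json` mode emitted one JSON object per line (NDJSON), a client could parse it without ever touching human-readable text. The flag, field names, and output below are all illustrative assumptions, not actual jj behaviour:

```python
import json

# Hypothetical NDJSON output of a command run with a `--json` flag:
# one JSON object per line, so clients never scrape formatted text.
raw_output = (
    '{"change_id": "abc123", "description": "fix parser"}\n'
    '{"change_id": "def456", "description": "add tests"}\n'
)

entries = [json.loads(line) for line in raw_output.splitlines()]
descriptions = [e["description"] for e in entries]
```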


dairyisscary commented Mar 31, 2024

> An alternative would be to print the output of each command in a serialized format, e.g. JSON. It would likely be the same internal overhead as a dedicated command, but since the difference would be a single flag, it would be much more straightforward for implementers to get precisely the behavior they want. If there's information that integrations need or want that wouldn't be natural to include in any command's --json output, that could be a good point to figure out whether we should be including it as part of the human-readable output.

I was thinking a similar thing -- I want to build an integration for VCS status in shell prompts like Starship, and avoiding parsing human-readable console output would be ideal. One thing that Jujutsu has going for it is its templating language and the ability to pass a template via the -T/--template flag to most commands. I don't think the templating language currently has enough expressiveness for nested data structures like JSON objects and lists, but imagine if every integration could write its own template rather than jj committing to an interface beforehand.

jj log --template 'json({ author: author, desc: description })'
# Conceivably, there could be multiple encoding functions, not just json
jj log --template 'toml({ author: author, desc: description })'

necauqua (Collaborator) commented Apr 1, 2024

I was doing manual CSV with the templater a while ago, something like
jj log -r <revset> --no-graph --no-pager -T 'commit_id ++ "," ++ etc ++ "," ++ description'
and the only way to "escape" commas in the description was to put it last 🙃
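A common workaround for this kind of delimiter collision is to join fields with a control character that cannot appear in the data, such as the ASCII unit separator (0x1f), so field order no longer matters. The snippet below only demonstrates the parsing side; the joined line stands in for what a template could emit and is not real jj output:

```python
# Join fields with the ASCII unit separator (0x1f), which cannot
# appear in a commit description, instead of an ambiguous comma.
SEP = "\x1f"

# Stand-in for one templated output line (illustrative, not real jj output).
line = SEP.join(["a1b2c3", "commit, with commas", "2024-03-05"])

# Splitting is now unambiguous even though the description contains commas.
commit_id, description, date = line.split(SEP)
```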

PhilipMetzger (Collaborator) commented

I dislike the idea of representing internal data structures as JSON and/or TOML, as it will bind us to a representation which will become an API; see Hyrum's Law. The idea of adding it to the templater is great, but it makes the templater more complex.

@dairyisscary Can you add the use-case to #3262?

khionu (Collaborator) commented Apr 1, 2024

I like the idea of using the templater. --json could then simply use default templates.

matts1 (Collaborator) commented Apr 30, 2024

I've created a prototype in #3601 if anyone's interested. At the moment you run it with jj api grpc, but I designed it in such a way that we could also create jj api json relatively easily (I split the gRPC layer from the actual request handlers).

noahmayr mentioned this issue May 1, 2024
cdmistman commented

A caveat to relying on the templater approach is that clients have little ability to act in response to jj events; instead they would have to repeatedly call jj api.

Alongside the fact that there might be several clients, it feels like a long-lived server with proper IPC would be better. Further, since much of the library's interface is already implemented with gRPC protos, it feels natural to expose those in a server format, though I'm not sure any server implementation needs to support more interfaces than networking/Unix sockets.

> I dislike the idea of representing internal data structures as JSON and/or TOML, as it will bind us to a representation which will become an API; see Hyrum's Law. The idea of adding it to the templater is great, but it makes the templater more complex.

I'm afraid I'm a little confused by this. Why is it better to expose internal data structures in a DSL than in a serializable format?

If a DSL is desired, perhaps one in Lua could be considered instead? There would be a few losses in the expressivity of the current templater language, but it would be nice to support this client-server model instead of being restricted to single command invocations with a templater lacking so many features. For example, one may imagine a third-party "plugin" for jj which provides jj a Lua script that runs a loop over a basic bidirectional channel that jj exposes for JSON IPC:

-- maybe luarocks support?
local json = require('cjson')

jj.ipc(function(chan)
  while chan:await() do
    local request = chan:request()
    local revs = jj:revset(request.revset) -- would be nice to still support the revset language
    local templated = jj:template(request.template) -- could still support the templater language, though templating might be better as a Lua API
    local response = { revs = revs, rendered = templated }
    chan:send(json.encode(response))
  end
end)

This way, the shape of the request-response format is still the client's decision (as is the case with the templater), but now we get some improvements in interaction (proper user-defined functions, loops, etc.).

PhilipMetzger (Collaborator) commented

> I dislike the idea of representing internal data structures as JSON and/or TOML, as it will bind us to a representation which will become an API; see Hyrum's Law. The idea of adding it to the templater is great, but it makes the templater more complex.

> I'm afraid I'm a little confused by this. Why is it better to expose internal data structures in a DSL than in a serializable format?

I don't think I implied that I would want to expose internal data structures with the templater, which is what the conversation before was about. Rather, I phrased it as exposing the necessary parts via the templater, which won't bind us to a representation. This allows third parties to write some integrations without depending heavily on our internals, which will need to be solved at some point anyway.

> If a DSL is desired, perhaps one in Lua could be considered instead? There would be a few losses in the expressivity of the current templater language, but it would be nice to support this client-server model instead of being restricted to single command invocations with a templater lacking so many features. For example, one may imagine a third-party "plugin" for jj which provides jj a Lua script that runs a loop over a basic bidirectional channel that jj exposes for JSON IPC:

This is the discussion in #3262, where I've repeatedly said that exposing some kind of programmable DSL will lead to maintenance issues, while the approach of supporting other languages with an SDK/plugins is in #3575.

And since both of those issues exist, it is a known problem which needs to be solved.
