
Add Copilot Plugin #1927

Open · 7flash opened this issue Apr 2, 2022 · 37 comments
Labels
A-plugin Area: Plugin system C-enhancement Category: Improvements R-wontfix Not planned: Won't fix

Comments

@7flash commented Apr 2, 2022

No description provided.

7flash added the C-enhancement (Category: Improvements) label Apr 2, 2022
kirawi added the A-plugin (Area: Plugin system) label Apr 2, 2022
@nick887 commented Sep 11, 2022

Has there been any discussion? I think it would be a really useful feature if we could use GitHub Copilot in Helix, and I would switch from GoLand to Helix to get my work done.

@sudormrfbin (Member)

There are no plans to have Copilot in the editor core, so this will have to wait until there is proper plugin support.

@luccahuguet commented Sep 18, 2022

Thanks for the info, sudormrfbin.

If something changes, please let us know; it would be a very useful feature!

In the meantime, it is possible to run Copilot (edit: OpenAI's Codex, which powers Copilot) in the terminal, with bash or zsh.

Not as good, but it could help.

@roehst commented Sep 21, 2022

Waiting on this feature to move from Neovim to Helix for good.

@lukepighetti

Has anyone tried integrating this copilot LSP? https://github.com/TerminalFi/LSP-copilot/blob/master/language-server/package.json#L4

@yudjinn commented Nov 13, 2022

Isn't GitHub getting sued for how Copilot takes data? I definitely don't think Copilot should be anywhere near core.

@gaetschwartz

Is there any update on this?

@luccahuguet

Is there any update on this?

Hi, this needs a plugin system in the first place, which Helix does not have yet.

The plugin system is currently being prototyped, and it might take a long while before it is done. The current prototype might even be scrapped if the tech turns out to be less suitable than other options.

A plugin system is quite a big endeavor, so don't wait on that.

That said, the maintainers are quite good and productive, so it will get done at some point.

PS: I also miss this feature a lot and have been using VS Code when I need it.

@lukepighetti commented Jan 9, 2023

This seems like something we can solve with LSP integration instead of the plugin system. I believe that's how folks are using it in Sublime https://forum.sublimetext.com/t/github-copilot-for-sublime-text-4-is-coming/64449/3
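For illustration only (not a tested setup): with the multiple-language-server support in current Helix releases, wiring an external Copilot-style server into languages.toml might look roughly like the sketch below. The `copilot-lsp` command name is hypothetical; you would substitute whatever server binary you actually use.

```toml
# Hypothetical sketch: assumes a standalone Copilot-style LSP binary
# called "copilot-lsp" is on PATH and speaks standard LSP.
[language-server.copilot]
command = "copilot-lsp"

[[language]]
name = "typescript"
# Run the completion server alongside the regular language server.
language-servers = ["typescript-language-server", "copilot"]
```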

@kanwarujjaval

Was anyone able to use the Copilot LSP successfully?
It would be really useful, especially once #2507 is merged.

@leeola commented Mar 22, 2023

With Copilot X's recent announcement I had to look this up for Helix, which landed me here. The LSP idea sounds alright, but it would be nice to craft a deeper UX around Copilot.

Are plugin-like APIs available in Helix such that one could, perhaps, temporarily fork Helix and integrate a Copilot plugin directly into the forked binary? I.e., write a plugin directly into a fork of Helix proper, even though the intent would be to migrate it to a real plugin ASAP?

If Copilot X is useful (big if, heh) it would be nice to have a good experience in Helix for it. They have a Neovim plugin, for comparison.

@patrick-kidger

Also ended up here after the Copilot X announcement, FWIW. No idea if implementing it as just an LSP is technically feasible, but that would certainly be nice if so.

@mikkelfj

The way things are going with AI these days, some level of non-trivial support will eventually be necessary, though some aspects, like documentation, could also belong to the build system.

Either way, I suspect we will see competitors to Copilot X, notably fully open-source ones, in the coming years (or days, maybe). The feature set will also evolve drastically.

For these reasons I think both a sense of urgency and some restraint are necessary at the same time.

@7flash (Author) commented Mar 26, 2023

A temporary solution I have for myself is a ChatGPT window running a "bridge" script in devtools. It allows ChatGPT to communicate with other apps through a locally running database/job queue. If it makes sense, I can try to turn it into a plugin.
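To make the idea concrete, here is a minimal sketch of such a bridge, pasted into the ChatGPT tab's devtools console. The queue endpoint, port, and job shape are all assumptions for illustration; this is not the actual script.

```js
// Hypothetical bridge: poll a locally running job queue and type each
// pending prompt into the ChatGPT page. Endpoints/ports are made up.
async function pollQueue() {
  try {
    const res = await fetch("http://localhost:8080/jobs/next"); // assumed queue API
    if (res.ok) {
      const { prompt } = await res.json();
      const box = document.querySelector("textarea"); // ChatGPT's prompt box
      box.value = prompt;
      box.dispatchEvent(new Event("input", { bubbles: true })); // notify the page's framework
      box.form?.requestSubmit();
      // Scraping the reply out of the DOM and POSTing it back to the
      // queue is site-specific and omitted here.
    }
  } catch (_) {
    // queue not running yet; just retry
  }
  setTimeout(pollQueue, 2000);
}
pollQueue();
```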

@ptman commented Mar 27, 2023

Not just Copilot; there are now several similar services: Copilot, Tabnine, Codeium, CodeGeeX (vim-ai just uses ChatGPT), ...

@7flash (Author) commented Apr 12, 2023

A temporary solution I have for myself is a ChatGPT window running a "bridge" script in devtools. It allows ChatGPT to communicate with other apps through a locally running database/job queue. If it makes sense, I can try to turn it into a plugin.

Relevant: kazuki-sf/YouTube_Summary_with_ChatGPT#9

@ptman commented Apr 21, 2023

And also Amazon CodeWhisperer. I suggest the title be rewritten to reflect the variety of alternatives.

@Neugierdsnase commented Apr 23, 2023

What can we do to make this happen as soon as possible? I want to switch over completely, but the reality is that I just don't want to miss AI tools.

I'm willing to invest time in this.

@7flash (Author) commented Apr 23, 2023

What can we do to make this happen as soon as possible? I want to switch over completely, but the reality is that I just don't want to miss AI tools.

I'm willing to invest time in this.

Consider running a browser extension that establishes communication between ChatGPT and Helix. Here is a minimal example I made between two instances of ChatGPT, but you can imagine replacing the second instance with the Helix editor: https://github.com/7flash/AutoChatGPT

@rnarenpujari

Related discussion thread: #4037

@kirawi (Member) commented Jul 3, 2023

I noticed that the related fork was not linked: #6865
AFAIK it works and some people are using it, but we have no plans to merge it into Helix for reasons discussed there

@Neugierdsnase

I noticed that the related fork was not linked: #6865 AFAIK it works and some people are using it, but we have no plans to merge it into Helix for reasons discussed there

Thanks for linking!

@0x61nas commented Nov 23, 2023

Isn't GitHub getting sued for how Copilot takes data? I definitely don't think Copilot should be anywhere near core.

Plus, not everyone wants to be distracted by its stupid suggestions. We should wait for the plugin system to be ready; I'm pretty sure someone will write a plugin for this.

@hemedani commented Dec 18, 2023

How about Codeium?

@leona commented Jan 25, 2024

In case anybody is still looking for this: #4037 (comment)

@tirithen

As suggested in #9369, I believe that adding Ollama LSP support would be a great possibility. Although the heavier models require a GPU with enough memory, they all run locally on your machine, which might be considered a much safer setup.

GPT-style text completion would be a very welcome addition to Helix, and having it work with open, locally run models would be even better.

@iocron commented Feb 1, 2024

As suggested in #9369, I believe that adding Ollama LSP support would be a great possibility. Although the heavier models require a GPU with enough memory, they all run locally on your machine, which might be considered a much safer setup.

GPT-style text completion would be a very welcome addition to Helix, and having it work with open, locally run models would be even better.

I agree, except about the heavier models. We don't necessarily need them, because there are a couple of great and efficient Ollama code models out there that don't require a big GPU :) https://ollama.ai/library?q=code
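For example (the model tag below is one of the code models from the Ollama library; pick whatever fits your hardware), trying a small code model locally is just:

```sh
# Pull a small code-oriented model and try it locally (no big GPU needed).
ollama pull codellama:7b-code
ollama run codellama:7b-code "Write a Rust function that reverses a string"
```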

@mikkelfj commented Feb 1, 2024

ollama run mistral works fine on a Mac Studio M2 Max with 32 GB RAM; it's a 7B model.

@mikkelfj commented Feb 1, 2024

However, it only really gets interesting when the models can understand content referenced in compile_commands.json.

@tirithen commented Feb 1, 2024

Regardless of model size, the great thing with Ollama is that projects like it make it possible to easily pull the model of your choice, one that fits your needs and system capabilities, while ensuring privacy.

I wonder if there are already good LSPs out there for Ollama that could be hooked into Helix. Has anyone considered the llm-ls project? I just found it now, but it is supposed to work as an LSP and can bridge over to LLM runners, Ollama being one of them.

There has already been some interest in figuring out the LSP integration from @hemedani and @webdev23 on the issue huggingface/llm-ls#49. Maybe someone from here can help them out with some directions to get going?

@leona commented Feb 1, 2024

Regardless of model size, the great thing with Ollama is that projects like it make it possible to easily pull the model of your choice, one that fits your needs and system capabilities, while ensuring privacy.

I wonder if there are already good LSPs out there for Ollama that could be hooked into Helix. Has anyone considered the llm-ls project? I just found it now, but it is supposed to work as an LSP and can bridge over to LLM runners, Ollama being one of them.

There has already been some interest in figuring out the LSP integration from @hemedani and @webdev23 on the issue huggingface/llm-ls#49. Maybe someone from here can help them out with some directions to get going?

It's possible to override the OpenAI endpoint of my language server here, but if it doesn't follow the OpenAI pattern it won't work, and I don't think Ollama's does. If people want support for Ollama it would be fairly easy to add, but any time I've tried locally hosted models they've been pretty poor for this use case.

@tirithen commented Feb 1, 2024

@leona thanks for sharing! Your project could be a nice template for running against Ollama. When I tested Mistral 7B separately for code assistance in Rust it worked pretty well for me, at least better than without it, but I would probably want to run the larger models for even better responses.

I'm personally more into these open models mainly for privacy reasons, but your project could also be useful as-is for the ones that started the issue and were interested in having Copilot running (sorry for hijacking the thread for Ollama things, by the way).

Really nice to have a running LSP plugin example; now I wish I just had more time to try this out with Ollama. If anyone gets started I'll definitely try to find some time to help out (preferably in Rust, in that case).

@mikkelfj commented Feb 3, 2024

I'm not quite ready to rely on LLMs for coding. This is Mistral 7B:

"An octahedron consists of eight vertices and eight triangular faces, with each face being made up of three vertices. Therefore, there are indeed a total of 8 x 3 = 24 individual vertices, but since a triangle is defined by three non-unique vertices, there are only 12 unique triangles in an octahedron."

@7flash (Author) commented Mar 3, 2024

Hi! I would like to share the current workflow I have found myself comfortable with.

https://github.com/7flash/helix-chatgpt

Note that it works especially well with Warp Terminal, where I created a workflow/button to execute the script, but you can also define a bash function, etc.

When you run the script, it opens a new file in Helix where you can write your prompt; I find it more comfortable than any existing UI.

  • You can separate user and assistant messages with |user| and |assistant| separators directly in your prompt.
  • You can reference local files with file:/ references and web links with http[s]://, and their content will be embedded into the prompt.
  • You obviously may not want to embed the whole content of a given webpage into your prompt, so you can specify a CSS selector after a # hashtag in its path.
  • Likewise, you don't have to embed full source files; you can reference specific sections using special comments starting with ".".

Just sharing here what works for me, but there isn't good documentation yet, so please feel free to contribute.
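As an illustration of the conventions above (the path, URL, and selector are made up), a prompt might look like:

```
|user|
Summarize the docs at https://example.com/guide#main-content and update
the parser in file:/src/parser.rs to match.
|assistant|
```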

@kyfanc (Contributor) commented Mar 6, 2024

Hi everyone!

@leona thanks for your effort in creating helix-gpt. I made an attempt to integrate it with Ollama. @tirithen maybe you would be interested in giving it a try?

Regarding llm-ls, I believe they are implementing the features as LSP custom methods, which Helix's built-in LSP client does not currently support.

@tirithen commented Mar 6, 2024

@kyfanc I'm off on a trip for a while now, without my GPU, but I'll give it a try once I'm back.

Ideally, I think this sort of application should in the end be a simple binary written in Rust or similar, to be fast and efficient, rather than bun + TypeScript. But I also saw that it can at least build to a binary via bun (I suppose that means bundling and running a full V8 to run the LSP).

Nice that there is some progress either way! :-) Also nice that the approach supports both cloud GPT providers and Ollama, so users can choose what fits them best. :-)

@RayyanNafees commented May 15, 2024

(I suppose that means bundling and running a full V8 to run the LSP).

@tirithen Bun doesn't use V8, actually; it uses JavaScriptCore, which has a smaller size.
