[Enhancement]: Can integrate locally deployed LLM #382

Closed
1 task done
gixiphy opened this issue Nov 1, 2023 · 11 comments
Labels: enhancement (New feature or request), released

@gixiphy commented Nov 1, 2023

Before Requesting

  • I have searched the existing issues, and there is no existing issue for my feature request

What feature do you want?

Integrate a locally deployed LLM, something like this:
https://github.com/danielgross/localpilot

gixiphy added the enhancement (New feature or request) label Nov 1, 2023
@intitni (Owner) commented Nov 1, 2023

After implementing the project scope feature and some other project-scope-related features (in one or two more versions), I will experiment with allowing third-party extensions to provide suggestion and chat features.

If it goes well, I think this can be implemented as an extension. If it doesn't go well, I will consider allowing users to use a completion model as the backend of the suggestion feature.

But running an LLM inside the app will be out of scope.

I just checked the link you posted; it looks like it uses the GitHub Copilot proxy to redirect requests to other LLMs. Copilot for Xcode supports a GitHub Copilot proxy (in the GitHub Copilot settings, not really tested), so you can give it a try and let me know whether it works.

@mirko-milovanovic-vidiemme

Hi, regarding this issue, I've been wanting to use a local LLM through Ollama and the litellm proxy (which takes the Ollama API and exposes it as an OpenAI-compatible API) for the Chat and Prompt to Code features of the plugin.

I've sort of managed to get it working, but the Chat feature doesn't behave quite right: the answer tokens streamed from the local litellm proxy arrive in the Copilot for Xcode chat box as separate messages, which makes the output unreadable and unusable.

I have no clue whether it's a proxy problem, an Ollama problem, or the way the plugin makes requests to the proxy (using the OpenAI servers works as expected).

I wanted to know whether it's possible to make this work, or whether it would be an interesting feature to include/implement.
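For reference, here is a minimal sketch of how the proxy's OpenAI-compatible streaming endpoint can be exercised from a script; the base URL, port, API key, and model name are placeholders for my local configuration rather than anything the plugin prescribes:

```python
# Probe the litellm proxy's OpenAI-compatible chat completions endpoint.
# Assumes the `openai` Python package (v1+) and a litellm proxy fronting Ollama.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # placeholder: wherever the litellm proxy listens
    api_key="sk-local",                   # placeholder: any string works if the proxy has no auth configured
)

stream = client.chat.completions.create(
    model="ollama/codellama",  # placeholder: the model name configured in litellm/Ollama
    messages=[{"role": "user", "content": "Write a Swift function that reverses a string."}],
    stream=True,
)

# Each chunk carries a partial delta; printing them in order reconstructs the answer.
for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```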

Here's an example video demonstrating the issue:

[Video attachment: Registrazione.schermo.2023-11-17.alle.10.48.42.mov]
[Screenshot: 2023-11-17 alle 10 48 26]

@intitni (Owner) commented Nov 17, 2023

I think it happens because the service returns each data chunk with a different ID. I will see what I can do about it in the hotfix releasing later today.
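For illustration only (this is not the app's actual implementation): if a client groups streamed chunks by their id field, a proxy that stamps every chunk with a fresh id produces one chat message per token. Accumulating the deltas in arrival order, without keying on the id, gives the single message you would expect:

```python
# Minimal sketch: merge OpenAI-style streaming chunks without relying on a
# stable chunk id. `stream` yields objects shaped like ChatCompletionChunk.
def merge_stream(stream) -> str:
    parts = []
    for chunk in stream:
        if not chunk.choices:
            continue
        delta = chunk.choices[0].delta.content
        if delta:
            parts.append(delta)  # arrival order matters; chunk.id is deliberately ignored
    return "".join(parts)
```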

@mirko-milovanovic-vidiemme

Thanks for the quick response! I've also discovered that with the Prompt to Code feature the messages actually behave as expected, which might be something to consider.

[Video attachment: Registrazione.schermo.2023-11-17.alle.11.01.43.mp4]

@intitni (Owner) commented Nov 17, 2023

@mirko-milovanovic-vidiemme Can you post the response body here?

@mirko-milovanovic-vidiemme

This is what I managed to log for the Prompt to Code request that seems to work (though it's pretty ugly and hard to read):
debug.txt

@intitni (Owner) commented Nov 18, 2023

@mirko-milovanovic-vidiemme please give 0.27.1 a try

@mirko-milovanovic-vidiemme

Yup, it seems to be working as expected now! Thanks for the quick fix


This issue is stale because it has been open for 30 days with no activity.

github-actions bot added the stale label Dec 20, 2023
@intitni (Owner) commented Dec 30, 2023

Now that CopilotForXcodeKit is available, as mentioned before, I will make an extension that provides the suggestion service via local models.

@intitni (Owner) commented Feb 22, 2024

The suggestion feature with locally running models is released.

intitni closed this as completed Feb 22, 2024