
add ollama provider #42

Merged
merged 1 commit into leona:master on Mar 13, 2024
Conversation

kyfanc (Contributor)

@kyfanc commented Mar 2, 2024

This PR adds basic support for Ollama.

Prompts are copied from the openai provider.

Testing

  1. install ollama
  2. launch ollama
  3. ollama pull codellama
  4. modify Helix's languages.toml:
[[language]]
name = "go"
language-servers = ["gopls", "gpt"]

[language-server.gpt]
command = "bun"
args = [
  "--inspect=0.0.0.0:6499",
  "run",
  "helix-gpt/src/app.ts",
  "--handler",
  "ollama",
  "--logFile",
  "helix-gpt.log"
]

Notes

I am new to coding with LLMs and just wanted to play around with Helix and a locally hosted Ollama.
As I don't have access to other providers for comparison, and don't have much experience with prompt engineering, this is just something that seems to work. Please help test it out and let me know what is missing.
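To give reviewers a feel for the shape of the integration, here is a rough sketch (not the PR's actual code) of what an Ollama handler boils down to: a POST to Ollama's local HTTP API. The endpoint and field names follow Ollama's documented `/api/generate` API; the model name and prompt are placeholders.

```typescript
// Minimal sketch of talking to a local Ollama server.
// Assumes Ollama is listening on its default port 11434.

interface OllamaRequest {
  model: string;
  prompt: string;
  stream: boolean; // false => single JSON response instead of a stream
}

// Pure helper: build the request body Ollama expects.
function buildOllamaRequest(model: string, prompt: string): OllamaRequest {
  return { model, prompt, stream: false };
}

// Send a completion request; Ollama returns the text in `response`.
async function complete(prompt: string, model = "codellama"): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildOllamaRequest(model, prompt)),
  });
  const data = await res.json();
  return data.response;
}
```

The actual provider in this PR also carries over the openai prompts and wires the response into helix-gpt's handler interface; the sketch above only shows the transport.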

Discussion

  • for users without strong hardware, some actions on larger files may take too long and trigger Helix's Async job failed: request 8 timed out
  • prompts and parameters may need some more fine-tuning
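A possible mitigation for the timeout issue, assuming Helix's per-language-server `timeout` setting (in seconds) applies to these requests (verify against the Helix docs for your version):

```toml
# Hypothetical tweak: raise the language server request timeout so slow
# local models have more time to respond. The `timeout` key is assumed
# from Helix's languages.toml language-server configuration.
[language-server.gpt]
command = "bun"
timeout = 120
```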

@kyfanc kyfanc marked this pull request as ready for review March 6, 2024 04:13
@leona (Owner) commented Mar 13, 2024

Thanks a lot for the PR, merged.

@leona leona merged commit 2e908d4 into leona:master Mar 13, 2024