feat: Integrate Ollama for local model support. #92

Merged
merged 9 commits into main on Mar 1, 2024

Conversation

eli64s
Owner

@eli64s eli64s commented Mar 1, 2024

Summary

This pull request adds Ollama support via its OpenAI-compatible API, enabling users to run readme-ai with local models such as Llama 2 and Mistral. Additionally, this update includes new CLI options to improve README customization.
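
For context, here is a minimal sketch of the OpenAI-compatibility path (an illustration, not the exact code in readme-ai): Ollama exposes an OpenAI-compatible API at /v1 on its default port, so the standard openai Python client can talk to a locally served model.

from openai import OpenAI

# Point the OpenAI client at the local Ollama server (default port 11434).
# An api_key is required by the client but ignored by Ollama.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

# Request a completion from the locally served Mistral model.
response = client.chat.completions.create(
    model="mistral",
    messages=[{"role": "user", "content": "Summarize this repository in one sentence."}],
)
print(response.choices[0].message.content)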

Ollama Example

Start by pulling a model such as Llama 2 or Mistral, then start the Ollama server without running the desktop application.

$ ollama pull mistral:latest 
$ ollama serve

Set the Ollama host as an environment variable and run the CLI.

$ export OLLAMA_HOST=127.0.0.1
$ readmeai --api OLLAMA --model mistral --repository https://github.com/eli64s/readme-ai 

@eli64s eli64s added the feature label Mar 1, 2024
@eli64s eli64s merged commit 9fe8dcb into main Mar 1, 2024
5 checks passed