Issues: ollama/ollama
#4458 · Confirm GPU usage command [feature request] · opened May 15, 2024 by puddlejumper90
#4457 · Error: llama runner process has terminated: exit status 0xc0000409 [bug] · opened May 15, 2024 by xdfnet
#4456 · Failure when building docker from source code Dockerfile [bug] · opened May 15, 2024 by lewismacnow
#4453 · Ollama + sentence-transformers with torch cuda [bug] · opened May 15, 2024 by qsdhj
#4449 · openai.error.InvalidRequestError: model 'deepseek-coder:6.7b' not found, try pulling it first [bug] · opened May 15, 2024 by userandpass
#4448 · Streaming Chat Completion via OpenAI API should support stream option to include Usage [feature request] · opened May 15, 2024 by odrobnik
#4446 · JSON Mode + Streaming + OpenAI API + Llama3 = never sends STOP, and a lot of whitespace after the JSON [bug] · opened May 15, 2024 by odrobnik
#4444 · Add tab completions for fish shell [feature request] · opened May 15, 2024 by coder543
#4443 · Models remain resident in VRAM after deletion [bug] · opened May 15, 2024 by coder543
#4442 · Error: llama runner process has terminated: exit status 0xc0000409 [bug] · opened May 15, 2024 by hcr707305003
#4440 · Add support for third-party hosted APIs [feature request] · opened May 14, 2024 by 19h
#4437 · Ollama vs Llama-cpp-python: Slow response time as compared to llama-cpp-python [bug, gpu, nvidia] · opened May 14, 2024 by utility-aagrawal
#4433 · GPU layer control / prioritisation [feature request] · opened May 14, 2024 by AncientMystic
#4432 · Lora models / Lora training [feature request] · opened May 14, 2024 by AncientMystic
#4431 · BUG: Custom System Prompt not loading [bug] · opened May 14, 2024 by MichaelFomenko
#4427 · ollama can't run qwen:72b, error msg "gpu VRAM usage didn't recover within timeout" [bug] · opened May 14, 2024 by changingshow
#4425 · joanfm / jina-embeddings-v2-base-en and -de fail with error code 500 [bug] · opened May 14, 2024 by qsdhj
#4423 · LLAVA1.6 performance huge drop after export/import using ModelFile [bug] · opened May 14, 2024 by vai-minzhou
#4406 · Would it be possible to add the Bloom model and other multilanguage/multilingual models? [model request] · opened May 13, 2024 by asterbini
#4405 · Add model GEITje Ultra / Dutch models [model request] · opened May 13, 2024 by thisisawesome1994
#4404 · error loading model vocabulary: unknown pre-tokenizer type: 'qwen2' [bug] · opened May 13, 2024 by HouseYeung
#4396 · Ask user to restart Ollama after Nvidia driver updates [feature request, gpu, nvidia] · opened May 13, 2024 by owenzhao
#4395 · Cannot Use GPU properly [bug, gpu, nvidia] · opened May 13, 2024 by applepieiris