Issues: ollama/ollama
#4758 · Add this web app to the list of apps in the README · [feature request] · opened May 31, 2024 by greenido
#4755 · (windows) ollama model download will not keep on downloading when reopen ollama · [feature request] · opened May 31, 2024 by waldolin
#4753 · FROM is not recognized · [bug] · opened May 31, 2024 by EugeoSynthesisThirtyTwo
#4752 · Multi-GPU and batch management · [feature request] · opened May 31, 2024 by LaetLanf
#4750 · Garbage output running llama3 GGUF model · [bug] · opened May 31, 2024 by DiptenduIDEAS
#4749 · OLLAMA_MODELS not applied on initial start or on restart after upgrade on macOS · [feature request] · opened May 31, 2024 by vernonstinebaker
#4745 · CMake Error at CMakeLists.txt:2 (project): Generator System.Management.Automation.RemoteException Ninja System.Management.Automation.RemoteException does not support platform specification, but platform · [bug] · opened May 31, 2024 by chaoqunxie
#4739 · sensitivity to slow or unstable internet · [bug] · opened May 31, 2024 by logiota
#4732 · Unable to Change Ollama Models Directory on Linux (Rocky9) · [bug] · opened May 30, 2024 by pykeras
#4730 · llama3:8b-instruct performs much worse than llama3-8b-8192 on groq · [bug] · opened May 30, 2024 by mitar
#4726 · have an NVIDIA GPU, but can not use. · [bug, nvidia] · opened May 30, 2024 by pengyuxiang1
#4722 · Slower performance on Arm64 with Phi3 and Lexi-Llama on 1.39 · [bug, performance] · opened May 30, 2024 by khanumballz
#4720 · Ollama unload model when embedding a large pdf file · [bug] · opened May 30, 2024 by travisgu
#4713 · Codestral doesn't output correct response · [bug] · opened May 30, 2024 by jasonhotsauce
#4711 · Adding function calling support for Agents management · [feature request] · opened May 29, 2024 by flefevre
#4710 · s390x build ollama : running gcc failed · [bug] · opened May 29, 2024 by woale
#4709 · Code models like codestral should have a lower temperature · [feature request] · opened May 29, 2024 by DuckyBlender
#4705 · arm64 llama runner takes a long time to start compared to amd64 arch · [bug] · opened May 29, 2024 by glenamac
#4704 · msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000409 " · [bug] · opened May 29, 2024 by wsry888
#4703 · Could you please support deepseek v2 ? · [model request] · opened May 29, 2024 by netspym
#4702 · Introduce regular support releases · [feature request] · opened May 29, 2024 by dcasota