Issues: ollama/ollama
- not able to download models from ollama behind proxy [bug] (#7522, opened Nov 6, 2024 by anshika1234)
- Build instructions in https://github.com/ollama/ollama/blob/main/llama/README.md are outdated or non-functional [bug] (#7520, opened Nov 6, 2024 by yeahdongcn)
- Support for # of completions? (for loom obsidian plugin) [feature request] (#7518, opened Nov 5, 2024 by cognitivetech)
- Realtime API like OpenAI (full fledged voice to voice integrations) [feature request] (#7514, opened Nov 5, 2024 by ryzxxn)
- About OLLAMA_SCHED_SPREAD env, how to load a model on two GPUs [bug, needs more info, nvidia] (#7511, opened Nov 5, 2024 by Kouuh)
- Add support for function call (response back) (message.role=tool) [api, feature request] (#7510, opened Nov 5, 2024 by RogerBarreto)
- Support partial loads of LLaMA 3.2 Vision 11b on 6G GPUs [feature request] (#7509, opened Nov 5, 2024 by Romultra)
- Expose DRY and XTC parameters [feature request] (#7504, opened Nov 5, 2024 by p-e-w)
- llama slows down a lot on the second and subsequent runs [bug, nvidia] (#7497, opened Nov 4, 2024 by vertikalm)
- langchain-python-rag-document not working [bug] (#7492, opened Nov 4, 2024 by gsportelli)
- Avoid clearing response content when parsing tools is unnecessary [feature request] (#7488, opened Nov 4, 2024 by ouariachi)
- I hope ollama can provide rerank models and speech recognition models [model request] (#7485, opened Nov 4, 2024 by ardyli)
- Invalid prompt generation when the request message exceeds the context size [bug] (#7484, opened Nov 3, 2024 by b4rtaz)
- Packaging ollama: make including ROCm libraries in the dist optional [build, feature request] (#7483, opened Nov 3, 2024 by breezerider)
- HIP_VISIBLE_DEVICES vs ROCR_VISIBLE_DEVICES [amd, bug, needs more info] (#7480, opened Nov 3, 2024 by nathan-skynet)
- Issue with Reinstalling Ollama: "Killed" Error on ollama serve [bug, needs more info] (#7478, opened Nov 3, 2024 by hosein97)
- Submit 4 images to Ollama visual model, generate a large amount of log without any return [bug, needs more info] (#7477, opened Nov 3, 2024 by delubee)
- Cannot generate id_ed25519 - read-only file system [bug] (#7471, opened Nov 2, 2024 by duhow)
- [Model request] The First-Ever Comprehensive Benchmark for Multimodal Large Language Models in Industrial Anomaly Detection [model request] (#7470, opened Nov 2, 2024 by monkeycc)
- Dual GPU token generation bug on 0.3.15 that did not exist on 0.3.13 [amd, bug, windows] (#7461, opened Nov 1, 2024 by calmingaura)
- mistake: ollama run llama3_8b_chat_uncensored_q4_0 [bug] (#7458, opened Oct 31, 2024 by 1015g)