Issues: SciSharp/LLamaSharp
#857 [BUG]: ChatSession unnecessarily prevents arbitrary conversation interleaving
Opened Jul 19, 2024 by lostmsu
#856 [BUG]: Tokenization in 0.14.0 adds spaces
Labels: bug (Something isn't working), Upstream (Tracking an issue in llama.cpp)
Opened Jul 18, 2024 by newsletternewsletter
#832 Method not found: 'Double Microsoft.KernelMemory.AI.TextGenerationOptions.get_TopP()'
Opened Jul 6, 2024 by KanonRim
#831 How to handle CUDA error: out of memory?
Labels: Upstream (Tracking an issue in llama.cpp)
Opened Jul 6, 2024 by yukozh
#785 [BUG]: Cannot load the backend on MACOS
Labels: backend, bug (Something isn't working)
Opened Jun 5, 2024 by AsakusaRinne
#759 [BUG]: When using large models with the GPU the code crashes with cannot allocate kvcache
Labels: bug (Something isn't working), Upstream (Tracking an issue in llama.cpp)
Opened May 28, 2024 by zsogitbe
#744 [BUG]: Fail to Load Model with Chinese Model Path
Labels: bug (Something isn't working), Upstream (Tracking an issue in llama.cpp)
Opened May 19, 2024 by tinkle-bell
#732 Add debug mode of LLamaSharp
Labels: enhancement (New feature or request), good first issue (Good for newcomers)
Opened May 12, 2024 by AsakusaRinne
#731 Add unit test about long context
Labels: enhancement (New feature or request), good first issue (Good for newcomers)
Opened May 12, 2024 by AsakusaRinne
#727 [BUG]: WSL2 has problem running LLamaSharp with cuda11
Labels: bug (Something isn't working)
Opened May 10, 2024 by AsakusaRinne