Issues: abetlen/llama-cpp-python

Roadmap for v0.2
#487 opened Jul 18, 2023 by abetlen · open
Add batched inference
#771 opened Sep 30, 2023 by abetlen · open · 29 comments
Improve installation process
#1178 opened Feb 12, 2024 by abetlen · open · 7 comments

Issues list

System role not supported in Gemma 2
#1580 opened Jul 7, 2024 by rzafiamy
2 tasks
Please provide CU118 wheels in the future
#1570 opened Jul 3, 2024 by i486
Switch to disable adding BOS token
#1561 opened Jun 28, 2024 by etemiz
Hope to add support for GLM4-9b.
#1554 opened Jun 26, 2024 by Axiaozhu1
How to log raw token generation? [enhancement]
#1546 opened Jun 21, 2024 by sisi399
Not Able To Utilize AMD GPUs [bug]
#1545 opened Jun 21, 2024 by Essak786
Recurrent errors in server mode [bug]
#1541 opened Jun 18, 2024 by mhoangvslev
Cannot build wheel
#1538 opened Jun 17, 2024 by LankyPoet
Llama3 instruct prompt template missing BOS token
#1537 opened Jun 17, 2024 by pmbaumgartner
4 tasks done
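Issues #1561 and #1537 both concern BOS-token handling: depending on whether the chat template emits its own BOS marker and whether the tokenizer also prepends one, a rendered prompt can end up with zero, one, or two BOS markers. A minimal sketch of the failure mode, using stand-in functions (not the llama-cpp-python API) and assuming Llama 3's literal `<|begin_of_text|>` marker:

```python
# Illustration only: hypothetical stand-ins, not llama-cpp-python calls.
# Assumes the Llama 3 BOS marker is the literal string "<|begin_of_text|>".
BOS = "<|begin_of_text|>"

def render_chat(messages):
    """Mimic a chat template that already emits BOS itself."""
    out = [BOS]
    for m in messages:
        out.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>"
            f"\n\n{m['content']}<|eot_id|>"
        )
    return "".join(out)

def tokenize_with_bos(prompt, add_bos=True):
    """Stand-in for a tokenizer that prepends BOS when add_bos=True."""
    return (BOS if add_bos else "") + prompt

prompt = render_chat([{"role": "user", "content": "hi"}])
doubled = tokenize_with_bos(prompt, add_bos=True)   # BOS appears twice
correct = tokenize_with_bos(prompt, add_bos=False)  # template's BOS only

assert doubled.count(BOS) == 2
assert correct.count(BOS) == 1
```

A switch to disable the tokenizer's automatic BOS (as requested in #1561) avoids the doubled case; conversely, a template that omits BOS (as reported in #1537) relies on the tokenizer supplying exactly one.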