Issues: lxe/simple-llm-finetuner


#8: Is CUDA 12.0 supported? [question]
    opened Mar 22, 2023 by vadi2, updated Mar 22, 2023

#2: Collecting info on memory requirements [question]
    opened Mar 22, 2023 by jmiskovic, updated Mar 22, 2023

#1: Inference output text keeps running on... [bug, question]
    opened Mar 22, 2023 by lxe, updated Mar 23, 2023

#15: Finetuning in unsupported language [question]
    opened Mar 23, 2023 by jumasheff, updated Mar 25, 2023

#7: Can Nvidia 3090 with 24G video memory support finetune? [question]
    opened Mar 22, 2023 by pczzy, updated Mar 26, 2023

#11: Examples to get started with [enhancement]
    opened Mar 22, 2023 by vadi2, updated Mar 27, 2023

#6: Traceback during inference. [bug]
    opened Mar 22, 2023 by Hello1024, updated Mar 28, 2023

#26: Not a problem - but like people should know [documentation]
    opened Mar 26, 2023 by Atlas3DSS, updated Mar 28, 2023

#28: Attempting to use 13B in the simple tuner [bug]
    opened Mar 27, 2023 by Atlas3DSS, updated Mar 30, 2023

#30: how to finetune with 'system information'
    opened Mar 30, 2023 by mhyeonsoo, updated Mar 31, 2023

#24: Training using long stories instead of question/response [question]
    opened Mar 26, 2023 by leszekhanusz, updated Apr 3, 2023

#20: question: could the model trained be used for alpaca.cpp? [question]
    opened Mar 24, 2023 by goog, updated Apr 7, 2023

#32: Suggestion to improve UX
    opened Apr 2, 2023 by ch3rn0v, updated Apr 9, 2023

#46: Getting OOM
    opened Apr 12, 2023 by alior101, updated Apr 24, 2023

#42: Issue in train in colab [colab]
    opened Apr 9, 2023 by fermions75, updated Apr 24, 2023

#51: Multi GPU running
    opened May 10, 2023 by Shashika007, updated May 10, 2023

#22: LLaMATokenizer vs LlamaTokenizer class names [question]
    opened Mar 25, 2023 by vadi2, updated May 20, 2023

#55: [Request] QLoRA support
    opened Jun 29, 2023 by CoolOppo, updated Jun 29, 2023

#56: [Request] Mac ARM support
    opened Jul 2, 2023 by voidcenter, updated Jul 2, 2023

#57: M1/M2 Metal support?
    opened Jul 22, 2023 by itsPreto, updated Jul 22, 2023

#52: RuntimeError: expected scalar type Half but found Float
    opened May 24, 2023 by jasperan, updated Aug 8, 2023