Issues: Lightning-AI/litgpt
#1447 · Training lasts just 150 seconds for TinyLlama OpenWebtext dataset · opened May 26, 2024 by srivassid
#1430 · performing continuous pretraining and then finetuning causes error [bug] · opened May 22, 2024 by richardzhuang0412
#1428 · Is there any best practice for using litdata to load custom data for pretraining? · opened May 21, 2024 by wen020
#1423 · Continually pretrained Llama2-7B-hf model inference is not working on 16GB GPU machine [question] · opened May 16, 2024 by karkeranikitha
#1413 · Continue pre-training got RuntimeError: Failed processing /tmp/data [3rd party, bug, pre-training] · opened May 13, 2024 by BestJiayi
#1412 · 'Phi-3-mini-4k-instruct' is not a supported config name [checkpoints] · opened May 12, 2024 by georgehu0815
#1402 · Pretraining example from readme fails in Colab [3rd party, bug, pre-training] · opened May 8, 2024 by AndisDraguns
#1393 · LR scheduler can result in a division by 0 [bug] · opened May 6, 2024 by carmocca
#1392 · Address frozen parameter warning with FSDP on nightly torch [fine-tuning] · opened May 6, 2024 by carmocca
#1384 · fabric.print only works on sys.stderr, does not print inference result [question] · opened May 4, 2024 by lastmjs