Issues: pytorch/torchtune
Can someone give me an example of how to evaluate a Llama 3 model fine-tuned with LoRA? (#1067, opened Jun 6, 2024 by jayson1200)
[Bug] Phi3 tokenizer adds an extra start token when ignore_system_prompts=True (label: bug) (#1063, opened Jun 6, 2024 by hmosousa)
[Feature Request] Add lr_scheduler for full_finetune (single_device/distributed) (#1060, opened Jun 6, 2024 by andyl98)
Using include_path with an eval file for custom evaluation configs in lm-eval is not supported (#1054, opened Jun 5, 2024 by yasser-sulaiman)
Recommendations for obtaining validation dataset loss after each epoch (#1042, opened Jun 1, 2024 by dcsuka)
GPTQ quantization not working with fine-tuned LLaMA3 models (#1033, opened May 30, 2024 by sanchitintel)
Benchmark performance against other implementations such as Llama-factory and Unsloth? (#1023, opened May 27, 2024 by liyucheng09)
"Bus error (core dumped)" when saving recipe state after restarting training (#1018, opened May 24, 2024 by calvinpelletier)