
posts/prompt_weight_experiments_for_llm_instruction_fine_tuning/2024-01-24-prompt_weight_experiments_for_llm_instruction_fine_tuning #24

utterances-bot opened this issue Jan 29, 2024 · 1 comment

Bayesian beagle - Prompt Weight Experiments for LLM Instruction Fine-Tuning

The study examines the impact of prompt-token classification loss weighting on LLaMA models fine-tuned on instruction tasks. Results vary depending on dataset length.

https://bayesian-beagle.netlify.app/posts/prompt_weight_experiments_for_llm_instruction_fine_tuning/2024-01-24-prompt_weight_experiments_for_llm_instruction_fine_tuning

wesslen commented Jan 29, 2024

Motivation

To validate OpenAI’s claim about prompt loss weighting (PLW) for fine-tuning LLMs. How does this parameter affect training? How important is it to model performance on instruction tasks?
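
For context, PLW enters the training objective as a per-token weight on the cross-entropy loss: prompt tokens contribute to the loss scaled by the weight, while completion tokens keep full weight. A minimal PyTorch sketch of that idea (the function name, tensor shapes, default weight, and omitted padding handling are illustrative assumptions, not the post's actual code):

```python
import torch
import torch.nn.functional as F

def plw_loss(logits, labels, prompt_mask, plw=0.1):
    """Causal-LM cross-entropy with prompt tokens down-weighted by `plw`.

    logits:      (batch, seq_len, vocab) model outputs
    labels:      (batch, seq_len) target token ids
    prompt_mask: (batch, seq_len) bool, True where the token is part of the prompt
    """
    # Shift so each position predicts the next token (standard causal-LM setup).
    logits = logits[:, :-1, :]
    labels = labels[:, 1:]
    mask = prompt_mask[:, 1:]

    # Unreduced per-token cross-entropy.
    per_token = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        labels.reshape(-1),
        reduction="none",
    ).view_as(labels)

    # Weight of `plw` for prompt tokens, 1.0 for completion tokens,
    # then take the weighted mean. (Padding handling omitted for brevity.)
    weights = torch.where(mask, torch.full_like(per_token, plw),
                          torch.ones_like(per_token))
    return (per_token * weights).sum() / weights.sum()
```

Under this formulation, `plw=0.0` trains on completions only and `plw=1.0` recovers a uniform loss over the full sequence, which is what makes the parameter's effect on instruction tuning worth probing.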
