Why does fine-tuning perform worse on the same data? #23
Comments

eubinecto changed the title from "fine-tune danvinci with the revised data" to "Why does fine-tuning perform worse on the same data?" on May 2, 2022.
Hypothesis
Is it because there are only 40 examples?

Experiment
Include the training-set portion of the PIE dataset in the batch. Well, let's see how it goes.

Results
Wait, what? Training that costs 20 dollars? That is a lot of money for the uncertainties I have. All I want is a simple experiment. Let's run the experiment with smaller models, then.

We should wait around 3 hours until this finishes. Take a break for now.
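To run the cheaper experiment, the training data first needs to be in the JSONL prompt/completion format that the 2022-era OpenAI fine-tuning endpoint expected. A minimal sketch, assuming hypothetical idiom→paraphrase pairs as stand-ins for the revised data plus the PIE training-set portion:

```python
import json

# Hypothetical stand-ins for the real training pairs (revised data + PIE).
# The leading space in each completion follows the format the fine-tuning
# docs recommended at the time.
pairs = [
    ("He kicked the bucket.", " He died."),
    ("She spilled the beans.", " She revealed the secret."),
]

# Write one {"prompt": ..., "completion": ...} object per line (JSONL).
with open("train.jsonl", "w") as f:
    for prompt, completion in pairs:
        f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")
```

The resulting file could then be submitted with the CLI of that era, e.g. `openai api fine_tunes.create -t train.jsonl -m ada`, swapping in a smaller, cheaper base model than davinci.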
Why?
But we know that with few-shot prompt design, performance is generally great with only a few examples (~10). So why does fine-tuning on the same data do worse?