What about starting a crowdfunding campaign to collect money to run the examples against GPT-4? #37

Open
viraniaman94 opened this issue Mar 31, 2023 · 5 comments

Comments

@viraniaman94

viraniaman94 commented Mar 31, 2023

1. Collect money (~500 USD should be enough, I believe)
2. Open an account with OpenAI and connect it to the bank account holding that money
3. Get an API key
4. Run the examples against GPT-4 to correct them (a rough sketch of this step is below)

I guess one challenge is maintaining transparency at every step, and figuring out the legal implications, but it's just 500 USD, so it shouldn't matter that much anyway!
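A minimal sketch of step 4, assuming the openai Python client (>= 1.0) and that the examples live in a local alpaca_data.json; the file name, prompt format, and model name here are assumptions, not project decisions:

```python
# Hypothetical sketch: regenerate one dataset entry's output with GPT-4.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("alpaca_data.json") as f:
    data = json.load(f)

entry = data[0]
prompt = entry["instruction"]
if entry.get("input"):
    prompt += "\n\n" + entry["input"]

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)
entry["output"] = resp.choices[0].message.content
```

Looping this over all ~52k entries is where the cost estimate above comes from, so batching and checkpointing results to disk would be worth adding.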

@HideLord
Contributor

Not as good an idea as it might seem. The instructions themselves are very repetitive. When it comes to writing code, there are like 10 different questions reworded 700 times.
Most instructions are also very simple: add a and b, write a function to check for prime numbers, etc. GPT-3.5-turbo answers them very well and is incredibly cheap, roughly 1,000 instructions per dollar.
The best thing that could be done to improve the dataset is to generate better and more diverse instructions.

Otherwise, it's not necessary to collect money at all. Just create an interface, and then each user can run it locally with their own API key on a specific section of the dataset (e.g. instructions 3650 to 10500 for user @viraniaman94); a rough sketch of that split is below.
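A minimal sketch of the per-user split, assuming the dataset is a single JSON list; the file name and range boundaries are placeholders, not an agreed convention:

```python
# Hypothetical sketch: carve out one contributor's slice of the dataset so
# they can regenerate only their assigned range with their own API key.
import json

START, END = 3650, 10500  # the range assigned to this contributor

with open("alpaca_data.json") as f:
    data = json.load(f)

my_slice = data[START:END]

with open(f"alpaca_slice_{START}_{END}.json", "w") as f:
    json.dump(my_slice, f, indent=2, ensure_ascii=False)
```

The regenerated slices could then be merged back by index, which avoids anyone having to handle money or shared credentials at all.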

@stefangrotz

stefangrotz commented Mar 31, 2023

It might be possible to use dialogues from the Open Assistant project (https://github.com/LAION-AI/Open-Assistant) as a seed to create more questions. But paying $500 to clickworkers to create more questions for Open Assistant might be a better investment.

@gururise
Owner

gururise commented Mar 31, 2023

We have likely corrected many of the major issues with the original alpaca dataset. What remains is a large number of mostly minor issues (although a lot of math issues are still among them). So I tend to agree with the sentiment that feeding the dataset into GPT-4 will not result in a huge performance boost; however, what might be interesting is having ChatGPT provide a response for all the instructions we have not curated.

This would likely result in a more verbose alpaca model that is closer to the output of ChatGPT.
Here's an example:

"instruction": "Give three tips for staying healthy."

original alpaca output:

1. Eat a balanced diet and make sure to include plenty of fruits and vegetables.
2. Exercise regularly to keep your body active and strong.
3. Get enough sleep and maintain a consistent sleep schedule.

ChatGPT output:

Sure! Here are three tips for staying healthy:

1. Eat a balanced diet that includes plenty of fruits, vegetables, whole grains, and lean proteins. Avoid processed foods and limit your intake of sugar, salt, and saturated fats.
2. Exercise regularly, aiming for at least 30 minutes of moderate activity most days of the week. This can include activities like walking, jogging, cycling, swimming, or strength training.
3. Get enough sleep each night, aiming for 7-8 hours of uninterrupted sleep. This will help you feel more rested and energized throughout the day, and can also help boost your immune system and reduce your risk of chronic health problems.

Not sure if it would really do anything in terms of performance though. Ideally, we would diversify the dataset.

@Tostino

Tostino commented Apr 1, 2023

Are there any plans on how to fix the dataset for the math issues @gururise? Is anyone actively working on that? I'd rather try my hand at something that isn't going to be useless by the time I'm done because someone already finished it.

@gururise
Owner

gururise commented Apr 1, 2023

> Are there any plans on how to fix the dataset for the math issues @gururise? Is anyone actively working on that? I'd rather try my hand at something that isn't going to be useless by the time I'm done because someone already finished it.

@HideLord fixed the first few batches of the math issues. I know there are quite a few math issues remaining. As far as I know, there is nobody working on that right now.
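If someone does pick this up, a rough first pass could be to flag the entries that look like arithmetic or math tasks so they can be reviewed in bulk. A minimal sketch, where the keyword list and file name are assumptions rather than anything the project has agreed on:

```python
# Hypothetical sketch: flag instructions that look like math tasks so the
# remaining math issues can be reviewed in one pass.
import json
import re

MATH_HINTS = re.compile(
    r"\b(calculate|compute|sum|add|subtract|multiply|divide|percent|"
    r"average|square root|equation)\b",
    re.IGNORECASE,
)

with open("alpaca_data.json") as f:
    data = json.load(f)

math_entries = [
    (i, entry) for i, entry in enumerate(data)
    if MATH_HINTS.search(entry["instruction"])
    or any(ch.isdigit() for ch in entry.get("input", ""))
]
print(f"{len(math_entries)} entries look math-related")
```

The flagged indices could then be divided up the same way as the per-user split above, so several people can verify the remaining math answers in parallel.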
