Test case average of solutions in real dataset #15

Closed

sindhura97 opened this issue Aug 2, 2022 · 8 comments
@sindhura97

Hi, have you checked that the ground-truth solutions in the original dataset pass the test cases?

@xksteven
Collaborator

xksteven commented Aug 2, 2022

When we collected the solutions, we filtered them to check that they pass the test cases. However, they may be horribly inefficient, such as requiring several gigabytes of RAM or taking a very long time (sometimes minutes) to execute. This varied based on the source of the ground-truth solutions. We didn't filter the solutions further to only those that were optimal or near-optimal.

@xksteven xksteven closed this as completed Aug 2, 2022
@sindhura97
Author

sindhura97 commented Aug 2, 2022

I see. I was running the evaluation by generating code that simply copies the first ground-truth solution for each problem, like this:

from datasets import load_dataset
import json

ds = load_dataset("codeparrot/apps", split="train")
examples = {}
for eg in ds:
    # the solutions column is a string like '["sol1", "sol2", ...]';
    # strip the enclosing brackets/quotes and split on the separator
    for sol in eg['solutions'][2:-2].split('", "'):
        sol = sol.replace('\\n', '\n')
        examples[eg['problem_id']] = [sol]
        print('=' * 10)
        print(sol)
        break  # keep only the first solution per problem
json.dump(examples, open('results/all_codes_orig_train.json', 'w'))

When I ran the evaluation on these codes, I only got about 60%. Does this seem right?

@xksteven
Collaborator

xksteven commented Aug 2, 2022

You may need to select a different solution to test out. I can rerun the evaluation script to see how many solutions are optimal, but I might not be able to get to it for a while, as I'll need compute running in the background with sufficient RAM to re-evaluate all of the solutions. So I can't really give an ETA on that.

@sindhura97
Author

Okay, btw I see that the low 60% is due to some problems with the '\' character appearing unnecessarily in solutions at some places.

@sindhura97
Author

Update: Doing sol = sol.replace('\\n', '\n').replace('\\"', '"').replace('\\r', '').replace('\\\\', '\\').replace('\\t', '\t') has pushed it to >95% when I tested on the first few training samples.
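
For readability, here is the same cleanup as a standalone snippet (a sketch of the workaround above; the helper name is just for illustration). Note that no ordering of chained replace calls can correctly recover a solution containing a literal backslash, which may explain why this reaches >95% rather than 100%:

def unescape(sol):
    # undo the JSON-style escaping left in the raw solutions string;
    # a literal backslash followed by 'n' in a solution will still be
    # mis-decoded here, whereas proper JSON decoding handles it
    return (sol.replace('\\n', '\n')
               .replace('\\"', '"')
               .replace('\\r', '')
               .replace('\\\\', '\\')
               .replace('\\t', '\t'))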

@xksteven
Collaborator

xksteven commented Aug 5, 2022

Great, thanks for the information. I thought we did that in our preprocessing, but maybe something happened where it got removed.

Feel free to make a PR with the changes if you have time :)

@loubnabnl
Contributor

loubnabnl commented Aug 8, 2022

Hi, this isn't an error in the dataset. To load it on the Hugging Face Hub and respect some format constraints, we had to save the solutions and input_output columns in JSON format, which leads to this behaviour. The README of the dataset shows how to load the solutions and input_output columns correctly: https://huggingface.co/datasets/codeparrot/apps#how-to-use-it
[screenshot: README snippet showing how to parse the JSON-encoded columns]
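
In other words, a minimal sketch along the lines of that README section (column names as in the dataset; json.loads undoes the escaping that the replace chain above only approximates):

from datasets import load_dataset
import json

ds = load_dataset("codeparrot/apps", split="train")
sample = ds[0]
# both columns are JSON-encoded strings; json.loads decodes them
# correctly, including literal backslashes in the source code
solutions = json.loads(sample["solutions"])
# input_output can be an empty string for some problems
input_output = json.loads(sample["input_output"]) if sample["input_output"] else None
print(solutions[0])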

@xksteven
Collaborator

xksteven commented Aug 8, 2022

@loubnabnl Thanks for the input! Leaving the issue closed.
