This repository has been archived by the owner on Oct 3, 2021. It is now read-only.

Seeding bugs in AWS benchmarks #1183

Open
zvonimir opened this issue Oct 24, 2020 · 4 comments

Comments

@zvonimir
Contributor

If I am reading this right, the AWS category has no false benchmarks (i.e., tasks with an expected verdict of false). This doesn't seem like a good idea since it can really skew the results: for example, a verifier that is fast but would miss all bugs in this category could easily win it.
Unless someone objects or has a better idea for how to go about this, I'll try to seed a bunch of bugs in this category. I am totally open to other suggestions.

@gernst given that you submitted these benchmarks, would you maybe be interested in seeding some reasonable bugs?
Given that you know them, you would probably do a better job than me.

@gernst
Contributor

gernst commented Oct 27, 2020

It would be great to have failing variants.

I am not actually familiar with the code itself, I just ported the benchmarks to SV-COMP, so I doubt that I would do a better job.

If you agree, we can collect some ideas for reasonable classes of bugs here, and then decide how we could automate the process of inserting such bugs into the benchmarks. I don't know the literature on bug seeding at all, but here are some generic ideas (a concrete sketch follows the list):

  • change some nondeterministic size choices from uint to int to provoke negative sizes (which should be invalid)
  • omit some allocations, e.g. by replacing malloc by a faulty version
  • change the loops to have off-by-one exit conditions, or insert if(nondet_bool()) continue; at the loop head
  • patch the bounds of memcpy and friends
  • omit assignments to struct fields
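
To make this concrete, here is a minimal C sketch of what a seeded-bug variant could look like, combining the faulty-allocator and off-by-one ideas. The `__VERIFIER_nondet_*` names follow the usual SV-COMP convention; everything else is a generic illustration and not taken from the actual aws-c-common harnesses.

```c
#include <stdlib.h>
#include <string.h>

/* SV-COMP style nondeterminism hooks; names assumed for illustration. */
extern unsigned int __VERIFIER_nondet_uint(void);
extern _Bool __VERIFIER_nondet_bool(void);

/* Idea: a "faulty" allocator that sometimes returns one byte too little,
 * so a later memcpy of the full size becomes a buffer overflow. */
static void *faulty_malloc(size_t size) {
    if (__VERIFIER_nondet_bool() && size > 0)
        return malloc(size - 1);   /* seeded bug: short allocation */
    return malloc(size);
}

int main(void) {
    unsigned int n = __VERIFIER_nondet_uint();
    if (n == 0 || n > 1024) return 0;

    char *src = malloc(n);
    char *dst = faulty_malloc(n);
    if (!src || !dst) return 0;

    /* Idea: off-by-one exit condition (<= instead of <). */
    for (unsigned int i = 0; i <= n; i++)
        src[i] = 'a';              /* seeded bug: out-of-bounds write at i == n */

    memcpy(dst, src, n);           /* overflows dst when faulty_malloc shorted it */

    free(src);
    free(dst);
    return 0;
}
```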

@MartinSpiessl
Contributor

So no bugs have been found yet in the initial versions of the tasks (other than general undefined behavior)? Usually there are some versions in which we have already identified bugs, and those should have been preserved.

@gernst
Contributor

gernst commented Oct 27, 2020

@MartinSpiessl probably. I think these would be two separate goals: seeded artificial bugs determine the quality of the test suite, while actual bugs from the upstream bug tracker would be nice for regression analysis, maybe.
I currently don't have the time to scrape the history of the original project, but it could be a nice student project?

@MartinSpiessl
Contributor

> but it could be a nice student project?

That is a good idea, but it will take some time to finish. Maybe we can find a compromise where we at least add a few faulty programs now, so that the problem @zvonimir mentioned is at least somewhat addressed.

> history of the original project

Just to have the information here as well (it is also in the README.txt), the original project is this one:
https://github.com/awslabs/aws-c-common
