Better testing approach for the generated parser? #53

Open
ciaranmcnulty opened this issue Nov 11, 2022 · 1 comment

Comments

@ciaranmcnulty
Contributor

Currently we test the parser primarily via the testdata features. These are pretty good as documentation of the correct behaviour in specific situations.

However, looking at coverage, they exercise less than 30% of the Parser's LOC in the PHP version, and probably a similar proportion in the other implementations.

Is there another approach we can use to validate that the Parser is 'correct' in a wide range of situations? I'm thinking of:

  • Fuzz testing?
  • Property-based testing? (rough sketch below)
  • Some sort of approvals-based testing?
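As a rough illustration of the property-based / fuzzing idea, here's a minimal sketch using Hypothesis against the Python implementation; the `Parser` and `ParserError` names are assumptions based on gherkin-python and may differ elsewhere:

```python
# Rough sketch only: assumes the gherkin Python package exposes Parser and
# ParserError under these names; other implementations would need their own
# equivalents.
from hypothesis import given, strategies as st

from gherkin.errors import ParserError
from gherkin.parser import Parser


# Property: for arbitrary input, the parser either produces a document or
# raises its own ParserError -- it never dies with an unrelated exception.
@given(st.text())
def test_parser_rejects_or_parses_arbitrary_text(source):
    try:
        Parser().parse(source)
    except ParserError:
        pass  # rejecting malformed input is fine; crashing elsewhere is not
```

A property like "never raises anything other than ParserError" tends to exercise far more of the parser's error-handling branches than hand-written examples, without having to assert exact ASTs.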
@mpkorstanje
Contributor

Before looking into any of those, it might be good to understand why coverage is so low with the existing test set.

If we are missing whole categories of examples, we may be able to add a few without losing any documentation value (i.e. we won't add trivial examples).
