union versus intersection testing and the testing strategy in general #1214

Open
caryr opened this issue Dec 17, 2020 · 2 comments

caryr (Contributor) commented Dec 17, 2020

Since the discussion in #1213 I have been thinking about the overall testing strategy sv-tests uses. Here are some thoughts on improvements that I think would help the framework in general:

I think sv-tests should include tests that cover anything allowed by the standard. For tools that choose to limit support, or that make a different but still acceptable implementation choice, there should be a tool-specific exclude file/list that can mark certain tests as expected failures with a given reason. This inclusive strategy lets the raw parsing tools be exercised with the broadest set of tests, while still allowing tools with elaboration or more powerful linting capabilities to correctly fail for reasons that are still within the guidelines of the standard.

Using verilator as an example: it has known limitations because it does not support delays and is a two-state simulator. There should be a way for it to mark tests that require this functionality as expected failures, which gives someone viewing the dashboard a better understanding that this is an expected limitation and not just something that has not been implemented yet. I assume each expected-fail entry should have a field that describes the reason it is expected to fail, and that reason could be displayed in the dashboard along with a different color.
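To make this concrete, here is a minimal Python sketch of how a per-tool expected-failure list with a reason field might be consulted. Everything in it (`EXPECTED_FAILURES`, `classify_result()`, the test file names) is hypothetical and not existing sv-tests code:

```python
# Minimal sketch of a per-tool expected-failure list with a reason field.
# All names and test paths below are invented for illustration.

# tool name -> {test name: reason the test is expected to fail}
EXPECTED_FAILURES = {
    "verilator": {
        "chapter-9/delay_control.sv": "delays are not supported",
        "chapter-6/four_state_logic.sv": "two-state simulator (no X/Z)",
    },
}

def classify_result(tool, test, passed):
    """Map a raw pass/fail onto a dashboard category plus a reason.

    Expected failures get their own category (and reason string) so the
    dashboard can show them in a different color than real regressions.
    """
    reason = EXPECTED_FAILURES.get(tool, {}).get(test)
    if passed:
        return ("unexpected-pass", reason) if reason else ("pass", None)
    return ("expected-fail", reason) if reason else ("fail", None)
```

The dashboard could then render the expected-fail category in its own color and show the reason as an extra column or tooltip.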

There should also likely be a common elaboration-fail list: raw parsers would be expected to pass a test on that list, but any tool that is marked as supporting elaboration would be expected to fail it at elaboration. I'm not sure how to distinguish between a tool that supports elaboration failing to parse the code and one that correctly reporting an elaboration failure.
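Roughly, the expected result would then depend on a per-tool capability flag. A sketch under the same caveat (the names `ELABORATION_FAILURES`, `ToolInfo`, and `expected_outcome()` are made up):

```python
# Rough sketch of a shared elaboration-failure list combined with a
# per-tool "supports elaboration" capability flag.  All names and the
# test entry are invented for illustration.

from dataclasses import dataclass

# Tests that are syntactically legal but must be rejected at elaboration.
ELABORATION_FAILURES = {
    "chapter-23/hierarchical_loop.sv",
}

@dataclass
class ToolInfo:
    name: str
    supports_elaboration: bool

def expected_outcome(tool: ToolInfo, test: str) -> str:
    """Expected result for this tool/test combination.

    Parse-only tools should accept the code; tools that elaborate should
    reject it.  Telling "failed to parse" apart from "correctly failed
    elaboration" would still require inspecting the tool's log for an
    elaboration-stage diagnostic, which this sketch does not attempt.
    """
    if test in ELABORATION_FAILURES:
        return "fail" if tool.supports_elaboration else "pass"
    return "pass"
```

A parse-only tool would get "pass" for such a test, while a tool created with `supports_elaboration=True` would get "fail".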

I assume eventually it would be nice to actually run the tests and verify functionality, so other categories will likely be needed over time.

@wsnyder do you have any comments?

wsnyder (Member) commented Dec 17, 2020

#446 ;)

caryr (Contributor, Author) commented Dec 17, 2020

Okay, it's good to see we are basically on the same page!

Though I think your concept of expected fail and mine here are slightly different. It looks like what you are describing in #446 is more like what we have in ivtest, where an expected fail is a different category than an unexpected fail. What I'm talking about here is changing an expected pass to an expected fail based on characteristics of the various tools. For example, code that should fail elaboration would be marked as a pass for parse-only tools, but would switch to a fail for tools that support elaboration and can detect the error.
