Since the discussion in #1213, I have been thinking about the overall testing strategy sv-tests uses. Here are some thoughts on improvements that I think would help the framework in general:
I think sv-tests should include tests that cover everything allowed by the standard. For tools that choose to limit support, or that make a different but still acceptable implementation choice, there should be a tool-specific exclude file/list that can mark certain tests as expected failures with a given reason. This inclusive strategy exercises the raw parsing tools with the broadest possible set of tests, while still allowing tools with elaboration or more powerful linting capabilities to correctly fail for reasons that are within the guidelines of the standard.
Using Verilator as an example: it does not support delays and is a two-state simulator. There should be a way for it to mark tests that require this functionality as expected failures, which gives someone viewing the dashboard a better understanding that this is an expected limitation and not just something that has not been implemented yet. I assume the expected-fail entry should have a field describing the reason it is expected to fail, and that reason could be displayed in the dashboard along with a different color.
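To make this concrete, here is a rough sketch of what a per-tool exclude list and its handling could look like. The file path, JSON layout, field names, and helper functions below are all hypothetical, not anything sv-tests defines today:

```python
# Hypothetical per-tool exclude list, e.g. conf/exclude/verilator.json
# (path, format, and example entries are assumptions, not an existing file):
#
#   {
#     "tests/delays/simple_delay.sv":  "no #delay support",
#     "tests/four_state/xz_prop.sv":   "two-state simulator, x/z not modeled"
#   }

import json
from pathlib import Path


def load_excludes(tool_name, exclude_dir="conf/exclude"):
    """Return {test_path: reason} for a tool, or {} if it has no exclude file."""
    path = Path(exclude_dir) / f"{tool_name}.json"
    if not path.exists():
        return {}
    return json.loads(path.read_text())


def classify(tool_name, test_path, raw_result):
    """Map a raw "pass"/"fail" result to a dashboard status plus an optional reason."""
    excludes = load_excludes(tool_name)
    if test_path in excludes:
        # Expected failures get their own status (and color) with the reason shown,
        # instead of being lumped in with features that simply are not implemented yet.
        status = "expected-fail" if raw_result == "fail" else "unexpected-pass"
        return status, excludes[test_path]
    return raw_result, None
```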
There should also likely be a common elaboration-fail list, which would allow raw parsers to pass a specific test while any tool marked as supporting elaboration would know that the test should fail elaboration. I'm not sure how to distinguish between the case where a tool that supports elaboration fails to parse the code and the case where it correctly reports an elaboration failure.
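One possible shape for that decision logic, assuming a per-tool capability flag; the list contents, flag names, and tool entries below are made up for illustration:

```python
# Hypothetical common list of tests that are valid SystemVerilog to parse but
# are required to fail elaboration (entries invented for illustration).
ELAB_SHOULD_FAIL = {
    "tests/elab/unresolved_hierarchy.sv",
    "tests/elab/bad_parameter_override.sv",
}

# Hypothetical per-tool capability flags; in practice these could live in the
# per-tool runner wrappers or a config file. Values here are illustrative.
TOOL_CAPABILITIES = {
    "parser-only-tool": {"elaboration": False},
    "elaborating-tool": {"elaboration": True},
}


def expected_result(tool_name, test_path):
    """A parse-only tool is expected to pass such a test; a tool that
    elaborates is expected to reject it."""
    if test_path in ELAB_SHOULD_FAIL:
        caps = TOOL_CAPABILITIES.get(tool_name, {})
        return "fail" if caps.get("elaboration") else "pass"
    return "pass"
```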
Eventually it would also be nice to actually run the tests and verify functionality, so other categories will likely be needed over time.
Okay, it's good to see we are basically on the same page!
Though I think your concept of expected fail and mine here are slightly different. It looks like what you are talking about in #446 is more like what we have in ivtest, where an expected fail is a different category from an unexpected fail. What I'm talking about here is changing an expected pass to an expected fail based on the characteristics of the various tools. For example, code that should fail elaboration would be marked as a pass for parsing-only tools, but would switch to a fail for tools that support elaboration and can detect the error.
@wsnyder do you have any comments?