Here I would like to focus on the Python-only tests (i.e. no LLVM compilation involved).
These are the test levels that seem appropriate:
First, minimal unit tests should be added to verify the small helper/utility functions used across the codebase.
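To illustrate what a level-1 test could look like, here is a minimal pytest-style sketch. The helper `camel_to_snake` is purely hypothetical, standing in for whatever small utilities the codebase actually contains:

```python
# Hypothetical helper used only for illustration; seal5's real
# utility functions will differ.
def camel_to_snake(name: str) -> str:
    """Convert a CamelCase identifier to snake_case."""
    out = []
    for i, ch in enumerate(name):
        if ch.isupper() and i > 0:
            out.append("_")
        out.append(ch.lower())
    return "".join(out)


def test_camel_to_snake():
    # Plain asserts are enough at this level; no fixtures needed.
    assert camel_to_snake("MyInstr") == "my_instr"
    assert camel_to_snake("add") == "add"
```

Tests at this level should stay dependency-free and fast, so they can run on every commit.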
Next, we can add integration tests that exercise specific submodules/components (such as transforms or backends) independently, using dummy inputs and comparing the resulting metamodel or generated patches against the expected output.
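A component-level test of this kind might look as follows. This is only a sketch under the assumption that a transform is a callable mapping one metamodel to another; the toy `drop_unused_operands` transform and the dict-based metamodel are made up for illustration and do not reflect seal5's real API:

```python
# Toy transform for illustration: remove operand definitions that no
# instruction references. seal5's actual transform interface will differ.
def drop_unused_operands(model: dict) -> dict:
    used = {op for instr in model["instructions"] for op in instr["operands"]}
    return {
        "instructions": model["instructions"],
        "operands": {k: v for k, v in model["operands"].items() if k in used},
    }


def test_drop_unused_operands():
    # Dummy input metamodel with one unused operand ("imm").
    dummy = {
        "instructions": [{"name": "add", "operands": ["rd", "rs1"]}],
        "operands": {"rd": "GPR", "rs1": "GPR", "imm": "Imm12"},
    }
    expected = {
        "instructions": [{"name": "add", "operands": ["rd", "rs1"]}],
        "operands": {"rd": "GPR", "rs1": "GPR"},
    }
    # Compare the transformed metamodel against the expected output.
    assert drop_unused_operands(dummy) == expected
```

The key pattern is the same regardless of the concrete component: feed in a small hand-written input and diff the result against a checked-in expectation.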
Finally, there are more complex test cases that involve more than one component or run multiple steps of the seal5 flow. I am not planning to add dedicated tests for these, as problems at this level should (hopefully) show up in the end-to-end examples based on real-world inputs.
I hope this answers your question. To get this started, I think we should begin with the very simple ones (i.e. level 1). We should probably also add coverage support to see which files/functions lack proper testing.
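For the coverage part, one common option is pytest with the pytest-cov plugin. A possible `pyproject.toml` fragment (assuming the package is importable as `seal5`; adjust the name if it differs) could be:

```toml
[tool.pytest.ini_options]
# --cov-report=term-missing lists the uncovered line ranges per file,
# which directly answers "which files/functions lack proper testing".
addopts = "--cov=seal5 --cov-report=term-missing"
```

With this in place, a plain `pytest` run prints a per-file coverage table, and the report can also be exported for CI badges or HTML browsing via `--cov-report=html`.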