feat: check all SED-ML XPath targets and warn appropriately #102
Conversation
This was part of the earlier change but somehow got dropped.
I removed this because such attribute changes can also break other XPaths. While this might not be the typical use pattern, it is possible. More nuanced analysis is needed to evaluate this.
Basically, every change should be considered a "structural" change. This is one of the insidious features of XPaths. For many reasons, I'd advocate that L2 SED-ML use something else.
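To illustrate why attribute changes are effectively structural, here is a minimal sketch (not Biosimulators_utils code, and using Python's built-in ElementTree rather than the full XPath engine SED-ML requires) of an attribute change silently invalidating an XPath that selects by that attribute:

```python
import xml.etree.ElementTree as ET

# A toy model fragment; real SED-ML targets use namespaced SBML paths.
model = ET.fromstring(
    "<model><listOfSpecies>"
    "<species id='S1' initialConcentration='1.0'/>"
    "</listOfSpecies></model>"
)

xpath = ".//species[@id='S1']"
assert model.findall(xpath)  # the target resolves before the change

# A ChangeAttribute-style edit to the id attribute...
model.find(".//species").set('id', 'S1_renamed')

# ...and the same XPath now matches nothing.
assert not model.findall(xpath)
```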
OK, in that case I think we should validate the XPaths but put in warnings instead. Not validating the XPaths at all seems like a waste. (And again, I'm getting pages and pages of 'didn't check this xpath; didn't check the other xpath', which is not very helpful.)
The XPaths are still validated, and a warning is generated.
Here are the assumptions/insights that make this update work:
* A ComputeChange will never change the structure of a model. Thus, no task change will invalidate any XPath.
* Similarly, has_structural_changes now additionally checks whether any model attribute change is present.
* We need but a single warning that all valid XPaths could be made invalid by a rogue ModelChange.
* Conversely, if an XPath is *in*valid, it too might be made valid by the application of a ModelChange.

Overall: all XPaths are validated, but if there are structural model changes present, all 'is there a target' validations are warnings, and we also add a single warning that 'valid' XPaths may be wrong anyway.
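The decision rule described above can be sketched roughly as follows. This is a hedged illustration only: `classify_xpath_issues`, the message strings, and the use of ElementTree's limited XPath subset are all placeholders, not the actual Biosimulators_utils API.

```python
import xml.etree.ElementTree as ET

def classify_xpath_issues(xpaths, model_etree, has_structural_changes):
    """Return (errors, warnings) for a set of target XPaths (illustrative)."""
    errors, warnings = [], []
    for xpath in xpaths:
        if model_etree.findall(xpath):
            continue  # target found: nothing to report for this XPath
        msg = 'XPath `{}` does not match any element of the model'.format(xpath)
        # With structural model changes present, a missing target is only a
        # warning: a ModelChange could create the target at run time.
        (warnings if has_structural_changes else errors).append(msg)
    if has_structural_changes and xpaths:
        # Single blanket warning: even currently-valid XPaths could be
        # invalidated by a model change before the task executes.
        warnings.append('Model changes may invalidate XPaths that '
                        'currently resolve to targets')
    return errors, warnings
```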
Could you add unit tests which illustrate the cases this impacts?
@luciansmith it looks like you're still working on this. I converted this to draft so I know not to merge this yet. When it's ready, click the "Ready for review" button below.
OK! Major update that fundamentally changes the goal of this pull request. See e72edd4 for details, but in general: given that attribute changes can make XPaths valid or invalid, we just acknowledge this everywhere, and validate targets anyway. |
Sure, adding tests is a good idea.
Because this can lead to the false identification of errors, such issues should be reported as warnings. An additional option (whose default value is OFF) could be added to report them as errors.
Correct! XPath validation errors are only treated as errors if there are no model changes. See https://github.com/biosimulators/Biosimulators_utils/blob/check-datagens/biosimulators_utils/sedml/validation.py#L1659 |
That works. This should help close the remaining gap in the validation of the SED-ML files from BioModels.
I'm having a little trouble figuring out how best to fit the design of your unit test system. I have some SED-ML files that should each produce a different suite of errors and warnings, but I'm not sure how best to wrap them into your system. Should the SED-ML of each be created from scratch? Should I load just one and make changes to create the others internally? Should I just save all four and load them?

The results should be:
* No change, good xpath: no errors, no warnings

I also feel like xpaths for elements other than data generators should be tested in the presence/absence of a model change, but figured the datagens could be first.
The SED-ML can either be generated in the code, or it could be saved to the repository in
The new test cases can be added to
OK! I'll do that, then.
Several tests added that check XPath validation in various contexts in the presence and absence of a 'structural' model change. To make things work, the model_etree had to be passed around a lot, and created earlier than it had been. But everything else mostly worked out of the box.
This should be good to go!
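As a compact summary of the combinations the new tests exercise, the expected outcome per XPath check can be written as a toy truth table. The helper below is hypothetical, not the actual test code:

```python
def expected_outcome(xpath_resolves, has_structural_changes):
    """Toy truth table for one XPath target check (hypothetical helper)."""
    if has_structural_changes:
        # Any structural change may create or destroy targets,
        # so the check never hard-errors; it warns at most.
        return 'warning'
    return 'ok' if xpath_resolves else 'error'
```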
Kudos, SonarCloud Quality Gate passed! 0 Bugs. No Coverage information.
Codecov Report
@@            Coverage Diff             @@
##              dev     #102      +/-   ##
==========================================
- Coverage   96.68%   96.66%   -0.02%
==========================================
  Files          87       87
  Lines        9294     9309      +15
==========================================
+ Hits         8986     8999      +13
- Misses        308      310       +2
Will be released as
EDIT: fundamental changes due to comments; see the discussion above.