JP-2675: Bug in setting SKIPPED for cal_step keywords, when input is ModelContainer #62
Conversation
Codecov Report
@@           Coverage Diff           @@
##           master      #62   +/-  ##
=======================================
  Coverage    0.00%    0.00%
=======================================
  Files          25       25
  Lines        3060     3067    +7
=======================================
- Misses       3060     3067    +7
LGTM
Have you been able to do some semi-comprehensive testing on a range of use cases?
Full disclosure: I made this PR and left for dinner. I'll start a regression set against it, but besides testing on David's example data, it's not clear we have something to catch this. I'm also not convinced it warrants a test of its own, but this PR may suggest otherwise...
@ddavis-stsci is it possible for someone to test this PR against Roman before we merge it?
I ran regtests in Romancal against this PR from a clean environment:

pip install -e ".[test]" pytest-xdist
pip install git+https://github.com/tapastro/stpipe@jp-2675-cal-step-skip-bug
export CRDS_SERVER_URL=https://roman-crds-test.stsci.edu
pytest -n auto --bigdata romancal/regtest

and had the following failures:

`test_level2_image_processing_pipeline`
`test_level2_grism_processing_pipeline`
To my eye, those error traces point to an issue occurring in cmdline, while this change is a pretty narrow keyword update in step.py. Are those errors not present when tested on 0.4.0 or master?
You're right. I thought I had tested on master previously (and had green tests), but I had actually overridden that control test with the PyPI release by accident, so I was actually testing
It seems, then, that there aren't any new failures in the regtests introduced by this PR, from what I can see.
So is that my green light to go ahead and merge this, and then have someone create a new stpipe release? |
I ran one romancal test with this branch (corrected for the problem in #54) and it passed.
I would think so, but I'll defer to @ddavis-stsci and @nden. I'm currently testing against the |
I successfully ran a Roman test.
Addresses #61
Addresses JP-2675
Recent code added by yours truly tried to catch errors in meta keyword assignment of cal_step values to SKIPPED, but missed the possibility of the first step in a pipeline being skipped, exposing this code to a ModelContainer. Code is added to "broadcast" the behavior initially applied to a single input DataModel to every DataModel enclosed within the ModelContainer.
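For illustration, the broadcast idea can be sketched roughly as follows. This is a hypothetical toy, not the actual stpipe code: `SimpleModel`/`SimpleContainer` stand in for DataModel/ModelContainer, and `mark_skipped` is an invented helper name.

```python
# Toy sketch only: SimpleModel/SimpleContainer stand in for
# DataModel/ModelContainer, and mark_skipped is a hypothetical helper,
# not the real stpipe implementation.

class SimpleModel:
    """Stand-in for a DataModel carrying cal_step metadata."""
    def __init__(self):
        self.cal_step = {}


class SimpleContainer(list):
    """Stand-in for a ModelContainer: a list-like collection of models."""


def mark_skipped(result, step_name):
    """Set cal_step[step_name] = 'SKIPPED' on a single model, or
    broadcast the same assignment to every model in a container."""
    if isinstance(result, SimpleContainer):
        for model in result:
            mark_skipped(model, step_name)
    else:
        result.cal_step[step_name] = "SKIPPED"
    return result
```

The point of the fix is the container branch: before, only the single-model path existed, so a skipped first step handed a ModelContainer would fail the keyword assignment.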
NB: This reinforces the need for an stdatamodels issue to add a parent class to ModelContainer that provides for a robust isinstance() check, e.g. @stscieisenhamer's suggested ModelList.
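To make the ModelList suggestion concrete, here is a hypothetical sketch (not the actual stdatamodels API) of how a shared parent class would enable the robust isinstance() check:

```python
# Hypothetical sketch of the ModelList idea: a common base class lets
# callers ask "is this a collection of models?" with one isinstance()
# check, instead of depending on the concrete ModelContainer type.

class ModelList(list):
    """Proposed parent class for any list-like collection of datamodels."""


class ModelContainer(ModelList):
    """Concrete container, now recognizable via its ModelList parent."""


def is_model_collection(obj):
    """True for any ModelList subclass, False for plain lists or models."""
    return isinstance(obj, ModelList)
```

With such a base class, code like the SKIPPED-broadcast fix could test `isinstance(result, ModelList)` without importing the concrete container implementation.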