
More descriptive debugging graph errors and bugfix in values validator #581

Merged: 3 commits into nipype:master on Oct 23, 2022

Conversation

@tclose (Contributor) commented Sep 20, 2022

Types of changes

  • Bug fix (non-breaking change which fixes an issue):
  • New feature (non-breaking change which adds functionality):

Summary

  • _allowed_values_validator now accepts LazyField objects, so lazily connected inputs are no longer rejected before they are resolved (see the sketch below)
  • more detailed error messages when there is a problem with the workflow graph and the dependencies of some nodes cannot be satisfied (see the second sketch, after the checklist)
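A minimal sketch of the first change, assuming an attrs-style validator signature and a simple `allowed_values` metadata key; the real validator lives in pydra/engine/helpers.py and may differ in detail:

```python
import attr


class LazyField:
    # Stand-in for pydra's LazyField: a placeholder for a value that an
    # upstream node will only produce at runtime.
    pass


def allowed_values_validator(_, attribute, value):
    """attrs-style validator that checks a concrete value against the field's allowed values."""
    allowed = attribute.metadata.get("allowed_values", ())
    if value is attr.NOTHING or isinstance(value, LazyField):
        # A LazyField cannot be checked yet -- it is only resolved once the
        # upstream task has run, so skip validation here.
        return
    if allowed and value not in allowed:
        raise ValueError(
            f"value of {attribute.name} has to be from {allowed}, but {value!r} provided"
        )
```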

Checklist

  • I have added tests to cover my changes (if necessary)
  • I have updated documentation (if necessary)
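And for the second point in the summary, a library-agnostic sketch of how a more descriptive graph error might be assembled; the `blocked` mapping is a hypothetical stand-in for the submitter's internal bookkeeping, not pydra's actual data structure:

```python
def describe_unsatisfied_dependencies(blocked):
    """Build an error message listing each stalled node and the predecessors it is waiting on.

    `blocked` maps a node name to the set of predecessor names that have
    errored or never completed (a stand-in for the real graph state).
    """
    lines = ["workflow did not complete, the following nodes have unsatisfied dependencies:"]
    for node, pending in sorted(blocked.items()):
        lines.append(f"  {node}: waiting on {', '.join(sorted(pending))}")
    return "\n".join(lines)


# Example usage: two nodes stuck behind an errored upstream task
print(describe_unsatisfied_dependencies({"addvar2": {"addvar1"}, "report": {"addvar1", "addvar2"}}))
```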

codecov bot commented Sep 20, 2022

Codecov Report

Base: 77.13% // Head: 76.72% // Decreases project coverage by 0.41% ⚠️

Coverage data is based on head (bad613a) compared to base (67f7e72).
Patch coverage: 13.04% of modified lines in pull request are covered.

Additional details and impacted files
@@            Coverage Diff             @@
##           master     #581      +/-   ##
==========================================
- Coverage   77.13%   76.72%   -0.42%     
==========================================
  Files          20       19       -1     
  Lines        4330     4331       +1     
  Branches     1217     1225       +8     
==========================================
- Hits         3340     3323      -17     
- Misses        802      821      +19     
+ Partials      188      187       -1     
Flag        Coverage Δ
unittests   76.63% <13.04%> (-0.42%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files              Coverage Δ
pydra/engine/specs.py       88.59% <0.00%> (ø)
pydra/engine/submitter.py   77.84% <9.52%> (-10.48%) ⬇️
pydra/engine/helpers.py     79.90% <100.00%> (ø)
pydra/__init__.py


☔ View full report at Codecov.

@tclose (Contributor, Author) commented Sep 26, 2022

@djarecka do you know why this would be failing the SLURM check? I don't think I changed anything that would impact it.

plugin = 'slurm'
tmpdir = local('/tmp/pytest-of-root/pytest-0/popen-gw0/test_wf_upstream_error3_slurm_0')

    def test_wf_upstream_error3(plugin, tmpdir):
        """task2 dependent on task1, task1 errors, task-level split on task 1
        goal - workflow finish running, one output errors but the other doesn't
        """
        wf = Workflow(name="wf", input_spec=["x"], cache_dir=tmpdir)
        wf.add(fun_addvar_default(name="addvar1", a=wf.lzin.x))
        wf.inputs.x = [1, "hi"]  # TypeError for adding str and int
        wf.addvar1.split("a")  # task-level split
        wf.plugin = plugin
        wf.add(fun_addvar_default(name="addvar2", a=wf.addvar1.lzout.out))
        wf.set_output([("out", wf.addvar2.lzout.out)])
    
        with pytest.raises(Exception) as excinfo:
            with Submitter(plugin=plugin) as sub:
                sub(wf)
>       assert "addvar1" in str(excinfo.value)
E       AssertionError: assert 'addvar1' in 'Event loop is closed'
E        +  where 'Event loop is closed' = str(RuntimeError('Event loop is closed'))
E        +    where RuntimeError('Event loop is closed') = <ExceptionInfo RuntimeError('Event loop is closed') tblen=11>.value

@djarecka (Collaborator) commented:
@tclose - this test is flaky; perhaps I should do more reruns before reporting failures.

I wonder if you have some test cases that you were testing this on that could be added to the repo?
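One way to get those extra reruns before a failure is reported is the third-party pytest-rerunfailures plugin; whether pydra's CI already uses it is an assumption, so treat this as a sketch:

```python
# Requires: pip install pytest-rerunfailures
# Alternatively, apply it suite-wide from the command line: pytest --reruns 3 --reruns-delay 2
import pytest


@pytest.mark.flaky(reruns=3, reruns_delay=2)  # retry the flaky SLURM test a few times before failing
def test_wf_upstream_error3(plugin, tmpdir):
    ...  # existing test body unchanged
```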

@tclose (Contributor, Author) commented Oct 23, 2022

> @tclose - this test is flaky; perhaps I should do more reruns before reporting failures.
>
> I wonder if you have some test cases that you were testing this on that could be added to the repo?

I fixed the "real" case that was failing but I have been meaning to create a test with an incorrectly designed task, which will trigger this case

@djarecka (Collaborator) commented:
OK, I will merge this now so it makes it into the newest release. I'm sure this will be an improvement over the previous error reporting. I've created an issue to think about the tests: #590

@djarecka djarecka merged commit 9cdd84b into nipype:master Oct 23, 2022