
Let get_status_dict() use exception message if none is passed #7037

Merged: 2 commits merged into datalad:maint on Sep 19, 2022

Conversation

mih (Member) commented Sep 15, 2022

A result with an exception attached has no error message, unless one is explicitly given, even when the exception is a CapturedException and could provide a message.

This change reports such an exception-provided message in the result.
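
For illustration, a minimal sketch of the intended behavior (assuming the captured exception is handed to get_status_dict() via an exception keyword and that the message field is then derived from it; the exact keyword names and message formatting are assumptions, not taken from the merged diff):

    # Sketch only: the 'exception' keyword and the derived message format are
    # assumptions for illustration, not verified against the merged change.
    from datalad.interface.results import get_status_dict
    from datalad.support.exceptions import CapturedException

    try:
        raise RuntimeError("clone failed: remote unreachable")
    except RuntimeError as e:
        captured = CapturedException(e)

    # No explicit message= is given. Previously the result dict carried no
    # error message in this case; with this change the message would be
    # derived from the CapturedException instead.
    res = get_status_dict(
        action='install',
        status='error',
        path='/tmp/some/dataset',  # hypothetical path, just for the example
        exception=captured,
    )
    print(res.get('message'))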

@mih added labels on Sep 15, 2022: UX (user experience), semver-patch (Increment the patch version when merged), CHANGELOG-missing (When a PR's description does not contain a changelog item, yet).
@github-actions bot removed the CHANGELOG-missing label on Sep 15, 2022.
adswa (Member) commented Sep 15, 2022

There was a clone failure in an Appveyor run, which I have restarted.

I've seen the Travis failure in other PRs already:
=================================== FAILURES ===================================
_____________________________ test_gracefull_death _____________________________
[gw1] linux -- Python 3.7.8 /tmp/dl-miniconda-v52fqo40/bin/python
    @known_failure_osx  # https://github.com/datalad/datalad/issues/5309
    def test_gracefull_death():
    
        def assert_provides_and_raises(pc, exception, target=None):
            """Helper to get all results before exception is raised"""
            results = []
            with assert_raises(exception):
                for r in pc:
                    results.append(r)
            # results should be sorted since we do not guarantee order
            results = sorted(results)
            if target is not None:
                assert_equal(results, target)
            return results
    
        def interrupted_producer():
            yield 1
            raise ValueError()
    
        def consumer(i):
            sleep(0.001)
            yield i
    
        assert_provides_and_raises(
            ProducerConsumer(interrupted_producer(), consumer, jobs=3), ValueError, [1])
    
        def faulty_consumer(i):
            sleep(0.001)
            if i == 1:
                raise ValueError()
            return i
    
        # so we do not get failed, but other parallel ones finish their job
        results = assert_provides_and_raises(
            ProducerConsumer(range(1000), faulty_consumer, jobs=5), ValueError)
        # and analysis of futures to raise an exception can take some time etc, so
        # we could get more, but for sure we should not get all 999 and not even a 100
        if info_log_level:
            assert_greater(100, len(results))
        assert_equal(results[:4], [0, 2, 3, 4])
    
        def producer():
            for i in range(10):
                sleep(0.0003)
                yield i
            raise ValueError()
        # by default we do not stop upon producer failing
        assert_provides_and_raises(
            ProducerConsumer(producer(), consumer, jobs=2), ValueError, list(range(10)))
        # if producer produces more than we can as quickly consume but then fails
        # ATM we do not proceed to consume other items, but fail when we finish
        # consuming until the time point when producer has failed
        # by default we do not stop upon producer failing
        results = assert_provides_and_raises(
            ProducerConsumer(producer(), consumer, reraise_immediately=True, jobs=2),
            ValueError)
        # we will get some results, seems around 4 and they should be "sequential"
        assert_equal(results, list(range(len(results))))
        assert_greater_equal(len(results), 2)
    
        # This test relies too much on threads scheduling to not hog up on handling
        # consumers, but if it happens so - they might actually consume all results
        # before producer decides to finally raise an exception.  As such it remains
        # flaky and thus not ran, but could be useful to test locally while
        # changing that logic.
        #
        # if info_log_level and not (on_windows or on_osx):
        #     # consumers should not be able to consume all produced items.
        #     # production of 10 should take 3 unites, while consumers 10/2 (jobs)
        #     # 5 units, so some should not have a chance.
        #     assert_greater_equal(8, len(results))
    
        # Simulate situation close to what we have when outside code consumes
        # some yielded results and then "looses interest" (on_failure="error").
        # In this case we should still exit gracefully (no GeneratorExit warnings),
        # not over-produce, and also do not kill already running consumers
        consumed = []
        def inner():
            def consumer(i):
                sleep(0.01)
                consumed.append(i)
                return i
            pc = iter(ProducerConsumer(range(1000), consumer, jobs=2))
            yield next(pc)
            yield next(pc)
>       assert_equal(sorted(inner()), [0, 1])
../datalad/support/tests/test_parallel.py:223: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
first = [0, 2], second = [0, 1], msg = None
    def assert_equal(first, second, msg=None):
        if msg is None:
>           assert first == second
E           assert [0, 2] == [0, 1]
E             At index 1 diff: 2 != 1
E             Full diff:
E             - [0, 1]
E             ?     ^
E             + [0, 2]
E             ?     ^
../datalad/tests/utils_pytest.py:107: AssertionError

codecov bot commented Sep 15, 2022

Codecov Report

Base: 75.43% // Head: 88.76% // Increases project coverage by +13.32% 🎉

Coverage data is based on head (4cbe9b4) compared to base (91f6949).
Patch coverage: 100.00% of modified lines in pull request are covered.

Additional details and impacted files
@@             Coverage Diff             @@
##            maint    #7037       +/-   ##
===========================================
+ Coverage   75.43%   88.76%   +13.32%     
===========================================
  Files         354      354               
  Lines       58750    46348    -12402     
  Branches     6613        0     -6613     
===========================================
- Hits        44319    41140     -3179     
+ Misses      14417     5208     -9209     
+ Partials       14        0       -14     
Impacted Files Coverage Δ
datalad/interface/results.py 96.32% <100.00%> (+0.05%) ⬆️
datalad/cmdline/__init__.py 0.00% <0.00%> (-100.00%) ⬇️
datalad/interface/run_procedure.py 0.00% <0.00%> (-100.00%) ⬇️
datalad/cmdline/helpers.py 0.00% <0.00%> (-78.27%) ⬇️
datalad/support/nda_.py 0.00% <0.00%> (-54.55%) ⬇️
datalad/__main__.py 39.65% <0.00%> (-29.32%) ⬇️
datalad/distribution/uninstall.py 63.26% <0.00%> (-28.58%) ⬇️
datalad/tests/utils.py 21.60% <0.00%> (-27.05%) ⬇️
datalad/api.py 75.60% <0.00%> (-15.51%) ⬇️
datalad/support/collections.py 81.25% <0.00%> (-12.50%) ⬇️
... and 189 more


yarikoptic (Member) commented:
Appveyor is green; the Travis failure is test_parallel.py::test_gracefull_death, which has since been mitigated. Let's proceed and also try the adjusted release workflow.

@yarikoptic added the release label (Create a release when this pr is merged) on Sep 19, 2022.
@yarikoptic merged commit 27c18e8 into datalad:maint on Sep 19, 2022.
yarikoptic-gitmate (Collaborator) commented:

PR released in 0.17.6
