
UX: Don't log stderr as warning when expect_stderr was True #3518

Merged
merged 3 commits into datalad:master on Jul 10, 2019

Conversation

@bpoldrack (Member) commented Jul 9, 2019

Fixes #3515

@@ -2124,7 +2124,9 @@ def _run_annex_command_json(self, command,
            if e.stderr:
                # else just warn about present errors
                shorten = lambda x: x[:1000] + '...' if len(x) > 1000 else x
                lgr.warning(

                _log = lgr.debug if kwargs.get('expect_stderr', False) else lgr.warning
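The one-line change can be exercised standalone. A minimal sketch (the logger setup and the `report_stderr` wrapper are mine for illustration; only the `_log` selection line mirrors the patch):

```python
import logging

logging.basicConfig(level=logging.DEBUG)
lgr = logging.getLogger("annex-demo")


def report_stderr(stderr, expect_stderr=False):
    """Route captured stderr to DEBUG when it was expected, WARNING otherwise."""
    shorten = lambda x: x[:1000] + '...' if len(x) > 1000 else x
    # The pattern from the patch: pick the log function once, then call it
    _log = lgr.debug if expect_stderr else lgr.warning
    _log("stderr output: %s", shorten(stderr))
    return _log.__name__  # 'debug' or 'warning', for inspection


report_stderr("git-annex: add: 1 failed", expect_stderr=True)  # lands at DEBUG
report_stderr("git-annex: add: 1 failed")                      # lands at WARNING
```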
@kyleam (Contributor) Jul 9, 2019

When deciding whether to log at level 9 or 11, Runner looks at expect_fail rather than expect_stderr in the case of a non-zero exit status. Given that we're processing a non-zero exit (we're under except CommandError), should we look at expect_fail rather than expect_stderr?

@bpoldrack (Member, Author) Jul 10, 2019

You're absolutely right. Will change.

@bpoldrack (Member, Author) Jul 10, 2019

Although it is somewhat confusing how exactly we want this to behave. expect_stderr and expect_fail are different, independent parameters, so what should combining them mean? If I declare that I expect a non-zero exit but don't expect anything on stderr, shouldn't we still log it as a warning? I guess we are not really consistent in interpreting those.
However, I agree and would like to go with the simple approach: really consider expect_stderr only for a zero exit.

@kyleam (Contributor) Jul 10, 2019

Although it is somewhat confusing how exactly we want this to behave. expect_stderr and expect_fail are different, independent parameters, so what should combining them mean? If I declare that I expect a non-zero exit but don't expect anything on stderr, shouldn't we still log it as a warning?

Yeah, I find the interaction between expect_stderr and expect_fail confusing. I can find spots in cmd.py where it clearly looks at expect_fail if the status is non-zero and expect_stderr otherwise, but I don't think that's necessarily the behavior I'd expect from their docstring descriptions. I'd guess we could collapse them into one parameter and retain all the cases we actually care about.

Anyway, for the case of this change, where we are in a non-zero exit, I think whether we look at expect_stderr or expect_fail isn't a big deal because I'd think expect_fail == expect_stderr would be the dominant case.
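The split policy described for cmd.py can be sketched as a tiny selector. This is a sketch of the behavior as described above, not cmd.py's actual code; the function name and signature are mine, and the levels 9/11 (just below and above DEBUG=10) are the values mentioned in the comment:

```python
def pick_log_level(exit_code, expect_stderr=False, expect_fail=False):
    """On a non-zero exit, consult expect_fail; on success, consult
    expect_stderr.  Return 9 (below DEBUG=10) for expected output,
    11 (above DEBUG) for unexpected output."""
    expected = expect_fail if exit_code != 0 else expect_stderr
    return 9 if expected else 11
```

Note how expect_stderr is simply ignored once the exit status is non-zero, which is exactly the interaction the docstrings don't make obvious.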

with swallow_logs(logging.DEBUG) as cml:
repo._run_annex_command_json('add', ['non-existing'], expect_stderr=True)
# message shows up at DEBUG level:
assert_re_in(r".*\[DEBUG\][^[]*git-annex: add: 1 failed", cml.out, flags=DOTALL)
@kyleam (Contributor) Jul 9, 2019

Fine as is, but the helper cml.assert_logged() could also be used here, which I find slightly more readable but always forget about myself.

@bpoldrack (Member, Author) Jul 10, 2019

It would be nice if it could be used, but it can't ATM. That would require a rewrite of assert_logged, which isn't flexible enough for this. It prepends the message I'm looking for with a fixed prefix for the level, but that prefix doesn't account for anything else in the captured log before it. It assumes the captured log actually starts with [DEBUG], for example, which doesn't work if other things were logged at a higher level first. In short: I tried, but gave up on it, since I was already spending more time than I wanted. I only tackled this one because I thought: "Well, one-liner. Just spend those 5 minutes." That always turns out to be wrong ;-)

@kyleam (Contributor) Jul 10, 2019

It would be nice if it could be used, but it can't ATM. That would require a rewrite of assert_logged, which isn't flexible enough for this. It prepends the message I'm looking for with a fixed prefix for the level, but that prefix doesn't account for anything else in the captured log before it. It assumes the captured log actually starts with [DEBUG], for example, which doesn't work if other things were logged at a higher level first.

match=False should cover that:

diff --git a/datalad/support/tests/test_annexrepo.py b/datalad/support/tests/test_annexrepo.py
index ddacc5ef8..99bd2b10a 100644
--- a/datalad/support/tests/test_annexrepo.py
+++ b/datalad/support/tests/test_annexrepo.py
@@ -2249,8 +2249,10 @@ def test_annex_cmd_expect_fail(path):
     with swallow_logs(logging.DEBUG) as cml:
         repo._run_annex_command_json('add', ['non-existing'], expect_fail=True)
         # message shows up at DEBUG level:
-        assert_re_in(r".*\[DEBUG\][^[]*git-annex: add: 1 failed", cml.out, flags=DOTALL)
+        cml.assert_logged(r"[^[]*git-annex: add: 1 failed",
+                          level="DEBUG", match=False)
     with swallow_logs(logging.DEBUG) as cml:
         repo._run_annex_command_json('add', ['non-existing'], expect_fail=False)
         # message shows up at WARNING level
-        assert_re_in(r".*\[WARNING\][^[]*git-annex: add: 1 failed", cml.out, flags=DOTALL)
+        cml.assert_logged(r"[^[]*git-annex: add: 1 failed",
+                          level="WARNING", match=False)

In short: I tried, but gave up on it, since I was already spending more time than I wanted. I only tackled this one because I thought: "Well, one-liner. Just spend those 5 minutes." That always turns out to be wrong ;-)

A familiar feeling :)

@bpoldrack (Member, Author) Jul 10, 2019

Well, that seems true. I still don't like assert_logged much, since it fails to properly check one log entry at a time (which is why the r"[^[]*" is there), but I'll honor you pointing that out by using it :-)
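The match=False distinction above comes down to regex anchoring: the fixed-prefix assertion behaves like re.match (anchored at the start of the capture), while match=False asks for re.search semantics. A standalone illustration (the captured log text is made up):

```python
import re

# A capture where something was logged at a higher level first
captured = "[WARNING] earlier noise\n[DEBUG] git-annex: add: 1 failed\n"

pattern = r"\[DEBUG\][^[]*git-annex: add: 1 failed"

# Anchored at the start of the capture, the match fails, because the
# capture does not *begin* with [DEBUG]
assert re.match(pattern, captured) is None

# Searching anywhere in the capture (what match=False requests) succeeds
assert re.search(pattern, captured) is not None
```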

@codecov codecov bot commented Jul 10, 2019

Codecov Report

Merging #3518 into master will decrease coverage by 0.16%.
The diff coverage is 85.71%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #3518      +/-   ##
==========================================
- Coverage   82.83%   82.67%   -0.17%     
==========================================
  Files         269      269              
  Lines       35015    35028      +13     
==========================================
- Hits        29005    28959      -46     
- Misses       6010     6069      +59
Impacted Files Coverage Δ
datalad/support/annexrepo.py 60.01% <0%> (-0.56%) ⬇️
datalad/support/tests/test_annexrepo.py 96.11% <100%> (+0.03%) ⬆️
datalad/customremotes/base.py 67.31% <0%> (-11.01%) ⬇️
datalad/customremotes/tests/__init__.py 91.66% <0%> (-8.34%) ⬇️
datalad/customremotes/tests/test_archives.py 86.18% <0%> (-3.29%) ⬇️
datalad/cmd.py 87.08% <0%> (-1.93%) ⬇️
datalad/__init__.py 82.7% <0%> (-1.51%) ⬇️
datalad/tests/utils_testrepos.py 93.1% <0%> (-0.87%) ⬇️
datalad/core/local/create.py 98.41% <0%> (-0.8%) ⬇️
datalad/tests/test_cmd.py 97.04% <0%> (-0.6%) ⬇️
... and 6 more

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 99d9008...bfceb19. Read the comment docs.


@bpoldrack (Member, Author) commented Jul 10, 2019

FTR: Failure in Travis is about singularity hub being down.

kyleam approved these changes Jul 10, 2019
@yarikoptic merged commit 4273189 into datalad:master Jul 10, 2019
2 of 3 checks passed
@yarikoptic (Member) commented Jul 10, 2019

Thanks!

@kyleam (Contributor) commented Jul 10, 2019

yarikoptic added the "pick up to 0.11.x" label

FWIW I don't think these sorts of non-bugfix tweaks are worth cherry picking from master to 0.11.x.

@yarikoptic (Member) commented Jul 10, 2019

Sure, we have more severe divergences in UX/API between the two, but I thought that, given its size, it might be worth it. But ok, will remove the label.

@kyleam (Contributor) commented Jul 10, 2019

but I thought that, given its size, it might be worth it

If you end up deciding it is, please discard my two cents and pick away :]

@yarikoptic yarikoptic added this to the Release 0.12.0 milestone Aug 1, 2019