
Add option to suppress printing RunResult on init #41

Closed
geoffrey464 opened this issue Nov 16, 2020 · 8 comments


@geoffrey464

Hello, I would like an option to suppress the automatic printing of the RunResult class in its init (lines 107-109 in pytest_console_scripts.py).

I think that a relatively easy way to do this is to have RunResult take a default parameter, print_output=True, and use it as a flag for printing the return code, stdout, and stderr.

This flag could be set in the ScriptRunner object and then passed through the run commands.
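Roughly what I have in mind (just a sketch, not the plugin's actual code; everything besides the RunResult name is made up):

```python
class RunResult:
    """Sketch of the proposal -- illustrative only."""

    def __init__(self, returncode, stdout, stderr, print_output=True):
        self.returncode = returncode
        self.stdout = stdout
        self.stderr = stderr
        if print_output:
            # Same printing as today, but now opt-out instead of unconditional.
            self.print()

    def print(self):
        print('# Script return code:', self.returncode)
        print('# Script stdout:\n' + self.stdout)
        print('# Script stderr:\n' + self.stderr)
```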

Thanks,

@kvas-it
Owner

kvas-it commented Nov 17, 2020

Hi Geoffrey!

Thanks for this ticket. It made me realize that I messed up: my intention was to only print the result if the test fails. By default this is more or less how it works, but when output capturing is disabled with something like -s (which you are probably doing), the plugin starts printing the output of every run, which can be quite annoying and was never my intention.

However, there's no easy way to print only on test failure (at least I don't see one), because at the time RunResult is created we don't yet know whether the test will fail. Your proposed parameter allows fixing it in one test, but people usually disable capturing for the whole test suite, and it would probably be annoying to go and add print_output=False everywhere. We also don't want to turn output printing off by default, because it's useful for debugging test failures. Perhaps something like a command line option would work better.

What do you think? I'm happy to implement your original request and/or a command line option.

Cheers,
Vasily

@geoffrey464
Author

Vasily, thanks for replying. I didn't even realize that I was disabling output capturing, since I'm running the test suite through PyCharm, which must automatically add that flag when running the pytest module. Sorry to make you dig into something that doesn't show up on a normal invocation.

I agree with you that there's not a great way to print only on test failure. Could there be an if statement in the init block that only prints when returncode != 0? I'm not too familiar with the test failure criteria, so my thought process may be too naive for this application.

I think that a global flag to disable the printing would be better. What would your proposal for the command line option look like, and how would it interact with the larger pytest suite?

Thanks, Geoffrey

@kvas-it
Owner

kvas-it commented Nov 18, 2020

> Sorry to make you dig into something that doesn't show up on a normal invocation.

No worries, I knew more or less what was happening right away when I read your ticket. And if the tests behave this way in PyCharm, that's also not great, so it would be good to fix it.

> Could there be an if statement in the init block that only prints when returncode != 0?

Yes, this is possible. But the thing is: a nonzero returncode is not necessarily a failure. Some tests check that the script under test correctly handles errors, so signaling an error would be right (and returncode == 0 would cause the test to fail). Then there might be several calls to script_runner.run() inside one test, and stuff like that.
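For example, a test like this treats a nonzero return code as the passing outcome (the script name and flag here are made up):

```python
def test_error_handling(script_runner):
    # The script is *supposed* to fail, so returncode != 0 is the good
    # case here -- printing on every nonzero code would be noise.
    result = script_runner.run('myscript', '--bad-flag')
    assert result.returncode != 0
```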

> I think that a global flag to disable the printing would be better. What would your proposal for the command line option look like, and how would it interact with the larger pytest suite?

When you run pytest from the command line, capturing is enabled by default, so you don't see any prints that happen inside the tests unless a test fails (so you'd see the RunResult prints only for failed tests, which is usually what you want). It seems that PyCharm might be disabling the capturing so that the prints go to the console and you see them. It would be good to check whether this can be configured in PyCharm -- let me know if you find anything; otherwise I can try to install PyCharm and reproduce what you see.
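To illustrate with plain pytest invocations:

```
pytest       # capturing on: RunResult prints show up only for failed tests
pytest -s    # capturing off: every RunResult print goes to the console
```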

Regarding the command line option, it could be something like --script-output=yes|no. When you pass --script-output=no to pytest, RunResult would skip the printing. This could be useful if you can't control capturing when running from PyCharm, and also for people who want to disable capturing but still not see the RunResult prints. However, if you can't control pytest command line options either, this won't help you.
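A rough sketch of how the option could be wired up (the option and fixture names are just this working proposal, not final):

```python
# conftest.py or plugin module -- sketch only
import pytest

def pytest_addoption(parser):
    parser.addoption(
        '--script-output', choices=['yes', 'no'], default='yes',
        help='Print RunResult (return code, stdout, stderr) after each run')

@pytest.fixture
def print_script_output(request):
    # The script_runner fixture could read this to decide the default.
    return request.config.getoption('--script-output') == 'yes'
```

Running `pytest --script-output=no` would then suppress the printing for the whole suite.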

So I'm thinking that we have three possible paths to investigate:

  1. If it's possible to control capturing when running tests under PyCharm, you can turn it off, and it seems like this will solve your problem. I could then add a note to the README about this so that others with the same issue know what to do.
  2. The command line option seems useful in some scenarios, but it will only help you if you can control pytest command line options from PyCharm.
  3. Finally, I can just implement it the way you originally proposed. It seems pretty easy, and if it solves your problem, I'm happy to add it.

Let me know if any of this sounds good.

Cheers,
Vasily

P.S. Sorry for the wall of text 🤷

@geoffrey464
Author

Gotcha, thanks for all of the info!

After messing around in the settings, I don't see any user-facing option that would allow toggling capturing in PyCharm. However, after poking around a bit deeper in the actual call itself, it appears that pytest is invoked indirectly through a runner script that automatically adds the -s flag (see attached image).

[Image: pytest_runner_auto_include_silent]

> If it's possible to control capturing when running tests under PyCharm, you can turn it off, and it seems like this will solve your problem. I could then add a note to the README about this so that others with the same issue know what to do.

Unfortunately, this doesn't appear to be the case, unless users want to go into the application files themselves and modify them.

> The command line option seems useful in some scenarios, but it will only help you if you can control pytest command line options from PyCharm.

I believe that the command line option will be the most extensible approach, as well as the most usable for the majority of people. It's very simple to add extra arguments to the pytest call signature within PyCharm, so that shouldn't be a problem.

> Finally, I can just implement it the way you originally proposed. It seems pretty easy, and if it solves your problem, I'm happy to add it.

If it's not too difficult, I believe that there are edge cases where disabling the output printing on an individual test function basis would be appropriate.

If you do end up implementing both the command line option and the per-test parameter, I think they should complement each other, i.e. there shouldn't be two separate mechanisms that do the same thing. It would be really elegant if the command line option set the default for each individual test, or something like that.
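Something like this is what I'm picturing (pure sketch, names made up):

```python
class ScriptRunner:
    """Sketch: the command line option supplies the default,
    and run() can override it per call."""

    def __init__(self, launch_mode, print_output=True):
        self.launch_mode = launch_mode
        self.print_output = print_output  # default from --script-output

    def run(self, command, *arguments, print_output=None):
        # A per-call argument wins; None means "use the global default".
        if print_output is None:
            print_output = self.print_output
        # ... run the script and pass print_output through to RunResult
```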

Best, Geoffrey

Lol, it's all good. I appreciate all of the background info.

@kvas-it
Owner

kvas-it commented Nov 19, 2020

Hi Geoffrey!

The -s option disables the capturing, so if there's a way to change the value of JD_DISABLE_BUFFERING, that might solve the issue.

In any case, it seems like the command line option and the additional argument to script_runner.run() would be useful to have -- I will implement them.
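From the test's side, usage could then look something like this (the argument name is provisional until it's actually implemented):

```python
def test_version(script_runner):
    # Suppress the result printout for this one invocation only.
    result = script_runner.run('myscript', '--version', print_output=False)
    assert result.returncode == 0
```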

Cheers,
Vasily

@geoffrey464
Author

Hey Vasily!

Unfortunately, I tried both commenting out the offending line of code and changing the JD_DISABLE_BUFFERING environment variable, and neither fixed the capturing of stdout. I think it's a larger issue with how PyCharm handles capturing.

I also just realized that I never showed what the actual output of the test results looks like, so I'll paste an example below (with names changed for brevity):

tests/test_scripts.py::test_script1[subprocess] 
tests/test_scripts.py::test_script2[subprocess] 
tests/test_scripts.py::test_script3[subprocess] 
# Running console script: script1 <path-to-script-argument1> <path-to-script-argument2> -o<optional-script-argument1> --silent
# Script return code: 0
# Script stdout:

# Script stderr:

# Running console script: script2 <path-to-script-argument1> <path-to-script-argument2> -o<optional-script-argument1> --silent
# Script return code: 0
# Script stdout:

# Script stderr:
<insert warning for standard error here>

But in either case, thanks for all your help with solving this! I'm looking forward to trying out both solutions.

Best, Geoffrey

@kvas-it
Owner

kvas-it commented Nov 19, 2020

Hi Geoffrey!

Can you check out the version from this branch and let me know if it solves your problem?

Cheers,
Vasily

@kvas-it
Owner

kvas-it commented Nov 20, 2020

Merged and released. Thanks for the feedback, Geoffrey!
