Future Design of Python Testing Output #21861

Closed
eleanorjboyd opened this issue Aug 21, 2023 · 15 comments · Fixed by #22039
Assignees
Labels
area-testing, on-testplan

Comments

@eleanorjboyd
Member

eleanorjboyd commented Aug 21, 2023

This issue describes the current behavior and outlines a few questions that need to be answered before a proposal can be written.

Current Behavior (pre-rewrite)

Currently, there are three places where users can view different types of output from their test run: first, the "python test log" under the output panel; second, the "Test Result" panel; and finally, the in-line pop-up for failed tests. The "Test Result" panel and the in-line pop-up show identical output (when the in-line pop-up is displayed at all, since it is not shown on pass). For this discussion we will therefore only consider the "python test log" and the "Test Result" panel, but anything printed to the "Test Result" panel will also show up in the in-line pop-up if changes are made.

Due to prior constraints, the extension used to parse output to determine the result of a test run and update the UI. This is the primary reason for the two different output views, but after the testing rewrite this constraint no longer exists. This means a decision needs to be made about these two channels and the information they carry going forward.
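
Not the extension's actual implementation, but a minimal sketch of what "no longer parsing output" looks like against the VS Code testing API; the `pythonTests` id and the `executeTest` helper below are hypothetical stand-ins for the pytest adapter:

```typescript
import * as vscode from 'vscode';

// Hypothetical helper standing in for the pytest adapter: it returns structured
// results for a single test rather than console text that has to be parsed.
async function executeTest(
    test: vscode.TestItem,
    _token: vscode.CancellationToken
): Promise<{ passed: boolean; durationMs: number; capturedOutput: string; errorText: string }> {
    return { passed: true, durationMs: 1, capturedOutput: '', errorText: '' };
}

const controller = vscode.tests.createTestController('pythonTests', 'Python Tests');

controller.createRunProfile('Run', vscode.TestRunProfileKind.Run, async (request, token) => {
    const run = controller.createTestRun(request);
    for (const test of request.include ?? []) {
        run.started(test);
        const outcome = await executeTest(test, token);
        // Anything appended here shows up in the Test Results panel; pass/fail is
        // reported directly instead of being inferred from the printed text.
        run.appendOutput(outcome.capturedOutput.replace(/\n/g, '\r\n'), undefined, test);
        if (outcome.passed) {
            run.passed(test, outcome.durationMs);
        } else {
            run.failed(test, new vscode.TestMessage(outcome.errorText), outcome.durationMs);
        }
    }
    run.end();
});
```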

The items that currently differ between the two are listed below; the screenshot shows which of the "python test log" and the Test Result panel includes each item:

  • print statements (when applicable)
  • test args
  • cwd / rootdir
  • workspace directory
  • failure message
  • test summary (the section that goes "Total number of tests expected to run: 1.....")
  • ------ Captured stdout call -------
  • =========== short test summary info ==============
  • FAILED test_output_types.py::test_print_fail - assert False
  • config file
  • platform / pytest version
  • # tests passed, # failed, # skipped, time taken (e.g. === 1 passed in 0.01s ====)
Screenshot 2023-08-21 at 11 47 25 AM

Comparison of the differences, shown as text, for a test that fails and has a print statement (right: python test output, left: Test Result).
Screenshot 2023-08-21 at 1 04 47 PM

Comparison of the differences, shown as text, for a test that fails with logging enabled (right: python test output, left: Test Result).
Screenshot 2023-08-21 at 1 06 41 PM

Moving Forward

The following questions arise after the above analysis. Please add your feedback in comments below.

  1. Would it be better if we instead switched all the necessary information to the Test Result panel? If we do this, do we even need the "python test output"?
  2. If we want to keep both, what is the difference between them?
  3. Which of the given information is most useful? Should the Test Results panel look as similar as possible to the pytest output (that is, what pytest prints to the command line)? If we do this, do we still want custom additions to the Test Result panel such as "workspace directory", "args", and the test summary (the section that goes "Total number of tests expected to run: 1.....")?
  4. When we make these types of changes, where should the "show output" button on the testing explorer go? What should open by default when a test run finishes? Are there any more commands / ways to pull up these different outputs that should be added?

Pros and Cons

  • If we switch to using the Test Result panel, we can support color in the output, since it is all parsed by xterm.js (see the sketch below)
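
As a minimal illustration of that point, assuming a `vscode.TestRun` named `run` as in the earlier sketch (the escape codes and messages are arbitrary examples; the terminal-style renderer expects \r\n line endings):

```typescript
// Raw ANSI escape sequences are rendered as color by the Test Results panel.
declare const run: import('vscode').TestRun;

run.appendOutput('\x1b[32m=== 1 passed in 0.01s ===\x1b[0m\r\n');
run.appendOutput('\x1b[31mFAILED\x1b[0m test_output_types.py::test_print_fail - assert False\r\n');
```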

The following issues are related to this spike, and those involved in them are encouraged to give their opinion. Thanks!

@eleanorjboyd eleanorjboyd added the feature-request and needs spike labels Aug 21, 2023
@eleanorjboyd eleanorjboyd self-assigned this Aug 21, 2023
@eleanorjboyd eleanorjboyd added area-testing and removed the feature-request label Aug 21, 2023
@eleanorjboyd
Member Author

One item @karrtikr brought up was whether the Test Results panel is always around or whether it goes away. Users should be able to reference these logs beyond the moment right after a run, so the results panel will need to have the same lifespan as the output channel.

@brettcannon
Member

I say just merge it all into the Test Result panel. It's the integrated solution in VS Code for this sort of information.

@eleanorjboyd
Member Author

Another related issue: #17371

@eleanorjboyd
Member Author

If we get rid of the python test output, then this issue would change, as all commands to open the output would then route to the test panel: #21694

@connor4312
Member

> Would it be better if we instead switched all the necessary information to the Test Result panel? If we do this, do we even need the "python test output"?

I would like the Test Results view to be the 'one stop shop' for the test experience in VS Code. If there are things that make this prohibitive or undesirable, let me know and we can solve them.

> If we want to keep both, what is the difference between them?

I'm not a pythonista, but one thing I do find useful is test extensions that print the command they're running (or the equivalent command a user could run) for the tests. Sometimes, for whatever reason, I want to run them manually in a terminal.
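
As a purely hypothetical illustration of that idea (the command string and the `run` variable are made up, not something the Python extension emits today):

```typescript
// Echo the equivalent shell command so a user can copy it into a terminal
// and re-run the same tests manually.
declare const run: import('vscode').TestRun;

const pytestArgs = ['-m', 'pytest', 'test_output_types.py::test_print_fail'];
run.appendOutput(`> python ${pytestArgs.join(' ')}\r\n`);
```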

> Which of the given information is most useful? Should the Test Results panel look as similar as possible to the pytest output (that is, what pytest prints to the command line)?

From my point of view when writing test extensions, my goal for the output is to write human-readable data and also not lose information. If there is a VS Code API for providing information, such as showing large failure diffs in the TestMessage, then that counts as "not losing information", and I may write a more concise version of that failure into the output.
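
For reference, the API alluded to here is `TestMessage.diff`; a minimal sketch, with the `run`/`test` variables and the message text as placeholders:

```typescript
import * as vscode from 'vscode';

declare const run: vscode.TestRun;
declare const test: vscode.TestItem;

// Attach expected/actual values so VS Code can render a rich diff view, while
// the plain-text output only carries a concise one-line summary.
const message = vscode.TestMessage.diff('assert False', 'expected output', 'actual output');
run.failed(test, message);
run.appendOutput('FAILED test_output_types.py::test_print_fail - assert False\r\n', undefined, test);
```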

It seems like the "python test output" is a mix of the test output and diagnostic data. I think it's good to have a way to get that diagnostic data, but that could be a "log"-type output channel and not something we necessarily direct users to unless they run into problems.
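
A sketch of what that could look like, assuming a log-type channel created with `vscode.window.createOutputChannel(..., { log: true })`; the channel name and messages are illustrative:

```typescript
import * as vscode from 'vscode';

// A "log"-type channel gets log-level filtering and timestamps for free, and
// users only need to open it when they run into problems.
const log = vscode.window.createOutputChannel('Python Test Log', { log: true });

log.debug('Running pytest with args:', ['--rootdir', '/path/to/workspace']);
log.info('Discovery finished: 46 tests collected');
log.error('Test run exited unexpectedly; see output above');
```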

> When we make these types of changes, where should the "show output" button on the testing explorer go? What should open by default when a test run finishes?

Today the "show output" button always opens the output in the Test Results view. Are you thinking that would be different?

@eleanorjboyd
Member Author

Based on discussions with team members, the plan is to move fully to the Test Results panel (as we have also confirmed this is feasible) and possibly remove the python test logs section from the output panel. That panel was created to handle engineering constraints which are no longer present after the rewrite.

This being said, there is one question I have. Discovery (for example pytest discovery, which is the equivalent of the command python -m pytest --collect-only) produces output. Does it make sense for that output to be put in the Test Results panel? Below I added an example of what discovery output looks like; it comes from pytest, and it is useful as it outlines which tests were found, as well as a few other important pieces of information such as the config file being used. Would having test discovery output end up in a panel called "Test Results" be confusing for users?

@brettcannon, @luabud, @karthiknadig, @cwebster-99 thoughts?
@connor4312, from a core perspective, how challenging would this be to implement, given that the Test Results panel is mostly based on the test run object that handles pass/failure?

=========================================== test session starts ============================================
platform darwin -- Python 3.7.9, pytest-7.2.2, pluggy-1.0.0
rootdir: /Users/eleanorboyd/testingFiles/inc_dec_example, configfile: pytest.ini
collected 46 items                                                                                         

<Module diff_pattern_test.py>
  <UnitTestCase ClassDiffPattern>
    <TestCaseFunction test_a1>
<Module first_test.py>
  <Function test_add>
<DoctestTextfile test_docstring.txt>
  <DoctestItem test_docstring.txt>
<Module test_from_that.py>
  <Class TestSomething>
    <Function test_a>
<Module test_many_functions.py>
  <Function test_many_1p>
  <Function test_many_2f>
  <Function test_many_3f>
<Module test_plat.py>
  <Function test_if_apple_is_evil>
  <Function test_if_linux_works>
  <Function test_if_win32_crashes>
  <Function test_runs_everywhere>
<Module test_single_function.py>
  <Function test_single_1f>
<Module test_subtests.py>
  <UnitTestCase NumbersTest>
    <TestCaseFunction test_even>
<Module test_unit_and_pytest_combo.py>
  <Function test_single_pytest>
  <UnitTestCase test_class_unittest_combo_file>
    <TestCaseFunction test_combo1_function_unittest>
<Module test_unit_decorator.py>
  <UnitTestCase my_test>
    <TestCaseFunction test_first>
    <TestCaseFunction test_fourth>
    <TestCaseFunction test_third>
<Module blank_folder/test_two_classes.py>
  <UnitTestCase ClassA>
    <TestCaseFunction test_a1>
    <TestCaseFunction test_a2>
  <UnitTestCase ClassB>
    <TestCaseFunction test_b1>
    <TestCaseFunction test_b2>
<Module outer nested_folder/test_outer_folder.py>
  <Function test_outer_folder1>
  <Function test_outer_folder2>
<Module outer nested_folder/test_trial.py>
  <Function test_test[0]>
  <Function test_test[1]>
  <Function test_test[2]>
  <Function test_test[3]>
  <Function test_test[4]>
  <Function test_test[5]>
  <Function test_test[6]>
  <Function test_test[7]>
  <Function test_test[8]>
  <Function test_test[9]>
<Module outer nested_folder/inner_nested_folder/test_nested.py>
  <Function test_answer_1pNest>
<Module outer nested_folder/inner_nested_folder/test_unittest.py>
  <UnitTestCase test_class_unittest>
    <TestCaseFunction test1_function_unittest>
    <TestCaseFunction test2_function_unittest>
<Module param_folder/test_other.py>
  <Function test_this>
<Module param_folder/test_parameterized.py>
  <Function test_adding[test adding]>
  <Function test_adding[2      +V]>
  <Function test_adding[\n ffda]>
  <Function test_odd_even[1]>
<Package test_dup_class>
  <Module test_a.py>
    <UnitTestCase TestSomething>
      <TestCaseFunction test_a>
  <Module test_b.py>
    <UnitTestCase TestSomething>
      <TestCaseFunction test_b>
<Module tests/test_unit_path_no.py>
  <UnitTestCase Test_TestIncrementDecrement>
    <TestCaseFunction test_decrement>
    <TestCaseFunction test_increment>

============================================= warnings summary =============================================
test_plat.py:4
  /Users/eleanorboyd/testingFiles/inc_dec_example/test_plat.py:4: PytestUnknownMarkWarning: Unknown pytest.mark.darwin - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
    @pytest.mark.darwin

test_plat.py:10
  /Users/eleanorboyd/testingFiles/inc_dec_example/test_plat.py:10: PytestUnknownMarkWarning: Unknown pytest.mark.linux - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
    @pytest.mark.linux

test_plat.py:16
  /Users/eleanorboyd/testingFiles/inc_dec_example/test_plat.py:16: PytestUnknownMarkWarning: Unknown pytest.mark.win32 - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html
    @pytest.mark.win32

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
======================================= 46 tests collected in 0.06s ========================================

@brettcannon
Member

We could stick it in the Python channel of the Output panel since, as you pointed out, the Test Results panel doesn't really have a concept of discovery.

@eleanorjboyd
Member Author

For clarity on this issue, this is what I mean when I talk about the Test Results panel:
Screenshot 2023-09-18 at 10 44 56 AM

@connor4312
Member

Yea, I'm not a fan of the idea of putting it in the Test Results panel.

You could have an output channel, or even a simple memory buffer that's opened in an editor on some command. Up to you.

@eleanorjboyd
Member Author

@connor4312 and @brettcannon, thanks for the input! Will move forward with keeping discovery output in the python output panel.

@eleanorjboyd
Member Author

What are people's thoughts on using color for the debug console logs? This is what it would look like if pytest were set to color true and run in debug mode:
Screenshot 2023-09-20 at 2 25 59 PM

@connor4312
Member

Colors are great, they make output much more readable 👍

@brettcannon
Member

I say we give it a shot and see what users think!

eleanorjboyd added a commit that referenced this issue Oct 11, 2023
closes #21861 and related issues

---------

Co-authored-by: Courtney Webster <60238438+cwebster-99@users.noreply.github.com>
@github-actions github-actions bot removed the needs spike label Oct 11, 2023
@flying-sheep

Ah, amazing, with "python.testing.pytestArgs": ["--color=yes"], I see color in the test output panel.

The “Peek View” still doesn’t have color, but that can be deactivated. Thank you!

@eleanorjboyd eleanorjboyd added the on-testplan label Oct 23, 2023
This was referenced Oct 23, 2023
@jrieken jrieken mentioned this issue Oct 24, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 25, 2023