
Make launchtest junit XML match pytest XML more closely #228

Merged · 10 commits · May 4, 2019

Conversation

@pbaughman (Contributor)

  • Use the classname attribute in the testcase element instead of jamming everything into name
  • Use the message attribute for skipped elements
  • Handle the case where @unittest.skip("Skip Message") is added to the generate_launch_description function

Signed-off-by: Pete Baughman pete.baughman@apex.ai

I noticed some of our internal tooling was choking on apex_launchtest XML, so I've made the generated XML match pytest's XML more closely

Example pytest XML using same classnames and test names as the good_proc.test.py example, but with empty test methods:

<?xml version="1.0" encoding="utf-8"?>
<testsuite errors="0" failures="0" name="pytest" skipped="1" tests="4" time="0.110">
  <testcase classname="example_pytest.TestGoodProcess" file="example_pytest.py" line="55" name="test_count_to_four" time="0.001"/>
  <testcase classname="example_pytest.TestProcessOutput" file="example_pytest.py" line="68" name="test_exit_code" time="0.001">
    <skipped message="Example of a skip" type="pytest.skip">example_pytest.py:68: Example of a skip</skipped>
  </testcase>
  <testcase classname="example_pytest.TestProcessOutput" file="example_pytest.py" line="75" name="test_full_output" time="0.001"/>
  <testcase classname="example_pytest.TestProcessOutput" file="example_pytest.py" line="85" name="test_out_of_order" time="0.001"/>
</testsuite>

Example launchtest XML:

<?xml version="1.0" encoding="us-ascii"?>
<testsuites errors="0" failures="0" name="apex_launchtest" tests="4">
  <testsuite errors="0" failures="0" name="good_proc.test.py" skipped="1" tests="4" time="4.111">
    <testcase classname="TestGoodProcess" name="test_count_to_four" time="4.11"/>
    <testcase classname="TestProcessOutput" name="test_full_output" time="0.0"/>
    <testcase classname="TestProcessOutput" name="test_exit_code" time="0.0">
      <skipped message="Example of a skip"/>
    </testcase>
    <testcase classname="TestProcessOutput" name="test_out_of_order" time="0.0"/>
  </testsuite>
</testsuites>

It's not spot-on, but it's sufficient for our internal test reporting tooling. I'm open to feedback about what parts of the pytest junit XML are important.
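For illustration, the pytest-style attributes shown above can be produced with Python's `xml.etree.ElementTree` (the same module the PR's serializer uses). This is a minimal hypothetical sketch; `make_testcase` is an invented helper, not the PR's actual code:

```python
import xml.etree.ElementTree as ET

def make_testcase(classname, name, time, skip_reason=None):
    """Hypothetical helper: build a pytest-style <testcase> element."""
    case = ET.Element('testcase')
    case.set('classname', classname)  # class name goes here, not jammed into 'name'
    case.set('name', name)
    case.set('time', str(round(time, 3)))
    if skip_reason is not None:
        # pytest puts the skip reason in the 'message' attribute of <skipped>
        ET.SubElement(case, 'skipped', message=skip_reason)
    return case

suite = ET.Element('testsuite', name='good_proc.test.py', tests='1', skipped='1')
suite.append(make_testcase('TestProcessOutput', 'test_exit_code', 0.0,
                           skip_reason='Example of a skip'))
print(ET.tostring(suite, encoding='unicode'))
```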

This PR matches my internal PR ApexAI/apex_rostest#41 on the Apex AI repo.

@tfoote tfoote added the in review Waiting for review (Kanban column) label Apr 17, 2019
@pbaughman pbaughman changed the title Make junit XML match pytest XML more closely Make launchtest junit XML match pytest XML more closely Apr 17, 2019
@pbaughman (Contributor, Author)

The test failure is in test_flake8, but it seems the checker has barfed up an error instead of telling me what I did wrong:

test/test_flake8.py:22: in test_flake8
    rc = main(argv=[])
/opt/ros/dashing/lib/python3.6/site-packages/ament_flake8/main.py:75: in main
    max_line_length=args.linelength)
/opt/ros/dashing/lib/python3.6/site-packages/ament_flake8/main.py:175: in generate_flake8_report
    report.report = style.check_files(paths)
/usr/lib/python3/dist-packages/flake8/api/legacy.py:101: in check_files
    self._application.report_errors()
/usr/lib/python3/dist-packages/flake8/main/application.py:340: in report_errors
    results = self.file_checker_manager.report()
/usr/lib/python3/dist-packages/flake8/checker.py:267: in report
    results_reported += self._handle_results(filename, results)
/usr/lib/python3/dist-packages/flake8/checker.py:166: in _handle_results
    physical_line=physical_line,
/usr/lib/python3/dist-packages/flake8/style_guide.py:394: in handle_error
    self.listener.notify(error.code, error)
E   AttributeError: 'NoneType' object has no attribute 'notify'

@hidmic (Contributor) left a comment:

@pbaughman I see that APEX still shows up in the API. There's an ongoing effort to homogenize the naming (see #208), so consider re-targeting this PR to that structure and style instead (see #215).

@@ -229,6 +229,12 @@ def run(self):
try:
worker = _RunnerWorker(run, self._launch_file_arguments, self._debug)
results[run] = worker.run()
except unittest.case.SkipTest as skip_exception:
# If a 'skip' decorator was placed on the generate_launch_description function,
hidmic (Contributor):

@pbaughman leftover

Suggested change
# If a 'skip' decorator was placed on the generate_launch_description function,
# If a 'skip' decorator was placed on the generate_test_description function,

?
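For context on the diff above, here is a minimal standalone sketch (not the PR's code) of how a `@unittest.skip` decorator on the description function surfaces as a catchable `unittest.case.SkipTest`:

```python
import unittest

@unittest.skip('Skip Message')
def generate_test_description():
    pass  # never reached: the decorator's wrapper raises SkipTest instead

try:
    generate_test_description()
    reason = None
except unittest.case.SkipTest as skip_exception:
    # unittest.skip wraps plain functions so that calling them raises
    # SkipTest carrying the reason string
    reason = str(skip_exception)

print(reason)
```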

@pbaughman (Contributor, Author)

@hidmic Ok, this PR is now rebased onto origin/hidmic/import_launch_testing and is a super-set of #215

The only merge conflicts I hit were in test_xml_output.py - everything else went in cleanly.

The CI environment doesn't appear to like my use of 'mock' in test_runner_results.py:

ImportError while importing test module '/tmp/ws/src/launch/launch_testing/test/launch_testing/test_runner_results.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
test/launch_testing/test_runner_results.py:31: in <module>
    import mock
E   ModuleNotFoundError: No module named 'mock'

That wasn't added in this PR though - It looks like it's also breaking in #215 - should we re-do this test, or add 'mock' to CI, or something different?

@hidmic (Contributor) commented Apr 23, 2019

@pbaughman you mean Dpr__launch__ubuntu_bionic_amd64? Ignore it, it's a job we use to check things still work against a stable release, currently crystal, and there have been enough breaking changes since it was released that it won't pass without errors.

@hidmic (Contributor) commented Apr 30, 2019

@pbaughman mind to rebase?

Pete Baughman added 2 commits April 30, 2019 17:10
 - Use the `classname` attribute in the `testcase` element instead of jamming everything into `name`
 - Use the `message` attribute for `skipped` elements
 - Handle the case where @unittest.skip("Skip Message") is added to the generate_test_description function

Signed-off-by: Pete Baughman <pete.baughman@apex.ai>
Signed-off-by: Pete Baughman <pete.baughman@apex.ai>
@pbaughman (Contributor, Author)

@hidmic Done

@dirk-thomas (Member)

Use the classname attribute in the testcase element instead of jamming everything into name

Please double check how the changes are rendered in the Jenkins test results report.

@hidmic (Contributor) left a comment:

Left some more comments. Running CI (full run up to system tests) below:

  • Linux Build Status
  • Linux-aarch64 Build Status
  • macOS Build Status
  • Windows Build Status (unrelated test failure)


results = runner.run()
# Should just get one result, even though there were multiple tests
assert len(results.values()) == 1
hidmic (Contributor):

@pbaughman it'd be nice to check the XML structure too.

class __skipped_test_case:

def __init__(self):
self._testMethodName = 'skipped_launch'
hidmic (Contributor):

@pbaughman meta: doesn't this make the whole XML output rather obscure? It removes all references to the actual tests being skipped.

pbaughman (Contributor, Author) replied:

I don't know about 'rather obscure.' The name of the test file is still in the XML output. Your point is received, though...

Upon looking at this a second time, the test run is available at the point we create the SkipResult. We could pass it into the SkipResult constructor and generate a full XML report that lists each test case as skipped. That's what pytest does if you put a skip decorator on the test class.
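The idea in the comment above — listing every test case in the run as skipped, as pytest does for a class-level skip — could look roughly like this (`skipped_suite_xml` is a hypothetical helper, not the PR's code):

```python
import xml.etree.ElementTree as ET

def skipped_suite_xml(suite_name, test_names, reason):
    """Hypothetical: emit a <testsuite> where every test is marked skipped."""
    suite = ET.Element('testsuite', name=suite_name,
                       tests=str(len(test_names)),
                       skipped=str(len(test_names)))
    for name in test_names:
        case = ET.SubElement(suite, 'testcase', name=name)
        ET.SubElement(case, 'skipped', message=reason)
    return suite

skipped_xml = ET.tostring(
    skipped_suite_xml('good_proc.test.py',
                      ['test_exit_code', 'test_full_output'],
                      'Example of a skip'),
    encoding='unicode')
print(skipped_xml)
```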

# <testsuite name="run1" . . . >
# <testcase classname="TestHost" name="test_0" . . . />
# </testsuite>
# <testsuites>
hidmic (Contributor):

@pbaughman nit: consider putting explanation on a test docstring.

# <skipped message="My reason is foo" . . . />
# </testcase>
# </testsuite>
# <testsuites>
hidmic (Contributor):

@pbaughman nit: consider putting explanation on a test docstring.

# </failure>
# </testcase>
# </testsuite>
# <testsuites>
hidmic (Contributor):

@pbaughman nit: consider putting explanation on a test docstring.

# </failure>
# </testcase>
# </testsuite>
# <testsuites>
hidmic (Contributor):

@pbaughman nit: consider putting explanation on a test docstring.

@hidmic (Contributor) commented May 2, 2019

Hmm, Jenkins is not picking up any jUnit file when it should. I'll check locally if the tool is even generating them.

@pbaughman (Contributor, Author)

@hidmic I imagine it would have to be a Jenkins run of some package that uses add_launch_test right?

The test test_examples invokes the program launch_test, but without the --junit-xml option.

 - When we skip all tests by putting @unittest.skip on the generate_test_description function,
   we generate the junit XML as though all the tests individually had the @unittest.skip decorator
 - Improve documentation on new tests

Signed-off-by: Pete Baughman <pete.baughman@apex.ai>
@pbaughman (Contributor, Author)

@hidmic I think I've addressed all of your PR comments

@hidmic (Contributor) commented May 2, 2019

Hmm, tests do generate jUnit XML, but Jenkins is not consuming it. I had not realized that before. We need to follow up on this.

@pbaughman (Contributor, Author)

@hidmic Ok - we can hold this PR open until we've identified the problem and then try to include the fix here if it turns out to be an issue with the XML format.

hidmic added 6 commits May 3, 2019 11:52
Signed-off-by: Michel Hidalgo <michel@ekumenlabs.com>
Signed-off-by: Michel Hidalgo <michel@ekumenlabs.com>
Signed-off-by: Michel Hidalgo <michel@ekumenlabs.com>
Signed-off-by: Michel Hidalgo <michel@ekumenlabs.com>
Signed-off-by: Michel Hidalgo <michel@ekumenlabs.com>
Signed-off-by: Michel Hidalgo <michel@ekumenlabs.com>
@hidmic (Contributor) commented May 3, 2019

Alright, it wasn't that the XML reports were not being picked up, but rather that they were not being labeled properly.

@pbaughman I'm pull requesting against your branch, pbaughman#1. Mind to review/merge so we get this boy going?

@hidmic (Contributor) commented May 3, 2019

@pbaughman I took the liberty to push to your branch to get things going, hope that's not an issue for you.

Signed-off-by: Michel Hidalgo <michel@ekumenlabs.com>
@hidmic (Contributor) commented May 4, 2019

Ready to be merged after a final re-approval.

@hidmic hidmic merged commit f5246cc into ros2:master May 4, 2019
@hidmic hidmic removed the in review Waiting for review (Kanban column) label May 4, 2019
@dirk-thomas (Member)

Can you link to a launch test result on Jenkins as an example?

@hidmic (Contributor) commented May 6, 2019

Of course! Here you can see test_communication test reports aggregation on Jenkins for a Linux x86_64 build.

@@ -81,13 +82,12 @@ def all_cases(self):
yield from _iterate_tests_in_test_suite(self.post_shutdown_tests)

def __str__(self):
pbaughman (Contributor, Author):

TODO Pete: reconcile this with this

@@ -83,7 +83,8 @@ def unittestCaseToXml(test_result, test_case):
class needs to be a launch_testing.TestResult class
"""
case_xml = ET.Element('testcase')
case_xml.set('classname', type(test_case).__name__)
full_classname = type(test_case).__module__ + '.' + type(test_case).__name__
pbaughman (Contributor, Author):

@hidmic Sorry, I know this question is late to the party, but this will always be launch_testing, right?

I used the module name of the test description function https://github.com/ApexAI/apex_rostest/blob/c8cd0ac6dc5383334fdbfd67e7bfeee62bf37c1f/apex_launchtest/apex_launchtest/loader.py#L97

pbaughman (Contributor, Author):

Follow-up I see in a later commit that you did basically the same thing by generating the test name from the name of the test file. That's what I get for reviewing one commit at a time.

hidmic (Contributor) replied:

Sorry, I know this question is late to the party, but this will always be launch_testing, right?

It'll be the name of the package the test is in, which may be provided by a caller or be the base name of the test file.
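The module-qualified classname construction from the diff above can be reproduced in isolation (toy class, not the PR's code):

```python
class TestProcessOutput:  # stand-in for a real launch test class
    pass

test_case = TestProcessOutput()
# Same construction as in the diff: module name, a dot, then the class name
full_classname = type(test_case).__module__ + '.' + type(test_case).__name__
print(full_classname)  # e.g. '__main__.TestProcessOutput' when run as a script
```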

@@ -98,7 +98,7 @@ def main():
parser.error("Test file '{}' does not exist".format(args.test_file))

args.test_file = os.path.abspath(args.test_file)
test_module = _load_python_file_as_module(args.test_file)
test_module = _load_python_file_as_module(args.test_name, args.test_file)
pbaughman (Contributor, Author):

I guess this works too - usually the only time we care about the test name is when we run in CI and there we have launch_testing_cmake to give us a good test name.

For reference, https://github.com/ApexAI/apex_rostest/blob/c8cd0ac6dc5383334fdbfd67e7bfeee62bf37c1f/apex_launchtest/apex_launchtest/apex_launchtest_main.py#L34
I think this is a result of the parametrization stuff getting grabbed while it was still WIP

test_runs = LoadTestsFromPythonModule(test_module, name=args.test_name + '.launch_tests')
test_runs = LoadTestsFromPythonModule(
test_module, name='{}.{}.launch_tests'.format(
args.package_name, test_file_basename
pbaughman (Contributor, Author):

Will this give silly names if --package-name is not specified?

hidmic (Contributor) replied:

Shouldn't, see line 102.

@pbaughman pbaughman deleted the fix_launchtest_xml branch May 7, 2019 22:05