
Support for CTest #49

Closed
kreuzberger opened this issue Dec 8, 2022 · 6 comments

Comments

@kreuzberger
Contributor

The CMake/CTest tool used for test runs can generate JUnit XML files. These files are not properly parsed, because some tags that seem to be required by sphinx-test-reports are missing.

ctest.xml.txt

kreuzberger added a commit to procitec/sphinx-test-reports that referenced this issue Dec 8, 2022
@kreuzberger
Contributor Author

kreuzberger commented Dec 8, 2022

While testing this, I have some questions about the workflows, considering "normal" sphinx-needs handling.

  1. If I use sphinx-needs to define requirements and tests, I use the default .. req:: and .. test:: directives. The test directive seems to be not equal to, or slightly different from, the .. test-case:: directive (e.g. regarding the output names in HTML). Is this intended, or what is the difference? After reading the documentation, "test" and "test-case" are the same to me...

  2. WHEN should the directive with the XML file be used?
    If the file does NOT exist, I get a warning (that's OK). But the build also fails with an exception, so the file MUST exist.
    So currently I would do:

  • build the software and the documents (online help, specifications, etc.)
  • run all tests with ctest/squish/behave
  • build the test report with imported requirements (needs.json)
  3. If I define requirements and tests in a specification file and export them via JSON, and I have run the tests, I can generate a test report. This report should help me "check" whether my requirements are fulfilled. But how do I resolve this?
    E.g.: define a test for a requirement, TEST_01. This test is a unit test. It is run 4 times: on Linux, Windows 7, another Linux OS, and Windows 11.

So should I define 4 test-cases with the 4 different result files, and how do I match my super test definition to "passed"?

.. req:: My unique Feature
  :id: REQ_MY_FEATURE

  Implement the DWIM button

.. test:: Unit test my feature
  :id: TEST_MY_FEATURE
  :links: REQ_MY_FEATURE

.. test-case:: Feature Ubuntu
  :id: TEST_MY_FEATURE_UBUNTU
  :file: ctest.ubuntu.xml
  :suite: Linux-c++
  :classname: myfeaturetest
  :links: TEST_MY_FEATURE

.. test-case:: Feature Win7
  :id: TEST_MY_FEATURE_WIN7
  :file: ctest.win7.xml
  :suite: Windows
  :classname: myfeaturetest
  :links: TEST_MY_FEATURE

How should I manage to get TEST_MY_FEATURE also marked as "passed" (e.g. in test reports), or how do I handle this properly?
E.g. by adding an option "result" to the "test" directive and some voodoo, so that the option is set to "passed" if all linked needs (incoming) have the option "result" with the value "passed".

Any help appreciated 😄

danwos added a commit that referenced this issue Dec 21, 2022
danwos closed this as completed in ea1b6b3 Dec 21, 2022
@kreuzberger
Contributor Author

Thank you for integrating the PR. Could you help me with the workflow questions above? They affect how and when my test cases can be updated with "real" test results via some script magic, and also the different naming of the test and test-case directives when using sphinx-needs or sphinx-test-reports.

@danwos
Member

danwos commented Dec 21, 2022

Regarding your questions, please be aware that Sphinx-Needs and the related extensions are more of a toolbox and normally do not have a hard-coded process that one must follow.
So it's up to you to decide how the Sphinx-Needs types are used.

For me, test from Sphinx-Needs is more of a test specification, whereas test-case from Sphinx-Test-Reports is more of a test result or test run.
So test-case can be used to document the implementation and result of a test.
But of course, you are free to rename some types.

The workflow question is, in my eyes, more difficult to answer, as the context is important.
A local build that shall just create the user documentation should not need any test runs, as they waste the developer's time. But a documentation build for an official SW release should contain everything, so there the CI should run the tests before the docs get built.
You could use Sphinx's internal "tags" mechanism to define and build different build targets, which may or may not contain the test-cases.
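As a minimal sketch of that tags approach (the tag name with_test_results is made up for this example; enable it with sphinx-build -t with_test_results):

.. The "only" directive includes its content just when the given tag is set.
.. The tag name "with_test_results" is hypothetical.

.. only:: with_test_results

  .. test-case:: Feature Ubuntu
    :id: TEST_MY_FEATURE_UBUNTU
    :file: ctest.ubuntu.xml
    :suite: Linux-c++
    :links: TEST_MY_FEATURE

Without the tag, the test-case (and its required XML file) is simply skipped, so a plain local docs build does not need any test results.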

For getting the test results all the way up to the requirements, you can use the dynamic function check_linked_values.
It does more or less what you have described: it checks, with some voodoo, whether an option of the incoming needs has the needed value.
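For illustration, a sketch of that idea in the terms of the example above (whether check_linked_values follows incoming or outgoing links, and whether result is the right option name, should be verified against the sphinx-needs and sphinx-test-reports docs):

.. test:: Unit test my feature
  :id: TEST_MY_FEATURE
  :links: TEST_MY_FEATURE_UBUNTU; TEST_MY_FEATURE_WIN7
  :result: [[check_linked_values('passed', 'result')]]

Here the test links down to its test-case runs, and its result option gets the value "passed" only if every linked need has result set to "passed".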

@kreuzberger
Contributor Author

OK, that answers most of my questions and matches my current workflow. Thanks for the hint about check_linked_values; I will try to get it running. Is the exception in case a test-case directive's file is missing (which already gives a warning, and that is OK) intended, or is it a bug?

@danwos
Member

danwos commented Dec 21, 2022

It's intended, because the file is defined as a kind of data source and its existence is normally expected by the user.
So a missing file may invalidate the process; therefore an exception is thrown.

But for sure we could add a configuration value to allow this.

@kreuzberger
Contributor Author

OK, no need to change or configure anything. I was just wondering that first a warning is emitted (and Sphinx keeps building) and only afterwards the exception occurs. Due to the nature of the workflow, it's fine that the file must exist. As you described, the test-protocol part should be processed after the build, and then the file exists. Thanks!
