Proposed RFC Feature editor_test.py reconciles included Report.results against actual Report summary #37

Open · smurly opened this issue Apr 21, 2022 · 1 comment
Labels: priority/major, rfc-feature, triage/accepted

smurly commented Apr 21, 2022

Summary:

Currently, when a test runs through editor_test.py (batch/parallel), the Editor Python Bindings (EPB, hydra) script contains some number of Report.result and Report.critical_result calls. If the script ends early without reaching those calls, there is no accounting that tests were skipped or missed; only when a log is successfully captured can we see which Report lines were actually logged.
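
For context, an EPB script logs outcomes through calls like the following (a minimal sketch; the helper functions and test tuples are hypothetical, only the Report calls mirror the API named above):

```python
# Hypothetical excerpt from an EPB (hydra) test script; the setup calls
# stand in for real Editor operations.
from editor_python_test_tools.utils import Report

entity_created = ("Entity was created", "Entity was NOT created")
component_added = ("Mesh component was added", "Mesh component was NOT added")

entity = create_test_entity()           # hypothetical helper
Report.critical_result(entity_created, entity is not None)

component = add_mesh_component(entity)  # hypothetical helper
# If the Editor crashes or hangs before reaching this line, the result
# below is simply absent from the log; nothing records that it was expected.
Report.result(component_added, component is not None)
```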

What is the relevance of this feature?

If the application under test (AUT) exits early without indicating an error, or becomes unresponsive and fails to log information before being terminated on timeout, we have no way of assessing which test lines were skipped or missed. We can only see that the overall test timed out or ended with a particular exit code (expected or otherwise). If the test exits without reporting an explicit failure, the overall assessment is a pass, even when some number of test lines were skipped.

Feature design description:

  • Before executing the AUT, we should parse the EPB script for expected results so that, on completion, we can reconcile the expected results against the actual Report summary. This would prevent skipped test functionality from going unnoticed.

Technical design description:

  • Parse the EPB script for its Report.result lines to build an accounting of expected results (a sketch follows this list)
    • Code branching (e.g. if-statements) will add complexity to this accounting
    • Asserts are also important to verify as having been evaluated (this may require spying, profiling, or a Report.Assert helper?)
  • After executing the AUT with the EPB script, compare the actual result summary against the expected results and reconcile the difference
  • Surface discrepancies in test reporting through a mechanism that alerts users to missed testing
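
A minimal sketch of the parse-and-reconcile step, using static analysis via Python's ast module (the function names and the logged_lines input are assumptions; how the actual summary is keyed back to call sites is an open design question):

```python
import ast

REPORT_CALLS = {"result", "critical_result"}

def expected_report_calls(epb_script_path):
    """Statically collect Report.result / Report.critical_result call sites."""
    with open(epb_script_path) as script:
        tree = ast.parse(script.read())
    expected = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr in REPORT_CALLS
                and isinstance(node.func.value, ast.Name)
                and node.func.value.id == "Report"):
            # Record the call site; resolving the test tuple's text would
            # need further analysis, so the line number serves as the key.
            expected.append((node.lineno, node.func.attr))
    return expected

def reconcile(expected, logged_lines):
    """Return expected call sites whose results never appeared in the summary.

    logged_lines is assumed to be the set of script line numbers whose
    Report calls actually produced output; capturing that mapping is an
    open implementation detail.
    """
    return [call for call in expected if call[0] not in logged_lines]
```

Because of branching, the static expectation is an upper bound on what runs (and loops can push the actual count above it), so discrepancies would likely surface as warnings rather than hard failures.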

What are the advantages of the feature?

  • This may prevent a false perception of testing completeness; we will know if test lines are not running.
  • It provides a consistency check that the full set of assertions in a script ran and that the script did not exit early.

What are the disadvantages of the feature?

  • Some tests may sit within branching code that intentionally does not run under certain conditions, or that is meant to run multiple times over a list of values. This would make reconciliation difficult to achieve. (implementation complexity)
  • It may be more expensive in resources to build this than to manually verify that testing completes as expected. (auditing test health is unlikely to be done except when the test or feature fails for other reasons)

How will this be implemented or integrated into the O3DE environment?

Work would be done within the editor_test.py batch/parallel framework and should be transparent to test implementers.

Are there any alternatives to this feature?

How will users learn this feature?

  • Possibly through docstrings in editor_test.py, or through differences in the report summary output

Are there any open questions?

@smurly added the rfc-feature label on Apr 21, 2022
@Kadino added the triage/accepted label on Apr 26, 2022
@AMZN-Dk added the needs-priority label on May 10, 2022
@Kadino added the priority/major label and removed the needs-priority label on Jun 21, 2022

Kadino commented Jun 21, 2022

This can help add warnings for when asserts are not being reached, or could never be reached.
