
Skipping tests at runtime with GTEST_SKIP() #1544

Closed · wants to merge 12 commits

Conversation

@app13y (Contributor) commented Apr 4, 2018

Description

Nothing fancy: plain and simple test skipping based on a condition evaluated at runtime. One might want to skip

  • tests which are not supported in the current environment,
  • tests which depend on an external resource with conditional availability,
  • tests which require privileges when run under a user without them, etc.

This feature has been requested several times; see #160, #490 and #822. It works as demonstrated in this sample test:

TEST(SkipTest, DoesSkip) {
  SKIP();
  EXPECT_EQ(0, 1);
}

yields

[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from SkipTest
[ RUN      ] SkipTest.DoesSkip
[  SKIPPED ] SkipTest.DoesSkip (1 ms)
[----------] 1 test from SkipTest (1 ms total)

[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (1 ms total)
[  PASSED  ] 0 tests.
[  SKIPPED ] 1 test, listed below:
[  SKIPPED ] SkipTest.DoesSkip

Skipped tests are displayed in green, and are considered neither successful nor failed, since they never run to completion.

Further work

There are some conceptual issues which are not yet resolved, but I would love to hear your thoughts on them.

  1. Listing all skipped tests on teardown might not be such a good idea; it is at least reasonable to add a compile option to control this behaviour.
  2. If a test is skipped at any point, all preceding failures which occurred in this test are ignored; this can be handled when assigning the test result by checking for failures before setting the kSkip result.
  3. Skipping is not available from SetUp() and TearDown() methods, as well as from functions external to the test body, since skipping aborts only the current function; this can be resolved by adding additional AssertHelpers in a fashion similar to handling FAIL().
  4. Additional improvements should be made in loggers to support this additional test result.

@gennadiycivil (Contributor)

@aprelev Thank you for your contribution. On the surface I am tempted to say that the ability to skip tests is already available via disabling. Could you please explain why this is different and better? Thank you very much.

@app13y (Contributor, Author) commented Apr 5, 2018

Sure! The key here is the ability to skip tests at run-time (based on a run-time condition); this feature is introduced in addition to compile-time disabling, rather than as a replacement.

Consider a test which can fail or succeed, but is only supported (i.e. can be executed) when a certain condition is met, and testing the condition can only be carried out at run-time; otherwise the test is not supported. As of now, there is no pretty way to handle this:

  • the test cannot be disabled with the DISABLED_ prefix, because it is not known in advance (at compile-time) whether the test will be supported,
  • the test can be manually made to fail or succeed (with FAIL() or return in the test body) if the condition proves to be false, but this would report a definitive result, such as failure or success, although the result of such a test should be inconclusive.

My use-case

Test description

Consider a test which validates the correctness of hardware-accelerated AES encryption (e.g. with AES-NI); such a test can only be executed on processors with AES-NI support, and it is not known in advance whether the target machine has such a processor.

Environment constraints

The test binary is pre-built and distributed onto several target machines, none of which have a C++ compiler installed.

Solution with SKIP()

TEST(SampleSuite, EncryptionTest) {
  if (!is_available(AES_NI))
    SKIP();

  test_encryption();
}

Other use-cases

A few cases were described in the referenced issues:

#160 We are using GTEST for very many tests, however all test in our test case
definition are linked together and executed. Unfortunately, some tests can
only be executed if the environment is correctly set up (must be done
manually). Therefore these tests are always failing, but in fact they could
not be executed. Would it be possible to the test case result "SKIPPED" as a new class to a
test case (maybe colored yellow ;-) to abort a test if it determines it
cannot be executed in this environment?

#160 Typical use case would be testing a library that
might do things that require administrative privileges. If the tests are run as
a limited user, then the requiring-root tests would have a guard marking the
test as skipped if administrative privileges are absent.

#160 I use gtest in my work, and it rely on the remote server. So I want some test is skipped if the remote server is break.

#490 It would be nice to have a possibility to skip some tests basing on criteria
available during runtime.

#822 we want to do unit und integration testing of our scientific application that uses MPI. <...> Some tests only run sequentially (some other use MPI to communicate between instances of the same test case), therefore they should be skipped when the rank != 0. For that we need some means of runtime skipping, like the boost.test prondition decorator

More examples can be found elsewhere:

25999057 I have a set of typed test cases in google test. However, some of these test cases are simply not applicable for a specific type parameter.

Such a feature has been implemented in other testing frameworks, e.g. throw SkipException or Assert.Inconclusive().

@gennadiycivil (Contributor)

@aprelev Thank you for the explanation. We had a bit of internal discussion; yes, this is a good feature for googletest to consider.
That said, we prefer a different implementation, one without a macro. Would you be willing to re-work this solution using just a function, something like testing::SkipTest()? Thank you again.

@app13y (Contributor, Author) commented Apr 8, 2018

I will look into it. What is your take on problems mentioned earlier?

  1. Listing all skipped tests on teardown might not be such a good idea; it is at least reasonable to add a compile option to control this behaviour.
  2. If a test is skipped at any point, all preceding failures which occurred in this test are ignored; this can be handled when assigning the test result by checking for failures before setting the kSkip result.
  3. Skipping is not available from SetUp() and TearDown() methods, as well as from functions external to the test body, since skipping aborts only the current function; this can be resolved by adding additional AssertHelpers in a fashion similar to handling FAIL().
  4. Additional improvements should be made in loggers to support this additional test result.

@gennadiycivil (Contributor)

@aprelev My comments:

> Listing all skipped tests on teardown might not be such a good idea; it is at least reasonable to add a compile option to control this behaviour.

Could you please clarify why you feel this may not be a good idea? I don't immediately see issues with it.

> If a test is skipped at any point, all preceding failures which occurred in this test are ignored; this can be handled when assigning the test result by checking for failures before setting the kSkip result.

Skipping preceding failures seems logical. Skip is a user override, meaning "I know better that this test cannot run, so let's skip it". Moreover, deciding to skip the test should be done at the very beginning of the test.

> Skipping is not available from SetUp() and TearDown() methods, as well as from functions external to the test body, since skipping aborts only the current function; this can be resolved by adding additional AssertHelpers in a fashion similar to handling FAIL().

I don't see a need to complicate it this way. Skipping a test is a user override for that test. It should be decided at the test level and not outside it.

Don't forget the docs!

Thank you again
G

@googlebot

So there's good news and bad news.

👍 The good news is that everyone that needs to sign a CLA (the pull request submitter and all commit authors) have done so. Everything is all good there.

😕 The bad news is that it appears that one or more commits were authored or co-authored by someone other than the pull request submitter. We need to confirm that all authors are ok with their commits being contributed to this project. Please have them confirm that here in the pull request.

Note to project maintainer: This is a terminal state, meaning the cla/google commit status will not change from this state. It's up to you to confirm consent of the commit author(s) and merge this pull request when appropriate.

@app13y (Contributor, Author) commented Apr 10, 2018

Okay, I've looked into implementing skipping with a function.

As far as I understand, a fatal failure, FAIL(), escapes the current scope by expanding into a return statement, while a non-fatal failure, ADD_FAILURE(), does not. That is why test skipping was introduced as a macro:

#define SKIP() \
  return AssertHelper() = Message()
  ^^^^^^

TestBody() {
  SKIP();
  FAIL(); // < This will not be executed.
}

Any function, on the other hand, fails to escape the current scope unless an exception is thrown or a long jump is performed, which renders skipping useless:

TestBody() {
  ::testing::SkipTest();
  FAIL(); // < This will be executed anyway.
}

Could you be more specific about how test skipping could be implemented with a function, and what was the rationale behind your decision?

@gennadiycivil (Contributor)

@aprelev Sorry for the delay. It will be some time before I can give this my full attention. The current priority is to produce the 1.8.1 release, see #1079. This is still important but takes second priority. Thank you again.

@app13y (Contributor, Author) commented Apr 13, 2018

@gennadiycivil Okay, no worries :) poke me here once you are done with important stuff

@dturner commented Jul 3, 2018

Any update here? SKIP() seems like a useful function for my use case as well. I am writing tests for Android NDK code, and some tests depend on features introduced at a specific API level. For me it makes sense to write something like:

if (androidAPI() < 26) {
  SKIP();
} else {
  ASSERT_EQ(...);
}

@app13y (Contributor, Author) commented Jul 4, 2018

I am waiting for @gennadiycivil's attention on this feature and the implementation mechanics.

@aidancully

I don't know if this needs to be in this PR, but it would be useful to me if we could skip all tests in a test fixture. Something like:

class MyTestFixture : public ::testing::Test {
protected:
  void SetUp() override {
    if (runtime_check_failed) {
      SkipAll(); // defined in base class
      return;
    }
  }
};

@gennadiycivil (Contributor)

@aprelev please resolve all conflicts. Thank you

@googlebot

A Googler has manually verified that the CLAs look good.

(Googler, please make sure the reason for overriding the CLA status is clearly documented in these comments.)

@googlebot

CLAs look good, thanks!

@app13y (Contributor, Author) commented Sep 22, 2018

Ping.

@gennadiycivil (Contributor)

@aprelev I made the changes in your branch. Please add documentation and ping me when ready
Thank you again

@app13y (Contributor, Author) commented Sep 30, 2018

> Please add documentation and ping me when ready.

Okay, sure thing.

@zrphercule

@aprelev Thanks! Your PR really helps a lot. Hope it can be merged soon =)

@gennadiycivil (Contributor) commented Oct 1, 2018

This PR is ready to be merged. I am waiting for the docs @aprelev @zrphercule

gennadiycivil added a commit that referenced this pull request Oct 2, 2018
PiperOrigin-RevId: 215273083
@gennadiycivil (Contributor)

@aprelev given that others are waiting, I made changes internally that fix the small issues in the code, the same changes I made in your branch before.
I will push the changes shortly.
You might see that this PR is "Closed" rather than "Merged", but when you look at the actual commits you should see your name as the author.

@gennadiycivil (Contributor)

@aprelev Ping on documentation?

@sandy5842

Hi guys, from which release onwards is GTEST_SKIP() available? I need it for my unit testing. I could not find sufficient details on this, so I am posting here.

@Apteryks commented Jan 3, 2019

@sandy5842 given that the last tagged release (1.8.1 at the time of writing) dates back to the end of August 2018, I'd say it is unfortunately still unreleased.

@gennadiycivil (Contributor)

It is in the master branch and has been for a while.

pbaughman pushed a commit to pbaughman/googletest that referenced this pull request Mar 29, 2019
  - Update to the version that adds GTEST_SKIP macro to google test
  - google#1544
@samsta commented Jun 24, 2019

Hi guys, does this also put <skipped/> in the XML output? That would be useful for CI systems like Jenkins.

sebkraemer added a commit to sebkraemer/googletest that referenced this pull request Dec 15, 2020
A subsection "Skipping test execution" was added to document GTEST_SKIP
and where it can be used.

relates issue google#1544
sebkraemer added a commit to sebkraemer/googletest that referenced this pull request Apr 7, 2021
A subsection "Skipping test execution" was added to document GTEST_SKIP
and where it can be used.

relates issue google#1544
sebkraemer added a commit to sebkraemer/googletest that referenced this pull request Apr 15, 2021
A subsection "Skipping test execution" was added to document GTEST_SKIP
and where it can be used.

relates issue google#1544