
Timeout problem: timings vary depending on test selection #834

Open
Mrodent opened this Issue Apr 14, 2018 · 1 comment


Mrodent commented Apr 14, 2018

I just ran some Specifications selected by a Gradle filter, and I have two problems:

  1. The first test in the Spec timed out despite the Spec's timeout value being set to 200 ms (a generous limit!)
  2. The second test in the Spec failed with a timeout-related message: "Could not sync with Watcher for method ..."

In fact this was a selection of tests, like so:

    task currentTestBunch(type: Test, dependsOn: testClasses) {
        filter { includeTestsMatching '*UT_LTFM_BASH_Processing*' }
        ...
    }

When I do a global test, using gradle build, all my tests pass fine.

I have noticed that the first test in a Specification usually seems to take longer than the others, simply because it is the first. I think I have also noticed that, at least with a selection of Specs or tests chosen as above, the first Spec is subject to a lot of "initial lag" as well, seemingly while Spock "ramps up to speed".

It may be, of course, that the main function of the @Timeout annotation is simply to prevent test runs from getting totally messed up in the event of a strange anomaly. So maybe all Specifications should be set to ( value = 1, unit = SECONDS ) ... or longer. In other words, maybe it is just meant as a failsafe mechanism?
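
To be concrete, this is the kind of usage I mean; the class name and values below are just illustrative, chosen to match the filter above:

    import spock.lang.Specification
    import spock.lang.Timeout

    import static java.util.concurrent.TimeUnit.MILLISECONDS
    import static java.util.concurrent.TimeUnit.SECONDS

    // Spec-level fail-safe: applies to every feature method unless overridden
    @Timeout(value = 1, unit = SECONDS)
    class UT_LTFM_BASH_ProcessingSpec extends Specification {

        // tighter per-feature limit, used to "police" a test that should stay snappy
        @Timeout(value = 200, unit = MILLISECONDS)
        def "BASH processing completes quickly"() {
            expect:
            true // real assertions go here
        }
    }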

But it is possible, as you develop your code, that a test which used to run really snappily no longer does, and a @Timeout mechanism should arguably help you identify problems of that kind. When a test which passes in 25 ms during a gradle build fails to complete in 200 ms in a filtered selection of tests, you can't really use it for that: you have to set the timeout to far longer than the realistic maximum time the test should actually be permitted to take.

One partial workaround for the problem described above would be the ability to set the timeout values for a given Gradle Test run (so for the filter-based run above I would set the timeout values much longer than for a gradle build). As far as I can tell, this is not currently possible.
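
If something like this existed, I imagine configuring it roughly as below. The systemProperty line is purely a sketch of how a per-run value could be passed in; @Timeout itself has no way to read such a property, so a custom extension (or similar) would have to pick it up:

    task currentTestBunch(type: Test, dependsOn: testClasses) {
        filter { includeTestsMatching '*UT_LTFM_BASH_Processing*' }
        // hypothetical property name; nothing in Spock reads this out of the box
        systemProperty 'spec.timeout.millis', '2000'
    }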

I'm also not clear why the problem 2) error, "Could not sync with Watcher for method", happened. Could it be something to do with the timeout mechanism not resetting in time for the second method?

I have no idea whether anyone else has a) had this experience, b) thinks it would be nice if you could use the timeout mechanism to "police" your tests in the way I suggest, or c) knows what sort of code enhancements would be needed to get @Timeout to measure exclusively the time spent in the code executed by the test itself: at the moment it appears that the timeout also covers some "preparatory" and/or "cleanup" activity.

Additional Environment information

Java/JDK

java -version
java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)

Groovy version

Groovy Version: 2.6.0-alpha-2 JVM: 1.8.0_121 Vendor: Oracle Corporation OS: Windows 10

Build tool version


Gradle 4.4.1

Build time: 2017-12-20 15:45:23 UTC
Revision: 10ed9dc355dc39f6307cc98fbd8cea314bdd381c

Groovy: 2.4.12
Ant: Apache Ant(TM) version 1.9.9 compiled on February 2 2017
JVM: 1.8.0_121 (Oracle Corporation 25.121-b13)
OS: Windows 10 10.0 amd64

Operating System

W10, using Cygwin

IDE

Eclipse (Oxygen)

Build-tool dependencies used

Quite a few... could this be relevant?

Gradle/Grails

testCompile 'org.spockframework:spock-core:1.1-groovy-2.4'

Member

leonard84 commented Jun 30, 2018

As you have already said, the @Timeout extension is intended to be used as a fail-safe and not as a precise test-timing tool. Using it actually costs a bit of performance, since a second thread is spawned to monitor the time, so using it to enforce something like a 25 ms run time is not the best choice.

If you just want to fail tests that take longer, but don't actually have to stop their execution, you could either do that based on the test report or write another extension that just measures the test execution time and fails the test if it ran too long. This approach would be much more resource-efficient and would make more sense semantically. Also take into account that, depending on the load on the system and its specs, the times can vary hugely.
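
A rough, untested sketch of what such an extension could look like (all names here are made up):

    import java.lang.annotation.ElementType
    import java.lang.annotation.Retention
    import java.lang.annotation.RetentionPolicy
    import java.lang.annotation.Target
    import java.util.concurrent.TimeUnit

    import org.spockframework.runtime.extension.AbstractAnnotationDrivenExtension
    import org.spockframework.runtime.extension.ExtensionAnnotation
    import org.spockframework.runtime.extension.IMethodInterceptor
    import org.spockframework.runtime.extension.IMethodInvocation
    import org.spockframework.runtime.model.FeatureInfo

    // Hypothetical annotation: fail a feature that takes longer than the limit,
    // without spawning a watcher thread or interrupting the running test.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @ExtensionAnnotation(SoftTimeLimitExtension)
    @interface SoftTimeLimit {
        long millis() default 200L
    }

    class SoftTimeLimitExtension extends AbstractAnnotationDrivenExtension<SoftTimeLimit> {
        @Override
        void visitFeatureAnnotation(SoftTimeLimit annotation, FeatureInfo feature) {
            feature.featureMethod.addInterceptor(new SoftTimeLimitInterceptor(annotation.millis()))
        }
    }

    class SoftTimeLimitInterceptor implements IMethodInterceptor {
        private final long limitMillis

        SoftTimeLimitInterceptor(long limitMillis) {
            this.limitMillis = limitMillis
        }

        @Override
        void intercept(IMethodInvocation invocation) throws Throwable {
            long start = System.nanoTime()
            invocation.proceed() // run the feature method to completion
            long elapsedMillis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start)
            if (elapsedMillis > limitMillis) {
                throw new AssertionError("Feature took ${elapsedMillis} ms, soft limit is ${limitMillis} ms".toString())
            }
        }
    }

A feature method would then opt in with something like @SoftTimeLimit(millis = 300); unlike @Timeout, nothing interrupts the test, it is simply failed after the fact.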
