Timeout problem: timings vary depending on test selection #834
I just ran some Specifications derived from a filter and I have two problems:
In fact this was a selection of tests, like so:
When I do a global test, using
I have noticed that the first test in a Specification usually seems to take longer than the others, simply because it runs first. I think I have also noticed that, at least with a selection of Specs or tests chosen as above, the first Spec is subject to a lot of "initial lag" as well, seemingly while Spock "ramps up to speed".
It may be, of course, that the main function of the @Timeout annotation is simply to prevent test runs from getting totally messed up in the event of a strange anomaly. So maybe all Specifications should be set to ( value = 1, unit = SECONDS ) ... or longer. In other words, maybe it is just a failsafe mechanism?
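For reference, a failsafe-style limit can be applied at the spec level so it covers every feature method; a minimal Spock sketch (the class name and the 5-second limit are illustrative choices, not a recommendation):

```groovy
import java.util.concurrent.TimeUnit

import spock.lang.Specification
import spock.lang.Timeout

// Failsafe: every feature method in this spec must finish within 5 seconds.
// The limit is deliberately generous to tolerate first-run "ramp-up" lag.
@Timeout(value = 5, unit = TimeUnit.SECONDS)
class FailsafeSpec extends Specification {

    def "completes well within the failsafe limit"() {
        expect:
        1 + 1 == 2
    }
}
```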
But it is possible, as you develop your code, that a test which used to run really snappily no longer does, and a @Timeout mechanism should arguably help you identify problems of that kind. When a test which passes in 25 ms during a
One partial workaround for the problem described above would be the ability to set the timeout values for a given Gradle Test run... (so for the filter-based run as above I would set the timeout values much longer than for a
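One way to sketch such a per-run budget: pass it from Gradle into the test JVM as a system property (the property names here are hypothetical, and note that stock @Timeout only accepts compile-time constants, so a custom extension would have to read the value):

```groovy
// build.gradle sketch: choose a timeout budget per invocation, e.g.
//   gradle test -PtestTimeoutSeconds=10
test {
    // default to 1 second when no override is given on the command line
    systemProperty 'test.timeout.seconds',
        (project.findProperty('testTimeoutSeconds') ?: '1')
}
```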
I'm also not clear why my problem 2), the "Could not sync with Watcher for method" error, happened. Could it be something to do with the timeout mechanism not resetting in time for the second method?
I have no idea whether anyone else a) has had this experience, b) thinks it would be nice if you could use the timeout mechanism to "police" your tests in the way I suggest, or c) knows what sort of code enhancements would be needed to get @Timeout to measure exclusively the time spent in the test code itself: at the moment it appears that the timeout also covers some "preparatory" and/or "cleanup" activity.
Additional Environment information
Groovy Version: 2.6.0-alpha-2 JVM: 1.8.0_121 Vendor: Oracle Corporation OS: Windows 10
Build tool version
Build time: 2017-12-20 15:45:23 UTC
W10, using Cygwin
Build-tool dependencies used
Quite a few... could this be relevant?
As you have already said the
If you just want to fail tests that take longer, but don't actually have to stop their execution, you could either do that based on the test report or write another extension that just measures the test execution time and fails the test if it ran too long. This approach would be much more resource-efficient, and make more sense semantically. Also take into account that, depending on the load on the system and its specs, the times can vary hugely.
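The measuring idea above can be sketched in plain Groovy (helper name and error message are made up for illustration): record wall-clock time around the test body and fail afterwards, without ever interrupting the body the way @Timeout's watcher thread does:

```groovy
import java.util.concurrent.TimeUnit

// Hypothetical helper: let the body run to completion, then fail the test
// if it exceeded the limit. No watcher thread, no interruption needed.
def failIfSlowerThan(long limit, TimeUnit unit, Closure body) {
    long start = System.nanoTime()
    body.call()
    long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start)
    if (elapsedMs > unit.toMillis(limit)) {
        throw new AssertionError(
            "Test took ${elapsedMs} ms, limit was ${unit.toMillis(limit)} ms" as Object)
    }
}
```

A real Spock extension would do the same measurement in an `IMethodInterceptor` around the feature method, which also keeps setup/cleanup out of the measured span.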