
Idea: pytest-timeout integration #1980

Closed · eli-b opened this issue May 14, 2019 · 9 comments
Labels: interop (how to play nicely with other packages), legibility (make errors helpful and Hypothesis grokable)

Comments

eli-b commented May 14, 2019

If pytest-timeout is loaded, a timeout is set for the entire test, either globally with --timeout or per test with @pytest.mark.timeout(seconds). I think it doesn't make sense for max_examples * deadline / 1000 (the worst-case seconds Hypothesis may spend on passing examples) to be larger than the entire test's timeout.

Also, setting a timeout for a specific test is a strong indication from the user that she expects the test to take at most that long, even if she didn't change the deadline or max_examples settings.

The simplest support I can think of is adding an explanation when a deadline is reached and a (larger) timeout is set for the test. The message could suggest that the user add @settings(max_examples=smaller) or @settings(deadline=larger) to the test.

Of course, fancy automatic setting of these values can also be considered.

What do you think? I'd be glad to contribute a PR if there's interest.
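For concreteness, the mismatch described above might look like the following made-up illustration (the test and numbers are invented): with max_examples=100 and a 500 ms deadline, Hypothesis may spend up to roughly 50 s on passing examples alone, while pytest-timeout only allows 10 s for the whole test.

import pytest
from hypothesis import given, settings, strategies as st


@pytest.mark.timeout(10)                   # pytest-timeout: 10 s budget for the whole test
@settings(max_examples=100, deadline=500)  # Hypothesis: up to 100 examples, 500 ms each
@given(st.integers())
def test_something_slow(x):
    ...  # worst case on passing examples: 100 * 500 ms = 50 s, well over the 10 s budget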

Zac-HD added the interop and legibility labels on May 15, 2019

Zac-HD commented May 15, 2019

It would definitely be nice to emit a warning, probably from our pytest plugin, for any test that has a shorter pytest-timeout timeout than max_examples * deadline. The catch is that if we find a bug, shrinking may take considerably longer as it can run more examples.

So IMO the ideal case would instead be for pytest-timeout to understand Hypothesis tests upstream, and handle the timer as per-test-case. We already provide the needed hooks for this, but it depends on whether they're willing to accept such a PR - I'd certainly be happy to help write it.
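A rough sketch of the kind of check such a warning might perform, assuming a conftest.py fixture that reads the pytest-timeout mark and the active Hypothesis settings profile. This is not Hypothesis's actual plugin code, and it ignores per-test @settings overrides and keyword-style timeout marks.

import warnings

import pytest
from hypothesis import settings


@pytest.fixture(autouse=True)
def _warn_if_timeout_is_tight(request):
    # @pytest.mark.timeout(seconds) attaches a "timeout" marker to the test item.
    marker = request.node.get_closest_marker("timeout")
    if marker is not None and marker.args:
        timeout_s = marker.args[0]
        profile = settings()  # defaults from the currently active profile
        if profile.deadline is not None:
            # deadline is stored as a timedelta in recent Hypothesis versions
            worst_case_s = profile.max_examples * profile.deadline.total_seconds()
            if worst_case_s > timeout_s:
                warnings.warn(
                    f"pytest-timeout allows {timeout_s}s, but Hypothesis could spend "
                    f"up to ~{worst_case_s:.0f}s on passing examples alone "
                    "(max_examples * deadline); consider lowering max_examples."
                )
    yield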


eli-b commented May 15, 2019

If we find a bug and then time out while shrinking, the test would at least still fail.

Regarding making the pytest-timeout timeout apply per example, do you think there's a difference between

@pytest.mark.timeout(seconds)
@given(...)
def test_foo(...):

and

@given(...)
@pytest.mark.timeout(seconds)
def test_foo(...):

?

Does the first one mean the timeout is for the whole test and the second one mean it's for testing a single example?


Zac-HD commented May 15, 2019

It would fail, yes, but for the wrong reason. Worse, there would probably be no indication of why this happened (i.e. that shrinking was in progress), let alone of what errors had been found. That's why I think the per-test-case timeout is so important.

The order of the decorators does not change the effect at all. In both cases it's currently for the whole test, though as above I'd propose to change that.


eli-b commented May 15, 2019

Yes, currently both orderings are the same. I meant to ask whether they should be the same under your proposal.

I think the user might interpret the two orderings differently, and have different expectations from them. In the first one she may mean for the timeout to be for the entire test, and in the second one be for each example.


Zac-HD commented May 15, 2019

I think they should continue to be the same.

That's mostly because doing otherwise would become an implementation nightmare, but also for consistency with other decorators and pytest marks.

There's the potential for confusion either way unfortunately, and we'll just have to rely on docs rather than a runtime warning 😔


Zac-HD commented May 26, 2019

Closing this issue, because this feature can only really be added upstream, and we already provide the <test>.hypothesis.inner_test attribute that can be wrapped and reassigned to implement it.
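For reference, a minimal sketch of what such an upstream change could build on, using the <test>.hypothesis.inner_test hook mentioned above. Assumptions: Unix, signal-based alarms on the main thread, whole-second timeouts; the helper name is invented and this is not pytest-timeout's implementation.

import functools
import signal


def with_per_example_timeout(test_fn, seconds):
    # Hypothesis exposes the undecorated test as <test>.hypothesis.inner_test,
    # and reassigning it changes what runs for each generated example.
    inner = test_fn.hypothesis.inner_test

    @functools.wraps(inner)
    def timed_inner(*args, **kwargs):
        def on_alarm(signum, frame):
            raise TimeoutError(f"example took longer than {seconds}s")

        previous = signal.signal(signal.SIGALRM, on_alarm)
        signal.alarm(seconds)
        try:
            return inner(*args, **kwargs)
        finally:
            signal.alarm(0)
            signal.signal(signal.SIGALRM, previous)

    test_fn.hypothesis.inner_test = timed_inner
    return test_fn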

Zac-HD closed this as completed on May 26, 2019

eli-b commented Jun 23, 2019

I have additional information, with something that can be done in Hypothesis instead of upstream.

When running pytest-timeout with the default --timeout-method=signal, it first appeared as though the timeout was simply not respected and the test ran for a very long time, so I switched to --timeout-method=thread and used xdist to run tests in workers. That way, only the worker was killed when a timeout was reached, the test was marked as failed, and the rest of the tests still ran.
When I played with Hypothesis' verbosity setting, I found that the TimeoutError pytest-timeout raised under the default --timeout-method=signal was simply treated by Hypothesis as an ordinary test failure, prompting another run with a different example.

I propose that pytest-timeout's TimeoutError be treated as a special exception that immediately fails the entire test. I imagine this would be easy to implement.


Zac-HD commented Jun 24, 2019

Unfortunately this is implemented using pytest.fail, so we can't special-case it 😢

If upstream defines an exception type though, we'd be happy to accept a PR to add it to this list!


eli-b commented Jun 24, 2019

😢
I opened an issue.
Thanks for looking up their implementation!
