
src/sage/doctest/control.py: double the default test timeout #36223

Merged: 1 commit merged into sagemath:develop on Dec 6, 2023

Conversation

@orlitzky (Contributor) commented Sep 8, 2023

When running the test suite on an older machine, many files time out. For example,

  $ sage -t src/sage/manifolds/differentiable/tensorfield.py
  ...
  File "src/sage/manifolds/differentiable/tensorfield.py", line 248,
  in sage.manifolds.differentiable.tensorfield.TensorField
  Warning: Consider using a block-scoped tag by inserting the line
  'sage: # long time' just before this line to avoid repeating the
  tag 4 times
    s = t(a.restrict(U), b) ; s  # long time
    Timed out (and interrupt failed)
  ...
  ----------------------------------------------------------------------
  Total time for all tests: 360.3 seconds
    cpu time: 0.0 seconds
    cumulative wall time: 0.0 seconds

This has run over the default (non-long) test timeout of 300s. This commit doubles that default, a change that should be unobjectionable for a few reasons:

  1. This timeout is a last line of defense intended to keep the test suite from looping forever when run unattended. For that purpose, ten minutes is as good as five.

  2. As more tests get added to each file, those files take longer to test on the same hardware. It should therefore be expected that we will sometimes need to increase the timeout. (Basically, if anyone is hitting it, it's too low.)

  3. We now use GitHub CI instead of patchbots for most automated testing, and GitHub has its own timeout.

  4. There is a separate mechanism, --warn-long, intended to catch tests that run for too long. The test timeout should not be thought of as a solution to that problem.

Closes: #32973
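
For concreteness, here is a sketch of the kind of logic this touches. It is not a quote of src/sage/doctest/control.py; the function name and control flow are illustrative, but the constants (a 300s non-long default doubling to 600s, a 30-minute long default, and the SAGE_TIMEOUT/SAGE_TIMEOUT_LONG overrides) match what is discussed in this thread:

```python
import os

def effective_timeout(options):
    # Illustrative sketch, not the actual control.py code.  A negative
    # timeout means "use the default"; an explicit --timeout wins.
    if options.timeout < 0:
        if options.long:
            # Long-mode default: 30 minutes, overridable via SAGE_TIMEOUT_LONG.
            options.timeout = int(os.getenv("SAGE_TIMEOUT_LONG", 30 * 60))
        else:
            # The change in this PR: the non-long default goes from
            # 5 * 60 (300s) to 10 * 60 (600s), still overridable via
            # SAGE_TIMEOUT.
            options.timeout = int(os.getenv("SAGE_TIMEOUT", 10 * 60))
    return options.timeout
```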

@github-actions (bot) commented Sep 8, 2023

Documentation preview for this PR (built with commit 0737944) is ready! 🎉

@tornaria (Contributor) commented Sep 13, 2023

I kind of disagree. If a file has too many tests, it's most probably because the file itself is too long. Having a reasonable test timeout is a way to "force" such files to be refactored into smaller pieces.

Since doctests in a single file cannot be parallelized, having long test files runs into Amdahl's law.

Doctesting the whole of sagelib takes on the order of 100 minutes, or 200 minutes in long mode. It's not unreasonable to support parallelizing that (say) 100-way, which means we should strive for < 1m per file, or < 2m in long mode. Allowing 10m seems excessive.
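
To put numbers on that Amdahl's-law point, a tiny sketch (hypothetical helper, illustrative figures): with files tested in parallel but each file run serially, wall time can never drop below the longest single file.

```python
def min_wall_time(file_times, workers):
    # No schedule beats the longest single file (doctests within a
    # file run serially) or the total work spread evenly over workers.
    return max(max(file_times), sum(file_times) / workers)

# ~110 CPU-minutes of work: 2000 files at 3s each, plus one 600s file.
times = [3.0] * 2000 + [600.0]
print(min_wall_time(times, 100))  # 600.0 -- the slow file dominates
# An ideal 100-way split would take sum(times)/100 = 66s, so the one
# long file costs roughly a 9x slowdown here.
```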

Besides parallel doctesting of the whole of sagelib, when working on an issue the workflow often involves running the doctests for a few relevant files to catch errors early. If one wants to run some doctests in a file that takes too long, this becomes painful.

Maybe it's ok to have a "soft" limit that just warns (very noisily, please), and have the "hard" limit (which aborts) to be 10m.

@orlitzky (Contributor, Author) commented:

My tests have been failing for two years and no one has refactored those files yet, so I don't think the --timeout is the QA mechanism you're looking for. All it's doing is wasting half a day every time I forget to override the default with --timeout=0. For improving the usability of the test suite, wouldn't it be better to have the doctest framework count the number of tests in a file and warn about that instead? The timeout is just a bad proxy for that measurement. Obviously I would like to see the test suite improved too; I just don't think the timeout is the right mechanism for it.
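
A counter like that would be cheap to bolt on. A rough sketch (hypothetical, standing outside the real doctest framework; counting prompt lines only approximates what the parser actually extracts, and the limit is arbitrary):

```python
import re

def count_doctests(path, limit=500):
    # Count 'sage:' and '>>>' prompt lines as a crude proxy for the
    # number of doctests in a file.
    with open(path) as f:
        n = sum(1 for line in f if re.match(r"\s*(sage:|>>>)", line))
    if n > limit:
        print(f"Warning: {path} has {n} doctests; consider splitting it.")
    return n
```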

@orlitzky (Contributor, Author) commented:

I'll put it another way: has the timeout done anything useful, once, in the past two years, other than annoy the shit out of me? I'll bet the answer is no.

@tornaria (Contributor) commented:

> My tests have been failing for two years and no one has refactored those files yet, so I don't think the --timeout is the QA mechanism you're looking for. All it's doing is wasting half a day every time I forget to override the default with --timeout=0.

I understand your frustration; that's why I suggested a (reasonable) soft limit which doesn't abort, plus a (larger) hard limit which does.

@orlitzky (Contributor, Author) commented:

> My tests have been failing for two years and no one has refactored those files yet, so I don't think the --timeout is the QA mechanism you're looking for. All it's doing is wasting half a day every time I forget to override the default with --timeout=0.

> I understand your frustration; that's why I suggested a (reasonable) soft limit which doesn't abort, plus a (larger) hard limit which does.

This commit does half of that. You can add a soft limit if you want. I think it will be useless for QA purposes, for all the reasons the existing hard limit has been useless.

@jhpalmieri (Member) commented:

Why not use #33022 (or second_on_modern_computer) to have a "fixed" timeout that should work independently of the platform?
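
The idea behind such a calibrated timeout, as a sketch (the benchmark, reference constant, and names here are hypothetical; this does not reproduce second_on_modern_computer's actual implementation): time one fixed workload, then scale the timeout by how much slower the current machine is than a reference.

```python
import time

REFERENCE_SECONDS = 1.0  # hypothetical: workload duration on a "modern" machine

def fixed_workload():
    # Any deterministic, CPU-bound task works for calibration.
    return sum(i * i for i in range(10**7))

def calibrated_timeout(base_timeout=300):
    start = time.perf_counter()
    fixed_workload()
    slowdown = (time.perf_counter() - start) / REFERENCE_SECONDS
    # A machine 3x slower than the reference gets a 3x larger timeout;
    # faster machines keep the base value.
    return base_timeout * max(slowdown, 1.0)
```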

@jhpalmieri (Member) commented:

By the way, I know you're not proposing changing the long timeout, but that one is useful, since on OS X there is at least one file that always times out (#35646). What happens for me is: all doctests complete except for that one file, and then I am waiting for that file to time out so that I get (a) confirmation that this was the only remaining file and (b) a complete report on the doctests. It would be really annoying if that timeout were increased, because I would just have to wait longer. I wonder if anyone has the same experience with the non-long timeout.

@orlitzky (Contributor, Author) commented:

> Why not use #33022 (or second_on_modern_computer) to have a "fixed" timeout that should work independently of the platform?

There's no constant value that will work. As we add doctests (a good thing!), each file will take longer and longer to test. Eventually we just have to increase the timeout.

Replacing second_on_modern_computer with a fixed amount of work would make the timeout less dependent on the platform, which I guess addresses the problem by annoying everyone at the same time instead of just me with my two old computers. But even if everyone starts hitting the timeout simultaneously, the solution is still going to be for someone to type a bigger number into the file and commit it.

> It would be really annoying if that timeout were increased, because I would just have to wait longer.

What happens if you test this file with --timeout=0 and wait a day or two? Are you sure you have an infinite loop, and not just a reeeeeeeally slow test? The latter would be a --warn-long issue and not a --timeout one.

@jhpalmieri (Member) commented:

> Why not use #33022 (or second_on_modern_computer) to have a "fixed" timeout that should work independently of the platform?

> There's no constant value that will work. As we add doctests (a good thing!), each file will take longer and longer to test. Eventually we just have to increase the timeout.

Right, but I don't see much evidence that we have to do this yet. Your two computers, yes, but I don't remember seeing anyone citing timeouts in sage-release. I think that one of the points behind the environment variable SAGE_TIMEOUT is for exactly your situation: you can set it to 10 minutes on your own machines. Once we see more people talking about frequent timeouts, that's when it would make sense (to me) to increase the number.
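
In concrete terms, that override is a single environment variable, read in seconds (consistent with the 300s default discussed above), e.g.:

  $ SAGE_TIMEOUT=600 sage -t src/sage/manifolds/differentiable/tensorfield.py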

> It would be really annoying if that timeout were increased, because I would just have to wait longer.

> What happens if you test this file with --timeout=0 and wait a day or two? Are you sure you have an infinite loop, and not just a reeeeeeeally slow test? The latter would be a --warn-long issue and not a --timeout one.

It's been going with --timeout=0 for more than 36 hours now. I think I'll hit ctrl-c. I guess I could also set SAGE_TIMEOUT_LONG to be something shorter than 30 minutes for future runs so I don't have to wait as long.

@orlitzky (Contributor, Author) commented:

> There's no constant value that will work. As we add doctests (a good thing!), each file will take longer and longer to test. Eventually we just have to increase the timeout.

> Right, but I don't see much evidence that we have to do this yet. Your two computers, yes, but I don't remember seeing anyone citing timeouts in sage-release. I think that one of the points behind the environment variable SAGE_TIMEOUT is for exactly your situation: you can set it to 10 minutes on your own machines. Once we see more people talking about frequent timeouts, that's when it would make sense (to me) to increase the number.

I'm not only building sage for myself, and this is an indication of a wider problem: the doctests fail for anyone with a slower computer. That shouldn't be the case. I'm reporting it here, now, on behalf of everyone with a raspberry pi or cheap web hosting. Users shouldn't have to export MAKE_IT_WORK=true to make the test suite pass when there's no downside to having that be the default.

@orlitzky (Contributor, Author) commented:

Here's a CI failure due to a (non-long) timeout:

https://github.com/sagemath/sage/actions/runs/6993574105/job/19026409690?pr=36776

2023-11-26T09:38:30.3851510Z sage -t --random-seed=142732370142697541280655968131944483835 src/sage/graphs/generic_graph.py
2023-11-26T09:38:30.4112320Z Timed out

@tornaria (Contributor) commented:

I'm convinced, although I still think it would be a good idea to have a "soft" timeout that COMPLAINS LOUDLY.

I'm thinking about an error, not a warning, that shows up in the summary at the end of the execution. For example:

----------------------------------------------------------------------
sage -t --long --random-seed=286735480429121101562228604801325644303 sage/rings/tests.py  # 0 doctest failed in 350s which is TOO LONG
----------------------------------------------------------------------

The comment can also be displayed when some doctests fail, but the point is that hitting the "soft timeout" will ensure the error is printed even with 0 failures.
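
A sketch of that kind of reporting (the function name and the 120s soft limit are hypothetical; the real summary is produced inside the doctest framework):

```python
def summary_line(cmd, failed, elapsed, soft_limit=120):
    # Per-file summary line; append a loud error when the file ran past
    # the soft limit, even with zero failures.
    line = f"{cmd}  # {failed} doctest(s) failed in {elapsed:.0f}s"
    if elapsed > soft_limit:
        line += f" which is TOO LONG (soft limit {soft_limit}s)"
    return line

print(summary_line("sage -t --long sage/rings/tests.py", 0, 350))
# sage -t --long sage/rings/tests.py  # 0 doctest(s) failed in 350s
# which is TOO LONG (soft limit 120s)
```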

@orlitzky (Contributor, Author) commented:

> I'm convinced, although I still think it would be a good idea to have a "soft" timeout that COMPLAINS LOUDLY.

Thank you. A "long file" warning certainly couldn't hurt.

The psutil package is coming back as a dependency of ipykernel soon. Afterwards I'll reimplement my --warn-long cputime branch using psutil so that it works on macOS. Then hopefully it can get reviewed and these doctest PRs can start to go away.
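
A minimal sketch of what a psutil-based check could look like (assuming per-test CPU time is the metric, per the branch description; the context-manager wrapper and all names are hypothetical):

```python
import contextlib
import psutil

@contextlib.contextmanager
def warn_if_long(test_name, warn_long=5.0):
    # psutil.Process.cpu_times() reports user+system CPU time and works
    # the same way on Linux and macOS, unlike resource-based approaches.
    proc = psutil.Process()
    before = proc.cpu_times()
    try:
        yield
    finally:
        after = proc.cpu_times()
        used = (after.user - before.user) + (after.system - before.system)
        if used > warn_long:
            print(f"Warning: {test_name} used {used:.2f}s of CPU time, "
                  f"over the --warn-long threshold of {warn_long}s")

# Usage around a single doctest (run_one_doctest is hypothetical):
#   with warn_if_long("sage.rings.tests"):
#       run_one_doctest()
```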

Once there aren't 5(?) pull requests in the pipeline that touch the doctest framework, it would be more fun to start thinking about a 6th.

vbraun pushed a commit to vbraun/sage that referenced this pull request on Dec 4, 2023: "src/sage/doctest/control.py: double the default test timeout" (commit message identical to the PR description above; URL: sagemath#36223; Reported by: Michael Orlitzky).
@vbraun merged commit d2b4578 into sagemath:develop on Dec 6, 2023 (14 checks passed).
@mkoeppe added this to the sage-10.3 milestone on Dec 6, 2023.
@orlitzky deleted the increase-default-test-timeout branch on January 24, 2024.