
regret.testing should provide a testing helper for ensuring unexpected deprecations fail tests #28

Open
Julian opened this issue Aug 25, 2020 · 3 comments


@Julian
Owner

Julian commented Aug 25, 2020

Right now Recorder doesn't necessarily have a way to ensure that deprecations are explicitly "expected": a Deprecator can call Recorder.emit, and if the test case does not check the emitted deprecation, nothing will complain (cf. how trial.unittest.TestCase will fail tests that do not flush warnings).

There should be a testing helper which ensures that every deprecation has expect called for it; otherwise the test fails.

(Perhaps it should then be impossible to do anything but expect deprecations in this way -- i.e. impossible to use Recorder without accounting for all deprecations).

Julian mentioned this issue Aug 25, 2020
@adiroiban
Contributor

One idea is to use a context manager that puts all warnings in a queue.

Then do all the checks inside the context manager, removing warnings from the queue as they are checked.

When the context manager exits, it raises an AssertionError if the queue is not empty.

In this way, you don't depend on any testing framework.
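
To make the idea concrete, here is a minimal sketch of such a context manager, built only on the standard library's warnings module rather than on regret's Recorder; the name expected_deprecations and the old_function used below are purely illustrative, not part of regret's API.

```python
import warnings
from contextlib import contextmanager


@contextmanager
def expected_deprecations():
    """Record DeprecationWarnings in a queue; fail on exit if any remain."""
    with warnings.catch_warnings(record=True) as queue:
        # Always record DeprecationWarnings, regardless of global filters.
        warnings.simplefilter("always", DeprecationWarning)
        yield queue  # the test pops warnings off the queue as it checks them
        leftover = [
            str(w.message)
            for w in queue
            if issubclass(w.category, DeprecationWarning)
        ]
        if leftover:
            raise AssertionError(f"Unexpected deprecations: {leftover}")
```

Usage would look something like this, with any unchecked deprecation failing the test no matter which test framework runs it:

```python
with expected_deprecations() as queue:
    old_function()             # hypothetical call emitting a DeprecationWarning
    checked = queue.pop(0)     # "check" it by removing it from the queue
    assert "old_function" in str(checked.message)
```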

@adiroiban
Contributor

This is linked in #1 as "Are not doing deprecations themselves and want to ensure other code they run does not raise deprecations".

But I think that the scope of this issue is for developers who are doing deprecations and want to test that the deprecation warnings work as expected.


If you are not doing deprecations, then each testing framework should have a way to tap into the warning system... pytest already has it: https://docs.pytest.org/en/stable/warnings.html

I feel that this is out of scope for regret.

The testing framework should support catching warnings raised by any library, not only by regret.
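
As an illustration of that framework-level support, here is a sketch using pytest's documented filterwarnings mark; old_function is a stand-in for code under test, not real code:

```python
import warnings

import pytest


def old_function():
    # Stand-in for code under test that emits a deprecation.
    warnings.warn("old_function is deprecated", DeprecationWarning)


@pytest.mark.filterwarnings("error::DeprecationWarning")
def test_does_not_use_deprecated_code():
    # The mark turns any DeprecationWarning into an exception, so this
    # test fails until the deprecated call is removed or allowlisted.
    old_function()
```

The same filter strings can be applied suite-wide through pytest's filterwarnings configuration option, which is what the linked documentation describes.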

@Julian
Owner Author

Julian commented Dec 6, 2020

> But I think that the scope of this issue is for developers who are doing deprecations and want to test that the deprecation warnings work as expected.

Both use cases are listed in #1; yes, this one is for the other use case, but I'm happy to have this issue (or another) cover the one you want too -- it doesn't matter much.

> If you are not doing deprecations, then each testing framework should have a way to tap into the warning system... pytest already has it: https://docs.pytest.org/en/stable/warnings.html

The point of this issue (or of the other use case) wasn't so much to do the monitoring -- it's to provide the interface one would need to wire something up to your test framework. You're suggesting that we decide that interface is the warnings module, which may be a good decision, so perhaps we do go with that; the point of this ticket was simply to think it through. (The interface being a way to make it easy for the author of a test suite to write "my test run should fail if any deprecation warning is emitted anywhere", and then to allow whitelisting things as they come up. As you say, the warnings module has an implementation of that; it's just that most people do not know how to figure out how to use it :)
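
For reference, a sketch of what that warnings-module interface amounts to (the whitelisted message below is just a placeholder): escalate every DeprecationWarning to an error, then explicitly allow known ones as they come up.

```python
import warnings

# Turn every DeprecationWarning into an exception, so whichever test
# triggers one fails.
warnings.simplefilter("error", DeprecationWarning)

# Whitelist known deprecations as they come up.  Filters added later are
# inserted at the front of the filter list, so this takes precedence over
# the blanket "error" entry above.
warnings.filterwarnings(
    "ignore",
    message="old_function is deprecated",
    category=DeprecationWarning,
)
```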
