
[junit] Use junit-platform-testkit for testing extensions #116

Closed
kriegfrj opened this issue May 20, 2020 · 3 comments

@kriegfrj
Contributor

I have learned in the last couple of years that mocking as a testing technique is fraught, especially as the level of functionality under test increases:

  1. you spend as much time debugging your mocks as you spend debugging your actual code;
  2. in spite of 1, your mocks are only an approximation, so you can end up with tests that pass while the deliverable still fails in the real world.

For this reason, I think it is almost always better to test against real code (where practical) rather than mocks.

Having wrestled briefly with #113 and #114, I think we have now reached the point where our mock-based tests are getting difficult to maintain and error-prone. Though it will involve a bit of work, I think we would be well served in the long term by putting in that effort now to port them to use junit-platform-testkit. I think that AssertJ's tests for SoftAssertionsExtension are a good model to follow (they were written by one of the core Jupiter developers): https://github.com/joel-costigliola/assertj-core/tree/master/src/test/java/org/assertj/core/api/junit/jupiter
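A minimal sketch of what a testkit-based extension test could look like. `TestKitSketch` and `ExamplePassingTest` are hypothetical placeholders, but the `EngineTestKit` and `DiscoverySelectors` calls are real junit-platform-testkit / junit-platform-engine API; the snippet assumes `junit-jupiter-engine` and `junit-platform-testkit` are on the classpath:

```java
import static org.junit.platform.engine.discovery.DiscoverySelectors.selectClass;

import org.junit.jupiter.api.Test;
import org.junit.platform.testkit.engine.EngineExecutionResults;
import org.junit.platform.testkit.engine.EngineTestKit;

class TestKitSketch {

    // Hypothetical stand-in for a test class that the extension under test
    // would decorate (e.g. one annotated with @ExtendWith(BundleContextExtension.class)).
    static class ExamplePassingTest {
        @Test
        void passes() {
        }
    }

    public static void main(String[] args) {
        // Run the class through a real Jupiter engine rather than mocking
        // the ExtensionContext, Store, etc.
        EngineExecutionResults results = EngineTestKit.engine("junit-jupiter")
                .selectors(selectClass(ExamplePassingTest.class))
                .execute();

        // Assert on what the engine actually did, not on mock interactions.
        results.testEvents().assertStatistics(stats -> stats.started(1).succeeded(1));
    }
}
```

Because the extension runs inside a real engine, behaviour guaranteed by Jupiter itself (such as automatic closing of stored resources) is exercised for free instead of being approximated by a mock.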

@kriegfrj
Contributor Author

And I forgot point 3, which was the one that actually prompted me to raise this issue: when unit testing at a low level, you often end up testing behaviour that doesn't need to be tested. Not only is this a waste of time, it also means that when you refactor that code you break a bunch of the lower-level tests even though the higher-level tests continue to pass, and you spend additional time fixing the broken test code.

My specific example: in BundleContextExtension we manually clean up the state objects in the afterEach() callback. I changed the Namespace key in the beforeEach() implementation, which meant that it was storing the CloseableResourceBundleContext in a different place from where afterEach() was looking for it, so afterEach() was effectively a no-op. Yet all the tests in BundleContextExtensionExampleTest were still passing (even those that I had added to verify that the context was being properly closed)!

This is because CloseableResourceBundleContext implements the Jupiter API interface CloseableResource, and the contract for CloseableResource stipulates that Jupiter will automatically close any resources in the Store that implement CloseableResource when the Store is closed (i.e., when the ExtensionContext in which they were created goes out of scope and is closed).

Realising that the code in afterEach() was redundant, I removed it. BundleContextExtensionExampleTest continued to pass, but many of the tests in BundleContextExtensionTest, ServiceExtensionTest etc. started to fail because they were specifically testing for afterEach() to clean up.
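The key-mismatch failure mode described above can be sketched in plain Java, with no Jupiter dependency. Here a `Map`-of-`Map`s stands in for `ExtensionContext.Store` keyed by `Namespace` (all names in this sketch are illustrative, not real osgi-test code): the resource is stored under the new namespace key, but the cleanup path still looks under the old one, so it finds nothing and silently does no work:

```java
import java.util.HashMap;
import java.util.Map;

class StoreSketch {
    // Stand-in for ExtensionContext.Store: one map per namespace key.
    static final Map<String, Map<String, AutoCloseable>> stores = new HashMap<>();

    static Map<String, AutoCloseable> store(String namespace) {
        return stores.computeIfAbsent(namespace, k -> new HashMap<>());
    }

    public static void main(String[] args) {
        boolean[] closed = {false};
        AutoCloseable resource = () -> closed[0] = true;

        // beforeEach: stores the resource under the NEW namespace key...
        store("new-namespace").put("bundleContext", resource);

        // afterEach: ...but still looks under the OLD key, so it finds nothing
        // and the explicit cleanup is a no-op.
        AutoCloseable found = store("old-namespace").get("bundleContext");
        System.out.println("afterEach found resource: " + (found != null)); // false
        System.out.println("resource closed: " + closed[0]);                // false
    }
}
```

In the real extension the tests still passed only because Jupiter's CloseableResource contract closed the resource when the Store itself was closed, masking the no-op cleanup.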

@bjhargrave
Member

I personally never really got into mock testing. Seems like too much voodoo magic to me :-)

So I would be fine with this approach.

@kriegfrj
Contributor Author

I've found that mocks are good for simple tests (e.g., for supplying a callback listener interface). But as the tests get more complicated, it's simpler to write concrete mock classes (where the API permits) than to build them up in code using the mocking library. Even this is inferior (IMO) to testing against real code for anything but the simplest cases. I know that some testing enthusiasts are obsessed with the speed of their tests and want to mock everything that might be slow, but IMO there's no point being able to run your tests rapidly if the tests are not accurate.
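The "concrete mock class" approach mentioned above can be sketched as a small hand-written fake. The `EventListener` interface here is a simplified illustrative stand-in (not a real osgi-test or OSGi type): the fake simply records every callback so the test can assert on what was actually delivered, with no mocking library involved:

```java
import java.util.ArrayList;
import java.util.List;

class RecordingListenerSketch {
    // Illustrative callback interface the code under test would notify.
    interface EventListener {
        void onEvent(String event);
    }

    // A hand-written concrete fake: records calls for later assertions.
    static class RecordingListener implements EventListener {
        final List<String> received = new ArrayList<>();

        @Override
        public void onEvent(String event) {
            received.add(event);
        }
    }

    public static void main(String[] args) {
        RecordingListener listener = new RecordingListener();

        // In a real test, the code under test would fire these callbacks.
        listener.onEvent("STARTED");
        listener.onEvent("STOPPED");

        System.out.println(listener.received); // [STARTED, STOPPED]
    }
}
```

The fake stays trivially debuggable, which addresses point 1 above, though it remains an approximation of the real collaborator (point 2).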

kriegfrj added a commit to kriegfrj/osgi-test that referenced this issue Jun 3, 2020
Fixes osgi#113, fixes osgi#114

Also improved test fidelity by using EngineTestKit

Fixes osgi#116.

Signed-off-by: Fr Jeremy Krieg <fr.jkrieg@greekwelfaresa.org.au>
@bjhargrave bjhargrave added this to the 1.0.0 milestone Jun 4, 2021