
Make learning mocks print immediately #303

Open
thoni56 opened this issue Aug 11, 2022 · 4 comments

@thoni56
Contributor

thoni56 commented Aug 11, 2022

Currently, learning mocks are collected and the learned calls are reported when constraints are tallied, i.e. at the end of the test.

This means that no learned calls are printed if the test fails with an exception, which is not unlikely given that you have no expect()s with will_return() set up yet.

Doing the printing in the mock() call instead would give a better chance that something actually comes out.
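For context, a minimal sketch of the failure mode. It assumes Cgreen's learning_mocks mode and that mock() returns the default value (0) when nothing is set up; read_config() and config_length() are hypothetical stand-ins, and the test would be compiled into a library and run with cgreen-runner:

```c
#include <string.h>

#include <cgreen/cgreen.h>
#include <cgreen/mocks.h>

/* Hypothetical dependency, stubbed out as a Cgreen mock */
const char *read_config(const char *key) {
    return (const char *) mock(key);
}

/* Hypothetical code under test; trusts the contract that read_config()
   always returns a valid pointer, so it never checks for null */
static size_t config_length(const char *key) {
    return strlen(read_config(key));
}

Describe(LearningMocks);
BeforeEach(LearningMocks) {}
AfterEach(LearningMocks) {}

Ensure(LearningMocks, learned_calls_are_lost_when_the_sut_crashes) {
    cgreen_mocks_are(learning_mocks);

    /* No expect()/will_return() is set up yet, so mock() returns the
       default value (0). strlen(NULL) then crashes the test before the
       constraints are tallied, and the learned calls are never printed. */
    assert_that(config_length("verbosity"), is_greater_than(0));
}
```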

@matthargett
Contributor

This was actually intentional in the original design: in legacy code bases there may be gobs of console output, and we didn't want the learning-mocks output interspersed with legacy console cruft. When in learning mode, I thought that mocks wouldn't error at all?

@thoni56
Contributor Author

thoni56 commented Aug 11, 2022

Hi Matt! Long time no see!

Thanks for that historical data point ;-)

The problem is not that the mocks error out (do they ever?), but that the code under test, which calls the mock (possibly indirectly), might break if an expected value is not returned. A typical case is a mocked function that was supposed to return a valid pointer: since the mock returns nothing, or rather the default value (null), the calling function might crash on a null pointer dereference.

Yup, you could say:

  1. You should always check for null, but legacy code... and I think you shouldn't need to if the "contract" says "will always return a valid pointer".
  2. You will have to fix that return value anyway, but it is a matter of the order in which we explore how the mock should work: return value first or arguments first (see the sketch after this list).
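A hedged sketch of the "return value first" order, building on the hypothetical read_config()/config_length() from the first sketch and assuming will_return() accepts a pointer value here:

```c
/* Pinning the return value to a valid pointer first keeps the code under
   test from crashing, so the arguments can be explored afterwards. */
Ensure(LearningMocks, survives_once_the_return_value_is_fixed) {
    expect(read_config,
           when(key, is_equal_to_string("verbosity")),
           will_return("3"));    /* a valid pointer instead of the default null */

    assert_that(config_length("verbosity"), is_equal_to(1));    /* strlen("3") */
}
```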

I've come across a number of cases where I did not know the code under test well enough to realize that this would happen, tried learning mocks, and was disappointed. I'm not sure that the change suggested in this issue would have helped, though. Maybe a hidden flag so we can try both next time ;-)

But the "legacy console cruft" is a fair point.

@matthargett
Contributor

I like the configuration idea, and you could even change the default behavior from what it is now to support both reasonable needs. Going deeper into optimizing the developer iteration loop for learning mocks is a great thing to be proactive about!

@thoni56
Contributor Author

thoni56 commented Aug 11, 2022

Yes, learning mocks was one of the "revolutionary" features that brought me to Cgreen in the first place.
