I had some concerns with the Mocker class as created. The most obvious is that I hate that I'm using eval(), so if anybody can suggest alternatives I would greatly appreciate it.
Secondly, it's not really a mocking class but more of a stubbing class. I had some thoughts on how to improve this, but since it could go in various directions I figured I'd open a ticket for discussion.
From what I understand, stubbing is what Mocker does now: it lets you overwrite methods. Mocks should additionally carry expectations such as "called once", "never called", etc.
However, doing it that way mixes setup and assertions, since the expectations themselves perform assertions and can fail tests. One pattern I've recently learned about is the test spy, and I like this approach.
I was thinking of a hybrid approach. We could rename Mocker to Stubber (or something similar) and have it implement the spy pattern, with some basic call logging and a single-method interface for interacting with it. Then we implement a new Mocker class that inherits from Stubber and adds expectations such as "called once"; it becomes a kind of "mega class" since it can stub, mock, and spy.
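To make the proposed hierarchy concrete, here is a minimal sketch. The project is PHP, so this is a language-neutral Python illustration only; the class and method names (Stubber, Mocker, stub, expect, verify) are hypothetical, not the project's actual API.

```python
class Stubber:
    """Stub/spy base: overwrite methods and log every call."""

    def __init__(self):
        self._stubs = {}   # method name -> canned return value
        self.calls = {}    # method name -> list of (args, kwargs) tuples

    def stub(self, name, return_value=None):
        self._stubs[name] = return_value

    def __getattr__(self, name):
        # Any unknown attribute becomes a recorded callable.
        def recorded(*args, **kwargs):
            self.calls.setdefault(name, []).append((args, kwargs))
            return self._stubs.get(name)
        return recorded


class Mocker(Stubber):
    """Adds expectations (e.g. call counts) on top of stubbing/spying."""

    def __init__(self):
        super().__init__()
        self._expected = {}  # method name -> expected call count

    def expect(self, name, times=1):
        self._expected[name] = times

    def verify(self):
        for name, times in self._expected.items():
            actual = len(self.calls.get(name, []))
            assert actual == times, (
                f"{name}: expected {times} call(s), got {actual}")
```

Usage would look like `m = Mocker(); m.stub("fetch", 42); m.expect("fetch"); m.fetch(); m.verify()`. The point of the split is that Stubber alone never fails a test, while Mocker's verify() carries the assertions.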
I have no qualms about using eval() where needed. Sometimes it's the only (or the right) solution, as long as the input is safe, imo.
I don't like it because it blocks tooling such as coverage and complexity analysis. To help with those items, it might be possible to keep these classes not as strings but as files, which Mocker reads once into a static variable to reuse later. Putting them in files lets me get more complex with the logic and test it separately from the Mocker class interface. This will be especially useful when we add more complexity such as spies and/or expectations.
Or do you think I'm overreacting and should just leave them as strings on the class?
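The read-once-into-a-static-variable idea might look something like this. Again, a hypothetical Python sketch (the project is PHP): the class name and cache layout are illustrative assumptions, not existing code.

```python
class TemplateLoader:
    """Reads mock-class templates from files once, caching them in a
    class-level (static) variable so later mocks reuse the same source."""

    _cache = {}  # file path -> template source (shared across instances)

    @classmethod
    def load(cls, path):
        if path not in cls._cache:
            with open(path, "r") as fh:
                cls._cache[path] = fh.read()
        return cls._cache[path]
```

The file read happens at most once per template, and because the template lives in its own file it can be covered and complexity-checked independently of the Mocker interface.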
Yeah, I'm familiar with spies from Jasmine, this sounds like a good approach (though it'll be easier to be decisive once we have some example implementations to talk about).
I'm not sure I totally understand the coverage/complexity arguments against eval(). I mean, the code under test is what we care about measuring, not the eval'd code. That's not to say I wouldn't be all for a solution other than eval(); I just don't see how faking certain things would be possible otherwise.
I'm currently working on something that would do the following: on any method call (not just filtered ones), it logs the input and output, which are publicly available to do assertions on.
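That "log everything, assert later" behavior is essentially a transparent proxy. A hypothetical Python sketch of the idea (the real implementation is PHP; names here are illustrative):

```python
class Spy:
    """Wraps a real object; every method call (not just stubbed ones) is
    recorded with its input and output, publicly readable for assertions."""

    def __init__(self, target):
        self._target = target
        self.log = []  # list of (method, args, kwargs, result) tuples

    def __getattr__(self, name):
        attr = getattr(self._target, name)
        if not callable(attr):
            return attr  # plain attributes pass straight through
        def wrapped(*args, **kwargs):
            result = attr(*args, **kwargs)  # delegate to the real method
            self.log.append((name, args, kwargs, result))
            return result
        return wrapped
```

The key property is that setup never asserts anything; the test inspects `spy.log` afterwards, which keeps the arrange and assert phases separate.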
I decided to add two custom assertions, assertCalled($mock, 'method') and assertNotCalled($mock, 'method'), and leave the rest to the user. Alternatively, we could drop those two and use a single call like assertCalled($stub, 'method', 'eq', 0), which gives us one assertion method with a lot more flexibility (eq, gt, gte, ...).
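The single flexible assertion could be driven by a comparator table. A hypothetical Python sketch (the proposed API is PHP's assertCalled; the function name, the `spy_calls` dict shape, and the operator keys here are my assumptions):

```python
import operator

# comparator name -> function, matching the proposed eq/gt/gte/... flags
_OPS = {
    "eq": operator.eq,
    "ne": operator.ne,
    "gt": operator.gt,
    "gte": operator.ge,
    "lt": operator.lt,
    "lte": operator.le,
}

def assert_called(spy_calls, method, op="gte", times=1):
    """Single flexible assertion: compare the recorded call count for
    `method` against `times` using the named comparator.
    `spy_calls` is assumed to map method name -> list of recorded calls."""
    actual = len(spy_calls.get(method, []))
    if not _OPS[op](actual, times):
        raise AssertionError(
            f"{method}: call count {actual} failed '{op} {times}'")
```

With this shape, `assert_called(calls, 'method', 'eq', 0)` covers the assertNotCalled case, and the default `('gte', 1)` covers assertCalled, so the two convenience methods could stay as thin wrappers.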
I found an issue with mocking classes whose methods pass arguments by reference, so I'm working on that part as well, but logging works correctly on my branch, and the assertion methods shouldn't take long to implement at all.
Any objections, thoughts, or improvements?