

Spies should collect return values #164

ExtAnimal opened this Issue · 12 comments

5 participants


Spies already record the arguments they were called with, so we can check how a method was called.

I'd like to check that the method was not only called, but that it returned the expected result.


Spies don't actually call through to the methods they're spying on unless you explicitly tell them to (with andCallThrough()).

I can't see a good use case for this -- either use andCallThrough and write an integration test (which uses the expected result), or spyOn your method and also write a unit test for it.


@ragaskar ragaskar closed this

Can this please be reopened? It would be much more convenient if I could just tell it to 'andCallThrough()' and still have the ability to spy on the return value, without having to record the original method first and then replace it with 'andCallFake()', which then calls the recorded original instead.

I've already wished for this countless times and would be willing to write the code - but only if you accept it.


I think having a use case that explains how you're wanting to use this feature would help. We don't appreciate the need.


The use case is to help you more easily create 'realistic' stub values in your tests? In my mind, this isn't really a jasmine responsibility and it would add considerable complexity/behavior to jasmine that we'd then have to maintain.

It feels like you could solve this 'recording responses' problem with some kind of spec helper; for example (warning, untested -- even for syntax -- inelegant theoretical code ahead):

var ResponseRecorder = function(object, method, responseFixture) {
  var originalMethod = object[method],
      recordedResponses = responseFixture || {};
  spyOn(object, method);
  return {
    record: function() {
      object[method].andCallFake(function() {
        var response = originalMethod.apply(object, arguments);
        recordedResponses[JSON.stringify(arguments)] = response;
        return response;
      });
    },
    dumpFixture: function() {
      return JSON.stringify(recordedResponses);
    },
    playback: function() {
      object[method].andCallFake(function() {
        var response = recordedResponses[JSON.stringify(arguments)];
        if (response !== undefined) {
          return response;
        } else {
          throw new Error("Could not find response for " + JSON.stringify(arguments));
        }
      });
    }
  };
};
var recorder = new ResponseRecorder(myObject, 'myMethod');
recorder.record();
//exercise thing that calls myObject.myMethod
var fixture = recorder.dumpFixture();
//in your case, you'd probably copy-paste fixture into some static file.
//now we stub the playback with our recorded responses.
recorder = new ResponseRecorder(myObject, 'myMethod', JSON.parse(fixture));
recorder.playback();

Well, let's try to focus then. Mostly, I find that getting access to the return value of some method would make it much easier to test some object.

For example: Today I was testing a typeahead widget (a library I don't control) and wanted to attach some code to it to show a spinner while it was fetching data from the net. Now the easiest way to test this was to get access to the jQuery XHR object generated for each request so I could manually reject() or resolve() it to test the stability of my attachment.

To do that I had to patch some method quite deep in its guts to record its return value and get access to it after a new search was triggered.

This resulted in something like this:

describe("searching visualization", function() {

    beforeEach(function () {
        this.searchView = this.view.namedSubview('user-search-type-ahead-view');
        var transport = this.searchView.dataset().transport;
        this.mostRecentXHR = null;
        var that = this;
        var originalSendRequest = transport._sendRequest;
        spyOn(transport, '_sendRequest').andCallFake(function() {
            var jqXHR = originalSendRequest.apply(this, arguments);
            that.mostRecentXHR = jqXHR;
            return jqXHR;
        });
    });

    it("should call a callback when a fetch starts or stops", function() {
        spyOn(this.searchView, 'didStartSearch');
        spyOn(this.searchView, 'didStopSearch');
        jasmine.Clock.tick(300); // actually send the request
        // ....
    });
});

The key here is the spyOn(transport, '_sendRequest') patch, which I'm doing purely to get access to its return value - and it's one-off code that doesn't cope well with multiple requests, generated for example by multiple datasets which ask different urls.

That's the sort of situation where it would be much easier if this was just baked into spyOn().andCallThrough(), so I could simply check the calls afterwards and get the return values out of them.

I hope that makes it easier for you guys to understand the kind of use cases I would like to use something like this.

As an aside: the arguments are already recorded - but not the return values. Recording them would be easy in the code, so I don't quite get what the argument against it is. Code complexity doesn't seem to be it, as it looks like it's about 3-5 lines of code (well, plus the tests) to add this feature.


Many of us would solve the problem you posted by mocking at the xhr layer (using jasmine-ajax, for example) instead of stubbing an internal to get access to the xhr.

I really can't imagine a ton of cases where this would be useful. The complexity here is not necessarily in the LoC necessary to implement your feature, but in the behavior. I almost certainly wouldn't want to add some kind of spy interface that sometimes returns values and sometimes not. I.e., it's only sensible to ask a spy what return values it has if you're using the andCallThrough strategy. What does it return otherwise? Empty hash? Null? Raise error? What happens if you were using one strategy and then switch to another?

I could maybe imagine andCallThrough taking a function that then gets called back with the return value (this at least has more precedent). To be honest, andCallThrough is vestigial from the way we used to think spies/mocks should be used, so I'm reluctant to add behavior to it. If you're calling through to a function you probably shouldn't care about what arguments you've passed, as long as the end behavior is the expected one.


Well, I don't particularly care how it is implemented, I just care that it is implemented.

If implemented as a generic thing the spy does, I would expect that the recorded return value is always the value that is actually returned. I.e., if you specify nothing, the spy returns undefined and that should be recorded. If you specify andReturn(something), then something should be recorded. And if you specify andCallThrough(), then the real return value should be recorded.

If implemented as a callback to andCallThrough() that strikes me as a more special cased implementation that is not as simple to understand - but I'm just as fine with that.

When observing my own unit testing, I would also assert that I'm pretty firmly in the 'test behavior, not interaction' camp of unit testing. Nevertheless, for certain use cases I like to have the power to easily switch to the 'test interaction' style of unit tests.

Of course the example above can be implemented differently, and of course a good ajax mocking framework would probably have made it possible to write this test even nicer. But that means having another framework and it doesn't solve the general problem of sensing values that are only available as return values in a method call chain. (Or where that is the easiest way to get at them).

That being said, the reason I've come here is because yesterday was about the seventh time in the last year that I wished I had this feature, and I decided that it's about time to do something about it. Sure, I don't need it every day, but every time I do, I curse because I have to write 10 extra lines of code - which means I'm probably using it less than it would be useful.


That's fair. We'll ask the core team what they think. We're trying to be extremely careful about what we add to Jasmine, because it initially started life with a bunch of extraneous behavior that later had to be ripped out (ajax mocking, for example, was included, and so was fixture handling!). This means we tend to be pretty reluctant to add features that don't solve everyday problems for a wide range of users -- we'd rather add functionality that helps you solve your particular problem in an easy way.


I would welcome an update to this bug report with the results of your internal discussion.


To be honest, I would greatly prefer if you and the 'core team' could discuss this in an open way on the bug tracker, so the community (in this case me) can weigh in on your opinions.

Other than that, did you have any results yet?


We haven't talked about it yet. Feature planning for the next version of Jasmine won't start happening until 2.0 comes out.

We're trying to get better about updating github tickets, but there's often long gaps between when tickets are opened and when we can address them. Thank you for your understanding.


As linked above, I've now independently put in a pull request to fix this, as I've hit up against it a few times too, and obviously I'm also in favour. The implementation linked is pretty simple, and I don't think there's any significant conceptual complexity at all; whatever is returned from the spy call is immediately stored as call.returnValue, regardless of the spy strategy in use.

I'd agree there are few cases where you can't implement this yourself, but it would result in considerably more succinct tests semi-often for me personally, and I do currently have multiple stub implementations where I'm managing tracking return values by hand, because it's something that jasmine won't do for me. Also, I'd note that Sinon.JS does provide a returnValues property, which already does exactly what we're suggesting here.

In terms of use cases, I've found this as an issue a few times in various cases, but particularly when trying to test mocked constructors and factory methods, where interactions with the return value are really the thing I'm aiming to assert on in my test.

As a recent example, I wanted to test that the callbacks the factory provided were only ever used once in the SUT. Without this, you have to manually build a factory stub and then track the return values in each case by hand, and the assertion testing them then has a much less clear relationship with the factory itself. With this change however, it becomes:

var factory = jasmine.createSpy('factory').and.callFake(function () {
  return jasmine.createSpy('callback');
});
var sut = new SystemUnderTest(factory);

// ... exercise the sut ...

_.each(factory.calls.all(), function (call) {
  // each callback the factory returned should have been used at most once
  expect(call.returnValue.calls.count()).not.toBeGreaterThan(1);
});