This repository has been archived by the owner on May 10, 2022. It is now read-only.

Add nodejs support #21

Merged · merged 8 commits into DiUS:master on Feb 3, 2015

Conversation

@BenSayers
Contributor

Summary of the changes I made:

  • Added nodejs support
  • Updated the tests to be compatible with nodejs. They are now run in a nodejs environment as well as a browser environment to ensure compatibility is maintained
  • Updated mockService to make async http requests
  • No longer expose internal mockService methods to external consumers, to avoid confusion about what is API and what is not
  • Refactored the low-level http logic into mockServiceRequests and http
  • Removed unnecessary 'use strict' statements - only one is needed, at the top of the distribution file
  • Added more robust pact server spin-up code to the gulp file - the previous code did not wait for the server to spin up before running the tests, which caused timing issues in nodejs because it runs the tests faster than karma does (a sketch of this approach follows below)

I've never used gulp before, so I may not have done things in the "gulp" way. I also had a lot of merge conflicts when rebasing against master, so please check I didn't blow away anything accidentally.
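
(A minimal sketch of the spin-up wait idea: poll the mock server's port before kicking off the tests. The helper name and retry interval here are illustrative, not the actual gulpfile code.)

var net = require('net');

function waitForPort(port, callback) {
  var socket = net.connect(port, 'localhost');
  socket.on('connect', function () {
    socket.end();
    callback(); // the port is accepting connections, safe to start the tests
  });
  socket.on('error', function () {
    setTimeout(function () {
      waitForPort(port, callback); // not up yet, retry shortly
    }, 100);
  });
}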

@bethesque
Contributor

Ta, will have a look at this tomorrow.

@markdalgleish

FYI I've sent @BenSayers a PR (BenSayers#1) which converts this branch to use Browserify, based on the work from my previous PR. My motivation here is to make supporting both Node and the browser simpler in the long run.

@bethesque
Contributor

Ok, whose should I merge first then?!

@markdalgleish

Looks like there's some contention around the size of the compiled browser output in my version. My pull request shouldn't hold this one up, if this one is deemed suitable.

@bethesque
Contributor

Ok, in the right place now.

When verification fails, is there a way to fail the test in an asynchronous world? In the example, if I take out the code that verifies that the response returned by the mock service is right, but make a wrong call to the mock service, the test should still fail, but it doesn't.

// updated example/karma.conf.js to:

    files: [
      'client.js',
      'client-spec.js',
      '../src/bootstrap.js',
      '../src/**/*.js'
    ],

// and modified the spec to:

describe("when there are no friends", function () {
  it("returns an error message", function (jasmineDone) {
    //Add interaction
    helloProvider
      .given("I have no friends")
      .uponReceiving("a request to unfriend")
      .withRequest("put", "/unfriendMe2")
      .willRespondWith(404);

    //Run the tests
    helloProvider.run(function(pactDone) {

      function success(message) {
        //The success callback should *not* be invoked!
        // expect(true).toEqual(false); //COMMENTED THIS OUT
        pactDone(jasmineDone);
      }

      function error(message) {
        //The error callback *should* be invoked
        // expect(message).toEqual("No friends :(") //COMMENTED THIS OUT
        pactDone(jasmineDone);
      }

      client.unfriendMe(success, error);

    });
  });
});

@bethesque
Contributor

Hi Ben, just to summarise, I'm happy to merge this when we can work out a way to raise an error when verification fails. Thanks for your work on this.

@BenSayers
Contributor Author

@bethesque I mistakenly thought that if you give an error to jasmine's done callback it would fail the test. I have updated the tests and they now fail when expected.

However, upon reflection, I'm not happy with the way the error handling in the async world turned out. In the clean and setup stages of running the tests we don't have a way of communicating failure to the tests, so in this case I just throw the error. This is not a good thing to do: this is asynchronous code and we are in a new call stack, so there is no clean way for the test to catch it, which also makes the error handling hard to test. In the verify and write stages of running the tests we have the testComplete callback, so I pass the error object to that. This means the test needs to ensure no error was passed.

I'm thinking we need to change the way errors are communicated back to the tests, but am not clear on the best way forward.

Some possible solutions I have in mind right now are:

  • Pass errors generated during the clean and setup stages to the testComplete callback. This means we have to execute the tests passed to the run function even though we know the pact mock server is not setup correctly.
  • Create an API that allows the tests to give us a function for the test results to be passed to, e.g.
      it("should say hello", function(done) {

        helloProvider
          .uponReceiving("a request for hello")
          .withRequest("get", "/sayHello")
          .willRespondWith(200, {
            "Content-Type": "application/json"
          }, {
            reply: "Hello"
          });

        helloProvider.onTestComplete(function (pactError) {
          expect(pactError).toBe(null);
          done();
        });

        helloProvider.run(function(runComplete) {
          client.sayHello(function (error, reply) {
            expect(error).toBe(null);
            expect(reply).toBe("Hello");
            runComplete();
          });
        });

      });

Thoughts?

I'm open to better ways of dealing with this if you think of any.

@@ -14,7 +14,7 @@
 "quotmark": "single",
 "undef": true,
 "unused": true,
-"strict": true,
+"strict": false,
Contributor

Is there a way we can be strict again?

Contributor Author

We are still using strict mode - it appears at the top of the dist file. I removed it from the jshint rules because it was insisting we put it at the top of every file in src.
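
(For illustration, the distribution file ends up shaped roughly like this; a sketch of the idea, not the actual build output.)

(function() {
  'use strict'; // the single directive covers every concatenated src file

  // ...concatenated contents of src/**/*.js...
})();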

@bethesque
Contributor

I wish I had some better ideas, but this is not a style of coding that I'm very familiar with.
If we were happy to make a jasmine-specific plugin (which I am, if we can abstract it nicely for other libraries too), we could add an expect in the callback code with an appropriate error that would fail the test if any of the administration calls to the mock service failed.

@markdalgleish

Promises might be a good way to solve this. They would play nicely with something like jasmine-as-promised.

@BenSayers
Contributor Author

I'd rather not couple this library to jasmine, that seems like the job of something built on top of this if it turns out to be useful.

@markdalgleish how would you involve promises in the mix? I'm imagining you'd change the run method to return a promise indicating the success or failure of the test? The test would wind up looking like this:

helloProvider.run(function(runComplete) {
  client.sayHello(function (error, reply) {
    expect(error).toBe(null);
    expect(reply).toBe("Hello");
    runComplete();
  });
}).then(done, function (error) {
  expect(error).toBe(null);
  done();
});

This would look very similar if you changed the signature of run to accept two functions, one being the test and another being a finished callback.

helloProvider.run(function(runComplete) {
  client.sayHello(function (error, reply) {
    expect(error).toBe(null);
    expect(reply).toBe("Hello");
    runComplete();
  });
}, function (error) {
  expect(error).toBe(null);
  done();
});

The approach used by the jasmine-as-promised plugin is broken by the async api changes introduced in jasmine 2.

Right now I'm leaning toward adding a done method onto the MockService object which accepts a callback that we can pass the error to. I'll implement this and you can give feedback.
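
(A rough sketch of what that proposed API could look like; the method name and callback shape are assumptions, not the final implementation.)

helloProvider.done(function (error) {
  // error is null when the clean, setup, verify and write stages all succeeded
  expect(error).toBe(null);
  jasmineDone();
});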

@BenSayers
Contributor Author

@bethesque @markdalgleish have a look at the latest commit and let me know what you think.

@BenSayers
Contributor Author

@bethesque just rebased onto the latest master. Anything else you'd like done before accepting this PR?

On a random note, it would be good if the pact mock server exposed a healthcheck url that we could call to determine when it is ready to receive requests. I had to put a timeout in to hold back the tests from running while the server is not ready; I'd like it to be less brittle.

@bethesque
Contributor

I was about to say, it does have a healthcheck URL, I added it the other day, but then I checked the code, and I realised it must be sitting on my home laptop, uncommitted. I will chase that up.

I'm still not entirely happy with the way this is working. I think the two-step process will be confusing to people, and they'll not realise why they have to do the onTestComplete part.

That onTestComplete callback will be the same for every test using Jasmine, so it really could be a method on Pact that registers a global callback. What if we built a pact-consumer-js-dsl-jasmine.js artifact where that callback was set automatically? Then it could be called automatically when we call runComplete(done). It would actually more accurately be called onVerify.

@bethesque
Contributor

Regarding the healthcheck endpoint - I have released pact-mock_service gem 0.2.4. If you hit the root URL http://localhost:XXXX with the X-Pact-Mock-Service header set, then it will respond with a 200.
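
(A minimal sketch of polling that healthcheck before running the tests; the helper name and retry interval are assumptions.)

var http = require('http');

function waitForMockService(port, onReady) {
  function retry() {
    setTimeout(function () {
      waitForMockService(port, onReady);
    }, 100);
  }

  var request = http.request({
    host: 'localhost',
    port: port,
    path: '/',
    headers: { 'X-Pact-Mock-Service': 'true' }
  }, function (response) {
    if (response.statusCode === 200) {
      onReady(); // the mock service is ready to receive requests
    } else {
      retry();
    }
  });
  request.on('error', retry);
  request.end();
}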

@BenSayers
Contributor Author

@bethesque I'm not a fan of coupling this implementation to a specific testing framework. The reasons behind this are:

  • There are many testing frameworks in the JavaScript community, for example jasmine, qunit, mocha, buster, jest and nodeunit. For each framework we don't support, we are excluding some part of the community.
  • In order to add support for each testing framework we need to set up a new test fixture, one for the browser and one for nodejs. There is a non-trivial up-front cost in doing this.
  • We would need to maintain support for each testing framework as it makes breaking API changes over time. There is a non-trivial maintenance cost to doing this.
  • Not all the frameworks have plugin points that allow us to cleanly extend them. For example, in Jasmine, to transparently run the onTestComplete stage (by transparently I mean intercepting the done callback so we don't need to have it passed to us) and check there are no errors, the only way I can think of would involve monkey patching the it function.

What if we moved the fail-the-test code into the beforeEach and passed the done callback to run?

The test would look like this:

describe('Client', function() {
  var client, helloProvider;

  beforeEach(function() {
    client = new ProviderClient('http://localhost:1234');
    helloProvider = MockService.create({
      consumer: 'Hello Consumer',
      provider: 'Hello Provider',
      port: 1234,
      pactDir: '.',
      done: function (error) {
        expect(error).toBe(null);
      }
    });
  });

  it("should say hello", function(done) {
    helloProvider
      .uponReceiving("a request for hello")
      .withRequest("get", "/sayHello")
      .willRespondWith(200, {
        "Content-Type": "application/json"
      }, {
        reply: "Hello"
      });

    helloProvider.run(function(runComplete) {
      expect(client.sayHello()).toEqual("Hello");
      runComplete();
    }, done);
  });
});

@bethesque
Contributor

I like that much better. Do you think it's more or less confusing to do this:

helloProvider.run(function(runComplete) {
  expect(client.sayHello()).toEqual("Hello");
  runComplete(done);
});

@BenSayers
Contributor Author

The problem with having the done callback passed back inside the run callback is that we are not able to get a reference to it when things go wrong in the steps prior to executing the run callback. This means we would need to execute the run callback even though we know the pact mock server is not set up properly, because without it we can't stop the test. This may result in a confusing failure, because along with the error we generate there will be others generated by the run callback. Worst case, the run callback throws an exception, we never get the done callback at all, and the test fails with a timeout error.

To allow us to fail cleanly we need a reference to the done callback directly.

@bethesque
Contributor

Right, that makes sense. How do we fail it at that stage then? With the done callback that the user provides as the mock service argument?

@BenSayers
Contributor Author

In the beforeEach we pass in a function that accepts an error argument and will fail the test if it has a value. So if we hit an error early in the mock setup stages, we call this function with the error to fail the test, then call the done callback to end the test.
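
(In code, roughly this; the internal names are assumed, not taken from the branch.)

// Sketch: how the mock service could fail a test early, given the
// user-supplied done option and Jasmine's testDone callback.
function failEarly(error, userDone, testDone) {
  userDone(error); // the user's function: expect(error).toBe(null) fails the spec
  testDone();      // end the Jasmine test instead of letting it time out
}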


if (Pact.IsNodeJs) {
  module.exports = Pact;
}

Contributor

If I merge in this PR https://github.com/DiUS/pact-consumer-js-dsl/pull/27/files, I believe we won't need this.

Contributor Author

That PR is based upon the node module xmlhttprequest which we discussed a while back.

I think @markdalgleish has a nicer solution for making the differences between node and the browser more transparent using browserify, which gives us a dependency management solution as well.

But to be honest I'd like to focus on getting this change through before going through a refactoring exercise to clean up the nodejs checks. The length of time it's taking to get this PR approved is hurting my ability to get Pact widely adopted at my company.

If I make the changes discussed above are there any other concerns which will prevent this getting approved?

@bethesque
Contributor

I think that's it. I really appreciate your work on this.

@bethesque
Contributor

Let me know when it's ready to merge, as I don't get any alerts when there's a new commit.

@BenSayers
Contributor Author

@bethesque should be good to merge now

@bethesque
Contributor

Thanks. I'll look at it as soon as I have a moment. Once it's merged, there are a few simplifications we can make. The mock service now has a PUT /interactions, which will get rid of all the callbacks to set up the interactions sequentially, and there are now 'pact-mock-service start' and 'pact-mock-service stop' tasks. The start task will block until the port is ready, so we can remove the 'wait for' stuff.
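
(For reference, registering all interactions in one call could look something like this; the payload shape is an assumption, check the gem's docs.)

var http = require('http');

// interaction objects as built up by the DSL (placeholder data here)
var interactions = [/* ... */];

var request = http.request({
  host: 'localhost',
  port: 1234,
  path: '/interactions',
  method: 'PUT',
  headers: {
    'X-Pact-Mock-Service': 'true',
    'Content-Type': 'application/json'
  }
});
request.end(JSON.stringify({ interactions: interactions })); // replaces the sequential per-interaction setup callbacks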

@bethesque
Contributor

This is looking good. Failing in all the right places when I deliberately mess up the tests.

@bethesque merged commit e71bc02 into DiUS:master on Feb 3, 2015
@bethesque
Contributor

Ok, I've merged this in, thanks for your awesome work Ben.

There's one thing that I'm considering - is it confusing or consistent to have a "done" function in the mock service, as well as in Jasmine? What do you think of "verify"?

beforeEach(function() {
  client = new ProviderClient('http://localhost:1234');
  helloProvider = MockService.create({
    consumer: 'Hello Consumer',
    provider: 'Hello Provider',
    port: 1234,
    verify: function (error) {
      expect(error).toBe(null);
    }
  });
});

@BenSayers
Contributor Author

I have no objections to renaming the function to verify.

A story to add to the backlog should be to add API documentation so we can explain the purpose of this function clearly. I've had success using jsdoc-to-markdown for automatically generating docs from the source.
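
(For example, comments like this on the public API could be rendered to markdown; the exact CLI invocation below is approximate.)

/**
 * Registers a callback that receives any error raised while the mock
 * service is cleaned, set up, verified or written.
 * @param {function} callback - invoked with null on success, or an Error
 */

// then something like: jsdoc2md src/*.js > API.md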

@bethesque
Contributor

Good point. Firstly, we should add an explanation to the README.
