PubSub Pattern Integration Tests (wip)

Making sure your clients pay attention to their API

Testing the relationship between your client apps and a central API can be tricky. There are pros and cons to almost anywhere you draw the line between real requests and stubs. This project is one more alternative strategy: draw that line as close to actual production responses as possible, without requiring your API to do any actual computation.

How would a client app pay attention to an API?

The most obvious technique is to actually hit your API and use real responses in your tests. This is not unusual in full-stack integration tests. One disadvantage of this, however, is that you are asking your API to do a lot of work serving these test requests. This often means slow tests, which is another way of saying "no tests".

What about your client app keeping a copy of your API's responses?

Typically, the next step down from a full integration test is to have your client app retain local copies of sample responses from your API in the form of JSON fixtures. This approach is a workhorse of TDD (in my experience at least), but it has an obvious downside: your client apps have no idea when their JSON fixtures have gone stale. Your tests stay green while production suddenly goes down! This means you still need those full-stack tests if you want to keep yourself covered, bringing us back to the problem of "well, I forgot to run it because I didn't have twenty minutes to wait".

Instead, why not have your client app pay attention to a snapshot of your API?

Rather than hit your API for real responses, what if your client could ask it what its current API structure is like? Another way of looking at this is: what if your client app could read the documentation of your API? Instead of keeping canned responses in your client app, or hitting your API for actual responses, your API keeps copies of its own canned responses which your client can then use for test purposes. As long as your API-side fixtures stay fresh, you have a fast, reliable bridge.

Ugh so then I have to maintain fixtures API-side? Those could just go stale too!

Not if they're generated automatically and fail tests when they go out of date. This is our secret weapon for the PubSub integration test scheme: documentation generated straight out of our specs. The best example of this strategy I've found so far is to adapt rspec_api_documentation and its partner gem apitome. rspec_api_documentation is a great gem that essentially adds a DSL to your controller tests, from which documentation can be generated automatically. Do you write controller tests? Then your client apps will never have stale fixtures!

Hmmm okay, how does this work?

The workflow is pretty straightforward.

On your API side: you write controller tests using rspec_api_documentation, generate docs using its provided rake task (rake docs:generate), and use my forked version of apitome in your API project. My fork of apitome adds routes to host the JSON examples that rspec_api_documentation generates from your RSpec controller tests.

On your client side: you set up an alternative routing mechanism for your model fetches, then write tests as if you were making real requests (you are!) but use a model id of 'test'. :) That's it!

How did you implement your client side routing?

So far I've gone for the simplest implementation, which will be refactored into something modular. The main "magic" is in my model's url function:

    url: function() {
        if (this.id === 'test') {
            return 'http://0.0.0.0:3000/api/docs/body/high-score/get-a-high-score.json';
        } else {
            return 'http://0.0.0.0:3000/high_scores/' + this.id;
        }
    },

This way I can easily ask for a snapshot response from my API in my tests:

    describe('fetch from api documentation', function() {
        describe('displaying scores', function() {
            it('should fetch and render correctly from my_api', function() {
                var fetchDone = false,
                    model = new HighScoreModel({
                        id: 'test'
                    }),
                    view = new HighScoreView({
                        model: model
                    });

                runs(function() {
                    model.fetch().done(
                        function() {
                            view.render();
                            fetchDone = true;
                        });
                });

                waitsFor(function() {
                    return fetchDone;
                }, 'model was fetched and view rendered', 5000);

                runs(function() {
                    expect($('#main')).toContainText('settlers of catan');
                });

            });
        });
    });

The above spec is taken from high_score.spec.js, which also contains the two most popular alternatives I've seen to the PubSub integration pattern mapped out here.

How do I run your example project?

I'm assuming you have ruby, rails, node, npm, and bower installed.

git clone git@github.com:mooreniemi/pubsub-test-bridge.git
cd pubsub-test-bridge
cd my_api
bundle install
rake db:migrate && RAILS_ENV=test rake db:migrate
cd ..
cd my_client
bower install
karma start
grunt serve

Author

Alex Moore-Niemi

@feminstwerewolf

MIT License, 2014
