Switch from supertest to assert-request #700

Closed
wants to merge 1 commit

Conversation

PlasmaPower
Contributor

Assert-request is a promise-based utility I created to replace supertest. Supertest often forces you to write a custom .end callback to check things like a header not being present. In addition, .expect is messy because it tries to be a single function that asserts everything. As you can see by looking through the diff, assert-request leads to much cleaner code.
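
To illustrate the pain point, here's a hedged sketch of the supertest pattern I mean (not code from this PR; `app` and the mocha `done` callback are assumed to be in scope):

const request = require('supertest');
const assert = require('assert');

it('does not set x-powered-by', done => {
  request(app)
    .get('/')
    .expect(200)
    .end((err, res) => {
      if (err) return done(err);
      // .expect has no way to say "this header must be absent", hence the manual check
      assert(!('x-powered-by' in res.headers), 'x-powered-by should not be set');
      done();
    });
});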

If you're worried about this getting unpublished: NPM recently changed the unpublish feature after left-pad (you first have to contact support, and they will check whether anything will break if the package is gone). If you're worried about this not being maintained: first of all, I plan to maintain it, and second, you can always fork it. The code base is well tested and, IMO, well written.

@PlasmaPower
Contributor Author

Hmm, the Travis build seems to be crashing because of IPv6? Odd. I'll look into it. The Travis build for assert-request itself works fine, though.

@PlasmaPower
Contributor Author

It looks like that's because this is running on the newer Precise builds. From their blog:

IPv6 no longer present
The major change that is coming with this migration is that local and external IPv6 networking will no longer be present.

It should still be working, though. I'll see if I can figure out why it's still using IPv6 even though that's disabled.

@PlasmaPower
Contributor Author

Supertest is hardcoded to 127.0.0.1. I'm not sure that's the best solution. I'll hardcode mine to localhost; I think that will work better. I've heard localhost adds latency on Windows, but that shouldn't matter too much since these are just tests.
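
For illustration only (not either library's actual code), the choice being discussed is which host the test helper puts in the request URL once the app is listening on an ephemeral port:

const http = require('http');

function baseUrl(app, host) { // host: '127.0.0.1' or 'localhost'
  const server = http.createServer(app);
  return new Promise(resolve => {
    server.listen(0, () => {
      // 'localhost' goes through the resolver (and may pick ::1 where IPv6 exists);
      // '127.0.0.1' skips resolution and forces IPv4.
      resolve({ url: `http://${host}:${server.address().port}`, close: () => server.close() });
    });
  });
}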

@PlasmaPower
Contributor Author

Tests are now working on Travis!

@dead-horse
Member

I'd like to migrate the tests from mocha to ava after we merge v2.x into master, but until then, maybe keeping the current test cases as they are is better for the merge?

BTW: maybe we should use supertest-as-promised

@jonathanong
Member

Yeah, I'm more into supertest-as-promised.

The only thing I don't like about ava and tap is how you have to do t.pass() or t.fail(). I like how mocha works better, since it requires less typing. That's unless, of course, I'm missing something about those testing frameworks.

@dead-horse
Member

https://github.com/sindresorhus/ava/blob/master/docs/recipes/endpoint-testing.md
https://github.com/node-modules/utility/blob/master/test/optimize.test.js

It looks like t.pass() or t.fail() isn't necessary. I'm new to ava too, but its assertions look really awesome.

@PlasmaPower
Contributor Author

supertest-as-promised doesn't look maintained, doesn't support .catch (it was built for .then(success, failure)), and you still have the problem of a .expect that doesn't work too well.

Ava looks good, but it seems like a pain to use before async/await arrives (and we still need to test Node versions that don't have async/await). I guess we could use babel for our tests, but I'm not sure that would be optimal.

Edit: and from what I've seen, async/await in node doesn't look like it's in the near future.

@blakeembrey

@PlasmaPower The last commit was only two days ago; I'm sure they'd add a handler for .catch, since it's common.

Ava (and any others) are pretty straightforward, it's just promises.

I typically use my request library for testing instead of a dedicated request-testing module. For example, https://github.com/mulesoft-labs/osprey-method-handler/blob/master/test/index.js uses Popsicle and just has a plugin that mounts the server like Supertest (https://github.com/blakeembrey/popsicle-server/blob/master/test.js#L17). This allows me to switch between test suites without any issue: I used to use Mocha, have lately been using blue-tape (Tape with promises), and will probably try Ava at some point (e.g. https://github.com/blakeembrey/popsicle-proxy-agent/blob/master/src/index.spec.ts).
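
For reference, here's a minimal sketch of that "mount the server like Supertest" approach (my own illustration, not popsicle-server's code), using only Node core so any request library can hit the ephemeral port:

const http = require('http');

function withServer(handler, fn) {
  const server = http.createServer(handler); // e.g. app.callback() for Koa
  return new Promise((resolve, reject) => {
    server.listen(0, () => {
      const url = `http://localhost:${server.address().port}`;
      Promise.resolve().then(() => fn(url)).then(
        result => server.close(() => resolve(result)),
        err => server.close(() => reject(err))
      );
    });
  });
}

// Usage with any promise-returning request library:
// withServer(app.callback(), url => rp(`${url}/users`).then(body => assert(body.length > 0)));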

@PlasmaPower
Contributor Author

@blakeembrey I checked out supertest-as-promised before that commit, sorry. As for the .catch handler, there's a PR open to add it, with passing tests and no unresolved comments. If async/await arrives, I agree that using a standard request library would work great, but before then I think there would be too much boilerplate, especially for tests.

@PlasmaPower
Contributor Author

I'm not sure I agree with how ava does some things. It seems that it doesn't make good use of errors. For instance, with mocha we might have:

it('should do something', () => Promise.resolve(5).then(n => assert.equal(n, 5)));

But from the ava examples we have:

test('should do something', function (t) {
  t.plan(1);
  return Promise.resolve(5).then(n => t.is(n, 5));
});

With the first example, if Promise.resolve fails, the error flows back up to mocha; if the assert fails, that error also flows back up to mocha. With ava, however, it is apparently recommended not to rely on the fact that Promise.resolve might throw an error, but instead on the fact that t.is won't run. In addition, instead of assert generating a typical error, ava uses t.is. This all ties into why t.pass exists.

However, I must say that I like running tests concurrently across multiple threads (processes, because this is node) when possible. Is it possible to write the tests without the t parameter? If so, I think that would be a great replacement for mocha, and a great replacement for supertest once async/await arrives (or we could use babel).

@blakeembrey

@PlasmaPower You should look into it a bit more, but it definitely works with rejected promises - see https://github.com/sindresorhus/ava/blob/master/lib/test.js#L157. Test planning is really useful when you have more complex tests where you can't just rely on a rejected promise. For example, when asserting that .catch is run, you would plan so that if it never runs the test fails because it didn't reach the planned number of assertions. That was a big reason for me to move to tape: having assertions outside of Mocha stops you from being able to do that properly, so I would regularly keep track of the counts myself and use .then to make sure the count was correct.
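
A small sketch of that planning idea with ava (the `mightReject` helper is hypothetical, standing in for the code under test):

const test = require('ava');

const mightReject = () => Promise.reject(new Error('boom'));

test('rejects with an error', t => {
  t.plan(1); // if the .catch handler never runs, the missing assertion fails the test
  return mightReject().catch(err => t.is(err.message, 'boom'));
});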

@blakeembrey

Is it possible to write the tests without the t parameter?

No, that parameter is tied to the current test suite - that's how it knows which test failed and how many assertions were made, especially when tests are running concurrently.

@PlasmaPower
Contributor Author

Okay, I guess I can see how test planning could be useful in some situations, but there are other ways to do things (.catch(err => err).then(err => assert(err instanceof Error))). Could you explain why the t parameter is needed for concurrent tests?
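
For completeness, here's the pattern from the parentheses written out as a runnable sketch (the `shouldReject` helper is just an illustration):

const assert = require('assert');

function shouldReject(promise) {
  return promise
    .catch(err => err) // turn a rejection into a resolved error value
    .then(err => assert(err instanceof Error, 'expected a rejection'));
}

// Resolves because the promise rejected; would itself reject if the promise had fulfilled.
shouldReject(Promise.reject(new Error('boom')));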

@blakeembrey

that's how it knows which test has failed and how many assertions were made.

So that you can keep track of which test has failed when two tests are running concurrently, and so it can count the number of assertions made. There are situations in Mocha (especially with callbacks) where you would usually make assertions but you're no longer within the test's execution context, since it's asynchronous. Mocha copes with this because it catches unhandled exceptions, but that doesn't scale when two tests can fail assertions concurrently.

@PlasmaPower
Contributor Author

I understand that t is needed if you are planning tests, but I don't think it's needed if you aren't. While I can see why it would also be necessary if you're using callbacks, if an assertion fails in a promise-based setup like this, the promise rejects and the error is sent back to mocha. That is how I think tests should be designed.

@blakeembrey

@PlasmaPower It's not even just planning tests: you need it for any assertion, to associate the assertion with the correct test suite, and also to set up the correct dependencies. It's basically a non-magical version of Mocha, in case you've ever looked into how describe and it work. I personally prefer less magic.

You're welcome to try designing a test runner that does what you describe, but it's really an unnecessary limitation to try to enforce that flow. With any assertion outside of synchronous or promise flow, it will be very hard to tell where it's coming from. I understand the idea; promises are certainly easier to manage, but there are a lot of cases where the flow is hard to test without that. Even the tests you've just changed don't do that, because there are assertions out of band in the server middleware. How would you test that easily?

You may also be interested in checking out domains in node; although deprecated, they may give you some interesting insights.

@PlasmaPower
Contributor Author

Given a test, here's how I would handle it:

return Promise.resolve().then(test);

Then use .catch to detect whether the test had an error, or .then to detect when the test finishes without one. An error anywhere in the promise chain will be brought up to the test handler. Promises solve these problems. Want to assert that an error happens? .catch(err => err).then(err => assert(err instanceof Error)). Want a timeout on your tests? Promise.race([wait(timeout).then(() => { throw new Error('Timeout exceeded!'); }), testHandler()]). These tests can be run concurrently, in multiple processes, whatever.
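
Putting those pieces together, a minimal sketch of the runner I'm describing (`wait` and `runTest` are hypothetical helpers, not any existing library):

const assert = require('assert');

function wait(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

function runTest(test, timeout) {
  return Promise.race([
    wait(timeout).then(() => { throw new Error('Timeout exceeded!'); }),
    Promise.resolve().then(test) // any sync throw or rejection flows into this chain
  ]);
}

// Usage: resolves if the test passes, rejects with the test's error otherwise.
runTest(() => Promise.resolve(5).then(n => assert.equal(n, 5)), 2000)
  .then(() => console.log('pass'), err => console.error('fail:', err));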

I don't like Mocha very much, but it does allow me to use this format. Ava seems nice if you're using callbacks, but not so much if you're using promises.

Edit: removed slightly aggressive statement, sorry.

@PlasmaPower
Contributor Author

Although if you can use ava without t and just return a promise, I'd be fine with it, just like I'm fine with mocha.

@PlasmaPower
Contributor Author

Here's an example of such a test, BTW:

let assert = require('assert');
let rp = require('request-promise');

// Any rp error here will be caught
return rp({ url: 'http://example.com', resolveWithFullResponse: true }).then(function (res) {
  assert.equal(res.body, 'Hello World!'); // Will be caught by the test handler
  // Any rp error here will also be caught
  return rp({ url: 'http://example.com/other', resolveWithFullResponse: true }).then(function (otherRes) {
    assert.deepEqual(res.headers, otherRes.headers); // Will also be caught
  });
}).then(function () {
  assert.equal(2 + 2, 4); // Will still be caught
});

Of course you'd use an actual testing request library instead of request-promise, but the point stands.

I don't see a problem attaching these errors to the tests that they came from. When you use promises, errors flow up nicely.

@blakeembrey

@PlasmaPower I think you're misunderstanding how asynchronicity and promises work. Sure, you can try to do everything in the callback of a promise, but any asynchronous assertions will still fail to be caught by the promise. It's not a domain or an execution thread.

Also, your test does not demonstrate what I said, you just wrote a very basic promise chain.

@PlasmaPower
Contributor Author

@blakeembrey Promises can replace standard asynchronous callbacks wherever possible, though. I don't see where we couldn't use promises in place of callbacks in Koa's testing.

@blakeembrey

It definitely can be; I'm just trying to show that some tests can't be (or rather, aren't) described in a pure way that enables straightforward validation with promises. I'm trying to find a good example in this codebase, but if I can't, I'll just write one up for you.

@PlasmaPower
Contributor Author

Makes sense! I'd love to see an example, especially in this code base. Edit: there are some process.on('deprecation', ...) handlers that might be tricky; I haven't looked into them much.

@blakeembrey

No worries, we're a little off track now, and the tests in this repo are clear enough that they could be rewritten the way you propose. Imagine https://github.com/koajs/koa/blob/v2.x/test/application/response.js#L13-L22. I'm going to bring up something a little different here: currently that assertion relies on the fact that assert throws an error inside Koa, so you respond not with a 204 but with a 500. Using t.equal assertions keeps the original assertion context, so you get an error that reports the actual failure ('bla' != 'hello') instead of relying on 500 != 204, which adds a layer of confusion in between. Then you don't even need the extra, slightly hacky checks caused by assertions being caught in a different context; you have the original assertion information.
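
A hedged sketch of that difference (not the actual Koa test; assuming ava and a promise-returning request helper such as supertest-as-promised): the middleware uses the test's own assertion object, so a failure reports the real values directly instead of surfacing as 500 != 204.

const test = require('ava');
const Koa = require('koa');
const request = require('supertest-as-promised');

test('keeps the original assertion message', t => {
  t.plan(2);
  const app = new Koa();
  app.use(ctx => {
    t.is(ctx.query.name, 'hello'); // reported by ava with the real values on failure
    ctx.status = 204;
  });
  return request(app.callback())
    .get('/?name=hello')
    .then(res => t.is(res.status, 204));
});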

@blakeembrey

I'm not going to say these tests are perfect either, but https://github.com/blakeembrey/popsicle/blob/master/lib/test/index.ts#L488-L510 is another example (outside Koa). Instead of having to do any trickery in my tests to make them assert correctly, I can just say I want 3 assertions and use that to verify my tests are running correctly. In fact, I think that let asserted = 0 is left over from when I was using Mocha and had to count assertions manually and do that trickery I just described.

Edit: Here's Mocha when I did a few errored: boolean checks: https://github.com/blakeembrey/popsicle/blob/70c05146fde27761d43599e3f9c38eb1ecfe3d20/test/popsicle.js#L387-L429.

@PlasmaPower
Contributor Author

@blakeembrey I hadn't thought about asserts in koa middleware combined with needing the error message itself - fair enough. Event emitters are another hard case. However, I still think the t parameter shouldn't be necessary in 90% of cases; can ava be used without it?
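
On the event-emitter point, one way to stay in the promise-only style is to wrap the event in a promise before asserting (a sketch with a hypothetical `onceEvent` helper):

const assert = require('assert');
const { EventEmitter } = require('events');

function onceEvent(emitter, name, timeout) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error(`"${name}" was not emitted in time`)), timeout);
    emitter.once(name, value => { clearTimeout(timer); resolve(value); });
  });
}

// Inside a promise-returning test body:
const emitter = new EventEmitter();
setImmediate(() => emitter.emit('ready', 42));
onceEvent(emitter, 'ready', 1000).then(value => assert.equal(value, 42));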

@blakeembrey

@PlasmaPower Technically, I suppose. But I don't see why you wouldn't want the built-in assertion framework to work for you instead of rolling your own.

@PlasmaPower
Contributor Author

@blakeembrey A valid point. I guess I'm just more against planning tests in promise-based situations. Ava sounds good, then.

@PlasmaPower
Contributor Author

@blakeembrey Also, it's sometimes nice to have should.

@PlasmaPower
Contributor Author

Would a PR changing tests over to ava + node-fetch be accepted? I think it's a much better system than what we currently have.

@jonathanong
Member

I'm going to close this. Can we move this to an issue to discuss?

@jonathanong closed this on Apr 4, 2016
@PlasmaPower
Contributor Author

@blakeembrey Quick question: couldn't CLS (continuation-local storage) be used to associate an error with a test?

@blakeembrey

@PlasmaPower Not sure of the relevance to this, but sure?

@PlasmaPower
Contributor Author

@blakeembrey I'm just saying that a test runner could theoretically both run tests concurrently and catch errors globally.
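
For what it's worth, a rough sketch of that idea with the continuation-local-storage package (whether it cleanly covers every globally-caught error is a separate question, but the namespace does survive async boundaries):

const cls = require('continuation-local-storage');
const ns = cls.createNamespace('test-runner');

function runTest(name, fn) {
  ns.run(() => {
    ns.set('currentTest', name);
    fn();
  });
}

runTest('example test', () => {
  setTimeout(() => {
    // Still knows which test this callback belongs to, despite the async boundary.
    console.log('callback belongs to:', ns.get('currentTest'));
  }, 10);
});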
