
Writing tests

Translations: Français

Tests are run concurrently. You can specify synchronous and asynchronous tests. Tests are considered synchronous unless you return a promise, an observable, or declare it as a callback test.

You must define all tests synchronously. They can't be defined inside setTimeout, setImmediate, etc.

AVA tries to run test files with their current working directory set to the directory that contains your package.json file.

Process isolation

Each test file is run in a separate Node.js process. This allows you to change global state or override a built-in in one test file without affecting another. It's also great for performance on modern multi-core processors, allowing multiple test files to execute in parallel.

AVA will set process.env.NODE_ENV to test, unless the NODE_ENV environment variable has been set. This is useful if the code you're testing has test defaults (for example when picking what database to connect to, or environment-specific Babel options). It may cause your code or its dependencies to behave differently though. Note that 'NODE_ENV' in process.env will always be true.
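The "test defaults" idea can be sketched in plain Node.js. This is not an AVA API; the `databases` object and `pickDatabase` helper are hypothetical, illustrating code that picks configuration based on `NODE_ENV`:

```javascript
// Hypothetical configuration-by-environment sketch (not part of AVA).
// Under AVA, process.env.NODE_ENV is 'test' unless you set it yourself,
// so this code would select the test database automatically.
const databases = {
	development: 'myapp_dev',
	test: 'myapp_test'
};

function pickDatabase(env) {
	// Fall back to the development database for unknown environments
	return databases[env] || databases.development;
}

console.log(pickDatabase(process.env.NODE_ENV || 'development'));
```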

Declaring tests

To declare a test you call the test function you imported from AVA. Provide the required title and implementation function. Titles must be unique within each test file. The function will be called when your test is run. It's passed an execution object as its first argument.

Note: In order for the enhanced assertion messages to behave correctly, the first argument must be named t.

import test from 'ava';

test('my passing test', t => {
	t.pass();
});

Running tests serially

Tests are run concurrently by default. Sometimes, however, you have to write tests that cannot run concurrently. In these rare cases you can use the .serial modifier. It will force those tests to run serially, before the concurrent ones.

test.serial('passes serially', t => {
	t.pass();
});

Note that this only applies to tests within a particular test file. AVA will still run multiple test files at the same time unless you pass the --serial CLI flag.

You can use the .serial modifier with all tests, hooks and even .todo(), but it's only available on the test function.

Promise support

Tests may return a promise. AVA will wait for the promise to resolve before ending the test. If the promise rejects the test will fail.

test('resolves with unicorn', t => {
	return somePromise().then(result => {
		t.is(result, 'unicorn');
	});
});

Async function support

AVA comes with built-in support for async functions.

test(async function (t) {
	const value = await promiseFn();
	t.true(value);
});

// Async arrow function
test('promises the truth', async t => {
	const value = await promiseFn();
	t.true(value);
});

Observable support

AVA comes with built-in support for observables. If you return an observable from a test, AVA will automatically consume it to completion before ending the test.

You do not need to use "callback mode" or call t.end().

test('handles observables', t => {
	return Observable.of(1, 2, 3, 4, 5, 6)
		.filter(n => {
			// Only even numbers
			return n % 2 === 0;
		})
		.map(() => t.pass());
});

Callback support

AVA supports using t.end as the final callback when using Node.js-style error-first callback APIs. AVA will consider any truthy value passed as the first argument to t.end to be an error. Note that t.end requires "callback mode", which can be enabled by using the test.cb chain.

test.cb('data.txt can be read', t => {
	// `t.end` automatically checks for error as first argument
	fs.readFile('data.txt', t.end);
});

Running specific tests

During development it can be helpful to only run a few specific tests. This can be accomplished using the .only modifier:

test('will not be run', t => {
	t.fail();
});

test.only('will be run', t => {
	t.pass();
});

You can use the .only modifier with all tests. It cannot be used with hooks or .todo().

Note: The .only modifier applies to the test file it's defined in, so if you run multiple test files, tests in other files will still run. If you want to only run the test.only test, provide just that test file to AVA.

Skipping tests

Sometimes failing tests can be hard to fix. You can tell AVA to skip these tests using the .skip modifier. They'll still be shown in the output (as having been skipped) but are never run.

test.skip('will not be run', t => {
	t.fail();
});

You must specify the implementation function. You can use the .skip modifier with all tests and hooks, but not with .todo(). You cannot apply further modifiers to .skip.

Test placeholders ("todo")

You can use the .todo modifier when you're planning to write a test. Like skipped tests these placeholders are shown in the output. They only require a title; you cannot specify the implementation function.

test.todo('will think about writing this later');

You can signal that you need to write a serial test:

test.serial.todo('will think about writing this later');

Failing tests

You can use the .failing modifier to document issues with your code that need to be fixed. Failing tests are run just like normal ones, but they are expected to fail, and will not break your build when they do. If a test marked as failing actually passes, it will be reported as an error and fail the build with a helpful message instructing you to remove the .failing modifier.

This allows you to merge .failing tests before a fix is implemented without breaking CI. This is a great way to recognize good bug report PRs with a commit credit, even if the reporter is unable to actually fix the problem.

// See:
test.failing('demonstrate some bug', t => {
	t.fail(); // Test will count as passed
});

Before & after hooks

AVA lets you register hooks that are run before and after your tests. This allows you to run setup and/or teardown code.

test.before() registers a hook to be run before the first test in your test file. Similarly test.after() registers a hook to be run after the last test. Use test.after.always() to register a hook that will always run once your tests and other hooks complete. .always() hooks run regardless of whether there were earlier failures, so they are ideal for cleanup tasks. Note however that uncaught exceptions, unhandled rejections or timeouts will crash your tests, possibly preventing .always() hooks from running.

test.beforeEach() registers a hook to be run before each test in your test file. Similarly, test.afterEach() registers a hook to be run after each test. Use test.afterEach.always() to register an after hook that is called even if other test hooks, or the test itself, fail.

If a test is skipped with the .skip modifier, the respective .beforeEach(), .afterEach() and .afterEach.always() hooks are not run. Likewise, if all tests in a test file are skipped, the .before(), .after() and .after.always() hooks for the file are not run.

Like test() these methods take an optional title and an implementation function. The title is shown if your hook fails to execute. The implementation is called with an execution object. You can use assertions in your hooks. You can also pass a macro function and additional arguments.

.before() hooks execute before .beforeEach() hooks. .afterEach() hooks execute before .after() hooks. Within their category the hooks execute in the order they were defined. By default hooks execute concurrently, but you can use test.serial to ensure only that single hook is run at a time. Unlike with tests, serial hooks are not run before other hooks:

test.before(t => {
	// This runs before all tests
});

test.before(t => {
	// This runs concurrently with the above
});

test.serial.before(t => {
	// This runs after the above
});

test.serial.before(t => {
	// This too runs after the above, and before tests
});

test.after('cleanup', t => {
	// This runs after all tests
});

test.after.always('guaranteed cleanup', t => {
	// This will always run, regardless of earlier failures
});

test.beforeEach(t => {
	// This runs before each test
});

test.afterEach(t => {
	// This runs after each test
});

test.afterEach.always(t => {
	// This runs after each test and other test hooks, even if they failed
});

test('title', t => {
	// Regular test
});

Hooks can be synchronous or asynchronous, just like tests. To make a hook asynchronous return a promise or observable, use an async function, or enable callback mode via test.before.cb(), test.beforeEach.cb() etc.

test.before(async t => {
	await promiseFn();
});

test.after(t => {
	return new Promise(/* ... */);
});

test.beforeEach.cb(t => {
	setTimeout(t.end);
});

test.afterEach.cb(t => {
	setTimeout(t.end);
});

Keep in mind that the .beforeEach() and .afterEach() hooks run just before and after a test is run, and that by default tests run concurrently. This means multiple .beforeEach() hooks may run concurrently. Using test.serial.beforeEach() does not change this. If you need to set up global state for each test (such as spying on console.log), you'll need to make sure the tests themselves are run serially.
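The console.log spying mentioned above can be sketched in plain Node.js. The `spyOnConsoleLog` helper is hypothetical (it is not part of AVA), and it shows why such global state is unsafe under concurrency: every concurrent test would record into the same patched function.

```javascript
// Hypothetical console.log spy, written as plain Node code. Because
// console.log is a single global, two concurrently running tests would
// see each other's calls here — hence the advice to run such tests serially.
function spyOnConsoleLog() {
	const calls = [];
	const original = console.log;
	console.log = (...args) => {
		calls.push(args); // Record instead of printing
	};
	return {
		calls,
		restore() {
			console.log = original;
		}
	};
}

const spy = spyOnConsoleLog();
console.log('hello', 'world'); // Recorded, not printed
spy.restore();
console.log('calls recorded:', spy.calls.length);
```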

Remember that AVA runs each test file in its own process. You may not have to clean up global state in an .after() hook, since that hook is only called right before the process exits.

Test context

Hooks can share context with the test:

test.beforeEach(t => {
	t.context.data = generateUniqueData();
});

test('context data is foo', t => {
	t.is(t.context.data + 'bar', 'foobar');
});

Context created in .before() hooks is cloned before it is passed to .beforeEach() hooks and / or tests. The .after() and .after.always() hooks receive the original context value.

For .beforeEach(), .afterEach() and .afterEach.always() hooks the context is not shared between different tests, allowing you to set up data such that it will not leak to other tests.
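The consequence of cloning can be illustrated in plain JavaScript. This is not AVA code, and it assumes the clone is shallow: reassigning a top-level property in one test's context is isolated, but mutating a nested object would still be visible elsewhere.

```javascript
// Illustration only (plain JS, not AVA itself), assuming a shallow clone
// of the .before() context per test.
const beforeContext = {user: 'alice', shared: {count: 0}};

// Each test receives its own shallow clone:
const testA = Object.assign({}, beforeContext);
const testB = Object.assign({}, beforeContext);

testA.user = 'bob'; // Isolated: testB.user is unaffected
testA.shared.count += 1; // Nested object is shared: testB sees this too

console.log(testB.user, testB.shared.count);
```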

By default t.context is an object but you can reassign it:

test.before(t => {
	t.context = 'unicorn';
});

test('context is unicorn', t => {
	t.is(t.context, 'unicorn');
});

Retrieving test meta data

Helper files can determine the filename of the test being run by reading test.meta.file. This eliminates the need to pass __filename from the test to helpers.

import test from 'ava';

console.log('Test currently being run: ', test.meta.file);

Reusing test logic through macros

Additional arguments passed to the test declaration will be passed to the test implementation. This is useful for creating reusable test macros.

function macro(t, input, expected) {
	t.is(eval(input), expected);
}

test('2 + 2 = 4', macro, '2 + 2', 4);
test('2 * 3 = 6', macro, '2 * 3', 6);

You can build the test title programmatically by attaching a title function to the macro:

function macro(t, input, expected) {
	t.is(eval(input), expected);
}

macro.title = (providedTitle = '', input, expected) => `${providedTitle} ${input} = ${expected}`.trim();

test(macro, '2 + 2', 4);
test(macro, '2 * 3', 6);
test('providedTitle', macro, '3 * 3', 9);

The providedTitle argument defaults to undefined if the user does not supply a string title. This means you can use a default parameter value to substitute your own; the example above uses the empty string as the default.
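The title function can be tried in isolation as plain JavaScript, separate from AVA. Passing undefined triggers the default parameter, and the leading space left behind is removed by .trim():

```javascript
// The macro's title function on its own: `providedTitle` defaults to ''
// when the caller passes undefined (no string title was given).
const title = (providedTitle = '', input, expected) =>
	`${providedTitle} ${input} = ${expected}`.trim();

console.log(title(undefined, '2 + 2', 4)); // → '2 + 2 = 4'
console.log(title('providedTitle', '3 * 3', 9)); // → 'providedTitle 3 * 3 = 9'
```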

You can also pass arrays of macro functions:

const safeEval = require('safe-eval');

function evalMacro(t, input, expected) {
	t.is(eval(input), expected);
}

function safeEvalMacro(t, input, expected) {
	t.is(safeEval(input), expected);
}

test([evalMacro, safeEvalMacro], '2 + 2', 4);
test([evalMacro, safeEvalMacro], '2 * 3', 6);

We encourage you to use macros instead of building your own test generators (here is an example of code that should be replaced with a macro). Macros are designed to perform static analysis of your code, which can lead to better performance, IDE integration, and linter rules.
