as-pect
Write your module in TypeScript and get blazing fast testing at WebAssembly speeds!

Philosophy

Testing is the first step of every project. It's important to verify that the library or app you're responsible for behaves the way you intend. Getting set up only takes a few minutes, and the benefits quickly become apparent.

Usage

To install as-pect, install the latest version from GitHub. Once AssemblyScript becomes more stable, as-pect will be published to npm.

$ npm install jtenner/as-pect

To initialize a test suite, run npx asp --init. It will create the following folders and files.

$ npx asp --init

# It will create the following folders if they don't exist
./assembly/
./assembly/__tests__/

# The as-pect types file will be created here if it doesn't exist
./assembly/__tests__/as-pect.d.ts

# An example test file will be created here if the __tests__ folder does not exist
./assembly/__tests__/example.spec.ts

# The default configuration file will be created here if it doesn't exist
./as-pect.config.js

To run as-pect, use the command line: npx asp, or create an npm script.

{
  "scripts": {
    "test": "asp"
  }
}

Running asp via npx without any parameters uses ./as-pect.config.js as the test configuration. Otherwise, you can specify a configuration file on the command line.

$ npx asp --config=as-pect.config.js

Most configuration values can be overridden via the command line, with the exception of the WebAssembly imports provided to the module.

CLI

To access the help screen, use the --help flag.

SYNTAX
  asp --init                          Create a test config, an assembly/__tests__ folder and exit.
  asp -i
  asp --config=as-pect.config.js      Use a specified configuration
  asp -c as-pect.config.js
  asp --version                       View the version.
  asp -v
  asp --help                          Show this help screen.
  asp -h
  asp --types                         Copy the types file to assembly/__tests__/as-pect.d.ts
  asp -t

TEST OPTIONS
  --file=[regex]                       Run the tests of each file that matches this regex. (Default: /./)
    --files=[regex]
    -f=[regex]

  --group=[regex]                      Run each describe block that matches this regex (Default: /(:?)/)
    --groups=[regex]
    -g=[regex]

  --test=[regex]                       Run each test that matches this regex (Default: /(:?)/)
    --tests=[regex]
    -t=[regex]

  --output-binary                      Create a (.wasm) file that contains all the tests to be run later.
    -o

  --norun                              Skip running tests and output the compiler files.
    -n

  --nortrace                           Skip rtrace ref counting calculations.
    -nr

  --reporter                           Define the reporter to be used. (Default: DefaultTestReporter)
    --reporter=SummaryTestReporter     Use the summary reporter.
    --reporter=DefaultTestReporter     Use the default test reporter.
    --reporter=JSONTestReporter        Use the JSON reporter (output results to json files.)
    --reporter=CSVTestReporter         Use the CSV reporter (output results to csv files.)
    --reporter=EmptyReporter           Use the empty reporter. (This reporter reports nothing)
    --reporter=./path/to/reporter.js   Use the default exported object from this module as the reporter.

PERFORMANCE OPTIONS
  --performance                        Enable performance statistics for every test. (Default: false)
  --max-samples=[number]               Set the maximum number of samples to run for each test. (Default: 10000 samples)
  --max-test-run-time=[number]         Set the maximum test run time in milliseconds. (Default: 2000ms)
  --round-decimal-places=[number]      Set the number of decimal places to round to. (Default: 3)
  --report-median(=false)?             Enable/Disable reporting of the median time. (Default: true)
  --report-average(=false)?            Enable/Disable reporting of the average time. (Default: true)
  --report-standard-deviation(=false)? Enable/Disable reporting of the standard deviation. (Default: false)
  --report-max(=false)?                Enable/Disable reporting of the largest run time. (Default: false)
  --report-min(=false)?                Enable/Disable reporting of the smallest run time. (Default: false)
  --report-variance(=false)?           Enable/Disable reporting of the variance. (Default: false)

Configuration File

Currently as-pect will compile each file that matches the globs in the include property of your configuration. The default include is "assembly/__tests__/**/*.spec.ts". It must compile each file and run each binary separately inside its own TestContext. This is a limitation of AssemblyScript, not of as-pect.
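
As a rough illustration of how such a glob selects files (as-pect itself delegates this to the glob package; the converter below is only a sketch, not its implementation):

```javascript
// Illustrative only: a tiny glob-to-RegExp converter showing how an
// include pattern like "assembly/__tests__/**/*.spec.ts" selects files.
function globToRegExp(glob) {
  const pattern = glob
    .replace(/[.+^${}()|[\]\\]/g, "\\$&") // escape regex metacharacters
    .replace(/\*\*\//g, "(?:.*/)?")       // "**/" matches any directory depth
    .replace(/\*/g, "[^/]*");             // "*" matches within one path segment
  return new RegExp("^" + pattern + "$");
}

const include = globToRegExp("assembly/__tests__/**/*.spec.ts");
console.log(include.test("assembly/__tests__/example.spec.ts"));   // true
console.log(include.test("assembly/__tests__/math/vec3.spec.ts")); // true
console.log(include.test("assembly/index.ts"));                    // false
```

Files matching the add globs are appended to every compilation, while files matching the disclude patterns are skipped entirely.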

A typical configuration looks like this:

module.exports = {
  /**
   * A set of globs passed to the glob package that qualify typescript files for testing.
   */
  include: ["assembly/__tests__/**/*.spec.ts"],
  /**
   * A set of globs passed to the glob package that qualify files to be added to each test.
   */
  add: ["assembly/__tests__/**/*.include.ts"],
  /**
   * All the compiler flags needed for this test suite. Make sure that a binary file is output.
   */
  flags: {
    "--validate": [],
    "--debug": [],
    /** This is required. Do not change this. The filename is ignored, but required by the compiler. */
    "--binaryFile": ["output.wasm"],
    /** To enable wat file output, use the following flag. The filename is ignored, but required by the compiler. */
    // "--textFile": ["output.wat"],
    /** To select an appropriate runtime, use the --runtime compiler flag. */
    "--runtime": ["full"] // Acceptable values are: full, half, stub (arena), and none
  },
  /**
   * A set of regular expressions used to exclude (disclude) source files from testing.
   */
  disclude: [/node_modules/i],
  /**
   * Add your required AssemblyScript imports here.
   */
  imports: {},
  /**
   * All performance statistics reporting can be configured here.
   */
  performance: {
    /** Enable performance statistics gathering for every test. */
    enabled: false,
    /** Set the maximum number of samples to run for each test. */
    maxSamples: 10000,
    /** Set the maximum test run time in milliseconds. */
    maxTestRunTime: 5000,
    /** Set the number of decimal places to round to. */
    roundDecimalPlaces: 3,
    /** Report the median time in the default reporter. */
    reportMedian: true,
    /** Report the average time in milliseconds. */
    reportAverage: true,
    /** Report the standard deviation. */
    reportStandardDeviation: false,
    /** Report the maximum run time in milliseconds. */
    reportMax: false,
    /** Report the minimum run time in milliseconds. */
    reportMin: false,
    /** Report the variance. */
    reportVariance: false,
  },
  // reporter: new CustomReporter(),
  /**
   * Specify if the binary wasm file should be written to the file system.
   */
  outputBinary: false,
};

CI Usage

If your module requires a set of imported functions, it's encouraged to mock them in the imports property of the configuration. If any module fails during compilation, the utility will exit immediately with code 1, so it can be used for faster CI builds.

Adding this line to your .travis.yml will allow you to specify a custom script to your CI build.

script:
  - npm run test:ci

Then in your package.json file, you can instruct the "test:ci" script to run the asp command line tool with the SummaryTestReporter, like this:

{
  "scripts": {
    "test:ci": "asp --reporter=SummaryTestReporter"
  }
}

Compiler Flags

Regardless of the installed version, all the compiler flags will be passed to the asc command line tool.

import asc from "assemblyscript/cli/asc";

Any files generated by the compiler, except for the .wasm file, will be output using the {testFolder}/{testName}.{ext} format. This includes sourcemaps, .wat files, .js files, and types files generated by the compiler.

Reporters

Reporters control how test results get reported. When running the CLI, the DefaultTestReporter is used and all the values will be logged to the console. The test suite itself does not log test results. If you want custom reporting, you can create your own reporter by extending the abstract Reporter class.

export abstract class Reporter {
  public abstract onStart(suite: TestSuite): void;
  public abstract onGroupStart(group: TestGroup): void;
  public abstract onGroupFinish(group: TestGroup): void;
  public abstract onTestStart(group: TestGroup, result: TestResult): void;
  public abstract onTestFinish(group: TestGroup, result: TestResult): void;
  public abstract onFinish(suite: TestSuite): void;
  public abstract onTodo(group: TestGroup, todo: string): void;
}

Each test suite run will use the provided reporter and call onStart(suite: TestSuite) to notify a consumer that a test run has started. This happens once per test file. Since a file can have multiple describe function calls, these are logically grouped into TestGroups. Each TestGroup has its own description and contains a list of TestResults that were run.

Each function is self-explanatory, and you don't need to call super() when extending the Reporter class, since Reporter has no constructor of its own.

If no reporter is specified in the configuration, a default one is used that relies on console.log() and chalk for colored output.
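
For instance, a hypothetical bare-bones reporter that merely tallies results might look like the following sketch. The object shapes here are simplified stand-ins for as-pect's real TestSuite/TestGroup/TestResult classes, and a real reporter would extend the abstract Reporter class shown above:

```javascript
// A toy reporter that counts passes and failures instead of printing
// each result. Plain JavaScript sketch; types are illustrative only.
class TallyReporter {
  constructor() {
    this.passed = 0;
    this.failed = 0;
  }
  onStart(suite) {}
  onGroupStart(group) {}
  onGroupFinish(group) {}
  onTestStart(group, result) {}
  onTestFinish(group, result) {
    result.pass ? this.passed++ : this.failed++;
  }
  onTodo(group, todo) {}
  onFinish(suite) {
    console.log(`${this.passed} passed, ${this.failed} failed`);
  }
}

// Simulate the callback order for one group with two results.
const reporter = new TallyReporter();
const group = {
  description: "math",
  tests: [{ name: "adds", pass: true }, { name: "overflows", pass: false }],
};
reporter.onGroupStart(group);
for (const result of group.tests) {
  reporter.onTestStart(group, result);
  reporter.onTestFinish(group, result);
}
reporter.onGroupFinish(group);
reporter.onFinish({ testGroups: [group] }); // prints "1 passed, 1 failed"
```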

If performance is enabled, then the times array will be populated with the runtime values measured in milliseconds.
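
As a sketch of the arithmetic behind the reported statistics, assuming a hypothetical times array of millisecond samples (the exact implementation inside as-pect may differ):

```javascript
// Compute the statistics a reporter might derive from runtime samples.
function average(times) {
  return times.reduce((sum, t) => sum + t, 0) / times.length;
}

function median(times) {
  const sorted = [...times].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 1
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
}

function variance(times) {
  const mean = average(times);
  return times.reduce((sum, t) => sum + (t - mean) ** 2, 0) / times.length;
}

const times = [1.2, 0.9, 1.1, 1.0, 4.8]; // one outlier sample
console.log(average(times).toFixed(3)); // "1.800"
console.log(median(times).toFixed(3));  // "1.100" (less outlier-sensitive)
```

This is why the median is reported by default: a single slow sample skews the average far more than the median.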

Using as-pect as a Package

When writing your tests, it's possible that running them requires a browser environment. Instead of running as-pect from the command line, use the --output-binary flag along with --norun to make as-pect output the *.spec.wasm file. This binary can be fetch()ed and instantiated as in the following example.

// browser-test.ts
import { instantiateBuffer } from "assemblyscript/lib/loader";
import {
  TestContext,
  IPerformanceConfiguration,
  IAspectExports,
} from "as-pect";

const performanceConfiguration: IPerformanceConfiguration = {
  // put performance configuration values here
};

// Create a TestContext
const runner = new TestContext({
  // reporter: new EmptyReporter(), // Use this to override default test reporting
  performanceConfiguration,
  // testRegex: /.*/, // Use this to run only tests that match this regex
  // groupRegex: /.*/, // Use this to run only groups that match this regex
  fileName: "./test.spec.wasm", // Always set the filename
});
const imports = runner.createImports({
  // put your assemblyscript imports here
});

// instantiate your test module here via the "assemblyscript/lib/loader" module
fetch("./test.spec.wasm")
  .then((response) => response.arrayBuffer())
  .then((buffer) => {
    // the `IAspectExports` interface is required by the `runner.run()` function
    const wasm = instantiateBuffer<IAspectExports>(new Uint8Array(buffer), imports);
    runner.run(wasm); // run the tests synchronously

    // loop over each group and each test in that group
    for (const group of runner.testGroups) {
      for (const test of group.tests) {
        console.log(test.name, test.pass ? "pass" : "fail");
      }
    }
  });

If you want to compile each test suite manually, it's possible to use the asc compiler yourself by including the following file in your compilation.

./node_modules/as-pect/assembly/index.ts

Types And Tooling

The as-pect CLI can generate the type definitions for all the globals used by the framework. Use the --init or --types flag. When a new version of as-pect is released, run npx asp --types to get the latest version of these definitions. This greatly increases productivity, because the definitions are heavily documented and add intellisense to your development experience.

It is also possible to reference the types manually. Use the following reference at the top of your assembly/index.ts file to include these types in your project automatically. If you use this method for your types, feel free to delete the auto-generated types file in your test folder.

/// <reference path="../node_modules/as-pect/assembly/__tests__/as-pect.d.ts" />

Closures

AssemblyScript currently does not support closures; however, you can place all relevant tests and setup function calls for a test suite inside the corresponding describe block.

// setup a global vector reference
var vec: Vec3;

describe("vectors", () => {
  // this runs before each test function, and must be placed within the describe function
  beforeEach(() => {
    // create a new vector for each test
    vec = new Vec3(1, 2, 3);
  });

  // this runs after each test function, and must be placed within the describe function
  afterEach(() => {
    memory.free(changetype<usize>(vec)); // free the vector
    vec = null;
  });

  // use `test()` or `it()` to run a test
  test("vec should not be null", () => {
    // write an expectation
    expect<Vec3>(vec).not.toBeNull();
  });
});

Nested describes are supported, and the outer describe block is evaluated first.

describe("vector", () => {
  // this test block runs first
  it("should run first", () => {});

  describe("addition", () => {
    // this test block runs second
    it("should add vectors together", () => {
       expect<Vec3>(vec1.add(vec2)).toStrictEqual(new Vec3(1, 2, 3));
    });
  });
});

Expectations

Calling the expect<T>(value: T) function outside of a test function or a setup function will cause unexpected behavior. If this happens, the test suite will fail before it runs in the CLI, and the Error will be reported to the console.

RTrace and Memory Leaks

If an expectation fails and hits an unreachable() instruction, any unreleased references in the function call stack will be held indefinitely as a memory leak. Test Suites don't stop running if they fail the test callback. However, tests will stop if they fail inside the beforeEach(), beforeAll(), afterEach(), and afterAll() callbacks.

Typically, a throws() test will leave at least a single Expectation on the heap. This is expected, because the unreachable() instruction unwinds the stack and prevents each function from properly calling __release on its reference pointers. Your test suite output may look like this:

[Describe]: toHaveLength TypedArray type: Uint32Array

 [Success]: βœ” should assert expected length
 [Success]: βœ” when expected length should not equal the same value RTrace: +3
 [Success]: βœ” should verify the length is not another value
 [Success]: βœ” when the length is another expected value RTrace: +3

The RTrace: +3 corresponds to an Expectation, a Uint32Array, and a single backing ArrayBuffer that were left on the heap because the expectation failed. This was expected, because these two tests were annotated with the throws(desc, callback) function. If a function that is expected to pass causes RTrace to report a very large value, it may indicate a serious memory leak, and the DefaultTestReporter can be your best friend in tracking down these sorts of problems.

The following methods are exposed as a way to inspect how many allocations and frees occurred during the course of function execution. Each of these functions exists in the RTrace namespace and calls into JavaScript to query the state of the heap relative to the overall test file, the test group, or each individual test, depending on the function.
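
The label-based start/end bookkeeping described below can be modeled in plain JavaScript like this (an illustrative toy model, not as-pect's actual host implementation):

```javascript
// Toy model of RTrace's counters: start() records the allocation count
// under a label, end() returns the delta since that label was started.
class RTraceModel {
  constructor() {
    this.allocationCount = 0;
    this.labels = new Map();
  }
  onAlloc() { this.allocationCount++; }  // invoked on each heap allocation
  count() { return this.allocationCount; }
  start(label) { this.labels.set(label, this.allocationCount); }
  end(label) { return this.allocationCount - this.labels.get(label); }
}

const rtrace = new RTraceModel();
rtrace.start(0);
rtrace.onAlloc();
rtrace.onAlloc();
console.log(rtrace.end(0)); // 2 allocations occurred since the label started
```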

RTrace.count()

The count method returns the current number of heap allocations.

Example:

const num: i32 = RTrace.count();

RTrace.start(label: i32)

The start method creates a starting point for a relative number of heap allocations. It should be used in conjunction with the RTrace.end(label) method which returns the relative number of heap allocations compared to the starting number when the label was created.

Example:

const enum RTraceLabels {
  MEMORY_INTENSIVE_OPERATION = 0,
}

RTrace.start(RTraceLabels.MEMORY_INTENSIVE_OPERATION);
doSomething();
const end: i32 = RTrace.end(RTraceLabels.MEMORY_INTENSIVE_OPERATION);
expect<i32>(end).toBe(0);

RTrace.end(label: i32)

The end method closes a measurement point created by RTrace.start(label) and returns the relative number of heap allocations that have occurred since the label was started.

Example:

const enum RTraceLabels {
  MEMORY_INTENSIVE_OPERATION = 0,
}

RTrace.start(RTraceLabels.MEMORY_INTENSIVE_OPERATION);
doSomething();
const end: i32 = RTrace.end(RTraceLabels.MEMORY_INTENSIVE_OPERATION);
expect<i32>(end).toBe(0);

RTrace.allocations()

The allocations function will report the exact number of allocations that have occurred during the course of test file evaluation.

const allocations: i32 = RTrace.allocations();

RTrace.frees()

The frees function will report the exact number of frees that have occurred during the course of test file evaluation.

const frees: i32 = RTrace.frees();

RTrace.groupAllocations()

The groupAllocations function will report the exact number of allocations that have occurred during the course of the test group's evaluation.

describe("a group", () => {
  afterAll(() => {
    const groupAllocations: i32 = RTrace.groupAllocations();
  });
});

RTrace.groupFrees()

The groupFrees function will report the exact number of frees that have occurred during the course of the test group's evaluation.

describe("a group", () => {
  afterAll(() => {
    const groupFrees: i32 = RTrace.groupFrees();
  });
});

RTrace.testAllocations()

The testAllocations function will report the exact number of allocations that have occurred during the course of the test's evaluation.

describe("a group", () => {
  afterEach(() => {
    const testAllocations: i32 = RTrace.testAllocations();
  });
});

RTrace.testFrees()

The testFrees function will report the exact number of frees that have occurred during the course of the test's evaluation.

describe("a group", () => {
  afterEach(() => {
    const testFrees: i32 = RTrace.testFrees();
  });
});

RTrace.increments()

The increments function returns the total number of reference counted increments that occurred over the course of the current test file.

Example:

const increments: i32 = RTrace.increments();

RTrace.decrements()

The decrements function returns the total number of reference counted decrements that occurred over the course of the current test file.

Example:

const decrements: i32 = RTrace.decrements();

RTrace.groupIncrements()

The groupIncrements function returns the total number of reference counted increments that occurred over the course of the current testing group.

describe("A testing group", () => {
  afterAll(() => {
    // log how many increments occurred
    log<i32>(RTrace.groupIncrements());
  });
});

RTrace.groupDecrements()

The groupDecrements function returns the total number of reference counted decrements that occurred over the course of the current testing group.

describe("A testing group", () => {
  afterAll(() => {
    // log how many decrements occurred
    log<i32>(RTrace.groupDecrements());
  });
});

RTrace.testIncrements()

The testIncrements function returns the total number of reference counted increments that occurred over the course of the current test.

describe("A testing group", () => {
  afterEach(() => {
    // log how many increments occurred
    log<i32>(RTrace.testIncrements());
  });
});

RTrace.testDecrements()

The testDecrements function returns the total number of reference counted decrements that occurred over the course of the current test.

describe("A testing group", () => {
  afterEach(() => {
    // log how many decrements occurred
    log<i32>(RTrace.testDecrements());
  });
});

RTrace.collect()

This method triggers a garbage collection.

describe("something", () => {
  // put some tests here
});

afterEach(() => {
  // trigger a garbage collection after each test
  RTrace.collect();
});

Logging

A global log<T>(value: T): void function is provided by as-pect to help collect useful information about the state of your program. Simply give it the type you want to log, and it will append a LogValue item to the corresponding TestResult or TestGroup item the log() function was called within.

log<string>("This will log a string"); // Remember, strings are references
log<f64>(0.4); // this logs a float value
log<i32>(42); // this logs the meaning of life
log<Vec3>(new Vec3(1, 2, 3)); // this logs every byte in the reference
log<i32[]>([1, 2, 3]); // this will log an array

This log function does not pipe the output to stdout. It simply attaches the log value to the current group or test the log() function was called in. When the DefaultTestReporter does its job, it pipes the collected log information to stdout at that point, instead of immediately when the function executes.
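
The deferred-log idea can be sketched in host-side JavaScript like this (names here are illustrative, not as-pect's internals):

```javascript
// Log values are attached to the current test and only piped to
// stdout when the reporter runs, not when log() is called.
const currentTest = { name: "example test", logs: [] };

function log(value) {
  currentTest.logs.push(value); // no stdout output happens here
}

log("a string value");
log(42);

// Later, the reporter flushes the collected values all at once.
for (const value of currentTest.logs) {
  console.log(`[Log] ${value}`);
}
```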

Performance Testing

To speed up testing, avoid the log() function and reduce the amount of IO that as-pect must do to compile your tests. The biggest bottleneck in WebAssembly testing is compilation. This means that using features like @inline many times will cause your module to compile more slowly, and as a result the test file will run slower.

Performance Enabling Via API

To enable performance using the global test functions, call the performanceEnabled() function with a true value.

describe("my test suite", () => {
  performanceEnabled(true);
  test("some performance test", () => {
    // some performance sensitive code
  });
});

When using performanceEnabled(true) on a test, logs are not supported for that specific test. Running 10000 samples of a function that collects logs will result in a very large amount of memory usage and IO.

Note that each of the performance functions must be called before the test is declared in the same describe block to override the corresponding default configuration values on a test by test basis.

To override the maximum number of samples collected, use the maxSamples function.

maxSamples(10000); // 10000 is the maximum value
it("should collect only 10000 samples at most", () => {});

To override the maximum test run time (including test logic), use the maxRunTime function.

maxRunTime(5000); // 5000 ms, or 5 seconds of test run time
it("should have a maxRunTime of 5 seconds", () => {});

To override how many decimal places are rounded to, use the roundDecimalPlaces function.

roundDecimalPlaces(4); // 3 is the default
it("should round to 4 decimal places", () => {});

To force reporting of the median test runtime, use the reportMedian function.

reportMedian(true); // false will disable reporting of the median
it("should report the median", () => {});

To force reporting of the average, or mean test runtime, use the reportAverage function.

reportAverage(true); // false will disable reporting of the mean
it("should report the average", () => {});

To force reporting of the variance in the runTime sample, use the reportVariance function.

reportVariance(true); // false will disable reporting of the variance
it("should report the variance", () => {});

To force reporting of the standard deviation of the runTime sample, use the reportStdDev function. This method implies the use of a variance calculation, and will auto-include it in the test result.

reportStdDev(true); // false will disable reporting of the standard deviation
it("should report the standard deviation", () => {});

To force reporting of the maximum runTime value, use the reportMax function.

reportMax(true); // false will disable reporting of the max
it("should report the max", () => {});

To force reporting of the minimum runTime value, use the reportMin function.

reportMin(true); // false will disable reporting of the min
it("should report the min", () => {});

Performance Enabling Via Configuration

Providing these values inside an as-pect.config.js configuration will set these as the global defaults.

Note that when using the CLI, command line flag inputs will override the values configured in as-pect.config.js.

// in as-pect.config.js
module.exports = {
  performance: {
    /** Enable performance statistics gathering for *every* test. */
    enabled: false,
    /** Set the maximum number of samples to run for every test. */
    maxSamples: 10000,
    /** Set the maximum test run time in milliseconds for every test. */
    maxTestRunTime: 2000,
    /** Report the median time in the default reporter for every test. */
    reportMedian: true,
    /** Report the average time in milliseconds for every test. */
    reportAverage: true,
    /** Report the standard deviation for every test. */
    reportStandardDeviation: false,
    /** Report the maximum run time in milliseconds for every test. */
    reportMax: false,
    /** Report the minimum run time in milliseconds for every test. */
    reportMin: false,
  },
}

Custom Imports Using CLI

If a set of custom imports are required for your test module, it's possible to provide a set of imports for a given test file.

If your test is located at assembly/__tests__/customImports.spec.ts, then use the filename assembly/__tests__/customImports.spec.imports.js to export the test module's imports. This file will be required by the CLI before the module is instantiated.
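
As an illustration, such an imports file might look like the following. The customImports namespace and the timestamp function here are hypothetical; the names must match the imports your test module actually declares:

```javascript
// assembly/__tests__/customImports.spec.imports.js
// Exports the import object used when instantiating this one test module.
module.exports = {
  customImports: {
    // corresponds to a hypothetical declaration in the test file:
    //   declare function timestamp(): f64;
    timestamp: () => Date.now(),
  },
};
```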

IMPORTANT: THIS WILL IGNORE as-pect.config.js'S IMPORTS COMPLETELY

Please see the provided example located in assembly/__tests__/customImports.spec.ts.

Special Thanks

Special thanks to the AssemblyScript team for creating one of the coolest computer languages that compiles to WebAssembly.
