benchmark: Remove top level platform specific imports, and add todos (#16112)

## Description

A user of benchmark ran into an issue when updating to 0.47 because of the
unconditional use of v8 and perf_hooks in memoryTestRunner.ts.

This removes the use of perf_hooks and instead uses the timer written
for performance testing, which is more portable.
It also moves the import of v8 to where it is used, so it is only imported
when actually taking measurements, not statically or in correctness
mode.
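
For illustration, the import pattern described above might look roughly like this (a minimal sketch; `sampleHeapUsed` is a hypothetical helper, not the runner's actual measurement code):

```typescript
// Only a type-level dependency on "v8" remains at module scope, so nothing
// platform specific is loaded just by importing this module.
import type * as v8types from "v8";

// Hypothetical helper for illustration: load "v8" lazily, only on the code
// path that actually takes measurements.
async function sampleHeapUsed(): Promise<number> {
	const v8: typeof v8types = await import("v8");
	return v8.getHeapStatistics().used_heap_size;
}
```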

Lastly, this adds some TODOs regarding issues I noticed while making these
changes; they should be addressed in the future but are unrelated to the
changes made here.
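
The portable timer mentioned above is not shown in this diff; a minimal sketch of its assumed shape (a `now()`/`toSeconds()` pair, inferred from the calls visible below) could be:

```typescript
// Minimal sketch of a portable timer: prefer the standard global
// `performance` when the runtime provides one, fall back to Date.now(),
// and never import "perf_hooks" statically.
export const timer = {
	// Current timestamp in milliseconds.
	now: (): number =>
		typeof performance !== "undefined" ? performance.now() : Date.now(),
	// Elapsed seconds between two timestamps taken with `now()`.
	toSeconds: (before: number, after: number): number => (after - before) / 1000,
};
```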
CraigMacomber committed Jun 23, 2023
1 parent 3cb7cef commit 50bf078
Showing 1 changed file with 24 additions and 6 deletions.
tools/benchmark/src/mocha/memoryTestRunner.ts
@@ -3,8 +3,7 @@
* Licensed under the MIT License.
*/

-import * as v8 from "v8";
-import { performance } from "perf_hooks";
+import type * as v8 from "v8";
import { assert } from "chai";
import { Test } from "mocha";
import {
@@ -18,6 +17,22 @@ import {
TestType,
} from "../Configuration";
import { getArrayStatistics, Stats } from "../ReporterUtilities";
+import { timer } from "../timer";

+// TODO:
+// Much of the logic and interfaces here were duplicated from the runtime benchmark code.
+// This code should be either updated and/or deduplicated to reflect the major refactoring and improvements done to the runtime benchmark code.
+// TODO:
+// The majority of this code is not mocha specific and should be factored into a place where it can be used by tests not using mocha.
+// TODO:
+// IMemoryTestObject provides a rather unintuitive way to measure memory used by some data structure.
+// Data to measure needs to be allocated in either `run` or `afterIteration` (there is no reason to separate those, as the tooling does nothing in between),
+// then freed in `beforeIteration`.
+// Having methods like "allocate" and "free" would make more sense.
+// TODO:
+// Leaks from one iteration not cleaned up before the next should be an error, not silently ignored via subtracting them out,
+// since the statistics assume samples are independent, and that can't be true if each sample leaks memory.
+// Alternatively, a mode to measure tests that work this way could be added via a separate API and characterize the growth of memory over iterations.
+
/**
* Contains the samples of all memory-related measurements we track for a given benchmark (a test which was
@@ -170,7 +185,6 @@ export interface MemoryTestObjectProps extends MochaExclusiveOptions {
* Tests created with this function get tagged with '\@MemoryUsage', so mocha's --grep/--fgrep
* options can be used to only run this type of tests by filtering on that value.
*/
-
export function benchmarkMemory(testObject: IMemoryTestObject): Test {
const options: Required<MemoryTestObjectProps> = {
maxBenchmarkDurationSeconds: testObject.maxBenchmarkDurationSeconds ?? 30,
@@ -299,7 +313,10 @@ export function benchmarkMemory(testObject: IMemoryTestObject): Test {
totalRunTimeMs: -1,
};

-const startTime = performance.now();
+// Do this import only if isInPerformanceTestingMode so correctness mode can work on a non-v8 runtime like a browser.
+const v8 = await import("v8");
+
+const startTime = timer.now();
try {
let heapUsedStats: Stats = {
marginOfError: NaN,
@@ -342,7 +359,7 @@ export function benchmarkMemory(testObject: IMemoryTestObject): Test {
// Break if max elapsed time passed, only if we've reached the min sample count
if (
benchmarkStats.runs >= options.minSampleCount &&
-(performance.now() - startTime) / 1000 > options.maxBenchmarkDurationSeconds
+timer.toSeconds(startTime, timer.now()) > options.maxBenchmarkDurationSeconds
) {
break;
}
@@ -352,11 +369,12 @@
);
benchmarkStats.stats = heapUsedStats;
} catch (error) {
+// TODO: This results in the mocha test passing when it should fail. Fix this.
benchmarkStats.aborted = true;
benchmarkStats.error = error as Error;
} finally {
// It's not perfect, since we don't compute it *immediately* after we stop running tests but it's good enough.
-benchmarkStats.totalRunTimeMs = performance.now() - startTime;
+benchmarkStats.totalRunTimeMs = timer.toSeconds(startTime, timer.now()) * 1000;
}

test.emit("benchmark end", benchmarkStats);
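For context on the third TODO above, here is a hedged sketch of how a test reads under the current `IMemoryTestObject` shape (the `title` field and exact signatures are assumptions; only `run`, `beforeIteration`, and `afterIteration` are named in the TODO):

```typescript
// Sketch (assumed API shape): the data to measure is allocated in run()
// and freed in beforeIteration(), the inversion the TODO calls unintuitive.
benchmarkMemory(
	new (class implements IMemoryTestObject {
		readonly title = "Array of 1M numbers";
		private data: number[] | undefined;

		beforeIteration(): void {
			// "free": drop the previous iteration's allocation before measuring.
			this.data = undefined;
		}

		async run(): Promise<void> {
			// "allocate": build the object whose memory footprint is measured.
			this.data = new Array(1_000_000).fill(0);
		}
	})(),
);
```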
