
test: skip less scenarios for test-linux-perf.js #27364

Merged: 1 commit merged into nodejs:master on Jun 6, 2019

Conversation

@refack (Contributor) commented Apr 23, 2019

Refs: nodejs/build#1774

Checklist
  • make -j4 test (UNIX), or vcbuild test (Windows) passes
  • tests and/or benchmarks are included
  • documentation is changed or added
  • commit message follows commit guidelines

@nodejs-github-bot added the test label on Apr 23, 2019
@refack requested a review from mmarchini on Apr 23, 2019 14:35
@refack added the performance and v8 engine labels on Apr 23, 2019
@mmarchini (Contributor) commented:

These changes should suffice, thank you for looking into this!

09:00:06 not ok 1 v8-updates/test-linux-perf
09:00:06   ---
09:00:06   duration_ms: 0.816
09:00:06   severity: fail
09:00:06   exitcode: 1
09:00:06   stack: |-
09:00:06     /home/iojs/build/workspace/node-test-commit-custom-suites-freestyle/test/v8-updates/test-linux-perf.js:52
09:00:06       throw new Error(`Failed to execute 'perf': ${perf.stderr}`);
09:00:06       ^
09:00:06     
09:00:06     Error: Failed to execute 'perf': 
09:00:06     
09:00:06     #
09:00:06     # Fatal error in , line 0
09:00:06     # Check failed: !obj->IsCode().
09:00:06     #
09:00:06     #
09:00:06     #
09:00:06     #FailureMessage Object: 0x7ffd12ff5640[ perf record: Woken up 1 times to write data ]
09:00:06     [ perf record: Captured and wrote 0.003 MB perf.data (2 samples) ]
09:00:06     
09:00:06         at Object.<anonymous> (/home/iojs/build/workspace/node-test-commit-custom-suites-freestyle/test/v8-updates/test-linux-perf.js:52:9)
09:00:06         at Module._compile (internal/modules/cjs/loader.js:759:30)
09:00:06         at Object.Module._extensions..js (internal/modules/cjs/loader.js:770:10)
09:00:06         at Module.load (internal/modules/cjs/loader.js:628:32)
09:00:06         at Function.Module._load (internal/modules/cjs/loader.js:555:12)
09:00:06         at Function.Module.runMain (internal/modules/cjs/loader.js:826:10)
09:00:06         at internal/main/run_main_module.js:17:11

The failing test is caused by #27346, which will be fixed once we pick up v8/v8@d915b8d (I requested an upstream backport to 7.4 and am waiting for a response; if an upstream backport is not possible, I'll cherry-pick and open a PR). I'm not sure whether we should wait until the issue is fixed on master before landing this PR, since it's exposing a legitimate failure.

@refack self-assigned this Apr 23, 2019
@nodejs-github-bot (Collaborator) commented Jun 5, 2019

@nodejs-github-bot (Collaborator) commented Jun 5, 2019

@refack (Contributor, Author) commented Jun 6, 2019

@mmarchini do you know what this means?

12:51:53 not ok 2 v8-updates/test-linux-perf
12:51:53   ---
12:51:53   duration_ms: 3.121
12:51:53   severity: fail
12:51:53   exitcode: 1
12:51:53   stack: |-
12:51:53     assert.js:362
12:51:53         throw err;
12:51:53         ^
12:51:53     
12:51:53     AssertionError [ERR_ASSERTION]: Couldn't find interpreted functionOne()
12:51:53         at Object.<anonymous> (/home/iojs/build/workspace/node-test-commit-v8-linux/nodes/benchmark-ubuntu1604-intel-64/v8test/v8test/test/v8-updates/test-linux-perf.js:67:8)
12:51:53         at Module._compile (internal/modules/cjs/loader.js:777:30)
12:51:53         at Object.Module._extensions..js (internal/modules/cjs/loader.js:788:10)
12:51:53         at Module.load (internal/modules/cjs/loader.js:640:32)
12:51:53         at Function.Module._load (internal/modules/cjs/loader.js:555:12)
12:51:53         at Function.Module.runMain (internal/modules/cjs/loader.js:840:10)
12:51:53         at internal/main/run_main_module.js:17:11 {
12:51:53       generatedMessage: false,
12:51:53       code: 'ERR_ASSERTION',
12:51:53       actual: null,
12:51:53       expected: true,
12:51:53       operator: '=='
12:51:53     }
12:51:53   ...

https://ci.nodejs.org/job/node-test-commit-v8-linux/2359/nodes=benchmark-ubuntu1604-intel-64,v8test=v8test/console

@mmarchini (Contributor) commented:

Yeah, I noticed this version of V8 optimizes functionOne faster than before. Increasing the sampling frequency might help, but the test would still be flaky. Alternatively, we could run the fixture script a few times (5, 10) and search for the frames in the combined output of all executions, roughly as sketched below. Let me see if we can improve this somehow first; if not, we can try alternative solutions.
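
A rough sketch of that multi-run idea, assuming the existing spawnSync('perf', ...) plumbing in test-linux-perf.js; the run count, fixture path, and argument lists below are placeholders rather than the test's actual values:

'use strict';
const { spawnSync } = require('child_process');

const RUNS = 5;  // placeholder repeat count
const options = { encoding: 'utf8', maxBuffer: Infinity };
// Placeholder argument lists; the real test builds these from its common flags.
const perfArgs = ['record', '-F999', '-g', '--',
                  process.execPath, '--perf-basic-prof', 'fixture.js'];
const perfScriptArgs = ['script'];

let combinedOutput = '';
for (let i = 0; i < RUNS; i++) {
  // Each iteration records a fresh perf.data and dumps it with `perf script`.
  const perf = spawnSync('perf', perfArgs, options);
  if (perf.status !== 0) continue;  // in this sketch, simply skip a failed run
  const perfScript = spawnSync('perf', perfScriptArgs, options);
  combinedOutput += perfScript.stdout;
}

// A frame only needs to be sampled in one of the runs:
console.log(/InterpretedFunction:functionOne/.test(combinedOutput));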

@refack (Contributor, Author) commented Jun 6, 2019

Thanks.

Since this is run only once and only for V8 updates, IMO it could be a "pummel" test. We can extend its timeout, or generate synthetic CPU load if needed.

@mmarchini (Contributor) commented:

OK, I was able to rewrite it in a way that's guaranteed to work:

diff --git a/test/fixtures/linux-perf.js b/test/fixtures/linux-perf.js
index 011ef19777..90ea2f2e33 100644
--- a/test/fixtures/linux-perf.js
+++ b/test/fixtures/linux-perf.js
@@ -1,26 +1,17 @@
 'use strict';
 
-const crypto = require('crypto');
+const { spawnSync } = require("child_process");
+const sleepTime = new Number(process.argv[2] || "0.1");
+const repeat = new Number(process.argv[3]) || 5;
 
-// Functions should be complex enough for V8 to run them a few times before
-// compiling, but not complex enough to always stay in interpreted mode. They
-// should also take some time to run, otherwise Linux perf might miss them
-// entirely even when sampling at a high frequency.
-function functionOne(i) {
-  for (let j=i; j > 0; j--) {
-    crypto.createHash('md5').update(functionTwo(i, j)).digest("hex");
-  }
+function functionOne() {
+  functionTwo();
 }
 
-function functionTwo(x, y) {
-  let data = ((((x * y) + (x / y)) * y) ** (x + 1)).toString();
-  if (x % 2 == 0) {
-    return crypto.createHash('md5').update(data.repeat((x % 100) + 1)).digest("hex");
-  } else {
-    return crypto.createHash('md5').update(data.repeat((y % 100) + 1)).digest("hex");
-  }
+function functionTwo() {
+  spawnSync('sleep', [`${sleepTime}`]);
 }
 
-for (let i = 0; i < 1000; i++) {
-  functionOne(i);
+for (let i = 0; i < repeat; i++) {
+  functionOne();
 }
diff --git a/test/v8-updates/test-linux-perf.js b/test/v8-updates/test-linux-perf.js
index 98f8f6c527..526dfbcd4e 100644
--- a/test/v8-updates/test-linux-perf.js
+++ b/test/v8-updates/test-linux-perf.js
@@ -22,16 +22,53 @@ tmpdir.refresh();
 if (process.config.variables.node_shared)
   common.skip("can't test Linux perf with shared libraries yet");
 
-const perfArgs = [
+if (!common.isLinux)
+  common.skip('only testing Linux for now');
+
+const frequency = 99;
+
+const repeat = 5;
+
+// Expected number of samples we'll capture per repeat
+const sampleCount = 10;
+const sleepTime = sampleCount * (1.0 / frequency);
+
+const perfFlags = [
   'record',
-  '-F999',
+  `-F${frequency}`,
   '-g',
-  '--',
-  process.execPath,
+];
+
+const nodeCommonFlags = [
   '--perf-basic-prof',
   '--interpreted-frames-native-stack',
   '--no-turbo-inlining',  // Otherwise simple functions might get inlined.
+];
+
+const perfInterpretedFramesArgs = [
+  ...perfFlags,
+  '--',
+  process.execPath,
+  ...nodeCommonFlags,
+  '--no-opt',
+  fixtures.path('linux-perf.js'),
+  `${sleepTime}`,
+  `${repeat}`,
+];
+
+const perfCompiledFramesArgs = [
+  ...perfFlags,
+  '--',
+  process.execPath,
+  ...nodeCommonFlags,
+  '--always-opt',
   fixtures.path('linux-perf.js'),
+  `${sleepTime}`,
+  `${repeat}`,
+];
+
+const perfArgsList = [
+  perfInterpretedFramesArgs, perfCompiledFramesArgs
 ];
 
 const perfScriptArgs = [
@@ -44,25 +81,27 @@ const options = {
   maxBuffer: Infinity
 };
 
-if (!common.isLinux)
-  common.skip('only testing Linux for now');
+let output = "";
+
+for (let perfArgs of perfArgsList) {
+  const perf = spawnSync('perf', perfArgs, options);
+  assert.ifError(perf.error);
+  if (perf.status !== 0)
+    throw new Error(`Failed to execute 'perf': ${perf.stderr}`);
 
-const perf = spawnSync('perf', perfArgs, options);
-assert.ifError(perf.error);
-if (perf.status !== 0)
-  throw new Error(`Failed to execute 'perf': ${perf.stderr}`);
+  const perfScript = spawnSync('perf', perfScriptArgs, options);
+  assert.ifError(perfScript.error);
+  if (perfScript.status !== 0)
+    throw new Error(`Failed to execute perf script: ${perfScript.stderr}`);
 
-const perfScript = spawnSync('perf', perfScriptArgs, options);
-assert.ifError(perfScript.error);
-if (perfScript.status !== 0)
-  throw new Error(`Failed to execute perf script: ${perfScript.stderr}`);
+  output += perfScript.stdout;
+}
 
 const interpretedFunctionOneRe = /InterpretedFunction:functionOne/;
 const compiledFunctionOneRe = /LazyCompile:\*functionOne/;
 const interpretedFunctionTwoRe = /InterpretedFunction:functionTwo/;
 const compiledFunctionTwoRe = /LazyCompile:\*functionTwo/;
 
-const output = perfScript.stdout;
 
 assert.ok(output.match(interpretedFunctionOneRe),
           "Couldn't find interpreted functionOne()");

Essentially, we'll run the fixture twice: once with optimizations disabled (--no-opt) and once forcing optimization (--always-opt). Instead of using arithmetic and crypto operations to generate load that keeps frames on the CPU (which gives us no guarantee about execution time), we use spawnSync('sleep'), because it gives us a predictable minimum time each frame will stay on the CPU. The sleep time is calculated from the frequency we'll use to sample stack traces and the number of samples we want to capture for a given stack trace. With this method we guarantee that we'll get at least one sample with the functions we want on the stack trace. We repeat the function calls a few times just to be sure, but in theory this is not necessary.
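
For concreteness, here is the arithmetic behind the constants in the patch above (a frequency of 99 Hz, 10 samples per repeat, and 5 repeats, all taken from the diff); this is only an illustration of the calculation, not additional test code:

// Sleep-time calculation used by the patch above, spelled out.
const frequency = 99;    // perf sampling frequency in Hz (-F99)
const sampleCount = 10;  // samples we want to capture per repeat
const repeat = 5;        // times the fixture calls functionOne()

// Each functionTwo() call sleeps long enough to be hit ~sampleCount times
// at a sampling period of 1/99 s (about 10.1 ms).
const sleepTime = sampleCount * (1.0 / frequency);

console.log(sleepTime.toFixed(3));             // '0.101' seconds per call
console.log((sleepTime * repeat).toFixed(2));  // '0.51' seconds of sleep per perf run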

I ran this 100 times (with a frequency of 99, which is way lower than what we're using today) and there were no failures.

@nodejs-github-bot (Collaborator) commented Jun 6, 2019

Co-authored-by: Matheus Marchini <mat@mmarchini.me>

PR-URL: nodejs#27364
Refs: nodejs/build#1774
Reviewed-By: Matheus Marchini <mat@mmarchini.me>
Reviewed-By: Ruben Bridgewater <ruben@bridgewater.de>
@refack removed their assignment Jun 6, 2019
@refack merged commit 546d6cd into nodejs:master Jun 6, 2019
@refack deleted the skip-less-v8-linux-perf branch June 6, 2019 22:47
BridgeAR pushed a commit that referenced this pull request Jun 17, 2019
Co-authored-by: Matheus Marchini <mat@mmarchini.me>

PR-URL: #27364
Refs: nodejs/build#1774
Reviewed-By: Matheus Marchini <mat@mmarchini.me>
Reviewed-By: Ruben Bridgewater <ruben@bridgewater.de>
@BridgeAR mentioned this pull request Jun 17, 2019
richardlau pushed a commit that referenced this pull request Oct 9, 2020
Co-authored-by: Matheus Marchini <mat@mmarchini.me>

PR-URL: #27364
Refs: nodejs/build#1774
Reviewed-By: Matheus Marchini <mat@mmarchini.me>
Reviewed-By: Ruben Bridgewater <ruben@bridgewater.de>
@richardlau mentioned this pull request Oct 9, 2020
Labels
performance: Issues and PRs related to the performance of Node.js.
test: Issues and PRs related to the tests.
v8 engine: Issues and PRs related to the V8 dependency.