benchmark: add bench for zlib gzip + gunzip cycle #20034

Closed
wants to merge 4 commits

Conversation

addaleax
Member

Originally wrote this for some work that is going to take a while
longer before it’s ready to be PR’ed, so it seems fine to start
with this on its own.

Checklist
  • make -j4 test (UNIX), or vcbuild test (Windows) passes
  • tests and/or benchmarks are included
  • commit message follows commit guidelines

@nodejs-github-bot nodejs-github-bot added benchmark Issues and PRs related to the benchmark subsystem. zlib Issues and PRs related to the zlib subsystem. labels Apr 14, 2018
});

function main({ inputLen, duration, type }) {
const buffer = Buffer.alloc(inputLen, fs.readFileSync(__filename));
Member

@lpinca lpinca Apr 15, 2018


Nit: any reason for using fs.readFileSync(__filename) instead of an arbitrary fill character? Is it to make the chunks to gzip/gunzip realistic?

Member Author


@lpinca Yes, exactly that. If you have other suggestions, I’ll take them too :)

Member


Nah, that's ok and I have no better ideas.

inputLen: [1024],
duration: [5],
type: ['string', 'buffer']
});
Member


These options have to be reflected in the benchmark test as well (parallel/test-benchmark-zlib.js).

Member


@BridgeAR they are, no? See line 14.

Member


As far as I can tell, inputLen and duration have to be added to the test.

Member

@lpinca lpinca Apr 22, 2018


Oh, true.

Member


Actually inputLen and duration are used in the test.

Member


Right now we just have:

runBenchmark('zlib',
             [
               'method=deflate',
               'n=1',
               'options=true',
               'type=Deflate'
             ]);

Member


It seems this benchmark is like the net benchmarks, where throughput is calculated after x seconds. I don't think this will be run in CI. Or am I not understanding the issue? :)

Member


Right but we have tests that run these benchmarks to confirm that they work and also to confirm that we didn't break some functionality. See test/parallel/test-benchmark-zlib.js.

Member

@lpinca lpinca Apr 22, 2018


I see now, I didn't read the original comment carefully. Sorry for the noise.

Member Author


Done!

bench.end(readFromOutput * 8 / (1024 ** 3));

// Cut off writing the easy way.
input.write = () => {};
Member


Would it not be better to replace write instead of input.write?

Member Author


I’d find it a bit surprising to override something defined with function foo(){}, tbh…?

@BridgeAR
Member

Ping @addaleax

Member

@BridgeAR BridgeAR left a comment


Just to make sure it does not land as is.

@addaleax addaleax added the author ready PRs that have at least one approval, no pending requests for changes, and a CI started. label Apr 27, 2018
'type=Deflate'
'type=Deflate',
'inputLen=1024',
'duration=1'
Contributor


I'm -0 on this. It's heuristic (so it still might fail on a really slow or clogged machine); probably safer to call bench.end(gb) with:

// Give result in GBit/s, like the net benchmarks do.
// Never return 0, so that tests won't fail on really slow machines.
const gb = (readFromOutput * 8 / (1024 ** 3)) || 1;
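For illustration, the suggested conversion and fallback behave as follows (the byte count here is a made-up example value, not output from the benchmark):

```javascript
// Bytes observed on the output side during the benchmark window
// (hypothetical value chosen for illustration).
const readFromOutput = 512 * 1024 * 1024; // 512 MiB

// Bytes to bits (* 8), then bits to gigabits (/ 2 ** 30),
// matching the net benchmarks' GBit/s convention.
const gb = (readFromOutput * 8 / (1024 ** 3)) || 1;
console.log(gb); // 4

// On a machine so slow that nothing was read, the || 1 fallback
// keeps bench.end() from being called with 0.
console.log((0 * 8 / (1024 ** 3)) || 1); // 1
```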

Contributor


Ohh, and see #20128: setting duration=1 will add a 1-second test to the suite.

Member Author


Do we do this elsewhere? Otherwise it seems like it would falsify test results – what if the test is actually broken and returns 0?

Would you be okay with the current code if it passes a stress test run?

Member Author


Regarding the duration: Yeah, that’s not great. It’s not quite as bad as if it were a sequential test though…

Contributor


Let me read through the test and harness a little bit more. AFAIK the test-benchmark-* set is there "mostly" for syntax, API and regression testing; designing correctness tests for the benchmarks might be out of scope.
Maybe we can just relax the Error: called end() with operation count <= 0 rule anyway 🤔

Contributor


Found it:

if (!process.env.NODEJS_BENCHMARK_ZERO_ALLOWED && operations <= 0) {

so calling:

runBenchmark('zlib',
             [
               'method=deflate',
               'n=1',
               'options=true',
               'type=Deflate',
               'inputLen=1024',
               'duration=0.001'
             ],
             {
               'NODEJS_BENCHMARK_ZERO_ALLOWED': 1
             });

should get the test to pass.

Member Author


Thanks, done!

@addaleax addaleax removed the author ready PRs that have at least one approval, no pending requests for changes, and a CI started. label Apr 27, 2018
@addaleax addaleax added the author ready PRs that have at least one approval, no pending requests for changes, and a CI started. label Apr 27, 2018

@apapirovski
Member

apapirovski commented Apr 28, 2018

@addaleax I can't get this properly landed, it seems to be giving me a conflict. Any chance you can rebase? My bad. User error 😆

@apapirovski
Member

Landed in e797d5b

apapirovski pushed a commit that referenced this pull request Apr 28, 2018
Originally wrote this for some work that is going to take a while
longer before it’s ready to be PR’ed, so it seems fine to start
with this on its own.

PR-URL: #20034
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
Reviewed-By: Ruben Bridgewater <ruben@bridgewater.de>
MylesBorins pushed a commit that referenced this pull request May 4, 2018
Originally wrote this for some work that is going to take a while
longer before it’s ready to be PR’ed, so it seems fine to start
with this on its own.

PR-URL: #20034
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
Reviewed-By: Ruben Bridgewater <ruben@bridgewater.de>
@MylesBorins MylesBorins mentioned this pull request May 8, 2018
Labels
author ready PRs that have at least one approval, no pending requests for changes, and a CI started. benchmark Issues and PRs related to the benchmark subsystem. zlib Issues and PRs related to the zlib subsystem.
7 participants