Benchmarks are in a sorry state #604

Open
piscisaureus opened this Issue Oct 24, 2012 · 4 comments

@piscisaureus
Joyent member
  • Benchmarks take way too long to run. There really is no need to run 16 variants of udp_packet_storm, 100 variants of fs_stat, etc. (see the sketch after this list for where those variant counts come from).
  • Many benchmarks time out on Windows.
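
For context, the variant counts come from the test runner's registration list: each entry in test/benchmark-list.h is a separate benchmark, so a senders × receivers matrix multiplies out to 16 udp_packet_storm variants quickly. A rough, illustrative excerpt in the style of that header (names abbreviated, not verbatim):

```c
/* Illustrative excerpt in the style of test/benchmark-list.h (not
 * verbatim): each entry registers one benchmark with the runner, so
 * a senders x receivers matrix multiplies out to 16 variants fast. */
BENCHMARK_DECLARE (udp_packet_storm_1v1)
BENCHMARK_DECLARE (udp_packet_storm_1v10)
/* ... */
BENCHMARK_DECLARE (udp_packet_storm_1000v1000)

TASK_LIST_START
  BENCHMARK_ENTRY  (udp_packet_storm_1v1)
  BENCHMARK_ENTRY  (udp_packet_storm_1v10)
  /* ... */
  BENCHMARK_ENTRY  (udp_packet_storm_1000v1000)
TASK_LIST_END
```
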
@bnoordhuis

Benchmarks take way too long to run. There really is no need to run 16 variants of udp_packet_storm, 100 variants of fs_stat etc.

Strongly disagree. We're talking benchmarks here, not test cases.

The reason there are so many variations is to (a) torture-test libuv and (b) catch regressions and/or non-linear behavior. If anything, most benchmarks should run a lot longer than they do now, because I'm not convinced they're as rigorous as they could be.
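
To make the non-linearity point concrete, here is a minimal, self-contained sketch (not one of the actual libuv benchmarks) of the kind of parameter sweep being described: if each stat call costs constant time, the ns/call figure stays flat as the iteration count grows, and a rising curve is exactly the non-linear behavior a single variant would miss.

```c
#include <stdint.h>
#include <stdio.h>
#include <uv.h>

/* Time n synchronous stat calls and report the per-call cost. */
static void bench_stat(unsigned int n) {
  uv_loop_t* loop = uv_default_loop();
  uv_fs_t req;
  uint64_t start;
  unsigned int i;

  start = uv_hrtime();  /* monotonic clock, nanoseconds */
  for (i = 0; i < n; i++) {
    uv_fs_stat(loop, &req, ".", NULL);  /* NULL callback: runs synchronously */
    uv_fs_req_cleanup(&req);
  }
  printf("fs_stat x %u: %.1f ns/call\n",
         n,
         (uv_hrtime() - start) / (double) n);
}

int main(void) {
  /* Each iteration count is one "variant"; per-call cost should stay
   * roughly flat as n grows. A rising curve flags non-linear behavior. */
  unsigned int sizes[] = { 1000, 10000, 100000 };
  unsigned int i;

  for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
    bench_stat(sizes[i]);

  return 0;
}
```
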

@bnoordhuis

Many benchmarks time out on Windows.

That's a strong indication you have some catching up to do, Bert Belder. :-)

@piscisaureus
Joyent member

Strongly disagree. We're talking benchmarks here, not test cases.

You can say that, but I almost never run the benchmarks anymore because it takes ages to run them all. Maybe we can have two flavors: a "quick" suite and a "full" suite?
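
A minimal sketch of how that split might look, assuming a simple tag per entry; the bench_entry_t table, the flag names, and the --full switch are all hypothetical, not libuv's actual runner:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical flags; illustrative only. */
#define BENCH_QUICK 1  /* runs in both suites */
#define BENCH_FULL  0  /* runs only with --full */

typedef struct {
  const char* name;
  int flags;
} bench_entry_t;

static bench_entry_t benchmarks[] = {
  { "fs_stat",                    BENCH_QUICK },
  { "udp_packet_storm_1v1",       BENCH_QUICK },
  { "udp_packet_storm_1v100",     BENCH_FULL  },
  { "udp_packet_storm_1000v1000", BENCH_FULL  },
};

int main(int argc, char** argv) {
  int full = argc > 1 && strcmp(argv[1], "--full") == 0;
  size_t i;

  for (i = 0; i < sizeof(benchmarks) / sizeof(benchmarks[0]); i++) {
    if (!full && benchmarks[i].flags != BENCH_QUICK)
      continue;  /* quick suite skips the long-running variants */
    printf("running %s\n", benchmarks[i].name);
    /* ... run the benchmark here ... */
  }

  return 0;
}
```
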

That's a strong indication you have some catching up to do, Bert Belder. :-)

Maybe you need to write better benchmarks. See 0dbab84 and a54b9e2, for example.

@piscisaureus
Joyent member

That's a strong indication you have some catching up to do, Bert Belder. :-)

Also, the fs_stat benchmark times out, and that is not fixable. It's just Windows.
