
More realistic benchmarks are "needed" #2

Closed
castarco opened this issue Jul 29, 2016 · 3 comments

castarco commented Jul 29, 2016

If performance claims are made, I think it's important to back them up with strong evidence.

  • Testing methodology: No info is available about...
    • The specs of the machine used.
    • The number of loop iterations.
    • The number of "outer loop" iterations (to measure variance). In this case we know it is 1, because the variance is not measured.
    • The initial "warm-up" process.
    • Which MessagePack implementation was used? (There are multiple implementations.)
  • Cases 'matrix' (a sketch of a harness covering these cases follows this list):
    • Object with only scalar properties:
      • How does the number of properties affect performance?
      • How do the scalar types affect performance?
    • Object with array properties (how does array length affect performance?)
    • Object with nested object properties (how does nesting depth affect performance?)
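
As a concrete example of the kind of setup being asked for, here is a minimal sketch using benchmark.js, which does its own warm-up and repeated sampling and reports the relative margin of error alongside ops/sec. The encode placeholder and the sample objects are hypothetical stand-ins, not this library's API:

```js
// Minimal benchmark sketch (hypothetical harness, not the repo's code).
const Benchmark = require('benchmark');

// Placeholder for the serializer under test; swap in the real encode function.
const encode = (obj) => JSON.stringify(obj);

const flat = { a: 1, b: 2.5, c: 'str' };        // scalar-only case
const nested = { a: { b: { c: [1, 2, 3] } } };  // nested + array case

new Benchmark.Suite()
  .add('encode flat object', () => encode(flat))
  .add('encode nested object', () => encode(nested))
  // 'cycle' fires once per completed benchmark; String(event.target) prints
  // "name x ops/sec ±x.xx% (n runs sampled)", which covers the variance point.
  .on('cycle', (event) => console.log(String(event.target)))
  .run();
```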

I know that the "benchmarks" are inside the repository, but this is far from OK:

  • Benchmarks have been named "tests", but tests are not the same as benchmarks. If the benchmarks are also being used as tests, that should change; start decoupling them.
  • Benchmarks have to be modified by hand (uncommenting some requires) before they can be executed for comparative purposes.
  • Some required packages have to be installed "by hand" before executing the benchmarks. It is possible to declare dev dependencies in the package.json file, which is better than the current approach (see the package.json sketch after this list).
  • Test/benchmark files sit at the same directory level as the "production" code, and a file like "index" shouldn't be used as the way to execute the benchmarks/tests.
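
For the dev-dependency point, here is a sketch of what package.json could look like; the package names are real, but the versions and the test/ and bench/ paths are illustrative assumptions. With something like this in place, `npm test` and `npm run bench` stay decoupled:

```json
{
  "scripts": {
    "test": "node test/test.js",
    "bench": "node bench/benchmarks.js"
  },
  "devDependencies": {
    "benchmark": "^2.1.0",
    "msgpack-lite": "^0.1.26"
  }
}
```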
castarco (Author) commented:

I don't have much time to contribute, but I'll try to help the project improve its development & testing methodology, maybe with one or two merge requests.

phretaddin (Owner) commented Jul 29, 2016

Sure, that would be greatly appreciated! I'm still new to the Node scene, so I'm still learning all these best practices. As for array length, the library is very efficient with large arrays, at least compared to the other libraries I tested (avsc, protobuf, msgpack).

For some insight into how the serialization/deserialization works and performs/scales, you can analyze the contents of strEncodeFunction and strDecodeFunction right before they're returned from getCompiledSchema. Remove the quote marks and run it through jsbeautifier, and you can see the exact code that is called to encode/decode the passed-in objects/buffers.
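
To make the code-generation idea concrete, here is a toy sketch of the general technique (walk the schema once, emit the function body as a string, compile it with new Function). This is an illustration under my own assumptions, not the library's actual implementation:

```js
// Toy illustration of string-compiled encoders; not the library's real code.
function getCompiledEncoder(schema) {
  let body = 'var out = [];\n';
  for (const key of Object.keys(schema)) {
    // Emit one fixed line per schema field, so the compiled function
    // does no per-call schema interpretation.
    body += 'out.push(obj[' + JSON.stringify(key) + ']);\n';
  }
  body += 'return out;';
  return new Function('obj', body);
}

const encode = getCompiledEncoder({ x: 'uint8', y: 'string' });
console.log(encode.toString());         // inspect the generated source directly
console.log(encode({ x: 3, y: 'hi' })); // [ 3, 'hi' ]
```

Logging the compiled function with toString() gives the same view as pasting the string into jsbeautifier.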

phretaddin (Owner) commented:

  • Added my machine specs to the benchmarks section of the README
  • Switched to using benchmark.js to better capture variance and such
  • msgpack-lite is the implementation used. Future readers: just check the top of benchmarks.js
  • Benchmarks separated from tests
  • Other libraries to benchmark are now included in dev dependencies
