Conversation

@alechirsch (Contributor) commented Aug 20, 2019

I ran the benchmark before and after the changes. Deserialization is now much faster, and serialization sees a smaller performance boost.
Resolves #84

Before convertCase optimization

Platform info:

Darwin 18.6.0 x64
Node.JS: 10.15.1
V8: 6.8.275.32-node.12
Intel(R) Core(TM) i5-5287U CPU @ 2.90GHz × 4

Suite:

serializeAsync x 71,616 ops/sec ±1.76% (75 runs sampled)
serialize x 110,926 ops/sec ±4.38% (81 runs sampled)
serializeConvertCase x 70,255 ops/sec ±2.48% (91 runs sampled)
deserializeAsync x 152,227 ops/sec ±2.29% (80 runs sampled)
deserialize x 394,181 ops/sec ±1.24% (91 runs sampled)
deserializeConvertCase x 30,294 ops/sec ±1.01% (92 runs sampled)
serializeError x 284,856 ops/sec ±1.06% (92 runs sampled)
serializeError with a JSON API error object x 12,753,945 ops/sec ±4.90% (91 runs sampled)

After convertCase optimization

Platform info:

Darwin 18.6.0 x64
Node.JS: 10.15.1
V8: 6.8.275.32-node.12
Intel(R) Core(TM) i5-5287U CPU @ 2.90GHz × 4

Suite:

serializeAsync x 70,198 ops/sec ±3.08% (76 runs sampled)
serialize x 119,645 ops/sec ±3.64% (85 runs sampled)
serializeConvertCase x 91,489 ops/sec ±3.21% (88 runs sampled)
deserializeAsync x 144,878 ops/sec ±5.55% (70 runs sampled)
deserialize x 361,896 ops/sec ±3.33% (87 runs sampled)
deserializeConvertCase x 133,200 ops/sec ±1.91% (91 runs sampled)
serializeError x 281,794 ops/sec ±1.05% (84 runs sampled)
serializeError with a JSON API error object x 12,884,729 ops/sec ±1.89% (86 runs sampled)

@coveralls commented Aug 20, 2019


Coverage remained the same at 100.0% when pulling 9492711 on alechirsch:convert-case-performance into a32ba8d on danivek:master.

@alechirsch (Contributor, Author):
Added an LRU cache; here are the new benchmarks:

serializeAsync x 62,434 ops/sec ±13.04% (69 runs sampled)
serialize x 125,513 ops/sec ±2.49% (81 runs sampled)
serializeConvertCase x 91,754 ops/sec ±2.83% (88 runs sampled)
deserializeAsync x 161,124 ops/sec ±3.41% (80 runs sampled)
deserialize x 392,788 ops/sec ±1.19% (89 runs sampled)
deserializeConvertCase x 114,102 ops/sec ±0.55% (88 runs sampled)
serializeError x 267,726 ops/sec ±3.77% (83 runs sampled)
serializeError with a JSON API error object x 12,156,898 ops/sec ±3.78% (85 runs sampled)
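The PR itself does not show the cache code, but the mechanism described above can be sketched as follows. This is a hypothetical illustration, not the library's actual implementation: it assumes a `Map`-based LRU with an arbitrary `CACHE_LIMIT` of 1000 entries, exploiting the fact that a JavaScript `Map` iterates keys in insertion order, so the first key is always the least recently used.

```javascript
// Hypothetical sketch of LRU-cached case conversion (kebab-case shown).
// CACHE_LIMIT, toKebabCase, and convertCaseCached are illustrative names,
// not identifiers from json-api-serializer.
const CACHE_LIMIT = 1000; // assumed cap; the real limit may differ
const cache = new Map();

function toKebabCase(str) {
  return str.replace(/[A-Z]/g, (m) => '-' + m.toLowerCase());
}

function convertCaseCached(str) {
  if (cache.has(str)) {
    // Cache hit: delete and re-insert so the entry moves to the back
    // of the Map's insertion order, marking it most recently used.
    const value = cache.get(str);
    cache.delete(str);
    cache.set(str, value);
    return value;
  }
  const converted = toKebabCase(str);
  cache.set(str, converted);
  if (cache.size > CACHE_LIMIT) {
    // Evict the least recently used entry (the Map's first key).
    cache.delete(cache.keys().next().value);
  }
  return converted;
}

console.log(convertCaseCached('firstName')); // 'first-name'
console.log(convertCaseCached('firstName')); // second call is a cache hit
```

Because attribute names repeat heavily across resources in a typical payload, most conversions become O(1) map lookups after the first call, which matches the large jump in `deserializeConvertCase` above; the bounded size is what keeps the cache from growing without limit.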

@danivek danivek self-requested a review August 22, 2019 10:22
@danivek (Owner) left a comment:

Awesome! I think it's the best way to avoid memory leaks. Good job 👍

@alechirsch (Contributor, Author):
@danivek Do you need anything else from me?

@danivek danivek merged commit fe4a878 into danivek:master Aug 27, 2019
@alechirsch alechirsch deleted the convert-case-performance branch August 27, 2019 17:05
alechirsch added a commit to alechirsch/json-api-serializer that referenced this pull request Dec 10, 2019
added performance boost to convert case with LRU caching mechanism

resolves danivek#84
alechirsch added a commit to alechirsch/json-api-serializer that referenced this pull request Dec 10, 2019


Development

Successfully merging this pull request may close these issues.

Use a cache for converting to snake/camel/kebab case

3 participants