improve performance of JSON marshaling #381
I would suggest using https://github.com/goccy/go-json
dcu added a commit to dcu/aws-lambda-go that referenced this issue on Oct 28, 2023:

The segmentio json library is a drop-in replacement that is faster than the standard library; in this case, ~37% faster.

goos: darwin
goarch: arm64
pkg: github.com/aws/aws-lambda-go/lambda

          │ bench.orig.txt │         bench.json.txt          │
          │     sec/op     │    sec/op      vs base          │
JSON-8       7.901µ ±  0%     4.985µ ± 0%   -36.90% (p=0.000 n=40)

          │ bench.orig.txt │         bench.json.txt          │
          │      B/op      │     B/op       vs base          │
JSON-8      4.353Ki ±  0%   38.273Ki ± 0%  +779.34% (p=0.000 n=40)

          │ bench.orig.txt │         bench.json.txt          │
          │   allocs/op    │   allocs/op    vs base          │
JSON-8        31.00 ±  0%     17.00 ± 0%    -45.16% (n=40)

re aws#381
This was originally raised (accidentally) over at aws/aws-sdk-go-v2#1312
Is your feature request related to a problem? Please describe.
As detailed in the fantastic article at https://yalantis.com/blog/speed-up-json-encoding-decoding/, Go's default JSON marshaling and unmarshaling uses reflection and is quite slow compared to what is possible.
It is possible to provide a custom marshaler for a type by implementing a stdlib interface; however, writing one by hand is quite tedious. Thankfully, the tool ffjson does all the codegen!
Describe the solution you'd like
For the lambda library to ship optimised marshallers and unmarshallers for types that are sent over JSON, i.e. incoming / outgoing events.
Describe alternatives you've considered
easyjson is a similar alternative if for some reason ffjson doesn't do the job. The linked article also includes a summary of other alternatives, but none of them are drop-in replacements.
And of course you have your own smithy framework.
Additional context
The benchmark results in the article conclude
easyjson and ffjson offer the biggest win for "large" and "extra large" objects, and are no worse than the stdlib for small objects.
However, as with everything, it would make sense to perform some benchmarks on typical lambda data. I am particularly interested in events.KinesisEvent, which could make for an excellent pilot study. The true test would be whether my lambda invocation times shrink as a result of using a non-reflective decoder.