
JSON: Fix complex64 encoding precision #1011

Merged
merged 4 commits into master from abg/complex64 on Sep 10, 2021

Conversation

abhinav
Contributor

@abhinav abhinav commented Sep 10, 2021

Converting complex64 to complex128 meant that
the real and imaginary components of the complex64 value
were encoded as float64 values which,
similar to #1002, encoded them with incorrect precision.

Fix this by adding a precision parameter during encoding, so that
the components are encoded with the precision of the original type.

Guard against other cases where precision is lost by adding tests
for several corner cases of the complex, float, int, and uint types.

This has no significant performance impact on my laptop.
(The drop is likely a false positive.)

name                          old time/op  new time/op  delta
ZapJSONFloat32AndComplex64-4   491ns ±22%   442ns ± 6%  -10.00%  (p=0.000 n=9+9)

Fixes #1010
Ref GO-872

Per #1010, #1002, and #995, some of the corner cases where precision is
changed or lost aren't fully tested.

Add test cases for corner cases for a number of these:

- complex{64, 128}:  Test incorrect precision and negatives
- float{32, 64}: Test incorrect precision
- int{8, 16, 32, 64}: Test minimum and maximum values
- uint{8, 16, 32, 64}: Test maximum values

Per #1010, the test for complex64 incorrect precision is currently
failing.
Converting to complex128 meant that the real and imaginary components of
the complex64 were encoded as float64s which, like #1002, ended up
encoding those components with incorrect precision.

Fix this by adding a precision parameter to appendComplex to specify the
precision with which the two components of the complex number should be
encoded.
@codecov

codecov bot commented Sep 10, 2021

Codecov Report

Merging #1011 (e272362) into master (a0e2380) will increase coverage by 0.09%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master    #1011      +/-   ##
==========================================
+ Coverage   98.10%   98.20%   +0.09%     
==========================================
  Files          46       46              
  Lines        2058     2060       +2     
==========================================
+ Hits         2019     2023       +4     
+ Misses         30       29       -1     
+ Partials        9        8       -1     
Impacted Files           Coverage Δ
zapcore/json_encoder.go  100.00% <100.00%> (ø)
zapcore/sampler.go       100.00% <0.00%> (+3.77%) ⬆️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a0e2380...e272362. Read the comment docs.

func (enc *jsonEncoder) AppendComplex64(v complex64)   { enc.appendComplex(complex128(v), 32) }
func (enc *jsonEncoder) AppendComplex128(v complex128) { enc.appendComplex(complex128(v), 64) }
func (enc *jsonEncoder) AppendFloat64(v float64)       { enc.appendFloat(v, 64) }
func (enc *jsonEncoder) AppendFloat32(v float32)       { enc.appendFloat(float64(v), 32) }
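The wrappers above pass the bit size of the original type through to appendComplex. A minimal standalone sketch of what an appendComplex with a precision parameter can look like, using strconv directly (zap's actual implementation writes to its internal buffer and may differ in formatting details):

```go
package main

import (
	"fmt"
	"strconv"
)

// appendComplex encodes a complex value as a quoted "a+bi" string,
// formatting both components with the given bit size: 32 for values
// that started life as complex64, 64 for complex128.
func appendComplex(buf []byte, val complex128, precision int) []byte {
	re, im := real(val), imag(val)
	buf = append(buf, '"')
	buf = strconv.AppendFloat(buf, re, 'f', -1, precision)
	// A negative imaginary part brings its own '-' sign.
	if im >= 0 {
		buf = append(buf, '+')
	}
	buf = strconv.AppendFloat(buf, im, 'f', -1, precision)
	buf = append(buf, 'i', '"')
	return buf
}

func main() {
	v := complex64(0.1 + 0.2i)
	fmt.Println(string(appendComplex(nil, complex128(v), 32))) // "0.1+0.2i"
	fmt.Println(string(appendComplex(nil, complex128(v), 64))) // "0.10000000149011612+0.20000000298023224i"
}
```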
Contributor

@sywhang sywhang Sep 10, 2021

Sorry, I probably should have commented on this when the float32 encoding was being changed... but I think there's a chance we're taking a performance hit here: the payloads are no longer all 64 bits, whereas before everything was 64 bits, so we may be paying for alignment (and therefore cache locality and fragmentation for buffers in the pool).

Have we run any benchmarks with these changes? To be clear, I think incorrect encoding is a bigger issue than a perf hit, but we should at least understand the impact. It may turn out not to be significant after all :)

Contributor Author

Fair. I'll add a benchmark and results.

Contributor

@sywhang sywhang left a comment

LGTM

@abhinav abhinav merged commit 5e08337 into master Sep 10, 2021
7 checks passed
@abhinav abhinav deleted the abg/complex64 branch September 10, 2021 21:13
manjari25 added a commit that referenced this pull request Nov 9, 2021
* Update changelog with text for PR #989

* Edit changelog entry for PR #1011

Successfully merging this pull request may close these issues.

Bug: Incorrect encoding of complex64 numbers