Feature: support double precision option #33
Merged
dparrini merged 6 commits into dparrini:master, Sep 7, 2023
Conversation
Option applies to time values and analog channel values.
The original test was failing because UTF-8 is the default now in Python.
Owner
Hi @drewsilcock, this is a nice feature. I just solved some old issues and this caused some conflicts with your PR.
This adds a keyword argument to the `Comtrade()` constructor called `use_double_precision` (default: `False`), which causes the time and analog data arrays to use double-precision floating point instead of single precision. This also adds:
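The PR's internals aren't shown here, but the effect of the flag can be sketched with the standard-library `array` module (`make_time_array` is a hypothetical helper for illustration, not part of the PR):

```python
import array

def make_time_array(samples, use_double_precision=False):
    # Hypothetical sketch of what the flag controls: the element type of
    # the stored arrays. Typecode "d" is a 64-bit float, "f" a 32-bit float.
    typecode = "d" if use_double_precision else "f"
    return array.array(typecode, samples)

# 40 samples at 1200 Hz, as in the 2013 sample file discussed below.
times = [i / 1200 for i in range(40)]
print(make_time_array(times)[-1])                             # single precision: ~0.0324999988
print(make_time_array(times, use_double_precision=True)[-1])  # double precision: 0.0325
```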
This arose from our need for double precision on the timestamps specifically: the time values are in nanosecond precision, and single-precision floating point only gives you 7-8 decimal digits of precision.
As an example, if you take the 2013 sample file, the sampling rate is 1200 Hz and there are 40 data points, which means the last data point has timestamp `39 / 1200 = 0.0325`, but when loaded into the single-precision array it gets stored as `0.032499998807907104`. The larger the index of the data point, the larger the discrepancy. We're dealing with files that have almost 400,000 data points, so the precision errors are significant.
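You can reproduce the discrepancy without the library by round-tripping the value through a 32-bit IEEE 754 float with the standard `struct` module:

```python
import struct

t = 39 / 1200  # Python floats are 64-bit doubles, so this prints as 0.0325
# Pack into a 32-bit float and unpack back to see the precision lost.
t32 = struct.unpack("f", struct.pack("f", t))[0]

print(t)    # 0.0325
print(t32)  # 0.032499998807907104
```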