Fix corner case of very large integer parsing that promotes to BigInt #118
Fixes JuliaData/CSV.jl#1007. The issue here is that there's actually a git sha, `19129370688824811353f0bcee35b917`, where the multithreaded `CSV.File` case was trying to detect whether this value was a float or not. But attempting to call `Parsers.xparse` threw an error, which shouldn't happen from the `Parsers.xparse` code path (it should just return an error code in the parsing result). The problem was that it parsed the very large integer `19129370688824811353`, then `f`, which it treated as the exponent character, then `0` as the exponent. When it went to do the scaling code, it promoted the integer to `BigInt`, but the `BigInt` scaling path didn't handle the `0` exponent form. This is easily handled before we promote to `BigInt`, since we're really just looking to convert the large integer to the desired type.
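To make the failure mode concrete, here is a minimal sketch in Python (not the Parsers.jl code; the function name `parse_floatlike` and the structure are illustrative assumptions) of how a float-detection parser can read a huge mantissa, an `f` exponent marker, and a `0` exponent, and why short-circuiting a zero exponent before the big-integer scaling path avoids the problem:

```python
INT64_MAX = 2**63 - 1

def parse_floatlike(token: str) -> float:
    """Illustrative parser: digits, optional 'e'/'f' exponent marker, exponent digits."""
    i = 0
    mantissa = 0
    overflowed = False
    while i < len(token) and token[i].isdigit():
        mantissa = mantissa * 10 + int(token[i])
        if mantissa > INT64_MAX:
            # Roughly where the Julia code promotes the accumulator to BigInt.
            overflowed = True
        i += 1
    exponent = 0
    if i < len(token) and token[i] in "efEF":
        i += 1
        while i < len(token) and token[i].isdigit():
            exponent = exponent * 10 + int(token[i])
            i += 1
    # The fix, in spirit: handle a 0 exponent *before* entering the
    # big-integer scaling path -- scaling by 10**0 is a no-op, so we only
    # need to convert the (possibly huge) mantissa to the target type.
    if exponent == 0:
        return float(mantissa)
    return float(mantissa * 10**exponent)

# Leading portion of the sha: huge mantissa, 'f' marker, exponent 0.
print(parse_floatlike("19129370688824811353f0"))
```

In the real sha the characters after `f0` (`bcee...`) would stop exponent parsing and surface as an invalid-trailing-characters code rather than a thrown error, which is the behavior the fix restores.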