Paraphrasing the code comments: the field length of a 'C' column is usually defined by the single FieldLength byte. The current implementation extends this by also combining the FieldLength and Decimal bytes, as is done by Clipper and FoxPro. This should work well enough, as standard DBFs should have a decimal count of 0.
This is not always the case. I had a set of DBFs that had a non-zero decimal byte in a 'C' column. Odd, but oh well.
I've split the header's 'Read' into two phases. The first reads the header assuming standard field lengths. After reading the header, the record length is calculated by summing the field lengths and checked against the expected "RecordLength". If the lengths do not match, the header is re-processed assuming the extended 'C' field length format.
No further checking is done after that, as non-standard implementations could potentially have a non-standard "RecordLength" field as well.
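The two-phase check described above could be sketched roughly like this. This is a hedged illustration, not the PR's actual code: the `Field` tuple, function names, and the assumption that "extended" means the decimal byte is the high-order byte of a 16-bit 'C' length are all mine, based on the Clipper/FoxPro convention mentioned above.

```python
from collections import namedtuple

# Hypothetical minimal field descriptor; a real reader would parse these
# from the 32-byte field descriptor entries in the DBF header.
Field = namedtuple("Field", ["type", "length", "decimal"])

def record_length(fields, extended=False):
    # Every DBF record begins with a 1-byte deletion flag.
    total = 1
    for f in fields:
        if extended and f.type == "C":
            # Clipper/FoxPro style: the decimal-count byte acts as the
            # high-order byte of a 16-bit character field length.
            total += f.decimal * 256 + f.length
        else:
            total += f.length
    return total

def use_extended_c_lengths(fields, expected_record_length):
    # Phase 1: assume standard single-byte field lengths.
    if record_length(fields, extended=False) == expected_record_length:
        return False
    # Phase 2: lengths don't add up, so fall back to the extended
    # 'C' field length interpretation and re-process the header.
    return True

# A 'C' field with a non-zero decimal byte: the standard reading gives
# 1 + 44 + 10 = 55 bytes, the extended reading 1 + (256 + 44) + 10 = 311.
fields = [Field("C", 44, 1), Field("N", 10, 0)]
print(use_extended_c_lengths(fields, 311))  # True -> re-read with extended lengths
```

As the last paragraph notes, if neither sum matches the file's "RecordLength", there is no third fallback, since a non-standard writer may have stored a non-standard record length too.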