PR #846: Add options for handling of null values when writing EBCDIC files (#842)
Conversation
Walkthrough

This PR introduces writer parameter infrastructure to the COBOL parser. It adds a new …

Changes

Writer Configuration & Null-Aware Field Writing

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks: ✅ 5 passed
JaCoCo code coverage report - 'cobol-parser'
JaCoCo code coverage report - 'spark-cobol'
Actionable comments posted: 5
🧹 Nitpick comments (3)
spark-cobol/src/main/scala/za/co/absa/cobrix/spark/cobol/writer/NestedRecordCombiner.scala (2)
138-170: 💤 Low value: Document the new `writerParameters` parameter.

The Scaladoc blocks for `processRDD` (lines 138-159) and `writeToBytes` (lines 361-379) describe every other parameter but omit the newly added `writerParameters`. Please extend both docblocks so the public contract of these helpers is complete.

Also applies to: 361-385
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@spark-cobol/src/main/scala/za/co/absa/cobrix/spark/cobol/writer/NestedRecordCombiner.scala` around lines 138 - 170, The Scaladoc for processRDD and writeToBytes is missing documentation for the newly added writerParameters parameter; update both docblocks to include a `@param` entry named writerParameters describing its type (WriterParameters), purpose (configuration options for writing—e.g., compression, encoding, padding, trimming, RDW behavior, etc.), and any key effects on output (how it alters record generation such as trimming, padding, or RDW adjustments). Locate the docblocks above the private[cobrix] def processRDD(...) and the writeToBytes(...) method and add a concise `@param` writerParameters line consistent with the existing style and detail level used for other parameters.
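For reference, a minimal sketch of the suggested addition (the wording is illustrative, and the field semantics are assumed from this PR's writer options):

```scala
/**
  * ...existing @param entries for the other parameters...
  *
  * @param writerParameters Writer-side configuration (`WriterParameters`), e.g. how null
  *                         strings and null DISPLAY/COMP-3 numeric fields are encoded
  *                         in the output records.
  */
```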
86-86: ⚡ Quick win: Avoid `.get` on `writerParameters`; surface a clearer error or default.

`readerParameters.writerParameters.get` will throw a confusing `NoSuchElementException: None.get` if any future caller invokes `combine` without `isWriter = true` populating the field. Either fall back to a default `WriterParameters()` or fail fast with a descriptive message.

♻️ Proposed defensive fix:

```diff
-        readerParameters.writerParameters.get)
+        readerParameters.writerParameters.getOrElse(
+          throw new IllegalStateException(
+            "Writer parameters were not initialized. Ensure CobolParametersParser.parse is invoked with isWriter = true on the writer path.")
+        ))
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@spark-cobol/src/main/scala/za/co/absa/cobrix/spark/cobol/writer/NestedRecordCombiner.scala` at line 86, The call readerParameters.writerParameters.get is unsafe and can throw NoSuchElementException; in the combine method (and any path relying on isWriter) replace the .get with a safe alternative: use readerParameters.writerParameters.getOrElse(WriterParameters()) to supply a sensible default WriterParameters instance, or use getOrElse(throw new IllegalArgumentException("writerParameters must be set when calling combine with isWriter=true")) to fail fast with a clear message; update the code paths that reference readerParameters.writerParameters accordingly so they no longer call .get directly.

cobol-parser/src/test/scala/za/co/absa/cobrix/cobol/reader/parameters/CobolParametersParserSuite.scala (1)
21-40: ⚡ Quick win: Consider expanding test coverage.

The suite covers a single happy-path case. Consider adding tests that:

- Verify default values when writer options are absent (e.g., the `nullStringsAsSpaces` default).
- Assert `writerParameters` is `None` when `isWriter = false`, to lock down the reader vs. writer parsing contract.
- Validate that unknown writer options are rejected when `pedantic = true` (the test sets `pedantic` but never relies on it).
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@cobol-parser/src/test/scala/za/co/absa/cobrix/cobol/reader/parameters/CobolParametersParserSuite.scala` around lines 21 - 40, Add tests to cover defaults, reader-vs-writer behavior, and pedantic unknown-option rejection: create new test cases in CobolParametersParserSuite that call CobolParametersParser.parse with (a) a Parameters map missing writer options and isWriter = true and assert parsedParams.writerParameters.get has expected default values (e.g., nullStringsAsSpaces default), (b) a Parameters map with writer options but isWriter = false and assert parsedParams.writerParameters is None, and (c) a Parameters map containing an unknown writer option with pedantic = "true" and assert the parser throws or returns an error according to CobolParametersParser.parse behavior; reference the Parameters constructor, CobolParametersParser.parse, WriterParameters and parsedParams.writerParameters to locate the code under test.
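As a sketch of the second suggested case, assuming a ScalaTest-style suite and the entry points named above (`Parameters`, `CobolParametersParser.parse` with an `isWriter` flag); the exact constructor shape and option keys are assumptions taken from this PR's description:

```scala
"CobolParametersParser.parse" should "leave writerParameters empty on the reader path" in {
  // Hypothetical option map; the writer option key comes from this PR's README table.
  val params = new Parameters(Map(
    "copybook_contents"            -> copybookContents,
    "write_null_strings_as_spaces" -> "true"
  ))

  val parsed = CobolParametersParser.parse(params, isWriter = false)

  // Writer options must not leak into reader parsing.
  assert(parsed.writerParameters.isEmpty)
}
```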
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@cobol-parser/src/main/scala/za/co/absa/cobrix/cobol/parser/Copybook.scala`:
- Around line 551-554: getFieldPositionAndSize currently treats any non-zero
fieldStartOffsetOverride as an override which allows negative sentinels to be
used directly; change its override check to "fieldStartOffsetOverride > 0" (the
same logic used in setPrimitiveField) so 0 or negative values fall back to
configuredStartOffset + field.binaryProperties.offset, preventing negative
offsets from being applied and out-of-bounds writes.
- Around line 522-525: The two branches handling DISPLAY numeric nulls (case i:
Integral ... and case d: Decimal ...) are bypassing the encoder by filling
recordBytes with 0xF0, which breaks sign-aware DISPLAY encodings (e.g., PIC
S9...). Replace the raw java.util.Arrays.fill(...) calls with a call into the
existing encoder via setPrimitiveField(...), passing a zero value (0 for
Integral, BigDecimal(0) for Decimal) and the same field
descriptor/offset/fieldLength so the normal sign-aware DISPLAY formatting is
used; keep the guard writerParameters.nullDisplayNumbersAsZeros and reuse the
same parameters (Integral/Decimal instances and offset/fieldLength) when
invoking setPrimitiveField.
In
`@cobol-parser/src/main/scala/za/co/absa/cobrix/cobol/reader/parameters/CobolParameters.scala`:
- Around line 117-118: The public case class constructor CobolParameters was
changed by inserting writerParameters before options which breaks source and
binary compatibility; restore compatibility by moving writerParameters to the
end of the parameter list or give it a default value and keep the original
parameter order so existing positional calls still work (e.g., ensure the
signature keeps options in its original position and append writerParameters:
Option[WriterParameters] = None or provide a factory/builder that accepts the
new field), updating the CobolParameters definition and any companion-object
constructors accordingly.
In
`@cobol-parser/src/main/scala/za/co/absa/cobrix/cobol/reader/parameters/ReaderParameters.scala`:
- Around line 143-145: The change inserted writerParameters before options in
the ReaderParameters case class which breaks the public constructor signature
and thus binary compatibility (affecting generated apply() and copy()); to fix,
restore the original parameter order so options remains in its original position
(move writerParameters to the end of the parameter list) or alternatively add a
backward-compatible companion factory/overload that preserves the original
positional signature; update the ReaderParameters declaration and any companion
object factory methods so that apply() and copy() signatures remain unchanged
while still supporting writerParameters as an optional field.
In `@README.md`:
- Around line 1697-1702: The README's EBCDIC Writer "Handling of PIC X(n)" text
is inconsistent with the new options; update that section to reference the new
options (write_null_strings_as_spaces, write_null_display_numbers_as_zeros,
write_null_comp3_numbers_as_zeros) and describe the actual behavior depending on
their values (e.g., when write_null_strings_as_spaces=true PIC X(n) nulls are
written as space characters, otherwise as 0x00 bytes; similarly describe DISPLAY
and COMP-3 numeric behaviors and state defaults), and mention that strict_schema
remains unaffected; replace the hardcoded statement that PIC X(n) nulls are
written as 0x00 with this configurable description.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: a08db0fd-1906-4a01-9ae9-33cd22d207f6
📒 Files selected for processing (11)
- README.md
- cobol-parser/src/main/scala/za/co/absa/cobrix/cobol/parser/Copybook.scala
- cobol-parser/src/main/scala/za/co/absa/cobrix/cobol/reader/parameters/CobolParameters.scala
- cobol-parser/src/main/scala/za/co/absa/cobrix/cobol/reader/parameters/CobolParametersParser.scala
- cobol-parser/src/main/scala/za/co/absa/cobrix/cobol/reader/parameters/ReaderParameters.scala
- cobol-parser/src/main/scala/za/co/absa/cobrix/cobol/reader/parameters/WriterParameters.scala
- cobol-parser/src/test/scala/za/co/absa/cobrix/cobol/reader/parameters/CobolParametersParserSuite.scala
- spark-cobol/src/main/scala/za/co/absa/cobrix/spark/cobol/source/DefaultSource.scala
- spark-cobol/src/main/scala/za/co/absa/cobrix/spark/cobol/writer/NestedRecordCombiner.scala
- spark-cobol/src/test/scala/za/co/absa/cobrix/spark/cobol/source/fixtures/TextComparisonFixture.scala
- spark-cobol/src/test/scala/za/co/absa/cobrix/spark/cobol/writer/NestedWriterSuite.scala
```scala
case i: Integral if writerParameters.nullDisplayNumbersAsZeros && i.compact.isEmpty =>
  java.util.Arrays.fill(recordBytes, offset, offset + fieldLength, 0xF0.toByte)
case d: Decimal if writerParameters.nullDisplayNumbersAsZeros && d.compact.isEmpty =>
  java.util.Arrays.fill(recordBytes, offset, offset + fieldLength, 0xF0.toByte)
```
Don’t bypass the encoder for DISPLAY zero writes.

Filling DISPLAY numerics with `0xF0` works for unsigned zoned digits, but it skips sign-aware encoding for signed or sign-separate DISPLAY fields. That can emit an invalid zero representation for `PIC S9...` DISPLAY while the normal encoder already knows how to format zero correctly. Please route these branches through `setPrimitiveField(..., 0 / BigDecimal(0), ...)` instead of raw byte fills.
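The sign-nibble issue can be shown directly: in EBCDIC zoned (DISPLAY) format the last byte of a signed field carries the sign in its zone nibble (`0xC` positive, `0xD` negative), so a raw `0xF0` fill only yields a valid zero for unsigned fields. A minimal illustration (not Cobrix code):

```scala
// Encode zero for a zoned-decimal (DISPLAY) field of the given length.
// For signed fields with a trailing sign overpunch, the last byte's zone
// nibble becomes 0xC (positive zero), i.e. 0xC0 instead of 0xF0.
def zonedZero(length: Int, signed: Boolean): Array[Byte] = {
  val bytes = Array.fill[Byte](length)(0xF0.toByte)
  if (signed && length > 0)
    bytes(length - 1) = 0xC0.toByte
  bytes
}

// PIC 9(3)  -> F0 F0 F0
// PIC S9(3) -> F0 F0 C0  (a raw 0xF0 fill misses the sign overpunch)
```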
Yeah, this is a valid point. But I'm going to wait for a use case from a user before implementing the suggestion. The reason is that null is different from 0, and the way different systems handle nulls may vary. As soon as anybody has issues with nulls for signed DISPLAY numbers, I'll implement this suggestion.
@yruslan, that's a fair call — deferring until there's a concrete use case avoids over-engineering a potentially system-specific behaviour. I'll note the rationale for future reviews.
(ʕ •ᴥ•ʔ)っ 🐇✨
✏️ Learnings added
Learnt from: yruslan
Repo: AbsaOSS/cobrix PR: 846
File: cobol-parser/src/main/scala/za/co/absa/cobrix/cobol/parser/Copybook.scala:522-525
Timestamp: 2026-05-06T08:58:03.850Z
Learning: In `cobol-parser/src/main/scala/za/co/absa/cobrix/cobol/parser/Copybook.scala`, the `setPrimitiveFieldNullAware` method intentionally fills DISPLAY numeric null fields with raw `0xF0` bytes (EBCDIC zero) rather than routing through the encoder with a zero value. The maintainer (yruslan) has deliberately deferred sign-aware encoding for null DISPLAY numerics (e.g. `PIC S9... DISPLAY`) until a concrete user case arises, because `null` and `0` have different semantics and null handling for signed DISPLAY fields may vary across systems. Do not flag this as a bug in future reviews unless there is a specific user-reported issue.
🧠 Learnings used
Learnt from: yruslan
Repo: AbsaOSS/cobrix PR: 820
File: cobol-parser/src/main/scala/za/co/absa/cobrix/cobol/reader/parameters/CobolParameters.scala:50-50
Timestamp: 2026-02-11T08:15:51.206Z
Learning: Rename the field from _corrupted_fields to _corrupt_fields to align with Apache Spark's naming convention (e.g., _corrupt_record). Apply this change consistently across the codebase where the field is defined or referenced (including cobol-parser/src/main/scala/za/co/absa/cobrix/cobol/reader/parameters/CobolParameters.scala) and update any tests or serialization logic that rely on the old name.
```scala
  writerParameters: Option[WriterParameters],
  options: Map[String, String]
```
🧩 Analysis chain (Repository: AbsaOSS/cobrix)

🏁 Scripts executed:

```shell
cd cobol-parser && find . -name "CobolParameters.scala" -type f

cat -n cobol-parser/src/main/scala/za/co/absa/cobrix/cobol/reader/parameters/CobolParameters.scala | sed -n '100,130p'

# Check if CobolParameters is a case class and its visibility
cat -n cobol-parser/src/main/scala/za/co/absa/cobrix/cobol/reader/parameters/CobolParameters.scala | sed -n '1,120p' | tail -40

# Search for existing usages of CobolParameters constructor with positional arguments
rg "CobolParameters\s*\(" cobol-parser/src --type scala -A 2 -B 1 | head -60

# Get the full class definition and its scope
cat -n cobol-parser/src/main/scala/za/co/absa/cobrix/cobol/reader/parameters/CobolParameters.scala | sed -n '1,80p'

# Check the constructor call in CobolParametersParser to see if positional arguments are used
cat -n cobol-parser/src/main/scala/za/co/absa/cobrix/cobol/reader/parameters/CobolParametersParser.scala | grep -A 100 "val cobolParameters = CobolParameters("

# Check git history to see if writerParameters is new and its placement changed
cd cobol-parser && git log --oneline -n 20 -- src/main/scala/za/co/absa/cobrix/cobol/reader/parameters/CobolParameters.scala

# Check for default value on writerParameters - if it has one, it's backwards compatible
cat cobol-parser/src/main/scala/za/co/absa/cobrix/cobol/reader/parameters/CobolParameters.scala | grep -A 1 "writerParameters"
```
This changes the public `CobolParameters` constructor shape and breaks backward compatibility.

Adding `writerParameters` before `options` (without a default value) breaks both source and binary compatibility for code using positional arguments with this public case class. Existing callers will fail to compile, and previously compiled code will break at runtime. Since this case class is part of the public API, the safe approach is to append new fields at the end with a default value, or refactor behind a builder/factory method instead of inserting into the middle of the signature.
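The source-compatibility part of the concern can be seen with a schematic case class (not the actual one): inserting a field shifts every positional argument after it, while appending with a default keeps existing call sites compiling. Note that any arity change still alters the generated `apply()`/`copy()` for binary compatibility purposes, as the comment states.

```scala
// Before: positional callers pass options as the last argument.
case class Config(name: String, options: Map[String, String])
Config("a", Map.empty)  // compiles

// Inserting a new field before `options` breaks those callers:
//   case class Config(name: String, writer: Option[Int], options: Map[String, String])
//   Config("a", Map.empty)  // no longer compiles: Map is not Option[Int]

// Appending with a default preserves source compatibility:
case class Config2(name: String, options: Map[String, String], writer: Option[Int] = None)
Config2("a", Map.empty)  // still compiles
```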
```scala
  recordsToExclude: Set[String] = Set.empty,
  writerParameters: Option[WriterParameters] = None,
  options: Map[String, String] = Map.empty
```
🧩 Analysis chain (Repository: AbsaOSS/cobrix)

🏁 Scripts executed:

```shell
# First, find and read the ReaderParameters.scala file
find . -path "*cobol-parser/src/main/scala/za/co/absa/cobrix/cobol/reader/parameters/ReaderParameters.scala" -type f

# Read the ReaderParameters.scala file to see the full case class definition
cat -n cobol-parser/src/main/scala/za/co/absa/cobrix/cobol/reader/parameters/ReaderParameters.scala | head -160

# Search for usages of ReaderParameters() constructor with positional arguments
rg "ReaderParameters\s*\(" --type scala -A 3 -B 1 | head -100

# Check for publication configuration and artifact metadata
find . -name "pom.xml" -o -name "build.sbt" -o -name "build.gradle" | head -5

# Check for any binary compatibility or versioning guidelines
find . -name "CHANGELOG*" -o -name "CONTRIBUTING*" -o -name "README*" | grep -E "\.(md|txt)$" | head -10

# Check the main build configuration to see if this is published to Maven/artifact repos
head -100 build.sbt

# Check pom.xml files for publish configuration
cat cobol-parser/pom.xml | grep -A 10 -E "<distributionManagement>|<repository>|<name>" | head -40

# Check README to understand the project scope
head -50 README.md

# Check if there are any explicit tests or documentation about binary compatibility
rg -i "binary.?compatible|backward.?compatible" --type md --type scala | head -20

# Look for any versioning or release documentation
find . -name "VERSION*" -o -name "RELEASE*" -o -name ".github" -type d

# Check for any version management or semantic versioning docs
cat .github/workflows/*.yml 2>/dev/null | grep -A 5 -B 5 "version\|release" | head -40

# Verify whether WriterParameters is a new addition and check imports
rg "WriterParameters" --type scala -B 2 | head -60

# Check if any test or documentation mentions positional constructor usage
rg "ReaderParameters\(" cobol-parser/src/test/scala --type scala -A 2 | grep -v "recordFormat\|isEbcdic\|isText" | head -40
```
Binary compatibility risk: inserting `writerParameters` before `options` breaks the public `ReaderParameters` case-class constructor signature.

Even with a default value, inserting `writerParameters` before `options` changes the generated `apply()` and `copy()` method signatures. Pre-compiled external callers using positional construction will no longer be binary-compatible. Since this is a published library on Maven Central, binary compatibility with external consumers must be preserved.
| Option (usage example) | Description |
|------------------------|-------------|
| .option("strict_schema", "true") | If 'true' (default) Cobrix will throw an exception if a field exists in the copybook but not in the Spark schema. Array count fields (defined in DEPENDING ON clause) are auto-generated and never required to exist in the Spark schema. |
| .option("write_null_strings_as_spaces", "false") | If 'true' Cobrix will write `null` alphanumeric fields as spaces when writing output files. |
| .option("write_null_display_numbers_as_zeros", "false") | If 'true' Cobrix will write `null` numeric fields having DISPLAY format as a sequence of zeros when writing output files. |
| .option("write_null_comp3_numbers_as_zeros", "false") | If 'true' Cobrix will write `null` numeric fields having COMP-3 format as zeros when writing EBCDIC files. |
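Putting the options from the table above together, a hypothetical writer invocation could look like the following. The `format("cobol")` short name and the surrounding `DataFrameWriter` calls follow common spark-cobol usage and are assumptions; adjust to the actual writer path:

```scala
df.write
  .format("cobol")
  .option("copybook_contents", copybook)
  .option("write_null_strings_as_spaces", "true")          // nulls in PIC X(n) written as spaces
  .option("write_null_display_numbers_as_zeros", "true")   // nulls in DISPLAY numerics written as zeros
  .option("write_null_comp3_numbers_as_zeros", "true")     // nulls in COMP-3 numerics written as zeros
  .save("/path/to/output")
```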
Update the later null-handling docs to match these new options.

This section now documents configurable null handling, but the later EBCDIC Writer -> Implementation details -> Handling of PIC X(n) section still says null values are written as 0x00 bytes. That leaves the README internally contradictory for `write_null_strings_as_spaces=true`.
Summary by CodeRabbit

New Features

- New writer options: `write_null_strings_as_spaces`, `write_null_display_numbers_as_zeros`, and `write_null_comp3_numbers_as_zeros`.

Documentation

Tests