Introduce DataSource abstraction #80
Conversation
Reviewer's Guide

This PR introduces a generic DataSource abstraction for unified zero-copy and unescaped content access, refactors stream/slice builders and escape processing to depend on it, enhances error handling and parser-core event accumulation, and removes specialized helper methods in favor of shared utilities.

Class diagram for the new DataSource abstraction and implementors

```mermaid
classDiagram
    class DataSource {
        <<trait>>
        +get_borrowed_slice(start, end): Result<&[u8], ParseError>
        +get_unescaped_slice(): Result<&[u8], ParseError>
        +has_unescaped_content(): bool
    }
    class StreamContentBuilder {
        +stream_buffer: StreamBuffer
        +unicode_escape_collector: UnicodeEscapeCollector
        +unescaped_reset_queued: bool
        +current_position(): usize
        +has_unescaped_content(): bool
        +queue_unescaped_reset()
        +get_borrowed_slice(start, end): Result<&[u8], ParseError>
        +get_unescaped_slice(): Result<&[u8], ParseError>
    }
    class SliceContentBuilder {
        +buffer: SliceInputBuffer
        +copy_on_escape: CopyOnEscape
        +unicode_escape_collector: UnicodeEscapeCollector
        +current_position(): usize
        +has_unescaped_content(): bool
        +get_borrowed_slice(start, end): Result<&[u8], ParseError>
        +get_unescaped_slice(): Result<&[u8], ParseError>
    }
    DataSource <|.. StreamContentBuilder
    DataSource <|.. SliceContentBuilder
```
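Rendered as Rust, the trait in the diagram above might look roughly like this. This is a hedged sketch only: the lifetimes, the `ParseError` variants, and the toy `SliceSource` implementor are stand-ins for illustration, not the crate's real definitions.

```rust
// Illustrative sketch: ParseError and SliceSource are simplified
// stand-ins, not picojson's actual types.
#[derive(Debug, PartialEq)]
pub enum ParseError {
    InvalidSliceBounds,
    NoUnescapedContent,
}

/// Unified access to token content, whether it can be borrowed
/// zero-copy from the input or was rewritten into a scratch buffer.
pub trait DataSource<'a> {
    /// Zero-copy slice of the original input in [start, end).
    fn get_borrowed_slice(&self, start: usize, end: usize) -> Result<&'a [u8], ParseError>;
    /// Escape-processed content from the scratch buffer.
    fn get_unescaped_slice(&self) -> Result<&'a [u8], ParseError>;
    /// True if the current token required escape processing.
    fn has_unescaped_content(&self) -> bool;
}

/// Minimal slice-backed implementor with no scratch buffer.
pub struct SliceSource<'a> {
    pub data: &'a [u8],
}

impl<'a> DataSource<'a> for SliceSource<'a> {
    fn get_borrowed_slice(&self, start: usize, end: usize) -> Result<&'a [u8], ParseError> {
        self.data.get(start..end).ok_or(ParseError::InvalidSliceBounds)
    }
    fn get_unescaped_slice(&self) -> Result<&'a [u8], ParseError> {
        Err(ParseError::NoUnescapedContent)
    }
    fn has_unescaped_content(&self) -> bool {
        false
    }
}

fn main() {
    let src = SliceSource { data: b"\"hello\"" };
    // Borrow the string body between the quotes, zero-copy.
    assert_eq!(src.get_borrowed_slice(1, 6).unwrap(), b"hello");
    assert!(!src.has_unescaped_content());
}
```

The point of the trait is that callers like `get_content_piece` can stay agnostic about which buffer the bytes live in.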
Class diagram for ContentPiece and its integration

```mermaid
classDiagram
    class ContentPiece {
        <<enum>>
        +Input(&[u8])
        +Scratch(&[u8])
        +into_string(): Result<String, ParseError>
    }
    class String {
        <<enum>>
        +Borrowed(&str)
        +Unescaped(&str)
    }
    ContentPiece --> String : into_string()
```
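The `ContentPiece`-to-`String` conversion in this diagram can be sketched in Rust as follows. Hypothetical simplification: the diagram's `String` event is renamed `Str` here to avoid shadowing `std::string::String`, and the error type is a stand-in; the real crate's definitions differ.

```rust
// Simplified stand-ins; picojson's real ContentPiece/String types differ.
#[derive(Debug, PartialEq)]
pub enum ParseError {
    InvalidUtf8,
}

/// Where a piece of string content lives.
pub enum ContentPiece<'a> {
    /// Borrowed directly from the original input (no escapes present).
    Input(&'a [u8]),
    /// Rewritten into the scratch buffer during escape processing.
    Scratch(&'a [u8]),
}

/// Mirrors the Borrowed/Unescaped split of the String event.
#[derive(Debug, PartialEq)]
pub enum Str<'a> {
    Borrowed(&'a str),
    Unescaped(&'a str),
}

impl<'a> ContentPiece<'a> {
    /// UTF-8 validation happens once here, for both variants.
    pub fn into_string(self) -> Result<Str<'a>, ParseError> {
        match self {
            ContentPiece::Input(b) => core::str::from_utf8(b)
                .map(Str::Borrowed)
                .map_err(|_| ParseError::InvalidUtf8),
            ContentPiece::Scratch(b) => core::str::from_utf8(b)
                .map(Str::Unescaped)
                .map_err(|_| ParseError::InvalidUtf8),
        }
    }
}

fn main() {
    assert_eq!(
        ContentPiece::Input(b"plain").into_string(),
        Ok(Str::Borrowed("plain"))
    );
    assert_eq!(
        ContentPiece::Scratch("tab\t".as_bytes()).into_string(),
        Ok(Str::Unescaped("tab\t"))
    );
}
```

Keeping the origin (input vs. scratch) in the type lets downstream code know whether the slice is zero-copy.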
Class diagram for updated UnicodeEscapeCollector and escape processing

```mermaid
classDiagram
    class UnicodeEscapeCollector {
        +pending_high_surrogate: Option<u32>
        +has_pending_high_surrogate(): bool
        +get_pending_high_surrogate(): Option<u32>
        +set_pending_high_surrogate(surrogate: Option<u32>)
        +add_hex_digit(hex_digit: u8)
        +process_to_utf8(buf: &mut [u8; 4])
    }
```
Class diagram for get_content_piece helper function

```mermaid
classDiagram
    class get_content_piece {
        +get_content_piece(source, start_pos, current_pos): Result<ContentPiece, ParseError>
    }
    get_content_piece --> DataSource
    get_content_piece --> ContentPiece
```
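Under the same assumptions as the diagrams (names taken from the diagram, types simplified), the helper's decision logic might look like the sketch below; the `Plain` implementor exists only to make the example runnable and is not part of the crate.

```rust
// Simplified stand-ins for the types named in the diagram.
#[derive(Debug, PartialEq)]
pub enum ParseError {
    InvalidSliceBounds,
}

pub enum ContentPiece<'a> {
    Input(&'a [u8]),
    Scratch(&'a [u8]),
}

pub trait DataSource<'a> {
    fn get_borrowed_slice(&self, start: usize, end: usize) -> Result<&'a [u8], ParseError>;
    fn get_unescaped_slice(&self) -> Result<&'a [u8], ParseError>;
    fn has_unescaped_content(&self) -> bool;
}

/// The unified extraction decision: prefer escape-processed scratch
/// content when it exists, otherwise borrow from the input zero-copy.
pub fn get_content_piece<'a, D: DataSource<'a>>(
    source: &D,
    start_pos: usize,
    current_pos: usize,
) -> Result<ContentPiece<'a>, ParseError> {
    if source.has_unescaped_content() {
        source.get_unescaped_slice().map(ContentPiece::Scratch)
    } else {
        source.get_borrowed_slice(start_pos, current_pos).map(ContentPiece::Input)
    }
}

/// Toy implementor so the helper can be demonstrated.
struct Plain<'a>(&'a [u8]);

impl<'a> DataSource<'a> for Plain<'a> {
    fn get_borrowed_slice(&self, s: usize, e: usize) -> Result<&'a [u8], ParseError> {
        self.0.get(s..e).ok_or(ParseError::InvalidSliceBounds)
    }
    fn get_unescaped_slice(&self) -> Result<&'a [u8], ParseError> {
        Ok(&[]) // unused: this toy implementor never unescapes
    }
    fn has_unescaped_content(&self) -> bool {
        false
    }
}

fn main() {
    let src = Plain(b"\"abc\"");
    match get_content_piece(&src, 1, 4).unwrap() {
        ContentPiece::Input(b) => assert_eq!(b, b"abc"),
        ContentPiece::Scratch(_) => unreachable!("no escapes in this input"),
    }
}
```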
Walkthrough

This change introduces a new DataSource abstraction that unifies content access across the stream and slice parsers.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Parser
    participant ContentBuilder
    participant DataSource
    participant CopyOnEscape
    participant InputBuffer
    Parser->>ContentBuilder: extract_string_content()
    ContentBuilder->>DataSource: get_content_piece(start, end)
    alt Has unescaped content
        DataSource->>CopyOnEscape: get_unescaped_slice()
        CopyOnEscape-->>DataSource: &[u8] (unescaped)
    else
        DataSource->>InputBuffer: get_borrowed_slice(start, end)
        InputBuffer-->>DataSource: &[u8] (borrowed)
    end
    DataSource-->>ContentBuilder: ContentPiece
    ContentBuilder->>Parser: String
```
```mermaid
sequenceDiagram
    participant ContentBuilder
    participant DataSource
    participant UnicodeEscapeProcessor
    ContentBuilder->>UnicodeEscapeProcessor: process_unicode_escape_sequence(pos, pending_surrogate, DataSource)
    UnicodeEscapeProcessor->>DataSource: get_borrowed_slice() (for hex digits)
    DataSource-->>UnicodeEscapeProcessor: &[u8]
    UnicodeEscapeProcessor-->>ContentBuilder: (utf8_bytes, new_pos, updated_surrogate)
```
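As a toy illustration of the hex-digit step in this second diagram: four hex digits fetched via `get_borrowed_slice` become a code point, then UTF-8 bytes. The real `process_unicode_escape_sequence` also handles surrogate pairs and position bookkeeping, which are omitted here; `hex_escape_to_utf8` is a hypothetical helper, not picojson's API.

```rust
// Convert the four hex digits of a \uXXXX escape into UTF-8 bytes.
// Hypothetical helper for illustration; not picojson's actual API.
fn hex_escape_to_utf8<'b>(hex: &[u8], out: &'b mut [u8; 4]) -> Option<&'b str> {
    let s = core::str::from_utf8(hex).ok()?;
    if s.len() != 4 {
        return None;
    }
    let code_point = u32::from_str_radix(s, 16).ok()?;
    // char::from_u32 rejects unpaired surrogates (0xD800..=0xDFFF).
    let ch = char::from_u32(code_point)?;
    let encoded: &'b str = ch.encode_utf8(out);
    Some(encoded)
}

fn main() {
    let mut buf = [0u8; 4];
    // The "00e9" in \u00e9 decodes to 'é'.
    assert_eq!(hex_escape_to_utf8(b"00e9", &mut buf), Some("é"));
    // An unpaired high surrogate cannot be encoded on its own.
    let mut buf2 = [0u8; 4];
    assert_eq!(hex_escape_to_utf8(b"d800", &mut buf2), None);
}
```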
Summary of Changes
Hello @kaidokert, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

I've introduced a new DataSource abstraction to unify how different JSON parsers access input data, whether it's directly from the original input buffer or from a temporary scratch buffer used for unescaped content. This refactoring aims to streamline content extraction logic across slice-based and stream-based parsers, improving code consistency and maintainability. It centralizes data access patterns, making the parsing components more modular and easier to manage.
Highlights

- New `DataSource` Abstraction: I've introduced a new `DataSource` trait to abstract how different parsers access input data, whether it's directly from the original input or from a temporary scratch buffer for unescaped content. This trait defines a unified interface for data retrieval.
- Unified Content Extraction Logic: A new helper function, `get_content_piece`, has been added. This function leverages the `DataSource` trait to intelligently determine whether to extract content directly from the input or from a scratch buffer, significantly simplifying the logic for string and key extraction across parsers.
- Implementation of `DataSource` for Parsers: Both `SliceContentBuilder` and `StreamContentBuilder` now implement the `DataSource` trait. This allows them to seamlessly integrate with the new unified content extraction patterns and the refactored unicode escape processing.
- Refactored Unicode Escape Processing: The `process_unicode_escape_sequence` function has been refactored to accept a generic `DataSource` instead of a specific closure for providing hex digits. This change makes the unicode escape processing more flexible and reusable across different parser implementations.
- Error Type Refinement: The `ParseError` enum has been refined, specifically changing `ScratchBufferFull` to `InputBufferFull` for better clarity on the error's origin. Additionally, a `From` implementation for `ujson::Error` was added to streamline error handling.
- Enhanced `CopyOnEscape` Functionality: The `CopyOnEscape` component now includes new public methods, `has_unescaped_content` and `get_scratch_contents`, which are essential for its role as a `DataSource` for the `SliceContentBuilder`.
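The `From` conversion mentioned in the error-refinement highlight follows a standard Rust pattern; here is a hedged sketch with stand-in types (the real `ujson::Error` and `ParseError` definitions in the crate differ).

```rust
// Stand-in module: the real ujson tokenizer's error type is different.
mod ujson {
    #[derive(Debug)]
    pub struct Error(pub &'static str);
}

// Simplified stand-in for the crate's ParseError.
#[derive(Debug)]
enum ParseError {
    TokenizerError(&'static str),
}

// The From impl lets `?` convert tokenizer errors automatically.
impl From<ujson::Error> for ParseError {
    fn from(e: ujson::Error) -> Self {
        ParseError::TokenizerError(e.0)
    }
}

// A fake tokenizer step that can fail, for demonstration.
fn tokenize_step(fail: bool) -> Result<(), ujson::Error> {
    if fail {
        Err(ujson::Error("bad token"))
    } else {
        Ok(())
    }
}

fn parse_step(fail: bool) -> Result<(), ParseError> {
    tokenize_step(fail)?; // ujson::Error -> ParseError via From
    Ok(())
}

fn main() {
    assert!(parse_step(false).is_ok());
    assert!(matches!(parse_step(true), Err(ParseError::TokenizerError(_))));
}
```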
Hey @kaidokert - I've reviewed your changes - here's some feedback:

- There's still repeated pre-extraction escape handling in both the stream and slice builders (e.g. calling `queue_unescaped_reset` or `copy_on_escape.end_string`); consider moving that into the DataSource trait or a shared helper so `extract_*` only needs to call `get_content_piece`.
- The change in `ContentRange::string_content_bounds_from_content_start` silently clamps cases where `start > end` to an empty slice, which can mask invalid bounds; consider returning an explicit error or panic to catch logic bugs sooner.
Code Review
This pull request introduces a DataSource abstraction to unify content extraction logic between the stream and slice parsers. This is an excellent refactoring that significantly improves the code structure, reduces duplication, and enhances maintainability. The changes are well-implemented and consistent across the codebase. My review includes a couple of minor suggestions to simplify boolean logic and redundant calculations for improved code clarity.
Actionable comments posted: 1

🔭 Outside diff range comments (1)

picojson/src/slice_content_builder.rs (1)

122-161: Improved Unicode escape handling with proper surrogate support.

The explicit tracking of pending surrogate state and bounds checking for UTF-8 slices are good improvements. At lines 150-152, the position calculation could overflow:

```rust
let low_surrogate_pos = escape_start_pos
    .checked_add(6)
    .ok_or(ParseError::NumericOverflow)?;
```

Good that you're using `checked_add` to handle this case safely!
🧹 Nitpick comments (1)

picojson/src/event_processor.rs (1)

76-85: Consider simplifying the accumulation logic for clarity.

While functionally correct, the nested conditionals could be simplified:

```diff
-let should_accumulate = if always_accumulate_during_escapes {
-    if self.in_escape_sequence {
-        true // Always accumulate during escape sequences
-    } else {
-        !have_events(&self.parser_state.evts) // Normal behavior outside escapes
-    }
-} else {
-    !have_events(&self.parser_state.evts) && !self.in_escape_sequence
-};
+let should_accumulate = match (always_accumulate_during_escapes, self.in_escape_sequence) {
+    (true, true) => true,  // Flag set + in escape: always accumulate
+    (_, false) => !have_events(&self.parser_state.evts), // Not in escape: accumulate if no events
+    (false, true) => false, // Flag not set + in escape: don't accumulate
+};
```

This makes the three distinct cases more explicit.
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (9)

- picojson/src/copy_on_escape.rs (1 hunks)
- picojson/src/escape_processor.rs (4 hunks)
- picojson/src/event_processor.rs (2 hunks)
- picojson/src/parse_error.rs (2 hunks)
- picojson/src/shared.rs (2 hunks)
- picojson/src/slice_content_builder.rs (4 hunks)
- picojson/src/slice_input_buffer.rs (0 hunks)
- picojson/src/stream_content_builder.rs (6 hunks)
- picojson/src/stream_parser.rs (1 hunks)

💤 Files with no reviewable changes (1)

- picojson/src/slice_input_buffer.rs
🔇 Additional comments (20)

picojson/src/parse_error.rs (2)

14-19: LGTM! Clear error differentiation.

The documentation updates are more concise, and the new `InputBufferFull` variant properly distinguishes input buffer capacity issues from scratch buffer limitations, improving error diagnostics.

78-82: Good addition of error conversion trait.

The `From<ujson::Error>` implementation provides clean automatic conversion from tokenizer errors, simplifying error propagation throughout the parser.

picojson/src/stream_parser.rs (1)

1081-1084: Test correctly updated for new error semantics.

The comment now clearly explains the buffer size limitation, and the expected error type correctly uses `InputBufferFull` to indicate input buffer capacity issues rather than scratch buffer issues.

picojson/src/copy_on_escape.rs (1)

209-219: Well-designed DataSource support methods.

The new methods properly expose unescaped content state and provide safe access to the scratch buffer with correct lifetime annotations. The error handling in `get_scratch_contents` appropriately validates slice bounds.

picojson/src/event_processor.rs (2)

40-51: Good refactoring pattern for backward compatibility.

Delegating to the extended method with a default flag value preserves the existing API while enabling new functionality.

91-99: Proper end-of-stream handling.

Good practice to capture the `finish()` return value even if unused, and the explicit `EndDocument` return when no events are generated correctly handles the end-of-stream case.

picojson/src/slice_content_builder.rs (3)
71-95: Excellent refactoring to unify content extraction.

The consistent pattern for string and key extraction improves maintainability by:

- Properly completing CopyOnEscape processing when needed
- Using the unified `get_content_piece` helper to centralize logic
- Converting ContentPiece to the appropriate event type

This aligns well with the DataSource abstraction.

97-112: Clean simplification of number extraction.

Good removal of conditional logic by consistently using `ContentRange::end_position_excluding_delimiter` and the DataSource trait method. This makes the code more maintainable and aligns with the abstraction pattern.

177-200: Well-implemented DataSource trait.

The implementation correctly:

- Provides borrowed slices from the input buffer
- Accesses unescaped content from CopyOnEscape with proper lifetime management
- Validates state before returning unescaped slices

This completes the abstraction that unifies content access across parser implementations.
picojson/src/stream_content_builder.rs (5)

59-60: Error type change improves semantic clarity.

The change from `ScratchBufferFull` to `InputBufferFull` better reflects the actual issue: the input buffer is too small to hold the current token, not the scratch buffer. This aligns with the error's usage context where buffer compaction fails.

198-216: Excellent unification of content extraction logic.

The refactoring of both `extract_string_content` and `extract_key_content` to use the shared `get_content_piece` helper is well-executed. The queue-based reset mechanism for unescaped content is appropriate for the streaming nature of this parser, preventing content contamination between strings.

230-231: Appropriate use of DataSource trait for number extraction.

Using the `get_borrowed_slice` method from the DataSource trait properly abstracts the buffer access. This is correct since numbers don't require escape processing.

260-273: Well-structured adaptation to centralized Unicode escape processing.

The refactoring correctly:

- Extracts pending surrogate state before the call
- Passes `self` as the DataSource implementation
- Updates the collector's surrogate state after processing

This maintains proper state management while leveraging the new unified escape processing function.

319-340: Clean DataSource trait implementation with appropriate lifetime handling.

The implementation correctly uses the buffer lifetime `'b` for both borrowed and unescaped content, which is appropriate for the streaming nature of this parser. The delegation to StreamBuffer methods is clean and maintains proper error conversion.
6-12
: Well-designed type alias improves code clarityThe
UnicodeEscapeResult
type alias effectively encapsulates the complex return type for Unicode escape processing, making the code more readable and maintainable.
278-286
: Necessary accessor methods for surrogate state managementThe getter and setter methods provide controlled access to the pending surrogate state, which is required for the refactored Unicode escape processing design.
663-738
: Excellent refactoring improves modularity and testabilityThe refactored
process_unicode_escape_sequence
function:
- Decouples from external mutable state by creating an internal collector
- Uses the DataSource trait for clean abstraction of hex digit access
- Returns all state by value, making it more functional and testable
- Maintains correct surrogate pair handling, including proper position adjustment
This design is more modular and aligns well with the overall DataSource abstraction pattern.
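As background for the surrogate-pair handling discussed above: the UTF-16 pairing arithmetic itself is standard, and a small self-contained illustration follows. This is not the crate's code; the function name and signature are hypothetical.

```rust
// Combine a UTF-16 surrogate pair (as seen in \uD83D\uDE00 escapes)
// into a single Unicode scalar value. Standard formula:
// cp = 0x10000 + (high - 0xD800) * 0x400 + (low - 0xDC00)
fn combine_surrogates(high: u32, low: u32) -> Option<char> {
    if !(0xD800..=0xDBFF).contains(&high) || !(0xDC00..=0xDFFF).contains(&low) {
        return None; // not a valid high/low surrogate pair
    }
    let cp = 0x10000 + ((high - 0xD800) << 10) + (low - 0xDC00);
    char::from_u32(cp)
}

fn main() {
    // U+1F600 (😀) is encoded in JSON as \ud83d\ude00.
    assert_eq!(combine_surrogates(0xD83D, 0xDE00), Some('😀'));
    // A low surrogate cannot lead a pair.
    assert_eq!(combine_surrogates(0xDE00, 0xD83D), None);
}
```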
picojson/src/shared.rs (3)

178-217: Excellent trait design provides clean abstraction.

The `DataSource` trait is well-designed with:

- Clear separation between borrowed (zero-copy) and unescaped (processed) content
- Proper lifetime parameters enabling flexible implementations
- Comprehensive documentation explaining the purpose of each method
- Consistent error handling through Result types

This abstraction successfully unifies content access patterns across different parser implementations.

219-247: Clean enum design with proper UTF-8 validation.

The `ContentPiece` enum effectively separates the two content sources and provides a clean conversion to the `String` type with proper UTF-8 validation for both variants.

253-274: Well-implemented helper unifies content extraction logic.

The `get_content_piece` function successfully consolidates the content extraction pattern used across parsers. It correctly prioritizes unescaped content when available and falls back to borrowed content, using consistent position calculations via `ContentRange`.
Summary by Sourcery
Introduce a unified DataSource abstraction to streamline content extraction across stream and slice parsers