Conversation


@kaidokert kaidokert commented Aug 3, 2025

Summary by Sourcery

Introduce a unified DataSource abstraction to streamline content extraction across stream and slice parsers

New Features:

  • Define DataSource trait with methods for borrowed and unescaped slices
  • Add ContentPiece enum and get_content_piece helper for unified content handling
  • Implement DataSource for StreamContentBuilder and SliceContentBuilder
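The shape of this abstraction can be pictured with a minimal sketch. Everything below (the reduced `ParseError` stub and the toy `SliceSource` type) is illustrative scaffolding, not the crate's actual code; only the `DataSource` method names come from this summary:

```rust
// Hypothetical sketch of the DataSource trait described above.
// ParseError is reduced to a single stub variant for illustration.
#[derive(Debug, PartialEq)]
enum ParseError {
    InvalidSliceBounds,
}

trait DataSource {
    /// Zero-copy slice of the original input bytes.
    fn get_borrowed_slice(&self, start: usize, end: usize) -> Result<&[u8], ParseError>;
    /// Slice of the scratch buffer holding unescaped bytes.
    fn get_unescaped_slice(&self) -> Result<&[u8], ParseError>;
    /// True when escapes were processed into the scratch buffer.
    fn has_unescaped_content(&self) -> bool;
}

// A toy slice-backed implementor, standing in for the real content builders.
struct SliceSource<'a> {
    input: &'a [u8],
    scratch: Vec<u8>,
}

impl<'a> DataSource for SliceSource<'a> {
    fn get_borrowed_slice(&self, start: usize, end: usize) -> Result<&[u8], ParseError> {
        // Out-of-range or reversed bounds surface as an error instead of panicking.
        self.input.get(start..end).ok_or(ParseError::InvalidSliceBounds)
    }
    fn get_unescaped_slice(&self) -> Result<&[u8], ParseError> {
        Ok(&self.scratch)
    }
    fn has_unescaped_content(&self) -> bool {
        !self.scratch.is_empty()
    }
}

fn main() {
    let src = SliceSource { input: b"\"hello\"", scratch: Vec::new() };
    assert_eq!(src.get_borrowed_slice(1, 6).unwrap(), b"hello");
    assert!(!src.has_unescaped_content());
    assert!(src.get_borrowed_slice(3, 99).is_err());
}
```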

Bug Fixes:

  • Rename ParseError::ScratchBufferFull to InputBufferFull and update related error handling
  • Prevent invalid string bounds when content start exceeds end

Enhancements:

  • Refactor extract_string, extract_key, and extract_number to delegate to DataSource and shared helpers
  • Migrate escape_processor to use DataSource and return pending surrogate state by value
  • Extend ParserCore with next_event_impl and flags to control byte accumulation during escape sequences
  • Update ContentRange to safely handle empty or reversed bounds

Tests:

  • Update tests to expect InputBufferFull instead of ScratchBufferFull

Chores:

  • Implement From<ujson::Error> for ParseError

Summary by CodeRabbit

  • New Features

    • Added unified content extraction through a new interface, allowing consistent access to both original and unescaped data across different parser types.
    • Introduced new error variant for input buffer overflows, improving error clarity.
    • Enhanced Unicode escape handling with improved surrogate pair support and state management.
    • Added configurability for byte accumulation during escape sequences.
  • Bug Fixes

    • Improved handling of invalid slice bounds during content extraction to prevent errors.
  • Documentation

    • Clarified error descriptions for buffer and UTF-8 related issues.
  • Refactor

    • Unified and simplified content extraction and Unicode escape logic for maintainability and correctness.
    • Replaced direct buffer access with a new abstraction for safer and more flexible parsing.
  • Tests

    • Updated test cases to reflect new error handling for buffer size limitations.


sourcery-ai bot commented Aug 3, 2025

Reviewer's Guide

This PR introduces a generic DataSource abstraction for unified zero-copy and unescaped content access, refactors stream/slice builders and escape processing to depend on it, enhances error handling and parser-core event accumulation, and removes specialized helper methods in favor of shared utilities.

Class diagram for the new DataSource abstraction and implementors

```mermaid
classDiagram
    class DataSource {
        <<trait>>
        +get_borrowed_slice(start, end): Result<&[u8], ParseError>
        +get_unescaped_slice(): Result<&[u8], ParseError>
        +has_unescaped_content(): bool
    }
    class StreamContentBuilder {
        +stream_buffer: StreamBuffer
        +unicode_escape_collector: UnicodeEscapeCollector
        +unescaped_reset_queued: bool
        +current_position(): usize
        +has_unescaped_content(): bool
        +queue_unescaped_reset()
        +get_borrowed_slice(start, end): Result<&[u8], ParseError>
        +get_unescaped_slice(): Result<&[u8], ParseError>
    }
    class SliceContentBuilder {
        +buffer: SliceInputBuffer
        +copy_on_escape: CopyOnEscape
        +unicode_escape_collector: UnicodeEscapeCollector
        +current_position(): usize
        +has_unescaped_content(): bool
        +get_borrowed_slice(start, end): Result<&[u8], ParseError>
        +get_unescaped_slice(): Result<&[u8], ParseError>
    }
    DataSource <|.. StreamContentBuilder
    DataSource <|.. SliceContentBuilder
```

Class diagram for ContentPiece and its integration

```mermaid
classDiagram
    class ContentPiece {
        <<enum>>
        +Input(&[u8])
        +Scratch(&[u8])
        +into_string(): Result<String, ParseError>
    }
    class String {
        <<enum>>
        +Borrowed(&str)
        +Unescaped(&str)
    }
    ContentPiece --> String : into_string()
```

Class diagram for updated UnicodeEscapeCollector and escape processing

```mermaid
classDiagram
    class UnicodeEscapeCollector {
        +pending_high_surrogate: Option<u32>
        +has_pending_high_surrogate(): bool
        +get_pending_high_surrogate(): Option<u32>
        +set_pending_high_surrogate(surrogate: Option<u32>)
        +add_hex_digit(hex_digit: u8)
        +process_to_utf8(buf: &mut [u8; 4])
    }
```
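The `pending_high_surrogate` state exists because a surrogate pair spans two `\uXXXX` escapes. The pairing math itself is standard UTF-16 decoding; the sketch below shows only that math, not the collector's real API:

```rust
// Standard UTF-16 surrogate-pair combination (illustrative; the
// UnicodeEscapeCollector's actual interface differs).
fn combine_surrogates(high: u32, low: u32) -> Option<u32> {
    // High surrogates occupy 0xD800..=0xDBFF, low surrogates 0xDC00..=0xDFFF.
    if (0xD800..=0xDBFF).contains(&high) && (0xDC00..=0xDFFF).contains(&low) {
        Some(0x10000 + ((high - 0xD800) << 10) + (low - 0xDC00))
    } else {
        None
    }
}

fn main() {
    // JSON "\uD83D\uDE00" decodes to U+1F600 (grinning face).
    assert_eq!(combine_surrogates(0xD83D, 0xDE00), Some(0x1F600));
    // A non-surrogate first unit cannot start a pair.
    assert_eq!(combine_surrogates(0x0041, 0xDE00), None);
}
```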

Class diagram for get_content_piece helper function

```mermaid
classDiagram
    class get_content_piece {
        +get_content_piece(source, start_pos, current_pos): Result<ContentPiece, ParseError>
    }
    get_content_piece --> DataSource
    get_content_piece --> ContentPiece
```
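The selection logic the diagrams describe can be sketched as follows. All types here are stand-ins (the real `ParseError`, `ContentPiece`, and builders live in `picojson/src/shared.rs` and the content builders); only the names and the borrow-vs-scratch decision come from this guide:

```rust
// Illustrative sketch of get_content_piece: pick unescaped scratch content
// when it exists, otherwise borrow directly from the input.
#[derive(Debug, PartialEq)]
enum ParseError {
    InvalidSliceBounds,
}

#[derive(Debug, PartialEq)]
enum ContentPiece<'a> {
    Input(&'a [u8]),   // borrowed straight from the original input
    Scratch(&'a [u8]), // unescaped bytes from the scratch buffer
}

trait DataSource {
    fn get_borrowed_slice(&self, start: usize, end: usize) -> Result<&[u8], ParseError>;
    fn get_unescaped_slice(&self) -> Result<&[u8], ParseError>;
    fn has_unescaped_content(&self) -> bool;
}

fn get_content_piece<'a, D: DataSource>(
    source: &'a D,
    start_pos: usize,
    current_pos: usize,
) -> Result<ContentPiece<'a>, ParseError> {
    if source.has_unescaped_content() {
        Ok(ContentPiece::Scratch(source.get_unescaped_slice()?))
    } else {
        Ok(ContentPiece::Input(source.get_borrowed_slice(start_pos, current_pos)?))
    }
}

// Toy implementor for the demo below.
struct Demo {
    input: &'static [u8],
    scratch: Vec<u8>,
}

impl DataSource for Demo {
    fn get_borrowed_slice(&self, start: usize, end: usize) -> Result<&[u8], ParseError> {
        self.input.get(start..end).ok_or(ParseError::InvalidSliceBounds)
    }
    fn get_unescaped_slice(&self) -> Result<&[u8], ParseError> {
        Ok(&self.scratch)
    }
    fn has_unescaped_content(&self) -> bool {
        !self.scratch.is_empty()
    }
}

fn main() {
    // No escapes seen: content is borrowed zero-copy from the input.
    let plain = Demo { input: b"abc", scratch: Vec::new() };
    assert_eq!(get_content_piece(&plain, 0, 3).unwrap(), ContentPiece::Input(b"abc"));

    // Escapes were processed: content comes from the scratch buffer.
    let escaped = Demo { input: b"a\\nb", scratch: b"a\nb".to_vec() };
    assert_eq!(get_content_piece(&escaped, 0, 4).unwrap(), ContentPiece::Scratch(b"a\nb"));
}
```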

File-Level Changes

Change | Details | Files
Add DataSource abstraction and shared utilities
  • Define DataSource trait with methods for borrowed and unescaped slices
  • Introduce ContentPiece enum and get_content_piece helper
  • Add pending surrogate getters/setters in UnicodeEscapeCollector
picojson/src/shared.rs
picojson/src/escape_processor.rs
Refactor content builders to use DataSource
  • Implement DataSource on StreamContentBuilder and SliceContentBuilder
  • Replace create_unescaped_string/create_borrowed_string with get_content_piece
  • Use get_borrowed_slice/get_unescaped_slice and has_unescaped_content for string, key, and number extraction
picojson/src/stream_content_builder.rs
picojson/src/slice_content_builder.rs
Escape processing refactoring and DataSource integration
  • Change process_unicode_escape_sequence signature to accept DataSource and return new surrogate state
  • Pass self to escape_processor instead of hex_slice_provider closure
  • Implement scratch buffer access in CopyOnEscape for DataSource
picojson/src/escape_processor.rs
picojson/src/copy_on_escape.rs
Enhance error handling and buffer bounds
  • Add InputBufferFull error and replace ScratchBufferFull in StreamContentBuilder
  • Implement From<ujson::Error> for ParseError
  • Adjust ContentRange to avoid start > end, remove data_len, and update slice_input_buffer
  • Update tests to expect InputBufferFull
picojson/src/parse_error.rs
picojson/src/shared.rs
picojson/src/slice_input_buffer.rs
picojson/src/stream_content_builder.rs
Extend ParserCore event accumulation behavior
  • Add next_event_impl overload with always_accumulate_during_escapes flag
  • Adjust byte_accumulator logic to optionally accumulate during escape sequences
picojson/src/event_processor.rs


coderabbitai bot commented Aug 3, 2025

Walkthrough

This change introduces a new DataSource trait and refactors content extraction and Unicode escape processing throughout the codebase to use this abstraction. New methods and trait implementations are added to core structs, error handling is clarified with a new error variant, and content extraction logic is unified and simplified across both slice and stream-based parsers.

Changes

Cohort / File(s) | Change Summary
CopyOnEscape DataSource Support
picojson/src/copy_on_escape.rs
Added public methods to expose scratch buffer state (has_unescaped_content, get_scratch_contents) for DataSource compatibility.
Unicode Escape Refactor
picojson/src/escape_processor.rs
Refactored Unicode escape processing to use a DataSource trait object and a pending surrogate parameter; added getter/setter for surrogate state; introduced a result type alias.
Parser Event Handling
picojson/src/event_processor.rs
Added a wrapper and extended method for event processing with a flag to control byte accumulation during escapes; clarified end-of-stream handling.
ParseError and Error Handling
picojson/src/parse_error.rs
Added InputBufferFull error variant, improved documentation, and implemented From<ujson::Error> for ParseError.
DataSource Abstraction
picojson/src/shared.rs
Introduced the DataSource trait, ContentPiece enum, unified content extraction logic, and improved bounds safety.
SliceContentBuilder Refactor
picojson/src/slice_content_builder.rs
Implemented DataSource for SliceContentBuilder, refactored content and Unicode escape extraction to use this trait, and improved surrogate handling.
SliceInputBuffer Simplification
picojson/src/slice_input_buffer.rs
Removed the data_len method.
StreamContentBuilder Refactor
picojson/src/stream_content_builder.rs
Implemented DataSource for StreamContentBuilder, unified content extraction, clarified error handling, and updated Unicode escape processing.
Stream Parser Test Update
picojson/src/stream_parser.rs
Updated a test to clarify buffer size limitations and expected error variant.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Parser
    participant ContentBuilder
    participant DataSource
    participant CopyOnEscape
    participant InputBuffer

    Parser->>ContentBuilder: extract_string_content()
    ContentBuilder->>DataSource: get_content_piece(start, end)
    alt Has unescaped content
        DataSource->>CopyOnEscape: get_unescaped_slice()
        CopyOnEscape-->>DataSource: &[u8] (unescaped)
    else
        DataSource->>InputBuffer: get_borrowed_slice(start, end)
        InputBuffer-->>DataSource: &[u8] (borrowed)
    end
    DataSource-->>ContentBuilder: ContentPiece
    ContentBuilder->>Parser: String
```
```mermaid
sequenceDiagram
    participant ContentBuilder
    participant DataSource
    participant UnicodeEscapeProcessor

    ContentBuilder->>UnicodeEscapeProcessor: process_unicode_escape_sequence(pos, pending_surrogate, DataSource)
    UnicodeEscapeProcessor->>DataSource: get_borrowed_slice() (for hex digits)
    DataSource-->>UnicodeEscapeProcessor: &[u8]
    UnicodeEscapeProcessor-->>ContentBuilder: (utf8_bytes, new_pos, updated_surrogate)
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

  • kaidokert/picojson-rs#1: Initial implementation of CopyOnEscape, directly related as this PR adds new methods to the same struct.
  • kaidokert/picojson-rs#7: Introduced the original process_unicode_escape_sequence function; this PR refactors and extends its logic.
  • kaidokert/picojson-rs#62: Added ParserCore, content builders, and refactored Unicode escape processing; this PR builds upon and extends these abstractions.

Poem

In the warren where bytes do flow,
A DataSource trait now helps us know
If content’s borrowed or freshly made,
Unescaped slices hop into the shade.
Surrogates handled with gentle care,
Parsing JSON with rabbits’ flair!
🐇✨


@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @kaidokert, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

I've introduced a new DataSource abstraction to unify how different JSON parsers access input data, whether it's directly from the original input buffer or from a temporary scratch buffer used for unescaped content. This refactoring aims to streamline content extraction logic across slice-based and stream-based parsers, improving code consistency and maintainability. It centralizes data access patterns, making the parsing components more modular and easier to manage.

Highlights

  • New DataSource Abstraction: I've introduced a new DataSource trait to abstract how different parsers access input data, whether it's directly from the original input or from a temporary scratch buffer for unescaped content. This trait defines a unified interface for data retrieval.
  • Unified Content Extraction Logic: A new helper function, get_content_piece, has been added. This function leverages the DataSource trait to intelligently determine whether to extract content directly from the input or from a scratch buffer, significantly simplifying the logic for string and key extraction across parsers.
  • Implementation of DataSource for Parsers: Both SliceContentBuilder and StreamContentBuilder now implement the DataSource trait. This allows them to seamlessly integrate with the new unified content extraction patterns and the refactored unicode escape processing.
  • Refactored Unicode Escape Processing: The process_unicode_escape_sequence function has been refactored to accept a generic DataSource instead of a specific closure for providing hex digits. This change makes the unicode escape processing more flexible and reusable across different parser implementations.
  • Error Type Refinement: The ParseError enum has been refined, specifically changing ScratchBufferFull to InputBufferFull for better clarity on the error's origin. Additionally, a From implementation for ujson::Error was added to streamline error handling.
  • Enhanced CopyOnEscape Functionality: The CopyOnEscape component now includes new public methods, has_unescaped_content and get_scratch_contents, which are essential for its role as a DataSource for the SliceContentBuilder.

@sourcery-ai sourcery-ai bot left a comment


Hey @kaidokert - I've reviewed your changes - here's some feedback:

  • There’s still repeated pre‐extraction escape handling in both the stream and slice builders (e.g. calling queue_unescaped_reset or copy_on_escape.end_string)—consider moving that into the DataSource trait or a shared helper so extract_* only needs to call get_content_piece.
  • The change in ContentRange::string_content_bounds_from_content_start silently clamps cases where start > end to an empty slice, which can mask invalid bounds—consider returning an explicit error or panic to catch logic bugs sooner.
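The trade-off in the second point can be made concrete with a small sketch. Both helper names below are hypothetical, not the crate's `ContentRange` API:

```rust
// Current behavior described in the review: reversed bounds silently
// collapse to an empty range, which can hide a logic bug upstream.
fn clamped_bounds(start: usize, end: usize) -> (usize, usize) {
    if start > end { (end, end) } else { (start, end) }
}

// Suggested alternative: surface reversed bounds as an explicit error
// so the bug is caught at its source.
fn checked_bounds(start: usize, end: usize) -> Result<(usize, usize), &'static str> {
    if start > end {
        Err("invalid bounds: start > end")
    } else {
        Ok((start, end))
    }
}

fn main() {
    assert_eq!(clamped_bounds(5, 3), (3, 3)); // silently empty slice
    assert!(checked_bounds(5, 3).is_err());   // explicit failure
    assert_eq!(checked_bounds(2, 7), Ok((2, 7)));
}
```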



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a DataSource abstraction to unify content extraction logic between the stream and slice parsers. This is an excellent refactoring that significantly improves the code structure, reduces duplication, and enhances maintainability. The changes are well-implemented and consistent across the codebase. My review includes a couple of minor suggestions to simplify boolean logic and redundant calculations for improved code clarity.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🔭 Outside diff range comments (1)
picojson/src/slice_content_builder.rs (1)

122-161: Improved Unicode escape handling with proper surrogate support.

The explicit tracking of pending surrogate state and bounds checking for UTF-8 slices are good improvements. At lines 150-152, the position calculation could otherwise overflow, and the code guards against it:

```rust
let low_surrogate_pos = escape_start_pos
    .checked_add(6)
    .ok_or(ParseError::NumericOverflow)?;
```

Good that you're using checked_add to handle this case safely!

🧹 Nitpick comments (1)
picojson/src/event_processor.rs (1)

76-85: Consider simplifying the accumulation logic for clarity.

While functionally correct, the nested conditionals could be simplified:

```diff
-let should_accumulate = if always_accumulate_during_escapes {
-    if self.in_escape_sequence {
-        true // Always accumulate during escape sequences
-    } else {
-        !have_events(&self.parser_state.evts) // Normal behavior outside escapes
-    }
-} else {
-    !have_events(&self.parser_state.evts) && !self.in_escape_sequence
-};
+let should_accumulate = match (always_accumulate_during_escapes, self.in_escape_sequence) {
+    (true, true) => true,  // Flag set + in escape: always accumulate
+    (_, false) => !have_events(&self.parser_state.evts),  // Not in escape: accumulate if no events
+    (false, true) => false,  // Flag not set + in escape: don't accumulate
+};
```

This makes the three distinct cases more explicit.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1213716 and 7831724.

📒 Files selected for processing (9)
  • picojson/src/copy_on_escape.rs (1 hunks)
  • picojson/src/escape_processor.rs (4 hunks)
  • picojson/src/event_processor.rs (2 hunks)
  • picojson/src/parse_error.rs (2 hunks)
  • picojson/src/shared.rs (2 hunks)
  • picojson/src/slice_content_builder.rs (4 hunks)
  • picojson/src/slice_input_buffer.rs (0 hunks)
  • picojson/src/stream_content_builder.rs (6 hunks)
  • picojson/src/stream_parser.rs (1 hunks)
💤 Files with no reviewable changes (1)
  • picojson/src/slice_input_buffer.rs
🧰 Additional context used
🧠 Learnings (8)
📚 Learning: in json parsing tests for picojson-rs, when testing unicode escape sequences, raw string literals us...
Learnt from: kaidokert
PR: kaidokert/picojson-rs#69
File: picojson/tests/json_checker_tests.rs:296-297
Timestamp: 2025-07-27T05:05:22.707Z
Learning: In JSON parsing tests for picojson-rs, when testing Unicode escape sequences, raw string literals use double backslashes (e.g., r#"\\uCAFE"#) to create JSON input containing single backslashes (\uCAFE) that the JSON parser processes. The double backslashes are not an error - they correctly represent the JSON input format that contains escape sequences for the parser to decode.

Applied to files:

  • picojson/src/stream_parser.rs
  • picojson/src/slice_content_builder.rs
  • picojson/src/event_processor.rs
  • picojson/src/escape_processor.rs
  • picojson/src/shared.rs
📚 Learning: in the picojson-rs project, the `use tokenizer as ujson;` alias in lib.rs is a transitionary and ful...
Learnt from: kaidokert
PR: kaidokert/picojson-rs#5
File: picojson/src/lib.rs:0-0
Timestamp: 2025-06-29T17:48:18.198Z
Learning: In the picojson-rs project, the `use tokenizer as ujson;` alias in lib.rs is a transitionary and fully internal private alias used during crate reorganization. Examples and external code no longer depend on this alias, making the private visibility appropriate.

Applied to files:

  • picojson/src/stream_parser.rs
  • picojson/src/copy_on_escape.rs
  • picojson/src/parse_error.rs
  • picojson/src/stream_content_builder.rs
  • picojson/src/event_processor.rs
  • picojson/src/escape_processor.rs
  • picojson/src/shared.rs
📚 Learning: in picojson-rs event_processor module, the tokenizer callback intentionally uses a fixed-size array ...
Learnt from: kaidokert
PR: kaidokert/picojson-rs#55
File: picojson/src/event_processor.rs:188-203
Timestamp: 2025-07-13T05:06:50.688Z
Learning: In picojson-rs event_processor module, the tokenizer callback intentionally uses a fixed-size array of 2 slots and silently drops events when full. This is deliberate design for embedded/constrained environments requiring: zero-allocation guarantee, panic-free operation, deterministic memory usage, and fixed memory footprint. The array size of 2 is sufficient for ujson's event generation patterns when processed in tight loops.

Applied to files:

  • picojson/src/stream_parser.rs
  • picojson/src/parse_error.rs
  • picojson/src/stream_content_builder.rs
  • picojson/src/slice_content_builder.rs
  • picojson/src/event_processor.rs
📚 Learning: in the stax json parser codebase, escapesequence event handlers exist in flex_parser.rs not because ...
Learnt from: kaidokert
PR: kaidokert/picojson-rs#1
File: stax/src/flex_parser.rs:0-0
Timestamp: 2025-06-28T18:12:30.015Z
Learning: In the stax JSON parser codebase, EscapeSequence event handlers exist in flex_parser.rs not because they're needed by that parser variant, but to avoid catch-all patterns in match statements. The flex parser doesn't need to process EscapeSequence events, but the other parser variant (direct parser) does need them.

Applied to files:

  • picojson/src/stream_parser.rs
  • picojson/src/copy_on_escape.rs
  • picojson/src/stream_content_builder.rs
  • picojson/src/slice_content_builder.rs
  • picojson/src/event_processor.rs
  • picojson/src/escape_processor.rs
  • picojson/src/shared.rs
📚 Learning: in the picojson-rs project, the bitstack trait was redesigned to return bool instead of option...
Learnt from: kaidokert
PR: kaidokert/picojson-rs#1
File: tokenizer/src/bitstack/mod.rs:0-0
Timestamp: 2025-06-28T23:43:22.783Z
Learning: In the picojson-rs project, the BitStack trait was redesigned to return bool instead of Option<bool> for pop() and top() methods. Empty stacks return false rather than None, which simplifies the API and avoids Option handling.

Applied to files:

  • picojson/src/stream_parser.rs
  • picojson/src/copy_on_escape.rs
  • picojson/src/stream_content_builder.rs
  • picojson/src/event_processor.rs
  • picojson/src/shared.rs
📚 Learning: in picojson-rs sliceparser, is_empty() and is_past_end() serve different purposes: is_empty() return...
Learnt from: kaidokert
PR: kaidokert/picojson-rs#55
File: picojson/src/slice_parser.rs:273-286
Timestamp: 2025-07-13T05:11:46.914Z
Learning: In picojson-rs SliceParser, is_empty() and is_past_end() serve different purposes: is_empty() returns true when pos >= data.len() (at document boundary, all input consumed), while is_past_end() returns true when pos > data.len() (gone beyond input). For number parsing delimiter logic, is_empty() is correct because it detects when parsing the last token at document end, whereas is_past_end() would incorrectly indicate not at document end for standalone numbers.

Applied to files:

  • picojson/src/stream_parser.rs
  • picojson/src/copy_on_escape.rs
  • picojson/src/parse_error.rs
  • picojson/src/stream_content_builder.rs
  • picojson/src/slice_content_builder.rs
  • picojson/src/event_processor.rs
  • picojson/src/shared.rs
📚 Learning: in picojson-rs event processing, the sliceparser uses an if/else pattern with process_simple_events(...
Learnt from: kaidokert
PR: kaidokert/picojson-rs#55
File: picojson/src/slice_parser.rs:0-0
Timestamp: 2025-07-13T05:10:01.847Z
Learning: In picojson-rs event processing, the SliceParser uses an if/else pattern with process_simple_events() first, then process_begin_events() as fallback. Both branches use identical match statements for all EventResult variants for consistency and maintainability, even though process_begin_events() only returns Continue or None while process_simple_events() can return all variants.

Applied to files:

  • picojson/src/stream_parser.rs
  • picojson/src/stream_content_builder.rs
  • picojson/src/slice_content_builder.rs
  • picojson/src/event_processor.rs
  • picojson/src/escape_processor.rs
📚 Learning: the pullparser trait in picojson-rs provides both next() and next_event() methods. the next() method...
Learnt from: kaidokert
PR: kaidokert/picojson-rs#44
File: picojson/src/chunk_reader.rs:28-33
Timestamp: 2025-07-07T01:39:55.177Z
Learning: The PullParser trait in picojson-rs provides both next() and next_event() methods. The next() method is an iterator-like convenience method that returns Option<Result<Event, ParseError>>, returning None when EndDocument is reached. The next_event() method returns Result<Event, ParseError> directly. Both methods are valid and the choice depends on whether you want iterator-style usage (next) or direct result handling (next_event).

Applied to files:

  • picojson/src/parse_error.rs
  • picojson/src/stream_content_builder.rs
  • picojson/src/slice_content_builder.rs
  • picojson/src/event_processor.rs
  • picojson/src/shared.rs
🧬 Code Graph Analysis (3)
picojson/src/copy_on_escape.rs (4)
picojson/src/shared.rs (1)
  • has_unescaped_content (216-216)
picojson/src/slice_content_builder.rs (1)
  • has_unescaped_content (197-199)
picojson/src/stream_content_builder.rs (1)
  • has_unescaped_content (337-339)
picojson/src/stream_buffer.rs (1)
  • has_unescaped_content (243-245)
picojson/src/stream_content_builder.rs (4)
picojson/src/shared.rs (4)
  • get_content_piece (256-274)
  • get_borrowed_slice (198-202)
  • get_unescaped_slice (209-209)
  • has_unescaped_content (216-216)
picojson/src/slice_content_builder.rs (4)
  • extract_key_content (84-95)
  • get_borrowed_slice (182-184)
  • get_unescaped_slice (186-195)
  • has_unescaped_content (197-199)
picojson/src/escape_processor.rs (1)
  • process_unicode_escape_sequence (679-739)
picojson/src/stream_buffer.rs (2)
  • get_unescaped_slice (218-225)
  • has_unescaped_content (243-245)
picojson/src/shared.rs (4)
picojson/src/slice_content_builder.rs (3)
  • get_borrowed_slice (182-184)
  • get_unescaped_slice (186-195)
  • has_unescaped_content (197-199)
picojson/src/stream_content_builder.rs (3)
  • get_borrowed_slice (327-331)
  • get_unescaped_slice (333-335)
  • has_unescaped_content (337-339)
picojson/src/stream_buffer.rs (2)
  • get_unescaped_slice (218-225)
  • has_unescaped_content (243-245)
picojson/src/copy_on_escape.rs (1)
  • has_unescaped_content (210-212)
🔇 Additional comments (20)
picojson/src/parse_error.rs (2)

14-19: LGTM! Clear error differentiation.

The documentation updates are more concise, and the new InputBufferFull variant properly distinguishes input buffer capacity issues from scratch buffer limitations, improving error diagnostics.


78-82: Good addition of error conversion trait.

The From<ujson::Error> implementation provides clean automatic conversion from tokenizer errors, simplifying error propagation throughout the parser.
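The conversion pattern described above can be sketched as follows. This is a minimal illustration, not the crate's actual definitions: the error names and variants here are assumptions, chosen only to show how a `From` impl lets the `?` operator convert tokenizer errors into parser errors automatically.

```rust
// Hypothetical stand-in for ujson::Error.
#[derive(Debug, PartialEq)]
enum TokenizerError {
    UnexpectedByte(u8),
}

// Hypothetical stand-in for picojson's ParseError.
#[derive(Debug, PartialEq)]
enum ParseError {
    InputBufferFull,
    Tokenizer(TokenizerError),
}

impl From<TokenizerError> for ParseError {
    fn from(e: TokenizerError) -> Self {
        ParseError::Tokenizer(e)
    }
}

fn tokenize(b: u8) -> Result<u8, TokenizerError> {
    if b == b'{' {
        Ok(b)
    } else {
        Err(TokenizerError::UnexpectedByte(b))
    }
}

fn parse(b: u8) -> Result<u8, ParseError> {
    // `?` applies From<TokenizerError> for ParseError on the error path,
    // so no explicit map_err is needed at each call site.
    Ok(tokenize(b)?)
}
```

With the `From` impl in place, every fallible tokenizer call inside the parser propagates with a bare `?`, which is what simplifies error handling throughout.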

picojson/src/stream_parser.rs (1)

1081-1084: Test correctly updated for new error semantics.

The comment now clearly explains the buffer size limitation, and the expected error type correctly uses InputBufferFull to indicate input buffer capacity issues rather than scratch buffer issues.

picojson/src/copy_on_escape.rs (1)

209-219: Well-designed DataSource support methods.

The new methods properly expose unescaped content state and provide safe access to the scratch buffer with correct lifetime annotations. The error handling in get_scratch_contents appropriately validates slice bounds.

picojson/src/event_processor.rs (2)

40-51: Good refactoring pattern for backward compatibility.

Delegating to the extended method with a default flag value preserves the existing API while enabling new functionality.
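The delegation pattern looks roughly like this. Names (`next_event`, `next_event_impl`, the flag) are assumptions mirroring the review comment, not the crate's exact signatures:

```rust
struct Parser {
    // Controls whether bytes are accumulated during escape sequences.
    accumulate_escape_bytes: bool,
}

#[derive(Debug, PartialEq)]
enum Event {
    Continue,
}

impl Parser {
    /// Existing public API: unchanged signature for current callers.
    fn next_event(&mut self) -> Event {
        // Delegate with the default flag value to preserve old behavior.
        self.next_event_impl(true)
    }

    /// Extended implementation that new call sites can drive directly.
    fn next_event_impl(&mut self, accumulate: bool) -> Event {
        self.accumulate_escape_bytes = accumulate;
        Event::Continue
    }
}
```

The thin wrapper keeps the old entry point stable while the extended method carries the new configurability.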


91-99: Proper end-of-stream handling.

Good practice to capture the finish() return value even if unused, and the explicit EndDocument return when no events are generated correctly handles the end-of-stream case.

picojson/src/slice_content_builder.rs (3)

71-95: Excellent refactoring to unify content extraction.

The consistent pattern for string and key extraction improves maintainability by:

  • Properly completing CopyOnEscape processing when needed
  • Using the unified get_content_piece helper to centralize logic
  • Converting ContentPiece to the appropriate event type

This aligns well with the DataSource abstraction.


97-112: Clean simplification of number extraction.

Good removal of conditional logic by consistently using ContentRange::end_position_excluding_delimiter and the DataSource trait method. This makes the code more maintainable and aligns with the abstraction pattern.
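The delimiter-exclusive end position and the safe-bounds handling mentioned in the PR summary can be sketched as two small helpers. These are assumed semantics for illustration, not ContentRange's actual API:

```rust
/// If a delimiter byte (',', '}', etc.) terminated the token, exclude it
/// from the content range; saturating_sub guards the end == 0 edge.
fn end_position_excluding_delimiter(end: usize, had_delimiter: bool) -> usize {
    if had_delimiter {
        end.saturating_sub(1)
    } else {
        end
    }
}

/// Reversed (start > end) or out-of-range bounds yield an empty slice
/// instead of panicking, matching the "start exceeds end" fix.
fn safe_content_slice(data: &[u8], start: usize, end: usize) -> &[u8] {
    if start >= end || end > data.len() {
        &[]
    } else {
        &data[start..end]
    }
}
```

For a standalone number at document end there is no trailing delimiter, so the end position is used as-is; with a delimiter, one byte is trimmed.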


177-200: Well-implemented DataSource trait.

The implementation correctly:

  • Provides borrowed slices from the input buffer
  • Accesses unescaped content from CopyOnEscape with proper lifetime management
  • Validates state before returning unescaped slices

This completes the abstraction that unifies content access across parser implementations.

picojson/src/stream_content_builder.rs (5)

59-60: Error type change improves semantic clarity

The change from ScratchBufferFull to InputBufferFull better reflects the actual issue: the input buffer is too small to hold the current token, not the scratch buffer. This aligns with the error's usage context where buffer compaction fails.


198-216: Excellent unification of content extraction logic

The refactoring of both extract_string_content and extract_key_content to use the shared get_content_piece helper is well-executed. The queue-based reset mechanism for unescaped content is appropriate for the streaming nature of this parser, preventing content contamination between strings.


230-231: Appropriate use of DataSource trait for number extraction

Using the get_borrowed_slice method from the DataSource trait properly abstracts the buffer access. This is correct since numbers don't require escape processing.


260-273: Well-structured adaptation to centralized Unicode escape processing

The refactoring correctly:

  1. Extracts pending surrogate state before the call
  2. Passes self as the DataSource implementation
  3. Updates the collector's surrogate state after processing

This maintains proper state management while leveraging the new unified escape processing function.


319-340: Clean DataSource trait implementation with appropriate lifetime handling

The implementation correctly uses the buffer lifetime 'b for both borrowed and unescaped content, which is appropriate for the streaming nature of this parser. The delegation to StreamBuffer methods is clean and maintains proper error conversion.

picojson/src/escape_processor.rs (3)

6-12: Well-designed type alias improves code clarity

The UnicodeEscapeResult type alias effectively encapsulates the complex return type for Unicode escape processing, making the code more readable and maintainable.


278-286: Necessary accessor methods for surrogate state management

The getter and setter methods provide controlled access to the pending surrogate state, which is required for the refactored Unicode escape processing design.


663-738: Excellent refactoring improves modularity and testability

The refactored process_unicode_escape_sequence function:

  1. Decouples from external mutable state by creating an internal collector
  2. Uses the DataSource trait for clean abstraction of hex digit access
  3. Returns all state by value, making it more functional and testable
  4. Maintains correct surrogate pair handling, including proper position adjustment

This design is more modular and aligns well with the overall DataSource abstraction pattern.
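The "return state by value" shape can be illustrated with a self-contained step function for UTF-16 surrogate handling. The function name and tuple shape are assumptions for illustration; the pairing arithmetic itself is standard UTF-16:

```rust
/// Process one decoded \uXXXX unit. Returns (completed char, pending high
/// surrogate) by value instead of mutating external collector state.
fn step(pending: Option<u16>, unit: u16) -> (Option<char>, Option<u16>) {
    match pending {
        // A pending high surrogate followed by a low surrogate completes a pair:
        // code = 0x10000 + ((high - 0xD800) << 10) + (low - 0xDC00)
        Some(high) if (0xDC00..=0xDFFF).contains(&unit) => {
            let code = 0x10000 + (((high as u32 - 0xD800) << 10) | (unit as u32 - 0xDC00));
            (char::from_u32(code), None)
        }
        // A high surrogate with nothing pending: stash it and wait for the low half.
        None if (0xD800..=0xDBFF).contains(&unit) => (None, Some(unit)),
        // A plain BMP unit maps directly (lone low surrogates fail from_u32).
        None => (char::from_u32(unit as u32), None),
        // A pending high surrogate not followed by a low one: error path in a
        // real parser; here we just clear the state.
        Some(_) => (None, None),
    }
}
```

Because both outputs are returned by value, a caller can thread the pending state through a loop and persist it across chunk boundaries, which is exactly what a streaming parser needs.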

picojson/src/shared.rs (3)

178-217: Excellent trait design provides clean abstraction

The DataSource trait is well-designed with:

  • Clear separation between borrowed (zero-copy) and unescaped (processed) content
  • Proper lifetime parameters enabling flexible implementations
  • Comprehensive documentation explaining the purpose of each method
  • Consistent error handling through Result types

This abstraction successfully unifies content access patterns across different parser implementations.


219-247: Clean enum design with proper UTF-8 validation

The ContentPiece enum effectively separates the two content sources and provides a clean conversion to the String type with proper UTF-8 validation for both variants.


253-274: Well-implemented helper unifies content extraction logic

The get_content_piece function successfully consolidates the content extraction pattern used across parsers. It correctly prioritizes unescaped content when available and falls back to borrowed content, using consistent position calculations via ContentRange.
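The pattern described in these three comments can be condensed into one sketch. The shapes below are assumptions mirroring the review (the crate's real signatures take the builders and ContentRange positions): prefer unescaped scratch content when present, otherwise borrow a zero-copy span of the input, and validate UTF-8 only at conversion time.

```rust
#[derive(Debug, PartialEq)]
enum ContentPiece<'a> {
    Borrowed(&'a [u8]),  // zero-copy span of the original input
    Unescaped(&'a [u8]), // processed bytes from a scratch buffer
}

impl<'a> ContentPiece<'a> {
    /// Both variants go through the same UTF-8 validation on conversion.
    fn as_str(&self) -> Result<&'a str, core::str::Utf8Error> {
        let bytes = match *self {
            ContentPiece::Borrowed(b) | ContentPiece::Unescaped(b) => b,
        };
        core::str::from_utf8(bytes)
    }
}

trait DataSource {
    fn get_borrowed_slice(&self, start: usize, end: usize) -> Option<&[u8]>;
    fn get_unescaped_slice(&self) -> Option<&[u8]>;
    fn has_unescaped_content(&self) -> bool;
}

/// Unified extraction: unescaped content wins, borrowed is the fallback.
fn get_content_piece<D: DataSource>(src: &D, start: usize, end: usize) -> Option<ContentPiece<'_>> {
    if src.has_unescaped_content() {
        src.get_unescaped_slice().map(ContentPiece::Unescaped)
    } else {
        src.get_borrowed_slice(start, end).map(ContentPiece::Borrowed)
    }
}

/// Toy slice-backed source standing in for SliceContentBuilder.
struct SliceSource<'a> {
    data: &'a [u8],
    scratch: Option<&'a [u8]>, // set when escapes were processed
}

impl<'a> DataSource for SliceSource<'a> {
    fn get_borrowed_slice(&self, start: usize, end: usize) -> Option<&[u8]> {
        self.data.get(start..end)
    }
    fn get_unescaped_slice(&self) -> Option<&[u8]> {
        self.scratch
    }
    fn has_unescaped_content(&self) -> bool {
        self.scratch.is_some()
    }
}
```

A stream-backed implementation would satisfy the same trait over its fill buffer, which is what lets extract_string and extract_key share one code path across both parsers.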

@kaidokert kaidokert merged commit 31328b5 into main Aug 3, 2025
36 checks passed
@kaidokert kaidokert deleted the push_parts3 branch August 3, 2025 23:34