@I501307 I501307 commented Dec 8, 2025

Description

Fixes #11252

The kubernetes_events input plugin was failing with "bad formed JSON" errors when HTTP chunked transfer encoding split JSON event objects across chunk boundaries, causing the watch stream to become stuck for 25-40 minutes.

Problem

When the Kubernetes API server sends responses using HTTP/1.1 chunked transfer encoding, a chunk boundary can fall in the middle of a JSON object:

Example:

Chunk 1 (1000 bytes): {"type":"ADDED","object":{"metadata":{"name":"po
Chunk 2 (176 bytes):  d-123"},"spec":{...}}}

The plugin attempted to parse the payload after each HTTP chunk, so any chunk ending mid-object produced a parse error.

Root Cause

  • The HTTP client correctly decodes chunked encoding and returns FLB_HTTP_CHUNK_AVAILABLE after each chunk
  • This is by design: the HTTP layer has no knowledge of JSON message boundaries
  • The plugin therefore needs application-layer buffering to reassemble JSON messages

Solution

Implemented buffering in the kubernetes_events plugin:

  • Added a chunk_buffer to hold incomplete JSON across chunks
  • Modified process_http_chunk() to parse only complete, newline-delimited JSON objects
  • Incomplete trailing data is buffered until the next chunk arrives
  • This follows standard network programming practice: the application layer handles message boundaries

Why not fix in HTTP client?

  • HTTP client returns FLB_HTTP_CHUNK_AVAILABLE by design after decoding each chunk
  • HTTP layer shouldn't know about application protocols (JSON, XML, etc.)
  • Analogous to how TCP delivers segments while HTTP buffers until it has a complete message
  • Maintains separation of concerns and doesn't break other plugins

Testing

  • Added events_v1_with_3chunks test simulating 1176-byte JSON split into 3 chunks (400+400+376 bytes)
  • Logic verified: incomplete data buffered across multiple chunks, complete JSON objects parsed
  • No data loss - all events processed correctly
  • Edge case tested: buffered data processed when stream closes

Checklist

  • Example configuration: Standard kubernetes_events input with default settings
  • Debug log: Added trace logging for buffering operations
  • [N/A] Valgrind: Memory properly managed with flb_sds_create_len/destroy
  • [N/A] Packaging: No packaging changes
  • Documentation: Test case documents the fix
  • Backport: Should be backported to 4.2 stable

Fluent Bit is licensed under Apache 2.0, by submitting this pull request I understand that this code will be released under the terms of that license.

Summary by CodeRabbit

  • Bug Fixes

    • Improved handling of Kubernetes event streams spanning multiple HTTP chunks: partial JSON is now buffered and merged across chunks, empty lines skipped, leftover data parsed at stream end, and buffers are reliably cleaned to avoid lost events and memory leaks.
  • Tests

    • Added a test validating correct processing of an event delivered across multiple smaller chunks.



coderabbitai bot commented Dec 8, 2025

Walkthrough

Buffers and reassembles incomplete JSON from HTTP chunked responses in the kubernetes_events input plugin, ensures buffered data is parsed on stream termination, introduces a chunk buffer field in plugin state with create/destroy management, and adds a runtime test for multi-chunk events.

Changes

Cohort / File(s) / Summary

  • Chunk buffering & parsing logic (plugins/in_kubernetes_events/kubernetes_events.c)
    Add buffering of incomplete JSON across recv chunks, prepend prior buffered bytes to new chunk data via a working buffer, skip empty lines, handle successful JSON parse by advancing/resetting pointers and freeing parsed data, buffer trailing incomplete JSON for the next read, process buffered data when the stream returns HTTP_OK (end) or on disconnect cleanup, and ensure working_buffer and temporaries are freed on all paths.
  • State struct update (plugins/in_kubernetes_events/kubernetes_events.h)
    Add flb_sds_t chunk_buffer field to the k8s_events struct to hold partial JSON between chunked reads.
  • Config lifecycle changes (plugins/in_kubernetes_events/kubernetes_events_conf.c)
    Initialize chunk_buffer to NULL in conf create and free it in conf destroy to avoid leaks.
  • Tests (tests/runtime/in_kubernetes_events.c)
    Add flb_test_events_with_3chunks() and include it in TEST_LIST to validate reassembly of an event split across three chunks.

Sequence Diagram(s)

sequenceDiagram
    participant KubeAPI as Kubernetes API (chunked HTTP)
    participant Plugin as kubernetes_events plugin
    participant Parser as JSON parser

    KubeAPI->>Plugin: send chunk N (may be partial JSON)
    alt Plugin has chunk_buffer
        Plugin->>Plugin: prepend chunk_buffer to incoming chunk -> working_buffer
    end
    Plugin->>Parser: attempt parse line-delimited JSON from buffer
    alt Parser returns object
        Parser-->>Plugin: parsed event
        Plugin->>Plugin: process event, advance pointers
        Plugin->>Plugin: continue parsing remaining bytes
    else Parser fails (incomplete)
        Plugin->>Plugin: buffer remaining bytes into chunk_buffer
    end
    Note right of Plugin: On HTTP_OK (stream end)
    Plugin->>Parser: attempt parse any final buffered data
    Parser-->>Plugin: parsed final event / fail
    Plugin->>Plugin: free chunk_buffer and temporaries on disconnect/end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Areas needing extra attention:
    • process_http_chunk: correctness of prepend/consumption logic and pointer arithmetic
    • Handling of final buffered data on HTTP_OK vs. disconnect to avoid double-processing
    • Memory management: ensure all allocations (working_buffer, chunk_buffer) are freed on success/failure
    • New test: validate it faithfully simulates chunked transfer encoding and asserts expected output

Suggested reviewers

  • edsiper
  • cosmo0920

Poem

🐰 I nibble at fragments, stitch every part,
I buffer the bytes with a warm little heart.
When the stream says goodbye, I parse through the night,
Events whole and hopping, ready to take flight. 🥕✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 14.29%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (4 passed)
  • Description Check ✅ Passed: Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title Check ✅ Passed: The PR title 'Fix/kubernetes events chunked encoding' directly addresses the main change: handling HTTP/1.1 chunked transfer encoding in the kubernetes_events input plugin.
  • Linked Issues Check ✅ Passed: The PR implements all core requirements from issue #11252: application-layer buffering for incomplete JSON across chunks, proper handling of newline-delimited JSON, processing buffered data at stream end, and test coverage for multi-chunk scenarios.
  • Out of Scope Changes Check ✅ Passed: All changes are directly scoped to handling chunked encoding: buffer field addition, buffering logic in process_http_chunk, cleanup in k8s_events_collect, initialization/destruction in config, and a targeted test case.

@I501307 force-pushed the fix/kubernetes-events-chunked-encoding branch from 0de06d9 to 7e561b6 on December 8, 2025 at 18:51

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
tests/runtime/in_kubernetes_events.c (1)

447-490: Test correctly validates multi-chunk buffering.

The test appropriately uses 400-byte chunks to ensure the 1176-byte JSON is split across multiple chunk boundaries, exercising the new buffering logic.

Consider extracting common test logic into a helper function to reduce duplication with flb_test_events_with_chunkedrecv (they differ only in chunk size). However, this is minor given the test's clarity.

plugins/in_kubernetes_events/kubernetes_events.c (2)

796-802: Clarify the bytes_consumed accounting.

The bytes_consumed increment on line 801 only happens for non-buffered data, but line 818 unconditionally sets *bytes_consumed = c->resp.payload_size. This means the per-token tracking here is effectively unused.

If the intent is to always consume the entire HTTP payload (which is correct for chunked buffering), the conditional increment on lines 800-802 could be removed for clarity. However, this doesn't affect correctness.

-            /* 
-             * For non-buffered data, track consumed bytes.
-             * For buffered data, we'll mark everything consumed after the loop.
-             */
-            if (!working_buffer) {
-                *bytes_consumed += token_size + 1;
-            }
             ret = process_watched_event(ctx, buf_data, buf_size);

1017-1033: Clear chunk_buffer after successful processing to avoid potential reprocessing.

After successfully parsing the buffered data on stream close, ctx->chunk_buffer still holds the data. While it gets cleared at lines 1047-1050, if the code flow were to change in the future, this could lead to double processing. Consider destroying it immediately after successful processing:

                 if (buf_ret == 0) {
                     process_watched_event(ctx, buf_data, buf_size);
+                    flb_sds_destroy(ctx->chunk_buffer);
+                    ctx->chunk_buffer = NULL;
                 }
                 
                 if (buf_data) {
                     flb_free(buf_data);
                 }
             }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7ded9ae and 7e561b6.

📒 Files selected for processing (4)
  • plugins/in_kubernetes_events/kubernetes_events.c (4 hunks)
  • plugins/in_kubernetes_events/kubernetes_events.h (1 hunks)
  • plugins/in_kubernetes_events/kubernetes_events_conf.c (2 hunks)
  • tests/runtime/in_kubernetes_events.c (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
plugins/in_kubernetes_events/kubernetes_events_conf.c (1)
src/flb_sds.c (1)
  • flb_sds_destroy (389-399)
tests/runtime/in_kubernetes_events.c (1)
src/flb_lib.c (1)
  • flb_start (983-994)
plugins/in_kubernetes_events/kubernetes_events.c (3)
src/flb_sds.c (3)
  • flb_sds_cat (120-141)
  • flb_sds_destroy (389-399)
  • flb_sds_create_len (58-76)
src/flb_pack.c (1)
  • flb_pack_json (530-535)
include/fluent-bit/flb_mem.h (1)
  • flb_free (126-128)
🔇 Additional comments (8)
plugins/in_kubernetes_events/kubernetes_events_conf.c (2)

161-162: LGTM!

Proper initialization of chunk_buffer to NULL before the config map is loaded. This ensures the buffer is in a known state before any potential early returns that call k8s_events_conf_destroy.


295-297: LGTM!

Proper cleanup of chunk_buffer. The NULL check is consistent with the defensive coding style used for other fields in this function.

plugins/in_kubernetes_events/kubernetes_events.h (1)

88-89: LGTM!

Well-documented field addition. The flb_sds_t type is appropriate for accumulating variable-length JSON fragments across HTTP chunks.

tests/runtime/in_kubernetes_events.c (1)

495-496: LGTM!

Both chunked transfer encoding tests are now enabled, providing coverage for the fix.

plugins/in_kubernetes_events/kubernetes_events.c (4)

758-775: LGTM!

Correct handling of flb_sds_cat semantics. The comment accurately explains that working_buffer IS the potentially-reallocated ctx->chunk_buffer, so clearing ctx->chunk_buffer after assignment prevents double-free.


824-840: LGTM!

Correct calculation of remaining incomplete data. The conditional on line 831 (remaining > 0 && ret == 0) properly handles:

  1. No buffering needed when data ends on newline boundary
  2. No buffering on parse errors (prevents accumulating corrupt data)

967-972: LGTM!

Appropriate cleanup of buffered data on stream initialization failure. Retaining partial JSON across a reconnection attempt would cause parse errors.


791-807: The review comment is incorrect. The flb_free(buf_data) call at lines 806-807 is outside the if-else block, not inside it. It executes for both success and failure cases, so the memory is already properly freed when flb_pack_json fails. No fix is needed.

Likely an incorrect or invalid review comment.

…oundaries

Fixes fluent#11252

When HTTP chunked transfer encoding splits JSON event objects across
chunk boundaries, implement buffering to handle incomplete data until
a complete JSON message is received. This prevents 'bad formed JSON'
errors and watch stream stalls.

- Add chunk_buffer field to k8s_events struct
- Modify process_http_chunk() to buffer incomplete JSON data
- Process buffered data when stream closes
- Add test with 400-byte chunks splitting events into 3 parts

Signed-off-by: Jesse Awan <jesse.awan@sap.com>
@I501307 force-pushed the fix/kubernetes-events-chunked-encoding branch from 7e561b6 to 6bddcbd on December 9, 2025 at 08:06

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
tests/runtime/in_kubernetes_events.c (1)

495-496: Consider enabling or removing the commented-out test.

events_v1_with_chunkedrecv (line 495) is commented out while the new events_v1_with_3chunks is enabled. If the older test is superseded by the new one, consider removing it entirely. If it tests different behavior, consider enabling it or adding a comment explaining why it's disabled.

plugins/in_kubernetes_events/kubernetes_events.c (1)

777-812: Minor: Consider differentiating incomplete vs malformed JSON.

The current implementation logs at debug level when JSON parsing fails (line 793). While this works for incomplete JSON that will be buffered, truly malformed JSON would also be buffered and retried indefinitely.

Consider tracking parse failures: if the same buffered content fails parsing across multiple chunks (i.e., more data was added but it still fails), it may indicate malformed JSON rather than incomplete data.

For the current fix, this is acceptable since the stream would eventually close and the buffer would be cleared.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7e561b6 and 6bddcbd.

📒 Files selected for processing (4)
  • plugins/in_kubernetes_events/kubernetes_events.c (4 hunks)
  • plugins/in_kubernetes_events/kubernetes_events.h (1 hunks)
  • plugins/in_kubernetes_events/kubernetes_events_conf.c (2 hunks)
  • tests/runtime/in_kubernetes_events.c (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • plugins/in_kubernetes_events/kubernetes_events.h
🧰 Additional context used
🧬 Code graph analysis (3)
plugins/in_kubernetes_events/kubernetes_events_conf.c (1)
src/flb_sds.c (1)
  • flb_sds_destroy (389-399)
tests/runtime/in_kubernetes_events.c (1)
src/flb_lib.c (1)
  • flb_start (983-994)
plugins/in_kubernetes_events/kubernetes_events.c (3)
src/flb_sds.c (3)
  • flb_sds_cat (120-141)
  • flb_sds_destroy (389-399)
  • flb_sds_create_len (58-76)
src/flb_pack.c (1)
  • flb_pack_json (530-535)
include/fluent-bit/flb_mem.h (1)
  • flb_free (126-128)
🪛 Cppcheck (2.18.0)

The same two informational messages were reported repeatedly for plugins/in_kubernetes_events/kubernetes_events_conf.c, tests/runtime/in_kubernetes_events.c, and plugins/in_kubernetes_events/kubernetes_events.c:

[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)

[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)

🔇 Additional comments (8)
plugins/in_kubernetes_events/kubernetes_events_conf.c (2)

161-162: LGTM: Proper initialization of chunk_buffer.

Initializing chunk_buffer to NULL before loading the config map ensures safe conditional checks throughout the plugin's lifetime.


295-297: LGTM: Proper cleanup of chunk_buffer.

The cleanup correctly frees chunk_buffer if allocated. Per the flb_sds_destroy implementation in src/flb_sds.c, passing NULL is safe, so the if check is defensive but not strictly necessary.

tests/runtime/in_kubernetes_events.c (1)

447-490: LGTM: Good test coverage for multi-chunk buffering.

The test effectively validates the chunked encoding fix by:

  • Using 400-byte chunks to split a 1176-byte JSON event across 3 HTTP chunks
  • Verifying that buffered data is correctly reassembled and processed
  • Asserting at least 2 output records (list + streamed event)

The test structure mirrors the existing flb_test_events_with_chunkedrecv pattern, which is consistent with the codebase.

plugins/in_kubernetes_events/kubernetes_events.c (5)

749-775: LGTM: Buffer prepending logic is correct.

The logic correctly handles prepending buffered incomplete JSON from previous chunks:

  • flb_sds_cat returns the (possibly reallocated) first argument, so working_buffer becomes the owner
  • Setting ctx->chunk_buffer = NULL after the cat prevents double-free
  • Error handling destroys the original buffer and returns appropriately

967-972: LGTM: Proper cleanup on connection failure.

Clearing chunk_buffer on failure prevents stale incomplete data from persisting across reconnection attempts.


1012-1033: LGTM: Handles final buffered data on stream close.

This correctly processes any remaining buffered JSON when the stream closes gracefully (FLB_HTTP_OK), handling the case where the last event doesn't end with a newline delimiter.


1045-1050: LGTM: Buffer cleanup after stream disconnect.

The buffer is cleared after processing (in the FLB_HTTP_OK block above) and again here to ensure cleanup in all disconnect scenarios (error, OK, or status mismatch).


814-844: The bytes_consumed pattern is intentional and safe. Line 818 unconditionally sets *bytes_consumed = c->resp.payload_size because the incomplete JSON data is copied into a separate buffer (ctx->chunk_buffer) before reporting consumption. The HTTP layer removes the reported bytes from its internal buffer, but the locally-buffered copy remains valid. This design allows consuming all payload bytes while preserving incomplete data for the next chunk.

- Add comprehensive debug logging to trace chunk processing
- Fix buffering logic to only buffer when parse fails (incomplete JSON)
- Previously was buffering all data even on successful parse

Signed-off-by: Jesse Awan <jesse.awan@sap.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
plugins/in_kubernetes_events/kubernetes_events.c (1)

1027-1048: Good end‑of‑stream handling; consider logging on final parse failure.

Flushing ctx->chunk_buffer on FLB_HTTP_OK is a solid improvement to avoid losing a last, non‑newline‑terminated event. One small enhancement would be to log when buf_ret != 0 here, so operators can see if the final buffered payload was malformed rather than just silently dropped.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6bddcbd and a6bf771.

📒 Files selected for processing (1)
  • plugins/in_kubernetes_events/kubernetes_events.c (4 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
plugins/in_kubernetes_events/kubernetes_events.c (3)
src/flb_sds.c (3)
  • flb_sds_cat (120-141)
  • flb_sds_destroy (389-399)
  • flb_sds_create_len (58-76)
src/flb_pack.c (1)
  • flb_pack_json (530-535)
include/fluent-bit/flb_mem.h (1)
  • flb_free (126-128)
🔇 Additional comments (1)
plugins/in_kubernetes_events/kubernetes_events.c (1)

982-987: chunk_buffer cleanup on failure/disconnect looks correct.

Clearing ctx->chunk_buffer both on initialization failure and when the stream disconnects ensures no stale partial JSON leaks into a new watch and avoids dangling allocations. This integrates well with the new buffering model.

Also applies to: 1061-1065

…licates

Critical fix: Always mark the entire HTTP payload as consumed, not just
parsed lines. Otherwise the HTTP layer keeps the trailing fragment and
we get duplicates when prepending chunk_buffer on the next call.

This was causing 'bad formed JSON' errors in production because fragments
were duplicated at chunk boundaries.

Signed-off-by: Jesse Awan <jesse.awan@sap.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
plugins/in_kubernetes_events/kubernetes_events.c (2)

738-862: Chunk buffering and reassembly logic is solid; consider guarding against pathological cases and clarify payload termination assumptions.

The new process_http_chunk implementation correctly:

  • Reassembles split JSON lines across HTTP chunks using ctx->chunk_buffer + working_buffer.
  • Parses only complete newline-delimited JSON records and buffers the incomplete tail.
  • Marks the whole payload as consumed via *bytes_consumed = c->resp.payload_size, which avoids the prior duplication bug.
  • Cleans up working_buffer and buf_data on all normal paths.

Two robustness points to consider:

  1. Reliance on NUL‑terminated c->resp.payload for strpbrk (Lines 781-783, 788-792, 803-805)
    strpbrk walks until '\0', so this assumes the HTTP client always allocates resp.payload_size + 1 and sets the terminator. That’s likely true elsewhere in Fluent Bit, but it’s an implicit contract. If that ever changed (or if binary data slipped through), this would be undefined behavior.

    If you’d like to make process_http_chunk self‑contained and length‑safe, you could replace strpbrk with a length‑bounded scan driven by c->resp.payload_size / flb_sds_len(working_buffer).

  2. Unbounded ctx->chunk_buffer growth on persistent JSON errors (Lines 807-813, 820-831)
    Any JSON parse failure is treated as “incomplete line” and the entire tail is buffered and retried with the next chunk. For the targeted scenario (valid K8s events split across chunks), this is exactly what we want. But if the API ever sends a truly malformed event, we’ll keep appending new bytes to an unparseable tail, and ctx->chunk_buffer can grow without bound.

    It may be worth:

    • Imposing a reasonable maximum size on ctx->chunk_buffer (e.g., a few MB), and/or
    • Tracking consecutive parse failures for the same tail and dropping it (with a warning) once a threshold is exceeded.

    That would protect the input from memory blowups in pathological or misconfigured environments without changing normal behavior.


1024-1045: Nice improvement: flushing buffered tail on FLB_HTTP_OK handles final event without newline.

The logic to attempt a final flb_pack_json/process_watched_event on ctx->chunk_buffer when the server closes the stream cleanly (Lines 1024-1045) is a good addition. It fixes the corner case where the last event is not terminated by \n, ensuring that a valid trailing event is not silently dropped.

One optional enhancement: you might consider doing a similar best‑effort parse when the stream ends with FLB_HTTP_ERROR but chunk_buffer still holds data; in some environments, transient network issues can still leave a fully‑formed last event in the buffer.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a6bf771 and 357d676.

📒 Files selected for processing (1)
  • plugins/in_kubernetes_events/kubernetes_events.c (4 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-10-23T07:43:16.216Z
Learnt from: cosmo0920
Repo: fluent/fluent-bit PR: 11059
File: plugins/in_tail/tail_file.c:1618-1640
Timestamp: 2025-10-23T07:43:16.216Z
Learning: In plugins/in_tail/tail_file.c, when truncate_long_lines is enabled and the buffer is full, the early truncation path uses `lines > 0` as the validation pattern to confirm whether process_content successfully processed content. This is intentional to track occurrences of line processing rather than byte consumption, and consuming bytes based on `processed_bytes > 0` would be overkill for this validation purpose.

Applied to files:

  • plugins/in_kubernetes_events/kubernetes_events.c
🧬 Code graph analysis (1)
plugins/in_kubernetes_events/kubernetes_events.c (3)
src/flb_sds.c (3)
  • flb_sds_cat (120-141)
  • flb_sds_destroy (389-399)
  • flb_sds_create_len (58-76)
src/flb_pack.c (1)
  • flb_pack_json (530-535)
include/fluent-bit/flb_mem.h (1)
  • flb_free (126-128)
🔇 Additional comments (1)
plugins/in_kubernetes_events/kubernetes_events.c (1)

979-985: Good: chunk_buffer is cleared on setup failure and stream teardown.

The added cleanup in both:

  • check_and_init_stream failure path (Lines 979-985), and
  • The stream‑disconnect path in k8s_events_collect (Lines 1057-1062),

ensures that stale partial JSON from a previous connection is not reused for a new watch session and that memory is released promptly when the stream dies. This matches the new buffering semantics and avoids subtle cross‑stream corruption.

Also applies to: 1057-1062



Development

Successfully merging this pull request may close these issues.

kubernetes_events input plugin cannot handle chunked transfer encoding
