
eventstore: improve compression by reusing buffer and use a new compression level#3751

Merged
ti-chi-bot[bot] merged 13 commits into master from ldz/fix-compress1222
Feb 24, 2026

Conversation


@lidezhu lidezhu commented Dec 22, 2025

What problem does this PR solve?

Issue Number: close #4041

What is changed and how it works?

This pull request focuses on optimizing the performance of ZSTD compression and decompression within the event store. By implementing a strategy to reuse byte buffers for these operations, the changes aim to significantly reduce memory allocation overhead and improve the speed and efficiency of event processing, both when writing and reading compressed data.

Highlights

  • ZSTD Compression Buffer Reuse: The writeEvents function now utilizes a reusable byte buffer for ZSTD compression operations. This change aims to reduce memory allocations and improve performance during event writing by avoiding repeated buffer creation.
  • ZSTD Decompression Buffer Reuse: Similarly, the eventStoreIter has been enhanced to reuse a byte buffer for ZSTD decompression. This optimization minimizes memory allocations during event iteration and reading, contributing to overall efficiency.
  • New Test Case for Buffer Reuse: A new unit test, TestEventStoreCompressionAndIterDecodeBufferReuse, has been added. This test specifically validates the correctness of the buffer reuse logic for both compression and decompression, ensuring data integrity and non-mutation.

Check List

Tests

  • Manual test (add detailed scripts or steps below)
    Here is a performance test result using the large_row workload:

    | Row Size | No Compression | Compression with zstd | Compression with zstd (SpeedFastest) | Compression with zstd (SpeedFastest + Reuse Buffer) |
    | --- | --- | --- | --- | --- |
    | 4096 (4KB) | 155MB/s | 60MB/s | 110MB/s | Skip Test |
    | 8192 (8KB) | 140MB/s | 90MB/s | 135MB/s | Skip Test |
    | 16384 (16KB) | 140MB/s | 145MB/s | 140MB/s | Skip Test |
    | 32768 (32KB) | 140MB/s | 155MB/s | Skip Test | Skip Test |
    | 65536 (64KB) | 135MB/s | 150MB/s | Skip Test | Skip Test |
    | 131072 (128KB) | 130MB/s | 155MB/s | Skip Test | Skip Test |
    | 262144 (256KB) | Skip Test | 165MB/s | Skip Test | 165MB/s |
    | 524288 (512KB) | 140MB/s | 190MB/s | 210MB/s | 210MB/s |
    | 1024000 (1MB) | Skip Test | 220MB/s | Skip Test | Skip Test |

Here is a performance test using a zstd-heavy workload (a "repetitive structure + changing values" pattern).
Master: when input throughput is about 90MB/s, resolved ts lag keeps increasing.

[screenshots: input throughput and resolved ts lag on master]

This PR: when input throughput is about 150MB/s, resolved ts lag stays steady.

[screenshots: input throughput and resolved ts lag with this PR]

Questions

Will it cause performance regression or break compatibility?
Do you need to update user documentation, design documentation or monitoring documentation?

Release note

Please refer to [Release Notes Language Style Guide](https://pingcap.github.io/tidb-dev-guide/contribute-to-tidb/release-notes-style-guide.html) to write a quality release note.

If you don't think this PR needs a release note then fill it with `None`.

Summary by CodeRabbit

Release Notes

  • New Features

    • Added a compression ratio histogram metric for monitoring event-store compression.
  • Performance Improvements

    • Improved compression/decompression buffer reuse to reduce memory churn and improve throughput.
  • Tests

    • Added comprehensive wide-table integration tests and tests exercising compression/decoding paths.
  • Documentation

    • Clarified that decoded entry data may be mutated in-place; callers should not retain references.

@ti-chi-bot ti-chi-bot bot added do-not-merge/needs-linked-issue release-note Denotes a PR that will be considered when it comes time to generate release notes. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Dec 22, 2025
@gemini-code-assist

Summary of Changes

Hello @lidezhu, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!


@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request aims to improve compression and decompression speed by reusing buffers. The changes involve modifying writeEvents to accept a reusable compression buffer and updating eventStoreIter to reuse a decompression buffer. The implementation for buffer reuse in both compression and decompression appears correct and safe. The logic correctly handles buffer resizing and resetting for subsequent uses. The tests have been updated accordingly, and a new test has been added to verify the safety of buffer reuse in the iterator, which is a good addition. Overall, the changes are well-implemented and should achieve the intended performance improvement.

@asddongmen
Collaborator

Is there a bench result?

@ti-chi-bot ti-chi-bot bot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Dec 23, 2025
@lidezhu
Collaborator Author

lidezhu commented Dec 23, 2025

Is there a bench result?

Working on it.

@lidezhu lidezhu force-pushed the ldz/fix-compress1222 branch from bd474fc to cce8ff5 on January 4, 2026 01:37
@lidezhu lidezhu changed the title from "eventstore: improve compress&decompress speed by reusing buffer" to "eventstore: improve compression by reusing buffer and use a new compression level" on Jan 22, 2026
@coderabbitai
Contributor

coderabbitai bot commented Feb 24, 2026

📝 Walkthrough

Walkthrough

Per-worker compression buffering and buffer-reuse were added to the event store write path; writeEvents signature now accepts a compression buffer pointer. Decompression reuses a per-iterator buffer. A compression-ratio histogram metric was added. New wide_table integration test and CI token updates were introduced.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Event Store: write path & tests (`logservice/eventstore/event_store.go`, `logservice/eventstore/event_store_test.go`) | Added per-worker compressionBuf reuse and zstd encoder lifecycle in writeTaskPool; expanded writeEvents signature to accept `*[]byte`; reused dst buffer for multiple kvs; tests updated to pass `&compressionBuf` and validate metrics. |
| Event Store: read path buffer reuse (`logservice/eventstore/event_store.go`, iterator section) | Added `decodeBuf []byte` on iterator and updated decompression to use `iter.decodeBuf` for the DecodeAll result to avoid reallocations. |
| Compression metrics (`pkg/metrics/event_store.go`) | Added exported EventStoreCompressionRatioHistogram and registered it in event-store metrics init with defined buckets. |
| API docs note (`pkg/common/kv_entry.go`) | Documented that Decode may mutate the input slice and callers must not retain references. |
| Wide-table integration tests (`tests/integration_tests/wide_table/main.go`, `run.sh`, `conf/diff_config.toml`) | Added a new wide-table test program, orchestration script, and diff_config for end-to-end validation of wide-table scenarios. |
| CI test token (`tests/integration_tests/run_light_it_in_ci.sh`) | Added the "wide_table" token to G05 groups across sink types. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    rect rgba(200,200,255,0.5)
    participant Worker
    end
    rect rgba(200,255,200,0.5)
    participant WriteEvents
    end
    rect rgba(255,200,200,0.5)
    participant ZstdEncoder
    end
    rect rgba(240,240,240,0.5)
    participant PebbleDB
    end

    Worker->>WriteEvents: call writeEvents(events, encoder, &compressionBuf)
    WriteEvents->>ZstdEncoder: encode value -> dstBuf (reuse/compress)
    ZstdEncoder-->>WriteEvents: compressed bytes
    WriteEvents->>PebbleDB: write key + compressed value
    WriteEvents-->>Worker: callbacks / stats (compression bytes)
```

```mermaid
sequenceDiagram
    autonumber
    rect rgba(200,200,255,0.5)
    participant Iterator
    end
    rect rgba(255,240,200,0.5)
    participant PebbleDB
    end
    rect rgba(200,255,200,0.5)
    participant ZstdDecoder
    end
    rect rgba(220,220,255,0.5)
    participant Consumer
    end

    Consumer->>Iterator: Next()
    Iterator->>PebbleDB: read key, value
    PebbleDB-->>Iterator: value (maybe ZSTD)
    Iterator->>ZstdDecoder: DecodeAll(value, dst=iter.decodeBuf)
    ZstdDecoder-->>Iterator: decompressed bytes stored in iter.decodeBuf
    Iterator-->>Consumer: yields decoded kv
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested reviewers

  • asddongmen
  • flowbehappy
  • bufferflies

Poem

🐇 Hop-hop — I nibble buffers light,

Reuse the crumbs through day and night,
Compressed and warm, the bytes align,
Metrics hum a tidy sign,
Wide tables dance — the tests take flight.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Title check ✅: the title 'eventstore: improve compression by reusing buffer and use a new compression level' accurately and specifically summarizes the main optimization changes: buffer reuse for compression/decompression and the compression level adjustment.
  • Description check ✅: the PR description is comprehensive, with an issue reference (close #4041), a clear problem statement, specific highlights of the changes, test information, and performance data tables/graphs demonstrating improvements.
  • Linked Issues check ✅: the PR directly addresses all coding requirements from #4041: it implements buffer reuse for compression/decompression to reduce allocations, applies the SpeedFastest compression level optimization, includes unit test validation, and provides performance data showing improved throughput and resolved-ts lag.
  • Out of Scope Changes check ✅: changes are narrowly focused on compression optimization in the event store with supporting test infrastructure. The integration test additions (wide_table test case and related files) serve as validation for the compression/buffer-reuse changes rather than unrelated scope creep.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2


ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7e03137 and 00c5f05.

📒 Files selected for processing (8)
  • logservice/eventstore/event_store.go
  • logservice/eventstore/event_store_test.go
  • pkg/common/kv_entry.go
  • pkg/metrics/event_store.go
  • tests/integration_tests/run_light_it_in_ci.sh
  • tests/integration_tests/wide_table/conf/diff_config.toml
  • tests/integration_tests/wide_table/main.go
  • tests/integration_tests/wide_table/run.sh

Comment on lines +69 to +76

```go
sourceDB, err := util.CreateDB(sourceConfig)
if err != nil {
	log.S().Fatal(err)
}
defer func() {
	if closeErr := util.CloseDB(sourceDB); closeErr != nil {
		log.S().Errorf("failed to close source database: %s\n", closeErr)
	}
```

⚠️ Potential issue | 🟡 Minor


Switch to structured logging instead of log.S() usage.

The codebase uses structured logging via github.com/pingcap/log with zap fields. Replace log.S() calls with log.Info(), log.Error(), and log.Fatal() using zap fields for context. This applies to all 5 instances: lines 71, 75, 202, 205, and 207.

Add the go.uber.org/zap import and update the logging calls to pass structured fields instead of formatted strings.

♻️ Example conversions

```diff
 import (
 	"database/sql"
 	"flag"
 	"fmt"
 	"strings"
 	"sync"

 	"github.com/pingcap/log"
 	"github.com/pingcap/ticdc/tests/integration_tests/util"
+	"go.uber.org/zap"
 )

 	sourceDB, err := util.CreateDB(sourceConfig)
 	if err != nil {
-		log.S().Fatal(err)
+		log.Fatal("failed to create source database", zap.Error(err))
 	}
 	defer func() {
 		if closeErr := util.CloseDB(sourceDB); closeErr != nil {
-			log.S().Errorf("failed to close source database: %s\n", closeErr)
+			log.Error("failed to close source database", zap.Error(closeErr))
 		}
 	}()

 	if err := row.Scan(&width); err != nil {
-		log.S().Fatalf("failed to scan row width: %v", err)
+		log.Fatal("failed to scan row width", zap.Error(err))
 	}
 	if width < int64(minRowWidthBytes) {
-		log.S().Fatalf("row %d width %d bytes is smaller than expected %d bytes", id, width, minRowWidthBytes)
+		log.Fatal("row width smaller than expected",
+			zap.Int("rowID", id),
+			zap.Int64("widthBytes", width),
+			zap.Int("minWidthBytes", minRowWidthBytes),
+		)
 	}
-	log.S().Infof("row %d width %d bytes", id, width)
+	log.Info("row width measured", zap.Int("rowID", id), zap.Int64("widthBytes", width))
```

```shell
}

trap 'stop_tidb_cluster; collect_logs $WORK_DIR' EXIT
run $*
```

⚠️ Potential issue | 🟡 Minor


Use "$@" to preserve argument boundaries when calling run.

The run function doesn't declare parameters, so arguments could be omitted entirely (run instead of run "$@"). However, if arguments are passed, use "$@" instead of $* to preserve argument boundaries correctly.

✅ Suggested fix

```diff
-run $*
+run "$@"
```
🧰 Tools
🪛 Shellcheck (0.11.0)

[warning] 54-54: Use "$@" (with quotes) to prevent whitespace problems.

(SC2048)


@ti-chi-bot ti-chi-bot bot added needs-1-more-lgtm Indicates a PR needs 1 more LGTM. approved labels Feb 24, 2026
@ti-chi-bot ti-chi-bot bot added the lgtm label Feb 24, 2026
@ti-chi-bot

ti-chi-bot bot commented Feb 24, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: 3AceShowHand, asddongmen

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details: Needs approval from an approver in each of these files:
  • OWNERS [3AceShowHand,asddongmen]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ti-chi-bot ti-chi-bot bot removed the needs-1-more-lgtm Indicates a PR needs 1 more LGTM. label Feb 24, 2026
@ti-chi-bot
Copy link

ti-chi-bot bot commented Feb 24, 2026

[LGTM Timeline notifier]

Timeline:

  • 2026-02-24 09:23:45.263315795 +0000 UTC m=+174097.778110394: ☑️ agreed by 3AceShowHand.
  • 2026-02-24 10:06:58.235747755 +0000 UTC m=+176690.750542374: ☑️ agreed by asddongmen.

@lidezhu
Collaborator Author

lidezhu commented Feb 24, 2026

/retest

@coderabbitai coderabbitai bot left a comment

♻️ Duplicate comments (1)
tests/integration_tests/wide_table/run.sh (1)

53-55: Preserve argument boundaries when forwarding to run.

Use "$@" instead of $* to avoid word-splitting and preserve quoting if arguments are ever passed. (Line 54)

✅ Suggested fix

```diff
-run $*
+run "$@"
```

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 00c5f05 and 62ed439.

📒 Files selected for processing (1)
  • tests/integration_tests/wide_table/run.sh

@ti-chi-bot ti-chi-bot bot merged commit b292520 into master Feb 24, 2026
26 checks passed
@ti-chi-bot ti-chi-bot bot deleted the ldz/fix-compress1222 branch February 24, 2026 12:44
lidezhu added a commit that referenced this pull request Feb 27, 2026

Labels

approved lgtm release-note Denotes a PR that will be considered when it comes time to generate release notes. size/XL Denotes a PR that changes 500-999 lines, ignoring generated files.


Development

Successfully merging this pull request may close these issues.

improve compression performance in event store

3 participants