
fix: BatchWriter: Ensure flushed batch has at least one record#2489

Merged
kodiakhq[bot] merged 2 commits into main from protect-empty-write on Apr 29, 2026

Conversation

@bbernays (Contributor) commented Apr 27, 2026

Summary

Ensure that only batches containing at least one record are flushed.

The issue was originally discovered in cloudquery/cloudquery#22750.

@bbernays bbernays requested a review from a team as a code owner April 27, 2026 19:10
@bbernays bbernays requested a review from marianogappa April 27, 2026 19:10
@disq disq requested a review from Copilot April 27, 2026 19:18

Copilot AI left a comment


Pull request overview

Updates BatchWriter’s worker loop to avoid flushing empty insert batches (i.e., calling WriteTableBatch when the current batch has 0 rows), addressing the issue linked in cloudquery/cloudquery#22750.

Changes:

  • Guard the “flush current batch” path so send() is only called when the current batch contains at least one row (limit.Rows() > 0).
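A minimal sketch of the guarded timeout branch, assuming the shape of the snippet quoted later in the thread (`send`, `limit.Rows()`, and `w.batchTimeout` are the PR's names; everything else here is illustrative, not the actual worker loop):

```go
package main

import (
	"fmt"
	"time"
)

// shouldFlush mirrors the guard this PR adds: flush only when the
// current batch holds at least one row (limit.Rows() > 0 in the PR).
func shouldFlush(rows int) bool {
	return rows > 0
}

func main() {
	// Illustrative stand-in for the timeout branch of the worker loop
	// in writers/batchwriter/batchwriter.go; names are hypothetical.
	batchTimeout := 10 * time.Millisecond
	ticker := time.NewTicker(batchTimeout)
	defer ticker.Stop()

	rows := 0 // the current batch is empty
	<-ticker.C
	if shouldFlush(rows) {
		fmt.Println("flushing batch") // the PR calls send() here
	} else {
		fmt.Println("skipping empty batch") // no WriteTableBatch call
	}
	// Reset unconditionally so the next interval starts now, even
	// when nothing was flushed.
	ticker.Reset(batchTimeout)
}
```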


@disq disq changed the title from fix: Update batchwriter.go to fix: BatchWriter: Ensure flushed batch has at least one record on Apr 28, 2026

Comment thread on writers/batchwriter/batchwriter.go:
if limit.Rows() > 0 {
send()
}
ticker.Reset(w.batchTimeout)
Member

Should this go inside the if as well?

Member

Or have batch.SliceRecord return toFlush = 0?

Contributor Author

I wanted to reset the batchTimeout even if there were no resources, to ensure that we don't just sync a single row the next time.

@erezrokah erezrokah requested a review from Copilot April 28, 2026 10:48

Copilot AI left a comment


Pull request overview

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.




testClient := &noEmptyBatchClient{t: t}
// batchSizeBytes=1 ensures that no single row fits in the initial batch:
// SliceRecord returns add==nil, toFlush=[one record per row], rest=nil.

Copilot AI Apr 28, 2026


The comment describing the WithBatchSizeBytes(1) scenario says SliceRecord returns ... rest=nil, but internal/batch.SliceRecord currently returns a non-nil remaining when the byte cap is too small (because getToFlush slices to single-row batches without clearing the original record). Please update this comment to reflect the actual SliceRecord behavior so future readers aren’t misled about why this regression case triggers.

Suggested change
// SliceRecord returns add==nil, toFlush=[one record per row], rest=nil.
// SliceRecord returns add==nil, toFlush=[one record per row], and a non-nil remaining
// record for the unsliced original batch.

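To make the review comment concrete, here is a simplified, hypothetical stand-in for byte-cap slicing. The real internal/batch.SliceRecord operates on Arrow records and, per the comment above, also returns a non-nil remaining record when the cap is too small; this sketch does not reproduce that detail and only illustrates why a tiny byte cap yields one single-row flush batch per row:

```go
package main

import "fmt"

// sliceRows is a simplified, hypothetical stand-in for the slicing
// logic discussed above. Given per-row byte sizes and a byte cap, it
// returns the batches to flush immediately and the rows that stay
// buffered for the next batch.
func sliceRows(rowSizes []int, capBytes int) (toFlush [][]int, rest []int) {
	var cur []int
	curBytes := 0
	for _, size := range rowSizes {
		if size >= capBytes {
			// A row that alone meets or exceeds the cap becomes its
			// own single-row flush batch ("one record per row").
			if len(cur) > 0 {
				toFlush = append(toFlush, cur)
				cur, curBytes = nil, 0
			}
			toFlush = append(toFlush, []int{size})
			continue
		}
		if curBytes+size > capBytes {
			toFlush = append(toFlush, cur)
			cur, curBytes = nil, 0
		}
		cur = append(cur, size)
		curBytes += size
	}
	rest = cur
	return toFlush, rest
}

func main() {
	// With capBytes=1 no row fits, so every row becomes its own
	// single-row flush batch, matching the regression test's setup.
	toFlush, rest := sliceRows([]int{10, 20, 30}, 1)
	fmt.Println(len(toFlush), len(rest))
}
```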
@kodiakhq kodiakhq Bot merged commit 1b94998 into main Apr 29, 2026
14 checks passed
@kodiakhq kodiakhq Bot deleted the protect-empty-write branch April 29, 2026 13:40


7 participants