
Fix incorrect stream partition id for multi-stream realtime consumption #17953

Merged
xiangfu0 merged 1 commit into apache:master from leguo2:fix/multi-stream-consumption-issue
Mar 25, 2026

Conversation

@leguo2 (Contributor) commented Mar 24, 2026

[Screenshot 2026-03-24 at 10:42:21]

Summary

This PR fixes a bug in multi-stream realtime consumption where the server could use the Pinot partitionGroupId
instead of the actual upstream streamPartitionId when creating a segment's consuming state.

The server already derives the correct _streamPartitionId, but when constructing
PartitionGroupConsumptionStatus in RealtimeSegmentDataManager, it previously used the constructor that defaults:

streamPartitionId = partitionGroupId

As a result, the Kafka consumer could incorrectly consume partition 10000 instead of partition 0.

This change updates RealtimeSegmentDataManager to explicitly pass the derived _streamPartitionId
when creating PartitionGroupConsumptionStatus.
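
To make the mismatch concrete, here is a minimal sketch of the partition id mapping, assuming the 10000-based offset implied by the PR description (partition group 10000 mapping to stream partition 0); the real derivation lives in Pinot's IngestionConfigUtils, and the class and constant names below are illustrative stand-ins:

```java
// Simplified sketch of multi-stream partition id mapping (hypothetical names;
// assumes a fixed stride of 10000 between streams, as the PR description implies).
public class PartitionIdSketch {
    static final int PARTITION_OFFSET = 10000;

    // Encode (streamIndex, streamPartitionId) into a single Pinot partitionGroupId.
    static int toPinotPartitionId(int streamIndex, int streamPartitionId) {
        return streamIndex * PARTITION_OFFSET + streamPartitionId;
    }

    // Recover the upstream (e.g. Kafka) partition id from a Pinot partitionGroupId.
    static int toStreamPartitionId(int pinotPartitionId) {
        return pinotPartitionId % PARTITION_OFFSET;
    }

    public static void main(String[] args) {
        // Partition 0 of the second stream maps to Pinot partition group 10000.
        int pinotId = toPinotPartitionId(1, 0);
        System.out.println(pinotId);                      // 10000
        System.out.println(toStreamPartitionId(pinotId)); // 0
    }
}
```

Under this encoding, defaulting streamPartitionId to partitionGroupId means the consuming state for partition group 10000 carries 10000 instead of the decoded 0, which is exactly the partition the Kafka consumer then asks for.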

@xiangfu0 added labels: bug (Something is not working as expected), kafka (Related to Kafka stream connector), ingestion (Related to data ingestion pipeline) on Mar 24, 2026
@xiangfu0 (Contributor) commented:

Good catch, thanks for fixing

codecov-commenter commented Mar 24, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 63.20%. Comparing base (be3ad3b) to head (f65af22).
⚠️ Report is 2 commits behind head on master.

Additional details and impacted files
@@             Coverage Diff              @@
##             master   #17953      +/-   ##
============================================
- Coverage     63.21%   63.20%   -0.02%     
  Complexity     1525     1525              
============================================
  Files          3194     3194              
  Lines        193239   193239              
  Branches      29706    29706              
============================================
- Hits         122161   122133      -28     
- Misses        61494    61518      +24     
- Partials       9584     9588       +4     
Flag Coverage Δ
custom-integration1 100.00% <ø> (ø)
integration 100.00% <ø> (ø)
integration1 100.00% <ø> (ø)
integration2 0.00% <ø> (ø)
java-11 63.18% <100.00%> (-0.01%) ⬇️
java-21 63.16% <100.00%> (-0.04%) ⬇️
temurin 63.20% <100.00%> (-0.02%) ⬇️
unittests 63.19% <100.00%> (-0.02%) ⬇️
unittests1 55.51% <100.00%> (-0.02%) ⬇️
unittests2 34.17% <0.00%> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

Copilot AI (Contributor) left a comment:

Pull request overview

This PR fixes multi-stream realtime consumption on the server by ensuring the consuming state uses the upstream streamPartitionId (derived from the Pinot partitionGroupId) instead of incorrectly defaulting streamPartitionId = partitionGroupId when initializing consumption status.

Changes:

  • Update RealtimeSegmentDataManager to pass _streamPartitionId into PartitionGroupConsumptionStatus during initialization.

Comment on lines 1758 to 1760:

  _partitionGroupConsumptionStatus =
-     new PartitionGroupConsumptionStatus(_partitionGroupId, llcSegmentName.getSequenceNumber(),
+     new PartitionGroupConsumptionStatus(_partitionGroupId, _streamPartitionId, llcSegmentName.getSequenceNumber(),
          _streamPartitionMsgOffsetFactory.create(_segmentZKMetadata.getStartOffset()),

Copilot AI commented Mar 24, 2026:

This fixes a multi-stream bug, but there’s no unit test exercising the multi-stream branch where partitionGroupId encodes the stream index (e.g., 10000) and _streamPartitionId is derived via IngestionConfigUtils.getStreamPartitionIdFromPinotPartitionId. Consider adding a test that builds a RealtimeSegmentDataManager with a multi-stream IngestionConfig and asserts the constructed PartitionGroupConsumptionStatus carries the derived stream partition id (not the partitionGroupId), so this regression can’t recur.
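
The regression mechanism Copilot describes can be shown with a small stand-in. The class below is a simplified, hypothetical stand-in for Pinot's PartitionGroupConsumptionStatus (not the real class), reduced to the two constructors that matter:

```java
// Simplified stand-in for PartitionGroupConsumptionStatus (hypothetical), showing
// how the convenience constructor silently defaults streamPartitionId.
public class ConsumptionStatusSketch {
    final int partitionGroupId;
    final int streamPartitionId;

    // Legacy convenience constructor: assumes single-stream tables, where the
    // Pinot partition group id and the upstream partition id coincide.
    ConsumptionStatusSketch(int partitionGroupId) {
        this(partitionGroupId, partitionGroupId);
    }

    // Explicit constructor: what the fix makes RealtimeSegmentDataManager use.
    ConsumptionStatusSketch(int partitionGroupId, int streamPartitionId) {
        this.partitionGroupId = partitionGroupId;
        this.streamPartitionId = streamPartitionId;
    }

    public static void main(String[] args) {
        // Multi-stream case: partition group 10000 is stream 1, Kafka partition 0.
        ConsumptionStatusSketch buggy = new ConsumptionStatusSketch(10000);
        ConsumptionStatusSketch fixed = new ConsumptionStatusSketch(10000, 0);
        System.out.println(buggy.streamPartitionId); // 10000: wrong Kafka partition
        System.out.println(fixed.streamPartitionId); // 0: correct Kafka partition
    }
}
```

A single-stream table never exposes the bug, since both constructors produce the same value there; only the multi-stream branch, where the two ids diverge, does.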

@xiangfu0 xiangfu0 merged commit e03c4e9 into apache:master Mar 25, 2026
35 of 36 checks passed
@Jackie-Jiang added label: real-time (Related to realtime table ingestion and serving) on Mar 25, 2026
shauryachats added a commit that referenced this pull request Mar 31, 2026
This PR adds MultiTopicRealtimeClusterIntegrationTest, an integration test that validates realtime ingestion from multiple Kafka topics into a single table using the streamConfigMaps multi-stream configuration.

The motivation was that currently there is no E2E integration test for multi-topic realtime tables (I encountered the same bug which was fixed in #17953).

The test is parameterized over N topics (default 3, overridable via getNumTopics()). For each topic, it generates CSV records with a mutually exclusive source column and non-overlapping value ranges, then verifies correctness through targeted queries:

- Total doc count equals the sum across all topics
- GROUP BY source returns exactly N groups with the expected counts
- Filter and aggregation queries isolate each topic's data correctly
- Value ranges don't leak between topics
- Segments exist for all N stream config indices (verified via partition group IDs)
- DISTINCT source returns exactly N values

Previously, no existing tests covered multi-topic ingestion end-to-end.
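
For reference, a multi-topic table of this kind is configured through streamConfigMaps. A hedged sketch of the relevant table-config fragment follows; the table and topic names are illustrative, each map would normally also carry consumer factory, broker, and offset properties, and the decoder class shown is an assumption rather than taken from the test:

```json
{
  "tableName": "multiTopicTable_REALTIME",
  "tableType": "REALTIME",
  "ingestionConfig": {
    "streamIngestionConfig": {
      "streamConfigMaps": [
        {
          "streamType": "kafka",
          "stream.kafka.topic.name": "topic-0"
        },
        {
          "streamType": "kafka",
          "stream.kafka.topic.name": "topic-1"
        }
      ]
    }
  }
}
```

The nth entry's upstream partitions map to Pinot partition group ids offset by the stream index, which is why the test can verify that segments exist for all N stream config indices via partition group ids.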
yashmayya pushed a commit that referenced this pull request Apr 6, 2026

Labels

bug: Something is not working as expected
ingestion: Related to data ingestion pipeline
kafka: Related to Kafka stream connector
real-time: Related to realtime table ingestion and serving


6 participants