HDDS-9279. Basic implementation of OM Follower read #9222

Merged: ivandika3 merged 15 commits into apache:master from symious:HDDS-9279 on Nov 18, 2025.

Conversation

symious (Contributor) commented Oct 30, 2025

What changes were proposed in this pull request?

This is another implementation of OM Follower read.

What is the link to the Apache JIRA?

https://issues.apache.org/jira/browse/HDDS-9279

How was this patch tested?

Tested with the new Freon class, FollowerReader.

@swamirishi swamirishi requested a review from jojochuang November 3, 2025 18:02
@ivandika3 ivandika3 requested review from szetszwo and whbing November 5, 2025 01:10
szetszwo (Contributor) left a comment:

@symious , Thanks for working on this!

A quick comment: For OM, any Ratis conf can be set by adding the ozone.om.ha prefix. In this case, the existing confs are

  • ozone.om.ha.raft.server.read.option
  • ozone.om.ha.raft.server.read.leader.lease.enabled

So, the new confs are not needed.
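As a sketch of what szetszwo describes, the two existing prefixed keys could be set in ozone-site.xml like this (the values shown are illustrative, not recommendations; LINEARIZABLE and DEFAULT are the Ratis Read.Option names, and the lease key takes a boolean):

```xml
<!-- Illustrative only: the ozone.om.ha prefix forwards these to the
     corresponding Ratis raft.server.read.* keys for the OM Ratis server. -->
<property>
  <name>ozone.om.ha.raft.server.read.option</name>
  <value>LINEARIZABLE</value>
</property>
<property>
  <name>ozone.om.ha.raft.server.read.leader.lease.enabled</name>
  <value>true</value>
</property>
```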

@peterxcli peterxcli self-requested a review November 6, 2025 14:05
symious (Contributor, Author) commented Nov 11, 2025

> the new confs are not needed.

@szetszwo Thank you for the review, configs removed.

szetszwo (Contributor) left a comment:

+1 the change looks good. It is very simple and elegant!

ivandika3 (Contributor) left a comment:

Overall LGTM +1. Since it is disabled by default, there should not be any regression. @symious Please help to address #9222 (comment) first.

Regarding my other comment #5288 (comment), we currently don't have a clear root cause for the performance regression (e.g. whether it's inherent to the algorithm or an implementation issue). I think we can merge this first and improve it on the Ozone side (e.g. client failover implementation) or the Ratis side (e.g. ReadIndex performance improvements) in follow-up patches.

@ivandika3 ivandika3 merged commit a5658ee into apache:master Nov 18, 2025
43 checks passed
ivandika3 (Contributor) commented:

@symious Thanks for the patch and @szetszwo for the review.

symious (Contributor, Author) commented Nov 18, 2025

@ivandika3 @szetszwo Thank you for the review and merge.

Comment on lines +61 to +62
" DEFAULT - Directly query statemachine (non-linearizable). " +
" Only the leader can serve read requests. " +
A reviewer (Member) left a comment:

Suggested change:
- " DEFAULT - Directly query statemachine (non-linearizable). " +
- " Only the leader can serve read requests. " +
+ " DEFAULT - Directly query statemachine. " +
+ " Only the leader can serve read requests. " +

IIUC, the current DEFAULT is still linearizable. To avoid confusion, let's rename the modes to something more explicit, for example:

Then we can add our own parsing that maps these OM-level configs to the corresponding Ratis read options, instead of relying directly on RaftServerConfigKeys.Read.option(...).
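The mapping the reviewer suggests could be sketched as follows. The OM-facing mode names (LEADER_ONLY, FOLLOWER_LINEARIZABLE) and the helper class are hypothetical; only the target strings DEFAULT and LINEARIZABLE are real Ratis Read.Option names.

```java
import java.util.Locale;
import java.util.Map;

/**
 * Sketch (hypothetical names): parse explicit OM-level read-mode names and
 * map them onto the Ratis raft.server.read.option values, instead of exposing
 * the Ratis names directly in OM configuration.
 */
public final class OmReadModeMapping {

  // OM-facing names (hypothetical) -> Ratis Read.Option names (DEFAULT / LINEARIZABLE).
  private static final Map<String, String> TO_RATIS = Map.of(
      "LEADER_ONLY", "DEFAULT",                 // leader queries its statemachine directly
      "FOLLOWER_LINEARIZABLE", "LINEARIZABLE"); // followers serve reads via ReadIndex

  /** Translate an OM-level mode name to the Ratis read option, case-insensitively. */
  public static String toRatisOption(String omValue) {
    String ratis = TO_RATIS.get(omValue.toUpperCase(Locale.ROOT));
    if (ratis == null) {
      throw new IllegalArgumentException("Unknown OM read mode: " + omValue);
    }
    return ratis;
  }

  public static void main(String[] args) {
    System.out.println(toRatisOption("leader_only"));            // DEFAULT
    System.out.println(toRatisOption("follower_linearizable"));  // LINEARIZABLE
  }
}
```

With a translation layer like this, renaming the OM-facing modes would not require Ratis itself to accept new option names.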

symious (Contributor, Author) replied:

Currently this configuration is consumed directly by Ratis. If the values are changed, the OM Ratis server won't start normally.

The thing is, localLease is not a 1:1 mapping: if the local-lease condition check doesn't hold, it falls back to ReadIndex, which is linearizable.
