The server is abnormal after data is written in the cluster environment #3784

Locked · Answered by chengjianyun
cigarl asked this question in Q&A

For case [2], we found the root cause: some entries are so large that they cannot fit into logDataBuffer, which causes a BufferOverflowException. The call path in RaftLogManager.java looks like this:

     ...
     startTime = Statistic.RAFT_SENDER_COMMIT_APPEND_AND_STABLE_LOGS.getOperationStartTime();
     // the entry is added to committedEntries
     getCommittedEntryManager().append(entries);

     if (ClusterDescriptor.getInstance().getConfig().isEnableRaftLogPersistence()) {
       // the BufferOverflowException is thrown from here
       getStableEntryManager().append(entries, maxHaveAppliedCommitIndex);
     }

     // none of the remaining lines are executed for the entry
 …
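
To illustrate the failure mode, here is a minimal, self-contained sketch (not IoTDB's actual code): a fixed-capacity `ByteBuffer`, standing in for logDataBuffer, throws `BufferOverflowException` as soon as an entry larger than its remaining capacity is written. The class and buffer sizes below are made up for the demo.

```java
import java.nio.BufferOverflowException;
import java.nio.ByteBuffer;

public class LogBufferOverflowDemo {
  public static void main(String[] args) {
    // Deliberately small capacity, standing in for a fixed-size logDataBuffer.
    ByteBuffer logDataBuffer = ByteBuffer.allocate(16);
    // An "entry" twice as large as the buffer can hold.
    byte[] largeEntry = new byte[32];

    try {
      // put() throws BufferOverflowException because 32 bytes
      // cannot fit into the 16 bytes of remaining capacity.
      logDataBuffer.put(largeEntry);
      System.out.println("entry appended");
    } catch (BufferOverflowException e) {
      // In the reported issue this happened inside
      // getStableEntryManager().append(...), so the rest of the
      // commit path never ran for that entry.
      System.out.println("entry too large for buffer: " + e);
    }
  }
}
```

Because `put()` fails before anything after the persistence call runs, the entry ends up in committedEntries but never completes the rest of the commit path, which matches the inconsistent state observed in the cluster.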

Replies: 5 comments 6 replies

Answer selected by jixuan1989
Category: Q&A · Labels: none yet · 5 participants