
BasePersistentServiceProcessor's onApply enters ERROR_TYPE_STATE_MACHINE when it receives an operation other than Read, Write, or Delete #7757

Closed
zrlw opened this issue Feb 13, 2022 · 9 comments · Fixed by #7797

Comments

@zrlw
Contributor

zrlw commented Feb 13, 2022

Describe the bug
When the number of ephemeral instances grows beyond what the cluster can handle, the nacos cluster collapses: every node except the leader ends up in the ERROR_TYPE_STATE_MACHINE state. The logs show that the operation received by BasePersistentServiceProcessor.onApply from sofa-jraft is not Read, Write, or Delete; the request carries no operation field at all.
The following line throws an IllegalArgumentException directly:
final Op op = Op.valueOf(request.getOperation());
The exception propagates all the way up to NacosStateMachine.onApply, the state switches to ERROR_TYPE_STATE_MACHINE, and only restarting the node recovers it.

Why doesn't the code around Op.valueOf catch the IllegalArgumentException? Wouldn't simply ignoring any operation other than Read, Write, and Delete be enough?
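A minimal sketch of the defensive parse being proposed. The enum values follow the issue, but parseOpOrNull and the idea of answering with a failure response are assumptions for illustration, not the actual Nacos code:

```java
public class OpParseDemo {
    enum Op { Read, Write, Delete }

    // Returns the parsed Op, or null when the operation string is not one of
    // Read/Write/Delete, instead of letting IllegalArgumentException
    // propagate up to NacosStateMachine.onApply and poison the state machine.
    static Op parseOpOrNull(String operation) {
        try {
            return Op.valueOf(operation);
        } catch (IllegalArgumentException | NullPointerException e) {
            return null; // caller would answer the request with a failure response
        }
    }

    public static void main(String[] args) {
        System.out.println(parseOpOrNull("Write")); // prints "Write"
        System.out.println(parseOpOrNull(""));      // prints "null"
    }
}
```

An empty operation string reproduces exactly the log line from this issue: Enum.valueOf("") throws "No enum constant ...Op." with nothing after the dot.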

@zrlw
Contributor Author

zrlw commented Feb 13, 2022

One more detail: the message that drives a nacos node into the ERROR_TYPE_STATE_MACHINE state contains only group and key. The group is naming_persistent_service, and the key is the Base64 encoding of "com.alibaba.nacos.naming.iplist.public##<full name of the ephemeral service>".

@zrlw
Contributor Author

zrlw commented Feb 13, 2022

The protocol-raft log:

ERROR processor : com.alibaba.nacos.naming.consistency.persistent.impl.PersistentServiceProcessor@7596d1ef, stateMachine meet critical error: {}.

java.lang.IllegalArgumentException: No enum constant com.alibaba.nacos.naming.consistency.persistent.impl.BasePersistentServiceProcessor.Op.
        at java.lang.Enum.valueOf(Enum.java:238)
        at com.alibaba.nacos.naming.consistency.persistent.impl.BasePersistentServiceProcessor$Op.valueOf(BasePersistentServiceProcessor.java:63)
        at com.alibaba.nacos.naming.consistency.persistent.impl.BasePersistentServiceProcessor.onApply(BasePersistentServiceProcessor.java:166)
        at com.alibaba.nacos.core.distributed.raft.NacosStateMachine.onApply(NacosStateMachine.java:115)
        at com.alipay.sofa.jraft.core.FSMCallerImpl.doApplyTasks(FSMCallerImpl.java:541)
        at com.alipay.sofa.jraft.core.FSMCallerImpl.doCommitted(FSMCallerImpl.java:510)
        at com.alipay.sofa.jraft.core.FSMCallerImpl.runApplyTask(FSMCallerImpl.java:442)
        at com.alipay.sofa.jraft.core.FSMCallerImpl.access$100(FSMCallerImpl.java:73)
        at com.alipay.sofa.jraft.core.FSMCallerImpl$ApplyTaskHandler.onEvent(FSMCallerImpl.java:148)
        at com.alipay.sofa.jraft.core.FSMCallerImpl$ApplyTaskHandler.onEvent(FSMCallerImpl.java:142)
        at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:137)
        at java.lang.Thread.run(Thread.java:748)

@zrlw
Contributor Author

zrlw commented Feb 13, 2022

Catching the exception at this line and simply returning a failure response worked far better than expected. Under extreme load testing the problem no longer appears, and nacos even keeps serving requests normally. Originally I only hoped the patch would stop nacos from falling over, but under a very large heartbeat load the CPU usage of the 3 nacos registry nodes dropped from a sustained 95%+ to below 50%, and the number of network connection handles per client dropped from over a hundred to a dozen or so.

@warmonipa

👍, have you opened a Pull Request for this?

@MajorHe1
Collaborator

Watching this one; we may run into a similar problem as well.

@KomachiSion
Collaborator

The root cause should not be the load itself. I hit the same problem in 2.0. Essentially, ProtoMessageUtil.parse tries to deserialize the bytes as a write request first and then as a read request; since a read request has fewer fields, its bytes may deserialize successfully as a write request, which then fails during apply.

I will fix this on the develop branch now; the v1.x branch can try fixing it the same way.
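The ambiguity can be seen directly in the protobuf wire format: both request types declare group as string field 1, so a ReadRequest carrying only group serializes to bytes that are also a valid WriteRequest, and WriteRequest.parseFrom() cannot tell them apart. The hand-rolled single-field encoder below only illustrates that wire format; it is not Nacos code:

```java
import java.nio.charset.StandardCharsets;

public class WireDemo {
    // Encode one protobuf string field: tag byte = (fieldNumber << 3) | 2,
    // where wire type 2 means length-delimited; then a varint length and the
    // UTF-8 bytes (the length fits in a single varint byte for short strings).
    static byte[] encodeStringField(int fieldNumber, String value) {
        byte[] v = value.getBytes(StandardCharsets.UTF_8);
        byte[] out = new byte[2 + v.length];
        out[0] = (byte) ((fieldNumber << 3) | 2);
        out[1] = (byte) v.length;
        System.arraycopy(v, 0, out, 2, v.length);
        return out;
    }

    public static void main(String[] args) {
        // A ReadRequest with only `group` set produces the same bytes a
        // WriteRequest with only `group` set would, so "try write first,
        // fall back to read" (or the reverse) misclassifies some messages.
        byte[] readRequestBytes = encodeStringField(1, "naming_persistent_service");
        System.out.printf("first byte = 0x%02x%n", readRequestBytes[0]); // prints 0x0a
    }
}
```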

@KomachiSion
Collaborator

(two screenshots attached in the original issue)

@zrlw
Contributor Author

zrlw commented Feb 28, 2022

The root cause should not be the load itself. I hit the same problem in 2.0. Essentially, ProtoMessageUtil.parse tries to deserialize the bytes as a write request first and then as a read request; since a read request has fewer fields, its bytes may deserialize successfully as a write request, which then fails during apply.

I will fix this on the develop branch now; the v1.x branch can try fixing it the same way.

The double try-catch approach is not very elegant: every WriteRequest parse now pays for a redundant extra attempt. It might be better to prepend a one-character type prefix, r or w, to the serialized bytes to distinguish ReadRequest from WriteRequest, and dispatch on that prefix before deserializing the rest; but that approach probably touches a lot of code. If compatibility is not a concern, the WriteRequest message definition in consistency.proto could be changed instead: add a bool field as field 1 before group and shift the remaining field numbers up by one, keeping the ReadRequest definition unchanged. Then checking whether the first byte is 0x0a is enough to tell whether the bytes are a ReadRequest (provided the ReadRequest always carries a group).
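The type-prefix idea above can be sketched as follows. withPrefix and classify are hypothetical names for illustration, not Nacos code:

```java
public class TypePrefixDemo {
    // Prepend one byte ('r' or 'w') so the receiver can pick the right
    // parser without try-catch guessing between the two message types.
    static byte[] withPrefix(char type, byte[] payload) {
        byte[] out = new byte[payload.length + 1];
        out[0] = (byte) type;
        System.arraycopy(payload, 0, out, 1, payload.length);
        return out;
    }

    // Dispatch on the prefix byte before deserializing the rest.
    static String classify(byte[] framed) {
        return framed[0] == 'r' ? "ReadRequest" : "WriteRequest";
    }

    public static void main(String[] args) {
        // 0x0a, 4, 't','e','s','t' is the wire encoding of group="test".
        byte[] framed = withPrefix('r', new byte[] {0x0a, 4, 't', 'e', 's', 't'});
        System.out.println(classify(framed)); // prints "ReadRequest"
    }
}
```

The trade-off the comment notes still holds: every serialization and deserialization site must be changed to add and strip the prefix, which is why the proto-level change is offered as the alternative.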

An update:
With #7797, parsing the test case below still decodes a WriteRequest as a ReadRequest, so no matter which of the two try-catch attempts comes first, the approach has a defect from an algorithmic standpoint.

    @Test
    public void testParseWriteRequestWithKey() {
        String group = "test";
        String key = "testKey";
        WriteRequest testCase = WriteRequest.newBuilder().setGroup(group).setKey(key).build();
        Object actual = ProtoMessageUtil.parse(testCase.toByteArray());
        // Fails under #7797: parse returns a ReadRequest for these bytes.
        assertEquals(WriteRequest.class, actual.getClass());
    }

@zrlw
Contributor Author

zrlw commented Mar 2, 2022

Analysis of how the problem is triggered:

  1. Under load testing, the nacos heartbeat check decides that some ephemeral service instances have passed the DELETE deadline and calls the deregister API. Via serviceManager, the call chain removeInstance -> substractIpAddresses -> updateIpAddresses executes
     Datum datum = consistencyService
         .get(KeyBuilder.buildInstanceListKey(service.getNamespaceId(), service.getName(), ephemeral));
     These ReadRequests are submitted on the leader through JRaftServer.applyOperation, which calls node.apply to enqueue them into jraft's RingBuffer, triggering an ApplyTask event.
  2. In sofa-jraft, ApplyTaskHandler.onEvent (an inner class of FSMCallerImpl) calls runApplyTask to handle the ApplyTask event. doCommitted first calls closureQueue.popClosureUntil to fetch the batch of tasks; the returned firstClosureIndex is the index of the first task in the batch. It then constructs an IteratorImpl whose currentIndex starts at lastAppliedIndex (the index of the last task handled by the previous doCommitted).
  3. doCommitted invokes NacosStateMachine.onApply through the this.fsm.onApply call in doApplyTasks, processing the tasks in a loop and incrementing currentIndex after each one.
  4. Because a task's done closure is not replicated along with the log, every closureQueue entry on a follower is null, so followers always take the ProtoMessageUtil.parse branch in onApply, which is what triggers this problem.
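The follower-side branch in step 4 can be sketched like this. It is a deliberately simplified illustration; Closure, dispatch, and the return strings are hypothetical, not the real NacosStateMachine code, which reads entries from a jraft Iterator:

```java
public class OnApplySketch {
    interface Closure { void run(); }

    // On the leader, the task still carries its `done` closure, which already
    // holds the typed request, so no re-parse is needed. Raft log replication
    // does not copy `done`, so on followers the closure is null and the raw
    // log bytes must be re-parsed -- the ambiguous ProtoMessageUtil.parse path.
    static String dispatch(Closure done, byte[] logData) {
        if (done != null) {
            return "leader: reuse typed request from closure";
        }
        return "follower: ProtoMessageUtil.parse(logData)";
    }

    public static void main(String[] args) {
        System.out.println(dispatch(null, new byte[0])); // follower path
    }
}
```

This also explains why only follower nodes entered ERROR_TYPE_STATE_MACHINE in the original report while the leader survived.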
