This is not critical, but it surfaces a problem worth investigating. Our QA cluster is having trouble with accumulated segment metadata: it has 10k+ realtime segments, even though the actual data volume is not large. In the logs we see entries like the following:
2021/11/04 20:31:34.166 ERROR [ZkClient] [HelixTaskExecutor-message_handle_thread] Data size larger than 1M, will not write to zk. Data (first 1k): {
"id" : "point_entry_REALTIME",
"simpleFields" : {
"BATCH_MESSAGE_MODE" : "false",
"BUCKET_SIZE" : "0",
"SESSION_ID" : "30069443a0581e1",
"STATE_MODEL_DEF" : "SegmentOnlineOfflineStateModel",
"STATE_MODEL_FACTORY_NAME" : "DEFAULT"
},
"mapFields" : {
"point_entry__0__0__20211030T0056Z" : {
"CURRENT_STATE" : "OFFLINE"
},
"point_entry__0__100__20211102T0746Z" : {
"CURRENT_STATE" : "OFFLINE"
},
"point_entry__0__101__20211102T0817Z" : {
"CURRENT_STATE" : "OFFLINE"
},
"point_entry__0__102__20211102T0909Z" : {
"CURRENT_STATE" : "OFFLINE"
},
"point_entry__0__103__20211102T0946Z" : {
"CURRENT_STATE" : "ONLINE",
"END_TIME" : "1636056441791",
"INFO