
kubelet executor sending multiple terminal status updates for tasks #786

Open
jdef opened this issue Feb 15, 2016 · 1 comment

jdef commented Feb 15, 2016

... and this is crashing the slave, perhaps due to mishandled status-update UUIDs in the Go driver?

xref https://issues.apache.org/jira/browse/MESOS-4679
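To illustrate the invariant that seems to be violated: an executor should emit at most one terminal status update per task. A minimal Go sketch of a per-task guard (hypothetical; `updateGuard` and its names are not mesos-go API, and this is not the actual kubelet-executor code) that would suppress the racing second terminal update visible in the executor log below:

```go
package main

import (
	"fmt"
	"sync"
)

// terminal reports whether a task state ends the task's lifecycle.
// The state names mirror Mesos TaskState values.
func terminal(state string) bool {
	switch state {
	case "TASK_FINISHED", "TASK_FAILED", "TASK_KILLED", "TASK_LOST":
		return true
	}
	return false
}

// updateGuard records, per task ID, whether a terminal update has
// already been sent, so a racing second sender is dropped instead of
// reaching the slave under a fresh UUID.
type updateGuard struct {
	mu   sync.Mutex
	done map[string]bool // task ID -> terminal update already sent
}

func (g *updateGuard) allow(taskID, state string) bool {
	g.mu.Lock()
	defer g.mu.Unlock()
	if g.done[taskID] {
		return false // terminal update already sent; drop this one
	}
	if terminal(state) {
		g.done[taskID] = true
	}
	return true
}

func main() {
	g := &updateGuard{done: map[string]bool{}}
	// The first terminal update goes through; the second is suppressed,
	// mirroring the TASK_LOST followed by TASK_KILLED pair in the log.
	fmt.Println(g.allow("pod.1e4f9fbe-d1db-11e5-8a9a-525400309a8f", "TASK_LOST"))   // true
	fmt.Println(g.allow("pod.1e4f9fbe-d1db-11e5-8a9a-525400309a8f", "TASK_KILLED")) // false
}
```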

executor log:

I0215 20:43:59.551657   11068 executor.go:426] Executor driver killTask
I0215 20:43:59.551719   11068 executor.go:436] Executor driver is asked to kill task '&TaskID{Value:*pod.1e4f9fbe-d1db-11e5-8a9a-525400309a8f,XXX_unrecognized:[],}'
I0215 20:43:59.552189   11068 executor.go:687] Executor sending status update &StatusUpdate{FrameworkId:&FrameworkID{Value:*df95a79b-d6d4-4b96-853e-55686628e898-0006,XXX_unrecognized:[],},ExecutorId:&ExecutorID{Value:*31df9d040f057abd_k8sm-executor,XXX_unrecognized:[],},SlaveId:&SlaveID{Value:*20150628-154106-117441034-5050-1315-S2,XXX_unrecognized:[],},Status:&TaskStatus{TaskId:&TaskID{Value:*pod.1e4f9fbe-d1db-11e5-8a9a-525400309a8f,XXX_unrecognized:[],},State:*TASK_LOST,Data:nil,Message:*kill-pod-task,SlaveId:&SlaveID{Value:*20150628-154106-117441034-5050-1315-S2,XXX_unrecognized:[],},Timestamp:*1.455569039e+09,ExecutorId:nil,Healthy:nil,Source:nil,Reason:nil,Uuid:nil,Labels:nil,ContainerStatus:nil,XXX_unrecognized:[],},Timestamp:*1.455569039e+09,Uuid:*[214 253 145 223 212 36 17 229 158 224 82 84 0 231 66 70],LatestState:nil,XXX_unrecognized:[],}
I0215 20:43:59.552599   11068 executor.go:687] Executor sending status update &StatusUpdate{FrameworkId:&FrameworkID{Value:*df95a79b-d6d4-4b96-853e-55686628e898-0006,XXX_unrecognized:[],},ExecutorId:&ExecutorID{Value:*31df9d040f057abd_k8sm-executor,XXX_unrecognized:[],},SlaveId:&SlaveID{Value:*20150628-154106-117441034-5050-1315-S2,XXX_unrecognized:[],},Status:&TaskStatus{TaskId:&TaskID{Value:*pod.1e4f9fbe-d1db-11e5-8a9a-525400309a8f,XXX_unrecognized:[],},State:*TASK_KILLED,Data:nil,Message:*pod-deleted,SlaveId:&SlaveID{Value:*20150628-154106-117441034-5050-1315-S2,XXX_unrecognized:[],},Timestamp:*1.455569039e+09,ExecutorId:nil,Healthy:nil,Source:nil,Reason:nil,Uuid:nil,Labels:nil,ContainerStatus:nil,XXX_unrecognized:[],},Timestamp:*1.455569039e+09,Uuid:*[214 253 162 110 212 36 17 229 158 224 82 84 0 231 66 70],LatestState:nil,XXX_unrecognized:[],}
I0215 20:43:59.557376   11068 suicide.go:51] stopping suicide watch
I0215 20:43:59.559077   11068 executor.go:445] Executor statusUpdateAcknowledgement
I0215 20:43:59.559129   11068 executor.go:448] Receiving status update acknowledgement &StatusUpdateAcknowledgementMessage{SlaveId:&SlaveID{Value:*20150628-154106-117441034-5050-1315-S2,XXX_unrecognized:[],},FrameworkId:&FrameworkID{Value:*df95a79b-d6d4-4b96-853e-55686628e898-0006,XXX_unrecognized:[],},TaskId:&TaskID{Value:*pod.1e4f9fbe-d1db-11e5-8a9a-525400309a8f,XXX_unrecognized:[],},Uuid:*[214 253 145 223 212 36 17 229 158 224 82 84 0 231 66 70],XXX_unrecognized:[],}
I0215 20:43:59.562016   11068 executor.go:470] Executor driver received frameworkMessage
I0215 20:43:59.562073   11068 executor.go:480] Executor driver receives framework message
I0215 20:43:59.562100   11068 executor.go:445] Executor statusUpdateAcknowledgement
I0215 20:43:59.562112   11068 executor.go:448] Receiving status update acknowledgement &StatusUpdateAcknowledgementMessage{SlaveId:&SlaveID{Value:*20150628-154106-117441034-5050-1315-S2,XXX_unrecognized:[],},FrameworkId:&FrameworkID{Value:*df95a79b-d6d4-4b96-853e-55686628e898-0006,XXX_unrecognized:[],},TaskId:&TaskID{Value:*pod.1e4f9fbe-d1db-11e5-8a9a-525400309a8f,XXX_unrecognized:[],},Uuid:*[214 253 162 110 212 36 17 229 158 224 82 84 0 231 66 70],XXX_unrecognized:[],}
I0215 20:43:59.562173   11068 executor.go:579] Receives message from framework task-lost:pod.1e4f9fbe-d1db-11e5-8a9a-525400309a8f
I0215 20:43:59.562292   11068 executor.go:687] Executor sending status update &StatusUpdate{FrameworkId:&FrameworkID{Value:*df95a79b-d6d4-4b96-853e-55686628e898-0006,XXX_unrecognized:[],},ExecutorId:&ExecutorID{Value:*31df9d040f057abd_k8sm-executor,XXX_unrecognized:[],},SlaveId:&SlaveID{Value:*20150628-154106-117441034-5050-1315-S2,XXX_unrecognized:[],},Status:&TaskStatus{TaskId:&TaskID{Value:*pod.1e4f9fbe-d1db-11e5-8a9a-525400309a8f,XXX_unrecognized:[],},State:*TASK_LOST,Data:nil,Message:*task-lost-ack,SlaveId:&SlaveID{Value:*20150628-154106-117441034-5050-1315-S2,XXX_unrecognized:[],},Timestamp:*1.455569039e+09,ExecutorId:nil,Healthy:nil,Source:nil,Reason:nil,Uuid:nil,Labels:nil,ContainerStatus:nil,XXX_unrecognized:[],},Timestamp:*1.455569039e+09,Uuid:*[214 255 28 217 212 36 17 229 158 224 82 84 0 231 66 70],LatestState:nil,XXX_unrecognized:[],}
I0215 20:43:59.562463   11068 executor.go:687] Executor sending status update &StatusUpdate{FrameworkId:&FrameworkID{Value:*df95a79b-d6d4-4b96-853e-55686628e898-0006,XXX_unrecognized:[],},ExecutorId:&ExecutorID{Value:*31df9d040f057abd_k8sm-executor,XXX_unrecognized:[],},SlaveId:&SlaveID{Value:*20150628-154106-117441034-5050-1315-S2,XXX_unrecognized:[],},Status:&TaskStatus{TaskId:&TaskID{Value:*pod.1e4f9fbe-d1db-11e5-8a9a-525400309a8f,XXX_unrecognized:[],},State:*TASK_LOST,Data:nil,Message:*kill-pod-task,SlaveId:&SlaveID{Value:*20150628-154106-117441034-5050-1315-S2,XXX_unrecognized:[],},Timestamp:*1.455569039e+09,ExecutorId:nil,Healthy:nil,Source:nil,Reason:nil,Uuid:nil,Labels:nil,ContainerStatus:nil,XXX_unrecognized:[],},Timestamp:*1.455569039e+09,Uuid:*[214 255 35 27 212 36 17 229 158 224 82 84 0 231 66 70],LatestState:nil,XXX_unrecognized:[],}
I0215 20:43:59.568237   11068 executor.go:445] Executor statusUpdateAcknowledgement
I0215 20:43:59.568286   11068 executor.go:448] Receiving status update acknowledgement &StatusUpdateAcknowledgementMessage{SlaveId:&SlaveID{Value:*20150628-154106-117441034-5050-1315-S2,XXX_unrecognized:[],},FrameworkId:&FrameworkID{Value:*df95a79b-d6d4-4b96-853e-55686628e898-0006,XXX_unrecognized:[],},TaskId:&TaskID{Value:*pod.1e4f9fbe-d1db-11e5-8a9a-525400309a8f,XXX_unrecognized:[],},Uuid:*[214 255 28 217 212 36 17 229 158 224 82 84 0 231 66 70],XXX_unrecognized:[],}
I0215 20:43:59.588373   11068 suicide.go:51] stopping suicide watch
I0215 20:43:59.588566   11068 executor.go:687] Executor sending status update &StatusUpdate{FrameworkId:&FrameworkID{Value:*df95a79b-d6d4-4b96-853e-55686628e898-0006,XXX_unrecognized:[],},ExecutorId:&ExecutorID{Value:*31df9d040f057abd_k8sm-executor,XXX_unrecognized:[],},SlaveId:&SlaveID{Value:*20150628-154106-117441034-5050-1315-S2,XXX_unrecognized:[],},Status:&TaskStatus{TaskId:&TaskID{Value:*pod.6ce1b7db-d1db-11e5-8a9a-525400309a8f,XXX_unrecognized:[],},State:*TASK_KILLED,Data:nil,Message:*pod-deleted,SlaveId:&SlaveID{Value:*20150628-154106-117441034-5050-1315-S2,XXX_unrecognized:[],},Timestamp:*1.455569039e+09,ExecutorId:nil,Healthy:nil,Source:nil,Reason:nil,Uuid:nil,Labels:nil,ContainerStatus:nil,XXX_unrecognized:[],},Timestamp:*1.455569039e+09,Uuid:*[215 3 30 254 212 36 17 229 158 224 82 84 0 231 66 70],LatestState:nil,XXX_unrecognized:[],}
I0215 20:43:59.595983   11068 executor.go:260] slave disconnected, will wait for recovery
I0215 20:43:59.596040   11068 executor.go:328] Slave is disconnected
I0215 20:43:59.623678   11068 suicide.go:51] stopping suicide watch
I0215 20:43:59.623841   11068 executor.go:687] Executor sending status update &StatusUpdate{FrameworkId:&FrameworkID{Value:*df95a79b-d6d4-4b96-853e-55686628e898-0006,XXX_unrecognized:[],},ExecutorId:&ExecutorID{Value:*31df9d040f057abd_k8sm-executor,XXX_unrecognized:[],},SlaveId:&SlaveID{Value:*20150628-154106-117441034-5050-1315-S2,XXX_unrecognized:[],},Status:&TaskStatus{TaskId:&TaskID{Value:*pod.6d006a26-d1db-11e5-8a9a-525400309a8f,XXX_unrecognized:[],},State:*TASK_KILLED,Data:nil,Message:*pod-deleted,SlaveId:&SlaveID{Value:*20150628-154106-117441034-5050-1315-S2,XXX_unrecognized:[],},Timestamp:*1.455569039e+09,ExecutorId:nil,Healthy:nil,Source:nil,Reason:nil,Uuid:nil,Labels:nil,ContainerStatus:nil,XXX_unrecognized:[],},Timestamp:*1.455569039e+09,Uuid:*[215 8 128 159 212 36 17 229 158 224 82 84 0 231 66 70],LatestState:nil,XXX_unrecognized:[],}
I0215 20:43:59.624399   11068 executor.go:284] slave exited ... shutting down
I0215 20:43:59.624442   11068 executor.go:613] Aborting the executor driver
I0215 20:43:59.624458   11068 messenger.go:295] stopping messenger executor(1)@10.2.0.6:40672..
I0215 20:43:59.624515   11068 http_transporter.go:447] stopping HTTP transport
I0215 20:43:59.627217   11068 executor.go:630] Stopping executor driver
I0215 20:43:59.628198   11068 executor.go:572] Stopping the executor driver
W0215 20:43:59.628253   11068 executor.go:633] failed to stop executor driver: Unable to Stop, expecting status DRIVER_RUNNING, but got DRIVER_ABORTED
I0215 20:43:59.628288   11068 executor.go:636] Shutdown the executor
I0215 20:43:59.628180   11068 service.go:139] executor Run completed
E0215 20:43:59.627711   11068 kubelet.go:2295] Update channel is closed. Exiting the sync loop.
I0215 20:43:59.628520   11068 kubelet.go:43] kubelet run terminated
I0215 20:43:59.628556   11068 executor.go:655] exiting

slave log:

I0215 20:43:59.564084 15780 slave.cpp:2762] Handling status update TASK_LOST (UUID: d6ff231b-d424-11e5-9ee0-525400e74246) for task pod.1e4f9fbe-d1db-11e5-8a9a-525400309a8f of framework df95a79b-d6d4-4b96-853e-55686628e898-0006 from executor(1)@10.2.0.6:40672
W0215 20:43:59.564115 15780 slave.cpp:2856] Could not find the executor for status update TASK_LOST (UUID: d6ff231b-d424-11e5-9ee0-525400e74246) for task pod.1e4f9fbe-d1db-11e5-8a9a-525400309a8f of framework df95a79b-d6d4-4b96-853e-55686628e898-0006
I0215 20:43:59.564321 15782 status_update_manager.cpp:826] Checkpointing UPDATE for status update TASK_LOST (UUID: d6ff1cd9-d424-11e5-9ee0-525400e74246) for task pod.1e4f9fbe-d1db-11e5-8a9a-525400309a8f of framework df95a79b-d6d4-4b96-853e-55686628e898-0006
I0215 20:43:59.566783 15782 status_update_manager.cpp:322] Received status update TASK_LOST (UUID: d6ff231b-d424-11e5-9ee0-525400e74246) for task pod.1e4f9fbe-d1db-11e5-8a9a-525400309a8f of framework df95a79b-d6d4-4b96-853e-55686628e898-0006
I0215 20:43:59.566879 15782 slave.cpp:3087] Forwarding the update TASK_LOST (UUID: d6ff1cd9-d424-11e5-9ee0-525400e74246) for task pod.1e4f9fbe-d1db-11e5-8a9a-525400309a8f of framework df95a79b-d6d4-4b96-853e-55686628e898-0006 to master@10.2.0.5:5050
I0215 20:43:59.566952 15782 slave.cpp:3011] Sending acknowledgement for status update TASK_LOST (UUID: d6ff1cd9-d424-11e5-9ee0-525400e74246) for task pod.1e4f9fbe-d1db-11e5-8a9a-525400309a8f of framework df95a79b-d6d4-4b96-853e-55686628e898-0006 to executor(1)@10.2.0.6:40672
F0215 20:43:59.567073 15782 slave.cpp:3003] CHECK_READY(future): is FAILED: Mismatched checkpoint value for status update TASK_LOST (UUID: d6ff231b-d424-11e5-9ee0-525400e74246) for task pod.1e4f9fbe-d1db-11e5-8a9a-525400309a8f of framework df95a79b-d6d4-4b96-853e-55686628e898-0006 (expected checkpoint=true actual checkpoint=false) Failed to handle status update TASK_LOST (UUID: d6ff231b-d424-11e5-9ee0-525400e74246) for task pod.1e4f9fbe-d1db-11e5-8a9a-525400309a8f of framework df95a79b-d6d4-4b96-853e-55686628e898-0006
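Decoding the raw UUID bytes from the executor log into the hex form the slave prints makes the correlation explicit: the slave checkpoints the acknowledged update d6ff1cd9 (the "task-lost-ack" update) but then hits the CHECK failure on d6ff231b (the second, "kill-pod-task" TASK_LOST), i.e. two distinct terminal updates for the same task. A small sketch of the conversion (`uuidString` is a hypothetical helper, not part of mesos-go):

```go
package main

import "fmt"

// uuidString renders the raw 16-byte UUID the Go executor logs
// (e.g. [214 255 35 27 ...]) in the 8-4-4-4-12 hex form the slave
// logs, so the two logs can be correlated.
func uuidString(b [16]byte) string {
	return fmt.Sprintf("%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:16])
}

func main() {
	// Uuid from the executor's "task-lost-ack" update above.
	ack := [16]byte{214, 255, 28, 217, 212, 36, 17, 229, 158, 224, 82, 84, 0, 231, 66, 70}
	// Uuid from the executor's second "kill-pod-task" update.
	kill := [16]byte{214, 255, 35, 27, 212, 36, 17, 229, 158, 224, 82, 84, 0, 231, 66, 70}
	fmt.Println(uuidString(ack))  // d6ff1cd9-d424-11e5-9ee0-525400e74246
	fmt.Println(uuidString(kill)) // d6ff231b-d424-11e5-9ee0-525400e74246
}
```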

jdef commented Feb 23, 2016

more evidence, from a failed conformance test:

[08:05:07][Step 7/7] Feb 22 13:05:07.533: INFO: event for nettest-2a1sz: {default-scheduler } FailedScheduling: Error scheduling: No suitable offers for pod/task
[08:05:07][Step 7/7] Feb 22 13:05:07.533: INFO: event for nettest-2a1sz: {controllermanager } NodeControllerEviction: Marking for deletion Pod nettest-2a1sz from Node 172.17.0.9
[08:05:07][Step 7/7] Feb 22 13:05:07.533: INFO: event for nettest-ye1zy: {default-scheduler } FailedScheduling: Error scheduling: No suitable offers for pod/task
[08:05:07][Step 7/7] Feb 22 13:05:07.533: INFO: event for nettest-ye1zy: {default-scheduler } Scheduled: Successfully assigned nettest-ye1zy to 172.17.0.10
[08:05:07][Step 7/7] Feb 22 13:05:07.533: INFO: event for nettest-ye1zy: {kubelet 172.17.0.10} FailedSync: Error syncing pod, skipping: failed to "StartContainer" for "POD" with RunContainerError: "failed to apply oom-score-adj to container \"exceeded maxTries, some processes might not have desired OOM score\"- /k8s_POD.14c529bb_nettest-ye1zy_e2e-tests-nettest-bf41z_b2283970-d964-11e5-9fb8-0242ac110006_78ba203b"
[08:05:07][Step 7/7] 


I0222 13:04:27.418985      46 nodecontroller.go:477] Deleting node (no longer present in cloud provider): 172.17.0.9
I0222 13:04:27.418994      46 nodecontroller.go:546] Recording Deleting Node 172.17.0.9 because it's not present according to cloud provider event message for node 172.17.0.9
I0222 13:04:27.419058      46 event.go:211] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"172.17.0.9", UID:"172.17.0.9", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeletingNode' Node 172.17.0.9 event: Deleting Node 172.17.0.9 because it's not present according to cloud provider
I0222 13:04:27.421690      46 nodecontroller.go:487] Deleting node 172.17.0.9 is delayed while pods are evicted
I0222 13:04:27.467246      46 nodecontroller.go:546] Recording Deleting all Pods from Node 172.17.0.9. event message for node 172.17.0.9
I0222 13:04:27.467301      46 nodecontroller.go:790] Starting deletion of pod nettest-2a1sz
I0222 13:04:27.467324      46 event.go:211] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"172.17.0.9", UID:"172.17.0.9", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeletingAllPods' Node 172.17.0.9 event: Deleting all Pods from Node 172.17.0.9.
I0222 13:04:27.467398      46 event.go:211] Event(api.ObjectReference{Kind:"Pod", Namespace:"e2e-tests-nettest-bf41z", Name:"nettest-2a1sz", UID:"b22ae462-d964-11e5-9fb8-0242ac110006", APIVersion:"v1", ResourceVersion:"2015", FieldPath:""}): type: 'Normal' reason: 'NodeControllerEviction' Marking for deletion Pod nettest-2a1sz from Node 172.17.0.9
I0222 13:04:27.472767      46 controller.go:279] Pod nettest-2a1sz updated.
I0222 13:04:27.472798      46 nodecontroller.go:790] Starting deletion of pod kubernetes-dashboard-v0.1.0-k4clv
I0222 13:04:27.472840      46 controller.go:243] No daemon sets found for pod nettest-2a1sz, daemon set controller will avoid syncing
I0222 13:04:27.472903      46 event.go:211] Event(api.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kubernetes-dashboard-v0.1.0-k4clv", UID:"c3263daa-d95e-11e5-9fb8-0242ac110006", APIVersion:"v1", ResourceVersion:"2020", FieldPath:""}): type: 'Normal' reason: 'NodeControllerEviction' Marking for deletion Pod kubernetes-dashboard-v0.1.0-k4clv from Node 172.17.0.9
