Our kubelet logs are full of thousands of lines like this:
E0416 19:44:35.803861 2391 kubelet_volumes.go:154] Orphaned pod "017aa18b-44cb-11e9-84a1-eeeeeeeeeeee" found, but volume paths are still present on disk : There were a total of 155 errors similar to this. Turn up verbosity to see them.
Not sure if it is actually a Longhorn issue, but as discussed with @yasker, we at least wanted to document it.
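For reference, the leftover volume paths the kubelet is complaining about can be inspected directly under the kubelet data directory (a minimal sketch, assuming the default /var/lib/kubelet root dir; the UID is the one from the log line above):
# Show what is still on disk for the orphaned pod from the log line above
# (assumes the default kubelet root dir /var/lib/kubelet):
ls -la /var/lib/kubelet/pods/017aa18b-44cb-11e9-84a1-eeeeeeeeeeee/volumes/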
We've encountered this problem too.
There is already a Kubernetes ticket open for this.
We delete the orphaned pods manually from time to time by running:
# Grab the latest "Orphaned pod" line from the kubelet logs; field 7 is the
# pod UID, field 23 is the reported error count.
num=$(docker logs --tail 20 kubelet 2>&1 | grep "errors similar to this. Turn up verbosity to see them." | tail -1 | awk '{print $23}' | sed 's/"//g')
while [ -n "$num" ]
do
  uid=$(docker logs --tail 20 kubelet 2>&1 | grep "errors similar to this. Turn up verbosity to see them." | tail -1 | awk '{print $7}' | sed 's/"//g')
  rm -r "/var/lib/kubelet/pods/$uid"
  sleep 2s
  num=$(docker logs --tail 20 kubelet 2>&1 | grep "errors similar to this. Turn up verbosity to see them." | tail -1 | awk '{print $23}' | sed 's/"//g')
  echo "$num remaining"
done
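A slightly safer variant of the removal step, as a rough sketch: before deleting, skip the directory if it is still backed by a live mount (rm -r through a mount would delete the data it points at) and only remove it once nothing is left under its volumes/ subtree. The paths and checks here are assumptions about the node layout, not something kubelet guarantees; $uid is assumed to come from the loop above.
# Rough sketch: only remove a pod dir that is unmounted and has no files
# left under volumes/; $uid comes from the log-parsing loop above.
poddir="/var/lib/kubelet/pods/$uid"
if mount | grep -q "$poddir"; then
  echo "still mounted: $poddir, skipping"
elif [ -z "$(find "$poddir/volumes" -type f 2>/dev/null | head -1)" ]; then
  rm -r "$poddir"
else
  echo "files remain under $poddir/volumes, skipping"
fi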