Expected Behavior
We deployed an AKS cluster using Calico with kubenet, and we used the same pipeline to deploy many other AKS clusters. We expect the tigera-operator pod to be in the Running state, not crashing.
Current Behavior
The tigera-operator pod is in CrashLoopBackOff with the logs below:
2021/07/05 10:53:40 [INFO] Version: v1.17.1
2021/07/05 10:53:40 [INFO] Go Version: go1.15.2
2021/07/05 10:53:40 [INFO] Go OS/Arch: linux/amd64
{"level":"error","ts":1625482450.1487107,"logger":"controller-runtime.manager","msg":"Failed to get API Group-Resources","error":"Get "https://aks-mirai-acc-dns-d9c58f46.hcp.westeurope.azmk8s.io:443/api?timeout=32s\": dial tcp: i/o timeout","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/zapr@v0.2.0/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/manager.New\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.7.0/pkg/manager/manager.go:317\nmain.main\n\t/go/src/github.com/tigera/operator/main.go:157\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:204"}
{"level":"error","ts":1625482450.1493962,"logger":"setup","msg":"unable to start manager","error":"Get "https://aks-mirai-acc-dns-d9c58f46.hcp.westeurope.azmk8s.io:443/api?timeout=32s\": dial tcp: i/o timeout","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/zapr@v0.2.0/zapr.go:132\nmain.main\n\t/go/src/github.com/tigera/operator/main.go:175\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:204"}
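For anyone correlating these errors with firewall or NSG flow logs: the `ts` field in the JSON lines is a Unix epoch timestamp. A minimal sketch (standard library only, value copied from the first error line above) to convert it to UTC:

```python
from datetime import datetime, timezone

# "ts" value taken from the first error line in the log above
ts = 1625482450.1487107

# Convert the epoch timestamp to UTC for correlation with firewall logs
utc = datetime.fromtimestamp(ts, tz=timezone.utc)
print(utc.strftime("%Y-%m-%d %H:%M:%S UTC"))  # → 2021-07-05 10:54:10 UTC
```

Note the error fires about 30 seconds after the 10:53:40 startup banner, which is consistent with the `timeout=32s` on the API request: the operator starts, waits out the client timeout against the API server, and exits.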
*We tested traffic from the node hosting the tigera-operator pod toward the API server FQDN above, and it works fine!
*If it failed to connect to the API server, we would expect all nodes to be NotReady, but all nodes are in the Ready state.
*We have a firewall, but it allows traffic to the API server FQDN, and all Calico pods are running except tigera-operator.
Below are the describe output and logs for the tigera-operator pod:
C:\WINDOWS\system32>kubectl describe pod tigera-operator-64bd78b58-99lmc -n tigera-operator
Name: tigera-operator-64bd78b58-99lmc
Namespace: tigera-operator
Priority: 0
Node: aks-systempool-14727861-vmss000000/10.248.56.4
Start Time: Fri, 02 Jul 2021 18:03:40 +0200
Labels: k8s-app=tigera-operator
Annotations:
Status: Running
IP: 10.248.56.4
IPs:
IP: 10.248.56.4
Controlled By: ReplicaSet/tigera-operator-64bd78b58
Containers:
tigera-operator:
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
var-lib-calico:
tigera-operator-token-4sv2d:
QoS Class: BestEffort
Node-Selectors: kubernetes.io/os=linux
Tolerations: :NoExecute op=Exists
Events:
Type Reason Age From Message
Normal Pulled 46m (x711 over 2d18h) kubelet Container image "mcr.microsoft.com/oss/tigera/operator:v1.17.1" already present on machine
Warning BackOff 66s (x16784 over 2d18h) kubelet Back-off restarting failed container
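As a rough sanity check on the crash cadence (my own arithmetic, not from the original report): the events show the image was pulled 711 times over 2d18h, so a quick sketch of the implied restart interval:

```python
# Event counts and window taken from the "Events" section above
window_s = (2 * 24 + 18) * 3600  # 2d18h expressed in seconds
pulls = 711                      # "Pulled ... (x711 over 2d18h)"

# Average time between container restarts
interval_s = window_s / pulls
print(f"~{interval_s / 60:.1f} min between restarts")
```

A steady interval of roughly five and a half minutes matches a container that starts, waits out the 32-second API timeout, exits, and then sits in the kubelet's CrashLoopBackOff delay, which caps at 5 minutes.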
Context
The tigera-operator pod is failing to start.
Your Environment
AKS cluster, Kubernetes 1.20.7