Container vcluster can not startup during create vcluster #261

Closed
pengtaoman opened this issue Dec 21, 2021 · 2 comments

pengtaoman commented Dec 21, 2021

My environment is as follows:

$ uname -a
Linux centos111 3.10.0-1160.45.1.el7.x86_64 #1 SMP Wed Oct 13 17:20:51 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

$ vcluster --version
vcluster version 0.4.5

$ kubectl cluster-info dump | grep -m 1 service-cluster-ip-range
                            "--service-cluster-ip-range=10.96.0.0/12",

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.13", GitCommit:"2444b3347a2c45eb965b182fb836e1f51dc61b70", GitTreeState:"clean", BuildDate:"2021-11-17T13:05:33Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.13", GitCommit:"2444b3347a2c45eb965b182fb836e1f51dc61b70", GitTreeState:"clean", BuildDate:"2021-11-17T13:00:29Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}

$ docker version
Client: Docker Engine - Community
 Version:           20.10.11
 API version:       1.41
 Go version:        go1.16.9
 Git commit:        dea9396
 Built:             Thu Nov 18 00:38:53 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.11
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.9
  Git commit:       847da18
  Built:            Thu Nov 18 00:37:17 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.12
  GitCommit:        7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

My k8s cluster has three nodes (one master and two worker nodes). I installed Calico, Istio, cert-manager, kubernetes-dashboard, metrics-server, ingress-nginx, and MetalLB on it.

The API resources are as follows:

NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
bindings                                       v1                                     true         Binding
componentstatuses                 cs           v1                                     false        ComponentStatus
configmaps                        cm           v1                                     true         ConfigMap
endpoints                         ep           v1                                     true         Endpoints
events                            ev           v1                                     true         Event
limitranges                       limits       v1                                     true         LimitRange
namespaces                        ns           v1                                     false        Namespace
nodes                             no           v1                                     false        Node
persistentvolumeclaims            pvc          v1                                     true         PersistentVolumeClaim
persistentvolumes                 pv           v1                                     false        PersistentVolume
pods                              po           v1                                     true         Pod
podtemplates                                   v1                                     true         PodTemplate
replicationcontrollers            rc           v1                                     true         ReplicationController
resourcequotas                    quota        v1                                     true         ResourceQuota
secrets                                        v1                                     true         Secret
serviceaccounts                   sa           v1                                     true         ServiceAccount
services                          svc          v1                                     true         Service
challenges                                     acme.cert-manager.io/v1                true         Challenge
orders                                         acme.cert-manager.io/v1                true         Order
mutatingwebhookconfigurations                  admissionregistration.k8s.io/v1        false        MutatingWebhookConfiguration
validatingwebhookconfigurations                admissionregistration.k8s.io/v1        false        ValidatingWebhookConfiguration
customresourcedefinitions         crd,crds     apiextensions.k8s.io/v1                false        CustomResourceDefinition
apiservices                                    apiregistration.k8s.io/v1              false        APIService
controllerrevisions                            apps/v1                                true         ControllerRevision
daemonsets                        ds           apps/v1                                true         DaemonSet
deployments                       deploy       apps/v1                                true         Deployment
replicasets                       rs           apps/v1                                true         ReplicaSet
statefulsets                      sts          apps/v1                                true         StatefulSet
tokenreviews                                   authentication.k8s.io/v1               false        TokenReview
localsubjectaccessreviews                      authorization.k8s.io/v1                true         LocalSubjectAccessReview
selfsubjectaccessreviews                       authorization.k8s.io/v1                false        SelfSubjectAccessReview
selfsubjectrulesreviews                        authorization.k8s.io/v1                false        SelfSubjectRulesReview
subjectaccessreviews                           authorization.k8s.io/v1                false        SubjectAccessReview
horizontalpodautoscalers          hpa          autoscaling/v1                         true         HorizontalPodAutoscaler
cronjobs                          cj           batch/v1beta1                          true         CronJob
jobs                                           batch/v1                               true         Job
certificaterequests               cr,crs       cert-manager.io/v1                     true         CertificateRequest
certificates                      cert,certs   cert-manager.io/v1                     true         Certificate
clusterissuers                                 cert-manager.io/v1                     false        ClusterIssuer
issuers                                        cert-manager.io/v1                     true         Issuer
certificatesigningrequests        csr          certificates.k8s.io/v1                 false        CertificateSigningRequest
leases                                         coordination.k8s.io/v1                 true         Lease
bgpconfigurations                              crd.projectcalico.org/v1               false        BGPConfiguration
bgppeers                                       crd.projectcalico.org/v1               false        BGPPeer
blockaffinities                                crd.projectcalico.org/v1               false        BlockAffinity
clusterinformations                            crd.projectcalico.org/v1               false        ClusterInformation
felixconfigurations                            crd.projectcalico.org/v1               false        FelixConfiguration
globalnetworkpolicies             gnp          crd.projectcalico.org/v1               false        GlobalNetworkPolicy
globalnetworksets                              crd.projectcalico.org/v1               false        GlobalNetworkSet
hostendpoints                                  crd.projectcalico.org/v1               false        HostEndpoint
ipamblocks                                     crd.projectcalico.org/v1               false        IPAMBlock
ipamconfigs                                    crd.projectcalico.org/v1               false        IPAMConfig
ipamhandles                                    crd.projectcalico.org/v1               false        IPAMHandle
ippools                                        crd.projectcalico.org/v1               false        IPPool
kubecontrollersconfigurations                  crd.projectcalico.org/v1               false        KubeControllersConfiguration
networkpolicies                                crd.projectcalico.org/v1               true         NetworkPolicy
networksets                                    crd.projectcalico.org/v1               true         NetworkSet
endpointslices                                 discovery.k8s.io/v1beta1               true         EndpointSlice
events                            ev           events.k8s.io/v1                       true         Event
ingresses                         ing          extensions/v1beta1                     true         Ingress
flowschemas                                    flowcontrol.apiserver.k8s.io/v1beta1   false        FlowSchema
prioritylevelconfigurations                    flowcontrol.apiserver.k8s.io/v1beta1   false        PriorityLevelConfiguration
istiooperators                    iop,io       install.istio.io/v1alpha1              true         IstioOperator
nodes                                          metrics.k8s.io/v1beta1                 false        NodeMetrics
pods                                           metrics.k8s.io/v1beta1                 true         PodMetrics
monitoringdashboards                           monitoring.kiali.io/v1alpha1           true         MonitoringDashboard
destinationrules                  dr           networking.istio.io/v1beta1            true         DestinationRule
envoyfilters                                   networking.istio.io/v1alpha3           true         EnvoyFilter
gateways                          gw           networking.istio.io/v1beta1            true         Gateway
serviceentries                    se           networking.istio.io/v1beta1            true         ServiceEntry
sidecars                                       networking.istio.io/v1beta1            true         Sidecar
virtualservices                   vs           networking.istio.io/v1beta1            true         VirtualService
workloadentries                   we           networking.istio.io/v1beta1            true         WorkloadEntry
workloadgroups                    wg           networking.istio.io/v1alpha3           true         WorkloadGroup
ingressclasses                                 networking.k8s.io/v1                   false        IngressClass
ingresses                         ing          networking.k8s.io/v1                   true         Ingress
networkpolicies                   netpol       networking.k8s.io/v1                   true         NetworkPolicy
runtimeclasses                                 node.k8s.io/v1                         false        RuntimeClass
poddisruptionbudgets              pdb          policy/v1beta1                         true         PodDisruptionBudget
podsecuritypolicies               psp          policy/v1beta1                         false        PodSecurityPolicy
clusterrolebindings                            rbac.authorization.k8s.io/v1           false        ClusterRoleBinding
clusterroles                                   rbac.authorization.k8s.io/v1           false        ClusterRole
rolebindings                                   rbac.authorization.k8s.io/v1           true         RoleBinding
roles                                          rbac.authorization.k8s.io/v1           true         Role
priorityclasses                   pc           scheduling.k8s.io/v1                   false        PriorityClass
authorizationpolicies                          security.istio.io/v1beta1              true         AuthorizationPolicy
peerauthentications               pa           security.istio.io/v1beta1              true         PeerAuthentication
requestauthentications            ra           security.istio.io/v1beta1              true         RequestAuthentication
csidrivers                                     storage.k8s.io/v1                      false        CSIDriver
csinodes                                       storage.k8s.io/v1                      false        CSINode
storageclasses                    sc           storage.k8s.io/v1                      false        StorageClass
volumeattachments                              storage.k8s.io/v1                      false        VolumeAttachment
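
Note that the host cluster serves Ingress under networking.k8s.io/v1, which can also be confirmed directly; this becomes relevant to the syncer error further below, which must therefore concern the virtual cluster's API server rather than the host's:

$ kubectl api-resources --api-group=networking.k8s.io
NAME              SHORTNAMES   APIVERSION             NAMESPACED   KIND
ingressclasses                 networking.k8s.io/v1   false        IngressClass
ingresses         ing          networking.k8s.io/v1   true         Ingress
networkpolicies   netpol       networking.k8s.io/v1   true         NetworkPolicy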

To provide a PV for the PVC that vcluster creates during deployment, I created this PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-vcluster
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /srv/nfs/kubedata/vcluster
    server: 10.211.55.111
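
Once the chart is installed, the claim/volume binding can be verified; I assume the PVC is named data-vcluster-1-0, following the usual StatefulSet volumeClaimTemplate naming:

$ kubectl -n host-namespace-1 get pvc
$ kubectl get pv pv-vcluster

If the PVC stays Pending, the PV's capacity, accessModes, or storageClassName does not match the claim.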

Then I deployed the vcluster:

$ vcluster create --k3s-image rancher/k3s:v1.22.2-k3s2-amd64 vcluster-1 -n host-namespace-1
[info]   Creating namespace host-namespace-1
[info]   execute command: helm upgrade vcluster-1 vcluster --repo https://charts.loft.sh --version 0.4.5 --kubeconfig /var/folders/n7/wss_6zrd79vg2k903vxb08900000gn/T/846113136 --namespace host-namespace-1 --install --repository-config='' --values /var/folders/n7/wss_6zrd79vg2k903vxb08900000gn/T/870671375
[done] √ Successfully created virtual cluster vcluster-1 in namespace host-namespace-1. Use 'vcluster connect vcluster-1 --namespace host-namespace-1' to access the virtual cluster
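
Checking the pod right after creation shows it never becomes fully ready; the output below is illustrative rather than copied from my terminal:

$ kubectl -n host-namespace-1 get pods
NAME           READY   STATUS    RESTARTS   AGE
vcluster-1-0   1/2     Running   2          3m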

At first, the syncer container reported errors:

$ kubectl -n host-namespace-1 logs vcluster-1-0
error: a container name must be specified for pod vcluster-1-0, choose one of: [vcluster syncer]
$ kubectl -n host-namespace-1 logs vcluster-1-0 -c syncer
I1221 07:37:18.573565       1 main.go:238] Using physical cluster at https://10.96.0.1:443
I1221 07:37:18.623091       1 main.go:269] Can connect to virtual cluster with version v1.22.2+k3s2
F1221 07:37:18.628630       1 main.go:147] register controllers: register ingresses indices: no matches for kind "Ingress" in version "networking.k8s.io/v1"
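
One way to confirm where this discovery failure comes from is to ask the embedded k3s API server which API groups it currently serves, using the kubeconfig path it prints in its own log (/k3s-config/kube-config.yaml, see further below) and the bundled k3s kubectl:

$ kubectl -n host-namespace-1 exec vcluster-1-0 -c vcluster -- \
    k3s kubectl --kubeconfig /k3s-config/kube-config.yaml api-versions | grep networking

If networking.k8s.io/v1 is missing from the output, the virtual API server simply had not finished registering its built-in APIs when the syncer started.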

So I deleted the pod and let the StatefulSet controller restart it, but that didn't help: the syncer container reported the same errors. After I deleted the pod three or four times, it reported:

$ kubectl -n host-namespace-1 logs vcluster-1-0 -c syncer
I1221 07:59:27.564254       1 main.go:238] Using physical cluster at https://10.96.0.1:443
I1221 07:59:27.656748       1 main.go:269] Can connect to virtual cluster with version v1.22.2+k3s2
I1221 07:59:27.765415       1 leaderelection.go:243] attempting to acquire leader lease host-namespace-1/vcluster-vcluster-1-controller...
I1221 07:59:27.781468       1 plugins.go:158] Loaded 1 mutating admission controller(s) successfully in the following order: MutatingAdmissionWebhook.
I1221 07:59:27.781484       1 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook.
I1221 07:59:27.789213       1 leaderelection.go:253] successfully acquired lease host-namespace-1/vcluster-vcluster-1-controller
I1221 07:59:27.789347       1 leaderelection.go:68] Acquired leadership and run vcluster in leader mode
I1221 07:59:27.789922       1 leaderelection.go:31] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"host-namespace-1", Name:"vcluster-vcluster-1-controller", UID:"95cb7b38-cc6f-408b-b5b3-668691410704", APIVersion:"v1", ResourceVersion:"7727935", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' vcluster-1-0-external-vcluster-controller became leader
E1221 07:59:27.792915       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"vcluster-vcluster-1-controller.16c2b6386d95219d", GenerateName:"", Namespace:"host-namespace-1", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"ConfigMap", Namespace:"host-namespace-1", Name:"vcluster-vcluster-1-controller", UID:"95cb7b38-cc6f-408b-b5b3-668691410704", APIVersion:"v1", ResourceVersion:"7727935", FieldPath:""}, Reason:"LeaderElection", Message:"vcluster-1-0-external-vcluster-controller became leader", Source:v1.EventSource{Component:"vcluster", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0688037ef0a2b9d, ext:2338810854, loc:(*time.Location)(0x31cece0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0688037ef0a2b9d, ext:2338810854, loc:(*time.Location)(0x31cece0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:serviceaccount:host-namespace-1:vc-vcluster-1" cannot create resource "events" in API group "" in the namespace "host-namespace-1"' (will not retry!)
I1221 07:59:28.224126       1 server.go:172] Starting tls proxy server at 0.0.0.0:8443
I1221 07:59:28.224948       1 dynamic_cafile_content.go:167] Starting request-header::/data/server/tls/request-header-ca.crt
I1221 07:59:28.224985       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/data/server/tls/client-ca.crt
I1221 07:59:28.225132       1 syncer.go:170] Generating serving cert for service ips: [10.105.163.1]
I1221 07:59:28.226995       1 secure_serving.go:197] Serving securely on [::]:8443
I1221 07:59:28.227075       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1221 07:59:35.334697       1 loghelper.go:65] Start endpoints sync controller
I1221 07:59:35.334775       1 loghelper.go:65] Start pods sync controller
I1221 07:59:35.335085       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints: Starting EventSource source &source.Kind{Type:(*v1.Endpoints)(0xc0002f6500), cache:(*cache.informerCache)(0xc00000eca8), started:(chan error)(nil), startCancel:(func())(nil)}
I1221 07:59:35.335116       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints: Starting EventSource source &source.kindWithCache{kind:source.Kind{Type:(*v1.Endpoints)(0xc0002f63c0), cache:(*cache.informerCache)(0xc00000e948), started:(chan error)(nil), startCancel:(func())(nil)}}
I1221 07:59:35.335124       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints: Starting Controller
I1221 07:59:35.340365       1 loghelper.go:65] Start events sync controller
I1221 07:59:35.340392       1 loghelper.go:65] Start ingresses sync controller
I1221 07:59:35.340630       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Pod: controller: pod: Starting EventSource source &source.Kind{Type:(*v1.Pod)(0xc000a53800), cache:(*cache.informerCache)(0xc00000eca8), started:(chan error)(nil), startCancel:(func())(nil)}
I1221 07:59:35.340636       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Event: controller: event-backward: Starting EventSource source &source.Kind{Type:(*v1.Event)(0xc0002c2500), cache:(*cache.informerCache)(0xc00000e948), started:(chan error)(nil), startCancel:(func())(nil)}
I1221 07:59:35.340657       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind Event: controller: event-backward: Starting Controller
I1221 07:59:35.340649       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Pod: controller: pod: Starting EventSource source &source.kindWithCache{kind:source.Kind{Type:(*v1.Pod)(0xc000a53400), cache:(*cache.informerCache)(0xc00000e948), started:(chan error)(nil), startCancel:(func())(nil)}}
I1221 07:59:35.340696       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind Pod: controller: pod: Starting Controller
I1221 07:59:35.343235       1 loghelper.go:65] Start nodes sync controller
I1221 07:59:35.343285       1 loghelper.go:65] Start persistentvolumes sync controller
I1221 07:59:35.343310       1 loghelper.go:65] Start configmaps sync controller
I1221 07:59:35.343330       1 loghelper.go:65] Start secrets sync controller
I1221 07:59:35.343547       1 controller.go:165] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress: Starting EventSource source &source.Kind{Type:(*v1.Ingress)(0xc0011be480), cache:(*cache.informerCache)(0xc00000eca8), started:(chan error)(nil), startCancel:(func())(nil)}
I1221 07:59:35.343560       1 controller.go:165] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress: Starting EventSource source &source.kindWithCache{kind:source.Kind{Type:(*v1.Ingress)(0xc0011be300), cache:(*cache.informerCache)(0xc00000e948), started:(chan error)(nil), startCancel:(func())(nil)}}
I1221 07:59:35.343566       1 controller.go:173] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress: Starting Controller
I1221 07:59:35.343729       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Node: controller: fake-node: Starting EventSource source &source.Kind{Type:(*v1.Node)(0xc0011a5200), cache:(*cache.informerCache)(0xc00000eca8), started:(chan error)(nil), startCancel:(func())(nil)}
I1221 07:59:35.343739       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Node: controller: fake-node: Starting EventSource source &source.Kind{Type:(*v1.Pod)(0xc000464000), cache:(*cache.informerCache)(0xc00000eca8), started:(chan error)(nil), startCancel:(func())(nil)}
I1221 07:59:35.343743       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind Node: controller: fake-node: Starting Controller
I1221 07:59:35.343813       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind PersistentVolume: controller: fake-persistent-volumes: Starting EventSource source &source.Kind{Type:(*v1.PersistentVolume)(0xc0001d0780), cache:(*cache.informerCache)(0xc00000eca8), started:(chan error)(nil), startCancel:(func())(nil)}
I1221 07:59:35.343823       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind PersistentVolume: controller: fake-persistent-volumes: Starting EventSource source &source.Kind{Type:(*v1.PersistentVolumeClaim)(0xc000dda1c0), cache:(*cache.informerCache)(0xc00000eca8), started:(chan error)(nil), startCancel:(func())(nil)}
I1221 07:59:35.343829       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind PersistentVolume: controller: fake-persistent-volumes: Starting Controller
I1221 07:59:35.345689       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind ConfigMap: controller: configmap: Starting EventSource source &source.Kind{Type:(*v1.ConfigMap)(0xc0002f6f00), cache:(*cache.informerCache)(0xc00000eca8), started:(chan error)(nil), startCancel:(func())(nil)}
I1221 07:59:35.345741       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind ConfigMap: controller: configmap: Starting EventSource source &source.kindWithCache{kind:source.Kind{Type:(*v1.ConfigMap)(0xc0002f6b40), cache:(*cache.informerCache)(0xc00000e948), started:(chan error)(nil), startCancel:(func())(nil)}}
I1221 07:59:35.345754       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind ConfigMap: controller: configmap: Starting EventSource source &source.Kind{Type:(*v1.Pod)(0xc000464400), cache:(*cache.informerCache)(0xc00000eca8), started:(chan error)(nil), startCancel:(func())(nil)}
I1221 07:59:35.345759       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind ConfigMap: controller: configmap: Starting Controller
I1221 07:59:35.347775       1 loghelper.go:65] Start priorityclasses sync controller
I1221 07:59:35.347792       1 loghelper.go:65] Start storageclasses sync controller
I1221 07:59:35.347796       1 loghelper.go:65] Start services sync controller
I1221 07:59:35.347814       1 loghelper.go:65] Start persistentvolumeclaims sync controller
I1221 07:59:35.350278       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim: Starting EventSource source &source.Kind{Type:(*v1.PersistentVolumeClaim)(0xc0001a9340), cache:(*cache.informerCache)(0xc00000eca8), started:(chan error)(nil), startCancel:(func())(nil)}
I1221 07:59:35.350305       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim: Starting EventSource source &source.kindWithCache{kind:source.Kind{Type:(*v1.PersistentVolumeClaim)(0xc0001a9180), cache:(*cache.informerCache)(0xc00000e948), started:(chan error)(nil), startCancel:(func())(nil)}}
I1221 07:59:35.350312       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim: Starting Controller
I1221 07:59:35.350555       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Secret: controller: secret: Starting EventSource source &source.Kind{Type:(*v1.Secret)(0xc000665680), cache:(*cache.informerCache)(0xc00000eca8), started:(chan error)(nil), startCancel:(func())(nil)}
I1221 07:59:35.350570       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Secret: controller: secret: Starting EventSource source &source.kindWithCache{kind:source.Kind{Type:(*v1.Secret)(0xc000665540), cache:(*cache.informerCache)(0xc00000e948), started:(chan error)(nil), startCancel:(func())(nil)}}
I1221 07:59:35.350577       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Secret: controller: secret: Starting EventSource source &source.Kind{Type:(*v1.Ingress)(0xc00112c480), cache:(*cache.informerCache)(0xc00000eca8), started:(chan error)(nil), startCancel:(func())(nil)}
I1221 07:59:35.350584       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Secret: controller: secret: Starting EventSource source &source.Kind{Type:(*v1.Pod)(0xc0006c8400), cache:(*cache.informerCache)(0xc00000eca8), started:(chan error)(nil), startCancel:(func())(nil)}
I1221 07:59:35.350588       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind Secret: controller: secret: Starting Controller
I1221 07:59:35.350650       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Service: controller: service: Starting EventSource source &source.Kind{Type:(*v1.Service)(0xc0006d6500), cache:(*cache.informerCache)(0xc00000eca8), started:(chan error)(nil), startCancel:(func())(nil)}
I1221 07:59:35.350660       1 controller.go:165] controller-runtime: manager: reconciler group  reconciler kind Service: controller: service: Starting EventSource source &source.kindWithCache{kind:source.Kind{Type:(*v1.Service)(0xc0006d6280), cache:(*cache.informerCache)(0xc00000e948), started:(chan error)(nil), startCancel:(func())(nil)}}
I1221 07:59:35.350664       1 controller.go:173] controller-runtime: manager: reconciler group  reconciler kind Service: controller: service: Starting Controller
I1221 07:59:35.435536       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints: Starting workers worker count 1
I1221 07:59:35.441773       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind Pod: controller: pod: Starting workers worker count 1
I1221 07:59:35.451200       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind Node: controller: fake-node: Starting workers worker count 1
I1221 07:59:35.451359       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind Event: controller: event-backward: Starting workers worker count 1
I1221 07:59:35.451396       1 controller.go:207] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress: Starting workers worker count 1
I1221 07:59:35.454799       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind Service: controller: service: Starting workers worker count 1
I1221 07:59:35.454848       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim: Starting workers worker count 1
I1221 07:59:35.455028       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind PersistentVolume: controller: fake-persistent-volumes: Starting workers worker count 1
I1221 07:59:35.455070       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind ConfigMap: controller: configmap: Starting workers worker count 1
I1221 07:59:35.455322       1 controller.go:207] controller-runtime: manager: reconciler group  reconciler kind Secret: controller: secret: Starting workers worker count 1
E1221 08:00:09.442069       1 controller.go:302] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints: name kubernetes namespace default: Reconciler error Timeout: request did not complete within requested timeout - context deadline exceeded
E1221 08:00:09.458937       1 controller.go:302] controller-runtime: manager: reconciler group  reconciler kind Service: controller: service: name kubernetes namespace default: Reconciler error Timeout: request did not complete within requested timeout - context deadline exceeded
E1221 08:00:12.310340       1 controller.go:302] controller-runtime: manager: reconciler group  reconciler kind Endpoints: controller: endpoints: name kubernetes namespace default: Reconciler error timed out waiting for the condition
E1221 08:00:43.471573       1 controller.go:302] controller-runtime: manager: reconciler group  reconciler kind Service: controller: service: name kubernetes namespace default: Reconciler error Timeout: request did not complete within requested timeout - context deadline exceeded
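
The "events is forbidden" message above can be reproduced from outside the pod by impersonating the syncer's service account; based on that log line I would expect the answer to be "no":

$ kubectl auth can-i create events -n host-namespace-1 \
    --as=system:serviceaccount:host-namespace-1:vc-vcluster-1
no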

The vcluster container reported:

$ kubectl -n host-namespace-1 logs vcluster-1-0 -c vcluster
time="2021-12-21T07:57:24Z" level=warning msg="cgroup2 is not mounted. cgroup2 evacuation is discarded."
time="2021-12-21T07:57:24Z" level=info msg="Starting k3s v1.22.2+k3s2 (3f5774b4)"
time="2021-12-21T07:57:24Z" level=info msg="Cluster bootstrap already complete"
time="2021-12-21T07:57:24Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2021-12-21T07:57:24Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2021-12-21T07:57:24Z" level=info msg="Database tables and indexes are up to date"
time="2021-12-21T07:57:25Z" level=info msg="Kine available at unix://kine.sock"
time="2021-12-21T07:57:25Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/data/server/tls/temporary-certs --client-ca-file=/data/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --feature-gates=JobTrackingWithFinalizers=true --insecure-port=0 --kubelet-certificate-authority=/data/server/tls/server-ca.crt --kubelet-client-certificate=/data/server/tls/client-kube-apiserver.crt --kubelet-client-key=/data/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/data/server/tls/client-auth-proxy.crt --proxy-client-key-file=/data/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/data/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/server/tls/service.key --service-account-signing-key-file=/data/server/tls/service.key --service-cluster-ip-range=10.96.0.0/12 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/data/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/data/server/tls/serving-kube-apiserver.key"
Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
I1221 07:57:25.005865      44 server.go:581] external host was not specified, using 10.244.166.219
I1221 07:57:25.007461      44 server.go:175] Version: v1.22.2+k3s2
I1221 07:57:25.014307      44 shared_informer.go:240] Waiting for caches to sync for node_authorizer
I1221 07:57:25.017384      44 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1221 07:57:25.017402      44 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1221 07:57:25.026581      44 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1221 07:57:25.026598      44 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W1221 07:57:25.043534      44 genericapiserver.go:455] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
I1221 07:57:25.044441      44 instance.go:278] Using reconciler: lease
I1221 07:57:25.120415      44 rest.go:130] the default service ipfamily for this cluster is: IPv4
W1221 07:57:25.528598      44 genericapiserver.go:455] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
W1221 07:57:25.530532      44 genericapiserver.go:455] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
W1221 07:57:25.542144      44 genericapiserver.go:455] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
W1221 07:57:25.544132      44 genericapiserver.go:455] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
W1221 07:57:25.550433      44 genericapiserver.go:455] Skipping API networking.k8s.io/v1beta1 because it has no resources.
W1221 07:57:25.553912      44 genericapiserver.go:455] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W1221 07:57:25.561244      44 genericapiserver.go:455] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
W1221 07:57:25.561274      44 genericapiserver.go:455] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1221 07:57:25.563336      44 genericapiserver.go:455] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
W1221 07:57:25.563365      44 genericapiserver.go:455] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1221 07:57:25.568291      44 genericapiserver.go:455] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1221 07:57:25.570887      44 genericapiserver.go:455] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W1221 07:57:25.576123      44 genericapiserver.go:455] Skipping API apps/v1beta2 because it has no resources.
W1221 07:57:25.576155      44 genericapiserver.go:455] Skipping API apps/v1beta1 because it has no resources.
W1221 07:57:25.578571      44 genericapiserver.go:455] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
I1221 07:57:25.583688      44 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1221 07:57:25.583705      44 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W1221 07:57:25.590211      44 genericapiserver.go:455] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
time="2021-12-21T07:57:25Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/data/server/cred/controller.kubeconfig --authorization-kubeconfig=/data/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/data/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/data/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/data/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/data/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/data/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/data/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/data/server/tls/client-ca.crt --cluster-signing-legacy-unknown-key-file=/data/server/tls/client-ca.key --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/data/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/data/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/data/server/tls/service.key --use-service-account-credentials=true"
time="2021-12-21T07:57:25Z" level=info msg="Waiting for API server to become available"
time="2021-12-21T07:57:25Z" level=info msg="Node token is available at /data/server/token"
time="2021-12-21T07:57:25Z" level=info msg="To join node to cluster: k3s agent -s https://10.244.166.219:6443 -t ${NODE_TOKEN}"
time="2021-12-21T07:57:25Z" level=info msg="Wrote kubeconfig /k3s-config/kube-config.yaml"
time="2021-12-21T07:57:25Z" level=info msg="Run: k3s kubectl"
I1221 07:57:27.200541      44 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/data/server/tls/request-header-ca.crt"
I1221 07:57:27.200620      44 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/data/server/tls/client-ca.crt"
I1221 07:57:27.200975      44 secure_serving.go:266] Serving securely on 127.0.0.1:6444
I1221 07:57:27.201053      44 dynamic_serving_content.go:129] "Starting controller" name="serving-cert::/data/server/tls/serving-kube-apiserver.crt::/data/server/tls/serving-kube-apiserver.key"
I1221 07:57:27.201083      44 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1221 07:57:27.201980      44 apf_controller.go:299] Starting API Priority and Fairness config controller
I1221 07:57:27.202364      44 dynamic_serving_content.go:129] "Starting controller" name="aggregator-proxy-cert::/data/server/tls/client-auth-proxy.crt::/data/server/tls/client-auth-proxy.key"
I1221 07:57:27.202831      44 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1221 07:57:27.202843      44 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I1221 07:57:27.202869      44 apiservice_controller.go:97] Starting APIServiceRegistrationController
I1221 07:57:27.202922      44 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1221 07:57:27.202942      44 available_controller.go:491] Starting AvailableConditionController
I1221 07:57:27.202945      44 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1221 07:57:27.202969      44 autoregister_controller.go:141] Starting autoregister controller
I1221 07:57:27.202972      44 cache.go:32] Waiting for caches to sync for autoregister controller
I1221 07:57:27.202983      44 controller.go:83] Starting OpenAPI AggregationController
I1221 07:57:27.203208      44 customresource_discovery_controller.go:209] Starting DiscoveryController
I1221 07:57:27.212855      44 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/data/server/tls/client-ca.crt"
I1221 07:57:27.212904      44 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/data/server/tls/request-header-ca.crt"
I1221 07:57:27.213141      44 crdregistration_controller.go:111] Starting crd-autoregister controller
I1221 07:57:27.213147      44 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I1221 07:57:27.213171      44 controller.go:85] Starting OpenAPI controller
I1221 07:57:27.213187      44 naming_controller.go:291] Starting NamingConditionController
I1221 07:57:27.213199      44 establishing_controller.go:76] Starting EstablishingController
I1221 07:57:27.213214      44 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I1221 07:57:27.213230      44 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1221 07:57:27.213243      44 crd_finalizer.go:266] Starting CRDFinalizer
I1221 07:57:27.306315      44 apf_controller.go:304] Running API Priority and Fairness config worker
I1221 07:57:27.306762      44 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I1221 07:57:27.306809      44 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1221 07:57:27.310026      44 cache.go:39] Caches are synced for AvailableConditionController controller
I1221 07:57:27.310531      44 cache.go:39] Caches are synced for autoregister controller
I1221 07:57:27.313550      44 shared_informer.go:247] Caches are synced for crd-autoregister
I1221 07:57:27.316180      44 shared_informer.go:247] Caches are synced for node_authorizer
E1221 07:57:27.344252      44 controller.go:156] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I1221 07:57:28.201641      44 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1221 07:57:28.201706      44 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
time="2021-12-21T07:57:59Z" level=info msg="Waiting for API server to become available"
E1221 07:58:01.328597      44 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 70.586µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>
I1221 07:58:01.328962      44 trace.go:205] Trace[266908071]: "Create" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:fcd415ee-1b4b-4357-bc85-bd39853ea5b8,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:57:27.327) (total time: 34001ms):
Trace[266908071]: [34.001136143s] [34.001136143s] END
E1221 07:58:01.336658      44 storage_flowcontrol.go:120] "APF bootstrap ensurer ran into error, will retry later" err="failed ensuring suggested settings - cannot create PriorityLevelConfiguration type=suggested name=\"leader-election\" error=Timeout: request did not complete within requested timeout - context deadline exceeded"
I1221 07:58:01.341811      44 trace.go:205] Trace[2135905338]: "GuaranteedUpdate etcd3" type:*core.ConfigMap (21-Dec-2021 07:57:27.341) (total time: 34000ms):
Trace[2135905338]: [34.000274687s] [34.000274687s] END
I1221 07:58:01.342380      44 trace.go:205] Trace[1332290113]: "Update" url:/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:20114190-d04d-4ad2-bd80-2164ea6019a5,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:57:27.341) (total time: 34000ms):
Trace[1332290113]: [34.000933517s] [34.000933517s] END
E1221 07:58:01.342575      44 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 26.014µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>
E1221 07:58:01.343188      44 cluster_authentication_trust_controller.go:493] kube-system/extension-apiserver-authentication failed with : Timeout: request did not complete within requested timeout - context deadline exceeded
I1221 07:58:25.074521      44 trace.go:205] Trace[1505227348]: "Get" url:/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:4d014506-677f-48ed-8c4e-135b468fc69a,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:57:28.205) (total time: 56869ms):
Trace[1505227348]: ---"About to write a response" 56868ms (07:58:25.074)
Trace[1505227348]: [56.869042454s] [56.869042454s] END
I1221 07:58:25.099938      44 trace.go:205] Trace[1744051712]: "Get" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/system-nodes,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:0ce55f5f-b5a8-46c7-8574-0beee4c031cd,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:58:02.338) (total time: 22761ms):
Trace[1744051712]: ---"About to write a response" 22761ms (07:58:25.099)
Trace[1744051712]: [22.761874439s] [22.761874439s] END
I1221 07:58:25.100168      44 trace.go:205] Trace[1093491277]: "Get" url:/api/v1/namespaces/kube-system,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:8ba9ad76-ac6c-497e-8e01-c43487e1b937,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:58:01.352) (total time: 23747ms):
Trace[1093491277]: ---"About to write a response" 23747ms (07:58:25.100)
Trace[1093491277]: [23.747645192s] [23.747645192s] END
I1221 07:58:25.113554      44 trace.go:205] Trace[262304625]: "List etcd3" key:/clusterroles,resourceVersion:,resourceVersionMatch:,limit:0,continue: (21-Dec-2021 07:57:28.204) (total time: 56908ms):
Trace[262304625]: [56.908728556s] [56.908728556s] END
I1221 07:58:25.113633      44 trace.go:205] Trace[863404845]: "List" url:/apis/rbac.authorization.k8s.io/v1/clusterroles,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:df559101-9bf7-4a81-8b19-21417f99362d,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:57:28.204) (total time: 56908ms):
Trace[863404845]: ---"Listing from storage done" 56908ms (07:58:25.113)
Trace[863404845]: [56.908838452s] [56.908838452s] END
I1221 07:58:25.154660      44 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I1221 07:58:25.154686      44 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
time="2021-12-21T07:58:30Z" level=error msg="error while range on /registry/health : database is locked"
time="2021-12-21T07:58:30Z" level=error msg="error while range on /registry/priorityclasses/ /registry/priorityclasses/: database is locked"
time="2021-12-21T07:58:32Z" level=error msg="error while range on /registry/volumeattachments/ /registry/volumeattachments/: database is locked"
time="2021-12-21T07:58:33Z" level=error msg="error while range on /registry/controllerrevisions/ /registry/controllerrevisions/: database is locked"
time="2021-12-21T07:58:33Z" level=error msg="error while range on /registry/health : database is locked"
time="2021-12-21T07:58:33Z" level=error msg="error while range on /registry/csinodes/ /registry/csinodes/: database is locked"
time="2021-12-21T07:58:34Z" level=error msg="error while range on /registry/certificatesigningrequests/ /registry/certificatesigningrequests/: database is locked"
time="2021-12-21T07:58:34Z" level=error msg="error while range on /registry/deployments/ /registry/deployments/: database is locked"
time="2021-12-21T07:58:34Z" level=error msg="error while range on /registry/services/specs/ /registry/services/specs/: database is locked"
I1221 07:58:35.133486      44 trace.go:205] Trace[231571898]: "Create" url:/api/v1/namespaces/kube-system/configmaps,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:6f31b9fb-72ea-4b8e-a42e-47ac30cdf0a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:58:25.130) (total time: 10002ms):
Trace[231571898]: [10.002756301s] [10.002756301s] END
E1221 07:58:35.146376      44 cluster_authentication_trust_controller.go:493] kube-system/extension-apiserver-authentication failed with : Internal error occurred: resource quota evaluation timed out
time="2021-12-21T07:58:35Z" level=info msg="Waiting for API server to become available"
time="2021-12-21T07:58:36Z" level=error msg="error while range on /registry/health : database is locked"
time="2021-12-21T07:58:36Z" level=error msg="error while range on /registry/prioritylevelconfigurations/ /registry/prioritylevelconfigurations/: database is locked"
time="2021-12-21T07:58:38Z" level=error msg="error while range on /registry/health : database is locked"
time="2021-12-21T07:58:38Z" level=error msg="error while range on /registry/runtimeclasses/ /registry/runtimeclasses/: database is locked"
time="2021-12-21T07:58:38Z" level=error msg="error while range on /registry/cronjobs/ /registry/cronjobs/: database is locked"
time="2021-12-21T07:58:39Z" level=error msg="error while range on /registry/apiregistration.k8s.io/apiservices/ /registry/apiregistration.k8s.io/apiservices/: database is locked"
time="2021-12-21T07:58:39Z" level=error msg="error while range on /registry/mutatingwebhookconfigurations/ /registry/mutatingwebhookconfigurations/: database is locked"
time="2021-12-21T07:58:40Z" level=error msg="error while range on /registry/health : database is locked"
time="2021-12-21T07:58:43Z" level=error msg="error while range on /registry/health : database is locked"
time="2021-12-21T07:58:43Z" level=error msg="error while range on /registry/cronjobs/ /registry/cronjobs/: database is locked"
time="2021-12-21T07:58:43Z" level=error msg="error while range on /registry/minions/ /registry/minions/: database is locked"
time="2021-12-21T07:58:43Z" level=error msg="error while range on /registry/csidrivers/ /registry/csidrivers/: database is locked"
time="2021-12-21T07:58:44Z" level=error msg="error while range on /registry/ingress/ /registry/ingress/: database is locked"
time="2021-12-21T07:58:44Z" level=error msg="error while range on /registry/apiextensions.k8s.io/customresourcedefinitions/ /registry/apiextensions.k8s.io/customresourcedefinitions/: database is locked"
time="2021-12-21T07:58:45Z" level=error msg="error while range on /registry/namespaces/kube-system : database is locked"
E1221 07:58:45.193091      44 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{e:(*status.Status)(0xc006136320)}: rpc error: code = Unknown desc = database is locked
I1221 07:58:45.193508      44 trace.go:205] Trace[1194986341]: "Get" url:/api/v1/namespaces/kube-system,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:85830474-f54c-4626-9f8e-cb4b38f1a995,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:58:35.148) (total time: 10044ms):
Trace[1194986341]: [10.044909681s] [10.044909681s] END
I1221 07:58:45.204648      44 controller.go:611] quota admission added evaluator for: namespaces
time="2021-12-21T07:58:46Z" level=error msg="error while range on /registry/poddisruptionbudgets/ /registry/poddisruptionbudgets/: database is locked"
time="2021-12-21T07:58:46Z" level=error msg="error while range on /registry/health : database is locked"
time="2021-12-21T07:58:46Z" level=error msg="error while range on /registry/clusterroles/ /registry/clusterroles/: database is locked"
time="2021-12-21T07:58:47Z" level=error msg="error while range on /registry/clusterrolebindings/ /registry/clusterrolebindings/: database is locked"
time="2021-12-21T07:58:48Z" level=error msg="error while range on /registry/leases/ /registry/leases/: database is locked"
time="2021-12-21T07:58:48Z" level=error msg="error while range on /registry/storageclasses/ /registry/storageclasses/: database is locked"
time="2021-12-21T07:58:48Z" level=error msg="error while range on /registry/podtemplates/ /registry/podtemplates/: database is locked"
time="2021-12-21T07:58:49Z" level=error msg="error while range on /registry/health : database is locked"
time="2021-12-21T07:58:50Z" level=error msg="error while range on /registry/volumeattachments/ /registry/volumeattachments/: database is locked"
time="2021-12-21T07:58:50Z" level=error msg="error while range on /registry/clusterroles/ /registry/clusterroles/: database is locked"
time="2021-12-21T07:58:51Z" level=error msg="error while range on /registry/certificatesigningrequests/ /registry/certificatesigningrequests/: database is locked"
time="2021-12-21T07:58:51Z" level=error msg="error while range on /registry/ingressclasses/ /registry/ingressclasses/: database is locked"
time="2021-12-21T07:58:51Z" level=error msg="error while range on /registry/podsecuritypolicy/ /registry/podsecuritypolicy/: database is locked"
time="2021-12-21T07:58:52Z" level=error msg="error while range on /registry/storageclasses/ /registry/storageclasses/: database is locked"
time="2021-12-21T07:58:52Z" level=error msg="error while range on /registry/health : database is locked"
time="2021-12-21T07:58:53Z" level=error msg="error while range on /registry/priorityclasses/ /registry/priorityclasses/: database is locked"
time="2021-12-21T07:58:53Z" level=error msg="error while range on /registry/replicasets/ /registry/replicasets/: database is locked"
I1221 07:58:55.206248      44 trace.go:205] Trace[241065514]: "Create" url:/api/v1/namespaces,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:af526855-e322-4f3d-ba30-7ce8ed2db456,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:58:45.204) (total time: 10002ms):
Trace[241065514]: [10.002159726s] [10.002159726s] END
E1221 07:58:55.217778      44 cluster_authentication_trust_controller.go:493] kube-system/extension-apiserver-authentication failed with : Internal error occurred: resource quota evaluation timed out
time="2021-12-21T07:58:55Z" level=error msg="error while range on /registry/health : database is locked"
time="2021-12-21T07:58:55Z" level=error msg="error while range on /registry/persistentvolumes/ /registry/persistentvolumes/: database is locked"
time="2021-12-21T07:58:56Z" level=error msg="error while range on /registry/ingress/ /registry/ingress/: database is locked"
time="2021-12-21T07:58:56Z" level=error msg="error while range on /registry/csinodes/ /registry/csinodes/: database is locked"
time="2021-12-21T07:58:56Z" level=error msg="error while range on /registry/services/endpoints/ /registry/services/endpoints/: database is locked"
time="2021-12-21T07:58:58Z" level=error msg="error while range on /registry/health : database is locked"
time="2021-12-21T07:58:59Z" level=error msg="error while range on /registry/rolebindings/ /registry/rolebindings/: database is locked"
I1221 07:58:59.138867      44 trace.go:205] Trace[1976879118]: "GuaranteedUpdate etcd3" type:*flowcontrol.FlowSchema (21-Dec-2021 07:58:25.138) (total time: 34000ms):
Trace[1976879118]: [34.00079166s] [34.00079166s] END
E1221 07:58:59.139175      44 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 34.976µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>
I1221 07:58:59.139441      44 trace.go:205] Trace[1598096658]: "Patch" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/system-leader-election/status,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:17594060-c87d-4b2e-8cda-4e3d4b61fd22,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:58:25.137) (total time: 34001ms):
Trace[1598096658]: [34.00146837s] [34.00146837s] END
E1221 07:58:59.143026      44 apf_controller.go:333] failed to set a status.condition for FlowSchema system-leader-election: Timeout: request did not complete within requested timeout - context deadline exceeded
I1221 07:59:00.128792      44 trace.go:205] Trace[992101884]: "Get" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/system-node-high,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:d73124d3-c266-409b-8e83-e887c0e8d17d,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:58:25.121) (total time: 35007ms):
Trace[992101884]: ---"About to write a response" 35007ms (07:59:00.128)
Trace[992101884]: [35.007297569s] [35.007297569s] END
I1221 07:59:00.138034      44 trace.go:205] Trace[1230310144]: "List etcd3" key:/clusterrolebindings,resourceVersion:,resourceVersionMatch:,limit:0,continue: (21-Dec-2021 07:58:25.122) (total time: 35015ms):
Trace[1230310144]: [35.015485662s] [35.015485662s] END
I1221 07:59:00.138127      44 trace.go:205] Trace[2113164021]: "List" url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:98064983-8205-4645-a99d-4543011df295,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:58:25.122) (total time: 35015ms):
Trace[2113164021]: ---"Listing from storage done" 35015ms (07:59:00.138)
Trace[2113164021]: [35.015596741s] [35.015596741s] END
I1221 07:59:00.164859      44 trace.go:205] Trace[2011440991]: "List etcd3" key:/resourcequotas/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: (21-Dec-2021 07:58:25.132) (total time: 35032ms):
Trace[2011440991]: [35.032357376s] [35.032357376s] END
I1221 07:59:00.164958      44 trace.go:205] Trace[1754356677]: "List" url:/api/v1/namespaces/kube-system/resourcequotas,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:a66bc4b5-0668-43aa-8190-d2181ee1324f,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:58:25.132) (total time: 35032ms):
Trace[1754356677]: ---"Listing from storage done" 35032ms (07:59:00.164)
Trace[1754356677]: [35.03248186s] [35.03248186s] END
I1221 07:59:00.179630      44 trace.go:205] Trace[331297602]: "Get" url:/api/v1/namespaces/kube-system,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:9bd3f675-5044-4ba7-a75a-bbdc4a29e2c7,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:58:55.221) (total time: 4958ms):
Trace[331297602]: ---"About to write a response" 4958ms (07:59:00.179)
Trace[331297602]: [4.958312106s] [4.958312106s] END
I1221 07:59:10.184728      44 trace.go:205] Trace[1624201382]: "Create" url:/api/v1/namespaces,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:631d8b81-7577-4bd0-8eca-163bb0a4e3f2,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:59:00.182) (total time: 10001ms):
Trace[1624201382]: [10.001817272s] [10.001817272s] END
E1221 07:59:10.189461      44 controller.go:207] unable to create required kubernetes system namespace kube-public: Internal error occurred: resource quota evaluation timed out
time="2021-12-21T07:59:10Z" level=info msg="Waiting for API server to become available"
I1221 07:59:33.159861      44 trace.go:205] Trace[1026042594]: "GuaranteedUpdate etcd3" type:*flowcontrol.FlowSchema (21-Dec-2021 07:58:59.159) (total time: 34000ms):
Trace[1026042594]: [34.000316235s] [34.000316235s] END
I1221 07:59:33.160239      44 trace.go:205] Trace[1947796907]: "Patch" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/system-leader-election/status,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:d029a597-7d00-440d-9f21-f9a3db956ef0,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:58:59.159) (total time: 34000ms):
Trace[1947796907]: [34.000802398s] [34.000802398s] END
E1221 07:59:33.161180      44 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 25.364µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>
E1221 07:59:33.164332      44 apf_controller.go:333] failed to set a status.condition for FlowSchema system-leader-election: Timeout: request did not complete within requested timeout - context deadline exceeded
I1221 07:59:35.280701      44 trace.go:205] Trace[1633150192]: "GuaranteedUpdate etcd3" type:*flowcontrol.FlowSchema (21-Dec-2021 07:59:33.192) (total time: 2087ms):
Trace[1633150192]: ---"Transaction committed" 2086ms (07:59:35.280)
Trace[1633150192]: [2.087698553s] [2.087698553s] END
I1221 07:59:35.280832      44 trace.go:205] Trace[1468685679]: "Patch" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/system-leader-election/status,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:3adde48f-19ea-4d33-8dd3-5e96854f4c80,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:59:33.192) (total time: 2087ms):
Trace[1468685679]: ---"Object stored in database" 2087ms (07:59:35.280)
Trace[1468685679]: [2.087936808s] [2.087936808s] END
I1221 07:59:35.283794      44 trace.go:205] Trace[266971761]: "Get" url:/apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:7e22878d-912b-4249-90c2-704bd2e47372,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:59:00.196) (total time: 35087ms):
Trace[266971761]: [35.087489894s] [35.087489894s] END
I1221 07:59:35.284151      44 trace.go:205] Trace[868759984]: "Get" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/kube-scheduler,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:79f2de18-943e-4986-a3b4-de97a5e5a9cf,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:59:00.193) (total time: 35090ms):
Trace[868759984]: ---"About to write a response" 35090ms (07:59:35.284)
Trace[868759984]: [35.090639915s] [35.090639915s] END
I1221 07:59:35.291263      44 trace.go:205] Trace[1851720170]: "Get" url:/api/v1/namespaces/kube-node-lease,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:6ea859cc-5909-4524-8ef4-5e89891e5531,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:59:10.192) (total time: 25098ms):
Trace[1851720170]: [25.098973134s] [25.098973134s] END
I1221 07:59:35.291515      44 trace.go:205] Trace[1566411980]: "List etcd3" key:/resourcequotas/kube-public,resourceVersion:,resourceVersionMatch:,limit:0,continue: (21-Dec-2021 07:59:00.183) (total time: 35107ms):
Trace[1566411980]: [35.107896609s] [35.107896609s] END
I1221 07:59:35.291580      44 trace.go:205] Trace[888190528]: "List" url:/api/v1/namespaces/kube-public/resourcequotas,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:006a0e1c-2555-4a95-8f30-de4baa900c73,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:59:00.183) (total time: 35107ms):
Trace[888190528]: ---"Listing from storage done" 35107ms (07:59:35.291)
Trace[888190528]: [35.107975758s] [35.107975758s] END
I1221 07:59:35.291757      44 trace.go:205] Trace[7990468]: "Get" url:/api/v1/namespaces/kube-system,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:0998c515-718a-44e0-a859-239cd9874cc4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:59:00.199) (total time: 35092ms):
Trace[7990468]: ---"About to write a response" 35092ms (07:59:35.291)
Trace[7990468]: [35.092364763s] [35.092364763s] END
I1221 07:59:35.291953      44 trace.go:205] Trace[1159984762]: "Get" url:/api/v1/namespaces/host-namespace-1/services/vcluster-1,user-agent:vcluster/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:b8a658ef-ca9a-439c-9977-e4ecd5ee0882,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:59:27.874) (total time: 7417ms):
Trace[1159984762]: [7.417808191s] [7.417808191s] END
I1221 07:59:35.440398      44 controller.go:611] quota admission added evaluator for: endpoints
time="2021-12-21T07:59:46Z" level=info msg="Waiting for API server to become available"
E1221 08:00:09.326151      44 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 32.236µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>
I1221 08:00:09.326364      44 trace.go:205] Trace[1610672515]: "Create" url:/apis/rbac.authorization.k8s.io/v1/clusterroles,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:d331f90f-89c2-4bba-a154-bce63f4f91d3,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:59:35.325) (total time: 34000ms):
Trace[1610672515]: [34.000565587s] [34.000565587s] END
E1221 08:00:09.328974      44 storage_rbac.go:242] unable to reconcile clusterrole.rbac.authorization.k8s.io/system:discovery: Timeout: request did not complete within requested timeout - context deadline exceeded
I1221 08:00:09.440058      44 trace.go:205] Trace[1884879689]: "GuaranteedUpdate etcd3" type:*core.Endpoints (21-Dec-2021 07:59:35.439) (total time: 34000ms):
Trace[1884879689]: [34.000751397s] [34.000751397s] END
I1221 08:00:09.440670      44 trace.go:205] Trace[1190751915]: "Update" url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:vcluster/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:7ad5b26b-9182-49ac-9ccc-9bb47a9f0e03,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:59:35.438) (total time: 34001ms):
Trace[1190751915]: [34.001919456s] [34.001919456s] END
E1221 08:00:09.440726      44 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 48.24µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>
I1221 08:00:09.457869      44 trace.go:205] Trace[1496511769]: "Delete" url:/api/v1/namespaces/default/services/kubernetes,user-agent:vcluster/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:644d8eb7-1fdd-4436-93c7-8736b9091241,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:59:35.456) (total time: 34000ms):
Trace[1496511769]: [34.000805524s] [34.000805524s] END
E1221 08:00:09.458455      44 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 28.724µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>
time="2021-12-21T08:00:10Z" level=error msg="error in txn: context canceled"
I1221 08:00:10.291383      44 trace.go:205] Trace[799806424]: "Get" url:/apis/rbac.authorization.k8s.io/v1/clusterroles/system:monitoring,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:aa22bf6a-dc69-4634-9c23-dfa63b954e8e,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 08:00:09.329) (total time: 961ms):
Trace[799806424]: [961.726996ms] [961.726996ms] END
I1221 08:00:10.308579      44 trace.go:205] Trace[1659372905]: "Get" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/global-default,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:af038c87-4295-4034-a34d-d3cb40186acb,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 07:59:35.326) (total time: 34982ms):
Trace[1659372905]: ---"About to write a response" 34982ms (08:00:10.308)
Trace[1659372905]: [34.982157628s] [34.982157628s] END
I1221 08:00:10.308587      44 trace.go:205] Trace[1839725490]: "GuaranteedUpdate etcd3" type:*core.Endpoints (21-Dec-2021 08:00:09.449) (total time: 858ms):
Trace[1839725490]: ---"Transaction committed" 857ms (08:00:10.308)
Trace[1839725490]: [858.707496ms] [858.707496ms] END
I1221 08:00:10.308784      44 trace.go:205] Trace[2029415001]: "Update" url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:vcluster/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:1ee04a29-6185-4040-892f-c8227fd69b03,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 08:00:09.449) (total time: 859ms):
Trace[2029415001]: ---"Object stored in database" 858ms (08:00:10.308)
Trace[2029415001]: [859.077861ms] [859.077861ms] END
time="2021-12-21T08:00:20Z" level=info msg="Waiting for API server to become available"
E1221 08:00:43.467353      44 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 20.894µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>
I1221 08:00:43.467500      44 trace.go:205] Trace[23211136]: "Delete" url:/api/v1/namespaces/default/services/kubernetes,user-agent:vcluster/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:3f5e915a-03d7-4b21-aed0-8360392ddfa7,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 08:00:09.466) (total time: 34000ms):
Trace[23211136]: [34.000700243s] [34.000700243s] END
I1221 08:00:44.302370      44 trace.go:205] Trace[860719583]: "Create" url:/apis/rbac.authorization.k8s.io/v1/clusterroles,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:bddaabea-d02b-4f11-a77f-cf40c92eb162,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 08:00:10.295) (total time: 34007ms):
Trace[860719583]: [34.007040803s] [34.007040803s] END
E1221 08:00:44.303024      44 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 14.191µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>
E1221 08:00:44.303202      44 storage_rbac.go:242] unable to reconcile clusterrole.rbac.authorization.k8s.io/system:monitoring: Timeout: request did not complete within requested timeout - context deadline exceeded
E1221 08:00:44.953450      44 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
E1221 08:00:44.953540      44 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
I1221 08:00:44.953706      44 trace.go:205] Trace[1547253266]: "GuaranteedUpdate etcd3" type:*core.Endpoints (21-Dec-2021 08:00:12.323) (total time: 32630ms):
Trace[1547253266]: [32.630356139s] [32.630356139s] END
E1221 08:00:44.953745      44 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 191.776µs, panicked: false, err: context canceled, panic-reason: <nil>
E1221 08:00:44.954310      44 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
E1221 08:00:44.954349      44 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 12.818µs, panicked: false, err: context canceled, panic-reason: <nil>
E1221 08:00:44.956321      44 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
I1221 08:00:44.957530      44 trace.go:205] Trace[1113827291]: "Update" url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:vcluster/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:45941af2-45d0-48ce-b239-9faa4a6a8ec5,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 08:00:12.323) (total time: 32634ms):
Trace[1113827291]: [32.634257251s] [32.634257251s] END
E1221 08:00:44.958034      44 timeout.go:135] post-timeout activity - time-elapsed: 4.635371ms, PUT "/api/v1/namespaces/default/endpoints/kubernetes" result: <nil>
E1221 08:00:44.960332      44 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E1221 08:00:44.963670      44 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
I1221 08:00:44.964829      44 trace.go:205] Trace[2133929622]: "Delete" url:/api/v1/namespaces/default/services/kubernetes,user-agent:vcluster/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:d16ec85d-f0ca-4afb-b528-c7862bd87cb0,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 08:00:43.483) (total time: 1481ms):
Trace[2133929622]: [1.481701186s] [1.481701186s] END
E1221 08:00:44.964957      44 timeout.go:135] post-timeout activity - time-elapsed: 10.663131ms, DELETE "/api/v1/namespaces/default/services/kubernetes" result: <nil>
time="2021-12-21T08:00:54Z" level=info msg="Waiting for API server to become available"
E1221 08:01:10.310246      44 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
I1221 08:01:10.310479      44 trace.go:205] Trace[1534247374]: "Get" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations/system,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:914dad6c-64ee-4a82-8283-75f21f2de992,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 08:00:10.309) (total time: 60000ms):
Trace[1534247374]: [1m0.000953907s] [1m0.000953907s] END
E1221 08:01:10.311004      44 wrap.go:54] timeout or abort while handling: method=GET URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations/system" audit-ID="914dad6c-64ee-4a82-8283-75f21f2de992"
E1221 08:01:10.311216      44 timeout.go:135] post-timeout activity - time-elapsed: 175.947µs, GET "/apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations/system" result: <nil>
E1221 08:01:10.312570      44 storage_flowcontrol.go:120] "APF bootstrap ensurer ran into error, will retry later" err="failed ensuring suggested settings - failed to retrieve PriorityLevelConfiguration type=suggested name=\"system\" error=Get \"https://127.0.0.1:6444/apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations/system\": stream error: stream ID 331; INTERNAL_ERROR"
time="2021-12-21T08:01:20Z" level=error msg="error in txn: context deadline exceeded"
I1221 08:01:20.501011      44 trace.go:205] Trace[522508946]: "Get" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/system-nodes,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:ff43000b-87c4-4db8-9c67-3dad621d03f9,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 08:01:10.337) (total time: 10163ms):
Trace[522508946]: ---"About to write a response" 10162ms (08:01:20.500)
Trace[522508946]: [10.163023292s] [10.163023292s] END
I1221 08:01:20.509632      44 trace.go:205] Trace[101340473]: "Get" url:/apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:5a47d28c-58d4-44ce-9257-453c13e72fe9,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 08:00:44.308) (total time: 36200ms):
Trace[101340473]: [36.200677194s] [36.200677194s] END
I1221 08:01:20.528072      44 trace.go:205] Trace[230583601]: "Get" url:/api/v1/namespaces/kube-system,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:1d0b0a6e-87d5-4ffa-8e38-7c83023ac0f2,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 08:00:35.346) (total time: 45181ms):
Trace[230583601]: ---"About to write a response" 45181ms (08:01:20.527)
Trace[230583601]: [45.181313782s] [45.181313782s] END
time="2021-12-21T08:01:30Z" level=info msg="Waiting for API server to become available"
I1221 08:01:55.447709      44 trace.go:205] Trace[2048618164]: "Get" url:/apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:4ea30489-702e-46ca-b7a0-96d7bd754c74,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 08:01:20.573) (total time: 34874ms):
Trace[2048618164]: [34.874567824s] [34.874567824s] END
I1221 08:01:55.491835      44 trace.go:205] Trace[821558857]: "Get" url:/api/v1/namespaces/kube-public,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:a7256499-ec45-4dfb-8fb4-1a0088df3ab5,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 08:01:20.529) (total time: 34962ms):
Trace[821558857]: [34.962656005s] [34.962656005s] END
I1221 08:01:55.492275      44 trace.go:205] Trace[138717438]: "Get" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/system-leader-election,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:ffa919c5-5a1d-4b6a-9453-bc44008ca5c0,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 08:01:20.573) (total time: 34918ms):
Trace[138717438]: ---"About to write a response" 34918ms (08:01:55.492)
Trace[138717438]: [34.918681389s] [34.918681389s] END
time="2021-12-21T08:02:06Z" level=info msg="Waiting for API server to become available"
E1221 08:02:29.531530      44 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 35.787µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>
I1221 08:02:29.531808      44 trace.go:205] Trace[1190308157]: "Create" url:/apis/rbac.authorization.k8s.io/v1/clusterroles,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:e5ca4219-5f96-4abe-983e-af8865005802,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 08:01:55.526) (total time: 34004ms):
Trace[1190308157]: [34.004842949s] [34.004842949s] END
E1221 08:02:29.535554      44 storage_rbac.go:242] unable to reconcile clusterrole.rbac.authorization.k8s.io/admin: Timeout: request did not complete within requested timeout - context deadline exceeded
time="2021-12-21T08:02:30Z" level=error msg="error in txn: context deadline exceeded"
I1221 08:02:30.513419      44 trace.go:205] Trace[669711938]: "Get" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/kube-system-service-accounts,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:e355f4a7-1465-4cb9-b06e-01858e8719a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 08:01:55.531) (total time: 34982ms):
Trace[669711938]: ---"About to write a response" 34982ms (08:02:30.513)
Trace[669711938]: [34.982226698s] [34.982226698s] END
I1221 08:02:30.529329      44 trace.go:205] Trace[1838104877]: "Get" url:/api/v1/namespaces/kube-node-lease,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:4b5a0ac5-c23b-425c-a840-62c785afc43e,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 08:01:55.533) (total time: 34996ms):
Trace[1838104877]: ---"About to write a response" 34995ms (08:02:30.529)
Trace[1838104877]: [34.996045722s] [34.996045722s] END
I1221 08:02:30.544468      44 trace.go:205] Trace[7304482]: "Get" url:/apis/rbac.authorization.k8s.io/v1/clusterroles/edit,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:7713b925-9e5c-4848-b1f0-96e9a9d77ce7,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 08:02:29.536) (total time: 1007ms):
Trace[7304482]: [1.007952264s] [1.007952264s] END
time="2021-12-21T08:02:41Z" level=info msg="Waiting for API server to become available"
E1221 08:03:04.597041      44 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 47.026µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>
I1221 08:03:04.597500      44 trace.go:205] Trace[1722329386]: "Create" url:/apis/rbac.authorization.k8s.io/v1/clusterroles,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:28268a44-a225-4278-ab55-0da2670c6f01,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 08:02:30.595) (total time: 34002ms):
Trace[1722329386]: [34.002237593s] [34.002237593s] END
E1221 08:03:04.607593      44 storage_rbac.go:242] unable to reconcile clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin: Timeout: request did not complete within requested timeout - context deadline exceeded
I1221 08:03:04.616103      44 trace.go:205] Trace[2026119490]: "GuaranteedUpdate etcd3" type:*flowcontrol.FlowSchema (21-Dec-2021 08:02:30.615) (total time: 34000ms):
Trace[2026119490]: [34.00081917s] [34.00081917s] END
I1221 08:03:04.616498      44 trace.go:205] Trace[534794884]: "Patch" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/kube-scheduler/status,user-agent:k3s/v1.22.2+k3s2 (linux/amd64) kubernetes/3f5774b,audit-id:a47e666d-7e3b-48f9-a858-1b52cf3b8b2e,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (21-Dec-2021 08:02:30.615) (total time: 34001ms):
Trace[534794884]: [34.001283022s] [34.001283022s] END
E1221 08:03:04.616638      44 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 15.949µs, panicked: false, err: context deadline exceeded, panic-reason: <nil>
time="2021-12-21T08:03:15Z" level=info msg="Waiting for API server to become available"
E1221 08:03:30.616409      44 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
E1221 08:03:30.616615      44 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
E1221 08:03:30.617094      44 storage_flowcontrol.go:120] "APF bootstrap ensurer ran into error, will retry later" err="failed ensuring suggested settings - failed to retrieve PriorityLevelConfiguration type=suggested name=\"global-default\" error=the server was unable to return a response in the time allotted, but may still be processing the request (get prioritylevelconfigurations.flowcontrol.apiserver.k8s.io global-default)"
F1221 08:03:30.617130      44 hooks.go:202] PostStartHook "priority-and-fairness-config-producer" failed: unable to initialize APF bootstrap configuration
panic: F1221 08:03:30.617130      44 hooks.go:202] PostStartHook "priority-and-fairness-config-producer" failed: unable to initialize APF bootstrap configuration


goroutine 4119 [running]:
github.com/rancher/k3s/vendor/k8s.io/klog/v2.(*loggingT).output(0x80e63a0, 0xc000000003, 0x0, 0x0, 0xc0004ca460, 0x0, 0x6985b61, 0x8, 0xca, 0x0)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/v2/klog.go:968 +0x7c5
github.com/rancher/k3s/vendor/k8s.io/klog/v2.(*loggingT).printf(0x80e63a0, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x4f2266a, 0x1b, 0xc006c2f5a0, 0x2, ...)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/v2/klog.go:753 +0x19a
github.com/rancher/k3s/vendor/k8s.io/klog/v2.Fatalf(...)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/v2/klog.go:1493
github.com/rancher/k3s/vendor/k8s.io/apiserver/pkg/server.runPostStartHook(0x4f68bb3, 0x25, 0x5287d48, 0xc000bde580, 0xc4e, 0xc00117c840, 0xc000c766c0, 0xc000edf5c0)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/apiserver/pkg/server/hooks.go:202 +0x1c9
created by github.com/rancher/k3s/vendor/k8s.io/apiserver/pkg/server.(*GenericAPIServer).RunPostStartHooks
	/go/src/github.com/rancher/k3s/vendor/k8s.io/apiserver/pkg/server/hooks.go:165 +0x145

Could you please help me resolve this problem? Thanks a lot!

@FabianKramm
Member

@pengtaoman thanks for reporting this issue! This looks like a problem in k3s where the SQLite database is locked. I would recommend storing the vcluster data in a different datastore, such as MySQL, PostgreSQL, or etcd. You could also try deploying k0s or vanilla k8s instead of k3s, which might work for you.
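
For anyone who hits the same "database is locked" errors, below is a minimal sketch of pointing the embedded k3s at an external datastore instead of the default SQLite file. It assumes the vcluster Helm chart forwards extra k3s server flags through a vcluster.extraArgs values field and that vcluster create accepts custom values via -f; the MySQL host and credentials are placeholders, so adapt everything to your environment.

# values.yaml (sketch; vcluster.extraArgs is assumed to forward extra
# flags to the embedded k3s server)
vcluster:
  extraArgs:
    # k3s's --datastore-endpoint switches the datastore from the bundled
    # SQLite file to an external database; host and credentials below are
    # placeholders
    - --datastore-endpoint=mysql://username:password@tcp(mysql.example.com:3306)/k3s

# create the vcluster with those values (names taken from the logs above)
vcluster create vcluster-1 -n host-namespace-1 -f values.yaml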

@pengtaoman
Author

I upgraded the vcluster CLI to v0.5.0-beta.0 and ran vcluster create my-vcluster -n my-vcluster --distro k8s --kubernetes-version v1.20.13; it worked, thank you.
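
For reference, the full sequence as a sketch (the download URL follows the naming of the loft-sh/vcluster GitHub release assets and is an assumption; verify it against the releases page):

# fetch the v0.5.0-beta.0 CLI for linux/amd64 (asset name assumed)
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/download/v0.5.0-beta.0/vcluster-linux-amd64"
chmod +x vcluster && sudo mv vcluster /usr/local/bin/

# create a vcluster using the vanilla k8s distro, pinned to the host's version
vcluster create my-vcluster -n my-vcluster --distro k8s --kubernetes-version v1.20.13

# connect to the new vcluster
vcluster connect my-vcluster -n my-vcluster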
