Can not run on wsl2 #537
Comments
I'm not sure which kernel is shipping with the WSL2 preview (no time to play around with it yet), but I wouldn't file any issues about it until it reaches at least RC status. The error chain actually starts well before flannel is invoked, and may be due to either a) iptables not being installed at all (hard to tell, since you didn't post installation logs) or b) the kernel.
Digging around, it looks like the main blocker here is that VXLAN is not supported/enabled in the kernel that ships with WSL2. I played around with a manual deployment of Flannel set to … There's an issue raised in WSL about the kernel flags required to enable it, and I'd be glad to take another crack at it once it's resolved: microsoft/WSL#4165
This is because WSL2 does not support VXLAN.
Issue to track the changes needed for WSL2: microsoft/WSL#4203
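The missing-VXLAN diagnosis above can be verified from inside the distro. A minimal sketch, assuming the kernel exposes its build configuration at /proc/config.gz (the stock WSL2 kernel does):

```shell
#!/bin/sh
# Report whether the running kernel has VXLAN support compiled in.
# Prints "yes", "no", or "unknown" (if the config is not exposed).
vxlan_check() {
    if [ -r /proc/config.gz ]; then
        if zcat /proc/config.gz | grep -qE '^CONFIG_VXLAN=(y|m)'; then
            echo yes
        else
            echo no
        fi
    else
        echo unknown
    fi
}

vxlan_check
```

If this prints "no", flannel's default VXLAN backend cannot work on that kernel, regardless of how k3s itself is installed.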
This worked out of the box for me today with a new WSL2 installation. |
How did you install k3s? The latest script needs systemd, and WSL has no systemd.
Hmm, not sure. I just followed the instructions. I haven't used this since, so I'm not sure whether systemd is a new requirement. Also, I had systemd running on my WSL setup as well, using some hacks.
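For distros without systemd, the install script can be told to skip service setup so the server is launched by hand. A sketch: the `INSTALL_K3S_SKIP_ENABLE`/`INSTALL_K3S_SKIP_START` variables are documented knobs of the official install script, but the overall flow is illustrative, not an endorsed WSL2 deployment:

```shell
#!/bin/sh
# Install k3s without relying on systemd: fetch the binary via the
# official script but skip creating/starting the service, then run
# the server in the foreground yourself.
install_k3s_no_systemd() {
    curl -sfL https://get.k3s.io |
        INSTALL_K3S_SKIP_ENABLE=true INSTALL_K3S_SKIP_START=true sh -
    # No unit file was enabled; launch the server directly instead.
    sudo k3s server --docker
}

# Not run automatically; call install_k3s_no_systemd when ready.
```

Note this only sidesteps the systemd question; the VXLAN kernel limitation discussed above still applies.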
Rancher Desktop is using k3s on WSL2, so closing this out. |
Describe the bug
flannel exited: operation not supported
To Reproduce
./k3s server --no-deploy traefik --docker
E0615 14:35:20.549744 3108 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"desktop-c8b2mig.15a84bb6f830fa44", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"desktop-c8b2mig", UID:"desktop-c8b2mig", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node desktop-c8b2mig status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"desktop-c8b2mig"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf39424a06326a44, ext:4161607301, loc:(*time.Location)(0x5f955a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf39424a0af1e4c4, ext:4241264901, loc:(*time.Location)(0x5f955a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
W0615 14:35:21.227285 3108 lease.go:222] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
E0615 14:35:21.435765 3108 proxier.go:696] Failed to ensure that filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error appending rule: exit status 1: iptables: No chain/target/match by that name.
E0615 14:35:21.442849 3108 proxier.go:696] Failed to ensure that filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error appending rule: exit status 1: iptables: No chain/target/match by that name.
INFO[2019-06-15T14:35:21.961814600+08:00] waiting for node desktop-c8b2mig CIDR not assigned yet
W0615 14:35:21.978664 3108 controllermanager.go:445] Skipping "root-ca-cert-publisher"
W0615 14:35:21.982211 3108 controllermanager.go:445] Skipping "csrsigning"
INFO[2019-06-15T14:35:23.971895600+08:00] waiting for node desktop-c8b2mig CIDR not assigned yet
INFO[2019-06-15T14:35:25.981412200+08:00] waiting for node desktop-c8b2mig CIDR not assigned yet
INFO[2019-06-15T14:35:27.991427100+08:00] waiting for node desktop-c8b2mig CIDR not assigned yet
INFO[2019-06-15T14:35:30.001619000+08:00] waiting for node desktop-c8b2mig CIDR not assigned yet
INFO[2019-06-15T14:35:32.008877100+08:00] waiting for node desktop-c8b2mig CIDR not assigned yet
E0615 14:35:32.470887 3108 resource_quota_controller.go:171] initial monitor sync has error: [couldn't start monitor for resource "k3s.cattle.io/v1, Resource=helmcharts": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=helmcharts", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=listenerconfigs": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=listenerconfigs", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=addons": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=addons"]
W0615 14:35:32.587899 3108 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="desktop-c8b2mig" does not exist
E0615 14:35:32.591357 3108 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
E0615 14:35:32.591360 3108 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0615 14:35:33.188155 3108 node_lifecycle_controller.go:833] Missing timestamp for Node desktop-c8b2mig. Assuming now as a timestamp.
E0615 14:35:33.301886 3108 proxier.go:696] Failed to ensure that filter chain INPUT jumps to KUBE-EXTERNAL-SERVICES: error appending rule: exit status 1: iptables: No chain/target/match by that name.
E0615 14:35:34.123537 3108 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "k3s.cattle.io/v1, Resource=helmcharts": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=helmcharts", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=listenerconfigs": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=listenerconfigs", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=addons": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=addons", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"]
FATA[2019-06-15T14:35:35.029360700+08:00] flannel exited: operation not supported
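The fatal `flannel exited: operation not supported` line is flannel's default VXLAN backend hitting the kernel limitation discussed above. Later k3s releases expose a `--flannel-backend` flag that avoids VXLAN entirely; a hedged sketch (verify the flag exists on your version with `k3s server --help`):

```shell
#!/bin/sh
# Hypothetical workarounds, assuming a k3s release that supports
# the --flannel-backend flag.
start_without_vxlan() {
    # host-gw routes between nodes directly and needs no VXLAN;
    # on a single-node WSL2 setup this avoids the fatal flannel exit.
    ./k3s server --no-deploy traefik --docker --flannel-backend=host-gw
}

start_without_flannel() {
    # Or disable flannel entirely and install a different CNI plugin.
    ./k3s server --no-deploy traefik --docker --flannel-backend=none
}

# Neither is run automatically; pick one and call it yourself.
```

Either option leaves the rest of the reproduction command from above unchanged.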