
Passing a CIDR to --subnet panics on retries #15516

Closed
spowelljr opened this issue Dec 14, 2022 · 5 comments
Labels

  • area/networking (networking issues)
  • lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed.)
  • triage/discuss (Items for discussion)

Comments

@spowelljr (Member) commented Dec 14, 2022

Reproduce:

$ minikube start --driver docker --subnet 192.168.49.0/1
😄  minikube v1.28.0 on Darwin 13.1 (arm64)
✨  Using the docker driver based on user configuration
📌  Using Docker Desktop driver with root privileges
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=4000MB) .../ panic: runtime error: index out of range [1] with length 0

goroutine 232 [running]:
k8s.io/minikube/pkg/network.FreeSubnet({0x14000c18240, 0xe}, 0x9, 0x14)
	/Users/powellsteven/repo/minikube/pkg/network/network.go:256 +0x644
k8s.io/minikube/pkg/drivers/kic/oci.CreateNetwork({0x14000c18250, 0x6}, {0x14000c18208, 0x8}, {0x14000c18240, 0xe})
	/Users/powellsteven/repo/minikube/pkg/drivers/kic/oci/network_create.go:92 +0x4dc
k8s.io/minikube/pkg/drivers/kic.(*Driver).Create(0x14000edc580)
	/Users/powellsteven/repo/minikube/pkg/drivers/kic/kic.go:95 +0x350
k8s.io/minikube/pkg/minikube/machine.(*LocalClient).Create(0x14000f04880, 0x14000ed2480)
	/Users/powellsteven/repo/minikube/pkg/minikube/machine/client.go:242 +0x4d4
k8s.io/minikube/pkg/minikube/machine.timedCreateHost.func2()
	/Users/powellsteven/repo/minikube/pkg/minikube/machine/start.go:192 +0x3c
created by k8s.io/minikube/pkg/minikube/machine.timedCreateHost
	/Users/powellsteven/repo/minikube/pkg/minikube/machine/start.go:191 +0x144

Root cause:
https://github.com/kubernetes/minikube/blob/master/pkg/network/network.go#L253
If we pass a CIDR to --subnet, net.ParseIP(currSubnet).To4() returns nil, and nextSubnet[1] += byte(step) then panics with the index-out-of-range error above.
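
For illustration, here's a minimal standalone sketch of the failure mode (the variable names mirror the trace, this is not minikube's actual code):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// A CIDR, not a bare IP: this is what --subnet receives in the repro.
	currSubnet := "192.168.49.0/24"

	// net.ParseIP rejects the "/24" suffix and returns nil; calling To4()
	// on a nil net.IP is legal and also yields nil.
	nextSubnet := net.ParseIP(currSubnet).To4()
	fmt.Println(nextSubnet) // <nil>

	// Indexing the nil (length-0) slice panics:
	// panic: runtime error: index out of range [1] with length 0
	nextSubnet[1] += byte(9)
}
```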

Solution:
One fix is to do the same as inspect (https://github.com/kubernetes/minikube/blob/master/pkg/network/network.go#L129): it tries ParseCIDR first and falls back to ParseIP if that fails.
If we do use ParseCIDR, we should likely keep the user's original mask, so this will require a little extra logic.
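
A minimal sketch of that approach, using a hypothetical parseSubnet helper (not minikube's actual API): ParseCIDR runs first so the user's mask survives, with ParseIP as the fallback for bare addresses.

```go
package main

import (
	"fmt"
	"net"
)

// parseSubnet accepts either a bare IPv4 address ("192.168.49.0") or a
// CIDR ("192.168.49.0/24"). It returns the 4-byte address plus the prefix
// length, so later retries can keep the user's original mask.
func parseSubnet(addr string) (net.IP, int, error) {
	if ip, ipnet, err := net.ParseCIDR(addr); err == nil {
		if v4 := ip.To4(); v4 != nil {
			ones, _ := ipnet.Mask.Size()
			return v4, ones, nil
		}
	}
	ip := net.ParseIP(addr)
	if ip == nil || ip.To4() == nil {
		return nil, 0, fmt.Errorf("%q is neither an IPv4 address nor a CIDR", addr)
	}
	return ip.To4(), 24, nil // assumed default mask when none is given
}

func main() {
	ip, prefix, err := parseSubnet("192.168.49.0/24")
	fmt.Println(ip, prefix, err) // 192.168.49.0 24 <nil>
}
```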

The bigger question is whether, when a user passes us a specific subnet, we should increment the IP and retry at all. If a user is passing a subnet it's likely for a good reason, so ignoring their value and incrementing seems like the wrong thing to do.

I propose that if no subnet is passed, incrementing and retrying is fine, but if a user passes a subnet, we try that value and fail outright if it's unavailable. However, changing this logic could break existing workflows, so another option is to add a --subnet-retry flag that defaults to true to stay backwards compatible. A sketch of this policy follows.
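
A rough sketch of that policy; the flag and helper names here are hypothetical, for discussion only:

```go
package main

import (
	"fmt"
	"net"
)

const maxAttempts = 20 // illustrative cap, mirroring the tries argument visible in the trace

// nextSubnet bumps one octet by step, loosely imitating FreeSubnet's retry.
func nextSubnet(cidr string, step byte) string {
	ip, ipnet, _ := net.ParseCIDR(cidr)
	v4 := ip.To4()
	v4[2] += step
	ones, _ := ipnet.Mask.Size()
	return fmt.Sprintf("%s/%d", v4, ones)
}

// pickSubnet tries the user's subnet first. With subnetRetry=false, an
// explicit --subnet that is unavailable becomes a hard failure instead of
// being silently incremented.
func pickSubnet(userSubnet string, subnetRetry bool, tryCreate func(string) error) (string, error) {
	subnet := userSubnet
	if subnet == "" {
		subnet = "192.168.49.0/24" // hypothetical default
	}
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err := tryCreate(subnet); err == nil {
			return subnet, nil
		} else if userSubnet != "" && !subnetRetry {
			return "", fmt.Errorf("requested subnet %s is unavailable: %w", subnet, err)
		}
		subnet = nextSubnet(subnet, 1)
	}
	return "", fmt.Errorf("no free subnet found after %d attempts", maxAttempts)
}

func main() {
	inUse := map[string]bool{"192.168.49.0/24": true}
	tryCreate := func(s string) error {
		if inUse[s] {
			return fmt.Errorf("subnet in use")
		}
		return nil
	}
	fmt.Println(pickSubnet("192.168.49.0/24", true, tryCreate))  // 192.168.50.0/24 <nil>
	fmt.Println(pickSubnet("192.168.49.0/24", false, tryCreate)) // hard failure
}
```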

@spowelljr added the area/networking and triage/discuss labels on Dec 14, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Mar 14, 2023
@medyagh (Member) commented Mar 29, 2023

I guess we could warn the user that we did not respect their passed subnet and tried the next closest available subnet.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Apr 28, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

[quotes @k8s-triage-robot's /close not-planned comment above]

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) on May 28, 2023