
multiple /etc/hosts entries created for control-plane.minikube.internal #11052

Closed
EdVinyard opened this issue Apr 10, 2021 · 4 comments

@EdVinyard
Contributor

Steps to reproduce the issue:

I've tried this on both macOS using the hyperkit driver and Ubuntu 18.04 using the "none" driver.

  1. start Minikube
  2. switch to a different network
  3. start Minikube

Multiple /etc/hosts entries are created for control-plane.minikube.internal. After using the "none" driver for some time, my /etc/hosts has accumulated quite a few of these entries.
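
For illustration only (this command sequence is mine, not from the report, and assumes the "none" driver on a Linux host):

$ sudo minikube start --driver=none    # on the first network
$ # ...switch to a different network...
$ sudo minikube start --driver=none    # on the second network

After the second start, /etc/hosts ends up with one control-plane.minikube.internal line per address the host has had.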

Full output of failed command:

$ cat /etc/hosts
127.0.0.1	localhost
127.0.1.1	myhostname

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.0.1	host.minikube.internal
192.168.50.139	control-plane.minikube.internal
10.8.0.22	control-plane.minikube.internal
... many more ...
10.8.0.26	control-plane.minikube.internal
192.168.50.117	control-plane.minikube.internal
10.8.0.5	control-plane.minikube.internal
My colleague @iguanito and I noticed this today while trying to understand why his Minikube was failing to start when moving from one network to another. We don't know if this is the whole cause, but it certainly doesn't seem to be the intended behavior.

We think this is due to a tiny error at machine/start.go:362

script := fmt.Sprintf(`{ grep -v '\t%s$' /etc/hosts; echo "%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts`, name, record)

As written, the grep expression never matches anything: inside the single quotes, \t is passed to grep as a literal backslash and "t", which grep's basic regular expressions do not treat as a tab, so a new line is appended to the file every time. There are several ways to fix it, but I had success adding a $ before the expression to enable Bash ANSI-C quoting:

script := fmt.Sprintf(`{ grep -v $'\t%s$' /etc/hosts; echo "%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts`, name, record)
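
For illustration (this check isn't from the report; the sample file under /tmp and its contents are made up, and depending on the grep version the first call may also print a warning about the stray backslash):

$ printf '10.8.0.5\tcontrol-plane.minikube.internal\n' > /tmp/hosts.sample
$ grep -c '\tcontrol-plane.minikube.internal$' /tmp/hosts.sample
0
$ grep -c $'\tcontrol-plane.minikube.internal$' /tmp/hosts.sample
1

With plain single quotes, grep receives the two characters \ and t and can never match the tab that separates the IP from the hostname; with $'...', bash expands \t to a real tab before grep runs, so the stale entry is filtered out as intended.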

If there's a better place for this discussion, or I can help with the fix (I have a rough test and fix locally), just point me in the right direction.

@afbjorklund
Collaborator

afbjorklund commented Apr 10, 2021

I had success adding a $ before the expression

Sounds like a simple fix; the script is hardcoded to use /bin/bash anyway, so no need to worry about portability.
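
For context (an illustration, not from the thread, and assuming dash is available as a stand-in for a strictly POSIX sh): $'...' quoting is a bash extension, which is why the hardcoded /bin/bash matters here.

$ bash -c "printf '%s' \$'\\t' | od -c"
0000000  \t
0000001
$ dash -c "printf '%s' \$'\\t' | od -c"
0000000   $   \   t
0000003

bash expands $'\t' to a single tab byte, while a plain POSIX shell leaves the $ alone and passes the backslash and t through literally.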

Can you do a PR?

afbjorklund added the kind/bug label on Apr 10, 2021
EdVinyard pushed a commit to EdVinyard/minikube that referenced this issue on Apr 12, 2021
@EdVinyard
Contributor Author

EdVinyard commented Apr 12, 2021

I've opened PR #11081, and I'm working on the contributor agreement.

spowelljr added the priority/important-soon label on Apr 19, 2021
prezha pushed a commit to prezha/minikube that referenced this issue on Apr 19, 2021
spowelljr added the priority/important-longterm label and removed the priority/important-soon label on Jun 14, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Sep 12, 2021
@sharifelgamal
Collaborator

This seems to be fixed now, correct? I'll go ahead and close this issue. If something comes up, feel free to reopen.
