VM keeps changing its IP address #8

Closed · Peeja opened this issue Jan 15, 2014 · 6 comments · Fixed by #35

Comments

Peeja commented Jan 15, 2014

I'm going in circles with this one.

I bring up a fresh VM and run $(dvm env). I also check what that exports:

❯❯❯ dvm env
export DOCKER_HOST=tcp://192.168.42.43:4243

If I dvm ssh and check ifconfig, sure enough, that's the IP address.

eth1      Link encap:Ethernet  HWaddr 08:00:27:6B:6F:FF
          inet addr:192.168.42.43  Bcast:192.168.42.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe6b:6fff/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1388 (1.3 KiB)  TX bytes:1838 (1.7 KiB)

And docker commands work fine. Great.

Then, a little while later, docker commands stop working. I can't ping 192.168.42.43 anymore. So I dvm ssh back in:

eth1      Link encap:Ethernet  HWaddr 08:00:27:6B:6F:FF
          inet addr:192.168.56.102  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe6b:6fff/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2672 errors:0 dropped:0 overruns:0 frame:0
          TX packets:350 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3838330 (3.6 MiB)  TX bytes:26060 (25.4 KiB)

It's moved to 192.168.56.102.

dvm reload correctly resets the VM's IP address (along with the entire machine).

Any clue what could be going on here? I haven't found a pattern to my activity that could be causing it.
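
(A rough way to catch the flip as it happens, from the host side; this is a diagnostic sketch added for illustration, using the 192.168.42.43 address from above.)

# Poll the VM's address and log the moment it stops answering.
while ping -c 1 192.168.42.43 >/dev/null 2>&1; do
  sleep 10
done
echo "VM stopped answering at $(date)"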

jopecko commented Jan 16, 2014

@Peeja I was experiencing the same issue recently. I haven't had a chance to determine the root cause, however, I have a workaround that allows me to continue working without this issue recurring. If I export the DOCKER_IP env var with an address in the 24-bit block, i.e., 10.0.0.0 - 10.255.255.255, before invoking dvm up, the IP address on my VM remains stable and available.
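
A minimal sketch of that workaround, assuming dvm reads DOCKER_IP from the environment as described (the 10.0.5.2 address is only an example):

export DOCKER_IP=10.0.5.2   # any address in 10.0.0.0 - 10.255.255.255
dvm up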

dotkrnl commented Jan 16, 2014

I'm experiencing the same issue.

corporate-gadfly commented

Similar to #4. Do any of you connect to different networks?

Peeja commented Jan 16, 2014

@jopecko Huh. I've already got a 10.* address on my VM's eth0. Do you? I'm not clear what the difference between those interfaces is.

@corporate-gadfly I just connect to a single Wi-Fi network.

jopecko commented Jan 17, 2014

@Peeja I think it may be eth1, which is the interface Vagrant configures for the private-network IP. If I bring up the VirtualBox GUI, the MAC address listed for the vboxnet1 adapter matches the MAC address for eth1. I can't say for certain why an address in the 16-bit block, 192.168.0.0 - 192.168.255.255, causes this issue; however, for me, changing to a 24-bit block address completely resolved it. Before changing my VM's IP, I could communicate with the VM via docker for maybe an hour, and then I would no longer be able to connect. A full restart or reload was the only way I could reestablish connectivity. Since switching to a 24-bit block IP address, I've had my VM up for over 24 hours and can still communicate with it via docker running locally on my Mac.
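
(One rough way to check that correlation from the host, assuming the VirtualBox CLI is installed; "<vm-name>" is a placeholder for whatever name VBoxManage lists for the dvm machine:)

# List host-only adapters on the host, and the MACs of the VM's NICs;
# compare the latter against HWaddr in the guest's ifconfig output.
VBoxManage list hostonlyifs
VBoxManage showvminfo "<vm-name>" | grep -i nic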

All I did was set the DOCKER_IP environment variable to a 24-bit block address (in my case 10.211.55.255), eval my dvm env to sync up my session, bring up my VM, and work as described in the docs.
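
Spelled out as a shell session (a sketch of the steps above, using the address from this comment; the eval-then-up ordering follows the description here):

export DOCKER_IP=10.211.55.255   # 24-bit block address from above
eval "$(dvm env)"                # sync DOCKER_HOST in this shell
dvm up                           # bring the VM up on the new address
docker ps                        # verify the client can reach the daemon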

I admit I don't know the specifics of what's actually occurring under the covers, or why the 192.168.42.43 address causes this problem with VirtualBox and Vagrant's private network. If I get time to dive in and uncover anything, I'll be sure to update this issue.

rouge8 pushed a commit to rouge8/dotfiles that referenced this issue Mar 3, 2014
fnichol added a commit that referenced this issue Apr 29, 2014

This causes issues where the udhcpc process may eventually attempt to acquire a new IP address from VirtualBox's or VMware's DHCP server, thus severing the connection to the Docker daemon (from the client's point of view).

Sadly, this is more of a Tiny Core Linux boot issue with the busybox/udhcpc startup than a boot2docker one, so it'll need to be handled in Vagrant middleware.

Closes #28
Closes #24
Closes #8

fnichol commented Apr 29, 2014

Finally, I think we've tracked this down to a udhcpc process which Tiny Core Linux auto-starts on boot. (More details in #35). Thank you all for your help in diagnosing!
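
(For anyone wanting to confirm this on a running VM, a rough check from inside the guest via dvm ssh; standard busybox commands assumed, and the kill is only a stopgap, not the actual #35 fix:)

# See whether udhcpc is still managing an interface, then stop it so
# the address can't be renegotiated. It will come back on reload.
ps | grep '[u]dhcpc'
sudo killall udhcpc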
