Support Virtualbox NAT networking #2779

Closed
phy1729 opened this Issue Jan 7, 2014 · 26 comments

phy1729 commented Jan 7, 2014

It would be nice to have a networking option that has access to the public internet and does not require any configuration on the guest for that network so that the Vagrantfile is portable.


drpebcak commented Jan 7, 2014

@phy1729 Care to expand a little bit? Using private_network gets you what VirtualBox calls a NAT adapter plus a host-only adapter.

That is portable and requires no host configuration. It can access the internet, but it is not accessible except via the host.

public_network will ask which interface to bridge to unless you specify one (which isn't entirely portable, since interface names may vary), but you'll have full access in and out of the box.


phy1729 commented Jan 7, 2014

Per http://www.virtualbox.org/manual/ch06.html#network_hostonly I was under the impression that private networks cannot access the internet.

I don't want to use the public network, so that others don't need to know which interface to bridge to, and so that the IP and gateway of the box don't depend on the user's network.


drpebcak commented Jan 7, 2014

@phy1729 With the help of the NAT adapter (which Vagrant automatically creates), the guest can reach the internet on a one-way path from inside the VM.


phy1729 commented Jan 7, 2014

Using the NAT'd interface on adapter 1 would, in my opinion, make the virtual mockup diverge too far from production, and it would not allow the gateways to communicate with each other on the external interface, since VirtualBox gives each machine its own virtual network for a NAT adapter.


drpebcak commented Jan 7, 2014

@phy1729 Well, you could use private_network and an internal network... That would get you a NAT interface, a host-only interface, and another interface connected to a virtual vbox network (that all your virts could be connected to).

I realize that is more complicated than production, but there will always be SOME abstraction when using a virtual machine. Things like networking will not be 100% the same (especially if you are unwilling to use a bridged adapter).
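Roughly, something like this; the IPs and the internal network name are just placeholders:

```ruby
# Sketch of the combination above; IPs and the network name are placeholders.
Vagrant.configure("2") do |config|
  # adapter 1 is the implicit NAT adapter Vagrant always creates
  # host-only adapter, reachable from the host
  config.vm.network "private_network", ip: "192.168.50.10"
  # internal network shared only among the guests
  config.vm.network "private_network", ip: "10.10.10.10",
    virtualbox__intnet: "cluster-net"
end
```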


mitchellh commented Jan 7, 2014

You can use the customize option to run custom VBoxManage commands against your own VM in the meantime. Can you please clarify:

  • How would this work? (be detailed, please)
  • What is the difference between the VirtualBox NAT adapter and what you describe? Vagrant already sets adapter 1 to a NAT device.
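For reference, a sketch of the customize escape hatch; the NAT network name is an assumption and must already exist on the host:

```ruby
# Sketch only; "nat-int-network" is a placeholder and must already exist
# (e.g. created beforehand with "VBoxManage natnetwork add").
Vagrant.configure("2") do |config|
  config.vm.provider "virtualbox" do |vb|
    vb.customize ["modifyvm", :id,
                  "--nic2", "natnetwork",
                  "--nat-network2", "nat-int-network"]
  end
end
```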

phy1729 commented Jan 7, 2014

Add an option analogous to virtualbox__intnet:, say virtualbox__natnet: for a NAT network, that uses nat-int-network as detailed at http://www.virtualbox.org/manual/ch06.html#network_nat_service. I'm unsure what other details you need.

The difference is that a NAT network can have multiple machines in the same virtual network, whereas plain NAT gives each VM its own separate virtual network. Having VMs in the same virtual network allows for testing failover protocols such as CARP.
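The proposed syntax might look something like this (hypothetical; virtualbox__natnet does not exist in Vagrant today):

```ruby
# Hypothetical sketch of the proposed, unimplemented option.
Vagrant.configure("2") do |config|
  config.vm.network "private_network",
    virtualbox__natnet: "nat-int-network"  # proposed option, not real
end
```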


phy1729 commented Jan 7, 2014

Example Vagrantfile at https://github.com/phy1729/cv_config/blob/natnet-example/Vagrantfile Hopefully it helps.

Problems with using customize:

  • The natnetwork add command should only run once
  • The network will be added to all VMs (not a problem yet, but it will be once I add the rest of my network)
  • The NAT network isn't destroyed on vagrant destroy
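The first problem can be worked around at the top of the Vagrantfile; a sketch, where the network name and subnet are made up and VBoxManage must be on the PATH:

```ruby
# Create the NAT network once, before any VM definition runs.
# "nat-int-network" and the subnet are placeholders.
unless `VBoxManage natnetwork list`.include?("nat-int-network")
  system("VBoxManage", "natnetwork", "add",
         "--netname", "nat-int-network",
         "--network", "10.0.5.0/24",
         "--enable", "--dhcp", "on")
end
```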

drpebcak commented Jan 7, 2014

@phy1729 to address at least one of those issues, can't you nest the provider config within each of the define blocks?


xraj commented Jan 14, 2014

I think a good reason for defaulting to using Virtualbox's NAT networking over the current NATed interface would be to make the Virtualbox provider work more like the VMware provider in terms of default behavior.


mitchellh commented Apr 9, 2014

The complexity of implementing this is outweighing the benefits I see. I'd be willing to look at a PR but I don't have any plans in the near term to implement this. Sorry! It is a case of "if it ain't broke, don't fix it" for me.

mitchellh closed this Apr 9, 2014


megahall commented Aug 20, 2014

The NAT network option in VBox is helpful for using a single adapter that can communicate with all the other VMs on the system (some of which might not be from Vagrant) as well as with the internet. It's a somewhat easier setup than requiring separate NAT and internal interfaces.


kikitux commented Aug 20, 2014

Hello, the first NIC on the guest will be NAT, so every machine will be using a NAT adapter to access the host, the internet, or any network the host knows.

You can share ports between the guest and the host, so any service port can be mapped to a port on the host and will be available to the network.

Then, on top of that, you can have private networks for inter-guest communication, or bridged NICs that take an IP from the real network, useful when you require machines on the network to access the VM directly.

What are you missing here that can't be provided with this set of functionality?

Thanks!
Alvaro.
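A minimal sketch of those three options (box name, ports, and IP are placeholders):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"  # placeholder box
  # NAT port mapping: a guest service reachable through the host
  config.vm.network "forwarded_port", guest: 80, host: 8080
  # host-only network for inter-guest communication
  config.vm.network "private_network", ip: "192.168.50.4"
  # bridged NIC that takes an address from the real network
  config.vm.network "public_network"
end
```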



mainframe commented Jul 17, 2015

I'm having the same problem: I'm running a cluster of Vagrant boxes, and it requires the default-route interfaces to be able to communicate with each other between VMs (i.e. to be in the same network).

The current model, where adapter 1 has the same private IP on every VM (on a separate private network with no inter-VM communication) and is also the default route/gateway device, while VM interconnection needs a separate adapter/network, seems an overcomplication in the multi-VM scenario. In my case it is a blocker for sure, as reconfiguring a complex cluster application is much more work than running and configuring VBox VMs manually with a NAT network adapter.

So I would urge you to reconsider the importance of supporting the NAT network adapter type (i.e. a single NAT adapter with interconnection between VMs on that network), in order to deliver a better user experience for multi-VM scenarios as well.


euidzero commented Aug 4, 2015

My use case is precisely to test multi-homed NAT'ed clients, so I need multiple NAT'ed NICs on different subnets, all accessing the internet through the host IP. Currently I do not see how to achieve this easily, as (correct me if I'm wrong) Vagrant does not provide a way to add extra NAT adapters.
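The closest I have found is the provider's customize escape hatch; a sketch (untested, subnets invented) that puts extra plain-NAT NICs on distinct subnets:

```ruby
Vagrant.configure("2") do |config|
  config.vm.provider "virtualbox" do |vb|
    # each plain-NAT engine gets its own subnet via --natnetN
    vb.customize ["modifyvm", :id, "--nic2", "nat", "--natnet2", "10.0.3.0/24"]
    vb.customize ["modifyvm", :id, "--nic3", "nat", "--natnet3", "10.0.4.0/24"]
  end
end
```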

Thanks for reconsidering adding this feature.


kikitux commented Aug 5, 2015

@euidzero In your case, since Vagrant is not the right tool, you may want to look into Packer; there you can create a new VM, modify the NICs, etc.

Note that Packer is also a HashiCorp product.


rutsky commented Apr 5, 2016

@phy1729, @mainframe have you managed to get VMs into a single routable network that is also NATed to the Internet?

I'm struggling with Kubernetes deployment in Vagrant with Ansible and current default configuration brings a lot of issues.

Currently each VM in multi-VM configuration has two network adapters:

  1. NAT-ed to the host network (and Internet). Packets are routed by default through this interface.
  2. Private VMs network. Allows VMs to connect to each other.

E.g.:

# ip a
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:96:9e:8a brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 86299sec preferred_lft 86299sec
    inet6 fe80::a00:27ff:fe96:9e8a/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:48:ea:40 brd ff:ff:ff:ff:ff:ff
    inet 172.28.128.5/24 brd 172.28.128.255 scope global dynamic eth1
       valid_lft 1099sec preferred_lft 1099sec
    inet6 fe80::a00:27ff:fe48:ea40/64 scope link 
       valid_lft forever preferred_lft forever
...
# ip route show
default via 10.0.2.2 dev eth0  proto dhcp  src 10.0.2.15  metric 1024 
10.0.2.0/24 dev eth0  proto kernel  scope link  src 10.0.2.15 
10.0.2.2 dev eth0  proto dhcp  scope link  src 10.0.2.15  metric 1024 
172.28.128.0/24 dev eth1  proto kernel  scope link  src 172.28.128.5 
...

Lots of applications in Kubernetes assume that the host IP is the IP of the default network interface, which in the current default configuration is the NAT interface, so they try to advertise and use the NAT IP (which is the same 10.0.2.15 on every machine), and that fails.

While it is possible to configure each application to use the proper interface, that is tedious and adds complexity to the Ansible scripts (the same scripts are used to configure Kubernetes on real hardware, which usually has a single interface that is both connected to the other machines and NATed).

It looks like using the NAT network suggested by @phy1729 would resolve this issue and make the multi-VM configuration closer to a real multi-host setup.

Does anybody have a working example of using such a NAT network in Vagrant? @phy1729, the link to your example configuration is dead.


mainframe commented Apr 5, 2016

@rutsky - no my workaround was to move to Parallels Desktop instead :)


stephenrlouie commented Jul 5, 2016

@rutsky This worked for me.

def nat(config)
  config.vm.provider "virtualbox" do |v|
    # attach adapter 2 to the VirtualBox NAT network named "test"
    v.customize ["modifyvm", :id, "--nic2", "natnetwork",
                 "--nat-network2", "test", "--nictype2", "virtio"]
  end
end

Vagrant.configure(2) do |config|
  config.vm.define "example", autostart: true do |build_example|
    nat(build_example)  # scope the customization to this machine only
    build_example.vm.box = $box
    build_example.vm.network "forwarded_port", guest: 80, host: 8080
    build_example.vm.network "forwarded_port", guest: 443, host: 8443
  end
end

If Vagrant could natively support this as a standard networking option, that would be awesome!


riotejas commented Jul 25, 2016

We have the same requirements as @mainframe and others: running a cluster of VMs that need to communicate with each other and with the public internet via NAT, using VirtualBox's 'NAT network'. I can configure this with VirtualBox alone, with every VM having adapter 1 set to 'natnetwork', but then I can't use Vagrant: 'vagrant up' fails because adapter 1 is not 'nat'. This makes Vagrant unusable for me. Please consider making this feature available.

riotejas referenced this issue Jul 25, 2016: "reopen #2779" #7635 (Closed)


simonpie commented Apr 13, 2017

I support this feature request. The fact that the main adapter cannot be set to a natnetwork, and that we have to add a second NIC, makes building Ansible playbooks to deploy complex software a headache. A lot of code has to be written to find the correct interface and its IP and inject it in all the correct places. The killer is, you have no real idea whether the playbook will work when you get to a real environment where all the VMs are on the same network with only one NIC, since your playbook has only been tested with two NICs, the main IP being on the second NIC. This is something KVM does out of the box.

Hence, it would make Vagrant more useful if we could choose the kind of network for the first NIC and not be forced to add a second NIC.


rul commented Sep 13, 2017

I support this feature request as well. I have a use case where I want to set up a development environment for software that serves dynamic iPXE scripts. The software is hooked up to a DHCP server that will be running on a VM, and the idea is that other VMs can boot, request, and retrieve the iPXE script from the DHCP server (hence VM-to-VM communication), and then the booting VM will probably fetch the kernel from outside (hence VM-to-Internet communication). In this case two NICs aren't useful because, AFAIK, the iPXE firmware only asks for DHCP on the first adapter.

Also, I can't use bridged networking because I don't want a second DHCP server on my LAN.


Constantin07 commented Sep 21, 2017

+1
I also need this functionality (NAT Network support), as pure NAT is not enough to spin up a cluster of nodes with inter-connectivity plus access to the public internet.


aizikil commented Jan 31, 2018

Dear devs, we really need it!


ljubon commented Jun 6, 2018

Do we have any response to this? Is virtualbox__natnet available?


FelipeMiranda commented Apr 17, 2019

This topic is from 2014; any news on it?
