
Support Virtualbox NAT networking #2779

Closed
phy1729 opened this issue Jan 7, 2014 · 32 comments

Comments

@phy1729

phy1729 commented Jan 7, 2014

It would be nice to have a networking option that has access to the public internet and does not require any configuration on the guest for that network so that the Vagrantfile is portable.

@drpebcak

drpebcak commented Jan 7, 2014

@phy1729 Care to expand a little bit? Using private_network gets you what VirtualBox calls a NAT adapter plus a host-only adapter.

That is portable and requires no host configuration. It can access the internet, but it is not accessible except via the host.

A public_network will ask what interface to bridge to unless you specify one (which isn't entirely portable because interface names may vary), but you'll have full access in and out of the box.
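
(For reference, a minimal sketch of that portable setup in a Vagrantfile; the IP is just an example:)

Vagrant.configure(2) do |config|
  # Adapter 1 stays Vagrant's NAT device; this adds a host-only adapter.
  config.vm.network "private_network", ip: "192.168.50.4"
end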

@phy1729
Author

phy1729 commented Jan 7, 2014

Per http://www.virtualbox.org/manual/ch06.html#network_hostonly, I was under the impression that private networks cannot access the internet.

I don't want to use the public network, so that others don't need to know what interface to bridge to and so that the IP and gateway of the box don't depend on the user's network.

@drpebcak

drpebcak commented Jan 7, 2014

@phy1729 With the help of a NAT adapter (which Vagrant automatically creates) you can access the internet on a one-way path from inside the VM.

@phy1729
Author

phy1729 commented Jan 7, 2014

Using the NAT'd interface on adapter 1 would, in my opinion, overly complicate the differences between production and the virtual mockup, and it would not allow the gateways to communicate with each other on their external interfaces, since each machine gets its own virtual network for a NAT adapter on VirtualBox.

@drpebcak

drpebcak commented Jan 7, 2014

@phy1729 Well, you could use private_network and an internal network... That would get you a NAT interface, a host-only interface, and another interface connected to a virtual VirtualBox network (that all your VMs could be connected to).

I realize that is more complicated than production, but there will always be SOME abstraction when using a virtual machine. Things like networking will not be 100% the same (especially if you are unwilling to use a bridged adapter).
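
(That combination can be sketched roughly like this; the IP and the internal network name are placeholders:)

Vagrant.configure(2) do |config|
  # Attaches this adapter to a named VirtualBox internal network (shared by
  # every VM that uses the same name) instead of a plain host-only network.
  config.vm.network "private_network", ip: "192.168.50.10",
    virtualbox__intnet: "my_internal_network"
end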

@mitchellh
Contributor

You can use the customize command to run custom VBoxManage commands to do this to your own VM in the meantime (a rough sketch is below). Can you please clarify:

  • How would this work? (be detailed, please)
  • What is the difference between the VirtualBox NAT adapter and what you describe? Vagrant already sets adapter 1 to a NAT device.
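
(For reference, a rough sketch of that customize escape hatch, attaching a second adapter to a pre-existing VirtualBox NAT network; the network name is just an example:)

Vagrant.configure(2) do |config|
  config.vm.provider "virtualbox" do |vb|
    # Assumes a NAT network named "nat-int-network" already exists on the host.
    vb.customize ["modifyvm", :id, "--nic2", "natnetwork",
                  "--nat-network2", "nat-int-network"]
  end
end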

@phy1729
Author

phy1729 commented Jan 7, 2014

Have an option like virtualbox__intnet:, say virtualbox__natnet:, for a NAT network that uses the NAT network service ("nat-int-network") as detailed at http://www.virtualbox.org/manual/ch06.html#network_nat_service. I'm unsure what other details you need.

The difference is that a NAT network can have multiple machines in the same virtual network, whereas NAT has a separate virtual network for each VM. Having VMs on the same virtual network allows for testing failover protocols such as CARP.
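
(Creating that shared NAT network could itself be sketched with customize, to go alongside the modifyvm sketch above; the name, range, and DHCP setting are just examples:)

Vagrant.configure(2) do |config|
  config.vm.provider "virtualbox" do |vb|
    # Runs `VBoxManage natnetwork add ...`; it will error if a network with
    # this name already exists (see the problems listed in the next comment).
    vb.customize ["natnetwork", "add", "--netname", "nat-int-network",
                  "--network", "10.0.5.0/24", "--enable", "--dhcp", "on"]
  end
end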

@phy1729
Author

phy1729 commented Jan 7, 2014

Example Vagrantfile at https://github.com/phy1729/cv_config/blob/natnet-example/Vagrantfile. Hopefully it helps.

Problems with using customize:

  • The natnetwork add command should only run once (see the sketch after this list for one way to guard it)
  • The network will be added to all VMs (not a problem yet, but it will be once I add the rest of my network)
  • Doesn't destroy the NAT network on vagrant destroy
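
(One way to guard the first bullet, sketched as plain Ruby at the top of the Vagrantfile; the network name and range are still placeholders, and this assumes VBoxManage is on the PATH:)

unless `VBoxManage natnetwork list`.include?("nat-int-network")
  system("VBoxManage", "natnetwork", "add", "--netname", "nat-int-network",
         "--network", "10.0.5.0/24", "--enable", "--dhcp", "on")
end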

@drpebcak

drpebcak commented Jan 7, 2014

@phy1729 to address at least one of those issues, can't you nest the provider config within each of the define blocks?

@xraj

xraj commented Jan 14, 2014

I think a good reason for defaulting to using Virtualbox's NAT networking over the current NATed interface would be to make the Virtualbox provider work more like the VMware provider in terms of default behavior.

@mitchellh
Contributor

The complexity of implementing this is outweighing the benefits I see. I'd be willing to look at a PR but I don't have any plans in the near term to implement this. Sorry! It is a case of "if it ain't broke, don't fix it" for me.

@megahall

The NAT network option in VBox is helpful for using a single adapter which can communicate with all the other VMs on the system, some of which might not be from Vagrant, as well as with the internet. It's a somewhat easier setup process than requiring separate NAT and internal interfaces.

@kikitux
Contributor

kikitux commented Aug 20, 2014

Hello, the first NIC on the guest will be NAT, so every machine will be using a NAT adapter to access the host, the internet, or any network that the host knows.

You can forward ports between the guest and the host, so any service port can be mapped to a port on the host and will be available to the network.
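
(For example, a one-line sketch of such a mapping, with the port numbers as placeholders:)

config.vm.network "forwarded_port", guest: 80, host: 8080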

Then on top of that, you can have private networks for private inter-guest communication, or bridged NICs that take an IP from the real network, useful when you require machines on the network to access the VM directly.

What are you missing here that can't be provided with this set of functionality?

Thanks!
Alvaro.


@mainframe

I'm having the same problem: I'm running a cluster of Vagrant boxes, and it requires the default-route interfaces to be able to communicate with each other between VMs (i.e. to be in the same network). The model where adapter 1 has the same private IP on every VM (on a different private network/VLAN with no inter-communication) and is actually the default route/gateway device, while inter-VM connections have to go over a separate adapter/network, seems an overcomplication in the multi-VM scenario. In my case it's a blocker for sure, as reconfiguring a complex cluster application is much more work than running and configuring VirtualBox VMs manually with a NAT network adapter. So I would urge you to reconsider the importance of supporting the NAT network adapter type (i.e. a single NAT adapter with interconnect between VMs on that network), in order to deliver a better user experience for multi-VM scenarios as well.

@euidzero

euidzero commented Aug 4, 2015

My use case is precisely to test multi-homed NAT'ed clients. So I need multiple NAT'ed NICs on different subnets, all accessing the internet through the host IP. Currently I do not see how to achieve this easily, as (correct me if I'm wrong) Vagrant does not provide a way to add extra NAT adapters.
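
(A rough, untested sketch of a possible workaround with customize, adding extra plain NAT adapters; VirtualBox gives each its own 10.0.x.0/24 subnet by default, and the guest still has to bring the interfaces up itself:)

Vagrant.configure(2) do |config|
  config.vm.provider "virtualbox" do |vb|
    # Extra NAT adapters added via raw VBoxManage flags, since Vagrant has
    # no first-class option for this.
    vb.customize ["modifyvm", :id, "--nic2", "nat"]
    vb.customize ["modifyvm", :id, "--nic3", "nat"]
  end
end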

Thanks for reconsidering adding this feature.

@kikitux
Contributor

kikitux commented Aug 5, 2015

@euidzero In your case, since Vagrant may not be the right tool, you may want to look into Packer; there you can create a new VM, modify the NICs, etc.

Note that Packer is also a HashiCorp product.

@rutsky

rutsky commented Apr 5, 2016

@phy1729, @mainframe have you managed to get VMs in a single routable network that is also NAT-ed to the Internet?

I'm struggling with Kubernetes deployment in Vagrant with Ansible and current default configuration brings a lot of issues.

Currently each VM in multi-VM configuration has two network adapters:

  1. NAT-ed to the host network (and Internet). Packets are routed by default through this interface.
  2. Private VMs network. Allows VMs to connect to each other.

E.g.:

# ip a
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:96:9e:8a brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 86299sec preferred_lft 86299sec
    inet6 fe80::a00:27ff:fe96:9e8a/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:48:ea:40 brd ff:ff:ff:ff:ff:ff
    inet 172.28.128.5/24 brd 172.28.128.255 scope global dynamic eth1
       valid_lft 1099sec preferred_lft 1099sec
    inet6 fe80::a00:27ff:fe48:ea40/64 scope link 
       valid_lft forever preferred_lft forever
...
# ip route show
default via 10.0.2.2 dev eth0  proto dhcp  src 10.0.2.15  metric 1024 
10.0.2.0/24 dev eth0  proto kernel  scope link  src 10.0.2.15 
10.0.2.2 dev eth0  proto dhcp  scope link  src 10.0.2.15  metric 1024 
172.28.128.0/24 dev eth1  proto kernel  scope link  src 172.28.128.5 
...

Lots of applications in Kubernetes assume that the host IP is the IP of the default network interface, which is the NAT interface in the current default configuration, so they try to advertise and use the NAT IP (which is the same for all machines, 10.0.2.15), and that fails.

While it is possible to configure each application to use the proper interface, it's tedious and adds complexity to the Ansible scripts (the same Ansible scripts are used to configure Kubernetes on real hardware, which usually has one interface that is connected to the other machines and NAT-ed).

Looks like using the NAT network suggested by @phy1729 should resolve this issue and make the multi-VM configuration closer to a real multi-host setup.

Does anybody have working examples of using such a NAT network in Vagrant? @phy1729, the link to your example of such a configuration has rotted.

@mainframe

@rutsky - no my workaround was to move to Parallels Desktop instead :)

@stephenrlouie

stephenrlouie commented Jul 5, 2016

@rutsky This worked for me.

def nat(config)
    config.vm.provider "virtualbox" do |v|
      v.customize ["modifyvm", :id, "--nic2", "natnetwork", "--nat-network2", "test", "--nictype2", "virtio"]
    end
end

Vagrant.configure(2) do |config|
    config.vm.define "example", autostart: true do |build_example|
        nat(config)
        build_example.vm.box = $box
        build_example.vm.network "forwarded_port", guest: 80, host: 8080
        build_example.vm.network "forwarded_port", guest: 443, host: 8443
    end
end

If Vagrant could natively support this as a standard networking option, that would be awesome!

@riotejas

We have the same requirements as @mainframe and others: running a cluster of VMs that need to communicate with each other and have public NAT, using VirtualBox's 'nat network'. I can configure this using just VirtualBox, with every VM having adapter 1 set to 'natnetwork', but then I can't use Vagrant: a 'vagrant up' fails because adapter 1 is not 'nat'. This makes Vagrant unusable for me. Please consider making this feature available.

@riotejas riotejas mentioned this issue Jul 25, 2016
@simonpie

I support this feature request. The fact that the main adapter cannot be set to a natnetwork, and that we have to add a second NIC, makes building Ansible playbooks to deploy complex software a headache. A lot of code has to be written to find the correct interface and its IP and inject it in all the correct places. The killer is that you have no real idea whether the playbook will work when you get to a real environment where all the VMs are on the same network with only one NIC, since your playbook has only been tested with two NICs, with the main IP on the second NIC. KVM does this out of the box.

Hence, it would make Vagrant more useful if we could choose the kind of network for the first NIC and not be forced to add a second NIC.

@rul

rul commented Sep 13, 2017

I support this feature request as well. I have a use case where I want to set up a development environment for software that serves dynamic iPXE scripts. The software is hooked up to a DHCP server that will be running on a VM, and the idea is that other VMs can boot, request and retrieve the iPXE script from the DHCP server (hence VM-to-VM communication), and then the booting VM will probably fetch the kernel from outside (hence VM-to-Internet communication). In this case two NICs aren't useful because, AFAIK, iPXE firmware only does DHCP on the first adapter.

Also, I can't use bridged networking because I don't want a second DHCP server on my LAN.

@Constantin07

Constantin07 commented Sep 21, 2017

+1
I also need this functionality - NAT Network support - as pure NAT is not enough to spin up a cluster of nodes with inter-connectivity plus access to the public internet.

@mate201

mate201 commented Jan 31, 2018

Dear devs, we really need it!!!

@ljubon

ljubon commented Jun 6, 2018

Do we have any response to this? Is virtualbox__natnet available?

@FelipeMiranda

This topic is from 2014; is there any news on it?

@ghost

ghost commented Oct 7, 2019

@briancain Could we get a definitive answer as to whether this feature request is dead?

Judging by the upvotes on comments, and recent activity, there is still quite a bit of appetite for having NAT Networking supported in vagrant.

@simonpie

simonpie commented Oct 7, 2019 via email

@chewi

chewi commented Oct 7, 2019

I managed to fudge around the lack of support here but proper support would be nice. However, it's quite useless until VirtualBox itself is fixed. This issue is a showstopper.

@ghost

ghost commented Oct 8, 2019

@chewi Can you post your workaround, please?

@chewi

chewi commented Oct 22, 2019

Sorry for the wait.

Vagrant.configure(2) do |config|
  config.vm.provider 'virtualbox' do |vb|
    # This is the important part. You could also add extras
    # like '--nictype1', 'virtio'.
    vb.customize ['modifyvm', :id, '--nic1', 'natnetwork', '--nat-network1', 'MyNatNetwork']
  end

  # This part is a bit weird but it keeps SSH working. You'll
  # need to manually configure the NAT network to forward 2222 to
  # 22 on whichever IP the guest is on. Static IP configuration
  # may therefore be simpler than DHCP.
  config.vm.network :forwarded_port, id: 'ssh', guest: 22, host: 2222, disabled: true
  config.ssh.guest_port = 2222
end
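
(For completeness, a rough, untested sketch of the manual NAT-network setup referred to above, written as plain Ruby you could run once before vagrant up; the network name and range match the example, but the guest IP 192.168.15.5 is an assumption, e.g. a statically configured address:)

# Create the NAT network once and add the SSH port-forward rule so that
# host port 2222 reaches guest 192.168.15.5:22. Skipped on later runs.
unless `VBoxManage natnetwork list`.include?("MyNatNetwork")
  system("VBoxManage", "natnetwork", "add", "--netname", "MyNatNetwork",
         "--network", "192.168.15.0/24", "--enable", "--dhcp", "on")
  system("VBoxManage", "natnetwork", "modify", "--netname", "MyNatNetwork",
         "--port-forward-4", "ssh:tcp:[]:2222:[192.168.15.5]:22")
end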

@ghost

ghost commented Jan 28, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Jan 28, 2020