
Butterfly connects Virtual Machines (VM) and controls their traffic flow.

Each VM's traffic is contained in a specific VXLAN network, and traffic is filtered by (EC2/OpenStack-like) security groups.

Security groups can be applied to any VM interface, and contain a list of simple network rules (dropping traffic by default).

Virtual NICs

In Butterfly, a Virtual NIC (or vnic) lets you add a virtual network interface to your QEMU VM through vhost-user. Each vnic has a 24-bit network id called a VNI. All vnics created with the same VNI are on the same network. If two vnics with the same VNI are located on different physical hosts, Butterfly encapsulates VM packets over VXLAN and sends them to the corresponding physical host. Once received, packets are decapsulated and routed to their final destination. If two vnics with the same VNI are on the same physical host, packets never exit to the physical network.
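The forwarding decision described above can be sketched as a toy model (illustrative Python only, not Butterfly's actual code; all names are made up):

```python
# Toy model of the vnic forwarding decision: same VNI + same host
# stays local, same VNI + remote host goes over VXLAN, and vnics
# with different VNIs are on separate networks entirely.

def forward(src_vnic, dst_vnic):
    """Decide how a packet travels between two vnics."""
    if src_vnic["vni"] != dst_vnic["vni"]:
        return "unreachable"         # different VXLAN networks
    if src_vnic["host"] == dst_vnic["host"]:
        return "local"               # packets never hit the wire
    return "vxlan-encapsulated"      # sent to the remote physical host

vnic_1 = {"vni": 1337, "host": "hostA"}
vnic_2 = {"vni": 1337, "host": "hostB"}
vnic_3 = {"vni": 42, "host": "hostA"}

print(forward(vnic_1, vnic_2))  # vxlan-encapsulated
print(forward(vnic_1, vnic_3))  # unreachable
```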

Butterfly is meant to be connected to the physical network using a dedicated DPDK port. This lets Butterfly achieve very low latency between VMs while using the physical NIC's offload capabilities.

For VM-to-VM communication, checksum computation and segmentation do not occur, as packets do not transit over a physical network. This gives Butterfly high-speed, low-latency communication between VMs.

Example: creating a new vnic "vnic_1" on VNI "1337":

butterfly nic add --ip --mac 52:54:00:12:34:01 --vni 1337 --id vnic_1


VM traffic is filtered, per vnic, by a firewall (NetBSD's NPF) integrated into Butterfly. Filtering rules are applied to each VM depending on the rules contained in its security groups. A vnic can use several security groups, and a security group can be used by several vnics. When a vnic uses several security groups, their rules are cumulated. A security group contains a list of rules to allow (the default policy is to block) and a list of members (IP addresses).

A Butterfly rule mainly describes a protocol/port and a source to allow. The source can be either a CIDR block or the members of a security group.

Example: Add a rule to the "mysg" security group that allows TCP traffic on port 22:

butterfly sg rule add mysg --ip-proto tcp --port 22 --cidr

Example: Add a rule to the "mysg" security group that allows members of the "users" security group over TCP on port 80:

butterfly sg rule add mysg --ip-proto tcp --port 80 --sg-members users

Note: When a security group used by one or more vnics is modified, firewalling rules attached to each impacted VM are reloaded.
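The cumulative, default-deny evaluation described above can be sketched as a toy model (illustrative Python only, not Butterfly's NPF rules; it matches on protocol/port and ignores rule sources such as CIDR blocks or sg members):

```python
# Toy model of security-group filtering: default-deny, with rules
# cumulated across all groups attached to a vnic.

def is_allowed(packet, security_groups):
    """Allow the packet if any rule in any attached group matches."""
    for sg in security_groups:
        for rule in sg["rules"]:
            if (rule["proto"], rule["port"]) == (packet["proto"], packet["port"]):
                return True
    return False  # default policy: block

ssh_sg = {"rules": [{"proto": "tcp", "port": 22}]}
web_sg = {"rules": [{"proto": "tcp", "port": 80}]}
attached = [ssh_sg, web_sg]  # a vnic may use several security groups

print(is_allowed({"proto": "tcp", "port": 80}, attached))   # True
print(is_allowed({"proto": "tcp", "port": 443}, attached))  # False
```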

Using Butterfly

Butterfly is a daemon you can control over a network API.

It is packaged with a client mainly allowing you to add/remove/list vnics and security groups.

You can of course code your own calls to Butterfly's API directly. API message transport is based on ZeroMQ, and messages are encoded in Protobuf format. Check out the protocol for more details.

Here is an example of Butterfly with 6 VMs isolated in three networks (VNI 42, 51 and 1337).

Butterfly execution

Butterfly binds a dedicated NIC to send/receive VXLAN packets and binds a socket (default: tcp) to listen to queries on its API. If you use a DPDK compatible card, you will not be able to access the API through it.

You can build this configuration using a few lines of client calls:

butterfly nic add --ip --mac 52:54:00:12:34:01 --vni 42 --id vnic_1
butterfly nic add --ip --mac 52:54:00:12:34:01 --vni 51 --id vnic_2
butterfly nic add --ip --mac 52:54:00:12:34:02 --vni 51 --id vnic_3
butterfly nic add --ip --mac 52:54:00:12:34:03 --vni 51 --id vnic_4
butterfly nic add --ip --mac 52:54:00:12:34:01 --vni 1337 --id vnic_5
butterfly nic add --ip --mac 52:54:00:12:34:02 --vni 1337 --id vnic_6

Tip: if you want to see what the graph looks like, run butterfly status and copy-paste the dot diagram in

You can edit security groups whenever you want; vnic filtering is updated automatically. In the following example, we create a new rule to allow HTTP traffic from everyone and ask some vnics to use this security group.

butterfly sg add sg-web
butterfly sg rule add sg-web --ip-proto tcp --port 80 --cidr
butterfly nic sg add vnic_1 sg-web
butterfly nic sg add vnic_2 sg-web

Note: the Butterfly API is idempotent, meaning that making the same call twice produces the same result.
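The idempotence note above can be illustrated with a toy model of the daemon's state handling (illustrative Python only, not the real API):

```python
# Toy model of an idempotent "nic add" call: repeating the same
# call leaves the daemon's state unchanged.

def nic_add(state, nic_id, vni):
    """Create the vnic if absent; an identical repeated call is a no-op."""
    state.setdefault(nic_id, {"vni": vni})
    return state

state = {}
nic_add(state, "vnic_1", 42)
nic_add(state, "vnic_1", 42)  # second identical call changes nothing
print(len(state))  # 1
```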

Installing Butterfly

The easiest way to install Butterfly is to download and install a package from the GitHub releases page. You can also build Butterfly yourself (as shown in the next section).

Building Butterfly without docker

This build procedure has been tested on a fresh CentOS 7.

First, install some dependencies (jemalloc needs manual installation):

$ sudo yum update -y
$ sudo yum install -y gcc-c++ glibc-devel glib2-devel libtool libpcap-devel automake kernel-headers make git cmake kernel-devel unzip zlib-devel wget libstdc++-static numactl numactl-devel openssl-devel openssl-libs clang
$ wget
$ wget
$ sudo rpm -i jemalloc-devel-3.6.0-8.el7.centos.x86_64.rpm jemalloc-3.6.0-8.el7.centos.x86_64.rpm

Build Butterfly:

$ git clone
$ mkdir butterfly/build
$ cd butterfly/build
$ cmake ..
$ make

Building Butterfly with docker:

You can also use Docker to build Butterfly. The build image is based on CentOS 7. For development, we like having different Linux distros in order to detect issues. We could also build for several distros using Docker in the future.

Preparing Your Machine

Configure Huge Pages

Butterfly needs some huge pages (adjust to your needs):

  • Edit your /etc/sysctl.conf and add some huge pages:
  • Reload your sysctl configuration:
$ sudo sysctl -p /etc/sysctl.conf
  • Check that your huge pages are available:
$ cat /proc/meminfo | grep Huge
  • Mount your huge pages:
$ sudo mkdir -p /mnt/huge
$ sudo mount -t hugetlbfs nodev /mnt/huge
  • (optional) Add this mount in your /etc/fstab:
hugetlbfs       /mnt/huge  hugetlbfs       rw,mode=0777        0 0
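As an example of the sysctl entry mentioned in the first step (the page count below is an arbitrary assumption; adjust it to your needs):

```
# /etc/sysctl.conf — reserve huge pages for Butterfly (example count)
vm.nr_hugepages = 2048
```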

Prepare DPDK Compatible NIC

Before being able to bind your port, you need to enable Intel VT-d in your BIOS and have IOMMU explicitly enabled in your kernel parameters. Check DPDK compatible NICs and how to bind NIC drivers. Packetgraph also has an example on how to bind DPDK NICs.

Additionally, you may want to isolate a specific core for Butterfly; check the isolcpus kernel parameter.

Running Butterfly Server

To get help, see: butterflyd --help

If you have a DPDK-compatible NIC, Butterfly will use the first available DPDK port. If no port is found, a (slow) tap interface is created.

$ sudo butterflyd -i -s /tmp

If you do not have a DPDK-compatible card, you can also init a DPDK virtual device (which is much slower than DPDK-compatible hardware).

For example, we can ask Butterfly to listen to the existing eth0 interface:

$ sudo butterflyd -i -s /tmp --dpdk-args "-c1 -n1 --socket-mem 64 --vdev=eth_pcap0,iface=eth0"

Alternatively, you can ask Butterfly to read a configuration file at init:

$ sudo butterflyd -c /etc/butterfly/butterfly.conf


Why Another Virtual Switch?

Because we just want a fast vswitch answering our simple needs:

  • A simple API, EC2/OpenStack security-group style
  • VXLAN for network isolation
  • Per-Virtual-Machine firewalling based on security groups
  • As little CPU usage as possible (letting Virtual Machines use all other cores)
  • Easy control of the whole thing by a (cloud) orchestrator through a simple API

What's Behind Butterfly?

Butterfly is based on:

  • Packetgraph: builds the network graph
  • DPDK: fast access to NICs (in Packetgraph)
  • NPF: firewalling (in Packetgraph)
  • ZeroMQ: message transport
  • Protobuf: message encoding/versioning

How Fast?

Benchmarks setup:

  • Two physical machines directly connected
  • A third machine remotely sets up and launches benchmarks using ./benchmarks/


  • OS: Centos 7 (3.10.0-327.18.2.el7.x86_64)
  • NICs: Intel 82599ES 10-Gigabit SFI/SFP+ (DPDK compatible used with vfio-pci driver)
  • CPU: AMD Opteron(tm) Processor 3350 HE

Results (June 2019):

QPerf TCP latency without TSO on VMs:

cmd: qperf -vvs  <ip address> tcp_lat

|                | Without Firewall | With Firewall |
|----------------|------------------|---------------|
| Same Host      | 18.3 us          | 20.1 us       |
| Different Host | 22.6 us          | 24.1 us       |

QPerf UDP latency with TSO on VMs:

cmd: qperf -vvs  <ip address> udp_lat

|                | Without Firewall | With Firewall |
|----------------|------------------|---------------|
| Same Host      | 16.8 us          | 18.5 us       |
| Different Host | 21.7 us          | 22.1 us       |

QPerf TCP latency with TSO on VMs:

cmd: qperf -vvs  <ip address> tcp_lat

|                | Without Firewall | With Firewall |
|----------------|------------------|---------------|
| Same Host      | 18.4 us          | 20.1 us       |
| Different Host | 22.9 us          | 24.1 us       |

Results (August 2017, 60 seconds per test):

Without TSO on VMs:

|                        |  VMs on same host   | VMs on remote host  |
|------------------------|---------------------|---------------------|
| Ping (min/average/max) | 0.072/0.090/0.160ms | 0.106/0.162/0.236ms |
| TCP (iperf 3)          | 6.00 Gbits/s        | 6.70 Gbits/s        |
| UDP (iperf 3)          | 2.99 Gbits/s        | 1.41 Gbits/s        |

With TSO enabled on VMs (--tso-on):

|                        |  VMs on same host   | VMs on remote host  |
|------------------------|---------------------|---------------------|
| Ping (min/average/max) | 0.077/0.101/0.447ms | 0.059/0.96/0.203ms  |
| TCP (iperf 3)          | 15.9 Gbits/s        | 3.2 Gbits/s         |
| UDP (iperf 3)          | 2.99 Gbits/s        | 1.4 Gbits/s         |


  • These results were obtained using iperf, so packets spend a lot of time going through the VM's kernel; these benchmarks are sadly not representative of Butterfly's raw speed. We are working on new benchmarks.
  • UDP performance is really bad at the moment; we are working on it.
  • We can get even faster with zero copy in vhost-user
  • We can get faster by embedding a more recent libc (make package-fat)
  • If you try to run some benchmarks, you may want to configure your CPU throttling. On CentOS 7, check the cpufreq governors page

How to Connect a Virtual Machine to Butterfly?

Butterfly does not launch your Virtual Machine for you; it just creates a special network interface (vhost-user) so you can connect your Virtual Machine to it. Vhost-user interfaces are Unix sockets that let you communicate with Virtual Machines directly in userland.

We tested Butterfly with QEMU >= 2.5, adding the following parameters to the machine's arguments (adapt them to your setup):

Some shared memory between the guest and Butterfly:

-object memory-backend-file,id=mem,size=124M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc

For each network interface:

-chardev socket,id=char0,path=/path/to/socket -netdev type=vhost-user,id=mynet0,chardev=char0,vhostforce -device virtio-net-pci,netdev=mynet0,gso=off

Note about enic:

Butterfly can run on Cisco enic but, depending on your hardware, the enic PMD driver may not yet support checksum offloading or TSO for inner packets (needed for VXLAN encapsulation). If you want to use Butterfly with enic, you must deactivate TSO and TX checksum offloading in the VM:

ethtool -K ensX tx off

For more details, check the vhost-user DPDK guide.

Do You Support Any Containers?

Not yet: Butterfly only supports vhost-user network interfaces.

That said, connecting a container should be possible too.

Does Butterfly Support IPv6?

  • Virtual Machine traffic can be in IPv6.
  • The outer network (VXLAN side) is IPv4 only for the moment.

What If My Virtual Machine Crashes/Reboots/Stops?

Vhost-user interface will still exist until it is removed from Butterfly.

You can just restart your VM, it will be reconnected to Butterfly and run as before.

This is possible because Butterfly acts as "server" in vhost-user communication.

What If Butterfly Crashes?

Too bad: you can restart Butterfly, but the VMs won't reconnect, as the vhost-user "server" is located on Butterfly's side.

Filing an issue is very valuable to the project. Please provide the following information:

  • The operating system with its version
  • The Butterfly version (butterfly --version)
  • The Butterfly logs (check syslogs)
  • Is your system under memory pressure?
  • What was Butterfly doing? (Heavy traffic? Doing nothing? How many VMs?)
  • The estimated Butterfly uptime until crash
  • Do you have a way to reproduce it?

It may soon be possible to choose whether QEMU or Butterfly acts as the vhost-user server; this is coming in DPDK :)

What Is Butterfly’s License?

Butterfly is licensed under GPLv3.

Is There Any Authentication on the API or Protection?

By default, there is no protection on the API, but you can configure Butterfly to have all its messages encrypted using AES-256.

For this, you will need to generate a 32-byte key (encoded in base64) in a file and share this file:

openssl rand -base64 -out PlaintextKeyMaterial.bin 32
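Equivalently, a 32-byte base64 key can be generated without openssl (a sketch using Python's standard library; the output filename matches the openssl example above):

```python
# Generate 32 random bytes and base64-encode them, as Butterfly expects.
import base64
import os

key = base64.b64encode(os.urandom(32))
with open("PlaintextKeyMaterial.bin", "wb") as f:
    f.write(key + b"\n")
```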

Then you will need to provide the path to this file, using --key (-k) on the command line or the encryption_key_path option in butterflyd.ini. Once a key is correctly loaded, all cleartext messages will be rejected.

For encryption format details, check api/protocol/encrypted.proto.

On Which Port Does Butterfly Listen?

By default, Butterfly listens on the tcp:// port, but it's up to you!

Butterfly uses ZeroMQ for message transport and lets you bind in different ways (tcp, ipc, inproc, pgm, ...).

Questions? Problems? Contact Us!

Butterfly is an open-source project; feel free to chat with us on IRC, open a GitHub issue, or propose a pull request.


chan: #betterfly
