This repo is used to test new network configurations for EVE OS, using docker containers to simulate different network stacks and CLI tools to make configuration changes. The purpose is to quickly and easily validate proposed network configuration changes before commencing any implementation work in the EVE repository.
Developed and tested on Ubuntu 20.04.
The EdgeDevice JSON configuration corresponding to this scenario can be found here.
This is the simplest scenario we could think of that covers all important aspects of networking in EVE OS:
- edge device has 4 uplink interfaces: `eth0`, `eth1`, `eth2`, `eth3`
- 6 network instances are created (see the network-instance sketch after this list):
  - local network `NI1`, using `eth0` as uplink (at the moment being simulated here)
  - local network `NI2`, also using `eth0` as uplink
  - local network `NI3`, using `eth1` as uplink
  - vpn network `NI4`, using `eth1` as uplink
  - vpn network `NI5`, using `eth2` as uplink
  - switch network `NI6`, bridged with `eth3`
- 6 applications are deployed:
  - `app3` is VM-in-container, the other applications are only containers
  - `app1` connected to `NI1` and `NI2`
    - it runs an HTTP server on the local port `80`
  - `app2` connected to `NI2`
    - it runs an HTTP server on the local port `80`
  - `app3` connected to `NI3`
  - `app4` connected to `NI4`
  - `app5` connected to `NI5`
  - `app6` connected to `NI6`
- there is a `GW` container, simulating the router to which the edge device is connected (see the uplink-emulation sketch after this list)
  - for simplicity, in this simulation all uplinks are connected to the same router
  - `GW` runs dnsmasq as an (eve-external) DNS+DHCP service for the switch network `NI6` (see the dnsmasq sketch after this list)
    - i.e. this is the DHCP server that will allocate an IP address for `app6`
  - the `GW` container is connected to the host via a docker bridge with NAT
    - this gives apps access to the Internet
- there is a `zedbox` container, representing the default network namespace of EVE OS
  - in the multi-ns proposal there is also one container per local network instance
- remote clouds are represented by the `cloud1` and `cloud2` containers
  - in both clouds there is an HTTP server running on port `80`
  - VPN network `NI4` is configured to open an IPsec tunnel to `cloud1`
  - VPN network `NI5` is configured to open an IPsec tunnel to `cloud2` (see the IPsec sketch after this list)
- the simulated ACLs are configured as follows:
  - `app1`:
    - able to access `*github.com`
    - able to access the `app2` HTTP server (see the smoke-test example after this list):
      - either directly via `NI2` (`eidset` rule with `fport=80 (TCP)`)
      - or by hairpinning: `NI1` -> `zedbox` namespace (using portmap) -> `NI2`
        - i.e. without leaving the edge node (note that this should be allowed because `NI1` and `NI2` use the same uplink)
        - not sure what the `ACCEPT` ACE should look like in this case - statically configured uplink subnet(s)?
  - `app2`:
    - its HTTP server is exposed on the uplink IP and port `8080` (see the portmap sketch after this list)
    - is able to access `eidset`/`fport=80 (TCP)`, which means it can talk to the `app1` HTTP server
  - `app3`:
    - is able to communicate with any IPv4 endpoint
  - `app4`:
    - is able to access any endpoint (on the cloud) listening on the HTTP port `80` (TCP)
  - `app5`:
    - is able to access any endpoint (on the cloud) listening on the HTTP port `80` (TCP)
  - `app6`:
    - is able to access `app2` by hairpinning outside the box
      - this is however limited to 5 packets per second with bursts of up to 15 packets (see the rate-limit sketch after this list)
- (1) IP subnets of `NI1` and `NI3` are identical
- (2) the IP subnet of `NI2` and that of the uplink `eth1` are identical
- (3) IP subnets of the remote cloud networks and that of the uplink `eth0` are identical
- (4) traffic selectors of the IPsec tunnels `NI4<->cloud1` and `NI5<->cloud2` are identical

Because of (1), (2), (3) and (4), the separation of NIs via namespaces or VRFs is necessary (a namespace/VRF illustration is sketched below).
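
One way to emulate an uplink link between the simulated edge device and the router is a veth pair with one end moved into the `zedbox` container and the other into `GW`. This is only a sketch under assumed container names; the repo's scripts may wire the containers differently (e.g. via docker networks).

```sh
# Create a veth pair and move each end into a container's network namespace.
# The interface names and the assumption that the containers have no
# conflicting "eth0" are illustrative placeholders.
ip link add ve-eth0 type veth peer name ve-gw0
ip link set ve-eth0 netns "$(docker inspect -f '{{.State.Pid}}' zedbox)" name eth0
ip link set ve-gw0  netns "$(docker inspect -f '{{.State.Pid}}' GW)"
```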
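The two kinds of network instances can be approximated with plain Linux bridges. The sketch below is a minimal illustration under assumed bridge names and subnets, not necessarily what this repo's scripts do.

```sh
# Local network instance (e.g. NI1): a bridge NAT-ed to its current uplink.
# "bn1" and the 10.10.1.0/24 subnet are assumed placeholders.
ip link add bn1 type bridge
ip link set bn1 up
ip addr add 10.10.1.1/24 dev bn1       # NI1 gateway IP (assumed)
iptables -t nat -A POSTROUTING -s 10.10.1.0/24 -o eth0 -j MASQUERADE

# Switch network instance (NI6): a pure L2 bridge that includes the uplink,
# so DHCP/DNS for the app comes from the external GW router.
ip link add bn6 type bridge
ip link set bn6 up
ip link set eth3 master bn6
# the app's VIF would be enslaved the same way:
# ip link set <app6-vif> master bn6
```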
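For the eve-external DNS+DHCP service that `GW` provides to the switch network, a dnsmasq invocation along these lines would suffice; the interface name and the address range are assumptions, not values taken from this repo.

```sh
# Run dnsmasq inside the GW container on the link facing eth3/NI6.
# The interface name and the 192.168.6.0/24 range are assumed placeholders.
dnsmasq --no-daemon \
        --interface=eth3 \
        --dhcp-range=192.168.6.50,192.168.6.150,12h \
        --dhcp-option=option:router,192.168.6.1
```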
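The VPN network instances open IPsec tunnels to the clouds. Below is a hedged sketch of the `NI4<->cloud1` tunnel in strongSwan's legacy `ipsec.conf` format; every address, subnet and the PSK setup is an assumed placeholder, not the configuration actually shipped in this repo.

```sh
cat > /etc/ipsec.conf <<'EOF'
conn ni4-to-cloud1
    keyexchange=ikev2
    # uplink (eth1) IP of the edge device (assumed placeholder)
    left=192.168.1.2
    # NI4 subnet used as the local traffic selector (assumed)
    leftsubnet=10.10.4.0/24
    # cloud1 endpoint and its internal subnet (assumed)
    right=192.168.0.100
    rightsubnet=10.20.1.0/24
    # PSK authentication; the secret itself would live in /etc/ipsec.secrets
    authby=secret
    auto=start
EOF
ipsec restart
```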
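The `app2` portmap ("HTTP server exposed on the uplink IP and port 8080") corresponds to a DNAT rule in the `zedbox` namespace, roughly as sketched here; the uplink and app IPs are assumed placeholders.

```sh
# Expose app2's HTTP server (port 80 on NI2) on the uplink IP at port 8080.
# 192.168.0.2 (uplink IP) and 10.10.2.2 (app2 on NI2) are assumed placeholders.
iptables -t nat -A PREROUTING -d 192.168.0.2 -p tcp --dport 8080 \
         -j DNAT --to-destination 10.10.2.2:80
```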
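A simple smoke test for the `app1` -> `app2` ACLs is to curl from inside the `app1` container, once directly over `NI2` and once through the portmap hairpin; the container name and both destination IPs are assumptions for illustration.

```sh
docker exec app1 curl -s http://10.10.2.2:80        # direct, app2 on NI2 (assumed IP)
docker exec app1 curl -s http://192.168.0.2:8080    # hairpin via uplink portmap (assumed IP)
```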
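The 5 packets-per-second / burst-15 limit on `app6`'s hairpin access to `app2` maps naturally onto the iptables `limit` match; the subnet and address selectors below are assumed placeholders.

```sh
# Accept at most 5 packets/second (bursts up to 15) from NI6 to the
# portmapped app2 endpoint, and drop the excess.
iptables -A FORWARD -s 192.168.6.0/24 -d 192.168.0.2 -p tcp --dport 8080 \
         -m limit --limit 5/second --limit-burst 15 -j ACCEPT
iptables -A FORWARD -s 192.168.6.0/24 -d 192.168.0.2 -p tcp --dport 8080 -j DROP
```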
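The overlaps (1)-(4) are exactly what per-NI network namespaces or VRFs resolve, since each gets its own routing table. A minimal illustration, with names, addresses and table numbers assumed:

```sh
# Namespace variant: identical subnets can coexist, one per netns.
ip netns add ni1
ip netns add ni3
ip netns exec ni1 ip addr add 10.10.1.1/24 dev lo
ip netns exec ni3 ip addr add 10.10.1.1/24 dev lo   # same subnet, no conflict

# VRF variant: one VRF device (and thus one routing table) per network instance.
ip link add vrf-ni1 type vrf table 101
ip link add vrf-ni3 type vrf table 103
ip link set vrf-ni1 up
ip link set vrf-ni3 up
```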