diff --git a/site/encryption/crypto-overview.md b/site/encryption/crypto-overview.md new file mode 100644 index 0000000000..ae8c66027d --- /dev/null +++ b/site/encryption/crypto-overview.md @@ -0,0 +1,36 @@ +--- +title: Encryption and Weave Net +layout: default +--- + + + +Weave can be configured to encrypt both the data passing over the TCP +connections and the payloads of UDP packets sent between peers. This +is accomplished using the [NaCl](http://nacl.cr.yp.to/) crypto +libraries, employing Curve25519, XSalsa20 and Poly1305 to encrypt and +authenticate messages. Weave protects against injection and replay +attacks for traffic forwarded between peers. + +NaCl was selected because of its good reputation both in terms of +selection and implementation of ciphers, but equally importantly, its +clear APIs, good documentation and high-quality +[go implementation](https://godoc.org/golang.org/x/crypto/nacl). It is +quite difficult to use NaCl incorrectly. Contrast this with libraries +such as OpenSSL where the library and its APIs are vast in size, +poorly documented, and easily used wrongly. + +There are some similarities between Weave's crypto and +[TLS](https://tools.ietf.org/html/rfc4346). Weave does not need to cater +for multiple cipher suites, certificate exchange and other +requirements emanating from X509, and a number of other features. This +simplifies the protocol and implementation considerably. On the other +hand, Weave needs to support UDP transports, and while there are +extensions to TLS such as [DTLS](https://tools.ietf.org/html/rfc4347) +which can operate over UDP, these are not widely implemented and +deployed. 
+ +**See Also** + + * [How Weave Implements Encryption](/site/encryption/ephemeral-key.md) + * [Securing Containers Across Untrusted Networks](/site/using-weave/security-untrusted-networks.md) \ No newline at end of file diff --git a/site/encryption/implementation.md b/site/encryption/implementation.md new file mode 100644 index 0000000000..175b831940 --- /dev/null +++ b/site/encryption/implementation.md @@ -0,0 +1,185 @@ +--- +title: How Weave Implements Encryption +layout: default +--- + +This section discusses the following topics: + + * [Establishing the Ephemeral Session Key](#ephemeral-key) + * [Key Generation](#csprng) + * [Encrypting and Decrypting TCP Messages](#tcp) + * [Encrypting and Decrypting UDP Messages](#udp) + * [Further Reading](#plugin) + + + +#### Establishing the Ephemeral Session Key + +For every connection between peers, a fresh public/private key pair is +created at both ends, using NaCl's `GenerateKey` function. The public +key portion is sent to the other end as part of the initial handshake +performed over TCP. Peers that were started with a password do not +continue with connection establishment unless they receive a public +key from the remote peer. Thus either all peers in a Weave network +must be supplied with a password, or none. + +When a peer has received a public key from the remote peer, it uses +this to form the ephemeral session key for this connection. The public +key from the remote peer is combined with the private key for the +local peer in the usual [Diffie-Hellman way](https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange), +resulting in both peers arriving at the same shared key. To this is appended the supplied +password, and the result is hashed through SHA256, to form the final +ephemeral session key. + +The supplied password is never exchanged directly, and is thoroughly +mixed into the shared secret.
Furthermore, the rate at which TCP connections +are accepted is limited by Weave to 10 Hz, which thwarts online +dictionary attacks on reasonably strong passwords. + +The shared key formed by Diffie-Hellman is 256 bits long. Appending +the password to this obviously makes it longer by an unknown amount, +and the use of SHA256 reduces this back to 256 bits, to form the final +ephemeral session key. This late combination with the password +eliminates any "Man In The Middle" attacks: sniffing the public key +exchange between the two peers and faking their responses will not +grant an attacker knowledge of the password, and therefore an attacker would +not be able to form valid ephemeral session keys. + +The same ephemeral session key is used for both TCP and UDP traffic +between two peers. + +#### Key Generation and the Linux CSPRNG + +Generating fresh keys for every connection +provides forward secrecy at the cost of placing a demand on the Linux +CSPRNG (accessed by `GenerateKey` via `/dev/urandom`) proportional to +the number of inbound connection attempts. Weave has accept throttling +to mitigate denial-of-service attacks that seek to deplete the +CSPRNG entropy pool; however, even at the lower bound of ten requests +per second, there may not be enough entropy gathered on a headless +system to keep pace. + +Under such conditions, the consequences will be limited to slowing +down processes reading from the blocking `/dev/random` device as the +kernel waits for enough new entropy to be harvested. It is important +to note that, contrary to intuition, this low entropy state does not +compromise the ongoing use of `/dev/urandom`. [Expert +opinion](http://blog.cr.yp.to/20140205-entropy.html) +asserts that as long as the CSPRNG is seeded with enough entropy (for example, +256 bits) before random number generation commences, then the output is +entirely safe for use as key material.
+ +By way of comparison, this is exactly how OpenSSL works - it reads 256 +bits of entropy at startup, and uses that to seed an internal CSPRNG, +which is used to generate keys. While Weave could have taken +the same approach and built a custom CSPRNG to work around the +potential `/dev/random` blocking issue, the decision was made to rely +on the [heavily scrutinised](http://eprint.iacr.org/2012/251.pdf) Linux random number +generator as [advised +here](http://cr.yp.to/highspeed/coolnacl-20120725.pdf) (page 10, +'Centralizing randomness'). + +>**Note:** The aforementioned notwithstanding, if +Weave's demand on `/dev/urandom` is causing you problems with blocking +`/dev/random` reads, please get in touch with us - we'd love to hear +about your use case. + +#### Encrypting and Decrypting TCP Messages + +TCP connections are used only to exchange topology information between +peers, via a message-based protocol. Encryption of each message is +carried out by NaCl's `secretbox.Seal` function using the ephemeral +session key and a nonce. The nonce contains the message sequence +number, which is incremented for every message sent, and a bit +indicating the polarity of the connection at the sender ('1' for +outbound). The latter is required by the +[NaCl Security Model](http://nacl.cr.yp.to/box.html) in order to +ensure that the two ends of the connection do not use the same nonces. + +Decryption of a message at the receiver is carried out by NaCl's +`secretbox.Open` function using the ephemeral session key and a +nonce. The receiver maintains its own message sequence number, which +it increments for every message it decrypts successfully. The nonce +is constructed from that sequence number and the connection +polarity. As a result, the receiver will only be able to decrypt a +message if it has the expected sequence number. This prevents replay +attacks. + +#### Encrypting and Decrypting UDP Packets + +UDP connections carry captured traffic between peers.
For a UDP packet +sent between peers that are using crypto, the encapsulation looks as +follows: + + +-----------------------------------+ + | Name of sending peer | + +-----------------------------------+ + | Message Sequence No and flags | + +-----------------------------------+ + | NaCl SecretBox overheads | + +-----------------------------------+ -+ + | Frame 1: Name of capturing peer | | + +-----------------------------------+ | This section is encrypted + | Frame 1: Name of destination peer | | using the ephemeral session + +-----------------------------------+ | key between the weave peers + | Frame 1: Captured payload length | | sending and receiving this + +-----------------------------------+ | packet. + | Frame 1: Captured payload | | + +-----------------------------------+ | + | Frame 2: Name of capturing peer | | + +-----------------------------------+ | + | Frame 2: Name of destination peer | | + +-----------------------------------+ | + | Frame 2: Captured payload length | | + +-----------------------------------+ | + | Frame 2: Captured payload | | + +-----------------------------------+ | + | ... | | + +-----------------------------------+ | + | Frame N: Name of capturing peer | | + +-----------------------------------+ | + | Frame N: Name of destination peer | | + +-----------------------------------+ | + | Frame N: Captured payload length | | + +-----------------------------------+ | + | Frame N: Captured payload | | + +-----------------------------------+ -+ + +This is very similar to the [non-crypto encapsulation](/site/router-topology/router-encapsulation.md). + +All of the frames on a connection are encrypted with the same +ephemeral session key, and a nonce constructed from a message sequence +number, flags and the connection polarity. This is very similar to the +TCP encryption scheme, and encryption is again done with the NaCl +`secretbox.Seal` function. 
The main difference is that the message +sequence number and flags are transmitted as part of the message, +unencrypted. + +The receiver uses the name of the sending peer to determine which +ephemeral session key and local cryptographic state to use for +decryption. Frames that are to be forwarded on to some further peer +will be re-encrypted with the relevant ephemeral session keys for the +onward connections. Thus all traffic is fully decrypted on every peer +it passes through. + +Decryption is once again carried out by NaCl's `secretbox.Open` +function using the ephemeral session key and nonce. The latter is +constructed from the message sequence number and flags that appeared +in the unencrypted portion of the received message, and the connection +polarity. + +To guard against replay attacks, the receiver maintains some state in +which it remembers the highest message sequence number seen. It could +simply reject messages with lower sequence numbers, but that could +result in excessive message loss when messages are re-ordered. The +receiver therefore additionally maintains a set of received message +sequence numbers in a window below the highest number seen, and only +rejects messages with a sequence number below that window, or +contained in the set. The window spans at least 2^20 message sequence +numbers, and hence any re-ordering between the most recent ~1 million +messages is handled without dropping messages. + +**See Also** + + * [architecture documentation](https://github.com/weaveworks/weave/blob/master/docs/architecture.txt) + * [Securing Containers Across Untrusted Networks](/site/using-weave/security-untrusted-networks.md) diff --git a/site/faq.md b/site/faq.md new file mode 100644 index 0000000000..1e38372dc2 --- /dev/null +++ b/site/faq.md @@ -0,0 +1,93 @@ +--- +title: Weave Net FAQ +layout: default +--- + + + +### Q: How do I obtain the IP of a specific container when I'm using Weave?
+ +You can use `weave ps` to see the allocated address of a container on a Weave network. + +See [Troubleshooting Weave - List attached containers](/site/troubleshooting.md#list-attached-containers). + + +### Q: How do I expose one of my containers to the outside world? + +Exposing a container to the outside world is described in [Exposing Services to the Outside](/site/using-weave/service-export.md). + + +### Q: My dockerized app needs to check the request of an application that uses a static IP. Is it possible to manually change the IP of a container? + + +You can manually change the IP of a container using [Classless Inter-Domain Routing or CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing). + +For more information, refer to [Manually Specifying the IP Address of a Container](/site/using-weave/manual-ip-address.md). + + +### Q: Can I connect my existing 'legacy' network with a Weave container network? + +Yes, you can. + +For example, you have a Weave network that runs on hosts A, B, and C, and you have an additional host, which we'll call P, where neither Weave nor Docker is running. However, you need to connect from a process running on host P to a specific container running on host B on the Weave network. Since the Weave network is completely separate from any network that P is connected to, you cannot connect to the container using the container's IP address. + +The simplest way to accomplish this would be to run Weave on the host and then run `weave expose` to expose the network to any running containers. See [Integrating a Network Host](/site/using-weave/host-network-integration.md). If this is not possible, you could also expose a port on the container on host B and then connect to it. + +You can read about exposing ports in [Service Exporting](/site/using-weave/service-export.md). + + +### Q: Why am I seeing the same IP address assigned to two different containers on different hosts?
+ +Under normal circumstances, this should never happen, but it can occur if `weave forget` and `weave rmpeer` were run on more than one host. + +You cannot call `weave rmpeer` on more than one host. The address space that was owned by the stale peer cannot be left dangling, and as a result it gets reassigned. In this instance, the address is reassigned to the peer on which `weave rmpeer` was run. Therefore, if you run `weave forget` and then `weave rmpeer` on more than one host at a time, it results in duplicate IPs on more than one host. + +Once the peers detect the inconsistency, they log the error and drop the connection that supplied the inconsistent data. The rest of the peers will carry on with their view of the world, but the network will not function correctly. + +Some peers may be able to communicate their claim to the others before they run `rmpeer` (i.e. it's a race), so what you can expect is a few cliques of peers that are still talking to each other, but repeatedly dropping attempted connections with peers in other cliques. + +For more information, see [Address Allocation with IP Address Management (IPAM)](/site/ipam/overview-init-ipam.md) and [Starting, Stopping and Removing Peers](/site/ipam/stop-remove-peers-ipam.md). + + +### Q: What is the best practice for resetting a node that goes out of service? + +When a node goes out of service, the best option is to call `weave rmpeer` on one host and then `weave forget` on all the other hosts. + +See [Starting, Stopping and Removing Peers](/site/ipam/stop-remove-peers-ipam.md) for an in-depth discussion. + + +### Q: What about Weave's performance? Are software-defined network overlays just as fast as native networking? + +All virtualization techniques have some overhead, and Weave's overhead is typically around 2-3%. Unless your system is completely bottlenecked on the network, you won't notice this during normal operation. + +Weave Net also automatically uses the fastest datapath between two hosts.
When Weave Net can't use the fast datapath between two hosts, it falls back to the slower packet forwarding approach. Selecting the fastest forwarding approach is automatic, and is determined on a connection-by-connection basis. For example, a Weave network spanning two data centers might use fast datapath within the data centers, but not for the more constrained network link between them. + +For more information about fast datapath, see [How Fast Datapath Works](/site/fastdp/fastdp-how-it-works.md). + + +### Q: How can I tell if Weave is using fast datapath (fastdp) or not? + +To see whether Weave is using fastdp, run `weave status connections`. + +For more information on this command, see [Using Fast Datapath](/site/fastdp/using-fastdp.md). + + +### Q: Does encryption work with fastdp? + +Encryption does not work with fast datapath. If you enable encryption using the `--password` option to launch Weave (or you use the `WEAVE_PASSWORD` environment variable), fast datapath will by default be disabled. + +You can, however, have a mixture of fast datapath connections over trusted links and encrypted connections over untrusted links. + +See [Using Fast Datapath](/site/fastdp/using-fastdp.md) for more information. + +### Q: Can I create multiple networks where containers can communicate on one network, but are isolated from containers on other networks? + +Yes, of course! Weave allows you to run isolated networks and still allow open communications between individual containers from those isolated networks.
You can find information on how to do this in [Application Isolation](/site/using-weave/application-isolation.md). + + +**See Also** + + * [Troubleshooting Weave](/site/troubleshooting.md) + * [Troubleshooting IPAM](/site/ipam/troubleshooting.md) + * [Troubleshooting the Proxy](/site/weave-docker-api/using-proxy.md) + diff --git a/site/fastdp/fastdp-how-it-works.md b/site/fastdp/fastdp-how-it-works.md new file mode 100644 index 0000000000..8087484165 --- /dev/null +++ b/site/fastdp/fastdp-how-it-works.md @@ -0,0 +1,30 @@ +--- +title: How Fast Datapath Works +layout: default +--- + + +Weave implements an overlay network between Docker hosts. Without fast datapath enabled, each packet is encapsulated in a tunnel protocol header and sent to the destination host, where the header is removed. The Weave router is a user space process, which means that the packet follows a winding path in and out of the Linux kernel: + +![Weave Net Encapsulation](/images/weave-net-encap1-1024x459.png) + + +The fast datapath in Weave uses the Linux kernel's [Open vSwitch datapath module](https://www.kernel.org/doc/Documentation/networking/openvswitch.txt). This module enables the Weave router to tell the kernel how to process packets: + +![Weave Net Encapsulation](/images/weave-net-fdp1-1024x454.png) + +Because Weave Net issues instructions directly to the kernel, context switches are reduced, and so fast datapath lowers CPU overhead and latency. The packet goes straight from your application to the kernel, where the Virtual Extensible LAN (VXLAN) header is added (the NIC does this if it offers VXLAN acceleration). VXLAN is an IETF standard UDP-based tunneling protocol that enables you to use common networking tools like [Wireshark](https://www.wireshark.org/) to inspect the tunneled packets. + +![Weave Net Encapsulation](/images/weave-frame-encapsulation-178x300.png) + +Prior to version 1.2, Weave Net used a custom encapsulation format.
Fast datapath uses VXLAN which, like Weave Net's custom encapsulation format, is UDP-based and therefore requires no special configuration of the network infrastructure. + +>**Note:** The required Open vSwitch datapath (ODP) and VXLAN features are present in Linux kernel versions 3.12 and greater. If your kernel was built without the necessary modules, Weave Net will fall back to the "user mode" packet path. + + +**See Also** + + * [Deploying Applications to Weave](/site/using-weave/deploying-applications.md) + * [Using Fast Datapath](/site/fastdp/using-fastdp.md) + + diff --git a/site/fastdp/using-fastdp.md b/site/fastdp/using-fastdp.md new file mode 100644 index 0000000000..44cf165961 --- /dev/null +++ b/site/fastdp/using-fastdp.md @@ -0,0 +1,55 @@ +--- +title: Using Fast Datapath +layout: default +--- + + +The most important thing to know about fast datapath is that you don't need to configure anything before using this feature. If you are using Weave Net 1.2 or greater, fast datapath (`fastdp`) is automatically enabled. + +When Weave Net can't use the fast data path between two hosts, it falls back to the slower packet forwarding approach. Selecting the fastest forwarding approach is automatic, and is determined on a connection-by-connection basis. For example, a Weave network spanning two data centers might use fast data path within the data centers, but not for the more constrained network link between them. + +See [How Fastdp Works](/site/fastdp/fastdp-how-it-works.md) for a more in-depth discussion of this feature. + +### Disabling Fast Datapath + +You can disable fastdp by setting the `WEAVE_NO_FASTDP` environment variable at `weave launch`: + +~~~bash +$ WEAVE_NO_FASTDP=true weave launch +~~~ + +### Fast Datapath and Encryption + +Encryption does not work with fast datapath. If you enable encryption using the `--password` option to launch Weave (or you use the `WEAVE_PASSWORD` environment variable), fast data path will by default be disabled.
+ +When encryption is not in use, there may be other conditions under which fastdp falls back to `sleeve` mode. Once these conditions pass, Weave reverts to using fastdp. To view which mode Weave is using, run `weave status connections`. + +### Viewing the Connection Mode (Fastdp or Sleeve) + +Weave automatically uses the fastest datapath for every connection unless it encounters a situation that prevents it from working. To ensure that Weave can use the fast data path: + + * Avoid Network Address Translation (NAT) devices + * Open UDP port 6784 (this is the port used by the Weave routers) + * Ensure that `WEAVE_MTU` fits within the MTU of the intermediate network (see below) + +The use of fast datapath is an automated connection-by-connection decision made by Weave, and because of this, you may end up with a mixture of connection tunnel types. If fast data path cannot be used for a connection, Weave falls back to the "user space" packet path. + +Once a Weave network is set up, you can query the connections using the `weave status connections` command: + +~~~bash +$ weave status connections +<-192.168.122.25:43889 established fastdp a6:66:4f:a5:8a:11(ubuntu1204) +~~~ + +Here, `fastdp` indicates that fast datapath is being used on the connection.
If fastdp is not shown, the field displays `sleeve` indicating Weave Net's fall-back encapsulation method: + +~~~bash +$ weave status connections +<- 192.168.122.25:54782 established sleeve 8a:50:4c:23:11:ae(ubuntu1204) +~~~ + +**See Also** + + * [Deploying Applications to Weave](/site/using-weave/deploying-applications.md) + * [How Fastdp Works](/site/fastdp/fastdp-how-it-works.md) + \ No newline at end of file diff --git a/site/features.md b/site/features.md index 61d8cedc7f..45d1fa7b8b 100644 --- a/site/features.md +++ b/site/features.md @@ -3,622 +3,253 @@ title: Weave Features layout: default --- -# Weave Features - -Weave has a few more features beyond those illustrated by the [basic -example](https://github.com/weaveworks/weave#example): - - * [Virtual ethernet switch](#virtual-ethernet-switch) - * [Fast data path](#fast-data-path) - * [Seamless Docker integration](#docker) - * [Docker network plugin](#plugin) - * [Address allocation](#addressing) - * [Naming and discovery](#naming-and-discovery) - * [Application isolation](#application-isolation) - * [Dynamic network attachment](#dynamic-network-attachment) +The following is a comprehensive list and overview of all the features available in Weave +Net: + + * [Virtual Ethernet Switch](#virtual-ethernet-switch) + * [Fast Data Path](#fast-data-path) + * [Seamless Docker Integration](#docker) + * [Docker Network Plugin](#plugin) + * [Address Allocation (IPAM)](#addressing) + * [Naming and Discovery](#naming-and-discovery) + * [Application Isolation](#application-isolation) + * [Dynamic Network Attachment](#dynamic-network-attachment) * [Security](#security) - * [Host network integration](#host-network-integration) - * [Service export](#service-export) - * [Service import](#service-import) - * [Service binding](#service-binding) - * [Service routing](#service-routing) - * [Multi-cloud networking](#multi-cloud-networking) - * [Multi-hop routing](#multi-hop-routing) - * [Dynamic topologies](#dynamic-topologies) - 
* [Container mobility](#container-mobility) - * [Fault tolerance](#fault-tolerance) + * [Host Network Integration](#host-network-integration) + * [Service Export](#services) + * [Service Import](#services) + * [Service Binding](#services) + * [Service Routing](#services) + * [Multi-cloud Networking](#multi-cloud-networking) + * [Multi-hop Routing](#multi-hop-routing) + * [Dynamic Topologies](#dynamic-topologies) + * [Container Mobility](#container-mobility) + * [Fault Tolerance](#fault-tolerance) -### Virtual Ethernet Switch +For step-by-step instructions on how to use Weave Net, +see [Using Weave Net](/site/using-weave/intro-example.md) -To application containers, the network established by weave looks -like a giant Ethernet switch to which all the containers are -connected. +###Virtual Ethernet Switch -Containers can easily access services from each other; e.g. in the -container on `$HOST1` we can start a netcat "service" with +Weave Net creates a virtual network that connects Docker containers +deployed across multiple hosts. +To application containers, the network established by Weave +resembles a giant Ethernet switch, where all containers are +connected and can easily access services from one another. - root@a1:/# nc -lk -p 4422 +Because Weave uses standard protocols, your favorite network +tools and applications, developed over decades, can still +be used to configure, secure, monitor, and troubleshoot +a container network. -and then connect to it from the container on `$HOST2` with +Broadcast and Multicast protocols can also be implemented +over Weave Net. - root@a2:/# echo 'Hello, world.' | nc a1 4422 +To start using Weave Net, see [Installing Weave Net](/site/installing-weave.md) +and [Deploying Applications to Weave Net](/site/using-weave/deploying-applications.md) -Note that *any* protocol is supported. Doesn't even have to be over -TCP/IP, e.g. 
a netcat UDP service would be run with +###Fast Datapath - root@a1:/# nc -lu -p 5533 +Weave automatically chooses the fastest available method to +transport data between peers. The best performing of these +(the 'fast datapath') offers near-native throughput and latency. - root@a2:/# echo 'Hello, world.' | nc -u a1 5533 +Fast datapath does not support encryption. For full details on configuring +Weave when you have connections that traverse untrusted networks, +see [Securing Connections Across Untrusted Networks](/site/using-weave/security-untrusted-networks.md) for more details. -Broadcast and multicast protocols also work over Weave Net. +See [Using Fast Datapath](/site/fastdp/using-fastdp.md) and +[How Fast Datapath Works](/site/fastdp/fastdp-how-it-works.md). -We can deploy the entire arsenal of standard network tools and -applications, developed over decades, to configure, secure, monitor, -and troubleshoot our container network. To put it another way, we can -now re-use the same tools and techniques when deploying applications -as containers as we would have done when deploying them 'on metal' in -our data centre. +###Seamless Docker Integration (Weave Docker API Proxy) -### Fast data path +Weave includes a [Docker API Proxy](/site/weave-docker-api/set-up-proxy.md), which can be +used to launch containers to the Weave network using the Docker [command-line interface](https://docs.docker.com/reference/commandline/cli/) or the [remote API](https://docs.docker.com/reference/api/docker_remote_api/). -Weave automatically chooses the fastest available method to transport -data between peers. The most performant of these ('fastdp') offers -near-native throughput and latency but does not support encryption; -consequently supplying a password will cause the router to fall back -to a slower mode ('sleeve') that does, for connections that traverse -untrusted networks (see the [security](#security) section for more -details). 
+To use the proxy run: -Even when encryption is not in use, certain adverse network conditions -will cause this fallback to occur dynamically; in these circumstances, -weave will upgrade the connection back to the fastdp transport without -user intervention once they abate. You can see which method is in use -by examining the output of `weave status connections`. - -You can also administratively disable fastdp with the -`WEAVE_NO_FASTDP` environment variable: - - $ WEAVE_NO_FASTDP=true weave launch - -### Seamless Docker integration - -Weave includes a [Docker API proxy](proxy.html) so that containers -launched via the Docker -[command-line interface](https://docs.docker.com/reference/commandline/cli/) -or -[remote API](https://docs.docker.com/reference/api/docker_remote_api/) -are attached to the weave network before they begin execution. To use -the proxy, run - - $ eval $(weave env) - -and then start containers as usual. - -Containers started in this way that subsequently restart, either by an -explicit `docker restart` command or by Docker restart policy, are -re-attached to the weave network by the weave Docker API proxy. - -### Docker network plugin - -Alternatively, you can use weave as a Docker plugin. A Docker network -named `weave` is created by `weave launch`, which you can use like this: - - $ docker run --net=weave -ti ubuntu - -> NB: The plugin is an *alternative* to the proxy, hence one should -> *not* run `eval $(weave env)` beforehand. - -For more details see the [plugin documentation](plugin.html). - -### Address allocation - -Containers are automatically allocated an IP address that is unique -across the weave network. You can see which address was allocated with -[`weave ps`](troubleshooting.html#list-attached-containers): - - host1$ weave ps a1 - a7aee7233393 7a:44:d3:11:10:70 10.32.0.2/12 - -Weave detects when a container has exited and releases its -automatically allocated addresses so they can be re-used. 
- -See the [Automatic IP Address Management](ipam.html) documentation for -further details. We also have an explanation of -[the basics of IP addressing](ip-addresses.html) - -Instead of getting weave to allocate IP addresses automatically, it is -also possible to specify an address and network explicitly, expressed -in -[CIDR notation](http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation) -\- let's see how the first example in the README would have looked: + host1$ eval $(weave env) + + +With the proxy enabled, you can start and manage containers using standard Docker commands. -On $HOST1: +Containers started in this way that subsequently restart, either +by an explicit `docker restart` command or by Docker restart +policy, are re-attached to the Weave network by the `Weave Docker API Proxy` - host1$ docker run -e WEAVE_CIDR=10.2.1.1/24 -ti ubuntu - root@7ca0f6ecf59f:/# +See [Using the Weave Docker API](/site/weave-docker-api/using-proxy.md) -And $HOST2: - host2$ docker run -e WEAVE_CIDR=10.2.1.2/24 -ti ubuntu - root@04c4831fafd3:/# +###Weave Network Docker Plugin -Then in the container on $HOST1... +Weave can also be used as a [Docker plugin](https://docs.docker.com/engine/extend/plugins_network/). A Docker network +named `weave` is created by `weave launch`, which is used as follows: - root@7ca0f6ecf59f:/# ping -c 1 -q 10.2.1.2 - PING 10.2.1.2 (10.2.1.2): 48 data bytes - --- 10.2.1.2 ping statistics --- - 1 packets transmitted, 1 packets received, 0% packet loss - round-trip min/avg/max/stddev = 1.048/1.048/1.048/0.000 ms + $ docker run --net=weave -ti ubuntu -Similarly, in the container on $HOST2... 
+Using the Weave plugin enables you to take advantage of [Docker's network functionality]( https://docs.docker.com/engine/extend/plugins_network/) - root@04c4831fafd3:/# ping -c 1 -q 10.2.1.1 - PING 10.2.1.1 (10.2.1.1): 48 data bytes - --- 10.2.1.1 ping statistics --- - 1 packets transmitted, 1 packets received, 0% packet loss - round-trip min/avg/max/stddev = 1.034/1.034/1.034/0.000 ms +Also, Weave’s Docker Network plugin doesn't require an external cluster store and you can start and stop containers even +when there are network connectivity problems. -The IP addresses and netmasks can be anything you like, but make sure -they don't conflict with any IP ranges in use on the hosts or -IP addresses of external services the hosts or containers need to -connect to. The individual IP addresses given to containers must, of -course, be unique - if you pick an address that the automatic -allocator has already assigned you will receive a warning. -If you restart a container, it will retain the same IP addresses on -the weave network: - host1$ docker run --name a1 -tdi ubuntu - f76b09a9fcfee04551dbb8d951d9a83e7e7d55126b02fd9f44f9f8a5f07d7c96 - host1$ weave ps a1 - a1 1e:dc:2a:db:ef:ff 10.32.0.3/12 - host1$ docker restart a1 - host1$ weave ps a1 - a1 16:c0:6f:5d:c5:73 10.32.0.3/12 +>*Note:* The plugin is an *alternative* to the proxy, and therefore you do +*not* need to run `eval $(weave env)` beforehand. -It works the same if you stop and immediately restart: +See [Using the Weave Docker Network Plugin](/site/plugin/weave-plugin-how-to.md) for more details. - host1$ docker stop a1 - host1$ docker start a1 -However, additional addresses added with the `weave attach` command -will not be retained. +###IP Address Management (IPAM) + +Containers are automatically allocated a unique IP address. To view the addresses allocated by Weave run, `weave ps`. 
-There is also a `weave restart` command, which does re-attach all -current IP addresses: +Instead of allowing Weave to automatically allocate addresses, an IP address and a network can be explicitly +specified. See [How to Manually Specify IP Addresses and Subnets](/site/using-weave/manual-ip-address.md) for instructions. - host1$ weave restart b1 +For a discussion on how Weave uses IPAM, see [Automatic IP Address Management](/site/ipam/overview-init-ipam.md). Also review +[the basics of IP addressing](/site/ip-addresses/ip-addresses.md) for an explanation of addressing and private networks. -(note that the IP addresses are held for a limited time - currently -30 seconds) -### Naming and discovery +###Naming and Discovery + +Named containers are automatically registered in [weavedns](/site/weavedns/overview-using-weavedns.md), +and are discoverable by using standard, simple name lookups: -Named containers are automatically registered in -[weaveDNS](weavedns.html), which makes them discoverable through -simple name lookups: host1$ docker run -dti --name=service ubuntu host1$ docker run -ti ubuntu root@7b21498fb103:/# ping service -This feature supports load balancing, fault resilience and hot -swapping; see the [weaveDNS](weavedns.html) documentation for more -details. - -### Application isolation - -A single weave network can host multiple, isolated applications, with -each application's containers being able to communicate with each -other but not containers of other applications. - -To accomplish that, we assign each application a different subnet.
-Let's begin by configuring weave's allocator to manage multiple -subnets: - - host1$ weave launch --ipalloc-range 10.2.0.0/16 --ipalloc-default-subnet 10.2.1.0/24 - host1$ eval $(weave env) - host2$ weave launch --ipalloc-range 10.2.0.0/16 --ipalloc-default-subnet 10.2.1.0/24 $HOST1 - host2$ eval $(weave env) - -This delegates the entire 10.2.0.0/16 subnet to weave, and instructs -it to allocate from 10.2.1.0/24 within that if no specific subnet is -specified. Now we can launch some containers in the default subnet: - - host1$ docker run --name a1 -ti ubuntu - host2$ docker run --name a2 -ti ubuntu - -And some more containers in a different subnet: - - host1$ docker run -e WEAVE_CIDR=net:10.2.2.0/24 --name b1 -ti ubuntu - host2$ docker run -e WEAVE_CIDR=net:10.2.2.0.24 --name b2 -ti ubuntu - -A quick 'ping' test in the containers confirms that they can talk to -each other but not the containers of our first application... - - root@b1:/# ping -c 1 -q b2 - PING b2.weave.local (10.2.2.128) 56(84) bytes of data. - --- b2.weave.local ping statistics --- - 1 packets transmitted, 1 received, 0% packet loss, time 0ms - rtt min/avg/max/mdev = 1.338/1.338/1.338/0.000 ms - - root@b1:/# ping -c 1 -q a1 - PING a1.weave.local (10.2.1.2) 56(84) bytes of data. - --- a1.weave.local ping statistics --- - 1 packets transmitted, 0 received, 100% packet loss, time 0ms - - root@b1:/# ping -c 1 -q a2 - PING a2.weave.local (10.2.1.130) 56(84) bytes of data. - --- a2.weave.local ping statistics --- - 1 packets transmitted, 0 received, 100% packet loss, time 0ms - - -This isolation-through-subnets scheme is an example of carrying over a -well-known technique from the 'on metal' days to containers. 
- -If desired, a container can be attached to multiple subnets when it is -started: - - host1$ docker run -e WEAVE_CIDR="net:default net:10.2.2.0/24" -ti ubuntu - -`net:default` is used here to request allocation of an address from -the default subnet in addition to one from an explicitly specified -range. - -NB: By default docker permits communication between containers on the -same host, via their docker-assigned IP addresses. For complete -isolation between application containers, that feature needs to be -disabled by -[setting `--icc=false`](https://docs.docker.com/engine/userguide/networking/default_network/container-communication/#communication-between-containers) -in the docker daemon configuration. Furthermore, containers should be -prevented from capturing and injecting raw network packets - this can -be accomplished by starting them with the `--cap-drop net_raw` option. - -### Dynamic network attachment - -Sometimes the application network to which a container should be -attached is not known in advance. For these situations, weave allows -an existing, running container to be attached to the weave network. To -illustrate, we can achieve the same effect as the first example with - - host1$ C=$(docker run -e WEAVE_CIDR=none -dti ubuntu) - host1$ weave attach $C - 10.2.1.3 - -(Note that since we modified `DOCKER_HOST` to point to the proxy -earlier, we have to pass `-e WEAVE_CIDR=none` to start a container -that _doesn't_ get automatically attached to the weave network for the -purposes of this example.) - -The output shows the IP address that got allocated, in this case on -the default subnet. 
- -There is a matching `weave detach` command: - - host1$ weave detach $C - 10.2.1.3 - -You can detach a container from one application network and attach it -to another: - - host1$ weave detach net:default $C - 10.2.1.3 - host1$ weave attach net:10.2.2.0/24 $C - 10.2.2.3 - -or attach a container to multiple application networks, effectively -sharing it between applications: - - host1$ weave attach net:default - 10.2.1.3 - host1$ weave attach net:10.2.2.0/24 - 10.2.2.3 - -Finally, multiple addresses can be attached or detached with a single -invocation: - - host1$ weave attach net:default net:10.2.2.0/24 net:10.2.3.0/24 $C - 10.2.1.3 10.2.2.3 10.2.3.1 - host1$ weave detach net:default net:10.2.2.0/24 net:10.2.3.0/24 $C - 10.2.1.3 10.2.2.3 10.2.3.1 - -Note that addresses added by dynamic attachment are not re-attached -if the container restarts. - -### Security - -In order to connect containers across untrusted networks, weave peers -can be told to encrypt traffic by supplying a `--password` option or -`WEAVE_PASSWORD` environment variable when launching weave, e.g. - - host1$ weave launch --password wfvAwt7sj - -or - - host1$ export WEAVE_PASSWORD=wfvAwt7sj - host1$ weave launch - -_NOTE: The command line option takes precedence over the environment -variable._ - -> To avoid leaking your password via the kernel process table or your -> shell history, we recommend you store it in a file and capture it -> into a shell variable prior to launching weave: `export -> WEAVE_PASSWORD=$(cat /path/to/password-file)` - -The password needs to be reasonably strong to guard against online -dictionary attacks. We recommend at least 50 bits of entropy. An easy -way to generate a random password which satsifies this requirement is - - < /dev/urandom tr -dc A-Za-z0-9 | head -c9 ; echo - -The same password must be specified for all weave peers; by default -both control and data plane traffic will then use authenticated -encryption. 
If some of your peers are colocated in a trusted network -(for example within the boundary of your own datacentre) you can use -the `--trusted-subnets` argument to `weave launch` to selectively -disable data plane encryption as an optimisation. Both peers must -consider the other to be in a trusted subnet for this to take place - -if they do not, weave will [fall back to a slower -method](#fast-data-path) for transporting data between peers as fast -datapath does not support encryption. - -Be aware that: - -* Containers will be able to access the router REST API if you have - disabled fast datapath. You can prevent this by setting - [`--icc=false`](https://docs.docker.com/engine/userguide/networking/default_network/container-communication/#communication-between-containers) -* Containers are able to access the router control and data plane - ports, but you can mitigate this by enabling encryption - -### Host network integration - -Weave application networks can be integrated with a host's network, -establishing connectivity between the host and application containers -anywhere. - -Let's say that in our example we want `$HOST2` to have access to the -application containers. On `$HOST2` we run - - host2$ weave expose - 10.2.1.132 - -This grants the host access to all application containers in the -default subnet. An IP address is allocated for that purpose, which is -returned. 
So now - - host2$ ping 10.2.1.132 - -will work, and, more interestingly, we can ping our `a1` application -container, which is residing on `$HOST1`: - - host2$ ping $(weave dns-lookup a1) - -Multiple subnet addresses can be exposed or hidden with a single -invocation: - - host2$ weave expose net:default net:10.2.2.0/24 - 10.2.1.132 10.2.2.130 - host2$ weave hide net:default net:10.2.2.0/24 - 10.2.1.132 10.2.2.130 - -Finally, exposed addresses can be added to weaveDNS by supplying a -fully-qualified domain name: - - host2$ weave expose -h exposed.weave.local - 10.2.1.132 - -### Service export - -Services running in containers on a weave network can be made -accessible to the outside world (and, more generally, other networks) -from any weave host, irrespective of where the service containers are -located. - -Say we want to make our example netcat "service", which is running in -a container on `$HOST1`, accessible to the outside world via `$HOST2`. - -First we need to expose the application network to `$HOST2`, as -explained [above](#host-network-integration), i.e. - - host2$ weave expose - 10.2.1.132 - -Then we add a NAT rule to route from the outside world to the -destination container service. - - host2$ iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 2211 \ - -j DNAT --to-destination $(weave dns-lookup a1):4422 - -Here we are assuming that the "outside world" is connecting to `$HOST2` -via 'eth0'. We want TCP traffic to port 2211 on the external IPs to be -routed to our 'nc' service, which is running on port 4422 in the -container a1. - -With the above in place, we can connect to our 'nc' service from -anywhere with - - echo 'Hello, world.' | nc $HOST2 2211 - -(NB: due to the way routing is handled in the Linux kernel, this won't -work when run *on* `$HOST2`.) -Similar NAT rules to the above can used to expose services not just to -the outside world but also other, internal, networks. 
+`weavedns` also supports [load balancing](/site/weavedns/load-balance-fault-weavedns.md), [fault resilience](/site/weavedns/load-balance-fault-weavedns.md) and [hot swapping](/site/weavedns/managing-entries-weavedns.md). -### Service import +See [Naming and Discovery with Weavedns](/site/weavedns/how-works-weavedns.md). + +###Application Isolation -Applications running in containers on a weave network can be given -access to services which are only reachable from certain weave hosts, -irrespective of where the application containers are located. +A single weave network can host multiple, isolated +applications, with each application's containers being able +to communicate with each other but not with the containers +of other applications. -Say that, as an extension of our example, we have a netcat service -running on `$HOST3`, port 2211, and that `$HOST3` is not part of the weave -network and is only reachable from `$HOST1`, not `$HOST2`. Nevertheless we -want to make the service accessible to an application running in a -container on `$HOST2`. +To isolate applications, Weave Net can make use of the +`isolation-through-subnets` technique, a well-known strategy +from the 'on metal' days that carries over directly to +containers. +See [Isolating Applications](/site/using-weave/application-isolation.md) +for information on how to use the isolation-through-subnets +technique with Weave Net. -First we need to expose the application network to the host, as -explained [above](#host-network-integration), this time on `$HOST1`, -i.e. +###Dynamic Network Attachment - host1$ weave expose -h host1.weave.local - 10.2.1.3 -Then we add a NAT rule to route from the above IP to the destination -service. +At times, you may not know the application network for a +given container in advance.
In these cases, you can take +advantage of Weave's ability to attach and detach running +containers to and from any network. -This allows any application container to reach the service by -connecting to 10.2.1.3:3322. So if `$HOST3` is indeed running a netcat -service on port 2211, e.g. +See [Dynamically Attaching and Detaching Containers](/site/using-weave/dynamically-attach-containers.md) +for details. - host3$ nc -lk -p 2211 -then we can connect to it from our application container on `$HOST2` with +###Security - root@a2:/# echo 'Hello, world.' | nc host1 3322 +In keeping with our ease-of-use philosophy, the cryptography +in Weave is intended to satisfy a particular user requirement: +strong, out-of-the-box security without a complex setup or +the need to wade through the configuration of cipher +suite negotiation, certificate generation or any of the +other things needed to properly secure an IPsec or TLS installation. -The same command will work from any application container. +Weave communicates via TCP and UDP on a well-known port, so +you can apply whatever security measures are appropriate to your requirements - for +example an IPsec VPN for inter-DC traffic, or a VPC/private network +inside a data center. -### Service binding +For cases when this is not convenient, Weave Net provides a +secure, [authenticated encryption](https://en.wikipedia.org/wiki/Authenticated_encryption) +mechanism which you can use in conjunction with or as an +alternative to any other security technologies you have +running alongside Weave. -Importing a service provides a degree of indirection that allows late -and dynamic binding, similar to what can be achieved with a proxy. In -our example, application containers are unaware that the service they -are accessing at `10.2.1.3:3322` is in fact residing on -`$HOST3:2211`. We can point application containers at another service -location by changing the above NAT rule, without having to alter the -applications themselves.
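Encryption is enabled by supplying a shared password (via `--password` or the `WEAVE_PASSWORD` environment variable) when launching Weave on every peer. The password should be reasonably strong; one way to generate a random password with roughly 50 bits of entropy using standard tools:

```shell
# Draw alphanumeric characters from the kernel CSPRNG; 9 characters
# over a 62-symbol alphabet gives roughly 53 bits of entropy.
tr -dc A-Za-z0-9 < /dev/urandom | head -c9 ; echo
```

To avoid leaking the password via the process table or shell history, store it in a file and capture it into a shell variable, e.g. `export WEAVE_PASSWORD=$(cat /path/to/password-file)`, before launching Weave.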
+Weave implements encryption and security using [Daniel J. Bernstein's NaCl library](http://nacl.cr.yp.to/index.html). -### Service routing +For information on how to secure your Docker network connections, +see [Securing Connections Across Untrusted Networks](/site/using-weave/security-untrusted-networks.md), +and for a more technical discussion on how Weave implements encryption, see [Using Encryption with Weave](/site/encryption/crypto-overview.md) and [How Weave Implements Encryption](/site/encryption/ephemeral-key.md). -The [service export](#service-export) and -[service import](#service-import) features can be combined to -establish connectivity between applications and services residing on -disjoint networks, even when those networks are separated by firewalls -and might have overlapping IP ranges. Each network imports its -services into weave, and in turn exports from weave services required -by its applications. There are no application containers in this -scenario (though of course there could be); weave is acting purely as -an address translation and routing facility, using the weave -application network as an intermediary. -In our example above, the netcat service on `$HOST3` is imported into -weave via `$HOST1`. We can export it on `$HOST2` by first exposing the -application network with +###Host Network Integration - host2$ weave expose - 10.2.1.3 +Weave application networks can be integrated with a host's +network, establishing connectivity between the host and +application containers anywhere. -and then adding a NAT rule which routes traffic from the `$HOST2` -network (i.e.
anything which can connect to `$HOST2`) to the service -endpoint in the weave network +See [Integrating a Host Network with Weave](/site/using-weave/host-network-integration.md). - host2$ iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 4433 \ - -j DNAT --to-destination 10.2.1.3:3322 +###Managing Services: Exporting, Importing, Binding and Routing + + * **Exporting Services** - Services running in containers on a Weave network can be made accessible to the outside world or to other networks. + * **Importing Services** - Containers can be given access to services that are only reachable from certain Weave hosts, irrespective of where the containers themselves are located. + * **Binding Services** - A container can be bound to a particular IP and port without having to change your application code, while maintaining its original endpoint. + * **Routing Services** - By combining the importing and exporting features, you can connect disjoint networks, even when separated by firewalls and where there may be overlapping IP addresses. -Now any host on the same network as `$HOST2` can access the service with +See [Managing Services in Weave: Exporting, Importing, Binding and Routing](/site/using-weave/service-management.md) for instructions on how to manage services on a Weave container network. - echo 'Hello, world.' | nc $HOST2 4433 +###Multi-Cloud Networking -Furthermore, as explained in [service-binding](#service-binding), we -can dynamically alter the service locations without having to touch -the applications that access them, e.g. we could move the example -netcat service to `$HOST4:2211` while retaining its 10.2.1.3:3322 -endpoint in the weave network. +Weave can network containers hosted in different cloud providers +or data centers.
For example, you can run an application consisting +of containers that run on [Google Compute Engine](https://cloud.google.com/compute/) +(GCE), [Amazon Elastic Compute Cloud](https://aws.amazon.com/ec2/) +(EC2) and in a local data centre all at the same time. -### Multi-cloud networking +See [Enabling Multi-Cloud Networking and Multi-hop Routing](/site/using-weave/multi-cloud-multi-hop.md). -Weave can network containers hosted in different cloud providers / -data centres. So, for example, one could run an application consisting -of containers on GCE, EC2 and in local data centres. -To enable this, the network must be configured to permit connections -to weave's control and data ports on the docker hosts. The control -port defaults to TCP 6783, and the data ports to UDP 6783/6784. You -can override these defaults by setting `WEAVE_PORT` (this is a base -value - setting `WEAVE_PORT=9000` will result in weave using TCP 9000 -for control and UDP 9000/9001 for data). Note that it is highly -recommended that all peers be given the same setting. +###Multi-Hop Routing -### Multi-hop routing +A network of containers across more than two hosts can be established +even when there is only partial connectivity between the hosts. Weave +routes traffic between containers as long as there is at least one *path* +of connected hosts between them. -A network of containers across more than two hosts can be established -even when there is only partial connectivity between the hosts. Weave -is able to route traffic between containers as long as there is at -least one *path* of connected hosts between them. +See [Enabling Multi-Cloud Networking and Multi-hop Routing](/site/using-weave/multi-cloud-multi-hop.md). -So, for example, if a docker host in a local data centre can connect -to hosts in GCE and EC2, but the latter two cannot connect to each -other, containers in the latter two can still communicate; weave will -route the traffic via the local data centre.
-### Dynamic topologies +###Dynamic Topologies -To add a host to an existing weave network, one simply launches weave -on the host, supplying the address of at least one existing -host. Weave will automatically discover the other hosts in the network -and establish connections to them if it can (in order to avoid -unnecessary multi-hop routing). +Hosts can be added to or removed from a Weave network without stopping +or reconfiguring the remaining hosts. See [Adding and Removing Hosts +Dynamically](/site/using-weave/finding-adding-hosts-dynamically.md). -In some situations all existing weave hosts may be unreachable from -the new host due to firewalls, etc. However, it is still possible to -add the new host, provided inverse connections, i.e. from existing -hosts to the new hosts, are possible. To accomplish that, one launches -weave on the new host without supplying any additional addresses, and -then on one of the existing hosts runs - host# weave connect $NEW_HOST +###Container Mobility -Other hosts in the weave network will automatically attempt to -establish connections to the new host too. Conversely, it is possible -to instruct a peer to forget a particular host that was specified -to it via `weave launch` or `weave connect`: +Containers can be moved between hosts without requiring any +reconfiguration or, in many cases, restarts of other containers. +All that is required is for the migrated container to be started +with the same IP address as it was given originally. - host# weave forget $DECOMMISSIONED_HOST +See [Managing Services in Weave: Exporting, Importing, Binding and Routing](/site/using-weave/service-management.md), in particular the Routing Services section, for more information on container mobility. -This will prevent the peer from trying to reconnect to that host once -connectivity to it is lost, and thus can be used to administratively -remove decommissioned peers from the network. - -Hosts can also be bulk-replaced.
All existing hosts will be forgotten, -and the new hosts will be added, when one runs - - host# weave connect --replace $NEW_HOST1 $NEW_HOST2 - -For complete control over the peer topology, automatic discovery can -be disabled with the `--no-discovery` option to `weave launch`. In -this mode, weave will only connect to the addresses specified at -launch time and with `weave connect`. - -The list of all hosts that a peer has been asked to connect to with -`weave launch` and `weave connect` can be obtained with - - host# weave status targets - -### Container mobility -Containers can be moved between hosts without requiring any -reconfiguration or, in many cases, restarts of other containers. All -that is required is for the migrated container to be started with the -same IP address as it was given originally. +###Fault Tolerance -### Fault tolerance +Weave peers continually exchange topology information, and +monitor and (re)establish network connections to other peers. +So if hosts or networks fail, Weave can "route around" the problem. +This includes network partitions, where containers on either side +of a partition can continue to communicate, with full connectivity +being restored when the partition heals. -Weave peers continually exchange topology information, and monitor -and (re)establish network connections to other peers. So if hosts or -networks fail, weave can "route around" the problem. This includes -network partitions; containers on either side of a partition can -continue to communicate, with full connectivity being restored when -the partition heals. +The Weave Router container is very lightweight, fast and disposable. +Should Weave ever run into difficulty, one can +simply stop it (with `weave stop`) and restart it. Application +containers do *not* have to be restarted in that event, and, +if the Weave container is restarted quickly enough, +they may not even experience a temporary connectivity failure.
-The weave container is very light-weight - just over 8MB image size -and a few 10s of MBs of runtime memory - and disposable. I.e. should -weave ever run into difficulty, one can simply stop it (with `weave -stop`) and restart it. Application containers do *not* have to be -restarted in that event, and indeed may not even experience a -temporary connectivity failure if the weave container is restarted -quickly enough. diff --git a/site/how-it-works.md b/site/how-it-works.md deleted file mode 100644 index 70b97ff157..0000000000 --- a/site/how-it-works.md +++ /dev/null @@ -1,426 +0,0 @@ ---- -title: How Weave Works -layout: default ---- - -# How Weave Works - - * [Overview](#overview) - * [Encapsulation](#encapsulation) - * [Topology](#topology) - * [Crypto](#crypto) - * [Further reading](#further-reading) - -### Overview - -A weave network consists of a number of 'peers' - weave routers -residing on different hosts. Each peer has a name, which tends to -remain the same over restarts, a human friendly nickname for use in -status and logging output and a unique identifier (UID) which is -different each time it is run. These are opaque identifiers as far as -the router is concerned, although the name defaults to a MAC address. - -Weave routers establish TCP connections to each other, over which they -perform a protocol handshake and subsequently exchange -[topology](#topology) information. These connections are encrypted if -so configured. Peers also establish UDP "connections", possibly -encrypted, which carry encapsulated network packets. These -"connections" are duplex and can traverse firewalls. - -Weave creates a network bridge on the host. Each container is -connected to that bridge via a veth pair, the container side of which -is given an IP address & netmask supplied either by the user or -Weave's IP address allocator. Also connected to the bridge is the -weave router container. 
- -A weave router captures Ethernet packets from its bridge-connected -interface in promiscuous mode, using 'pcap'. This typically excludes -traffic between local containers, and between the host and local -containers, all of which is routed straight over the bridge by the -kernel. Captured packets are forwarded over UDP to weave router peers -running on other hosts. On receipt of such a packet, a router injects -the packet on its bridge interface using 'pcap' and/or forwards the -packet to peers. - -Weave routers learn which peer host a particular MAC address resides -on. They combine this knowledge with topology information in order to -make routing decisions and thus avoid forwarding every packet to every -peer. Weave can route packets in partially connected networks with -changing topology. For example, in this network, peer 1 is connected -directly to 2 and 3, but if 1 needs to send a packet to 4 or 5 it must -first send it to peer 3: - -![Partially connected Weave Network](images/top-diag1.png "Partially connected Weave Network") - -### Encapsulation - -When the weave router forwards packets, the encapsulation looks -something like this: - - +-----------------------------------+ - | Name of sending peer | - +-----------------------------------+ - | Frame 1: Name of capturing peer | - +-----------------------------------+ - | Frame 1: Name of destination peer | - +-----------------------------------+ - | Frame 1: Captured payload length | - +-----------------------------------+ - | Frame 1: Captured payload | - +-----------------------------------+ - | Frame 2: Name of capturing peer | - +-----------------------------------+ - | Frame 2: Name of destination peer | - +-----------------------------------+ - | Frame 2: Captured payload length | - +-----------------------------------+ - | Frame 2: Captured payload | - +-----------------------------------+ - | ... 
| - +-----------------------------------+ - | Frame N: Name of capturing peer | - +-----------------------------------+ - | Frame N: Name of destination peer | - +-----------------------------------+ - | Frame N: Captured payload length | - +-----------------------------------+ - | Frame N: Captured payload | - +-----------------------------------+ - -The name of the sending peer enables the receiving peer to identify -who sent this UDP packet. This is followed by the meta data and -payload for one or more captured frames. The router performs batching: -if it captures several frames very quickly that all need forwarding to -the same peer, it fits as many of them as possible into a single -UDP packet. - -The meta data for each frame contains the names of the capturing and -destination peers. Since the name of the capturing peer name is -associated with the source MAC of the captured payload, it allows -receiving peers to build up their mappings of which client MAC -addresses are local to which peers. The destination peer name enables -the receiving peer to identify whether this frame is destined for -itself or whether it should be forwarded on to some other peer, -accommodating multi-hop routing. This works even when the receiving -intermediate peer has no knowledge of the destination MAC: only the -original capturing peer needs to determine the destination peer from -the MAC. This way weave peers never need to exchange the MAC addresses -of clients and need not take any special action for ARP traffic and -MAC discovery. - -### Topology - -The topology information captures which peers are connected to which -other peers. Weave peers communicate their knowledge of the topology -(and changes to it) to others, so that all peers learn about the -entire topology. This communication occurs over the TCP links between -peers, using a) spanning-tree based broadcast mechanism, and b) a -neighour gossip mechanism. - -Topology messages are sent by a peer... 
- -- when a connection has been added; if the remote peer appears to be - new to the network, the entire topology is sent to it, and an - incremental update, containing information on just the two peers at - the ends of the connection, is broadcast, -- when a connection has been marked as 'established', indicating that - the remote peer can receive UDP traffic from the peer; an update - containing just information about the local peer is broadcast, -- when a connection has been torn down; an update containing just - information about the local peer is broadcast, -- periodically, on a timer, the entire topology is "gossiped" to a - subset of neighbours, based on a topology-sensitive random - distribution. This is done in case some of the aforementioned - broadcasts do not reach all peers, due to rapid changes in the - topology causing broadcast routing tables to become outdated. - -The receiver of a topology update merges that update with its own -topology model, adding peers hitherto unknown to it, and updating -peers for which the update contains a more recent version than known -to it. If there were any such new/updated peers, and the topology -update was received over gossip (rather than broadcast), then an -improved update containing them is gossiped. - -If the update mentions a peer that the receiver does not know, then -the entire update is ignored. 
- -#### Message details -Every gossip message is structured as follows: - - +-----------------------------------+ - | 1-byte message type - Gossip | - +-----------------------------------+ - | 4-byte Gossip channel - Topology | - +-----------------------------------+ - | Peer Name of source | - +-----------------------------------+ - | Gossip payload (topology update) | - +-----------------------------------+ - -The topology update payload is laid out like this: - - +-----------------------------------+ - | Peer 1: Name | - +-----------------------------------+ - | Peer 1: NickName | - +-----------------------------------+ - | Peer 1: UID | - +-----------------------------------+ - | Peer 1: Version number | - +-----------------------------------+ - | Peer 1: List of connections | - +-----------------------------------+ - | ... | - +-----------------------------------+ - | Peer N: Name | - +-----------------------------------+ - | Peer N: NickName | - +-----------------------------------+ - | Peer N: UID | - +-----------------------------------+ - | Peer N: Version number | - +-----------------------------------+ - | Peer N: List of connections | - +-----------------------------------+ - -Each List of connections is encapsulated as a byte buffer, within -which the structure is: - - +-----------------------------------+ - | Connection 1: Remote Peer Name | - +-----------------------------------+ - | Connection 1: Remote IP address | - +-----------------------------------+ - | Connection 1: Outbound | - +-----------------------------------+ - | Connection 1: Established | - +-----------------------------------+ - | Connection 2: Remote Peer Name | - +-----------------------------------+ - | Connection 2: Remote IP address | - +-----------------------------------+ - | Connection 2: Outbound | - +-----------------------------------+ - | Connection 2: Established | - +-----------------------------------+ - | ... 
| - +-----------------------------------+ - | Connection N: Remote Peer Name | - +-----------------------------------+ - | Connection N: Remote IP address | - +-----------------------------------+ - | Connection N: Outbound | - +-----------------------------------+ - | Connection N: Established | - +-----------------------------------+ - -#### Removal of peers -If a peer, after receiving a topology update, sees that another peer -no longer has any connections within the network, it drops all -knowledge of that second peer. - -#### Out-of-date topology -The propagation of topology changes to all peers is not instantaneous, -so it is very possible for a node elsewhere in the network to have an -out-of-date view. - -If the destination peer for a packet is still reachable, then -out-of-date topology can result in it taking a less efficient route. - -If the out-of-date topology makes it look as if the destination peer -is not reachable, then the packet is dropped. For most protocols -(e.g. TCP), the transmission will be retried a short time later, by -which time the topology should have updated. - -### Crypto - -Weave can be configured to encrypt both the data passing over the TCP -connections and the payloads of UDP packets sent between peers. This -is accomplished using the [NaCl](http://nacl.cr.yp.to/) crypto -libraries, employing Curve25519, XSalsa20 and Poly1305 to encrypt and -authenticate messages. Weave protects against injection and replay -attacks for traffic forwarded between peers. - -NaCl was selected because of its good reputation both in terms of -selection and implementation of ciphers, but equally importantly, its -clear APIs, good documentation and high-quality -[go implementation](https://godoc.org/golang.org/x/crypto/nacl). It is -quite difficult to use NaCl incorrectly. Contrast this with libraries -such as OpenSSL where the library and its APIs are vast in size, -poorly documented, and easily used wrongly. 
-
-There are some similarities between weave's crypto and
-[TLS](https://tools.ietf.org/html/rfc4346). We do not need to cater
-for multiple cipher suites, certificate exchange and other
-requirements emanating from X509, and a number of other features. This
-simplifies the protocol and implementation considerably. On the other
-hand, we need to support UDP transports, and while there are
-extensions to TLS such as [DTLS](https://tools.ietf.org/html/rfc4347)
-which can operate over UDP, these are not widely implemented and
-deployed.
-
-#### Establishing the Ephemeral Session Key
-
-For every connection between peers, a fresh public/private key pair is
-created at both ends, using NaCl's `GenerateKey` function. The public
-key portion is sent to the other end as part of the initial handshake
-performed over TCP. Peers that were started with a password do not
-continue with connection establishment unless they receive a public
-key from the remote peer. Thus either all peers in a weave network
-must be supplied with a password, or none.
-
-When a peer has received a public key from the remote peer, it uses
-this to form the ephemeral session key for this connection. The public
-key from the remote peer is combined with the private key for the
-local peer in the usual Diffie-Hellman way, resulting in both peers
-arriving at the same shared key. To this is appended the supplied
-password, and the result is hashed through SHA256, to form the final
-ephemeral session key. Thus the supplied password is never exchanged
-directly, and is thoroughly mixed into the shared secret. Furthermore,
-the rate at which TCP connections are accepted is limited by weave to
-10Hz, which thwarts online dictionary attacks on reasonably strong
-passwords.
-
-The shared key formed by Diffie-Hellman is 256 bits long, appending
-the password to this makes it longer by an unknown amount,
-and the use of SHA256 reduces this back to 256 bits, to form the final
-ephemeral session key.
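The derivation just described can be sketched in a few lines; this is a Python stand-in for the Go implementation, where `dh_shared` stands for the 32-byte Curve25519 shared secret (dummy bytes here):

```python
import hashlib

def session_key(dh_shared: bytes, password: bytes) -> bytes:
    # Final ephemeral session key = SHA256(DH shared secret || password):
    # always 256 bits, regardless of the password length.
    return hashlib.sha256(dh_shared + password).digest()

dh_shared = bytes(32)  # placeholder for the Curve25519 shared secret
key = session_key(dh_shared, b"s3cret")
print(len(key) * 8)  # 256
```

Note that an attacker who observes only the public-key exchange cannot compute `session_key` without knowing both the shared secret and the password.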
This late combination with the password -eliminates "Man In The Middle" attacks: sniffing the public key -exchange between the two peers and faking their responses will not -grant an attacker knowledge of the password, and so an attacker would -not be able to form valid ephemeral session keys. - -The same ephemeral session key is used for both TCP and UDP traffic -between two peers. - - Generating fresh keys for every connection -provides forward secrecy at the cost of placing a demand on the Linux -CSPRNG (accessed by `GenerateKey` via `/dev/urandom`) proportional to -the number of inbound connection attempts. Weave has accept throttling -to mitigate against denial of service attacks that seek to deplete the -CSPRNG entropy pool, however even at the lower bound of ten requests -per second there may not be enough entropy gathered on a headless -system to keep pace. - -Under such conditions, the consequences will be limited to slowing -down processes reading from the blocking `/dev/random` device as the -kernel waits for enough new entropy to be harvested. It is important -to note that contrary to intuition this low entropy state does not -compromise the ongoing use of `/dev/urandom` - [expert -opinion](http://blog.cr.yp.to/20140205-entropy.html) -asserts that as long as the CSPRNG is seeded with enough entropy (e.g. -256 bits) before random number generation commences then the output is -entirely safe for use as key material. - -By way of comparison, this is exactly how OpenSSL works - it reads 256 -bits of entropy at startup, and uses that to seed an internal CSPRNG -which is used thenceforth to generate keys. 
Whilst we could have taken
-the same approach and built our own CSPRNG to work around the
-potential `/dev/random` blocking issue, we thought it was much more
-prudent to rely on the [heavily
-scrutinised](http://eprint.iacr.org/2012/251.pdf) Linux random number
-generator as [advised
-here](http://cr.yp.to/highspeed/coolnacl-20120725.pdf) (page 10,
-'Centralizing randomness'). The aforementioned notwithstanding, if
-weave's demand on `/dev/urandom` is causing you problems with blocking
-`/dev/random` reads, please get in touch with us - we'd love to hear
-about your use case.
-
-#### TCP
-
-TCP connections are used only to exchange topology information between
-peers, via a message-based protocol. Encryption of each message is
-carried out by NaCl's `secretbox.Seal` function using the ephemeral
-session key and a nonce. The nonce contains the message sequence
-number, which is incremented for every message sent, and a bit
-indicating the polarity of the connection at the sender ('1' for
-outbound). The latter is required by the
-[NaCl Security Model](http://nacl.cr.yp.to/box.html) in order to
-ensure that the two ends of the connection do not use the same nonces.
-
-Decryption of a message at the receiver is carried out by NaCl's
-`secretbox.Open` function using the ephemeral session key and a
-nonce. The receiver maintains its own message sequence number, which
-it increments for every message it decrypts successfully. The nonce
-is constructed from that sequence number and the connection
-polarity. As a result the receiver will only be able to decrypt a
-message if it has the expected sequence number. This prevents replay
-attacks.
-
-#### UDP
-
-UDP connections carry captured traffic between peers.
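The nonce construction described for TCP carries over to UDP as well. A rough sketch of building a 24-byte NaCl nonce from a sequence number and the connection polarity bit — the exact bit layout here is an assumption for illustration, not Weave's wire format:

```python
def make_nonce(seq: int, outbound: bool) -> bytes:
    # NaCl secretbox nonces are 24 bytes. Pack the sequence number into
    # the low 8 bytes and use the top bit of the first byte for the
    # connection polarity, so the two ends of a connection can never
    # produce the same nonce.
    nonce = bytearray(24)
    nonce[-8:] = seq.to_bytes(8, "big")
    if outbound:
        nonce[0] |= 0x80
    return bytes(nonce)

assert make_nonce(7, True) != make_nonce(7, False)  # polarity separates the ends
assert make_nonce(7, True) != make_nonce(8, True)   # sequence number advances
```

The essential property is only that (key, nonce) pairs are never reused; any injective packing of sequence number and polarity into 24 bytes would serve.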
For a UDP packet -sent between peers that are using crypto, the encapsulation looks as -follows: - - +-----------------------------------+ - | Name of sending peer | - +-----------------------------------+ - | Message Sequence No and flags | - +-----------------------------------+ - | NaCl SecretBox overheads | - +-----------------------------------+ -+ - | Frame 1: Name of capturing peer | | - +-----------------------------------+ | This section is encrypted - | Frame 1: Name of destination peer | | using the ephemeral session - +-----------------------------------+ | key between the weave peers - | Frame 1: Captured payload length | | sending and receiving this - +-----------------------------------+ | packet. - | Frame 1: Captured payload | | - +-----------------------------------+ | - | Frame 2: Name of capturing peer | | - +-----------------------------------+ | - | Frame 2: Name of destination peer | | - +-----------------------------------+ | - | Frame 2: Captured payload length | | - +-----------------------------------+ | - | Frame 2: Captured payload | | - +-----------------------------------+ | - | ... | | - +-----------------------------------+ | - | Frame N: Name of capturing peer | | - +-----------------------------------+ | - | Frame N: Name of destination peer | | - +-----------------------------------+ | - | Frame N: Captured payload length | | - +-----------------------------------+ | - | Frame N: Captured payload | | - +-----------------------------------+ -+ - -This is very similar to the [non-crypto encapsulation](#encapsulation). - -All of the frames on a connection are encrypted with the same -ephemeral session key, and a nonce constructed from a message sequence -number, flags and the connection polarity. This is very similar to the -TCP encryption scheme, and encryption is again done with the NaCl -`secretbox.Seal` function. 
The main difference is that the message -sequence number and flags are transmitted as part of the message, -unencrypted. - -The receiver uses the name of the sending peer to determine which -ephemeral session key and local cryptographic state to use for -decryption. Frames which are to be forwarded on to some further peer -will be re-encrypted with the relevant ephemeral session keys for the -onward connections. Thus all traffic is fully decrypted on every peer -it passes through. - -Decryption is once again carried out by NaCl's `secretbox.Open` -function using the ephemeral session key and nonce. The latter is -constructed from the message sequence number and flags that appeared -in the unencrypted portion of the received message, and the connection -polarity. - -To guard against replay attacks, the receiver maintains some state in -which it remembers the highest message sequence number seen. It could -simply reject messages with lower sequence numbers, but that could -result in excessive message loss when messages are re-ordered. The -receiver therefore additionally maintains a set of received message -sequence numbers in a window below the highest number seen, and only -rejects messages with a sequence number below that window, or -contained in the set. The window spans at least 2^20 message sequence -numbers, and hence any re-ordering between the most recent ~1 million -messages is handled without dropping messages. - -### Further reading -More details on the inner workings of weave can be found in the -[architecture documentation](https://github.com/weaveworks/weave/blob/master/docs/architecture.txt). 
diff --git a/site/images/weave-frame-encapsulation-178x300.png b/site/images/weave-frame-encapsulation-178x300.png
new file mode 100644
index 0000000000..06c7d317fa
Binary files /dev/null and b/site/images/weave-frame-encapsulation-178x300.png differ
diff --git a/site/images/weave-net-encap1-1024x459.png b/site/images/weave-net-encap1-1024x459.png
new file mode 100644
index 0000000000..9b7d97262c
Binary files /dev/null and b/site/images/weave-net-encap1-1024x459.png differ
diff --git a/site/images/weave-net-fdp1-1024x454.png b/site/images/weave-net-fdp1-1024x454.png
new file mode 100644
index 0000000000..c8d00c35fa
Binary files /dev/null and b/site/images/weave-net-fdp1-1024x454.png differ
diff --git a/site/images/weave-net-overview.png b/site/images/weave-net-overview.png
new file mode 100644
index 0000000000..fd33922cd3
Binary files /dev/null and b/site/images/weave-net-overview.png differ
diff --git a/site/installing-weave.md b/site/installing-weave.md
new file mode 100644
index 0000000000..3e4848f523
--- /dev/null
+++ b/site/installing-weave.md
@@ -0,0 +1,47 @@
+---
+title: Installing Weave Net
+layout: default
+---
+
+
+Ensure you are running Linux (kernel 3.8 or later) and have Docker
+(version 1.6.0 or later) installed.
+
+Install Weave Net by running the following:
+
+    sudo curl -L git.io/weave -o /usr/local/bin/weave
+    sudo chmod a+x /usr/local/bin/weave
+
+If you are on OSX and are using Docker Machine, you need to make sure
+that a VM is running and configured before getting Weave Net. Setting up a VM is shown in [the Docker Machine
+documentation](https://docs.docker.com/installation/mac/#from-your-shell).
+After the VM is configured with Docker Machine, Weave can be launched directly from the OSX host.
+
+Weave respects the environment variable `DOCKER_HOST`, so that you can run
+and control a Weave Network locally on a remote host.
See [Using The Weave Docker API Proxy](/site/weave-docker-api/using-proxy.md) + +With Weave downloaded onto your VMs or hosts, you are ready to launch a Weave network and deploy apps onto it. See [Deploying Applications to Weave](/site/using-weave/deploying-applications.md#launching) + +CoreOS users see [here](https://github.com/fintanr/weave-gs/blob/master/coreos-simple/user-data) for an example of installing Weave using cloud-config. + +Amazon ECS users see +[here](https://github.com/weaveworks/guides/blob/master/aws-ecs/LATESTAMIs.md) +for the latest Weave AMIs and +[here](http://weave.works/guides/service-discovery-with-weave-aws-ecs.html) to get started with Weave on ECS. + + +###Quick Start Screencast + + + + + +**See Also** + + * [Using Weave Net](/site/using-weave/intro-example.md) + * [Getting Started Guides](http://www.weave.works/guides/) + * [Features](/site/features.md) + * [Troubleshooting](/site/troubleshooting.md) + * [Building](/site/building.md) + * [Using Weave with Systemd](/site/systemd.md) + \ No newline at end of file diff --git a/site/introducing-weave.md b/site/introducing-weave.md new file mode 100644 index 0000000000..7f56af9f6e --- /dev/null +++ b/site/introducing-weave.md @@ -0,0 +1,69 @@ +--- +title: Introducing Weave Net +layout: default +--- + + +##What is Weave Net? + +Weave creates a virtual network that connects Docker containers deployed across multiple hosts and enables their automatic discovery. With Weave Net, portable microservices-based applications consisting of multiple containers can run anywhere: on one host, multiple hosts or even across cloud providers and data centers. +Applications use the network just as if the containers were all plugged into the same network switch, without having to configure port mappings, or ambassador links. + + +Services provided by application containers on the weave network can be exposed to the outside world, regardless of where they are running. 
Similarly, existing internal systems can be opened to accept connections from application containers irrespective of their location.
+
+
+##Why Weave?
+
+###Hassle Free Configuration
+
+Weave simplifies setting up a container network. Because containers on a Weave network use standard port numbers (for example, MySQL’s default is port 3306), managing microservices is straightforward. Every container can find the IP of any other container using a simple DNS query on the container's name, and it can also communicate directly without NAT, without using port mappings or complicated ambassador linking. And best of all, deploying a Weave container network requires zero changes to your application’s code.
+
+![Weave Net Encapsulation](/images/weave-net-overview.png)
+
+###Service Discovery
+
+Weave implements service discovery by providing a fast "micro DNS" server at each node. You simply name containers and everything 'just works', including load balancing across multiple containers with the same name.
+
+###No External Cluster Store Required
+
+All other Docker networking plugins, including Docker's own "Overlay" driver, require that you set up Docker with a cluster store – a central database like Consul or Zookeeper – before you can even use them. Besides being difficult to set up, maintain and manage, every Docker host must also be in constant contact with the cluster store: if you lose the connection, even temporarily, then you cannot start or stop any containers.
+
+Weave Net is bundled with a Docker Network plugin that doesn't require an external cluster store. You can get started right away and you can start and stop containers even when there are network connectivity problems.
+For information about the Weave Docker Plugin, see “Using the Weave Network Docker Plugin”.
+
+###Operates in Partially Connected Networks
+
+Weave can forward traffic between nodes, and it works even if the mesh network is only partially connected.
This means that you can have a mix of legacy systems and containerized apps and still use Weave Net to keep everything in communication.
+
+###Weave Net is Fast
+
+Weave Net automatically chooses the fastest path between two hosts, offering near native throughput and latency, all without your intervention.
+
+See [How Fast Datapath Works](/site/fastdp/using-fastdp.md) for more information.
+
+###Network Operations Friendly
+
+Weave uses industry-standard VXLAN encapsulation between hosts. This means you can continue using your favorite packet analyzing tools, such as ‘Wireshark’, to inspect and troubleshoot protocols.
+
+###Weave Net is Secure With Built-in Encryption
+
+Weave Net traverses firewalls without requiring a TCP add-on. You can encrypt your traffic, which allows you to connect to apps on hosts even across an untrusted network.
+
+###Multicast Support
+
+Multicast addressing and routing are fully supported in Weave Net. Data can be sent to one multicast address and it will be automatically broadcast to all of its recipients.
+
+###NAT Traversal
+
+With Weave Net, deploy your peer-to-peer file sharing applications and voice over IP and take advantage of built-in NAT traversal. With Weave, your app is portable and containerized, and its standardized approach to networking gives you one less thing to worry about.
+
+###Run with Anything: Kubernetes, Mesos, Amazon ECS
+
+Weave Net is a good choice if you want one tool for everything. For example, in addition to Docker plugins, you can also use Weave as a Kubernetes plugin. You can also use Weave with Amazon ECS or with Mesos and Marathon.
+Refer to our Getting Started and Integration Guides for more information.
+
+For a complete list and description of Weave Net’s current feature set, see [Weave Net Features](/site/features/features.md)
+
+
+
diff --git a/site/ip-addresses.md b/site/ip-addresses.md
deleted file mode 100644
index 791f19f39a..0000000000
--- a/site/ip-addresses.md
+++ /dev/null
@@ -1,103 +0,0 @@
----
-title: IP Addresses, routes and networks
-layout: default
----
-
-# IP Addresses, routes and networks
-
-Weave Net lets you run containers on a private network, and so the IP
-addresses those containers use are insulated from the rest of the
-Internet, and you don't have to worry about them clashing. Except, if
-they actually do clash with some addresses that you'd like the
-containers to talk to.
-
-### Some definitions
-
-- _IP_ is the Internet Protocol, the fundamental basis of network
-  communication between billions of connected devices.
-- The _IP address_ is (for most purposes) the four numbers separated
-  by dots, like `192.168.48.12`. Each number is one byte in size, so can
-  be between 0 and 255.
-- Each IP address lives on a _Network_, which is some set of those
-  addresses that all know how talk to each other. The network address
-  is some prefix of the IP address, like `192.168.48`. To show
-  which part of the address is the network, we append a slash
-  and then the number of bits in the network prefix, like
-  `/24`.
-- A _route_ is an instruction for how to deal with traffic destined
-  for somewhere else - it specifies a Network, and a way to talk to
-  that network. Every device using IP has a table of routes, so for
-  any destination address it looks up that table, finds the right
-  route, and sends it in the direction indicated.
-
-### Examples
-
-In the IP address `10.4.2.6/8`, the network prefix is the first 8 bits
-- `10`. Written out in full, that network is `10.0.0.0/8`.
-
-The most common prefix lengths are 8, 16 and 24, but there is nothing
-stopping you using a /9 network or a /26. E.g. `6.250.3.1/9` is on the
-`6.128.0.0/9` network.
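For a quick way to decode such addresses programmatically, Python's standard `ipaddress` module gives the same answers as the examples above:

```python
import ipaddress

# An address-with-prefix decodes to its enclosing network.
print(ipaddress.ip_interface("10.4.2.6/8").network)   # 10.0.0.0/8
print(ipaddress.ip_interface("6.250.3.1/9").network)  # 6.128.0.0/9
```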
- -Several websites offer calculators to decode this kind of address; for -example [IP Address Guide](http://www.ipaddressguide.com/cidr). - -Here is an example route table for a container attached to the Weave -network: - -```` -# ip route show -default via 172.17.42.1 dev eth0 -10.2.2.0/24 dev ethwe proto kernel scope link src 10.2.2.1 -172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.170 -```` - -It has two interfaces: one that Docker gave it called `eth0`, and one -that weave gave it called `ethwe`. They are on networks -`172.17.0.0/16` and `10.2.2.0/24` respectively, and if you want to -talk to any other address on those networks then the routing table -tells it to send directly down that interface. If you want to talk to -anything else not matching those rules, the default rule says to send -it to `172.17.42.1` down the eth0 interface. - -So, suppose this container wants to talk to another container at -address `10.2.2.9`; it will send down the ethwe interface and weave -Net will take care of routing the traffic. To talk an external server -at address `74.125.133.128`, it looks in its routing table, doesn't -find a match, so uses the default rule. - -### Configuring Weave - -The default configurations for both weave Net and Docker use [Private -Networks](https://en.wikipedia.org/wiki/Private_network), whose -addresses are never found on the public internet, so that reduces the -chances of overlap. But it could be that you or your hosting provider -are using some of these private addresses in the same range, which would -cause a clash. - -Here's an example: on `weave launch`, the following error message -can appear: - -```` -Network 10.32.0.0/12 overlaps with existing route 10.0.0.0/8 on host. -ERROR: Default --ipalloc-range 10.32.0.0/12 overlaps with existing route on host. -You must pick another range and set it on all hosts. 
-````
-
-As the message says, the default that weave Net would like to use is
-`10.32.0.0/12` - a 12-bit prefix so all addresses starting with the bit
-pattern 000010100010, or in decimal everything from 10.32.0.0 through
-10.47.255.255. But the user's host already has a route for `10.0.0.0/8`,
-which overlaps, because the first 8 bits are the same. If we went
-ahead and used the default network, then for an address like
-`10.32.5.6` the kernel would never be sure whether this meant the
-weave Net network of `10.32.0.0/12` or the hosting network of
-`10.0.0.0/8`.
-
-If you're sure the addresses you want are not really in use, then
-explicitly setting the range with `--ipalloc-range` in the
-command-line arguments to `weave launch` on all hosts will force Weave
-Net to use that range, even though it overlaps. Otherwise, you can
-pick a different range, preferrably another subset of the [Private
-Networks](https://en.wikipedia.org/wiki/Private_network). For example
-172.30.0.0/16.
diff --git a/site/ip-addresses/configuring-weave.md b/site/ip-addresses/configuring-weave.md
new file mode 100644
index 0000000000..8f3e2548d0
--- /dev/null
+++ b/site/ip-addresses/configuring-weave.md
@@ -0,0 +1,42 @@
+---
+title: Configuring Weave to Explicitly Use an IP Range
+layout: default
+---
+
+The default configurations for both Weave Net and Docker use [Private
+Networks](https://en.wikipedia.org/wiki/Private_network), whose
+addresses are never found on the public internet, which reduces the
+chance of IP overlap. However, it could be that you or your hosting provider
+are using some of these private addresses in the same range, which will
+cause a clash.
+
+After `weave launch`, the following error message may appear:
+
+    Network 10.32.0.0/12 overlaps with existing route 10.0.0.0/8 on host.
+    ERROR: Default --ipalloc-range 10.32.0.0/12 overlaps with existing route on host.
+    You must pick another range and set it on all hosts.
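The overlap reported in this message can be reproduced with Python's standard `ipaddress` module:

```python
import ipaddress

default_range = ipaddress.ip_network("10.32.0.0/12")  # Weave's default --ipalloc-range
host_route    = ipaddress.ip_network("10.0.0.0/8")    # pre-existing host route

print(default_range.overlaps(host_route))     # True
print(default_range[0], default_range[-1])    # 10.32.0.0 10.47.255.255
```

The same check is a quick way to validate any replacement range you pick against the host's existing routes before setting `--ipalloc-range`.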
+
+As the message indicates, the default range that Weave Net would like to use is
+`10.32.0.0/12` - a 12-bit prefix, where all addresses start with the bit
+pattern 000010100010, or in decimal everything from 10.32.0.0 through
+10.47.255.255.
+
+However, your host is using a route for `10.0.0.0/8`,
+which overlaps, since the first 8 bits are the same. In this case, if you used the default network
+for an address like `10.32.5.6`, the kernel would never be sure if this meant the
+Weave Net network of `10.32.0.0/12` or the hosting network of
+`10.0.0.0/8`.
+
+If you are sure the addresses you want are not in use, then
+explicitly setting the range with `--ipalloc-range` in the
+command-line arguments to `weave launch` on all hosts forces Weave
+Net to use that range, even though it overlaps. Otherwise, you can
+pick a different range, preferably another subset of the [Private
+Networks](https://en.wikipedia.org/wiki/Private_network), for example
+172.30.0.0/16.
+
+
+**See Also**
+
+ * [IP Addresses, Routes and Networks](/site/ip-addresses/ip-addresses.md)
\ No newline at end of file
diff --git a/site/ip-addresses/ip-addresses.md b/site/ip-addresses/ip-addresses.md
new file mode 100644
index 0000000000..8b9a9c0751
--- /dev/null
+++ b/site/ip-addresses/ip-addresses.md
@@ -0,0 +1,67 @@
+---
+title: IP Addresses, Routes and Networks
+layout: default
+---
+
+
+Weave Net runs containers on a private network, which means that IP addresses are isolated from the rest of the
+Internet, and that you don't have to worry about addresses clashing.
+
+You can of course also manually change the IP of any given container or subnet on a Weave network. See [How to Manually Specify IP Addresses and Subnets](/site/using-weave/manual-ip-address.md)
+
+### Some Definitions
+
+- _IP_ is the Internet Protocol, the fundamental basis of network
+  communication between billions of connected devices.
+
+- The _IP address_ is (for most purposes) the four numbers separated
+  by dots, like `192.168.48.12`. Each number is one byte in size, so can
+  be between 0 and 255.
+- Each IP address lives on a _Network_, which is some set of those
+  addresses that all know how to talk to each other. The network address
+  is some prefix of the IP address, like `192.168.48`. To show
+  which part of the address is the network, we append a slash
+  and then the number of bits in the network prefix, like
+  `/24`.
+- A _route_ is an instruction for how to deal with traffic destined
+  for somewhere else - it specifies a Network, and a way to talk to
+  that network. Every device using IP has a table of routes, so for
+  any destination address it looks up that table, finds the right
+  route, and sends it in the direction indicated.
+
+### IP Address Notation in Weave
+
+In the IP address `10.4.2.6/8`, the network prefix is the first 8 bits
+- `10`. Written out in full, that network is `10.0.0.0/8`.
+
+The most common prefix lengths are 8, 16 and 24, but there is nothing
+stopping you from using a /9 network or a /26. For example, `6.250.3.1/9` is on the
+`6.128.0.0/9` network.
+
+Several websites offer calculators to decode this kind of address; see [IP Address Guide](http://www.ipaddressguide.com/cidr).
+
+The following is an example route table for a container that is attached to a Weave
+network:
+
+    # ip route show
+    default via 172.17.42.1 dev eth0
+    10.2.2.0/24 dev ethwe proto kernel scope link src 10.2.2.1
+    172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.170
+
+
+It has two interfaces: one that Docker gave it called `eth0`, and one
+that weave gave it called `ethwe`. They are on networks
+`172.17.0.0/16` and `10.2.2.0/24` respectively, and if you want to
+talk to any other address on those networks then the routing table
+tells it to send directly down that interface.
If you want to talk to
+anything else not matching those rules, the default rule says to send
+it to `172.17.42.1` down the eth0 interface.
+
+So, suppose this container wants to talk to another container at
+address `10.2.2.9`; it will send down the ethwe interface and Weave
+Net will take care of routing the traffic. To talk to an external server
+at address `74.125.133.128`, it looks in its routing table, doesn't
+find a match, so uses the default rule.
+
+**See Also**
+
+ * [Configuring Weave to Explicitly Use an IP Range](/site/ip-addresses/configuring-weave.md)
diff --git a/site/ipam.md b/site/ipam.md
deleted file mode 100644
index 1e9c0d6214..0000000000
--- a/site/ipam.md
+++ /dev/null
@@ -1,241 +0,0 @@
----
-title: Automatic IP Address Management
-layout: default
----
-
-# Automatic IP Address Management
-
-Weave automatically assigns containers an IP address that is unique
-across the network, and releases that address when a container
-exits. This happens for all invocations of the `run`, `start`,
-`attach`, `detach`, `expose`, and `hide` commands, unless
-the user explicitly specified an address. Weave can also assign
-addresses in multiple subnets.
-
- * [Initialisation](#initialisation)
- * [Choosing an allocation range](#range)
- * [Automatic allocation across multiple subnets](#subnets)
- * [Mixing automatic and manual allocation](#manual)
- * [Stopping and removing peers](#stop)
- * [Troubleshooting](#troubleshooting)
-
-## Initialisation
-
-Just once, when the first automatic IP address allocation is requested
-in the whole network, weave needs a majority of peers to be present in
-order to avoid formation of isolated groups, which could lead to
-inconsistency, i.e. the same IP address being allocated to two
-different containers. Therefore, you must either supply the list of
-all peers in the network to `weave launch` or add the
-`--init-peer-count` flag to specify how many peers there will be.
- -To illustrate, suppose you have three hosts, accessible to each other -as `$HOST1`, `$HOST2` and `$HOST3`. You can start weave on those three -hosts with these three commands: - - host1$ weave launch $HOST2 $HOST3 - - host2$ weave launch $HOST1 $HOST3 - - host3$ weave launch $HOST1 $HOST2 - -Or, if it is not convenient to name all the other hosts at launch -time, you can give the number of peers like this: - - host1$ weave launch --init-peer-count 3 - - host2$ weave launch --init-peer-count 3 $HOST3 - - host3$ weave launch --init-peer-count 3 $HOST2 - -The consensus mechanism used to determine a majority transitions -through three states: 'deferred', 'waiting' and 'achieved': - -* 'deferred' - no allocation requests or claims have been made yet; - consensus is deferred until then -* 'waiting' - an attempt to achieve consensus is ongoing, triggered by - an allocation or claim request; allocations will block. This state - persists until a quorum of peers are able to communicate amongst - themselves successfully -* 'achieved' - consensus achieved; allocations proceed normally - -### More on `--init-peer-count` - -TL;DR: it isn't a problem to over-estimate by a bit, but if you supply -a number that is too small then multiple independent groups may form. - -Weave uses the estimate of the number of peers at initialization to -compute a majority or quorum number - specifically floor(n/2) + 1. So, -if the actual number of peers is less than half the number stated then -they will keep waiting for someone else to join to reach a quorum. On -the other hand, if the actual number is more than twice the quorum -number then you could have two sets of peers each reach a quorum and -initialize independent data structures. You'd have to be quite unlucky -for that to happen in practice, as they have to go through the whole -agreement process without learning about each other, but it's -definitely possible. 
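The quorum computation described above, floor(n/2) + 1, and the consequences of mis-estimating it, can be sketched as:

```python
def quorum(init_peer_count: int) -> int:
    # Weave computes the quorum as floor(n/2) + 1.
    return init_peer_count // 2 + 1

print(quorum(3))  # 2
print(quorum(5))  # 3

# Over-estimating is relatively safe: with --init-peer-count 5 but only
# 3 real peers, all 3 must be in contact before allocation proceeds.
# Under-estimating is dangerous: with --init-peer-count 3 but 6 real
# peers, two disjoint groups of quorum(3) == 2 peers each could reach
# "consensus" independently.
assert 2 * quorum(3) <= 6
```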
- -The quorum number is only used once at start-up (specifically, the -first time someone tries to allocate or claim an IP address), so once -a set of peers is initialized you can add more and they will join on -to the data structure used by the existing set. The one thing you -have to watch is if the early ones get restarted, you must restart -them with the current number of peers - if they use the smaller number -that was correct when they first started then they could form an -independent set again. - -To illustrate this last point, the following sequence of operations -would be safe wrt weave's startup quorum: - - host1$ weave launch - ...time passes... - host2$ weave launch $HOST1 - ...time passes... - host3$ weave launch $HOST1 $HOST2 - ...time passes... - ...host1 is rebooted... - host1$ weave launch $HOST2 $HOST3 - -## Choosing an allocation range - -By default, weave will allocate IP addresses in the 10.32.0.0/12 -range. This can be overridden with the `--ipalloc-range` option, e.g. - - host1$ weave launch --ipalloc-range 10.2.0.0/16 - -and must be the same on every host. - -The range parameter is written in -[CIDR notation](http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) - -in this example "/16" means the first 16 bits of the address form the -network address and the allocator is to allocate container addresses -that all start 10.2. We have [a page with more information on IP -addresses and routes](ip-addresses.html). - -Weave shares the IP address range across all peers, dynamically -according to their needs. If a group of peers becomes isolated from -the rest (a partition), they can continue to work with the address -ranges they had before isolation, and can subsequently be re-connected -to the rest of the network without any conflicts arising. - -## Automatic allocation across multiple subnets - -IP subnets are used to define or restrict routing. 
By default, weave -puts all containers in a subnet that spans the entire allocation -range, so every weave-attached container can talk to every other -weave-attached container. - -If you want some [isolation](features.html#application-isolation), you -can choose to run containers on different subnets. To request the -allocation of an address from a particular subnet, set the -`WEAVE_CIDR` environment variable to `net:` when creating the -container, e.g. - - host1$ docker run -e WEAVE_CIDR=net:10.2.7.0/24 -ti ubuntu - -You can ask for multiple addresses in different subnets and add in -manually-assigned addresses (outside the automatic allocation range), -for instance: - - host1$ docker run -e WEAVE_CIDR="net:10.2.7.0/24 net:10.2.8.0/24 ip:10.3.9.1/24" -ti ubuntu - -(Note the ".0" and ".-1" addresses in a subnet are not used, as required by -[RFC 1122](https://tools.ietf.org/html/rfc1122#page-29)). - -When working with multiple subnets in this way, it is usually -desirable to constrain the default subnet - i.e. the one chosen by the -allocator when no subnet is supplied - so that it does not overlap -with others. One can specify that with `--ipalloc-default-subnet`: - - host1$ weave launch --ipalloc-range 10.2.0.0/16 --ipalloc-default-subnet 10.2.3.0/24 - -`--ipalloc-range` should cover the entire range that you will ever use -for allocation, and `--ipalloc-default-subnet` is the subnet that will -be used when you don't explicitly specify one. - -When specifying addresses, the default subnet can be denoted -symbolically with `net:default`. - -## Mixing automatic and manual allocation - -You can start containers with a mixture of automatically-allocated -addresses and manually-chosen addresses in the same range, but you may -find that the automatic allocator has already reserved a specific -address that you wanted. 
- -To reserve a range for manual allocation in the same subnet as the -automatic allocator, you can specify an -`--ipalloc-range` that is smaller than `--ip-default-subnet`, For -example, if you launch weave with: - - host1$ weave launch --ipalloc-range 10.9.0.0/17 --ipalloc-default-subnet 10.9.0.0/16 - -then you can run all containers in the 10.9.0.0/16 subnet, with -automatic allocation using the lower half, leaving the upper half free -for manual allocation. - -## Stopping and removing peers - -You may wish to `weave stop` and re-launch to change a peer's -configuration or upgrade it to a new version. Provided the underlying -protocol hasn't changed the peer will pick up where it left off and -learn from other peers in the network which address ranges it was -previously using. - -The same normally happens when a peer is restarted any other way, -e.g. as result of a reboot, provided the -[system-uuid](http://linux.die.net/man/8/dmidecode) didn't change, -from which weave derives the peer name. In circumstances where the -system-uuid is not stable, the weave peer name can be specified on -`weave launch` with the `--name` option in form of MAC address, e.g. - - host1$ weave launch --name d2:e1:4d:cc:92:15 - -_NOTE: Do not do this unless absolutely necessary._ - -If you want to remove a peer from the network, run `weave reset`. This -removes the ranges allocated to the peer, thus allowing other peers to -allocate IPs in them. - -For a failed peer, the `weave rmpeer` command can be run on any other -peer to achieve the same result. The command should be used with -extreme caution - if the rm'd peer had transferred some range of IP -addresses to another peer but this is not known to the whole network, -or if it actually had not failed and later rejoins the network, the -same IP address may be allocated twice. 
- -Assuming we had started the three peers in the example earlier, and -host3 has caught fire, we can go to one of the other hosts and run: - - host1$ weave rmpeer host3 - -Weave will take all the IP address ranges owned by host3 and transfer -them to be owned by host1. The name "host3" is resolved via the -'nickname' feature of weave, which defaults to the local host -name. Alternatively, one can supply a peer name as shown in `weave -status`. - -## Troubleshooting - -The command - - weave status - -reports on the current status of the weave router and IP allocator: - -```` -... - - Service: ipam - Consensus: waiting(quorum: 2, known: 0) - Range: 10.32.0.0-10.47.255.255 - DefaultSubnet: 10.32.0.0/12 - -... -```` - -The first section covers the router; see the [troubleshooting -guide](troubleshooting.html#weave-status) for full details. - -The 'Service: ipam' section displays the consensus state as well as -the total allocation range and default subnet. diff --git a/site/ipam/allocation-multi-ipam.md b/site/ipam/allocation-multi-ipam.md new file mode 100644 index 0000000000..fe6b9daf65 --- /dev/null +++ b/site/ipam/allocation-multi-ipam.md @@ -0,0 +1,67 @@ +--- +title: Automatic Allocation Across Multiple Subnets +layout: default +--- + + +IP subnets are used to define or restrict routing. By default, Weave +puts all containers into a subnet that spans the entire allocation +range, so that every Weave-attached container can communicate with every other +Weave-attached container. + +If you want some [isolation](/site/using-weave/isolating-applications.md), you +can choose to run containers on different subnets. 
To request the +allocation of an address from a particular subnet, set the +`WEAVE_CIDR` environment variable to `net:<subnet>` when creating the +container, for example: + + host1$ docker run -e WEAVE_CIDR=net:10.2.7.0/24 -ti ubuntu + +You can ask for multiple addresses in different subnets and add in +manually-assigned addresses (outside the automatic allocation range), +for instance: + + host1$ docker run -e WEAVE_CIDR="net:10.2.7.0/24 net:10.2.8.0/24 ip:10.3.9.1/24" -ti ubuntu + +>>**Note:** The ".0" and ".-1" addresses in a subnet are not used, as required by +[RFC 1122](https://tools.ietf.org/html/rfc1122#page-29). + +When working with multiple subnets in this way, it is usually +desirable to constrain the default subnet - that is, the one chosen by the +allocator when no subnet is supplied - so that it does not overlap +with others. You can specify this by using `--ipalloc-default-subnet`: + + host1$ weave launch --ipalloc-range 10.2.0.0/16 --ipalloc-default-subnet 10.2.3.0/24 + +`--ipalloc-range` should cover the entire range that you will ever use +for allocation, and `--ipalloc-default-subnet` is the subnet that will +be used when you don't explicitly specify one. + +When specifying addresses, the default subnet can be denoted +symbolically using `net:default`. + + +### Mixing automatic and manual allocation + +Containers can be started using a mixture of automatically-allocated +addresses and manually-chosen addresses in the same range. However, you may +find that the automatic allocator has already reserved a specific +address that you wanted.
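The rule of thumb above can be checked mechanically: `net:` subnets must lie within the automatic allocation range, while `ip:` addresses are expected to sit outside it. A minimal sketch using Python's standard `ipaddress` module, with the values taken from the `docker run` examples above (illustrative only, not part of Weave itself):

```python
import ipaddress

alloc_range = ipaddress.ip_network("10.2.0.0/16")   # --ipalloc-range
subnets = [ipaddress.ip_network("10.2.7.0/24"),     # net: requests
           ipaddress.ip_network("10.2.8.0/24")]
manual_ip = ipaddress.ip_address("10.3.9.1")        # ip: request

# net: subnets fall inside the automatic allocation range...
assert all(s.subnet_of(alloc_range) for s in subnets)
# ...while the manually-assigned address lies outside it,
# so it cannot collide with automatic allocation
assert manual_ip not in alloc_range
```

This is only a sanity check one might run before launching containers; Weave performs its own validation.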
+ +To reserve a range for manual allocation in the same subnet as the +automatic allocator, you can specify an +`--ipalloc-range` that is smaller than `--ipalloc-default-subnet`. For +example, if you launch Weave with: + + host1$ weave launch --ipalloc-range 10.9.0.0/17 --ipalloc-default-subnet 10.9.0.0/16 + +then you can run all containers in the 10.9.0.0/16 subnet, with +automatic allocation using the lower half, leaving the upper half free +for manual allocation. + + +**See Also** + + * [Address Allocation with IP Address Management (IPAM)](/site/ipam/overview-init-ipam.md) + * [Isolating Applications on a Weave Network](/site/using-weave/isolating-applications.md) + * [Starting, Stopping and Removing Peers](/site/ipam/stop-remove-peers-ipam.md) \ No newline at end of file diff --git a/site/ipam/overview-init-ipam.md new file mode 100644 index 0000000000..727216c9e8 --- /dev/null +++ b/site/ipam/overview-init-ipam.md @@ -0,0 +1,130 @@ +--- +title: Address Allocation with IP Address Management (IPAM) +layout: default +--- + + +Weave automatically assigns containers a unique IP address +across the network, and also releases that address when the container +exits. Unless you explicitly specify an address, this occurs for all +invocations of the `run`, `start`, +`attach`, `detach`, `expose`, and `hide` commands. Weave can also assign +addresses in multiple subnets.
+ +The following automatic IP address management topics are discussed: + + * [Initializing Peers on a Weave Network](#initialization) + * [`--init-peer-count` and How Quorum is Achieved](#quorum) + * [Choosing an Allocation Range](#range) + + + +### Initializing Peers on a Weave Network + +Just once, when the first automatic IP address allocation is requested +in the whole network, Weave needs a majority of peers to be present in +order to avoid formation of isolated groups, which could lead to +inconsistency, for example, the same IP address being allocated to two +different containers. + +Therefore, you must either supply the list of all peers in the network at `weave launch` or add the +`--init-peer-count` flag to specify how many peers there will be. + +To illustrate, suppose you have three hosts, accessible to each other +as `$HOST1`, `$HOST2` and `$HOST3`. You can start Weave on those three +hosts using these three commands: + + host1$ weave launch $HOST2 $HOST3 + + host2$ weave launch $HOST1 $HOST3 + + host3$ weave launch $HOST1 $HOST2 + +Or, if it is not convenient to name all the other hosts at launch +time, you can pass the number of peers like this: + + host1$ weave launch --init-peer-count 3 + + host2$ weave launch --init-peer-count 3 $HOST3 + + host3$ weave launch --init-peer-count 3 $HOST2 + +The consensus mechanism used to determine a majority transitions +through three states: 'deferred', 'waiting' and 'achieved': + +* 'deferred' - no allocation requests or claims have been made yet; + consensus is deferred until then +* 'waiting' - an attempt to achieve consensus is ongoing, triggered by + an allocation or claim request; allocations will block.
This state + persists until a quorum of peers are able to communicate amongst + themselves successfully +* 'achieved' - consensus achieved; allocations proceed normally + +#### `--init-peer-count` and How Quorum is Achieved + +Normally it isn't a problem to over-estimate `--init-peer-count`, but if you supply +a number that is too small, then multiple independent groups may form. + +Weave uses the estimate of the number of peers at initialization to +compute a majority or quorum number - specifically floor(n/2) + 1. + +If the actual number of peers is less than half the number stated, then +they keep waiting for someone else to join in order to reach a quorum. + +But if the actual number is more than twice the quorum +number, then you may end up with two sets of peers, each reaching a quorum and +initializing independent data structures. You'd have to be quite unlucky +for this to happen in practice, as they would have to go through the whole +agreement process without learning about each other, but it's +definitely possible. + +The quorum number is only used once at start-up (specifically, the +first time someone tries to allocate or claim an IP address). Once +a set of peers is initialized, you can add more and they will join on +to the data structure used by the existing set. + +One thing to watch: if the earlier peers are restarted, you must restart +them using the current number of peers. If they use the smaller number +that was correct when they first started, then they could form an +independent set again. + +To illustrate this last point, the following sequence of operations +is safe with respect to Weave's startup quorum: + + host1$ weave launch + ...time passes... + host2$ weave launch $HOST1 + ...time passes... + host3$ weave launch $HOST1 $HOST2 + ...time passes... + ...host1 is rebooted... + host1$ weave launch $HOST2 $HOST3 + +### Choosing an Allocation Range + +By default, Weave allocates IP addresses in the 10.32.0.0/12 +range.
This can be overridden with the `--ipalloc-range` option, e.g. + + host1$ weave launch --ipalloc-range 10.2.0.0/16 + +and must be the same on every host. + +The range parameter is written in +[CIDR notation](http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) - +in this example "/16" means the first 16 bits of the address form the +network address and the allocator is to allocate container addresses +that all start with 10.2. See [IP +addresses and routes](/site/using-weave/service-management.md#routing) for more information. + +Weave shares the IP address range across all peers, dynamically +according to their needs. If a group of peers becomes isolated from +the rest (a partition), they can continue to work with the address +ranges they had before isolation, and can subsequently be re-connected +to the rest of the network without any conflicts arising. + +**See Also** + + * [Automatic Allocation Across Multiple Subnets](/site/ipam/allocation-multi-ipam.md) + * [Plugin Command-line Arguments](/site/plugin/plug-in-command-line.md) + + \ No newline at end of file diff --git a/site/ipam/stop-remove-peers-ipam.md new file mode 100644 index 0000000000..0272927eaa --- /dev/null +++ b/site/ipam/stop-remove-peers-ipam.md @@ -0,0 +1,38 @@ +--- +title: Starting, Stopping and Removing Peers +layout: default +--- + + +You may wish to `weave stop` and re-launch to change a peer's +configuration or to upgrade to a new version. Provided the underlying +protocol hasn't changed, the peer picks up where it left off and learns from peers in the +network which address ranges it was previously using. If, however, you +run `weave reset`, the peer is removed from the network, so +if Weave is run again on that node it starts from scratch. + +For failed peers, the `weave rmpeer` command can be used to +permanently remove the ranges allocated to that peer.
This will allow +other peers to allocate IPs in the ranges previously owned by the rm'd +peer, and as such should be used with extreme caution - if the rm'd +peer had transferred some range of IP addresses to another peer but +this is not known to the whole network, or if it later rejoins +the Weave network, the same IP address may be allocated twice. + +Assuming we had started the three peers in the example earlier, and +host3 has caught fire, we can go to one of the other hosts and run: + + host1$ weave rmpeer host3 + +Weave will take all the IP address ranges owned by host3 and transfer +them to be owned by host1. The name "host3" is resolved via the +'nickname' feature of weave, which defaults to the local host +name. Alternatively, one can supply a peer name as shown in `weave +status`. + +**See Also** + + * [Address Allocation with IP Address Management (IPAM)](/site/ipam/overview-init-ipam.md) + * [Automatic Allocation Across Multiple Subnets](/site/ipam/allocation-multi-ipam.md) + * [Isolating Applications on a Weave Network](/site/using-weave/isolating-applications.md) + \ No newline at end of file diff --git a/site/ipam/troubleshooting-ipam.md new file mode 100644 index 0000000000..2f8c80fdd0 --- /dev/null +++ b/site/ipam/troubleshooting-ipam.md @@ -0,0 +1,28 @@ +--- +title: Troubleshooting the IP Allocator +layout: default +--- + + +The command + + weave status + +reports on the current status of the weave router and IP allocator: + +```` +... + + Service: ipam + Consensus: waiting(quorum: 2, known: 0) + Range: 10.32.0.0-10.47.255.255 + DefaultSubnet: 10.32.0.0/12 + +... +```` + +The first section covers the router; see the [troubleshooting +guide](/site/troubleshooting.md#weave-status) for full details. + +The 'Service: ipam' section displays the consensus state as well as +the total allocation range and default subnet.
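The `quorum: 2` in the sample `weave status` output above follows directly from the floor(n/2) + 1 rule described earlier; a minimal sketch in plain Python (illustrative only, not Weave's actual implementation):

```python
def quorum(n):
    """Quorum for an estimated peer count n: floor(n/2) + 1."""
    return n // 2 + 1

# With three peers (e.g. --init-peer-count 3), two must agree,
# which matches the "quorum: 2" shown in the status output above.
assert quorum(3) == 2

# Over-estimating slightly is safe: with 6 actual peers and quorum 4,
# two disjoint groups cannot both reach quorum (4 + 4 > 6).
assert quorum(6) == 4
```

The danger the documentation warns about arises when the actual peer count exceeds twice the quorum number, since two disjoint groups could then each reach quorum independently.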
\ No newline at end of file diff --git a/site/plugin.md b/site/plugin.md deleted file mode 100644 index 72a376c21c..0000000000 --- a/site/plugin.md +++ /dev/null @@ -1,128 +0,0 @@ ---- -title: Using Weave Net via Docker Networking -layout: default ---- - -# Weave Plugin - -Docker versions 1.9 and later have a plugin mechanism for adding -different network providers. Weave installs itself as a network plugin -when you start it with `weave launch`. To create a network that spans -multiple docker hosts, the weave peers must be connected in the usual -way, i.e. by specifying other hosts in `weave launch` or -[`weave connect`](features.html#dynamic-topologies). - -Subsequently you can start containers with, e.g. - - $ docker run --net=weave -ti ubuntu - -on any of the hosts, and they can all communicate with each other. - -> WARNING: It is inadvisable to attach containers to the weave network -> using both the Weave Docker Networking Plugin and -> [Weave Docker API Proxy](proxy.html) simultaneously. Such containers -> will end up with two weave network interfaces and two IP addresses, -> which is rarely desirable. To ensure the proxy is not being used, -> *do not run `eval $(weave env)`, or `docker $(weave config) ...`*. - -In order to use Weave's [Service Discovery](weavedns.html), you -need to pass the additional arguments `--dns` and `-dns-search`, for -which we provide a helper in the weave script: - - $ docker run --net=weave -h foo.weave.local $(weave dns-args) -tdi ubuntu - -Here is a complete example of using the plugin for connectivity -between two containers running on different hosts: - - host1$ weave launch - host1$ docker run --net=weave -h foo.weave.local $(weave dns-args) -tdi ubuntu - - host2$ weave launch $HOST1 - host2$ docker run --net=weave $(weave dns-args) -ti ubuntu - root@cb73d1a8aece:/# ping -c1 -q foo - PING foo (10.32.0.1) 56(84) bytes of data. 
- - --- foo ping statistics --- - 1 packets transmitted, 1 received, 0% packet loss, time 0ms - rtt min/avg/max/mdev = 1.341/1.341/1.341/0.000 ms - -## Under the hood - -The Weave plugin actually provides *two* network drivers to Docker -- one named `weavemesh` that can operate without a cluster store and -one named `weave` that can only work with one (like Docker's overlay -driver). - -Docker supports creating multiple networks via the plugin, although -Weave Net provides only one network underneath. So long as the IP -addresses for each network are on different subnets, containers on one -network will not be able to communicate with those on a different network. - -### `weavemesh` driver - -* Weave handles all co-ordination between hosts (referred to by Docker as a "local scope" driver) -* Uses Weave's partition tolerant IPAM -* User must pick different subnets if creating multiple networks. - -We create a network named `weave` for you automatically, using the -[default subnet](ipam.html#subnets) set for the `weave` router. - -To create additional networks using the `weavemesh` driver, pick a -different subnet and run this command on all hosts: - - $ docker network create --driver=weavemesh --ipam-driver=weavemesh --subnet= - -The subnets you pick must be within the range covered by Weave's [IP -Address Management](ipam.html#range) - -### `weave` driver - -* This runs in what Docker call "global scope"; requires a cluster store -* Used with Docker's cluster store based IPAM -* Docker can coordinate choosing different subnets for multiple networks. - -There's no specific documentation from Docker on using a cluster -store, but the first part of -[Getting Started with Docker Multi-host Networking](https://github.com/docker/docker/blob/master/docs/userguide/networking/get-started-overlay.md) -should point the way. 
- -To create a network using this driver, once you have your cluster store set up: - - $ docker network create --driver=weave - - -## Plugin command-line arguments - -If you want to give some arguments to the plugin independently, don't -use `weave launch`; instead run: - - $ weave launch-router [other peers] - $ weave launch-plugin [plugin arguments] - -The plugin command-line arguments are: - - * `--log-level=debug|info|warning|error`, which tells the plugin - how much information to emit for debugging. - * `--no-multicast-route`: stop weave adding a static IP route for - multicast traffic on its interface - -By default, multicast traffic will be routed over the weave network. -To turn this off, e.g. because you want to configure your own multicast -route, add the `--no-multicast-route` flag to `weave launch-plugin`. - -## Restarting - -We start the plugin with a policy of `--restart=always`, so that it is -there after a restart or reboot. If you remove this container -(e.g. using `weave reset`) before removing all endpoints created using -`--net=weave`, Docker may hang for a long time when it subsequently -tries to talk to the plugin. - -Unfortunately, [Docker 1.9 may also try to talk to the plugin before it has even started it](https://github.com/docker/libnetwork/issues/813). -If using `systemd`, we advise that you modify the Docker unit to -remove the timeout on startup, to give Docker enough time to abandon -its attempts. - -E.g. 
in the file `/lib/systemd/system/docker.service`, add under `[Service]`: - - TimeoutStartSec=0 diff --git a/site/plugin/plug-in-command-line.md new file mode 100644 index 0000000000..f690d8cf1a --- /dev/null +++ b/site/plugin/plug-in-command-line.md @@ -0,0 +1,33 @@ +--- +title: Plugin Command-line Arguments +layout: default +--- + + + +If you need to give additional arguments to the plugin independently, don't +use `weave launch`, but instead run: + + $ weave launch-router [other peers] + $ weave launch-plugin [plugin arguments] + +The plugin command-line arguments are: + + * `--log-level=debug|info|warning|error` -- tells the plugin + how much information to emit for debugging. + * `--mesh-network-name=<name>` -- set to blank to disable creation + of a default network, or include a name of your own choice. + * `--no-multicast-route` -- stops weave from adding a static IP route for + multicast traffic onto its interface. + +By default, multicast traffic is routed over the weave network. +To turn this off, e.g. because you want to configure your own multicast +route, add the `--no-multicast-route` flag to `weave launch-plugin`. + + +>>**Note:** When using the Docker Plugin, there is no need to run `eval $(weave env)` to enable the proxy. Because Weave is running as a plugin within Docker, the Weave Docker API Proxy, at present, cannot distinguish between networks.
+ +**See Also** + + * [Using the Weave Net Docker Network Plugin](/site/plugin/weave-plugin-how-to.md) + * [How the Weave Network Plugin Works](/site/plugin/plugin-how-it-works.md) \ No newline at end of file diff --git a/site/plugin/plugin-how-it-works.md b/site/plugin/plugin-how-it-works.md new file mode 100644 index 0000000000..c0b8309c9e --- /dev/null +++ b/site/plugin/plugin-how-it-works.md @@ -0,0 +1,34 @@ +--- +title: How the Weave Docker Network Plugin Works +layout: default +--- + + +The Weave plugin actually provides *two* network drivers to Docker - one named `weavemesh` that can operate without a cluster store and another one named `weave` that can only work with one (like Docker's overlay driver). + +### `weavemesh` driver + +* Weave handles all co-ordination between hosts (referred to by Docker as a "local scope" driver) +* Supports a single network only. A network named `weave` is automatically created for you. +* Uses Weave's partition tolerant IPAM + +If you do create additional networks using the `weavemesh` driver, containers attached to them will be able to communicate with containers attached to `weave`. There is no isolation between those networks. + +### `weave` driver + +* This runs in what Docker calls "global scope", which requires an external cluster store +* Supports multiple networks that must be created using `docker network create --driver weave ...` +* Used with Docker's cluster-store-based IPAM + +There's no specific documentation from Docker on using a cluster +store, but the first part of +[Getting Started with Docker Multi-host Networking](https://github.com/docker/docker/blob/master/docs/userguide/networking/get-started-overlay.md) is a good place to start. + +>>**Note:** In the case of multiple networks using the `weave` driver, all containers are on the same virtual network but Docker allocates their addresses on different subnets so they cannot talk to each other directly. 
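The isolation described in the Note above comes down to non-overlapping subnets: addresses on different subnets have no direct route to one another. A short sketch with Python's `ipaddress` module (the specific /24 values here are made-up illustrative choices, not defaults of either driver):

```python
import ipaddress

# Two networks carved out of Weave's default 10.32.0.0/12 allocation range
alloc_range = ipaddress.ip_network("10.32.0.0/12")
net_a = ipaddress.ip_network("10.32.1.0/24")
net_b = ipaddress.ip_network("10.32.2.0/24")

# Both subnets sit inside the allocation range...
assert net_a.subnet_of(alloc_range) and net_b.subnet_of(alloc_range)
# ...but they do not overlap, so containers on one subnet
# cannot talk to containers on the other directly
assert not net_a.overlaps(net_b)
```

The same check explains why, with the `weavemesh` driver, you must pick a different subnet for each additional network you create.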
+ + +**See Also** + + * [Using the Weave Net Docker Network Plugin](/site/plugin/weave-plugin-how-to.md) + * [Plugin Command-line Arguments](/site/plugin/plug-in-command-line.md) + diff --git a/site/plugin/weave-plugin-how-to.md new file mode 100644 index 0000000000..96d11a37c4 --- /dev/null +++ b/site/plugin/weave-plugin-how-to.md @@ -0,0 +1,90 @@ +--- +title: Using the Weave Net Docker Network Plugin +layout: default +--- + + +Docker versions 1.9 and later have a plugin mechanism for adding +different network providers. Weave installs itself as a network plugin +when you start it with `weave launch`. The Weave Docker Networking plugin is fast and easy to use, and, +best of all, doesn't require an external cluster store. + +To create a network that can span multiple Docker hosts, the Weave peers must be connected to each other, by specifying the other hosts during `weave launch` or via +[`weave connect`](/site/using-weave/finding-adding-hosts-dynamically.md). + +See [Deploying Applications to Weave Net](/site/using-weave/deploying-applications.md#peer-connections) for a discussion on peer connections. + +After you've launched Weave and peered your hosts, you can start containers using the following, for example: + + $ docker run --net=weave -ti ubuntu + +on any of the hosts, and they can all communicate with each other. + +>>**Warning!** It is inadvisable to attach containers to the Weave network using the Weave Docker Networking Plugin and Weave Docker API Proxy simultaneously. Such containers will end up with two Weave network interfaces and two IP addresses, which is rarely desirable. To ensure that the proxy is not being used, do not run `eval $(weave env)` or `docker $(weave config)`.
+ +In order to use Weave's [Service Discovery](/site/weavedns/overview-using-weavedns.md) you +must pass the additional arguments `--dns` and `--dns-search`, for +which a helper is provided in the Weave script: + + $ docker run --net=weave -h foo.weave.local $(weave dns-args) -tdi ubuntu + $ docker run --net=weave -h bar.weave.local $(weave dns-args) -ti ubuntu + # ping foo + + + +###Launching Weave and Running Containers Using the Plugin + +Just launch the Weave Net router onto each host and make a peer connection with the other hosts: + +~~~bash +host1$ weave launch host2 +host2$ weave launch host1 +~~~ + +then run your containers using the Docker command-line: + +~~~bash +host1$ docker run --net=weave -ti ubuntu +root@1458e848cd90:/# hostname -i +10.32.0.2 +~~~ + +~~~bash +host2$ docker run --net=weave -ti ubuntu +root@8cc4b5dc5722:/# ping 10.32.0.2 + +PING 10.32.0.2 (10.32.0.2) 56(84) bytes of data. +64 bytes from 10.32.0.2: icmp_seq=1 ttl=64 time=0.116 ms +64 bytes from 10.32.0.2: icmp_seq=2 ttl=64 time=0.052 ms +~~~ + + +### Restarting the Plugin + +The plugin is started with a policy of `--restart=always`, so that it is always there after a restart or reboot. If you remove this container (for example, when using `weave reset`) before removing all endpoints created using `--net=weave`, Docker may hang for a long time when it subsequently tries to re-establish communications with the plugin. + +Unfortunately, [Docker 1.9 may also try to communicate with the plugin before it has even started it](https://github.com/docker/libnetwork/issues/813). + +If you are using `systemd`, it is advised that you modify the Docker unit to remove the timeout on startup. This gives Docker enough time to abandon its attempts.
For example, in the file `/lib/systemd/system/docker.service`, add the following under `[Service]`: + +~~~bash + TimeoutStartSec=0 +~~~ + +###Bypassing the Central Cluster Store When Building Docker Apps + +To run a Docker cluster without a central database, you need to ensure the following: + + 1. Run in "local" scope. This tells Docker to ignore any cross-host coordination. + 2. Allow Weave to handle all the cross-host coordination and to set up all networks. This is done by using the `weave launch` command. + 3. Provide an IP Address Management (IPAM) driver, which links to Weave Net's own IPAM system + +All cross-host coordination is handled by Weave Net's "mesh" communication, using gossipDNS and eventual consistency to avoid the need for constant communication and dependency on a central cluster store. + + +**See Also** + + * [How the Weave Network Plugin Works](/site/plugin/plugin-how-it-works.md) + * [Plugin Command-line Arguments](/site/plugin/plug-in-command-line.md) + + diff --git a/site/proxy.md b/site/proxy.md deleted file mode 100644 index bf322b5b37..0000000000 --- a/site/proxy.md +++ /dev/null @@ -1,332 +0,0 @@ ---- -title: Weave Docker API Proxy -layout: default ---- - -# Weave Docker API Proxy - -The Docker API proxy automatically attaches containers to the weave -network when they are started using the ordinary Docker -[command-line interface](https://docs.docker.com/reference/commandline/cli/) -or -[remote API](https://docs.docker.com/reference/api/docker_remote_api/), -instead of `weave run`. - - * [Setup](#setup) - * [Usage](#usage) - * [Automatic IP address assignment](#ipam) - * [Automatic discovery](#dns) - * [Securing the docker communication with TLS](#tls) - * [Launching containers without the proxy](#without-proxy) - * [Troubleshooting](#troubleshooting) - -## Setup - -The proxy sits between the Docker client (command line or API) and the -Docker daemon, intercepting the communication between the two. 
You can -start it simultaneously with the router and weaveDNS via `launch`: - - host1$ weave launch - -or independently via `launch-proxy`: - - host1$ weave launch-router && weave launch-proxy - -The first form is more convenient, however you can only pass proxy -related configuration arguments to `launch-proxy` so if you need to -modify the default behaviour you will have to use the latter. - -By default, the proxy decides where to listen based on how the -launching client connects to docker. If the launching client connected -over a unix socket, the proxy will listen on /var/run/weave/weave.sock. If -the launching client connected over TCP, the proxy will listen on port -12375, on all network interfaces. This can be adjusted with the `-H` -argument, e.g. - - host1$ weave launch-proxy -H tcp://127.0.0.1:9999 - -If no TLS or listening interfaces are set, TLS will be autoconfigured -based on the docker daemon's settings, and listening interfaces will -be autoconfigured based on your docker client's settings. - -Multiple `-H` arguments can be specified. If you are working with a -remote docker daemon, then any firewalls inbetween need to be -configured to permit access to the proxy port. - -All docker commands can be run via the proxy, so it is safe to adjust -your `DOCKER_HOST` to point at the proxy. Weave provides a convenient -command for this: - - host1$ eval $(weave env) - host1$ docker ps - ... - -The prior settings can be restored with - - host1$ eval $(weave env --restore) - -Alternatively, the proxy host can be set on a per-command basis with - - host1$ docker $(weave config) ps - -The proxy can be stopped independently with - - host1$ weave stop-proxy - -or in conjunction with the router and weaveDNS via `stop`. - -If you set your `DOCKER_HOST` to point at the proxy, you should revert -to the original settings prior to stopping the proxy. 
- - -## Usage - -When containers are created via the weave proxy, their entrypoint will -be modified to wait for the weave network interface to become -available. When they are started via the weave proxy, containers will -be [automatically assigned IP addresses](#ipam) and connected to the -weave network. We can create and start a container via the weave proxy -with - - host1$ docker run -ti ubuntu - -or, equivalently with - - host1$ docker create -ti ubuntu - 5ef831df61d50a1a49272357155a976595e7268e590f0a2c75693337b14e1382 - host1$ docker start 5ef831df61d50a1a49272357155a976595e7268e590f0a2c75693337b14e1382 - -Specific IP addresses and networks can be supplied in the `WEAVE_CIDR` -environment variable, e.g. - - host1$ docker run -e WEAVE_CIDR=10.2.1.1/24 -ti ubuntu - -Multiple IP addresses and networks can be supplied in the `WEAVE_CIDR` -variable by space-separating them, as in -`WEAVE_CIDR="10.2.1.1/24 10.2.2.1/24"`. - -The docker NetworkSettings (including IP address, MacAddress, and -IPPrefixLen), will still be returned by `docker inspect`. If you want -`docker inspect` to return the weave NetworkSettings instead, then the -proxy must be launced with the `--rewrite-inspect` flag. This will -only substitute in the weave network settings when the container has a -weave IP. If a container has more than one weave IP, the inspect call -will only include one of them. - - host1$ weave launch-router && weave launch-proxy --rewrite-inspect - -By default, multicast traffic will be routed over the weave network. -To turn this off, e.g. because you want to configure your own multicast -route, add the `--no-multicast-route` flag to `weave launch-proxy`. - -## Automatic IP address assignment - -If [automatic IP address assignment](ipam.html) is enabled in weave, -which it is by default, then containers started via the proxy will be -automatically assigned an IP address, *without having to specify any -special environment variables or other options*. 
- - host1$ docker run -ti ubuntu - -To use a specific subnet, we pass a `WEAVE_CIDR` to the container, e.g. - - host1$ docker run -ti -e WEAVE_CIDR=net:10.32.2.0/24 ubuntu - -To start a container without connecting it to the weave network, pass -`WEAVE_CIDR=none`, e.g. - - host1$ docker run -ti -e WEAVE_CIDR=none ubuntu - -If you do not want an IP to be assigned by default, the proxy needs to -be passed the `--no-default-ipalloc` flag, e.g., - - host1$ weave launch-proxy --no-default-ipalloc - -In this configuration, containers with no `WEAVE_CIDR` environment -variable will not be connected to the weave network. Containers -started with a `WEAVE_CIDR` environment variable are handled as -before. To automatically assign an address in this mode, we start the -container with a blank `WEAVE_CIDR`, e.g. - - host1$ docker run -ti -e WEAVE_CIDR="" ubuntu - -## Name resolution via `/etc/hosts` - -When starting weave-enabled containers, the proxy will automatically -replace the container's `/etc/hosts` file, and disable Docker's control -over it. The new file contains an entry for the container's hostname -and weave IP address, as well as additional entries that have been -specified with `--add-host` parameters. This ensures that - -- name resolution of the container's hostname, e.g. via `hostname -i`, -returns the weave IP address. This is required for many cluster-aware -applications to work. -- unqualified names get resolved via DNS, i.e. typically via weaveDNS -to weave IP addresses. This is required so that in a typical setup -one can simply "ping ", i.e. without having to -specify a `.weave.local` suffix. - -In case you prefer to keep `/etc/hosts` under Docker's control (for -example, because you need the hostname to resolve to the Docker-assigned -IP instead of the weave IP, or you require name resolution for -Docker-managed networks), the proxy must be launched with the -`--no-rewrite-hosts` flag. 
- - host1$ weave launch-router && weave launch-proxy --no-rewrite-hosts - -## Automatic discovery - -Containers launched via the proxy will use [weaveDNS](weavedns.html) -automatically if it is running at the point when they are started - -see the [weaveDNS usage](weavedns.html#usage) section for an in-depth -explanation of the behaviour and how to control it. - -Typically, the proxy will pass on container names as-is to [weaveDNS](weavedns.html) -for registration. However, there are situations in which the final container -name is out of the user's control (e.g. when using Docker orchestrators which -append control/namespacing identifiers to the original container names). - -For those situations, the proxy provides a few flags: `--hostname-from-label -<labelname>`, `--hostname-match <regexp>` and `--hostname-replacement -<replacement>`. When launching a container, the hostname is initialized to the -value of the container label with key `<labelname>`; if `<labelname>` wasn't -provided, the container name is used. Additionally, the hostname is matched -against the regular expression `<regexp>`. Then, based on that match, -`<replacement>` will be used to obtain the final hostname, which will -ultimately be handed over to weaveDNS for registration. - -For instance, we can launch the proxy using all three flags - - host1$ weave launch-router && weave launch-proxy --hostname-from-label hostname-label --hostname-match '^aws-[0-9]+-(.*)$' --hostname-replacement 'my-app-$1' - host1$ eval $(weave env) - -Note how regexp substitution groups should be prepended with a dollar sign -(e.g. `$1`). For further details on the regular expression syntax please see -[Google's re2 documentation](https://github.com/google/re2/wiki/Syntax). - - -Then, running a container named `aws-12798186823-foo` without labels will lead -to weaveDNS registering hostname `my-app-foo` and not `aws-12798186823-foo`. - - host1$ docker run -ti --name=aws-12798186823-foo ubuntu ping my-app-foo - PING my-app-foo.weave.local (10.32.0.2) 56(84) bytes of data.
- 64 bytes from my-app-foo.weave.local (10.32.0.2): icmp_seq=1 ttl=64 time=0.027 ms - 64 bytes from my-app-foo.weave.local (10.32.0.2): icmp_seq=2 ttl=64 time=0.067 ms - -Also, running a container named `foo` with label -`hostname-label=aws-12798186823-foo` leads to the same hostname registration. - - host1$ docker run -ti --name=foo --label=hostname-label=aws-12798186823-foo ubuntu ping my-app-foo - PING my-app-foo.weave.local (10.32.0.2) 56(84) bytes of data. - 64 bytes from my-app-foo.weave.local (10.32.0.2): icmp_seq=1 ttl=64 time=0.031 ms - 64 bytes from my-app-foo.weave.local (10.32.0.2): icmp_seq=2 ttl=64 time=0.042 ms - -This is because, as we explained above, when providing `--hostname-from-label` -to the proxy, the specified label has precedence over the container's name. - -## Securing the docker communication with TLS - -If you are [connecting to the docker daemon with -TLS](https://docs.docker.com/articles/https/), you will probably want -to do the same when connecting to the proxy. The proxy will -automatically detect the docker daemon's TLS configuration, and -attempt to duplicate it. In the standard auto-detection case you will -be able to launch a TLS-enabled proxy with: - - host1$ weave launch-proxy - -To disable auto-detection of TLS configuration, you can either pass -the `--no-detect-tls` flag, or manually configure the proxy's TLS with -the same TLS-related command-line flags as supplied to the docker -daemon. For example, if you have generated your certificates and keys -into the docker host's `/tls` directory, we can launch the proxy with: - - host1$ weave launch-proxy --tlsverify --tlscacert=/tls/ca.pem \ - --tlscert=/tls/server-cert.pem --tlskey=/tls/server-key.pem - -The paths to your certificates and key must be provided as absolute -paths which exist on the docker host. - -Because the proxy connects to the docker daemon at -`unix:///var/run/docker.sock`, you must ensure that the daemon is -listening there. 
To do this, you need to pass the `-H -unix:///var/run/docker.sock` option when starting the docker daemon, -in addition to the `-H` options for configuring the TCP listener. See -[the Docker documentation](https://docs.docker.com/articles/basics/#bind-docker-to-another-host-port-or-a-unix-socket) -for an example. - -With the proxy running over TLS, we can configure our regular docker -client to use TLS on a per-invocation basis with - - $ docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem \ - --tlskey=key.pem -H=tcp://host1:12375 version - ... - -or, -[by default](https://docs.docker.com/articles/https/#secure-by-default), -with - - $ mkdir -pv ~/.docker - $ cp -v {ca,cert,key}.pem ~/.docker - $ eval $(weave env) - $ export DOCKER_TLS_VERIFY=1 - $ docker version - ... - -which is exactly the same configuration as when connecting to the -docker daemon directly, except that the specified port is the weave -proxy port. - -## Launching containers without the proxy - -If you cannot or do not want to use the proxy you can launch -containers on the weave network with `weave run`: - - $ weave run -ti ubuntu - -The arguments after `run` are passed through to `docker run` so you -can freely specify whichever docker options are appropriate. Once the -container is started, `weave run` attaches it to the weave network, in -this example with an automatically allocated IP. If you wish you can -specify addresses manually instead: - - $ weave run 10.2.1.1/24 -ti ubuntu - -`weave run` will rewrite `/etc/hosts` in the same way -[the proxy does](#etchosts). In case you prefer to keep -the original file, you must specify `--no-rewrite-hosts` when running -the container: - - $ weave run --no-rewrite-hosts 10.2.1.1/24 -ti ubuntu - -There are some limitations to starting containers with `weave run`: - -* containers are always started in the background, i.e. 
the equivalent - of always supplying the -d option to docker run -* the --rm option to docker run, for automatically removing containers - after they stop, is not available -* the weave network interface may not be available immediately on - container startup. - -Finally, there is a `weave start` command which starts existing -containers with `docker start` and attaches them to the weave network. - -## Troubleshooting - -The command - - weave status - -reports on the current status of various weave components, including -the proxy, if it is running: - -```` -... -weave proxy is running -```` - -Information on the operation of the proxy can be obtained from the -weaveproxy container logs with - - docker logs weaveproxy - diff --git a/site/router-topology/network-topology.md b/site/router-topology/network-topology.md new file mode 100644 index 0000000000..0a47406e07 --- /dev/null +++ b/site/router-topology/network-topology.md @@ -0,0 +1,150 @@ +--- +title: How Weave Interprets Network Topology +layout: default +--- + +This section contains the following topics: + + * [Communicating Topology Among Peers](#topology) + * [How Messages are Formed](#messages) + * [Removing Peers](#removing-peers) + * [What Happens When The Topology is Out of Date?](#out-of-date-topology) + + +###Communicating Topology Among Peers + +Topology messages capture which peers are connected to other peers. +Weave peers communicate their knowledge of the topology +(and changes to it) to others, so that all peers learn about the +entire topology. + +Communication between peers occurs over TCP links using: +a) a spanning-tree based broadcast mechanism, and b) a +neighbour gossip mechanism.
+ +Topology messages are sent by a peer in the following instances: + +- when a connection has been added; if the remote peer appears to be + new to the network, the entire topology is sent to it, and an + incremental update, containing information on just the two peers at + the ends of the connection, is broadcast, +- when a connection has been marked as 'established', indicating that + the remote peer can receive UDP traffic from the peer; an update + containing just information about the local peer is broadcast, +- when a connection has been torn down; an update containing just + information about the local peer is broadcast, +- periodically, on a timer, the entire topology is "gossiped" to a + subset of neighbours, based on a topology-sensitive random + distribution. This is done in case some of the aforementioned + broadcasts do not reach all peers, due to rapid changes in the + topology causing broadcast routing tables to become outdated. + +The receiver of a topology update merges that update with its own +topology model, adding peers hitherto unknown to it, and updating +peers for which the update contains a more recent version than known +to it. If there were any such new/updated peers, and the topology +update was received over gossip (rather than broadcast), then an +improved update containing them is gossiped. + +If the update mentions a peer that the receiver does not know, then +the entire update is ignored. 
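The merge rule described above (keep whatever is new or carries a higher version number, and re-gossip only that) can be sketched as follows. This is an illustrative simplification, not Weave's actual code; the `Peer` type and `merge` function are hypothetical:

```go
package main

import "fmt"

// Peer is a simplified stand-in for a peer entry in the topology model.
// Version is incremented by a peer whenever its own connections change.
type Peer struct {
	Name    string
	Version uint64
}

// merge applies an update to the local topology model and returns the
// peers that were previously unknown or newer than the local copy; per
// the text above, only these need to be gossiped onward.
func merge(local map[string]Peer, update []Peer) (improved []Peer) {
	for _, p := range update {
		known, ok := local[p.Name]
		if !ok || p.Version > known.Version {
			local[p.Name] = p
			improved = append(improved, p)
		}
	}
	return improved
}

func main() {
	local := map[string]Peer{"peer-a": {Name: "peer-a", Version: 3}}
	update := []Peer{
		{Name: "peer-a", Version: 2}, // stale: local already has version 3
		{Name: "peer-b", Version: 1}, // new peer: merged and re-gossiped
	}
	fmt.Println(merge(local, update)) // prints [{peer-b 1}]
}
```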
+ +####How Messages Are Formed + +Every gossip message is structured as follows: + + +-----------------------------------+ + | 1-byte message type - Gossip | + +-----------------------------------+ + | 4-byte Gossip channel - Topology | + +-----------------------------------+ + | Peer Name of source | + +-----------------------------------+ + | Gossip payload (topology update) | + +-----------------------------------+ + +The topology update payload is laid out like this: + + +-----------------------------------+ + | Peer 1: Name | + +-----------------------------------+ + | Peer 1: NickName | + +-----------------------------------+ + | Peer 1: UID | + +-----------------------------------+ + | Peer 1: Version number | + +-----------------------------------+ + | Peer 1: List of connections | + +-----------------------------------+ + | ... | + +-----------------------------------+ + | Peer N: Name | + +-----------------------------------+ + | Peer N: NickName | + +-----------------------------------+ + | Peer N: UID | + +-----------------------------------+ + | Peer N: Version number | + +-----------------------------------+ + | Peer N: List of connections | + +-----------------------------------+ + +Each List of connections is encapsulated as a byte buffer, within +which the structure is: + + +-----------------------------------+ + | Connection 1: Remote Peer Name | + +-----------------------------------+ + | Connection 1: Remote IP address | + +-----------------------------------+ + | Connection 1: Outbound | + +-----------------------------------+ + | Connection 1: Established | + +-----------------------------------+ + | Connection 2: Remote Peer Name | + +-----------------------------------+ + | Connection 2: Remote IP address | + +-----------------------------------+ + | Connection 2: Outbound | + +-----------------------------------+ + | Connection 2: Established | + +-----------------------------------+ + | ... 
| + +-----------------------------------+ + | Connection N: Remote Peer Name | + +-----------------------------------+ + | Connection N: Remote IP address | + +-----------------------------------+ + | Connection N: Outbound | + +-----------------------------------+ + | Connection N: Established | + +-----------------------------------+ + +####Removing Peers + +If a peer, after receiving a topology update, sees that another peer +no longer has any connections within the network, it drops all +knowledge of that second peer. + + +####What Happens When The Topology is Out of Date? + +The propagation of topology changes to all peers is not instantaneous. +Therefor, it is very possible for a node elsewhere in the network to have an +out-of-date view. + +If the destination peer for a packet is still reachable, then +out-of-date topology can result in it taking a less efficient route. + +If the out-of-date topology makes it look as if the destination peer +is not reachable, then the packet is dropped. For most protocols +(for example, TCP), the transmission will be retried a short time later, by +which time the topology should have updated. + + +**See Also** + + * [Weave Router Encapsulation](/site/router-topology/router-encapsulation.md) + + + diff --git a/site/router-topology/overview.md b/site/router-topology/overview.md new file mode 100644 index 0000000000..4c150f3047 --- /dev/null +++ b/site/router-topology/overview.md @@ -0,0 +1,51 @@ +--- +title: How Weave Net Works +layout: default +--- + + + +A Weave network consists of a number of 'peers' - Weave routers +residing on different hosts. Each peer has a name, which tends to +remain the same over restarts, a human friendly nickname for use in +status and logging output and a unique identifier (UID) that is +different each time it is run. These are opaque identifiers as far as +the router is concerned, although the name defaults to a MAC address. 
+ +Weave routers establish TCP connections with each other, over which they +perform a protocol handshake and subsequently exchange +[topology](/site/router-topology/network-topology.md) information. +These connections are encrypted if +so configured. Peers also establish UDP "connections", possibly +encrypted, which carry encapsulated network packets. These +"connections" are duplex and can traverse firewalls. + +Weave creates a network bridge on the host. Each container is +connected to that bridge via a veth pair, the container side of which +is given an IP address and netmask supplied either by the user or +by Weave's IP address allocator. Also connected to the bridge is the +Weave router container. + +A Weave router captures Ethernet packets from its bridge-connected +interface in promiscuous mode, using 'pcap'. This typically excludes +traffic between local containers, and between the host and local +containers, all of which is routed straight over the bridge by the +kernel. Captured packets are forwarded over UDP to weave router peers +running on other hosts. On receipt of such a packet, a router injects +the packet on its bridge interface using 'pcap' and/or forwards the +packet to peers. + +Weave routers learn which peer host a particular MAC address resides +on. They combine this knowledge with topology information in order to +make routing decisions and thus avoid forwarding every packet to every +peer. Weave can route packets in partially connected networks with +changing topology. 
For example, in this network, peer 1 is connected +directly to 2 and 3, but if 1 needs to send a packet to 4 or 5 it must +first send it to peer 3: + +![Partially connected Weave Network](images/top-diag1.png "Partially connected Weave Network") + +**See Also** + + * [Weave Router Encapsulation](/site/router-topology/router-encapsulation.md) + * [How Weave Interprets Network Topology](/site/router-topology/network-topology.md) \ No newline at end of file diff --git a/site/router-topology/router-encapsulation.md b/site/router-topology/router-encapsulation.md new file mode 100644 index 0000000000..ab68bdba60 --- /dev/null +++ b/site/router-topology/router-encapsulation.md @@ -0,0 +1,64 @@ +--- +title: Weave Router Encapsulation +layout: default +--- + + +When the Weave router forwards packets, the encapsulation looks +something like this: + + +-----------------------------------+ + | Name of sending peer | + +-----------------------------------+ + | Frame 1: Name of capturing peer | + +-----------------------------------+ + | Frame 1: Name of destination peer | + +-----------------------------------+ + | Frame 1: Captured payload length | + +-----------------------------------+ + | Frame 1: Captured payload | + +-----------------------------------+ + | Frame 2: Name of capturing peer | + +-----------------------------------+ + | Frame 2: Name of destination peer | + +-----------------------------------+ + | Frame 2: Captured payload length | + +-----------------------------------+ + | Frame 2: Captured payload | + +-----------------------------------+ + | ...
| + +-----------------------------------+ + | Frame N: Name of capturing peer | + +-----------------------------------+ + | Frame N: Name of destination peer | + +-----------------------------------+ + | Frame N: Captured payload length | + +-----------------------------------+ + | Frame N: Captured payload | + +-----------------------------------+ + +The name of the sending peer enables the receiving peer to identify +the sender of the UDP packet. This is followed by the meta data and +a payload for one or more captured frames. The router will perform batching +if it captures several frames very quickly which all need forwarding to +the same peer. And in this instance, it will fit as many frames as possible into a single +UDP packet. + +The meta data for each frame contains the names of the capturing and +the destination peers. Since the name of the capturing peer name is +associated with the source MAC of the captured payload, it allows +receiving peers to build up their mappings of which client MAC +addresses are local to which peers. + +The destination peer name enables the receiving peer to identify whether this frame is destined for +itself or whether it should be forwarded on to some other peer, and +accommodate multi-hop routing. This works even when the receiving +intermediate peer has no knowledge of the destination MAC: only the +original capturing peer needs to determine the destination peer from +the MAC. In this way Weave peers never need to exchange the MAC addresses +of clients and need not take any special action for ARP traffic and +MAC discovery. 
+ +**See Also** + + * [How Weave Interprets Network Topology](/site/router-topology/network-topology.md) \ No newline at end of file diff --git a/site/systemd.md b/site/systemd.md index 1dcfbb020e..efc60df003 100644 --- a/site/systemd.md +++ b/site/systemd.md @@ -3,17 +3,16 @@ title: Using Weave with Systemd layout: default --- -# Using Weave with Systemd -Having installed `weave` as per [readme][], you might wish to configure the -init daemon to start it on boot. Most recent Linux distribution releases are -shipping with [systemd][]. The information below should provide you with some -initial guidance on getting a weave service configured on systemd-based OS. +Having installed `weave` as per [Installing Weave](/site/installing-weave.md), you might find it convenient to configure the +init daemon to start Weave on boot. Most recent Linux distributions ship +with [systemd](http://www.freedesktop.org/wiki/Software/systemd/). The information below should provide you with some +initial guidance on getting a Weave service configured on a systemd-based OS. ## Weave Service Unit and Configuration -A regular service unit definition for weave is shown below and you should -normally place it in `/etc/systemd/system/weave.service`. +A regular service unit definition for Weave is shown below. This file is +normally placed in `/etc/systemd/system/weave.service`. [Unit] Description=Weave Network @@ -29,25 +28,22 @@ normally place it in `/etc/systemd/system/weave.service`. WantedBy=multi-user.target -To specify the addresses or names of other weave hosts to join the network -you can create the `/etc/sysconfig/weave` environment file which would be of -the following format: +To specify the addresses or names of other Weave hosts to join the network, +create the `/etc/sysconfig/weave` environment file using the following format: PEERS="HOST1 HOST2 .. HOSTn" -You can also use the [connect][] command to add participating hosts dynamically.
+You can also use the [`weave connect`](/site/using-weave/finding-adding-hosts-dynamically.md) command to add participating hosts dynamically. -Additionally, if you want to enable [encryption][] you can specify a -password with e.g. `WEAVE_PASSWORD="wfvAwt7sj"` in the -`/etc/sysconfig/weave` environment file, and it will get picked up by -weave on launch. Recommendations for choosing a suitably strong -password can be found [here](features.html#security). +Additionally, if you want to enable [encryption](/site/using-weave/security-untrusted-networks.md) you can specify a +password with e.g. `WEAVE_PASSWORD="wfvAwt7sj"` in the `/etc/sysconfig/weave` environment file, and it will get picked up by +Weave on launch. Recommendations for choosing a suitably strong password can be found [here](/site/using-weave/security-untrusted-networks.md). -You now should be able to launch weave with +You can now launch Weave using sudo systemctl start weave -To ensure weave launches after reboot, you need run +To ensure Weave launches after reboot, run: sudo systemctl enable weave @@ -56,18 +52,20 @@ by your distribution of Linux. ## SELinux Tweaks -If your OS has SELinux enabled and you wish to run weave as a systemd unit, -then you should follow the instructions below. These instructions apply to +If your OS has SELinux enabled and you want to run Weave as a systemd unit, +then follow the instructions below. These instructions apply to CentOS and RHEL as of 7.0. On Fedora 21, there is no need to do this. -Once you have installed `weave` in `/usr/local/bin`, set its execution +Once `weave` is installed in `/usr/local/bin`, set its execution context with the commands shown below. You will need to have the `policycoreutils-python` package installed. 
sudo semanage fcontext -a -t unconfined_exec_t -f f /usr/local/bin/weave sudo restorecon /usr/local/bin/weave -[readme]: https://github.com/weaveworks/weave/blob/master/README.md#installation -[connect]: features.html#dynamic-topologies -[systemd]: http://www.freedesktop.org/wiki/Software/systemd/ -[encryption]: features.html#security +**See Also** + + * [Using Weave Net](/site/using-weave/intro-example.md) + * [Getting Started Guides](http://www.weave.works/guides/) + * [Features](/site/features.md) + * [Troubleshooting](/site/troubleshooting.md) diff --git a/site/troubleshooting.md b/site/troubleshooting.md index c258651aa4..8db9028e87 100644 --- a/site/troubleshooting.md +++ b/site/troubleshooting.md @@ -5,46 +5,46 @@ layout: default # Troubleshooting Weave - * [Basic diagnostics](#diagnostics) - * [Status reporting](#weave-status) + * [Basic Diagnostics](#diagnostics) + * [Status Reporting](#weave-status) - [List connections](#weave-status-connections) - [List peers](#weave-status-peers) - [List DNS entries](#weave-status-dns) - [JSON report](#weave-report) - [List attached containers](#list-attached-containers) - * [Stopping weave](#stop) + * [Stopping Weave](#stop) * [Reboots](#reboots) - * [Snapshot releases](#snapshots) + * [Snapshot Releases](#snapshots) -## Basic diagnostics +## Basic Diagnostics -Check what version of weave you are running with +Check the version of Weave you are running using: weave version If it is not the latest version, as shown in the list of [releases](https://github.com/weaveworks/weave/releases), then it is -highly recommended that you upgrade by following the +recommended that you upgrade using the [installation instructions](https://github.com/weaveworks/weave#installation). -Check the weave container logs with +To check the Weave container logs: docker logs weave A reasonable amount of information, and all errors, get logged there.
-The log verbosity can be increased by supplying the -`--log-level=debug` option when launching weave. To log information on -a per-packet basis use `--pktdebug` - be warned, this can produce a +The log verbosity may be increased by using the +`--log-level=debug` option during `weave launch`. To log information on +a per-packet basis use `--pktdebug` - but be warned: this can produce a lot of output. Another useful debugging technique is to attach standard packet capture and analysis tools, such as tcpdump and wireshark, to the `weave` network bridge on the host. -## Status reporting +## Status Reporting -A status summary can be obtained with `weave status`: +A status summary can be obtained using `weave status`: ```` $ weave status @@ -80,59 +80,55 @@ $ weave status ```` The terms used here are explained further at -[how it works](how-it-works.html). +[How Weave Net Works](/site/router-topology/overview.md). -The 'Version' line shows the weave version. + * **Version** - shows the Weave version. -The 'Protocol' line indicates the weave router's inter-peer + * **Protocol** - indicates the Weave Router inter-peer communication protocol name and supported versions (min..max). -The 'Name' line identifies the local weave router as a peer in the -weave network. The nickname shown in parentheses defaults to the name -of the host on which the weave container was launched; if desired it -can be overriden by supplying the `--nickname` argument to `weave + * **Name** - identifies the local Weave Router as a peer on the +Weave network. The nickname shown in parentheses defaults to the name +of the host on which the Weave container was launched. It +can be overridden by using the `--nickname` argument at `weave launch`. -The 'Encryption' line indicates whether -[encryption](features.html#security) is in use for communication + * **Encryption** - indicates whether +[encryption](/site/encryption/crypto-overview.md) is in use for communication between peers.
-The 'PeerDiscovery' line indicates whether -[automatic peer discovery](features.html#dynamic-topologies) is + * **PeerDiscovery** - indicates whether +[automatic peer discovery](/site/ipam/allocation-multi-ipam.md) is enabled (which is the default). -'Targets' is the number of hosts that the local weave router has been -asked to connect to in `weave launch` and `weave connect`. The -complete list can be obtained with `weave status targets`. + * **Targets** - is the number of hosts that the local Weave Router has been +asked to connect to at `weave launch` and `weave connect`. The +complete list can be obtained using `weave status targets`. -'Connections' shows the total number connections between the local weave -router and other peers, and a break down of that figure by connection + * **Connections** - shows the total number of connections between the local Weave +Router and other peers, and a breakdown of that figure by connection state. Further details are available with [`weave status connections`](#weave-status-connections). -'Peers' shows the total number of peers in the network, and the total + * **Peers** - shows the total number of peers in the network, and the total number of connections peers have to other peers. Further details are available with [`weave status peers`](#weave-status-peers). -'TrustedSubnets' shows subnets which the router trusts as specified by -the `--trusted-subnets` option to `weave launch`. + * **TrustedSubnets** - shows subnets which the router trusts, as specified by the `--trusted-subnets` option at `weave launch`. -There are further sections for the [IP address -allocator](ipam.html#troubleshooting), -[weaveDNS](weavedns.html#troubleshooting), and [Weave Docker API -Proxy](proxy.html#troubleshooting). -### List connections -Connections between weave peers carry control traffic over TCP and +### List Connections + +Connections between Weave peers carry control traffic over TCP and data traffic over UDP.
For a connection to be fully established, the -TCP connection and UDP data path must be able to transmit information +TCP connection and UDP datapath must be able to transmit information in both directions. Weave routers check this regularly with heartbeats. Failed connections are automatically retried, with an exponential back-off. -Detailed information on the local weave router's connections can be -obtained with `weave status connections`: +Detailed information on the local Weave router's connections can be +obtained using `weave status connections`: ```` $ weave status connections @@ -160,7 +156,7 @@ The columns are as follows: the encryption mode, data transport method, remote peer name and nickname for pending and established connections -### List peers +### List Peers Detailed information on peers can be obtained with `weave status peers`: @@ -185,7 +181,7 @@ address and port number of the connection. In the above example, `host3` has connected to `host1` at `192.168.48.11:6783`; `host1` sees the `host3` end of the same connection as `192.168.48.13:49619`. -### List DNS entries +### List DNS Entries Detailed information on DNS registrations can be obtained with `weave status dns`: @@ -210,7 +206,7 @@ The columns are as follows: * Registering entity identifier (typically a container ID) * Name of peer from which the registration originates -### JSON report +### JSON Report $ weave report @@ -229,12 +225,12 @@ results in JSON format. 
$ weave report -f {% raw %}'{{json .DNS}}'{% endraw %} {% raw %}{"Domain":"weave.local.","Upstream":["8.8.8.8","8.8.4.4"],"Address":"172.17.0.1:53","TTL":1,"Entries":null}{% endraw %} -### List attached containers +### List Attached Containers weave ps -Produces a list of all the containers running on this host that are -connected to the weave network, like this: +Produces a list of all containers running on this host that are +connected to the Weave network, like this: weave:expose 7a:c4:8b:a1:e6:ad 10.2.5.2/24 b07565b06c53 ae:e3:07:9c:8c:d4 @@ -245,8 +241,8 @@ connected to the weave network, like this: On each line are the container ID, its MAC address, then the list of IP address/routing prefix length ([CIDR notation](http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing)) -assigned on the weave network. The special container name `weave:expose` -displays the weave bridge MAC and any IP addresses added to it via the +assigned on the Weave network. The special container name `weave:expose` +displays the Weave bridge MAC and any IP addresses added to it via the `weave expose` command. You can also supply a list of container IDs/names to `weave ps`, like this: @@ -255,29 +251,30 @@ You can also supply a list of container IDs/names to `weave ps`, like this: able ce:15:34:a9:b5:6d 10.2.5.1/24 baker 7a:61:a2:49:4b:91 10.2.8.3/24 -## Stopping weave -To stop weave, if you have configured your environment to use the +## Stopping Weave + +To stop Weave, if you have configured your environment to use the Weave Docker API Proxy, e.g. by running `eval $(weave env)` in your shell, you must first restore the environment with eval $(weave env --restore) -Then run +Then run: weave stop -Note that this leaves the local application container network intact; -containers on the local host can continue to communicate, whereas +Note that this leaves the local application container network intact. 
+Containers on the local host can continue to communicate, whereas communication with containers on different hosts, as well as service -export/import, is disrupted but resumes when weave is relaunched. +export/import, is disrupted but resumes once Weave is relaunched. -To stop weave and completely remove all traces of the weave network on +To stop Weave and to completely remove all traces of the Weave network on the local host, run weave reset -Any running application containers will permanently lose connectivity -with the weave network and have to be restarted in order to +Any running application containers permanently lose connectivity +with the Weave network and will have to be restarted in order to re-connect. ## Reboots @@ -286,21 +283,21 @@ The router and proxy containers do not have Docker restart policies set, because the process of getting everything re-started and re-connected via restart policies is not entirely reliable. Until that changes, we recommend you create appropriate startup scripts to launch -weave and run application containers from -[your favourite process manager](systemd.html). +Weave and run application containers from +[your favourite process manager](/site/systemd.md). If you are shutting down or restarting a host deliberately, run `weave reset` to clear everything down. The Weave Docker plugin does restart automatically because it must always start with Docker, as described in -[its documentation](plugin.html). +[its documentation](/site/weave-docker-api/using-proxy.md). -## Snapshot releases +## Snapshot Releases -We sometimes publish snapshot releases, to provide previews of new -features, assist in validation of bug fixes, etc. One can install the -latest snapshot release with +Snapshot releases are published at times to provide previews of new +features, assist in the validation of bug fixes, etc. 
One can install the
latest snapshot release using:

    sudo curl -L git.io/weave-snapshot -o /usr/local/bin/weave
    sudo chmod a+x /usr/local/bin/weave

@@ -308,3 +305,11 @@ latest snapshot release
 report the script version as "(unreleased version)",
 and the container image versions as git hashes.

**See Also**

 * [Troubleshooting Weave](/site/troubleshooting.md)
 * [Troubleshooting IPAM](/site/ipam/troubleshooting.md)
 * [Troubleshooting the Proxy](/site/weave-docker-api/using-proxy.md)

diff --git a/site/using-weave/application-isolation.md b/site/using-weave/application-isolation.md
new file mode 100644
index 0000000000..bf6bd04e0b
--- /dev/null
+++ b/site/using-weave/application-isolation.md
@@ -0,0 +1,88 @@
---
title: Isolating Applications on a Weave Network
layout: default
---

A single Weave network can host multiple, isolated applications, where each application's containers are able
to communicate with each other but not with the containers of other applications.

To isolate applications, you can use the `isolation-through-subnets` technique.
This common strategy is an example of how, with Weave, many of your on-metal
techniques can still be used to deploy applications to a container network.

To begin isolating an application (or parts of an application),
configure Weave's IP allocator to manage multiple subnets.

Using [the netcat example](/site/using-weave/deploying-applications.md), configure multiple subnets:

~~~bash
 host1$ weave launch --ipalloc-range 10.2.0.0/16 --ipalloc-default-subnet 10.2.1.0/24
 host1$ eval $(weave env)
 host2$ weave launch --ipalloc-range 10.2.0.0/16 --ipalloc-default-subnet 10.2.1.0/24 $HOST1
 host2$ eval $(weave env)
~~~

This delegates the entire 10.2.0.0/16 range to Weave, and instructs
it to allocate addresses from 10.2.1.0/24 within that range if a specific subnet is not
specified.
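As a sanity check on the prefixes above (a sketch assuming only a POSIX shell; this is plain address arithmetic, not a Weave command), the /16 allocation range holds 65536 addresses and splits into 256 distinct /24 subnets:

```shell
# Address arithmetic for the prefixes used above:
# a /16 allocation range and /24 default subnets.
range_prefix=16
subnet_prefix=24

echo "addresses in the /16 range:  $(( 1 << (32 - range_prefix) ))"    # 65536
echo "addresses per /24 subnet:    $(( 1 << (32 - subnet_prefix) ))"   # 256
echo "distinct /24 subnets in /16: $(( 1 << (subnet_prefix - range_prefix) ))"  # 256
```

Each /24 subnet can therefore host one isolated application, with plenty of room in the /16 range for more.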
Next, launch the two netcat containers onto the default subnet:

~~~bash
 host1$ docker run --name a1 -ti ubuntu
 host2$ docker run --name a2 -ti ubuntu
~~~

Then, to test the isolation, launch a few more containers onto a different subnet:

~~~bash
 host1$ docker run -e WEAVE_CIDR=net:10.2.2.0/24 --name b1 -ti ubuntu
 host2$ docker run -e WEAVE_CIDR=net:10.2.2.0/24 --name b2 -ti ubuntu
~~~

Ping the containers to confirm that they can talk to each other, but not to the containers on the first subnet:

~~~bash
 root@b1:/# ping -c 1 -q b2
 PING b2.weave.local (10.2.2.128) 56(84) bytes of data.
 --- b2.weave.local ping statistics ---
 1 packets transmitted, 1 received, 0% packet loss, time 0ms
 rtt min/avg/max/mdev = 1.338/1.338/1.338/0.000 ms
~~~

~~~bash
 root@b1:/# ping -c 1 -q a1
 PING a1.weave.local (10.2.1.2) 56(84) bytes of data.
 --- a1.weave.local ping statistics ---
 1 packets transmitted, 0 received, 100% packet loss, time 0ms
~~~

~~~bash
 root@b1:/# ping -c 1 -q a2
 PING a2.weave.local (10.2.1.130) 56(84) bytes of data.
 --- a2.weave.local ping statistics ---
 1 packets transmitted, 0 received, 100% packet loss, time 0ms
~~~

If required, a container can also be attached to multiple subnets when it is started, using:

~~~bash
 host1$ docker run -e WEAVE_CIDR="net:default net:10.2.2.0/24" -ti ubuntu
~~~

`net:default` is used to request the allocation of an address from the default subnet in addition to one from an explicitly specified range.

>>**Important:** Containers must be prevented from capturing and injecting raw network packets. This can be accomplished by starting them with the `--cap-drop net_raw` option.

>>**Note:** By default, Docker permits communication between containers on the same host via their Docker-assigned IP addresses. For complete
isolation between application containers, that feature needs to be disabled by [setting `--icc=false`](https://docs.docker.com/engine/userguide/networking/default_network/container-communication/#communication-between-containers) in the Docker daemon configuration.

**See Also**

 * [Managing Services in Weave: Exporting, Importing, Binding and Routing](/site/using-weave/service-management.md)
 * [Exposing Services to the Outside World](/site/using-weave/service-export.md)

diff --git a/site/using-weave/deploying-applications.md b/site/using-weave/deploying-applications.md
new file mode 100644
index 0000000000..8487422952
--- /dev/null
+++ b/site/using-weave/deploying-applications.md
@@ -0,0 +1,117 @@
---
title: Deploying Applications To Weave Net
layout: default
---

This section contains the following topics:

 * [Launching Weave Net](#launching)
 * [Creating Peer Connections Between Hosts](#peer-connections)
 * [Testing Container Communications](#testing)
 * [Starting the Netcat Service](#start-netcat)

###Launching Weave Net

Before launching Weave and deploying your apps, ensure that Docker is [installed](https://docs.docker.com/engine/installation/) on both hosts.

On `$HOST1` run:

    host1$ weave launch
    host1$ eval $(weave env)
    host1$ docker run --name a1 -ti ubuntu

Where,

 * The first line runs Weave.
 * The second line configures the Weave environment, so that containers launched via the Docker command line are automatically attached to the Weave network, and,
 * The third line runs the application container using [Docker commands](https://docs.docker.com/engine/reference/commandline/daemon/).

>>**Note:** If the first command results in an error like
 `http:///var/run/docker.sock/v1.19/containers/create: dial unix
 /var/run/docker.sock: permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?` then you likely need to be 'root' in order to connect to the Docker daemon. If so, run the above and all subsequent commands in a *single* root shell (e.g. one created with `sudo -s`). Do *not* prefix individual commands with `sudo`, since some commands modify environment entries and hence they all need to be executed from the same shell.

>>**Important!** If you are running the Weave Docker Network Plugin, do not run `eval $(weave env)`. See [Using the Weave Net Docker Network Plugin](/site/plugin/weave-plugin-how-to.md) for more information.

Weave must be launched once per host. The relevant container images will be pulled down from Docker Hub on demand during `weave launch`.

You can also preload the images by running `weave setup`. Preloaded images are useful for automated deployments, and ensure there are no delays during later operations.

If you are deploying an application that consists of more than one container to the same host, launch them one after another using `docker run`, as appropriate.

###Creating Peer Connections Between Hosts

To launch Weave on an additional host and create a peer connection, run the following:

    host2$ weave launch $HOST1
    host2$ eval $(weave env)
    host2$ docker run --name a2 -ti ubuntu

As noted above, the same steps are repeated for `$HOST2`. The only difference, besides the application container’s name, is that `$HOST2` is told to peer with Weave on `$HOST1` during launch.

You can also peer with other hosts by specifying the IP address, and a `:port` by which `$HOST2` can reach `$HOST1`.

>>**Note:** If there is a firewall between `$HOST1` and `$HOST2`, you must permit traffic to flow through TCP 6783 and UDP 6783/6784, which are Weave’s control and data ports.

There are a number of different ways that you can specify peers on a Weave network.
You can launch Weave on `$HOST1` and then peer with `$HOST2`, or you can launch on `$HOST2` and peer with `$HOST1`, or you can tell both hosts about each other at launch. The order in which peers are specified is not important: Weave automatically (re)connects to peers as they become available.

####Specifying Multiple Peers at Launch

To specify multiple peers, supply a list of addresses to which you want to connect, all separated by spaces.

For example:

~~~bash
 host2$ weave launch $HOST1 $HOST3
~~~

Peers can also be added dynamically. See [Adding Hosts Dynamically](/site/using-weave/finding-adding-hosts-dynamically.md) for more information.

###Testing Container Communications

With two containers running on separate hosts, test that both containers are able to find and communicate with one another using ping.

From the container started on `$HOST1`...

    root@a1:/# ping -c 1 -q a2
    PING a2.weave.local (10.40.0.2) 56(84) bytes of data.
    --- a2.weave.local ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms

Similarly, in the container started on `$HOST2`...

    root@a2:/# ping -c 1 -q a1
    PING a1.weave.local (10.32.0.2) 56(84) bytes of data.
    --- a1.weave.local ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms

###Starting the Netcat Service

The `netcat` service can be started using the following commands:

~~~bash
 root@a1:/# nc -lk -p 4422
~~~

and then connected to from another container on `$HOST2` using:

~~~bash
 root@a2:/# echo 'Hello, world.' | nc a1 4422
~~~

Weave supports *any* protocol; it doesn't have to be over TCP/IP. For example, a netcat UDP service can also be run using the following:

~~~bash
 root@a1:/# nc -lu -p 5533
 root@a2:/# echo 'Hello, world.' | nc -u a1 5533
~~~

**See Also**

 * [Installing Weave Net](/site/installing-weave.md)
 * [Using Fastdp With Weave](/site/fastdp/using-fastdp.md)
 * [Using the Weave Net Docker Network Plugin](/site/plugin/weave-plugin-how-to.md)

diff --git a/site/using-weave/dynamically-attach-containers.md b/site/using-weave/dynamically-attach-containers.md
new file mode 100644
index 0000000000..12fc3d8f54
--- /dev/null
+++ b/site/using-weave/dynamically-attach-containers.md
@@ -0,0 +1,55 @@
---
title: Dynamically Attaching and Detaching Applications
layout: default
---

When containers may not know the network to which they will be attached, Weave enables you to dynamically attach and detach containers to and from a given network, even when a container is already running.

To illustrate, imagine a netcat service running in a container on `$HOST1` that needs to be attached to a subnet after it has started. To attach the container, run:

    host1$ C=$(docker run -e WEAVE_CIDR=none -dti ubuntu)
    host1$ weave attach $C
    10.2.1.3

Where,

 * `C=$(docker run -e WEAVE_CIDR=none -dti ubuntu)` starts a container without attaching it to the Weave network, and captures its ID in the shell variable `C`
 * `weave attach $C` attaches the container identified by `$C` to the Weave network
 * `10.2.1.3` is the allocated IP address output by `weave attach`; in this case it comes from the default subnet

>>**Note:** If you are using the Weave Docker API proxy, it will have modified `DOCKER_HOST` to point to the proxy, and therefore you have to pass `-e WEAVE_CIDR=none` to start a container that _doesn't_ get automatically attached to the Weave network for the purposes of this example.
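As an aside on the `WEAVE_CIDR` values used in these examples: a single value can name several `net:` entries separated by spaces. A small shell sketch (illustrative only, not Weave code) of how such a value splits into individual subnet specifications:

```shell
# Split a multi-subnet WEAVE_CIDR value (as used with `docker run`
# and `weave attach`) into its individual entries. Illustrative only.
WEAVE_CIDR="net:default net:10.2.2.0/24"
for entry in $WEAVE_CIDR; do
  echo "subnet spec: ${entry#net:}"   # strip the net: prefix
done
```

Each resulting entry (`default` or an explicit CIDR) requests one address allocation for the container.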
###Dynamically Detaching Containers

A container can be detached from a subnet using the `weave detach` command:

    host1$ weave detach $C
    10.2.1.3

You can also detach a container from one network and then attach it to a different one:

    host1$ weave detach net:default $C
    10.2.1.3
    host1$ weave attach net:10.2.2.0/24 $C
    10.2.2.3

or attach a container to multiple application networks, effectively sharing the same container between applications:

    host1$ weave attach net:default $C
    10.2.1.3
    host1$ weave attach net:10.2.2.0/24 $C
    10.2.2.3

Finally, multiple addresses can be attached or detached using a single command:

    host1$ weave attach net:default net:10.2.2.0/24 net:10.2.3.0/24 $C
    10.2.1.3 10.2.2.3 10.2.3.1
    host1$ weave detach net:default net:10.2.2.0/24 net:10.2.3.0/24 $C
    10.2.1.3 10.2.2.3 10.2.3.1

>>**Important!** Any addresses that were dynamically attached are not re-attached if the container restarts.

**See Also**

 * [Adding and Removing Hosts Dynamically](/site/using-weave/finding-adding-hosts-dynamically.md)

diff --git a/site/using-weave/finding-adding-hosts-dynamically.md b/site/using-weave/finding-adding-hosts-dynamically.md
new file mode 100644
index 0000000000..668b4eedab
--- /dev/null
+++ b/site/using-weave/finding-adding-hosts-dynamically.md
@@ -0,0 +1,60 @@
---
title: Adding and Removing Hosts Dynamically
layout: default
---

To add a host to an existing Weave network, simply launch
Weave on the host, supplying the address of at least
one existing host. Weave automatically discovers any other hosts in
the network and establishes connections with them if it
can (in order to avoid unnecessary multi-hop routing).

In some situations all existing Weave hosts may be
unreachable from the new host due to firewalls, etc.
However, it is still possible to add the new host,
provided that inverse connections, i.e.
from existing hosts to the new host, are available.

To accomplish this, launch Weave onto the new host
without supplying any additional addresses. Then, from one
of the existing hosts, run:

    host# weave connect $NEW_HOST

Other hosts in the Weave network will automatically attempt
to establish connections to the new host as well.

Conversely, you can instruct a peer to forget a
particular host that was specified to it via `weave launch` or
`weave connect` by running:

    host# weave forget $DECOMMISSIONED_HOST

This prevents the peer from trying to reconnect to that host
once connectivity to it is lost, and therefore can be used
to administratively remove any decommissioned peers
from the network.

Hosts may also be bulk-replaced: all existing hosts
are forgotten, and the new hosts are added:

    host# weave connect --replace $NEW_HOST1 $NEW_HOST2

For complete control over the peer topology, automatic
discovery can be disabled using the `--no-discovery`
option with `weave launch`.

If discovery is disabled, Weave only connects to the
addresses specified at launch time and with `weave connect`.

A list of all hosts that a peer has been asked to connect
to with `weave launch` and `weave connect`
can be obtained with:

    host# weave status targets

**See Also**

 * [Enabling Multi-Cloud, Multi-Hop Networking and Routing](/site/using-weave/multi-cloud-multi-hop.md)
 * [Stopping and Removing Peers](/site/ipam/stop-remove-peers-ipam.md)

diff --git a/site/using-weave/host-network-integration.md b/site/using-weave/host-network-integration.md
new file mode 100644
index 0000000000..1d38efc0c6
--- /dev/null
+++ b/site/using-weave/host-network-integration.md
@@ -0,0 +1,47 @@
---
title: Integrating a Host Network with Weave
layout: default
---

Weave application networks can be integrated with an external host network, establishing connectivity between the host and application containers running anywhere.
For example, returning to the [netcat example](/site/using-weave/intro-example.md), you’ve now decided that you need the application containers running on `$HOST2` to be accessible to other hosts and containers.

On `$HOST2` run:

    host2$ weave expose
    10.2.1.132

This command grants the host access to all of the application containers in the default subnet. An IP address is allocated by Weave especially for that purpose, and is returned after running `weave expose`.

Now you are able to ping the host:

    host2$ ping 10.2.1.132

And you can also ping the `a1` netcat application container residing on `$HOST1`:

    host2$ ping $(weave dns-lookup a1)

###Exposing Multiple Subnets

Multiple subnet addresses can be exposed or hidden with a single command:

    host2$ weave expose net:default net:10.2.2.0/24
    10.2.1.132 10.2.2.130
    host2$ weave hide net:default net:10.2.2.0/24
    10.2.1.132 10.2.2.130

###Adding Exposed Addresses to weavedns

Exposed addresses can also be added to `weavedns` by supplying a fully qualified domain name:

    host2$ weave expose -h exposed.weave.local
    10.2.1.132

**See Also**

 * [Deploying Applications To Weave Net](/site/using-weave/deploying-applications.md)
 * [Managing Services in Weave: Exporting, Importing, Binding and Routing](/site/using-weave/service-management.md)

diff --git a/site/using-weave/intro-example.md b/site/using-weave/intro-example.md
new file mode 100644
index 0000000000..4049726349
--- /dev/null
+++ b/site/using-weave/intro-example.md
@@ -0,0 +1,19 @@
---
title: Using Weave Net - Example
layout: default
---

Weave Net provides a simple-to-deploy networking solution for containerized apps. The example built upon throughout this `Using Weave Net` section shows how to manage a Weave container network using a sample application, which consists of two simple `netcat` services deployed to containers on two separate hosts, and is sometimes extended across three hosts.

The following topics are discussed:

 * [Deploying Applications to Weave](/site/using-weave/deploying-applications.md)
 * [How to Manually Specify IP Addresses and Subnets](/site/using-weave/manual-ip-address.md)
 * [Isolating Applications](/site/using-weave/application-isolation.md)
 * [Dynamically Attaching and Detaching Containers](/site/using-weave/dynamically-attach-containers.md)
 * [Integrating an External Host Network with Weave](/site/using-weave/host-network-integration.md)
 * [Exposing Services to the Outside](/site/using-weave/service-export.md)
 * [Managing Services in Weave: Importing, Binding and Routing](/site/using-weave/service-management.md)
 * [Enabling Multi-cloud Networking and Multi-hop Routing](/site/using-weave/multi-cloud-multi-hop.md)
 * [Adding Hosts Dynamically](/site/using-weave/finding-adding-hosts-dynamically.md)

diff --git a/site/using-weave/isolating-applications.md b/site/using-weave/isolating-applications.md
new file mode 100644
index 0000000000..7eeb51552b
--- /dev/null
+++ b/site/using-weave/isolating-applications.md
@@ -0,0 +1,76 @@
---
title: Isolating Applications on a Weave Network
layout: default
---

In some instances, you may need to run applications on the same container network while keeping the applications isolated from one another.
To do this, configure Weave's IP allocator to manage multiple subnets:

~~~bash
 host1$ weave launch --ipalloc-range 10.2.0.0/16 --ipalloc-default-subnet 10.2.1.0/24
 host1$ eval $(weave env)
 host2$ weave launch --ipalloc-range 10.2.0.0/16 --ipalloc-default-subnet 10.2.1.0/24 $HOST1
 host2$ eval $(weave env)
~~~

This delegates the entire 10.2.0.0/16 range to Weave, and instructs it to allocate from 10.2.1.0/24 within that range if no specific subnet is specified.

Now you can launch two containers onto the default subnet:

~~~bash
 host1$ docker run --name a1 -ti ubuntu
 host2$ docker run --name a2 -ti ubuntu
~~~

And then launch several more containers onto a different subnet:

~~~bash
 host1$ docker run -e WEAVE_CIDR=net:10.2.2.0/24 --name b1 -ti ubuntu
 host2$ docker run -e WEAVE_CIDR=net:10.2.2.0/24 --name b2 -ti ubuntu
~~~

Ping the containers to confirm that those on the second subnet can communicate with each other, but not with the containers of the first subnet:

~~~bash
 root@b1:/# ping -c 1 -q b2
 PING b2.weave.local (10.2.2.128) 56(84) bytes of data.
 --- b2.weave.local ping statistics ---
 1 packets transmitted, 1 received, 0% packet loss, time 0ms
 rtt min/avg/max/mdev = 1.338/1.338/1.338/0.000 ms
~~~

~~~bash
 root@b1:/# ping -c 1 -q a1
 PING a1.weave.local (10.2.1.2) 56(84) bytes of data.
 --- a1.weave.local ping statistics ---
 1 packets transmitted, 0 received, 100% packet loss, time 0ms
~~~

~~~bash
 root@b1:/# ping -c 1 -q a2
 PING a2.weave.local (10.2.1.130) 56(84) bytes of data.
 --- a2.weave.local ping statistics ---
 1 packets transmitted, 0 received, 100% packet loss, time 0ms
~~~

###Attaching Containers to Multiple Subnets and Isolating Applications

A container can also be attached to multiple subnets by using the following command line arguments with `docker run`:

~~~bash
 host1$ docker run -e WEAVE_CIDR="net:default net:10.2.2.0/24" -ti ubuntu
~~~

Where,

 * `net:default` is used to request the allocation of an address from the default subnet in addition to one from an explicitly specified range.

>>**Note:** By default, Docker permits communication between containers on the same host via their Docker-assigned IP addresses. For complete isolation between application containers, that feature _must_ be disabled by [setting `--icc=false`](https://docs.docker.com/engine/userguide/networking/default_network/container-communication/#communication-between-containers) in the Docker daemon configuration.

>>**Important!** Containers must not be allowed to capture and inject raw network packets. This can be prevented by starting the containers with the `--cap-drop net_raw` option.

**See Also**

 * [Dynamically Attaching and Detaching Applications](/site/using-weave/dynamically-attach-containers.md)
 * [Automatic Allocation Across Multiple Subnets](/site/ipam/allocation-multi-ipam.md)

diff --git a/site/using-weave/manual-ip-address.md b/site/using-weave/manual-ip-address.md
new file mode 100644
index 0000000000..084d9a387e
--- /dev/null
+++ b/site/using-weave/manual-ip-address.md
@@ -0,0 +1,64 @@
---
title: Manually Specifying the IP Address of a Container
layout: default
---

Containers are automatically allocated an IP address that is unique across the Weave network.
You can see which address was allocated by running [`weave ps`](/site/troubleshooting.md#weave-status):

    host1$ weave ps a1
    a7aee7233393 7a:44:d3:11:10:70 10.32.0.2/12

Weave detects when a container has exited and releases its allocated addresses so that they can be re-used by the network.

See [Automatic IP Address Management](/site/ipam/overview-init-ipam.md) and the explanation of [the basics of IP addressing](/site/ip-addresses/ip-addresses.md) for further details.

Instead of allowing Weave to allocate IP addresses automatically (using IPAM), there may be instances where you need to control a particular container or a cluster by setting an IP address for it.

You can specify an IP address and a network explicitly, using Classless Inter-Domain Routing or [CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing).

For example, on the hosts `$HOST1` and `$HOST2`, you could run your containers with explicit CIDR addresses as follows:

On `$HOST1`:

~~~bash
host1$ docker run -e WEAVE_CIDR=10.2.1.1/24 -ti ubuntu
root@7ca0f6ecf59f:/#
~~~

And on `$HOST2`:

~~~bash
host2$ docker run -e WEAVE_CIDR=10.2.1.2/24 -ti ubuntu
root@04c4831fafd3:/#
~~~

Then test that the container on `$HOST2` can be reached:

~~~bash
root@7ca0f6ecf59f:/# ping -c 1 -q 10.2.1.2
PING 10.2.1.2 (10.2.1.2): 48 data bytes
--- 10.2.1.2 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max/stddev = 1.048/1.048/1.048/0.000 ms
~~~

And do the same in the container on `$HOST1`:

~~~bash
root@04c4831fafd3:/# ping -c 1 -q 10.2.1.1
PING 10.2.1.1 (10.2.1.1): 48 data bytes
--- 10.2.1.1 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max/stddev = 1.034/1.034/1.034/0.000 ms
~~~

The IP addresses and netmasks can be set to anything, but ensure they don’t conflict with any of the IP ranges in use on the hosts, or with IP addresses used by any external services to which the hosts or containers may need to connect.

Individual IP addresses given to containers must, of course, be unique. If you pick an address that the automatic allocator has already assigned, a warning appears.

**See Also**

 * [Managing Services: Exporting, Importing, Binding and Routing](/site/using-weave/service-management.md)
 * [Configuring Weave to Explicitly Use an IP Range](/site/ip-addresses/configuring-weave.md)
 * [Automatic IP Address Management](/site/ipam/overview-init-ipam.md)

diff --git a/site/using-weave/multi-cloud-multi-hop.md b/site/using-weave/multi-cloud-multi-hop.md
new file mode 100644
index 0000000000..5702bc0d41
--- /dev/null
+++ b/site/using-weave/multi-cloud-multi-hop.md
@@ -0,0 +1,44 @@
---
title: Enabling Multi-Cloud, Multi-Hop Networking and Routing
layout: default
---

###Enabling Multi-Cloud Networking

Before multi-cloud networking can be enabled, you must
configure the network to allow connections through Weave's
control and data ports on the Docker hosts. The
control port defaults to TCP 6783, and the data ports to
UDP 6783/6784.

To override Weave’s default ports, specify a port using
the `WEAVE_PORT` setting. For example, if `WEAVE_PORT` is
set to `9000`, then Weave uses TCP 9000 for its control
port and UDP 9000/9001 for its data ports.

>>**Important!** It is recommended that all peers be given
the same setting.

###Multi-Hop Routing

A network of containers across more than two hosts can be
established even when there is only partial connectivity
between the hosts.

Weave routes traffic between containers as long as
there is at least one *path* of connected hosts
between them.

For example, if a Docker host in a local data center can
connect to hosts in GCE and EC2, but the latter two cannot
connect to each other, containers in the latter two can
still communicate, and Weave in this instance routes the
traffic via the local data center.
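The port scheme above can be sketched in shell (illustrative arithmetic only; `WEAVE_PORT` is the real setting name, but Weave itself derives the ports internally):

```shell
# Given a WEAVE_PORT override, the control port is that TCP port,
# and the data ports are the same UDP port plus the next one up.
WEAVE_PORT=9000
echo "control: TCP $WEAVE_PORT"
echo "data: UDP $WEAVE_PORT/$(( WEAVE_PORT + 1 ))"
```

With `WEAVE_PORT=9000` this reports TCP 9000 for control and UDP 9000/9001 for data, matching the description above.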
**See Also**

 * [Finding and Adding Hosts Dynamically](/site/using-weave/finding-adding-hosts-dynamically.md)

diff --git a/site/using-weave/security-untrusted-networks.md b/site/using-weave/security-untrusted-networks.md
new file mode 100644
index 0000000000..31cd65ebc2
--- /dev/null
+++ b/site/using-weave/security-untrusted-networks.md
@@ -0,0 +1,48 @@
---
title: Securing Connections Across Untrusted Networks
layout: default
---

To connect containers across untrusted networks, Weave peers can be instructed to encrypt traffic by supplying a `--password` option or by using the `WEAVE_PASSWORD` environment variable during `weave launch`.

For example:

    host1$ weave launch --password wfvAwt7sj

or

    host1$ export WEAVE_PASSWORD=wfvAwt7sj
    host1$ weave launch

>**Note:** The command line option takes precedence over the environment variable.

> To avoid leaking your password via the kernel process table or your
> shell history, we recommend you store it in a file and capture it
> in a shell variable prior to launching Weave: `export
> WEAVE_PASSWORD=$(cat /path/to/password-file)`

To guard against dictionary attacks, the password needs to be reasonably strong; at least 50 bits of entropy is recommended. An easy way to generate a random password that satisfies this requirement is:

    < /dev/urandom tr -dc A-Za-z0-9 | head -c9 ; echo

The same password must be specified for all Weave peers; by default, both control and data plane traffic will then use authenticated encryption.

Fast datapath does not support encryption. If you supply a password at `weave launch`, the router falls back to the slower `sleeve` mode, which does support encryption.

If some of your peers are co-located in a trusted network (for example, within the boundary of your own data center), you can use the `--trusted-subnets` argument to `weave launch` to selectively disable data plane encryption as an optimization.
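On the entropy recommendation above: a nine-character password drawn uniformly from the 62-symbol `A-Za-z0-9` alphabet carries roughly 9 × log2(62) ≈ 53.6 bits, which clears the 50-bit guideline. A quick check (a sketch assuming only POSIX `awk`):

```shell
# Entropy in bits of an n-character password drawn uniformly from a
# 62-symbol alphabet: n * log2(62).
awk 'BEGIN { printf "%.1f bits\n", 9 * log(62) / log(2) }'   # 53.6 bits
```

Eight characters (about 47.6 bits) would fall short, which is why the generator above takes nine.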
Both peers must consider the other to be in a trusted subnet for this to take place; if they do not agree, Weave [falls back to a slower method](/site/fastdp/using-fastdp.md) for transporting data between peers, since fast datapath does not support encryption.

Be aware that:

 * Containers are able to access the router REST API if fast datapath is disabled. You can prevent this by setting
   [`--icc=false`](https://docs.docker.com/engine/userguide/networking/default_network/container-communication/#communication-between-containers) in the Docker daemon configuration.
 * Containers are able to access the router control and data plane
   ports, but this can be mitigated by enabling encryption.

**See Also**

 * [Using Encryption With Weave](/site/encryption/crypto-overview.md)

diff --git a/site/using-weave/service-export.md b/site/using-weave/service-export.md
new file mode 100644
index 0000000000..feeb114160
--- /dev/null
+++ b/site/using-weave/service-export.md
@@ -0,0 +1,39 @@
---
title: Exposing Services to the Outside World
layout: default
---

Services running in containers on a Weave network can be made
accessible to the outside world (and, more generally, to other networks)
from any Weave host, irrespective of where the service containers are
located.

Returning to the netcat example service described in [Deploying Applications](/site/using-weave/deploying-applications.md),
you can expose the netcat service running on `$HOST1` and make it accessible to the outside world via `$HOST2`.

First, expose the application network to `$HOST2`, as explained in [Integrating a Host Network with Weave](/site/using-weave/host-network-integration.md):

    host2$ weave expose
    10.2.1.132

Then add a NAT rule that routes the traffic from the outside world to the destination container service:
    host2$ iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 2211 \
            -j DNAT --to-destination $(weave dns-lookup a1):4422

In this example, it is assumed that the "outside world" is connecting to `$HOST2` via `eth0`. TCP traffic to port 2211 on the external IPs is routed to the 'nc' service running on port 4422 in the container `a1`.

With the above in place, you can connect to the 'nc' service from anywhere using:

    echo 'Hello, world.' | nc $HOST2 2211

>>**Note:** Due to the way routing is handled in the Linux kernel, this won't work when run *on* `$HOST2`.

NAT rules similar to the above can be used to expose services not just to the outside world, but also to other, internal, networks.

**See Also**

 * [Using Weave Net](/site/using-weave/intro-example.md)
 * [Managing Services in Weave: Exporting, Importing, Binding and Routing](/site/using-weave/service-management.md)

diff --git a/site/using-weave/service-management.md b/site/using-weave/service-management.md
new file mode 100644
index 0000000000..3de56d2875
--- /dev/null
+++ b/site/using-weave/service-management.md
@@ -0,0 +1,116 @@
---
title: Managing Services in Weave - Exporting, Importing, Binding and Routing
layout: default
---

This section contains the following topics:

 * [Exporting Services](#exporting)
 * [Importing Services](#importing)
 * [Binding Services](#binding)
 * [Routing Services](#routing)
 * [Dynamically Changing Service Locations](#change-location)

### Exporting Services

Services running in containers on a Weave network can be made accessible to the outside world (and, more generally, to other networks) from any Weave host, regardless of where the service containers are located.

Turning back to the netcat example: your netcat service, which is running in a container on `$HOST1`, needs to be accessible to the outside world via `$HOST2`.
+ +To do this, expose the application network to `$HOST2`, as explained in [Host Network Integration](/site/using-weave/host-network-integration.md) by running: + + host2$ weave expose + 10.2.1.132 + +Then add a NAT rule to route from the outside world to the destination container service. + + host2$ iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 2211 \ + -j DNAT --to-destination $(weave dns-lookup a1):4422 + +Here it is assumed that the "outside world" is connecting to `$HOST2` via 'eth0'. The TCP traffic to port 2211 on the external IPs will be routed to the 'nc' service, running in the a1 container and listening on port 4422. + +With the above in place, you are ready to connect to the 'nc' service from anywhere using: + + echo 'Hello, world.' | nc $HOST2 2211 + +>>**Note:** Due to the way routing is handled in the Linux kernel, this won't work when run *on* `$HOST2`. + +Similar NAT rules to the above can be used to expose services not just to the outside world but also to other, internal, networks. + + + +###Importing Services + +Applications running in containers on a Weave network can be given access to services that are only reachable from certain +Weave hosts, regardless of where the actual application containers are located. + +Using the netcat service example described in +[Deploying Applications](/site/using-weave/deploying-applications.md), and expanding upon it, you now decide to add a third containerized netcat service. This additional netcat service runs on `$HOST3`, and listens on port 2211, but it is not on the Weave network. + +An additional caveat is that `$HOST3` can only be reached from `$HOST1`, which is not accessible via `$HOST2`. Nonetheless, you still need to make the `$HOST3` service available to an application that is running in a container on `$HOST2`. 
+ +To satisfy this scenario, first expose the [application network to the host](/site/using-weave/host-network-integration.md) by running the following on `$HOST1`: + + host1$ weave expose -h host1.weave.local + 10.2.1.3 + +Then add a NAT rule that routes from the above IP to the destination service. + + host1$ iptables -t nat -A PREROUTING -p tcp -d 10.2.1.3 --dport 3322 \ + -j DNAT --to-destination $HOST3:2211 + +This allows any application container to reach the service by connecting to 10.2.1.3:3322. So if `$HOST3` is running a +netcat service on port 2211: + + host3$ nc -lk -p 2211 + +You can now connect to it from the application container running on `$HOST2` using: + + root@a2:/# echo 'Hello, world.' | nc host1 3322 + +Note that you should be able to run this command from any application container. + +###Binding Services + +Importing a service provides a degree of indirection that allows late and dynamic binding, similar to what can be achieved with a proxy. + +Referring back to the netcat services example that is running on three hosts (as explained in Importing Services above), the application containers are completely unaware that the service they are accessing at `10.2.1.3:3322` actually resides on `$HOST3:2211`. + +You can point application containers to another service location by changing the above NAT rule, without altering the applications. + +###Routing Services + +You can combine the service export and service import features to establish connectivity between applications and services residing on disjoint networks, even if those networks are separated by firewalls and have overlapping IP ranges. + +Each network imports its services into Weave, while at the same time exporting from Weave any services that are required by its applications. In this scenario, there are no application containers (although there could be). Weave is acting as an address translation and routing facility, and uses the Weave container network as an intermediary. 
+ +Expanding on the [netcat example](/site/using-weave/deploying-applications.md), you can also import an additional netcat service running on `$HOST3` into Weave via `$HOST1`. + +Begin importing the service onto `$HOST2` by first exposing the application network: + + host2$ weave expose + 10.2.1.3 + +Then add a NAT rule which routes traffic from the `$HOST2` network (i.e. anything which can connect to `$HOST2`) to the service endpoint on the Weave network: + + host2$ iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 4433 \ + -j DNAT --to-destination 10.2.1.3:3322 + +Now any host on the same network as `$HOST2` is able to access the service: + + echo 'Hello, world.' | nc $HOST2 4433 + + +###Dynamically Changing Service Locations + +Furthermore, as explained in Binding Services, service locations can be dynamically altered without having to change any of the applications that access them. + +For example, you can move the netcat service to `$HOST4:2211` and it will retain its 10.2.1.3:3322 endpoint on the Weave network. + + +**See Also** + + * [Adding and Removing Hosts Dynamically](/site/using-weave/finding-adding-hosts-dynamically.md) + * [Enabling Multi-Cloud, Multi-Hop Networking and Routing](/site/using-weave/multi-cloud-multi-hop.md) + diff --git a/site/weave-docker-api/automatic-discovery-proxy.md b/site/weave-docker-api/automatic-discovery-proxy.md new file mode 100644 index 0000000000..cb6fc49603 --- /dev/null +++ b/site/weave-docker-api/automatic-discovery-proxy.md @@ -0,0 +1,60 @@ +--- +title: Using Automatic Discovery With the Weave Proxy +layout: default +--- + +Containers launched via the proxy use [weavedns](/site/weavedns/overview-using-weavedns.md) +automatically if it is running when they are started - +see the [weavedns usage](/site/weavedns/overview-using-weavedns.md#usage) section for an in depth +explanation of the behaviour and how to control it. + +Typically, the proxy passes on container names as-is to weavedns +for registration. 
However, there are situations in which the final container +name may be out of your control (for example, if you are using Docker orchestrators which +append control/namespacing identifiers to the original container names). + +For those situations, the proxy provides the following flags: + + * `--hostname-from-label <labelkey>` + * `--hostname-match <regexp>` + * `--hostname-replacement <replacement>` + +When launching a container, the hostname is initialized to the +value of the container label using key `<labelkey>`. If no `<labelkey>` was +provided, then the container name is used. + +Additionally, the hostname is matched against the regular expression `<regexp>` and, based on that match, +`<replacement>` is used to obtain the final hostname, which is then handed over to weaveDNS for registration. + +For example, you can launch the proxy using all three flags, as follows: + + host1$ weave launch-router && weave launch-proxy --hostname-from-label hostname-label --hostname-match '^aws-[0-9]+-(.*)$' --hostname-replacement 'my-app-$1' + host1$ eval $(weave env) + +>>**Note:** regexp substitution groups must be prepended with a dollar sign +(for example, `$1`). For further details on the regular expression syntax see +[Google's re2 documentation](https://github.com/google/re2/wiki/Syntax). + +After launching the Weave proxy with these flags, running a container named `aws-12798186823-foo` without labels results in +weavedns registering the hostname `my-app-foo` and not `aws-12798186823-foo`. + + host1$ docker run -ti --name=aws-12798186823-foo ubuntu ping my-app-foo + PING my-app-foo.weave.local (10.32.0.2) 56(84) bytes of data. + 64 bytes from my-app-foo.weave.local (10.32.0.2): icmp_seq=1 ttl=64 time=0.027 ms + 64 bytes from my-app-foo.weave.local (10.32.0.2): icmp_seq=2 ttl=64 time=0.067 ms + +Also, running a container named `foo` with the label +`hostname-label=aws-12798186823-foo` leads to the same hostname registration. 
+ + host1$ docker run -ti --name=foo --label=hostname-label=aws-12798186823-foo ubuntu ping my-app-foo + PING my-app-foo.weave.local (10.32.0.2) 56(84) bytes of data. + 64 bytes from my-app-foo.weave.local (10.32.0.2): icmp_seq=1 ttl=64 time=0.031 ms + 64 bytes from my-app-foo.weave.local (10.32.0.2): icmp_seq=2 ttl=64 time=0.042 ms + +This is because, as explained above, when `--hostname-from-label` is provided +to the proxy, the specified label takes precedence over the container's name. + +**See Also** + + * [Name resolution via `/etc/hosts`](/site/weave-docker-api/name-resolution-proxy.md) + * [How Weave Finds Containers](/site/weavedns/how-works-weavedns.md) \ No newline at end of file diff --git a/site/weave-docker-api/ipam-proxy.md b/site/weave-docker-api/ipam-proxy.md new file mode 100644 index 0000000000..4c3d478fd9 --- /dev/null +++ b/site/weave-docker-api/ipam-proxy.md @@ -0,0 +1,42 @@ +--- +title: Automatic IP Allocation and the Weave Proxy +layout: default +--- + + +If [automatic IP address allocation](/site/ipam/overview-init-ipam.md) is enabled in Weave (by default IPAM is enabled), +then containers started via the proxy are automatically assigned an IP address, *without having to specify any +special environment variables or any other options*. + + host1$ docker run -ti ubuntu + +To use a specific subnet, pass a `WEAVE_CIDR` to the container, for example: + + host1$ docker run -ti -e WEAVE_CIDR=net:10.32.2.0/24 ubuntu + +To start a container without connecting it to the Weave network, pass +`WEAVE_CIDR=none`, for example: + + host1$ docker run -ti -e WEAVE_CIDR=none ubuntu + + +###Disabling Automatic IP Address Allocation + +If you do not want an IP to be assigned by default, the proxy needs to +be passed the `--no-default-ipalloc` flag, for example: + + host1$ weave launch-proxy --no-default-ipalloc + +In this configuration, containers with no `WEAVE_CIDR` environment +variable are not connected to the Weave network. 
+ +Containers started with a `WEAVE_CIDR` environment variable are handled as before. +To automatically assign an address in this mode, start the +container with a blank `WEAVE_CIDR`, for example: + + host1$ docker run -ti -e WEAVE_CIDR="" ubuntu + +**See Also** + + * [Address Allocation with IP Address Management (IPAM)](/site/ipam/overview-init-ipam.md) + * [How to Manually Specify IP Addresses and Subnets](/site/using-weave/manual-ip-address.md) \ No newline at end of file diff --git a/site/weave-docker-api/launching-without-proxy.md b/site/weave-docker-api/launching-without-proxy.md new file mode 100644 index 0000000000..ef37c6fa0f --- /dev/null +++ b/site/weave-docker-api/launching-without-proxy.md @@ -0,0 +1,46 @@ +--- +title: Launching Containers With Weave Run (without the Proxy) +layout: default +--- + + +If you don't want to use the proxy, you can also launch +containers onto the Weave network using `weave run`: + + $ weave run -ti ubuntu + +The arguments after `run` are passed through to `docker run`. Therefore you +can freely specify whatever Docker options you need. + +Once the container is started, `weave run` attaches it to the Weave network, and in +this example, it obtains an automatically allocated IP. + +You can specify IP addresses manually instead: + + $ weave run 10.2.1.1/24 -ti ubuntu + +`weave run` rewrites `/etc/hosts` in the same way +[the proxy does](/site/weave-docker-api/name-resolution-proxy.md). If you need to keep +the original file, specify `--no-rewrite-hosts` when running +the container: + + $ weave run --no-rewrite-hosts 10.2.1.1/24 -ti ubuntu + +There are some limitations to starting containers using `weave run`: + +* containers are always started in the background, i.e. the equivalent + of always supplying the `-d` option to `docker run` +* the `--rm` option to `docker run`, for automatically removing containers + after they stop, is not available +* the Weave network interface may not be available immediately on + container startup. 
+ +Finally, there is a `weave start` command which starts existing +containers using `docker start` and attaches them to the Weave network. + + +**See Also** + + * [Setting Up The Weave Docker API Proxy](/site/weave-docker-api/set-up-proxy.md) + * [Securing Docker Communications With TLS](/site/weave-docker-api/securing-proxy.md) + * [Name Resolution via `/etc/hosts`](/site/weave-docker-api/name-resolution-proxy.md) diff --git a/site/weave-docker-api/name-resolution-proxy.md b/site/weave-docker-api/name-resolution-proxy.md new file mode 100644 index 0000000000..0d1bc42a50 --- /dev/null +++ b/site/weave-docker-api/name-resolution-proxy.md @@ -0,0 +1,34 @@ +--- +title: Name resolution via `/etc/hosts` +layout: default +--- + + +When starting Weave-enabled containers, the proxy automatically +replaces the container's `/etc/hosts` file, and disables Docker's control +over it. The new file contains an entry for the container's hostname +and Weave IP address, as well as additional entries that have been +specified using the `--add-host` parameters. + +This ensures that: + +- name resolution of the container's hostname, for example, via `hostname -i`, +returns the Weave IP address. This is required for many cluster-aware +applications to work. +- unqualified names get resolved via DNS, for example, typically via weavedns +to Weave IP addresses. This is required so that in a typical setup +one can simply run `ping <hostname>`, i.e. without having to +specify a `.weave.local` suffix. + +If you prefer to keep `/etc/hosts` under Docker's control (for +example, because you need the hostname to resolve to the Docker-assigned +IP instead of the Weave IP, or you require name resolution for +Docker-managed networks), the proxy must be launched using the +`--no-rewrite-hosts` flag. 
+ + host1$ weave launch-router && weave launch-proxy --no-rewrite-hosts + +**See Also** + + * [Using Automatic Discovery With the Weave Proxy](/site/weave-docker-api/automatic-discovery-proxy.md) + \ No newline at end of file diff --git a/site/weave-docker-api/securing-proxy.md b/site/weave-docker-api/securing-proxy.md new file mode 100644 index 0000000000..542164e36a --- /dev/null +++ b/site/weave-docker-api/securing-proxy.md @@ -0,0 +1,60 @@ +--- +title: Securing the Docker Communications With TLS +layout: default +--- + +If you are [connecting to the Docker daemon with +TLS](https://docs.docker.com/articles/https/), you most likely want +to do the same when connecting to the proxy. The proxy +automatically detects the Docker daemon's TLS configuration, and +attempts to duplicate it. + +In the standard auto-detection case you can launch a TLS-enabled proxy as follows: + + host1$ weave launch-proxy + +To disable auto-detection of TLS configuration, you can either pass +the `--no-detect-tls` flag, or you can manually configure the proxy's TLS using +the same TLS-related command-line flags supplied to the Docker +daemon. + +For example, if you generated your certificates and keys +into the Docker host's `/tls` directory, launch the proxy using: + + host1$ weave launch-proxy --tlsverify --tlscacert=/tls/ca.pem \ + --tlscert=/tls/server-cert.pem --tlskey=/tls/server-key.pem + +The paths to your certificates and key must be provided as absolute +paths as they exist on the Docker host. + +Because the proxy connects to the Docker daemon at +`unix:///var/run/docker.sock`, you must ensure that the daemon is actually +listening there. To ensure this, pass the `-H unix:///var/run/docker.sock` option when starting the Docker daemon, +in addition to the `-H` options for configuring the TCP listener. See +[the Docker documentation](https://docs.docker.com/articles/basics/#bind-docker-to-another-host-port-or-a-unix-socket) +for an example. 
+ +With the proxy running over TLS, you can configure the Docker +client to use TLS on a per-invocation basis by running: + + $ docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem \ + --tlskey=key.pem -H=tcp://host1:12375 version + ... + +or, [by default](https://docs.docker.com/articles/https/#secure-by-default), using: + + $ mkdir -pv ~/.docker + $ cp -v {ca,cert,key}.pem ~/.docker + $ eval $(weave env) + $ export DOCKER_TLS_VERIFY=1 + $ docker version + ... + +This is exactly the same configuration used when connecting to the +Docker daemon directly, except that the specified port is the Weave +proxy port. + + +**See Also** + + * [Setting Up The Weave Docker API Proxy](/site/weave-docker-api/set-up-proxy.md) diff --git a/site/weave-docker-api/set-up-proxy.md b/site/weave-docker-api/set-up-proxy.md new file mode 100644 index 0000000000..2431a56fc9 --- /dev/null +++ b/site/weave-docker-api/set-up-proxy.md @@ -0,0 +1,78 @@ +--- +title: Setting Up The Weave Docker API Proxy +layout: default +--- + + +The Docker API proxy automatically attaches containers to the Weave +network when they are started using the ordinary Docker +[command-line interface](https://docs.docker.com/reference/commandline/cli/) +or the [remote API](https://docs.docker.com/reference/api/docker_remote_api/), +instead of `weave run`. + + +###Setting Up The Weave Docker API Proxy + +The proxy sits between the Docker client (command line or API) and the +Docker daemon, and intercepts the communication between the two. You can +start it simultaneously with the router and weavedns via `launch`: + + host1$ weave launch + +or independently via `launch-proxy`: + + host1$ weave launch-router && weave launch-proxy + +The first form is more convenient, but only `launch-proxy` can be passed configuration arguments. +Therefore, if you need to modify the default behaviour of the proxy, you must use `launch-proxy`. 
+ +By default, the proxy decides where to listen based on how the +launching client connects to Docker. If the launching client connects +over a UNIX socket, the proxy listens on `/var/run/weave/weave.sock`. If +the launching client connects over TCP, the proxy listens on port +12375, on all network interfaces. This can be adjusted using the `-H` +argument, for example: + + host1$ weave launch-proxy -H tcp://127.0.0.1:9999 + +If no TLS or listening interfaces are set, TLS is autoconfigured +based on the Docker daemon's settings, and the listening interfaces are +autoconfigured based on your Docker client's settings. + +Multiple `-H` arguments can be specified. If you are working with a +remote Docker daemon, then any firewalls in between need to be +configured to permit access to the proxy port. + +All Docker commands can be run via the proxy, so it is safe to adjust +your `DOCKER_HOST` to point at the proxy. Weave provides a convenient +command for this: + + host1$ eval $(weave env) + host1$ docker ps + ... + +The prior settings can be restored with + + host1$ eval $(weave env --restore) + +Alternatively, the proxy host can be set on a per-command basis with + + host1$ docker $(weave config) ps + +The proxy can be stopped independently with + + host1$ weave stop-proxy + +or in conjunction with the router and weaveDNS via `stop`. + +If you set your `DOCKER_HOST` to point at the proxy, you should revert +to the original settings prior to stopping the proxy. 
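
The listen-address rules above can be summarised in a short sketch (Python; the function name and return values are illustrative, not the proxy's actual implementation):

```python
def proxy_listen_addrs(h_args, docker_host=None):
    """Illustrative sketch of how the proxy picks its listen addresses.

    h_args: addresses supplied via -H flags; docker_host: how the
    launching client connected to Docker (None is treated like a
    UNIX socket connection)."""
    if h_args:                                 # explicit -H flags win
        return list(h_args)
    if docker_host is None or docker_host.startswith("unix://"):
        return ["unix:///var/run/weave/weave.sock"]
    return ["tcp://0.0.0.0:12375"]             # TCP: port 12375, all interfaces
```

For instance, launching with `-H tcp://127.0.0.1:9999` would override both defaults.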
+ + +**See Also** + + * [Using The Weave Docker API Proxy](/site/weave-docker-api/using-proxy.md) + * [Securing Docker Communications With TLS](securing-proxy.md) + * [Launching Containers With Weave Run (without the Proxy)](/site/weave-docker-api/launching-without-proxy.md) + + diff --git a/site/weave-docker-api/troubleshooting-proxy.md b/site/weave-docker-api/troubleshooting-proxy.md new file mode 100644 index 0000000000..b224244bc0 --- /dev/null +++ b/site/weave-docker-api/troubleshooting-proxy.md @@ -0,0 +1,25 @@ +--- +title: Troubleshooting +layout: default +--- + +The command + + weave status + +reports on the current status of various weave components, including +the proxy, if it is running: + +```` +... +weave proxy is running +```` + +Information on the operation of the proxy can be obtained from the +weaveproxy container logs using: + + docker logs weaveproxy + +**See Also** + + * [Troubleshooting Weave](/site/troubleshooting.md) \ No newline at end of file diff --git a/site/weave-docker-api/using-proxy.md b/site/weave-docker-api/using-proxy.md new file mode 100644 index 0000000000..1c95c066ff --- /dev/null +++ b/site/weave-docker-api/using-proxy.md @@ -0,0 +1,62 @@ +--- +title: Using The Weave Docker API Proxy +layout: default +--- + + +When containers are created via the Weave proxy, their entrypoint is +modified to wait for the Weave network interface to become +available. + +When they are started via the Weave proxy, containers are +[automatically assigned IP addresses](/site/ipam/overview-init-ipam.md) and connected to the +Weave network. 
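
The entrypoint modification mentioned above boils down to polling for the Weave interface before handing control to the container's original command. A minimal sketch, assuming a hypothetical `is_present` check (not Weave's actual entrypoint code):

```python
import time

def wait_for_interface(is_present, timeout=10.0, interval=0.1,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll until is_present() reports the interface, or give up.

    Purely illustrative: timeout and interval defaults are assumptions."""
    deadline = clock() + timeout
    while True:
        if is_present():
            return True
        if clock() >= deadline:
            return False
        sleep(interval)
```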
+ +###Creating and Starting Containers with the Weave Proxy + +To create and start a container via the Weave proxy run: + + host1$ docker run -ti ubuntu + +or, equivalently run: + + host1$ docker create -ti ubuntu + 5ef831df61d50a1a49272357155a976595e7268e590f0a2c75693337b14e1382 + host1$ docker start 5ef831df61d50a1a49272357155a976595e7268e590f0a2c75693337b14e1382 + +Specific IP addresses and networks can be supplied in the `WEAVE_CIDR` +environment variable, for example: + + host1$ docker run -e WEAVE_CIDR=10.2.1.1/24 -ti ubuntu + +Multiple IP addresses and networks can be supplied in the `WEAVE_CIDR` +variable by space-separating them, as in +`WEAVE_CIDR="10.2.1.1/24 10.2.2.1/24"`. + + +###Returning Weave Network Settings Instead of Docker Network Settings + +The Docker NetworkSettings (including IP address, MacAddress, and +IPPrefixLen), are still returned when `docker inspect` is run. If you want +`docker inspect` to return the Weave NetworkSettings instead, then the +proxy must be launched using the `--rewrite-inspect` flag. + +This command substitutes the Weave Network settings when the container has a +Weave IP. If a container has more than one Weave IP, then the inspect call +only includes one of them. + + host1$ weave launch-router && weave launch-proxy --rewrite-inspect + + +###Multicast Traffic and Launching the Weave Proxy + +By default, multicast traffic is routed over the Weave network. +To turn this off, e.g. because you want to configure your own multicast +route, add the `--no-multicast-route` flag to `weave launch-proxy`. 
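
Taken together, the `WEAVE_CIDR` conventions on this page can be summarised in a small sketch (the helper and its return conventions are hypothetical, not the proxy's code):

```python
def interpret_weave_cidr(value):
    """Rough sketch of WEAVE_CIDR handling with default IPAM enabled.

    Returns None for "no Weave attachment", an empty list for
    "attach with automatic allocation", or the requested CIDRs."""
    if value is None:
        return []             # variable unset: automatic allocation
    if value == "none":
        return None           # explicitly detached from the Weave network
    return value.split()      # one or more space-separated addresses/subnets
```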
+ + +**See Also** + + * [Setting Up The Weave Docker API Proxy](/site/weave-docker-api/set-up-proxy.md) + * [Securing Docker Communications With TLS](securing-proxy.md) + * [Launching Containers With Weave Run (without the Proxy)](/site/weave-docker-api/launching-without-proxy.md) \ No newline at end of file diff --git a/site/weavedns-design.md b/site/weavedns-design.md new file mode 100644 index 0000000000..9059505c7f --- /dev/null +++ b/site/weavedns-design.md @@ -0,0 +1,83 @@ +--- +title: WeaveDNS (service discovery) Design Notes +layout: default +--- + +# WeaveDNS (service discovery) Design Notes + +The model is that each host has a service that is notified of +hostnames and weave addresses for containers on the host. Like IPAM, +this service is embedded within the router. It binds to +the host bridge to answer DNS queries from local containers; for +anything it can't answer, it uses the information in the host's +/etc/resolv.conf to query a 'fallback' server. + +The service consists of a DNS server, which answers all DNS queries +from containers, and an in-memory database of hostnames and IPs. The +database on each node contains a complete copy of the hostnames and IPs +for every container in the cluster. + +For hostname queries in the local domain (default weave.local), the DNS +server will consult the in-memory database. For reverse queries, we +first consult the local database, and if not found we query the +upstream server. For all other queries, we consult the upstream +server. + +Updates to the in-memory database are broadcast to other DNS servers +within the cluster. The in-memory database only contains entries from +connected DNS servers; if a DNS server becomes partitioned from the +cluster, entries belonging to that server are removed from each node in +the cluster. When the partitioned DNS server reconnects, the entries +are re-broadcast around the cluster. 
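
The resolution rules described above amount to a small dispatch on the query name. A sketch, assuming a simple dict-backed database (illustrative only):

```python
def route_query(qname, db, local_domain="weave.local"):
    """Sketch of weavedns query routing: local-domain names are answered
    from the in-memory database, reverse queries try the database first
    and fall back to the upstream server, everything else goes upstream."""
    name = qname.rstrip(".")
    if name.endswith(local_domain):
        return ("local", db.get(name, []))
    if name.endswith("in-addr.arpa"):          # reverse (PTR) query
        hit = db.get(name)
        return ("local", hit) if hit else ("upstream", None)
    return ("upstream", None)                  # everything else
```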
+ +The DNS server also listens to the Docker event stream, and removes +entries for containers when they die. Entries removed in this way are +tombstoned, and the tombstone is lazily broadcast around the cluster. +After a short timeout the tombstones are independently removed from +each host. + + +## DNS server API + +The DNS server accepts HTTP requests on the following URL (patterns) +and methods: + +`PUT /name/<identifier>/<ip>` + +Put a record for an IP, bound to a host-scoped identifier (e.g., a +container ID), in the DNS database. The request body must contain +a `fqdn=foo.weave.local` key-value pair. + +`DELETE /name/<identifier>/<ip>` + +Remove a specific record for an IP and host-scoped identifier. The request +body can optionally contain a `fqdn=foo.weave.local` key-value pair. + +`DELETE /name/<identifier>` + +Remove all records for the host-scoped identifier. + +`GET /name/<fqdn>` + +Returns a list of all IPs (in JSON format) for the given FQDN. + +## DNS updater + +The updater component uses the Docker remote API to monitor containers +coming and going, and tells the DNS server to update its records via +its HTTP interface. It does not need to be attached to the weave +network. + +The updater starts by subscribing to the events, and getting a list of +the current containers. Any containers given a domain ending with +".weave" are considered for inclusion in the name database. + +When it sees a container start or stop, the updater checks the weave +network attachment of the container, and updates the DNS server. + +> How does it check the network attachment from within a container? + +> Will it need to delay slightly so that `attach` has a chance to run? +> Perhaps it could put containers on a watch list when it's noticed +> them. 
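
The tombstoning behaviour described above can be sketched as a small garbage-collection pass (the record layout and the 30-second timeout are assumptions for illustration, not the actual data structures):

```python
def gc_tombstones(entries, now, timeout=30):
    """Drop entries whose tombstone is older than `timeout` seconds.

    `entries` maps name -> {"ip": ..., "tombstone": deletion_time or None};
    live entries (tombstone is None) are always kept."""
    return {name: rec for name, rec in entries.items()
            if rec.get("tombstone") is None or now - rec["tombstone"] < timeout}
```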
+ diff --git a/site/weavedns.md b/site/weavedns.md deleted file mode 100644 index 12a5fe486a..0000000000 --- a/site/weavedns.md +++ /dev/null @@ -1,293 +0,0 @@ ---- -title: Automatic Discovery with WeaveDNS -layout: default ---- - -# Automatic Discovery with WeaveDNS - -The Weave DNS server answers name queries in a Weave network. This -provides a simple way for containers to find each other: just give -them hostnames and tell other containers to connect to those names. -Unlike Docker 'links', this requires no code changes and works across -hosts. - -* [Using weaveDNS](#usage) -* [How it works](#how-it-works) -* [Load balancing](#load-balancing) -* [Fault resilience](#fault-resilience) -* [Adding and removing extra DNS entries](#add-remove) -* [Resolve weaveDNS entries from host](#resolve-weavedns-entries-from-host) -* [Hot-swapping service containers](#hot-swapping) -* [Configuring a custom TTL](#ttl) -* [Configuring the domain search path](#domain-search-path) -* [Using a different local domain](#local-domain) -* [Troubleshooting](#troubleshooting) -* [Present limitations](#limitations) - -## Using weaveDNS - -WeaveDNS is deployed as an embedded service within the Weave router. -The service is automatically started when the router is launched: - -```bash -host1$ weave launch -host1$ eval $(weave env) -``` - -WeaveDNS related configuration arguments can be passed to `launch`. - -Application containers will use weaveDNS automatically if it is -running at the point when they are started. They will use it for name -resolution, and will register themselves if they have either a -hostname in the weaveDNS domain (`weave.local` by default) or have -been given an explicit container name: - -```bash -host1$ docker run -dti --name=pingme ubuntu -host1$ docker run -ti --hostname=ubuntu.weave.local ubuntu -root@ubuntu:/# ping pingme -... 
-``` - -> **Please note** if both hostname and container name are specified at -> the same time the hostname takes precedence; in this circumstance if -> the hostname is not in the weaveDNS domain the container will *not* -> be registered, but will still use weaveDNS for resolution. - -To disable application containers' use of weaveDNS, add the -`--without-dns` option to `weave run` or `weave launch-proxy`. - -## How it works - -The weaveDNS service running on every host acts as the nameserver for -containers on that host. It learns about hostnames for local containers -from the proxy and from the `weave run` command. If a hostname is in -the `.weave.local` domain then weaveDNS records the association of that -name with the container's weave IP address(es) in its in-memory -database, and broadcasts the association to other weave peers in the -cluster. - -When weaveDNS is queried for a name in the `.weave.local` domain, it -looks up the hostname its in memory database and responds with the IPs -of all containers for that hostname across the entire cluster. - -WeaveDNS returns IP addresses in a random order to facilitate basic -load balancing and failure tolerance. Most client side resolvers sort -the returned addresses based on reachability, placing local addresses -at the top of the list (see [RFC 3484](https://www.ietf.org/rfc/rfc3484.txt)). -For example, if there is container with the desired hostname on the local -machine, the application will receive that container's IP address. -Otherwise, the application will receive the IP address of a random -container with the desired hostname. - -When weaveDNS is queried for a name in a domain other than -`.weave.local`, it queries the host's configured nameserver, which is -the standard behaviour for Docker containers. - -So that containers can connect to a stable and always routable IP -address, weaveDNS listens on port 53 to the Docker bridge device, which -is assumed to be `docker0`. 
Some configurations may use a different -Docker bridge device. To supply a different bridge device, use the -environment variable `DOCKER_BRIDGE`, e.g., - -```bash -$ sudo DOCKER_BRIDGE=someother weave launch -``` - -In the event that weaveDNS is launched in this way, it's important that -other calls to `weave` also specify the bridge device: - -```bash -$ sudo DOCKER_BRIDGE=someother weave run ... -``` - -## Load balancing - -It is permissible to register multiple containers with the same name: -weaveDNS returns all addresses, in a random order, for each request. -This provides a basic load balancing capability. - -Returning to our earlier example, let us start an additional `pingme` -container, this time on the 2nd host, and then run some ping tests... - -```bash -host2$ weave launch -host2$ eval $(weave env) -host2$ docker run -dti --name=pingme ubuntu - -root@ubuntu:/# ping -nq -c 1 pingme -PING pingme.weave.local (10.32.0.2) 56(84) bytes of data. -... -root@ubuntu:/# ping -nq -c 1 pingme -PING pingme.weave.local (10.40.0.1) 56(84) bytes of data. -... -root@ubuntu:/# ping -nq -c 1 pingme -PING pingme.weave.local (10.40.0.1) 56(84) bytes of data. -... -root@ubuntu:/# ping -nq -c 1 pingme -PING pingme.weave.local (10.32.0.2) 56(84) bytes of data. -... -``` - -Notice how the ping reaches different addresses. - -## Fault resilience - -WeaveDNS removes the addresses of any container that dies. This offers -a simple way to implement redundancy. E.g. if in our example we stop -one of the `pingme` containers and re-run the ping tests, eventually -(within ~30s at most, since that is the weaveDNS -[cache expiry time](#ttl)) we will only be hitting the address of the -container that is still alive. - -## Adding and removing extra DNS entries - -If you want to give the container a name in DNS *other* than its -hostname, you can register it using the `dns-add` command. 
For example: - -```bash -$ C=$(docker run -ti ubuntu) -$ weave dns-add $C -h pingme2.weave.local -``` - -You can also use `dns-add` to add the container's configured hostname -and domain simply by omitting `-h `, or specify additional IP -addresses to be registered against the container's hostname e.g. -`weave dns-add 10.2.1.27 $C`. - -The inverse operation can be carried out using the `dns-remove` -command: - -```bash -$ weave dns-remove $C -``` - -By omitting the container name it is possible to add/remove DNS -records that associate names in the weaveDNS domain with IP addresses -that do not belong to containers, e.g. non-weave addresses of external -services: - -```bash -$ weave dns-add 192.128.16.45 -h db.weave.local -``` - -Note that such records get removed when stopping the weave peer on -which they were added. - -## Resolve weaveDNS entries from host - -You can resolve entries from any host running weaveDNS with `weave -dns-lookup`: - - host1$ weave dns-lookup pingme - 10.40.0.1 - -## Hot-swapping service containers - -If you would like to deploy a new version of a service, keep the old -one running because it has active connections but make all new -requests go to the new version, then you can simply start the new -server container and then [remove](#add-remove) the entry for the old -server container. Later, when all connections to the old server have -terminated, stop the container as normal. - - -## Configuring a custom TTL - -By default, weaveDNS specifies a TTL of 30 seconds in responses to DNS -requests. However, you can force a different TTL value by launching -weave with the `--dns-ttl` argument: - -```bash -$ weave launch --dns-ttl=10 -``` - -This will shorten the lifespan of answers sent to clients, so you will -be effectively reducing the probability of them having stale -information, but you will also be increasing the number of request this -weaveDNS instance will receive. 
- -## Configuring the domain search paths - -If you don't supply a domain search path (with `--dns-search=`), -`weave run ...` tells a container to look for "bare" hostnames, like -`pingme`, in its own domain (or in `weave.local` if it has no domain). -That's why you can just invoke `ping pingme` above -- since the -hostname is `ubuntu.weave.local`, it will look for -`pingme.weave.local`. - -If you want to supply other entries for the domain search path, -e.g. if you want containers in different sub-domains to resolve -hostnames across all sub-domains plus some external domains, you need -*also* to supply the `weave.local` domain to retain the above -behaviour. - -```bash -docker run -ti \ - --dns-search=zone1.weave.local --dns-search=zone2.weave.local \ - --dns-search=corp1.com --dns-search=corp2.com \ - --dns-search=weave.local ubuntu -``` - -## Using a different local domain - -By default, weaveDNS uses `weave.local.` as the domain for names on the -Weave network. In general users do not need to change this domain, but -you can force weaveDNS to use a different domain by launching it with -the `--dns-domain` argument. For example, - -```bash -$ weave launch --dns-domain="mycompany.local." -``` - -The local domain should end with `local.`, since these names are -link-local as per [RFC6762](https://tools.ietf.org/html/rfc6762), -(though this is not strictly necessary). - -## Troubleshooting - -The command - - weave status - -reports on the current status of various weave components, including -DNS: - -```` -... - - Service: dns - Domain: weave.local. - Upstream: 8.8.8.8, 8.8.4.4 - TTL: 1 - Entries: 9 - -... -```` - -The first section covers the router; see the [troubleshooting -guide](troubleshooting.html#weave-status) for more detail. 
-
-The 'Service: dns' section is pertinent to weaveDNS, and includes:
-
-* The local domain suffix which is being served
-* The list of upstream servers used for resolving names not in the local domain
-* The response ttl
-* The total number of entries
-
-You may also use `weave status dns` to obtain a [complete
-dump](troubleshooting.html#weave-status-dns) of all DNS registrations.
-
-Information on the processing of queries, and the general operation of
-weaveDNS, can be obtained from the container logs with
-
-    docker logs weave
-
-## Present limitations
-
- * The server will not know about restarted containers, but if you
-   re-attach a restarted container to the weave network, it will be
-   re-registered with weaveDNS.
- * The server may give unreachable IPs as answers, since it doesn't
-   try to filter by reachability. If you use subnets, align your
-   hostnames with the subnets.
diff --git a/site/weavedns/how-works-weavedns.md b/site/weavedns/how-works-weavedns.md
new file mode 100644
index 0000000000..c132ee0f6c
--- /dev/null
+++ b/site/weavedns/how-works-weavedns.md
@@ -0,0 +1,59 @@
+---
+title: How Weave Finds Containers
+layout: default
+---
+
+
+The weavedns service running on every host acts as the nameserver for
+containers on that host. It learns about hostnames for local containers
+from the proxy and from the `weave run` command.
+
+If a hostname is in the `.weave.local` domain, then weavedns records the association of that
+name with the container's Weave IP address(es) in its in-memory
+database, and then broadcasts the association to other Weave peers in the
+cluster.
+
+When weavedns is queried for a name in the `.weave.local` domain, it
+looks up the hostname in its in-memory database and responds with the IPs
+of all containers for that hostname across the entire cluster.
+
+###Basic Load Balancing and Fault Tolerance
+
+Weavedns returns IP addresses in random order to facilitate basic
+load balancing and fault tolerance.
Most client-side resolvers sort
+the returned addresses based on reachability, and place local addresses
+at the top of the list (see [RFC 3484](https://www.ietf.org/rfc/rfc3484.txt)).
+
+For example, if there is a container with the desired hostname on the local
+machine, the application receives that container's IP address.
+Otherwise, the application receives the IP address of a random
+container with the desired hostname.
+
+When weavedns is queried for a name in a domain other than
+`.weave.local`, it queries the host's configured nameserver, which is
+the standard behaviour for Docker containers.
+
+###Specifying a Different Docker Bridge Device
+
+So that containers can connect to a stable and always routable IP
+address, weavedns listens on port 53 on the Docker bridge device, which
+is assumed to be `docker0`. Some configurations may use a different
+Docker bridge device. To supply a different bridge device, use the
+environment variable `DOCKER_BRIDGE`, e.g.,
+
+```bash
+$ sudo DOCKER_BRIDGE=someother weave launch
+```
+
+In the event that weavedns is launched in this way, it's important that
+other calls to `weave` also specify the bridge device:
+
+```bash
+$ sudo DOCKER_BRIDGE=someother weave run ...
+```
+
+**See Also**
+
+ * [Using Weavedns](/site/weavedns/overview-using-weavedns.md)
+ * [Load Balancing and Fault Resilience with weavedns](/site/weavedns/load-balance-fault-weavedns.md)
+ * [Managing Domains](/site/weavedns/managing-domains-weavedns.md)
diff --git a/site/weavedns/load-balance-fault-weavedns.md b/site/weavedns/load-balance-fault-weavedns.md
new file mode 100644
index 0000000000..f24350ef0b
--- /dev/null
+++ b/site/weavedns/load-balance-fault-weavedns.md
@@ -0,0 +1,50 @@
+---
+title: Load Balancing and Fault Resilience with weavedns
+layout: default
+---
+
+
+
+It is permissible to register multiple containers with the same name:
+weavedns returns all addresses, in a random order, for each request.
+This provides a basic load balancing capability.
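This random ordering can be sketched as follows. The sketch is an illustrative model only, not the weavedns implementation, and the hostname and addresses are taken from the example below:

```python
import random
from collections import Counter

# Illustrative model only (not the weavedns source): one name is
# registered by two containers, and every query returns the full
# address list freshly shuffled.
records = {"pingme.weave.local": ["10.32.0.2", "10.40.0.1"]}

def lookup(name):
    addrs = list(records[name])
    random.shuffle(addrs)  # a random order on every request
    return addrs

# A client that always connects to the first address it receives ends
# up spread across both containers over many requests.
first_choices = Counter(lookup("pingme.weave.local")[0] for _ in range(1000))
print(first_choices)
```

Over many lookups each container receives roughly half of the "first address" picks, which is the basic load balancing the text describes.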
+
+Returning to the earlier example, let us start an additional `pingme`
+container, this time on the second host, and then run some ping tests...
+
+```bash
+host2$ weave launch
+host2$ eval $(weave env)
+host2$ docker run -dti --name=pingme ubuntu
+
+root@ubuntu:/# ping -nq -c 1 pingme
+PING pingme.weave.local (10.32.0.2) 56(84) bytes of data.
+...
+root@ubuntu:/# ping -nq -c 1 pingme
+PING pingme.weave.local (10.40.0.1) 56(84) bytes of data.
+...
+root@ubuntu:/# ping -nq -c 1 pingme
+PING pingme.weave.local (10.40.0.1) 56(84) bytes of data.
+...
+root@ubuntu:/# ping -nq -c 1 pingme
+PING pingme.weave.local (10.32.0.2) 56(84) bytes of data.
+...
+```
+
+Notice how the ping reaches different addresses.
+
+
+## Fault resilience
+
+WeaveDNS removes the addresses of any container that dies. This offers
+a simple way to implement redundancy. For example, if we stop
+one of the `pingme` containers and re-run the ping tests, eventually
+(within ~30s at most, since that is the weaveDNS
+[cache expiry time](/site/weavedns/managing-entries-weavedns.md#ttl)) we will only be hitting the address of the
+container that is still alive.
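The withdrawal of a dead container's addresses can be modelled roughly like this. It is a hypothetical sketch with invented container IDs and helper names, not the weavedns source, and real clients may still see cached answers until the TTL expires:

```python
# Hypothetical model of weavedns fault resilience: each hostname maps
# to the containers that registered it; when a container dies, its
# address is withdrawn and later queries return only the survivors.
registry = {
    "pingme.weave.local": {"container-a": "10.32.0.2",
                           "container-b": "10.40.0.1"},
}

def remove_container(container_id):
    # Withdraw every address the dead container had registered.
    for owners in registry.values():
        owners.pop(container_id, None)

def resolve(name):
    return sorted(registry.get(name, {}).values())

remove_container("container-a")
print(resolve("pingme.weave.local"))  # only the survivor's address remains
```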
+
+**See Also**
+
+ * [How Weave Finds Containers](/site/weavedns/how-works-weavedns.md)
+ * [Managing Domains](/site/weavedns/managing-domains-weavedns.md)
+ * [Managing Domain Entries](/site/weavedns/managing-entries-weavedns.md)
\ No newline at end of file
diff --git a/site/weavedns/managing-domains-weavedns.md b/site/weavedns/managing-domains-weavedns.md
new file mode 100644
index 0000000000..85aa1b5bdb
--- /dev/null
+++ b/site/weavedns/managing-domains-weavedns.md
@@ -0,0 +1,51 @@
+---
+title: Managing Domains
+layout: default
+---
+
+The following topics are discussed:
+
+* [Configuring the domain search paths](#domain-search-path)
+* [Using a different local domain](#local-domain)
+
+## Configuring the domain search paths
+
+If you don't supply a domain search path (with `--dns-search=`),
+`weave run ...` tells a container to look for "bare" hostnames, like
+`pingme`, in its own domain (or in `weave.local` if it has no domain).
+That's why you can just invoke `ping pingme` in the earlier example -- since the
+hostname is `ubuntu.weave.local`, it will look for
+`pingme.weave.local`.
+
+If you want to supply other entries for the domain search path,
+e.g. if you want containers in different sub-domains to resolve
+hostnames across all sub-domains plus some external domains, you must
+*also* supply the `weave.local` domain to retain the above
+behaviour.
+
+```bash
+docker run -ti \
+  --dns-search=zone1.weave.local --dns-search=zone2.weave.local \
+  --dns-search=corp1.com --dns-search=corp2.com \
+  --dns-search=weave.local ubuntu
+```
+
+## Using a different local domain
+
+By default, weaveDNS uses `weave.local.` as the domain for names on the
+Weave network. In general users do not need to change this domain, but
+you can force weaveDNS to use a different domain by launching it with
+the `--dns-domain` argument. For example,
+
+```bash
+$ weave launch --dns-domain="mycompany.local."
+```
+
+The local domain should end with `local.`, since these names are
+link-local as per [RFC6762](https://tools.ietf.org/html/rfc6762)
+(though this is not strictly necessary).
+
+**See Also**
+
+ * [How Weave Finds Containers](/site/weavedns/how-works-weavedns.md)
+ * [Load Balancing and Fault Resilience with weavedns](/site/weavedns/load-balance-fault-weavedns.md)
+ * [Managing Domain Entries](/site/weavedns/managing-entries-weavedns.md)
diff --git a/site/weavedns/managing-entries-weavedns.md b/site/weavedns/managing-entries-weavedns.md
new file mode 100644
index 0000000000..3331fe9614
--- /dev/null
+++ b/site/weavedns/managing-entries-weavedns.md
@@ -0,0 +1,87 @@
+---
+title: Managing Domain Entries
+layout: default
+---
+
+
+The following topics are discussed:
+
+* [Adding and removing extra DNS entries](#add-remove)
+* [Resolving Weavedns Entries from the Host](#resolve-weavedns-entries-from-host)
+* [Hot-swapping Service Containers](#hot-swapping)
+* [Configuring a Custom TTL](#ttl)
+
+
+
+### Adding and Removing Extra DNS Entries
+
+If you want to give the container a name in DNS *other* than its
+hostname, you can register it using the `dns-add` command. For example:
+
+```bash
+$ C=$(docker run -ti ubuntu)
+$ weave dns-add $C -h pingme2.weave.local
+```
+
+You can also use `dns-add` to add the container's configured hostname
+and domain simply by omitting `-h`, or specify additional IP
+addresses to be registered against the container's hostname, e.g.
+`weave dns-add 10.2.1.27 $C`.
+
+The inverse operation can be carried out using the `dns-remove`
+command:
+
+```bash
+$ weave dns-remove $C
+```
+
+By omitting the container name it is possible to add/remove DNS
+records that associate names in the weaveDNS domain with IP addresses
+that do not belong to containers, e.g.
non-weave addresses of external
+services:
+
+```bash
+$ weave dns-add 192.128.16.45 -h db.weave.local
+```
+
+Note that such records are removed when the weave peer on
+which they were added is stopped.
+
+### Resolving Weavedns Entries From the Host
+
+You can resolve entries from any host running weaveDNS with `weave
+dns-lookup`:
+
+    host1$ weave dns-lookup pingme
+    10.40.0.1
+
+### Hot-swapping Service Containers
+
+If you want to deploy a new version of a service, keeping the old
+one running because it has active connections but directing all new
+requests to the new version, simply start the new
+server container and then [remove](#add-remove) the entry for the old
+server container. Later, when all connections to the old server have
+terminated, stop the container as normal.
+
+### Configuring a Custom TTL
+
+By default, weaveDNS specifies a TTL of 30 seconds in responses to DNS
+requests. However, you can force a different TTL value by launching
+weave with the `--dns-ttl` argument:
+
+```bash
+$ weave launch --dns-ttl=10
+```
+
+This shortens the lifespan of answers sent to clients,
+reducing the probability of their holding stale
+information, but also increases the number of requests this
+weaveDNS instance will receive.
+
+**See Also**
+
+ * [How Weave Finds Containers](/site/weavedns/how-works-weavedns.md)
+ * [Load Balancing and Fault Resilience with weavedns](/site/weavedns/load-balance-fault-weavedns.md)
+ * [Managing Domains](/site/weavedns/managing-domains-weavedns.md)
diff --git a/site/weavedns/overview-using-weavedns.md b/site/weavedns/overview-using-weavedns.md
new file mode 100644
index 0000000000..6259eb77e0
--- /dev/null
+++ b/site/weavedns/overview-using-weavedns.md
@@ -0,0 +1,48 @@
+---
+title: Using Weavedns
+layout: default
+---
+
+
+
+The Weave DNS server answers name queries on a Weave network and provides a simple way for containers to find each other.
Just give
+the containers hostnames and then tell other containers to connect to those names.
+Unlike Docker 'links', this requires no code changes and it works across
+hosts.
+
+Weavedns is deployed as an embedded service within the Weave router.
+The service is automatically started when the router is launched:
+
+```bash
+host1$ weave launch
+host1$ eval $(weave env)
+```
+
+Weavedns-related configuration arguments can be passed to `launch`.
+
+Application containers use weavedns automatically if it is
+running when they are started. They use it for name
+resolution, and will register themselves if they either have a
+hostname in the weavedns domain (`weave.local` by default) or are given an explicit container name:
+
+```bash
+host1$ docker run -dti --name=pingme ubuntu
+host1$ docker run -ti --hostname=ubuntu.weave.local ubuntu
+root@ubuntu:/# ping pingme
+...
+```
+
+> **Note** If both hostname and container name are specified at
+the same time, the hostname takes precedence. In this circumstance, if
+the hostname is not in the weavedns domain, the container is *not*
+registered, but it will still use weavedns for resolution.
+
+To disable an application container's use of weavedns, add the
+`--without-dns` option to `weave run` or `weave launch-proxy`.
+
+
+**See Also**
+
+ * [How Weave Finds Containers](/site/weavedns/how-works-weavedns.md)
+ * [Load Balancing and Fault Resilience with weavedns](/site/weavedns/load-balance-fault-weavedns.md)
+
\ No newline at end of file
diff --git a/site/weavedns/troubleshooting-weavedns.md b/site/weavedns/troubleshooting-weavedns.md
new file mode 100644
index 0000000000..8c77435266
--- /dev/null
+++ b/site/weavedns/troubleshooting-weavedns.md
@@ -0,0 +1,55 @@
+---
+title: Troubleshooting and Present Limitations
+layout: default
+---
+
+
+
+
+### Troubleshooting
+
+The command:
+
+    weave status
+
+reports on the current status of various Weave components, including
+DNS:
+
+````
+...
+
+ Service: dns
+  Domain: weave.local.
+  Upstream: 8.8.8.8, 8.8.4.4
+  TTL: 1
+  Entries: 9
+
+...
+````
+
+The first section covers the router; see the [troubleshooting
+guide](/site/troubleshooting.md#weave-status) for more details.
+
+The 'Service: dns' section is pertinent to weavedns, and includes:
+
+* The local domain suffix being served
+* The list of upstream servers used for resolving names not in the local domain
+* The response TTL
+* The total number of entries
+
+You may also use `weave status dns` to obtain a [complete
+dump](/site/troubleshooting.md#weave-status-dns) of all DNS registrations.
+
+Information on the processing of queries, and the general operation of
+weaveDNS, can be obtained from the container logs with
+
+    docker logs weave
+
+### Present Limitations
+
+ * The server will not know about restarted containers, but if you
+   re-attach a restarted container to the weave network, it will be
+   re-registered with weaveDNS.
+ * The server may give unreachable IPs as answers, since it doesn't
+   try to filter by reachability. If you use subnets, align your
+   hostnames with the subnets.
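For scripting purposes, the `Service: dns` fields shown by `weave status` can be picked out with a small parser. The helper below is hypothetical (it is not part of weave) and is shown against a hard-coded copy of the sample output above:

```python
# Hypothetical helper, not part of weave: extract the fields listed
# under "Service: dns" in `weave status` output.
SAMPLE = """\
 Service: dns
  Domain: weave.local.
  Upstream: 8.8.8.8, 8.8.4.4
  TTL: 1
  Entries: 9
"""

def parse_dns_status(text):
    fields = {}
    for line in text.splitlines():
        # Split each "Key: value" line on the first ": ".
        key, sep, value = line.strip().partition(": ")
        if sep and key in ("Domain", "Upstream", "TTL", "Entries"):
            fields[key] = value.strip()
    return fields

info = parse_dns_status(SAMPLE)
print(info)
```

Against a live host you would feed it the real output instead, e.g. via `subprocess.check_output(["weave", "status"])`.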