|
4 | 4 | Linux Kernel TIPC |
5 | 5 | ================= |
6 | 6 |
|
7 | | -TIPC (Transparent Inter Process Communication) is a protocol that is |
8 | | -specially designed for intra-cluster communication. |
| 7 | +Introduction |
| 8 | +============ |
9 | 9 |
|
10 | | -For more information about TIPC, see http://tipc.sourceforge.net. |
| 10 | +TIPC (Transparent Inter Process Communication) is a protocol specially
| 11 | +designed for intra-cluster communication. It can be configured to transmit
| 12 | +messages either over UDP or directly across Ethernet. Message delivery is
| 13 | +sequence-guaranteed, loss-free and flow-controlled. Latency times are shorter
| 14 | +than with any other known protocol, while maximal throughput is comparable to
| 15 | +that of TCP.
| 16 | + |
| 17 | +TIPC Features |
| 18 | +------------- |
| 19 | + |
| 20 | +- Cluster wide IPC service |
| 21 | + |
| 22 | + Have you ever wished you had the convenience of Unix Domain Sockets even when |
| 23 | + transmitting data between cluster nodes? Where you yourself determine the |
| 24 | + addresses you want to bind to and use? Where you don't have to perform DNS |
| 25 | + lookups and worry about IP addresses? Where you don't have to start timers |
| 26 | + to monitor the continuous existence of peer sockets? And yet without the |
| 27 | + downsides of that socket type, such as the risk of lingering inodes? |
| 28 | + |
| 29 | + Welcome to the Transparent Inter Process Communication service, TIPC for
| 30 | + short, which gives you all of this, and a lot more.
| 31 | + |
| 32 | +- Service Addressing |
| 33 | + |
| 34 | + A fundamental concept in TIPC is that of Service Addressing, which makes it
| 35 | + possible for a programmer to choose their own address, bind it to a server
| 36 | + socket and let client programs use only that address for sending messages.
| 37 | + |
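| | + As a minimal sketch (assuming the user-space C socket API from linux/tipc.h;
| | + the type and instance values 18888 and 17 are arbitrary examples), a server
| | + might bind its chosen service address like this::
| | +
| | +   #include <string.h>
| | +   #include <sys/socket.h>
| | +   #include <linux/tipc.h>
| | +
| | +   int bind_service(void)
| | +   {
| | +           struct sockaddr_tipc addr;
| | +           int sd = socket(AF_TIPC, SOCK_RDM, 0);
| | +
| | +           memset(&addr, 0, sizeof(addr));
| | +           addr.family = AF_TIPC;
| | +           addr.addrtype = TIPC_SERVICE_ADDR;  /* TIPC_ADDR_NAME on older kernels */
| | +           addr.scope = TIPC_CLUSTER_SCOPE;
| | +           addr.addr.name.name.type = 18888;   /* service type, chosen by the server */
| | +           addr.addr.name.name.instance = 17;  /* service instance */
| | +
| | +           return bind(sd, (struct sockaddr *)&addr, sizeof(addr));
| | +   }
| | +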
| 38 | +- Service Tracking |
| 39 | + |
| 40 | + A client wanting to wait for the availability of a server uses the Service
| 41 | + Tracking mechanism to subscribe for binding and unbinding/close events for
| 42 | + sockets with the associated service address.
| 43 | + |
| 44 | + The service tracking mechanism can also be used for Cluster Topology Tracking, |
| 45 | + i.e., subscribing for availability/non-availability of cluster nodes. |
| 46 | + |
| 47 | + Likewise, the service tracking mechanism can be used for Cluster Connectivity |
| 48 | + Tracking, i.e., subscribing for up/down events for individual links between |
| 49 | + cluster nodes. |
| 50 | + |
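| | + As an illustration, a minimal sketch of a client that blocks until some
| | + socket binds the arbitrary example service address 18888:17, by subscribing
| | + at the built-in topology service (service type TIPC_TOP_SRV)::
| | +
| | +   #include <string.h>
| | +   #include <sys/socket.h>
| | +   #include <linux/tipc.h>
| | +
| | +   int wait_for_server(void)
| | +   {
| | +           struct sockaddr_tipc topsrv;
| | +           struct tipc_subscr sub;
| | +           struct tipc_event evt;
| | +           int sd = socket(AF_TIPC, SOCK_SEQPACKET, 0);
| | +
| | +           memset(&topsrv, 0, sizeof(topsrv));
| | +           topsrv.family = AF_TIPC;
| | +           topsrv.addrtype = TIPC_SERVICE_ADDR;
| | +           topsrv.addr.name.name.type = TIPC_TOP_SRV;
| | +           topsrv.addr.name.name.instance = TIPC_TOP_SRV;
| | +           if (connect(sd, (struct sockaddr *)&topsrv, sizeof(topsrv)) < 0)
| | +                   return -1;
| | +
| | +           memset(&sub, 0, sizeof(sub));
| | +           sub.seq.type = 18888;          /* example service type */
| | +           sub.seq.lower = 17;            /* instance range 17..17 */
| | +           sub.seq.upper = 17;
| | +           sub.timeout = TIPC_WAIT_FOREVER;
| | +           sub.filter = TIPC_SUB_SERVICE; /* one event per service, not per socket */
| | +           if (send(sd, &sub, sizeof(sub), 0) != sizeof(sub))
| | +                   return -1;
| | +
| | +           /* Blocks until a matching binding (TIPC_PUBLISHED) event arrives */
| | +           if (recv(sd, &evt, sizeof(evt), 0) != sizeof(evt))
| | +                   return -1;
| | +           return evt.event == TIPC_PUBLISHED ? 0 : -1;
| | +   }
| | +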
| 51 | +- Transmission Modes |
| 52 | + |
| 53 | + Using a service address, a client can send datagram messages to a server socket. |
| 54 | + |
| 55 | + Using the same address type, it can establish a connection to an accepting
| 56 | + server socket.
| 57 | + |
| 58 | + It can also use a service address to create and join a Communication Group, |
| 59 | + which is the TIPC manifestation of a brokerless message bus. |
| 60 | + |
| 61 | + Multicast with very good performance and scalability is available both in |
| 62 | + datagram mode and in communication group mode. |
| 63 | + |
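| | + As a sketch of datagram mode (reusing the arbitrary example address 18888:17;
| | + connection mode would instead connect() a SOCK_STREAM or SOCK_SEQPACKET
| | + socket to the same address)::
| | +
| | +   #include <string.h>
| | +   #include <sys/socket.h>
| | +   #include <linux/tipc.h>
| | +
| | +   int say_hello(void)
| | +   {
| | +           struct sockaddr_tipc server;
| | +           char msg[] = "hello";
| | +           int sd = socket(AF_TIPC, SOCK_RDM, 0);
| | +
| | +           memset(&server, 0, sizeof(server));
| | +           server.family = AF_TIPC;
| | +           server.addrtype = TIPC_SERVICE_ADDR;
| | +           server.addr.name.name.type = 18888;
| | +           server.addr.name.name.instance = 17;
| | +           server.addr.name.domain = 0;   /* 0 = cluster-wide address lookup */
| | +
| | +           return sendto(sd, msg, sizeof(msg), 0,
| | +                         (struct sockaddr *)&server, sizeof(server));
| | +   }
| | +
| | + Joining a communication group is similarly compact; a sketch assuming the
| | + TIPC_GROUP_JOIN socket option and an arbitrary example group type 4711::
| | +
| | +   struct tipc_group_req grp = {
| | +           .type = 4711,                  /* group identity == service type */
| | +           .instance = 1,                 /* this member's identity */
| | +           .scope = TIPC_CLUSTER_SCOPE,
| | +   };
| | +   setsockopt(sd, SOL_TIPC, TIPC_GROUP_JOIN, &grp, sizeof(grp));
| | +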
| 64 | +- Inter Node Links |
| 65 | + |
| 66 | + Communication between any two nodes in a cluster is maintained by one or two |
| 67 | + Inter Node Links, which both guarantee data traffic integrity and monitor |
| 68 | + the peer node's availability. |
| 69 | + |
| 70 | +- Cluster Scalability |
| 71 | + |
| 72 | + By applying the Overlapping Ring Monitoring algorithm on the inter node links,
| 73 | + it is possible to scale TIPC clusters up to 1000 nodes while maintaining a
| 74 | + neighbor failure discovery time of 1-2 seconds. For smaller clusters this
| 75 | + time can be made much shorter.
| 76 | + |
| 77 | +- Neighbor Discovery |
| 78 | + |
| 79 | + Neighbor Node Discovery in the cluster is done by Ethernet broadcast or UDP
| 80 | + multicast, when either of those services is available. If not, configured
| 81 | + peer IP addresses can be used.
| 82 | + |
| 83 | +- Configuration |
| 84 | + |
| 85 | + When running TIPC in single node mode no configuration whatsoever is needed.
| 86 | + When running in cluster mode TIPC must as a minimum be given a node address
| 87 | + (before Linux 4.17) and told which interface to attach to. The "tipc"
| 88 | + configuration tool makes it possible to add and maintain many more
| 89 | + configuration parameters.
| 90 | + |
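| | + For example, a minimal cluster-mode setup with the "tipc" tool might look as
| | + follows (the node address and device name are placeholders; the address step
| | + is only needed before Linux 4.17)::
| | +
| | +   tipc node set address 1.1.1
| | +   tipc bearer enable media eth device eth0
| | +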
| 91 | +- Performance |
| 92 | + |
| 93 | + TIPC message transfer latency times are better than in any other known protocol.
| 94 | + Maximal byte throughput for inter-node connections is still somewhat lower than
| 95 | + for TCP, while it is superior for intra-node and inter-container connections
| 96 | + on the same host.
| 97 | + |
| 98 | +- Language Support |
| 99 | + |
| 100 | + The TIPC user API has support for C, Python, Perl, Ruby, D and Go. |
| 101 | + |
| 102 | +More Information |
| 103 | +---------------- |
| 104 | + |
| 105 | +- How to set up TIPC: |
| 106 | + |
| 107 | + http://tipc.io/getting_started.html |
| 108 | + |
| 109 | +- How to program with TIPC: |
| 110 | + |
| 111 | + http://tipc.io/programming.html |
| 112 | + |
| 113 | +- How to contribute to TIPC: |
| 114 | + |
| 115 | + http://tipc.io/contacts.html
| 116 | + |
| 117 | +- More details about the TIPC specification:
| 118 | + |
| 119 | + http://tipc.io/protocol.html |
| 120 | + |
| 121 | + |
| 122 | +Implementation |
| 123 | +============== |
| 124 | + |
| 125 | +TIPC is implemented as a kernel module in the net/tipc/ directory.
11 | 126 |
|
12 | 127 | TIPC Base Types |
13 | 128 | --------------- |
|