\chapter{Multicast Routing}
\label{chap:multicast}

This chapter describes the usage and the internals of the multicast
routing implementation in \ns.
We first describe
\href{the user interface to enable multicast routing}{Section}{sec:mcast-api},
specify the multicast routing protocol to be used and the
various methods and configuration parameters specific to the
protocols currently supported in \ns.
We then describe in detail
\href{the internals and the architecture of the
multicast routing implementation in \ns}{Section}{sec:mcast-internals}.

The procedures and functions described in this chapter can be found in
various files in the directories \nsf{tcl/mcast} and \nsf{tcl/ctr-mcast};
additional support routines
are in \nsf{mcast\_ctrl.\{cc,h\}},
\nsf{tcl/lib/ns-lib.tcl}, and \nsf{tcl/lib/ns-node.tcl}.
\section{Multicast API}
\label{sec:mcast-api}

Multicast forwarding requires enhancements
to the nodes and links in the topology.
Therefore, the user must specify multicast requirements
to the Simulator class before creating the topology.
This is done in one of two ways:
\begin{program}
set ns [new Simulator -multicast on]
{\rm or}
set ns [new Simulator]
$ns multicast
\end{program} %$
When multicast extensions are thus enabled, nodes will be created with
additional classifiers and replicators for multicast forwarding, and
links will contain elements to assign incoming interface labels to all
packets entering a node.

A multicast routing strategy is the mechanism by which
the multicast distribution tree is computed in the simulation.
\ns\ supports three multicast route computation strategies:
centralised, dense mode (DM), and shared tree mode (ST).

The method \proc[]{mrtproto} in the Class Simulator specifies either
the route computation strategy, for centralised multicast routing, or
the specific detailed multicast routing protocol that should be used.

%%For detailed multicast routing, \proc[]{mrtproto} will accept, as
%%additional arguments, a list of nodes that will run an instance of
%%that routing protocol.
%%Polly Huang Wed Oct 13 09:58:40 EDT 199: the above statement
%%is no longer supported.

The following are examples of valid
invocations of multicast routing in \ns:
\begin{program}
set cmc [$ns mrtproto CtrMcast] \; specify centralized multicast for all nodes;
 \; cmc is the handle for the multicast protocol object;
$ns mrtproto DM \; specify dense mode multicast for all nodes;
$ns mrtproto ST \; specify shared tree mode to run on all nodes;
\end{program}
Notice in the above examples that CtrMcast returns a handle that can
be used for additional configuration of centralised multicast routing.
The other routing protocols return a null string. All the
nodes in the topology will run instances of the same protocol.

Multiple multicast routing protocols can be run at a node, but in this
case the user must specify which protocol owns which incoming
interface. For this finer control \proc[]{mrtproto-iifs} is used.
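
The following is a minimal sketch of its usage; the node and interface
labels are hypothetical:
\begin{program}
$ns mrtproto-iifs DM $node0 "0 1" \; DM owns incoming interfaces 0 and 1 at node0;
$ns mrtproto-iifs ST $node0 "2" \; ST owns incoming interface 2;
\end{program} %$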

New/unused multicast addresses are allocated using the procedure
\proc[]{allocaddr}.
%%The default configuration in \ns\ provides 32 bits each for node address and port address space.
%%The procedure
%%\proc[]{expandaddr} is now obsoleted.

Agents use the instance procedures
\proc[]{join-group} and \proc[]{leave-group} of
the class Node to join and leave multicast groups. These procedures
take two mandatory arguments. The first argument identifies the
corresponding agent and the second argument specifies the group address.

An example of a relatively simple multicast configuration is:
\begin{program}
set ns [new Simulator {\bfseries{}-multicast on}] \; enable multicast routing;
set group [{\bfseries{}Node allocaddr}] \; allocate a multicast address;
set node0 [$ns node] \; create multicast capable nodes;
set node1 [$ns node]
$ns duplex-link $node0 $node1 1.5Mb 10ms DropTail

set mproto DM \; configure multicast protocol;
set mrthandle [{\bfseries{}$ns mrtproto $mproto}] \; all nodes will contain multicast protocol agents;
set udp [new Agent/UDP] \; create a source agent at node 0;
$ns attach-agent $node0 $udp
set src [new Application/Traffic/CBR]
$src attach-agent $udp
{\bfseries{}$udp set dst_addr_ $group}
{\bfseries{}$udp set dst_port_ 0}

set rcvr [new Agent/LossMonitor] \; create a receiver agent at node 1;
$ns attach-agent $node1 $rcvr
$ns at 0.3 "{\bfseries{}$node1 join-group $rcvr $group}" \; join the group at simulation time 0.3 (sec);
\end{program} %$
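
To complete the script, one would typically start the traffic source and
run the simulation; a minimal sketch, in which the times and the
\code{finish} cleanup procedure are hypothetical:
\begin{program}
$ns at 0.4 "$src start" \; start the CBR source;
$ns at 2.0 "finish" \; user-defined procedure to close traces and exit;
$ns run
\end{program} %$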

\subsection{Multicast Behavior Monitor Configuration}
\ns\ supports a multicast monitor module that can trace
user-defined packet activity.
The module periodically counts the number of packets in transit
and prints the results to specified files. \proc[]{attach} enables a
monitor module to print output to a file.
\proc[]{trace-topo} inserts monitor modules into all links.
\proc[]{filter} allows accounting on a specified packet header, a
field in that header, and a value for that field. Calling \proc[]{filter}
repeatedly ANDs the filtering conditions together.
\proc[]{print-trace} notifies the monitor module to begin dumping data.
\code{ptype()} is a global array that takes a packet type name (as seen in
\proc[]{trace-all} output) and maps it into the corresponding value.
A simple configuration to filter cbr packets on a particular group is:

\begin{program}
set mcastmonitor [new McastMonitor]
set chan [open cbr.tr w] \; open trace file;
$mcastmonitor attach $chan \; attach trace file to the McastMonitor object;
$mcastmonitor set period_ 0.02 \; default 0.03 (sec);
$mcastmonitor trace-topo \; trace entire topology;
$mcastmonitor filter Common ptype_ $ptype(cbr) \; filter on ptype_ in Common header;
$mcastmonitor filter IP dst_ $group \; AND filter on dst_ address in IP header;
$mcastmonitor print-trace \; begin dumping periodic traces to specified files;
\end{program} %$

% SAMPLE OUTPUT?
The following sample output illustrates the output file format (time, count):
{\small
\begin{verbatim}
0.16 0
0.17999999999999999 0
0.19999999999999998 0
0.21999999999999997 6
0.23999999999999996 11
0.25999999999999995 12
0.27999999999999997 12
\end{verbatim}
}
147 \subsection{Protocol Specific configuration}
148
149 In this section, we briefly illustrate the
150 protocol specific configuration mechanisms
151 for all the protocols implemented in \ns.
152
153 \paragraph{Centralized Multicast}
154 The centralized multicast is a sparse mode implementation of multicast
155 similar to PIM-SM \cite{Deer94a:Architecture}.
156 A Rendezvous Point (RP) rooted shared tree is built
157 for a multicast group. The actual sending of prune, join messages
158 etc. to set up state at the nodes is not simulated. A centralized
159 computation agent is used to compute the forwarding trees and set up
160 multicast forwarding state, \tup{S, G} at the relevant nodes as new
161 receivers join a group. Data packets from the senders to a group are
162 unicast to the RP. Note that data packets from the senders are
163 unicast to the RP even if there are no receivers for the group.
164
165 The method of enabling centralised multicast routing in a simulation is:
166 \begin{program}
167 set mproto CtrMcast \; set multicast protocol;
168 set mrthandle [$ns mrtproto $mproto]
169 \end{program}
The command procedure \proc[]{mrtproto}
returns a handle to the multicast protocol object.
This handle can be used to control the RP and the boot-strap-router (BSR),
to switch tree-types for a particular group
from shared trees to source-specific trees, and
to recompute multicast routes.
\begin{program}
$mrthandle set_c_rp $node0 $node1 \; set the RPs;
$mrthandle set_c_bsr $node0:0 $node1:1 \; set the BSR, specified as a list of node:priority;

$mrthandle get_c_rp $node0 $group \; get the RP for the group, as seen by this node;
$mrthandle get_c_bsr $node0 \; get the current BSR;

$mrthandle switch-treetype $group \; to source-specific or shared tree;

$mrthandle compute-mroutes \; recompute routes. usually invoked automatically as needed;
\end{program}

Note that whenever network dynamics occur or unicast routing changes,
\proc[]{compute-mroutes} could be invoked to recompute the multicast routes.
The instantaneous re-computation feature of centralised algorithms
may result in causality violations during the transient
periods.
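
For example, a scheduled link failure can be paired with an explicit
recomputation; the following is a sketch, assuming \code{$node0} and
\code{$node1} are connected by a link:
\begin{program}
$ns rtmodel-at 1.0 down $node0 $node1 \; take the link down at t=1.0;
$ns at 1.0 "$mrthandle compute-mroutes" \; recompute multicast routes after the change;
\end{program} %$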

\paragraph{Dense Mode}
The Dense Mode protocol (\code{DM.tcl}) is an implementation of a
dense--mode--like protocol. Depending on the value of the DM class
variable \code{CacheMissMode} it can run in one of two modes. If
\code{CacheMissMode} is set to \code{pimdm} (default), PIM-DM-like
forwarding rules will be used. Alternatively, \code{CacheMissMode}
can be set to \code{dvmrp} (loosely based on DVMRP \cite{rfc1075}).
The main difference between these two modes is that DVMRP maintains
parent--child relationships among nodes to reduce the number of links
over which data packets are broadcast. The implementation works on
point-to-point links as well as LANs and adapts to the network
dynamics (links going up and down).

Any node that receives data for a particular group for which it has no
downstream receivers sends a prune upstream. A prune message causes
the upstream node to install prune state at that node. The prune
state prevents that node from sending data for that group downstream
to the node that sent the original prune message while the state is
active. The time duration for which a prune state is active is
configured through the DM class variable \code{PruneTimeout}. A
typical DM configuration is shown below:
\begin{program}
DM set PruneTimeout 0.3 \; default 0.5 (sec);
DM set CacheMissMode dvmrp \; default pimdm;
$ns mrtproto DM
\end{program} %$

\paragraph{Shared Tree Mode}
Simplified sparse mode \code{ST.tcl} is a version of a shared--tree
multicast protocol. The class variable array \code{RP\_}, indexed by group,
determines which node is the RP for a particular group. For example:
\begin{program}
ST set RP_($group) $node0
$ns mrtproto ST
\end{program}
At the time the multicast simulation is started, the protocol will
create and install encapsulator objects at nodes that have multicast
senders, decapsulator objects at RPs, and connect them. To join a
group, a node sends a graft message towards the RP of the group. To
leave a group, it sends a prune message. The protocol currently does
not support dynamic changes or LANs.

\paragraph{Bi-directional Shared Tree Mode}
\code{BST.tcl} is an experimental version of a bi--directional shared
tree protocol. As in shared tree mode, RPs must be configured
manually by using the class array \code{RP\_}. The protocol currently
does not support dynamic changes or LANs.
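
The configuration mirrors that of shared tree mode; a minimal sketch:
\begin{program}
BST set RP_($group) $node0 \; configure the RP exactly as for ST;
$ns mrtproto BST
\end{program} %$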

\section{Internals of Multicast Routing}
\label{sec:mcast-internals}

We describe the internals in three parts: first, the classes that
implement and support multicast routing; second, the specific protocol
implementation details; and finally, a list of the variables
that are used in the implementations.

\subsection{The classes}
The main classes in the implementation are the
\clsref{mrtObject}{../ns-2/tcl/mcast/McastProto.tcl} and the
\clsref{McastProtocol}{../ns-2/tcl/mcast/McastProto.tcl}. There are
also extensions to the base classes: Simulator, Node, Classifier,
\etc. We describe these classes and extensions in this subsection.
The specific protocol implementations also use adjunct data structures
for specific tasks, such as the timer mechanisms used by detailed dense
mode and the encapsulation/decapsulation agents used by centralised
multicast; we defer the description of these objects to the section on
the particular protocol itself.

\paragraph{mrtObject class}
There is one mrtObject (aka Arbiter) object per multicast capable
node. This object supports the ability for a node to run multiple
multicast routing protocols by maintaining an array of multicast
protocols indexed by the incoming interface. Thus, if there are
several multicast protocols at a node, each interface is owned by just
one protocol. The node uses the
arbiter to perform protocol actions, either on a specific protocol
instance active at that node, or on all protocol instances at that
node.
\begin{alist}
\proc[instance]{addproto} &
adds the handle for a protocol instance to its array of
protocols. The second optional argument is the incoming
interface list controlled by the protocol. If this argument
is an empty list or not specified, the protocol is assumed to
run on all interfaces (just one protocol). \\
\proc[protocol]{getType} &
returns the handle to the protocol instance active at that
node that matches the specified type (first and only
argument). This function is often used to locate a protocol's
peer at another node. An empty string is returned if a
protocol of the given type could not be found. \\
\proc[op, args]{all-mprotos} &
internal routine to execute ``\code{op}'' with ``\code{args}''
on all protocol instances active at that node. \\
\proc[]{start} & \\
\proc[]{stop} &
start/stop execution of all protocols. \\
\proc[dummy]{notify} &
is called when a topology change occurs. The dummy argument is
currently not used.\\
\proc[file-handle]{dump-mroutes} &
dumps multicast routes to the specified file-handle. \\
\proc[G, S]{join-group} &
signals all protocol instances to join \tup{S, G}. \\
\proc[G, S]{leave-group} &
signals all protocol instances to leave \tup{S, G}. \\
\proc[code, s, g, iif]{upcall} &
signalled by the node on forwarding errors in the classifier;
this routine in turn signals the protocol instance that owns
the incoming interface (\code{iif}) by invoking the
appropriate handle function for that particular code.\\
\proc[rep, s, g, iif]{drop} &
called on packet drop, possibly to prune an interface. \\
\end{alist}
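
As an illustration, a script can obtain a node's arbiter via the Node
method \proc[]{getArbiter} (described below) and ask it to dump the
multicast routes; a minimal sketch:
\begin{program}
set arbiter [$node0 getArbiter] \; handle to the mrtObject at node0;
set f [open mroutes.tr w]
$arbiter dump-mroutes $f \; dump multicast routes to the file;
\end{program} %$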

In addition, the mrtObject class supports the concept of well known
groups, \ie, those groups that do not require explicit protocol support.
Two well known groups, \code{ALL\_ROUTERS} and \code{ALL\_PIM\_ROUTERS},
are predefined in \ns.

The \clsref{mrtObject}{../ns-2/tcl/mcast/McastProto.tcl} defines
two class procedures to set and get information about these well known groups.
\begin{alist}
\proc[name]{registerWellKnownGroups} &
assigns \code{name} a well known group address. \\
\proc[name]{getWellKnownGroup} &
returns the address allocated to the well known group \code{name}.
If \code{name} is not registered as a well known group,
then it returns the address for \code{ALL\_ROUTERS}. \\
\end{alist}
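
For example, a protocol implementation might look up one of the
predefined addresses; the exact invocation below is a sketch:
\begin{program}
set addr [mrtObject getWellKnownGroup ALL_PIM_ROUTERS] \; address of a predefined well known group;
\end{program}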

\paragraph{McastProtocol class}
This is the base class for the implementation of all the multicast protocols.
It contains basic multicast functions:
\begin{alist}
\proc[]{start}, \proc[]{stop} &
set the \code{status\_} of execution of this protocol instance. \\
\proc[]{getStatus} &
returns the status of execution of this protocol instance. \\
\proc[]{getType} &
returns the type of protocol executed by this instance. \\
\proc[code args]{upcall} &
invoked when the node classifier signals an error, either due to
a cache-miss or a wrong-iif for an incoming packet. This routine
invokes the protocol specific handle, \proc{handle-\tup{code}}, with
the specified \code{args} to handle the signal. \\
\end{alist}

A few words about interfaces. The multicast implementation in \ns\
assumes duplex links, \ie, if there is a simplex link from node~1 to
node~2, there must be a reverse simplex link from node~2 to node~1.
To be able to tell from which link a packet was received, the multicast
simulator configures links with an interface labeller at the end of
each link, which labels packets with a particular and unique label
(id). Thus, an ``incoming interface'' refers to this label and is a
number greater than or equal to zero. The incoming interface value can
be negative (-1) in the special case when the packet was sent by an
agent local to the given node.

In contrast, an ``outgoing interface'' refers to an object handler,
usually the head of a link, which can be installed at a replicator. This
distinction is important: \textit{an incoming interface is a numeric label
attached to a packet, while an outgoing interface is a handler to an object
that is able to receive packets, \eg, the head of a link.}
\subsection{Extensions to other classes in \ns}
In \href{the earlier chapter describing nodes in
\ns}{Chapter}{chap:nodes}, we described the internal structure of the
node in \ns. To briefly recap that description, the node entry for a
multicast node is the
\code{switch\_}. It looks at the highest bit to decide whether the
destination is a multicast or unicast address. Multicast packets are
forwarded to a multicast classifier which maintains a list of
replicators; there is one replicator per \tup{source, group} tuple.
Replicators copy the incoming packet and forward it to all outgoing
interfaces.

\paragraph{Class Node}
Node support for multicast is realized in two primary ways: it serves
as a focal point for access to the multicast protocols, in the areas
of address allocation, control and management, and group membership
dynamics; and secondly, it provides primitives to access and control
interfaces on links incident on that node.
\begin{alist}
\proc[]{expandaddr}, & \\
\proc[]{allocaddr} &
class procedures for address management.
\proc[]{expandaddr} is now obsolete.
\proc[]{allocaddr} allocates the next available multicast
address.\\[2ex]
\proc[]{start-mcast}, & \\
\proc[]{stop-mcast} &
start and stop multicast routing at that node. \\
\proc[]{notify-mcast} &
signals the mrtObject at that node to
recompute multicast routes following a topology change or
unicast route update from a neighbour. \\[2ex]
\proc[]{getArbiter} &
returns a handle to the mrtObject operating at that node. \\
\proc[file-handle]{dump-routes} &
dumps the multicast forwarding tables at that node. \\[2ex]
\proc[s g iif code]{new-group} &
when a multicast data packet is received and the multicast
classifier cannot find the slot corresponding to that data
packet, it invokes \proc[]{Node~instproc~new-group} to
establish the appropriate entry. The code indicates the
reason for not finding the slot. Currently there are two
possibilities, cache-miss and wrong-iif. This procedure
notifies the arbiter instance to establish the new group. \\
\proc[a g]{join-group} &
an \code{agent} at a node that joins a particular group invokes
``\code{node join-group <agent> <group>}''. The
node signals the mrtObject to join the particular \code{group},
and adds \code{agent} to its list of agents at that
\code{group}. It then adds \code{agent} to all replicators
associated with \code{group}. \\
\proc[a g]{leave-group} &
\code{Node~instproc~leave-group} reverses the process
described earlier. It disables the outgoing interfaces to the
receiver agents for all the replicators of the group, deletes
the receiver agents from the local \code{Agents\_} list; it
then invokes the arbiter instance's
\proc[]{leave-group}.\\[2ex]
\proc[s g iif oiflist]{add-mfc} &
\code{Node~instproc~add-mfc} adds a \textit{multicast forwarding cache}
entry for a particular \tup{source, group, iif}.
The mechanism is:
\begin{itemize}
\item create a new replicator (if one does not already exist),
\item update the \code{replicator\_} instance variable array at the node,
\item add all outgoing interfaces and local agents to the
appropriate replicator,
\item invoke the multicast classifier's \proc[]{add-rep}
to create a slot for the replicator in the multicast
classifier.
\end{itemize} \\
\proc[s g oiflist]{del-mfc} &
disables each oif in \code{oiflist} from the replicator for \tup{s, g}.\\
\end{alist}
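
For instance, a forwarding entry can be installed directly with these
methods; a minimal sketch, in which \code{$srcID}, \code{$group},
\code{$iif}, and \code{$oif} are hypothetical values:
\begin{program}
$node1 add-mfc $srcID $group $iif [list $oif] \; install an MFC entry at node1;
$node1 del-mfc $srcID $group [list $oif] \; later, disable that oif again;
\end{program} %$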

The primitives accessible at the node to control its interfaces are listed below.
\begin{alist}
\proc[ifid link]{add-iif}, & \\
\proc[link if]{add-oif} &
invoked during link creation to inform the node about its
incoming interface label and outgoing interface object. \\

\proc[]{get-all-oifs} &
returns all oifs for this node. \\
\proc[]{get-all-iifs} &
returns all iifs for this node. \\

\proc[ifid]{iif2link} &
returns the link object labelled with the given interface
label. \\
\proc[link]{link2iif} &
returns the incoming interface label for the given
\code{link}. \\

\proc[oif]{oif2link} &
returns the link object corresponding to the given outgoing
interface. \\
\proc[link]{link2oif} &
returns the outgoing interface for the \code{link} (the \ns\
object that is incident to the node).\\

\proc[src]{rpf-nbr} &
returns a handle to the neighbour node that is its next hop to the
specified \code{src}.\\

\proc[s g]{getReps} &
returns a handle to the replicator that matches \tup{s, g}.
Either argument can be a wildcard (*). \\
\proc[s g]{getReps-raw} &
as above, but returns a list of \tup{key, handle} pairs. \\
\proc[s g]{clearReps} &
removes all replicators associated with \tup{s, g}. \\[2ex]
\end{alist}
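
The following is a minimal sketch that uses these primitives to walk a
node's incoming interface labels and map each back to its link object:
\begin{program}
foreach iif [$node1 get-all-iifs] \{
    set link [$node1 iif2link $iif] \; map the numeric label to its link object;
\}
\end{program} %$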

\paragraph{Class Link and SimpleLink}
This class provides methods to check the type of link, and the label it
affixes on individual packets that traverse it.
There is one additional method to actually place the interface objects on this link.
These methods are:
\begin{alist}
\proc[]{if-label?} &
returns the interface label affixed by this link to packets
that traverse it. \\
% \proc[]{enable-mcast} &
%       Internal procedure called by the SimpleLink constructor to add
%       appropriate objects and state for multicast. By default, (and
%       for the point-to-point link case) it places a NetworkInterface
%       object at the end of the link, and signals the nodes
%       incident on the link about this link.\\
\end{alist}

\paragraph{Class NetworkInterface}
This is a simple connector that is placed on each link. It affixes
its label id to each packet that traverses it. This label is used
by the destination node incident on that link to identify the link by
which the packet reached it. The label id is configured by the Link
constructor. This object is an internal object, and is not designed
to be manipulated by user level simulation scripts. The object only
supports two methods:
\begin{alist}
\proc[ifid]{label} &
assigns the \code{ifid} that this object will affix to each packet. \\
\proc[]{label} &
returns the label that this object affixes to each packet.\\
\end{alist}
The global class variable, \code{ifacenum\_}, specifies the next
available \code{ifid} number.

\paragraph{Class Multicast Classifier}
\code{Classifier/Multicast} maintains a \emph{multicast forwarding
cache}. There is one multicast classifier per node. The node stores a
reference to this classifier in its instance variable
\code{multiclassifier\_}. When this classifier receives a packet, it
looks at the \tup{source, group} information in the packet headers,
and the interface that the packet arrived from (the incoming interface
or iif); it does a lookup in the MFC and identifies the slot that should
be used to forward that packet. The slot will point to the
appropriate replicator.

However, if the classifier does not have an entry for this
\tup{source, group}, or the iif for this entry is different, it will
invoke an upcall \proc[]{new-group} for the classifier, with one of
two codes to identify the problem:

\begin{itemize}
\item \code{cache-miss} indicates that the classifier did not
find any \tup{source, group} entries;

\item \code{wrong-iif} indicates that the classifier found
\tup{source, group} entries, but none matching the interface
that this packet arrived on.
\end{itemize}
These upcalls into Tcl give the simulation a chance to correct the situation:
install an appropriate MFC--entry (for \code{cache-miss}) or change
the incoming interface for the existing MFC--entry (for
\code{wrong-iif}). The \emph{return value} of the upcall determines
what the classifier will do with the packet. If the return value is
``1'', the classifier assumes that the Tcl upcall has appropriately
modified the MFC and tries to classify the packet (look it up in the
MFC) a second time. If the return value is ``0'', no further lookups
are done, and the packet is dropped.

\proc[]{add-rep} creates a slot in the classifier
and adds a replicator for \tup{source, group, iif} to that slot.
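
To illustrate the upcall flow, here is a minimal sketch (not the actual
\ns\ implementation) of a cache-miss handler in a hypothetical protocol
subclass; it installs an MFC entry that fans out to all oifs and
returns 1 so that the classifier retries the lookup:
\begin{program}
McastProtocol/Flood instproc handle-cache-miss \{ srcID group iface \} \{
    $self instvar node_ \; assumes the instance holds a handle to its node;
    $node_ add-mfc $srcID $group $iface [$node_ get-all-oifs]
    return 1 \; 1 directs the classifier to re-attempt the MFC lookup;
\}
\end{program} %$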

\paragraph{Class Replicator}
When a replicator receives a packet, it copies the packet to all of
its slots. Each slot points to an outgoing interface for a particular
\tup{source, group}.

If no slot is found, the C++ replicator invokes the class instance
procedure \proc[]{drop} to trigger protocol specific actions. We will
describe the protocol specific actions in the next section, when we
describe the internal procedures of each of the multicast routing
protocols.

There are instance procedures to control the elements in each slot:
\begin{alist}
\proc[oif]{insert} & inserts a new outgoing interface
in the next available slot.\\
\proc[oif]{disable} & disables the slot pointing to the specified oif.\\
\proc[oif]{enable} & enables the slot pointing to the specified oif.\\
\proc[]{is-active} & returns true if the replicator has at least one active slot.\\
\proc[oif]{exists} & returns true if the slot pointing to the specified oif is active.\\
\proc[source, group, oldiif, newiif]{change-iface} & modifies the iif entry for the particular replicator.\\
\end{alist}
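
As an example, a script can fetch a replicator through the Node method
\proc[]{getReps} and toggle one of its slots; a sketch with hypothetical
\code{$srcID}, \code{$group}, and \code{$oif}:
\begin{program}
set rep [$node0 getReps $srcID $group] \; replicator for \tup{s, g};
$rep disable $oif \; stop copying packets to this oif;
$rep enable $oif \; and resume;
\end{program} %$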

\subsection{Protocol Internals}
\label{sec:mcastproto-internals}

We now describe the implementation of the different multicast routing
protocol agents.

\subsubsection{Centralized Multicast}
\code{CtrMcast} inherits from \code{McastProtocol}.
One CtrMcast agent needs to be created for each node. There is a
central CtrMcastComp agent to compute and install multicast routes for
the entire topology. Each CtrMcast agent processes membership dynamics
commands, and redirects the CtrMcastComp agent to recompute the
appropriate routes.
\begin{alist}
\proc[]{join-group} &
registers the new member with the \code{CtrMcastComp} agent, and
invokes that agent to recompute routes for that member. \\
\proc[]{leave-group} & is the inverse of \proc[]{join-group}. \\
\proc[]{handle-cache-miss} &
called when no proper forwarding entry is found
for a particular packet source.
In the case of centralized multicast,
this means a new source has started sending data packets.
Thus, the CtrMcast agent registers this new source with the
\code{CtrMcastComp} agent.
If there are any members in that group, it computes the new multicast tree.
If the group is in RPT (shared tree) mode, it also
\begin{enumerate}
\item creates an encapsulation agent at the source;
\item creates a corresponding decapsulation agent at the RP;
\item connects the two agents by unicast; and
\item points the \tup{S,G} entry's outgoing interface at the
encapsulation agent.
\end{enumerate}
\end{alist}

\code{CtrMcastComp} is the centralised multicast route computation agent.
\begin{alist}
\proc[]{reset-mroutes} &
resets all multicast forwarding entries.\\
\proc[]{compute-mroutes} &
(re)computes all multicast forwarding entries.\\
\proc[source, group]{compute-tree} &
computes a multicast tree for one source to reach all the
receivers in a specific group.\\
\proc[source, group, member]{compute-branch} &
is executed when a receiver joins a multicast group. It could
also be invoked by \proc[]{compute-tree} when it itself is
recomputing the multicast tree, and has to reparent all
receivers. The algorithm starts at the receiver, recursively
finding successive next hops, until it either reaches the
source or RP, or it reaches a node that is already a part of
the relevant multicast tree. During the process, several new
replicators and outgoing interfaces may be installed.\\
\proc[source, group, member]{prune-branch} &
is similar to \proc[]{compute-branch}, except that the outgoing
interface is disabled; if the outgoing interface list is empty
at that node, it will walk up the multicast tree, pruning at
each of the intermediate nodes, until it reaches a node that
has a non-empty outgoing interface list for the particular
multicast tree.
\end{alist}

\subsubsection{Dense Mode}
\begin{alist}
\proc[group]{join-group} &
sends graft messages upstream if \tup{S,G} does not contain
any active outgoing slots (\ie, no downstream receivers).
If the next hop towards the source is a LAN, increments a
counter of receivers for the particular group on that LAN.\\
\proc[group]{leave-group} &
decrements the LAN counters. \\
\proc[srcID group iface]{handle-cache-miss} &
depending on the value of \code{CacheMissMode}, calls either
\code{handle-cache-miss-pimdm} or
\code{handle-cache-miss-dvmrp}. \\
\proc[srcID group iface]{handle-cache-miss-pimdm} &
if the packet was received on the correct iif (from the node
that is the next hop towards the source), fans out the packet
on all oifs except the oif that leads back to the
next--hop--neighbor and oifs that are LANs for which this node
is not a forwarder. Otherwise, if the interface was incorrect,
sends a prune back.\\
\proc[srcID group iface]{handle-cache-miss-dvmrp} &
fans out the packet only to nodes for which this node is the
next hop towards the source (parent).\\
\proc[replicator source group iface]{drop} &
sends a prune message back to the previous hop.\\
\proc[from source group iface]{recv-prune} &
resets the prune timer if the interface had been pruned
previously; otherwise, it starts the prune timer and disables
the interface; furthermore, if the outgoing interface list
becomes empty, it propagates the prune message upstream.\\
\proc[from source group iface]{recv-graft} &
cancels any existing prune timer, and re-enables the pruned
interface. If the outgoing interface list was previously
empty, it forwards the graft upstream.\\
\proc[srcID group iface]{handle-wrong-iif} &
invoked when the multicast classifier drops a packet
because it arrived on the wrong interface and has invoked
\proc[]{new-group}; this routine is called via
\proc[]{mrtObject~instproc~new-group}. It sends
a prune message back to the source.\\
\end{alist}

\subsection{The internal variables}
\begin{alist}
\textbf{Class mrtObject}\hfill & \\
\code{protocols\_} &
an array, indexed by incoming interface, of handles to the
protocol instances active at the node at which this object
operates. \\
\code{mask-wkgroups} &
class variable---defines the mask used to identify well known
groups. \\
\code{wkgroups} &
class array variable---array of allocated well known group
addresses, indexed by the group name. \code{wkgroups}(Allocd)
is a special variable indicating the highest currently
allocated well known group. \\[3ex]

\textbf{McastProtocol}\hfill & \\
\code{status\_} &
takes values ``up'' or ``down'', to indicate the status of
execution of the protocol instance. \\
\code{type\_} &
contains the type (class name) of protocol executed by this
instance, \eg, DM, or ST. \\

\textbf{Simulator}\hfill & \\
\code{multiSim\_} &
1 if multicast simulation is enabled, 0 otherwise.\\
\code{MrtHandle\_} &
handle to the centralised multicast simulation object.\\[3ex]

\textbf{Node}\hfill & \\
\code{switch\_} &
handle for the classifier that looks at the high bit of the
destination address in each packet to determine whether it is
a multicast packet (bit = 1) or a unicast packet (bit = 0).\\
\code{multiclassifier\_} &
handle to the classifier that performs the \tup{s, g, iif} match. \\
\code{replicator\_} &
array, indexed by \tup{s, g}, of handles that replicate a
multicast packet on to the required links. \\
\code{Agents\_} &
array, indexed by multicast group, of the list of agents at the
local node that listen to the specific group. \\
\code{outLink\_} &
cached list of outgoing interfaces at this node.\\
\code{inLink\_} &
cached list of incoming interfaces at this node.\\

\textbf{Link} and \textbf{SimpleLink}\hfill & \\
\code{iif\_} &
handle for the NetworkInterface object placed on this link.\\
\code{head\_} &
first object on the link, a no-op connector. However, this
object contains the instance variable, \code{link\_}, that
points to the container Link object.\\

\textbf{NetworkInterface}\hfill & \\
\code{ifacenum\_} &
class variable---holds the next available interface
number.\\
\end{alist}


\section{Commands at a glance}
\label{sec:mcastcommand}

Following is a list of commands used for multicast simulations:
\begin{flushleft}
\code{set ns [new Simulator -multicast on]}\\
This turns the multicast flag on for the given simulation, at the time of
creation of the simulator object.


\code{ns_ multicast}\\
Like the command above, this turns the multicast flag on.


\code{ns_ multicast?}\\
This returns true if the multicast flag has been turned on for the
simulation, and false otherwise.


\code{$ns_ mrtproto <mproto> <optional:nodelist>}\\
This command specifies the type of multicast protocol <mproto> to be used,
\eg, DM or CtrMcast. As an additional argument, the list of nodes <nodelist>
that will run an instance of the detailed routing protocol (other than
centralised mcast) can also be passed.


\code{$ns_ mrtproto-iifs <mproto> <node> <iifs>}\\
This command allows finer control than mrtproto. Since multiple mcast
protocols can be run at a node, this command specifies which mcast protocol
<mproto> to run at which of the incoming interfaces given by <iifs> at the <node>.


\code{Node allocaddr}\\
This returns a new/unused multicast address that may be used to assign a
multicast address to a group.


\code{Node expandaddr}\\
THIS COMMAND IS NOW OBSOLETE.
It used to expand the address space from 16 bits to 30 bits; it has been
replaced by \code{"ns_ set-address-format-expanded"}.


\code{$node_ join-group <agent> <grp>}\\
This command is used when the <agent> at the node joins a particular group <grp>.


\code{$node_ leave-group <agent> <grp>}\\
This is used when the <agent> at the node decides to leave the group <grp>.

Internal methods:\\

\code{$ns_ run-mcast}\\
This command starts multicast routing at all nodes.


\code{$ns_ clear-mcast}\\
This stops mcast routing at all nodes.


\code{$node_ enable-mcast <sim>}\\
This allows special mcast supporting mechanisms (like a mcast classifier) to
be added to the mcast-enabled node. <sim> is a handle to the simulator
object.

In addition to the internal methods listed here, there are other methods
specific to protocols like centralized mcast (CtrMcast), dense mode (DM),
shared tree mode (ST) or bi-directional shared tree mode (BST), Node and
Link class methods, and NetworkInterface and Multicast classifier methods
specific to multicast routing. All mcast related files may be found under
the \ns/tcl/mcast directory.
\begin{description}

\item[Centralised Multicast] A handle to the CtrMcastComp object is
returned when the protocol is specified as `CtrMcast' in mrtproto.
CtrMcast methods are: \\

\code{$ctrmcastcomp switch-treetype group-addr}\\
Switch from the Rendezvous Point rooted shared tree to source-specific
trees for the group specified by group-addr. Note that this method cannot
be used to switch from source-specific trees back to a shared tree for a
multicast group.

\code{$ctrmcastcomp set_c_rp <node-list>}\\
This sets the RPs.

\code{$ctrmcastcomp set_c_bsr <node0:0> <node1:1>}\\
This sets the BSR, specified as a list of node:priority.

\code{$ctrmcastcomp get_c_rp <node> <group>}\\
Returns the RP for the multicast group <group> as seen by the node <node>.
Note that different nodes may see different RPs for the group if the
network is partitioned, as the nodes might be in different partitions.

\code{$ctrmcastcomp get_c_bsr <node>}\\
Returns the current BSR for the group.

\code{$ctrmcastcomp compute-mroutes}\\
This recomputes multicast routes in the event of network dynamics or a
change in unicast routes.


\item[Dense Mode]
The dense mode (DM) protocol can be run as PIM-DM (default) or DVMRP
depending on the class variable \code{CacheMissMode}. There are no methods
specific to this mcast protocol object. Class variables are:
\begin{description}
\item[PruneTimeout] Timeout value for prune state at nodes. Defaults to
0.5 sec.
\item[CacheMissMode] Used to set PIM-DM or DVMRP type forwarding rules.
\end{description}


\item[Shared Tree]
There are no methods for this class. Variables are:
\begin{description}
\item[RP\_] RP\_, indexed by group, determines which node is the RP for a
particular group.
\end{description}


\item[Bidirectional Shared Tree]
There are no methods for this class. The variable is the same as that of
Shared Tree, described above.

\end{description}

\end{flushleft}

\endinput