SipStack Event Loop
[draft]
The SIP stack runs as one monolithic thread. That is, socket IO, transport layer management, message parsing, message encoding, and the transaction state machine all run in one thread. A particular application can choose to run higher level code (e.g., dialog layer, dialog users, and application) in the same thread as the SIP stack, or give the SIP stack its own thread.
footnote: Previously (when?) the stack supported running each transport in its own thread, but this was removed.
The SipStack object itself is threadsafe: upper layer code can call it from "other" threads (e.g., to send a message). The SipStack will queue the message (using a thread-safe queue) and then request that its "own" thread wake up. It does this using an AsyncProcessHandler.
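As a minimal sketch (not the stock implementation; resiprocate ships SelectInterruptor for this job), a custom wakeup handler might look like the code below, assuming the AsyncProcessHandler interface in rutil consists of a single handleProcessNotification() callback:

```cpp
#include "rutil/AsyncProcessHandler.hxx"   // header path assumed

// Hypothetical handler: wakes up whatever loop owns the stack's thread.
class MyWakeupHandler : public resip::AsyncProcessHandler
{
   public:
      // Called by SipStack (possibly from another thread) after it queues
      // work, telling the owning thread to stop waiting and give the stack cycles.
      virtual void handleProcessNotification()
      {
         // e.g. write one byte to a self-pipe, or signal a condition variable,
         // so the owning thread's select()/epoll wait returns promptly
      }
};
```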
Currently, there are two event loop "styles" that can be used by an owning thread to provide cycles to the SIP stack. The original style (denoted "FdSet Style" below) was built around select(). It is flexible and works well for (small) clients, but doesn't scale well to servers with many open sockets. The newer style (denoted "Callback Style" below) was added in late 2010 to support servers with 100k+ open sockets.
The Callback style has multiple backing implementations available. There is an implementation based on select() that is available on all platforms. Early performance results indicate that this implementation has comparable performance to the older FdSet Style.
In the Callback style, the stack obtains cycles in three ways:
- By registering socket IO callbacks with an FdPollGrp object. See the pollGrp argument to SipStack constructor and discussion below.
- Via the SipStack methods getTimeTillNextProcessMS() and processTimers().
- Via the asynchronous process handler. See "handler" argument to SipStack constructor and discussion above.
In the Callback style, socket IO is handled via callbacks, but timers are still handled synchronously. At some point it might make sense to convert the timers to callbacks as well; this remains to be investigated.
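As a rough sketch (a simplification of what EventStackThread does, not the actual code), a Callback-style owning thread might drive the stack like this; waitAndProcess() and its exact signature are assumptions about the FdPollGrp interface:

```cpp
// Hypothetical Callback-style loop; 'stack' (SipStack) and 'pollGrp' (FdPollGrp)
// are assumed to be constructed elsewhere, with 'pollGrp' passed to the SipStack
// constructor and callbacks registered with it.
while (!shutdownRequested)
{
   unsigned int ms = stack.getTimeTillNextProcessMS();  // time until the next stack timer
   pollGrp->waitAndProcess(ms < 1000 ? ms : 1000);      // block for socket IO, run IO callbacks
   stack.processTimers();                               // fire any timers that have expired
}
```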
The FdPollGrp class (in rutil) manages the socket IO callbacks for the stack. The class is misnamed; it should really be called something like "SocketEventGrp", but it keeps the old name for legacy reasons. FdPollGrp itself is abstract (it has pure virtual methods) and must be backed by an implementation object.
An implementation object is created via the static factory method:
FdPollGrp::create(const char *implName)
where implName is:
- NULL, empty string, or "event": creates an instance of the "best" event loop available on the platform.
- "epoll": creates an instance of FdPollImplEpoll, described below.
- "fdset": creates an instance of FdPollImplFdSet, described below.
- "poll": creates an instance of FdPollImplPoll, described below.
The FdPollImplFdSet implementation uses the FdSet class, which is a light wrapper around the select() system call. As such, it is available on all platforms. Due to limitations of the select() system call, no more than around 1000 sockets can be open (60 or so on Windows), and performance degrades with more than a hundred or so sockets. It is suitable for use by clients, which typically have only a handful of sockets open.
The FdPollImplEpoll implementation uses the epoll() system call. The key advantage of this implementation is that it allows 100k+ open sockets. It is available on Linux and some other POSIX platforms; it is not available on Windows.
This implementation is compiled in via the HAVE_EPOLL preprocessor macro or the configure script option "--enable-epoll"; see Configuration-Options.
The FdPollImplPoll implementation uses the poll()/WSAPoll() system call. The key advantage of this implementation is that it allows thousands of open sockets. It is available on Linux and some other POSIX platforms (although epoll is more efficient where available); it is also available on Windows and is recommended for use on Windows platforms.
This implementation is compiled in via the HAVE_POLL preprocessor macro.
Note: it was not possible to continue to support the old buildFdSet / process style (StackThread.hxx) with this mechanism; the new event loop processing (EventStackThread.hxx) MUST be used.
A libevent-backed implementation doesn't exist yet; it is just an idea! It should be pretty easy to make an FdPollGrp implementation that is backed by libevent. It would need to be libevent version 2+; version 1.x of libevent has globals that would be difficult to hide. Probably this could be added without any further changes to the resip/stack classes. This would provide access to all the platform-specific optimizations built into libevent.
The SipStack class has the methods buildFdSet(), getTimeTillNextProcessMS(), and process().
The "owning" stack thread must call the above functions in infinite loop to provide the stack with cycles. The application could either make the calls from its own process (and thus its own event loop), or create an instance of StackThread or InterruptableStackThread to provide an autonomous event loop.
Eventually (my opinion) this style (and specifically the buildFdSet() and process() methods) should go away, because there is significant branching (if statements) in the stack code to handle both styles.
The test program resip/stack/test/testStack, along with the matching script resip/stack/test/testStackFlavors.py, can be used to obtain a simple, relative performance metric for the different event loop implementations. This program sends a large number (currently 50000) of messages and reports the transaction rate. The absolute values reported (transactions per second) shouldn't be taken too seriously: any real application will have a lower rate because it needs to do "real" work, not just send messages around.
The test script measures additional test cases that are not reported here. These other values are used to understand where the time is going but aren't relevant to the performance of a "real" application.
The current test script values are not particularly repeatable; the script really needs to be extended to perform multiple runs and compute a standard deviation. So consider the values meaningful only to within, say, 10%; equivalently, differences of less than 10% should be ignored. Note that the "9444" for UDP epoll is NOT a typo, but I don't know why it is so high. The "n/a" entries are because select()-based event loops cannot handle more than 1000 open sockets.
Transactions/Second (tps):

| Thread / NumPorts | TCP, 1 | TCP, 100 | TCP, 10000 | UDP, 1 | UDP, 100 | UDP, 10000 |
|---|---|---|---|---|---|---|
| InterruptableStackThread | 5065 | 4440 | n/a | 4370 | 4611 | n/a |
| EventStackThread/fdset | 5035 | 4325 | n/a | 4450 | 4608 | n/a |
| EventStackThread/epoll | 5135 | 4620 | 3681 | 4291 | 9444 | 4492 |