Since sending a Message may involve a context switch between the
application and the Transport Services system, sending patterns that
involve multiple small Messages can incur high overhead if each needs
to be enqueued separately. To avoid this, the application can
indicate a batch of Send actions through the API. When this is used,
the implementation can defer the processing of Messages until the
batch is complete.
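To make the deferred-processing idea concrete, here is a minimal sketch of how batching could reduce the number of trips into the Transport Services system. The `Connection` class, `batch` context manager, and `_enqueue` hook are hypothetical stand-ins invented for illustration; they are not the draft's actual abstract-API syntax or any real implementation.

```python
from contextlib import contextmanager

class Connection:
    """Toy stand-in for a Transport Services Connection (illustrative only)."""

    def __init__(self):
        self._pending = []      # Messages deferred while a batch is open
        self._batching = False
        self.enqueue_calls = 0  # counts crossings into the "transport system"

    def send(self, message):
        if self._batching:
            self._pending.append(message)  # defer processing until batch end
        else:
            self._enqueue([message])       # immediate: one crossing per Message

    def _enqueue(self, messages):
        # In a real system this is the expensive boundary crossing
        # (e.g., IPC or a user/kernel transition).
        self.enqueue_calls += 1

    @contextmanager
    def batch(self):
        """Mark a batch of Send actions; flush them in one go at the end."""
        self._batching = True
        try:
            yield self
        finally:
            self._batching = False
            if self._pending:
                self._enqueue(self._pending)  # single crossing for the batch
                self._pending = []

conn = Connection()
with conn.batch():
    for i in range(10):
        conn.send(b"message")
print(conn.enqueue_calls)  # → 1 (one crossing for ten Messages)
```

Without the `batch()` wrapper, the same ten `send` calls would cross the boundary ten times; the batch marker lets the implementation coalesce them into one.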
Since the API is asynchronous I would have thought that would have avoided the
high overheads associated with synchronous blocking IPC, since calling the
send() action shouldn't cause the sender to yield their time slice. Or is the
concern here a context switch between kernel space and user space? Or a shared
lock? Are there implementations of this new API with associated performance
data and benchmarks against the legacy sockets API?
Interesting questions (which would be fun to answer, but that's not what this issue is about). I think the confusion arises from our text being written around one example, rather than using the example to illustrate batch processing.
I'd make a PR that says something more like:
involve multiple Messages can incur high overhead if each needs
to be enqueued separately (e.g., each Message might involve a context switch between the
application and the Transport Services System). To avoid this, the application can
indicate a batch of Send actions through the API. When this is used,
the implementation can defer the processing of Messages until the
batch is complete.
From the review by Robert Wilton:
(2) p 25, sec 5.1.3. Batching Sends