

Why batch sends? #1321

Closed
mwelzl opened this issue Sep 14, 2023 · 2 comments · Fixed by #1407

Comments

@mwelzl (Contributor)

mwelzl commented Sep 14, 2023

From the review by Robert Wilton:

(2) p 25, sec 5.1.3. Batching Sends

   Since sending a Message may involve a context switch between the
   application and the Transport Services system, sending patterns that
   involve multiple small Messages can incur high overhead if each needs
   to be enqueued separately.  To avoid this, the application can
   indicate a batch of Send actions through the API.  When this is used,
   the implementation can defer the processing of Messages until the
   batch is complete.

Since the API is asynchronous I would have thought that would have avoided the
high overheads associated with synchronous blocking IPC, since calling the
send() action shouldn't cause the sender to yield their time slice. Or is the
concern here a context switch between kernel space and user space? Or a shared
lock? Are there implementations of this new API with associated performance
data and benchmarks against the legacy sockets API?

@gorryfair (Contributor)

Interesting questions (which would be fun to answer, but that's not what this issue is about). I think this arises from our text being about an example, rather than using the example to illustrate batch processing.

I'd make a PR with text more like:

   involve multiple Messages can incur high overhead if each needs
   to be enqueued separately (e.g., each Message might involve a context switch between the
   application and the Transport Services System).  To avoid this, the application can
   indicate a batch of Send actions through the API.  When this is used,
   the implementation can defer the processing of Messages until the
   batch is complete.
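The deferred-processing behavior described in the proposed text can be sketched as follows. This is a hypothetical illustration only: the names (`Connection`, `send`, `batch`) and the counter standing in for the per-enqueue cost are invented for this sketch and are not the actual Transport Services API of any implementation.

```python
from contextlib import contextmanager

class Connection:
    """Illustrative sketch of batching Send actions (not a real TAPS API)."""

    def __init__(self):
        self._pending = []      # Messages deferred while a batch is open
        self._batching = False
        self.enqueue_calls = 0  # counts hand-offs to the transport system

    def _enqueue(self, messages):
        # Stand-in for the potentially costly hand-off (e.g., a context
        # switch) between the application and the Transport Services system.
        self.enqueue_calls += 1

    def send(self, message):
        if self._batching:
            self._pending.append(message)  # defer processing of the Message
        else:
            self._enqueue([message])       # one hand-off per Message

    @contextmanager
    def batch(self):
        # The application indicates a batch of Send actions; processing
        # is deferred until the batch is complete.
        self._batching = True
        try:
            yield self
        finally:
            self._batching = False
            if self._pending:
                self._enqueue(self._pending)  # single hand-off for the batch
                self._pending = []

conn = Connection()
for m in (b"a", b"b", b"c"):
    conn.send(m)          # three separate hand-offs
with conn.batch():
    for m in (b"d", b"e", b"f"):
        conn.send(m)      # deferred, flushed as one hand-off
print(conn.enqueue_calls)  # prints 4 (3 unbatched + 1 for the batch)
```

The point of the sketch is only that batching amortizes whatever per-enqueue cost exists; as noted below, the actual overheads depend on the specific implementation.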

mwelzl added a commit that referenced this issue Oct 11, 2023
@mwelzl (Contributor, Author)

mwelzl commented Oct 11, 2023

I agree - there can be overheads, but these details really depend on the specific implementation.
