What about Batched Invocations #1

Closed

BLaurent opened this issue Jan 15, 2015 · 1 comment

Comments
@BLaurent

Hi, I was wondering if you have ever considered using batched invocations, which are supposed to reduce round trips and therefore latency.

Best regards
Ben

@kentonv
Owner

kentonv commented Jan 17, 2015

Hi Ben,

As I understand it, ICE's "batched invocations" allow you to send multiple messages in one batch but still do not allow the second message in the batch to depend on the response to the first message. This can reduce the number of network packets sent but does not reduce the number of round trips required.

(I could be wrong on this, but I did ask one of the ZeroC authors if they have anything like promise pipelining and they told me they did not.)
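To make the distinction concrete, here is a rough sketch of the batched-invocation pattern. The client type and method names are hypothetical (this is not Ice's actual API), just enough to show why a dependent call still costs an extra round trip:

```typescript
// Hypothetical batched-invocation client (not ZeroC Ice's real API).
// Calls are queued locally and sent together on flush(), saving
// packets, but a call that needs an earlier result must still wait
// for that result to arrive: a full extra round trip.
interface BatchedClient {
  invoke(method: string, args: unknown[]): Promise<unknown>; // queued until flush()
  flush(): void;                                             // send queued calls as one batch
}

async function fetchProfile(client: BatchedClient): Promise<unknown> {
  const user = client.invoke("getUser", ["ben"]); // queued
  const time = client.invoke("getTime", []);      // independent call, same batch
  client.flush();                                 // round trip #1: one packet, two calls

  const u = await user;                             // must wait for the response...
  const profile = client.invoke("getProfile", [u]); // ...before this can even be queued
  client.flush();                                   // round trip #2
  await time;
  return profile;
}
```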

Some systems (not ZeroC ICE, AFAICT) support a concept of "batched invocation chaining" which is very close to promise pipelining, but still requires that all the calls be made in a single batch. Promise pipelining is more flexible in that you can kick off the first call before you're ready to make the second, and you also receive the responses when they're ready rather than all at once. This also tends to mean you can keep your code much cleaner, because the code responsible for filling out the first request does not necessarily need to know whether anyone plans to pipeline on its result.
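By contrast, here is the same interaction under promise pipelining, again with hypothetical types rather than Cap'n Proto's actual generated API. The unresolved reference returned by the first call can be passed directly into the second, so both requests go out back-to-back and the server substitutes the real result before executing the dependent call:

```typescript
// Hypothetical pipelining client (not Cap'n Proto's real generated API).
// invoke() returns immediately with a RemoteRef: a promise that can
// also be passed as an argument to later calls before it resolves.
interface RemoteRef<T> extends Promise<T> {}

interface PipeliningClient {
  invoke<T>(method: string, args: unknown[]): RemoteRef<T>;
}

async function fetchProfilePipelined(client: PipeliningClient): Promise<unknown> {
  const user = client.invoke<unknown>("getUser", ["ben"]); // sent immediately
  // `user` has not resolved yet, but it can already be used as an
  // argument: the server fills in getUser's result before running
  // getProfile, so no extra round trip is needed.
  const profile = client.invoke("getProfile", [user]);     // pipelined: no wait
  return await profile;                                    // total: ~1 round trip
}
```

Note that `fetchProfilePipelined` never needs to know whether its caller will pipeline further calls on `profile`, which is the code-cleanliness point above.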

@kentonv closed this as completed Jan 17, 2015