Add support for serialized method execution #2208
For this use-case, you would typically use "bi-dir" connections for the calls to the listener. This solves the issue of "several paths" back to the client, since now both calls travel over the same connection. The next issue, as you described, is that the object adapter in the client could read and dispatch the two requests with different threads. With "bi-dir" connections, the thread pool that your object adapter uses to read requests off the network and dispatch them is the client thread pool - not the server thread pool nor the object adapter's thread pool. By default, there is a single thread in the client thread pool, so the dispatches are serialized.
This means that whether the described use case works correctly depends on whether the client is configured correctly (i.e. whether its client thread pool has a size of one). As a developer of the server, I would prefer not to depend on the correctness of the client's configuration - I would like the notifications to be correctly ordered regardless of the client's thread pool settings. Besides, having a single thread handling all notifications from the server (of which there may be many, through different listeners) sounds too limiting in terms of concurrency. I would prefer the server not to impose such a restriction on the client. That's the reason behind the design of my proposed solution - it is implemented entirely on the caller's side, doesn't depend on the callee's configuration, and doesn't limit concurrency beyond what's necessary to maintain ordering.
Given that we're talking about one bidirectional connection between the client and the server, the requests are sent in order on that connection. The other side (the client here) can use multiple threads to dispatch the incoming requests. The second request read off the network can overtake the first request before it reaches the application code, unless there is some specific logic in this "other side". Here, this specific logic is a client thread pool with a single thread, or with multiple threads and Serialize set to true. If you don't trust that the "other side" is coded / configured properly when sending these requests and you want to ensure the dispatches are serialized, your only option is to make two-way invocations and wait for the "void" return value before sending the next request. That's actually a QoS option we offer in IceStorm; you'd use this QoS if you don't trust the subscriber implementation.
Most / all of the above already occurs when a connection is established. The Ice protocol does not support interleaved requests: when one thread writes a request to a connection, no other thread can write anything else to that connection. And with asynchronous invocations, the call order is maintained, even when the sending is not performed synchronously.
This is what I'm doing now. But this logic needs to be implemented separately for every such listener interface, and I have quite a few. Actually, it comes up even more often than that, as we're using the same listener interfaces in different contexts too. It would make sense for ICE to provide such a facility out of the box, universally for any interface, hence I'm asking for this feature.
We don't use IceStorm.
This is new information. Are you saying that when the server invokes an operation, the request is written to the connection synchronously, before the invocation returns? If so, how do concurrent calls work? Do they? Also, if so, does this guarantee the correct order of the calls, as observed by the client?
The only way to guarantee serialized dispatches in your client without proper coding / configuration in your client is for your server to make two-way invocations one after the other. Your proposal is to add a new proxy setting, "serialized", that makes this automatic? So when you use `p->begin_on_activity_finished();`, it won't send any frame until the previous invocation on this proxy has returned a response? While I understand this proposal, I don't find it compelling, since you can easily achieve the same behavior at the application level. It's also based on the requirement that you can't count on the peer to be written / configured properly, which should not be that common.
That's not what I am saying. The sender sends protocol messages (requests, responses, control frames) on the connection. When you call an asynchronous operation, the request is queued for sending, and the queued messages are written to the connection one at a time, in order.
Note that in most if not all language mappings, there is not really any difference between an asynchronous and a synchronous invocation. You can think of a synchronous invocation as a wrapper for:

```cpp
prx->on_activity_startedAsync().get(); // std::future::get
```
I wouldn't characterize it as "easily achievable". As I said before, we have many places where the logic described in this issue needs to be implemented, and you have to implement call queueing in every instance where call ordering matters, which is pretty much everywhere. So, while the concept of call queueing in itself is simple, the amount of work that needs to be done to implement it is far from trivial.
I would think it's the opposite. Basing behavioral correctness on third-party settings is a recipe for trouble and simply poor design, so I would expect most ICE users to want their system to behave correctly regardless of the ICE settings. Also, your suggested solution (either limiting the thread pool to one thread or using the "Serialize" option) basically disables ICE call concurrency, which I don't think will be acceptable to everyone.
I understand that the calls sent over a TCP connection do not get reordered, but this fact doesn't guarantee the order of the calls on the callee's side and therefore has little to no relevance to the issue at hand. When you say "Most / all of the above already occurs when a connection is established", this is not true, as my proposed solution does guarantee call ordering. Also, the way you describe it, the Ice protocol does support request interleaving, as it is possible to have multiple sent requests without received responses. In any case, if you do not think this problem is worth solving, either using my proposed solution or in some other way, you can close this issue. Thanks.
I would write a wrapper around the generated proxy. With IceRPC, you could achieve this "serialize" behavior by writing an interceptor that you insert in your proxy's invocation pipeline. There is, however, no comparable feature in Ice.
I am curious about your use-case here. You don't write these clients? They are written by third-parties?
The Serialize option disables call concurrency on a per-connection basis. You can still dispatch requests received from different connections in separate, concurrent threads.
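The thread-pool behavior discussed above is controlled through Ice configuration properties. A sketch of a client-side configuration that keeps multiple dispatch threads but serializes the dispatch of requests from any single connection (the property names are from the Ice property reference; the pool size is illustrative):

```
# Allow up to four concurrent dispatch threads in the client thread pool.
Ice.ThreadPool.Client.Size=4
# Dispatch requests from any one connection in order, one at a time.
Ice.ThreadPool.Client.Serialize=1
```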
Yes and no. There are cases where I write both the server and the client, and there are cases where I only write the server and the clients are written by a third party. In the latter case, the ones who write the clients are not necessarily very familiar with ICE and its settings. In all cases, the ones who deploy the entire solution (i.e. the server and the clients) are definitely not familiar with ICE at all; they likely have no idea what ICE is, let alone how to properly configure it to make the whole thing work, beyond setting up IP addresses to connect the components together. The ones who deploy the solution are a fourth party.
This is equivalent to no concurrency at all, since the client only connects to the server, i.e. there is only one connection in each client.
We have a few dozen interfaces like this, with varying numbers of methods and parameters, and slightly different rules as to which calls may or may not be reordered. Even writing just one wrapper per interface is considerable work. Then, due to our implementation specifics, reusing these wrappers in the different places where these interfaces are used may be problematic. So currently, we have to use slightly different implementations of these wrappers in different places.
The most elegant solution here would be to provide clean APIs to these third parties - APIs that don't depend on the RPC (Ice) layer. Then you can describe the semantics you guarantee and implement them with Ice. For example:

```cpp
// started and completed will execute in a separate thread - the same thread - one after the other.
void submit_job_to_remote_server(Job job, std::function<void()> started, std::function<void()> completed);
```

Then you would decide on the best way to implement these guarantees.
I wouldn't recommend giving these developers the Slice files and a few instructions on how to use them. Help them by wrapping the RPC details.
This certainly wouldn't make my life any easier compared to the status quo. Implementing the wrapping APIs - in addition to ensuring the correct call ordering - only increases the amount of our work on the server with no clear benefit.
Consider a use case like this. A server allows a client to register a listener for receiving notifications of the server's activity. The listener implements an interface with two operations, `on_activity_started()` and `on_activity_finished()`.

The server is not interested in the results of these notifications, and therefore it simply calls `begin_on_activity_started()` and `begin_on_activity_finished()` when the respective events happen. This allows the server to not block on notifications and to continue with its operations.

The problem is that, presumably, these notifications may be reordered by the time they reach the client, even if the order of these calls in the server is correct. That is, the client may observe the `on_activity_finished()` notification before the `on_activity_started()` one. First, there may be multiple transports connecting the client and the server, and the two calls may get transported through different transports. Second, there may be multiple ICE threads in the client processing incoming ICE calls, and it is possible that the thread handling the `on_activity_finished()` notification reaches the listener object before the one handling `on_activity_started()`.

Provided that my understanding is correct, and reordering may indeed happen, it would be useful to have a way to guarantee the correct notification order in the above use case. For example, add a new option to ICE proxies so that a new call made through a proxy blocks (for synchronous calls) or is enqueued for sending (for asynchronous calls) until the previous call through that proxy is complete. It would work at per-proxy granularity, so that calls made through different proxies, even if the proxies represent the same object, would still be unordered.