
Emphasize that door performance derives from scheduling #12

Open
robertdfrench opened this issue Apr 9, 2019 · 3 comments
Labels
blocked Pending another task

Comments

@robertdfrench
Owner

As u/jking13 puts it:

The other bit that might be worth mentioning is that the advantage of a door vs. say a unix socket is that during a door call, the scheduler directly transfers control to the server thread -- so there is a very minimal scheduler overhead compared to other forms of IPC. There's a bit more to it, but probably a bit much for a tutorial.

It would be good to nail down the language as precisely as possible here. Is this the same as bypassing the scheduler? Do we avoid a context switch? Are we guaranteed that the server thread shares the same CPU timeslice as the client thread? Is it appropriate to make a comparison to cooperative scheduling, or does that carry different implications?

@robertdfrench
Owner Author

Or, instead of explaining it (which is above my head for the moment anyhow), it might be suitable to just claim that doors are faster than other forms of IPC, and either cite the comparison given in the Stevens book or whip up an alternate client & server using sockets and run the same data-transfer comparison on that.
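A minimal sketch of what that socket-based baseline could look like — an echo server and client over a UNIX socket pair, timing small round trips. This is an illustrative stand-in, not code from this repo; a doors version would replace each send/recv pair with a single `door_call()`, and all names here are made up for the example.

```python
import socket
import threading
import time

def echo_server(conn, n):
    # Echo n small messages back to the client, then return.
    for _ in range(n):
        conn.sendall(conn.recv(64))

def time_round_trips(n=1000):
    # A connected UNIX-domain socket pair stands in for client/server IPC.
    client, server = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    t = threading.Thread(target=echo_server, args=(server, n))
    t.start()
    start = time.perf_counter()
    for _ in range(n):
        client.sendall(b"ping")          # request
        assert client.recv(64) == b"ping"  # wait for the reply
    elapsed = time.perf_counter() - start
    t.join()
    client.close()
    server.close()
    return elapsed / n  # mean seconds per round trip

print(f"{time_round_trips() * 1e6:.1f} us per round trip")
```

Each round trip here pays for the client going onto a wait queue and being rescheduled after the server replies, which is exactly the overhead the doors handoff is said to shortcut.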

@jasonbking
Collaborator

It's probably easier to just say it's faster, especially for small (in terms of CPU usage) requests.

It doesn't bypass the scheduler (that'd be bad -- a nefarious process could hog some/all of the available CPUs if it did that). ISTR the net effect is that the client's remaining time slice is effectively loaned to the server thread, so either thread can still go off CPU if the time slice is exceeded.

Normally, an RPC request over, say, a pipe or localhost socket means the client gets put on a wait queue: it has to wait until the scheduler decides to run the server process, then wait until the scheduler decides to run the client thread again (which could be a while on a busy system). With a door, the scheduler just immediately switches to the server process and then back to the client, skipping all the queueing (so you could say it takes a shortcut through the scheduler), though still subject to time-slice limits.

@robertdfrench
Copy link
Owner Author

@jasonbking thank you, that is clarifying.
