
refactor: use mojo for electron internal IPC #17406

Merged
merged 41 commits into master from mojo on Apr 2, 2019

Conversation

5 participants
@nornagon
Contributor

nornagon commented Mar 15, 2019

Description of Change

This refactors Electron's internal IPC to use Mojo, instead of legacy IPC.

This PR is a straightforward port that swaps out AtomFrameMsg_Message and AtomFrameHostMsg_Message and their siblings to use Mojo as their underlying transport. In the future I intend to expand our use of Mojo further and describe the various IPC interfaces we expose as fleshed-out Mojo IDLs, using the Mojo JS bindings.

Checklist

Release Notes

Notes: no-notes

@nornagon nornagon added the wip label Mar 15, 2019

@nornagon nornagon requested a review from electron/wg-upgrades Mar 15, 2019

@electron-cation electron-cation bot added new-pr 🌱 and removed new-pr 🌱 labels Mar 15, 2019

@deepak1556
Member

deepak1556 left a comment

Just a couple of design questions.

Resolved review threads:
  atom/app/manifests.cc (outdated)
  atom/browser/api/atom_api_web_contents.h (outdated)
  atom/common/api/BUILD.gn

@nornagon nornagon marked this pull request as ready for review Mar 20, 2019

@nornagon nornagon requested a review from electron/wg-security Mar 20, 2019

@nornagon nornagon removed the wip label Mar 20, 2019

@nornagon

Contributor Author

nornagon commented Mar 21, 2019

Here are some things I've learned about Mojo along the path to this PR. This comment is intended both as information and as a ping: please review this :)

Associated Interfaces

Associated Interfaces are a concept in Mojo that allows an interface to be associated with an existing message pipe. Normally, every interface in Mojo gets its own message pipe (and so can receive messages out of order relative to other interfaces), but using an associated interface means you'll receive messages in order with the associated pipe.

The linked document above talks about an associated mojom keyword, which we aren't using. We're using associated interfaces so we can share the message pipe with the legacy IPC system, meaning that message order won't be affected by this change (messages over the new Mojo IPCs will remain in-order with legacy IPCs). I don't know if this is important for the IPCs that I've converted (renderer->browser: Message, MessageSync and browser->renderer: Message, TakeHeapSnapshot), but it seemed prudent to stick with the same order. In future we can experiment with switching to regular, non-associated Mojo interfaces.
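To make the difference concrete, here is a rough browser-process sketch of the two ways the same interface could be requested. It mirrors the example in the next section; the non-associated variant is shown only for contrast and isn't what this PR uses.

    // Non-associated: the interface gets its own message pipe, so its messages
    // are ordered only with respect to each other.
    mojom::ElectronRendererPtr independent_ptr;
    render_frame_host->GetRemoteInterfaces()->GetInterface(
        mojo::MakeRequest(&independent_ptr));

    // Associated: the interface rides on the frame's existing channel, so its
    // messages stay in order relative to legacy IPC on that channel.
    mojom::ElectronRendererAssociatedPtr associated_ptr;
    render_frame_host->GetRemoteAssociatedInterfaces()->GetInterface(
        mojo::MakeRequest(&associated_ptr));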

RenderFrames, RenderFrameHosts, and InterfaceProviders

Mojo pipes aren't inherently linked to frames or renderers or anything; they're totally generic. So there needs to be plumbing hooked up if you want to keep track of which Mojo pipe belongs to which frame. Fortunately, Chromium already has some infrastructure for this, in the form of {RenderFrame,RenderFrameHost}::GetRemote{,Associated}Interfaces(). These functions return service_manager::InterfaceProviders which are hooked up to track the RenderFrame / RenderFrameHost and provide interfaces to the other end, like so:

    // We're in the browser process here.
    // This will become a pointer to an interface provided by the renderer.
    mojom::ElectronRendererAssociatedPtr electron_ptr;
    // Request the interface via the InterfaceProvider.
    render_frame_host->GetRemoteAssociatedInterfaces()->GetInterface(
        mojo::MakeRequest(&electron_ptr));
    // Call a method on the interface.
    electron_ptr->Message(internal, channel, args.Clone(), sender_id);

Note that in the above code, there are no round trips over the IPC pipe! Everything is pipelined. If the interface fails to bind on the remote end, the method will never be called and an error will be raised on the ElectronRendererAssociatedPtr (interceptable via set_connection_error_handler).
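As an illustration, here is a minimal sketch of intercepting that error. It assumes the handle from the snippet above is kept alive; if the pointer is destroyed immediately (as in the lifetime discussion below), the handler never gets a chance to run.

    // Fires if the renderer end is closed, or never bound, before the pending
    // Message() call could be delivered. Only meaningful while |electron_ptr|
    // itself is still alive.
    electron_ptr.set_connection_error_handler(base::BindOnce([] {
      LOG(WARNING) << "ElectronRenderer pipe closed; Message() was dropped";
    }));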

Interface lifetimes

When you request an interface, you get a handle to it (something like mojom::ElectronRendererAssociatedPtr). A handle is also created on the remote end. If either end of the pipe is closed, the other end will finish processing whatever messages it has left in its queue, then close its own end of the pipe.

The above code sample requests an interface, sends a message over it, then closes its end of the pipe (implicitly, when the ElectronRendererAssociatedPtr goes out of scope). The other end of the pipe will receive the request, bind a new interface (in this case, a new instance of ElectronApiServiceImpl will be created), call the method on the interface, then receive a notification that the other end of the pipe has been closed. ElectronApiServiceImpl listens for the remote pipe being closed and cleans itself up, as in the sketch below.
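A rough sketch of what that receiving end can look like, assuming the old-style mojo::AssociatedBinding API; everything beyond the class name is illustrative, and the real implementation does considerably more.

    // Renderer-process side: one instance is created per incoming interface
    // request, and it deletes itself when the browser end goes away.
    class ElectronApiServiceImpl : public mojom::ElectronRenderer {
     public:
      explicit ElectronApiServiceImpl(
          mojom::ElectronRendererAssociatedRequest request)
          : binding_(this, std::move(request)) {
        binding_.set_connection_error_handler(base::BindOnce(
            &ElectronApiServiceImpl::OnConnectionError,
            base::Unretained(this)));
      }

      // mojom::ElectronRenderer implementation (signature illustrative;
      // TakeHeapSnapshot omitted for brevity).
      void Message(bool internal,
                   const std::string& channel,
                   base::Value args,
                   int32_t sender_id) override {
        // Dispatch to the JS ipcRenderer emitter here.
      }

     private:
      void OnConnectionError() { delete this; }

      mojo::AssociatedBinding<mojom::ElectronRenderer> binding_;
    };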

It would also be a reasonable design to have a single handle per frame to the remote interface, through which all communication would be routed. I'm not sure yet which is better, but this design was simpler for now because there are a fair few places around the code that send IPCs and I wasn't sure where to store such a per-frame handle. This way ends up with a little more traffic on the IPC pipe (requesting the interface anew and closing it for each message), as well as a few more function calls and allocations on the receiving end, but the cost is small and the trade-off is that the calling code stays simpler (see the sketch below).
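For comparison, a sketch of that cached-handle alternative (not what this PR does; the map, who owns it, and evicting entries when frames go away or the pipe errors out are all hand-waved):

    // One handle per frame, created lazily and then reused for every call.
    std::map<content::RenderFrameHost*, mojom::ElectronRendererAssociatedPtr>
        remotes_;  // e.g. owned by the WebContents wrapper

    mojom::ElectronRendererAssociatedPtr& GetRemote(
        content::RenderFrameHost* frame_host) {
      auto& remote = remotes_[frame_host];
      if (!remote.is_bound()) {
        frame_host->GetRemoteAssociatedInterfaces()->GetInterface(
            mojo::MakeRequest(&remote));
      }
      return remote;
    }

    // Callers reuse the same pipe instead of requesting a fresh one per call:
    GetRemote(render_frame_host)->Message(internal, channel, args.Clone(),
                                          sender_id);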

@nornagon nornagon requested a review from deepak1556 Mar 29, 2019

@MarshallOfSound
Member

MarshallOfSound left a comment

I know you stress tested the IPC locally to ensure things like in-order delivery remained consistent. Could we potentially add a test that just sends a few thousand IPC messages back and forth and asserts they arrive in the right processes in the correct order?

Resolved review thread: spec/chromium-spec.js (outdated)
@nornagon

Contributor Author

nornagon commented Mar 29, 2019

Could we potentially add a test that just sends a few thousand IPC messages back and forth and asserts they arrive in the right processes in the correct order?

Yes, but I think this would essentially just be testing a property that's already guaranteed by Mojo?

@MarshallOfSound

Member

MarshallOfSound commented Mar 29, 2019

Yes, but I think this would essentially just be testing a property that's already guaranteed by Mojo?

I was referring more to testing the work you did in making our sync messages work.

MessageSync is no longer a mojo [Sync] call; instead, we call it asynchronously in a background thread and manually block on the response. This fixes the out-of-order issue.

☝️
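For readers wondering what "call it asynchronously in a background thread and manually block on the response" can look like, here is a generic sketch. The interface name mojom::ElectronBrowser, the MessageSync signature, and the task-runner plumbing are all assumptions for illustration; the PR's actual implementation may differ.

    // Emulate a synchronous call over an async Mojo method: post the call to
    // the sequence the interface pointer is bound on, then block the calling
    // thread until the reply callback fires. (Blocking-call restrictions and
    // error handling are ignored for brevity.)
    base::Value SendSync(
        scoped_refptr<base::SequencedTaskRunner> ipc_task_runner,
        mojom::ElectronBrowserAssociatedPtr* browser,  // bound on ipc_task_runner
        bool internal,
        const std::string& channel,
        base::ListValue args) {
      base::WaitableEvent done(base::WaitableEvent::ResetPolicy::MANUAL,
                               base::WaitableEvent::InitialState::NOT_SIGNALED);
      base::Value result;
      ipc_task_runner->PostTask(
          FROM_HERE,
          base::BindOnce(
              [](mojom::ElectronBrowserAssociatedPtr* browser, bool internal,
                 const std::string& channel, base::ListValue args,
                 base::Value* result, base::WaitableEvent* done) {
                (*browser)->MessageSync(
                    internal, channel, std::move(args),
                    base::BindOnce(
                        [](base::Value* result, base::WaitableEvent* done,
                           base::Value reply) {
                          *result = std::move(reply);
                          done->Signal();
                        },
                        result, done));
              },
              browser, internal, channel, std::move(args), &result, &done));
      done.Wait();  // Wake up once the browser's reply has been stored.
      return result;
    }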

@nornagon

Contributor Author

nornagon commented Mar 29, 2019

It would be difficult to replicate these circumstances exactly, because the only time messages would get reordered is when the main process is waiting on a response to a synchronous call, and the main process almost never actually does that (synchronous calls are banned in the browser process; you have to specifically allow them, and only a small set of classes are permitted to do so). One of the very few synchronous calls that the browser makes happens during window close (it has to do with GL contexts); outside of that situation, the [Sync] method works fine and delivers messages in order.

So IMO, to test this effectively we would have to:

  1. Set up the browser to call a synchronous method, probably by adding our own test-only synchronous message,
  2. Call Message in a renderer,
  3. Call MessageSync in a renderer,
  4. Complete the synchronous call in the browser,
  5. Ensure that the Message arrived before the MessageSync.

Ultimately I don't think such a test is worth the effort it would take to create it, because it's testing a guarantee that Mojo already provides: that async messages are always delivered in order.

Oh, and to call that test-only synchronous method we'd also have to patch Chromium to permit the call.

@nornagon nornagon referenced this pull request Mar 30, 2019

Merged

refactor: mojofy MessageTo and MessageHost #17613

4 of 6 tasks complete
@deepak1556
Member

deepak1556 left a comment

@YurySolovyov

Contributor

YurySolovyov commented Apr 2, 2019

Any idea if/how this might affect performance or responsiveness of the renderer process?

@nornagon

Contributor Author

nornagon commented Apr 2, 2019

Any idea if/how this might affect performance or responsiveness of the renderer process?

I don't expect this change to significantly affect performance, as legacy IPC has been using mojo under the hood for several years. I intend to check throughput + latency before/after this change in order to confirm this thesis.

@nornagon

Contributor Author

nornagon commented Apr 2, 2019

Perf test for sync IPC round-trip time:

renderer.js:

    const {ipcRenderer} = require('electron')

    const n = 10000
    let total = 0
    for (let i = 0; i < n; i++) {
      const begin = performance.now()
      ipcRenderer.sendSync('hello')
      const end = performance.now()
      total += end - begin
    }
    console.log(`average: ${total / n}`)

main.js:

    const {ipcMain} = require('electron')

    ipcMain.on('hello', (e) => {
      e.returnValue = 'hi'
    })

Results (3 runs):
legacy: 0.457ms, 0.452ms, 0.472ms (0.461ms avg)
mojo: 0.421ms, 0.418ms, 0.435ms (0.425ms avg)

mojo 8.5% faster.

Testing async call performance is a little trickier, since performance.now() isn't available in the main process and process.hrtime() doesn't seem to be comparable between processes.

@MarshallOfSound
Member

MarshallOfSound left a comment

:shipit:

@nornagon nornagon merged commit 53f6cbc into master Apr 2, 2019

8 checks passed

  Semantic Pull Request - ready to be squashed
  appveyor: win-ia32-debug - AppVeyor build succeeded
  appveyor: win-ia32-testing - AppVeyor build succeeded
  appveyor: win-ia32-testing-pr - AppVeyor build succeeded
  appveyor: win-x64-debug - AppVeyor build succeeded
  appveyor: win-x64-testing - AppVeyor build succeeded
  appveyor: win-x64-testing-pr - AppVeyor build succeeded
  release-notes - Release notes found
@release-clerk


release-clerk bot commented Apr 2, 2019

No Release Notes

@jkleinsc jkleinsc deleted the mojo branch Apr 4, 2019
