
Waterfall startup sequence #17108

Closed
jrieken opened this issue Dec 13, 2016 · 12 comments
Labels: debt (Code quality issues), perf

jrieken commented Dec 13, 2016

The overall startup sequence, from starting the main process to starting the renderer process, is waterfall-ish, especially since we don't just wait for the app to be ready but also do many things before starting the renderer (which itself does nothing but load code for its first second). Let's identify what can run after, or in parallel to, getting the renderer ready. I have also created a stripped-down benchmark Electron app which sets the baseline: we can't get faster than it, but we should strive to get as close as possible.

The timings for a simple app (attached) are:

```
app,started:        (absolute) 0ms,   (relative) 0
app,ready:          (absolute) 58ms,  (relative) 58
mainWindow,created: (absolute) 90ms,  (relative) 32
renderer,html:      (absolute) 378ms, (relative) 288
```

VS Code (on my Mac) takes ~750ms before loading the main window (compare to mainWindow,created) and ~1200ms before loading the big chunk of code (workbench.min.js, compare with renderer,html). That leaves roughly 600ms that are spent somewhere. Since that is roughly the time it takes to load the main portion of our code, I wonder if that 600ms of initialisation work can run in parallel while we load the code of the workbench.
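The phase timings above can be reproduced with a small helper that records, per phase, the time since process start and the time since the previous phase. This is a sketch: the phase names mirror the baseline app's output, while `PhaseTimer` itself is a hypothetical helper, not VS Code code.

```typescript
// Hypothetical phase timer producing output in the same shape as
// the baseline app's "phase: (absolute) Nms, (relative) N" lines.
class PhaseTimer {
  private readonly start = Date.now();
  private last = this.start;
  private readonly lines: string[] = [];

  mark(phase: string): void {
    const now = Date.now();
    // absolute = since process start, relative = since previous mark
    this.lines.push(
      `${phase}: (absolute) ${now - this.start}ms, (relative) ${now - this.last}`
    );
    this.last = now;
  }

  report(): string[] {
    return this.lines;
  }
}

const timer = new PhaseTimer();
timer.mark('app,started');
timer.mark('app,ready');
console.log(timer.report().join('\n'));
```

In a real main process the marks would sit at the points the issue names: app start, app ready, window creation, and the renderer's HTML load.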

@jrieken jrieken added the perf label Dec 13, 2016

jrieken commented Dec 13, 2016

The baseline electron app can be found here: https://github.com/jrieken/electron-startup-perf-baseline


jrieken commented Dec 13, 2016

related to #15455

@joaomoreno:

Can you put the timings for vanilla Electron vs. VS Code side by side in a table, along with their relative differences? That makes it easier to reason about things.


jrieken commented Dec 13, 2016

| Step | Electron | VS Code |
| --- | --- | --- |
| start → app ready | 50ms | 50ms |
| start → loadURL | 90ms (+40) | 750ms (+710) |
| start → index.html | 378ms (+288) | 1200ms (+450) |

@bpasero bpasero added this to the January 2017 milestone Dec 13, 2016
jrieken added a commit that referenced this issue Dec 14, 2016
jrieken added a commit that referenced this issue Dec 20, 2016
…ke use not spawn a process which saves at least 10ms, #17108
jrieken added a commit that referenced this issue Dec 22, 2016
…ikely make use not spawn a process which saves at least 10ms, #17108"

This reverts commit 8a41bdb.

jrieken commented Dec 30, 2016

Some operations we perform take long, like spawning a process or listening on a port.

The startup logic should be optimised to start loading workbench.main.js as soon as possible: it takes quite some time (~800ms) to complete, and all other operations should run in parallel to it rather than sequentially before it.
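The parallelisation described here can be sketched with `Promise.all`: kick off the slow script load first, then run the remaining startup work while it is in flight. Everything below is hypothetical stand-in code; the function names and timings are illustrative, not VS Code's actual startup API.

```typescript
// Hypothetical startup steps, each returning a promise.
function loadWorkbenchCode(): Promise<string> {
  // Stands in for loading workbench.main.js (~800ms in the issue).
  return new Promise(resolve =>
    setTimeout(() => resolve('workbench.main.js'), 50)
  );
}

function setupIpc(): Promise<void> {
  // Stands in for e.g. listening on a socket / named pipe.
  return Promise.resolve();
}

function readMachineId(): Promise<string> {
  return Promise.resolve('some-machine-id');
}

async function startup(): Promise<string> {
  // Kick off the long-running code load FIRST...
  const codeLoaded = loadWorkbenchCode();
  // ...and run the remaining init work in parallel, not before it.
  const [code] = await Promise.all([codeLoaded, setupIpc(), readMachineId()]);
  return code;
}

startup().then(code => console.log(`loaded ${code}`));
```

The total startup time then approaches `max(codeLoad, otherWork)` instead of their sum, which is exactly the ~600ms the issue hopes to hide behind the code load.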

@jrieken jrieken assigned joaomoreno and unassigned jrieken Dec 30, 2016
@joaomoreno joaomoreno modified the milestones: February 2017, March 2017 Feb 21, 2017

bpasero commented Mar 7, 2017

Would it not be possible to delay creating the shared process until the first window is loading? All shared-process-related methods return promises anyway, and it seems OK to wait until a window is visible.

Now that we are looking into moving the shared process into a real Electron window, I fear that startup would get even slower compared to just spawning a process.

For the setJumpList call, I simply wait for the first window via win.webContents.on('did-start-loading', ...)
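The pattern described here, deferring non-critical work until the first window fires `did-start-loading`, amounts to queueing work behind a one-shot event. A sketch with Node's `EventEmitter` standing in for a `BrowserWindow`'s `webContents`; the helper name and the queued task names are hypothetical:

```typescript
import { EventEmitter } from 'events';

// Stand-in for a BrowserWindow's webContents (hypothetical).
const webContents = new EventEmitter();

function runAfterFirstLoad(work: () => void): void {
  // `once` runs the deferred work a single time, when the window
  // starts loading (cf. Electron's 'did-start-loading' event).
  webContents.once('did-start-loading', work);
}

const deferred: string[] = [];
runAfterFirstLoad(() => deferred.push('setJumpList'));
runAfterFirstLoad(() => deferred.push('createSharedProcess'));

// Nothing has run yet, so startup stays fast; the emit below is
// what Electron would do when the first window starts loading.
webContents.emit('did-start-loading');
console.log(deferred); // ['setJumpList', 'createSharedProcess']
```

The design choice is simply to trade "done before the window shows" for "done shortly after the window shows" for anything the user cannot observe at launch.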


jrieken commented Mar 7, 2017

Yes, we are moving the shared process into a browser window to begin with, and should then also be able to create it after the first user window has started loading: #22091

@joaomoreno joaomoreno removed this from the March 2017 milestone Mar 29, 2017
@joaomoreno joaomoreno modified the milestones: April 2017, March 2017 Mar 29, 2017
@joaomoreno joaomoreno modified the milestones: Backlog, April 2017 Apr 21, 2017
@joaomoreno joaomoreno added the debt Code quality issues label Apr 21, 2017

jrieken commented May 4, 2017

FYI, I have moved waiting for the ready state into main.ts, so that we at least load the loader and code while waiting.

jrieken added a commit that referenced this issue Jun 9, 2017
jrieken added a commit that referenced this issue Jun 9, 2017

jens1o commented Aug 20, 2017

Is there any further progress here, @jrieken, since you apparently started working on it?

joaomoreno added a commit that referenced this issue Aug 25, 2017
@joaomoreno:

I've changed the getCommonHttpHeaders thing to simply use a UUID which we store after generating it. getmac won't be called anymore. Unfortunately this still happens on startup in a synchronous fashion. The reason is that we don't have an atomic storage mechanism across all processes: if we didn't generate it synchronously on startup, we might end up with two processes generating two different UUIDs. The mitigation is to use a single machineid file for this in user data, which makes reading and writing it extremely fast.


@jrieken setupIPC is a hard nut to make faster. We use sockets/named pipes to make sure we don't execute two instances at the same time. Do you know of anything else that has the same atomic properties yet is faster?
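The machineid mitigation above (generate a UUID once, store it in a single file in user data, read it synchronously at startup) can be sketched as follows. The file name, function name, and paths are hypothetical, and as the comment notes this is still not a fully atomic cross-process store; the synchronous read on startup is the point.

```typescript
import { randomUUID } from 'crypto';
import { readFileSync, writeFileSync, mkdtempSync } from 'fs';
import { join } from 'path';
import { tmpdir } from 'os';

// Read the machine id synchronously, generating it on first run.
// Doing this synchronously at startup avoids two processes each
// generating a different UUID before either has persisted one.
function readOrCreateMachineId(userDataPath: string): string {
  const file = join(userDataPath, 'machineid');
  try {
    return readFileSync(file, 'utf8');
  } catch {
    const id = randomUUID();
    writeFileSync(file, id);
    return id;
  }
}

// Demo against a temp directory standing in for user data.
const userData = mkdtempSync(join(tmpdir(), 'vscode-'));
const first = readOrCreateMachineId(userData);
const second = readOrCreateMachineId(userData);
console.log(first === second); // true: same id on every subsequent read
```

Reading one tiny file is fast enough that keeping it synchronous no longer hurts startup the way spawning getmac did.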


jrieken commented Aug 25, 2017

Do you know of anything else that has the same atomic properties yet is faster?

Yeah, no real clue. The good news is that I haven't seen it take much longer on other machines. So in contrast to launching a process (part of getmac), which can take a second on some machines, this seems to be more or less constant. OK for me to leave it as is.

@joaomoreno:

Cool, let's close it!

@joaomoreno joaomoreno modified the milestones: August 2017, Backlog Aug 25, 2017
@vscodebot vscodebot bot locked and limited conversation to collaborators Nov 17, 2017