
fix(perf): sign multiple files at once #347

Open
erikian wants to merge 1 commit into main from fix/sign-multiple-files

Conversation

@erikian (Member) commented Feb 16, 2025

I was recently debugging a code-signing issue and noticed in the logs that we're signing one file at a time, which translates to a few hundred childProcess.exec('codesign') calls. codesign supports passing multiple files at once, so we can take advantage of that to keep the number of child processes we have to spawn (and the associated overhead) to a minimum.
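
For illustration, the difference boils down to something like this (a simplified sketch, not the actual implementation; the identity and file list are placeholders):

// Simplified sketch, not the actual @electron/osx-sign code.
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const execFileAsync = promisify(execFile);

// Placeholder arguments and file list, purely for illustration.
const signArgs = ['--sign', 'Developer ID Application: Example (TEAMID1234)', '--force'];
const filePaths = ['a.dylib', 'b.dylib', 'c.dylib'];

// Before: one child process per file.
for (const filePath of filePaths) {
  await execFileAsync('codesign', [...signArgs, filePath]);
}

// After: one child process for the whole batch, since codesign
// accepts multiple file paths in a single invocation.
await execFileAsync('codesign', [...signArgs, ...filePaths]);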

At first, I tried to sign all files with identical args in a single codesign call, but the tests for the v7.0.0-beta.3-darwin-x64 and v7.0.0-beta.3-mas-x64 artifacts were failing. I figured that was related to the fact that we're supposed to sign the bundle from the inside out as described here, so I changed the logic to take the position of the files in the file tree into account, and now all files at the same depth that share the same arguments are signed at the same time.
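
Conceptually, the grouping works along these lines (a rough sketch; filesToSign and argsForFile are made-up placeholders, not the real code):

// Rough sketch of depth-aware batching with hypothetical helpers.
import path from 'node:path';
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const execFileAsync = promisify(execFile);

const filesToSign = ['Electron.app/Contents/MacOS/Electron']; // placeholder list
const argsForFile = (file) => ['--sign', 'Developer ID Application: Example (TEAMID1234)']; // placeholder

// Group files by (depth, argument list): files at the same depth that share
// identical codesign arguments end up in the same invocation.
const groups = new Map();
for (const file of filesToSign) {
  const depth = file.split(path.sep).length;
  const args = argsForFile(file);
  const key = `${depth}:${JSON.stringify(args)}`;
  if (!groups.has(key)) groups.set(key, { depth, args, files: [] });
  groups.get(key).files.push(file);
}

// Sign from the inside out: deepest groups first, so nested code is
// already signed by the time its containing bundle is sealed.
const ordered = [...groups.values()].sort((a, b) => b.depth - a.depth);
for (const { args, files } of ordered) {
  await execFileAsync('codesign', [...args, ...files]);
}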

I did some crude benchmarking on my machine with the Electron artifacts used in @electron/osx-sign's test cases, using console.time to measure the total time spent and counting how many child processes we're spawning. Using the main branch, the heaviest artifact is v6.0.3-darwin-x64, which takes 55.737s to complete and calls codesign 227 times in total — with my branch, the same artifact is signed in 47.938s (~14% less time) while spawning only 13 child processes.

I didn't realize we had an open PR for improving code-signing performance until just before opening this one. @SuperDisk, I couldn't test your branch locally because of some build errors, but if my PR lands before yours, let me know if it helps with your use case. My PR uses a different approach, so if it doesn't solve your problem completely, it could potentially speed things up some more when combined with your changes.

Closes #305.

@erikian erikian requested a review from a team as a code owner February 16, 2025 19:45
@SuperDisk commented Feb 17, 2025

Looks like this does essentially the same thing as mine, although it uses one invocation per "level" rather than spawning multiple codesign processes; I think it will probably address the same use case. Nice work; if either of our PRs gets merged, I'll be happy.

EDIT: that said, my thing fixes an unrelated bug where a stat should have been an lstat, so if we do merge this then I can put up another PR with that fix.
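
(For context, fs.stat follows symlinks while fs.lstat reports on the link itself, so only lstat can ever detect one; somePath below is just a placeholder:)

// stat() resolves symlinks before reporting; lstat() reports on the link itself.
import fs from 'node:fs';

const somePath = 'Electron.app/Contents/Frameworks/Some.framework/Resources'; // e.g. a symlink
console.log(fs.statSync(somePath).isSymbolicLink());  // always false: the link was resolved
console.log(fs.lstatSync(somePath).isSymbolicLink()); // true when somePath is a symlink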

@erickzhao (Member) commented:

Hey @erikian @SuperDisk, @electron/wg-ecosystem discussed this PR synchronously during a meeting:

  • A few maintainers were concerned about potential pitfalls with symlinks at different levels of the filesystem tree.
  • Could we get an example of a sample repo that takes a very long time to sign (as per this comment on perf: parallelize signing #341)?
  • If we want this feature, could it be an opt-in API option? We were discussing something like an array of arrays to batch sign (see the sketch below). Signing all files at once would basically be passing in something like files.map(f => [f]).
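
For concreteness, the shape being discussed might look something like this (the batches option name is purely illustrative; it is not an existing @electron/osx-sign option):

// Purely illustrative: "batches" is not an existing @electron/osx-sign option.
import { signAsync } from '@electron/osx-sign';

await signAsync({
  app: 'path/to/Electron.app',
  // Each inner array becomes a single codesign invocation.
  batches: [
    ['Contents/Frameworks/A.dylib', 'Contents/Frameworks/B.dylib'],
    ['Contents/MacOS/Electron'],
  ],
});

// files.map((f) => [f]) would put each file in its own batch.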

@SuperDisk commented Mar 19, 2025

A few maintainers were concerned about potential pitfalls with symlinks at different levels of the filesystem tree.

Just anecdotally, the application I've been signing with my fork includes GStreamer.framework, which has symlinks across different levels, and it works OK, but I can try to break it and see if there's an issue here.

Could we get an example of a sample repo that takes a very long time to sign

I can put something together soon, but I think an easy-to-reproduce case would just be copying a simple binary 1000x and including it in the package.
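
Something along these lines, say (untested sketch; the paths are placeholders):

// Untested sketch: pad a bundle with 1000 copies of a small binary
// so the signing step has lots of files to churn through.
import fs from 'node:fs';
import path from 'node:path';

const src = '/bin/ls'; // any small Mach-O binary
const dest = path.join('Electron.app', 'Contents', 'Resources', 'bloat');

fs.mkdirSync(dest, { recursive: true });
for (let i = 0; i < 1000; i++) {
  fs.copyFileSync(src, path.join(dest, `bin-${i}`));
}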

If we want this feature, could it be an opt-in API option? We were discussing something like an array of arrays to batch sign.

I kind of think it's better to leave this as an internal implementation detail rather than expose it as an API; really, this is just a performance improvement rather than a semantic change. Also, since we need to sign the files in a depth-first manner, I'm not sure how letting callers specify which files to batch-sign would work.

@erikian force-pushed the fix/sign-multiple-files branch from 806b9d4 to 1fcdaef on March 21, 2025 00:30
@erikian force-pushed the fix/sign-multiple-files branch from 1fcdaef to 316c77d on March 21, 2025 00:34
@erikian (Member, Author) commented Apr 1, 2025

A few maintainers were concerned about potential pitfalls with symlinks at different levels of the filesystem tree.

Nothing's changed about how we handle symlinks AFAICT. All Electron apps have a symlink at /Contents/Frameworks/Electron Framework.framework/Resources that points to /Contents/Frameworks/Electron Framework.framework/Versions/A/Resources, which is two levels deeper than the symlink, and things work just as they did before because the files at deeper levels are still signed first.

I did ln -s /path/to/Electron.app/Contents/Resources/some-binary /path/to/Electron.app/Contents/Resources/some/deep/folder/some-binary to see what happens in the opposite situation (symlink deeper than the target), and I get "invalid destination for symbolic link in bundle" — but I get the exact same error on main, so any issues in symlink signing are unrelated to this PR.

Could we get an example of a sample repo that takes a very long time to sign (as per #341 comment)?

In @SuperDisk's original PR, he mentions that his app includes GStreamer, so I made a script that creates a default Electron app with that package copied to the Contents/Resources folder. Save it to @electron/osx-sign's root directory, switch branches, and run node ./downloadAndSignGstreamer.mjs:

// `downloadAndSignGstreamer.mjs`

import { download } from '@electron/get';
import extract from 'extract-zip';
import path from 'node:path';
import fs, { promises as fsPromises } from 'node:fs';
import { execFile, execSync } from 'node:child_process';

const branchName = execSync('git rev-parse --abbrev-ref HEAD').toString().trim();

console.log(`running yarn build on ${branchName}`);
execSync('rm -rf dist && yarn build');

const zipPath = await download('35.1.2');
const tempPath = path.resolve('temp');
const appPath = path.resolve(tempPath, 'Electron.app');

const gstreamerPkgPath = path.resolve('gstreamer.pkg');

if (!fs.existsSync(gstreamerPkgPath)) {
  const response = await fetch(
    'https://gstreamer.freedesktop.org/data/pkg/osx/1.26.0/gstreamer-1.0-1.26.0-universal.pkg',
  );
  await fsPromises.writeFile(gstreamerPkgPath, Buffer.from(await response.arrayBuffer()));
}

const testRuns = 10;
const startTime = new Date();

for (let i = 0; i < testRuns; i++) {
  // remove the previous test bundle and create a fresh copy
  await fsPromises.rm(appPath, {
    recursive: true,
    force: true,
  });

  await extract(zipPath, { dir: tempPath });

  const gstreamDestination = path.join(appPath, 'Contents/Resources/gstreamer');

  execSync(`pkgutil --expand-full "${gstreamerPkgPath}" "${gstreamDestination}"`);

  // this directory contains multiple broken symlinks, remove them before signing
  const brokenSymlinksPkgPath = path.join(gstreamDestination, 'osx-framework-1.26.0-universal.pkg');

  for (const entry of fs.globSync('**/*', { cwd: brokenSymlinksPkgPath })) {
    const filePath = path.join(brokenSymlinksPkgPath, entry);

    try {
      if (fs.lstatSync(filePath).isSymbolicLink()) {
        fs.unlinkSync(filePath);
      }
    } catch (err) {
      if (err.code !== 'ENOENT') {
        throw err;
      }
    }
  }

  const timeLabel = `sign ${i}`;

  console.time(timeLabel);

  const interval = setInterval(() => {
    console.timeLog(timeLabel, '(in progress)');
  }, 10000);

  await new Promise((resolve, reject) => {
    const childProcess = execFile(path.resolve('bin/electron-osx-sign.js'), [
      appPath,
      '--platform=darwin',
    ]);
    childProcess.on('error', reject);
    childProcess.on('close', (code) => {
      if (code === 0) return resolve();
      reject(new Error(`signing exited with code ${code}`));
    });
    childProcess.stderr.on('data', (data) => process.stderr.write(data));
  });

  clearInterval(interval);

  console.timeLog(timeLabel, '(finished)');
}

const endTime = new Date();
const average = Math.round((endTime.getTime() - startTime.getTime()) / testRuns);
const milliseconds = average % 60_000;
const minutes = (average - milliseconds) / 60_000;

console.log(
  `average signing time on "${branchName}": ${average}ms (${minutes}min${(milliseconds / 1000).toFixed(2)}sec)`,
);

Luckily, it doesn't take hours to sign this example app (@SuperDisk's app is probably set up differently, which would explain the longer times), but it does take a healthy 6min48sec on average over 10 runs on main for me. On my branch, it takes ~5min7sec on average, or ~25% less time.

I tried the same approach to sign other apps and I consistently get a more modest ~10-15% improvement, but it looks like apps that ship tons of native modules can benefit more from this change.

If we want this feature, could it be an opt-in API option? We were discussing something like an array of arrays to batch sign.

I wouldn't call this a feature; I'm just combining multiple files that should be signed with the same options into a single call to codesign. That's similar to the following scenario — both approaches achieve the same result, and if one of them were faster than the other, I don't see why users should have to opt in to have their files deleted faster:

# remove those folders
rm -rf dir1
rm -rf dir2
rm -rf dir3
rm -rf dir4

# remove those folders and list all removed files
rm -rfv dir5
rm -rfv dir6
rm -rfv dir7
rm -rfv dir8

## vs ##

# remove those folders
rm -rf dir1 dir2 dir3 dir4

# remove those folders and list all removed files
rm -rfv dir5 dir6 dir7 dir8
