add support for asynchronous building and pushing of profiles #271
base: master
src/cli.rs
```rust
// await both the remote builds and the local builds to speed up deployment times
try_join!(
    // remote builds can be run asynchronously since they do not affect the local machine
    try_join_all(remote_builds.into_iter().map(|data| async {
```
From the `try_join_all` docs:

> If any future returns an error then all other futures will be canceled and an error will be returned immediately.
IMO, it might be better to wait for all futures to complete or fail instead of canceling everything on the first failure; that way the non-failed builds still finish despite an error in an unrelated build. What do you think?
Since the builds are started concurrently, the build progress information is pretty much useless and flickers a lot.
I'm not sure how viable it is to implement, but perhaps it's possible to collect `stdout` and `stderr` for each future; if it is, we can collect and report detailed output synchronously once all remote builds are completed.
> IMO, it might be better to wait for all futures to be completed/failed instead of canceling everything on the first failure, thus the non-failed builds will complete despite the error in an unrelated build. What do you think?
I'll see how easy it would be to implement that; if it is, it might be worth doing so that consecutive builds don't have to rebuild as much.
> perhaps, it's possible to collect stdout and stderr for each future, if it is, we can collect and report detailed output synchronously once all remote builds are completed
I was considering that maybe one line at the bottom per build would be neat, e.g.:
[previous output]
host 1: [build progress]
host 2: [build progress]
This will, however, require some more fine-grained control over the terminal output, i.e. raw mode instead of cooked.
> would be neat
Agree, but it indeed sounds quite non-trivial.
Oh, another somewhat conceptual concern: profiles can potentially depend on each other (see the `profilesOrder` flake option), so perhaps it's worth doing the parallelization on a per-host basis instead of per-profile.
build remote builds (`remote_build = true`) asynchronously to speed up the deployment process; local builds should not be run asynchronously, to prevent running into hardware deadlocks
Force-pushed bb4a111 to 1cc6e35
Mentioning #46 for visibility. The current version works as expected; the only issue is the log flickering with multiple invocations of nix writing to stdout in parallel. I was considering utilising the raw internal-json output similarly to how nix-output-monitor does it, but that might be out of scope here 🤔
Activation is fully synchronous, but that is usually the part that takes the least amount of time.
src/cli.rs
```rust
async {
    // run local builds synchronously to prevent hardware deadlocks
    for data in &local_builds {
        deploy::push::build_profile(data).await.unwrap();
```
I'm not a huge fan of using "partial functions"; isn't an unhandled panic from `unwrap` going to kill the main thread? Also, AFAICS, at the moment we completely ignore non-remote build/push results.
The new approach seems good 👍
Force-pushed 52f5c53 to dd7ec8c
NB: I have opened this Pull Request as a draft since I intend to continue working on it by improving the logging output.
Since the builds are started concurrently, the build progress information is pretty much useless and flickers a lot.
Problem
The current implementation builds every single profile synchronously, including remote builds.
Since remote builds were introduced back in #175, they could be pipelined to deploy new configurations in a more time-efficient manner.
Solution
Build configurations that support remote builds concurrently.
Sidenote: I have decided to keep building local profiles synchronously because I have previously run into hardware deadlocks when evaluating and/or building multiple systems at the same time.
I have tested this code by deploying to my personal infrastructure (https://github.com/philtaken/dotfiles) and it works as intended.