[ci] #2690: Add cargo chef caching #2911
Conversation
Codecov Report

```
@@            Coverage Diff             @@
##           iroha2-dev    #2911  +/-   ##
==========================================
- Coverage       67.61%   62.24%   -5.37%
==========================================
  Files             140      168      +28
  Lines           26173    30028    +3855
==========================================
+ Hits            17696    18692     +996
- Misses           8477    11336    +2859
```
We've tried something like this (caching with `cargo chef`) before and it didn't work out; it was actually the first step in optimising the CI. What I suggest we do instead is to continue our work on using a custom CI image. If we bake the compiled dependencies into the CI image and trigger a re-creation of the CI image each time we have a change in `Cargo.lock`, we'd get the same benefit.
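For illustration only, a minimal sketch of the "bake the dependencies into the CI image" idea, assuming `cargo-chef` is used to pre-build them. The base image tag, workdir, and recipe path are assumptions, not the project's actual setup:

```dockerfile
# Hypothetical Dockerfile.base: pre-compiles dependencies into the CI image.
# Rebuilt only when Cargo.lock changes (see the workflow trigger discussion below).
FROM rust:1.64 AS chef
RUN cargo install cargo-chef
WORKDIR /iroha

# Planner stage: derive the dependency "recipe" from the current sources.
FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json

# Cook (compile) only the dependencies; this is the expensive layer
# we want baked into the image rather than rebuilt on every CI run.
FROM chef AS ci-image
COPY --from=planner /iroha/recipe.json recipe.json
RUN cargo chef cook --release --recipe-path recipe.json
```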
As for the coverage diff… @6r1d and I came to the conclusion that codecov.io has a lot of fancy features that we can't use, and is far too unstable. We want to try using Coveralls.
What if we revert to keeping ci-image as a dedicated Docker image that is compiled only when its corresponding Dockerfile changes?
If it works correctly with the Rust language… I just didn't find Rust in the list of supported languages on the Coveralls site.
Good point. It is supported, but we'd need to run some tests before we make the switch, particularly to know if it accurately shows coverage changes. |
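A hedged sketch of what such a test run might look like, assuming coverage is exported in LCOV format (here via `cargo-llvm-cov`) and uploaded with the `coverallsapp/github-action` action; the job layout, action versions, and paths are illustrative assumptions, not this repository's actual workflow:

```yaml
# Hypothetical CI job for evaluating Coveralls with Rust.
name: coverage-experiment
on: workflow_dispatch  # manual runs only while we evaluate it
jobs:
  coverage:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Generate an LCOV report
        run: |
          rustup component add llvm-tools-preview
          cargo install cargo-llvm-cov
          cargo llvm-cov --workspace --lcov --output-path lcov.info
      - name: Upload to Coveralls
        uses: coverallsapp/github-action@master
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          path-to-lcov: lcov.info
```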
Force-pushed from e68fb6d to 4336ecc.
Dockerfile.planner (outdated)

```
@@ -0,0 +1,8 @@
#planner stage
```
Should we keep the image with cached cargo dependencies separate from the Arch Linux based ci-image?
Unfortunately, this takes us back to the architecture that we moved away from. I don't think doing things this way is tenable. It's preferable for us to compile the binary separately, copy it into the container, and produce an image.
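As a sketch of that "compile separately, then copy" approach (the image tags, binary name, and paths are assumptions for illustration, not the project's actual Dockerfile):

```dockerfile
# Hypothetical two-stage build: compile in a full Rust image,
# then copy only the resulting binary into a slim runtime image.
FROM rust:1.64 AS builder
WORKDIR /iroha
COPY . .
RUN cargo build --release

FROM debian:bullseye-slim
# The binary name is assumed; substitute the actual crate's binary.
COPY --from=builder /iroha/target/release/iroha /usr/local/bin/iroha
ENTRYPOINT ["iroha"]
```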
Do you mean to combine the Arch Linux based ci-image and `cargo chef` in one Dockerfile.base file? Or to join `cargo chef` and `cargo build` together? In this case, we have to pay attention to the `Cargo.lock` change trigger and where to use it properly.
> Do you mean to combine the Arch Linux based ci-image and `cargo chef` in one Dockerfile.base file?

Yes.

> In this case, we have to pay attention to the `Cargo.lock` change trigger and where to use it properly.

We should just have a manual workflow trigger. This is needed anyway so that we can update the image when there's a new toolchain.
`Cargo.lock` changes far more often than is necessary to re-run this operation. If one line changed, it makes the compilation process take 1/600th longer, if that. Re-caching the dependencies only makes sense if we have a large rewrite and version-bump all of the external dependencies. Plus, if `Cargo.lock` is slightly outdated, the compilation will take only slightly longer.
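A minimal sketch of such a manual trigger, assuming the CI image is built and pushed by a dedicated workflow; the workflow name, registry, and tags are illustrative assumptions:

```yaml
# Hypothetical workflow: rebuild the CI base image on demand
# (e.g. after a toolchain update or a large dependency bump).
name: rebuild-ci-image
on:
  workflow_dispatch:  # manual trigger from the Actions tab
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build and push the CI image
        run: |
          docker build -f Dockerfile.base -t example-registry/iroha-ci:latest .
          docker push example-registry/iroha-ci:latest
```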
> Do you mean to combine the Arch Linux based ci-image and `cargo chef` in one Dockerfile.base file?
>
> Yes.

It was combined into one Dockerfile.base.
Dockerfile (outdated)

```dockerfile
# builder stage
WORKDIR /iroha
COPY . .
RUN rm -f rust-toolchain.toml
```
Should we delete the rust-toolchain.toml file before the `cargo build` command, or before the `cargo chef prepare` && `cargo chef cook` commands in the planner stage?
We should delete it from the repository. Add it to `.gitignore` too.
Without `rust-toolchain.toml` we got this error while trying to compile the ci-image: `error: rustup could not choose a version of rustup to run, because one wasn't specified explicitly, and no default is configured.`
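For context, that rustup error means no default toolchain is configured inside the image. One possible fix (an assumption for illustration, not necessarily what this PR does) is to set a default explicitly in the Dockerfile:

```dockerfile
# Hypothetical fix: give rustup a default toolchain so builds work
# even after rust-toolchain.toml is removed from the repository.
RUN rustup default stable
```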
Force-pushed from 681a277 to 1792d43.
Force-pushed from 1792d43 to b02537f.
Description of the Change
- Add `cargo chef` caching.
- Replace `Codecov` by the `Coverall` coverage solution.

Issue
- `docker build` fails on M1 Mac #2690
- Coverage `diff` doesn't work correctly: issue Fix base coverage #2781

Benefits
- Fix the coverage `diff` by replacing it by `Coveralls`.

Possible Drawbacks
- Could the `Cargo.lock` and the `deploy profile` be a reason? @s8sato @appetrosyan
- The `Coverall` coverage solution is not really tested.