
[ci] #2690: Add cargo chef caching #2911

Conversation

@BAStos525 (Member) commented Oct 25, 2022

Description of the Change

  1. Enable caching of cargo dependencies with cargo chef (see the sketch below)
  2. Replace Codecov with Coveralls as the coverage solution
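For reference, a minimal sketch of the cargo chef pattern this change relies on; the base image tag and the /iroha workdir are illustrative assumptions, not necessarily the exact Dockerfile in this PR:

    # Sketch only: the standard cargo chef multi-stage layout.
    FROM rust:1.65 AS chef
    RUN cargo install cargo-chef
    WORKDIR /iroha

    # planner stage: compute a recipe.json describing the dependency tree
    FROM chef AS planner
    COPY . .
    RUN cargo chef prepare --recipe-path recipe.json

    # builder stage: compile dependencies first, so this layer stays cached
    # until recipe.json changes, then build the workspace itself
    FROM chef AS builder
    COPY --from=planner /iroha/recipe.json recipe.json
    RUN cargo chef cook --release --recipe-path recipe.json
    COPY . .
    RUN cargo build --release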

Issue

  1. No caching for cargo crates: issue [BUG]: Local docker build fails on M1 Mac #2690
  2. Codecov coverage diff doesn't work correctly: issue Fix base coverage #2781

Benefits

  1. Speed up the iroha2 CI
  2. Offer a possible fix for the Codecov coverage diff by replacing it with Coveralls

Possible Drawbacks

  1. With the current PR, iroha_telemetry could not be compiled:
#20 933.9    Compiling iroha_client_cli v2.0.0-pre-rc.9 (/client_cli)
#20 934.2 error: failed to run custom build command for `iroha_telemetry v2.0.0-pre-rc.9 (/telemetry)`
#20 934.2 
#20 934.2 Caused by:
#20 934.2   process didn't exit successfully: `/target/release/build/iroha_telemetry-aee0cffb91c378a9/build-script-build` (exit status: 1)
#20 934.2   --- stderr
#20 934.2   Error: could not find repository from '/telemetry'; class=Repository (6); code=NotFound (-3)

Could the reason be in Cargo.lock and the deploy profile? @s8sato @appetrosyan
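One possible explanation, stated as an assumption: the libgit2 error ("could not find repository") suggests the iroha_telemetry build script reads git metadata, while cargo chef cook compiles dependency skeletons in a context that has no .git directory. A hedged sketch of a workaround, assuming .git is not excluded by .dockerignore:

    # Hypothetical workaround, not verified against this PR: make the repository's
    # .git directory visible before cooking, so build scripts that open the repo
    # through libgit2 can find it.
    COPY .git ./.git
    RUN cargo chef cook --release --recipe-path recipe.json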

  2. The Coveralls coverage solution has not really been tested yet.

@BAStos525 added the CI label Oct 25, 2022
@BAStos525 self-assigned this Oct 25, 2022
@github-actions bot added the iroha2-dev (The re-implementation of a BFT hyperledger in RUST) label Oct 25, 2022
codecov bot commented Oct 25, 2022

Codecov Report

Merging #2911 (e68fb6d) into iroha2-dev (a16d9c3) will decrease coverage by 5.36%.
The diff coverage is 70.32%.

❗ Current head e68fb6d differs from pull request most recent head d13e160. Consider uploading reports for the commit d13e160 to get more accurate results

@@              Coverage Diff               @@
##           iroha2-dev    #2911      +/-   ##
==============================================
- Coverage       67.61%   62.24%   -5.37%     
==============================================
  Files             140      168      +28     
  Lines           26173    30028    +3855     
==============================================
+ Hits            17696    18692     +996     
- Misses           8477    11336    +2859     
Impacted Files Coverage Δ
actor/src/actor_id.rs 90.00% <ø> (ø)
actor/src/deadlock.rs 85.93% <ø> (-14.07%) ⬇️
cli/derive/src/lib.rs 92.30% <ø> (+17.58%) ⬆️
cli/src/event.rs 0.00% <0.00%> (-41.87%) ⬇️
cli/src/main.rs 1.09% <0.00%> (-0.26%) ⬇️
cli/src/stream.rs 0.00% <ø> (-81.40%) ⬇️
client/src/http.rs 51.16% <0.00%> (+3.33%) ⬆️
client/src/http_default.rs 40.17% <0.00%> (-20.01%) ⬇️
client_cli/src/main.rs 0.25% <0.00%> (-0.01%) ⬇️
config/base/src/runtime_upgrades.rs 49.42% <0.00%> (ø)
... and 225 more


appetrosyan previously approved these changes Oct 25, 2022

@appetrosyan (Contributor) commented:

We've tried something like this before and it didn't work out. It was actually the first step in optimising the CI.

If you try to cache with cache@v3, you'll end up discarding the layer cache each time Cargo.lock changes, which isn't what you want, given that Cargo.lock can change for reasons unrelated to the dependencies.

What I suggest we do instead is continue our work on a custom CI image. If we bake the compiled dependencies into the CI image and trigger a re-creation of that image each time Cargo.lock changes after a PR is merged, we can reuse a partially compiled set of dependencies (cargo is smart enough to find them when it needs to), improve compile times for regular workflows (not just the docker image publishing step), and win on PR latency.
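A hedged sketch of that idea; the base image, package names, and paths are illustrative assumptions:

    # Hypothetical Dockerfile.base: pre-compile dependencies into a published CI
    # image, rebuilt only when Cargo.lock changes after a merge (or manually).
    FROM archlinux:base-devel
    RUN pacman -Syu --noconfirm rustup && \
        rustup default stable && \
        cargo install cargo-chef
    ENV PATH="/root/.cargo/bin:${PATH}"
    WORKDIR /iroha
    # recipe.json is produced beforehand by `cargo chef prepare`
    COPY recipe.json .
    # Compiled dependency artifacts land in /iroha/target and ship with the image,
    # so CI jobs running in this image start from warm dependencies.
    RUN cargo chef cook --release --recipe-path recipe.json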

@appetrosyan (Contributor) commented:

As for the coverage diff… @6r1d and I came to the conclusion that codecov.io has a lot of fancy features that we can't use, and is far too unstable. We want to try using Coveralls.

@BAStos525 (Member, Author) commented:

What if we revert to keeping ci-image as a dedicated docker image that is compiled only when its corresponding Dockerfile changes? From the perspective of building iroha on an M1, it probably doesn't matter whether the builder is presented as a cache or as a docker image pulled from DockerHub; it should not be compiled on the M1 in either case. Along with this, we can try to cache cargo dependencies. That would also help us avoid exceeding the GitHub cache size limit.

@BAStos525 (Member, Author) commented:

> As for the coverage diff… @6r1d and I came to the conclusion that codecov.io has a lot of fancy features that we can't use, and is far too unstable. We want to try using Coveralls.

If it works correctly with the Rust language… I just didn't find Rust in the list of supported languages on the Coveralls site.

@appetrosyan (Contributor) commented:

> If it works correctly with the Rust language… I just didn't find Rust in the list of supported languages on the Coveralls site.

Good point. It is supported, but we'd need to run some tests before we make the switch, particularly to know if it accurately shows coverage changes.

https://github.com/corebreaker/coveralls-rust

@BAStos525 force-pushed the feature/DOPS-2045/cargo-caching branch 3 times, most recently from e68fb6d to 4336ecc on October 31, 2022 09:08
@@ -0,0 +1,8 @@
#planner stage
@BAStos525 (Member, Author) commented:

Should we keep the image with cargo-cached dependencies separate from the Arch Linux based ci-image?

@appetrosyan (Contributor) commented:

Unfortunately, this takes us back to the architecture that we moved away from. I don't think that doing things this way is tenable. It's preferable for us to compile the binary separately and simply copy it into the container to produce an image.
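A minimal sketch of that approach, assuming the binary is named iroha and a Debian runtime base (both illustrative assumptions):

    # Hypothetical sketch: compile in a builder stage, then copy only the binary
    # into a small runtime image.
    FROM rust:1.65 AS builder
    WORKDIR /iroha
    COPY . .
    RUN cargo build --release

    FROM debian:bullseye-slim
    COPY --from=builder /iroha/target/release/iroha /usr/local/bin/iroha
    ENTRYPOINT ["/usr/local/bin/iroha"]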

@BAStos525 (Member, Author) commented Nov 21, 2022:

Do you mean to combine the Arch based ci-image and cargo chef in one Dockerfile.base file? Or to join cargo chef and cargo build together? In that case, we have to pay attention to the Cargo.lock change trigger and where to apply it properly.

@appetrosyan (Contributor) commented:

> Do you mean to combine the Arch based ci-image and cargo chef in one Dockerfile.base file?

Yes.

> In that case, we have to pay attention to the Cargo.lock change trigger and where to apply it properly.

We should just have a manual workflow trigger. This is needed anyway so that we can update the image when there's a new toolchain.

Cargo.lock changes far more often than is necessary to re-run this operation. If one line changes, it makes the compilation process take 1/600th longer, if that. Re-caching the dependencies only makes sense when we have a large rewrite and bump the versions of all the external dependencies. Plus, if Cargo.lock is slightly outdated, the compilation will take only slightly longer.

@BAStos525 (Member, Author) commented:

> Do you mean to combine the Arch based ci-image and cargo chef in one Dockerfile.base file?
>
> Yes.

It has been combined into one Dockerfile.base.

Review comment on Dockerfile (outdated):

# builder stage
WORKDIR /iroha
# Copy the full source tree into the build layer
COPY . .
# Drop the committed toolchain pin so the image's own toolchain is used (see below)
RUN rm -f rust-toolchain.toml
@BAStos525 (Member, Author) commented:

Should we delete the rust-toolchain.toml file before the cargo build command, or before the cargo chef prepare && cargo chef cook commands in the planner stage?

@appetrosyan (Contributor) commented:

We should delete it from the repository. Add it to .gitignore too.

@BAStos525 (Member, Author) commented:

Without rust-toolchain.toml we got this error while trying to compile the ci-image: error: rustup could not choose a version of rustup to run, because one wasn't specified explicitly, and no default is configured.
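A hedged sketch of one way around this, assuming a default toolchain baked into the image is acceptable (the project may pin a specific nightly rather than stable):

    # Hypothetical fix: give rustup an explicit default inside the image, so removing
    # rust-toolchain.toml no longer leaves rustup without a toolchain to run.
    RUN rustup toolchain install stable && rustup default stable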

@BAStos525 force-pushed the feature/DOPS-2045/cargo-caching branch from 681a277 to 1792d43 on November 7, 2022 12:27
@appetrosyan force-pushed the feature/DOPS-2045/cargo-caching branch from 1792d43 to b02537f on November 21, 2022 16:17
@appetrosyan marked this pull request as draft on November 22, 2022 08:03
Labels
CI iroha2-dev The re-implementation of a BFT hyperledger in RUST