
Immense build time #93

Closed
StarpTech opened this issue Nov 16, 2021 · 17 comments · Fixed by #94

@StarpTech
Contributor

StarpTech commented Nov 16, 2021

Hi, first of all, thanks for the great tooling. We use it as a library in our tests. In our POC, we have observed an enormous build time in GitHub Actions. I'm not sure if it is related to Rust in general or if we can optimize it further.

@voxpelli

Are you using actions/cache to cache anything in between your builds? https://github.com/actions/cache

See eg: https://stackoverflow.com/a/64702025/20667

That could probably help if you're not already doing it.

If you could share your full workflow file that could probably also help.

@StarpTech
Contributor Author

StarpTech commented Nov 16, 2021

Hi, yes, we are doing that. We build the compute service and run integration tests with Viceroy. CI takes about 7 minutes with one simple test.

name: CI
on:
  push:
    branches-ignore:
      - main
    paths-ignore:
      - '**.md'
      - '*.md'

jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2

    - name: Install Rust toolchain
      uses: actions-rs/toolchain@v1
      with:
          toolchain: 1.54.0 # current Rust toolchain for Compute@Edge

    - name: Install Compute@Edge tooling
      uses: fastly/compute-actions/setup@main

    - uses: Swatinem/rust-cache@v1

    - name: Run tests
      working-directory: integration_tests
      run: |
          rm -f service/bin/main.wasm
          cd service/ && fastly compute build --skip-verification
          cargo test -- --show-output

@cratelyn
Contributor

I'm not familiar with the Swatinem/rust-cache@v1 action, but I see some Rust-specific examples in the actions/cache documentation that @voxpelli mentioned above: https://github.com/actions/cache/blob/main/examples.md#rust---cargo

To save you a click, their example looks like this:

- uses: actions/cache@v2
  with:
    path: |
      ~/.cargo/bin/
      ~/.cargo/registry/index/
      ~/.cargo/registry/cache/
      ~/.cargo/git/db/
      target/
    key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}

Perhaps it would be worth giving this one a try, and seeing whether it improves your build times @StarpTech?

@StarpTech
Contributor Author

Hi @cratelyn, thanks. I'll give it a shot.

@Integralist
Contributor

Also, as an interesting side note: if you add an id: example_id to the cache step, then other 'steps' in the Actions workflow can check whether there was a cache hit or miss, which lets a separate step be executed or skipped...

- name: "do something if cache miss"
  if: steps.example_id.outputs.cache-hit != 'true'
  run: make do_the_thing
  shell: bash
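
For reference, the id that the conditional step refers to sits on the caching step itself; a minimal sketch (the step id, cache path, and make target are illustrative, not from the workflow above):

```yaml
- uses: actions/cache@v2
  id: example_id
  with:
    path: target/
    key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}

# actions/cache exposes a cache-hit output, so this step only
# runs when the key above missed.
- name: "do something if cache miss"
  if: steps.example_id.outputs.cache-hit != 'true'
  run: make do_the_thing
  shell: bash
```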

@StarpTech
Contributor Author

StarpTech commented Nov 16, 2021

Thanks @cratelyn and @Integralist. The snippet worked much better than the custom action. I could reduce it to around 2:20. Do you have an idea why the tests aren't running in parallel and take so long?

test result: ok. 3 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 74.86s

I have a very simple test setup. In each test, we build a new execution context. We use #[tokio::test].

@StarpTech
Contributor Author

StarpTech commented Nov 16, 2021

99% of the time is spent in the snippet below. The compute service is pretty dumb, with one call to a local backend. I was able to rule out timeouts.

// Load the wasm module into an execution context
let mut ctx = ExecuteCtx::new("./../compute_service/bin/main.wasm")?
    .with_log_stderr(true)
    .with_log_stdout(true);

let config_path = "./../compute_service/fastly.toml";

let cfg = format!(
    r#"
# This file describes a Fastly Compute@Edge package. To learn more visit:
# https://developer.fastly.com/reference/fastly-toml/

language = "rust"
manifest_version = 2
name = "service"

[local_server]
    [local_server.backends]
        [local_server.backends.content_api]
            url = "{}"
"#,
    port
);

let config = FastlyConfig::from_str(&cfg).unwrap();
let backends = config.backends();
let dictionaries = config.dictionaries();
let backend_names = itertools::join(backends.keys(), ", ");

ctx = ctx
    .with_backends(backends.clone())
    .with_dictionaries(dictionaries.clone())
    .with_config_path(PathBuf::from(config_path));

Ok(ctx
    .handle_request(req, "127.0.0.1".parse().unwrap())
    .await?)
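
Since the expensive part is rebuilding the context in every test, one generic mitigation is to pay the setup cost once and share the result across tests. This is only a sketch of the pattern using std::sync::OnceLock, with a hypothetical stand-in struct; it would only carry over to Viceroy if ExecuteCtx can actually be shared or cloned across tests:

```rust
use std::sync::OnceLock;

// Stand-in for an expensive, immutable setup step (e.g. compiling a
// wasm module). With OnceLock, the initializer runs at most once, no
// matter how many tests call shared_ctx().
#[derive(Debug)]
struct SharedCtx {
    module_path: &'static str,
}

static CTX: OnceLock<SharedCtx> = OnceLock::new();

fn shared_ctx() -> &'static SharedCtx {
    CTX.get_or_init(|| SharedCtx {
        module_path: "./../compute_service/bin/main.wasm",
    })
}
```

Every test then borrows the same static instance instead of rebuilding it.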

@cratelyn
Contributor

> Thanks @cratelyn and @Integralist. The snippet worked much better than the custom action. I could reduce it to around 2:20. Do you have an idea why the tests aren't running in parallel and take so long?
>
> test result: ok. 3 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 74.86s
>
> I have a very simple test setup. In each test, we build a new execution context. We use #[tokio::test].

ah! #[tokio::test] helped me spot the problem.

according to the documentation there, the default test runtime is single-threaded. you might have luck parallelizing this with something like...

#[tokio::test(flavor = "multi_thread", worker_threads = 1)]
async fn example_test() {
    // some tests
}

(where 1 is replaced with some other number)

@StarpTech
Contributor Author

I already tried that without success 😄

@mgattozzi
Contributor

Rust runs all tests in parallel by default, unless you do something like --test-threads=1 in your invocation of cargo test, so I'd think something else is causing it to be slow. I could be wrong though.

@StarpTech
Contributor Author

StarpTech commented Nov 16, 2021

@mgattozzi exactly, that is my conclusion too. I removed everything except executing a simple request against the service. The tests are still running sequentially.

@pchickey
Contributor

ExecuteCtx::new is running the Cranelift code generator to produce native code for the WebAssembly module. Cranelift is known to be extremely slow when compiled in debug mode. Could you see if cargo test --release is faster?
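
If always testing with --release is undesirable, Cargo's profile overrides offer a middle ground: compile just the slow dependency with optimizations while the test crate itself stays in debug mode. A sketch for Cargo.toml, assuming cranelift-codegen is where the time goes:

```toml
# Optimize only the code generator in dev builds; everything else
# keeps fast, unoptimized debug compilation.
[profile.dev.package.cranelift-codegen]
opt-level = 3
```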

@voxpelli

Are GitHub Action runners multi-core?

You can launch many parallel steps by adding a matrix, if that would help. See eg: https://github.com/voxpelli/eslint-config/blob/1d16466a8e5abdd5aac10d3d64dfa1564cf2c65f/.github/workflows/external.yml#L10-L27

That is, assuming one can instruct cargo test to run individual tests.

@StarpTech
Contributor Author

StarpTech commented Nov 16, 2021

Wow @pchickey you're right. Running in release mode reduces it from 8s to 0.34s.

@StarpTech
Contributor Author

I can create a PR to improve the docs 😄

@StarpTech
Contributor Author

Could anybody review the linked PR? Thanks.

@Integralist
Contributor

Hi @StarpTech 👋🏻

The PR looks good to me, but I'll let the Viceroy team give it an official thumbs up when they're online (that particular team is in a US timezone).

Thanks again! ❤️

cratelyn pushed a commit that referenced this issue Nov 17, 2021