This repository has been archived by the owner on Apr 18, 2022. It is now read-only.

GitHub Actions CI, Take 3 #2382

Merged
CleanCut merged 13 commits into master from github-actions3 on Aug 10, 2020

Conversation

CleanCut
Member

@CleanCut CleanCut commented Jul 21, 2020

To Do:

  • Matrix: [mac, linux, windows] * [stable, beta, nightly] (see the workflow sketch after this list)
    • cargo fmt --check -- only on stable
    • cargo clippy -- only on stable
    • cargo test
      • code compiles
      • tests & API doctests run
    • mdbook build book
    • mdbook test -L ./target/debug/deps book -- only on stable
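
A rough sketch of what that matrix could look like in a workflow file. This is illustrative only: job names, action versions, and the exact commands are assumptions, not the actual workflow in this PR.

```yaml
# illustrative sketch, not the actual .github/workflows file in this PR
name: CI

on:
  push:
    branches: [master]
  pull_request:
    branches: [master]

jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [macos-latest, ubuntu-latest, windows-latest]
        toolchain: [stable, beta, nightly]
    steps:
      - uses: actions/checkout@v2
      - uses: actions-rs/toolchain@v1
        with:
          toolchain: ${{ matrix.toolchain }}
          override: true
      # every leg compiles the code and runs tests plus API doctests
      - run: cargo test --workspace
      # the stable-only checks (fmt, clippy, mdbook test) are added as further
      # steps guarded by: if: matrix.toolchain == 'stable'
```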

To Do Later separate from this PR:

  • Use mdbook-linkcheck to detect broken book links
  • Use cargo-deadlinks to detect broken links
  • Speed up checks. Goal: 10 minutes or less
    • use sccache with S3 for cargo test
    • use self-hosted runners
    • Experiment with various compiler settings for reducing compile times and/or sizes
  • Automatically push rendered docs to website
    • master merges
    • releases
  • Automatically push rendered book to website
    • master merges
    • releases
  • Find way to post comments to PRs with guidance for figuring out how to fix some CI failures. "Clippy failed, please run cargo clippy and fix all errors..." -- I don't know if this is feasible
  • Find some way to indicate failures on nightly without making the build look like a red X. (You can let people merge by not requiring a specific check to pass, but the build will be a big red failure even if the failing check wasn't required. That's not a good experience. ☹️)

@CleanCut

This comment has been minimized.

@CleanCut CleanCut changed the title from "Restore build and test steps" to "GitHub Actions CI, Take 3" on Jul 21, 2020
@Blisto91
Contributor

Blisto91 commented Jul 23, 2020

@CleanCut is it because it doesn't run with the new [ $default-branch ]?
I haven't been able to make it work with it either, but it still works fine with - master.
Even though $default-branch is already in the starter workflows, the changelog entry was only published yesterday.
So it might not be fully enabled yet, or it's bugged.

on:
  push:
    branches: 
      - master
  pull_request:
    branches: 
      - master

This should run for pushes and PRs against the master branch.

@CleanCut
Member Author

@Blisto91 Woot! That did it. I must have grabbed that macro feature out of the template before it was supported. Or maybe we've gotten off the legacy billing plan and gotten Actions enabled between now and then. Either way, we can now move forward. 🎉

@Blisto91
Contributor

Blisto91 commented Jul 26, 2020

Would this help with allowing it to fail on nightly?
https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idcontinue-on-error
Edit:
Though the problem with that is it would show a green tick on nightly even if it failed, and one would have to dig through the log.
Probably need something like this:
actions/runner#2347
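
For reference, the continue-on-error approach mentioned above can be scoped to just the nightly legs with an expression. A minimal sketch (job and step layout are illustrative, toolchain installation omitted), with the caveat from the edit above that a failing nightly leg then just shows a green tick:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        toolchain: [stable, beta, nightly]
    # nightly failures won't turn the check red, but they also won't be flagged at all
    continue-on-error: ${{ matrix.toolchain == 'nightly' }}
    steps:
      - uses: actions/checkout@v2
      - run: cargo test --workspace
```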

@Blisto91
Contributor

Blisto91 commented Jul 26, 2020

Extra note.
By installing Rust nightly with the rustfmt and clippy components you might not get the latest nightly, since the components aren't always included.
https://rust-lang.github.io/rustup-components-history/x86_64-unknown-linux-gnu.html

If they aren't available in the latest release, the toolchain action will downgrade until it finds the most recent version that includes the components.
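
For reference, the install pattern being described looks roughly like this (a sketch; the action version and step shape are assumptions, not the PR's actual workflow):

```yaml
- uses: actions-rs/toolchain@v1
  with:
    toolchain: nightly
    override: true
    # if today's nightly doesn't ship these, the action falls back to an
    # older nightly that does
    components: rustfmt, clippy
```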

@CleanCut
Member Author

CleanCut commented Jul 27, 2020

Would this help with allowing it to fail on nightly?

I came to the same conclusion as you. GitHub Actions doesn't yet seem to have any way to let something fail without making it look like a big, red failure. I don't know what else we can do other than not require the nightly checks to pass, cross our fingers, and hope they don't fail often enough to make it look like we can't merge even though we can.

By installing Rust nightly with the rustfmt and clippy components you might not get the latest nightly, since the components aren't always included.

Good catch! I had no idea. I modified the setup so 1) we don't use the action to install the components, and 2) we only install the components (and use them) on stable, in later steps.
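
Roughly, those later stable-only steps could look like this (step names are illustrative, not the exact workflow):

```yaml
- name: Install rustfmt and clippy (stable only)
  if: matrix.toolchain == 'stable'
  run: rustup component add rustfmt clippy

- name: Format and lint checks (stable only)
  if: matrix.toolchain == 'stable'
  run: |
    cargo fmt --all -- --check
    cargo clippy
```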


I've put a lot of effort over the last couple of days into getting sccache working on macOS/Linux using "local" storage and GitHub Actions' cache. If I can get that working, I'll talk to @fhaynes about getting access to an Amethyst S3 bucket to use for sccache on all OSes.
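
For the record, the local-storage variant being attempted looks roughly like this. This is a sketch under assumptions: the cache path, key, and the earlier sccache install step are placeholders, not the exact workflow.

```yaml
# Linux/macOS example; assumes sccache itself was installed in an earlier step
- name: Cache sccache output
  uses: actions/cache@v2
  with:
    path: /tmp/sccache
    key: sccache-${{ runner.os }}-${{ hashFiles('**/Cargo.lock') }}

- name: Run tests through sccache
  run: cargo test --workspace
  env:
    RUSTC_WRAPPER: sccache
    SCCACHE_DIR: /tmp/sccache
```

The S3 variant would drop the cache step and SCCACHE_DIR, and instead set SCCACHE_BUCKET plus AWS credentials.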

I feel like we're getting a lot more traction with GitHub Actions vs. GitLab CI -- though admittedly the lower friction of direct integration with GitHub is a huge part of that.

@Blisto91
Contributor

Looking good! Your clean cut on this CI stuff is God's work!

Regarding sccache, how will it handle cleaning or pruning the cache over time?
When I looked into the subject recently, I found that Rust doesn't currently have a good, bulletproof way to clean unused build artifacts.
So when dependencies change, updates are released, or new toolchains come out that need a total recompile, the target folder starts accumulating a lot of unused artifacts.
Though I'm still not that familiar with how sccache works, so I don't know whether it has a built-in mechanism to deal with such things?

Not that it would be a problem in the short or maybe even mid term.

@onelson

onelson commented Jul 27, 2020

@Blisto91 Per the sccache readme

The default cache size is 10 gigabytes. To change this, set SCCACHE_CACHE_SIZE, for example SCCACHE_CACHE_SIZE="1G"

Since we're using an FS cache, it looks like it's doing an LRU of sorts, probably by checking mtimes. Eviction of the least recently used artifacts would happen as you approach whatever the limit is set to.

Since you're relying on GitHub's cache for your disk, you also have whatever the TTL is there as an extra knob you can turn. I'm not sure what GitHub's cache policies are like, but if they have a hard limit for artifacts you may be looking at a cold start from time to time whenever they dump the volume out from under you.
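
If that knob needs turning in the workflow, it's just an environment variable, e.g. (the value here is an arbitrary example):

```yaml
env:
  SCCACHE_CACHE_SIZE: "2G"  # keep the local cache comfortably under the hosted cache limit
```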

@Blisto91
Contributor

Blisto91 commented Jul 27, 2020

Ah, I see.
It probably works fine, since sccache seems pretty popular.
Cargo itself seems to not always update mtimes, because there are some problems with it on some OSes?
It's one of the reasons a tool like cargo-sweep doesn't work so well.

GitHub's own cache storage has a hard limit of 5 GB, so even with the impressive Zstd compression you quickly hit a wall where it will start evicting caches until it is under the limit. It will also evict caches that haven't been used for a week.
The examples folder in particular gets very large, and trying to prune only the main crate's artifacts can produce dependency failures on subsequent runs. Current stable Rust has some quirks in that regard.
Storage on the self-hosted runner itself, or in the cloud like S3, is probably required for good caching currently.

Edit: If Amethyst at some point decides it is too much hassle to maintain its own self-hosted runner, then I think acceptable performance could still be achieved by using the shared runners in combination with sccache and cloud storage.

@CleanCut
Member Author

@Blisto91 @onelson I haven't been able to get sccache to actually run as far as I can tell. That's my next task, but if you can see what I'm doing wrong please point it out! Theoretically, everything is set up perfectly...except sccache is not actually being run.

The GitHub-cache-for-local-files side of things is somewhat irrelevant, because I'm switching to S3 for sccache storage as soon as I can verify that sccache actually runs at all and gives the speedup we expect. Though it was very interesting to learn about the GitHub cache side of things.

@onelson

onelson commented Jul 28, 2020

I haven't been able to get sccache to actually run as far as I can tell.

Wondering how you were checking this. There's no additional output when used the way you are using it now, so I guess you'd either need to check the cache dir to see if there were files in there, or reason about the overall build times.

If the "wrapper" env var is set, then the server is started and stopped around a cargo build-invoking command automatically, but this cuts out of the loop for diagnostics since stats appear to be tracked in-process while the server is running.

I would recommend bookending your commands with sccache --start-server and sccache --stop-server since that way you will actually get a printout of the cache stats.

It'll report overall cache size and hit/miss ratios.
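
Concretely, that bookending could look something like this (step names are illustrative and assume sccache is already installed and set as RUSTC_WRAPPER):

```yaml
- name: Start sccache server
  run: sccache --start-server

- name: Build and test
  run: cargo test --workspace
  env:
    RUSTC_WRAPPER: sccache

- name: Print sccache stats and stop server
  run: |
    sccache --show-stats
    sccache --stop-server
```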

@CleanCut
Member Author

Wondering how you were checking this.

I've been observing cache creation time. "1 second" to create and upload a tarfile of all dependencies? No cache created. Also cache restore time shows a 22-byte tarfile.

Using sccache --start-server and sccache --stop-server is a great idea 👍

@Blisto91
Contributor

Blisto91 commented Jul 28, 2020

So the problem seems to be that the variables in the OS matrix don't get set as environment variables, and you can't set them directly per matrix OS as far as I can see.
But you can set them per job and do something like this:

  name: Tests
  runs-on: ${{ matrix.os }}
  strategy:
    fail-fast: true
    matrix:
      os: [macos-latest, windows-latest, ubuntu-latest]
      toolchain: [stable, beta, nightly]
      include:
        - os: macos-latest
          FEATURES: metal
          RUSTC_WRAPPER: /usr/local/bin/sccache
          SCCACHE_DIR: ~/.sccache
        - os: windows-latest
          FEATURES: vulkan
        - os: ubuntu-latest
          FEATURES: vulkan
          RUSTC_WRAPPER: ~/.bin/sccache
          SCCACHE_DIR: ~/.sccache

  env:
    RUSTC_WRAPPER: ${{ matrix.RUSTC_WRAPPER }}
    SCCACHE_DIR: ${{ matrix.SCCACHE_DIR }}

This will add them on top of the global env:, so it won't get overwritten or anything.

Note that the job env: doesn't have to be at the bottom.
Also note that if one of the OSes doesn't have RUSTC_WRAPPER: defined, it will just add the variable with an empty value, but I don't think that would break anything?
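
A step can then read the per-OS values either from that job-level env or straight from the matrix context, e.g. (the exact cargo flags are whatever the project already uses; this only shows the lookup):

```yaml
- name: Run tests
  run: cargo test --workspace --features ${{ matrix.FEATURES }}
```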

@CleanCut
Member Author

@fhaynes For the first time ever, GitHub has released a public product roadmap. On that roadmap, there is an item in the Jul-Sep column for Actions: Pull requests from private forks run Actions workflows. I think that alleviates the last concern I had about using GitHub Actions going forward, and my disposition is now to focus on making the GitHub Actions workflow work well.

@CleanCut CleanCut mentioned this pull request Jul 28, 2020
@CleanCut
Member Author

Sigh. I discovered that I had omitted the "--workspace" option, so I was only running tests against the top-level amethyst crate. Now that I've enabled all tests, they aren't passing. Naturally. Tomorrow's task. Sheesh. Seemed like the mdbook part was nearly there.

@Blisto91
Contributor

Blisto91 commented Jul 31, 2020

The failures in the latest workflow are because the Linux virtual environment doesn't have any sound devices.
Installing pulseaudio along with the other Linux deps seems to make it work.
pulseaudio will set up a dummy sound device on installation.
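
Something along these lines in the Linux setup step should do it (only the pulseaudio package is the suggestion here; the rest of the apt line is whatever system deps the crate already installs):

```yaml
- name: Install Linux dependencies
  if: runner.os == 'Linux'
  run: |
    sudo apt-get update
    # pulseaudio sets up a dummy sound device on installation
    sudo apt-get install -y pulseaudio
```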

Edit:
The workflow before that one failed because the timing test took 1.116179945s instead of the max allowed 1.1s.
But I'm not sure about that one. Maybe the runner hung for a split second.

@fosskers
Contributor

fosskers commented Jul 31, 2020

Was https://github.com/peaceiris/actions-mdbook considered for getting the book to build? It can also auto-publish to GitHub Pages.

@Blisto91
Contributor

@fosskers The PR currently uses it to fetch the latest mdbook.
Amethyst doesn't have a GitHub Pages page as far as I'm aware.

@Blisto91
Contributor

Blisto91 commented Aug 4, 2020

It's weird that the timing test seems to be randomly failing. Haven't seen it before during my own testing.
Will try to look into it.
Edit:
So the reason I haven't seen it is that it only seems to be a thing on Mac, and I normally test on Ubuntu.

In the tests where we check that the stopwatch reports the correct numbers, we use a thread::sleep.

watch.start();
thread::sleep(Duration::from_secs(DURATION));
watch.stop();

This seems to fluctuate somewhere in the range of 1.003 to 1.130 seconds on Mac.
On Windows and Ubuntu the timings are much more consistent, down in the 1.000 to 1.005 range.

It does not seem to be related to any Amethyst code, because using std timing methods shows the same weirdness.

let now = Instant::now();
thread::sleep(Duration::from_secs(DURATION));
let elapsed = now.elapsed().as_secs() as f64 + now.elapsed().subsec_nanos() as f64 * 1e-9;

Using spin-sleep yielded the same results.

Some extra weirdness is that the fluctuation seems to be consistent within the same run.
I added an extra test that ran the std timing methods next to the Amethyst stopwatch.
When the timing showed 1.035 or 1.110, etc., in one of the unit tests, it would also be almost the same in the other one.
This was with spin-sleep, just for info.

Investigation continues.

@Blisto91
Contributor

Blisto91 commented Aug 4, 2020

So the timing issue seems to be with thread::sleep specifically.

I looked more into spin-sleep, and the default native accuracy it sets outside Windows is 125µs, which means it will do a thread::sleep until 125µs before the target and spin the rest.
If I spin all the way by setting the native accuracy to 1s, then the timing is picture-perfect, with a consistent overshoot of only a few microseconds.

let sleeper = spin_sleep::SpinSleeper::new(1_000_000_000);

watch.start();
sleeper.sleep(Duration::from_secs(DURATION));
watch.stop();

Using the spinning part directly also gives the same picture-perfect timing:

let duration = Duration::new(DURATION, 0);
let now = Instant::now();

watch.start();
while now.elapsed() < duration {
   thread::yield_now();
}
watch.stop();

Trusting thread::sleep until 150ms before the target (by setting the native accuracy) also seemed to yield the same results.
But when I tried with a native accuracy of 50ms, I had one run that went up to 1.084s.

I'm still not sure why the normal sleep function behaves this way, and I don't own a Mac myself to test on.
Some people might have performance issues on the GitHub Mac runner, but I'm not certain that it is related.

@CleanCut
Member Author

CleanCut commented Aug 4, 2020

@Blisto91 I think the Mac runners are more oversubscribed and prone to spiky performance. I see three viable options:

  • Just boost UNCERTAINTY and be done with it.
  • Conditionally boost UNCERTAINTY on Mac CI via env vars.
  • Increase the precision of the operation.

I hesitate to do the first because then if we get some performance regression, we won't notice it. I hesitate to do the third via spinning, because that can really affect CPU usage.

My vote is the second option. What are your thoughts?
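
For the second option, one way to wire it up would be the same matrix/env trick shown earlier in this thread: export a looser bound only on the macOS legs and have the timing test read it. A sketch with a made-up name (TIMING_UNCERTAINTY is hypothetical, not an existing Amethyst constant):

```yaml
strategy:
  matrix:
    os: [macos-latest, ubuntu-latest, windows-latest]
    include:
      - os: macos-latest
        # hypothetical knob the timing test would read from the environment
        TIMING_UNCERTAINTY: "0.3"
env:
  TIMING_UNCERTAINTY: ${{ matrix.TIMING_UNCERTAINTY }}
```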

@Blisto91
Contributor

Blisto91 commented Aug 5, 2020

I think upping the uncertainty on just Mac is fine for now. I would also not be happy about a general increase.
It could always be looked at again on the self-hosted runner.

@CleanCut
Member Author

CleanCut commented Aug 7, 2020

❓❓❓ The latest run failed to compile an mdbook test from a book file that doesn't exist in this branch. Maybe GitHub Actions messed up and checked out the wrong branch? Or didn't clean up from an earlier run? I can't think of anything to do other than make a new change and see if it happens again.

@CleanCut CleanCut mentioned this pull request Aug 7, 2020
@CleanCut
Member Author

CleanCut commented Aug 7, 2020

We're about ready to merge! I'm excited to get CI working for folks in general again. I split out all the to-do items I think can be done later and copied them into this issue: #2407

@Blisto91
Contributor

Blisto91 commented Aug 7, 2020

Is it weird that I'm just as excited about the CI work as I am about the legion port? 😁

@CleanCut
Member Author

CleanCut commented Aug 7, 2020

@Blisto91 I'm excited too!

All that's blocking us, as far as I know, is these tile book files showing up in this branch only in CI (the files aren't in this branch!!!) and breaking everything. I thought it was just a one-time fluke, but the next run failed on it as well! 😡 What the heck??? I'm going to bed. Maybe the world will be saner tomorrow.

@Blisto91
Contributor

Blisto91 commented Aug 7, 2020

Thinking out loud here, so if I misunderstand something please correct me.

The CI with the Checkout action doesn't run on your PR as it exists in your branch. It runs on a new virtual version where your changes are already merged into the target branch (master in this case).


So the version we test on here is actually the current master branch, including new book pages and all, with your branch changes merged into it.
The CI only verifies against the target branch as it existed when the workflow was run.
So if your last commit was a week ago, and since then there have been several changes on master, it might fail once merged because the changes in the PR were tested against an older version.

The reason some projects use Bors is that it will put all PRs waiting for merge in a queue and merge them one by one. When one in the queue has been merged, the next one will be tested against the new target version before merging.
If a PR's CI fails against the latest target version, then it won't be merged.

Now, why this specifically fails with the new tiles examples, I don't know.
But I'll try to look into it!

@bors
Contributor

bors bot commented Aug 10, 2020

Canceled.

@CleanCut
Member Author

bors r+

bors bot added a commit that referenced this pull request Aug 10, 2020
2382: GitHub Actions CI, Take 3 r=CleanCut a=CleanCut

## To Do:

- Matrix: `[mac, linux, windows] * [stable, beta, nightly]`
  - [x] `cargo fmt --check` -- only on stable
  - [x] `cargo clippy` -- only on stable
  - [x] `cargo test`
    - [x] code compiles
    - [x] tests & API doctests run
  - [x] `mdbook build book`
  - [x] `mdbook test -L ./target/debug/deps book` -- only on stable

## To Do Later separate from this PR:
- [ ] Use [`mdbook-linkcheck`](https://github.com/Michael-F-Bryan/mdbook-linkcheck) to detect broken book links
- [ ] Use [`cargo-deadlinks`](https://github.com/deadlinks/cargo-deadlinks) to detect broken links
- [ ] Speed up checks.  Goal: 10 minutes or less
  - [ ] use `sccache` with S3 for `cargo test`
  - [ ] use self-hosted runners
  - [ ] Experiment with various compiler settings for reducing compile times and/or sizes
- [ ] Automatically push rendered docs to website
  - [ ] master merges
  - [ ] releases
- [ ] Automatically push rendered book to website
  - [ ] master merges
  - [ ] releases
- [ ] Find way to post comments to PRs with guidance for figuring out how to fix some CI failures. "Clippy failed, please run `cargo clippy` and fix all errors..." -- _I don't know if this is feasible_
- [ ] Find some way to indicate failures on `nightly` without making the build look like a red X. (You _can_ let people merge by not requiring a specific check to pass, but the build will be a big red failure even if the failing check wasn't required. That's not a good experience. ☹️)

Co-authored-by: Nathan Stocks <cleancut@github.com>
@CleanCut
Member Author

Adding staging and trying branches to trigger on pushes worked. status = [ "CI / Test" ] in bors.toml did not.

I'm going to run through the following status values again in this order to see if any of them work: Test, CI, CI%.

@bors
Contributor

bors bot commented Aug 10, 2020

Canceled.

@CleanCut
Member Author

bors r+

bors bot added a commit that referenced this pull request Aug 10, 2020
2382: GitHub Actions CI, Take 3 r=CleanCut a=CleanCut

@bors
Contributor

bors bot commented Aug 10, 2020

Canceled.

@CleanCut
Member Author

bors r+

bors bot added a commit that referenced this pull request Aug 10, 2020
2382: GitHub Actions CI, Take 3 r=CleanCut a=CleanCut

@bors
Contributor

bors bot commented Aug 10, 2020

Canceled.

@CleanCut
Member Author

bors r+

bors bot added a commit that referenced this pull request Aug 10, 2020
2382: GitHub Actions CI, Take 3 r=CleanCut a=CleanCut

@bors
Contributor

bors bot commented Aug 10, 2020

Canceled.

@CleanCut
Member Author

bors r+

bors bot pushed a commit that referenced this pull request Aug 10, 2020
2382: GitHub Actions CI, Take 3 r=CleanCut a=CleanCut

@Blisto91
Contributor

I don't know anything about bors and the documentation I've found isn't that great.
But bors might not be able to merge something that edits a workflow file.

@bors
Contributor

bors bot commented Aug 10, 2020

Timed out.

@CleanCut CleanCut self-assigned this Aug 10, 2020
@CleanCut
Member Author

I don't know anything about bors and the documentation I've found isn't that great.
But bors might not be able to merge something that edits a workflow file.

Ahh, nice find. Okay, I'm going to revert to what I think ought to work for bors and manually merge this, and then I'll open a separate PR if bors still needs to be fixed. Hopefully that won't be necessary 🤞, but I'll make sure not to touch the workflow file in that PR if it is.

...which _should_ work. We can't use bors to merge this particular branch because we've edited a workflow. See https://forum.bors.tech/t/resource-not-accessible-by-integration/408/7
@CleanCut CleanCut merged commit 62e58db into master Aug 10, 2020
@CleanCut CleanCut deleted the github-actions3 branch August 10, 2020 18:51
@erlend-sh erlend-sh mentioned this pull request Aug 19, 2020