
Separate test and build by exploiting checks API #848

Closed
mcking65 opened this issue Aug 21, 2018 · 6 comments
Labels
enhancement: Any addition or improvement that doesn't fix a code bug or prose inaccuracy
regression-testing: Related to AVA regression tests of example pages or AVA framework implementation within repo

Comments

@mcking65
Contributor

While our current integration of testing into our Travis CI script is a big step forward from where we started, it is becoming pretty cumbersome, especially when reviewing PRs that have failures. And, as we integrate more tooling, like CSpell and our WebdriverIO regression tests alongside the Nu HTML Checker and ESLint, our more robust testing, which ultimately makes PR review more thorough and efficient, will not necessarily make it easier ... at least not for me as a screen reader user.

To understand the source of failures from our current monolithic beast of a process, you have to scan a Travis job log containing multiple thousands of lines of text that are wickedly hard to listen to with a screen reader. I can't imagine they are a ton better for people who can see them. You still have to scroll to the end and then read back a ways to the end of setup to find the first signs of failure. The log would be even harder to scan if we allowed the script to proceed to the next type of testing after one type fails.

And that is another problem with our current process: we have it set up to abort as soon as any test run has failures. So, it could fail with ESLint errors and then never test HTML validity. Or, HTML validation may fail and prevent regression testing or spell checking. This slows down fixing.

If I understand things correctly, I think the GitHub Checks API can help solve these problems. And, according to the GitHub Checks API documentation, it is supported by the Travis CI integration with GitHub.

I think by exploiting the Checks API support, we would be able to:

  1. Run each type of testing independently on every PR and commit
  2. Easily see which ones fail, and where, on the checks page within a PR or commit
  3. Conditionally build and push to gh-pages based on the check results
  4. Optionally, block merges to protected branches based on check failures

I hope my hunches based on the small amount of reading I've done so far are right. I almost see these capabilities, especially an easy-to-read summary of failures, as essential to the success of our regression test project. But it is not @spectranaut's responsibility to redo our entire Travis integration ... at least not by herself.

@michael-n-cooper, we would need your help because owner privileges are needed to set it up. But, given your full plate, I think our team could do enough heavy lifting to make it easy for you.

So, in addition to @spectranaut, I'm hoping a few of the other super smart and active contributors to this project would be able to help figure out what really is the best approach. What say you @jessebeach, @sh0ji, @tatermelon, @nschonni?

Is this something that we can easily work out asynchronously in this issue? Would there be some benefit to a meeting on this topic?

@mcking65 added the enhancement and regression-testing labels on Aug 21, 2018
@mcking65 added this to the 1.1 APG Release 3 milestone on Aug 21, 2018
@nschonni
Contributor

They could be split out by using a matrix of builds in Travis. The only downside to that is that there are resource limits on the number of parallel builds an organization can run.
Alternately/additionally, you could look at one of the services that will comment directly on the PRs, like https://app.codacy.com/project/nschonni/aria-practices/dashboard?branchId=8476612. Not sure how accessible their UI is, but the bots comment directly on the PR lines instead of you having to parse the Travis logs for the lint issues.
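
To make the matrix idea concrete, here is a minimal sketch of what it could look like in `.travis.yml`; the `TEST_SUITE` variable and the npm script names are hypothetical, not the repo's actual configuration:

```yaml
# Hypothetical .travis.yml sketch: each entry under `env` expands into its
# own job in the build matrix, so the test types run in parallel and a
# failure in one no longer hides failures in the others.
language: node_js
node_js:
  - "8"
env:
  - TEST_SUITE=lint        # ESLint
  - TEST_SUITE=vnu         # Nu HTML Checker validation
  - TEST_SUITE=spellcheck  # CSpell
  - TEST_SUITE=regression  # AVA/WebdriverIO regression tests
script:
  - npm run "$TEST_SUITE"
```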

@mcking65
Contributor Author

@nschonni, I'm not sure what you mean by a matrix of builds.

Travis breaks builds into phases, jobs, and stages. I'm not clear where the checks fit into that. Does a phase, a job, or something else correspond to a check run event in Travis? Or, to get each type of testing to have its own check runs, do we have to build a GitHub App to run the tests? My impression from the high-level Travis documentation is that Travis is that app. But I haven't found where it says how to create a check run event in your .travis.yml.

@nschonni
Contributor

Ah, got what you mean. They're still in the process of migrating projects from the old hooks to the new GitHub Checks API: https://blog.travis-ci.com/2018-05-07-announcing-support-for-github-checks-api-on-travis-ci-com
I'm submitting a Travis stages PR now. "Matrix" is what you used to do to expand out multiple builds before stages existed.
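
For reference, a minimal sketch of the stages approach, assuming hypothetical npm script names: jobs in the same stage run in parallel and each shows up as its own entry on the PR checks page, while a later stage only runs if every job in the earlier stage passed.

```yaml
# Hypothetical sketch of Travis build stages (script names assumed).
# Both jobs in the "test" stage run in parallel; the deploy stage runs
# only after every test job succeeds, and only on pushes to master.
jobs:
  include:
    - stage: test
      name: "ESLint"
      script: npm run lint
    - stage: test
      name: "Nu HTML Checker"
      script: npm run vnu
    - stage: deploy to gh-pages
      if: branch = master AND type = push
      script: npm run publish
```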

mcking65 pushed a commit that referenced this issue Aug 31, 2018
For issue #848, implement travis stages so linting and HTML validation are in separate jobs that run in parallel and show up on the checks page of a PR.
michael-n-cooper pushed a commit that referenced this issue Aug 31, 2018
Use Travis Build stages (pull #851)

For issue #848, implement travis stages so linting and HTML validation are in separate jobs that run in parallel and show up on the checks page of a PR.
@mcking65
Contributor Author

mcking65 commented Oct 1, 2018

@nschonni, for a little while, names for the build jobs were showing on the checks tab of a PR. But for a while now, I've only been seeing numbers, not job names. It looks like the names are in the config. Might you know why the names are not showing up on the checks tab?

@nschonni
Contributor

nschonni commented Oct 1, 2018

I think Travis would need to be returning those with the response to the API: https://developer.github.com/v3/checks/runs/#output-object
Maybe try reporting the missing details on https://travis-ci.community (it's blocked by my work proxy).

@mcking65
Contributor Author

The use of Travis build stages resolves this issue, and the glitches have apparently been resolved on the Travis side. Thank you again @nschonni for your help with this. It has tremendously improved our processes.
