Separate test and build by exploiting checks API #848
they could be split out by using a matrix of builds in Travis. The only downside to that is that there are resource limits on the number of parallel builds an organization can run.
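As a sketch of what such a build matrix could look like in .travis.yml (the TEST_SUITE variable and npm script names here are assumptions, not this project's actual config), each env entry expands into its own parallel job:

```yaml
# Hypothetical .travis.yml build matrix: each env entry becomes one job,
# and the jobs run in parallel (subject to the org's concurrency limit).
language: node_js
node_js:
  - "10"
env:
  - TEST_SUITE=lint
  - TEST_SUITE=validate
script: npm run "$TEST_SUITE"
```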
@nschonni, I'm not sure what you mean by a matrix of builds. Travis breaks builds into phases, jobs, and stages, and I'm not clear where the checks fit into that. Does a phase, a job, or something else correspond to a check_run event in Travis? Or, to get each type of testing to have its own check runs, do we have to build a GitHub App to run the tests? My impression from the high-level Travis documentation is that Travis is that app. But I haven't found where it says how to create a check_run event in your .travis.yml.
Ah, got what you mean. They're still in the process of migrating projects from the old hooks to the new GitHub Checks API: https://blog.travis-ci.com/2018-05-07-announcing-support-for-github-checks-api-on-travis-ci-com
For issue #848, implement Travis stages so linting and HTML validation are in separate jobs that run in parallel and show up on the checks page of a PR.
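A minimal sketch of what that could look like with Travis build stages (the job names and npm script names are assumptions, not this project's actual config):

```yaml
# Hypothetical .travis.yml using build stages: both jobs are in the same
# stage, so they run in parallel and each reports its own status on the PR.
language: node_js
node_js:
  - "10"
jobs:
  include:
    - stage: test
      name: "ESLint"           # intended to appear as the job name on the checks tab
      script: npm run lint
    - stage: test
      name: "HTML validation"
      script: npm run validate
```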
@nschonni, for a little while, names for the build jobs were showing on the checks tab of a PR. But for a while now, I've only been seeing numbers, not job names. It looks like the names are in the config. Might you know why the names are not showing up on the checks tab?
I think Travis would need to return those in the response to the API: https://developer.github.com/v3/checks/runs/#output-object
The use of Travis build stages resolves this issue, and the glitches have apparently been resolved on the Travis side. Thank you again @nschonni for your help with this. It has tremendously improved our processes.
While our current integration of testing into our Travis CI script is a big step forward from where we started, it is becoming pretty cumbersome, especially when reviewing PRs that have failures. And as we integrate more tools, like CSpell and our WebdriverIO regression tests alongside the NUChecker and ESLint, our more robust testing, which ultimately makes PR review more thorough and efficient, will not necessarily make it easier ... at least for me as a screen reader user.
To understand the source of failures in our current monolithic beast of a process, you have to scan a Travis job log containing thousands of lines of text that are wickedly hard to listen to with a screen reader. I can't imagine they are much better for people who can see them. You still have to scroll to the end and then read back a ways, to the end of setup, to find the first signs of failure. The log would be even harder to scan if we allowed the script to proceed to the next type of testing after one type fails.
And that is another problem with our current process: we have it set up to abort as soon as one test run fails. So it could fail with ESLint errors and then never test HTML validity. Or HTML validation may fail and prevent regression testing or spell checking. This slows down fixing.
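Even within a single job, Travis runs every entry in the `script` list regardless of whether an earlier entry failed, and marks the job failed if any entry returned non-zero. So one way to avoid the early abort, as a sketch (the npm script names are assumptions), is to list each check as its own `script` entry rather than chaining them together:

```yaml
# Hypothetical script phase: each entry runs even if the previous one failed,
# so an ESLint failure no longer prevents HTML validation from running.
script:
  - npm run lint
  - npm run validate
```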
If I understand things correctly, the GitHub Checks API can help solve these problems. And, according to the GitHub Checks API documentation, it is supported by GitHub's Travis CI integration.
I think by exploiting the checks API support, we would be able to:
I hope my hunches, based on the small amount of reading I've done so far, are right. I almost see these capabilities, especially an easy-to-read summary of failures, as essential to the success of our regression test project. But it is not @spectranaut's responsibility to redo our entire Travis integration ... at least not by herself.
@michael-n-cooper, we would need your help because owner privileges are needed to set it up. But given your full plate, I think our team could do enough heavy lifting to make it easy for you.
So, in addition to @spectranaut, I'm hoping a few of the other super smart and active contributors to this project would be able to help figure out what really is the best approach. What say you @jessebeach, @sh0ji, @tatermelon, @nschonni?
Is this something that we can easily work out asynchronously in this issue? Would there be some benefit to a meeting on this topic?