The past has shown that we might not want to keep this job, given the constant updates to the mozharness revision. When our jobs run in TaskCluster, they always use the latest mozharness version of the given branch, which is something we currently do not do. By removing the mozharness job and making the necessary calls in each job instead, we could benefit from the following:
@mjzffr and @sydvicious what do you think?
I agree. Respecting branches is good. Matching the behaviour in buildbot/taskcluster is good.
The revision used for mozharness needs to match the revision used for Firefox and the revision of the tests.zip package. On pf-jenkins, I added another job to get mozharness for each branch, and then set up a dependency chain between the job that downloads Firefox, the job that downloads the tests.zip file, the job that downloads mozharness, and the jobs that run the tests.
This is not something we have to worry about. The job in mozmill-ci will always have branch and revision parameters, so those would be used to determine the mozharness version. There might be a problem with Tinderbox builds (revisions), which archiver might not support, but the package lies beside the build in the same folder:
Maybe it would be good to check how TaskCluster and Buildbot handle this before changing anything in mozmill-ci.
If you have the revision, you can always just use hg to check out the mozmill directory.
No, we don't want to get mozmill! ;) Anyway, there is no way to do a partial checkout of the appropriate branch via hg. We might want to enhance archiver to also make packages available for Tinderbox builds.
Re taskcluster: my understanding is that after the source tree is checked out on the builder, mozharness gets packaged (https://dxr.mozilla.org/mozilla-central/source/toolkit/mozapps/installer/packager.mk#59) and made available as a build-task artifact, "mozharness.zip" (https://dxr.mozilla.org/mozilla-central/source/testing/taskcluster/scripts/builder/build-mulet-linux.sh#39). Test tasks can then specify where to obtain the mozharness package from the build task via MOZHARNESS_URL (https://dxr.mozilla.org/mozilla-central/source/testing/taskcluster/tasks/test.yml#22).
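To illustrate the flow described above, here is a minimal sketch of what a test task effectively does with MOZHARNESS_URL: download the build task's mozharness.zip artifact and unpack it. The queue URL scheme and artifact path are assumptions based on the TaskCluster layout at the time, not verified values.

```python
# Sketch only: fetch and unpack a build task's mozharness.zip artifact,
# similar to what a test task does when given MOZHARNESS_URL.
import io
import urllib.request
import zipfile


def mozharness_artifact_url(task_id, artifact="public/build/mozharness.zip"):
    """Compose the (assumed) queue URL for a build task's mozharness archive."""
    return "https://queue.taskcluster.net/v1/task/%s/artifacts/%s" % (
        task_id, artifact)


def fetch_and_unpack(url, dest):
    """Download the archive and extract it next to the test harness."""
    with urllib.request.urlopen(url) as resp:
        zipfile.ZipFile(io.BytesIO(resp.read())).extractall(dest)
```

A test job would call `fetch_and_unpack(mozharness_artifact_url(build_task_id), "build")` before starting the harness.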
Ok, so for mozmill-ci this might be a missing feature of archiver then. Thanks Maja! I will have a look at this in the next weeks, although I might not want to do it before all of our jobs make use of test packages.
We had total breakage of our tests again over the last few days because of changes I made on mozilla-central. I did not bump the mozharness revision in mozmill-ci, so no tests were found for our runs against Nightly builds.
On the other hand, I'm happy that I did not push the mozharness revision update, because those changes are backward incompatible and would have caused bustage on all other branches as well. That means we definitely need a branch-specific mozharness checkout!
So we cannot make fetching the archiver client part of each individual job, given that wget is not available on Windows. On the other hand, we do not have to download the archiver client each time, given that it changes so rarely. So I would suggest we make it a step in the scripts job, which will update the archiver client when run. The individual test jobs can then simply call the archiver Python script, which will fetch the corresponding mozharness archive.
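The split above could look roughly like this: the scripts job refreshes the archiver client once, and each test job composes a call to it with the branch and revision it was triggered for. The client filename and command-line flags here are assumptions for illustration, not the verified interface of the real archiver client.

```python
# Hypothetical sketch of the per-job archiver call. The script name and
# flags are made up for the example; the scripts job is assumed to have
# placed an up-to-date copy of the client in the workspace.
import subprocess

ARCHIVER_CLIENT = "archiver_client.py"  # refreshed by the scripts job


def archiver_command(repo, rev, destination="mozharness"):
    """Compose the (assumed) archiver client invocation for one test job."""
    return ["python", ARCHIVER_CLIENT, "mozharness",
            "--repo", repo, "--rev", rev, "--destination", destination]


def fetch_mozharness(repo, rev):
    """Run the archiver client to fetch the branch-specific mozharness archive."""
    subprocess.check_call(archiver_command(repo, rev))
```

Each test job would call `fetch_mozharness(branch, revision)` with the parameters it received from mozmill-ci.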
It would be nice if there were a way to cache artifacts for a parametrized job, with other jobs using those artifacts until one parameter value changes. That way we would only have to download the mozharness archive once for builds of the same revision and repository. Sadly I couldn't find a way to do that, so I will implement the first part of this comment.
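For reference, the caching idea could be approximated per workspace with a marker file keyed on the repository and revision, so a job only re-downloads when either changes. This is a minimal sketch; the marker filename and key format are made up for the example.

```python
# Sketch: skip the mozharness download when the cached (repo, rev) key
# in the workspace still matches the requested one.
import os


def needs_download(workdir, repo, rev, marker=".mozharness-rev"):
    """Return True unless the cached marker matches the requested repo/rev."""
    key = "%s@%s" % (repo, rev)
    path = os.path.join(workdir, marker)
    if os.path.exists(path):
        with open(path) as f:
            if f.read().strip() == key:
                return False
    return True


def record_download(workdir, repo, rev, marker=".mozharness-rev"):
    """Remember which repo/rev the current mozharness checkout came from."""
    with open(os.path.join(workdir, marker), "w") as f:
        f.write("%s@%s" % (repo, rev))
```

A job would check `needs_download(...)` first, and call `record_download(...)` after a successful fetch.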
Couldn't we either make get_mozharness parameterized, or have a get_mozharness job for nightly, aurora, etc? That's what I would do in pf-jenkins.
I don't want to let this job run only on master, because it would be a bottleneck. Also, all slaves would have to copy the artifacts, which takes a couple of seconds due to re-compressing mozharness. That means fetching the zip archive and unpacking it on the slaves is way faster.
Makes sense. I developed my methodology remotely over a low-bandwidth pipe, so having the master fetch everything and the jobs copy from master was much faster. mozmill-ci has a fast pipe for all builders, so this should be fine.
I would do the same if the mozharness archive were larger. But it's really only some kB, so it's not worth any kind of caching on the master. Also keep in mind that we run a much higher number of jobs due to releases and all the locales.
Move code from get_mozharness job into scripts and tests jobs (#745)
PR #768 landed on master and I will push it to staging and production ASAP.
Somehow this merge is marked as broken by Travis:
Not sure how this change came in?
I definitely didn't change it, and I also cannot reproduce this failure locally. A retrigger of the Travis build fixed it as well.
This change is now active on staging and production.