Introduce a new "prototype" pipeline for new potential platforms to initially run in #473

Closed
andrew-m-leonard opened this issue Oct 14, 2022 · 30 comments

@andrew-m-leonard
Contributor

New platforms that are not candidates for release need time to bed in and prove their stability. We should introduce a new "prototype" (or suitably named) pipeline that is perhaps run once a week for such platforms.
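
For illustration only, a weekly cadence for such a pipeline could be expressed with a standard Jenkins cron trigger. The job layout, schedule, and stage below are hypothetical placeholders, not the actual Adoptium jobs, which are generated from configuration rather than hand-written:

```groovy
// Hypothetical sketch of a weekly "prototype" pipeline trigger.
// Schedule and stage contents are placeholders only.
pipeline {
    agent any
    triggers {
        // Jenkins cron syntax: once a week on Saturday, hour spread by hash
        cron('H H * * 6')
    }
    stages {
        stage('prototype-build-and-test') {
            steps {
                echo 'Build and run AQA tests for prototype platforms only'
            }
        }
    }
}
```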

@cornelia247
Contributor

@andrew-m-leonard Is this up for trials by interns? I would love to try it out!

@andrew-m-leonard
Contributor Author

Hi @cornelia247, this item is quite extensive, but we will not be implementing it until after the October release, so into November. Also, it requires certain admin access for some of the updates/testing.

@zdtsw
Contributor

zdtsw commented Nov 8, 2022

I think Loongson could be a good candidate for this issue.

@zdtsw
Contributor

zdtsw commented Nov 21, 2022

The weekly build for jdk8/11 includes Bisheng, Corretto, and Dragonwell, which prevents the Temurin binaries from being archived.
We should consider moving these three variants into "prototype".

@zdtsw
Contributor

zdtsw commented Nov 21, 2022

The same applies to jdk17 (https://ci.adoptopenjdk.net/job/build-scripts/job/weekly-openjdk17-pipeline/): Bisheng and OpenJ9.

@zdtsw zdtsw self-assigned this Nov 29, 2022
@zdtsw
Contributor

zdtsw commented Dec 2, 2022

Maybe RISC-V should also be in the prototype pipeline, rather than having its jobs manually created by copying from the old jobs.

@zdtsw
Contributor

zdtsw commented Dec 8, 2022

Mark: when the code is done, we need to update the docs on "how to generate these prototype pipelines".

@zdtsw
Contributor

zdtsw commented Dec 8, 2022

One more question: do we want to run the prototype targets in the "weekly"? @andrew-m-leonard
I am thinking we can keep the same scheduler (as-is) for both "nightly" and "prototype", running two to three times a week, but the weekly would only run the nightly targets
=> we do not change the weekly->nightly logic
so we could probably get the weekly's AQA test results within 24 hours (much faster than now, once we move the non-Temurin variants into prototype)
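
For concreteness, a sketch of the scheduling idea above; the cron strings are illustrative only, not the actual Adoptium schedules, which live in the pipeline generator configuration:

```groovy
// Illustrative cron expressions only (Jenkins cron syntax), not the real schedules.
def schedules = [
    'nightly'  : 'H 20 * * 1,3,5',  // existing nightly (Temurin) targets, 2-3 times a week
    'prototype': 'H 20 * * 1,3,5',  // prototype targets on the same cadence
    'weekly'   : 'H 12 * * 6',      // weekly stays as-is and runs the nightly targets only
]
schedules.each { name, cron -> println "${name.padRight(9)} -> ${cron}" }
```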

@andrew-m-leonard
Contributor Author

One more question: do we want to run the prototype targets in the "weekly"? @andrew-m-leonard I am thinking we can keep the same scheduler (as-is) for both "nightly" and "prototype", running two to three times a week, but the weekly would only run the nightly targets => we do not change the weekly->nightly logic, so we could probably get the weekly's AQA test results within 24 hours (much faster than now, once we move the non-Temurin variants into prototype)

@zdtsw Yes, agreed: the weekly just runs the nightly targets.

@sxa
Member

sxa commented Dec 12, 2022

Just to be clear, are we suggesting in the last two comments that the prototype builds would NOT be included in the weekly runs? That would seem undesirable as it would not allow us to regularly run and see the output of the full set of test suites on the prototype platforms.

@zdtsw
Contributor

zdtsw commented Dec 12, 2022

We will not have the prototype platforms running in the "normal" weekly pipeline,
but we can set up a prototype-weekly (or prototype-biweekly) pipeline to run the full test suites (extended; not sure whether TCK is needed or not) on these prototype platforms.

The results from the prototype weekly do not need to be archived, to save some disk space;
the "normal" weekly still does the archiving as-is, mainly for comparison purposes.

@sxa
Member

sxa commented Dec 20, 2022

@zdtsw @andrew-m-leonard Any ETA on getting this running regularly? @luhenry is keen to get the RISC-V pipelines live again now that the testing has been included as per #545

@zdtsw
Contributor

zdtsw commented Dec 21, 2022

After some comments, we have decided to rename "prototype" to "evaluation" for the new type of pipelines.

@zdtsw
Contributor

zdtsw commented Dec 21, 2022

@zdtsw @andrew-m-leonard Any ETA on getting this running regularly? @luhenry is keen to get the RISC-V pipelines live again now that the testing has been included as per #545

The implementation is done and the changes are under review; hopefully we can get this issue closed this week.

@zdtsw
Contributor

zdtsw commented Jan 4, 2023

@gounthar

gounthar commented Jan 4, 2023

The latest release (and a few previous ones too) contains risc-v binaries.

@zdtsw
Contributor

zdtsw commented Jan 4, 2023

The latest release (and a few previous ones too) contains risc-v binaries.

So, the RISC-V binaries you saw in the nightly builds (we do not call them releases, only nightly builds or pre-releases) were from the old nightly pipelines (before the evaluation pipelines were set up).

The first run of the evaluation pipeline was done on 3rd Jan, and we do not "publish" binaries from the evaluation pipeline to the GitHub binary repo, so any nightly builds after 3rd Jan should not include RISC-V.
Once we move a platform from evaluation to the normal nightly pipeline, its binaries should appear again.

@sxa
Member

sxa commented Jan 4, 2023

we do not "publish" binaries from evaluation pipeline to GitHub binary repo

@zdtsw IMHO we should be publishing them in the GitHub repositories so that users can get to previous binaries to do comparisons between them if things start failing. Pulling them out of CI isn't ideal.

This would make it easier for people to get a new platform out of evaluation state and be able to retest with earlier builds if required when looking at problems.

It wasn't quite clear to me from your earlier comment whether you planned to set up a prototype-weekly pipeline that would publish.

@zdtsw
Contributor

zdtsw commented Jan 4, 2023

we do not "publish" binaries from evaluation pipeline to GitHub binary repo

@zdtsw IMHO we should be publishing them in the GitHub repositories so that users can get to previous binaries to do comparisons between them if things start failing. Pulling them out of CI isn't ideal.

This would make it easier for people to get a new platform out of evaluation state and be able to retest with earlier builds if required when looking at problems.

It wasn't quite clear to me from your earlier comment whether you planned to set up a prototype-weekly pipeline that would publish.

The standard weekly pipeline will archive the last two builds' binaries on the Jenkins disk, but not publish them to the GitHub binary repo.
The evaluation weekly pipeline will not archive in Jenkins and will not publish to the GitHub binary repo.
The difference is mainly that they have different releaseType values.
All evaluation pipelines (the one running in parallel with the nightly and the one running in parallel with the weekly) have "Nightly Without Publish" as the releaseType, which is why the uploader job is skipped.

If we want to publish the evaluation binaries from nightly builds, we need to set it to "Nightly" instead.
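
A minimal sketch of that gating, assuming a releaseType parameter; the 'publish-binaries' job name is a placeholder, not taken from the actual implementation:

```groovy
// Sketch only: a releaseType of "Nightly Without Publish" skips the uploader.
def releaseType = params.releaseType ?: 'Nightly Without Publish'
if (releaseType == 'Nightly Without Publish') {
    echo 'Evaluation run: uploader skipped, nothing is published to the GitHub binaries repo'
} else {
    // "Nightly" (and release) runs would trigger the publish job here.
    // 'publish-binaries' is a placeholder, not the real job name.
    build job: 'publish-binaries', propagate: false
}
```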

@zdtsw
Contributor

zdtsw commented Jan 4, 2023

An open discussion for the evaluation pipeline:

The implementation made for the evaluation pipeline has certain differences from the "nightly" we have, and this needs more input from everyone:

  1. By setting releaseType in evaluation to "Nightly Without Publish", the binaries will not be uploaded to the GitHub repo. Is this acceptable? If not, we should set it to "Nightly" (the same as for the nightly build).
    The reason I chose "Nightly Without Publish" rather than "Nightly" is to avoid confusing end users into thinking the binaries in GitHub are ready to be consumed.
  2. Since evaluation and nightly are two different pipelines, even if they are triggered at the same time (to the minute), the upload step still takes the timestamp when it is called and creates a "release" in the format jdkXXu-YY-MM-DD-HH-MM-beta (see the sketch below). So there will be one release for "evaluation" and another for "nightly (non-evaluation)" in the same temurinXX-binaries repo. Is this acceptable, or would you prefer having only one "release" that includes both the "nightly" and the "evaluation" binaries?
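
To make point 2 concrete, here is how a per-run tag could be derived from the upload timestamp. The helper below is purely illustrative and simply follows the jdkXXu-YY-MM-DD-HH-MM-beta pattern quoted above:

```groovy
// Illustrative only: derive a per-run "release" tag from the time the upload step runs.
import java.time.LocalDateTime
import java.time.format.DateTimeFormatter

String releaseTag(String version, LocalDateTime uploadTime) {
    def stamp = uploadTime.format(DateTimeFormatter.ofPattern('yy-MM-dd-HH-mm'))
    return "${version}-${stamp}-beta"
}

// Two pipelines uploading minutes apart produce two distinct tags, hence two releases.
println releaseTag('jdk17u', LocalDateTime.of(2023, 1, 4, 18, 30))  // jdk17u-23-01-04-18-30-beta
println releaseTag('jdk17u', LocalDateTime.of(2023, 1, 4, 18, 42))  // jdk17u-23-01-04-18-42-beta
```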

@sxa
Member

sxa commented Jan 4, 2023

My view is that ideally I'd like there to be one GitHub release for each nightly run; however, I appreciate that would require some magic to synchronise the releases, and we should see if that's feasible somehow.

Without that, the releases would alternate between "normal" and "experimental" nightlies, which could be somewhat confusing for end users unless it was made clear in the name somehow.

Looking forward to hearing any other views on the subject :-)

@andrew-m-leonard
Contributor Author

My vote would probably be to ensure the name differs, although that may be tricky as the target release doesn't exist until the pipeline finishes.
Failing that, just keep them the same as they are now.

@andrew-m-leonard
Contributor Author

Nightly pipelines at the moment have no association with a single published GitHub release, so if two nightly pipelines run in one night they will be published as two different releases.

@luhenry
Contributor

luhenry commented Jan 4, 2023

What's the bar to take a pipeline from "experimental" to "normal"?

For risc-v, it's been stable for a while, but we have been missing regular runs as it was relying on this work to land. From looking at Jenkins, I also can't seem to find the latest run for jdk; where could I find those (regardless of the platform)?

@smlambert
Contributor

My expectation was that the differentiation between the "evaluation"/"experimental" pipelines and the "normal"/"release" pipelines was that the latter would contain only the set of platforms we officially release (slide 27 of the 2023 Program plan / those platforms that we TCK and publish). Noting that RISC-V is planned to graduate to a releasable platform in 2023 (at which point the program plan would be updated/adjusted, as it currently lists it as "in development").

I would also expect that we would still publish nightly/beta builds of the platforms in the evaluation pipeline (I had not remembered the ol' timestamp dilemma), but as @andrew-m-leonard noted in #473 (comment) there is no obligation for a single nightly/beta release (no matching-timestamp requirement).

@sxa
Member

sxa commented Jan 6, 2023

My vote would probably be to ensure the name differs
Nightly pipelines at the moment have no association with a single published GitHub release, so if two nightly pipelines run in one night they will be published as two different releases.

This may be controversial, but I disagree. TL;DR: the end users shouldn't really have to care about what we do internally when they are looking to locate a build for a platform they are interested in:

  1. Someone wanting the nightlies shouldn't have to worry about the internal implementation when they look at the releases repository. "The latest nightly for X is missing my platform - what's gone wrong?" "Nothing, you have to look at a different nightly pipeline" is not a conversation I want to keep having. Either that, or we just get diligent about pointing people explicitly at the API for nightlies (although that doesn't help my next point, and it assumes all experimental builds are in the API, which is not necessarily going to be the case).
  2. If you have twice as many nightly releases, it becomes more work to scroll backwards to find e.g. the nightly from "one month ago" if you're doing comparisons, as you've got twice as many to scroll through.
  3. There was another comment somewhere about "Well, we can have out-of-band nightlies anyway if we run the pipelines ourselves". Correct me if I'm wrong, but I'd suggest it's very unusual to run a full pipeline between nightlies without using "Nightly Without Publish" (I can only think of one time in the last year where I've needed to do that), so I'd consider that very much an edge case rather than something used to justify extra end-user complexity.

Marking the experimental nightly releases so that they are easily identifiable mitigates point 1 a little, but I'd still suggest it isn't an ideal solution.

@zdtsw
Contributor

zdtsw commented Jan 13, 2023

As for the downstream job name, it was added in

for the evaluation and release jobs.

@smlambert if this creates difficulties for TRSS, we can change the logic.

@smlambert
Contributor

smlambert commented Jan 13, 2023

re: #473 (comment) - I was just raising the question of whether it was necessary to rename the child jobs. I agree that the top-level job name should take the extra word as a distinction; I am just asking whether it was needed or desirable to rename the children (to consider the pros/cons/value).

[Screenshot: 2023-01-13 11:14 AM]

related: adoptium/aqa-test-tools#765 - and the associated PR, which will ignore the "evaluation" and "release" words so that we can incorporate build and smoke test results into the same grid as the AQA test results.
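
As a hypothetical illustration of that normalisation; the job names and the exact marker words below are assumptions, not taken from the TRSS change:

```groovy
// Hypothetical sketch: strip "-evaluation-"/"-release-" markers so results from
// renamed child jobs group with the existing AQA results for the same target.
String baseJobName(String jobName) {
    return jobName.replaceAll(/-(evaluation|release)(?=-)/, '')
}

assert baseJobName('jdk17u-evaluation-linux-riscv64-temurin') == 'jdk17u-linux-riscv64-temurin'
assert baseJobName('jdk11u-release-linux-aarch64-temurin')    == 'jdk11u-linux-aarch64-temurin'
assert baseJobName('jdk17u-linux-x64-temurin')                == 'jdk17u-linux-x64-temurin'
```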

@llxia
Contributor

llxia commented Jan 13, 2023

I share the same concern about renaming some child jobs. The release and evaluation pipelines trigger some renamed child jobs by adding -release- (i.e., the child build and smoke test jobs) and some existing child jobs (i.e., the AQA test jobs). I would also like to understand whether renaming the child build and smoke test jobs is needed.

@zdtsw
Contributor

zdtsw commented Jan 31, 2023

Gathering a bunch of related open issues:

I will close this issue first (feel free to re-open).

@zdtsw zdtsw closed this as completed Jan 31, 2023