*: add groundwork for multi-arch bootimages #2885
Conversation
Looks like we'll need a few small changes in Hive to support this. I'll add a Jira.
Will there be separate release images for each architecture, or one multi-arch image?
This is fine as-is; just an optional comment: we have separate
I like this. Thank you @crawford.
separate release images for each arch.
https://github.com/openshift/installer/pull/2885/files#diff-9f95494109d2a540a4a321c151e20c63R733 may not produce the expected error in the test ... possibly because of the strings Unsupported versus Invalid and the verbosity of the error:
@ashcrow Thanks, that extra test sneaked its way in from my branch that enabled s390x. I'll reintroduce that later :)
nit: this needs a docs bump to describe the new property. I'm also personally in favor of Colin's file-per-arch approach, as long as the RHCOS pipeline is building separate metadata files for each architecture. Alternatively, bump hack/update-rhcos-bootimage.py 44.81.201912131839.0 amd64 s390x ... or whatever, and have it automatically pull in the metadata for each arch and assemble the single file.
Because I know there are people referencing rhcos.json directly, my inclination would be to preserve compatibility by making amd64 the default and only defining new architectures. However, this was never a documented API as far as I know, so I'm fine with ripping the band-aid off as long as we make sure to message this change to internal developers. This PR is sufficient for messaging externally.
/hold I'm going to rework this so we don't break these consumers. As much as I like ripping off band-aids, there is a bit too much flying around right now.
Not everyone has python3 installed in /usr/bin.
/hold cancel Okay, this now preserves the original
This splits the RHCOS build metadata into architecture-specific files, which will allow the metadata to describe bootimages for multiple architectures. In order to preserve backward compatibility (there are a few users, including certain CI jobs, that pull rhcos.json from GitHub directly), I've opted to use separate files for each architecture. Normally, we could have just symlinked the legacy metadata file, but when hosted on raw.githubusercontent.com, symlinks aren't followed. When updating the RHCOS bootimages, this script will need to be run once for each architecture that is being updated.
/approve
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: ashcrow, cgwalters, sdodson The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment
/lgtm
/retest Please review the full test history for this PR and help us cut down flakes.
This adds an architecture parameter to the RHCOS image lookup process and a corresponding field to MachinePool. This is a backward-compatible change, defaulting the architecture to AMD64 if none has been specified. This also enforces that the control plane and compute nodes share an architecture, since we don't support heterogeneous clusters today.
/lgtm
/retest Please review the full test history for this PR and help us cut down flakes.
9 similar comments
/test shellcheck
@crawford: The following tests failed, say /retest to rerun all failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
This doesn't actually include support for s390x, since we don't have any s390x builds yet. The intent is that this will be backported (to branches where we do have builds) and that the master branch will be updated once builds are available.
/cc @cgwalters @jaypoulz @dgoodwin
/hold