scripts/install-release-image: Helper for launching from an install payload #1221
Conversation
Force-pushed from 7f49dac to edfe954.
CC @sjenning
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: wking

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
Force-pushed from edfe954 to fe01197.
Haven't seen that before. Maybe @smarterclayton reaping instances ;). But this new script is not exercised by e2e-aws, so... /retest
IMO we don't want users overriding release images.
Agreed.
There have been a number of users who ran into trouble after getting an installer from somewhere and pointing it at an incompatible update payload. And there's the whole underscore-prefix issue for released vs. unreleased installers that trips people up. This script makes it easy to avoid those issues for dev use. I'm fine adding a big warning message to embarrass any users who find it, if that would help address your concerns. Maybe talking about installer releases having gone through more vetting than whichever update payload they're about to use.
I can say I would not be likely to use this. Using the env var works fine for my dev, and people downloading from the official releases shouldn't be overriding IMHO.
How do you want to get the word out on this, though? I field a few requests a week about whether underscores should be used or not, and regardless of released installers or not, folks with update-payload opinions should not need to also have installer-version opinions. With a script like this, we could have more consistent messaging about how folks with update-payload opinions should operate. And assuming that QE and others who need update-payload overrides will pick a compatible installer on their own seems like it's just one more potential tripping point. I don't have strong feelings about whether the tooling should be a shell script or something else, but I do think that folks with update-payload opinions should be getting their installer from the update payload.
@wking I'd like the debug msg to be more obvious when launching with |
I've pushed ad4d7c0, updating this script to extract the RHCOS AMI ID associated with the release image (like #1286, but outside of the installer). This restricts the script to AWS clusters, but we can revert it once #1286 lands (or give the script a platform-selecting knob if that's going to take too long and folks want to use it on other platforms in the meantime).
Force-pushed from 2e71784 to 34e4a3e.
And with the recent RHCOS issues (discussion in #1325), I just fielded some questions where folks were running into trouble due to floating RHCOS. But in case I misunderstood your initial question, you can use this approach to launch with an install-config too:

$ mkdir wking
$ cat >wking/install-config.yaml <<EOF
> apiVersion: v1beta3
> baseDomain: devcluster.openshift.com
> metadata:
>   name: wking
> platform:
>   aws:
>     region: us-west-1
> pullSecret: REDACTED
> EOF
$ ASSETS=wking AWS_SHARED_CREDENTIALS_FILE=~/.aws/credentials AWS_PROFILE=openshift-dev AWS_DEFAULT_REGION=us-west-1 RELEASE=registry.svc.ci.openshift.org/openshift/origin-release:4.0.0-0.alpha-2019-02-26-194020 install-release-image.sh create manifests
WARNING Found override for OS Image. Please be warned, this is not advised
INFO Consuming "Install Config" from target directory
Force-pushed from 34e4a3e to 9916732.
scripts/install-release-image: Helper for launching from an install payload

A number of users have had issues mixing and matching installers and update payloads. This script removes a degree of freedom by extracting the installer from the update payload itself. You'll need a fairly new oc for this, since --image-for was only added in openshift/origin@f1d50464 (Add `--image-for` and `--output` to `oc adm release info`, 2019-01-10, openshift/origin#21766).

I've taken a fairly conservative approach to pushing host information into the container. If you leave off the AWS_* variables, the installer will prompt you, so that's a bit tedious, but not the end of the world. With all the trappings for an AWS cluster, this could look like:

$ SSH_PUBKEY=~/.ssh/id_rsa.pub \
> AWS_PROFILE=openshift-dev \
> AWS_CONFIG_FILE=~/.aws/config \
> AWS_SHARED_CREDENTIALS_FILE=~/.aws/credentials \
> RELEASE=registry.svc.ci.openshift.org/openshift/origin-release:4.0.0-0.alpha-2019-02-06-200409 \
> install-release-image create cluster

which is a bit of a mouthful :p. We could set defaults for AWS_SHARED_CREDENTIALS_FILE and mount them into the cluster by default, but I don't want callers to be concerned about leaking information they may consider highly sensitive. I'm less concerned about SSH public keys [1] or AWS_CONFIG_FILE being considered sensitive, so the default behavior there is to mount them in from the usual locations.

I'm setting HOME so I can mount in ~/.ssh, ~/.aws, etc. without mounting those into the asset directory at /output. We want the mounted (semi-)secret data to be reaped with the container, with no chance of persisting in the asset directory.

The mkdir call avoids:

$ ASSETS=does-not-exist install-release-image.sh
realpath: 'does-not-exist': No such file or directory
failed to resolve asset path

since folks are likely to expect the installer's semantics (where it creates the requested asset directory if it didn't already exist). We can't wait on the installer though, because we are using realpath to convert to an absolute path so we can set up the volume options for Podman.

The SC2059 disable avoids [2]:

./scripts/install-release-image.sh:50:9: note: Don't use variables in the printf format string. Use printf "..%s.." "$foo". [SC2059]

Folks calling 'die' should follow that advice. The 'die' code that calls 'printf' is just passing along the value given by the 'die' caller.

[1]: https://github.com/openshift/machine-config-operator/blob/master/docs/Update-SSHKeys.md
[2]: https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_installer/1221/pull-ci-openshift-installer-shellcheck/3407/build-log.txt
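For readers who want the shape of the script without opening the diff, here is a minimal sketch of the flow the commit message describes. It is not the shipped script: it assumes podman is installed, oc is new enough for --image-for, and openshift-install is invocable by name inside the installer image; the container paths (/home/user, /output) and the exact flags and mounts are illustrative.

#!/bin/sh
# Sketch of the wrapper flow described above (assumptions noted in the
# lead-in; the real script's flags and mounts may differ).

die() {
    # shellcheck disable=SC2059  # callers pass pre-formatted strings
    printf "${1}\n" >&2
    exit 1
}

RELEASE="${RELEASE:-registry.svc.ci.openshift.org/openshift/origin-release:4.0.0-0.alpha-2019-02-06-200409}"
ASSETS="${ASSETS:-assets}"

# Pin the installer to the payload by asking the payload which image to use.
INSTALLER_IMAGE="$(oc adm release info --image-for=installer "${RELEASE}")" ||
    die "failed to extract the installer image from ${RELEASE}"

# Create the asset directory up front; realpath (which we need to build
# Podman's volume option) fails on paths that do not exist yet.
mkdir -p "${ASSETS}" || die "failed to create ${ASSETS}"
ASSETS="$(realpath "${ASSETS}")" || die "failed to resolve asset path"

# Mount AWS credentials under a container-side HOME, never under /output,
# so the (semi-)secret data is reaped with the container.
if [ -n "${AWS_SHARED_CREDENTIALS_FILE:-}" ]; then
    CREDS_VOLUME="--volume=${AWS_SHARED_CREDENTIALS_FILE}:/home/user/.aws/credentials:z"
fi

exec podman run --rm --interactive --tty \
    --env HOME=/home/user \
    --env AWS_PROFILE --env AWS_DEFAULT_REGION \
    ${CREDS_VOLUME:+"${CREDS_VOLUME}"} \
    ${CREDS_VOLUME:+--env=AWS_SHARED_CREDENTIALS_FILE=/home/user/.aws/credentials} \
    --volume "${ASSETS}:/output:z" \
    "${INSTALLER_IMAGE}" \
    openshift-install --dir /output "$@"

Because HOME points inside the container, nothing mounted under it can leak into the asset directory on the host, which is the conservative default the commit message argues for.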
Force-pushed from 9916732 to e2c90d2.
@wking: The following tests failed, say /retest to rerun them all:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
The release pipeline provides easier ways of just downloading the installer binary with a pinned release, so this script shouldn't be required; binaries are the primary use case for the installer. Closing for now.
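For reference, the pipeline approach this comment alludes to can also be driven from oc. A hedged example, assuming an oc new enough to support `oc adm release extract --tools` (the tarball names vary by release and platform):

$ oc adm release extract --tools --to=. \
>     registry.svc.ci.openshift.org/openshift/origin-release:4.0.0-0.alpha-2019-02-26-194020
$ tar xzf openshift-install-linux-*.tar.gz   # tarball name varies by release
$ ./openshift-install version                # binary is pinned to the payload it came from

This gets the same pinning guarantee (installer version matches the payload) without a wrapper container.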
@abhinavdahiya: Closed this PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.