
prow: Remove old branched openshift/os:latest build #1000

Merged

Conversation

@cgwalters (Member)

We're moving to a container in the rhcos/ namespace; see
https://github.com/openshift/release/issues/972

This is part of an effort to consolidate the "source of truth" for RHCOS content.
We'd like to get down to one container, and in the short term, having it built by
the same process that builds other content internally, and pushed to the osorgci registry,
makes things far easier.

@openshift-ci-robot added the size/M label (denotes a PR that changes 30-99 lines, ignoring generated files) on Jun 25, 2018
@cgwalters (Member, Author)

See also openshift/os#138

@ashcrow (Member) left a comment

LGTM!

@smarterclayton (Contributor)

How are you going to build the image? It's built here and then mirrored into rhcos....

@cgwalters (Member, Author)

Where is the code that does the mirroring? After openshift/os#138 we're overwriting the rhcos/ version.

This is an incremental step; after this, we're going to work on having the container be the canonical source of data. There are a lot of nontrivial bits to this, and getting down to one current source of truth helps.
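
(For context, mirroring an image between namespaces on a CI cluster is usually a small `oc` operation; a minimal sketch of what such an overwrite could look like, where the tags and the external registry host are illustrative assumptions rather than what this repo's mirroring job actually runs:)

```shell
# Sketch only: re-point rhcos/os:latest at the image currently published as openshift/os:latest.
# Namespaces and tags are illustrative, not taken from the actual mirroring job.
oc tag openshift/os:latest rhcos/os:latest

# Alternatively, copy between registries with oc image mirror (registry host is hypothetical):
oc image mirror registry.example.com/openshift/os:latest registry.example.com/rhcos/os:latest
```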

@smarterclayton (Contributor)

So you're not using the cluster to build images, but your Jenkins jobs?

@cgwalters (Member, Author) commented Jun 27, 2018

> So you're not using the cluster to build images, but your Jenkins jobs?

Yes, but only short term. And really those aren't orthogonal; we can probably now consider moving all of the build infra out of internal (we'd definitely continue to use some Jenkins, but as a pod, obviously). We moved in that direction for our initial work because we had expertise/inertia/tooling/investment there. I've been playing a bit with GCE nested virt and I think that's pretty viable for both testing and VM-image building. Existing CL (as well as Quay) has various packet.net investment which we can continue as well.

Incremental steps here are the anyuid SA, or we could go ahead and add a privileged one too.
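
(As a point of reference, granting a build service account the anyuid SCC, or the privileged one, is a one-time cluster step; a minimal sketch, with the service account and namespace names purely hypothetical:)

```shell
# Sketch only: the service account and namespace names are made up for illustration.
oc create serviceaccount rhcos-builder -n rhcos
oc adm policy add-scc-to-user anyuid -z rhcos-builder -n rhcos

# Or, if fully privileged builds turn out to be needed:
oc adm policy add-scc-to-user privileged -z rhcos-builder -n rhcos
```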

@ashcrow (Member) commented Jun 28, 2018

I think there is a lot of value in having one build that is used, rather than 2 possible builders pushing to 1 or 2 places. I propose we slim it down to 1 build and push, where at least a subset of the CoreOS team members, as well as Clayton (and others who need it), have access to modify the build and push.

Note: this discussion is a prerequisite for continuing the container content work.

@smarterclayton (Contributor) commented Jun 29, 2018

What's the status of non-privileged ostree builds inside of containers? I ask because we are ramping up to get CI jobs in place that pull together content, mostly from ci-operator and other tools we build on top, into the prototypes of the entire release content tree. As of a few days ago, every origin 3.11 image was built and pushed on openshift into a unified output location. I'd like to start integrating coreos jobs into that matrix. Knowing that timeline helps make the decision about whether to maintain these jobs.

And the bigger context is that I would like to do the unified flow demo in late July that shows the entire openshift 4 new flow together, and so I'm trying to get CI infra in place now to prop it up once we show it. We can do it with an RHCoS AMI, but I would much prefer to show it as we want customers to interact (which includes getting the token from the website, plumbing it through with the pivot, bringing up the cluster, and then delivering an update).

@cgwalters (Member, Author)

> What's the status of non-privileged ostree builds inside of containers?

I agree with the medium-term vision there, but my feeling here is that the non-privileged builds don't add direct user/customer value; we've been trying to just get Ignition integrated, and we still have major outstanding items like SELinux+Ignition. A much bigger win on this side IMO is getting RPM-style builds into OpenShift (instead of koji), which has the same privilege problem today, but it's fixable even more easily than for host-ostree builds.

I guess a deep problem is trying to support both pivot and non-pivot flows. In particular, there's an obvious fundamental clash between pivoting and Ignition that briefly came up before, but we haven't explored it much yet. Most of our Ignition testing has been non-pivot since it's just easier. I broke that issue out here: openshift/os#148

Anyway, I feel like this is an important discussion to have, but it's much higher level than this PR, which is about cleaning up our build system, right? Like I said, I'd like to use the CI cluster more, but it'd be helpful to make this incremental step first.

@smarterclayton (Contributor)

Talked with Colin; we clarified that a mid-year / late-year priority (October?) made sense, since the short-term work is much more important. We'll touch base in a month or so.

/lgtm

@openshift-ci-robot added the lgtm label (indicates that a PR is ready to be merged) on Jun 29, 2018
@openshift-merge-robot merged commit 6b87c3f into openshift:master on Jun 29, 2018
@openshift-ci-robot (Contributor)

@cgwalters: Updated the config configmap using the following files:

  • key config.yaml using file cluster/ci/config/prow/config.yaml
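
(For reference, this kind of Prow config refresh amounts to rebuilding the `config` ConfigMap from the repository file; a minimal manual sketch of the equivalent step, where the `ci` namespace is an assumption and the bot normally does this automatically on merge:)

```shell
# Sketch only: the namespace is an assumption; the bot performs this update automatically.
oc create configmap config \
  --from-file=config.yaml=cluster/ci/config/prow/config.yaml \
  --dry-run -o yaml | oc replace -n ci -f -
```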

In response to this:

We're moving to a container in the rhcos/ namespace; see
https://github.com/openshift/release/issues/972

This is part of an effort to consolidate the "source of truth" for RHCOS content.
We'd like to get down to one container, and in the short term, having it built by
the same process that builds other content internally, and pushed to the osorgci registry,
makes things far easier.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

derekhiggins pushed a commit to derekhiggins/release that referenced this pull request Oct 24, 2023
* Drop CentOS 7 support completely

CentOS 7 was marked deprecated in openshift#1000.  Everyone really needs to be
on RHEL 8 or CentOS 8 at this point.  Drop CentOS 7 support completely
now that a warning has been in place for a bit.

RHEL 8 (or CentOS 8) is required because that's the environment we
would expect customers to use to run the installer.  Any issues with
CentOS 7 are not worth our focus anymore.

* Drop dead code after removing centos 7 support

firewalld is now always in use after dropping CentOS 7.