
DEVEXP-423: Mounting node's pull secrets on build pod. #76

Merged
merged 1 commit into from Apr 6, 2020

Conversation

ricardomaraschini (Contributor)

Mounting node pull secrets on build pod, according to:

openshift/enhancements/enhancements/node-pull-credentials

@openshift-ci-robot openshift-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Mar 17, 2020
@ricardomaraschini (Contributor Author)

/retest

3 similar /retest comments from @ricardomaraschini followed.

@ricardomaraschini (Contributor Author)

/test

@ricardomaraschini (Contributor Author)

/retest

4 similar /retest comments from @ricardomaraschini followed.

@ricardomaraschini ricardomaraschini changed the title WIP - DEVEXP-423: Mounting node's pull secrets on build pod. DEVEXP-423: Mounting node's pull secrets on build pod. Mar 23, 2020
@openshift-ci-robot openshift-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Mar 23, 2020
@ricardomaraschini (Contributor Author)

/assign @gabemontero @adambkaplan

{
	Name: "node-pullsecrets",
	VolumeSource: v1.VolumeSource{
		HostPath: &v1.HostPathVolumeSource{
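The hunk above wires a HostPath volume into the build pod. Here is a runnable sketch of the shape of such a volume/mount pair, using minimal local stand-ins for the corev1 types so it compiles without the Kubernetes libraries; the host path and mount point are illustrative assumptions, not the exact values from this PR:

```go
package main

import "fmt"

// Minimal local stand-ins for the k8s.io/api/core/v1 types.
type HostPathType string

const HostPathFile HostPathType = "File"

type HostPathVolumeSource struct {
	Path string
	Type *HostPathType
}

type VolumeSource struct {
	HostPath *HostPathVolumeSource
}

type Volume struct {
	Name         string
	VolumeSource VolumeSource
}

type VolumeMount struct {
	Name      string
	ReadOnly  bool
	MountPath string
}

// nodePullSecretVolume builds the volume/mount pair. The path below is
// an assumed kubelet pull-secret location, purely for illustration.
func nodePullSecretVolume() (Volume, VolumeMount) {
	fileType := HostPathFile
	vol := Volume{
		Name: "node-pullsecrets",
		VolumeSource: VolumeSource{
			HostPath: &HostPathVolumeSource{
				Path: "/var/lib/kubelet/config.json", // assumption
				Type: &fileType,
			},
		},
	}
	mount := VolumeMount{
		Name:      "node-pullsecrets",
		ReadOnly:  true, // the build only needs to read the credentials
		MountPath: "/var/lib/kubelet/config.json", // assumption
	}
	return vol, mount
}

func main() {
	v, m := nodePullSecretVolume()
	fmt.Println(v.Name, *v.VolumeSource.HostPath.Type, m.ReadOnly)
}
```

Because the volume is a HostPath of type File, the kubelet refuses to start the pod unless that file already exists on the node, which is the behavior the reviewers discuss below.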
Contributor

So this comment here @ricardomaraschini is as much of a broader comment for builds in general as it is for this piece (though we could start addressing things with your changes).

Namely, we've added another thing that necessitates a privileged pod.

So, thoughts that come to mind @ricardomaraschini @adambkaplan @bparees :

  • we have reqs on the books for devising a build solution that does not require privileged pods, no?
  • if I'm wrong there, ignore the rest :-)
  • are we hoping to address that with build v1, or will we defer to build v2 for this?
  • if not build v1, ignore the rest :-)
  • do we have the concise list of places we need privileged? and if so, should this feature be added to that list?
  • and finally, to facilitate that future work, should we start abstracting out privilege-requiring elements like this into utility methods or decorating/mutating abstractions that could, in the future, consume a config setting to decide whether these privileged elements should be added to the build pod?

thanks

Contributor

> we have reqs on the books for devising a build solution that does not require privileged pods, no?

Unprivileged builds won't happen until we drop support for RHEL7 workers. We need fuse-overlay to give us some level of performance parity.

> are we hoping to address that with build v1, or will we defer to build v2 for this?

It's unclear at this point if builds v2 will be the place where we land the unprivileged builds feature. I like this idea since it lowers the barrier for the upstream community to consume, i.e. builds v2 could be a regular workload on any k8s cluster, but it works best if you have Fedora CoreOS/RHCOS as your host nodes.

> do we have the concise list of places we need privileged? and if so, should this feature be added to that list?

We're already going down the path of adding more host path dependencies with the build image cache enhancement, so I wouldn't worry about this right now.

> and finally, to facilitate that future work, should we start abstracting out privilege-requiring elements like this into utility methods or decorating/mutating abstractions?

I'm not inclined to do this right now. In many respects I'd prefer builds v2 avoid host dependencies so that it could run on any k8s cluster. If there's demand for it, then we can look to refactor things out.

Contributor

My last comment was meant to be in the context of build v1, not build v2. Regardless, it only has bearing if the RHEL7/fuse-overlay items you noted initially are addressed.

@@ -39,6 +39,7 @@ func (bs *DockerBuildStrategy) CreateBuildPod(build *buildv1.Build, additionalCA

privileged := true
strategy := build.Spec.Strategy.DockerStrategy
typeFile := v1.HostPathType("File")
Contributor

Shouldn't this be a constant in the API?

Contributor Author

It turns out we do have a constant in corev1. I have sent over a commit to address this.
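For reference, a tiny runnable comparison of the raw-string conversion and the constant form. The `HostPathType` type and `HostPathFile` constant are reproduced locally here so the snippet is self-contained; in k8s.io/api/core/v1 the constant is `v1.HostPathFile`:

```go
package main

import "fmt"

// Local copy of the corev1 definitions: in k8s.io/api/core/v1,
// HostPathFile is a predeclared HostPathType constant.
type HostPathType string

const HostPathFile HostPathType = "File"

func main() {
	// The original code built the value from a raw string literal:
	typeFile := HostPathType("File")
	// The API constant yields the same value but is typo-proof and
	// checked by the compiler:
	fmt.Println(typeFile == HostPathFile) // prints "true"
}
```

The two forms are equal at runtime; the constant simply moves the "File" spelling into the API package where the compiler can catch mistakes.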


	t.Fatalf("Expected 11 volumes in container, got %d", len(container.VolumeMounts))
	// node pull secrets
	if len(container.VolumeMounts) != 12 {
		t.Fatalf("Expected 12 volumes in container, got %d", len(container.VolumeMounts))
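The assertion above follows a count-bump pattern: each mount the controller adds raises the expected total in the test. A self-contained sketch of that pattern, where the 11/12 counts mirror the hunk and the mount name is the one introduced by this PR:

```go
package main

import "fmt"

type VolumeMount struct{ Name string }

func main() {
	// Stand-in for the mounts the build container already had.
	mounts := make([]VolumeMount, 11)
	// node pull secrets: the mount this PR appends.
	mounts = append(mounts, VolumeMount{Name: "node-pullsecrets"})
	if len(mounts) != 12 {
		fmt.Printf("Expected 12 volumes in container, got %d\n", len(mounts))
		return
	}
	fmt.Println("ok") // prints "ok"
}
```

The downside of counting, as the next comment hints, is that the same bump has to be repeated in every strategy's test (docker, source/sti, custom), which is easy to miss.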
Contributor

@ricardomaraschini Need to add the node pull secret to the source strategy (sti) and custom strategy builds, too.

Contributor Author

Thanks! I forgot those. I think we have a problem when it comes to the custom strategy, though: by mounting these secrets, one could easily copy them to the outside world, no?

Our documentation states that:

"Custom builds run with a very high level of privilege and are not available to users by default. Only users who can be trusted with cluster administration permissions should be granted access to run custom builds."

What do you reckon?

Contributor

Good question @ricardomaraschini. My view would be that it's reasonable to leave this out of custom builds for now and force those users to still provide their own pull secrets. People doing custom builds already have to be pretty sophisticated.

If we get a lot of pushback about the pain that causes we can revisit the decision, since as you note, someone doing custom builds also already has to be a highly trusted user.

Contributor

@bparees that sounds like a fair compromise to me. We need to document this in JIRA so we make it clear in the release notes.

@ricardomaraschini (Contributor Author)

/retest

Mounting node pull secrets on build pod, according to:

openshift/enhancements/enhancements/node-pull-credentials
@ricardomaraschini (Contributor Author)

/retest

@adambkaplan (Contributor) left a comment

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Apr 6, 2020
@openshift-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: adambkaplan, ricardomaraschini

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 6, 2020
@openshift-bot (Contributor)

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-merge-robot openshift-merge-robot merged commit de95d93 into openshift:master Apr 6, 2020
Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. lgtm Indicates that a PR is ready to be merged.

7 participants