imageimportcontroller can't use host CAs anymore #20022
We need to start moving config into the apiserver as configmaps or CRDs.
Lots of other components have similar needs.
What is the change that puts us on that path?
I think the controller should load a secret or configmap from a known
location and refresh it periodically.
That's roughly equivalent to secrets and also sets the stage for global
import secrets (look at a docker config JSON secret for all imports and
layer it in). We'd need it to try multiple credentials on failure, so that
can be a later thing. That then bridges the gap for centrally managed
generic registry pull secrets moving from nodes to the cluster.
This is likely going to be painful for dealing with things like certs. I haven't inspected the code, so perhaps there's an easy solution, but I would guess the cert configuration is controlled by libraries/dependencies deeper in the stack than the controller operates at, and the easiest way for the controller to add a trusted certificate to its request flow is to put the cert in the system cert path, not to figure out how to load it itself and pass it down the callstack to a low-level https library that it's not directly invoking.

It also makes for more admin overhead if they have to provide copies of self-signed certs to every controller's configmap, rather than simply supplying them to some global cluster configuration (which could also be a configmap, but one that gets mounted into the control plane pod). Of course, global config has the limitation of not allowing fine-grained control over which certs are consumable by which controllers, but I'm not aware of that having come up as a use case to date.

Personally, at least with respect to the cert problem, I'd like to see a solution in which a well-known configmap is mounted into the control plane pod. Admins can then use that configmap to push files that used to be on the host into their controller+apiserver pods with minimal effort. We can then easily introduce tools to turn local host files into the appropriate configmap.

tl;dr: our code knows how to read the local filesystem. Rather than change the code, let's make it easy for an admin to make the container's local filesystem look like the code already expects it to.
(I don't disagree that for configuration data that didn't previously come from the local filesystem, having code pick it up directly from a configmap makes sense.)
I don't like seeing pods directly inspecting configmaps. As we start moving components into pods, I'd expect those pods to read configuration from disk that was mounted via a configmap. Using the master-config.yaml with a file reference (like we commonly use today for other things) would allow an admin to manage a configmap and a config pointing to it.

I do not think that all trust is the same, so I would not expect the same CA bundle for all controllers and all communication to distinct backends. Different endpoints are different, with potentially different levels of trust.

I do think that, specifically for importing images, a user trying to do an import has a reasonable expectation that they can choose the CA. That should at least be a requirement, or an image import is going to require that a cluster-admin trust a CA they should not trust, since cluster-admin-configured trust would be global.
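To make "different trust for different endpoints" concrete, here is a hedged sketch in Go (not OpenShift's actual code or config schema; the map, hostname, and file path are invented for illustration) of resolving a per-registry CA bundle, with registries that have no entry falling back to the system roots:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"errors"
	"os"
)

// registryCAs maps a registry hostname to a PEM bundle path (e.g. files
// mounted from a configmap). The hostname and path are purely illustrative.
var registryCAs = map[string]string{
	"registry.internal.example.com": "/etc/origin/master/registry-cas/internal.crt",
}

// tlsConfigFor returns a TLS config trusting only the CA bundle configured
// for this registry; registries without an entry use the system roots.
func tlsConfigFor(host string) (*tls.Config, error) {
	path, ok := registryCAs[host]
	if !ok {
		return &tls.Config{}, nil // nil RootCAs means "use system roots"
	}
	pem, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(pem) {
		return nil, errors.New("no certificates in " + path)
	}
	return &tls.Config{RootCAs: pool}, nil
}

func main() {
	cfg, err := tlsConfigFor("docker.io")
	if err != nil || cfg.RootCAs != nil {
		panic("expected system-root fallback for an unconfigured registry")
	}
}
```

The same lookup shape would extend naturally to per-user trust later: the map would simply be resolved from a user-scoped object instead of a cluster-scoped one.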
So the proposal as I understand it is:
This means yet another hostpath mount in the control plane pod. Using a configmap was deemed not possible today since the controller pod is static, but in the future we can potentially move the content into a configmap. @smarterclayton @deads2k accurate?
Correct for 1 and 2. I'm not sure of the exact mechanism for 3, but that seems plausible.
ok, slightly modified plan. Since we already mount /etc/origin/master into the control plane, my intent is to do the following:
In 3.11 (hopefully) we'll replace (1) with "create a configmap" and (2) with "mount the configmap". The advantages of this approach are:
To make things work, users will just have to copy certs into
(There is officially a bug for this now: https://bugzilla.redhat.com/show_bug.cgi?id=1592936)
With oc cluster up and v3.10 both moving controllers into containers, they no longer have access to the host system CAs.
That means that for components like imageimport, the admin cannot add self-signed certs to the host system to enable imageimport to talk to in-house registries.
@deads2k has suggested this is basically a flaw in how imageimport always worked (it never should have used system certs, it should have had its own list of certs provided in master-config, or some other mechanism, since the certificates you may want to trust for image import may not be the same as what you trust generally. In fact ideally it should be under user (not admin/cluster-admin) control which registries I want to trust when importing an image since my credentials are being sent to that registry).
I don't necessarily disagree, though I'm also not sure I want to burden admins with having to put certs into master-config when they are used to managing host system certs (let alone whatever a "user specific trusted certs" architecture would look like). It will also make the cluster up experience more painful, since you'll have to bring up a cluster to get a config, then edit the config and restart the cluster.
I realize "mount the certs from the host" is likely to be rejected by @smarterclayton but I want to have the discussion so people can weigh in before we pick a path forward, and see if there is a third alternative available. (For cluster up, perhaps a path forward is to pull all the host certs into a configmap and mount that configmap into the controller pod?)
This is urgent for 3.11 and frankly it's probably already going to break 3.10 users (similar to the late breaking imagesignature import issue we just caught) since we don't have a straightforward story for how admins can add certs for the controllers to leverage. Today I think they'd have to either hack the pod yaml to mount the host certs, or rebuild the image.
(I'm also not sure if there are any other controllers in the same boat as imageimport, that would need to talk to servers using potentially self-signed or otherwise untrusted certs).
@smarterclayton @deads2k @dmage @legionus @jwforres