Added in cert generation for openshift #35

Merged

JPinkney merged 1 commit into devfile:master on Apr 2, 2020

Conversation

@JPinkney JPinkney (Contributor) commented Mar 31, 2020

What does this PR do?

This PR adds cert generation for OpenShift using service serving certificates: https://docs.openshift.com/container-platform/4.2/authentication/certificates/service-serving-certificate.html.

According to the documentation:

The certificate and key are automatically replaced when they get close to expiration. The service CA certificate, which signs the service certificates, is only valid for one year after OpenShift Container Platform is installed.
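
Concretely, the mechanism boils down to annotating the webhook Service so that the OpenShift service CA writes a signed cert/key pair into a Secret, which the controller then mounts. A minimal Go sketch of what such a Service could look like, reusing the webhook-tls-certs secret name from this PR's diff (the Service name and selector are illustrative and may not match the merged code):

    package deploy

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // webhookService builds a Service annotated so that the OpenShift service CA
    // generates a serving cert/key pair into the named secret, per the docs
    // linked above. Names other than the secret's are illustrative.
    func webhookService(namespace string) *corev1.Service {
        return &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{
                Name:      "che-workspace-controller",
                Namespace: namespace,
                Annotations: map[string]string{
                    // OpenShift creates and rotates a TLS secret with this name.
                    "service.beta.openshift.io/serving-cert-secret-name": "webhook-tls-certs",
                },
            },
            Spec: corev1.ServiceSpec{
                Selector: map[string]string{"app": "che-workspace-controller"},
                Ports: []corev1.ServicePort{{
                    Port:       443,
                    TargetPort: intstr.FromInt(8443), // the webhook server port used in this PR
                }},
            },
        }
    }

The CA bundle side (the che-workspace-controller-secure-service configmap that appears in the diff below) can be populated the same way, by annotating a ConfigMap with service.beta.openshift.io/inject-cabundle: "true".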

What issues does this PR fix or reference?

eclipse-che/che#16251

Is it tested? How?

Currently testing on crc with webhooks enabled and che.default_routing_class: "openshift-oauth".

Test image is: jpinkney/che-workspace-controller:7.3.0

@JPinkney JPinkney changed the title Added in cert generation for openshift WIP: Added in cert generation for openshift Apr 1, 2020
@JPinkney JPinkney changed the title WIP: Added in cert generation for openshift Added in cert generation for openshift Apr 1, 2020
        containerPort: 8443
      volumeMounts:
        - name: webhook-tls-certs
          mountPath: /tmp/k8s-webhook-server/serving-certs
sleshchenko (Member) commented:

Consider duplicating it into an env var and then using the value here: https://github.com/che-incubator/che-workspace-operator/blob/2dc7f2e252f7912548fa405dac1d24cfda30e9fb/pkg/webhook/server/server.go#L26
It would make it a bit more transparent that the controller is configured with this value.
@amisevsk WDYT?

amisevsk (Collaborator) commented:

I'm not sure I understand fully, but I feel like reading the value from a mounted file is the more canonical approach; that's what's done in e.g. openshift oauth proxy:

      volumeMounts:
        - mountPath: /etc/tls/private
          name: proxy-tls
...
  volumes:
    - name: proxy-tls
      secret:
        defaultMode: 420
        secretName: proxy-tls

sleshchenko (Member) commented:

I just meant moving the hard-coded path into an environment variable here, so we would be able to have:

     env:
       - name: webhooks.certs.path 
         value: /tmp/k8s-webhook-server/serving-certs
...
      volumeMounts:
        - mountPath: /tmp/k8s-webhook-server/serving-certs
          name: webhook-tls
...
  volumes:
    - name: webhook-tls
      secret:
        secretName: webhook-tls

but it's not critical
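
For illustration, the consuming side in server.go could then resolve the path like this; a sketch only, since the webhooks.certs.path name above is just a proposal and does not exist in the codebase:

    package server

    import "os"

    // servingCertsPath resolves the webhook serving-certs directory from the
    // env var proposed above, falling back to the currently hard-coded path.
    func servingCertsPath() string {
        if p := os.Getenv("webhooks.certs.path"); p != "" {
            return p
        }
        return "/tmp/k8s-webhook-server/serving-certs"
    }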

  projected:
    sources:
      - configMap:
          name: che-workspace-controller-secure-service
sleshchenko (Member) commented:

Will a deployment be triggered when OpenShift rotates the sources?
If not, maybe it makes sense to watch this configmap and secret from the WorkspaceController.
Or maybe there is another mechanism for restarting the controller in such a case.

amisevsk (Collaborator) commented:

We have a mechanism in place for watching the main controller configmap (e.g. here), but I don't know if it's suitable if we need to restart the controller pod. I don't know that controllers typically roll out their own deployments, especially since this could look like an issue in the controller itself.

JPinkney (Contributor, author) commented:

Will a deployment be triggered when OpenShift rotates the sources?

As far as I can tell, the documentation doesn't say what happens when the sources are rotated. All it says is:

The certificate and key are automatically replaced when they get close to expiration

I'll probably have to dig a little deeper into that code to see what would happen.

sleshchenko (Member) commented:

[screenshot: Screenshot_20200402_123226]

It makes me think that instead of mounting files we could:

  • watch the secret - at start time, initialize files on the filesystem with the current values.
    • once the secret is modified - update the files' content. The cert watcher will probably pick up the new values and restart the webhook server - that needs to be checked.
  • watch the configmap - update the WebhooksCfgs when certs are changed in the configmap.

But since by default the certificates have a 1-year expiration, we could postpone this task.
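
Since the kubelet refreshes mounted Secret contents in place, a lighter-weight variant of the first bullet above would be to watch the mounted files themselves instead of the Secret object, e.g. with fsnotify (note this is a different technique than the API-level watch proposed above). A rough sketch, with the reload signal left hypothetical:

    package server

    import (
        "log"

        "github.com/fsnotify/fsnotify"
    )

    // watchCerts watches the mounted serving-certs directory and signals reload
    // whenever the kubelet refreshes the files (e.g. after OpenShift rotates the
    // secret). The kubelet swaps a ..data symlink on update, so Create events
    // matter as much as Writes. The reload channel, and how the webhook server
    // would consume it, are hypothetical.
    func watchCerts(dir string, reload chan<- struct{}) error {
        watcher, err := fsnotify.NewWatcher()
        if err != nil {
            return err
        }
        go func() {
            defer watcher.Close()
            for {
                select {
                case event := <-watcher.Events:
                    if event.Op&(fsnotify.Write|fsnotify.Create) != 0 {
                        reload <- struct{}{}
                    }
                case err := <-watcher.Errors:
                    log.Printf("cert watch error: %v", err)
                }
            }
        }()
        return watcher.Add(dir)
    }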

@amisevsk amisevsk (Collaborator) left a comment:

LGTM; we can address further issues later (e.g. how to pick up new certs)

@sleshchenko sleshchenko (Member) left a comment:

Tested. Works fine!
Let's merge it and address other improvements as separate issues

@JPinkney JPinkney merged commit df4e88a into devfile:master Apr 2, 2020
amisevsk added a commit to amisevsk/devworkspace-operator that referenced this pull request Apr 3, 2020
The makefile was rebased over a commit which moves some deployment yamls
around, breaking many of the rules.

Signed-off-by: Angel Misevski <amisevsk@redhat.com>
sleshchenko pushed a commit that referenced this pull request Apr 3, 2020
The makefile was rebased over a commit which moves some deployment yamls
around, breaking many of the rules.

Signed-off-by: Angel Misevski <amisevsk@redhat.com>
JPinkney pushed a commit to JPinkney/che-workspace-operator that referenced this pull request May 13, 2021
Use correct variable name in release script