
feat(deploy/kubernetes): custom volume mnt per svc #1110

Merged
merged 1 commit into spinnaker:master from volume-service-setting on Nov 29, 2018

Conversation

ethanfrogers
Contributor

add custom ConfigMap or Secret volume mounts at the service level.
This allows you to mount extra config or secrets as needed.

@ethanfrogers
Contributor Author

spinnaker/spinnaker#3669

@lwander PTAL!

@lwander
Member

lwander commented Nov 29, 2018

Nice

@ethanfrogers
Contributor Author

I'll add this to the documentation, but for now here's an example of how to add it to your service settings:

# ~/.hal/default/service-settings/{service}.yml
kubernetes:
  volumes:
  - id: my-secret-name
    type: secret
    mountPath: /some/path
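
With that in place, Halyard should render the mount into the service's Deployment roughly as follows. This is a sketch based on the example above; the volume name comes from the id, and the container name depends on which service the setting targets:

# Illustrative rendering in the service's Deployment spec
spec:
  template:
    spec:
      containers:
      - name: echo                   # whichever service the setting targets
        volumeMounts:
        - name: my-secret-name       # taken from the volume id above
          mountPath: /some/path
      volumes:
      - name: my-secret-name
        secret:
          secretName: my-secret-name # the Secret referenced by the id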

ethanfrogers merged commit 87e7f20 into spinnaker:master Nov 29, 2018
ethanfrogers deleted the volume-service-setting branch November 29, 2018 16:39
kevinawoo added a commit to armory-io/halyard that referenced this pull request Nov 30, 2018
* master:
  feat(deploy/kubernetes): custom volume mnt per svc (spinnaker#1110)
  refactor(core): Push validation logic to base class (spinnaker#1108)
@tillig

tillig commented Dec 6, 2018

I just installed Halyard nightly 1.13.0-20181206020509 to try this out, but it doesn't seem to be there. I added a mount to ~/.hal/default/service-settings/echo.yml along with a pod annotation just to make sure the file was getting picked up. The pod annotation does get applied; the mount does not.

kubernetes:
  volumes:
  - id: echo-webhook-templates
    type: secret
    mountPath: /mnt/webhook-templates
  podAnnotations:
    testannotation: here

Does something need to happen to get this change into a nightly to try out?
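
For reference, one quick way to check whether a custom mount made it into the rendered manifest (assuming the standard spin-echo deployment name):

kubectl get deployment spin-echo -n spinnaker \
  -o jsonpath='{.spec.template.spec.containers[0].volumeMounts[*].mountPath}'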

@lwander
Member

lwander commented Dec 7, 2018

The date on the nightly looks correct -- it was built yesterday morning. Maybe @ethanfrogers knows what's up?

@ethanfrogers
Contributor Author

Very odd! I'll double check it.

@ethanfrogers
Contributor Author

@tillig I just installed the same build and it's working for me. How did you install the nightly? Install script or Docker image?

@tillig

tillig commented Dec 10, 2018

I used the install script. Admittedly, I had the stable version installed already and installed the nightly over the top. Maybe I need to do a clean uninstall and restore my config from backup after reinstalling? I'll give it a shot today.

@tillig

tillig commented Dec 10, 2018

I'm having no luck and I'm not sure why.

1. Backed up my config, then did a full hal deploy clean and sudo ~/.hal/uninstall.sh as noted in the docs.
2. Noticed that /opt/halyard was still full of stuff, so I deleted all of that manually.
3. Downloaded the nightly version of InstallHalyard.sh and did an install. hal --version yields 1.13.0-20181208020509.
4. Restored my configuration with hal backup restore. hal config shows everything right.

~/.hal/default/service-settings/echo.yml looks like this (I removed the podAnnotation from earlier):

kubernetes:
  volumes:
  - id: echo-webhook-templates
    type: secret
    mountPath: /mnt/webhook-templates

I can get the secret, no problem. kubectl get secret/echo-webhook-templates -n spinnaker does retrieve it, so I know it's there and it's not a typo.
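
(For anyone reproducing this later: the secret already existed; it was created beforehand with something along these lines, where the template file name is a placeholder.)

kubectl create secret generic echo-webhook-templates \
  --from-file=mytemplate.vm -n spinnaker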

However, after hal deploy apply I can look at the resulting deployment and there's no mount. kubectl get deploy/spin-echo -n spinnaker -o yaml yields:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: "<truncated>"
    moniker.spinnaker.io/application: '"spin"'
    moniker.spinnaker.io/cluster: '"echo"'
  creationTimestamp: 2018-12-10T16:45:23Z
  generation: 1
  labels:
    app: spin
    app.kubernetes.io/managed-by: halyard
    app.kubernetes.io/name: echo
    app.kubernetes.io/part-of: spinnaker
    app.kubernetes.io/version: 1.10.5
    cluster: spin-echo
  name: spin-echo
  namespace: spinnaker
  resourceVersion: "5294022"
  selfLink: /apis/extensions/v1beta1/namespaces/spinnaker/deployments/spin-echo
  uid: fd335fe6-fc9a-11e8-8561-ee8b8d3710af
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: spin
      cluster: spin-echo
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: spin
        app.kubernetes.io/managed-by: halyard
        app.kubernetes.io/name: echo
        app.kubernetes.io/part-of: spinnaker
        app.kubernetes.io/version: 1.10.5
        cluster: spin-echo
    spec:
      containers:
      - env:
        - name: JAVA_OPTS
          value: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
            -XX:MaxRAMFraction=2
        - name: SPRING_PROFILES_ACTIVE
          value: local
        image: gcr.io/spinnaker-marketplace/echo:2.1.2-20181113042810
        imagePullPolicy: IfNotPresent
        lifecycle: {}
        name: echo
        ports:
        - containerPort: 8089
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - wget
            - --no-check-certificate
            - --spider
            - -q
            - http://localhost:8089/health
          failureThreshold: 3
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /opt/spinnaker/config
          name: spin-echo-files-1024083987
        - mountPath: /opt/spinnaker-monitoring/config
          name: spin-echo-files-1144163964
        - mountPath: /opt/spinnaker-monitoring/registry
          name: spin-echo-files-1658044194
      - env:
        - name: JAVA_OPTS
          value: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
            -XX:MaxRAMFraction=2
        - name: SPRING_PROFILES_ACTIVE
          value: local
        image: gcr.io/spinnaker-marketplace/monitoring-daemon:0.9.2-20181108172516
        imagePullPolicy: IfNotPresent
        lifecycle: {}
        name: monitoring-daemon
        ports:
        - containerPort: 8008
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: 8008
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /opt/spinnaker/config
          name: spin-echo-files-1024083987
        - mountPath: /opt/spinnaker-monitoring/config
          name: spin-echo-files-1144163964
        - mountPath: /opt/spinnaker-monitoring/registry
          name: spin-echo-files-1658044194
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 60
      volumes:
      - name: spin-echo-files-1024083987
        secret:
          defaultMode: 420
          secretName: spin-echo-files-1024083987
      - name: spin-echo-files-1658044194
        secret:
          defaultMode: 420
          secretName: spin-echo-files-1658044194
      - name: spin-echo-files-1144163964
        secret:
          defaultMode: 420
          secretName: spin-echo-files-1144163964
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2018-12-10T16:46:26Z
    lastUpdateTime: 2018-12-10T16:46:26Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2018-12-10T16:45:23Z
    lastUpdateTime: 2018-12-10T16:46:26Z
    message: ReplicaSet "spin-echo-7ff78f6479" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

I truncated the last applied configuration for easier reading. Point being, there's no secret mount in there from my echo.yml. If I search for anything with templates...

kubectl get deploy -n spinnaker -o yaml | grep templates

...there are no results. It's not in there.

Am I doing something wrong? Putting the echo.yml in the wrong location? Bad format? I don't know how to troubleshoot it.

@ethanfrogers
Contributor Author

@tillig feel free to ping me on slack to do some more debugging!

@tillig

tillig commented Dec 10, 2018

For better or worse (mostly worse)... our company has Slack blocked. 😢

@ethanfrogers
Contributor Author

Let me give it a shot with the nightly. I'll try installing the nightly using the OSX install script with --version 1.13.0-20181208020509.

@ethanfrogers
Contributor Author

@tillig apparently I can't install the nightly using the nightly install script because I'm on OSX. However, I moved to the 1.13.0-20181208020509 Docker image and was able to get volume mounts working properly. Would you be able to try it that way? Perhaps something is wrong with the nightly install script and it's pulling the wrong version?
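
For reference, the Docker route looks roughly like this -- the image path is the standard Spinnaker marketplace one and the volume mounts are the usual Halyard setup, so adjust as needed:

docker run -it --rm \
  -v ~/.hal:/home/spinnaker/.hal \
  -v ~/.kube:/home/spinnaker/.kube \
  gcr.io/spinnaker-marketplace/halyard:1.13.0-20181208020509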

@tillig

tillig commented Dec 12, 2018

I had a good time learning about the challenges of Docker for Windows alongside Windows Subsystem for Linux and Hyper-V over the last day while trying to get this working. (Halyard works just fine in WSL running Ubuntu, BTW, but Docker on Windows is a nightmare.) Anyway, it turns out that while I was doing all that, Halyard 1.13 was released as stable, so I could throw all of that out, run update-halyard... and it works just fine now.

Thanks for your patience in helping me work through this. I'm guessing it was some combination of odd challenges around installing a nightly over a stable release and Halyard uninstalls not actually being clean, all adding up to a bad install that didn't include everything it needed.
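
For anyone who lands here later, the sequence that finally worked for me (assuming the script-based install, which provides update-halyard):

sudo update-halyard
hal deploy apply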
