
Deployments created by jiva don't have a lot of its attributes #3305

Closed
alexppg opened this issue Nov 24, 2020 · 3 comments
Labels
Community Community Reported Issue

Comments


alexppg commented Nov 24, 2020

Description

The PVC deployments that the Jiva provisioner creates do not have tolerations, a nodeSelector, or a service account (so no PSP applies to them).

What I've done so far is:

  • Install openebs using the helm chart
  • Create StoragePool
  • Create SC using the storage pool and configuration with nodeSelectors and targetTolerations
  • Create PVC
  • Create deploy that uses PVC

When the deploy is created, the Jiva provisioner creates the PVC's controller deploy and one deploy per replica (3 in my case). But those deploys never start, since they're missing the tolerations, nodeSelector, and service account, so no permissive PSP is associated with them.

Expected Behavior

The Jiva provisioner should transfer all the necessary attributes to the deploys that it creates.

Current Behavior

It does not.

Possible Solution

Make the Jiva provisioner create the PVC deployments with everything that is configured in the StorageClass.

Steps to Reproduce

helm upgrade --install openebs --namespace kube-system openebs/openebs --set rbac.pspEnabled=true --version 2.3.0
kubectl apply -n kube-system -f manifests.yml

manifests.yml:

---
apiVersion: openebs.io/v1alpha1
kind: StoragePool
metadata:
  name: gp2-pool
  type: hostdir
spec:
  path: "/mnt/openebs/1/"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-jiva
  annotations:
    openebs.io/cas-type: jiva
    cas.openebs.io/config: |
      - name: ReplicaCount
        value: "3"
      - name: StoragePool
        value: gp2-pool
      - name: TargetNodeSelector
        value: |-
          node.kubernetes.io/role: admin
      - name: TargetTolerations
        value: |-
          t1:
            key: node.kubernetes.io/role
            operator: Equal
            value: admin
            effect: NoSchedule
      - name: FSType
        value: ext4
      - name: DeployInOpenEBSNamespace
        enabled: "true"
provisioner: openebs.io/provisioner-iscsi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: openebs-jiva-test
spec:
  storageClassName: openebs-jiva
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4G
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openebs-jiva-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openebs-jiva-test
  template:
    metadata:
      labels:
        app: openebs-jiva-test
    spec:
      containers:
      - image: nginx
        name: openebs-jiva-test
        volumeMounts:
        - name: openebs-jiva-test
          mountPath: /example/
      serviceAccount: openebs-claim
      serviceAccountName: openebs-claim
      nodeSelector:
        node.kubernetes.io/role: admin
      tolerations:
      - effect: NoSchedule
        key: node.kubernetes.io/role
        operator: Equal
        value: admin
      volumes:
        - name: openebs-jiva-test
          persistentVolumeClaim:
            claimName: openebs-jiva-test

The workaround is to add the missing attributes manually:

for deploy in $(kubectl get deploy -n kube-system -l "openebs.io/cas-type=jiva" | cut -d' ' -f1 | grep -v NAME);
do
    kubectl patch deploy -n kube-system $deploy -p "$(cat patch.yml)";
done

patch.yml:

spec:
  template:
    spec:
      serviceAccount: openebs
      nodeSelector:
        node.kubernetes.io/role: admin
      tolerations:
      - effect: NoSchedule
        key: node.kubernetes.io/role
        operator: Equal
        value: admin

Once patched, the deployments start correctly, and the test deploy that uses the PVC can finally start.

Your Environment

  • Openebs version: 2.3.0
  • Kubernetes flavor: eks
  • Kubernetes version: 1.16.13
  • Environment: Default restricted PSP; no pod can start without a nodeSelector and tolerations
@prateekpandey14 prateekpandey14 added the Community Community Reported Issue label Nov 25, 2020

alexppg commented Dec 1, 2020

Just to update: @prateekpandey14 has commented on the Slack thread that he can't reproduce the problem. I'm not sure how to continue debugging; any hint on how to do so would be welcome.


alexppg commented Dec 11, 2020

If somebody reads this, there are two things to note.

First, the pvc-##-rep-## deployments didn't start because of a configuration problem on my side. It turns out that TargetTolerations and TargetNodeSelector only apply to pvc-##-ctrl-##; to configure the replicas, you must use ReplicaNodeSelector and ReplicaTolerations.

Second, there was indeed a bug, but it only affected the controller deploy. The image prateek14/m-apiserver:jiva-sa fixes the problem; it's just a test image, so it shouldn't be used in production. It seems the fix will ship next week with the 2.4 release.
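Based on the above, a corrected cas.openebs.io/config annotation would configure the controller and the replicas separately. This is only a sketch assembled from the earlier StorageClass manifest and the key names mentioned above (TargetNodeSelector/TargetTolerations for the controller, ReplicaNodeSelector/ReplicaTolerations for the replicas); the values shown just mirror my environment:

```yaml
# Target* keys apply to the pvc-##-ctrl-## deploy,
# Replica* keys to the pvc-##-rep-## deploys.
- name: TargetNodeSelector
  value: |-
    node.kubernetes.io/role: admin
- name: TargetTolerations
  value: |-
    t1:
      key: node.kubernetes.io/role
      operator: Equal
      value: admin
      effect: NoSchedule
- name: ReplicaNodeSelector
  value: |-
    node.kubernetes.io/role: admin
- name: ReplicaTolerations
  value: |-
    t1:
      key: node.kubernetes.io/role
      operator: Equal
      value: admin
      effect: NoSchedule
```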

@prateekpandey14
Member

prateekpandey14 commented: Fixed in openebs/maya#1773. Thanks @alexppg for raising the issue.
