[BUG] Fail to start longhorn with k3d #206

Closed
acefei opened this issue Mar 11, 2020 · 4 comments
acefei commented Mar 11, 2020

What did you do?
I tried to deploy Longhorn using kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml.
It works perfectly on k3s, but fails on k3d with: Error: failed to generate container "2e4bb7f592a59476546b25d1c224c8bdb6eaeb8fb2d31709769f95cb61d7c1f5" spec: path "/var/lib/longhorn/" is mounted on "/" but it is not a shared mount

What did you expect to happen?
A successful deployment looks like this:

# kubectl get pods -n longhorn-system
NAME                                        READY   STATUS    RESTARTS   AGE
svclb-longhorn-frontend-9cd46               0/1     Pending   0          19d
longhorn-manager-cd2b9                      1/1     Running   0          19d
longhorn-ui-b97b74b8-dktjp                  1/1     Running   0          19d
instance-manager-e-6cee2e57                 1/1     Running   0          19d
instance-manager-r-2b03cc42                 1/1     Running   0          19d
longhorn-driver-deployer-5558df9859-c8dq4   1/1     Running   0          19d
csi-attacher-5d9cffdbd6-fxhms               1/1     Running   0          19d
csi-attacher-5d9cffdbd6-v75p8               1/1     Running   0          19d
csi-attacher-5d9cffdbd6-l5hbn               1/1     Running   0          19d
compatible-csi-attacher-7b9757dc9c-2bn7w    1/1     Running   0          19d
csi-provisioner-75dddf86b9-5wmtf            1/1     Running   0          19d
csi-provisioner-75dddf86b9-k44bq            1/1     Running   0          19d
csi-provisioner-75dddf86b9-5hsrq            1/1     Running   0          19d
longhorn-csi-plugin-jsm9c                   4/4     Running   0          19d
engine-image-ei-ec95b5ad-hgd6s              1/1     Running   0          19d

Screenshots or terminal output

$ kubectl describe pods -n longhorn-system longhorn-manager-vrr6g
Name:         longhorn-manager-vrr6g
Namespace:    longhorn-system
Priority:     0
Node:         k3d-k3s-default-worker-0/172.18.0.4
Start Time:   Wed, 11 Mar 2020 03:14:55 +0000
Labels:       app=longhorn-manager
              controller-revision-hash=5854c57455
              pod-template-generation=1
Annotations:  <none>
Status:       Pending
IP:           10.42.0.4
IPs:
  IP:           10.42.0.4
Controlled By:  DaemonSet/longhorn-manager
Containers:
  longhorn-manager:
    Container ID:
    Image:         longhornio/longhorn-manager:v0.8.0
    Image ID:
    Port:          9500/TCP
    Host Port:     0/TCP
    Command:
      longhorn-manager
      -d
      daemon
      --engine-image
      longhornio/longhorn-engine:v0.8.0
      --instance-manager-image
      longhornio/longhorn-instance-manager:v1_20200301
      --manager-image
      longhornio/longhorn-manager:v0.8.0
      --service-account
      longhorn-service-account
    State:          Waiting
      Reason:       CreateContainerError
    Ready:          False
    Restart Count:  0
    Readiness:      tcp-socket :9500 delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAMESPACE:         longhorn-system (v1:metadata.namespace)
      POD_IP:                 (v1:status.podIP)
      NODE_NAME:              (v1:spec.nodeName)
      DEFAULT_SETTING_PATH:  /var/lib/longhorn-setting/default-setting.yaml
    Mounts:
      /host/dev/ from dev (rw)
      /host/proc/ from proc (rw)
      /var/lib/longhorn-setting/ from longhorn-default-setting (rw)
      /var/lib/longhorn/ from longhorn (rw)
      /var/run/ from varrun (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from longhorn-service-account-token-bdqrf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  dev:
    Type:          HostPath (bare host directory volume)
    Path:          /dev/
    HostPathType:
  proc:
    Type:          HostPath (bare host directory volume)
    Path:          /proc/
    HostPathType:
  varrun:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/
    HostPathType:
  longhorn:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/longhorn/
    HostPathType:
  longhorn-default-setting:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      longhorn-default-setting
    Optional:  false
  longhorn-service-account-token-bdqrf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  longhorn-service-account-token-bdqrf
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type     Reason     Age                From                               Message
  ----     ------     ----               ----                               -------
  Normal   Scheduled  <unknown>          default-scheduler                  Successfully assigned longhorn-system/longhorn-manager-vrr6g to k3d-k3s-default-worker-0
  Warning  Failed     85s                kubelet, k3d-k3s-default-worker-0  Error: failed to generate container "ea9f03c43771061bb8d5e8332fd2487e90b1438e91b08f7f6c0a2291aab626bd" spec: path "/var/lib/longhorn/" is mounted on "/" but it is not a shared mount
  Warning  Failed     82s                kubelet, k3d-k3s-default-worker-0  Error: failed to generate container "4b7af16d2811e5be8f3dd76ec2497af04f5a75ebb8b99ef6f282c5cc63de3ebc" spec: path "/var/lib/longhorn/" is mounted on "/" but it is not a shared mount
  Warning  Failed     66s                kubelet, k3d-k3s-default-worker-0  Error: failed to generate container "7d361629f5ef3bfd851fa61e8d5d40d76604eca627b61346d57396dc9fd55d32" spec: path "/var/lib/longhorn/" is mounted on "/" but it is not a shared mount
  Warning  Failed     50s                kubelet, k3d-k3s-default-worker-0  Error: failed to generate container "a5b38c84577a28d6636da1a7512a365690f80e4b5a74cc1c7e4cc6e6ffcd5c97" spec: path "/var/lib/longhorn/" is mounted on "/" but it is not a shared mount
  Warning  Failed     35s                kubelet, k3d-k3s-default-worker-0  Error: failed to generate container "6a1e3a81639c6b494be1ebfc71e8f785591b72e879ef3d59f9ca45e526587a20" spec: path "/var/lib/longhorn/" is mounted on "/" but it is not a shared mount
  Warning  Failed     18s                kubelet, k3d-k3s-default-worker-0  Error: failed to generate container "f32505d4d7907403bcd6a7482e929fc011592b12e5085ade4780e1c8f2909f78" spec: path "/var/lib/longhorn/" is mounted on "/" but it is not a shared mount
  Normal   Pulling    5s (x7 over 104s)  kubelet, k3d-k3s-default-worker-0  Pulling image "longhornio/longhorn-manager:v0.8.0"
  Normal   Pulled     3s (x7 over 85s)   kubelet, k3d-k3s-default-worker-0  Successfully pulled image "longhornio/longhorn-manager:v0.8.0"
  Warning  Failed     3s                 kubelet, k3d-k3s-default-worker-0  Error: failed to generate container "2e4bb7f592a59476546b25d1c224c8bdb6eaeb8fb2d31709769f95cb61d7c1f5" spec: path "/var/lib/longhorn/" is mounted on "/" but it is not a shared mount

Which OS & Architecture?

  • Ubuntu 18.04

Which version of k3d?

  • k3d version v1.6.0

Which version of docker?

Client:
 Version:           18.09.7
 API version:       1.39
 Go version:        go1.10.1
 Git commit:        2d0083d
 Built:             Fri Aug 16 14:20:06 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.09.7
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.1
  Git commit:       2d0083d
  Built:            Wed Aug 14 19:41:23 2019
  OS/Arch:          linux/amd64
  Experimental:     false
acefei added the bug label Mar 11, 2020
iwilltry42 self-assigned this Mar 18, 2020
iwilltry42 (Member) commented:

Hi there, thanks for opening this issue.
k3d may not be the perfect place to test Longhorn.
Your error occurs because /var/lib/longhorn is not a shared/propagated mount, but Longhorn requires a bidirectional mount. You can work around this by mounting some directory in shared mode, e.g. k3d create --enable-registry --workers 2 --auto-restart --api-port 0.0.0.0:6443 -v $HOME/tmp/longhorn:/var/lib/longhorn:shared.
However, you will run into follow-up issues… 🤔
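A quick way to see what the kubelet is complaining about is to inspect mount propagation from inside the node container. A minimal sketch, assuming a default k3d v1.x worker name (the k3s image ships a small userspace, so /proc/self/mountinfo is read instead of using findmnt):

```bash
# If no dedicated mount covers /var/lib/longhorn, the path falls under the
# container root "/", which is not a shared mount -- exactly the error above.
# With "-v ...:/var/lib/longhorn:shared" the matching line should carry a
# "shared:<N>" tag in its optional fields.
docker exec k3d-k3s-default-worker-0 \
  sh -c 'grep /var/lib/longhorn /proc/self/mountinfo || echo "no dedicated mount for /var/lib/longhorn"'
```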

iwilltry42 added the help wanted label Mar 18, 2020
JinLinGan commented May 29, 2020

@iwilltry42 I have the same problem as this issue. When I try your solution in k3d v3.0.0-beta.1 with this command:

k3d create cluster demo2 --masters 3 --workers 3 -a 0.0.0.0:6550 -p 8082:80@loadbalancer -v $HOME/tmp/longhorn:/var/lib/longhorn:share

it says:

FATA[0000] Invalid volume mount '/Users/xxx/tmp/longhorn:/var/lib/longhorn:share': only one ':' allowed

It seems that this is not allowed in v3.0.0-beta.1.
I checked the source code; the check is in volumes.go#L42-L45.

iwilltry42 (Member) commented:

Hi @JinLinGan, sorry for that. Fixed it in ae9be06
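For reference, with that fix in place the v3 command above should accept the usual Docker-style `:shared` suffix. A sketch based on the command from the previous comment (flag names as used in the v3.0.0-beta series; only the suffix changes from `:share` to `:shared`):

```bash
k3d create cluster demo2 --masters 3 --workers 3 -a 0.0.0.0:6550 \
  -p 8082:80@loadbalancer \
  -v "$HOME/tmp/longhorn:/var/lib/longhorn:shared"
```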

blaggacao commented:

Annoyingly https://cert-manager.io/docs/usage/csi/ also depends on shared mounts.

But nicely, at first sight, this seemed to work:

$ mkdir -p /tmp/k3d/kubelet/pods
$ k3d cluster create [...] --agents 2 -v /tmp/k3d/kubelet/pods:/var/lib/kubelet/pods:shared
$ kubectl apply -f "https://github.com/jetstack/cert-manager/releases/download/v0.16.1/cert-manager.yaml"
$ kubectl apply -f  "https://raw.githubusercontent.com/jetstack/cert-manager-csi/7fa27a6d05111a038fa5a21cefdcde2613f3bf4f/deploy/cert-manager-csi-driver.yaml"
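A similar propagation check can confirm the `:shared` flag took effect inside the agent nodes. A minimal sketch (node name is illustrative for a default k3d v3 cluster):

```bash
# The line covering /var/lib/kubelet/pods should carry a "shared:<N>" tag in
# its optional fields; without it, CSI drivers hit the same
# "not a shared mount" error as Longhorn above.
docker exec k3d-k3s-default-agent-0 \
  sh -c 'grep /var/lib/kubelet/pods /proc/self/mountinfo'
```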

mergify bot pushed a commit to kubeshop/botkube that referenced this issue Jun 9, 2022
##### ISSUE TYPE
 - Feature Pull Request

##### SUMMARY

- Allows providing communicator configuration via env variables
- Env variables have higher priority than config from file
- Helm chart has:
    - `extraEnv`
    - `extraVolumeMounts`
    - `extraVolumes`

Fixes #480

Related documentation: kubeshop/botkube-docs#82

##### TESTING

Unit tests prove that reading the configuration works as expected. However, below you will find an e2e tutorial.

**BotKube with Vault via CSI driver**

1. Create K8s cluster, e.g. k3s via `lima-vm`: `limactl start template://k3s`
    > **NOTE:** CSI needs to be supported; this is problematic on k3d: k3d-io/k3d#206. An alternative is to skip the CSI driver entirely and create your own volume to be mounted, e.g. with a predefined secret.
2. Install Vault:
    ```bash
    helm repo add hashicorp https://helm.releases.hashicorp.com
    helm repo update
    helm install vault hashicorp/vault \
        --set "server.dev.enabled=true" \
        --set "injector.enabled=false" \
        --set "csi.enabled=true"
    ```
3. Set Slack token:
    ```bash
    kubectl exec -it vault-0 -- /bin/sh
    ```
    ```bash
    vault kv put secret/slack token={token}
    ```
4. Configure Kubernetes authentication:
    ```bash
    vault auth enable kubernetes
    vault write auth/kubernetes/config \
        kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443"
    ```
    ```bash
    vault policy write internal-app - <<EOF
    path "secret/data/slack" {
      capabilities = ["read"]
    }
    EOF
    ```
    ```bash
    vault write auth/kubernetes/role/database \
        bound_service_account_names=botkube-sa \
        bound_service_account_namespaces=default \
        policies=internal-app \
        ttl=20m
    ```
5. Install the secrets store CSI driver:
    ```bash
    helm repo add secrets-store-csi-driver https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
    helm install csi secrets-store-csi-driver/secrets-store-csi-driver --set syncSecret.enabled=true
    ```
6. Create install parameters:
    
    ```bash
    cat > /tmp/values.yaml << ENDOFFILE
    extraObjects:
      - apiVersion: secrets-store.csi.x-k8s.io/v1
        kind: SecretProviderClass
        metadata:
          name: vault-database
        spec:
          provider: vault
          secretObjects:
            - data:
                - key: token
                  objectName: "slack-token"
              secretName: communication-slack
              type: Opaque
          parameters:
            vaultAddress: "http://vault.default:8200"
            roleName: "database"
            objects: |
              - objectName: "slack-token"
                secretPath: "secret/data/slack"
                secretKey: "token"
    
    communications:
      # Settings for Slack
      slack:
        enabled: true
        channel: 'random'
        notiftype: short
        # token - specified via env
    
    extraEnv:
      - name: COMMUNICATION_SLACK_TOKEN
        valueFrom:
          secretKeyRef:
            name: communication-slack
            key: token
    
    extraVolumeMounts:
      - name: secrets-store-inline
        mountPath: "/mnt/secrets-store"
        readOnly: true
    
    extraVolumes:
      - name: secrets-store-inline
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: "vault-database"
    image:
      registry: mszostok
      repository: botkube
      tag: env-test-v2
    ENDOFFILE
    ```
7. Checkout this PR: `gh pr checkout 601`
8. Install BotKube (a quick verification sketch follows these steps):
    ```bash
    helm install botkube -f /tmp/values.yaml ./helm/botkube
    ```
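As a quick verification, a hedged sketch: it assumes the BotKube pod has started and mounted the CSI volume, which is what triggers the Secret sync via `syncSecret.enabled=true`:

```bash
# The SecretProviderClass above syncs the Vault value into a Kubernetes Secret
# named "communication-slack" with key "token"; the output should match the
# Slack token stored in Vault under secret/slack.
kubectl get secret communication-slack -o jsonpath='{.data.token}' | base64 -d
```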