[BR] Folder is not writable by user abc on K3s using longhorn #62

Open
agustinvinao opened this issue Jan 23, 2023 · 5 comments

agustinvinao commented Jan 23, 2023

Folder is not writable by user abc on K3s using longhorn
On a k3s cluster of Raspberry Pis using SSD drives and Longhorn as the Kubernetes storage provider, any action from Sonarr that writes files fails with the error:

[Warn] SonarrErrorPipeline: Invalid request Validation failed: 
 -- Path: Folder is not writable by user abc

Checking folders and permissions, I saw that only a few folders are writable by the abc user. Which folder should Sonarr use as its root folder?

To Reproduce
Steps to reproduce the behavior:

  • Enable general and sonarr
  • Try to add a new series
  • Check the Sonarr pod logs

Expected behavior
The folder for the added series is created.

Environment:

  • K8s version: k3s latest
  • CNI Plugin:
  • CSI Type:

Additional context

I've tried to set the user and group of the Sonarr pod to 1000, but that doesn't let the pod start.
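
Roughly what I tried (a sketch; the exact fields may differ) was a pod securityContext like the one below, which presumably fails because the linuxserver.io images expect to start as root and then drop to the abc user via PUID/PGID:

spec:
  securityContext:
    # force the pod to run as UID/GID 1000 instead of letting the image start as root
    runAsUser: 1000
    runAsGroup: 1000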

Folder permissions inside the container:

drwxr-xr-x   1 root root  4096 Jan 23 23:22 .
drwxr-xr-x   1 root root  4096 Jan 23 23:22 ..
drwxr-xr-x   1 abc  abc   4096 Jan 17 13:43 app
lrwxrwxrwx   1 root root     7 Mar 16  2022 bin -> usr/bin
drwxr-xr-x   2 root root  4096 Apr 15  2020 boot
drwxr-xr-x   2 root root 12288 Aug 29 20:09 command
drwxrwxrwx   4 abc  abc   4096 Jan 23 23:23 config
drwxr-xr-x   1 abc  abc   4096 Jan 10 04:47 defaults
drwxr-xr-x   5 root root   340 Jan 23 23:22 dev
-rwxrwxr-x   1 root root  9252 Jan 10 04:47 docker-mods
-rwxrwxr-x   1 root root    33 Nov  1 14:07 donate.txt
drwxrwxrwx   4 abc  abc   4096 Jan 23 22:42 downloads
drwxrwxr-x   1 root root  4096 Jan 23 23:22 etc
drwxr-xr-x   2 root root  4096 Apr 15  2020 home
-rwxr-xr-x   1 root root   907 Aug 29 20:09 init
lrwxrwxrwx   1 root root     7 Mar 16  2022 lib -> usr/lib
drwxr-xr-x   2 root root  4096 Mar 16  2022 media
drwxr-xr-x   2 root root  4096 Mar 16  2022 mnt
drwxr-xr-x   2 root root  4096 Mar 16  2022 opt
drwxr-xr-x   6 root root  4096 Aug 29 20:09 package
dr-xr-xr-x 299 root root     0 Jan 23 23:22 proc
drwx------   2 root root  4096 Mar 16  2022 root
drwxr-xr-x   1 root root  4096 Jan 23 23:22 run
lrwxrwxrwx   1 root root     8 Mar 16  2022 sbin -> usr/sbin
drwxr-xr-x   2 root root  4096 Mar 16  2022 srv
dr-xr-xr-x  12 root root     0 Jan 23 23:22 sys
drwxrwxrwt   1 root root  4096 Jan 23 23:22 tmp
drwxrwxrwx   2 root root  4096 Jan 23 22:38 tv
drwxrwxr-x   1 root root  4096 Jan 10 04:47 usr
drwxr-xr-x   1 root root  4096 Mar 16  2022 var
kubealex (Owner) commented:

Hi @agustinvinao, thank you for reporting this. Do you mind sharing the k8s-mediaserver.yml you used for the setup? Thank you!

agustinvinao (Author) commented Jan 25, 2023

I've cloned the repo and I'm using the original yml file.

Something I've noticed is that the abc user can only use the folders it owns (app, config, defaults and downloads). I've tried setting user and group to 1000 but it doesn't work.

I'm using a separate file for the ingress; Traefik is currently handling access.

Here is my values file (I'm enabling and disabling different apps for testing):

# Default values for k8s-mediaserver.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

general:
  ingress_host: media.codespacelabs.com
  plex_ingress_host: plex.codespacelabs.com
  image_tag: latest
  #UID to run the process with
  puid: 1000
  #GID to run the process with
  pgid: 1000
  #Persistent storage selections and pathing
  storage:
    customVolume: false  #set to true if not using a PVC (must provide volume below)
    pvcName: mediaserver-pvc
    size: 50Gi
    pvcStorageClass: longhorn
    accessMode: ""
    # the path starting from the top level of the pv you're passing. If your share is server.local/share/, then tv is server.local/share/media/tv
    subPaths:
      tv: media/tv
      movies: media/movies
      downloads: downloads
      transmission: transmission
      sabnzbd: sabnzbd
      config: config
    volumes: 
      hostPath:
        path: /mnt/share
  # ingress:
  #   ingressClassName: ""

sonarr:
  enabled: true
  container:
    image: docker.io/linuxserver/sonarr
    nodeSelector: {}
    port: 8989
  service:
    type: ClusterIP
    port: 8989
    nodePort:
    extraLBService: false
    # Defines an additional LB service, requires cloud provider service or MetalLB
  ingress:
    enabled: false
    annotations: {}
    path: /sonarr
    tls:
      enabled: true
      certResolver: leresolver
      # secretName: ""
  resources: {}
  volume: {}
    # name: pvc-sonarr-config
    # storageClassName: longhorn
    # storage: 5Gi
    # accessModes: ReadWriteOnce
    #annotations:
    #  my-annotation/test: my-value
    #labels:
    #  my-label/test: my-other-value
    #selector: {}

transmission:
  enabled: true
  container:
    image: docker.io/linuxserver/transmission
    nodeSelector: {}
    port:
      utp: 9091
      peer: 51413
  service:
    utp:
      type: ClusterIP
      port: 9091
      nodePort:
      # Defines an additional LB service, requires cloud provider service or MetalLB
      extraLBService: false
    peer:
      type: ClusterIP
      port: 51413
      nodePort:
      nodePortUDP:
      # Defines an additional LB service, requires cloud provider service or MetalLB
      extraLBService: false
  ingress:
    enabled: false
    annotations: {}
    path: /transmission
    tls:
      enabled: false
      secretName: ""
  config:
    auth:
      enabled: true
      username: "admin"
      password: "Chester848"
  resources: {}
  volume:
    name: pvc-transmission-config
    storageClassName: longhorn
    storage: 5Gi
    accessModes: ReadWriteOnce
  #  annotations: {}
  #  labels: {}
  #  selector: {}

radarr:
  enabled: false
  container:
    image: docker.io/linuxserver/radarr
    nodeSelector: {}
    port: 7878
  service:
    type: ClusterIP
    port: 7878
    nodePort:
    extraLBService: false
    # Defines an additional LB service, requires cloud provider service or MetalLB
  ingress:
    enabled: false
    annotations: {}
    path: /radarr
    tls:
      enabled: false
      secretName: ""
  resources: {}
  volume:
    name: pvc-radarr-config
    storageClassName: longhorn
    storage: 5Gi
    accessModes: ReadWriteOnce
    #annotations: {}
    #labels: {}
    #selector: {}

prowlarr:
  enabled: true
  container: 
    image: docker.io/linuxserver/prowlarr
    tag: develop
    nodeSelector: {}
    port: 9696
  service:
    type: ClusterIP
    port: 9696
    nodePort: 
    extraLBService: false
  ingress:
    enabled: false
    annotations: {}
    path: /prowlarr
    tls:
      enabled: false
      secretName: ""
  resources: {}
  volume: {}
    # name: pvc-prowlarr-config
    # storageClassName: longhorn
    # storage: 5Gi
    # accessModes: ReadWriteOnce
  #  annotations: {}
  #  labels: {}
  #  selector: {}

plex:
  enabled: false
  claim: "CHANGEME"
  replicaCount: 1
  container:
    image: docker.io/linuxserver/plex
    nodeSelector: {}
    port: 32400
  service:
    type: ClusterIP
    port: 32400
    nodePort:
    # Defines an additional LB service, requires cloud provider service or MetalLB
    extraLBService: false
  ingress:
    enabled: false
    annotations: {}
    tls:
      enabled: false
      secretName: ""
  resources:
    limits:
      cpu: 100m
      memory: 100Mi
    requests:
      cpu: 100m
      memory: 100Mi
  volume:
    name: pvc-plex-config
    storageClassName: longhorn
    storage: 50Gi
    accessModes: ReadWriteOnce
  # #  annotations: {}
  # #  labels: {}
  # #  selector: {}


jackett:
  enabled: false
  container:
    image: docker.io/linuxserver/jackett
    nodeSelector: {}
    port: 9117
  service:
    type: ClusterIP
    port: 9117
    nodePort:
    extraLBService: false
    # Defines an additional LB service, requires cloud provider service or MetalLB
  ingress:
    enabled: false
    annotations: {}
    path: /jackett
    tls:
      enabled: false
      secretName: ""
  resources: {}
  volume: {}
  #  name: pvc-jackett-config
  #  storageClassName: longhorn
  #  annotations: {}
  #  labels: {}
  #  accessModes: ReadWriteOnce
  #  storage: 5Gi
  #  selector: {}

sabnzbd:
  enabled: false
  container:
    image: docker.io/linuxserver/sabnzbd
    nodeSelector: {}
    port:
      http: 8080
      https: 9090
  service:
    http:
      type: ClusterIP
      port: 8080
      nodePort:
      # Defines an additional LB service, requires cloud provider service or MetalLB
      extraLBService: false
    https:
      type: ClusterIP
      port: 9090
      nodePort:
      # Defines an additional LB service, requires cloud provider service or MetalLB
      extraLBService: false
  ingress:
    enabled: true
    annotations: {}
    path: /sabnzbd
    tls:
      enabled: false
      secretName: ""
  resources: {}
  volume: {}
  #  name: pvc-plex-config
  #  storageClassName: longhorn
  #  annotations: {}
  #  labels: {}
  #  accessModes: ReadWriteOnce
  #  storage: 5Gi
  #  selector: {}

kubealex (Owner) commented:

Thank you for the details. My question is: does Longhorn require setting fsGroup or similar to work? I've seen longhorn/longhorn#1713, which shows something similar to what you are facing.

All containers run as non-privileged users (default 1000).


emze9 commented Jul 18, 2023

Hi,
did you manage to resolve this issue? I have almost the same values file and got the same error.


davidfrickert commented Jul 29, 2023

fsGroup seems to work.
Edit the deployments in the Helm chart to include a securityContext with fsGroup, like so:

### DEPLOYMENT
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonarr
  labels:
    {{- include "k8s-mediaserver.labels" . | nindent 4 }}
spec:
  strategy:
    type: Recreate
  replicas: 1
  selector:
    matchLabels:
      {{- include "k8s-mediaserver.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "k8s-mediaserver.selectorLabels" . | nindent 8 }}
        app: sonarr
    spec:
      securityContext:
        fsGroup: {{ .Values.general.pgid }}
      initContainers:
#(....)

An alternative is to add an init container that does a chown, but from my experience that is very clunky.
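
For completeness, an init container of that kind could look roughly like this (a sketch only; the volume name, mount paths, and UID/GID are assumptions based on the values file above, not taken from the chart):

initContainers:
  - name: fix-permissions
    image: busybox:1.36
    # chown the mounted data paths to the PUID/PGID used in the values file (assumed 1000:1000 here)
    command: ["sh", "-c", "chown -R 1000:1000 /config /tv /downloads"]
    volumeMounts:
      # "mediaserver-volume" is a placeholder; use the volume name the chart actually defines
      - name: mediaserver-volume
        mountPath: /config
        subPath: config
      - name: mediaserver-volume
        mountPath: /tv
        subPath: media/tv
      - name: mediaserver-volume
        mountPath: /downloads
        subPath: downloads

The fsGroup approach avoids the extra container and the recursive chown on every pod start, which is usually why it is preferred.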
