Migrating Jellyfin into the k8s Cluster #325

Following up on "migrating the docker Jellyfin to a brand-new machine", the next step is to move it into k8s.

First, here is the current docker compose configuration:

version: '3'
services:
  jellyfin:
    image: jellyfin/jellyfin:10.8.5
    # user: 1000:1000
    user: 0:0
    restart: unless-stopped
    network_mode: host
    # entrypoint: ./jellyfin/jellyfin --datadir /config --cachedir /cache --ffmpeg /usr/local/bin/ffmpeg
    volumes:
      - /home/ziyuan/jellyfin/config:/config
      - /mnt/media:/media
      - /home/ziyuan/jellyfin/cache:/cache
    environment:
      - JELLYFIN_PublishedServerUrl=media.example.com

This compose file does a few things:

  1. Maps the local /home/ziyuan/jellyfin/config to /config
  2. Maps the local /home/ziyuan/jellyfin/cache to /cache
  3. Maps /mnt/media, a remotely mounted SMB share, to /media
  4. Sets the environment variables
  5. Sets the uid and gid
  6. Sets the network mode; I used host mode here purely as a shortcut at the time (see the sketch after this list)
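
Not something the manifests below actually use: point 6's host networking has a literal k8s counterpart in the pod spec's hostNetwork field, although the translation below switches to a ClusterIP Service instead. A minimal sketch (the pod name is hypothetical, for illustration only):

apiVersion: v1
kind: Pod
metadata:
  name: jellyfin-hostnet   # hypothetical name, illustration only
spec:
  hostNetwork: true        # counterpart of docker compose's network_mode: host
  containers:
  - name: jellyfin
    image: jellyfin/jellyfin:10.8.5
    # with hostNetwork the container binds port 8096 directly on the node,
    # so no Service port mapping is strictly required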

Now let's translate this into k8s YAML:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: jellyfin-config-local-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: jellyfin-config-local-storage
  local:
    path: /home/ziyuan/jellyfin/config
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - pve-ubuntu

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: jellyfin-config-local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jellyfin-cache-local-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: jellyfin-cache-local-storage
  local:
    path: /home/ziyuan/jellyfin/cache
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - pve-ubuntu

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: jellyfin-cache-local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jellyfin-config-pvc-local
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: jellyfin-config-local-storage

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jellyfin-cache-pvc-local
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: jellyfin-cache-local-storage

---
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: smb.csi.k8s.io
  name: jellyfin-media-pv-smb
spec:
  capacity:
    storage: 1Ti
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: smb
  mountOptions:
  - dir_mode=0777
  - file_mode=0777
  csi:
    driver: smb.csi.k8s.io
    volumeHandle: 192.168.31.20/media##
    volumeAttributes:
      source: //192.168.31.20/media
    nodeStageSecretRef:
      name: jellyfin-smb
      namespace: default
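
---
# Not part of the original listing: the SMB PV above references a
# nodeStageSecretRef named "jellyfin-smb", which is assumed to already exist.
# With csi-driver-smb that secret typically carries the share credentials
# under the "username" and "password" keys. A minimal sketch with placeholder
# values:
apiVersion: v1
kind: Secret
metadata:
  name: jellyfin-smb
  namespace: default
type: Opaque
stringData:
  username: smb-user       # placeholder
  password: smb-password   # placeholder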

---
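# Note: the next two PVCs (jellyfin-config-pvc and jellyfin-cache-pvc) point at
# SMB PVs (jellyfin-config-pv-smb and jellyfin-cache-pv-smb) that are not
# defined in this listing and are not mounted by the Deployment below; they are
# leftovers from the attempt, described at the end of this issue, to keep
# /config and /cache on the NAS as well.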
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jellyfin-config-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: jellyfin-config-pv-smb
  storageClassName: smb

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jellyfin-cache-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 300Mi
  volumeName: jellyfin-cache-pv-smb
  storageClassName: smb

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jellyfin-media-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Ti
  volumeName: jellyfin-media-pv-smb
  storageClassName: smb

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jellyfin
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      name: jellyfin
  template:
    metadata:
      labels:
        name: jellyfin
    spec:
      nodeSelector:
        type: local
      containers:
      - name: jellyfin
        image: jellyfin/jellyfin:10.8.5
        ports:
        - containerPort: 8096
        volumeMounts:
        - name: config
          mountPath: /config
        - name: cache
          mountPath: /cache
        - name: media-smb
          mountPath: /media
        # resources:
        #   limits:
        #     memory: 2Gi
        #     cpu: "10"
        securityContext:
          runAsUser: 0
          runAsGroup: 0
        env:
        - name: JELLYFIN_PublishedServerUrl
          value: "media.example.com"
      volumes:
      - name: config
        persistentVolumeClaim:
          claimName: jellyfin-config-pvc-local
      - name: cache
        persistentVolumeClaim:
          claimName: jellyfin-cache-pvc-local
      - name: media-smb
        persistentVolumeClaim:
          claimName: jellyfin-media-pvc

---
apiVersion: v1
kind: Service
metadata:
  name: jellyfin
spec:
  type: ClusterIP
  clusterIP: 10.43.235.120
  selector:
    name: jellyfin
  ports:
  - name: web
    port: 8096
    targetPort: 8096
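
An aside that is not in the original manifests: the Service above is a plain ClusterIP, so something still has to route outside traffic to it. A minimal Ingress sketch, assuming an Ingress controller is already installed and reusing the media.example.com hostname from JELLYFIN_PublishedServerUrl:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jellyfin            # hypothetical, illustration only
  namespace: default
spec:
  rules:
  - host: media.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jellyfin  # the Service defined above
            port:
              number: 8096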

I had originally wanted to mount the /config and /cache directories onto the highly available NAS as well, but it turns out Jellyfin does not support keeping the /config directory on SMB; doing so makes Jellyfin fail after startup with:

Busy: SQLitePCL.pretty.SQLiteException: database is locked

According to the official discussion, this is a SQLite limitation; it will only be resolved once they finish migrating all SQL to an ORM and add support for other databases.

jellyfin/jellyfin#9184
