
SQLite on Network Share #1886

Open
lokkju opened this issue May 1, 2017 · 37 comments


@lokkju

commented May 1, 2017

Sonarr currently uses WAL mode for SQLite journaling. WAL mode has some advantages, but one major disadvantage is that it cannot safely be used over non-local filesystems (https://sqlite.org/wal.html); Docker for Windows and other virtualization systems that use CIFS-mounted host paths often fail with SQLite locking or corruption errors when the SQLite file lives on a host-shared path.

Providing an option to disable WAL mode (perhaps falling back to the standard DELETE mode) would be very useful for virtualizing Sonarr, or for other cases where the config files and SQLite databases need to live on an SMB/CIFS/NFS path.

Could we have some sort of config file option or command line option that disables WAL mode journaling throughout the program?

This was already addressed for OSX in #167 - may as well make it an advanced option, so we can set it when needed, and enable it by default on OSX.
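For reference, the journal mode such an option would toggle is a per-database PRAGMA. A minimal sketch using Python's sqlite3 (illustrative only, not Sonarr's actual C# code):

```python
import os
import sqlite3
import tempfile

# Open a throwaway database and demonstrate switching journal modes.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)

# WAL mode is persistent: once set, it survives reopening the database.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # wal

# DELETE is the classic rollback-journal mode, safer on network filesystems.
mode = conn.execute("PRAGMA journal_mode=DELETE").fetchone()[0]
print(mode)  # delete

conn.close()
```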

@Taloth

Member

commented May 2, 2017

docker and other virtualization systems often fail with sqlite locking or corruption errors when using WAL with sqlite file on host shared paths.

Do you have a link for that with some evidence? I'd expect a linux docker to behave largely the same. A linux docker in a windows Hyper-V, that's different, since it's essentially a network share unless you mount a data container or MobyLinux mount instead of a Windows host mount/share.
I'm not saying it can't go wrong, docker is a special animal, but I need more info to be able to determine the best course of action.

sqlite databases need to live on a SMB/CIFS/NFS path.

It shouldn't, like never. Even the other synchronization modes aren't reliable over networks, and it's horrible for performance too.

This was already addressed for OSX in #167 - may as well just make it an option.

Euh, no, rule 1 of the Fool Proof Handbook is to never add an option that the user must set to avoid breaking stuff. Either detect the edge-case and deal with it automatically, or throw a big fat warning saying it's unsupported. 😄
For example, we might be able to detect if the db is on a known reliable fs and use wal in those cases. Or detect it's on a network or cloud drive and simply refuse to start. Or inside a docker, and force non-wal mode. Things like that.

But as I said, need more info.... please
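The detect-and-decide approach described above could look roughly like this (a hypothetical Python sketch, not Sonarr's code; Sonarr is C#/mono, and the fs-type list and function names here are illustrative):

```python
import os

# Filesystem types generally considered unsafe for SQLite WAL mode
# (network or network-like; this list is illustrative, not exhaustive).
NETWORK_FS = {"nfs", "nfs4", "cifs", "smbfs", "9p", "fuse.glusterfs"}

def fs_type(path):
    """Best-effort fs type of the mount containing path, via /proc/mounts (Linux)."""
    path = os.path.realpath(path)
    best_mnt, best_type = "", "unknown"
    try:
        with open("/proc/mounts") as f:
            for line in f:
                fields = line.split()
                if len(fields) < 3:
                    continue
                mnt, fstype = fields[1], fields[2]
                # Longest mount point that is a prefix of the path wins.
                if path == mnt or path.startswith(mnt.rstrip("/") + "/"):
                    if len(mnt) >= len(best_mnt):
                        best_mnt, best_type = mnt, fstype
    except OSError:
        pass  # not Linux, or /proc unavailable
    return best_type

def journal_mode_for(path):
    """Err on the side of caution: WAL only when the fs does not look remote."""
    return "DELETE" if fs_type(path) in NETWORK_FS else "WAL"
```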

@lokkju

Author

commented May 2, 2017

Updated the description a bit.

To be clear, as the title of the issue said, I'm only discussing docker for windows; which uses CIFS/SMB to mount host paths. It mounts them with the "nobrl" option, which causes lock requests not to be sent to the server (docker/for-win#11). This is unique to docker for windows, though similar problems arise on docker for osx.

If your solution is that network paths are not supported for the database files, then that's fine; it just means that anyone using docker for windows will have massive problems. Perhaps a startup warning that the appdata filesystem must be local would be nice.

I agree that requiring the user to add an option for normal behavior is bad; the flip side of that rule is that anything you set automatically should be able to be overridden by the user. Go ahead and set it on OSX; but let the end user override it if they want. I don't think application code should know about every edge case; that's what configuration files or advanced command line options are for.

At any rate, there are various complaints of Sonarr (and Radarr, and Plex) not working right/at all/being corrupted on docker for windows; CIFS is, I believe, the root cause.

@Taloth

Member

commented May 2, 2017

If your solution is that network paths are not supported for the database files, then that's fine; ...; and perhaps a startup warning that the appdata filesystem must be local would be nice.

That's my preferred solution for network shares, coz it's just inviting disaster regardless of sync mode.
For Docker for Windows I'd just recommend not mounting on the Windows host, but mounting on the moby VM or using a data volume or data container (volumes_from). I think we might be able to detect that scenario and force non-WAL mode, but I'll have to do some testing of /proc/mounts to see how the volume is mounted.
Tnx for the info. btw: Docker for Windows via Hyper-V (Win10) or VirtualBox (Win7)?

I agree that requiring the user to add an option for normal behavior is bad; the flip side of that rule is that anything you set automatically should be able to be overridden by the user. Go ahead and set it on OSX; but let the end user override it if they want. I don't think application code should know about every edge case; that's what configuration files or advanced command line options are for.

In our experience you shouldn't. Yes, advanced users are quite capable of making those decisions. But Sonarr isn't intended for advanced users and any option is likely to be abused/misused (we have empirical evidence on that, and dozens of wasted support hours to drive the point home). So any (hidden/config-file only) option should be carefully considered, and avoided as much as possible. There usually is a better solution.
As long as the average user doesn't even bother reading the info tooltips in the UI... sigh I digress.
I'd argue that Sonarr should err on the side of caution and only use WAL in cases where it knows it will work. Then no option is needed. We just need to find out if that's feasible.

@lokkju

Author

commented May 2, 2017

My understanding of the terminology is that Docker for Windows uses Hyper-V (Windows 10 only), while Docker Toolbox for Windows uses VirtualBox (Windows 7+). In this issue, I'm discussing Hyper-V with MobyLinuxVM; the host paths are from the parent Windows host. Using a config path in the MobyLinuxVM is an option, but there is no easy way to tell docker to do that; and afaik, all docker volume drivers on windows will use CIFS as well.

I'm a long-time backend services coder, so I don't think so well about normal-user usability issues *grin*. That said, taking a progressive-enhancement approach to WAL mode might be better: by default, use the most compatible journaling mode (DELETE, iirc); if a supported filesystem is detected, enable WAL mode. That would allow usage on unknown filesystems without code changes.

Still, I agree using sqlite on what is essentially a network filesystem is a bad idea; I just don't know of a better solution. The only other options I can think of involve rsync/unison with inotify, and that has its own problems.

Really, though, this isn't a Sonarr problem; it's a Docker for Windows problem, that they made worse by disabling file locking.

@Grimeton


commented May 11, 2017

Hi,

look at this config file. Maybe it's worth a shot: https://system.data.sqlite.org/index.html/artifact?ci=trunk&filename=System.Data.SQLite/Configurations/System.Data.SQLite.dll.config

And just for information: If docker is running on top of a Linux system (virtualized in Hyper-V or not), path mapping works as expected and the database works as expected.

I'm running a Linux VM inside Hyper-V that contains a docker environment containing Sonarr. The storage backend is LVM and the config and data paths are mapped into the container. Works.

Have a look at create_container():

# cat /etc/docker/containers/sonarr.on
container_name="sonarr"
container_hostname="$container_name"
container_image="linuxserver/sonarr"
container_update_auto=1

function stop_container {
    docker stop "$container_name"
}
function start_container {
    docker start "$container_name";
}

function delete_container {
    docker rm "$container_name"
}
function create_container {

    docker create \
        -e PUID=1002 \
        -e PGID=1006 \
        --hostname "$container_hostname" \
        --ip 10.1.1.3 \
        --name "$container_name" \
        --net vidnet \
        --restart always \
        -v /etc/ssl/certs:/etc/ssl/certs:ro \
        -v /dev/rtc:/dev/rtc:ro \
        -v /srv/data/sonarr/config:/config \
        -v /srv/data/sabnzbd/downloads:/downloads \
        -v /srv/data/sonarr/pickup:/pickup \
        -v /srv/data/sonarr/recycle:/recycle \
        -v /srv/videos/tv:/tv \
        "$container_image"
}


function canbestopped_container {
        return 0;
}

And then in /srv/data/sonarr/config:

# ls {logs,nzbdrone}.*
logs.db  logs.db-shm  logs.db-wal  nzbdrone.db  nzbdrone.db-shm  nzbdrone.db-wal

Cu

@lokkju

Author

commented May 11, 2017

@Grimeton - of course it does. We're specifically discussing Docker for Windows, which uses MobyLinuxVM running on Hyper-V, with paths on the Windows host. In this (standard) configuration, any paths on the Windows host are mounted via SMB/CIFS.

@Grimeton


commented May 12, 2017

@lokkju Yeah the question came up, so I clarified it.

@trunet


commented Jul 18, 2017

I have an armv7 docker swarm cluster, running a sonarr container among a lot of other things. This cluster has a glusterfs server on all the nodes, set up for replication. I mount it locally on all nodes using the glusterfs FUSE filesystem to localhost. In short, I have a local filesystem on all the nodes with the same data.

This works for everything but sonarr, which corrupts the sqlite3 database on average once every two days.

My workaround is to back up the database (.dump) every hour. If the database corrupts, it automatically removes all sqlite databases and restores a new one from the last working dump.

Would be nice to have a "use WAL" option or something like that in the configuration to get rid of this. Or support for external relational databases (mysql, postgres, ...). I think external databases would be a lot of work, mainly because of version migrations, advanced selects, and so on, but the option to use WAL or not should be simple to add.
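The hourly-dump workaround can also be done with SQLite's online backup API, which takes a consistent snapshot even while the database is in use. A sketch in Python (the function name and paths are mine, not part of any Sonarr tooling):

```python
import sqlite3

def snapshot(src_path, dest_path):
    """Copy a live SQLite database to dest_path using the online backup API."""
    src = sqlite3.connect(src_path)
    dest = sqlite3.connect(dest_path)
    try:
        with dest:
            src.backup(dest)  # stays consistent even while src is being written
    finally:
        src.close()
        dest.close()

# Example (hypothetical paths):
# snapshot("/config/nzbdrone.db", "/backups/nzbdrone.db")
```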

@Andy2244


commented May 9, 2018

Just an update: the latest way to use docker on windows is LCOW, which uses "linuxkit" running inside Hyper-V. It seems they now use 9p to share the volumes, which also results in a lot of errors and makes any container that uses sqlite in WAL mode unusable, or any locking operation for that matter.

I got the answer in #1385 that this is a Microsoft problem. It's still crazy to me that after all these years docker can't handle sqlite + WAL on windows. It's a real shame, since LCOW works great otherwise and is a huge improvement over the old mode and Docker Toolbox.

@fergalmoran


commented Aug 29, 2018

This isn't a Docker/Windows/CIFS issue. I get the same behaviour on Docker Swarm on Ubuntu using NFS. Oddly, this worked fine with Kubernetes even though the NFS server was the same.

@someCynic


commented Sep 17, 2018

As others have mentioned, there actually are valid scenarios in which you may have to mount configuration and the database from a network share.
I also run Sonarr within Docker Swarm (with only one replica), and it is quite common for the container to be moved from one node to another when the original one goes offline or while re-balancing load. Local storage isn't an option in this scenario.
For it to work on CIFS, the nobrl option was necessary, and when using NFS, it is very common for background tasks to throw the "The database is locked" error in the logs. Fortunately I haven't seen database corruption yet.
It clearly wasn't built with the intention to be deployed in this manner, so I'm quite surprised that it actually works and performance isn't bad at all. But the database locking is indeed still an issue, and at some point I assume I'll start seeing database corruption.
Given that supporting remote databases would be a huge rewrite, it seems the WAL setting might be an interesting workaround.
It would be great if Sonarr could determine what it should use on its own, but it might be a bit difficult. I mount the network shares on the hosts and use docker bind volumes, so Sonarr would just see it as any other volume. Maybe it could try to execute one of the commands that cause locking problems and suggest changing the setting, or something like that.
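A cheap version of that "probe and warn" idea is running SQLite's quick_check on startup; a sketch (helper name is mine, and this catches corruption rather than locking problems specifically):

```python
import sqlite3

def db_healthy(path):
    """Return True if SQLite's PRAGMA quick_check passes for the database at path."""
    try:
        conn = sqlite3.connect(path)
        try:
            row = conn.execute("PRAGMA quick_check").fetchone()
            return row is not None and row[0] == "ok"
        finally:
            conn.close()
    except sqlite3.DatabaseError:
        # e.g. "file is not a database" for a mangled file
        return False
```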

@yamlCase


commented Nov 1, 2018

This isn't a Windows only issue. I get the same errors on Rancher 2.1 Kubernetes, using NFS Persistent Volume to a ReadyNAS NFSv4.

My research shows it is a known issue with sqlite not playing nicely with NFS locking, and the answer might be to allow nolock as an option.

@markus101 markus101 changed the title Various SQLite issues when using with Docker for Windows SQLite on Network Share Nov 4, 2018

@onedr0p


commented Nov 24, 2018

I'd like to chime in too with this problem. I use a Docker container for Sonarr, and it only happens when I use NFS as the datastore. This would be great to get working, as others also want their persistent data stored on an NFS server. The nolock mount option does nothing in my case. Sonarr appears to be functioning just fine despite these System.Data.SQLite.SQLiteException (0x80004005): database is locked errors, but I could see it leading to greater problems.

My NFS mount options are:

[Mount]
What=freenas.lan:/mnt/Pergamum/Docker
Where=/nas/freenas.lan/Docker
Type=nfs4
Options=noatime,nolock,soft,rsize=32768,wsize=32768,timeo=900,retrans=5,_netdev

Logs:
https://pastebin.com/rXZR7yxq

@yamlCase


commented Nov 24, 2018

The nolock mount option does nothing in my case.

Oh well, it was a stab in the dark and thanks for eliminating that.

@onedr0p


commented Nov 24, 2018

I would love to have the priority bumped on this issue. Out of 30 containers, Lidarr, Radarr & Sonarr are the only applications I run that cannot use NFS for application data. :(

For now I have just stored their application data to the VM instead of my NFS share.

@fergalmoran


commented Nov 27, 2018

Can confirm - this is still an issue in the Sonarrv3 previews.
REALLY sucky...

@Xaelias


commented Feb 4, 2019

Just wanted to give my $.02. I have the same issue. Trying to run sonarr in a kubernetes cluster has been... painful.

I ended up having a container first grab a copy of the data from the nfs share and put it on a local share, then start sonarr, and have another container copy it back to the nfs share for a somewhat reliable backup.

It's gross. The db is going to get corrupted someday because the container is gonna crash in the middle of the transfer. And while it's in no way sonarr's fault that sqlite is garbage over network share... It would be really nice to have a fix, or be able to run against mysql/postgres/...

And yeah like mentioned before, the nolock option doesn't solve that issue.

@pryorda


commented Mar 12, 2019

Looks like it does this with sqlite on nfs for me too. nolock did not fix the issue. @Xaelias I will probably do the same thing as you.

@onedr0p


commented Mar 24, 2019

@markus101 @Taloth would it be possible to include a startup argument to disable WAL? Seeing that it's disabled on OSX, it would be nice to make it configurable for people that use NFS shares.

@tscibilia


commented Apr 15, 2019

As linked in the comment above, I'm getting 'database disk image is malformed' errors; I have my persistent docker storage mounted on a glusterfs share. I tried using the local disk as suggested by @markus101 and I'm not getting the errors anymore, but I really want the safety and redundancy of the glusterfs setup I painstakingly built.

Can't we opt for a separate mariadb or postgres db instead of sqlite?

@yacn


commented Apr 21, 2019

FYI to everyone saying "nolock didn't work": the option @yamlCase mentioned/linked is a SQLite option used when opening the database file, not an NFS mount option.
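For clarity, that SQLite-side nolock option is passed as a URI query parameter at open time. A sketch with Python's sqlite3 (path is illustrative; note nolock skips SQLite's file locking entirely, which is only safe when a single process ever touches the database):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# nolock=1 in the URI tells SQLite not to take file locks at all.
conn = sqlite3.connect("file:%s?nolock=1" % path, uri=True)
conn.execute("CREATE TABLE IF NOT EXISTS t (x)")
conn.commit()
conn.close()
```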

@benfff85


commented May 5, 2019

+1 would love to see an enhancement to address this.

Grafana and Sonarqube have the option to connect to external persistent data stores such as MySQL. It would be wonderful to see something similar with Sonarr.

@lokkju

Author

commented May 25, 2019

This is still a headache for me. If a config flag to disable WAL mode isn't an option, how about just an environment variable that advanced users can set?

@Andy2244


commented Jun 6, 2019

Retested this with all the latest docker/LCOW/windows stuff and I still get disk I/O error and NzbDrone.Core.Datastore.CorruptDatabaseException.

Docker version master-dockerproject-2019-06-05, build c02f389c
Kernel Version: 10.0 18362 (18362.1.amd64fre.19h1_release.190318-1202)
Operating System: Windows 10 Pro Version 1903 (OS Build 18362.145)
4.19.27-linuxkit

Seems the 9p filesystem still lacks compatible locking options, and many Linux containers won't run correctly via LCOW; see: linux-containers

@ggzengel


commented Jun 16, 2019

connectionBuilder.JournalMode = OsInfo.IsOsx ? SQLiteJournalModeEnum.Truncate : SQLiteJournalModeEnum.Wal;

If it works with OSX, why couldn't it be used with Linux?

@onedr0p


commented Jun 17, 2019

@ggzengel that is exactly what I asked. There should be a startup parameter that disables WAL.

@namebrandon


commented Jun 20, 2019

Hitting the same issues here with config directories hosted on NFS and mounted to a container running via rancher2/k8s.

System.Data.SQLite.SQLiteException (0x80004005): database is locked
database is locked
  at System.Data.SQLite.SQLite3.Step (System.Data.SQLite.SQLiteStatement stmt) [0x00088] in <61a20cde294d4a3eb43b9d9f6284613b>:0
  at System.Data.SQLite.SQLiteDataReader.NextResult () [0x0016b] in <61a20cde294d4a3eb43b9d9f6284613b>:0
  at System.Data.SQLite.SQLiteDataReader..ctor (System.Data.SQLite.SQLiteCommand cmd, System.Data.CommandBehavior behave) [0x00090] in <61a20cde294d4a3eb43b9d9f6284613b>:0
  at (wrapper remoting-invoke-with-check) System.Data.SQLite.SQLiteDataReader..ctor(System.Data.SQLite.SQLiteCommand,System.Data.CommandBehavior)
  at System.Data.SQLite.SQLiteCommand.ExecuteReader (System.Data.CommandBehavior behavior) [0x0000c] in <61a20cde294d4a3eb43b9d9f6284613b>:0
  at System.Data.SQLite.SQLiteCommand.ExecuteNonQuery (System.Data.CommandBehavior behavior) [0x00006] in <61a20cde294d4a3eb43b9d9f6284613b>:0
  at System.Data.SQLite.SQLiteCommand.ExecuteNonQuery () [0x00006] in <61a20cde294d4a3eb43b9d9f6284613b>:0
  at Marr.Data.QGen.UpdateQueryBuilder`1[T].Execute () [0x0003b] in C:\BuildAgent\work\5d7581516c0ee5b3\src\Marr.Data\QGen\UpdateQueryBuilder.cs:157
  at Marr.Data.DataMapper.Update[T] (T entity, System.Linq.Expressions.Expression`1[TDelegate] filter) [0x00000] in C:\BuildAgent\work\5d7581516c0ee5b3\src\Marr.Data\DataMapper.cs:674
  at NzbDrone.Core.Datastore.BasicRepository`1[TModel].Update (TModel model) [0x0002a] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Core\Datastore\BasicRepository.cs:125
  at NzbDrone.Core.Tv.SeriesService.UpdateSeries (NzbDrone.Core.Tv.Series series, System.Boolean updateEpisodesToMatchSeason) [0x000a9] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Core\Tv\SeriesService.cs:160
  at NzbDrone.Core.Tv.RefreshSeriesService.RefreshSeriesInfo (NzbDrone.Core.Tv.Series series) [0x00213] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Core\Tv\RefreshSeriesService.cs:110
  at NzbDrone.Core.Tv.RefreshSeriesService.Execute (NzbDrone.Core.Tv.Commands.RefreshSeriesCommand message) [0x00072] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Core\Tv\RefreshSeriesService.cs:175
  at NzbDrone.Core.Messaging.Commands.CommandExecutor.ExecuteCommand[TCommand] (TCommand command, NzbDrone.Core.Messaging.Commands.CommandModel commandModel) [0x000f6] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Core\Messaging\Commands\CommandExecutor.cs:95
  at (wrapper dynamic-method) System.Object.CallSite.Target(System.Runtime.CompilerServices.Closure,System.Runtime.CompilerServices.CallSite,NzbDrone.Core.Messaging.Commands.CommandExecutor,object,NzbDrone.Core.Messaging.Commands.CommandModel)
  at System.Dynamic.UpdateDelegates.UpdateAndExecuteVoid3[T0,T1,T2] (System.Runtime.CompilerServices.CallSite site, T0 arg0, T1 arg1, T2 arg2) [0x00035] in <35ad2ebb203f4577b22a9d30eca3ec1f>:0
  at (wrapper dynamic-method) System.Object.CallSite.Target(System.Runtime.CompilerServices.Closure,System.Runtime.CompilerServices.CallSite,NzbDrone.Core.Messaging.Commands.CommandExecutor,object,NzbDrone.Core.Messaging.Commands.CommandModel)
  at NzbDrone.Core.Messaging.Commands.CommandExecutor.ExecuteCommands () [0x00027] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Core\Messaging\Commands\CommandExecutor.cs:41

@onedr0p


commented Jun 24, 2019

@markus101 @Taloth would you be open to an MR that adds a command-line argument to disable WAL? It's only a workaround, but it's honestly all we have besides adding a way to connect to an external DB.

@Taloth

Member

commented Jun 24, 2019

@onedr0p Preferably not; as mentioned before, it's a nice workaround for advanced users, but you don't want users to have to actively configure something for it to work, if it can be helped at all. (The irony of that statement doesn't escape me with respect to how long this issue has been open.)
I'd prefer it to work in reverse: use WAL only if it's on a local drive.
I can whip something up on a v3 feature branch, but that will have to be tested on various setups.
In fact, if you wish, you can try to make the necessary change yourself: run IDiskProvider.GetMount(...) on the appdata dir during the ConnectionStringFactory call; it should contain the necessary info to determine whether the appdata dir is a local drive and use WAL/journal accordingly. Getting an IDiskProvider instance is possible by adding it to the ConnectionStringFactory constructor.

@Andy2244


commented Jun 24, 2019

I'd prefer it to work in reverse: use WAL only if it's on a local drive.

The issue has nothing to do with local vs. network as such; the underlying problem is locking and other FS features, which are not implemented on some filesystems and work in limited or surprising ways on others. As noted, we get similar errors with a 9P "local" filesystem on Docker for Windows mounts. So if WAL needs certain FS features to work correctly, then it needs to probe for those specific features on the filesystem itself.
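Probing the filesystem's actual locking behavior, as suggested, might look like this (a hypothetical Python sketch using POSIX fcntl locks; Unix-only, and the function name is mine). Note that some mounts, e.g. CIFS with nobrl, make locks *appear* to succeed locally, so a probe like this is necessary but not sufficient:

```python
import fcntl
import os

def byte_range_locks_work(directory):
    """Try to take and release an exclusive fcntl lock on a probe file."""
    probe = os.path.join(directory, ".sqlite-lock-probe")
    try:
        fd = os.open(probe, os.O_CREAT | os.O_RDWR, 0o600)
    except OSError:
        return False
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        fcntl.lockf(fd, fcntl.LOCK_UN)
        return True
    except OSError:
        return False
    finally:
        os.close(fd)
        try:
            os.unlink(probe)
        except OSError:
            pass
```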

@onedr0p onedr0p referenced this issue Jun 24, 2019
@onedr0p


commented Jun 24, 2019

@Taloth, like @Andy2244 said, it is about much more than network filesystems, but that is what most people are struggling with, including me.

I have opened PR #3180 to add a startup arg to disable WAL. I have tested it locally and it should work :)

@Taloth

Member

commented Jun 24, 2019

then it needs to probe for the specific features on the FS itself.

That's a valid point. It might be doable, but I'm not sure we can do that properly for all supported platforms; it's worth an attempt.
Although I have to note that 9P is a network filesystem protocol, not a local filesystem. I'm not sure whether it's detected as such by mono and/or /proc/mounts, but that we can deal with.
Moby used to connect to the Windows host via CIFS, also a network filesystem protocol, and that is already detected as such by Sonarr.

@onedr0p


commented Jun 24, 2019

PR updated to #3183

@putty182


commented Jul 9, 2019

Jumping in to share my shitty hack workaround for Kubernetes users that rely on NFS for persistence: use a sidecar container, mount an ext4 image file backed by a local disk or RAM, and fsfreeze it every 15 minutes to copy a snapshot of Sonarr's database files.

YMMV, but this seems to work because sqlite still makes atomic writes to disk, even in WAL mode. Using fsfreeze on a real filesystem like ext4 prevents Sonarr from writing further changes until you've finished copying them off to NFS storage.

Tradeoff is that you might lose the last 15 minutes of activity if there's an unexpected outage.

Show YAML
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: sonarr
  name: sonarr
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonarr
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: sonarr
    spec:
      containers:
      - command:
        - sh
        - -c
        - |-
          dd if=/dev/zero of=/ramdisk/image.ext4 count=0 bs=1 seek=400M; \
          mkfs.ext4 /ramdisk/image.ext4; \
          mount -o loop /ramdisk/image.ext4 /mnt/sonarr-ramdisk-mount; \
          cp -fvp /sonarr-config/*.* /mnt/sonarr-ramdisk-mount; \
          while true; do \
            sleep 890; \
            sync /mnt/sonarr-ramdisk-mount/*.*; \
            fsfreeze --freeze /mnt/sonarr-ramdisk-mount; \
            sleep 10; \
            cp -fvp /mnt/sonarr-ramdisk-mount/*.* /sonarr-config/; \
            fsfreeze --unfreeze /mnt/sonarr-ramdisk-mount; \
          done;
        image: ubuntu
        imagePullPolicy: Always
        lifecycle:
          preStop:
            exec:
              command:
              - sh
              - -c
              - umount /mnt/sonarr-ramdisk-mount
        name: sonarr-config
        resources: {}
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /ramdisk
          name: ramdisk
        - mountPath: /sonarr-config
          name: sonarr-config
        - mountPath: /mnt/sonarr-ramdisk-mount
          mountPropagation: Bidirectional
          name: sonarr-ramdisk-mount
      - command:
        - sh
        - -c
        - until [ -f "/config/sonarr.db" ]; do sleep 1; done; /init
        env:
        - name: PUID
          value: "1000"
        - name: PGID
          value: "1000"
        image: linuxserver/sonarr:preview
        imagePullPolicy: IfNotPresent
        name: sonarr
        ports:
        - containerPort: 8989
          name: sonarr
          protocol: TCP
        readinessProbe:
          tcpSocket:
            port: 8989
        resources: {}
        volumeMounts:
        - mountPath: /config
          mountPropagation: HostToContainer
          name: sonarr-ramdisk-mount
        - mountPath: /config/Backups
          name: sonarr-config
          subPath: Backups
        - mountPath: /config/MediaCover
          name: sonarr-config
          subPath: MediaCover
        - mountPath: /config/logs
          name: sonarr-config
          subPath: logs
        - mountPath: /config/xdg
          name: sonarr-config
          subPath: xdg
        - mountPath: /media
          mountPropagation: HostToContainer
          name: media
      volumes:
      - emptyDir:
          medium: Memory
          sizeLimit: 400M
        name: ramdisk
      - name: sonarr-config
        persistentVolumeClaim:
          claimName: sonarr-config
      - emptyDir: {}
        name: sonarr-ramdisk-mount
      - hostPath:
          path: /tmp/media
          type: Directory
        name: media
@SerialVelocity


commented Aug 30, 2019

A few tips for those on network filesystems:

  • NFS
    • Make sure you are using a very recent NFS server and NFS client
    • Make sure you are on a recent version of the linux kernel (afaik, 3.12+ should be ok which is still pretty old)
    • If you are using NFSv3, make sure you have rpcbind, lockd, and rpc.statd daemons running on every NFS client and server
    • If possible, use NFSv4. If an NFSv3 client can't connect to lockd, locks won't work. The filesystem can still mount though!
    • If you use Sonarr only on a single host and it will never move, try local_lock=all.
      • If you use Kubernetes, you cannot use a deployment. It must be a statefulset. Even then, the database can become corrupted if the host/pod is marked abandoned and a replacement is started. A standard eviction/host restart should be fine though.
    • Turn off all caching, e.g. lookupcache=none, noac, sync, sharecache, forcedirectio
    • Note: I do not use NFS so there may be more things to consider
  • GlusterFS
    • Make sure you are on at least 3.8.
    • Make sure you are using FUSE on a recent kernel (2.x kernels will not work; for 3.x I'm not sure what the minimum is)
    • Locks should "just work" with the above. If they don't, you have set a volume option which is not compatible.
    • Note: There are bugs like https://bugzilla.redhat.com/show_bug.cgi?id=1397085 where Sonarr may hang but it should not cause corruption
    • Note: There is no lock healing in GlusterFS. If your network connection is bad, locks will be lost!

Note: This probably won't apply to windows as locks are completely broken there

Hope this helps!

@btowntkd


commented Sep 16, 2019

Adding yet another voice to the chorus of people here; we definitely need a workaround for filesystems which don't support WAL.

@onedr0p


commented Sep 17, 2019

@btowntkd the issue we found was that disabling WAL causes major lag when using a large collection of series. I still want this problem to be solved don't get me wrong. It seems like SQLite and Sonarr don't play well for our use case. :(
