ZFS still writes to the HDD vdev while the dataset is configured to store only on the special vdev for Docker #17254
-
Interesting question. I can only guess at a few things. First, reducing recordsize may be ineffective for data that is already written, since it only affects files written after the change. Second, if the workload includes any sync writes, then in the absence of a SLOG they will go to the embedded log (ZIL) on the normal vdevs. That area might benefit from some attention.
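A quick way to check for sync-write activity and to see what adding a dedicated log vdev would look like (pool/dataset names and device paths below are placeholders, not the actual system):

```sh
# Check whether sync writes are in play on the Docker dataset
# (placeholder names; "sync" and "logbias" are the relevant properties).
zfs get sync,logbias tank/docker-root

# A small mirrored SLOG keeps the ZIL off the normal (HDD) vdevs:
# zpool add tank log mirror /dev/disk/by-id/nvme-A-part1 /dev/disk/by-id/nvme-B-part1
```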
-
My system is also showing this behavior, with a very similar setup. I migrated from a separate mirrored hard drive pool (2x12TB) and a mirrored NVMe pool (2x1.6TB) to a single pool: hard drives with the SSDs as a special vdev.
The system:
ZFS version:
Pool layout:
Atime, xattr, and record size settings for ix-apps (automatically created by TrueNAS) and data/docker (manually created), where most of the data used by the Docker containers lives:
zpool list output:
The space used on the special vdev is within expectations, as is the usage on the standard hard drive mirror.
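For reference, the per-vdev allocation and the dataset properties in question can be checked like this (pool/dataset names are placeholders):

```sh
# Per-vdev capacity and allocation, including the special mirror.
zpool list -v tank

# Properties that control small-block placement on the special vdev.
zfs get atime,xattr,recordsize,special_small_blocks tank/ix-apps tank/data/docker
```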
-
I'm facing a strange behaviour when using Docker together with ZFS.
I have one pool with 2x10TB HDDs as a mirror vdev and 2x500GB SSDs as a mirror for the special vdev (to store metadata and datasets with small config files, etc.).
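Roughly, the layout corresponds to something like this (device paths are placeholders, not my actual disks):

```sh
# HDD mirror as the normal vdev, SSD mirror as the special allocation class.
zpool create tank \
  mirror /dev/disk/by-id/hdd-10tb-A /dev/disk/by-id/hdd-10tb-B \
  special mirror /dev/disk/by-id/ssd-500gb-A /dev/disk/by-id/ssd-500gb-B
```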
I decided to put my Docker root (containers/images) in a ZFS dataset using the ZFS storage driver:
Docker daemon.json:
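A minimal illustrative version would be along these lines (the data-root path is an assumption based on the /docker-root dataset mentioned below):

```sh
# Illustrative /etc/docker/daemon.json for the ZFS storage driver;
# the data-root path is an assumption, not necessarily the real one.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "storage-driver": "zfs",
  "data-root": "/docker-root"
}
EOF
sudo systemctl restart docker
```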
I've also set the recordsize of the /docker-root dataset to 32K and special_small_blocks to 128K, to make sure all files stored in the dataset get allocated on the special vdev with the fast SSDs. (My understanding is that if recordsize is equal to or smaller than special_small_blocks, all file blocks go to the special vdev.)
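For reference, the property changes amount to the following (pool/dataset name is a placeholder; these settings only apply to blocks written after the change, not to data that already exists on the HDD mirror):

```sh
# Placeholder pool/dataset name; only newly written blocks are affected.
zfs set recordsize=32K tank/docker-root
zfs set special_small_blocks=128K tank/docker-root

# Verify the effective values.
zfs get recordsize,special_small_blocks tank/docker-root
```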
But the problem is that I still see quite a lot of writes to the HDD vdev when I'm starting, stopping, or updating a container's image, as well as frequent writes while containers are running, which is not the case when I stop all containers. I also made sure that all container volumes are mounted from a config dataset that is configured the same way, storing all files on the special vdev.
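One way to confirm which vdevs actually take the writes while a container is started or stopped (pool name is a placeholder):

```sh
# Per-vdev I/O statistics every 5 seconds; compare the HDD mirror
# rows against the special mirror rows while managing containers.
zpool iostat -v tank 5
```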
I also notice that when the HDD vdev is scrubbing, which can take hours, managing the containers becomes super slow.
The atime property is also off for the datasets in the pool.
Why is this still happening even though I'm forcing all Docker-related files to be stored on the special vdev? Is this expected, or am I missing something?
Layout of the pool:
Dataset Config:
System Information: