Mount volume for just data #236
So I did a "workaround" for this and mounted my NAS data directory into the container at the same path. So I guess this is mostly solved for me, but maybe someone else needs this? Feel free to close if not. Final snippet that I used:

```yaml
app:
  image: nextcloud:fpm
  restart: always
  volumes:
    - nextcloud:/var/www/html
    - /mnt/nas/nextcloud/data:/mnt/nas/nextcloud/data:rw
  environment:
    - POSTGRES_HOST=db
  env_file:
    - db.env
  depends_on:
    - db
```
@ulrikstrid
I'm not sure how I should do the second; I would love an environment variable to set it.
I'm doing the same, I think:
@topas-rec I am running into the same problem, but I'm not sure how to solve it. Would you post your compose file?
My local drive path is /mnt/DATA |
Hello,
in Nextcloud the folder says this: (…). Thanks.
Hi folks, I also mount some host folder into NC's container in order to use it as NC external storage and to access files from outside of the container. However, since NC runs as user www-data inside the container, file permissions outside of the container do not allow me to write/delete files with my regular host user account. How did you solve this problem? Or do you not care and edit files from within NC only? Thanks+Cheers
@bbesser But I agree that it would be nice to have a way to change the UID.
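One workaround for the UID mismatch is to remap www-data inside the running container. This is a sketch, not a feature of the official image: the container name is taken from the compose snippets in this thread, and the host UID of 1000 is an assumption.

```shell
# remap www-data to the host user's UID (1000 assumed), then fix ownership
# of everything Nextcloud writes; adjust the container name to yours
docker exec nextcloud_app_1 usermod -u 1000 www-data
docker exec nextcloud_app_1 chown -R www-data /var/www/html
```

Note this has to be repeated if the container is recreated, unless it is baked into a derived image.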
@tilosp I have an occ files:scan job running, indeed :-D External storage in NC can also reside in some SFTP account (among WebDAV, local folder, ...). Using SFTP lets me log into my host user's account. Another approach would be to use an sshfs Docker volume (vieux driver) and mount it into NC's container, such that external storage could be configured using a local folder. Both approaches have upsides and downsides. Do you see any pitfalls? Thanks!
For everyone having problems currently: I have now built the setup (mounting my dual-boot NTFS hard drive into the NC container, as shown above) for the second time. I use this NC more or less because of this setup, because it's simple and data goes directly to my hard drive. I am not aware of any security issues, though. For everyone's information...
I suppose you're responding to the scan-job @tilosp and I mentioned.
Your setup is fine if you're modifying files only from within NC, so that data only has to travel from NC to your hard drive. If you also need to modify files on your hard drive ('outside modifications') and need those modifications to be reflected in NC, then NC won't recognize them by itself: you have to tell NC about outside modifications--hence the scan cron job. There is no need, however, to tell NC about outside modifications if you're using external storage; therefore I'm taking this approach. The problem remaining to be solved: external storage in the form of a local folder (mounted within NC's container) is read/written with NC's user and the corresponding permissions, which prevents outside modifications with my actual user account. This problem can be solved by explicitly logging into my user account, e.g. with sshfs. At this point there are two options: configure sshfs from within NC, or use an sshfs Docker volume (quite a large performance overhead due to sshfs, but OK for my personal use case). EDIT: option two also suffers from ownership-mapping problems. The file owner in the mounted volume is not www-data, in general.
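For completeness, the scan cron job mentioned above can be as simple as a host crontab entry invoking occ inside the container. A sketch, assuming the container name nextcloud_app_1 and a 15-minute interval (both are assumptions):

```
# host crontab: tell Nextcloud about outside modifications every 15 minutes
*/15 * * * * docker exec -u www-data nextcloud_app_1 php occ files:scan --all
```

Scanning all users can be slow on large data sets; `occ files:scan <user>` limits the scan to a single account.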
Yes, you're right. Thanks for your detailed explanation. I wasn't aware that the outside modifications (which I definitely make) are reflected in NC just because I use external storage. I think I also have the issue you described, which is still not solved, because my setup is the same as yours. The only difference seems to be that I use an external folder with an NTFS filesystem. (The filesystem does not keep track of users and owners, I guess. ntfs-3g makes all files belong to root, doesn't it?)
I had exactly this problem. I wanted to access my existing data with Nextcloud but had UID/GID problems between the Docker container and the host system. Docker currently doesn't provide very good options for this, so I used a workaround with bindfs, where the volumes are:

```yaml
volumes:
  - nextcloud:/var/www/html
  - <docker volume path>:/var/www/html/data/admin/files/<pathname>
```

I have not performance-tested this, but as I use Nextcloud only for personal access to files, it seems entirely adequate.
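The bindfs step itself isn't shown above. A sketch of how such a mapping can look on the host, assuming a host UID/GID of 1000 and the container's www-data UID/GID of 33 (paths are hypothetical):

```shell
# expose /srv/nextcloud-data a second time, with ownership mapped so that
# files owned by host UID/GID 1000 appear as UID/GID 33 (www-data) and vice versa
sudo bindfs --map=1000/33:@1000/@33 /srv/nextcloud-data /srv/nextcloud-data-mapped
# then bind-mount /srv/nextcloud-data-mapped into the container instead of the original path
```

Both views stay in sync; the host user keeps writing to the original directory with its own UID.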
@danyill I do not use a bind mount though, but an extra Docker volume with the local driver.
Just curious, but is there a reason why no one in here followed @tilosp's suggestion of modifying the config? I'm also running Nextcloud in Docker, and did the following.

Attach to the container:

```shell
matt@server:~/docker/nextcloud$ docker exec -it nextcloud_app_1 bash
```

Edit the Nextcloud config file:

```shell
root@fb9f144c428b:/var/www/html# nano config/config.php
```

and then add the required config:

```diff
 ....
   'datadirectory' => '/var/www/html/data',
   'dbtype' => 'sqlite3',
   'version' => '15.0.0.10',
   'overwrite.cli.url' => 'http://xxxx:9006',
   'installed' => true,
+  'check_data_directory_permissions' => false,
   'maintenance' => false,
 );
```

I stopped getting the original log @ulrikstrid posted after doing this. However, I then ran into another issue: because I'm accessing my Nextcloud via proxy_pass with nginx, I also had to add my external domain to the trusted domains:

```diff
   'trusted_domains' =>
   array (
     0 => 'internal-server-hostname:9006',
+    1 => 'nextcloud.externalserver.com',
   ),
```

After doing the above, it's all working for me now :-).

Bonus tip: for anyone setting this up for the first time and wanting to add filesystems they've attached to the Docker container, e.g. my docker-compose file:

```yaml
version: '2'

volumes:
  nextcloud:
  db:

services:
  db:
    image: mariadb
    restart: always
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=xxxx
      - MYSQL_PASSWORD=xxxx
      - MYSQL_DATABASE=xxxx
      - MYSQL_USER=xxxx

  app:
    build: .
    ports:
      - 9006:80
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
      # my nextcloud config is also on my mounted drive
      - /media/mydrive/4TB/nextcloud:/var/www/html/data:rw
      # actual mounted drive
      - /media/mydrive/4TB/Media:/media/4TB/Media:rw
```

You need to enable the "External storage support" app from within Nextcloud. After doing so, navigate to Settings from an admin account and click on External storage. From there, simply add the storage into Nextcloud and specify the location where it is mounted within the Docker container. Hope that helps someone.

Thanks, Matt
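The same external storage can also be added from the command line with occ instead of clicking through the UI. A sketch, where the container name and the mount path inside the container are taken from the compose file in this thread:

```shell
# enable the External storage app, then register the mounted folder as
# a "local" external storage (null::null = no extra authentication backend)
docker exec -u www-data nextcloud_app_1 php occ app:enable files_external
docker exec -u www-data nextcloud_app_1 php occ files_external:create \
  "Media" local null::null -c datadir="/media/4TB/Media"
```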
Hello,
then change the owner of /var/www/html/data/ in the container,
and add permission to access it on the host machine.
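A sketch of the two steps described above (the container name and host path are assumptions):

```shell
# inside the container: make the data directory writable by Nextcloud
docker exec nextcloud_app_1 chown -R www-data:www-data /var/www/html/data
# on the host: additionally grant your own user access, e.g. via an ACL,
# without disturbing the www-data ownership the container relies on
sudo setfacl -R -m u:$USER:rwX /path/on/host/nextcloud-data
```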
Hi,
(the second one applies to data inside a Docker-managed volume)
@mattie47, the external storage solution seems to be the cleanest. Are there any downsides to this compared to mounting at /var/www/html/data?
The main question seems to be answered, so I'll close this. |
Related discussion, resolution, and screenshot at https://help.nextcloud.com/t/add-storage-mounted-volume/23606/4 |
I want to mount a volume that is just for my data (images, documents, etc.) so that it lives outside of Docker.
I tried to do something like this:
But I'm getting the following message:
The most important use case for this is to put the data on my NAS instead of on the server running Nextcloud. And as it's just for personal use, I don't care if anyone can see my files outside of Nextcloud.
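A minimal compose sketch of this kind of setup, assuming the NAS share is already mounted on the host at /mnt/nas (all paths are hypothetical):

```yaml
services:
  app:
    image: nextcloud
    volumes:
      # Nextcloud itself stays in a named volume...
      - nextcloud:/var/www/html
      # ...while the data directory is bind-mounted from the NAS share
      - /mnt/nas/nextcloud/data:/var/www/html/data
```

As discussed in the thread, the bind-mounted directory must be writable by the container's www-data user, or the data-directory permission check must be disabled.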