Volumes created with ontap-nas may not be mountable right away when load-sharing mirrors are in use #84

Closed
remys89 opened this issue Jun 12, 2017 · 4 comments


remys89 commented Jun 12, 2017

Hi,

As discussed with the team on Slack today, here is the issue:

tl;dr: add a post-creation check that verifies the volume actually exists before starting the container that uses it.

We have a single test filer for running PoCs, with NetApp ONTAP 9, Docker 1.13, and the latest nDVP plugin. When we perform a "docker volume create" against the netapp storage driver, the volume is not immediately available on the filer, possibly because of an advertisement delay in our configuration.

[root@vm~]# docker volume create -d netapp --name=netappslack2
netappslack2
[root@vm~]# docker volume ls
DRIVER          VOLUME NAME
netapp:latest   netappslack2

showmount gives /netappdvp_netappslack2 (everyone)

[root@vm~]# mount .99:/netappdvp_netappslack2 test
mount.nfs: mounting .99:/netappdvp_netappslack2 failed, reason given by server: No such file or directory

Yet containers expect the volume to be there for writing data. As it is, the container starts and exits immediately with status 32 (mount(8)'s generic "mount failure" code), saying the file or directory is not available.

After waiting a few minutes (around four), we perform the docker run again using the created volume. This time it works fine and mounts the volume.

[root@VM~]# docker run --rm -it -v netappslack2:/take2 alpine ash
/ #
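
As a client-side stopgap, the post-creation check from the tl;dr could look roughly like the sketch below: create the volume, test-mount the export until it really works, and only then start the container. This is only a sketch for our environment; the filer address, the 5-second interval, and the ~4-minute cap are placeholders, and netappdvp_ is the default nDVP volume prefix.

#!/bin/sh
# Sketch: wait until a freshly created nDVP volume is really mountable
# before handing it to a container. FILER and the timeout values are
# placeholders.
FILER=192.0.2.99              # placeholder for the filer data LIF
VOL=netappslack2
EXPORT=/netappdvp_${VOL}      # default nDVP volume prefix
TMP=$(mktemp -d)

docker volume create -d netapp --name=${VOL}

for i in $(seq 1 48); do      # 48 x 5 s = the ~4 minutes we observed
    # showmount lists the export before it is mountable (see above),
    # so do a real test mount instead of trusting the export list
    if mount -t nfs ${FILER}:${EXPORT} ${TMP} 2>/dev/null; then
        umount ${TMP}
        break
    fi
    sleep 5
done
rmdir ${TMP}

docker run --rm -it -v ${VOL}:/take2 alpine ash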

dutchiechris (Contributor) commented:

I would be curious to see the entries in the nDVP log and in the cluster's internal management log, to check and compare timings.

For the nDVP logs, check here for how to get them.

For the cluster logs, check here for how to get them. The specific log file to look at is /mroot/etc/log/mlog/mgwd.log on the node that owns the aggregate.

Once you have access to both logs, (1) create a volume, then (2) check both logs and reply to this issue with the details.

In my lab I did the above steps, and from the cluster logs the vol create took only 1 second (from the "Calling doCreate" entry to "doCreate complete"), so I am curious what is reported on your cluster. I have seen connect failures on the first connect when using iSCSI storage. In my research I learned that on the first connect the device is formatted, and for a very large device, or with a low max QoS IOPS throttle, the formatting takes longer than Docker is willing to wait. With NFS, however, there is no format phase, so your connect issue must be different.
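
For example, once you have the file, the elapsed create time can be read straight off the timestamps of those two entries; a minimal sketch:

grep -E "Calling doCreate|doCreate complete" /mroot/etc/log/mlog/mgwd.log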

I hope with the logs we can see what is going on. Thanks!


remys89 commented Jun 13, 2017

At the moment of mounting, this was the logging in the nDVP log:

INFO[2017-06-12T13:27:01Z] Starting NetApp Docker Volume Plugin.         port= storageDriverName=ontap-nas volumeDriver=netapp volumePath="/var/lib/docker-volumes/netapp"
INFO[2017-06-12T13:35:40Z] Initialized logging.                          logFileLocation="/var/log/netappdvp/netapp.log" logLevel=info
INFO[2017-06-12T13:35:41Z] Initialized Ontap NAS storage driver.         driverVersion=17.04.0 extendedDriverVersion=native
INFO[2017-06-12T13:35:41Z] Starting NetApp Docker Volume Plugin.         port= storageDriverName=ontap-nas volumeDriver=netapp volumePath="/var/lib/docker-volumes/netapp"
ERRO[2017-06-12T13:36:50Z] Problem mounting volume: netappdvp_netapptest mountpoint: /var/lib/docker-volumes/netapp/netappdvp_netapptest error: exit status 32'

I'll come back to you with the storage logging.


remys89 commented Jun 13, 2017

@dutchiechris Uploaded to the NetApp FTP; Scott Stanton will move it, if I am correct.

innergy (Contributor) commented Jun 13, 2017

As we discussed on Slack, this appears to be caused by the use of load-sharing mirrors, which means that new volumes appear in the namespace based on a snapshot schedule. This will only be an issue with the 'ontap-nas' driver.
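
For anyone hitting this in the meantime: the namespace can be refreshed by hand by updating the load-sharing mirror set of the SVM's root volume. A sketch, assuming an SVM named svm1 whose root volume is svm1_root:

cluster::> snapmirror update-ls-set -source-path svm1:svm1_root

Until a scheduled or manual update runs, NFS clients that traverse the SVM root see the pre-create namespace, which matches the roughly four-minute delay reported above.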

There are multiple potential solutions that would get us closer to the desired behavior, namely that the volume is ready to be mounted as soon as the create operation returns. We'll investigate.

Thanks for the report!

innergy changed the title from "nDVP Volume creation delay handling" to "Volumes created with ontap-nas may not be mountable right away when load-sharing mirrors are in use" on Jun 13, 2017