Not able to mount on AWS EKS #32
Comments
Hi @aupadh12! Thank you for the report. A few questions:
I noticed this part of the command line:
Hi @juliohm1978, I am using the latest available version, 2.3. My password does not have commas, but it does have '@', which I think should be acceptable.
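As an aside, one way to keep characters like '@' or ',' out of the `-o` option string entirely is a mount.cifs credentials file. A sketch with placeholder values (file path, user, password, and share are all hypothetical):

```shell
# Create a root-only credentials file so special characters in the
# password never appear on the mount command line.
cat > /root/.cifs-creds <<'EOF'
username=user1
password=p@ssw0rd
EOF
chmod 600 /root/.cifs-creds

# Reference it from the mount options instead of username=/password=:
mount -t cifs -o credentials=/root/.cifs-creds,vers=1.0 //server/share /mnt/test
```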
Can you try running this on any node of the cluster as root?
The mount command itself should work.
Hi @juliohm1978, I tried a few other things, like including vers=1.0, and now I am getting the error below:

Warning FailedMount 3s (x5 over 17s) kubelet, ip-10-12-199-220.ec2.internal MountVolume.SetUp failed for volume "app-2" : mount command failed, status: Failure, reason: Error: Error running cmd [cmd=/usr/bin/mount -t cifs -o rw,username=user1,password=password,vers=1.0,sec=ntlmssp,uid=60425,nodfs //server/vol/server_pdms/pdms_prd/[directory] /var/lib/kubelet/pods/1a12345e2-675a-43db-hh54-9b43re5t7gid4/volumes/org~cifs/app-2]] [response=Retrying with upper case share name
I see. It is important that you make sure the mount command actually works outside the driver. From any node in the cluster, as root, you can test the mount using the same command line the driver is showing:
Does it show the same error?
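A sketch of such a manual test (server, share, and credentials below are placeholders; substitute the exact options from the driver's log, and run as root on a worker node):

```shell
# Placeholder values; use the same options the driver reported.
TARGET=/mnt/cifs-test
mkdir -p "$TARGET"
mount -t cifs -o rw,username=user1,password='secret',vers=1.0,sec=ntlmssp,uid=0 \
    //server/share "$TARGET"

# If the mount succeeds, the share contents should be listable:
ls "$TARGET"
umount "$TARGET"
```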
Hi @juliohm1978, we tried mounting it, but we got the error mentioned below:

Retrying with upper case share name
Sounds like you need to get your mount command line straight before using the driver. Post more details if you need more help 😄
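One detail worth checking when straightening out the command line: the failing share path contains spaces (shown as `\040` in the log). Unquoted, a shell splits such a path into several arguments, which mount then misparses. A minimal demonstration, using the share name from the error log:

```shell
# A share path with embedded spaces (from the reported error log).
share='//server1/folder1$/subfolder Store MI'

set -- $share            # unquoted expansion: the shell word-splits it
echo "unquoted: $# arguments"

set -- "$share"          # quoted expansion: stays one argument
echo "quoted: $# arguments"
```

On the command line the path must be quoted; the fstab-style `\040` escape may only be decoded when mount reads `/etc/fstab`, not when the path is passed directly.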
Hi @juliohm1978 , I was able to mount it using this driver. |
Great to hear!
Hello,
I am trying to mount an on-prem Windows NAS share (NTFS file system) on AWS EKS as a CIFS mount.
We need this drive mounted in a pod hosting an application that reads data from it.
We have installed cifs-utils on the worker nodes and have also installed the CIFS volume driver as a DaemonSet in the cluster.
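For reference, a quick sanity check we can run on a node to confirm the cifs-utils install actually took effect (command names are standard; package name may vary by distro):

```shell
# mount -t cifs depends on the mount.cifs userspace helper from cifs-utils.
if command -v mount.cifs >/dev/null 2>&1; then
    mount.cifs -V    # prints the installed mount.cifs version
else
    echo "mount.cifs not found: install cifs-utils on this node"
fi
```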
But even after these steps, I am getting the error below when trying to mount the NAS drive as CIFS:
Warning FailedMount 12s (x7 over 45s) kubelet, ip-10-12-199-220.ec2.internal MountVolume.SetUp failed for volume "app-2" : mount command failed, status: Failure, reason: Error: Error running cmd [cmd=/usr/bin/mount -t cifs -o rw,username=user1,password=password,sec=ntlmssp,uid=0,ro 0 0 //server1/folder1$/subfolder\040Store\040MI /var/lib/kubelet/pods/1a12345e2-675a-43db-hh54-9b43re5t7gid4/volumes/org~cifs/app-2] [response=mount error(22): Invalid argument
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
]: exit status 32
Can you please let me know what I am missing?