vSphere: Cannot add disks to more than one scsi adapter (disk count per node > 16) #42399
Comments
A fix for this is coming up.. |
kerneltime changed the title from "vSphere: Cannot add disks to more than one scsi adapter" to "vSphere: Cannot add disks to more than one scsi adapter (> 16)" on Mar 2, 2017
kerneltime pushed a commit to vmware-archive/kubernetes-archived that referenced this issue on Mar 2, 2017
kerneltime changed the title from "vSphere: Cannot add disks to more than one scsi adapter (> 16)" to "vSphere: Cannot add disks to more than one scsi adapter (disk count per node > 16)" on Mar 2, 2017
cc @robdaemon
Huh, I wonder if something changed between 1.3 and newer? We had tested up to 40-something volumes on 1.3. There was a fun discussion internally because it would always fail at SCSI ID 7, and I had to explain SCSI to some younger engineers :)
kerneltime pushed a commit to vmware-archive/kubernetes-archived that referenced this issue on Mar 2, 2017
k8s-github-robot pushed a commit that referenced this issue on Mar 23, 2017
Automatic merge from submit-queue

Fix adding disks to more than one scsi adapter. Fixes #42399

**What this PR does / why we need it**: Allows a single node to use more than 16 disks.

**Which issue this PR fixes**: fixes #42399

**Release note**:
```release-note
Fix adding disks to more than one scsi adapter.
```
BaluDontu pushed a commit to vmware-archive/kubernetes-archived that referenced this issue on Apr 6, 2017
BaluDontu added a commit to vmware-archive/kubernetes-archived that referenced this issue on Apr 12, 2017
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Kubernetes version (use `kubectl version`): v1.4.x, v1.5.x, v1.6.x, master

Environment: vSphere based install with vSphere Cloud Provider enabled.

`uname -a`: N/A

What happened: The vSphere Cloud Provider configures additional SCSI adapters to scale the number of volumes that can be attached to a VM (1 adapter = 16 disks). The code within the cloud provider does not handle this correctly: additional disks attach successfully to the VM, but when Kubernetes verifies the attach status the check fails, causing Kubernetes to retry an attach that never succeeds. In addition, when a SCSI adapter is created for the first time, the code does not use it to attach the disk, leading to a failed attach-disk request to the cloud provider.
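The per-adapter limit comes from virtual SCSI addressing: each controller exposes unit numbers 0-15, and unit 7 is conventionally reserved for the controller itself (which matches the "always fails at SCSI ID 7" observation above), leaving 15 usable slots per adapter. As a rough illustration of the slot arithmetic an attach path has to get right, here is a minimal sketch; `diskSlot` is a hypothetical helper, not the actual cloud-provider code:

```go
package main

import "fmt"

const (
	unitsPerController = 16 // virtual SCSI unit numbers 0-15 per controller
	reservedUnit       = 7  // unit 7 is reserved for the controller itself
)

// diskSlot maps the n-th attached disk (0-based) to a controller index
// and a SCSI unit number, skipping the reserved unit on each bus.
func diskSlot(n int) (controller, unit int) {
	usable := unitsPerController - 1 // 15 usable slots per controller
	controller = n / usable
	unit = n % usable
	if unit >= reservedUnit {
		unit++ // hop over the reserved unit 7
	}
	return controller, unit
}

func main() {
	for _, n := range []int{0, 7, 14, 15} {
		c, u := diskSlot(n)
		fmt.Printf("disk %d -> controller %d, unit %d\n", n, c, u)
	}
	// disk 14 lands on controller 0, unit 15; disk 15 spills onto
	// controller 1, unit 0 - the case the original code mishandled.
}
```

A correct attach path also has to use a newly created adapter immediately and verify attach status against the right (controller, unit) pair, which is where the bug described above surfaced.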
What you expected to happen: Additional disks should be added to the VM, up to the maximum number supported.
How to reproduce it (as minimally and precisely as possible): Create a single-node cluster and create a StatefulSet with more than 16 replicas, or a pod with more than 16 disks.
Anything else we need to know: