
[pvresize] PVC capacity display is not updated when the PV is resized #61259

Closed
taoyu27 opened this issue Mar 16, 2018 · 18 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. sig/storage Categorizes an issue or PR as relevant to SIG Storage.


@taoyu27

taoyu27 commented Mar 16, 2018

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

What happened:
The displayed PVC capacity was not updated after the PV was resized.

What you expected to happen:
The displayed PVC capacity should change to match the new size when the PV is resized.

How to reproduce it (as minimally and precisely as possible):

  1. Create a PVC requesting storage: 1Gi
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: helloworld-claim-demo
  annotations:
    volume.beta.kubernetes.io/storage-class: "rbd-demo"
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rbd-demo
  resources:
    requests:
      storage: 1Gi

kubectl get pvc
NAME                    STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
helloworld-claim-demo   Bound     pvc-0da06c71-2832-11e8-8857-fa163e4e23c0   1Gi        RWO            rbd-demo       18h
  2. kubectl edit pvc and change the requested storage to 4Gi
The updated pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: helloworld-claim-demo
  annotations:
    volume.beta.kubernetes.io/storage-class: "rbd-demo"
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rbd-demo
  resources:
    requests:
      storage: 4Gi


kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                           STORAGECLASS   REASON    AGE
pvc-0da06c71-2832-11e8-8857-fa163e4e23c0   4Gi        RWO            Delete           Bound     default/helloworld-claim-demo   rbd-demo                 18h
  3. Check the displayed PVC capacity
kubectl get pvc
NAME                    STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
helloworld-claim-demo   Bound     pvc-0da06c71-2832-11e8-8857-fa163e4e23c0   1Gi        RWO            rbd-demo       18h

The displayed PVC capacity is still 1Gi, even though the PV now reports 4Gi.
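A quick way to confirm the mismatch is to compare the capacity recorded on the PV object with the capacity in the claim's status (volume and claim names taken from the output above):

kubectl get pv pvc-0da06c71-2832-11e8-8857-fa163e4e23c0 -o jsonpath='{.spec.capacity.storage}'
kubectl get pvc helloworld-claim-demo -o jsonpath='{.status.capacity.storage}'

Given the tables above, the first command prints 4Gi while the second still prints 1Gi.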

Environment:

  • Kubernetes version (use kubectl version):
    1.10 beta 4
  • OS (e.g. from /etc/os-release):
    CentOS 7.2
  • Kernel (e.g. uname -a):
    3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
  • Others:
@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Mar 16, 2018
@taoyu27
Author

taoyu27 commented Mar 16, 2018

/sig storage

@k8s-ci-robot k8s-ci-robot added sig/storage Categorizes an issue or PR as relevant to SIG Storage. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Mar 16, 2018
@gnufied
Member

gnufied commented Mar 16, 2018

/assign

@leon-g-xu

leon-g-xu commented Apr 26, 2018

I am also hitting this issue. Any update? On the node itself I can see that the mounted volume capacity has been updated, though. Is the PVC capacity only a label, or does it reflect the underlying storage capacity actually mounted in the container?
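The CAPACITY column printed by kubectl get pvc comes from the claim's status.capacity field, while the size seen inside the container comes from the mounted filesystem, so the two can diverge during a resize. A rough way to compare them (my-claim, my-pod, and the mount path /data are placeholders):

kubectl get pvc my-claim -o jsonpath='{.status.capacity.storage}'
kubectl exec my-pod -- df -h /data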

@leon-g-xu

leon-g-xu commented Apr 26, 2018

Also, is there any manual workaround for this issue? We are on k8s 1.9.4.

@etienn01

etienn01 commented May 4, 2018

Seeing the same issue on k8s 1.9.3.
I enabled the ExpandPersistentVolumes feature gate, edited the StorageClass to set allowVolumeExpansion, and then increased spec.resources.requests.storage on a PVC. The PV size was increased (the corresponding AWS EBS volume also has the right size now), but the change is not reflected on the PVC.
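For reference, the expansion prerequisites on 1.9/1.10 look roughly like this; the StorageClass name and claim name are placeholders and this is only a sketch, assuming the in-tree AWS EBS provisioner:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2-expandable           # placeholder name
provisioner: kubernetes.io/aws-ebs
allowVolumeExpansion: true

# ExpandPersistentVolumes must be enabled on the kube-apiserver and
# kube-controller-manager, e.g. --feature-gates=ExpandPersistentVolumes=true,
# and the claim is then grown with something like:
kubectl patch pvc my-claim -p '{"spec":{"resources":{"requests":{"storage":"4Gi"}}}}'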

@etienn01

etienn01 commented May 7, 2018

Found a way to get it to work thanks to this comment:

  • run resize2fs after the EBS volume resize has completed to extend your filesystem (you can use lsblk to check the actual volume size; a rough sketch follows below)
  • use etcdctl to update the claim status with the new size
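A rough sketch of the filesystem half of that workaround, run on the node (or in a privileged container) where the volume is attached; /dev/xvdf is a placeholder device name and the etcd edit of the claim status is not shown:

lsblk                    # confirm the block device already shows the new size
resize2fs /dev/xvdf      # grow the ext4 filesystem to fill the resized device
df -h                    # the mounted filesystem should now report the larger size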

@gnufied
Member

gnufied commented Jun 12, 2018

This could be because, for something like RBD, the entire resize operation only completes after you [re]create a pod that was using the PVC. Likely the resize is just waiting for the file system resize to complete on the node. In 1.11, the PVC will have a FileSystemResizePending condition when this happens.

Also in 1.11 we will be moving this feature to beta. @guodongx @etienn01 can you guys retest this with 1.11 and report?
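On 1.11+ the pending file system resize should be visible on the claim itself, e.g. (claim name taken from the original report):

kubectl describe pvc helloworld-claim-demo          # look for FileSystemResizePending under Conditions
kubectl get pvc helloworld-claim-demo -o jsonpath='{.status.conditions[*].type}'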

@wminshew

wminshew commented Aug 24, 2018

Doing this manually with k8s 1.9, but having trouble with resize2fs. Some outputs below:

/ # lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda       8:0    0  100G  0 disk 
├─sda1    8:1    0 95.9G  0 part /etc/hosts
├─sda2    8:2    0   16M  0 part 
├─sda3    8:3    0    2G  0 part 
├─sda4    8:4    0   16M  0 part 
├─sda5    8:5    0    2G  0 part 
├─sda6    8:6    0  512B  0 part 
├─sda7    8:7    0  512B  0 part 
├─sda8    8:8    0   16M  0 part 
├─sda9    8:9    0  512B  0 part 
├─sda10   8:10   0  512B  0 part 
├─sda11   8:11   0    8M  0 part 
└─sda12   8:12   0   32M  0 part 
sdb       8:16   0   10G  0 disk /devpi
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  94.3G      2.5G     91.8G   3% /
tmpfs                     1.8G         0      1.8G   0% /dev
tmpfs                     1.8G         0      1.8G   0% /sys/fs/cgroup
/dev/sdb                975.9M    619.0M    289.7M  68% /devpi
/dev/sda1                94.3G      2.5G     91.8G   3% /dev/termination-log
/dev/sda1                94.3G      2.5G     91.8G   3% /etc/resolv.conf
/dev/sda1                94.3G      2.5G     91.8G   3% /etc/hostname
/dev/sda1                94.3G      2.5G     91.8G   3% /etc/hosts
shm                      64.0M         0     64.0M   0% /dev/shm
tmpfs                     1.8G     12.0K      1.8G   0% /var/run/secrets/kubernetes.io/serviceaccount
tmpfs                     1.8G         0      1.8G   0% /proc/kcore
tmpfs                     1.8G         0      1.8G   0% /proc/timer_list
tmpfs                     1.8G         0      1.8G   0% /sys/firmware
/ # resize2fs /dev/sdb
resize2fs 1.43.3 (04-Sep-2016)
open: No such file or directory while opening /dev/sdb
/ # fdisk -l
/ # 

Any ideas?

@wminshew

Following up here: I was able to solve the problem by re-launching the container with securityContext: privileged: true.
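For anyone else hitting the resize2fs permission error above, the relevant part of the pod spec looks roughly like this (pod name, container name, image, and claim name are placeholders; the mount path matches the df output above):

apiVersion: v1
kind: Pod
metadata:
  name: resize-helper             # placeholder
spec:
  containers:
  - name: shell                   # placeholder
    image: busybox                # placeholder
    command: ["sleep", "3600"]
    securityContext:
      privileged: true            # lets resize2fs open the underlying block device
    volumeMounts:
    - name: data
      mountPath: /devpi
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-claim         # placeholder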

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 22, 2018
@florianrusch

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 28, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 26, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 28, 2019
@florianrusch

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Mar 28, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 26, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 26, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
