AWS Centos 6 AMIs do not resize the root partition #34

Closed
jeremiahsnapp opened this issue Feb 19, 2016 · 12 comments
@jeremiahsnapp
Contributor

Problem

The AWS Centos 6 AMIs do not resize the root partition to use all of the attached EBS volume. This is a big problem because the default size is only 10GB, which fills up very quickly, especially when chef-server-ctl marketplace-setup ends up upgrading Chef packages.

The problem is that the AMI relies on cloud-init's growpart module to resize the partition, but apparently this only works for kernels > 3.8.

Reference: http://lists.openstack.org/pipermail/openstack/2014-August/008721.html

Growpart called by cloud-init only works for kernels > 3.8, because only newer kernels support changing the partition size of a mounted partition. On an older kernel the root partition has to be resized during the initrd stage, before it is mounted, which makes the subsequent cloud-init growpart run a no-op.

Here's the kernel in my Marketplace Chef Server I launched today.

[ec2-user@ip-172-31-5-119 ~]$ uname -r
2.6.32-573.12.1.el6.x86_64

Solutions

One solution would be to rebuild the Centos 6 AMIs using the old style of resizing, which is described pretty well in the mailing list thread above and also in this blog post.

http://blog.backslasher.net/growroot-centos.html
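
Roughly, the old-style approach from that blog post means baking the resize into the image's initramfs so it runs before the root filesystem is mounted. A sketch of what that build step might look like (I'm assuming the EPEL package names here; the blog post has the authoritative steps):

yum install -y epel-release
yum install -y cloud-utils-growpart dracut-modules-growroot
dracut --force /boot/initramfs-$(uname -r).img $(uname -r)

With the growroot dracut module baked in, the partition is grown at the initrd stage on first boot, which is exactly what pre-3.8 kernels require.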

After talking with @ryancragun, though, it seems that the better solution would be to build the AMIs using Centos 7, which is able to correctly use the cloud-init growpart module. Centos 7 build work is already underway, so I think this is the direction @ryancragun plans to take to fix this issue.

Workaround

In the meantime you can use fdisk to resize the partition entry in the partition table while the partition is mounted, and then reboot the instance. After the reboot finishes, the entire EBS volume is available to the root partition without needing to run resize2fs. As long as you follow the steps below carefully, the partition can be resized without losing any data.

The following is an example of the partition resizing procedure.

Be sure to log in as the root user when running the following commands.

If you just launched the Marketplace Chef Server, you will want to wait approximately 15 minutes for the cloud-init work to finish running. Yes, it really takes that long. :)

You can run pgrep -lf S53cloud-final to see if the cloud-init process is still running.
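
If you'd rather block until it's done instead of checking by hand, a simple loop like this works (just a convenience sketch):

while pgrep -f S53cloud-final > /dev/null; do sleep 30; done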

Once cloud-init is finished, you can run lsblk to see that the EBS volume attached as xvda has 30GB available but the xvda1 partition only has 10GB.

[root@ip-172-31-3-55 ~]# lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  30G  0 disk
└─xvda1 202:1    0  10G  0 part /

First determine the Start sector of the xvda1 partition. Make sure you use sectors for units instead of the default of cylinders. The -u flag changes the units to sectors.

In this case the Start sector is 2000.

[root@ip-172-31-3-55 ~]# fdisk -lu /dev/xvda

Disk /dev/xvda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders, total 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002370f

    Device Boot      Start         End      Blocks   Id  System
/dev/xvda1   *        2000    20971519    10484760   83  Linux
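
If you'd rather grab the Start sector programmatically instead of reading it off the table, sfdisk's dump output is easy to parse (assuming the stock util-linux sfdisk on Centos 6):

sfdisk -d /dev/xvda | grep '^/dev/xvda1' | sed 's/.*start= *\([0-9]*\),.*/\1/'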

Now use the following command to set the units to sectors, delete the partition entry from the partition table, and create a new partition entry using the same Start sector value, 2000 in this case. The empty line accepts the default End sector, which will be the last sector of the disk. The w saves the modifications to the partition table.

[root@ip-172-31-3-55 ~]# fdisk /dev/xvda <<END
u
d
n
p
1
2000

p
w
END
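
Before rebooting, it's worth re-running the listing to confirm that the new xvda1 entry still starts at the same sector and now ends at the last sector of the disk (if the Boot flag shown as * earlier is missing and you want it back, fdisk's a command toggles it):

fdisk -lu /dev/xvda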

Now reboot the instance; when you log back in, you can run lsblk or df -h to see that the partition has been resized.

[root@ip-172-31-3-55 ~]# lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  30G  0 disk
└─xvda1 202:1    0  30G  0 part /
@ghost

ghost commented Apr 7, 2016

I can't seem to ssh back into my AWS instances once I run the commands. Did you have this issue?

@mihir-govil

Hello team,
LVM wasn't set up on CentOS Linux release 7.2.1511 (Core), so we were not able to extend the root partition. Please share the details or steps for how to extend the root partition without LVM?

[root@ip-******** ec2-user]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      9.8G  8.0G  1.3G  87% /

@ghost

ghost commented Apr 22, 2016

Hey mihir-govil, here is how I was able to solve it.

Use lsblk to verify that you do indeed have more space on your device than is showing in your root partition. If so, run the following commands:

"sudo growpart /dev/xvda 1"
"sudo reboot"

@emachnic

I'd also like to add that you may need to install the cloud-utils-growpart package with yum to be able to do what @Purple90 said. Other than that, his solution worked like a charm.

@glasschef

FYI, we're also seeing this issue in Centos 7: https://getchef.zendesk.com/agent/tickets/10386 (internal Chef ticket)

@jeremiahsnapp
Contributor Author

The newer Marketplace Chef Server built on Centos 7 doesn't auto-resize its root partition because the cloud-utils-growpart package is not installed. I've tested this by using the cloud-boothook hook that cloud-init provides to install the cloud-utils-growpart package very early in the boot process. The package installs, and cloud-init's growpart module automatically uses it to expand the partition.

As a workaround until cloud-utils-growpart gets built into the AMI, you can put the following in your instance's user-data.

#cloud-boothook
#!/bin/bash
yum install -y cloud-utils-growpart

Reference: https://help.ubuntu.com/community/CloudInit
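
If you launch instances from the AWS CLI, the boothook can be passed as user-data something like this (hypothetical AMI ID, instance type, and key name; assuming the snippet above is saved as boothook.txt):

aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type t2.large \
  --key-name my-key \
  --user-data file://boothook.txt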

@ryancragun
Contributor

The images published today contain growpart and expand as expected. Thanks for reporting it y'all.

@hilaby

hilaby commented Apr 3, 2017

@Purple90 thanks, it worked.

After I typed sudo growpart /dev/xvda 1, this output came out:

CHANGED: partition=1 start=2048 old: size=62908492 end=62910540 new: size=209710462,end=209712510

Then I didn't have to reboot; I just typed sudo xfs_growfs -d / and it all went smoothly.

@shonry27

Disk /dev/xvda: 530 GiB, 569083166720 bytes, 1111490560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos

Here's my partition table:

Device     Boot      Start        End    Sectors  Size Id Type
/dev/xvda1 *          4096   16773119   16769024    8G 83 Linux
/dev/xvda2        16773120 1048575999 1031802880  492G 83 Linux

When I try to resize the partition using growpart:

growpart /dev/xvda 1

NOCHANGE: partition 1 is size 16769024. it cannot be grown

Any suggestions on how to go ahead with this?

@CrashLaker

@shonry27 same issue here. Did you happen to find a solution?

@jackbtran

@CrashLaker, I have the same issue as @shonry27 and can't seem to find an answer for it.
ubuntu@ip-192-0-2-100:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 24.9M 1 loop /snap/amazon-ssm-agent/7628
loop1 7:1 0 55.7M 1 loop /snap/core18/2812
loop2 7:2 0 63.9M 1 loop /snap/core20/2182
loop3 7:3 0 87M 1 loop /snap/lxd/27037
loop4 7:4 0 40.4M 1 loop /snap/snapd/20671
nvme0n1 259:0 0 20G 0 disk
├─nvme0n1p1 259:1 0 19.9G 0 part /
├─nvme0n1p14 259:2 0 4M 0 part
└─nvme0n1p15 259:3 0 106M 0 part /boot/efi
nvme1n1 259:4 0 800G 0 disk
├─nvme1n1p1 259:5 0 500M 0 part
├─nvme1n1p2 259:6 0 583.5G 0 part
│ ├─vgroot-lvu01 252:0 0 93.1G 0 lvm
│ ├─vgroot-lvu0x 252:1 0 372.4G 0 lvm
│ ├─vgroot-lvvar 252:2 0 12G 0 lvm
│ ├─vgroot-lvlog 252:3 0 24G 0 lvm
│ ├─vgroot-lvtmp 252:4 0 8G 0 lvm
│ ├─vgroot-lvitvmgr 252:5 0 48G 0 lvm
│ ├─vgroot-lvusr 252:6 0 4G 0 lvm
│ ├─vgroot-lvhome 252:7 0 16G 0 lvm
│ └─vgroot-lvroot 252:8 0 6G 0 lvm
└─nvme1n1p3 259:7 0 16G 0 part

ubuntu@ip-192-0-2-100:~$ sudo growpart /dev/nvme1n1 2
NOCHANGE: partition 2 is size 1223710720. it cannot be grown

@CrashLaker

Hi @jackbtran,
I can't remember the specific details now, but at the time I documented what I did here:
https://crashlaker.github.io/2022/03/19/ec2_grow_disk_size.html
Regards, C.
