feat(contrib/ec2): add m3 instances to template #768
Conversation
The current generation of EC2 instances (M3) is faster and cheaper than the old M1 instances, so m3.medium is a more reasonable default instance size. Fixes #689
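For context, the switch amounts to changing the template's default instance type. A sketch of what the relevant CloudFormation parameter might look like (the parameter name, description, and allowed values here are illustrative, not copied from the actual contrib/ec2 template):

```json
{
  "Parameters": {
    "InstanceType": {
      "Description": "EC2 instance type for cluster nodes (hypothetical parameter)",
      "Type": "String",
      "Default": "m3.medium",
      "AllowedValues": ["m3.medium", "m3.large", "m3.xlarge", "m3.2xlarge"]
    }
  }
}
```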
LGTM.
Does an m3.medium really only have 4GB of storage? http://aws.amazon.com/ec2/instance-types/ I'm going to launch one and see what storage it has and where it's mounted before I weigh in.
When using our contrib/ec2 scripts with m3.medium, you end up with a 5+GB SSD volume as root (see below). I think this will actually work fine for CoreOS and Deis, but I want to do more testing first.

core@ip-203-0-133-5 ~ $ mount
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=1911116k,nr_inodes=477779,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
/dev/xvda9 on / type btrfs (rw,relatime,ssd,space_cache)
/dev/xvda4 on /usr type ext4 (ro,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
tmpfs on /media type tmpfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /tmp type tmpfs (rw)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
xenfs on /proc/xen type xenfs (rw,relatime)
/dev/xvda6 on /usr/share/oem type ext4 (rw,nodev,relatime,commit=600,data=ordered)
core@ip-203-0-133-5 ~ $ df -H
Filesystem Size Used Avail Use% Mounted on
rootfs 6.2G 370M 5.8G 7% /
devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 2.0G 177k 2.0G 1% /run
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/xvda9 6.2G 370M 5.8G 7% /
/dev/xvda4 1.1G 293M 765M 28% /usr
tmpfs 2.0G 0 2.0G 0% /media
tmpfs 2.0G 0 2.0G 0% /tmp
/dev/xvda6 114M 78k 104M 1% /usr/share/oem
A 5GB root volume is too small; we need a larger default volume size. Most importantly, btrfs doesn't do well with low disk space (see http://blog.docker.io/2013/05/btrfs-support-for-docker/), so we might run into btrfs issues as the disk fills up.
Bumped to m3.large, which has ~32GB of instance storage per instance.
By the way, are old containers removed from nodes in CoreOS? I couldn't find any documentation about that. With Deis 0.7 I see quite fast growth in space usage on runtime nodes; according to New Relic, our 64GB EBS volume will be filled in about a month.
No, we do not do any node cleanup, though it's possible to do that with the scheduler that'll be in v0.8.0.
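Until that scheduler lands, stopped containers can be pruned manually. A hypothetical cleanup sketch (the helper and its parsing logic are illustrative, not part of Deis; `docker ps -a` column layout can vary across Docker versions):

```python
import subprocess

def exited_container_ids(ps_output: str) -> list[str]:
    """Pick container IDs whose STATUS column says 'Exited' from `docker ps -a` output."""
    ids = []
    for line in ps_output.splitlines()[1:]:  # skip the header row
        if "Exited" in line:
            ids.append(line.split()[0])      # first column is the container ID
    return ids

def prune_exited() -> None:
    """Remove every exited container on this node."""
    out = subprocess.check_output(["docker", "ps", "-a"]).decode()
    for cid in exited_container_ids(out):
        subprocess.call(["docker", "rm", cid])
```

Running something like `prune_exited()` from a periodic systemd timer on each node would keep the growth in check, at the cost of losing the ability to inspect dead containers.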
32GB of storage and 7.5GB of memory per instance seems appropriate for Deis. At $0.14/hr we're talking about $100/mo per instance, or roughly $300/mo for a 3-node cluster. LGTM.
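The back-of-the-envelope cost math above checks out (the $0.14/hr rate is taken from the comment; actual on-demand EC2 pricing varies by region and changes over time):

```python
HOURLY_RATE = 0.14        # USD/hr for m3.large, as quoted in the thread
HOURS_PER_MONTH = 24 * 30 # 720 hrs in a 30-day month
CLUSTER_SIZE = 3          # default Deis cluster size

per_instance = HOURLY_RATE * HOURS_PER_MONTH   # ≈ $100/mo
cluster = per_instance * CLUSTER_SIZE          # ≈ $300/mo

print(f"per instance: ${per_instance:.2f}/mo")
print(f"{CLUSTER_SIZE}-node cluster: ${cluster:.2f}/mo")
```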