
feat(contrib/ec2): add m3 instances to template #768

Merged
merged 2 commits into master from add-m3-instances
Apr 19, 2014

Conversation

bacongobbler
Member

The current generation of EC2 instances (M3) is faster and cheaper than the old M1 instances, so m3.medium is a more reasonable default instance size.

fixes #689

@carmstrong
Contributor

LGTM.

@mboersma
Member

Does an m3.medium really only have 4GB of storage? http://aws.amazon.com/ec2/instance-types/

I'm going to launch one and see what storage it has and where it's mounted before I weigh in.

@mboersma
Member

When using our contrib/ec2 scripts with m3.medium, you end up with a 5+GB SSD volume as root (see below). I think this will actually work fine for CoreOS and Deis, but I want to do more testing first.

core@ip-203-0-133-5 ~ $ mount
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=1911116k,nr_inodes=477779,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
/dev/xvda9 on / type btrfs (rw,relatime,ssd,space_cache)
/dev/xvda4 on /usr type ext4 (ro,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
tmpfs on /media type tmpfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /tmp type tmpfs (rw)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
xenfs on /proc/xen type xenfs (rw,relatime)
/dev/xvda6 on /usr/share/oem type ext4 (rw,nodev,relatime,commit=600,data=ordered)
core@ip-203-0-133-5 ~ $ df -H
Filesystem      Size  Used Avail Use% Mounted on
rootfs          6.2G  370M  5.8G   7% /
devtmpfs        2.0G     0  2.0G   0% /dev
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           2.0G  177k  2.0G   1% /run
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/xvda9      6.2G  370M  5.8G   7% /
/dev/xvda4      1.1G  293M  765M  28% /usr
tmpfs           2.0G     0  2.0G   0% /media
tmpfs           2.0G     0  2.0G   0% /tmp
/dev/xvda6      114M   78k  104M   1% /usr/share/oem
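
For what it's worth, I believe the 4GB listed on the instance-types page is the m3.medium's ephemeral instance-store SSD, while the ~6GB filesystem above is the EBS-backed root volume. A quick way to check on the node, assuming a stock CoreOS AMI where the root device is xvda:

# list block devices: the EBS root shows up as xvda, and any attached
# ephemeral instance-store SSD would appear as a separate disk (e.g. xvdb)
lsblk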

@gabrtv
Member

gabrtv commented Apr 18, 2014

A 5GB root volume is too small. We need a larger default volume size.

Running make pull after a fresh vagrant install results in a 2.8GB /var/lib/docker. Beyond that, we maintain the cedar stack layers in two more places at runtime:

  1. in the builder's docker graph (pulled when the builder boots)
  2. in the registry (via seed-deis-registry.service)

Most importantly, btrfs doesn't do well with low disk space. Quoting from http://blog.docker.io/2013/05/btrfs-support-for-docker/:

  • BTRFS IS VERY SENSITIVE TO LOW DISK SPACE CONDITIONS. So if you test that code, make sure that you have plenty of disk space. If disk space drops below 1 GB, stop, and enlarge your volume. Otherwise, the Go runtime might crash or freeze (even if there seems to be disk space available). I wasted almost 1 day to debug this issue (I was trying to understand why some tests would randomly fail…), I hope nobody else will hit it :-)

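For anyone keeping an eye on this on a running node: df alone can be misleading with btrfs, because btrfs tracks its own chunk allocation. A rough diagnostic sketch (just a suggestion, not part of this PR):

# filesystem-level view
df -h /

# btrfs' own view of the device and of allocated vs. used chunks;
# trouble tends to start when the Data allocation approaches the device
# size, even if df still reports free space
sudo btrfs filesystem show /
sudo btrfs filesystem df /
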
A 5GB root volume is too small. We might run into BTRFS issues
with low disk space.
@bacongobbler
Member Author

Bumped to m3.large, which has ~32GB of storage per instance.
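
For anyone launching the stack by hand rather than through the contrib scripts, the instance type can also be overridden at stack-creation time with the AWS CLI. A minimal sketch, where the parameter name (InstanceType), stack name, and template path are illustrative and not taken from this repo:

# create the CloudFormation stack with an explicit instance type;
# InstanceType, deis-cluster, and deis.template are assumed names
aws cloudformation create-stack \
  --stack-name deis-cluster \
  --template-body file://deis.template \
  --parameters ParameterKey=InstanceType,ParameterValue=m3.large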

@johanneswuerbach
Contributor

Btw, are old containers removed from nodes in CoreOS? I couldn't find any documentation about that.

With Deis 0.7 I'm seeing fairly rapid growth in disk usage on the runtime nodes. According to New Relic, our 64GB EBS volume will be full in about a month.

@bacongobbler
Member Author

No, we don't do any node cleanup yet, though that will be possible with the scheduler coming in v0.8.0.
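
In the meantime, cleanup has to be done by hand on each node. A rough sketch of what that might look like (an assumption on my part, not something the contrib scripts do for you):

# remove all stopped containers (running ones just produce an error and are skipped)
docker ps -a -q | xargs -r docker rm

# try to remove images; docker rmi refuses to delete images that are
# still used by a container, so only unreferenced ones actually go away
docker images -q | xargs -r docker rmi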

@gabrtv
Member

gabrtv commented Apr 19, 2014

32GB of storage and 7.5GB of memory per instance seems appropriate for Deis. At $0.14/hr that works out to roughly $100/mo per instance, or about $300/mo for a 3-node cluster. LGTM.

bacongobbler pushed a commit that referenced this pull request Apr 19, 2014
feat(contrib/ec2): add m3 instances to template
@bacongobbler bacongobbler merged commit 3270dbc into master Apr 19, 2014
@bacongobbler bacongobbler deleted the add-m3-instances branch April 20, 2014 06:26