Default node volume size is too small #780
Yeah this makes sense. FWIW, coincidentally I have been using …

Goldilocks disagrees, …
I think it's plausible to increase it, but we will need to devise a way of doing it: some instance types use local disks, so attaching EBS volumes to those seems unhelpful.
Now we have launch template support, can custom disk arrangements be configured into the template by users? Not just a sized EBS, but also two EBS volumes, or local + EBS, e.g. where you need a whole device for Ceph?
Yes, evidently we need a way to configure more than one EBS volume. Could you please open a separate issue for that?
…On Mon, 13 May 2019, 2:41 pm Aaron Roydhouse, ***@***.***> wrote:
Now we have launch template support, can custom disk arrangements be
configured into the template by users? Not just a sized EBS, but also two
EBS or local+EBS e.g. where you need a whole device for ceph?
Getting FreeDiskSpaceFailed with what I have defined:

```
kubectl describe node
Allocated resources:
  cpu  1810m (22%)  38 (475%)
Warning  FreeDiskSpaceFailed  46m  kubelet  failed to garbage collect required amount of images. Wanted to free 9439938969 bytes, but freed 2353231631 bytes
```
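For context, FreeDiskSpaceFailed means the kubelet's image garbage collector could not reclaim enough space on the node's root volume. The GC thresholds are tunable through the kubelet configuration; a minimal sketch (the percentage values here are illustrative, field names per Kubernetes' KubeletConfiguration):

```yaml
# KubeletConfiguration fragment -- illustrative values, not recommendations
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# start deleting unused images when disk usage exceeds 80%...
imageGCHighThresholdPercent: 80
# ...and stop once usage is back under 70%
imageGCLowThresholdPercent: 70
```

Note that tuning GC only helps if there are unused images to delete; when the working set of images itself exceeds the disk, a larger volume is the only fix.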
+1 for making it larger. I'm trying to deploy JupyterHub to k8s and am new to both tools. Even upping this to 50GB still caused DiskPressure warnings because of the large Docker images pulled for JupyterHub and my notebooks (which were a bloated 17GB). As a newbie, it was confusing when the DiskPressure and out-of-space errors caused things to start crashing.
17GB image!?! 😮 I'm guessing you don't run that with … In the past … Whatever size is the default, it won't be right for everybody. If this is an important choice for new users with heavier requirements, then I suggest it should be highlighted in the documentation. The Getting Started page could have a call-out box mentioning root disk size: "For large container images, you might need to specify a larger size". The Creating Clusters and Creating Node Groups pages currently don't include disk size in the example config file/options. They could, so people realize it is a choice and are prompted to think about it.
Well, I was 😄 (I inherited it from someone else and didn't realize how big it actually was until I built it myself and saw that the push was taking forever.)

I agree with highlighting it in the documentation, along with the instance types. I was following the tutorials that assume you're deploying a small image like nginx, and deploying JupyterHub on 3 x t3.medium with 20GB of disk basically crashed the cluster, but I didn't know enough about what to expect from various commands to know that that was what was happening. I was also trying to merge the zero-to-jupyterhub-k8s instructions for deploying without eksctl (with lots of manual steps and kubectl) with the eksctl getting started guide, and didn't do a good job of that. I might look into rewriting that tutorial myself.
yeah.. how do we specify the volume size we want in the yaml config file?

oh, nevermind... I see that it's … under …
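For anyone else looking: a sketch of where the setting lives in an eksctl config file. The cluster name, region, and values below are placeholders; the relevant field is `volumeSize` on a nodegroup:

```yaml
# Hypothetical ClusterConfig -- only volumeSize is the point here
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster        # placeholder name
  region: us-west-2       # placeholder region
nodeGroups:
  - name: ng-1            # placeholder nodegroup name
    instanceType: m5.large
    desiredCapacity: 3
    volumeSize: 100       # root EBS volume size in GiB
```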
* Fix applying default volume size to config
* Increase default volume size to 80G

Closes #780
Running a dozen workloads like Istio and Weave Cloud agents will result in pods being evicted due to disk pressure. I think we should increase the default volume size from 20GB to 100GB (default size on GKE).
The `--node-volume-size` flag does not advertise a default value, so you'll only find out about the 20GB limit after running into the disk-space issue.