Not specifying volumes defaults to emptyDir #46950
It is indeed the default behavior - primarily because it aligns with how
Docker behaves and is the simplest of the storage options. It should
definitely be documented.
On Sun, Jun 4, 2017 at 11:48 PM, Peter Zhao wrote:
*Is this a request for help?* (If yes, you should use our troubleshooting guide and community support channels, see https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/.):
NO
*Kubernetes version* (use kubectl version): HEAD
*What happened*:
I have a user who faced an odd problem.
42m 42m 1 {kubelet 172.168.200.150} spec.containers{vr2} Warning Failed Failed to start container with docker id 0f4f50e2e7bb with error: Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"no such file or directory\\\"\\n\""}
Finally we found that it was due to mounting an emptyDir volume to /dev
in the container.
He intended to configure the volume in the pod manifest as below and
mount it to /dev in the container:
"volumes": [
  {
    "name": "dev",
    "hostPath": {
      "path": "/dev"
    }
  }
]
But for whatever reason he configured it as below, with no volume type
specified. This resulted in an emptyDir volume:
"volumes": [
  {
    "name": "dev"
  }
]
*What you expected to happen*:
Validate that a volume type is specified, and error out if it is not.
Maybe there is some reason for defaulting to emptyDir by design, but I
didn't find any description of this in the design proposals
<https://github.com/kubernetes/community/blob/master/contributors/design-proposals/volumes.md>
or the user guide docs <https://kubernetes.io/docs/concepts/storage/volumes>.
At the very least we should document this default behavior.
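The validation suggested here could look something like the sketch below. The types and the `validateVolume` helper are illustrative stand-ins, not the actual Kubernetes API or validation code; the point is just to show a check that errors out when a volume names no source instead of silently defaulting:

```go
package main

import (
	"errors"
	"fmt"
)

// Simplified stand-ins for the Kubernetes API types. Field names follow
// the real VolumeSource, but this is an illustration, not the actual API.
type EmptyDirVolumeSource struct{}
type HostPathVolumeSource struct{ Path string }

type VolumeSource struct {
	EmptyDir *EmptyDirVolumeSource
	HostPath *HostPathVolumeSource
}

type Volume struct {
	Name string
	VolumeSource
}

// validateVolume errors out when no volume source is specified — the
// behavior requested in this issue, instead of defaulting to emptyDir.
func validateVolume(v Volume) error {
	if v.EmptyDir == nil && v.HostPath == nil {
		return errors.New("volume " + v.Name + ": no volume source specified")
	}
	return nil
}

func main() {
	bad := Volume{Name: "dev"} // no source, like the user's manifest
	if err := validateVolume(bad); err != nil {
		fmt.Println(err)
	}
}
```

With the user's `{"name": "dev"}` volume, this would surface the mistake at admission time rather than as a confusing container-start failure.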
*How to reproduce it* (as minimally and precisely as possible):
- configure a volume without a type in the pod manifest
- create the pod
- describe the pod
*Anything else we need to know*:
/sig storage
/sig api-machinery
/cc @jingxu97 <https://github.com/jingxu97> @smarterclayton
<https://github.com/smarterclayton>
@smarterclayton, you're right. In the volume spec, when no volume source is specified, the defaulting code sets:

obj.VolumeSource = VolumeSource{
    EmptyDir: &EmptyDirVolumeSource{},
}