
KEP (Provisional): quotas for ephemeral storage #2638

Merged

Conversation

RobertKrawitz
Contributor

No description provided.

@k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Sep 6, 2018
@k8s-ci-robot added sig/architecture Categorizes an issue or PR as relevant to SIG Architecture. sig/node Categorizes an issue or PR as relevant to SIG Node. labels Sep 6, 2018
@RobertKrawitz
Contributor Author

/assign @dchen1107
/assign @derekwaynecarr

@RobertKrawitz
Contributor Author

@sjenning

@RobertKrawitz
Contributor Author

ping @dchen1107 @derekwaynecarr

@k8s-ci-robot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Sep 10, 2018
@dashpole (Contributor) left a comment

The implementation details are very helpful, and well fleshed out. If behavior will be different between filesystems, perhaps the kubelet should fail on startup if someone tries to enable quota-based limits on an unsupported fs? I don't like behavior silently changing without an explicitly different configuration.

I am also not sure if the scope of the proposal makes sense to me. There is enormous value from less expensive metrics, but I don't think we solve any of the other goals of the proposal by enforcing limits on empty-dir volumes in isolation. IMO, it would make more sense to either tackle just metrics, or to expand the scope to include container limits, and pod limits as well.


### Non-Goals

* Enforcing limits on total pod storage consumption by any means, such
Contributor

I would also add to the non-goals enforcing node allocatable. This is something we should do eventually, as it prevents the abuses you list for containers that don't set limits. But it has the same complications as enforcing limits on total pod storage.

Contributor Author

"Enforcing node allocatable" -- what precisely do you mean?

Failing on startup if someone enables the quota feature gate but the filesystem backing the primary partition does not support the necessary quota mechanism is a good idea. It means that the kubelet proper will need to know about the quota mechanism. But it adds some complexity to the CRI. The kubelet will need to be able to ask the runtime whether it can support quotas on its local storage backend. Whom would you suggest I discuss that with?
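
As an illustration of that startup check (this sketch is mine, not the KEP's; it assumes the kubelet already knows the mount point backing its storage path and that inspecting mount options is a sufficient first-pass test):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// supportsProjectQuota reports whether mountPoint was mounted with a
// project-quota option ("prjquota" on ext4 and xfs, "pquota" on xfs).
// mountPoint must be the actual mount point; resolving it from an arbitrary
// path is omitted here. A real check would also confirm that quota
// accounting is active, but this is enough to fail fast on obviously
// unsupported configurations.
func supportsProjectQuota(mountPoint string) (bool, error) {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		return false, err
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		// Each line: device mountpoint fstype options dump pass
		fields := strings.Fields(scanner.Text())
		if len(fields) < 4 || fields[1] != mountPoint {
			continue
		}
		for _, opt := range strings.Split(fields[3], ",") {
			if opt == "prjquota" || opt == "pquota" {
				return true, nil
			}
		}
		return false, nil
	}
	return false, fmt.Errorf("mount point %q not found in /proc/mounts", mountPoint)
}

func main() {
	ok, err := supportsProjectQuota("/var/lib/kubelet") // path is illustrative
	fmt.Println(ok, err)
}
```

If the quota feature gate were on and a check like this failed for the filesystem backing ephemeral storage, the kubelet could refuse to start, per the suggestion above.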

Contributor

"Enforcing Node Allocatable" would mean enforcing a limit over the sum of all pod's disk usage. For memory, for example, this means the /kubepods cgroup has a memory limit of allocatable. The purpose of it is to protect system daemons even in the case where containers do not set limits. In this case, it would mean that the abuse vectors you point out cannot cause the node to run out of disk, but at worst cause temporary problems for other pods. However, currently our image accounting design has problems (images are considered node "overhead", which makes setting kube-reserved rather strange), so we probably don't want to enforce Node Allocatable until that is solved as well.

cc @yujuhong, who drove the initial CRI design and improvements.

Contributor Author

This is strictly referring to local storage capacity isolation aka ephemeral storage, at least at this point.

I will add "enforcing node allocatable" to the non-goals and make clear that the goals apply only to ephemeral storage.

Contributor

node allocatable exists for ephemeral storage, we just don't want to enforce it anytime soon.

Contributor Author

Adding non-goal.

## Proposal

This proposal applies project quotas to emptydir volumes on qualifying
filesystems (ext4fs and xfs with project quotas enabled). Project
Contributor

What do we plan to do for other filesystems? If we are changing the behavior for exceeding container limits (eviction vs "no space left on device"), ideally we want to keep the behavior the same across filesystems. If that isn't possible, we should explicitly point it out.

Contributor Author

Exceeding container limits will still result in eviction. And there's always the risk, even today, of running out of storage; we don't (and can't!) promise containers that they will have an unlimited amount of local storage to play with.

I'd be happy to support quotas on as many filesystems as we can, but if the underlying filesystem doesn't support project quotas, there's little we can do.

Contributor

> Exceeding container limits will still result in eviction

Is this because we are excluding the enforcement of container limits from the proposal, and leaving the current implementation as is?

Contributor

Is the behavior changing for empty-dirs (eviction-> "no space left on device")?

Contributor Author

Hitting the limit is still hitting the limit. If you hit the limit and quickly got back under you'd avoid eviction, but again, no different from today. If you hit the limit and stay there, the eviction code will pick up on it.

Consider setting the quota to one filesystem block more than the ephemeral limit (which I've considered doing). If you write up to the quota limit and stop, you will definitely be above the ephemeral limit, so the eviction manager will toss you when it sees it.
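
As a minimal sketch of that "one block over the limit" idea (the function name is hypothetical, and the actual implementation may choose the quota differently):

```go
// hardQuotaBytes illustrates the idea above: a pod that writes up to this
// quota is guaranteed to be over its ephemeral-storage limit, so the existing
// eviction manager will evict it on its next pass. blockSize is the
// filesystem block size (typically 4096).
func hardQuotaBytes(ephemeralLimitBytes, blockSize int64) int64 {
	return ephemeralLimitBytes + blockSize
}
```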

Contributor Author

Also consider what happens if you have two emptydir volumes with a total limit of 1 MiB. If you write 513 KiB to each you won't hit the quota on either, but you will still be evicted.

This proposal will certainly make "no space left on device" and similar failures more common, but they can still happen today.

Contributor

> If you hit the limit and stay there, the eviction code will pick up on it.

Ah, I see. I was under the impression that pods that hit the disk quota limit would just stay there. Might be worth clarifying that this is not replacing evictions for exceeding container limits (or empty-dir, for that matter).

Contributor Author

Adding "eliminating eviction" as an explicit non-goal.

Contributor

I think it's worth adding a section to clarify what number to use when setting the quota, and how this interacts with eviction. It's not very clear (at least to me) from reading the proposal.

error status and it is up to the caller to utilize a fallback
mechanism (such as the directory walk performed today).

### Operation Flow -- Removing a Quota.
Contributor

When do we remove a quota?

Contributor Author

When a volume is torn down (including at kubelet restart).

Contributor

Why does kubelet restart require tearing down the volumes? kubelet restart should not affect running containers.

* The SIG raised the possibility of a container being unable to exit
should we enforce quotas, and the quota interferes with writing the
log. This can be mitigated by either not applying a quota to the
log directory and using the du mechanism, or by applying a separate
Contributor

Agree that a non-enforcing quota seems like the best solution here.


### Operation Flow -- Applying a Quota

* Caller (emptydir volume manager or container runtime) creates an
Contributor

Is enforcing quotas on container writable layers (or logs) in-scope, or out-of-scope for this proposal? They are not listed in goals (and empty-dir quota is explicitly called out). If it is in-scope, you should detail the changes to the CRI that are required for it. If it is not in-scope, it should be added to non-goals. If it is out-of-scope, I am not sure if the risk about container shutdown still applies?

Contributor Author

I'd like it to be in scope, at least for writable layers, but who should I discuss the CRI changes with?

Contributor Author

This is already listed as a stretch goal. I am clarifying that it will only be done if we can apply an enforcing quota to the writable layer.

@dashpole
Contributor

cc @jingxu97, who worked on the initial local ephemeral storage proposal and implementation.

@derekwaynecarr
Member

@RobertKrawitz - this is a great write-up, and I have a few questions.

I understand that if the kubelet owns the project ID mapping for a given directory, the quota code (assumed vendored and managed in the kubelet) can handle assignment of a project ID to a given directory it manages (i.e. emptyDir). For container-runtime-managed layers, it's not clear to me whether you are saying that a) the container runtime asks the kubelet for a project ID, b) the kubelet informs the container runtime of a project ID (maybe as part of LinuxContainerConfig in StartContainer), or c) something else?

Can a non-privileged user write a file with an alternate project ID and mess up accounting? Basically, what is the security boundary that we can depend upon?

How many project IDs would you recommend we have per pod? Is there a project ID per container and emptyDir volume?

@RobertKrawitz
Contributor Author

@derekwaynecarr:

  1. If we're going to go with non-enforcing quotas, or enforcing quotas with a separate quota on each filesystem (and as I stated, those two alternatives are my preference), I think the container runtime should ask its instance of the quota code to allocate it a project ID. In other words, (c).

  2. A non-privileged user cannot create a file with an alternate project ID. Nor can the user cheat by hard linking a file with a different project ID into the directory tree, for instance. A privileged user can change the project ID of a file, but a non-privileged user cannot. See https://www.systutorials.com/docs/linux/man/8-xfs_quota/#lbAK. I will add a short background and a link to this.

  3. One project ID per ephemeral volume. So that means one per emptyDir volume, one per writable layer (one per container), and one (non-enforcing) per logdir.
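
For background, a minimal sketch of the same per-directory operations expressed with the xfs_quota tool on an XFS mount (this is not the KEP's implementation, which talks to the kernel quota interfaces directly in kubernetes/kubernetes#66928; the mount point, directory, project ID, and limit below are illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
)

// applyProjectQuota tags dir (recursively) with projectID and sets a hard
// block limit for that project, by shelling out to xfs_quota(8). It only
// sketches the mechanics the proposal relies on.
func applyProjectQuota(mountPoint, dir string, projectID int, limitBytes int64) error {
	cmds := []string{
		// Assign the project ID to everything under dir.
		fmt.Sprintf("project -s -p %s %d", dir, projectID),
		// Enforce a hard limit (in bytes) on blocks charged to the project;
		// for a non-enforcing quota, this step would simply be skipped.
		fmt.Sprintf("limit -p bhard=%d %d", limitBytes, projectID),
	}
	for _, c := range cmds {
		if out, err := exec.Command("xfs_quota", "-x", "-c", c, mountPoint).CombinedOutput(); err != nil {
			return fmt.Errorf("xfs_quota %q: %v: %s", c, err, out)
		}
	}
	return nil
}

func main() {
	// Hypothetical emptyDir path and project ID, 1 GiB hard limit.
	err := applyProjectQuota("/var/lib/kubelet",
		"/var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~empty-dir/scratch",
		1048577, 1<<30)
	fmt.Println(err)
}
```

Usage can then be read back with a command along the lines of `xfs_quota -x -c "quota -p -N -b 1048577" /var/lib/kubelet`, which is what makes per-volume metrics cheap compared to a directory walk.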

@RobertKrawitz
Contributor Author

@dashpole "IMO, it would make more sense to either tackle just metrics, or to expand the scope to include container limits, and pod limits as well."

Are container and pod limits referring to ephemeral storage or to other resources? If the former, I agree, but I haven't succeeded thus far in devising a mechanism that accomplishes that without either adding a lot of complexity or breaking the solution for metrics. If the latter, what other resources have similar limits that could be covered by this mechanism? Persistent local volumes would be a simple extension (basically, just add the same code as I added to emptydir volumes).

@dashpole
Contributor

> Are container and pod limits referring to ephemeral storage or to other resources? If the former, I agree, but I haven't succeeded thus far in devising a mechanism that accomplishes that without either adding a lot of complexity or breaking the solution for metrics. If the latter, what other resources have similar limits that could be covered by this mechanism? Persistent local volumes would be a simple extension (basically, just add the same code as I added to emptydir volumes).

I am referring to ephemeral storage. If we want to scope this to just monitoring + empty-dir volumes, I would be interested in an example of a use-case this solves (other than monitoring), or a goal it accomplishes. The abuse vector appears to still exist, as I can just write to my container's writable layer instead of the empty-dir. Is this about more than preventing abuse (e.g. user error)? Or is there a way for cluster admins to enforce that empty-dir volumes have size limits, and to disable container writable layers, and thus prevent this abuse vector?

@RobertKrawitz
Contributor Author

@dashpole I agree that in the absence of a solution for writable layers, enabling enforcement by default does not make sense. A feature gate, off by default, for enabling enforcement may be useful to give administrators and developers experience with the enforcing environment prior to enabling by default, if and when we go that route. For clusters that choose to go that route, it would provide us with soak.

@RobertKrawitz
Contributor Author

@derekwaynecarr adding a descriptive section on project quotas.

error status and it is up to the caller to utilize a fallback
mechanism (such as the directory walk performed today).

### Operation Flow -- Removing a Quota.
Contributor

Why does kubelet restart require tearing down the volumes? kubelet restart should not affect running containers.

should we enforce quotas, and the quota interferes with writing the
log. This can be mitigated by either not applying a quota to the
log directory and using the du mechanism, or by applying a separate
non-enforcing quota to the log directory.
Contributor

Non-enforcing quota sgtm for now.

The kubelet manages the log directories for CRI runtimes, with docker being an exception. To be backward-compatible with docker, we'll still have to support monitoring through du, or perform hacks to set quotas for docker's log directories.

Contributor Author

I was simply pointing out that as long as whatever tears down a volume calls into the quota code, the quota code can remove the quota, regardless of whether the kubelet has been restarted in the interim. I presume that a volume is torn down by whatever mechanism (e.g. volume plugin) created it; otherwise we need another way to ensure quotas are removed.

It sounds like we're all agreed on non-enforcing quotas for now. I'd like to keep a feature gate for enforcing quotas, to make it easier to experiment with them.

The quota code attempts to extract the usage by quota. If it can't (e.g. because no quota is set on the directory) it returns an error, and the mechanism above falls back to using du. Please reference the PR (kubernetes/kubernetes#66928).
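
A rough sketch of that quota-first, du-fallback flow (the helper names here are simplified stand-ins, not the functions from the PR):

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
)

// quotaUsage stands in for the quota-backed lookup: it would return the bytes
// charged to dir's project ID, or an error if no project quota is assigned
// (or the filesystem does not support project quotas).
func quotaUsage(dir string) (int64, error) {
	return 0, errors.New("no project quota assigned")
}

// duWalk is the existing fallback: walk the tree and sum file sizes. (A real
// `du` counts allocated blocks rather than apparent sizes; that detail is
// ignored here.)
func duWalk(dir string) (int64, error) {
	var total int64
	err := filepath.Walk(dir, func(_ string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if info.Mode().IsRegular() {
			total += info.Size()
		}
		return nil
	})
	return total, err
}

// diskUsage prefers the cheap, constant-time quota answer and only walks the
// directory when the quota code reports an error.
func diskUsage(dir string) (int64, error) {
	if n, err := quotaUsage(dir); err == nil {
		return n, nil
	}
	return duWalk(dir)
}

func main() {
	n, err := diskUsage(os.TempDir())
	fmt.Println(n, err)
}
```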

non-enforcing quota to the log directory.

As log directories are write-only by the container, and consumption
can be limited by other means (as the log is filtered by the
Contributor

We don't support rate-limiting logs.

In the long run, I think setting a (higher) quota for the log directory as a safeguard is not a bad idea.

Contributor Author

Logs are currently subject to the ephemeral storage request and limit, which limits the amount of log data (I have verified this experimentally).

Note in addition that even without quotas it is possible for writes
to fail due to lack of filesystem space, which is effectively (and
in some cases operationally) indistinguishable from exceeding quota,
so even at present code must be able to handle those situations.
Contributor

When quota is enforced, can we send an event when the container hits the limit? I think surfacing the information is important.

Contributor Author

I'm not aware of a mechanism to send (push) an event when a quota is hit. The kernel would need to be able to notify userland of this.

here](http://oss.sgi.com/pipermail/xfs/2015-March/040879.html) is no
longer available.

* Bugs in the quota code could result in a variety of regression
Contributor

How stable is the quota feature in general, and how many CVEs are related to it?

Contributor Author

Quotas in general have been around for many years. Project quotas have been in XFS since very early (prior to incorporation into Linux), and in ext4fs since 2014. I will research bugs related to quotas.

offers the pqnoenforce mount option that makes all quotas
non-enforcing.

We should offer two feature gates, one to enable quotas at all (on
Contributor

A thought: Can we enable quota for monitoring side-by-side with du, so that we can compare how stable it is?

Contributor Author

There's no guarantee that the numbers will match up at any given moment in time. That's partly because of the issue of deleted files, and partly because a quota reflects a snapshot at a given moment in time while du is not atomic.

The caller could use both mechanisms and compare them; the code calling into the quota for this purpose is isolated.

## Proposal

This proposal applies project quotas to emptydir volumes on qualifying
filesystems (ext4fs and xfs with project quotas enabled). Project
Contributor

I think it's worth adding a section to clarify what number to use when setting the quota, and how this interacts with eviction. It's not very clear (at least to me) from reading the proposal.


* Decision: who is responsible for quota management of all volume
types (and especially ephemeral volumes of all types). At present,
emptydir volumes are managed by the kubelet and logdirs and writable
Contributor

We can start a discussion with runtime vendors to set quotas for the writable layers. Prior to that, I think we can explore how this is expected to be implemented in the kubelet and what to pass to the runtime. IIUC, the ephemeral storage limit is shared among writable layers, logs, emptyDirs, etc. If that's the case, what limit are we going to pass to the runtime to set? Passing the project ID may not work because the writable layer may be in a separate filesystem (do we still need to include it?).

Maybe we can have a separate proposal for enforcement, given that this proposal already reaches 700 lines.

Contributor Author

We don't pass in the project ID; the quota code picks one. We only pass in the requested quota (or -1 for monitoring only).

I will clarify this.
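
To illustrate that division of responsibility ("the caller passes only the requested limit; the quota code picks the project ID"), a hypothetical interface sketch; the actual signatures live in kubernetes/kubernetes#66928 and may differ:

```go
// QuotaApplier is what a volume plugin (or, later, a runtime) would call.
// Callers never choose project IDs; they only state how much the directory
// may consume.
type QuotaApplier interface {
	// AssignQuota selects an unused project ID, applies it to dir, and sets
	// the hard limit to limitBytes. limitBytes == -1 means "monitoring only":
	// a project ID is still assigned so usage can be read cheaply, but no
	// limit is enforced.
	AssignQuota(dir string, limitBytes int64) error

	// GetUsage returns the bytes charged to dir's project, or an error if no
	// quota is assigned (callers then fall back to a directory walk).
	GetUsage(dir string) (int64, error)

	// ClearQuota removes the quota when the volume is torn down and returns
	// the project ID to the pool.
	ClearQuota(dir string) error
}
```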

Contributor Author

BTW, I should note that as the writable layer may be in a different filesystem from the emptydir volume(s), it may not be possible to apply an enforcing quota that applies to the entire ephemeral storage.

For example, if there is an ephemeral limit of 10 MB on the pod, and the writable layer is on a different filesystem from the emptydir volume(s), we cannot enforce a quota of 10 MB on the total. We can enforce a limit of 10 MB on the emptydir volume(s) and a separate limit of 10 MB on the writable layer in this case. That's inherent in the fact that quotas are always per-filesystem.

Contributor Author

There is already a section "Selecting a Project ID" that describes how project IDs are selected.

Contributor Author

The (non)interaction with eviction is listed under non-goals.

present, this defaults to False, but the intention is that this will
default to True by initial release.

* `FSQuotaForLSCIEnforcement` must be enabled, in addition to
Contributor

I would prefer if we delayed adding the FSQuotaForLSCIEnforcement feature gate until after we have settled on a proposal for how enforcement will work. Otherwise, what code would we place behind it?

Contributor Author

The low level code is very similar; see the PR. It's not a problem to remove it if it would be problematic, although having it in there makes it possible to experiment with enforcing quotas without additional code changes.

Contributor Author

I've removed this from the code and moved it to future in the KEP.

@dashpole
Contributor

dashpole commented Oct 3, 2018

This proposal lgtm wrt monitoring changes.

@k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Oct 11, 2018
@k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Oct 11, 2018
@derekwaynecarr (Member) left a comment

I understand how the kubelet can use this capability to manage emptyDir volume types, but I am less clear how the mechanism will work for the container runtime copy-on-write layer. If the intent is to defer that to the runtime entirely (which is fine), it would be good to clarify that explicitly in the proposal. At that point, the kubelet would efficiently account what it manages, and we can work with container runtimes to do the same for what they manage.

@@ -0,0 +1,810 @@
---
kep-number: 0
title: My First KEP
Member

update title: Quotas for Ephemeral Storage

Contributor Author

OK

kep-number: 0
title: My First KEP
authors:
- "@janedoe"
Member

add your name: @RobertKrawitz

Contributor Author

OK

title: My First KEP
authors:
- "@janedoe"
owning-sig: sig-xxx
Member

sig-node

Contributor Author

OK

reviewers:
- TBD
- "@alicedoe"
approvers:
Member

should be @dchen1107 and @derekwaynecarr

Contributor Author

OK

* `LocalStorageCapacityIsolation` must be enabled for any use of
quotas.

* `FSQuotaForLSCIMonitoring` must be enabled in addition. If this is
Member

It took me a while to realize that LSCI was short for LocalStorageCapacityIsolation.

Prefer full names: LocalStorageCapacityIsolationFSQuotaMonitoring

Contributor Author

OK

* _`FSQuotaForLSCIEnforcement` must be enabled, in addition to
`FSQuotaForLSCIMonitoring`, to use quotas for enforcement._

### Operation Flow -- Applying a Quota
Member

This flow makes sense to me for directories created by the kubelet (i.e. emptyDir where medium != memory).

Where I get a little confused is how you propose handling the copy-on-write layer managed by the container runtime. I think you are saying that the container runtime (and not the kubelet) is responsible for managing this, but the mechanics to realize that are less clear to me at the moment. Does the container runtime select the project ID, with the kubelet only discovering it when it sees a project ID is tracked to that directory? Are there any CRI changes needed?

Contributor Author

Adding non-goal for management of anything but emptydir, noting that writable layers will be a future project involving the container runtime.

For monitoring, I don't think any changes to the CRI per se are needed. Runtimes can use the quota mechanism to implement monitoring; they should follow the same protocol we're using for picking project IDs. For enforcement, changes would be needed, at a minimum, to communicate the cap.

@thockin changed the title from "Provisional: quotas for ephemeral storage" to "KEP (Provisional): quotas for ephemeral storage" Oct 18, 2018
@derekwaynecarr
Member

Thank you for updates. I think this is good enough to proceed to work through code.

/lgtm
/approve

@k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Oct 25, 2018
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: derekwaynecarr

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Oct 25, 2018
@dashpole
Contributor

/ok-to-test

@k8s-ci-robot removed the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Oct 29, 2018
@k8s-ci-robot merged commit 60d2637 into kubernetes:master Oct 29, 2018
justaugustus pushed a commit to justaugustus/community that referenced this pull request Dec 1, 2018
…meral-storage

KEP (Provisional): quotas for ephemeral storage