
Split NodeDiskPressure into NodeInodePressure and NodeDiskPressure #33218

Merged
merged 4 commits into from
Oct 4, 2016

Conversation

@dashpole (Contributor) commented Sep 21, 2016

Added NodeInodePressure as a NodeConditionType. SignalImageFsInodesFree and SignalNodeFsInodesFree now signal this pressure. Also extended the scheduler predicates so that they take InodePressure into account.
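The signal-to-condition mapping this PR introduces can be sketched as a minimal, self-contained Go program. The types below are local stand-ins for the upstream ones (the real definitions live in pkg/api and the eviction package), and the map layout here is illustrative:

```go
package main

import "fmt"

// Local stand-ins for the upstream types, for illustration only.
type NodeConditionType string
type Signal string

const (
	NodeDiskPressure  NodeConditionType = "DiskPressure"
	NodeInodePressure NodeConditionType = "InodePressure"

	SignalImageFsInodesFree Signal = "imagefs.inodesFree"
	SignalNodeFsInodesFree  Signal = "nodefs.inodesFree"
	SignalNodeFsAvailable   Signal = "nodefs.available"
)

// signalToNodeCondition maps each eviction signal to the node
// condition it reports: both inode signals now surface as
// InodePressure rather than DiskPressure.
var signalToNodeCondition = map[Signal]NodeConditionType{
	SignalImageFsInodesFree: NodeInodePressure,
	SignalNodeFsInodesFree:  NodeInodePressure,
	SignalNodeFsAvailable:   NodeDiskPressure,
}

func main() {
	fmt.Println(signalToNodeCondition[SignalNodeFsInodesFree]) // InodePressure
}
```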



@googlebot

We found a Contributor License Agreement for you (the sender of this pull request), but were unable to find agreements for the commit author(s). If you authored these, maybe you used a different email address in the git commits than was used to sign the CLA (login here to double check)? If these were authored by someone else, then they will need to sign a CLA as well, and confirm that they're okay with these being contributed to Google.

@k8s-github-robot k8s-github-robot added kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API size/L Denotes a PR that changes 100-499 lines, ignoring generated files. release-note-label-needed labels Sep 21, 2016
@googlebot

CLAs look good, thanks!

@mtaufen mtaufen added cla: no area/kubelet sig/node Categorizes an issue or PR as relevant to SIG Node. release-note-none Denotes a PR that doesn't merit a release note. and removed cla: yes labels Sep 21, 2016
@mtaufen mtaufen assigned mtaufen and unassigned davidopp Sep 21, 2016
@k8s-ci-robot (Contributor)

Jenkins GCE e2e failed for commit a6cecf1aae18973300c25760e66eb76c46537e17.

The magic incantation to run this job again is @k8s-bot gce e2e test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

@dashpole (Contributor, Author)

Fixed gofmt.

@mtaufen (Contributor) commented Sep 22, 2016

Reviewed 12 of 12 files at r1.
Review status: all files reviewed at latest revision, 24 unresolved discussions.


pkg/kubelet/kubelet_node_status.go, line 725 [r1] (raw file):

// setNodeInodePressureCondition for the node.
// TODO: this needs to move somewhere centralized...

nit: use TODO(username): syntax for TODOs, e.g. TODO(mtaufen): Make it sing beautiful melodies.


pkg/kubelet/eviction/eviction_manager_test.go, line 919 [r1] (raw file):

func TestDiskPressureNodeFsInodes(t *testing.T) {
  // TODO: we need to know inodes used when cadvisor supports per container stats

You might want to assign this and related TODOs to yourself since you're owning this.


pkg/kubelet/eviction/eviction_manager_test.go, line 946 [r1] (raw file):

      return result
  }
  // TODO: pass inodes used in future when supported by cadvisor.

ditto this one wrt owning TODOs


pkg/kubelet/eviction/eviction_manager_test.go, line 1016 [r1] (raw file):

  manager.synchronize(diskInfoProvider, activePodsFunc)

  // we should not have disk pressure

/s/disk/inode


pkg/kubelet/eviction/eviction_manager_test.go, line 1018 [r1] (raw file):

  // we should not have disk pressure
  if manager.IsUnderInodePressure() {
      t.Errorf("Manager should not report disk pressure")

/s/disk/inode


pkg/kubelet/eviction/eviction_manager_test.go, line 1031 [r1] (raw file):

  manager.synchronize(diskInfoProvider, activePodsFunc)

  // we should have disk pressure

/s/disk/inode


pkg/kubelet/eviction/eviction_manager_test.go, line 1033 [r1] (raw file):

  // we should have disk pressure
  if !manager.IsUnderInodePressure() {
      t.Errorf("Manager should report disk pressure since soft threshold was met")

/s/disk/inode


pkg/kubelet/eviction/eviction_manager_test.go, line 1046 [r1] (raw file):

  manager.synchronize(diskInfoProvider, activePodsFunc)

  // we should have disk pressure

/s/disk/inode


pkg/kubelet/eviction/eviction_manager_test.go, line 1048 [r1] (raw file):

  // we should have disk pressure
  if !manager.IsUnderInodePressure() {
      t.Errorf("Manager should report disk pressure since soft threshold was met")

/s/disk/inode


pkg/kubelet/eviction/eviction_manager_test.go, line 1073 [r1] (raw file):

  // we should not have disk pressure
  if manager.IsUnderInodePressure() {
      t.Errorf("Manager should not report disk pressure")

/s/disk/inode


pkg/kubelet/eviction/eviction_manager_test.go, line 1083 [r1] (raw file):

  // we should have disk pressure
  if !manager.IsUnderInodePressure() {
      t.Errorf("Manager should report disk pressure")

/s/disk/inode


pkg/kubelet/eviction/eviction_manager_test.go, line 1106 [r1] (raw file):

  manager.synchronize(diskInfoProvider, activePodsFunc)

  // we should have disk pressure (because transition period not yet met)

/s/disk/inode


pkg/kubelet/eviction/eviction_manager_test.go, line 1108 [r1] (raw file):

  // we should have disk pressure (because transition period not yet met)
  if !manager.IsUnderInodePressure() {
      t.Errorf("Manager should report disk pressure")

/s/disk/inode


pkg/kubelet/eviction/eviction_manager_test.go, line 1127 [r1] (raw file):

  manager.synchronize(diskInfoProvider, activePodsFunc)

  // we should not have disk pressure (because transition period met)

/s/disk/inode


pkg/kubelet/eviction/eviction_manager_test.go, line 1129 [r1] (raw file):

  // we should not have disk pressure (because transition period met)
  if manager.IsUnderInodePressure() {
      t.Errorf("Manager should not report disk pressure")

/s/disk/inode


plugin/pkg/scheduler/algorithm/predicates/predicates.go, line 1172 [r1] (raw file):

}

// CheckNodeDiskPressurePredicate checks if a pod can be scheduled on a node

/s/CheckNodeDiskPressurePredicate/CheckNodeInodePressurePredicate


plugin/pkg/scheduler/algorithm/predicates/predicates.go, line 1173 [r1] (raw file):

// CheckNodeDiskPressurePredicate checks if a pod can be scheduled on a node
// reporting disk pressure condition.

/s/disk/inode
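With the rename applied, the predicate's shape would be roughly as below. The types are local stand-ins rather than the real pkg/api ones, and the signature is simplified from the scheduler's actual FitPredicate:

```go
package main

import "fmt"

// Local stand-ins for the api types, for illustration only.
type ConditionStatus string
type NodeConditionType string

const (
	ConditionTrue     ConditionStatus   = "True"
	NodeInodePressure NodeConditionType = "InodePressure"
)

type NodeCondition struct {
	Type   NodeConditionType
	Status ConditionStatus
}

type Node struct {
	Conditions []NodeCondition
}

// CheckNodeInodePressurePredicate checks if a pod can be scheduled on
// a node reporting the inode pressure condition: the pod fits unless
// InodePressure is True.
func CheckNodeInodePressurePredicate(node *Node) bool {
	for _, c := range node.Conditions {
		if c.Type == NodeInodePressure && c.Status == ConditionTrue {
			return false
		}
	}
	return true
}

func main() {
	pressured := &Node{Conditions: []NodeCondition{{Type: NodeInodePressure, Status: ConditionTrue}}}
	fmt.Println(CheckNodeInodePressurePredicate(pressured)) // false: node is under inode pressure
	fmt.Println(CheckNodeInodePressurePredicate(&Node{}))   // true: no pressure condition set
}
```

Note the test-fixture comments below: using the typed constants (ConditionTrue, NodeInodePressure) instead of raw strings keeps fixtures in sync with exactly this kind of comparison.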


plugin/pkg/scheduler/algorithm/predicates/predicates_test.go, line 3050 [r1] (raw file):

          Conditions: []api.NodeCondition{
              {
                  Type:   "Ready",

Use api.NodeReady instead of a raw string.


plugin/pkg/scheduler/algorithm/predicates/predicates_test.go, line 3051 [r1] (raw file):

              {
                  Type:   "Ready",
                  Status: "True",

Use api.ConditionTrue instead of a raw string.


plugin/pkg/scheduler/algorithm/predicates/predicates_test.go, line 3062 [r1] (raw file):

          Conditions: []api.NodeCondition{
              {
                  Type:   "InodePressure",

Use api.NodeInodePressure instead of a raw string.


plugin/pkg/scheduler/algorithm/predicates/predicates_test.go, line 3063 [r1] (raw file):

              {
                  Type:   "InodePressure",
                  Status: "True",

Use api.ConditionTrue instead of a raw string.


plugin/pkg/scheduler/algorithm/predicates/predicates_test.go, line 3079 [r1] (raw file):

          nodeInfo: makeEmptyNodeInfo(noPressureNode),
          fits:     true,
          name:     "pod schedulable on node without pressure condition on",

Add something to the name to indicate that this is for inode pressure.


plugin/pkg/scheduler/algorithm/predicates/predicates_test.go, line 3085 [r1] (raw file):

          nodeInfo: makeEmptyNodeInfo(pressureNode),
          fits:     false,
          name:     "pod not schedulable on node with pressure condition on",

Add something to the name to indicate that this is for inode pressure.


plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go, line 155 [r1] (raw file):

      factory.RegisterFitPredicate("CheckNodeDiskPressure", predicates.CheckNodeDiskPressurePredicate),

      // Fit is determined by node disk pressure condition.

/s/disk/inode
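The registration pattern in defaults.go can be sketched as a simplified name-to-predicate registry. The real factory.RegisterFitPredicate takes a different predicate signature; the one here is a hypothetical simplification:

```go
package main

import "fmt"

// FitPredicate is a simplified stand-in: it decides fit from a map of
// node condition names to their truth values.
type FitPredicate func(nodeConditions map[string]bool) bool

// fitPredicates is the registry the scheduler consults when it is built.
var fitPredicates = map[string]FitPredicate{}

// RegisterFitPredicate makes a predicate available under a name.
func RegisterFitPredicate(name string, p FitPredicate) {
	fitPredicates[name] = p
}

func main() {
	// Fit is determined by the node's inode pressure condition.
	RegisterFitPredicate("CheckNodeInodePressure", func(conds map[string]bool) bool {
		return !conds["InodePressure"]
	})
	ok := fitPredicates["CheckNodeInodePressure"](map[string]bool{"InodePressure": true})
	fmt.Println(ok) // false: node under inode pressure does not fit
}
```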



@dashpole (Contributor, Author)

Fixed

@mtaufen (Contributor) commented Sep 27, 2016

:lgtm:


Reviewed 4 of 4 files at r3.
Review status: all files reviewed at latest revision, 24 unresolved discussions.



@mtaufen mtaufen added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Sep 27, 2016
@dashpole (Contributor, Author)

@k8s-bot gke e2e test this

@mtaufen mtaufen added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Sep 28, 2016
@mtaufen (Contributor) commented Sep 29, 2016

@k8s-bot unit test this issue #IGNORE

@dashpole (Contributor, Author)

@k8s-bot unit test this

@k8s-ci-robot (Contributor)

Jenkins unit/integration failed for commit da0a5cad40fba1de9757cdb274008a10025dc9e3. Full PR test history.

The magic incantation to run this job again is @k8s-bot unit test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

@mtaufen (Contributor) commented Oct 3, 2016

I think this is the fix for the TestNamespaceController unit test flake: #33866

@k8s-github-robot k8s-github-robot removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Oct 3, 2016
@dashpole (Contributor, Author) commented Oct 3, 2016

issue: #33382

@dashpole (Contributor, Author) commented Oct 3, 2016

@k8s-bot gci gce e2e test this

@dashpole (Contributor, Author) commented Oct 3, 2016

@k8s-bot gce e2e test this

@dashpole (Contributor, Author) commented Oct 3, 2016

@k8s-bot kubemark gce e2e test this

@dashpole (Contributor, Author) commented Oct 3, 2016

@k8s-bot kube gce e2e test this

@dashpole (Contributor, Author) commented Oct 3, 2016

@k8s-bot test this issue #IGNORE

@mtaufen mtaufen added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Oct 3, 2016
@k8s-github-robot

@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]

@k8s-github-robot

Automatic merge from submit-queue

@k8s-github-robot k8s-github-robot merged commit 092f9ed into kubernetes:master Oct 4, 2016
@dashpole dashpole deleted the NodeInodePressure_type branch October 4, 2016 15:34
dashpole added a commit to dashpole/kubernetes that referenced this pull request Nov 4, 2016
k8s-github-robot pushed a commit that referenced this pull request Nov 6, 2016
Automatic merge from submit-queue

We only report diskpressure to users, and no longer report inodepressure

See #36180 for more information on why #33218 was reverted.