Fix e2e limits #70329

Merged 2 commits on Oct 30, 2018
2 changes: 0 additions & 2 deletions test/e2e/scheduling/BUILD

@@ -34,10 +34,8 @@ go_library(
         "//staging/src/k8s.io/apimachinery/pkg/apis/meta/v1:go_default_library",
         "//staging/src/k8s.io/apimachinery/pkg/labels:go_default_library",
         "//staging/src/k8s.io/apimachinery/pkg/runtime:go_default_library",
-        "//staging/src/k8s.io/apimachinery/pkg/types:go_default_library",
         "//staging/src/k8s.io/apimachinery/pkg/util/intstr:go_default_library",
         "//staging/src/k8s.io/apimachinery/pkg/util/sets:go_default_library",
-        "//staging/src/k8s.io/apimachinery/pkg/util/strategicpatch:go_default_library",
         "//staging/src/k8s.io/apimachinery/pkg/util/uuid:go_default_library",
         "//staging/src/k8s.io/apimachinery/pkg/util/version:go_default_library",
         "//staging/src/k8s.io/apimachinery/pkg/util/wait:go_default_library",
23 changes: 10 additions & 13 deletions test/e2e/scheduling/priorities.go

@@ -29,8 +29,6 @@ import (
 	"k8s.io/api/core/v1"
 	"k8s.io/apimachinery/pkg/api/resource"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
-	"k8s.io/apimachinery/pkg/types"
-	"k8s.io/apimachinery/pkg/util/strategicpatch"
 	"k8s.io/apimachinery/pkg/util/uuid"
 	utilfeature "k8s.io/apiserver/pkg/util/feature"
 	clientset "k8s.io/client-go/kubernetes"

@@ -281,14 +279,22 @@ var _ = SIGDescribe("SchedulerPriorities [Serial]", func() {
 		Expect(found).To(Equal(true))
 		nodeOriginalMemoryVal := nodeOriginalMemory.Value()
 		nodeOriginalCPUVal := nodeOriginalCPU.MilliValue()
-		err := updateNodeAllocatable(cs, nodeName, int64(10000), int64(12000))
+		err := updateNodeAllocatable(cs, nodeName, int64(10737418240), int64(12000))
Member:
Any reason to set this particular value? Isn't 10000 (greater than the incoming limit 5000m) good enough?

Contributor (author):
This value is for memory, not CPU.

Member:
Sorry, I misread. Let me rephrase: so the original memory value 10000 (which is greater than the incoming limit 3000Mi) should be written as 10737418240 to represent 10Gi, right?

Contributor (author):
Yes, that is right, and that code path was not getting exercised because the patch had no effect.
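The unit mix-up in this thread is easy to check: node allocatable memory is a byte quantity, so the old value 10000 is only about 10kB. A standalone sketch (plain Go, not part of the PR; the 3000Mi request figure is taken from the discussion above) of the arithmetic:

```go
package main

import "fmt"

func main() {
	// The old test value: 10000 bytes, roughly 10kB, far below what the
	// balanced pods request, so the "large node" path was never exercised.
	oldBytes := int64(10000)

	// The fix uses 10Gi expressed in bytes: 10 * 1024^3 = 10737418240.
	tenGi := int64(10) * 1024 * 1024 * 1024

	// The incoming request mentioned in the thread, 3000Mi, in bytes.
	requestBytes := int64(3000) * 1024 * 1024

	fmt.Println(tenGi)                                         // 10737418240
	fmt.Println(oldBytes < requestBytes, tenGi > requestBytes) // true true
}
```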

 		Expect(err).NotTo(HaveOccurred())
 		defer func() {
 			// Resize the node back to its original allocatable values.
 			if err := updateNodeAllocatable(cs, nodeName, nodeOriginalMemoryVal, nodeOriginalCPUVal); err != nil {
 				framework.Logf("Failed to revert node memory with %v", err)
 			}
+			// Reset the node list with its old entry.
+			nodeList.Items[len(nodeList.Items)-1] = lastNode
 		}()
+		// Update nodeList with newNode.
+		newNode, err := cs.CoreV1().Nodes().Get(nodeName, metav1.GetOptions{})
+		if err != nil {
+			framework.Logf("Failed to get node: %v", err)
+		}
+		nodeList.Items[len(nodeList.Items)-1] = *newNode
 		err = createBalancedPodForNodes(f, cs, ns, nodeList.Items, podRequestedResource, 0.5)
 		framework.ExpectNoError(err)
 		// After the above we should see 50% of node to be available which is 5000MiB memory, 6000m cpu for large node.
@@ -457,20 +463,11 @@ func addRandomTaitToNode(cs clientset.Interface, nodeName string) *v1.Taint {
 func updateNodeAllocatable(c clientset.Interface, nodeName string, memory, cpu int64) error {
 	node, err := c.CoreV1().Nodes().Get(nodeName, metav1.GetOptions{})
 	framework.ExpectNoError(err)
-	oldData, err := json.Marshal(node)
-	if err != nil {
-		return err
-	}
 	node.Status.Allocatable[v1.ResourceMemory] = *resource.NewQuantity(memory, resource.BinarySI)
 	node.Status.Allocatable[v1.ResourceCPU] = *resource.NewMilliQuantity(cpu, resource.DecimalSI)
-	newData, err := json.Marshal(node)
-	if err != nil {
-		return err
-	}
-	patchBytes, err := strategicpatch.CreateTwoWayMergePatch(oldData, newData, v1.Node{})
 	if err != nil {
 		return err
 	}
-	_, err = c.CoreV1().Nodes().Patch(string(node.Name), types.StrategicMergePatchType, patchBytes)
+	_, err = c.CoreV1().Nodes().UpdateStatus(node)
Huang-Wei (Member), Oct 28, 2018:
Is that to say Patch() doesn't work for API fields in status? Or does it have a bug updating both the memory and cpu fields? (Because I checked the history, and it seems our code was always using Patch(), but updating memory only.)

Contributor (author):
> Is that to say Patch() doesn't work for API fields in status?

Yeah, that's right. Patch wasn't working for updating cpu or memory.

To test it, I added taints to the node and applied the patch; the taints were applied, while cpu and memory were not getting updated. So I had to use UpdateStatus.

 	return err
 }