added oc status checks for hpa missing cpu target and overlapping hpas #7799

Merged
merged 1 commit into openshift:master from skuznets/status-hpa on May 2, 2016

Conversation

stevekuznetsov
Contributor

TODO: overlapping HPAs

Fixes #7722

@deads2k @Kargakis PTAL

)

func TestHPAMissingCPUTargetWarning(t *testing.T) {
	g, _, err := osgraphtest.BuildGraph("./../../../api/graph/test/hpa-missing-cpu-target.yaml")
Contributor Author

At this step the CPUUtilization is defaulted to 80 by some API machinery. It's unclear to me how to get that not to happen.

Contributor

submitting to extensions/v1beta1 defaults to 80%. submitting to autoscaling/v1 does not.
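
(A rough sketch of the defaulting difference described above — a hypothetical helper, not the actual API-machinery code:)

// defaultCPUTarget mimics the behavior under discussion: an HPA decoded
// through extensions/v1beta1 gets an 80% CPU target filled in when none
// is set, while autoscaling/v1 leaves the field unset.
func defaultCPUTarget(apiVersion string, target *int32) *int32 {
	if target == nil && apiVersion == "extensions/v1beta1" {
		defaulted := int32(80) // the default being applied by the API machinery
		return &defaulted
	}
	return target
}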

Contributor Author

unable to decode "<snip>/hpa-missing-cpu-target.yaml": no kind "HorizontalPodAutoscaler" is registered for version "autoscaling/v1"

Contributor

right, origin doesn't have that group yet

Contributor Author

Is it even currently possible for an HPA in Origin to be missing the CPU target?

Contributor Author

Unless an admin edits one to remove it

Contributor

I don't think it's possible in the API, but it will be as soon as we get the autoscaling group in the rebase :-/

Contributor Author

@deads2k based on this discussion, unless I'm misunderstanding why you need these changes ASAP, let's postpone this PR for a while

Contributor

overlapping is valuable. missing will be valuable in a week

Contributor Author

Ah, the timeline is that close. Ok. Should I just t.Skip() the test for now? Do we want this before the rebase?

@0xmichalis
Contributor

@DirectXMan12 fyi

@0xmichalis
Contributor

also @jwforres @spadgett

@stevekuznetsov changed the title from "[wip] added oc status checks for hpa missing cpu target and overlapping hpas" to "added oc status checks for hpa missing cpu target and overlapping hpas" on Mar 4, 2016
@stevekuznetsov
Contributor Author

@deads2k this is ready for a look

	HPAOverlappingScaleRefWarning = "HPAOverlappingScaleRef"
)

func FindHPASpecsMissingCPUTargets(g osgraph.Graph, f osgraph.Namer) []osgraph.Marker {
Contributor

godoc
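
(The requested godoc might read something like this — illustrative wording, not necessarily what landed:)

// FindHPASpecsMissingCPUTargets scans the graph for horizontal pod
// autoscalers that do not declare a target CPU utilization and returns
// a warning marker for each one it finds.
func FindHPASpecsMissingCPUTargets(g osgraph.Graph, f osgraph.Namer) []osgraph.Marker {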

@stevekuznetsov
Contributor Author

@Kargakis addressed comments

	continue
}

if otherHPA, ok := references[hpa.Spec.ScaleRef]; ok {
Contributor

We need to warn about all hpas that may overlap and not just two.

Contributor Author

Which HPA node will we tie the warning marker to? Or should we make a marker for each one?

Contributor

Hm, I am not sure about the main node (definitely one of the younger hpas) but all should be included in the related nodes (except the main one)

Contributor

Also the dc/rc should be in the related nodes too

Contributor

Which HPA node will we tie the warning marker to? Or should we make a marker for each one?

RCs create a marker on each node. That seems reasonable and it allows good filtering by Node.
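
(A minimal sketch of the pattern being settled on here, using a stand-in type rather than the real osgraph.Marker: every HPA in an overlapping group gets its own marker, and the rest of the group goes into that marker's related nodes so output can be filtered by node.)

// marker is a hypothetical stand-in for osgraph.Marker.
type marker struct {
	node    int   // the HPA this warning is attached to
	related []int // the other HPAs it overlaps with
}

func markersForOverlap(hpas []int) []marker {
	var out []marker
	for i, node := range hpas {
		// collect every HPA in the group except this one
		others := make([]int, 0, len(hpas)-1)
		others = append(others, hpas[:i]...)
		others = append(others, hpas[i+1:]...)
		out = append(out, marker{node: node, related: others})
	}
	return out
}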

@stevekuznetsov force-pushed the skuznets/status-hpa branch 2 times, most recently from bb4b33c to 793e5d5 on March 4, 2016 18:19
@stevekuznetsov
Contributor Author

@Kargakis How should we find the DC/RC/Job to put in the list of related nodes? Won't we have to get all of the DC/RC/Jobs we know about and iterate through them all to find which one is referenced in the ScaleRef? That seems pretty computationally inefficient for what we're trying to achieve.

@deads2k
Contributor

deads2k commented Mar 4, 2016

@Kargakis How should we find the DC/RC/Job to put in the list of related nodes? Won't we have to get all of the DC/RC/Jobs we know about and iterate through them all to find which one is referenced in the ScaleRef? That seems pretty computationally inefficient for what we're trying to achieve.

Why aren't the edges relating the HPA to the things it's scaling already in the graph?

@stevekuznetsov
Contributor Author

@deads2k good question :)
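
(The resolution, visible in the diff below: the graph already carries typed edges from an HPA to whatever it scales, so finding the scaled object is a neighbor lookup along those edge kinds rather than a scan over every DC/RC/Job — roughly:)

// follow the typed edges out of the HPA node instead of iterating all objects
relatedNodes := append(
	g.SuccessorNodesByEdgeKind(node, kubegraph.ReferencedReplicationControllerEdgeKind),
	g.SuccessorNodesByEdgeKind(node, kubegraph.ReferencedDeploymentConfigEdgeKind)...,
)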

@jwforres
Member

jwforres commented Mar 4, 2016

can you include a sample of the cli output when these warnings appear?

@stevekuznetsov
Contributor Author

@deads2k would appreciate a look while I write tests

if len(hpas) > 1 {
	for i, node := range hpas {
		others := append(hpas[:i], hpas[i:]...)
		relatedNodes := append(g.SuccessorNodesByEdgeKind(node, kubegraph.ReferencedReplicationControllerEdgeKind), g.SuccessorNodesByEdgeKind(node, kubegraph.ReferencedDeplotmentConfigEdgeKind)...)
Contributor

typo

@stevekuznetsov force-pushed the skuznets/status-hpa branch 2 times, most recently from 65ea64e to 8413c12 on March 9, 2016 16:14
)

if len(scaledObjects) < 1 {
	markers = append(markers, createMissingScaleRefMarker(node, nil, namer))
Contributor

return here explicitly

Contributor

return here explicitly

I don't think this returns. It checks the next one, right?

Contributor Author

Added a continue
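
(With the continue in place, the loop presumably reads roughly like this sketch:)

for _, node := range hpaNodes {
	// ... gather the objects this HPA scales ...
	if len(scaledObjects) < 1 {
		markers = append(markers, createMissingScaleRefMarker(node, nil, namer))
		continue // nothing is scaled; move on to the next HPA
	}
	// the checks below only apply when something is scaled
}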

// - for every resulting edge in the new sub-graph, create an edge in the reverse direction
// - find the shortest paths between all HPA nodes in the graph
// - shortest paths connecting two horizontal pod autoscalers are used to create markers for the graph
func FindOverlappingHPAs(graph osgraph.Graph, namer osgraph.Namer) []osgraph.Marker {
Contributor

Nice, tight, and readable
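
(For the curious, a self-contained toy version of the algorithm the comment block describes, using integer node IDs in place of graph nodes: mirror every edge so scaling relationships can be walked in both directions, then search outward from each HPA node; any other HPA reached along some path shares a scale target and therefore overlaps.)

package main

import "fmt"

func overlappingHPAs(edges map[int][]int, hpaNodes []int) [][2]int {
	// mirror every edge so the sub-graph is effectively undirected
	undirected := map[int][]int{}
	for from, tos := range edges {
		for _, to := range tos {
			undirected[from] = append(undirected[from], to)
			undirected[to] = append(undirected[to], from)
		}
	}

	isHPA := map[int]bool{}
	for _, n := range hpaNodes {
		isHPA[n] = true
	}

	var overlaps [][2]int
	for _, start := range hpaNodes {
		// breadth-first search finds the shortest path to each reachable node
		seen := map[int]bool{start: true}
		queue := []int{start}
		for len(queue) > 0 {
			node := queue[0]
			queue = queue[1:]
			for _, next := range undirected[node] {
				if seen[next] {
					continue
				}
				seen[next] = true
				if isHPA[next] && next > start { // record each pair once
					overlaps = append(overlaps, [2]int{start, next})
				}
				queue = append(queue, next)
			}
		}
	}
	return overlaps
}

func main() {
	// HPAs 1 and 2 both (transitively) reach DC 10, which owns RC 20
	edges := map[int][]int{1: {10}, 2: {10}, 10: {20}}
	fmt.Println(overlappingHPAs(edges, []int{1, 2})) // [[1 2]]
}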

@deads2k
Contributor

deads2k commented Mar 23, 2016

Relatively minor comments overall. Not sure I want something this big in 1.2 though.

@stevekuznetsov
Contributor Author

[test]me

@stevekuznetsov force-pushed the skuznets/status-hpa branch 2 times, most recently from 2bbb994 to 9a45db1 on March 30, 2016 13:05
@stevekuznetsov
Contributor Author

@deads2k we have green

@deads2k added this to the 1.2.x milestone Apr 18, 2016
@stevekuznetsov
Contributor Author

@deads2k ping

@stevekuznetsov
Contributor Author

@deads2k updated not to embed the HPA.

Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
@deads2k
Contributor

deads2k commented May 2, 2016

[merge]

@openshift-bot
Contributor

openshift-bot commented May 2, 2016

continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/merge_pull_requests_origin/5782/) (Image: devenv-rhel7_4078)

@openshift-bot
Contributor

Evaluated for origin merge up to bd7e128

@openshift-bot
Contributor

Evaluated for origin test up to bd7e128

@openshift-bot
Contributor

continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/3529/)

@openshift-bot merged commit 737ca2a into openshift:master May 2, 2016
@stevekuznetsov deleted the skuznets/status-hpa branch June 15, 2016 18:10