removes custom scalers from kubectl #60455

Merged
merged 1 commit into kubernetes:master on Mar 26, 2018

Conversation

@p0lyn0mial
Contributor

p0lyn0mial commented Feb 26, 2018

What this PR does / why we need it: this PR removes custom scalers from kubectl and uses the genericScaler instead.

Release note:

NONE
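
For context, the generic scaler works through the polymorphic /scale subresource instead of a per-kind typed client. Below is a minimal sketch of that idea, not this PR's code; it assumes the client-go scale client API of that era (Get/Update without context arguments), and the scaleTo helper and package name are made up.

package scaleutil // hypothetical package name

import (
    "k8s.io/apimachinery/pkg/runtime/schema"
    scaleclient "k8s.io/client-go/scale"
)

// scaleTo reads and updates spec.replicas through the /scale subresource,
// so one code path covers Deployments, ReplicaSets, ReplicationControllers,
// and any CRD that exposes scale. That is the idea behind dropping the
// per-kind scalers.
func scaleTo(scales scaleclient.ScalesGetter, gr schema.GroupResource, namespace, name string, replicas int32) error {
    scale, err := scales.Scales(namespace).Get(gr, name)
    if err != nil {
        return err
    }
    scale.Spec.Replicas = replicas
    _, err = scales.Scales(namespace).Update(gr, scale)
    return err
}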
@p0lyn0mial


Contributor

p0lyn0mial commented Feb 26, 2018

/assign @deads2k
/assign @DirectXMan12

testcore "k8s.io/client-go/testing"
)
func FakeScaleClient(discoveryResources []*metav1.APIResourceList, pathsResources map[string]runtime.Object, pathsOnError map[string]map[string]kerrors.StatusError) (scale.ScalesGetter, error) {


@p0lyn0mial

p0lyn0mial Feb 26, 2018

Contributor

yeah, I can hear you :) How bad is this actually?

FYI: #56865


@deads2k

deads2k Feb 27, 2018

Contributor

yeah, I can hear you :) How bad is this actually?

@DirectXMan12 We really need a fake scale client we can use. Is it just a matter of finding time?


@DirectXMan12

DirectXMan12 Feb 27, 2018

Contributor

Yeah, IIRC, I have one that was usable with a fake RESTMapper, but it's a bit clunky. Let me try to take some time today and make it better.


@DirectXMan12

DirectXMan12 Feb 28, 2018

Contributor

p0lyn0mial#2
^---------- the current fake scale client works fine for this


@DirectXMan12

DirectXMan12 Feb 28, 2018

Contributor

that should also close #56865


@p0lyn0mial

p0lyn0mial Mar 9, 2018

Contributor

@DirectXMan12 thanks for showing me how to use the fake client :) I think it will work and we can remove this custom code.
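
For anyone reading along, here is a rough sketch of how the fake from k8s.io/client-go/scale/fake can back such a test, assuming its reactor-based API; the newFakeScales helper and package name are made up and this is not the PR's test code.

package scalertest // hypothetical test helper package

import (
    autoscalingv1 "k8s.io/api/autoscaling/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime"
    fakescale "k8s.io/client-go/scale/fake"
    testcore "k8s.io/client-go/testing"
)

// newFakeScales returns a fake ScalesGetter that pretends every deployment
// currently runs 2 replicas and accepts any scale update.
func newFakeScales() *fakescale.FakeScaleClient {
    fake := &fakescale.FakeScaleClient{}
    fake.AddReactor("get", "deployments", func(a testcore.Action) (bool, runtime.Object, error) {
        get := a.(testcore.GetAction)
        return true, &autoscalingv1.Scale{
            ObjectMeta: metav1.ObjectMeta{Name: get.GetName(), Namespace: get.GetNamespace()},
            Spec:       autoscalingv1.ScaleSpec{Replicas: 2},
        }, nil
    })
    fake.AddReactor("update", "deployments", func(a testcore.Action) (bool, runtime.Object, error) {
        update := a.(testcore.UpdateAction)
        // Echo back whatever replicas the caller asked for.
        return true, update.GetObject(), nil
    })
    return fake
}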

@@ -233,6 +235,19 @@ func (f *ring0Factory) RESTClient() (*restclient.RESTClient, error) {
return restclient.RESTClientFor(clientConfig)
}
func (f *ring0Factory) ScaleClient(mapper meta.RESTMapper) (scaleclient.ScalesGetter, error) {


@deads2k

deads2k Feb 27, 2018

Contributor

move this out to the ring2Factory and it can be no-arg. See the example here: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/util/factory_builder.go#L46
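
A rough sketch of what this suggestion amounts to, using the standard client-go scale helpers; buildScaleClient and its parameter wiring are placeholders, not the merged factory code. Once ring2Factory can resolve the mapper and clients itself, ScaleClient() needs no arguments and can just call something like this.

package cmdutil // hypothetical package name

import (
    "k8s.io/apimachinery/pkg/api/meta"
    "k8s.io/client-go/discovery"
    "k8s.io/client-go/dynamic"
    restclient "k8s.io/client-go/rest"
    scaleclient "k8s.io/client-go/scale"
)

// buildScaleClient wires a ScalesGetter from pieces the factory already has,
// so a no-arg ScaleClient() on ring2Factory can supply its own RESTClient,
// RESTMapper, and discovery client.
func buildScaleClient(restClient restclient.Interface, mapper meta.RESTMapper, disco discovery.DiscoveryInterface) scaleclient.ScalesGetter {
    // The resolver discovers which group/version serves each resource's
    // /scale subresource.
    resolver := scaleclient.NewDiscoveryScaleKindResolver(disco)
    return scaleclient.New(restClient, mapper, dynamic.LegacyAPIPathResolverFunc, resolver)
}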

return nil, err
}
restClient, err := f.clientAccessFactory.RESTClient()
mapper, _ := f.Object()


@deads2k

deads2k Feb 27, 2018

Contributor

The Scaler function can temporarily move to ring2Factory. It should have very few callers. Open an issue to remove all of them.

@@ -309,7 +300,12 @@ func (f *ring1Factory) Scaler(mapping *meta.RESTMapping) (kubectl.Scaler, error)
func (f *ring1Factory) Reaper(mapping *meta.RESTMapping) (kubectl.Reaper, error) {


@deads2k

deads2k Feb 27, 2018

Contributor

@pwittrock @adohe @soltysh I think we need to start trying to kill reapers. GC doesn't resolve problems for objects that are nested (remove this SA subject from that rolebinding), but we should start thinking of alternatives.

@@ -309,7 +300,12 @@ func (f *ring1Factory) Scaler(mapping *meta.RESTMapping) (kubectl.Scaler, error)
func (f *ring1Factory) Reaper(mapping *meta.RESTMapping) (kubectl.Reaper, error) {
mappingVersion := mapping.GroupVersionKind.GroupVersion()
clientset, clientsetErr := f.clientAccessFactory.ClientSetForVersion(&mappingVersion)
reaper, reaperErr := kubectl.ReaperFor(mapping.GroupVersionKind.GroupKind(), clientset)
mapper, _ := f.Object()


@deads2k

deads2k Feb 27, 2018

Contributor

Move out to ring 2.

@@ -148,7 +155,7 @@ func getOverlappingControllers(rcClient coreclient.ReplicationControllerInterfac
func (reaper *ReplicationControllerReaper) Stop(namespace, name string, timeout time.Duration, gracePeriod *metav1.DeleteOptions) error {
rc := reaper.client.ReplicationControllers(namespace)
scaler := &ReplicationControllerScaler{reaper.client}
scaler := ScalerFor(schema.GroupKind{}, nil, reaper.scaleClient, schema.GroupResource{Resource: "replicationcontrollers"})


@deads2k

deads2k Feb 27, 2018

Contributor

Create a NewScaler(scalesGetter scaleclient.ScalesGetter, gr schema.GroupResource) function. When scaling jobs is dead, we'll remove ScalerFor


@deads2k

deads2k Feb 27, 2018

Contributor

When scaling jobs is dead, we'll remove ScalerFor

And when that dies, we'll update the Scaler interface to accept GroupResource as an arg so we don't have to choose ahead of time.
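
As a sketch of what that constructor could look like, added next to ScalerFor in the same kubectl package; the genericScaler internals shown here are assumed, not quoted from the PR.

// NewScaler binds a ScalesGetter and a target GroupResource once, so call
// sites like the reaper above become
//     scaler := NewScaler(reaper.scaleClient, schema.GroupResource{Resource: "replicationcontrollers"})
// instead of going through ScalerFor with an empty GroupKind.
func NewScaler(scalesGetter scaleclient.ScalesGetter, gr schema.GroupResource) Scaler {
    // Assumed: the generic scaler is constructed from exactly these two pieces.
    return &genericScaler{scalesGetter, gr}
}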

@@ -196,7 +196,7 @@ func (c *namespacedScaleClient) Update(resource schema.GroupResource, scale *aut
Body(scaleUpdateBytes).
Do()
if err := result.Error(); err != nil {
return nil, fmt.Errorf("could not update the scale for %s %s: %v", resource.String(), scale.Name, err)


@deads2k

deads2k Feb 27, 2018

Contributor

I'm guessing you changed this to stop masking errors so that IsNotFound and the like would work? If so, please add a comment.
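
For the record, this is the behaviour difference in question, as an illustrative standalone snippet; the apierrors helpers are real, the example values are made up.

package main

import (
    "fmt"

    apierrors "k8s.io/apimachinery/pkg/api/errors"
    "k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
    // A server-side 404 as the scale client would see it.
    raw := apierrors.NewNotFound(schema.GroupResource{Resource: "deployments"}, "foo")
    // Wrapping with fmt.Errorf replaces the concrete StatusError with a
    // plain error, which masks it from type-based checks.
    wrapped := fmt.Errorf("could not update the scale for deployments foo: %v", raw)

    fmt.Println(apierrors.IsNotFound(raw))     // true
    fmt.Println(apierrors.IsNotFound(wrapped)) // false: the StatusError type is gone
}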

@deads2k


Contributor

deads2k commented Feb 27, 2018

As much as I want it, I don't think we're going to be landing this for 1.10.

/cc @kubernetes/sig-cli-maintainers

@p0lyn0mial


Contributor

p0lyn0mial commented Mar 13, 2018

@deads2k @DirectXMan12 I think I have addressed all the comments. PTAL

@@ -61,10 +55,17 @@ func ScalerFor(kind schema.GroupKind, jobsClient batchclient.JobsGetter, scalesG
case batch.Kind("Job"):
return &jobScaler{jobsClient} // Either kind of job can be scaled with Batch interface.


@deads2k

deads2k Mar 14, 2018

Contributor

@p0lyn0mial are you up for removing this in a follow-up pull. @soltysh got it deprecated in 1.10, so we'll be good to remove in 1.11.


@p0lyn0mial

p0lyn0mial Mar 14, 2018

Contributor

sure, no problem, does it mean that we actually want to have a scale endpoint for jobs?
(#58468)


@deads2k

deads2k Mar 16, 2018

Contributor

sure, no problem, does it mean that we actually want to have a scale endpoint for jobs?
(#58468)

No. We'll remove the special case here and you'll no longer be able to scale them from kubectl since it wasn't scaling in the same sense.

@soltysh

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm label Mar 20, 2018

@soltysh


Contributor

soltysh commented Mar 22, 2018

@p0lyn0mial mind rebasing this one?

@k8s-merge-robot


Contributor

k8s-merge-robot commented Mar 22, 2018

[MILESTONENOTIFIER] Milestone Pull Request: Up-to-date for process

@DirectXMan12 @deads2k @p0lyn0mial @soltysh

Pull Request Labels
  • sig/cli: Pull Request will be escalated to these SIGs if needed.
  • priority/important-longterm: Escalate to the pull request owners; move out of the milestone after 1 attempt.
  • kind/cleanup: Adding tests, refactoring, fixing old bugs.
@deads2k


Contributor

deads2k commented Mar 26, 2018

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm label Mar 26, 2018

@k8s-ci-robot


Contributor

k8s-ci-robot commented Mar 26, 2018

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: deads2k, p0lyn0mial, soltysh

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot


Contributor

k8s-ci-robot commented Mar 26, 2018

@p0lyn0mial: The following test failed, say /retest to rerun them all:

Test name: pull-kubernetes-unit
Commit: 76920f1
Rerun command: /test pull-kubernetes-unit

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@p0lyn0mial


Contributor

p0lyn0mial commented Mar 26, 2018

/retest

@k8s-merge-robot


Contributor

k8s-merge-robot commented Mar 26, 2018

Automatic merge from submit-queue (batch tested with PRs 60455, 61365, 61375, 61597, 61491). If you want to cherry-pick this change to another branch, please follow the instructions here.

@k8s-merge-robot k8s-merge-robot merged commit eda9fab into kubernetes:master Mar 26, 2018

14 checks passed
  • Submit Queue: Queued to run github e2e tests a second time.
  • cla/linuxfoundation: p0lyn0mial authorized
  • pull-kubernetes-bazel-build: Job succeeded.
  • pull-kubernetes-bazel-test: Job succeeded.
  • pull-kubernetes-cross: Skipped
  • pull-kubernetes-e2e-gce: Job succeeded.
  • pull-kubernetes-e2e-gce-device-plugin-gpu: Job succeeded.
  • pull-kubernetes-e2e-gke: Skipped
  • pull-kubernetes-e2e-kops-aws: Job succeeded.
  • pull-kubernetes-integration: Job succeeded.
  • pull-kubernetes-kubemark-e2e-gce: Job succeeded.
  • pull-kubernetes-node-e2e: Job succeeded.
  • pull-kubernetes-typecheck: Job succeeded.
  • pull-kubernetes-verify: Job succeeded.
@janetkuo


Member

janetkuo commented Mar 27, 2018

test-cmd starts flaking around the same time this PR got merged: #61748

@p0lyn0mial Would you take a look to see if it's related?

@deads2k


Contributor

deads2k commented Mar 27, 2018

test-cmd starts flaking around the same time this PR got merged: #61748

@p0lyn0mial Would you take a look to see if it's related?

It looks like k8s.io/kubernetes/test/integration/deployment.TestDeploymentLabelAdopted started failing at the same time. That doesn't use the scaler, right?
