
Implement scale endpoint for jobs #38756

Closed
soltysh opened this issue Dec 14, 2016 · 21 comments
Assignees: soltysh
Labels: area/batch, area/workload-api/job, sig/apps

Comments

@soltysh
Contributor

soltysh commented Dec 14, 2016

Currently Deployments, RCs and RSes have a scale endpoint, which allows scaling them directly instead of mutating the object's spec when scaling up/down. kubectl scale job modifies .spec.parallelism directly. I'm proposing that jobs should also have a /scale endpoint to leverage the scale functionality.

@kubernetes/sig-apps ideas/objections?
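
For illustration, a minimal client-go sketch of the difference (assuming a recent client-go; my-rs and my-job are placeholder names): resources with a /scale subresource are scaled without touching their spec, while a Job can only be scaled by mutating .spec.parallelism.

```go
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleExamples contrasts scaling via the /scale subresource with scaling a
// Job by editing its spec. Sketch only: names are placeholders.
func scaleExamples(ctx context.Context, cs kubernetes.Interface, ns string) error {
	// ReplicaSets (like Deployments and RCs) expose a scale subresource, so a
	// client can scale them without reading or writing the full object spec.
	sc, err := cs.AppsV1().ReplicaSets(ns).GetScale(ctx, "my-rs", metav1.GetOptions{})
	if err != nil {
		return err
	}
	sc.Spec.Replicas = 5
	if _, err := cs.AppsV1().ReplicaSets(ns).UpdateScale(ctx, "my-rs", sc, metav1.UpdateOptions{}); err != nil {
		return err
	}

	// Jobs have no scale subresource: "scaling" means mutating .spec.parallelism
	// on the Job object itself, which is what kubectl scale job does today.
	job, err := cs.BatchV1().Jobs(ns).Get(ctx, "my-job", metav1.GetOptions{})
	if err != nil {
		return err
	}
	parallelism := int32(5)
	job.Spec.Parallelism = &parallelism
	_, err = cs.BatchV1().Jobs(ns).Update(ctx, job, metav1.UpdateOptions{})
	return err
}
```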

@soltysh soltysh added area/batch, area/workload-api/job, and sig/apps labels Dec 14, 2016
@soltysh soltysh self-assigned this Dec 14, 2016
@0xmichalis
Contributor

SGTM

@adohe-zz

SGTM.

@wanghaoran1988
Contributor

/cc

@wanghaoran1988
Contributor

@soltysh @Kargakis I am trying to add the scale endpoint for the Job, but I have a problem now: I am using the Scale struct from the api extensions pkg, and I'm not sure whether that's correct, or whether I should create a new one in the batch pkg?
Also, I found the parallelism field doesn't exist in the JobStatus struct; should we add it to the struct to show how many replicas are actually running?

@0xmichalis
Contributor

Also, I found the parallelism field doesn't exist in the JobStatus struct; should we add it to the struct to show how many replicas are actually running?

Yeah, I have thought about this before and I think it deserves its own issue. Can you please open one and discuss that there?

@wanghaoran1988
Contributor

wanghaoran1988 commented Jun 5, 2017

After going through the logic, I find that the actual parallelism (the number of pods running at any instant) may be more or less than the requested parallelism (docs). So, to align with the docs, I think Job needs its own Scale struct; here is the spec I have in mind. The Completions field is immutable, but the actual number of running pods depends on it, so I put it in the ScaleStatus.
@Kargakis @soltysh Could you please take a look? If it's OK, I will start working on this.

```go
type ScaleSpec struct {
	// Specifies the maximum desired number of pods the job should
	// run at any given time. The actual number of pods running in steady state will
	// be less than this number when ((.spec.completions - .status.successful) < .spec.parallelism),
	// i.e. when the work left to do is less than max parallelism.
	// +optional
	Parallelism *int32
}

// represents the current status of a scale subresource.
type ScaleStatus struct {
	// Specifies the desired number of successfully finished pods the
	// job should be run with.  Setting to nil means that the success of any
	// pod signals the success of all pods, and allows parallelism to have any positive
	// value.  Setting to 1 means that parallelism is limited to 1 and the success of that
	// pod signals the success of the job.
	// +optional
	Completions *int32

	// The number of actively running pods.
	// +optional
	Active int32

	// label query over pods that should match the replicas count.
	// More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors
	// +optional
	Selector *metav1.LabelSelector
}

// represents a scaling request for a resource.
type Scale struct {
	metav1.TypeMeta
	// Standard object metadata; More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#metadata.
	// +optional
	metav1.ObjectMeta

	// defines the behavior of the scale. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#spec-and-status.
	// +optional
	Spec ScaleSpec

	// current status of the scale. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#spec-and-status. Read-only.
	// +optional
	Status ScaleStatus
}
```
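
If it helps, a hypothetical sketch of how a GET on jobs/&lt;name&gt;/scale could fill that Scale in from an existing Job (scaleFromJob is a made-up helper building on the types above and the internal batch package; illustration only, not merged code):

```go
// Hypothetical helper: how a GET on jobs/<name>/scale could fill in the
// proposed Scale from an existing Job. Illustration only, never merged.
func scaleFromJob(job *batch.Job) *Scale {
	return &Scale{
		ObjectMeta: job.ObjectMeta,
		Spec: ScaleSpec{
			Parallelism: job.Spec.Parallelism,
		},
		Status: ScaleStatus{
			Completions: job.Spec.Completions,
			Active:      job.Status.Active,
			Selector:    job.Spec.Selector,
		},
	}
}
```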

@0xmichalis
Contributor

@soltysh can you ptal on @wanghaoran1988's proposal?

@soltysh
Contributor Author

soltysh commented Aug 2, 2017

@wanghaoran1988 here's the thing. I talked some time ago with @deads2k and we both agreed that having a Scale structure inside every API group isn't the right solution. The idea was to have one Scale structure that would work with every resource out there. To do that we'd need some dynamic scale client, see #32523 (comment). If you could work on it I'd be more than happy to review it :)

@deads2k
Contributor

deads2k commented Aug 2, 2017

I have a pull adding the required information to discovery here: #49971

@soltysh
Contributor Author

soltysh commented Aug 2, 2017

There's also #29698 touching this topic.

@wanghaoran1988
Contributor

Seems @DirectXMan12 is working on this, per the comments here: #49971 (comment)

@wanghaoran1988
Contributor

@DirectXMan12 Are you working on this?

@DirectXMan12
Contributor

Once #49971 lands, I'll revive my polymorphic scale client PR in some form.

k8s-github-robot pushed a commit that referenced this issue Sep 1, 2017
Automatic merge from submit-queue (batch tested with PRs 49971, 51357, 51616, 51649, 51372)

add information for subresource kind determination

xref #38810 #38756

Polymorphic subresources usually have different groupVersions for their discovery kinds than their "native" groupVersions.  Even though the APIResourceList shows the kind properly, it does not reflect the group or version of that kind, which makes it impossible to unambiguously determine whether the subresource kind matches yours, and impossible to determine how to serialize your data.  See the HPA controller.

This adds an optional Group and Version to the discovery doc, which can be used to communicate the "native" groupVersion of an endpoint.  Doing this does not preclude fancier content-type negotiation in the future and doesn't prevent future expansion from indicating equivalent types, but it does make it possible to solve the problem we have today of polymorphic categorization.

@kubernetes/sig-api-machinery-misc @smarterclayton 
@cheftako since @lavalamp is out.

```release-note
Adds optional group and version information to the discovery interface, so that if an endpoint uses non-default values, the proper value of "kind" can be determined. Scale is a common example.
```
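
For reference, a rough Go illustration of what such a discovery entry looks like once Group and Version are populated (values are an example, assuming the post-#49971 metav1.APIResource fields):

```go
package example

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// Example discovery entry for a scale subresource once Group and Version are
// populated: the Scale kind served here belongs to autoscaling/v1 rather than
// to the parent resource's group/version. (Illustrative values only.)
var scaleSubresource = metav1.APIResource{
	Name:       "deployments/scale",
	Namespaced: true,
	Kind:       "Scale",
	Group:      "autoscaling",
	Version:    "v1",
	Verbs:      metav1.Verbs{"get", "patch", "update"},
}
```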
@sttts
Contributor

sttts commented Oct 4, 2017

@DirectXMan12 #49971 landed. Any plans for the polymorphic scale?

k8s-github-robot pushed a commit that referenced this issue Oct 18, 2017
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

allow */subresource in rbac policy rules

xref #29698
xref #38756
xref #49504
xref #38810

Allow `*/subresource` format in RBAC policy rules to support polymorphic subresources like `*/scale` for HPA.

@DirectXMan12 fyi

```release-note
RBAC PolicyRules now allow resource=`*/<subresource>` to cover `any-resource/<subresource>`.   For example, `*/scale` covers `replicationcontroller/scale`.
```
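
For reference, a sketch of a PolicyRule using the new form (Go types from k8s.io/api/rbac/v1; the verbs and API groups here are just an example):

```go
package example

import rbacv1 "k8s.io/api/rbac/v1"

// Grants access to the scale subresource of every resource in every API group
// via the new */subresource form (verbs are just an example).
var scaleAllRule = rbacv1.PolicyRule{
	APIGroups: []string{"*"},
	Resources: []string{"*/scale"},
	Verbs:     []string{"get", "update", "patch"},
}
```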
k8s-github-robot pushed a commit that referenced this issue Oct 23, 2017
…-client

Automatic merge from submit-queue (batch tested with PRs 53743, 53564). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Polymorphic Scale Client

This PR introduces a polymorphic scale client based on discovery information that's able to scale scalable resources in arbitrary group-versions, as long as they present the scale subresource in their discovery information.

Currently, it supports `extensions/v1beta1.Scale` and `autoscaling/v1.Scale`, but supporting other versions of scale if/when we produce them should be fairly trivial.

It also updates the HPA to use this client, meaning the HPA will now work on any scalable resource, not just things in the `extensions/v1beta1` API group.

**Release note**:
```release-note
Introduces a polymorphic scale client, allowing HorizontalPodAutoscalers to properly function on scalable resources in any API group.
```

Unblocks #29698
Unblocks #38756
Unblocks #49504 
Fixes #38810
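
For anyone picking this up later, rough usage of that polymorphic scale client looks something like the sketch below (construction of the ScalesGetter from discovery and a RESTMapper is elided, and exact signatures vary by client-go version):

```go
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/scale"
)

// setReplicas scales any resource that serves a /scale subresource, identified
// only by its group/resource and name. How the ScalesGetter is built (from
// discovery + RESTMapper) is omitted here.
func setReplicas(ctx context.Context, scales scale.ScalesGetter, ns, name string, gr schema.GroupResource, replicas int32) error {
	cur, err := scales.Scales(ns).Get(ctx, gr, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	cur.Spec.Replicas = replicas
	_, err = scales.Scales(ns).Update(ctx, gr, cur, metav1.UpdateOptions{})
	return err
}
```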
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Jan 7, 2018
@nikhita
Member

nikhita commented Jan 8, 2018

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale label Jan 8, 2018
@DirectXMan12
Contributor

DirectXMan12 commented Jan 10, 2018

Polymorphic scale landed in 1.9, FWIW (see the referenced commits in the activity stream above), so this is no longer blocked by that

@soltysh
Contributor Author

soltysh commented Feb 7, 2018

Based on #58468 (comment) this is not going to happen. Additionally, we're starting to deprecate kubectl scale job functionality in 1.10.

@cometta

cometta commented May 24, 2019

Will there be a new scale endpoint feature after the scale job command is deprecated?

@soltysh
Contributor Author

soltysh commented May 28, 2019

Nope, scaling jobs was considered a mistake, since they are different from all other resources that allow scaling. You can still modify a Job's spec field, though.

@shangmu

shangmu commented Jun 5, 2019

Then what's the expected behavior (or fundamental difference) of modifying the spec field? If there's no fundamental difference between the two (kubectl scale versus editing the spec field), why not just keep the scale job command and explicitly state the limitations?

akhilerm pushed a commit to akhilerm/apimachinery that referenced this issue Sep 20, 2022
add information for subresource kind determination

Kubernetes-commit: d56f6ef81634cc75a2d81e61ee93e9c2c69424cd