Enable Scale and HPA support for VMIRS #1881

Merged (4 commits) on Jan 8, 2019
4 changes: 4 additions & 0 deletions api/openapi-spec/swagger.json
@@ -5893,6 +5893,10 @@
"$ref": "#/definitions/v1.VirtualMachineInstanceReplicaSetCondition"
}
},
"labelSelector": {
Member:

How is the HPA detecting load to know when to scale? Is it using this labelSelector and calculating load based on the Pods that match this selector?

Member Author:

Yes. The metrics API needs to be active inside the cluster: https://kubernetes.io/docs/tasks/debug-application-cluster/core-metrics-pipeline/.

We will have to improve the situation a little bit for the case where no readiness check is explicitly configured on the VMI and migrations come into play, but apart from that the algorithm can work with non-ready pods, which allows it to cope with "boot" load.
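
As a sketch, an HPA consuming the new scale subresource could look like the manifest below (the resource names and thresholds are illustrative, not part of this PR; the metrics API mentioned above must be available for CPU metrics):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-vmirs-hpa              # illustrative name
spec:
  scaleTargetRef:
    apiVersion: kubevirt.io/v1alpha2
    kind: VirtualMachineInstanceReplicaSet
    name: my-vmirs                # illustrative VMIRS name
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
```

The HPA resolves the target through the scale subresource, reads the selector fed from `status.labelSelector`, and averages CPU usage over the pods matching that selector.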

Member:

Yeah, batch migrations may confuse the HPA. The workload's calculated metrics during a bulk migration might not accurately reflect the actual workload due to migration overhead.

"description": "Canonical form of the label selector for HPA which consumes it through the scale subresource.",
"type": "string"
},
"readyReplicas": {
"description": "The number of ready replicas for this replica set.\n+optional",
"type": "integer",
2 changes: 1 addition & 1 deletion automation/test.sh
@@ -46,7 +46,7 @@ if [[ $TARGET =~ openshift-.* ]]; then
export KUBEVIRT_PROVIDER="os-3.11.0"
fi
elif [[ $TARGET =~ .*-1.10.4-.* ]]; then
export KUBEVIRT_PROVIDER="k8s-1.10.4"
export KUBEVIRT_PROVIDER="k8s-1.10.11"
elif [[ $TARGET =~ .*-multus-1.12.2-.* ]]; then
export KUBEVIRT_PROVIDER="k8s-multus-1.12.2"
elif [[ $TARGET =~ .*-genie-1.11.1-.* ]]; then
@@ -9,7 +9,7 @@ local machine and are then pushed to a registry which is exposed at
## Bringing the cluster up

```bash
-export KUBEVIRT_PROVIDER=k8s-1.10.4
+export KUBEVIRT_PROVIDER=k8s-1.10.11
export KUBEVIRT_NUM_NODES=2 # master + one node
make cluster-up
```
@@ -26,7 +26,7 @@ node02 NotReady <none> 5s v1.10.4
## Bringing the cluster down

```bash
-export KUBEVIRT_PROVIDER=k8s-1.10.4
+export KUBEVIRT_PROVIDER=k8s-1.10.11
make cluster-down
```

@@ -2,7 +2,7 @@

set -e

image="k8s-1.10.4@sha256:b60a61ca03a1a6c504481020709a04f65e4dd9c929a8bcad18821c5f80d1b2b6"
image="k8s-1.10.11@sha256:b97e556795a56b9aa1763ddf3a49322b49f96877dccb7a164bbca779df078536"

source cluster/ephemeral-provider-common.sh

4 changes: 2 additions & 2 deletions docs/env-providers.md
@@ -26,7 +26,7 @@ Requires:
Usage:

```bash
-export KUBEVIRT_PROVIDER=k8s-1.10.4 # choose this provider
+export KUBEVIRT_PROVIDER=k8s-1.10.11 # choose this provider
export KUBEVIRT_NUM_NODES=3 # master + two nodes
make cluster-up
```
@@ -66,4 +66,4 @@ make cluster-up
* Create a `cluster/$KUBEVIRT_PROVIDER` directory
* Create a `cluster/$KUBEVIRT_PROVIDER/provider.sh` file
* This file has to contain the functions `up`, `build`, `down` and `_kubectl`
-* Have a look at `cluster/k8s-1.10.4/provider.sh` for a reference implementation
+* Have a look at `cluster/k8s-1.10.11/provider.sh` for a reference implementation
2 changes: 1 addition & 1 deletion docs/getting-started.md
@@ -31,7 +31,7 @@ dockerized environment, clone the KubeVirt repository, `cd` into it, and:

```bash
# Build and deploy KubeVirt on Kubernetes 1.10.4 in our vms inside containers
-export KUBEVIRT_PROVIDER=k8s-1.10.4 # this is also the default if no KUBEVIRT_PROVIDER is set
+export KUBEVIRT_PROVIDER=k8s-1.10.11 # this is also the default if no KUBEVIRT_PROVIDER is set
make cluster-up
make cluster-sync
```
2 changes: 1 addition & 1 deletion hack/common.sh
@@ -31,7 +31,7 @@ function build_func_tests_container() {
--label ${bin_name} .
}

-KUBEVIRT_PROVIDER=${KUBEVIRT_PROVIDER:-k8s-1.10.4}
+KUBEVIRT_PROVIDER=${KUBEVIRT_PROVIDER:-k8s-1.10.11}
KUBEVIRT_NUM_NODES=${KUBEVIRT_NUM_NODES:-1}
KUBEVIRT_MEMORY_SIZE=${KUBEVIRT_MEMORY_SIZE:-5120M}

5 changes: 5 additions & 0 deletions manifests/generated/vmirs-resource.yaml
@@ -32,4 +32,9 @@ spec:
- vmirss
singular: virtualmachineinstancereplicaset
scope: Namespaced
+subresources:
Member:

This requires the CustomResourceSubresources feature gate to be enabled for clusters prior to 1.11.

Is this CRD going to get rejected for 1.10 if the feature gate isn't enabled?

Member Author:

No, it will just strip the fields away from the CRD. I actually find that a little bit unpleasant in general, but good in this case ...

+scale:
+labelSelectorPath: .status.labelSelector
+specReplicasPath: .spec.replicas
+statusReplicasPath: .status.replicas
version: v1alpha2
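
As noted above, on clusters prior to 1.11 this subresource only takes effect when the apiserver runs with `--feature-gates=CustomResourceSubresources=true`. Once active, the three paths map into an `autoscaling/v1` Scale object roughly like this (object name and selector value are illustrative):

```yaml
# GET /apis/kubevirt.io/v1alpha2/namespaces/default/virtualmachineinstancereplicasets/my-vmirs/scale
apiVersion: autoscaling/v1
kind: Scale
metadata:
  name: my-vmirs              # illustrative name
spec:
  replicas: 3                 # served from .spec.replicas
status:
  replicas: 3                 # served from .status.replicas
  selector: app=my-vmirs      # served from .status.labelSelector (illustrative selector)
```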
2 changes: 1 addition & 1 deletion pkg/api/v1/deepcopy_generated.go

Some generated files are not rendered by default.

9 changes: 8 additions & 1 deletion pkg/api/v1/openapi_generated.go

Some generated files are not rendered by default.

7 changes: 7 additions & 0 deletions pkg/api/v1/types.go
@@ -32,6 +32,7 @@ import (
"encoding/json"
"fmt"

"k8s.io/api/autoscaling/v1"
k8sv1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/meta"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -88,6 +89,9 @@ func addKnownTypes(scheme *runtime.Scheme) error {
scheme.AddKnownTypes(metav1.Unversioned,
&metav1.Status{},
)
scheme.AddKnownTypes(schema.GroupVersion{Group: "autoscaling", Version: "v1"},
&v1.Scale{},
)
return nil
}

@@ -603,6 +607,9 @@ type VirtualMachineInstanceReplicaSetStatus struct {
ReadyReplicas int32 `json:"readyReplicas,omitempty" protobuf:"varint,4,opt,name=readyReplicas"`

Conditions []VirtualMachineInstanceReplicaSetCondition `json:"conditions,omitempty" optional:"true"`

+// Canonical form of the label selector for HPA which consumes it through the scale subresource.
+LabelSelector string `json:"labelSelector,omitempty"`
}

// ---
1 change: 1 addition & 0 deletions pkg/api/v1/types_swagger_generated.go

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion pkg/api/v1/zz_generated.defaults.go

Some generated files are not rendered by default.