MGMT-9797: Determine number of replicas according to DefaultPlacement
#728
/cc @Miciah
I have a few stylistic concerns, but overall the changes look good. You need to fix calls to `desiredRouterDeployment` in unit tests though. Also, please add unit test coverage for the changes to `desiredRouterDeployment`.
By the way, #712 refactors the test code. Please define a new test. ETA: I have something like the following in mind (this is an example and may need more test cases or other tweaks).

Example unit test:

```go
func TestDesiredRouterDeploymentDefaultPlacement(t *testing.T) {
	var (
		workerNodeSelector = map[string]string{
			"kubernetes.io/os":               "linux",
			"node-role.kubernetes.io/worker": "",
		}
		controlPlaneNodeSelector = map[string]string{
			"kubernetes.io/os":               "linux",
			"node-role.kubernetes.io/master": "",
		}
	)
	testCases := []struct {
		name                 string
		ingressConfig        *configv1.Ingress
		infraConfig          *configv1.Infrastructure
		expectedNodeSelector map[string]string
		expectedReplicas     int
	}{
		{
			name:          "empty ingress/infra config",
			ingressConfig: &configv1.Ingress{},
			infraConfig: &configv1.Infrastructure{
				Status: configv1.InfrastructureStatus{
					PlatformStatus: &configv1.PlatformStatus{
						Type: configv1.AWSPlatformType,
					},
				},
			},
			expectedNodeSelector: workerNodeSelector,
			expectedReplicas:     2,
		},
		{
			name:          "empty ingress config, single-node, worker",
			ingressConfig: &configv1.Ingress{},
			infraConfig: &configv1.Infrastructure{
				Status: configv1.InfrastructureStatus{
					InfrastructureTopology: configv1.SingleReplicaTopologyMode,
					PlatformStatus: &configv1.PlatformStatus{
						Type: configv1.AWSPlatformType,
					},
				},
			},
			expectedNodeSelector: workerNodeSelector,
			expectedReplicas:     1,
		},
		{
			name: "empty ingress config, single-node, control-plane",
			ingressConfig: &configv1.Ingress{
				Status: configv1.IngressStatus{
					DefaultPlacement: configv1.DefaultPlacementControlPlane,
				},
			},
			infraConfig: &configv1.Infrastructure{
				Status: configv1.InfrastructureStatus{
					ControlPlaneTopology: configv1.SingleReplicaTopologyMode,
					PlatformStatus: &configv1.PlatformStatus{
						Type: configv1.AWSPlatformType,
					},
				},
			},
			expectedNodeSelector: controlPlaneNodeSelector,
			expectedReplicas:     1,
		},
	}
	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			var (
				ic = &operatorv1.IngressController{
					ObjectMeta: metav1.ObjectMeta{
						Name: "default",
					},
					Spec: operatorv1.IngressControllerSpec{},
					Status: operatorv1.IngressControllerStatus{
						EndpointPublishingStrategy: &operatorv1.EndpointPublishingStrategy{
							Type: operatorv1.PrivateStrategyType,
						},
					},
				}
				ingressControllerImage = "quay.io/openshift/router:latest"
				apiConfig              = &configv1.APIServer{}
				networkConfig          = &configv1.Network{}
			)
			proxyNeeded, err := IsProxyProtocolNeeded(ic, tc.infraConfig.Status.PlatformStatus)
			if err != nil {
				t.Fatal(err)
			}
			deployment, err := desiredRouterDeployment(ic, ingressControllerImage, tc.ingressConfig, tc.infraConfig, apiConfig, networkConfig, proxyNeeded, false, nil)
			if err != nil {
				t.Error(err)
			}
			checkDeploymentHash(t, deployment)
			expectedReplicas := int32(tc.expectedReplicas)
			if !reflect.DeepEqual(deployment.Spec.Replicas, &expectedReplicas) {
				t.Errorf("expected replicas to be %v, got %v", expectedReplicas, *deployment.Spec.Replicas)
			}
			if !reflect.DeepEqual(deployment.Spec.Template.Spec.NodeSelector, tc.expectedNodeSelector) {
				t.Errorf("expected node selector to be %v, got %v", tc.expectedNodeSelector, deployment.Spec.Template.Spec.NodeSelector)
			}
		})
	}
}
```
Force-pushed from 47eb580 to 3452bff.
The verify job found some fmt issues.
/label px-approved
Force-pushed from 8d0c346 to 7a01a89.
Force push includes running the formatter.
/retest-required
Closed by mistake
/retest
/retest-required
Please review the full test history for this PR and help us cut down flakes.
8 similar comments
Implementation of the https://github.com/openshift/enhancements/blob/master/enhancements/single-node/single-node-openshift-with-workers.md enhancement.

The installation process now sets the new `DefaultPlacement` API field in the Ingress Config CR's status (openshift/installer#5746 was still unmerged at the time of writing this message).

This commit introduces two changes:

- While creating the default `IngressController`, or while creating Deployments corresponding to `IngressController` resources that don't have their number of replicas set, we determine the number of replicas according to the value of the new `DefaultPlacement` parameter.
- When creating Deployments out of `IngressController` resources that don't have their nodeSelector set, we determine the default nodeSelector according to the new `DefaultPlacement` API field.

See the enhancement, new API documentation, implementation, and code comments in the diff for the exact details on how, why, and when we set the fields and to which values.
/retest-required

/retest-required
1 similar comment
@omertuc: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Miciah, omertuc

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
/label docs-approved

We do not follow no-ff for this feature.