scaling deployment replicas #3910

Conversation

msivasubramaniaan
Collaborator

Fixes: #3822

Signed-off-by: msivasubramaniaan <msivasub@redhat.com>

codecov bot commented Feb 16, 2024

Codecov Report

Attention: Patch coverage is 35.71429%, with 27 lines in your changes missing coverage. Please review.

Project coverage is 45.21%. Comparing base (da60441) to head (b81e032).
Report is 61 commits behind head on main.

❗ Current head b81e032 differs from pull request most recent head 1f0cbb5. Consider uploading reports for the commit 1f0cbb5 to get more accurate results.

Files Patch % Lines
src/deployment.ts 42.30% 15 Missing ⚠️
src/explorer.ts 20.00% 8 Missing ⚠️
src/oc/ocWrapper.ts 0.00% 3 Missing ⚠️
src/openshift/component.ts 66.66% 1 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##             main    #3910       +/-   ##
===========================================
+ Coverage   32.37%   45.21%   +12.83%     
===========================================
  Files          85       85               
  Lines        6505     6602       +97     
  Branches     1349     1358        +9     
===========================================
+ Hits         2106     2985      +879     
+ Misses       4399     3617      -782     

☔ View full report in Codecov by Sentry.

src/explorer.ts Outdated
@@ -300,6 +323,10 @@ export class OpenShiftExplorer implements TreeDataProvider<ExplorerItem>, Dispos
result = [...helmRepos.sort(Helm.ascRepoName)];
}
}
} else if ('kind' in element && element.kind === 'Deployment') {
const pods = await Oc.Instance.getKubernetesObjects('pods');
const filteredPods: DeploymentPodObject[] = pods.filter((pod) => pod.metadata.name.indexOf(element.metadata.name) !== -1 && pod);
Collaborator

I don't think && pod is needed here. && pod will check if pod is not null or undefined, but since pod has a field metadata that you access, we know it's not null.

Collaborator Author

Removed the redundant && pod check.
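
For reference, this is roughly what the simplified filter looks like once the redundant check is dropped (a sketch based on the snippet above; DeploymentPodObject and the Oc wrapper are the extension's own types):

// Sketch of the simplified filter: the trailing "&& pod" is dropped because
// pod.metadata is already dereferenced, so pod cannot be null or undefined here.
const pods = await Oc.Instance.getKubernetesObjects('pods');
const filteredPods: DeploymentPodObject[] = pods.filter(
    (pod) => pod.metadata.name.indexOf(element.metadata.name) !== -1,
);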

Collaborator
@datho7561 left a comment

Works pretty well! Here are some things I noticed

if (trimmedValue.length === 0) {
return 'Scale value cannot be empty';
}
if (parseInt(trimmedValue, 10) < 0) {
Collaborator

I am allowed to enter 0.001 into the wizard. When this is passed to oc scale deployment, it fails.

I think we should do one of:

  • run parseInt on count before passing it to oc scale deployment
  • Only allow integer values in the wizard

Collaborator Author

The scale value now accepts only positive integers.
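
For illustration, a minimal sketch of an input-box validator that accepts only whole, non-negative numbers, so a value like 0.001 is rejected before it ever reaches oc scale deployment (the prompt text, default value, and helper name are assumptions, not the PR's exact strings):

import { window } from 'vscode';

// Sketch: prompt for a replica count and accept only non-negative integers.
async function promptForReplicaCount(): Promise<string | undefined> {
    return window.showInputBox({
        prompt: 'Scale to how many replicas?',
        value: '1',
        validateInput: (value: string) => {
            const trimmedValue = value.trim();
            if (trimmedValue.length === 0) {
                return 'Scale value cannot be empty';
            }
            if (!/^\d+$/.test(trimmedValue)) {
                return 'Scale value must be a non-negative whole number';
            }
            return undefined; // input is valid
        },
    });
}

With the input constrained to digits, calling parseInt(count, 10) before building the oc scale deployment command remains a cheap extra safety net.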

);
const response = await Oc.Instance.scalePod(component.metadata.name, count);
if (response.indexOf('scaled') !== -1) {
OpenShiftExplorer.getInstance().refresh(component);
Collaborator

If you are increasing the number of replicas (e.g. 1 --> 3), then the new pods don't show up when you refresh the tree here, since the pods take a while to reach the "Running" state. I think this can be a bit confusing for the user.

However, I can't think of a way to address this other than refreshing the Application Explorer on a timer, which we are trying to avoid, so I think this is good for now.

Collaborator Author

The tree view is now refreshed based on the 'Pod is Running' text in the terminal view, so we can get the updated pods.

src/explorer.ts Outdated
description: `${contextElement.kind.substring(0, 1).toLocaleUpperCase()}${contextElement.kind.substring(1)}`,
collapsibleState: TreeItemCollapsibleState.None,
iconPath: new ThemeIcon('layers-active'),
tooltip: `${contextElement.status.phase} (1/1)\n${contextElement.status.podIP}`,
Collaborator

I don't think it's worth listing (1/1) if this value is hard coded. For example, what if pod contains a sidecar container, so there should be two containers listed?

Also, you filter the list down to pods that are in the "Running" phase, so the first part will always be "Running".

Contributor

I also don't get the meaning of the hard-coded (1/1) string in the pod's tooltip.

Maybe this is a place that will later change to something like (pod #x of y), but if not, I don't see any point in hard-coding it here.

Collaborator Author

There was no way to get the number of pods in the running state, so I removed the hard-coded (1/1) value.
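
If a ready/total count is ever wanted later, it could be derived from the standard Kubernetes pod status rather than hard-coded. A rough sketch, assuming the pod objects expose status.containerStatuses as returned by the Kubernetes API (the helper name and the minimal types are illustrative):

// Minimal shapes for this sketch; the extension's real DeploymentPodObject type may differ.
interface PodContainerStatus { ready: boolean }
interface PodStatusLike { phase?: string; containerStatuses?: PodContainerStatus[] }

// Returns e.g. "1/2" when one of two containers (say, app plus a sidecar) is ready.
// containerStatuses may be absent while a pod is still being scheduled.
function containerReadiness(status: PodStatusLike): string {
    const statuses = status.containerStatuses ?? [];
    const ready = statuses.filter((s) => s.ready).length;
    return `${ready}/${statuses.length}`;
}

The tooltip could then combine the phase with containerReadiness(pod.status) instead of a hard-coded (1/1).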

src/explorer.ts Outdated
} else if ('kind' in element && element.kind === 'Deployment') {
const pods = await Oc.Instance.getKubernetesObjects('pods');
const filteredPods: DeploymentPodObject[] = pods.filter((pod) => pod.metadata.name.indexOf(element.metadata.name) !== -1 && pod);
return filteredPods.filter((pod) => pod.status.phase === 'Running');
Collaborator

May I ask what's the reason for only listing pods that are running? I think it could be helpful to show pods that are starting or stopping.

Contributor

May I ask what's the reason for only listing pods that are running? I think it could be helpful to show pods that are starting or stopping.

+1

Collaborator Author

Now listing all the pods from the cluster.
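
As a sketch of how pods outside the Running phase could stay readable in the tree once every phase is listed (the icon, label layout, and minimal pod type here are assumptions, not the PR's exact rendering):

import { ThemeIcon, TreeItem, TreeItemCollapsibleState } from 'vscode';

// Minimal pod shape for this sketch; the real DeploymentPodObject lives in the extension.
type PodInfo = {
    metadata: { name: string };
    status?: { phase?: string; podIP?: string };
};

// Surfacing the phase on the tree item keeps starting or stopping pods distinguishable.
function podTreeItem(pod: PodInfo): TreeItem {
    const item = new TreeItem(pod.metadata.name, TreeItemCollapsibleState.None);
    item.description = pod.status?.phase ?? 'Unknown'; // e.g. Pending, Running, Succeeded
    item.iconPath = new ThemeIcon('layers-active');
    item.tooltip = `${item.description}\n${pod.status?.podIP ?? ''}`;
    return item;
}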

Contributor
@vrubezhny left a comment

Agree with David's notes; also see some questions inline.

if (!component) {
return;
}
let count = '1';
Contributor

Can we find out somehow and use the current scale number as an initial value?

Collaborator Author

Taking the number of running pods as the initial value.
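
A minimal sketch of that approach, reusing the name-based filtering from the snippets above and meant to replace the hard-coded let count = '1' (the real implementation could instead read spec.replicas from the Deployment itself):

// Sketch: seed the input box with the number of pods currently backing the deployment,
// falling back to 1 when no running pod is found yet.
const pods = await Oc.Instance.getKubernetesObjects('pods');
const deploymentPods: DeploymentPodObject[] = pods.filter(
    (pod) => pod.metadata.name.indexOf(component.metadata.name) !== -1,
);
const runningPods = deploymentPods.filter((pod) => pod.status.phase === 'Running');
let count = String(Math.max(runningPods.length, 1));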

Signed-off-by: msivasubramaniaan <msivasub@redhat.com>
@msivasubramaniaan
Collaborator Author

@datho7561 and @vrubezhny, I have addressed all the comments you mentioned. Please have a look at the latest code.

@vrubezhny
Contributor

When I'm scaling to 0, I see all the running pods are shutting down (and disappearing from the App explorer tree) with the following lines output to the OS Terminal:

Cleaning resources, please wait
 ✗  Finished executing the application (command: debug) [43s]
 ✗  watch command was unable to push component: fail starting the port forwarding: multiple Pods exist for the selector: component=nodejs-starter-1. Only one must be present

Is this normal? I mean the second error: watch command was unable to push component:...

@vrubezhny
Contributor

vrubezhny commented Feb 26, 2024

After I scaled to 2, clicking on a Deployment or on one of the running Pods is supposed to open the Deployment YAML... But after a while it stops opening the Deployment YAML when clicking on a Pod, and suddenly Dev stops for the Component...

Not sure if this is related to scaling or if it is a different issue:


Screencast from 2024-02-26 18-37-24.webm
This time the error is Shutting down the API server...:

 ✗  Finished executing the application (command: debug) [43s]
 ✗  watch command was unable to push component: fail starting the port forwarding: multiple Pods exist for the selector: component=nodejs-starter-1. Only one must be present
I0226 18:38:39.986832  259197 starterserver.go:147] Shutting down the API server: context canceled

@msivasubramaniaan
Collaborator Author

We don't have control over getting the terminating state of a pod; that's why I have shown only running pods. If you check on the cluster, the mentioned pod was terminated and there was no config for it, so it fails.

src/deployment.ts Outdated (resolved)
src/openshift/component.ts Outdated (resolved)
Signed-off-by: msivasubramaniaan <msivasub@redhat.com>
Collaborator
@datho7561 left a comment

Works like a charm! Thanks, Muthu!

@datho7561
Collaborator

Do you want to merge this for 1.12.0 or wait until after we release? Personally, I think it should be good to include with 1.12.0.

@msivasubramaniaan
Collaborator Author

Do you want to merge this for 1.12.0 or wait until after we release? Personally, I think it should be good to include with 1.12.0.

I feel it's good to go with 1.12.0 :)

Contributor
@vrubezhny left a comment

Looks good to me.
The scaling actually happens: pods are created and deleted according to the requested scale value, and only positive integers are accepted.

@msivasubramaniaan merged commit 9c8d8ab into redhat-developer:main Feb 27, 2024
4 checks passed
@msivasubramaniaan deleted the 3822-context-menu-item-to-scale-deployments branch February 27, 2024 14:10