libvirt: bump memory of machines to 16G in the provider config#5230

Closed
Prashanth684 wants to merge 1 commit into openshift:master from Prashanth684:libvirt-memory

Conversation

@Prashanth684
Contributor

While the terraform default for the libvirt master memory size was set to 16G in PR #5069, the size was still fixed at 8G because it was being overwritten by the value set in the provider. Change the provider value to 16G to address this issue.

Note: This will also set the worker memory size to 16G, which should be fine, but it can also be overridden by changing the machineset manifest.

@Prashanth684
Contributor Author

cc @praveenkumar @cfergeau

@openshift-ci
Contributor

openshift-ci Bot commented Sep 20, 2021

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
To complete the pull request process, please assign cfergeau after the PR has been reviewed.
You can assign the PR to them by writing /assign @cfergeau in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@Prashanth684
Contributor Author

@praveenkumar this will also set the worker memory to 16G. Is that OK, or do we want to keep it at 8G?

@Prashanth684
Contributor Author

/retest

1 similar comment
@praveenkumar
Contributor

/retest

      Kind: "LibvirtMachineProviderConfig",
  },
- DomainMemory: 8192,
+ DomainMemory: 16384,
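The one-line diff above can be sketched as follows. This is a minimal illustration, not the real openshift/installer code: the field and type names (LibvirtMachineProviderConfig, DomainMemory) mirror the diff, but the actual struct has many more fields.

```go
package main

import "fmt"

// Minimal sketch of the libvirt provider config discussed in this PR.
// Only the fields visible in the diff are reproduced here.
type LibvirtMachineProviderConfig struct {
	Kind         string
	DomainMemory int // domain memory in MiB
}

// newDefaultProviderConfig returns the provider config with the bumped
// memory value: 16384 MiB (16G) instead of the old 8192 MiB (8G).
func newDefaultProviderConfig() LibvirtMachineProviderConfig {
	return LibvirtMachineProviderConfig{
		Kind:         "LibvirtMachineProviderConfig",
		DomainMemory: 16384,
	}
}

func main() {
	cfg := newDefaultProviderConfig()
	fmt.Println(cfg.DomainMemory)
}
```

Because the provider value overwrites the terraform default, bumping it here (rather than only in terraform) is what actually changes the machine size.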
Contributor


@Prashanth684 we are using n1-standard-16 machines (https://cloud.google.com/compute/docs/general-purpose-machines#n1_machines), and if we increase the resources that much we will not be able to accommodate the cluster on that machine (I think the standard e2e-libvirt job runs in 3-master, 3-worker mode).

Contributor Author


Hmm, OK, yeah, n1-standard-16 has only 60G. So should we reduce the worker nodes to 2, keep them at 8192, and increase the master memory to 12G then?
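The arithmetic behind this comment can be made explicit with a quick budget check. This is a back-of-the-envelope sketch: the 60G host figure and the per-node sizes come from the conversation, and it treats the host capacity as 60 GiB while ignoring host OS and hypervisor overhead.

```go
package main

import "fmt"

// total returns the aggregate guest memory in MiB for a cluster with the
// given node counts and per-node sizes.
func total(masters, workers, masterMiB, workerMiB int) int {
	return masters*masterMiB + workers*workerMiB
}

func main() {
	const hostMiB = 60 * 1024 // n1-standard-16: roughly 60G of RAM

	// Proposed in this PR: 3 masters + 3 workers, all at 16G.
	proposed := total(3, 3, 16384, 16384)
	// Fallback floated above: 3 masters at 12G + 2 workers at 8G.
	fallback := total(3, 2, 12288, 8192)

	fmt.Println(proposed, proposed <= hostMiB) // 98304 MiB: does not fit
	fmt.Println(fallback, fallback <= hostMiB) // 53248 MiB: fits
}
```

So the all-16G layout needs 96G of guest memory against a 60G host, while the 3x12G + 2x8G layout needs 52G and leaves headroom, which is why the thread pivots to either shrinking the cluster or moving CI to a larger GCP instance type.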

Contributor


@Prashanth684 I will take a look tomorrow at how it all links together and update you here. First we need to manually create a GCP instance (same resources), try to start the cluster, and see if it is able to execute the tests as expected.

Contributor


Did you get a chance to take a look at this?

Contributor Author


I did kick off a libvirt e2e and, as I am monitoring it, I see that Praveen is right: the standard machines have 64G of memory, which is not sufficient for this change. But even otherwise, I think we need to look into bumping the GCP instance type to something beefier, because the libvirt master defaults need to be 16G anyway for consistent CI runs.

Contributor

@barbacbd barbacbd May 3, 2022


@Prashanth684 the gcp request you mentioned should be accomplished in #5841

@Prashanth684
Contributor Author

/retest

@Prashanth684
Contributor Author

/test e2e-libvirt

@openshift-ci
Contributor

openshift-ci Bot commented Jun 29, 2022

@Prashanth684: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/okd-images 1b613f7 link true /test okd-images
ci/prow/okd-verify-codegen 1b613f7 link true /test okd-verify-codegen
ci/prow/verify-vendor 1b613f7 link true /test verify-vendor
ci/prow/unit 1b613f7 link true /test unit
ci/prow/golint 1b613f7 link true /test golint
ci/prow/gofmt 1b613f7 link true /test gofmt
ci/prow/e2e-aws-workers-rhel8 1b613f7 link false /test e2e-aws-workers-rhel8
ci/prow/verify-codegen 1b613f7 link true /test verify-codegen
ci/prow/govet 1b613f7 link true /test govet
ci/prow/e2e-crc 1b613f7 link false /test e2e-crc
ci/prow/images 1b613f7 link true /test images
ci/prow/e2e-aws-single-node 1b613f7 link false /test e2e-aws-single-node
ci/prow/e2e-alibaba 1b613f7 link true /test e2e-alibaba
ci/prow/e2e-gcp-upgrade 1b613f7 link true /test e2e-gcp-upgrade
ci/prow/e2e-aws-upgrade 1b613f7 link true /test e2e-aws-upgrade
ci/prow/e2e-aws-upi 1b613f7 link true /test e2e-aws-upi
ci/prow/e2e-azure-upi 1b613f7 link true /test e2e-azure-upi
ci/prow/e2e-gcp-upi 1b613f7 link true /test e2e-gcp-upi
ci/prow/e2e-aws 1b613f7 link true /test e2e-aws
ci/prow/e2e-azure 1b613f7 link true /test e2e-azure
ci/prow/e2e-gcp 1b613f7 link true /test e2e-gcp
ci/prow/e2e-vsphere 1b613f7 link true /test e2e-vsphere

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@sadasu
Contributor

sadasu commented Jul 27, 2022

@Prashanth684 It appears that we are not pursuing this change anymore. If that is the case, could you please close this PR?

