automatically mount cgroup2 at kmesh startup #229

Open
wants to merge 3 commits into main
Conversation

LiZhenCheng9527
Contributor

What type of PR is this?
/kind enhancement

What this PR does / why we need it:
automatically mount cgroup2 at kmesh startup

Signed-off-by: LiZhenCheng9527 <lizhencheng6@huawei.com>
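
For illustration, here is a minimal, self-contained sketch of what a startup cgroup2 mount can look like in Go. The field name `Cgroup2Path` and the `syscall.Mount` call are taken from the diff shown below; the `Config` type, package name, and logging are assumptions, not the PR's actual code.

```go
package daemon

import (
	"log"
	"os"
	"syscall"
)

// Config stands in for the daemon configuration referenced in the diff;
// only the field used in this sketch is shown.
type Config struct {
	Cgroup2Path string
}

// mountCgroup2 sketches the idea of the PR: make sure the mount point
// exists, then mount the cgroup2 filesystem onto it.
func mountCgroup2(cfg *Config) error {
	// Create the mount point if it does not exist yet.
	if err := os.MkdirAll(cfg.Cgroup2Path, 0755); err != nil {
		return err
	}
	// "none" is the conventional source for virtual filesystems.
	if err := syscall.Mount("none", cfg.Cgroup2Path, "cgroup2", 0, ""); err != nil {
		log.Printf("failed to mount %s: %v", cfg.Cgroup2Path, err)
		return err
	}
	return nil
}
```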
@kmesh-bot added the kind/enhancement (New feature or request) label on Apr 16, 2024
@kmesh-bot
Collaborator

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign hzxuzhonghu for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@@ -89,6 +89,10 @@ func Start() error {
		return err
	}

	if err = mountCgroup2(&config); err != nil {
		return err
Member

AFAIK, cgroup v2 is only available in Linux kernel v5.8 or above.

Is mounting cgroup v2 a necessary step for using Kmesh? If not, we could just print a warning message instead of returning an error directly; otherwise Kmesh could not run on older versions of the kernel.

I guess v5.8 is a pretty high version for production environments.
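
A sketch of the alternative the reviewer describes, reusing the `Config` type and `mountCgroup2` helper from the earlier sketch (the wrapper name and logging are hypothetical, not the PR's code):

```go
// startupMountCgroup2 illustrates the suggestion: log a warning and
// keep going instead of failing Start() when the mount is not possible,
// e.g. on kernels without the required cgroup v2 support.
func startupMountCgroup2(cfg *Config) {
	if err := mountCgroup2(cfg); err != nil {
		log.Printf("warning: failed to mount cgroup2, continuing without it: %v", err)
	}
}
```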

Contributor Author

https://github.com/kmesh-net/kmesh/blob/main/build/docker/README.md#start_kmeshsh

It states that cgroupv2 has to be mounted in order to start Kmesh.
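
Related to that requirement, one way to make the startup mount idempotent (not something this PR does) is to check whether the target path is already backed by cgroup2 before mounting, for example via `statfs` and the cgroup2 magic number from `golang.org/x/sys/unix`:

```go
package daemon

import "golang.org/x/sys/unix"

// isCgroup2Mounted reports whether path is already backed by a cgroup2
// filesystem. Illustrative helper only; the name is hypothetical.
func isCgroup2Mounted(path string) (bool, error) {
	var st unix.Statfs_t
	if err := unix.Statfs(path, &st); err != nil {
		return false, err
	}
	return st.Type == unix.CGROUP2_SUPER_MAGIC, nil
}
```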

@@ -142,3 +146,17 @@ func Stop() {
		}
	}
}

func mountCgroup2(cfg *Config) error {
Member

I think you need to do some cleanup during pod termination.

Otherwise this will fail.

Take a look at kmesh-start-pre.sh.
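
A sketch of the kind of cleanup being asked for, again reusing the names from the earlier sketch (the function name is hypothetical):

```go
// unmountCgroup2 undoes what mountCgroup2 did, so that a restarted
// pod can mount again without hitting a leftover mount or directory.
func unmountCgroup2(cfg *Config) {
	if err := syscall.Unmount(cfg.Cgroup2Path, 0); err != nil {
		log.Printf("failed to unmount %s: %v", cfg.Cgroup2Path, err)
	}
	if err := os.RemoveAll(cfg.Cgroup2Path); err != nil {
		log.Printf("failed to remove %s: %v", cfg.Cgroup2Path, err)
	}
}
```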

@hzxuzhonghu (Member) left a comment

/assign @bitcoffeeiux

@kmesh-bot
Collaborator

@hzxuzhonghu: GitHub didn't allow me to assign the following users: bitcoffeeiux.

Note that only kmesh-net members with read permissions, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time.
For more information please see the contributor guide

In response to this:

/assign @bitcoffeeiux

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Signed-off-by: LiZhenCheng9527 <lizhencheng6@huawei.com>
@kmesh-bot added the size/M label and removed the size/S label on Apr 16, 2024

	if err := syscall.Mount("none", cfg.Cgroup2Path, "cgroup2", 0, ""); err != nil {
		log.Errorf("failed to mount %s: %v", cfg.Cgroup2Path, err)
		return err
Contributor

Need to clean up resources before returning err.
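
A sketch of the cleanup the comment asks for, on the error path of the quoted snippet; it assumes the mount point directory was created earlier in `mountCgroup2` and uses the same `log.Errorf` style as the diff:

```go
	if err := syscall.Mount("none", cfg.Cgroup2Path, "cgroup2", 0, ""); err != nil {
		log.Errorf("failed to mount %s: %v", cfg.Cgroup2Path, err)
		// Clean up the directory created for the mount point so a
		// retry does not find a stale, empty path left behind.
		if rmErr := os.RemoveAll(cfg.Cgroup2Path); rmErr != nil {
			log.Errorf("failed to clean up %s: %v", cfg.Cgroup2Path, rmErr)
		}
		return err
	}
```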

Signed-off-by: LiZhenCheng9527 <lizhencheng6@huawei.com>
Labels: kind/enhancement (New feature or request), size/M