
Conversation


@kvaps kvaps commented Oct 7, 2024

Add a MachineHealthCheck resource to continuously check Machine state.
If a Machine is not ready, it will be recreated 60 seconds after becoming unavailable (30 sec for the kubelet to stop posting status + 30 sec MachineHealthCheck timeout).
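Based on the snippet quoted later in the review comments, the added Cluster API resource looks roughly like this (the metadata names and selector label value are illustrative placeholders, not taken from the chart):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: example-md-0            # illustrative name
spec:
  clusterName: example          # illustrative cluster name
  selector:
    matchLabels:
      cluster.x-k8s.io/deployment-name: example-md-0
  unhealthyConditions:
    # Recreate a Machine whose Node has reported Ready=Unknown for 30s;
    # combined with the ~30s it takes the kubelet's posted status to go
    # stale, this gives the 60-second window described above.
    - type: Ready
      status: Unknown
      timeout: 30s
```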

Fixes #365

Signed-off-by: Andrei Kvapil kvapss@gmail.com

Summary by CodeRabbit

  • New Features

    • Introduced a MachineHealthCheck resource to monitor the health of machine deployments in Kubernetes.
  • Version Updates

    • Updated the Kubernetes chart version from 0.11.1 to 0.12.0.
    • Several package versions have been updated to their latest revisions, with commit hashes recorded in versions_map.

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

coderabbitai bot commented Oct 7, 2024

Walkthrough

This pull request includes updates to the Kubernetes application configuration files. The Chart.yaml file has been modified to increment the version from 0.11.1 to 0.12.0. A new MachineHealthCheck resource has been added to the cluster.yaml template, enhancing the monitoring capabilities of machine deployments. Additionally, the versions_map has been updated to reflect the latest package versions, including the addition of commit hashes for several packages.

Changes

| File Path | Change Summary |
| --- | --- |
| packages/apps/kubernetes/Chart.yaml | Version updated from 0.11.1 to 0.12.0; appVersion remains 1.30.1. |
| packages/apps/kubernetes/templates/cluster.yaml | New MachineHealthCheck resource added with apiVersion: cluster.x-k8s.io/v1beta1. |
| packages/apps/versions_map | Multiple package versions updated; kubernetes versions updated to include commit hashes. |

Possibly related PRs

  • use bridge binding for Kubernetes VMs #315: This PR updates the version field in the Chart.yaml file for a Kubernetes application, similar to the version increment from 0.11.1 to 0.12.0 in the main PR's Chart.yaml.
  • Upgrade Flux operator to 0.10 #387: This PR also involves an update to the Chart.yaml file, changing the version from 0.9.0 to 0.10.0, which aligns with the versioning changes seen in the main PR.

Poem

In the land of code where rabbits play,
A version update brightens the day.
With health checks added, machines will thrive,
In Kubernetes fields, our apps come alive! 🐇✨
Hopping along, with each new release,
We celebrate changes that bring us peace!




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (1)
packages/apps/kubernetes/templates/cluster.yaml (1)

213-232: LGTM with suggestions for improvement

The addition of the MachineHealthCheck resource is a good practice for maintaining the health of your Kubernetes cluster. The structure and integration with existing resources look correct. However, I have a few suggestions to enhance its flexibility and clarity:

  1. Consider making the timeouts configurable. Currently, nodeStartupTimeout and the unhealthy condition timeouts are hardcoded. You could make these values customizable through Helm values:

     ```yaml
     nodeStartupTimeout: {{ $group.healthCheck.nodeStartupTimeout | default "10m" }}
     ```

  2. Verify whether creating a MachineHealthCheck for each nodeGroup is intentional. If not, consider moving it outside the nodeGroups loop and adjusting the selector accordingly.

  3. Add comments explaining the purpose of the MachineHealthCheck and how it works. This will help future maintainers understand the resource's role in the cluster's health management.

Would you like me to provide an example of how to implement these suggestions?
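A minimal sketch of such a configurable variant, assuming a hypothetical per-node-group `healthCheck` block in values.yaml (the block and its key names are illustrative assumptions, not part of the chart):

```yaml
{{- range $name, $group := .Values.nodeGroups }}
{{- $hc := $group.healthCheck | default dict }}
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: {{ $.Release.Name }}-{{ $name }}
spec:
  clusterName: {{ $.Release.Name }}
  # Fall back to the current hardcoded behavior when no override is set.
  nodeStartupTimeout: {{ $hc.nodeStartupTimeout | default "10m" }}
  selector:
    matchLabels:
      cluster.x-k8s.io/deployment-name: {{ $.Release.Name }}-{{ $name }}
  unhealthyConditions:
    - type: Ready
      status: Unknown
      timeout: {{ $hc.unhealthyTimeout | default "30s" }}
{{- end }}
```

The `| default dict` guard keeps the template from failing when a node group omits the `healthCheck` block entirely.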

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR, between commits 111e9a5 and 5bffdbc.

📒 Files selected for processing (3)
  • packages/apps/kubernetes/Chart.yaml (1 hunks)
  • packages/apps/kubernetes/templates/cluster.yaml (1 hunks)
  • packages/apps/versions_map (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • packages/apps/kubernetes/Chart.yaml
🧰 Additional context used
🔇 Additional comments (4)
packages/apps/versions_map (2)

Line range hint 38-85: Consider the implications of updating multiple packages to HEAD.

Several packages (mysql, nats, rabbitmq, redis, tcp-balancer, tenant, virtual-machine, and vpn) have been updated to HEAD. While this brings in the latest features and fixes, it also introduces potential instability.

To ensure system stability, please run comprehensive tests. Consider the following script to check for recent commits in these repositories:

```bash
#!/bin/bash
# Description: Check for recent commits in the updated repositories

repos=("mysql" "nats" "rabbitmq" "redis" "tcp-balancer" "tenant" "virtual-machine" "vpn")

for repo in "${repos[@]}"; do
    echo "Checking $repo repository:"
    # NOTE: adjust the owner part of "$repo/$repo" to the actual GitHub org
    gh repo clone "$repo/$repo" "/tmp/$repo" 2>/dev/null
    # Run in a subshell so the working directory is restored each iteration
    (
        cd "/tmp/$repo" || exit 1
        git fetch --all --quiet
        echo "Latest commit:"
        git log -1 --pretty=format:"%h - %s (%cr)" origin/main
    )
    echo
done
```

This script will help verify that the HEAD versions are stable and suitable for your use case.

Consider implementing a strategy for managing package versions:

  1. Use specific version tags for production environments to ensure stability.
  2. Regularly update and test HEAD versions in a staging environment before promoting to production.
  3. Document the process for updating and testing package versions to maintain consistency across the team.

36-37: LGTM! Verify the commit hash for version 0.11.1.

The addition of new kubernetes versions (0.11.1 and 0.12.0) is consistent with the PR objective. The use of a specific commit hash for 0.11.1 and marking 0.12.0 as HEAD follows good versioning practices.

Please run the following script to verify the commit hash for version 0.11.1:

packages/apps/kubernetes/templates/cluster.yaml (2)

Line range hint 1-232: Overall assessment: LGTM with minor suggestions

The addition of the MachineHealthCheck resource to this Helm chart is a positive enhancement for maintaining the health of the Kubernetes cluster. The new resource is well-integrated with the existing MachineDeployment and aligns with the Cluster API specifications.

Key points:

  1. The MachineHealthCheck resource is correctly structured and placed within the nodeGroups loop.
  2. It uses appropriate selectors to match the corresponding MachineDeployment.
  3. The unhealthy conditions are defined to check for Ready status.

Suggestions for improvement:

  1. Make timeout values configurable through Helm values for greater flexibility.
  2. Add comments to explain the purpose and functionality of the MachineHealthCheck resource.
  3. Verify that creating a MachineHealthCheck for each nodeGroup is intentional.

Overall, this change enhances the robustness of the cluster by automating health checks on nodes. With the suggested improvements, it will be more flexible and easier to maintain in the future.


213-232: Verify linkage between MachineHealthCheck and MachineDeployment

The MachineHealthCheck resource seems to be correctly aligned with the MachineDeployment resource, as they share the same cluster name and deployment name in their selectors. However, to ensure proper functionality, please verify that:

  1. The labels in the MachineDeployment's template match the selector in the MachineHealthCheck.
  2. The MachineHealthCheck is created in the same namespace as the MachineDeployment.

To confirm this, you can run the following script:

This script will help ensure that the MachineHealthCheck is properly linked to the MachineDeployment.
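The referenced script was not captured here, but for illustration the linkage being verified looks like this (all names are placeholders): the MachineHealthCheck selector must match a label that Cluster API stamps onto the Machines created by the MachineDeployment, and both resources must share a namespace:

```yaml
# MachineDeployment (excerpt). Cluster API labels each Machine it creates
# with cluster.x-k8s.io/deployment-name: <deployment name>.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: example-md-0
  namespace: default
---
# MachineHealthCheck (excerpt): same namespace, selector keyed on the
# deployment-name label so it only watches that deployment's Machines.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: example-md-0
  namespace: default
spec:
  clusterName: example
  selector:
    matchLabels:
      cluster.x-k8s.io/deployment-name: example-md-0
```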

@kvaps kvaps merged commit 31a45c4 into main Oct 7, 2024
@kvaps kvaps deleted the vmhealth branch October 7, 2024 12:53
@themoriarti themoriarti left a comment

Overall, this looks like a good start toward stability, but it would be good to make it configurable:

```yaml
unhealthyConditions:
  - type: Ready
    status: Unknown
    timeout: 30s
```
Member

it would be good to be able to configure these timeouts via values.yaml
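A hypothetical values.yaml shape for this (the `healthCheck` block and its key names are assumptions for illustration, not part of the chart):

```yaml
nodeGroups:
  md0:
    healthCheck:
      nodeStartupTimeout: 10m   # how long to wait for a new Node to appear
      unhealthyTimeout: 30s     # how long Ready may be Unknown before remediation
```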

Member Author

Thank you for your review. I think for now it makes no sense, since we're running in a controlled environment.

Member

The impact of this setting being too low is really severe: my MachineDeployment went into a tailspin and only recovered when I raised this value much higher.

kvaps added a commit that referenced this pull request Oct 9, 2024
Add `MachineHealthCheck` resource to continuously check Machine state.
If a Machine is not ready, it will be recreated 60 seconds after becoming
unavailable (30 sec for the kubelet to stop posting status + 30 sec
MachineHealthCheck timeout).

@coderabbitai coderabbitai bot mentioned this pull request Oct 10, 2024
chumkaska pushed a commit to chumkaska/cozystack that referenced this pull request Oct 15, 2024
Add `MachineHealthCheck` resource to continuously check Machine state.


Development

Successfully merging this pull request may close these issues.

Kubernetes workers can't rejoin after reboot

3 participants