Description
Sealos Version
5.0.1
How to reproduce the bug?
In this environment:
- Sealos 5.0.1 version installed.
- Cluster consists of 5 nodes:
  - Master nodes: 192.168.37.10, 192.168.37.11, 192.168.37.12
  - Worker nodes: 192.168.37.13, 192.168.37.14
- Using default Sealos configurations for cluster setup.
With this config:
- Default configuration with Kubernetes setup and basic network settings.
- No custom modifications made to the Sealos configuration.
Run:
Run sealos reset --nodes 192.168.37.13 to reset only the node 192.168.37.13.
See error:
Instead of resetting just the node 192.168.37.13, the command resets the entire cluster, including the master and other worker nodes. This behavior is not expected, as I only intended to reset the specified node.
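As an aside, the expectation can be illustrated with a minimal sketch of a guard one might use while the bug stands. Everything here is hypothetical and not part of Sealos: the safe_reset wrapper is an assumption, and sealos is stubbed as a shell function so the snippet is self-contained (the real binary would be invoked instead).

```shell
#!/bin/sh
# Stub standing in for the real sealos binary, so this sketch is
# self-contained; it only echoes the command it would have run.
sealos() { echo "would run: sealos $*"; }

# Hypothetical wrapper: refuse to call `sealos reset` unless --nodes
# was given, since (per this report) the command can otherwise reset
# the whole cluster rather than the single named node.
safe_reset() {
    case "$*" in
        *--nodes*) sealos reset "$@" ;;
        *)
            echo "refusing: reset without --nodes would affect the entire cluster" >&2
            return 1
            ;;
    esac
}

safe_reset --nodes 192.168.37.13
# prints: would run: sealos reset --nodes 192.168.37.13
```

Of course, this only guards the caller's side; per this report the scoping has to be enforced inside sealos reset itself.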
What is the expected behavior?
I expect that when running sealos reset --nodes 192.168.37.13, only the node 192.168.37.13 should be reset, leaving the rest of the cluster (master and other worker nodes) intact.
What do you see instead?
Instead of resetting just the specified node 192.168.37.13, the entire cluster gets reset, including the master and other worker nodes, which is not the intended behavior.
Operating environment
- Sealos version: 5.0.1
- Docker version:
- Kubernetes version: 1.28.10
- Operating system: Ubuntu 22.04
- Runtime environment: Physical machine (16G memory, 8 core CPU, 200GB storage)
- Cluster size: 3 master nodes, 2 worker nodes
- Additional information: No additional services such as Istio or Dashboard are enabled.
Additional information
- The issue occurred when running the sealos reset --nodes command: the reset affected the entire cluster instead of the specified node.
- The cluster uses the default Sealos configuration with the Kubernetes setup.
- No custom configurations were applied to Sealos or Kubernetes.