Add doc how to check harvester status #900
Conversation
Adding documentation on how to check the status of the Harvester components to the troubleshooting/installation section. The documentation is added for 1.7/1.6/1.5/1.4.

Signed-off-by: Martin Dekov <martin.dekov@suse.com>
Force-pushed from 990298c to f932be5
jillian-maroket left a comment
Review done
docs/troubleshooting/installation.md (Outdated)
```markdown
## Check status of harvester components

Before checking the status of the harvester components, acquire the kubeconfig following preferrably [the second step in the FAQ](../faq.md#how-can-i-access-the-kubeconfig-file-of-the-harvester-cluster).
```
Suggested change:

````diff
- ## Check status of harvester components
- Before checking the status of the harvester components, acquire the kubeconfig following preferrably [the second step in the FAQ](../faq.md#how-can-i-access-the-kubeconfig-file-of-the-harvester-cluster).
+ ## Check the status of Harvester components
+ Before checking the status of Harvester components, obtain a copy of the Harvester cluster's kubeconfig file using either of the following methods:
+ - On the Harvester UI, go to the **Harvester Support** screen and then click **Download KubeConfig**.
+ - Run the following commands on any of the management nodes:
+   ```shell
+   $ sudo su
+   $ cat /etc/rancher/rke2/rke2.yaml
+   ```
````
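For readers following along, a quick way to verify the copied kubeconfig from a workstation. This is a hedged sketch, not part of the suggestion: the file names and the `<vip>` placeholder are assumptions, and the server rewrite reflects the fact that RKE2 kubeconfigs point at 127.0.0.1.

```shell
# Assumes rke2.yaml was copied to the current directory by either method above.
# The file targets https://127.0.0.1:6443, so point the server at the cluster
# VIP (<vip> is a placeholder) before using it remotely.
$ sed 's/127\.0\.0\.1/<vip>/' rke2.yaml > harvester.yaml
$ export KUBECONFIG=$PWD/harvester.yaml
$ kubectl get nodes
```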
The FAQ content will likely be restructured or moved to other sections, so we avoid linking to that page.
Jillian, since I reverted the reference, I unresolved this conversation. Please resolve it once you double-check the change.
Addressing feedback from Jillian:

* Replace subtitles with dashes
* Fix indentation of nodes
* Remove references to the FAQ section
* Apply the suggested wording

Signed-off-by: Martin Dekov <martin.dekov@suse.com>
Thanks Jillian, addressed your comments!
w13915984028 left a comment
A few points to double check, thanks.
Addressing feedback from Jian and Ivan, including the following:

* kubeconfig command merged into a one-line command
* multiple kubectl commands replaced with a single bash script for readiness
* the VIP note now elaborates which of the values in the referenced link is the actual VIP

Signed-off-by: Martin Dekov <martin.dekov@suse.com>
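For readers of this thread, a minimal sketch of what such a readiness script could look like; the namespace list and timeouts are assumptions, not the exact script added by this PR:

```shell
#!/usr/bin/env bash
# Illustrative readiness check only; not the exact script from this PR.
# Assumes KUBECONFIG already points at the Harvester cluster.
set -euo pipefail

# Wait until every node reports Ready.
kubectl wait --for=condition=Ready node --all --timeout=10m

# Wait for deployments in the namespaces Harvester components typically run in
# (the namespace list here is an assumption).
for ns in harvester-system longhorn-system cattle-system; do
  kubectl -n "$ns" wait --for=condition=Available deployment --all --timeout=10m
done
echo "Harvester components are ready."
```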
docs/troubleshooting/installation.md (Outdated)
```shell
$ sudo -i cat /etc/rancher/rke2/rke2.yaml
```
I think the step to SSH to the node is not strictly required. A user can fetch the file from anywhere on the network with something like `rsync --rsync-path="sudo rsync" rancher@<vip>:/etc/rancher/rke2/rke2.yaml ./` (once `<vip>` is reachable). With a Terraform shell provisioner, this can be used together with the readiness script below.
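Spelled out as a runnable sketch of this approach (SSH access as the `rancher` user is an assumption):

```shell
# Fetch the kubeconfig over the network without an interactive SSH session;
# --rsync-path runs rsync under sudo on the remote side so the root-owned
# file can be read. <vip> is a placeholder for the cluster's virtual IP.
$ rsync --rsync-path="sudo rsync" rancher@<vip>:/etc/rancher/rke2/rke2.yaml ./
```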
Thanks for the input, Ivan. Looking at what you wrote, I am starting to think we should just go back to referencing the section on how to get the kubeconfig. We can go around and around on exactly how kubeconfig acquisition happens, but we already have a place for it, and the "proper explanation" should live there and be referenced. Also, Jian pointed out some additional explanations I need to add; otherwise this will become a guide on how to get the kubeconfig as well as a guide on checking cluster status, which takes away from the main point, the latter.
@jillian-maroket we initially removed the reference, but the "how to get the kubeconfig" explanation is becoming more complex. I will restore the reference, and once we remove/move/update the kubeconfig section we can update all references to point to the new place.
I will leave this discussion open but will revert to the kubeconfig section reference. You can resolve the discussion if this explanation is fine when you get back online.
SGTM - Personally, I'd expect the step to fetch the kubeconfig to also be scripted so it works in an automated workflow; hence my earlier comment that the whole bash script should be something users can just copy and paste into their Terraform shell provisioner.
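Combining the two sketches above into one copy-paste-able script for such an automated workflow (everything here, from the `<vip>` placeholder to the timeout, is an assumption rather than the script shipped in this PR):

```shell
#!/usr/bin/env bash
# Hypothetical end-to-end script for e.g. a Terraform shell provisioner:
# fetch the kubeconfig, point it at the VIP, then gate on node readiness.
set -euo pipefail

VIP="<vip>"  # placeholder: the cluster's virtual IP
rsync --rsync-path="sudo rsync" "rancher@${VIP}:/etc/rancher/rke2/rke2.yaml" .
sed "s/127\.0\.0\.1/${VIP}/" rke2.yaml > kubeconfig.yaml
export KUBECONFIG="$PWD/kubeconfig.yaml"
kubectl wait --for=condition=Ready node --all --timeout=10m
```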
Revert references to point to the FAQ section on how to get the kubeconfig of Harvester. Discussing how to explain it made me think that the new section should focus on how to check the cluster state, not how to get the kubeconfig, as we already have such a section. If we change the section's place or contents, we only need to fix the reference, which is easier than finding everywhere we explained how the kubeconfig can be obtained and fixing each explanation.

Signed-off-by: Martin Dekov <martin.dekov@suse.com>
w13915984028 left a comment
LGTM, thanks.
ihcsim left a comment
Thanks for working on this.
Co-authored-by: Daria Vladykina <daria.vladykina@suse.com> Signed-off-by: Martin Dekov <martin.dekov@suse.com>
Thanks for the review @dariavladykina, can you please approve if those were all the suggestions? BTW, I committed your suggestions directly and will squash.
Since the suggestions were only missed commas, with no content changes, I will merge with Daria's approval. Thank you all for the reviews and suggestions!
Thanks @martindekov
Adding documentation on how to check the status of the Harvester components to the troubleshooting/installation section.
The documentation is added for 1.7/1.6/1.5/1.4.
Problem:
The documentation does not mention how to check the status of Harvester components as part of the troubleshooting section.
Solution:
Describe how to check the status of the Harvester components.
Related Issue(s):
harvester/harvester#9327
Test plan:
Ran the change locally to see how it looks:

Additional documentation or context
N/A