previewctl install-context --watch #10898
Conversation
Hey Aleks, really brilliant work here ✨! I wasn't expecting that we would get deeper into Kubernetes go-client intricacies so soon with `previewctl`, but you found a very good use for it with the watch control loop 🙂. Not only that, but you also introduced KubeVirt's go-client as well.
I believe that the introduction of KubeVirt's lib and k8s informers is great for us. We'll need them for future projects, and we can get used to them while `previewctl` is still simple 😉. With that said, this PR is quite big, and my lack of experience with KubeVirt's lib and informers will block me from giving you a super detailed review 😅. I really want to get better at it though, so future reviews will be better! Maybe @liam-j-bennett's experience with k8s controllers could help us here :)
For now, I focused on understanding the changes made to the `Preview` struct, why they were needed, and made some comments about it.
I understand you had to make changes to some commands to make the Kubernetes clients work as you expected, but I think the abstractions look a bit weird 🤔. Would it make sense to split the Kubernetes clients and Preview into separate objects? Something like:
```go
type Harvester struct {
	kClient       kubernetes.Interface
	dynamicClient dynamic.Interface
	previews      []Preview
	logger        *logrus.Entry
}

type Preview struct {
	branch    string
	name      string
	namespace string
	vm        kubevirtv1.VirtualMachine // Not sure about this one... Maybe we could reuse their library for SSH access? https://github.com/kubevirt/kubevirt/blob/f60c13703d4bc73fe4ebb3c44c6d8379d6ba0577/pkg/virtctl/ssh/ssh.go
}
```
I really don't want to sound like a blocker here, just trying to surface some ideas to get the abstractions into good shape so they can be easily enhanced in future iterations :)
```go
p, err := preview.New(branch, logger)
if err != nil {
	return err
}

err = p.ListAllPreviews()
```
Looks a bit strange that in the command `get-name` you decided not to create a Preview object, but for `list` you decided to do so 🤨
For me at least, `get-name` is correlated to a single preview, and thus creating an object might make sense. `list`, on the other hand, is not correlated to a single preview, so it might make sense not to create an object for this command.
For `ListPreviews` you need to talk to the k8s API, and we initialise the configs in `.New()`. See my comment for the current preview env.
```diff
-Run: func(cmd *cobra.Command, args []string) {
-	p := preview.New(branch, logger)
+RunE: func(cmd *cobra.Command, args []string) error {

-	err := p.SSHPreview()
+	err := preview.SSHPreview(branch)
```
Same comment as for `list`.
See my comments above for Get and List :)
```go
_, err := p.harvesterKubeClient.CoreV1().Namespaces().Get(ctx, p.namespace, metav1.GetOptions{})
if err != nil {
	return errors.Wrap(errSvcNotReady, err.Error())
}

svc, err := p.harvesterKubeClient.CoreV1().Services(p.namespace).Get(ctx, proxySvcName, metav1.GetOptions{})
if err != nil {
	return errors.Wrap(errSvcNotReady, err.Error())
}
```
I really really like the use of wrapping errors here! 🤌
Thanks for the feedback! I agree that it would make more sense to have the k8s components split out - I'll do this in a following PR. I also want to change some things around the informers, as currently it's not strictly correct the way they are being used - but I did it this way as it was quicker, and it kind of grew out a bit more than I intended. Also, I didn't see the `virtctl/ssh` package 👍 I think that will make our lives easier. I'll see how it works and add it with the work of porting the `install-context` :)
started the job as gitpod-build-aa-prev-ctx.22 because the annotations in the pull request description changed
Nicely done!
Let's get this merged and make the improvements you've mentioned in following iterations.
/unhold
Description
Implements the `--watch` flag, using kubeinformers.
Related Issue(s)
Fixes #10387
How to test
Check out a new branch:
Start a new job with preview
Run `previewctl`
Release Notes
Documentation
Werft options: