GIT-60: enable helper function for conditional waits #73
Conversation
Welcome @harshanarayana!
Hi @harshanarayana. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Force-pushed from ee28ba1 to b79a789
@harshanarayana This is a great start and captures most of everything I was thinking about. I left some comments because I think this can be refined further.
Design consideration
The main thing that I would like to see is to be able to invoke the For function from the wait package without the need to create a wait.Interface instance. One way that could be done is to create a new subpackage wait/cond where all pre-conditions are defined. The cond package would expose a Condition value with all the pre-conditions defined:
// package wait/cond
type Condition struct {
    res *resources.Resources
}

func New(res *resources.Resources) *Condition
func (c *Condition) ResourceScaled(...)
func (c *Condition) JobMatched(...)
// ...
Then, using the wait package could look like this:
import ".../.../wait"
import ".../.../wait/cond"
func DoK8sStuff() {
    pod := createPod(...)
    // res is a *resources.Resources value created beforehand
    wait.For(cond.New(res).PodRunning(&pod))
}
Hope this makes sense.
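To make the proposed shape concrete, here is a minimal sketch of how such a wait/cond package could be fleshed out. The Condition, New, and PodRunning names come from the proposal above; the import paths and the resources.Resources Get signature are assumptions based on the klient packages in this repository, not code from this PR.

// A hedged sketch of the proposed wait/cond package.
package cond

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    apimachinerywait "k8s.io/apimachinery/pkg/util/wait"

    "sigs.k8s.io/e2e-framework/klient/k8s/resources"
)

// Condition carries the resource client shared by all pre-conditions.
type Condition struct {
    res *resources.Resources
}

// New wraps a resource client so that its pre-conditions can be handed
// directly to wait.For without building a wait.Interface first.
func New(res *resources.Resources) *Condition {
    return &Condition{res: res}
}

// PodRunning reports true once the pod reaches the Running phase.
// The res.Get call signature here is an assumption for illustration.
func (c *Condition) PodRunning(pod *corev1.Pod) apimachinerywait.ConditionFunc {
    return func() (bool, error) {
        if err := c.res.Get(context.TODO(), pod.Name, pod.Namespace, pod); err != nil {
            return false, err
        }
        return pod.Status.Phase == corev1.PodRunning, nil
    }
}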
Absolutely. I like the notion of adding an inlined resource so that we can use multiple different instances of the …
@vladimirvivien I have addressed all the changes you suggested and refactored it accordingly. PTAL when you get some time. Thanks.
@harshanarayana First, apologies for the late review.
Anyway, this is awesome work and you delivered exactly what the design called for.
I left a few comments (mostly housekeeping and cosmetics). Once done, this should be merged soon!!!!
klient/wait/conditions/conditions.go (Outdated)

func (c *Condition) ResourceScaled(obj k8s.Object, scaleFetcher func(object k8s.Object) int32, replica int32) apimachinerywait.ConditionFunc {
    return func() (done bool, err error) {
        log.Printf("Checking if the resource %s/%s has been scaled to %d", obj.GetNamespace(), obj.GetName(), replica)
These log statements will be a bit verbose when used. I would consider removing them.
Can I make these optional? By default they would be turned off, but users could enable them if required. Would that be OK?
I think outputting logs from the framework code could be an issue because, as a user, when I write tests, I don't expect extra output unless I use something like t.Log (etc).
As far as making it optional, here is what we can do:
- Get this merged (without logs)
- Create a new PR to introduce more robust log capabilities via a level log framework
- Then we can research something that makes sense for the project.
How does that sound?
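One way the opt-in behavior discussed above could look is a setter on Condition that stays silent by default. This is a hypothetical sketch; WithLogger and logf do not exist in this PR and only illustrate the "off by default, enabled on demand" idea:

// Hypothetical opt-in logging for conditions.
type Condition struct {
    res     *resources.Resources
    logFunc func(format string, args ...interface{})
}

// WithLogger enables per-check logging, e.g.
// conditions.New(res).WithLogger(t.Logf).PodRunning(&pod).
func (c *Condition) WithLogger(f func(format string, args ...interface{})) *Condition {
    c.logFunc = f
    return c
}

// logf stays silent unless a logger was explicitly supplied.
func (c *Condition) logf(format string, args ...interface{}) {
    if c.logFunc != nil {
        c.logFunc(format, args...)
    }
}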
No worries at all @vladimirvivien. Thanks again for taking the time to review this. I have addressed the changes you requested. Let me know if you think anything else needs to be modified.
/ok-to-test
@harshanarayana
I still have reservations about how logging is done (it feels a bit one-off). I left comments.
Other than that, we should be ready to merge soon.
@harshanarayana Awesome work.
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: harshanarayana, vladimirvivien

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@harshanarayana Thanks again!!!
@vladimirvivien let me take care of it right away.
This is a much-needed feature; thanks @harshanarayana for the great effort. @vladimirvivien when do we expect a new release so we can use it?
Closes #60