Update VM state pre migration #127
Conversation
pkg/virt-controller/watch/pod.go
Outdated
queue.AddRateLimited(key)
return true
}
if putVm(vm, restClient, queue, key) {return true}
I would prefer
if err := putVM(); err != nil {
return true
}
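A minimal sketch of the pattern the reviewer prefers, assuming a hypothetical `putVM` that returns an `error` instead of a `bool` (the real helper talks to the API server; `fail` here just simulates an update conflict):

```go
package main

import (
	"errors"
	"fmt"
)

// putVM is a hypothetical stand-in for the PR's helper: it would push
// the updated VM to the API server, and returns any error instead of a
// bool so the caller decides how to react.
func putVM(vm string, fail bool) error {
	if fail {
		return errors.New("conflict updating " + vm)
	}
	return nil
}

// processVM mirrors the suggested shape: check the error inline and
// bail out, signalling the controller loop to re-enqueue.
func processVM(vm string, fail bool) bool {
	if err := putVM(vm, fail); err != nil {
		return true // tell the caller to re-enqueue this key
	}
	return false
}

func main() {
	fmt.Println(processVM("testvm", true))  // update failed: requeue
	fmt.Println(processVM("testvm", false)) // update succeeded
}
```

Scoping the error to the `if` statement keeps the failure handling next to the call and avoids a boolean whose meaning the reader has to look up.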
pkg/virt-controller/watch/pod.go
Outdated
}
logger.V(3).Info().Msg("Enqueuing VM again.")
queue.AddRateLimited(key)
Could you move the queue logic out here back to the controller loop?
So we don't need to re-try on failure for the "Starting migration" case?
We do. But I would prefer if you return the error and do the queue.AddRateLimited in the main controller loop.
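The division of labor being asked for can be sketched as follows. `fakeQueue` is a toy stand-in for client-go's rate-limiting work queue, and `processKey` is a hypothetical worker that only reports the error; all queue interaction lives in the controller loop:

```go
package main

import (
	"errors"
	"fmt"
)

// fakeQueue is a minimal stand-in for client-go's rate-limiting work
// queue, just enough to show where the requeue decision lives.
type fakeQueue struct{ requeued []string }

func (q *fakeQueue) AddRateLimited(key string) {
	q.requeued = append(q.requeued, key)
}

// processKey is a hypothetical worker: it returns the error and never
// touches the queue itself.
func processKey(key string, fail bool) error {
	if fail {
		return errors.New("starting migration failed for " + key)
	}
	return nil
}

// controllerLoop owns all queue interaction: on error it re-enqueues
// the key with rate limiting, otherwise it moves on.
func controllerLoop(q *fakeQueue, key string, fail bool) {
	if err := processKey(key, fail); err != nil {
		q.AddRateLimited(key)
	}
}

func main() {
	q := &fakeQueue{}
	controllerLoop(q, "default/testvm", true)
	controllerLoop(q, "default/othervm", false)
	fmt.Println(q.requeued) // only the failing key is requeued
}
```

Keeping `AddRateLimited` in one place means the retry policy can change without touching every worker path.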
pkg/virt-controller/watch/pod.go
Outdated
} else if vm.Status.Phase == corev1.Running {
vmCopy := corev1.VM{}
model.Copy(&vmCopy, vm)
vmCopy.Status.MigrationNodeName = vm.Status.NodeName
That needs to be vmCopy.Status.MigrationNodeName = pod.Status.NodeName
pkg/virt-controller/watch/pod.go
Outdated
logger.Info().Msgf("VM successfully scheduled to %s.", vmCopy.Status.NodeName)
} else if vm.Status.Phase == corev1.Running {
vmCopy := corev1.VM{}
model.Copy(&vmCopy, vm)
When copying from one object to another object of the same type, you can use the Kubernetes copier api.Scheme.Copy(). It does the same thing but looks nicer ;)
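The point of either copier is the copy-then-mutate pattern: never modify the object that came out of the informer cache. A self-contained sketch of that pattern, using gob round-tripping as a stand-in for the scheme copier (the real code would call `kubeapi.Scheme.Copy(vm)`; the `vm` type here is a stripped-down stand-in for `corev1.VM`):

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

// vmStatus and vm are stripped-down stand-ins for the corev1.VM type
// used in the PR.
type vmStatus struct {
	Phase             string
	NodeName          string
	MigrationNodeName string
}

type vm struct {
	Name   string
	Status vmStatus
}

// deepCopy imitates the scheme copier by gob round-tripping: the
// returned object shares no memory with the original.
func deepCopy(in *vm) (*vm, error) {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(in); err != nil {
		return nil, err
	}
	out := &vm{}
	if err := gob.NewDecoder(&buf).Decode(out); err != nil {
		return nil, err
	}
	return out, nil
}

func main() {
	orig := &vm{Name: "testvm", Status: vmStatus{Phase: "Running", NodeName: "node01"}}
	copyVM, err := deepCopy(orig)
	if err != nil {
		panic(err)
	}
	// Mutating the copy leaves the cached original untouched.
	copyVM.Status.MigrationNodeName = "node02"
	fmt.Println(orig.Status.MigrationNodeName == "") // true
}
```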
pkg/virt-controller/watch/pod.go
Outdated
model.Copy(&vmCopy, vm)
vmCopy.Status.MigrationNodeName = vm.Status.NodeName
logger := logging.DefaultLogger()
// TODO: the migration should be started here
It should be started after the VM put.
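The ordering constraint can be sketched like this, with hypothetical `putVM` and `startMigration` stand-ins (an `events` slice records the order so the rule is visible): the migration only starts once the status update has been persisted, so a failed put never leaves a half-started migration behind.

```go
package main

import (
	"errors"
	"fmt"
)

// events records the order of operations; putVM and startMigration are
// hypothetical stand-ins for the PR's API update and migration kickoff.
var events []string

func putVM(fail bool) error {
	if fail {
		return errors.New("update failed")
	}
	events = append(events, "put")
	return nil
}

func startMigration() { events = append(events, "migrate") }

// schedule applies the reviewer's ordering: persist the VM state first,
// and only kick off the migration once the put has succeeded.
func schedule(fail bool) error {
	if err := putVM(fail); err != nil {
		return err
	}
	startMigration()
	return nil
}

func main() {
	_ = schedule(true) // failed put: no migration is started
	_ = schedule(false)
	fmt.Println(events) // [put migrate]
}
```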
pkg/virt-controller/watch/pod.go
Outdated
}
logger.V(3).Info().Msg("Enqueuing VM again.")
queue.AddRateLimited(key)
return nil |
This should be return err
One test is currently failing. I've changed something with respect to the original pod scheduling behavior.
pkg/virt-controller/watch/pod.go
Outdated
}

}
return fmt.Errorf("failed to set vm state: %v", err)
Can't you just return the initial err here?
pkg/virt-controller/watch/pod.go
Outdated
obj, err := kubeapi.Scheme.Copy(vm)
if err != nil {
logger.Error().Reason(err).Msg("could not copy vm object")
// FIXME: should the VM be re-enqueued? failure to copy is a pretty big deal
Not sure myself here. Probably rate limiting a few times and then dismissing the key and forgetting about its enqueue history ... (queue.Forget())
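That bounded-retry idea can be sketched with a toy stand-in for client-go's rate-limiting queue (the real interface also has `NumRequeues` and `Forget`); `maxRetries` and `handleCopyFailure` are assumptions for illustration:

```go
package main

import "fmt"

// retryQueue is a tiny stand-in for client-go's rate-limiting queue,
// tracking how often each key has been re-enqueued.
type retryQueue struct{ requeues map[string]int }

func newRetryQueue() *retryQueue {
	return &retryQueue{requeues: map[string]int{}}
}

func (q *retryQueue) AddRateLimited(key string) { q.requeues[key]++ }
func (q *retryQueue) NumRequeues(key string) int { return q.requeues[key] }
func (q *retryQueue) Forget(key string)          { delete(q.requeues, key) }

const maxRetries = 5

// handleCopyFailure sketches the suggestion: retry a bounded number of
// times, then drop the key and clear its requeue history.
func handleCopyFailure(q *retryQueue, key string) {
	if q.NumRequeues(key) < maxRetries {
		q.AddRateLimited(key)
		return
	}
	q.Forget(key) // give up; a later event will re-add the key
}

func main() {
	q := newRetryQueue()
	for i := 0; i < 6; i++ {
		handleCopyFailure(q, "default/testvm")
	}
	fmt.Println(q.NumRequeues("default/testvm")) // 0: history cleared
}
```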
pkg/virt-controller/watch/pod.go
Outdated
queue.AddRateLimited(key)
return true
}
// TODO: the migration should be started here
logger.Info().Msgf("VM successfully scheduled to %s.", vmCopy.Status.NodeName)
This message is wrong.
Force-pushed from 8841461 to 9666d6e
Failing test was caused by erroneously passing a pointer by reference
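This class of bug slips through because copiers like `model.Copy` take `interface{}` parameters, so a `**vm` where a `*vm` was meant still compiles. A hypothetical reflect-based `structCopy` (a stand-in, not the actual `model.Copy`) shows both the correct call and the mistake:

```go
package main

import (
	"fmt"
	"reflect"
)

type vm struct{ NodeName string }

// structCopy is a hypothetical stand-in for model.Copy: because both
// parameters are interface{}, the compiler cannot catch a **vm being
// passed where a *vm was meant.
func structCopy(dst, src interface{}) error {
	s := reflect.ValueOf(src)
	if s.Kind() == reflect.Ptr {
		s = s.Elem()
	}
	if s.Kind() != reflect.Struct {
		return fmt.Errorf("source is a %s, not a struct: was a pointer passed by reference?", s.Kind())
	}
	reflect.ValueOf(dst).Elem().Set(s)
	return nil
}

func main() {
	src := &vm{NodeName: "node01"}

	// Correct: pass the *vm directly; fields are copied.
	dst := vm{}
	fmt.Println(structCopy(&dst, src), dst.NodeName)

	// The bug: taking the address of a value that is already a pointer
	// hands the copier a **vm, and the copy fails at runtime instead of
	// at compile time.
	bad := vm{}
	fmt.Println(structCopy(&bad, &src))
}
```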
Force-pushed from 9666d6e to 8dc699f
Change default pull policy to IfNotPresent to support offline demoing.
* Add 1.15.1 provider
* Fix nodes startup script
* Remove outdated comments
* Add shasum
@rmohr