
Update VM state pre migration #127

Merged — 2 commits merged into kubevirt:master on Mar 8, 2017

Conversation

stu-gott (Member) commented Mar 7, 2017

@stu-gott stu-gott changed the title WIP Update VM state post migration WIP Update VM state pre migration Mar 7, 2017
```go
	queue.AddRateLimited(key)
	return true
}
if putVm(vm, restClient, queue, key) {
	return true
}
```
Member:

I would prefer

```go
if err := putVM(); err != nil {
	return true
}
```

```go
}
logger.V(3).Info().Msg("Enqueuing VM again.")
queue.AddRateLimited(key)
```
Member:

Could you move the queue logic out here back to the controller loop?

Member Author:

So we don't need to re-try on failure for the "Starting migration" case?

Member:

We do. But I would prefer if you return the error and do the queue.AddRateLimited in the main controller loop.
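The pattern the reviewer asks for can be sketched as follows. This is a minimal sketch, not KubeVirt's actual code: `rateLimitedQueue` is a hypothetical stand-in for client-go's rate-limiting workqueue, and `execute`/`processKey` stand in for the per-key worker and the main controller loop.

```go
package main

import "fmt"

// rateLimitedQueue is a minimal stand-in for client-go's
// workqueue.RateLimitingInterface, tracking requeues per key.
type rateLimitedQueue struct{ requeues map[string]int }

func (q *rateLimitedQueue) AddRateLimited(key string) { q.requeues[key]++ }
func (q *rateLimitedQueue) Forget(key string)         { delete(q.requeues, key) }

// execute does the per-key work and returns an error instead of
// touching the queue itself.
func execute(key string) error {
	if key == "default/broken-vm" {
		return fmt.Errorf("failed to set vm state")
	}
	return nil
}

// processKey is the main controller loop step: all requeue logic
// lives here, not inside the worker function.
func processKey(q *rateLimitedQueue, key string) {
	if err := execute(key); err != nil {
		q.AddRateLimited(key) // retry with backoff
		return
	}
	q.Forget(key) // success: reset the rate limiter for this key
}

func main() {
	q := &rateLimitedQueue{requeues: map[string]int{}}
	processKey(q, "default/broken-vm")
	processKey(q, "default/ok-vm")
	fmt.Println(q.requeues["default/broken-vm"]) // 1
}
```

Centralizing `AddRateLimited` in the loop keeps retry policy in one place and lets worker functions stay plain `func(...) error`.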

```go
} else if vm.Status.Phase == corev1.Running {
	vmCopy := corev1.VM{}
	model.Copy(&vmCopy, vm)
	vmCopy.Status.MigrationNodeName = vm.Status.NodeName
```
Member:

That needs to be `vmCopy.Status.MigrationNodeName = pod.Status.NodeName`.

```go
	logger.Info().Msgf("VM successfully scheduled to %s.", vmCopy.Status.NodeName)
} else if vm.Status.Phase == corev1.Running {
	vmCopy := corev1.VM{}
	model.Copy(&vmCopy, vm)
```
Member:

When copying from one object to another object of the same type, you can use the Kubernetes copier, api.Scheme.Copy(). It does the same thing but looks nicer ;)
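The suggested call shape can be sketched as follows. This is a hedged illustration only: `VM` and `schemeCopy` below are hypothetical stand-ins for the KubeVirt VM type and `api.Scheme.Copy()`, used just to show the (object, error) return plus the type assertion the caller needs.

```go
package main

import "fmt"

// VM is a hypothetical stand-in for the KubeVirt corev1.VM type.
type VM struct {
	Status struct{ Phase, NodeName, MigrationNodeName string }
}

// schemeCopy mimics the call shape of api.Scheme.Copy: it returns an
// interface{} plus an error, so the caller must type-assert the result.
func schemeCopy(vm *VM) (interface{}, error) {
	c := *vm // VM here holds only value fields, so this copy is deep
	return &c, nil
}

func main() {
	vm := &VM{}
	vm.Status.Phase = "Running"
	vm.Status.NodeName = "node01"

	obj, err := schemeCopy(vm)
	if err != nil {
		panic(err)
	}
	vmCopy := obj.(*VM) // type-assert back to the concrete type
	vmCopy.Status.MigrationNodeName = "node02"

	// Mutating the copy leaves the original untouched.
	fmt.Println(vm.Status.MigrationNodeName == "") // true
}
```

The error-plus-assertion dance is the price of a generic copier; the benefit over a reflection-based `model.Copy` is that the scheme knows the registered API types.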

```go
model.Copy(&vmCopy, vm)
vmCopy.Status.MigrationNodeName = vm.Status.NodeName
logger := logging.DefaultLogger()
// TODO: the migration should be started here
```
Member:

It should be started after the VM put.



```go
}
logger.V(3).Info().Msg("Enqueuing VM again.")
queue.AddRateLimited(key)
return nil
```
Member:

This should be `return err`.

stu-gott (Member Author) commented Mar 7, 2017

One test is currently failing. I've changed something with respect to the original pod scheduling behavior.

```go
	}

}
return fmt.Errorf("failed to set vm state: %v", err)
```
Member:

Can't you just return the initial err here?

```go
obj, err := kubeapi.Scheme.Copy(vm)
if err != nil {
	logger.Error().Reason(err).Msg("could not copy vm object")
	// FIXME: should the VM be re-enqueued? failure to copy is a pretty big deal
```
Member:

Not sure myself here. Probably rate limiting a few times and then dismissing the key and forgetting its enqueue history (queue.Forget()).
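The retry-then-forget policy suggested here can be sketched like this. A minimal sketch under stated assumptions: `retryQueue`, `handleCopyFailure`, and `maxRetries` are hypothetical stand-ins; client-go's real workqueue exposes `NumRequeues`, `AddRateLimited`, and `Forget` with these shapes, but the retry budget is a tuning choice.

```go
package main

import "fmt"

const maxRetries = 5 // assumed retry budget, not a KubeVirt constant

// retryQueue is a minimal stand-in for client-go's rate-limiting
// workqueue, tracking how often each key has been requeued.
type retryQueue struct{ requeues map[string]int }

func (q *retryQueue) NumRequeues(key string) int { return q.requeues[key] }
func (q *retryQueue) AddRateLimited(key string)  { q.requeues[key]++ }
func (q *retryQueue) Forget(key string)          { delete(q.requeues, key) }

// handleCopyFailure applies the suggested policy: rate-limit a few
// retries, then drop the key and forget its enqueue history.
func handleCopyFailure(q *retryQueue, key string) {
	if q.NumRequeues(key) < maxRetries {
		q.AddRateLimited(key)
		return
	}
	q.Forget(key) // give up: reset backoff state so a fresh event starts clean
}

func main() {
	q := &retryQueue{requeues: map[string]int{}}
	for i := 0; i < 6; i++ {
		handleCopyFailure(q, "default/testvm")
	}
	fmt.Println(q.NumRequeues("default/testvm")) // 0: forgotten after 5 retries
}
```

Forgetting after a bounded number of attempts prevents a permanently broken object from keeping a worker busy forever, while the next watch event can still re-enqueue it from scratch.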

```go
	queue.AddRateLimited(key)
	return true
}
// TODO: the migration should be started here
logger.Info().Msgf("VM successfully scheduled to %s.", vmCopy.Status.NodeName)
```
Member:

This message is wrong.

@stu-gott stu-gott changed the title WIP Update VM state pre migration Update VM state pre migration Mar 7, 2017
stu-gott (Member Author) commented Mar 7, 2017

The failing test was caused by erroneously passing a pointer by reference.

@rmohr rmohr merged commit 7526fc5 into kubevirt:master Mar 8, 2017
@stu-gott stu-gott deleted the migrate-vm-status branch April 27, 2017 17:51
kubevirt-bot pushed a commit to kubevirt-bot/kubevirt that referenced this pull request Nov 6, 2020
Change default pull policy to IfNotPresent to support offline demoing.
kubevirt-bot pushed a commit to kubevirt-bot/kubevirt that referenced this pull request Dec 7, 2021
* Add 1.15.1 provider

* Fix nodes startup script

* Remove outdated comments

* Add shasum
2 participants