Add leader election mechanism to virt-controller #461

Merged

Conversation

@cynepco3hahue
Member

cynepco3hahue commented Sep 24, 2017

Resolves #412

@kubevirt-bot

Contributor

kubevirt-bot commented Sep 24, 2017

Can one of the admins verify this patch?

@cynepco3hahue cynepco3hahue force-pushed the cynepco3hahue:add_lead_election_to_controller branch from fca5cd1 to 6a0f9c6 Sep 24, 2017
Member

davidvossel left a comment

Great contribution!

I posted a few suggestions.

Also, before we can merge this we need a functional test that verifies leader election is working properly. This should be as simple as going through a few rounds of killing off virt-controller pods and ensuring VMs can still be scheduled properly.

kubeinformers "kubevirt.io/kubevirt/pkg/informers"
"kubevirt.io/kubevirt/pkg/kubecli"
"kubevirt.io/kubevirt/pkg/logging"
"kubevirt.io/kubevirt/pkg/virt-controller/rest"
"kubevirt.io/kubevirt/pkg/virt-controller/services"
)

const (
DefaultLeaseDuration = 15 * time.Second

@davidvossel

davidvossel Sep 25, 2017

Member

These look like sane defaults. I see k8s uses the same values as defaults.

We'll want this to be configurable at some point

@cynepco3hahue

cynepco3hahue Sep 26, 2017

Author Member

I thought about it; do we want to use the k8s approach and have additional parameters on virt-controller?
For example virt-controller --leader-elect ...

recorder := createRecorder(vca.clientSet)

rl, err := resourcelock.New(resourcelock.EndpointsResourceLock,
"default",

@davidvossel

davidvossel Sep 25, 2017

Member

Is there a more appropriate kubevirt system-related namespace for the lease locks?

@rmohr

rmohr Sep 26, 2017

Member

"kube-system" is our target namespace in the end. Tracked by #300

"virt-controller",
vca.clientSet.CoreV1(),
resourcelock.ResourceLockConfig{
Identity: vca.host,

@davidvossel

davidvossel Sep 25, 2017

Member

Host is related to the interface virt-controller should bind to. It's usually going to be 0.0.0.0.

vca.host is a bad name for what it actually represents. We should change it to listening_addr or something like that.

For the Identity field here, os.Hostname() would be a better choice.

@rmohr

rmohr Sep 26, 2017

Member

+1, that is the same as what k8s does.

go vca.migrationController.Run(3, stop)
httpLogger := logger.With("service", "http")
httpLogger.Info().Log("action", "listening", "interface", vca.host, "port", vca.port)
if err := http.ListenAndServe(vca.host+":"+strconv.Itoa(vca.port), nil); err != nil {

@davidvossel

davidvossel Sep 25, 2017

Member

I believe we want the logger to run regardless of which instance is currently the leader.

@rmohr

rmohr Sep 26, 2017

Member

+1, we need the server to always run. I would say that we need two REST-based checks. One should be /healthz (already exists, if I remember correctly), and one we could e.g. call /ready.

If we get elected, /ready should return 200, otherwise e.g. 503. This way we can make sure that the k8s service always redirects traffic to the one ready pod, if we add a readiness check based on this endpoint.

The health endpoint would be mapped to a health check on the pod, so that k8s knows if something is obviously wrong and restarts the pod.

@cynepco3hahue

cynepco3hahue Sep 27, 2017

Author Member

@davidvossel @rmohr For me it looks a little bit problematic, because both
http.ListenAndServe and leaderElector.Run() run in an infinite loop until failure, so if I run one of them, the second does not run at all. And I cannot check the leader election status before I run leaderElector.Run().
k8s uses the kube-api service for healthz checking, so they do not have this sort of problem.
Any thoughts?

@rmohr

rmohr Sep 27, 2017

Member

@cynepco3hahue You can e.g. run the http server in another goroutine. If you get elected you can e.g. flip a boolean from false to true and reflect that in the ready state. Does this make sense?

@rmohr

rmohr Sep 27, 2017

Member

I can see that in my setup the kubernetes-controller-manager has a livenessProbe configured.

Member

rmohr left a comment

Very nice. As @davidvossel said, functional tests are missing, and we will need the readiness and health checks. Otherwise our service will load-balance requests to both controller pods. That is not too bad, since we don't have a REST API with a specific function, but health checking and gathering metrics would definitely not work properly, since every second request would be sent to a follower.

@cynepco3hahue cynepco3hahue force-pushed the cynepco3hahue:add_lead_election_to_controller branch 3 times, most recently from 278df70 to e960632 Sep 28, 2017
@cynepco3hahue cynepco3hahue changed the title [WIP] Add leader election mechanism to virt-controller Add leader election mechanism to virt-controller Oct 1, 2017
Callbacks: leaderelection.LeaderCallbacks{
OnStartedLeading: func(stopCh <-chan struct{}) {
go vca.vmController.Run(3, stop)
//FIXME when we have more than one worker, we need a lock on the VM

@rmohr

rmohr Oct 2, 2017

Member

The FIXME is not relevant anymore

@cynepco3hahue

cynepco3hahue Oct 2, 2017

Author Member

Will remove it

RenewDeadline: vca.LeaderElection.RenewDeadline.Duration,
RetryPeriod: vca.LeaderElection.RetryPeriod.Duration,
Callbacks: leaderelection.LeaderCallbacks{
OnStartedLeading: func(stopCh <-chan struct{}) {

@rmohr

rmohr Oct 2, 2017

Member

Is it ok, if this function just starts go-routines and continues? Should it block?

@cynepco3hahue

cynepco3hahue Oct 2, 2017

Author Member

If I understand correctly, leaderElector.Run() just runs the OnStartedLeading callback and enters an infinite loop that tries to renew the lease, so it must be ok.

@davidvossel

davidvossel Oct 3, 2017

Member

I took a look at Kubernetes' controllers. They end the OnStartedLeading function with a select{} statement, which blocks indefinitely. I guess we should do the same.

@davidvossel

davidvossel Oct 3, 2017

Member

It really shouldn't matter, though. I think the select{} isn't there so much to ensure that the callback blocks as to ensure that a deadlock doesn't occur which leaves all goroutines idle.

@rmohr

rmohr Oct 4, 2017

Member

ok, so not needed. Thx for investigating.

return
}

endpoints, err := cli.CoreV1().Endpoints(DefaultNamespace).Get(DefaultEndpointName, metav1.GetOptions{})

@rmohr

rmohr Oct 2, 2017

Member

That's a nice idea. I think something simpler would be appropriate too, since connectivity is already covered by the health check.

Maybe just switching a boolean to true after the leader election is won might already be good enough. What do you think?

@rmohr

Member

rmohr commented Oct 2, 2017

ok to test

@cynepco3hahue cynepco3hahue force-pushed the cynepco3hahue:add_lead_election_to_controller branch from e960632 to 3e1d33b Oct 2, 2017
@cynepco3hahue

Member Author

cynepco3hahue commented Oct 3, 2017

@rmohr I think we need to adapt our Jenkins to cases when we have two replicas of the same container and only one of them is ready.
Because in the Jenkins job I can see

+ grep false
virt-controller-2671887288-21hck               false
virt-controller-2671887288-dx7xc               false
virt-manifest-4235766445-tc3td                 true,false
+ sleep 10
++ kubectl get pods '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ cluster/kubectl.sh --core get pods '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
+ '[' -n 'false
true,false' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods '-ocustom-columns=name:metadata.name,ready:status.containerStatuses[*].ready'
+ cluster/kubectl.sh --core get pods '-ocustom-columns=name:metadata.name,ready:status.containerStatuses[*].ready'
+ grep false
virt-controller-2671887288-dx7xc               false
virt-manifest-4235766445-tc3td                 true,false
@rmohr

Member

rmohr commented Oct 3, 2017

@cynepco3hahue you can fix that in automation/tests.sh. Maybe change it to one check which verifies that everything except the controller is ready, and add one extra check below which verifies that at least one controller is ready?

@cynepco3hahue cynepco3hahue force-pushed the cynepco3hahue:add_lead_election_to_controller branch 2 times, most recently from 3d75ee9 to 5924266 Oct 3, 2017
@cynepco3hahue cynepco3hahue requested review from davidvossel and rmohr Oct 3, 2017
# Make sure all containers are ready
while [ -n "$(kubectl get pods -o'custom-columns=status:status.containerStatuses[*].ready' --no-headers | grep false)" ]; do
# Make sure all containers except virt-controller are ready
while [ -n "$(kubectl get pods -o'custom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers | awk '!/virt-controller/ && /false/')" ]; do

@davidvossel

davidvossel Oct 3, 2017

Member

This command isn't working for me.

$ kubectl get pods -o'custom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
error: containerStatuses is not found

@cynepco3hahue

cynepco3hahue Oct 3, 2017

Author Member

@davidvossel That is very strange, because it just passed the Jenkins tests and it also worked for me locally. Can you give me the output of kubectl get pods?

My output

./cluster/kubectl.sh get pods -o'custom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false         haproxy-574c8d574-b2f24
true          haproxy-574c8d574-dbz66
false         iscsi-auth-demo-target-tgtd-6d76d6498d-c75xx
true          iscsi-auth-demo-target-tgtd-6d76d6498d-md2gw
false         iscsi-demo-target-tgtd-65f5dcf6c6-47k6t
true          iscsi-demo-target-tgtd-65f5dcf6c6-s5dpn
true          kubevirt-cockpit-demo-75867f467d-fwcml
true,true     libvirt-r446k
false         spice-proxy-9f4649f9b-cwj4d
true          spice-proxy-9f4649f9b-xdclz
true          virt-api-54fdcfcb4f-hw5xt
true          virt-controller-6f55bf764c-6scz2
false         virt-controller-6f55bf764c-g4x9x
true          virt-handler-rqqh2
true,true     virt-manifest-564cd555d5-bxztw
false,false   virt-manifest-564cd555d5-lrztm

@davidvossel

davidvossel Oct 3, 2017

Member

ah, I see. I'm fairly confident something is wrong with my environment now.

@cynepco3hahue

cynepco3hahue Oct 3, 2017

Author Member

@davidvossel I saw @vladikr encountered this issue kubernetes/kubernetes#53356, so it may be your case too.

@rmohr

rmohr Oct 4, 2017

Member

I think there is also a small time window directly after the pod is posted where the container status is not yet there; then you would also see that. The bash script will and should retry in such a case ...

Member

davidvossel left a comment

This looks great. It's really close. I just had one comment concerning concurrent access to some shared data.

RetryPeriod: vca.LeaderElection.RetryPeriod.Duration,
Callbacks: leaderelection.LeaderCallbacks{
OnStartedLeading: func(stopCh <-chan struct{}) {
vca.isLeader = true

@davidvossel

davidvossel Oct 3, 2017

Member

I don't think a write to a boolean is an atomic operation.

Since isLeader is being accessed by multiple goroutines, it needs either to use the https://golang.org/pkg/sync/atomic/ package, or to synchronize access to isLeader with a lock.

@rmohr

rmohr Oct 4, 2017

Member

EDIT: The lock is much simpler, go with it.

The atomic write is not the problem, since we don't need an atomic compare-and-swap, but Go is allowed to cache variables in goroutines (no volatile support). I did not think about that when I advised the usage of the bool.

How about using a channel and doing a non-blocking select on every ready function invocation?

	select {
	case _, notReady := <-app.readyChan:
		if !notReady {
			// channel closed: we are the leader, send 200, true
		} else {
			// send 503, false
		}
	default:
		// nothing in the channel, we are not the leader
		// send 503, false
	}

You would create a channel like this:

	app.readyChan = make(chan struct{}, 1)

and when you get elected, you would simply do

	close(app.readyChan)

Might make sense to hide that inside a nice helper struct, with nice methods like app.readiness.MarkReady() and app.readiness.IsReady().

@rmohr

rmohr Oct 4, 2017

Member

or you simply use the lock, like @davidvossel suggested.

@cynepco3hahue

cynepco3hahue Oct 4, 2017

Author Member

@rmohr @davidvossel Can you guys explain it to me a little bit more? The only place where isLeader is written is in the callback methods, and the readiness method only reads it, so I do not see a problem here. I would appreciate an example as well.

@davidvossel

davidvossel Oct 4, 2017

Member

Two goroutines are accessing a shared boolean. One goroutine is reading the value, one is writing to the value.

If the read and writes are atomic operations, this would be okay. In the case of a golang boolean, we are not guaranteed reading/writing involves a single operation. This means you could have a read collide in the middle of a write. That's why packages like this exist, https://github.com/tevino/abool, to ensure getting/setting a boolean results in a single operation. I don't know what the result of such a collision would look like, all I know is this isn't a safe way to coordinate between threads.

If you wrap the boolean access logic in a lock, everything will be fine. In general, we shouldn't be coordinating between goroutines using shared data like this. The golang way is to use a channel. I agree that a channel seems a little complex for this scenario though.

@cynepco3hahue

cynepco3hahue Oct 4, 2017

Author Member

@davidvossel thanks for the answer, I was sure that primitive types do not have such problems

Member

rmohr left a comment

We need to adapt the readiness detection a little bit, as @davidvossel said; almost there.

res := map[string]interface{}{}
if !app.isLeader {
res["apiserver"] = map[string]interface{}{"leader": "failed", "error": "current pod is not leader"}
response.WriteHeaderAndJson(http.StatusInternalServerError, res, restful.MIME_JSON)

@rmohr

rmohr Oct 4, 2017

Member

I think 503 would be a better return value. It is not an internal server error, if we did not yet win an election.

readinessFunc := func(_ *restful.Request, response *restful.Response) {
res := map[string]interface{}{}
if !app.isLeader {
res["apiserver"] = map[string]interface{}{"leader": "failed", "error": "current pod is not leader"}

@rmohr

rmohr Oct 4, 2017

Member

Error does not seem appropriate. Could we use: "leader": "true" and "leader": "false" instead?

go vca.rsController.Run(3, stop)
},
OnStoppedLeading: func() {
vca.isLeader = false

@rmohr

rmohr Oct 4, 2017

Member

Not wrong but also not necessary, since we panic anyway.

@cynepco3hahue cynepco3hahue force-pushed the cynepco3hahue:add_lead_election_to_controller branch 2 times, most recently from 79128a7 to 6640c59 Oct 4, 2017
Member

rmohr left a comment

The lock on the read path is missing. LGTM otherwise.

restful.Add(rest.WebService)
readinessFunc := func(_ *restful.Request, response *restful.Response) {
res := map[string]interface{}{}
if !app.isLeader {

@rmohr

rmohr Oct 6, 2017

Member

You also have to use the lock here. The problem is that only the use of the same lock in both goroutines will force a synchronization of the shared variable between the two goroutines. Otherwise it can still happen that this goroutine never sees any update to app.isLeader. I think https://golang.org/ref/mem is a pretty good read to better understand that.

@cynepco3hahue

cynepco3hahue Oct 7, 2017

Author Member

👍 and thanks for article

@cynepco3hahue cynepco3hahue force-pushed the cynepco3hahue:add_lead_election_to_controller branch from 6640c59 to 71b26ef Oct 15, 2017
@cynepco3hahue

Member Author

cynepco3hahue commented Oct 15, 2017

@rmohr In the end I somehow ended up with channels instead of a mutex 😄

Member

rmohr left a comment

@cynepco3hahue looks nice now. The functional test needs to be improved a little, to also test that an already running pod can take over the lock.

leaderPodName := getLeader()
Expect(virtClient.CoreV1().Pods(leaderelectionconfig.DefaultNamespace).Delete(leaderPodName, &metav1.DeleteOptions{})).To(BeNil())

Eventually(getLeader, 30*time.Second, 5*time.Second).ShouldNot(Equal(leaderPodName))

@rmohr

rmohr Oct 16, 2017

Member

I am not yet completely happy with that test. It does not test whether the already waiting pod gets the lease. Since we use a Deployment, it could as well just be the respawned new pod recovering.

By repeatedly killing every pod which is not the second already running one, we could test that, but maybe you can think of something more elegant.

@cynepco3hahue

cynepco3hahue Oct 16, 2017

Author Member

@rmohr Hm, I can get all virt-controller pods before destroying the leader pod, and after the destroy I will verify that the new leader is among the old pods. Would that be good enough?

@rmohr

rmohr Oct 18, 2017

Member

That sounds good.

@rmohr

rmohr Oct 18, 2017

Member

We might end up with flaky tests then. Maybe we can start two virt-controller pods by hand and kill one. Implementing that might be complex; it would probably involve scaling the Deployment down to 0, taking the pod template and starting two pods out of the template ...

}
res["apiserver"] = map[string]interface{}{"leader": "false", "error": "current pod is not leader"}
response.WriteHeaderAndJson(http.StatusServiceUnavailable, res, restful.MIME_JSON)
return

@rmohr

rmohr Oct 16, 2017

Member

Not needed.

@cynepco3hahue

cynepco3hahue Oct 16, 2017

Author Member

forgot to remove this one

@@ -71,7 +80,25 @@ func Execute() {

app.restClient = app.clientSet.RestClient()

restful.Add(rest.WebService)
readinessFunc := func(_ *restful.Request, response *restful.Response) {

@rmohr

rmohr Oct 16, 2017

Member

Could you add a unit test for that?

@cynepco3hahue

cynepco3hahue Oct 16, 2017

Author Member

Will check how I can do it.

@cynepco3hahue cynepco3hahue force-pushed the cynepco3hahue:add_lead_election_to_controller branch from 379df9b to c8314b2 Oct 16, 2017
@cynepco3hahue

Member Author

cynepco3hahue commented Oct 16, 2017

retest this please

1 similar comment
@rmohr

Member

rmohr commented Oct 17, 2017

retest this please

@cynepco3hahue

Member Author

cynepco3hahue commented Oct 17, 2017

Looks like both problems are unrelated to this PR

  • jenkins fails on
[Fail] Console New VM with a serial console given [It] should be allowed to connect to the console 
/var/lib/jenkins/workspace/kubevirt-functional-tests/go/src/kubevirt.io/kubevirt/tests/utils.go:163
  • travis fails on
• Failure [0.102 seconds]
Inotify
/home/travis/gopath/src/kubevirt.io/kubevirt/pkg/inotify-informer/inotify_test.go:146
  When watching files in a directory
  /home/travis/gopath/src/kubevirt.io/kubevirt/pkg/inotify-informer/inotify_test.go:145
    should detect multiple creations and deletions [It]
    /home/travis/gopath/src/kubevirt.io/kubevirt/pkg/inotify-informer/inotify_test.go:112
    Expected
        <bool>: false
    to equal
        <bool>: true
    /home/travis/gopath/src/kubevirt.io/kubevirt/pkg/inotify-informer/inotify_test.go:106
------------------------------
•level=error timestamp=2017-10-16T16:43:16.741384Z pos=inotify.go:121 component= reason="Invalid file path: /tmp/kubevirt815824892/test" msg="Invalid content detected, ignoring and continuing."
level=error timestamp=2017-10-16T16:43:16.741604Z pos=inotify.go:121 component= reason="Invalid file path: /tmp/kubevirt815824892/test" msg="Invalid content detected, ignoring and continuing."
@rmohr

Member

rmohr commented Oct 17, 2017

@cynepco3hahue looks like that. Could you do a rebase?

@cynepco3hahue cynepco3hahue force-pushed the cynepco3hahue:add_lead_election_to_controller branch from c8314b2 to 9c59e22 Oct 17, 2017
Signed-off-by: Lukianov Artyom <alukiano@redhat.com>
Signed-off-by: Lukianov Artyom <alukiano@redhat.com>
Signed-off-by: Lukianov Artyom <alukiano@redhat.com>
Signed-off-by: Lukianov Artyom <alukiano@redhat.com>
Signed-off-by: Lukianov Artyom <alukiano@redhat.com>
Signed-off-by: Lukianov Artyom <alukiano@redhat.com>
Signed-off-by: Artyom Lukianov <alukiano@redhat.com>
Signed-off-by: Lukianov Artyom <alukiano@redhat.com>
Signed-off-by: Lukianov Artyom <alukiano@redhat.com>
@cynepco3hahue cynepco3hahue force-pushed the cynepco3hahue:add_lead_election_to_controller branch from 9c59e22 to 4dcb03c Oct 18, 2017
@cynepco3hahue cynepco3hahue requested review from rmohr and davidvossel Oct 18, 2017
@rmohr
rmohr approved these changes Oct 18, 2017
Member

rmohr left a comment

LGTM. @davidvossel?

Member

davidvossel left a comment

lgtm 👍

@davidvossel davidvossel merged commit 5a76441 into kubevirt:master Oct 18, 2017
3 checks passed
continuous-integration/travis-ci/pr The Travis CI build passed
coverage/coveralls Coverage decreased (-0.5%) to 57.102%
kubevirt-functional-tests/jenkins/pr All is well
@cynepco3hahue cynepco3hahue deleted the cynepco3hahue:add_lead_election_to_controller branch Oct 22, 2017