fabric8 client API version 1.2.2: order of create replication controller / create watch didn't matter. In 1.3.91 it does matter. #432

Closed
buildlackey opened this issue May 27, 2016 · 9 comments

Comments

@buildlackey

buildlackey commented May 27, 2016

Hi.

I filed an issue on Stack Overflow here:
http://stackoverflow.com/questions/37475114/does-fabric8-and-kubernetes-1-2-4-not-support-watches-on-replication-controllers

It illustrates the steps to reproduce this issue using a small Scala program. My colleague Vijay Samuel (mad respect to him) helped me find a solution to the problem, which is expressed as a working Java program in that same Stack Overflow post.

Initially we thought the key tweak to making things work was to upgrade the client API from 1.3.x to 1.4-SNAPSHOT, but it turns out that this is immaterial. The main thing one must do is to make sure that the watch is created before the object being watched is created.

For example, in Vijay's program, if you switched the order of these statements

client.replicationControllers().inNamespace(namespace).withLabel("l", "v").watch(watcher);
createRc(client, namespace, podName, image);

to this:

createRc(client, namespace, podName, image);
client.replicationControllers().inNamespace(namespace).withLabel("l", "v").watch(watcher);

the program would cease to work. Switching the order would have been fine in 1.2.2, as far as I can tell from the testing I have done.

Vijay's working example:

import com.fasterxml.jackson.databind.ObjectMapper;
import com.typesafe.scalalogging.StrictLogging;
import io.fabric8.kubernetes.api.model.Quantity;
import io.fabric8.kubernetes.api.model.ReplicationController;
import io.fabric8.kubernetes.client.Watcher.Action;
import io.fabric8.kubernetes.client.*;

import java.util.HashMap;
import java.util.Map;


public class Vijay {


  public static DefaultKubernetesClient getConnection() {
    Config config = Config.builder()
        .withMasterUrl("http://localhost:8080")
        .build();
    return new DefaultKubernetesClient(config);
  }


  public static void main(String[] args) throws Exception {
    DefaultKubernetesClient client = getConnection();

    String namespace = "junk6";
    String podName = "prom";
    String image = "nginx";

    Watcher<ReplicationController> watcher = new Watcher<ReplicationController>() {

      @Override
      public void onClose(KubernetesClientException cause) {
        // No-op: nothing to clean up when the watch closes.
      }

      @Override
      public void eventReceived(Action action, ReplicationController resource) {
        System.out.println(action + ":" + resource);

      }
    };


    // The watch must be registered before the RC is created (see the note above
    // about ordering), otherwise no events are received with the newer client.
    client.replicationControllers().inNamespace(namespace).withLabel("l", "v").watch(watcher);

    createRc(client, namespace, podName, image);

  }

  private static void createRc(DefaultKubernetesClient client, String namespace, String podName, String image) {
    try {
      Map<String, String> labels = new HashMap<String, String>();
      labels.put("l", "v");
      ReplicationController rc = client.replicationControllers().inNamespace(namespace)
          .createNew()
          .withNewMetadata()
          .withName(podName)
          .addToLabels(labels)
          .endMetadata()
          .withNewSpec()
          .withReplicas(1)
          .withSelector(labels)
          .withNewTemplate()
          .withNewMetadata()
          .addToLabels(labels)
          .endMetadata()
          .withNewSpec()
          .addNewContainer()
          .withName(podName)
          .withImage(image)
          .withImagePullPolicy("Always")
          .withNewResources()
          .addToLimits("cpu", new Quantity("100m"))
          .addToLimits("memory", new Quantity("100Mi"))
          .endResources()
          .endContainer()
          .endSpec()
          .endTemplate()
          .endSpec()
          .done();
    } catch (Exception e) {
      e.printStackTrace();
    }
  }

}
@jimmidyson
Contributor

jimmidyson commented May 27, 2016

When you say "cease to work" what do you mean? You don't receive any events at all?

These statements are supposed to work in any order of course, but it depends on what events you want to receive. Each resource in Kubernetes has a resource version which is updated each time the resource is updated. When running a watch you request a resource version to start watching from. If you specify a resource version of 0 you will receive all events back to the oldest still retained for that resource (list). Note that the resource history is pruned in Kubernetes regularly, so specifying 0 doesn't necessarily go back to the creation of the resource, but to the oldest version still present in Kubernetes' etcd.

This is where the difference between 1.2.2 & the latest version comes in, I think. In 1.2.2 there was no way to specify a resource version to watch from; it defaulted to 0 (all events, as described above). In later versions the resource version is configurable via the DSL (withResourceVersion) and defaults to the latest in the API server, so you only receive events from the time the watch starts. If the RC is updated after you start the watch you should see events come through.

So if you test it out using withResourceVersion("0") in your watch statement, that should work for you.
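For example (a sketch only, assuming the same client, namespace, and watcher as in the program above; the exact position of withResourceVersion in the chain may differ slightly between client versions):

// Start the watch from resource version "0" so events back to the oldest
// version still retained by the API server are delivered, including the
// ADDED event for an RC that was created before the watch.
client.replicationControllers()
    .inNamespace(namespace)
    .withLabel("l", "v")
    .withResourceVersion("0")
    .watch(watcher);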

Now I will also go & test it out!

@buildlackey
Author

Correct.

By "cease to work" I mean I receive no events at all in the latest version (unless I change the order).

I will try withResourceVersion("0").

Thanks for your help.
/chris

@jimmidyson
Contributor

Dug a little deeper and found that it's not quite what I said above... With the move from 1.2 to 1.3 we also switched to the okhttp client (awesome!). The problem occurs because okhttp correctly encodes the URL according to standard URL encoding rules (in this case replacing = with %3D). It seems that Kubernetes doesn't handle encoded URLs properly. I will raise an issue with Kubernetes and hope it gets fixed there. I will also try to prevent the encoding in the request URL to work around the issue in the meantime.
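To make the encoding concrete, here is a tiny standalone illustration. It uses java.net.URLEncoder purely for demonstration; it is not the code path the client actually takes (the client builds its URLs via okhttp):

import java.net.URLEncoder;

public class SelectorEncoding {
  public static void main(String[] args) throws Exception {
    // A label selector of the form "l=v" contains an '=' character...
    String selector = "l=v";
    // ...which standard URL encoding turns into "l%3Dv", the form that ends
    // up in the watch URL's query string.
    System.out.println(URLEncoder.encode(selector, "UTF-8"));
  }
}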

@buildlackey
Author

withResourceVersion("0") i receive events using latest version of API even with ordering such that object is created first, then watch on object is created.

Also, from my experience the URL encoding of the selector does not seem to throw a wrench at kubernetes. The line below is from the log output of my sample program. Please note that the selector -- fieldSelector=metadata.name%3Dspark-master-rc&resourceVersion -- seems to be properly URL encoded, and things still seem to work fine.

http://localhost:8080/api/v1/ncreating ok http websocket for http://localhost:8080/api/v1/namespaces/junk7/replicationcontrollers?fieldSelector=metadata.name%3Dspark-master-rc&resourceVersion=0&watch=true

2016-05-27 15:45:26 thread:[main] DEBUG c.e.k.provisioner.server.BugReport$ - after rc watch - name spark-master-rc
2016-05-27 15:45:26 thread:[OkHttp http://localhost:8080/api/v1/namespaces/junk7/replicationcontrollers?fieldSelector=metadata.name%3Dspark-master-rc&resourceVersion=0&watch=true] DEBUG c.e.k.provisioner.server.BugReport$ - event recv'd for REPCONTROLLER spark-master-rc. action=ADDED [ ReplicationController(apiVersion=v1, etc etc etc )

thx /chris

@buildlackey
Author

By the way, if you do file an issue with the Kubernetes folks, can you please update this bug thread? My colleagues are maintaining a fork of k8s, and they will want to cherry-pick whatever k8s fix materializes as soon as it is available, so that your fabric8 API and our k8s fork work well together. Much appreciated if you can do this. thx /chris

@jimmidyson
Contributor

There's no issue with Kubernetes as I thought there was. It turns out we were sending through an empty fieldSelector, which broke things when you only want to list via labels. Fixing up now.
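Roughly, the difference looks like this (hypothetical watch URLs, shown only to illustrate what "an empty fieldSelector" means; the real URLs are built by the client):

// Before: an empty fieldSelector query param was sent alongside the label selector
//   .../api/v1/namespaces/junk6/replicationcontrollers?labelSelector=l%3Dv&fieldSelector=&watch=true
// After: the fieldSelector param is omitted entirely when no field selector is set
//   .../api/v1/namespaces/junk6/replicationcontrollers?labelSelector=l%3Dv&watch=true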

@jimmidyson
Contributor

Hmm, this is too late on a Friday night. Will look at it over the weekend or early next week. Something odd is going on that my tired brain can't quite pin down right now.

@buildlackey
Author

... thanks very much for your help /cb

@jimmidyson
Contributor

Well, I think it is working correctly now... #433 stops sending an empty fieldSelector query param, and now I can consistently watch events after creating objects (e.g. scaling an RC receives events properly). Closing now, but if this is still an issue with the next release feel free to reopen.
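For anyone verifying this, a rough sketch of the kind of check described above, reusing the names from the example earlier in the thread (scale(...) is assumed to be available in the client version you are on):

// With the watcher from the example registered, scaling the RC should now
// produce MODIFIED events even if the RC was created before the watch.
client.replicationControllers()
    .inNamespace(namespace)
    .withName(podName)
    .scale(2);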
