
Initial implementation of Guestbook example #11

Merged: 4 commits merged into master from 4_guestbook on Feb 20, 2018
Conversation

lukehoban (Member)

Add a copy of the Guestbook example based on https://kubernetes.io/docs/tutorials/stateless-application/guestbook/.

Also move to use a fork of the Terraform Kubernetes provider which supports Deployment (and other newer) Kubernetes resources. We may need to maintain our own fork to provide a more stable baseline.

Fixes #4.

From the commits: LoadBalancer isn't necessarily always supported, including apparently on our Travis Minikube setup.
joeduffy (Member) left a comment:

Great to see this coming alive!

Gopkg.toml Outdated
@@ -35,7 +35,8 @@

[[constraint]]
  name = "github.com/terraform-providers/terraform-provider-kubernetes"
  version = "1.0.1"
  source = "github.com/sl1pm4t/terraform-provider-kubernetes"
Member

I'm slightly nervous about depending on an unofficial fork. If we do this, let's make sure to file a bug to track. Do you know of the plans to merge this in, and/or the overall roadmap for Kubernetes support? Is this something we're going to have to take into our own hands?

Member Author

Agreed. The Kubernetes support in the mainline TF provider is not particularly usable in its current state. It seems that almost everyone using it in practice is on some fork. There are lots of them, though, and none that is clearly on track to get merged into mainline.

We will need to work either with the TF folks or with one of the fork maintainers to have a robust plan here.

But I don't think there is any world where we can have a valuable Kubernetes story without supporting these resources, so it's just a question of if/when we depend on them via mainline or via some fork that the community and we commit to keeping in good shape.

hashicorp/terraform-provider-kubernetes#3
hashicorp/terraform-provider-kubernetes#70
hashicorp/terraform-provider-kubernetes#100
hashicorp/terraform-provider-kubernetes#101

I suggest we create our own fork, depend on that, and set it for now to match the one I'm using here. We can then reach out to the maintainer of that fork, ask that he open a PR against mainline, and help advocate for that PR getting merged.

Member

OK, it's surprising that the community is in this state, but maybe it's good news in that it's an area where we can step in and help the community.

As for our own fork, I'm happy with that approach; it's consistent with our other providers.

import * as pulumi from "pulumi";
import * as kubernetes from "@pulumi/kubernetes";

// REDIS MASTER
Member

Instead of a single file, is it worth breaking this apart, more like https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook? Feels like this would make it a little easier to consume, in addition to being more familiar to those who have seen that example code.

Member

FWIW, there's a Ksonnet example here https://github.com/ksonnet/ksonnet/blob/master/examples/guestbook.jsonnet which is a single file. May be worth looking at that for inspiration for what "abstractions", if any, might make sense to introduce in the name of shrinking the example even further.

Member Author

I think the default guestbook example only uses separate files so that the tutorial can deploy one part at a time and talk about it that way. Like the ksonnet version, I'd rather find ways to introduce abstractions that simplify the existing code so that it's easier to read in one file. It's rather insane the amount of text needed to do what is logically a very simple piece of work here.

Member

Agreed. I wrote this before looking at the Ksonnet example, and I definitely like the idea of using abstractions to simplify things to the point where it obviously belongs in one file 😄


// REDIS MASTER

let redisMasterLabels = { app: "redis", tier: "backend", role: "master" };
Member

Nice, variable reuse 😀

spec: [{
    selector: [redisMasterLabels],
    replicas: 1,
    strategy: [],
Member

Can this be omitted? (It isn't in the original YAML/Ksonnet examples.)

Member Author

Yes - it can - dropped.

name: "redis-master",
}],
spec: [{
selector: [redisMasterLabels],
Member

I didn't see the selector in the YAML/Ksonnet example. Is this correct?

Member Author

Member

Ah sorry, guess I totally missed that!

spec: [{
    selector: [redisSlaveLabels],
    replicas: 1,
    strategy: [],
Member

Can we omit?

spec: [{
    selector: [frontendLabels],
    replicas: 3,
    strategy: [],
Member

Can we omit?

        selector: [redisMasterLabels],
    }],
});
let redisMasterDeployment = new kubernetes.Deployment("redis-master", {
Member

Regarding Ksonnet, it's interesting that they got the creation of services down to a single line, and it's done by the service borrowing fields from the deployment:

redis_master_service: example.service("redis-master") {
    targetPod_: $.redis_master_deployment.spec.template,
},

I assume that's just a simple helper library? Is there any way to get this cheaply "inline" in this file?
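
For illustration, a minimal sketch of how such a helper could be written inline in this file, assuming the imports and the redisMasterLabels variable from the example and the array-of-blocks argument shape seen elsewhere in this diff; the helper name, the metadata/port field names, and the Redis port number are illustrative assumptions, not part of the PR:

// Hypothetical helper (not part of this PR): create a Service whose selector
// "borrows" the same labels used by the corresponding Deployment, similar to
// the Ksonnet example. Assumes the pulumi/kubernetes imports at the top of the file.
function serviceFor(name: string, labels: { [key: string]: string }, port: number) {
    return new kubernetes.Service(name, {
        metadata: [{ name: name, labels: [labels] }],
        spec: [{
            selector: [labels],
            port: [{ port: port, targetPort: port }],
        }],
    });
}

// Usage, one line per service, as in the Ksonnet version:
let redisMasterService = serviceFor("redis-master", redisMasterLabels, 6379);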

    labels: [frontendLabels],
}],
spec: [{
    type: "NodePort",
Member

This seems different from the example at https://github.com/kubernetes/kubernetes/blob/master/examples/guestbook/frontend-service.yaml, which says:

spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Intentional?

Member Author

The copy of this in the actual walkthrough uses NodePort by default, but then has notes about optionally using LoadBalancer.

Unfortunately, neither appears to work correctly on the Docker for Mac Kubernetes install currently. There are lots of mentions of problems with this on the web, including this issue explicitly asking for documentation of what is actually supported: docker/docs#5690.

I do want this example to ultimately be able to provide an output property with a URL you can open in the browser to try the app. It seems that may not be possible with deployments to Docker for Mac today. And it cannot in general be done in a single consistent way across Kubernetes providers. 😢
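
A minimal sketch of one way to reflect that walkthrough note in code would be to make the service type a per-stack choice, so clusters that support it can opt into LoadBalancer while Minikube and Docker for Mac stay on NodePort. This assumes the pulumi.Config API and the array-of-blocks service shape used in this example; the config bag name, the config key, and the inline label values are made up for the sketch:

import * as pulumi from "pulumi";
import * as kubernetes from "@pulumi/kubernetes";

// Same selector labels as the upstream frontend-service.yaml.
let frontendLabels = { app: "guestbook", tier: "frontend" };

// Hypothetical per-stack setting: "NodePort" (default) or "LoadBalancer".
let config = new pulumi.Config("guestbook");
let frontendServiceType = config.get("frontendServiceType") || "NodePort";

let frontendService = new kubernetes.Service("frontend", {
    metadata: [{ name: "frontend", labels: [frontendLabels] }],
    spec: [{
        type: frontendServiceType,
        port: [{ port: 80 }],
        selector: [frontendLabels],
    }],
});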

    }],
});

export let frontendPort: pulumi.Output<number> = frontendService.spec.apply(spec => spec[0].port![0].nodePort);
Member

Curious...why is the ! required here?

Also, is the pulumi.Output<number> part required too? Seems strange to explicitly type it.
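
One plausible explanation (an assumption, not something confirmed in this thread) is that the generated types declare these nested fields as optional, which is what forces the non-null assertion. A standalone TypeScript sketch of the mechanics:

// Standalone illustration, independent of the real generated provider types.
interface ServiceSpec {
    // If the generated type marks the field optional...
    port?: { nodePort: number }[];
}

function firstNodePort(spec: ServiceSpec): number {
    // ...then TypeScript requires a guard or a non-null assertion ("!")
    // before indexing into it.
    return spec.port![0].nodePort;
}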

From the commits: Also, use pulumi fork of provider. (@lukehoban)

lukehoban (Member Author)

Going ahead and merging this, as a raw translation of the core Guestbook.

We would like to have a version more like the ksonnet version, with useful custom helpers for composing and reusing pieces of the specification, but that is largely dependent on solving #12.

lukehoban merged commit 7ba71a7 into master on Feb 20, 2018.
lukehoban deleted the 4_guestbook branch on February 20, 2018 at 02:49.