This repository has been archived by the owner on Sep 30, 2020. It is now read-only.

Calico self hosted integration #124

Merged
merged 12 commits into master on Jan 15, 2017

Conversation

@heschlie (Contributor) commented Dec 6, 2016

  • Migrated Calico to self hosted install
  • Updated Calico versions

@codecov-io commented Dec 6, 2016

Current coverage is 72.65% (diff: 100%)

Merging #124 into master will increase coverage by 3.28%

@@             master       #124   diff @@
==========================================
  Files             4          4          
  Lines          1126       1415   +289   
  Methods           0          0          
  Messages          0          0          
  Branches          0          0          
==========================================
+ Hits            781       1028   +247   
- Misses          259        279    +20   
- Partials         86        108    +22   

Powered by Codecov. Last update 66d25c0...d9ea73f

@@ -298,7 +241,7 @@ coreos:
http://localhost:8080/api/v1/nodes/$(hostname)"
{{end}}

{{if .Experimental.EphemeralImageStorage.Enabled}}
{{if .Experimental.EphemeralImageStorage.Enabled}}mount
Contributor:

Unnecessary mount added to the end of the line?

Contributor Author:
Now how did that get there...

@mumoshu (Contributor) commented Dec 6, 2016

Hi @heschlie, thanks for the pull request 👍

I don't have deep knowledge of Calico self-hosting, so would you mind:

  • explaining how self-hosting is achieved by this?
  • sharing related documentation or announcements about Calico self-hosting?

@heschlie (Contributor Author) commented Dec 6, 2016

Hi @mumoshu, you are most welcome!

Self hosting simply means having Kubernetes manage Calico itself, instead of doing so with systemd units or manually managed containers. This is achieved in a few ways. We use a DaemonSet to run Calico Node and install the CNI binaries; because it runs on every node, it ensures Calico is deployed everywhere, and in the same manner. It also lets us manage the CNI config from a single place.
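
For illustration, here is a minimal sketch of the shape of such a DaemonSet; the image tags, etcd endpoint, and install script path are assumptions, not the exact contents of this PR's calico.yaml:

# Minimal sketch only -- image tags, etcd endpoint, and script path
# are illustrative assumptions, not the values used in this PR.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: calico-node
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        k8s-app: calico-node
    spec:
      hostNetwork: true
      containers:
        # Runs the Calico node agent on every node in the cluster.
        - name: calico-node
          image: calico/node:v1.0.0
          env:
            - name: ETCD_ENDPOINTS
              value: "https://etcd.example.com:2379"
          securityContext:
            privileged: true
        # Copies the CNI binaries and config onto the host, so the
        # CNI setup is managed from this single manifest.
        - name: install-cni
          image: calico/cni:v1.5.5
          command: ["/install-cni.sh"]
          volumeMounts:
            - name: cni-bin-dir
              mountPath: /host/opt/cni/bin
      volumes:
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin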

There are no announcements; we have simply been migrating our Kubernetes installations to self hosted, as it is a bit easier to manage and, I believe, the preferred way for Kubernetes (maybe @caseydavenport could chime in with more info there). I have a similar PR open with coreos-kubernetes here:

coreos/coreos-kubernetes#768

You can find more info on Calico self hosted installs here:

http://docs.projectcalico.org/v1.6/getting-started/kubernetes/installation/hosted/

annotations:
  scheduler.alpha.kubernetes.io/critical-pod: ''
  scheduler.alpha.kubernetes.io/tolerations: |
    [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
@mumoshu (Contributor) commented Dec 8, 2016:

Does this assume controller nodes are tainted like kubectl taint <node> dedicated=master:NoSchedule while they're Schedulable?
Currently, controller nodes created by kube-aws are not tainted like that, and are Unschedulable.

Contributor:

Ah, if I recall correctly, DaemonSets don't respect Unschedulable, so it doesn't matter.
I'm still wondering about the need to add a toleration like this, though.

Contributor Author:

I believe you are correct with regard to DaemonSets; this was mainly to keep our manifest consistent across deployments. We should be able to remove this if it is an issue.

Contributor:

I like consistency 😄 Just curious, but in which deployment do you taint master nodes like this and then make pods tolerate it?

Contributor Author:

I think kops taints the masters by default, emphasis on the think!

Contributor:

kubeadm also does this by default.

Contributor:

@heschlie @caseydavenport Thanks to your help, I've successfully spotted these 🙇

kops: https://github.com/kubernetes/kops/blob/6c66d18a9c360eac836fb1baf335c09c8597d8e4/protokube/pkg/protokube/tainter.go#L59
kubeadm: https://github.com/kubernetes/kubernetes.github.io/blob/master/docs/getting-started-guides/kubeadm.md#24-initializing-your-master
kubernetes: kubernetes/kubernetes#33530

Please leave the [{"key": "dedicated", "value": "master", "effect": "NoSchedule" }, part as is.
I'm going to make kube-aws adapt to it, i.e. master nodes will be schedulable and tainted, shortly 👍

Contributor:

Sorry for the back and forth, but would you please make the key node.alpha.kubernetes.io/role rather than dedicated, per kubernetes/kubernetes#36272?
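
Applied to the annotation snippet quoted earlier, the requested change would presumably read as follows (value and effect unchanged; the trailing comma marks the rest of the original toleration list, which is untouched):

annotations:
  scheduler.alpha.kubernetes.io/critical-pod: ''
  scheduler.alpha.kubernetes.io/tolerations: |
    [{"key": "node.alpha.kubernetes.io/role", "value": "master", "effect": "NoSchedule" },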

@@ -233,9 +197,6 @@ coreos:
[Service]
Type=oneshot
ExecStartPre=/usr/bin/bash -c "while sleep 1; do if /usr/bin/curl --insecure -s -m 20 -f https://127.0.0.1:10250/healthz > /dev/null ; then break ; fi; done"
{{ if .UseCalico }}
ExecStartPre=/usr/bin/systemctl is-active calico-node
{{ end }}
Contributor:

Any idea how we could hold off or delay running cfn-signal until self-hosted Calico becomes ready?

Contributor Author:

We can probably do something with calicoctl; let me dig into it a bit, and if so I'll push an update to do so.

Contributor:

@heschlie We might be able to get away with a calicoctl node status here.
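
For reference, such a check can be a simple polling loop in the unit's ExecStartPre; a minimal sketch, assuming a placeholder etcd endpoint (the actual loop that was added appears in the service dump later in this thread):

# Sketch: block until calicoctl reports Calico as up, polling every 3s.
# The ETCD_ENDPOINTS value is a placeholder for the cluster's real endpoint.
until /usr/bin/docker run --rm --net=host \
    -e ETCD_ENDPOINTS=https://etcd.example.com:2379 \
    calico/ctl node status > /dev/null; do
  sleep 3
done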

@heschlie (Contributor Author):

@mumoshu I added the new tolerations and a new ExecStartPre to ensure Calico is running.

@mumoshu (Contributor) commented Dec 14, 2016

Note to self: this does not depend on, but wants, #150

@mumoshu (Contributor) commented Dec 16, 2016

@heschlie Hi, thanks for your support here!

Running our E2E test suite revealed that this seems to cause controller nodes to fail while launching. AFAICS, the Docker image referenced as calico/cni:v1.5.2 does not seem to exist.

Could you point me to a valid image, or would you mind fixing it?

CoreOS stable (1185.5.0)
Failed Units: 1
  system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service

journalctl -e:

Dec 16 00:18:18 ip-10-0-0-12.ap-northeast-1.compute.internal bash[2817]: Status: Downloaded newer image for calico/ctl:latest
Dec 16 00:18:18 ip-10-0-0-12.ap-northeast-1.compute.internal kubelet-wrapper[1510]: E1216 00:18:18.372871    1510 cni.go:163] error updating cni config: No networks found in /etc/kubernetes/cni/net.d
Dec 16 00:18:28 ip-10-0-0-12.ap-northeast-1.compute.internal kubelet-wrapper[1510]: E1216 00:18:28.377445    1510 cni.go:163] error updating cni config: No networks found in /etc/kubernetes/cni/net.d
Dec 16 00:18:33 ip-10-0-0-12.ap-northeast-1.compute.internal kubelet-wrapper[1510]: W1216 00:18:33.227208    1510 container.go:352] Failed to create summary reader for "/docker/354a7c31321c6116af6b28d825b62f4118cd91470062e23f476b1263347aa505": none of the resources are being tracked.
Dec 16 00:18:38 ip-10-0-0-12.ap-northeast-1.compute.internal kubelet-wrapper[1510]: E1216 00:18:38.378094    1510 cni.go:163] error updating cni config: No networks found in /etc/kubernetes/cni/net.d
Dec 16 00:18:45 ip-10-0-0-12.ap-northeast-1.compute.internal kubelet-wrapper[1510]: W1216 00:18:45.752590    1510 container.go:352] Failed to create summary reader for "/docker/f70ecf8f1dd23734b78c11f78a50ea7ae2305ca605be417399c9b2894c889438": none of the resources are being tracked.
Dec 16 00:18:47 ip-10-0-0-12.ap-northeast-1.compute.internal kubelet-wrapper[1510]: E1216 00:18:47.786922    1510 docker_manager.go:746] Logging security options: {key:seccomp value:unconfined msg:}
Dec 16 00:18:48 ip-10-0-0-12.ap-northeast-1.compute.internal dockerd[1402]: time="2016-12-16T00:18:48.017117980Z" level=error msg="Handler for GET /images/calico/cni:v1.5.2/json returned error: No such image: calico/cni:v1.5.2"
Dec 16 00:18:48 ip-10-0-0-12.ap-northeast-1.compute.internal kubelet-wrapper[1510]: E1216 00:18:48.390832    1510 cni.go:163] error updating cni config: No networks found in /etc/kubernetes/cni/net.d
*snip*
Dec 16 00:24:00 ip-10-0-0-12.ap-northeast-1.compute.internal systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service: Start operation timed out. Terminating.
Dec 16 00:24:00 ip-10-0-0-12.ap-northeast-1.compute.internal systemd[1]: Failed to start Load cloud-config from /usr/share/oem/cloud-config.yml.
Dec 16 00:24:00 ip-10-0-0-12.ap-northeast-1.compute.internal systemd[1]: Dependency failed for Load system-provided cloud configs.
Dec 16 00:24:00 ip-10-0-0-12.ap-northeast-1.compute.internal systemd[1]: Dependency failed for Load user-provided cloud configs.
Dec 16 00:24:00 ip-10-0-0-12.ap-northeast-1.compute.internal systemd[1]: user-config.target: Job user-config.target/start failed with result 'dependency'.
Dec 16 00:24:00 ip-10-0-0-12.ap-northeast-1.compute.internal systemd[1]: system-config.target: Job system-config.target/start failed with result 'dependency'.
Dec 16 00:24:00 ip-10-0-0-12.ap-northeast-1.compute.internal systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service: Unit entered failed state.
Dec 16 00:24:00 ip-10-0-0-12.ap-northeast-1.compute.internal systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service: Failed with result 'timeout'.

@mumoshu mumoshu modified the milestones: v0.9.3-rc.2, v0.9.3-rc.1 Dec 16, 2016
@heschlie (Contributor Author):

@mumoshu I'm not seeing this problem; when I set up the cluster, all of the Calico-related pods come online without any issues. I've verified the image is up and tagged on Docker Hub as well.

@heschlie (Contributor Author):

@mumoshu I've gone and updated the images anyway, so they now match Calico v2.0 instead of v1.6 (we just released v2.0 this week). Could you give it another shot and let me know?

@mumoshu (Contributor) commented Dec 17, 2016

@heschlie Thanks for the quick follow-up!

I've investigated it a bit further. Now it seems to be hanging, retrying calico/ctl node status:

● cfn-signal.service
   Loaded: loaded (/etc/systemd/system/cfn-signal.service; static; vendor preset: disabled)
   Active: activating (start-pre) since Sat 2016-12-17 01:26:50 UTC; 6h ago
  Process: 1408 ExecStartPre=/usr/bin/bash -c while sleep 1; do if /usr/bin/curl -s -m 20 -f  http://127.0.0.1:8080/healthz > /dev/null &&  /usr/bin/curl -s -m 20 -f  http://127.0.0.1:10252/healthz > /dev/null && /usr/bin/curl -s -m 20 -f  http://127.0.0.1:10251/healthz > /dev/null &&  /usr/bin/curl --insecure -s -m 20 -f  https://127.0.0.1:10250/healthz > /dev/null ; then break ; fi;  done (code=exited, status=0/SUCCESS)
Cntrl PID: 3288 (bash)
    Tasks: 2
   Memory: 1.7M
      CPU: 2min 3.271s
   CGroup: /system.slice/cfn-signal.service
           └─control
             ├─ 3288 /usr/bin/bash -c until /usr/bin/docker run --rm --net=host -e ETCD_ENDPOINTS=https://ip-10-0-0-4.ap-northeast-1.compute.internal:2379 calico/ctl node status > /dev/null; do sleep 3; done
             └─20259 sleep 3

Dec 17 01:30:10 ip-10-0-0-52.ap-northeast-1.compute.internal bash[3288]: b7c0cb9514db: Download complete
Dec 17 01:30:11 ip-10-0-0-52.ap-northeast-1.compute.internal bash[3288]: 7417e9c0298e: Verifying Checksum
Dec 17 01:30:11 ip-10-0-0-52.ap-northeast-1.compute.internal bash[3288]: 7417e9c0298e: Download complete
Dec 17 01:30:11 ip-10-0-0-52.ap-northeast-1.compute.internal bash[3288]: fae91920dcd4: Pull complete
Dec 17 01:30:11 ip-10-0-0-52.ap-northeast-1.compute.internal bash[3288]: a3ed95caeb02: Pull complete
Dec 17 01:30:12 ip-10-0-0-52.ap-northeast-1.compute.internal bash[3288]: da6b2fec8d2d: Pull complete
Dec 17 01:30:17 ip-10-0-0-52.ap-northeast-1.compute.internal bash[3288]: b7c0cb9514db: Pull complete
Dec 17 01:30:17 ip-10-0-0-52.ap-northeast-1.compute.internal bash[3288]: 7417e9c0298e: Pull complete
Dec 17 01:30:17 ip-10-0-0-52.ap-northeast-1.compute.internal bash[3288]: Digest: sha256:9409070962eb43bc3aa5e890f1fdbddf12625eca1d3fc8444916d8e411e310a4
Dec 17 01:30:17 ip-10-0-0-52.ap-northeast-1.compute.internal bash[3288]: Status: Downloaded newer image for calico/ctl:latest

I guess you can reproduce this by enabling an experimental feature called waitSignal by editing cluster.yaml:

experimental:
  waitSignal:
    enabled: true

@mumoshu (Contributor) commented Dec 17, 2016

Running calico/ctl node status alone produces messages like:

core@ip-10-0-0-52 ~ $ /usr/bin/docker run --rm --net=host -e ETCD_ENDPOINTS=https://ip-10-0-0-4.ap-northeast-1.compute.internal:2379 calico/ctl node status
Calico process is not running.

@mumoshu mumoshu modified the milestones: v0.9.3-rc.3, v0.9.3-rc.2 Dec 20, 2016
During the waitSignal we instead download the calicoctl binary and run
it, as opposed to using a Docker container
@heschlie (Contributor Author):

@mumoshu I've sorted the WaitSignal stuff; there is a process namespace issue with running in a Docker container. We could work around it, but I found just downloading the binary to be more elegant. Let me know if that is an issue.

If it all checks out on your end I'll squash the commits!
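
A minimal sketch of what the binary-based check presumably looks like; the calicoctl release URL, version, and etcd endpoint are illustrative assumptions, not the exact values in this PR:

# Sketch only -- the calicoctl release URL, version, and etcd endpoint
# below are illustrative assumptions, not the exact values in this PR.
curl -sSL -o /opt/bin/calicoctl \
  https://github.com/projectcalico/calicoctl/releases/download/v1.0.0/calicoctl
chmod +x /opt/bin/calicoctl
# Running the binary on the host sidesteps the process namespace issue:
# calicoctl node status inspects host processes, which it cannot see
# from inside a container's PID namespace.
until ETCD_ENDPOINTS=https://etcd.example.com:2379 \
    /opt/bin/calicoctl node status > /dev/null; do
  sleep 3
done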

@@ -723,7 +903,7 @@ write_files:
}
{{ end }}

- path: /srv/kubernetes/manifests/kube-dns-autoscaler-de.yaml
- path: /srv/kubernetes/manifests/kube-dns-rc.yaml
Contributor:

Thanks for rebasing, but this should be kube-dns-autoscaler-de.yaml, as kube-dns has been migrated from an rc (ReplicationController) to a de (Deployment) and is at https://github.com/coreos/kube-aws/pull/124/files#diff-a6019b6709ad6c3c74954ac000740bc5R942!

@heschlie (Contributor Author) commented Jan 5, 2017

@mumoshu scratch the part about --availability-zones; it turns out my master branch was behind. Now that it is updated, I am hitting the same error on both my branch and master:

Creating AWS resources. This should take around 5 minutes.
Error: Error creating cluster: Stack creation failed: CREATE_FAILED : The following resource(s) failed to create: [InstanceEtcd0, AutoScaleController, LaunchConfigurationWorker]. 

Printing the most recent failed stack events:
CREATE_FAILED AWS::CloudFormation::Stack kubeawstest1 The following resource(s) failed to create: [InstanceEtcd0, AutoScaleController, LaunchConfigurationWorker].
CREATE_FAILED AWS::AutoScaling::LaunchConfiguration LaunchConfigurationWorker Security groups list cannot contain empty or null element

The stack.json for the LaunchConfigurationWorker seems to contain two empty strings:

"SecurityGroups": [
                    "",
                    "",
                    {
                        "Ref": "SecurityGroupWorker"
                    }
                ],

Is there an environment variable I need to set to have this filled out?

@mumoshu (Contributor) commented Jan 6, 2017

@heschlie Thanks for your continuous efforts on this 🙇

That's odd. AFAICS, security groups are populated here and/or here if and only if the KUBE_AWS_DEPLOY_TO_EXISTING_VPC env var is non-empty. Could you provide me with the list of env vars you've passed to kube-aws?

For me, they're at minimum:

  • KUBE_AWS_KEY_NAME
  • KUBE_AWS_KMS_KEY_ARN
  • KUBE_AWS_DOMAIN
  • KUBE_AWS_REGION
  • KUBE_AWS_AVAILABILITY_ZONE
  • KUBE_AWS_HOSTED_ZONE_ID
  • KUBE_AWS_S3_DIR_URI
  • KUBE_AWS_SSH_KEY
  • KUBE_AWS_AZ_1
  • DOCKER_REPO=quay.io/mumoshu/
  • SSH_PRIVATE_KEY

And I tend to run the ./run script like:

$ KUBE_AWS_DEPLOY_TO_EXISTING_VPC=1 KUBE_AWS_CLUSTER_AUTOSCALER_ENABLED=1 KUBE_AWS_NODE_POOL_INDEX=1 KUBE_AWS_AWS_NODE_LABELS_ENABLED=1 KUBE_AWS_NODE_LABELS_ENABLED=1 KUBE_AWS_WAIT_SIGNAL_ENABLED=1 KUBE_AWS_AWS_ENV_ENABLED=1 KUBE_AWS_USE_CALICO=true KUBE_AWS_CLUSTER_NAME=kubeawstest1 sh -c './run all'

@heschlie (Contributor Author) commented Jan 6, 2017

Heya, here are my env vars:

export KUBE_AWS_KEY_NAME=<my-key>
export KUBE_AWS_KMS_KEY_ARN="arn:aws:kmsmy-arn-key"
export KUBE_AWS_DOMAIN=testing.heschlie.com
export KUBE_AWS_REGION=us-east-1
export KUBE_AWS_AVAILABILITY_ZONE="us-east-1b"
export KUBE_AWS_HOSTED_ZONE_ID=<zone-id>
export KUBE_AWS_AZ_1="us-east-1b"
export KUBE_AWS_USE_CALICO=true
export KUBE_AWS_CLUSTER_NAME="kubeawstest1"
export KUBE_AWS_S3_DIR_URI="s3://schlie-kube-aws"

export DOCKER_REPO=quay.io/mumoshu/
export SSH_PRIVATE_KEY=/home/heschlie/.ssh/id_dsa

KUBE_AWS_DEPLOY_TO_EXISTING_VPC=1 \
KUBE_AWS_CLUSTER_AUTOSCALER_ENABLED=1 \
KUBE_AWS_NODE_POOL_INDEX=1 \
KUBE_AWS_AWS_NODE_LABELS_ENABLED=1 \
KUBE_AWS_NODE_LABELS_ENABLED=1 \
KUBE_AWS_WAIT_SIGNAL_ENABLED=1 \
KUBE_AWS_AWS_ENV_ENABLED=1 \
KUBE_AWS_USE_CALICO=true \
KUBE_AWS_CLUSTER_NAME=kubeawstest1 sh -c './run all'

A couple of the env vars are duplicated there, but I don't see that causing any issues.

@mumoshu (Contributor) commented Jan 6, 2017

@heschlie I don't see anything suspicious for now, but could you try again without KUBE_AWS_DEPLOY_TO_EXISTING_VPC=1?

@heschlie (Contributor Author) commented Jan 9, 2017

@mumoshu Thanks, that seemed to do it. I've had one successful run; going to try again to see if I can get it into a failed state. As @caseydavenport had mentioned, it does look like an issue with kube-proxy; hopefully I can get a bit more insight into what is wrong once I get it into a failed state.

@mumoshu mumoshu modified the milestones: v0.9.3, v0.9.3-rc.3 Jan 11, 2017
@heschlie (Contributor Author):

@mumoshu I'm having trouble getting a cluster into a failed state; my E2E tests seem to be passing on my branch. If you still have a cluster in a failed state, or can get one into a failed state, can you get some info for me?

  • Describe and logs from the proxy pods
  • Describe and logs from the calico-node and calico-policy pods (or calicoctl diags dump info)

@heschlie heschlie closed this Jan 11, 2017
@heschlie heschlie reopened this Jan 11, 2017
@redbaron (Contributor):

@heschlie, just for my education, am I right that this calico-node integration affects network policies only, and networking itself is still done with flanneld?

@caseydavenport (Contributor):

@redbaron When using canal (Calico + Flannel), Calico will enforce network policies and also configure the container<->host networking (i.e. the container's veth and corresponding route) through its CNI plugin, and flannel will perform host<->host networking.

@redbaron (Contributor):

Why is flanneld needed then? Hosts can speak to each other just fine.

@caseydavenport (Contributor) commented Jan 11, 2017

@redbaron Sorry, I should have been clearer.

Calico is used to get traffic from the container to the host it lives on and vice-versa.
Flannel is used to get container traffic from the host it lives on to another host (not host traffic to another host). It uses encapsulation so that this can occur even when the underlying network infrastructure isn't aware of container IP addresses.

You may want to check out this repo: https://github.com/projectcalico/canal

heschlie and others added 2 commits January 13, 2017 13:09
Bumped the policy controller to 0.5.2 to get a NoneType bugfix in
@mumoshu mumoshu merged commit 74efe5d into kubernetes-retired:master Jan 15, 2017
@mumoshu (Contributor) commented Jan 15, 2017

@caseydavenport Thanks for your support here 🙇

@heschlie There's no way to know what had been happening on my cluster before, but anyway, the conformance test is now passing! I'm merging this.

It has already been a month since you submitted this PR!
I'm sorry for leaving this open so long, but I was glad to collaborate with you. Thanks a lot for your continuous efforts on this 🙇

@@ -367,6 +313,10 @@ write_files:
owner: root:root
content: |
#!/bin/bash -e
{{ if .UseCalico }}
/bin/bash /opt/bin/populate-tls-calico-etcd
/usr/bin/docker run --rm --net=host -v /srv/kubernetes/manifests:/host/manifests {{.HyperkubeImageRepo}}:{{.K8sVer}} /hyperkube kubectl apply -f /host/manifests/calico.yaml
Contributor:

Why not curl -XPOST like the rest of the script? Or maybe it's better to change the curl calls to kubectl calls?


/usr/bin/cp /srv/kubernetes/manifests/calico-policy-controller.yaml /etc/kubernetes/manifests
Contributor:

The /srv/kubernetes/manifests/calico-policy-controller.yaml file seems to be left in userdata and doesn't seem to be used.

content: |
#!/bin/bash -e
/usr/bin/curl -H "Content-Type: application/json" -XPOST --data-binary @"/srv/kubernetes/manifests/calico-system.json" "http://127.0.0.1:8080/api/v1/namespaces"
Contributor:

Looks like /srv/kubernetes/manifests/calico-system.json is left in userdata and not used anywhere.

@redbaron (Contributor):

Just discovered that k8s doesn't support rolling updates of DaemonSets, meaning this change made updating the Calico daemons both easier and harder :(

@caseydavenport (Contributor):

@redbaron Yeah, currently what you can do is update the DaemonSet and then do a manual update of each pod.

There is a proposal expected to land in v1.6 for server-side DaemonSet rolling updates: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/daemonset-update.md
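
Until that lands, a manual roll might look like the sketch below; the k8s-app=calico-node label and manifest path are assumptions, and a real script would poll pod readiness rather than sleep:

# Sketch: push the new pod template, then recreate the pods one at a time.
kubectl apply -f calico.yaml
for pod in $(kubectl get pods -n kube-system -l k8s-app=calico-node \
    -o jsonpath='{.items[*].metadata.name}'); do
  # The DaemonSet controller recreates each deleted pod with the new template.
  kubectl delete pod -n kube-system "$pod"
  sleep 30
done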

@cknowles (Contributor):

@redbaron There's also this DaemonSet upgrade controller, if you are OK with patching in the meantime.

camilb added a commit to camilb/kube-aws that referenced this pull request Feb 28, 2017
* coreos/master: (132 commits)
  fix: Spot Fleet doesn't support the t2 instance family
  Fix node pools on master
  Allow option to disable certificates management (kubernetes-retired#243)
  Bump to k8s 1.5.2
  Update README.md
  Update ROADMAP.md
  Update ROADMAP.md
  Update ROADMAP.md
  Update the inline documentation in cluster.yaml
  typo
  Don't fail sed if some files are missing
  Workaround systemd issues with oneshot autorestarts
  etcd static IP addressing overhaul
  Calico self hosted integration (kubernetes-retired#124)
  Fix lint.
  bugfix for a typo in install-kube-system scripts
  Update README.md
  fix(e2e): Correctly wait for a node pool stack for deletion
  Don't require key-name param during cluster init
  Propagate SSHAuthorizedKeys to nodepools
  ...
camilb added a commit to camilb/kube-aws that referenced this pull request Apr 21, 2017
* coreos/master: (49 commits; the listed subset is the same as above)
kylehodgetts pushed a commit to HotelsDotCom/kube-aws that referenced this pull request Mar 27, 2018
feat: Calico self hosted integration
* Migrated Calico to self hosted install
* Updated Calico to v2.0 versions
* Bumped the policy controller to 0.5.2 to get a NoneType bugfix in