This repository has been archived by the owner. It is now read-only.
This repository has been deprecated because it is based on a pre-v1alpha1 version of the Cluster API that is no longer supported. Please see cma-ssh or the Cluster API for alternative implementations.

Kubernetes cluster-api-provider-ssh Project

This repository hosts an implementation of a provider using SSH for the cluster-api project.

Community, discussion, contribution, and support

Learn how to engage with the Kubernetes community on the community page.

You can reach the maintainers of this project at:

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.

Development notes

Obtaining the code

go get github.com/samsung-cnct/cluster-api-provider-ssh
cd $GOPATH/src/github.com/samsung-cnct/cluster-api-provider-ssh

Generating cluster, machine, and provider-components files

Follow the instructions here.

Deploying a cluster

clusterctl needs access to the private key in order to finalize the new internal cluster.

eval $(ssh-agent)
ssh-add <private key file>
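
Before running clusterctl, it can help to confirm the agent actually holds the key. A minimal check (a sketch; the key path is only an example, substitute your own key file):

```shell
# Load a key into the agent and verify at least one key is listed.
eval "$(ssh-agent -s)"
ssh-add "$HOME/.ssh/id_rsa"   # example path; use your private key file
if ssh-add -l >/dev/null 2>&1; then
    echo "agent holds at least one key"
else
    echo "no keys loaded; clusterctl will not be able to reach the machines" >&2
fi
```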

Build the clusterctl binary:

make compile

  • Run using minikube¹:

⚠️ Warning: You must use minikube version 0.28.0 only.

./bin/clusterctl create cluster --provider ssh \
    -c ./clusterctl/examples/ssh/out/cluster.yaml \
    -m ./clusterctl/examples/ssh/out/machines.yaml \
    -p ./clusterctl/examples/ssh/out/provider-components.yaml

  • Run using an existing external cluster:

./bin/clusterctl create cluster --provider ssh \
    --existing-bootstrap-cluster-kubeconfig /path/to/kubeconfig \
    -c ./clusterctl/examples/ssh/out/cluster.yaml \
    -m ./clusterctl/examples/ssh/out/machines.yaml \
    -p ./clusterctl/examples/ssh/out/provider-components.yaml

Validate your new cluster:

export KUBECONFIG=${PWD}/kubeconfig
kubectl get nodes
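
Nodes can take several minutes to register and become Ready. A small polling loop (a sketch, assuming KUBECONFIG is exported as above and kubectl is on your PATH) saves re-running the command by hand:

```shell
# Poll kubectl until every registered node reports Ready (up to ~5 minutes).
for i in $(seq 1 30); do
    nodes=$(kubectl get nodes --no-headers 2>/dev/null)
    # break when at least one node exists and none lack the " Ready " status
    if [ -n "$nodes" ] && ! echo "$nodes" | grep -qv ' Ready '; then
        echo "all nodes Ready"
        break
    fi
    echo "waiting for nodes... ($i/30)"
    sleep 10
done
```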

Building and deploying new controller images for development

To test custom changes to either the machine controller or the cluster controller, you need to build and push new images to a repository. There are make targets for this.

For example:

  • Push both the ssh-cluster-controller and ssh-machine-controller images:

make dev_push

  • Push only the ssh-machine-controller image:

make dev_push_machine

  • Push only the ssh-cluster-controller image:

make dev_push_cluster

The images will be tagged with the username of the account you used to build and push them.

Remember to change the provider-components.yaml manifest to point to your images. For example:

diff --git a/clusterctl/examples/ssh/provider-components.yaml.template b/clusterctl/examples/ssh/provider-components.yaml.template
index 8fac530..3d6c246 100644
--- a/clusterctl/examples/ssh/provider-components.yaml.template
+++ b/clusterctl/examples/ssh/provider-components.yaml.template
@@ -45,7 +45,7 @@ spec:
             cpu: 100m
             memory: 30Mi
       - name: ssh-cluster-controller
-        image: gcr.io/k8s-cluster-api/ssh-cluster-controller:0.0.1
+        image: gcr.io/k8s-cluster-api/ssh-cluster-controller:paul
         volumeMounts:
           - name: config
             mountPath: /etc/kubernetes
@@ -69,7 +69,7 @@ spec:
             cpu: 400m
             memory: 500Mi
       - name: ssh-machine-controller
-        image: gcr.io/k8s-cluster-api/ssh-machine-controller:0.0.1
+        image: gcr.io/k8s-cluster-api/ssh-machine-controller:paul
         volumeMounts:
           - name: config
             mountPath: /etc/kubernetes
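
Hand-editing works, but the same substitution can also be scripted. A sketch, assuming your pushed tag is your username (the `paul` tag above is just an example) and the manifest sits at the template path shown in the diff:

```shell
# Rewrite both controller image tags from 0.0.1 to your username.
TAG=$(whoami)
sed -i.bak \
    -e "s|\(ssh-cluster-controller\):0\.0\.1|\1:${TAG}|" \
    -e "s|\(ssh-machine-controller\):0\.0\.1|\1:${TAG}|" \
    clusterctl/examples/ssh/provider-components.yaml.template
```

The `-i.bak` flag keeps a backup of the original manifest alongside the edited copy.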

¹ If using minikube on Linux, you may prefer the kvm2 driver. To do so, install the driver and then add the --vm-driver=kvm2 flag.
