- Download and install Terraform.
- Create a service account credential of type JSON via https://console.cloud.google.com/apis/credentials, download it, and save it as `google.json` in the `credentials` folder.
- Clone this repository.
- Upload your public SSH key at https://console.cloud.google.com/compute/metadata/sshKeys and use the corresponding **Username** value shown in the console as the `default_user_name` value in `vars.tf`.
- Run:

  ```shell
  terraform init && terraform plan -out "run.plan" && terraform apply "run.plan"
  ```

  Please note the environment name prompted for during `plan` may be `dev`, `tst`, or any other stage.
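Before running the plan, a quick sanity check (a convenience, not part of the upstream flow): a service-account key file always carries `"type": "service_account"`, so `jq` can confirm the download is the right kind of key. The file below is simulated for illustration; in practice point `jq` at `credentials/google.json`.

```shell
# Simulated service-account key file (real keys carry more fields)
echo '{"type":"service_account","project_id":"my-project"}' > /tmp/google.json
# A valid key prints: service_account
jq -r '.type' /tmp/google.json
```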
Ansible now has a Terraform module, and a playbook YAML file is included in this repository along with a sample inventory using `localhost`.

- Clone this repository on the Ansible box:

  ```shell
  cd /data && git clone https://github.com/dwaiba/gcp-terraform && cd gcp-terraform
  ```

- Check the `project_dir` variable in the `gcp-terraform_playbook.yml` file and change it as required.
- Change the other variables as required in `gcp-terraform_playbook.yml`.
- Kick off the playbook:

  ```shell
  ansible-playbook -i inventory gcp-terraform_playbook.yml
  ```
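The sample inventory targets `localhost`; a minimal shape for such an inventory (an assumption for illustration, the repository ships its own copy) looks like:

```ini
[local]
localhost ansible_connection=local
```

With `ansible_connection=local`, the playbook runs Terraform directly on the Ansible box rather than over SSH.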
Pre-reqs:

- `gcloud` should be installed. A silent install:

  ```shell
  export USERNAME="<<your_user_name>>" && export SHARE_DATA=/data && \
  su -c "export SHARE_DATA=/data && export CLOUDSDK_INSTALL_DIR=$SHARE_DATA && export CLOUDSDK_CORE_DISABLE_PROMPTS=1 && curl https://sdk.cloud.google.com | bash" $USERNAME && \
  echo "export CLOUDSDK_PYTHON=/usr/local/opt/python@3.8/libexec/bin/python" >> /etc/profile.d/gcloud.sh && \
  echo "source $SHARE_DATA/google-cloud-sdk/path.bash.inc" >> /etc/profile.d/gcloud.sh && \
  echo "source $SHARE_DATA/google-cloud-sdk/completion.bash.inc" >> /etc/profile.d/gcloud.sh
  ```

- Create a service account credential of type JSON via https://console.cloud.google.com/apis/credentials, download it, and save it as `google.json` in the `credentials` folder of the gcp-terraform clone.
- The default user name is the local username.
Plan:

```shell
terraform init && terraform plan -var distro=ubuntu_or_centos -var count_vms=1 -var default_user_name=Your_User_Name -var disk_default_size=100 -var environment=dev -var region=europe-west4 -var machinetag=dev -var zone=europe-west4-a -var projectname=The_Project_Name -out "run.plan"
```

Apply:

```shell
terraform apply "run.plan"
```

Destroy:

```shell
terraform destroy -var count_vms=1 -var default_user_name=Your_User_Name -var disk_default_size=100 -var environment=dev -var region=europe-west4 -var machinetag=dev -var zone=europe-west4-a -var projectname=The_Project_Name
```
This is presently only for the `ubuntu` distro.
## Kill the notebook server

```shell
ps -eaf | grep jupyter | awk '{print $2}' | head -n 1 | xargs kill -9
```

## Install the IRkernel package

```shell
install_pack_irk='install.packages("IRkernel", repos="https://cran.rstudio.com")'
echo $install_pack_irk | sudo R --no-save
```

## IRkernel installspec

```shell
Rscript -e 'IRkernel::installspec()'
```

## Start the notebook server

```shell
cd /data
jupyter notebook --ip 0.0.0.0 > /data/jupyter-notebook-server.log 2>&1 &
```

Get the token from `/data/jupyter-notebook-server.log` and log in to `<ip>:8888` with the token.
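The token appears in the log as part of the login URL Jupyter prints. A small shell sketch of pulling it out (the log line and token value here are made up for illustration):

```shell
# A line of the shape Jupyter writes to its log (token value is made up)
line='    http://0.0.0.0:8888/?token=abc123def456'
# Strip everything up to and including "token="
token="${line##*token=}"
echo "$token"
```

In practice, `grep token= /data/jupyter-notebook-server.log` locates the line first.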
One can create a fully HA k8s cluster using k3sup:

```shell
curl -sLSf https://get.k3sup.dev | sh && sudo install k3sup /usr/local/bin/
```

One can now use k3sup:
- Obtain the public IPs of the running instances:

  ```shell
  gcloud compute instances list
  ```

  or obtain just the public IPs:

  ```shell
  gcloud compute instances list | awk '{print $5}'
  ```
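Note that the `awk '{print $5}'` form also emits the `EXTERNAL_IP` header row. A more robust alternative is to parse the JSON output with `jq`, as the three-box script further down does. Here is that extraction run locally against a trimmed-down sample of the JSON shape `gcloud compute instances list --format json` returns (IPs are made up):

```shell
# Trimmed sample of the gcloud JSON output (only the fields jq touches)
json='[{"networkInterfaces":[{"accessConfigs":[{"natIP":"35.204.1.10"}]}]},
 {"networkInterfaces":[{"accessConfigs":[{"natIP":"35.204.1.11"}]}]}]'
# Prints one public IP per line, no header row
echo "$json" | jq -r '.[].networkInterfaces[].accessConfigs[].natIP'
```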
- Create a cluster with the first IP as master:

  ```shell
  k3sup install --cluster --ip <<Any of the Public IPs>> --user <<Your default gcloud user>> --ssh-key <<the location of the google_compute_engine private key, e.g. ~/.ssh/google_compute_engine>>
  ```
- One can also join another IP as a master or as a node. For a master:

  ```shell
  k3sup join --server --ip <<Any of the other Public IPs>> --user <<Your default gcloud user>> --ssh-key <<the location of the google_compute_engine private key, e.g. ~/.ssh/google_compute_engine>> --server-ip <<The Server Public IP>>
  ```

  or as a normal node:

  ```shell
  k3sup join --ip <<Any of the other Public IPs>> --user <<Your default gcloud user>> --ssh-key <<the location of the google_compute_engine private key, e.g. ~/.ssh/google_compute_engine>> --server-ip <<The Server Public IP>>
  ```
Or one can do it on three boxes via this simple script:

```shell
terraform init && terraform plan -var count_vms=3 -var default_user_name=<> -var disk_default_size=20 -var environment=dev -var projectname=<> -out gcp.plan && terraform apply gcp.plan
export SERVER_IP=$(gcloud compute instances list --filter=tags.items=rancher --format json | jq -r '.[].networkInterfaces[].accessConfigs[].natIP' | head -n 1)
k3sup install --cluster --ip $SERVER_IP --user $(whoami) --ssh-key ~/.ssh/google_compute_engine --k3s-extra-args '--no-deploy traefik --docker'
gcloud compute instances list --filter=tags.items=rancher --format json | jq -r '.[].networkInterfaces[].accessConfigs[].natIP' | tail -n +2 | xargs -I {} k3sup join --server-ip $SERVER_IP --ip {} --user $(whoami) --ssh-key ~/.ssh/google_compute_engine --k3s-extra-args --docker
```
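The `head -n 1` / `tail -n +2` split is what makes the first IP the server and the rest the joiners. The pattern can be illustrated locally with made-up IPs:

```shell
ips='35.204.1.10
35.204.1.11
35.204.1.12'
# First line becomes the server
echo "$ips" | head -n 1
# Remaining lines are fed to the join command, one at a time
echo "$ips" | tail -n +2 | xargs -I {} echo "join {}"
```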
```shell
export KUBECONFIG=$(pwd)/kubeconfig
kubectl get nodes -o wide -w
kubectl apply -f pd.yaml
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
kubectl patch storageclass slow -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
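The two `kubectl patch` calls apply a merge-style patch that only touches the default-class annotation, leaving the rest of each StorageClass intact. To see the effect of such a merge locally, one can simulate it with `jq`'s recursive-merge operator (the object shape here is a trimmed-down assumption, not a full StorageClass):

```shell
obj='{"metadata":{"name":"slow","annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
patch='{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
# jq's * merges objects recursively: only the annotation value changes
echo "$obj" | jq -c --argjson p "$patch" '. * $p'
```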
Please report bugs by opening an issue in the GitHub issue tracker; an issue template is defined for bug reports.

Patches can be submitted as GitHub pull requests. Please make sure your branch applies to the current master as a fast-forward merge (i.e. without creating a merge commit); use `git rebase` to update your branch against the current master if necessary.

- Please see the LICENSE file for licensing information.
- Please see the Code of Conduct.