[DEPRECATED] messaging deployed in the core infrastructure
DEPRECATED in favor of https://github.com/WideChat/Rocket.Chat/wiki/messaging-deployed-using-gitOps-into-the-vega-infrastructures
- The messaging app can be deployed in the core-infrastructure through spinnaker.
- The pipeline configurations can be found here
- The deployment files referenced in the pipelines can be found here
- The docker image you want to deploy must be available in `vega-docker-<dev, preprod, or prod>.docker.artifactory.viasat.com/messaging:<tag_name>`
- Use rancher to create a namespace `messaging` (a plain kubectl alternative is sketched after this list)
- Run `kubectl apply -f mongodb_config_for_helm.yaml`
- Run `kubectl create secret tls messaging-tls --cert=./secrets/messaging-vega-dev-viasat-io-bundle.crt --key=./secrets/messaging-vega-dev-key.key --namespace messaging`
- Run the `messaging_deploy` pipeline in spinnaker
- Run the `mongo_backup_deploy` pipeline in spinnaker
- Attach the backup IAM policy to the rancher role as described below under Backup job requirements
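If rancher is not at hand, the same namespace can also be created directly with kubectl; this is offered only as a convenience and is not the documented path.

```
# Alternative to creating the namespace via rancher (not the documented path).
kubectl create namespace messaging
# Confirm it exists before applying the configmap and TLS secret.
kubectl get namespace messaging
```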
- There are two pipelines available in spinnaker under the Application name 'messaging'
- One pipeline deploys the app, and one deploys the mongodb backup cronjobs, pods, and configurations.
- TODO: another pipeline for on-demand mongo restore jobs
- Before running the deploy pipeline, you must manually deploy a configmap into the cluster/namespace that contains the mongodb secrets which will be baked into the values.yaml file at the time of deployment.
- You can find this file in s3://environment-name-spinnaker-secrets (ex. vega-preprod-spinnaker-secrets) or the secrets in lastpass under the Shared-Vega directory.
- Download the file to your local machine and run `kubectl apply -f mongodb_config_for_helm.yaml` (a hedged sketch of this file's likely shape follows below).
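For orientation, here is a minimal sketch of what `mongodb_config_for_helm.yaml` might look like. The configmap name, key names (`MONGO_URL`, `MONGO_OPLOG_URL`), and values are assumptions based on a typical Rocket.Chat deployment, not the real file; always use the copy from s3 or lastpass.

```
# Hypothetical sketch only -- the real mongodb_config_for_helm.yaml must be
# downloaded from s3://<environment-name>-spinnaker-secrets or lastpass.
# The configmap name, keys, and values below are assumptions.
kubectl apply --namespace messaging -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-config-for-helm     # assumed name
  namespace: messaging
data:
  MONGO_URL: "mongodb://<user>:<password>@<mongo-host>:27017/rocketchat"   # placeholder
  MONGO_OPLOG_URL: "mongodb://<user>:<password>@<mongo-host>:27017/local"  # placeholder
EOF
```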
- Backup cron jobs rely on an s3 bucket which, for the moment, will be manually created in the AWS console.
- Bucket naming convention is "<infrastructure-name>-messaging-mongobackups" (exs. vega-dev-messaging-mongobackups, vega-preprod-messaging-mongobackups). NOTE: In prod we may have more than one deployment of a messaging server, and the bucket name should include a differentiator, ex: vega-prod-chatbot-messaging-mongobackups. A CLI alternative to the console is sketched below.
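If you would rather script the bucket creation than click through the console, something like the following works; the region and the public-access hardening are assumptions, not documented requirements.

```
# Create the dev backup bucket from the CLI (region is an example).
aws s3 mb s3://vega-dev-messaging-mongobackups --region us-west-2

# Optional hardening, assumed rather than documented: block all public access.
aws s3api put-public-access-block \
  --bucket vega-dev-messaging-mongobackups \
  --public-access-block-configuration \
  BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```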
- The rancher-role in AWS IAM must include a policy that allows access from the cluster to the bucket.
- Manually create this policy and attach it to the rancher-role (a CLI sketch for creating and attaching it follows the policy JSON below):
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::vega-dev-messaging-mongobackups"
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::vega-dev-messaging-mongobackups/*"
}
]
}
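One way to create and attach this policy without the console is sketched below; the policy and file names are examples, `<account-id>` stands in for the target AWS account, and it assumes the role is literally named rancher-role.

```
# Save the JSON above as messaging-mongobackups-policy.json, then:
aws iam create-policy \
  --policy-name messaging-mongobackups-s3 \
  --policy-document file://messaging-mongobackups-policy.json

# Attach it to the rancher-role used by the cluster (role name assumed).
aws iam attach-role-policy \
  --role-name rancher-role \
  --policy-arn arn:aws:iam::<account-id>:policy/messaging-mongobackups-s3
```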
- The messaging app hostname is configured as a parameter in the pipeline.
- We use the viasat.io CLI to configure CNAME records which will point at the NLB in AWS where our core-infrastructure clusters can be reached.
- The hostname should use this convention: messaging.<infrastructure-name>.viasat.io (exs. messaging.vega-dev.viasat.io, messaging.vega-preprod.viasat.io). A quick DNS check is sketched below.
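Once the CNAME has been created, a quick resolution check (a suggestion, not part of the documented procedure) can confirm it points at the NLB:

```
# Show the CNAME target for the dev hostname (hostname is an example).
dig +short CNAME messaging.vega-dev.viasat.io
# Confirm the name ultimately resolves to the NLB's addresses.
dig +short messaging.vega-dev.viasat.io
```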
- A kubernetes tls secret must be configured manually for now.
- Create a new cert in digicert if needed.
- The keyfile and the cert file should be on your local machine in this repo under a directory named `secrets`. These files will not be versioned in git.
- Run this command: `kubectl create secret tls messaging-tls --cert=./secrets/messaging-vega-dev-viasat-io-bundle.crt --key=./secrets/messaging-vega-dev-key.key --namespace messaging` (an optional cert/key pairing check is sketched below)
- TODO: host these secrets in lastpass
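Before creating the secret, it can be worth confirming that the cert and key actually belong together; this check is a suggestion (and assumes an RSA key), not part of the documented procedure.

```
# The two digests must match if the cert and key are a pair (RSA assumed).
openssl x509 -noout -modulus -in ./secrets/messaging-vega-dev-viasat-io-bundle.crt | openssl md5
openssl rsa -noout -modulus -in ./secrets/messaging-vega-dev-key.key | openssl md5
```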
- Run these two files in this order to restore mongodb from an archive in S3:
  - `kubectl create -f yamls/skbn-mongo-restore-job.yml`
  - `kubectl create -f yamls/mongo-restore-job.yml`
- NOTE: you must first manually edit the src file at line 29 to reflect the actual archive name you want to restore (a way to list the available archives is sketched below).
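To find the exact archive name to reference at line 29, it may help to list the backup bucket first; using the aws CLI for this is a suggestion, and the bucket name simply follows the convention described above.

```
# List available mongodb backup archives for the dev environment.
aws s3 ls s3://vega-dev-messaging-mongobackups/ --recursive --human-readable
```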
Docker images are pushed directly from github actions to our ECR repositories in AWS on the Verana-dev and Veranda-prod accounts. In order to use them with spinnaker, they have to be pushed manually to Artifactory. We can do this with the kluster tool found here.
- Make sure that your kubectl config is pointing at either the dev or prod cluster.
- ex: `./kluster pod -n widechat -cmd artifactory_push -tag pr-num-157 --new_tag latest`
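To confirm which cluster kubectl is pointing at before running kluster, the standard context commands are enough; the context name used here is an example, not a real kubeconfig entry.

```
# Show the context currently in use.
kubectl config current-context
# List all configured contexts and switch if needed (context name is an example).
kubectl config get-contexts
kubectl config use-context vega-dev-cluster
```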