Jinja2 template system #278
Conversation
Force-pushed from 08eddfb to 43a2447
Thanks for this @jkfran, overall it looks great.
I've made a lot of comments; please don't be put off by the number. This is a large PR with massive changes, so I suspect it will take a few rounds of review to get it ready.
Force-pushed from 43a2447 to 8b71de3
Hi @nottrobin, I have made the changes we agreed on. It would be great if you could check this again.

I have seen that usn.ubuntu.com has this customization to stop the container: […] and it's going to disappear with the templates. Should we keep it?

Also, I have to add the property […]

Let me know what you think about this. Thank you.
Issues I've found in the diff:

staging-api

You're creating […]

ubuntu.com/blog

The generated rules:

```yaml
rules:
- host: ubuntu.com
  http: &http_service
    paths:
    - path: /
      backend:
        serviceName: ubuntu-com
        servicePort: 80
    - path: /blog
      backend:
        serviceName: ubuntu-com-blog
        servicePort: 80
```

Whereas the current one looks like this:

```yaml
rules:
- host: ubuntu.com
  http: &ubuntu-com_service
    paths:
    - path: /blog
      backend:
        serviceName: ubuntu-com-blog
        servicePort: 80
    - path: /
      backend:
        serviceName: ubuntu-com
        servicePort: 80
```

I believe the order of these paths is actually very important: it only works if […]

ubuntu.com SSL redirect

I forget what we discussed about this, but if we're to actually release the production configs this side of Christmas (before the content-cache is applied to ubuntu.com), we probably need to keep the […]

Extra 360 staging service

For some reason, just for 360, we seem to be creating a […]

```diff
--- /tmp/LIVE-804280665/v1.Service.default.threesixty-staging-canonical-com	2019-11-20 14:29:32.651462463 +0000
+++ /tmp/MERGED-686682852/v1.Service.default.threesixty-staging-canonical-com	2019-11-20 14:29:32.651462463 +0000
@@ -0,0 +1,20 @@
+apiVersion: v1
+kind: Service
+metadata:
+  creationTimestamp: "2019-11-20T14:29:32Z"
+  name: threesixty-staging-canonical-com
+  namespace: default
+  selfLink: /api/v1/namespaces/default/services/threesixty-staging-canonical-com
+  uid: e79413ee-e6a6-493b-bd00-a945b57f4509
+spec:
+  ports:
+  - name: http
+    port: 80
+    protocol: TCP
+    targetPort: http
+  selector:
+    app: 360.canonical.com
+  sessionAffinity: None
+  type: ClusterIP
+status:
+  loadBalancer: {}
```

This should just be called […]
The next test I'll do is to actually try deploying all the sites with these new configs on a local microk8s. But I'll do that after we've resolved the issues in my previous comment.
Hey @nottrobin

staging-api

For this site we are currently not using two environments, we just have staging, so I have made some modifications to allow specifying the staging domain instead of generating a new one. The […]

ubuntu.com/blog

Done :)

ubuntu.com SSL redirect

I thought we had agreed to remove it. Okay, I just added a new property: […]

Extra 360 staging service

I fixed it; it was only 360 because it had a name assigned, since the name can't be generated from the domain.
@jkfran thanks for all that.
Yeah, I remember that. My thinking then was that we would be moved to the content-cache pretty soon, which will support HTTPS. But actually we should probably merge this one sooner than that.
@jkfran when I do […]
I believe you can now delete […]
@jkfran okay, I tested this on microk8s with both staging and production configs for […]

Once you've addressed my points above I think we can merge this, and then we just have to roll it out very cautiously next week, checking each site as we go.
LGTM, works great 👍 thanks for all the hard work
Description
This PR completely changes the structure of our deployment configs; we will no longer store Kubernetes objects directly in this repo.
Instead, we will have small files with variable values for each project, and we will generate the Kubernetes output when needed.
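For illustration only, here is a minimal sketch of the two halves of this system: a per-site variables file, and a Jinja2 template that consumes it. The file names, variable names, and fields below are hypothetical (they are not taken from this PR); the real ones live in the `sites` and `templates` folders.

```yaml
# sites/ubuntu-com.yaml (hypothetical per-site variables file)
name: ubuntu-com
domain: ubuntu.com
---
# templates/service.yaml (hypothetical Jinja2 template fragment;
# the {{ ... }} placeholders are filled in per site at generation time)
apiVersion: v1
kind: Service
metadata:
  name: {{ name }}
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
  selector:
    app: {{ domain }}
  type: ClusterIP
```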
List of changes

- […] `templates` folder.
- […] `sites` folder.

New requirements
We have to be sure that the following system dependencies are installed: […]
How it works

Take a look at the updated README.md file to understand how konf.py works.
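Continuing the hypothetical sketch from the description above, the generation step would render a template with one site's variables and emit a plain Kubernetes object, something like the following (an illustration, not actual konf.py output):

```yaml
# Hypothetical rendered output for the sketch in the description
apiVersion: v1
kind: Service
metadata:
  name: ubuntu-com
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
  selector:
    app: ubuntu.com
  type: ClusterIP
```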
QA
Download this gist and place it at the root of this branch.

`qa-templates-checker.sh` is a script that I created to verify that this template system will not produce any unwanted changes to our current configs. Feel free to take a look at it before using it.

How the QA script works
Basically we are using `kubectl diff` to compare against all our configuration in the git branch "master". We first load all the Kubernetes objects in our system, and then we run `kubectl diff` for each project.

QA Instructions
Run `./qa-templates-checker.sh`
Interpret the output
We can see whether there are changes per environment (production or staging), or projects that are missing.
If there are changes, a new file called `diff.log` is generated where you can see them.

The `generation` property in the `diff.log` file can be ignored; it is always going to appear whenever we change any Kubernetes object (see the sketch below).

It doesn't make sense to keep this QA script in the repository after the merge, but it has been necessary for development and for validating this PR.
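For context on why this happens: Kubernetes keeps an integer `metadata.generation` on objects, and the API server bumps it whenever the spec changes, so `kubectl diff` reports it even when nothing else about the object changed. A generic illustration, not output taken from this PR's `diff.log`:

```yaml
# Illustrative fragment of a diffed object (values invented)
metadata:
  generation: 4  # bumped by the API server on every spec change,
                 # which is why this property can safely be ignored
```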
Next steps
Once this PR is approved, we will need to start working on the corresponding changes in Jenkins.