Kubernetes playbooks for postgresql #23
Conversation
updates for repo and container name change
Created by command: /usr/bin/tito tag
My initial impression is that there must be a better way to accomplish this than having distinct roles based on the cluster type. I understand that the APB must be aware of the cluster, but I would think that these deployments would be more alike than not alike.
At first glance this method puts a lot of the onus on the APB developer to create and maintain separate paths for OpenShift vs Kubernetes, which is already true...but if we could make it easier for them that would be better.
I'm still trying to think this through, so this idea hasn't been totally thought out. What if, instead of separating roles, you used conditional logic only at the points where you must deploy an OpenShift- or Kubernetes-specific object? Is there something that an OpenShift DeploymentConfig buys us that isn't available in a Kubernetes Deployment? Are we hurt in OpenShift if we simply create a Deployment?
These are just some of the thoughts that I have. What's happening looks good, I just want to encourage some more conversation about the implications of splitting on the cluster this way for APB development.
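To make the single-role idea above concrete, here is a rough sketch of what conditional logic inside one role could look like. This is only an illustration under assumptions: the `cluster` variable name and the template filenames are hypothetical, and it uses Ansible's generic `k8s` module rather than whatever modules the APB actually ships with.

```yaml
# Hypothetical tasks file: one role, branching on a `cluster` variable
# instead of maintaining two separate OpenShift/Kubernetes roles.
- name: Deploy via DeploymentConfig on OpenShift
  k8s:
    state: present
    definition: "{{ lookup('template', 'deployment_config.yml.j2') | from_yaml }}"
  when: cluster == "openshift"

- name: Deploy via Deployment on Kubernetes
  k8s:
    state: present
    definition: "{{ lookup('template', 'deployment.yml.j2') | from_yaml }}"
  when: cluster == "kubernetes"
```

The trade-off is that shared tasks stay in one place, at the cost of `when:` clauses scattered through the role.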
The apps we have here are more alike than not alike, but this only gets trickier as they get more complex. For instance, a route is not equal to an endpoint. The closest Kubernetes equivalent of a route is an Ingress, which requires you to set up a load balancer like HAProxy in your cluster to handle that resource.
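For concreteness, the two objects look roughly like this (the names, host, and port are illustrative, not from the APB). The Route works out of the box because the router is part of OpenShift; the Ingress only routes traffic if the cluster has an ingress controller such as HAProxy or NGINX installed:

```yaml
# OpenShift Route: handled by the platform's built-in router.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: postgresql
spec:
  to:
    kind: Service
    name: postgresql
---
# Kubernetes Ingress: requires a separately installed ingress controller.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: postgresql
spec:
  rules:
  - host: postgresql.example.com
    http:
      paths:
      - backend:
          serviceName: postgresql
          servicePort: 5432
```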
I understand your point. Here are some of the other options I looked at:
To your point about maintaining two: if folks are using the broker on k8s, then they only need to maintain one playbook. Maintaining two only comes up when you go openshift -> kubernetes. This is part of why I harp on making kubernetes the default, because it makes 3) the solution to the problem of maintaining two playbooks.
I don't think we will be hurt by a Deployment. There are some things that I've heard are different between DeploymentConfigs and Deployments, but nothing game changing. A bigger change might be a route. I don't know all the details on how they are different, but again I don't think it impacts functionality.
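As a sketch of the "just create a Deployment" option: a plain Deployment like the one below is a standard upstream object, so both Kubernetes and OpenShift will accept it. The image and labels here are placeholders, not the APB's actual values, and the trade-off is giving up DeploymentConfig-only features such as automatic image-change triggers.

```yaml
# Minimal Deployment accepted by both cluster types (placeholder values).
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: postgresql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      containers:
      - name: postgresql
        image: registry.example.com/postgresql:latest  # placeholder image
        ports:
        - containerPort: 5432
```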
Thanks David.
Force-pushed from 1ac6ee8 to a7e92e1.
Bug 1510804 - Change tag to database.
…pdate Preserve data when migrating between plans and versions
Force-pushed from e340892 to 4cb007d.
I'm having trouble getting the conflict to resolve. Closing in favor of #27
Mediawiki123 apb
We're going to use the extra_var CLUSTER to determine which
playbook to use. This will allow us to keep the playbooks separate.
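A minimal sketch of that dispatch, assuming the cluster-specific playbooks are split into task files named `provision-openshift.yml` and `provision-kubernetes.yml` (the filenames and layout here are illustrative, not the APB's actual structure):

```yaml
# main.yml: select the cluster-specific tasks from the CLUSTER extra_var,
# e.g.  ansible-playbook main.yml -e CLUSTER=kubernetes
- hosts: localhost
  gather_facts: false
  tasks:
  - name: Run the cluster-specific provision tasks
    include_tasks: "provision-{{ CLUSTER }}.yml"
```

This keeps the two playbooks fully separate while still exposing a single entry point to the broker.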