Kubernetes playbooks for postgresql #23

Closed

Conversation

rthallisey (Contributor)

We're going to use the extra_var CLUSTER to determine which
playbook to use. This will allow us to keep the playbooks separate.
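
Roughly, the idea looks like this; the playbook name, role names, and default value below are illustrative placeholders, not the exact layout in this PR:

```yaml
# provision.yml -- illustrative sketch only; role names and the default
# are placeholders.
# Invoked with something like:
#   ansible-playbook provision.yml -e cluster=kubernetes
- name: postgresql-apb provision
  hosts: localhost
  gather_facts: false
  vars:
    cluster: openshift          # overridden by the CLUSTER extra var
  tasks:
    - name: Run the cluster-specific provision role
      include_role:
        name: "provision-postgresql-apb-{{ cluster }}"
```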

@djzager (Contributor) left a comment


My initial impression is that there must be a better way to accomplish this than having distinct roles based on the cluster type. I understand that the APB must be aware of the cluster, but I would think that these deployments would be more alike than not alike.

At first glance this method puts a lot of the onus on the APB developer to create and maintain separate paths for OpenShift vs Kubernetes, which is already true...but if we could make it easier for them that would be better.

I'm still trying to think this through, so this idea hasn't been totally thought out. What if you did not separate the roles, but instead used conditional logic wherever you must deploy an OpenShift- or Kubernetes-specific object? Is there something that an OpenShift DeploymentConfig buys us that isn't available in a Kubernetes Deployment? Are we hurt in OpenShift if we simply create a Deployment?
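
A rough sketch of what I mean, assuming the generic k8s module and made-up template names (which may not be what the APB actually uses): one shared role, with a conditional only where the objects differ.

```yaml
# Sketch only: a single role, branching just where the resource differs.
# The k8s module and template file names are assumptions.
- name: Create the OpenShift DeploymentConfig
  k8s:
    state: present
    definition: "{{ lookup('template', 'deploymentconfig.yml.j2') | from_yaml }}"
  when: cluster == 'openshift'

- name: Create the Kubernetes Deployment
  k8s:
    state: present
    definition: "{{ lookup('template', 'deployment.yml.j2') | from_yaml }}"
  when: cluster != 'openshift'
```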

These are just some of the thoughts that I have. What's happening looks good, I just want to encourage some more conversation about the implications of splitting on the cluster this way for APB development.

@rthallisey (Contributor, Author)

> My initial impression is that there must be a better way to accomplish this than having distinct roles based on the cluster type. I understand that the APB must be aware of the cluster, but I would think that these deployments would be more alike than not alike.

The apps we have here are more alike than not alike, but this only gets trickier as they get more complex. For instance, a route is not equal to an endpoint. The closest Kubernetes equivalent of an OpenShift route is an Ingress, which requires you to set up a load balancer like HAProxy in your cluster to handle that resource.
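
As a rough illustration (not necessarily something this APB creates), this is roughly what a route maps to on Kubernetes, and it only does anything once an ingress controller like haproxy or nginx is running in the cluster:

```yaml
# Illustrative only -- names and ports are placeholders.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-app
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - backend:
              serviceName: example-app
              servicePort: 8080
```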

> At first glance this method puts a lot of the onus on the APB developer to create and maintain separate paths for OpenShift vs Kubernetes, which is already true...but if we could make it easier for them that would be better.

I understand your point. Here are some of the other options I looked at:

  1. Jinja2-template the resources: I really don't like this one because it makes the templates unreadable (see the sketch after this list).
  2. Abstract the differing resource calls behind an Ansible module: this is very challenging. The module would have to correctly identify the resource the user is asking for and also correctly render the template.
  3. Use Kubernetes resources everywhere.
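
To illustrate why 1) gets ugly, here's a made-up example of a single template with cluster conditionals baked in (the fields are illustrative only, not from this repo):

```yaml
# Sketch of option 1: one Jinja2 template serving both clusters.
# Even a trivial example quickly turns into an if/else maze.
{% if cluster == 'openshift' %}
apiVersion: v1
kind: DeploymentConfig
{% else %}
apiVersion: apps/v1beta1
kind: Deployment
{% endif %}
metadata:
  name: postgresql
spec:
  replicas: 1
{% if cluster == 'openshift' %}
  triggers:
    - type: ConfigChange
{% else %}
  selector:
    matchLabels:
      app: postgresql
{% endif %}
  # ...pod template, containers, etc. omitted for brevity
```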

To your point about maintaining two paths: if folks are using the broker on k8s, then they only need to maintain one playbook. Maintaining two only comes up if you go OpenShift -> Kubernetes. This is part of why I harp on making Kubernetes the default, because it makes 3) the solution to the problem of maintaining two playbooks.

> I'm still trying to think this through, so this idea hasn't been totally thought out. What if you did not separate the roles, but instead used conditional logic wherever you must deploy an OpenShift- or Kubernetes-specific object? Is there something that an OpenShift DeploymentConfig buys us that isn't available in a Kubernetes Deployment? Are we hurt in OpenShift if we simply create a Deployment?

I don't think we will be hurt by a Deployment. There are some things that I've heard are different between DeploymentConfigs and Deployments, but nothing game-changing. A bigger change might be a route. I don't know all the details of how they are different, but again I don't think it impacts functionality.

> These are just some of the thoughts that I have. What's happening looks good, I just want to encourage some more conversation about the implications of splitting on the cluster this way for APB development.

Thanks David.

@rthallisey force-pushed the k8s-plays branch 5 times, most recently from e340892 to 4cb007d on December 20, 2017 at 17:32
@rthallisey (Contributor, Author)

I'm having trouble getting the conflicts resolved. Closing in favor of #27.

@rthallisey rthallisey closed this Dec 20, 2017
jcpowermac pushed a commit to jcpowermac/postgresql-apb that referenced this pull request Mar 5, 2018