Add kubernetes example #3
Comments
Not me. But it would be no problem to add something like this to the repo. Just open a pull request with the additions :-)
Hi, here is a pull request from me: #58
Thanks! :-)
I think you could test it with the Tectonic sandbox; I've never tried the sandbox myself.
I'll probably use Minikube. I also have some credits left on Google Cloud :-)
That's fine too :) Testing on both platforms (Minikube and Google Cloud) is even better 👍 I deployed this on bare metal. The sysctl stuff could be implemented in an init container too. I just saw that I missed the backup container. It would be nice to give it the possibility to transfer the backup to S3 / a datastore, and to check for and pull the backup if the cluster has just been reset.
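For reference, the "sysctl in an init container" idea could look roughly like this. This is only a sketch: the sysctl key (`vm.max_map_count`, the usual Elasticsearch requirement), the image, and the pod name are assumptions, not part of the PR.

```yaml
# Sketch: a privileged init container that sets a kernel parameter
# before the main container starts. Key, value, and images are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: zammad-elasticsearch
spec:
  initContainers:
    - name: sysctl
      image: busybox
      command: ["sysctl", "-w", "vm.max_map_count=262144"]
      securityContext:
        privileged: true     # required to change kernel parameters
  containers:
    - name: elasticsearch
      image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4  # tag is an assumption
```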
Hi @monotek, I just put another commit onto the PR. It contains the missing zammad-backup routine and an optional S3 sync executed by a cronjob. I would like to run zammad-backup as a cronjob too, because I want to make hourly backups, but it won't stop on its own. So I added a zammad-backup-once function that can be used with a cronjob.
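An hourly backup via `zammad-backup-once` could be wired up roughly like this. Only the entrypoint name comes from the discussion above; the image, volume, and claim names are assumptions:

```yaml
# Sketch: hourly backup CronJob invoking the zammad-backup-once routine.
# Image tag, mount path, and PVC name are assumptions.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: zammad-backup
spec:
  schedule: "0 * * * *"          # run at the top of every hour
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: zammad-backup
              image: zammad/zammad           # assumption
              args: ["zammad-backup-once"]   # run once, then exit
              volumeMounts:
                - name: backup
                  mountPath: /var/tmp/zammad_backup
          volumes:
            - name: backup
              persistentVolumeClaim:
                claimName: zammad-backup     # assumption
```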
Just saw that Zammad uses wss on port 6042. Is there a way we can make the frontend use a custom port, so the NodePort from the service is used?
I don't understand. Why should we change the websocket port? What's the NodePort?
In container-based setups the ports are only visible to their own networks, not to the outside world. The problem here is that my browser can't talk to the websocket: first, Kubernetes doesn't publish the port on the node itself, and second, my firewall only allows traffic to ports 80 and 443. To fix that, it would be better to use nginx's websocket upstream functionality as shown here: https://www.nginx.com/blog/websocket-nginx/

A NodePort would be a workaround. Read more about it here: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport It creates a port on the node itself, but Kubernetes NodePorts start at 30000 and up, and the port changes after deleting and recreating the service. So it would only be a workaround, not a reliably working solution.
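For illustration, the NodePort workaround described above would look something like this. The websocket port 6042 comes from the discussion; the selector label and the pinned nodePort value are assumptions:

```yaml
# Sketch: exposing the websocket container via a NodePort service.
# If nodePort is omitted, Kubernetes picks one from 30000-32767,
# which may change when the service is recreated -- hence the pin.
apiVersion: v1
kind: Service
metadata:
  name: zammad-websocket
spec:
  type: NodePort
  selector:
    app: zammad-websocket   # label is an assumption
  ports:
    - port: 6042
      targetPort: 6042
      nodePort: 30042       # pinned so it survives service recreation
```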
So if we'd use the ingress controller as a websocket proxy, this would not be necessary, right?
Yes, and I think that would fix the error that I get.
Cool 😎
Ah, that was simpler than I thought. Got it working locally with the websocket upstream. No change needed except my ingress YAML. Will commit that when I have ported it to the Kubernetes nginx ingress.
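An ingress along those lines could be sketched like this. The path split (websocket vs. static) follows the discussion; the hostname, paths, and timeout values are assumptions, and the proxy-timeout annotations are there so long-lived websocket connections aren't cut off by the nginx ingress controller's defaults:

```yaml
# Sketch: routing the websocket path to the zammad-websocket service
# and everything else to zammad-nginx. Host and paths are assumptions.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: zammad
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  rules:
    - host: zammad.example.com
      http:
        paths:
          - path: /ws
            backend:
              serviceName: zammad-websocket
              servicePort: 6042
          - path: /
            backend:
              serviceName: zammad-nginx
              servicePort: 80
```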
As I understand it, you're asking to leave zammad-nginx out? I don't know if the Kubernetes ingress works like that. I think the ingress only acts as a load balancer and proxy server, so you will always need a custom webserver in the backend to serve public files and sockets.
True, I didn't have the files in mind. Files could be delivered by the Rails server too, but I think nginx has better performance delivering static files.
Yes, but I was only thinking about the websocket; I now let nginx do the proxy work, at least for the wss.
We could build another zammad-nginx Docker image without the wss proxy, but for initial testing this shouldn't be that important.
Having the file delivery in mind makes the whole approach of using the ingress for websockets kind of pointless. Sorry for not thinking it through to the end. Just keep the nginx container. If it's doing its job like before, it's also not important which ingress controller you use, because it does not have to support websockets.
Not really, the current solution has only one proxy for wss in use, not a double one. Currently: Browser -> k8s ingress -> zammad-websocket. I think we should leave it like it is in the PR now.
Yes, but like I said, you're forced to use the nginx ingress. HAProxy, for example, would not work, so in the end you lose a bit of flexibility.
Added Kubernetes files from #58 with some additions. |
Removed the old Kubernetes example.
Since Kubernetes / Docker Swarm have become much more popular lately, I would find it very useful to have an easy option to deploy Zammad to my Kubernetes cluster.
Is there anyone using Kubernetes here besides me?