
Added kubernetes yaml files #58

Closed · wants to merge 6 commits

Conversation

@kersten (Contributor) commented Oct 24, 2017

A first set of YAML files for Kubernetes.

@monotek (Member) commented Oct 24, 2017

Question: why do you use the Zammad nginx container?

IMHO the Kubernetes ingress controller could do the proxying too.

@kersten (Contributor, Author) commented Oct 24, 2017

I just copied the Docker setup over to Kubernetes.

Is the Kubernetes Ingress able to mount the PVC and use a custom config with WSS?

Another question: are the Tectonic ingress or other ingress controllers able to do that too?

@kersten (Contributor, Author) commented Oct 24, 2017

Another thing: I don't know the Zammad core; would it be possible to use nginx's WebSocket upstream functionality instead of the custom WSS port?

@monotek (Member) commented Oct 24, 2017

I'm not sure that all ingress controllers support it, but at least the nginx and traefik ingress controllers can do it:

Nginx example: https://github.com/nginxinc/kubernetes-ingress/blob/master/examples/websocket/README.md
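
For illustration, a minimal Ingress along the lines of that nginxinc example (the host, paths and service names are assumptions, not the manifests from this PR; 6042 is Zammad's default websocket port):

    # Hypothetical Ingress for the nginxinc controller (Kubernetes ~1.8 era API)
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: zammad
      annotations:
        # tells the nginxinc controller to proxy this service as WebSocket traffic
        nginx.org/websocket-services: "zammad-websocket"
    spec:
      rules:
        - host: zammad.example.com
          http:
            paths:
              - path: /ws
                backend:
                  serviceName: zammad-websocket    # assumed service name
                  servicePort: 6042                # Zammad's websocket port
              - path: /
                backend:
                  serviceName: zammad-railsserver  # assumed; port 3000 matches the nginx upstream quoted later in this thread
                  servicePort: 3000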

@kersten (Contributor, Author) commented Oct 24, 2017

OK, I will have a look at it, but that's a task for tomorrow or so. We'll see.

@monotek (Member) commented Oct 24, 2017

No hurry!
Thanks for your work :-)

@kersten (Contributor, Author) commented Oct 25, 2017

This one is now working as expected. Let me know if you have any other wishes :)

@kersten (Contributor, Author) commented Oct 25, 2017

Maybe I could add some more information to the Readme.md.

At the very least, tell people that they need ReadWriteMany persistent volumes. These can be provided by e.g. GlusterFS or StorageOS, as sketched below.
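
A minimal sketch of such a claim, assuming a GlusterFS-backed storage class (the claim name matches the errors quoted later in this thread; the size and class name are placeholders):

    # Hypothetical PVC; the whole point is accessModes: ReadWriteMany
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: zammad-home
    spec:
      accessModes:
        - ReadWriteMany            # railsserver, scheduler and websocket pods all mount this
      storageClassName: glusterfs  # assumed; must map to a ReadWriteMany-capable provisioner
      resources:
        requests:
          storage: 5Gi             # assumed size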

@monotek (Member) commented Oct 28, 2017

The Google Cloud test failed :-(

Had to remove the s3 & cronjob yml files to get through "kubectl apply -f .", but the containers won't start because of errors like:

  • warning PersistentVolumeClaim is not bound: "zammad-home" (repeated 3 times) default-scheduler
  • warning Failed to provision volume with StorageClass "standard": invalid AccessModes [ReadWriteMany]: only AccessModes [ReadWriteOnce ReadOnlyMany] are supported

I'll try with Minikube later.

@monotek (Member) commented Oct 29, 2017

It also did not work out of the box on Minikube.

GCE was not much of a surprise. Maybe it works after adding some special storage container (NFS or GlusterFS), but I don't have enough time at the moment to try it out.

Did you try it anywhere else besides your own environment? At least on Minikube it should work out of the box, so that it can be tested.

@kersten (Contributor, Author) commented Oct 29, 2017

Hi,

that's correct. For Zammad you will need ReadWriteMany disks. This is possible using e.g. GlusterFS or StorageOS, but that setup would be overkill to list here and hard to maintain. As I said, maybe the documentation needs to be a bit clearer.

I've never tried Minikube, so I don't know if we could get this working out of the box.

@monotek (Member) commented Oct 29, 2017

As Minikube is the official Kubernetes testing environment, we should try to get it working there. That should also be generic enough to use as a starting point for getting it to work in other environments.

@monotek (Member) commented Oct 31, 2017

Update...
Got it running on Minikube by starting with a fresh installation. Seems my Minikube installation was too old or broken in some way. I'll push some changes soon...

@monotek (Member) commented Oct 31, 2017

After a while the websocket and scheduler containers crashed.

PersistentVolumeClaim is not bound: "zammad-home"
Back-off restarting failed container
Error syncing pod

The pod logs of the scheduler and websocket server look like:

scheduler can access raillsserver now...
Bundler::GemNotFound: Could not find rake-12.0.0 in any of the sources
  /usr/local/lib/ruby/gems/2.4.0/gems/bundler-1.15.4/lib/bundler/spec_set.rb:87:in `block in materialize'
  /usr/local/lib/ruby/gems/2.4.0/gems/bundler-1.15.4/lib/bundler/spec_set.rb:81:in `map!'
  /usr/local/lib/ruby/gems/2.4.0/gems/bundler-1.15.4/lib/bundler/spec_set.rb:81:in `materialize'
  /usr/local/lib/ruby/gems/2.4.0/gems/bundler-1.15.4/lib/bundler/definition.rb:159:in `specs'
  /usr/local/lib/ruby/gems/2.4.0/gems/bundler-1.15.4/lib/bundler/definition.rb:218:in `specs_for'
  /usr/local/lib/ruby/gems/2.4.0/gems/bundler-1.15.4/lib/bundler/definition.rb:207:in `requested_specs'
  /usr/local/lib/ruby/gems/2.4.0/gems/bundler-1.15.4/lib/bundler/runtime.rb:109:in `block in definition_method'
  /usr/local/lib/ruby/gems/2.4.0/gems/bundler-1.15.4/lib/bundler/runtime.rb:21:in `setup'
  /usr/local/lib/ruby/gems/2.4.0/gems/bundler-1.15.4/lib/bundler.rb:101:in `setup'
  /usr/local/lib/ruby/gems/2.4.0/gems/bundler-1.15.4/lib/bundler/setup.rb:19:in `<top (required)>'
  /usr/local/lib/ruby/site_ruby/2.4.0/rubygems/core_ext/kernel_require.rb:55:in `require'
  /usr/local/lib/ruby/site_ruby/2.4.0/rubygems/core_ext/kernel_require.rb:55:in `require'
bundler: failed to load command: script/scheduler.rb (script/scheduler.rb)

I guess it's still a problem that all containers try to write to Zammad's tmp dir.

I tend towards creating Kubernetes containers that use NFS or GlusterFS to share a network directory (see the sketch below). I'm also trying to figure out whether Redis, memcached or something similar could be used as a replacement for the tmp dir.

This is my current progress: https://github.com/monotek/zammad-docker-compose/tree/beyond-agentur-master
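
As a sketch of the NFS direction (the pod name, image and server address are all assumptions; some clusters need the NFS Service's ClusterIP here instead of a DNS name):

    # Hypothetical pod mounting a shared NFS export at Zammad's install path
    apiVersion: v1
    kind: Pod
    metadata:
      name: zammad-scheduler
    spec:
      containers:
        - name: scheduler
          image: zammad/zammad:latest   # assumed image
          volumeMounts:
            - name: zammad-data
              mountPath: /opt/zammad    # assumed shared tree; every Zammad container mounts the same path
      volumes:
        - name: zammad-data
          nfs:
            server: zammad-nfs.default.svc.cluster.local  # assumed in-cluster NFS service
            path: /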

@kersten (Contributor, Author) commented Nov 1, 2017

Hi, yes, this sounds like a volume problem. I use GlusterFS for ReadWriteMany volumes. I had that error too; restarting the railsserver helped, but that wouldn't be the real solution.

We will get this working soon, I hope :)

@kersten (Contributor, Author) commented Nov 1, 2017

The removal of S3 is correct, I think. Like GlusterFS, it's something that should not be handled here. I think it's not working because you didn't pass the correct config values.

@monotek (Member) commented Nov 1, 2017

We'll evaluate using memcached as described in zammad/zammad#1601.
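
Purely as an illustration of that direction (hypothetical manifests; whether Zammad can actually point its cache at memcached is exactly what zammad/zammad#1601 is about):

    # Hypothetical memcached backend for Zammad's cache
    apiVersion: apps/v1beta1       # Deployment API group as of Kubernetes 1.8
    kind: Deployment
    metadata:
      name: zammad-memcached
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: zammad-memcached
        spec:
          containers:
            - name: memcached
              image: memcached:1.5-alpine
              args: ["-m", "64"]         # 64 MB cache; assumed size
              ports:
                - containerPort: 11211
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: zammad-memcached
    spec:
      selector:
        app: zammad-memcached
      ports:
        - port: 11211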

@Geo719 commented Nov 4, 2017

Hi,
I pulled the docker-compose version of Zammad. Next I tried running kompose convert / kompose up after altering the docker-compose.yml version from 3.3 to 3.

The system starts up but there are problems:

  1. After kompose up:
       FATA Error while deploying application: persistentvolumeclaims "data-zammad" already exists
     Along with this, the zammad-nginx pod gives the following errors:
       PersistentVolumeClaim is not bound: "data-zammad"
       Back-off restarting failed container
       Error syncing pod

  2. zammad-nginx exits with an error because of:
       nginx: [emerg] host not found in upstream "zammad-railsserver:3000" in /etc/nginx/conf.d/zammad.conf:8

Any clues?
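
For reference, nginx resolves the upstream name zammad-railsserver:3000 through a Kubernetes Service of that name, so that [emerg] usually means the Service did not exist (or cluster DNS was not ready) when nginx started. A minimal sketch of the Service this setup assumes (the pod label is a guess):

    # Hypothetical Service matching the upstream in zammad.conf
    apiVersion: v1
    kind: Service
    metadata:
      name: zammad-railsserver
    spec:
      selector:
        app: zammad-railsserver   # assumed pod label
      ports:
        - port: 3000
          targetPort: 3000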

@monotek (Member) commented Nov 4, 2017

It does not work without shared storage.
That's the reason it's not merged.

We're currently evaluating using memcached to fix the problem. See: zammad/zammad#1601
Vote for the issue if you're interested.

@Geo719 commented Nov 4, 2017

Yes, I did read it; maybe I did not understand.
So Minikube does not "support" ReadWriteMany.
Would it be an option to use NFS?
minikube start --feature-gates=DynamicVolumeProvisioning=false
and then hook up an NFS share?

Besides, why use docker-compose v3.3? There are no v3.3-specific rules, are there?

@Geo719 commented Nov 4, 2017

monotek, I had not seen issue #1601.
Thanks for the hint.

PS: are you aware of minikube-nfs?
Wouldn't that be a more "production-like" setup than using memcached?

@monotek (Member) commented Nov 4, 2017

Yes, NFS, GlusterFS or any other network filesystem would work. Unfortunately it's quite a bit of overhead to add this to Zammad, as additional containers (storage server) and changes to existing ones (storage clients) would be needed.

That's the reason we're thinking about implementing memcached instead, as parts of Zammad (the cache) already support it.

Unfortunately we don't have a schedule for that at the moment, so voting for the linked issue could help speed things up a bit ;-)

Can't remember the reason for compose v3.3. Maybe there is none, but it's off-topic anyway.

Minikube NFS is not a solution. It should be possible to run Zammad in any Kubernetes cloud, not just Minikube.

@Geo719 commented Nov 4, 2017

OK, I think I got it and already voted :)

@monotek (Member) commented Nov 21, 2017

I just merged your changes to master.
Many thanks again :-)
I also added an NFS server that serves a 1 GB tmpfs dir to fix the volume issues.
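
Roughly, that server looks like the following sketch (the image is the one from the upstream Kubernetes NFS example and is an assumption here; medium: Memory is what makes the emptyDir a tmpfs):

    # Sketch: in-cluster NFS server exporting a 1 GB tmpfs directory
    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: zammad-nfs
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: zammad-nfs
        spec:
          containers:
            - name: nfs-server
              image: gcr.io/google_containers/volume-nfs:0.8  # assumed image; exports /exports
              securityContext:
                privileged: true          # NFS servers generally need this
              ports:
                - containerPort: 2049     # nfsd; mountd/rpcbind are also needed in practice
              volumeMounts:
                - name: export
                  mountPath: /exports
          volumes:
            - name: export
              emptyDir:
                medium: Memory            # tmpfs-backed
                sizeLimit: 1Gi            # the 1 GB mentioned above; enforcement varies by version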

@monotek closed this Nov 21, 2017