Add server config generator #340

Merged
merged 20 commits into from Aug 11, 2017

Conversation

3 participants
@nicksellen
Member

nicksellen commented Aug 9, 2017

Ok, I made it a generator instead of just fixed files.

Might be a good idea to move some of the server config over to ansible, it's great for this kind of stuff... and we could do it incrementally.

WebSockets seem to work through all the layers too, yay! You can try it in the browser console with the snippet below (and then just check that the request got a 101 protocol upgrade response - can't verify yet that messages actually pass through properly...):

var url = [
  window.location.protocol.replace(/^http/, "ws"),
  "//",
  window.location.host,
  "/api/ws"
].join("");
new WebSocket(url)
@codecov-io


codecov-io commented Aug 9, 2017

Codecov Report

Merging #340 into master will not change coverage.
The diff coverage is n/a.

Impacted file tree graph

@@           Coverage Diff           @@
##           master     #340   +/-   ##
=======================================
  Coverage   98.21%   98.21%           
=======================================
  Files         153      153           
  Lines        4157     4157           
  Branches      181      181           
=======================================
  Hits         4083     4083           
  Misses         54       54           
  Partials       20       20

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d63e489...b3a05aa. Read the comment docs.

@nicksellen

Member

nicksellen commented Aug 9, 2017

This bit is nice:

One of the benefits of decoupling the client connection handling from work processing is that it means you can run new code without dropping client connections; this is especially useful for WebSockets.
http://channels.readthedocs.io/en/stable/deploying.html

So it should mean that during restarts, if a client sends a WebSocket message while the workers are already shutting down, it just sits in the Redis queue; once the new worker is up, it reads the message and handles it. The client stays connected the whole time and never knows something was restarted.

@tiltec

Member

tiltec commented Aug 10, 2017

Daphne handles the client connections, right? So there seem to be two kinds of restarts:

  1. After changing "our" code base
  2. After daphne upgrade

What needs to be done in case (1)? So far the deploy script replaces the code and then does uwsgi touch-reload. With the workers, would the deploy script need to restart the individual workers?

systemctl restart foodsaving-world-worker@1
systemctl restart foodsaving-world-worker@2
systemctl restart foodsaving-world-worker@3
...

or can it simply do:

systemctl restart foodsaving-world-worker

Another point: how can we do that without root privileges? There's a way to allow sudo only for certain commands: https://unix.stackexchange.com/questions/192706/how-could-we-allow-non-root-users-to-control-a-system-d-service
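For reference, the approach from that link boils down to a sudoers drop-in along these lines (the user name, paths and unit names here are my assumptions, not taken from this repo):

```
# /etc/sudoers.d/foodsaving-world-deploy (hypothetical file name and user)
# allow the deploy user to restart only these units, without a password prompt
deploy ALL=(root) NOPASSWD: /bin/systemctl restart foodsaving-world-worker.target
deploy ALL=(root) NOPASSWD: /bin/systemctl restart foodsaving-world-daphne.service
```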

After a daphne upgrade (2), we would need to restart foodsaving-world-daphne.
I guess we can do that manually for now. (It would also drop all client connections). It would make sense to me to pin the daphne version inside the requirements.in file and upgrade separately from other packages.
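Pinning it in requirements.in would just be something like this (the version number is purely illustrative):

```
# keep daphne pinned so it only changes when we deliberately upgrade it
daphne==1.3.0
```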

@nicksellen

Member

nicksellen commented Aug 10, 2017

With the workers, would the deploy script need to restart the individual workers?

Yes, we have to handle the reloading ourselves. uwsgi was managing that before by stopping/starting processes. The runworker command is very simple.

Because of the queueing it will not take the website down and all requests will be serviced.

If they receive a SIGTERM they'll shut down cleanly [1], which is the systemd default [2].

systemctl restart foodsaving-world-worker

I/we can look to see if we can define a master service that allows all the separate instance services to be controlled as one...

After a daphne upgrade (2), we would need to restart foodsaving-world-daphne

I would not worry too much about automating this. It would also need to be restarted if we changed the channel layer config, which won't happen very frequently. I doubt minor version updates would make much difference either. So, I would stick to doing it manually for now.

[1] http://channels.readthedocs.io/en/stable/deploying.html#deploying-new-versions-of-code
[2] https://www.freedesktop.org/software/systemd/man/systemd.service.html

@nicksellen

Member

nicksellen commented Aug 10, 2017

I/we can look to see if we can define a master service that allows all the separate instance services to be controlled as one...

BindsTo seems to be the directive to use [1]. I'll try it out on the foodsaving-world-channels config.

[1] https://www.freedesktop.org/software/systemd/man/systemd.unit.html#BindsTo=

@nicksellen

Member

nicksellen commented Aug 10, 2017

or PartOf ... I'll play with them

@nicksellen

Member

nicksellen commented Aug 10, 2017

Ok, I added a target that can control all the worker instances. The PartOf directive makes each service instance start/stop/restart when the target does.

I also removed the link between daphne and the workers. I think it keeps things simpler, as they will be started/stopped for very different reasons, and I don't want to be knee-deep in the systemd docs all the time trying to understand the interactions.
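Roughly, the arrangement looks like this (just a sketch; exact names, paths and the generated files may differ):

```
# foodsaving-world-worker.target - umbrella for all worker instances
[Unit]
Description=foodsaving.world channel workers

[Install]
WantedBy=multi-user.target

# foodsaving-world-worker@.service - template, instantiated as @1, @2, ...
[Unit]
Description=foodsaving.world channel worker %i
PartOf=foodsaving-world-worker.target

[Service]
# working directory and virtualenv path are assumptions, not copied from the generator
WorkingDirectory=/var/www/foodsaving-world/www
ExecStart=/var/www/foodsaving-world/www/env/bin/python manage.py runworker
Restart=always

[Install]
WantedBy=foodsaving-world-worker.target
```

With that, systemctl restart foodsaving-world-worker.target cycles every enabled worker instance, and daphne is untouched.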

@tiltec

Member

tiltec commented on cabf80e Aug 10, 2017

Great tool!

nicksellen added some commits Aug 10, 2017

Make worker target not require daphne
They should be controllable independently

Start the service:
```
systemctl start foodsaving-world-channels-daphne.server
```


@tiltec

tiltec Aug 10, 2017

Member

This should end on .service instead of .server, right?


@nicksellen

nicksellen Aug 10, 2017

Member

Ah yeah, typo.

sudo systemctl restart "${name}-worker.target"
}

backend_dir="/var/www/${name}/www"


@tiltec

tiltec Aug 10, 2017

Member

Do you intend to change the backend dir from /home/deploy/ to /var/www/? If yes, why?

You would need to update the cronjobs as well:

[screenshot of the current cronjob entries]


@nicksellen

nicksellen Aug 10, 2017

Member

Yes, I did intend it, to make it consistent with all the other apps on the server. Good point about the cronjobs. Although perhaps we could go all-out on systemd and use timers.
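If we did go the timer route, each cronjob would become a oneshot service plus a timer, roughly like this (the task name, paths and command are placeholders):

```
# foodsaving-world-sometask.service (placeholder name)
[Unit]
Description=foodsaving.world periodic task

[Service]
Type=oneshot
WorkingDirectory=/var/www/foodsaving-world/www
ExecStart=/var/www/foodsaving-world/www/env/bin/python manage.py sometask

# foodsaving-world-sometask.timer
[Unit]
Description=Run the foodsaving.world periodic task daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```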


@nicksellen

nicksellen Aug 10, 2017

Member

Or, perhaps finally add Celery, and set up the timer things in there.


@nicksellen

nicksellen Aug 10, 2017

Member

Either way, it's nice to have these things defined inside the app, or at least the repo if possible.


@tiltec

tiltec Aug 10, 2017

Member

Hm, yes. Well, if we go for Celery in the foreseeable future, it would need another systemd service. Or this recommended Python alternative: http://supervisord.org, which could also make sense to use for daphne and the workers.


@nicksellen

nicksellen Aug 10, 2017

Member

Personally I'm not into these language-ecosystem-specific, developer-friendly kinds of process managers (god, pm2, supervisord, etc.) - the OS already does all this stuff (systemd, upstart, SMF, etc.).


@tiltec

tiltec Aug 10, 2017

Member

I remember looking into supervisord some time ago and it looked quite good. But right now I don't see any reason to use it over systemd.
Its config is yet another format, with similar complexity to systemd service files: https://github.com/celery/celery/blob/master/extra/supervisord/supervisord.conf


@tiltec

tiltec Aug 10, 2017

Member

Celery will become necessary to send out mails and notifications triggered by user interaction. But maybe not everything at once.


@nicksellen

nicksellen Aug 10, 2017

Member

Celery looks easy to add in the future, but unrelated to this change now.

@nicksellen

Member

nicksellen commented Aug 10, 2017

So, proposal:

  1. edit deploy.sh to use the systemd deployment method for the dev branch only (see the sketch after this list)
  2. switch over nginx config to use channels backend
  3. switch over cronjobs to use new path
  4. merge to master
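
For step 1, the dev-branch part of deploy.sh could look roughly like this (the branch check, paths and unit names are assumptions, not the real script):

```
# hypothetical deploy.sh excerpt - systemd-based deployment for dev only,
# master keeps the existing uwsgi touch-reload for now
if [ "$branch" = "dev" ]; then
  backend_dir="/var/www/foodsaving-world-dev/www"
  rsync -a --delete --exclude env ./ "$backend_dir/"
  sudo systemctl restart foodsaving-world-dev-worker.target
  # daphne is left running, so client connections survive the worker restart
fi
```
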
@tiltec

Member

tiltec commented Aug 11, 2017

seems good to me!

nicksellen added some commits Aug 11, 2017

Use $http_x_forwarded_proto instead of $scheme
Makes it consistent with other configs on yuca

@nicksellen nicksellen merged commit 19c9a29 into master Aug 11, 2017

1 check was pending

ci/circleci CircleCI is running your tests

@tiltec tiltec deleted the add-systemd-configs branch Aug 11, 2017
