Better uwsgi deploy #226

Closed

almet opened this issue Mar 27, 2015 · 10 comments

Comments

@almet
Contributor

almet commented Mar 27, 2015

I've been reading the uwsgi documentation and while it's a bit hard to follow at times, it seems a bunch of interesting things could be tweaked in our setup. If you want to read a doc but don't know where to start, the best practices doc is a good one.

@Micheletto, I think you'll like that!

What could (should?) be tweaked in our deployment

  • Nginx natively includes support for upstream servers speaking the uwsgi protocol since version 0.8.40. We have 1.6.2 on stage, so it's available;
  • It's possible to configure uwsgi to listen directly on a socket (rather than with --http). This spares us from parsing the requests twice and should speed up our service significantly (see the sketch after this list);
  • We should not enable thread support, as we don't need it in our app (at least for now) and it makes the uwsgi server slower;
  • Apparently, the best way to integrate uwsgi with systemd is with what they call an "emperor".
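As a rough sketch, the socket-based setup could look something like this on the uwsgi side (the paths and worker count are placeholders, not our actual settings; the matching nginx uwsgi_pass directive is shown further down in this thread):

[uwsgi]
# speak the binary uwsgi protocol over a unix socket; nginx does the HTTP
# parsing, so each request is only parsed once
socket = /tmp/readinglist.sock
master = true
processes = 4
wsgi-file = app.wsgi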

Other interesting things to know

  • There is a uwsgitop tool which can help us find the right number of processes (apparently, simple math like processes = 2 * cpucores is not enough); see the sketch after this list;
  • If our server seems to have lots of idle workers but performance is still sub-par, we may want to look at the value of the ip_conntrack_max system variable (/proc/sys/net/ipv4/ip_conntrack_max) and increase it to see if it helps.
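A sketch of how both points could be checked (the stats socket path and the conntrack value are only illustrative, and the latter needs root):

# expose a stats socket in the uwsgi ini file, then watch the workers live:
#   stats = /tmp/readinglist-stats.sock
pip install uwsgitop
uwsgitop /tmp/readinglist-stats.sock

# inspect and, if needed, raise the conntrack limit
cat /proc/sys/net/ipv4/ip_conntrack_max
echo 131072 > /proc/sys/net/ipv4/ip_conntrack_max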
@almet
Contributor Author

almet commented Mar 27, 2015

Here is what I see on stage now:

*** Operational MODE: single process ***

I don't really understand why we have only one process here, and that might be a source of the problems.

*** uWSGI is running in multiple interpreter mode ***

We should deactivate that because we don't need multiple interpreters here; to do so, the single-interpreter option can be used.
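A minimal sketch of the ini options that would address both log lines, keeping everything else as it is (the worker count is only illustrative):

[uwsgi]
# run a master that preforks several workers instead of a single process
master = true
processes = 4
# we only serve one application, so drop the multiple interpreter mode
single-interpreter = true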

@almet
Contributor Author

almet commented Mar 27, 2015

In this post it's recommended to run multiple wsgi instances rather than only one, which seems to perform better. Something to investigate.

@tarekziade
Contributor

> I don't really understand why we have only one process here, and that might be a source of the problems.

I think this just means we're using a process that preforks the workers itself, which is different from emperor mode. You can verify this by counting the workers with ps aux | wc -l.
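Something along these lines, assuming the workers show up as uwsgi processes:

# count the uwsgi processes (master + preforked workers);
# the [u] trick keeps grep itself out of the count
ps aux | grep '[u]wsgi' | wc -l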

@pdehaan
Contributor

pdehaan commented Mar 27, 2015

ping @Micheletto

@almet
Contributor Author

almet commented Mar 30, 2015

After some more debugging by @tarekziade and @Micheletto, it seems that uwsgi wasn't using the correct ini file and, as a result, was only spawning one process.

@tarekziade pushed a fix, which runs uwsgi --ini /data/readinglist/config/readinglist.ini.
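For later reference, a rough sketch of how that command could be wrapped in a systemd unit (the unit layout, binary path, and restart policy here are assumptions, not what is actually deployed):

[Unit]
Description=readinglist uwsgi (sketch)
After=network.target

[Service]
# hypothetical user and binary path -- adjust to the real deployment
User=readinglist
ExecStart=/usr/bin/uwsgi --ini /data/readinglist/config/readinglist.ini
KillSignal=SIGQUIT
Restart=on-failure

[Install]
WantedBy=multi-user.target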

@almet
Contributor Author

almet commented Mar 30, 2015

After some testing, here is a good uwsgi configuration:

[uwsgi]
wsgi-file = app.wsgi
enable-threads = true
socket = /tmp/readinglist.sock
processes = 10
master = true
module = readinglist
harakiri = 30
uid = readinglist
gid = readinglist
virtualenv = .venv
lazy = true
lazy-apps = true
single-interpreter = true
buffer-size = 65535
post-buffering = 65535

The nginx configuration should have this setup:

uwsgi_pass unix:///tmp/readinglist.sock;
include uwsgi_params;

In case of permission problems, the chmod-socket and chown-socket options can be used in the uwsgi configuration.
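For instance, something like this could go in the ini file (the owner and mode are just examples, assuming nginx runs under the nginx group):

# hypothetical owner/mode -- match whatever user/group nginx runs as
chown-socket = readinglist:nginx
chmod-socket = 660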

@almet
Contributor Author

almet commented Mar 31, 2015

Local nginx config (/etc/nginx/sites-enabled/readinglist) on debuntu:

server {
    listen      80;
    server_name readinglist.local;

    location / {
        proxy_redirect off;
        proxy_intercept_errors on;
        uwsgi_pass unix:///tmp/rl.sock;
        include uwsgi_params;
        error_page 500 502 503 504 =503 /503.html; # Change 5xx to 503
    }
}

@Micheletto

I believe the basic concerns of this issue have been addressed. Should we close it out?

@Natim closed this as completed Apr 4, 2015
@almet
Contributor Author

almet commented Apr 4, 2015

Yes! However, can we paste the final version of the configuration file here for later reference? (cc @Micheletto)

@almet reopened this Apr 4, 2015
@Micheletto

[uwsgi]
wsgi-file = app.wsgi
enable-threads = true
socket = /run/uwsgi/cliquet.sock
chmod-socket = 666
cheaper-algo = busyness
cheaper = 5
cheaper-initial = 9
workers = 14
cheaper-step = 1
cheaper-overload = 30
cheaper-busyness-verbose = true
master = true
module = readinglist
harakiri = 120
uid = readinglist
gid = readinglist
virtualenv = .
lazy = true
lazy-apps = true
single-interpreter = true
buffer-size = 65535
post-buffering = 65535

@Natim closed this as completed Apr 7, 2015