uWSGI support of nginx ingress controller? #143

Closed
yue9944882 opened this Issue May 15, 2017 · 4 comments


yue9944882 commented May 15, 2017

I couldn't find a usable configuration option in the source code that renders an Ingress into an nginx conf file, for example support for "uwsgi_pass".

Does anyone have a working practice for integrating uWSGI with the controller?

pleshakov (Collaborator) commented May 18, 2017

@yue9944882
uWSGI doesn't fit into Ingress. Could you share your use case?

ghost commented Jun 13, 2017

I'll explain this as I understand it.

uwsgi has its own binary protocol to avoid the overhead of gratuitous http parsing. http://uwsgi-docs.readthedocs.io/en/latest/FAQ.html#why-not-simply-use-http-as-the-protocol

When running uwsgi as an app server behind nginx, it is customary to use uwsgi_pass, as recommended in uwsgi's docs (e.g. http://uwsgi-docs.readthedocs.io/en/latest/Nginx.html) to tell nginx where to find the upstream and to talk to that upstream using the uwsgi binary protocol.

To be clear: nginx itself does NOT embed uwsgi in-process here, and it does not get involved with WSGI or Python at all. uwsgi runs as a separate process, possibly on another host, doing the job of a (e.g.) Python app server, and nginx simply treats it as an upstream that it talks to over the uwsgi protocol. All that is required on the nginx side is the ability to configure uwsgi as an upstream and to speak that protocol, which nginx already commonly does.
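
For illustration only, here is roughly what this looks like in a plain nginx config outside of Kubernetes (the upstream name and address below are made up for the example, not anything the Ingress controller generates today):

upstream wsgi_app {
    # hypothetical address of the uWSGI process; a unix socket also works
    server 10.0.0.5:3031;
}

server {
    listen 80;

    location / {
        include uwsgi_params;   # parameter file shipped with nginx
        uwsgi_pass wsgi_app;    # speak the uwsgi binary protocol to the upstream
    }
}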

So, not speaking for yue9944882, the very typical use case would be:

  • running a Python web application,
  • via a WSGI interface (most common and typically best option for most web apps written in e.g. Django or Flask),
  • using uwsgi as the app server (arguably one of the most common and best options for running WSGI apps behind nginx),
  • in the idiomatic recommended way for uwsgi (i.e. using the uwsgi binary protocol),
  • WITHOUT putting another superfluous proxy in the middle to translate between uwsgi protocol and http.

Supporting uwsgi_pass should be analogous to supporting other protocols that are used for app servers to talk to nginx.

Hope this helps.

pleshakov (Collaborator) commented Jun 16, 2017

@harts-boundless
thanks for such a thorough response. The use case you described is valid.

However, it is not typical for an edge load balancer, such as an Ingress controller, to support uwsgi.
It is more common to run nginx as a reverse proxy alongside each uWSGI application server, and then put an HTTP load balancer in front to balance traffic across those nginx proxies, as sketched below.
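
A minimal sketch of that pattern (the socket path and port here are assumptions for illustration, not output of any controller):

# nginx running next to the uWSGI app server, e.g. as a sidecar in the same pod;
# an Ingress controller or other HTTP load balancer then balances plain HTTP
# across port 80 of these nginx proxies.
server {
    listen 80;

    location / {
        include uwsgi_params;
        # illustrative unix socket shared with the local uWSGI process
        uwsgi_pass unix:/tmp/uwsgi.sock;
    }
}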

vdboor commented Nov 29, 2017

What I'd rather do is use the embedded HTTP server that uWSGI has: http://uwsgi-docs.readthedocs.io/en/latest/HTTP.html Your ingress can connect directly to that, so there is no need for a second nginx pod. This way you can run a single container that has uwsgi as the main process. Use a config file like:

[uwsgi]
module = $(UWSGI_MODULE)
processes = $(UWSGI_PROCESSES)
threads = $(UWSGI_THREADS)
procname-prefix-spaced = uwsgi: $(UWSGI_MODULE)

http-socket = :8080
http-enable-proxy-protocol = 1
http-auto-chunked = true
http-keepalive = 75
http-timeout = 75
stats = :1717
stats-http = 1
offload-threads = $(UWSGI_OFFLOAD_THREADS)

# Better startup/shutdown in docker:
die-on-term = 1
lazy-apps = 0

vacuum = 1
master = 1
enable-threads = true
thunder-lock = 1
buffer-size = 65535

# Logging
log-x-forwarded-for = true
#memory-report = true
#disable-logging = true
#log-slow = 200
#log-date = true

# Avoid errors on aborted client connections
ignore-sigpipe = true
ignore-write-errors = true
disable-write-exception = true

#listen=1000
#max-fd=120000
no-defer-accept = 1

# Limits, Kill requests after 120 seconds
harakiri = 120
harakiri-verbose = true
post-buffering = 4096

# Custom headers
add-header = X-Content-Type-Options: nosniff
add-header = X-XSS-Protection: 1; mode=block
add-header = Strict-Transport-Security: max-age=16070400
add-header = Connection: Keep-Alive

# Static file serving with caching headers and gzip
static-map = /static=/app/web/static
static-map = /media=/app/web/media
static-safe = /usr/local/lib/python3.6/site-packages/
static-safe = /app/src/frontend/static/
static-gzip-dir = /app/web/static/
static-expires = /app/web/static/CACHE/* 2592000
static-expires = /app/web/media/cache/* 2592000
static-expires = /app/web/static/frontend/img/* 2592000
static-expires = /app/web/static/frontend/fonts/* 2592000
static-expires = /app/web/* 3600
route-uri = ^/static/ addheader:Vary: Accept-Encoding
error-route-uri = ^/static/ addheader:Cache-Control: no-cache

# Cache stat() calls
cache2 = name=statcalls,items=30
static-cache-paths = 86400

# Redirect http -> https
route-if = equal:${HTTP_X_FORWARDED_PROTO};http redirect-permanent:https://${HTTP_HOST}${REQUEST_URI}

and set default values for your environment variables in your container:

ENV UWSGI_THREADS=10 \
    UWSGI_PROCESSES=2 \
    UWSGI_OFFLOAD_THREADS=10 \
    UWSGI_MODULE=myapp.wsgi:application

# Have gzipped versions ready for direct serving by uwsgi
RUN gzip --keep --best --force --recursive /app/web/static/

CMD ["/usr/local/bin/uwsgi", "--ini", "/app/uwsgi.ini"]
EXPOSE 8080
VOLUME /app/web/media

When you use Django, you can use whitenoise (http://whitenoise.evans.io/) to serve media/static files from your container. In this example, I've used uWSGI directly for that. Preferably, those are cached at your load balancer / ingress / CloudFront environment to avoid hitting the container each time. Thanks to the static-gzip-dir setting, they are already prepared to be served gzipped.

cdent added a commit to cdent/placedock that referenced this issue Mar 17, 2018

Update to get things working with kubernetes
This is mostly for the sake of learning: the fact that the sqlite
database is in the container means it's fairly useless: you can't scale
horizontally. Future work will explore different ways of dealing
with that.

The Dockerfile is updated to reflect the latest placement extraction
changes and to put config in the container rather than using a
shared volume.

The placement-uwsgi.ini is updated so it runs http directly.
There's no need for an nginx or apache setup; a k8s LoadBalancer
in deployment.yaml takes care of that.

The uwsgi ini is based on info at:
nginxinc/kubernetes-ingress#143 (comment)

Using minikube the steps are:

    eval $(minikube docker-env)
    docker build -t placedock:1.0 .
    kubectl apply -f deployment.yaml
    kubectl expose deployment placement-deployment --type=LoadBalancer
    minikube service placement-deployment --url

isaachawley closed this Aug 9, 2018
