Migration from Nginx

This article describes gotchas when converting an Nginx configuration to a Tempesta FW one.

Enabling Session Persistence

NGINX supports three methods to configure session persistence: sticky cookie, sticky route, and sticky learn. Tempesta doesn't support sticky routes, but it supports the other two methods.

While session persistence in NGINX is configured inside an upstream block, which defines a server group for load balancing, in Tempesta session persistence is configured as a sticky block inside a named virtual server (vhost) block or at the top level for the implicit default virtual server.

Sticky Cookie configuration can be migrated as follows:

NGINX:

upstream backend {
    server 192.168.0.1;
    server 192.168.0.2;

    sticky cookie srv_id expires=1h domain=.example.com path=/;
}

Tempesta:

srv_group backend {
    server 192.168.0.1;
    server 192.168.0.2;
}

vhost example.com {
    proxy_pass backend;
    sticky {
        cookie name=srv_id;
        cookie_options="Expires=1h; domain=.example.com" Path=/;
        sticky_sessions allow_failover;
        sess_lifetime 60;
    }
}

In Tempesta the sticky cookie contains a cryptographically strong hash string that depends on client properties and is unique for every HTTP session. The primary aim of the sticky cookie is to provide a cookie challenge, which can also be used to mitigate simple DDoS attacks. The cookie_options string is passed into the 'Set-Cookie' header to provide additional lifetime and security hints to a client browser.
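For illustration, with the cookie_options above a client could receive a response header roughly like the following (the cookie value is a made-up placeholder for the hash generated by Tempesta):

Set-Cookie: srv_id=<generated hash>; Max-Age=3600; Domain=.example.com; Path=/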

The sticky_sessions directive allows Tempesta to use the cookie value to pin HTTP sessions to a specific server. The allow_failover option allows Tempesta to permanently re-pin a session to another server from the same server group if the original one goes offline.

The sess_lifetime directive controls how long the session is stored in Tempesta's session cache.

Since Tempesta's sticky cookie is built from a strong hash, a client can't generate a valid cookie on its own and force Tempesta to follow its choice of a backend server.
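As noted above, when no named virtual hosts are defined, the same sticky block can be placed at the top level for the implicit default virtual server. A minimal sketch of that variant:

# servers defined at the top level form the implicit default server group
server 192.168.0.1;
server 192.168.0.2;

sticky {
    cookie name=srv_id;
    sticky_sessions allow_failover;
    sess_lifetime 60;
}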

Sticky learn method can be migrated in the same way:

NGINX:

upstream backend {
    server 192.168.0.1;
    server 192.168.0.2;

    sticky learn
        create=$upstream_cookie_examplecookie
        lookup=$cookie_examplecookie
        zone=client_sessions:1m;
}

Tempesta:

srv_group backend {
    server 192.168.0.1;
    server 192.168.0.2;
}

vhost example.com {
    proxy_pass backend;
    sticky {
        # the name of the session cookie set by the backend
        learn name=examplecookie;
        sticky_sessions allow_failover;
        sess_lifetime 60;
    }
}

Unlike the sticky cookie, with sticky learn Tempesta "learns" from the 'Set-Cookie' response header which backend server created a session identifier for a client, and then uses this information to route further requests of this HTTP session to the same backend server.
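For example, if a backend responds with a header like the one below, Tempesta remembers which server issued this cookie value and forwards further requests carrying the cookie to that server (the value is a made-up placeholder):

Set-Cookie: examplecookie=<session id issued by the backend>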

DDoS Mitigation

Limiting Access to Proxied HTTP Resources

Using NGINX it is possible to limit:

  • The number of connections per key value (for example, per IP address)
  • The request rate per key value (the number of requests that are allowed to be processed during a second or minute)
  • The download speed for a connection

NGINX requires creating a separate shared key-value zone. The key can contain text, variables, and their combinations. Every request is matched against the shared zone, which tracks long-living entities such as connections or sessions, and is blocked if the limit is reached.

Security events are logged to a file, and external programs (e.g. Fail2ban) parse the log to block malicious clients.

For example, connection and request rate limits can be configured in NGINX within the server {}, location {}, or http {} context with the following syntax:

http {
    limit_conn_zone $binary_remote_addr zone=addr:10m;
    limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;

    server {
        limit_conn addr 1000;
        limit_req zone=one;
    }
}

Tempesta provides a wider set of limits: it supports both connection and request rate limits, but lacks a download speed limit. The limits in Tempesta are split into two groups: connection-level and message-level limits. Connection-level limits are checked before a target virtual server is identified; connection and request rate limits are examples of such limits. All connections with the same client IP address share the same limit context.

The other group, message-level limits, is validated once the HTTP parser has determined the target virtual server and location.

Connection-level limits can be defined only in the top-level context, while message-level limits can be defined either in a virtual server or in a location context. The limits are defined in a frang_limits section; a location-scoped example is shown after the migrated configuration below.

Unlike NGINX, Tempesta maintains an in-memory database of blocked IP addresses, and the blocked addresses are forwarded to netfilter to block malicious clients as early as possible. This behaviour is enabled by the ip_block directive.

The configuration above can be migrated to the following:

frang_limits {
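    # concurrent_tcp_connections corresponds to Nginx's limit_conn (per client IP);
    # request_rate corresponds to limit_req with rate=1r/s (requests per second per client IP)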
    concurrent_tcp_connections 1000;
    request_rate 1;
    ip_block;
}
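Message-level limits are placed into a vhost or location context instead. A hedged sketch, assuming the Frang directives http_methods, http_uri_len, and http_body_len and a prefix location match (the path and values are illustrative):

srv_group backend {
    server 192.168.0.1;
}

vhost example.com {
    proxy_pass backend;

    location prefix "/upload/" {
        frang_limits {
            http_methods get post;
            http_uri_len 1024;
            http_body_len 10485760;
        }
    }
}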

Other examples of Frang limits can be found on the corresponding wiki page.

Migrating from testcookie module

The external testcookie-nginx-module provides cookie and JavaScript challenges to mitigate various DDoS attacks.

The module is not included in the Nginx code base.

With this module, challenge cookies can be set using different methods:

  • "Set-Cookie" + 302/307 HTTP Location redirect,
  • "Set-Cookie" + HTML meta refresh redirect,
  • Custom template, JavaScript can be used here.

Tempesta supports both cookie and JavaScript challenges, and it's possible to configure custom HTML and JavaScript templates. The sticky cookie can be used as a basic sticky challenge. The js_challenge directive hardens the challenge, so only clients that support JavaScript can pass it.

The JavaScript challenge in testcookie-nginx-module is used to set a cookie via indirect methods, and cookies are sent as cipher text, so a client has to decrypt them. A slow AES implementation in JavaScript is used to slow clients down and to avoid floods at connection establishment. However, an attacker can still use ad-hoc cookie-solving code, which solves the cookie challenge efficiently. The JavaScript challenge in Tempesta is stricter: a client has to repeat the request within an exact time frame, otherwise the challenge fails. There are no computations on the client side that could be optimised or reduced; a client simply can't bypass the wait period, otherwise it will be blocked.

A typical testcookie-nginx-module configuration:

http {
    testcookie_name BPC;
    testcookie_max_attempts 3;

    testcookie_secret keepmesecret;

    testcookie_redirect_via_refresh on;
    testcookie_refresh_template '<html_template_content>';

    server {
        listen 80;
        server_name test.com;

        testcookie on;
        proxy_pass http://127.0.0.1:80;
    }
}

can be converted to the following Tempesta configuration:

sticky {
    cookie name=BPC enforce max_misses=3;

    secret keepmesecret;

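    # delay_min  - minimal time (in milliseconds) the client must wait before
    #              repeating the request;
    # delay_range - window after delay_min within which the repeated request
    #              is accepted, otherwise the challenge fails;
    # see the JS challenge documentation for the exact semantics.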
    js_challenge resp_code=503 delay_min=1000 delay_range=1000
                 /etc/ddos_redirect.html;
}

server 127.0.0.1:80;

The testcookie-nginx-module can be disabled for selected clients, e.g. the Google search engine, via testcookie_whitelist, but only IPv4 CIDR addresses can be defined. Configuration in Tempesta is different: instead of defining a list of client addresses that can bypass the checks, Tempesta relies on traffic marks added by netfilter/iptables or by http_table rules. This provides much greater flexibility in client whitelisting.
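A hedged sketch of such whitelisting: mark trusted traffic with netfilter and route it by that mark in an HTTP table to a vhost configured without the cookie challenge (the CIDR range, mark value, and vhost names below are illustrative assumptions):

# netfilter/iptables: mark TCP traffic from a trusted network before it reaches Tempesta
iptables -t mangle -A PREROUTING -s 192.0.2.0/24 -p tcp --dport 80 -j MARK --set-mark 1

# Tempesta HTTP table: send the marked traffic to a vhost with no sticky block
http_chain {
    mark == 1 -> vhost_trusted;
    -> vhost_default;
}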

Common security misconfigurations

Like any flexible server, Nginx can be configured incorrectly or insecurely. A separate article describes tricky configuration issues and how the same issues are resolved in Tempesta FW.
