
TempestaFW maintains a user-defined number of connections to each backend server. Each connection can be in one of the following states: active, failovering, or dead.

After a server connection is established, it becomes active and schedulable: a new incoming request can be assigned to the connection by placing it into the connection's forward queue, and it is forwarded as soon as possible. If the connection is terminated for any reason, several attempts are made to restore it. While being restored (failovering), the server connection is not schedulable, i.e. it doesn't accept new requests. Once the connection is restored, it becomes schedulable again and proceeds with forwarding requests from its forward queue.

If a connection can't be restored within the given number of tries, it is considered dead. All requests from its forward queue are rescheduled to other server connections if possible. TempestaFW keeps trying to restore the connection, but it remains not schedulable until the connection is finally established.

Unlike Nginx, HAProxy, and Varnish, TempestaFW supports pipelining of HTTP messages and uses it by default. The server_queue_size option controls the size of the pipelined message queue (the forward queue).
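
For example, pipelining can be turned off for a group by limiting the forward queue to a single message (a sketch using the server_queue_size option described below):

srv_group no_pipelining {
	server 10.10.0.1:8080;

	# A queue size of 1 disables pipelining for this group's connections.
	server_queue_size 1;
}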

Directive to add a single server:

server <IPADDR>[:<PORT>] [conns_n=<N>] [weight=<N>];

IPADDR: Either an IPv4 or an IPv6 address of the server. An IPv6 address must be enclosed in square brackets. Hostnames are not allowed.
Defaults: -
Example:

server 192.168.1.1;
server [fc00::1];

PORT: Port used on the backend server; optional parameter.
Defaults: 80
Example:

server 192.168.1.1:8080;
server 192.168.1.1:12345;

conns_n=<N>: The number of parallel connections to the server, in the range [1, 65535].
Defaults: 32
Example:

server 192.168.1.1 conns_n=344;

weight=<N>: The static weight of the server, in the range [1, 100].
Defaults: 50
The option takes effect only with the static ratio scheduler. See the schedulers page for more info.
Example:

server 192.168.1.1 weight=42;
server 192.168.1.2 weight=93;
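
For illustration, static weights are usually paired with an explicit scheduler choice in a group. A sketch, assuming the static variant is selected as ratio static (by analogy with the ratio dynamic example under the sched option below):

srv_group weighted {
	server 192.168.1.1 weight=42;
	server 192.168.1.2 weight=93;

	# Static weights take effect only with the static ratio scheduler.
	sched ratio static;
}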

Server Groups

Interchangeable backend servers can be grouped together into a single control unit called a server group. Load distribution between servers within the group is controlled by the scheduler attached to the group. To see how different scheduling models affect load distribution, refer to the schedulers page.

Each server group can contain up to 65535 servers, and the total number of server groups is unlimited.

srv_group <NAME> {
	server <IPADDR>[:<PORT>] [options];
	server <IPADDR>[:<PORT>] [options];
	server <IPADDR>[:<PORT>] [options];
	...
	<OPTIONS>
}

NAME: Unique identifier of the group, used to reference it later in the configuration file. Servers that are defined outside of any group implicitly form a special group called default.
Defaults: -
Example:

# Group called "static_storage":
srv_group static_storage {
	server 10.10.0.1:8080;
	server 10.10.0.2:8080;
	server [fc00::3]:8081;
}

# Implicit "default" group:
server 192.168.1.1;
server [fc00::1]:80 conns_n=1000;

<OPTIONS>: Options applied to the entire server group, described in the next section.

You can use the default server group name, e.g.

listen 80;
cache 0;

server 127.0.0.1:8000;

vhost default_vhost {
	proxy_pass default; # reference to the server group
}
http_chain {
	-> default_vhost;
}

Server Group Options

A number of options allow tuning load distribution among the servers in a group, adjusting performance, and switching some features on or off on a per-group basis. Options defined outside of any server group override the default options for the server groups that follow them.
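
For illustration, an option placed outside of any group acts as the default for the groups defined after it (the group names and values below are illustrative):

# Default for all server groups defined below:
server_connect_retries 5;

srv_group app {
	server 10.0.0.1:8080;
}

srv_group storage {
	server 10.0.0.2:8080;

	# The per-group value overrides the default above.
	server_connect_retries 20;
}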


server_connect_retries <N>: The maximum number of re-connect attempts after which the server connection is considered dead. A value of zero means an unlimited number of attempts.
Defaults: 10
Example:

srv_group static_storage {
	server 10.10.0.1:8080;
	
	server_connect_retries 5;
}

If a server connection fails intermittently, requests may sit in the connection's forward queue for some time. The next two directives set limits after which these requests are considered failed. When either limit is exceeded for a request, the request is evicted and an error is returned to the client.

server_forward_retries <N>: Number of attempts to re-forward the same request to a server. The value of zero means that requests will be sent only once.
Defaults: 5
Example:

srv_group static_storage {
	server 10.10.0.1:8080;
	
	server_forward_retries 0;
}

server_forward_timeout <N>: Maximum time frame in seconds within which a request may still be forwarded. A value of zero means an unlimited timeout.
Defaults: 60
Example:

srv_group static_storage {
	server 10.10.0.1:8080;
	
	server_forward_timeout 10;
}
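
The two limits above are often tuned together. A combined sketch (the group name and values are illustrative):

srv_group app {
	server 10.0.0.1:8080;

	# Evict a request after 3 forwarding attempts
	# or after 30 seconds, whichever limit is hit first.
	server_forward_retries 3;
	server_forward_timeout 30;
}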

server_retry_nonidempotent: If set, allows re-forwarding and re-scheduling of non-idempotent requests in a failed server connection.
Defaults: re-forwarding and re-scheduling of non-idempotent requests is not allowed.
Example:

srv_group static_storage {
	server 10.10.0.1:8080;
	
	server_retry_nonidempotent;
}

server_queue_size <N>: Size of the forward queue for each server connection, in the range [0, 2147483647].
0 - the maximum allowed queue size is used;
1 - pipelining is disabled.
Defaults: 1000
Example:

srv_group static_storage {
	server 10.10.0.1:8080;
	
	server_queue_size 2000;
}

sched <SCHED_NAME> [OPTIONS]: Scheduler used to distribute load among the servers in the group. SCHED_NAME is the scheduler type; OPTIONS are scheduler-specific options. For more information refer to the schedulers page.
Defaults: the ratio scheduler with default options.
Example:

srv_group static_storage {
	server 10.10.0.1:8080;
	
	sched ratio dynamic;
}

health <ID>: Health monitor named ID that tracks HTTP availability of the servers in the server group. For more information refer to the health monitor page.
Defaults: the health monitor is disabled for the server group.
Example:

srv_group main {
	server 10.10.0.1:8080;
	server 10.10.0.2:8080;

	health h_monitor1;
}

Grace shutdown

When an on-the-fly reconfiguration is requested, a removed backend server remains in use by TempestaFW as long as there are active sessions pinned to the server. See on-the-fly reconfiguration for more details.

grace_shutdown_time <LIMIT>: Maximum time in seconds to wait before all connections to the removed server are terminated.
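
Example (the 30-second limit is purely illustrative):

grace_shutdown_time 30;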

Maximum Supported Connections

Optimal configuration for an HTTP server hidden behind a reverse proxy differs from that for a server serving clients directly. A server behind TempestaFW doesn't need to handle lots of connections from different clients. Instead, it handles exactly the number of connections created by TempestaFW. That number is specified in the TempestaFW configuration file with the conns_n option of the server directive, e.g.:

server 192.168.0.1:8080 conns_n=64;

Most modern HTTP servers allow configuring the number of parallel threads or processes and the maximum number of connections that a single thread/process can handle. If each thread/process is configured to handle significantly more connections than will actually be active, a single thread/process of a backend server may pull all the load, which wastes resources. Thus, the configured number of connections across all threads/processes should be twice the value of the conns_n option in the TempestaFW configuration. That allows TempestaFW to successfully reopen connections closed by a backend server, so the load handled by a backend server is distributed fairly across all threads/processes.

E.g.: TempestaFW configuration:

server 192.168.0.1:8080 conns_n=256;

Nginx configuration:

worker_processes 4;

events {
    worker_connections 128;
    use epoll;
}

...
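
Here 4 worker processes × 128 connections each allow 512 connections in total, i.e. twice the conns_n=256 configured in TempestaFW.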

Connection Lifetime

TempestaFW maintains an exact number of connections with each backend server. If a server closes connections too soon, the overall throughput remains low under high load, since TempestaFW-to-backend connections are closed and reopened too often. For example, Nginx closes a connection after every 100 requests by default. To maximize the overall throughput, backend servers should be configured to keep connections open as long as possible, which reduces the frequency of reopening TempestaFW-to-backend connections.

E.g., Nginx configuration:

http {
    keepalive_requests 100000;
    ...
}

...
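
Along the same lines, a long idle timeout keeps idle TempestaFW-to-backend connections from being closed prematurely; in Nginx this is the keepalive_timeout directive (the value below is only a sketch, pick one that fits your setup):

keepalive_timeout 3600s;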

Request Timeout

Some HTTP servers send 408 (Request Timeout) responses if a client doesn't send a request for a long time. Since Tempesta FW establishes persistent connections with a backend server, the server can generate such responses, so you may find many records like the one below in your access log:

127.0.0.1 - - [22/Jul/2017:18:03:31 +0300] "-" 408 0 "-" "-"

It's recommended to disable request timeouts on the backend server side. For Apache HTTPD you can do this with the following configuration option (read the official documentation for details):

RequestReadTimeout header=0 body=0
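
For Nginx, the closest equivalents appear to be the client_header_timeout and client_body_timeout directives (verify against the Nginx documentation); raising them reduces spurious 408 responses on idle persistent connections:

client_header_timeout 3600s;
client_body_timeout 3600s;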