15 changes: 15 additions & 0 deletions docs/man/renderd.conf.5
@@ -44,6 +44,19 @@ Specify the number of threads to be used for \fBrenderd\fR.
A value of \fB'-1'\fR will configure \fBnum_threads\fR to the number of cores on the system.
The default value is \fB'4'\fR (macro definition \fB'NUM_THREADS'\fR).

.TP
.B request_queue_limit
Specify the maximum number of requests kept in each of the priority, normal, low priority, and bulk request queues.
When one of these queues is full, new requests overflow into the dirty queue if space is available.
The default value is \fB'256'\fR when metatiles are enabled, or \fB'512'\fR without metatiles (macro definition \fB'DEFAULT_REQUEST_QUEUE_LIMIT'\fR).

.TP
.B dirty_queue_limit
Specify the maximum number of requests kept in the dirty queue.
The dirty queue is the background queue for expired or overflowed tiles where no client is waiting for the response.
Larger values preserve more expired work for quiet periods, but they also increase memory use and let overflowed client-facing tiles wait behind lower-priority background work.
The default value is \fB'8000'\fR when metatiles are enabled, or \fB'10000'\fR without metatiles (macro definition \fB'DEFAULT_DIRTY_QUEUE_LIMIT'\fR).

.TP
.B pid_file
Specify the file path into which the PID will be written by \fBrenderd\fR.
@@ -189,6 +202,8 @@ Only used by \fBrenderd\fR.

.SH SEE ALSO
.BR renderd(1)
.PP
See \fBdocs/renderd-queue-tuning.md\fR in the source tree for queue tuning and rollout guidance.
.BR

.SH AUTHOR
76 changes: 76 additions & 0 deletions docs/renderd-queue-tuning.md
@@ -0,0 +1,76 @@
# renderd Queue Tuning

`renderd` keeps incoming render requests in a priority queue with five priority
classes:

- priority request queue: missing tiles where a client is waiting
- request queue: stale tiles where a client is waiting
- low priority request queue: less urgent stale/style refresh work
- dirty queue: background work where no client waits for the response
- bulk request queue: explicit bulk rendering work

Requests are fetched in priority order, preserving FIFO order within the same
priority class. When one of the non-dirty request classes is full, new requests
overflow into the dirty class. That prevents immediate drops, but it also means
the overflowed request loses its client-waiting priority until a later client
request for the same tile promotes it back into the appropriate priority class.

## Configuration

Set the queue limits in the active `[renderd]` section of `renderd.conf`:

```ini
[renderd]
num_threads=4
request_queue_limit=256
dirty_queue_limit=8000
```

`request_queue_limit` applies separately to each of the priority, normal, low
priority, and bulk queues. `dirty_queue_limit` applies to the background dirty
queue. If the dirty queue is also full, new overflow work is dropped.
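
As a minimal sketch of this enqueue-and-overflow rule (names, types, and
structure are illustrative only, not the actual mod_tile implementation):

```c
/* Illustrative sketch, not the real mod_tile API or data layout. */
enum queue_class { Q_PRIORITY, Q_REQUEST, Q_LOW, Q_BULK, Q_DIRTY, Q_CLASSES };

struct sketch_queue {
	int count[Q_CLASSES]; /* current length of each class */
	int request_limit;    /* request_queue_limit, applied per non-dirty class */
	int dirty_limit;      /* dirty_queue_limit */
};

/* Returns the class the request ends up in, or -1 if it is dropped. */
static int sketch_enqueue(struct sketch_queue *q, enum queue_class wanted)
{
	if (wanted != Q_DIRTY && q->count[wanted] < q->request_limit) {
		q->count[wanted]++;
		return wanted;
	}

	/* Wanted class full (or the work is background to begin with):
	 * overflow into the dirty queue. The item loses client-waiting
	 * priority until a later duplicate request promotes it back. */
	if (q->count[Q_DIRTY] < q->dirty_limit) {
		q->count[Q_DIRTY]++;
		return Q_DIRTY;
	}

	return -1; /* dirty queue also full: the request is dropped */
}
```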

The defaults keep the previous compiled-in behavior:

- with metatiles: `request_queue_limit=256`, `dirty_queue_limit=8000`
- without metatiles: `request_queue_limit=512`, `dirty_queue_limit=10000`

## Operating Guidance

Increase `dirty_queue_limit` when dirty tiles are dropped during peak load but
the server has quiet periods later where it can catch up. This is useful for
large public tile services where preserving expired work can improve freshness
after the peak passes.

Do not increase `dirty_queue_limit` blindly on an overloaded server. If the
higher-priority queues are continuously non-empty, dirty work may still starve,
and a larger dirty queue mainly stores a longer backlog. Watch queue length,
queue time, render throughput, and dropped-tile metrics before and after the
change.

When a client requests a tile that is already queued as lower-priority work,
`renderd` promotes the existing item to the target priority class if that class
has capacity. If the target class is full, the item remains background work and
the client receives the usual not-done response, as sketched below.
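
Continuing the illustrative sketch from the Configuration section (again, not
the actual implementation):

```c
/* Illustrative sketch. Called when a client request arrives for a tile that
 * is already queued in a lower-priority class. */
static int sketch_promote(struct sketch_queue *q, enum queue_class current,
			  enum queue_class wanted)
{
	if (wanted != Q_DIRTY && q->count[wanted] < q->request_limit) {
		q->count[current]--; /* leave the lower-priority class */
		q->count[wanted]++;  /* wait in the client-facing class instead */
		return wanted;
	}

	/* Target class full: the item stays where it is and the client gets
	 * the usual not-done response. */
	return current;
}
```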

Increase gradually. For example, move from the default to a limit sized for
roughly 10 minutes of dirty work, then one hour, before trying day-scale values.
The right value depends on metatile size, render throughput, expiry volume, and
whether the service has predictable low-load windows.
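
As a rough worked example (the throughput figure is assumed, not measured): a
server that sustains about 20 dirty metatiles per second works through
20 × 600 = 12000 metatiles in 10 minutes, so a `dirty_queue_limit` near 12000
approximates a 10-minute window, and about 72000 approximates one hour.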

## Rollout Checklist

1. Record current `renderd_queue`, `renderd_queue_time`, and
`renderd_processed` Munin graphs.
2. Set `dirty_queue_limit` in `renderd.conf`.
3. Restart `renderd` during a maintenance window.
4. Confirm startup logs show the intended `request_queue_limit` and
   `dirty_queue_limit` (see the example output after this list).
5. Watch whether dropped dirty work decreases without sustained growth in queue
time.
6. Roll back to the previous values if dirty queue time grows continuously or
missing-tile requests appear to be delayed by old background work.
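
The `g_logger` calls added in `src/renderd_config.c` suggest the step-4 lines
will look roughly like the following; the exact prefix and whether they appear
depend on the configured log level:

```
renderd: request_queue_limit = '256'
renderd: dirty_queue_limit = '8000'
```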

These settings change per-class queue capacity and work together with duplicate
promotion for already-queued tiles. They do not implement bounded overtaking or
broader fairness between unrelated queued tiles; those would require a larger
queue-scheduler change.
2 changes: 2 additions & 0 deletions etc/renderd/renderd.conf
@@ -6,6 +6,8 @@ stats_file=/run/renderd/renderd.stats
socketname=/run/renderd/renderd.sock
num_threads=4
tile_dir=/var/cache/renderd/tiles
;request_queue_limit=256
;dirty_queue_limit=8000

[mapnik]
plugins_dir=/usr/lib/mapnik/3.1/input
4 changes: 4 additions & 0 deletions etc/renderd/renderd.conf.examples
@@ -6,12 +6,16 @@ stats_file=/run/renderd/renderd.stats
socketname=/run/renderd/renderd.sock
num_threads=4
tile_dir=/var/cache/renderd/tiles
;request_queue_limit=256
;dirty_queue_limit=8000

;[renderd]
;iphostname=::1
;ipport=7654
;num_threads=4
;tile_dir=rados://tiles/etc/ceph/ceph.conf
;request_queue_limit=256
;dirty_queue_limit=8000
;pid_file=/run/renderd/renderd_rados.pid
;stats_file=/run/renderd/renderd.stats

2 changes: 2 additions & 0 deletions etc/renderd/renderd.conf.in
@@ -3,6 +3,8 @@ pid_file=@RENDERD_PIDFILE@
socketname=@RENDERD_SOCKET@
stats_file=@RENDERD_RUN_DIR@/renderd.stats
tile_dir=@RENDERD_TILE_DIR@
; request_queue_limit=256
; dirty_queue_limit=8000

[mapnik]
font_dir=@MAPNIK_FONTS_DIR@
9 changes: 4 additions & 5 deletions includes/render_config.h
@@ -114,14 +114,13 @@
// Metatiles are much larger in size so we don't need big queues to handle large areas
#ifdef METATILE
#define QUEUE_MAX (64)
#define REQ_LIMIT (256)
#define DIRTY_LIMIT (8000)
#define DEFAULT_REQUEST_QUEUE_LIMIT (256)
#define DEFAULT_DIRTY_QUEUE_LIMIT (8000)

#else
#define QUEUE_MAX (1024)
#define REQ_LIMIT (512)
#define DIRTY_LIMIT (10000)
#define HASHIDX_SIZE 22123
#define DEFAULT_REQUEST_QUEUE_LIMIT (512)
#define DEFAULT_DIRTY_QUEUE_LIMIT (10000)
#endif

// Penalty for client making an invalid request (in seconds)
2 changes: 2 additions & 0 deletions includes/renderd.h
@@ -42,9 +42,11 @@ typedef struct {
const char *socketname;
const char *stats_filename;
const char *tile_dir;
int dirty_queue_limit;
int ipport;
int mapnik_font_dir_recurse;
int num_threads;
int request_queue_limit;
} renderd_config;

typedef struct {
5 changes: 4 additions & 1 deletion includes/request_queue.h
@@ -50,8 +50,10 @@ struct item_idx {
};

struct request_queue {
int dirtyLimit;
int hashidxSize;
struct item reqHead, reqPrioHead, reqLowHead, reqBulkHead, dirtyHead, renderHead;
int requestLimit;
struct item pendingHead, renderHead;
struct item_idx *item_hashidx;
int reqNum, reqPrioNum, reqLowNum, reqBulkNum, dirtyNum;
pthread_mutex_t qLock;
@@ -60,6 +62,7 @@
};

struct request_queue *request_queue_init();
struct request_queue *request_queue_init_with_limits(int request_limit, int dirty_limit);
void request_queue_close(struct request_queue *queue);

struct item *request_queue_fetch_request(struct request_queue *queue);
2 changes: 1 addition & 1 deletion src/renderd.c
@@ -809,7 +809,7 @@ int main(int argc, char **argv)
}

g_logger(G_LOG_LEVEL_INFO, "Initialising request queue");
render_request_queue = request_queue_init();
render_request_queue = request_queue_init_with_limits(config.request_queue_limit, config.dirty_queue_limit);

if (render_request_queue == NULL) {
g_logger(G_LOG_LEVEL_CRITICAL, "Failed to initialise request queue");
16 changes: 16 additions & 0 deletions src/renderd_config.c
@@ -425,7 +425,9 @@ void process_renderd_sections(dictionary *ini, const char *config_file_name, ren
copy_string(section, &configs_dest[renderd_section_num].name, renderd_strlen + 2);

process_config_int(ini, section, "ipport", &configs_dest[renderd_section_num].ipport, 0);
process_config_int(ini, section, "dirty_queue_limit", &configs_dest[renderd_section_num].dirty_queue_limit, DEFAULT_DIRTY_QUEUE_LIMIT);
process_config_int(ini, section, "num_threads", &configs_dest[renderd_section_num].num_threads, NUM_THREADS);
process_config_int(ini, section, "request_queue_limit", &configs_dest[renderd_section_num].request_queue_limit, DEFAULT_REQUEST_QUEUE_LIMIT);
process_config_string(ini, section, "iphostname", &configs_dest[renderd_section_num].iphostname, "", INILINE_MAX);
process_config_string(ini, section, "pid_file", &configs_dest[renderd_section_num].pid_filename, RENDERD_PIDFILE, PATH_MAX);
process_config_string(ini, section, "socketname", &configs_dest[renderd_section_num].socketname, RENDERD_SOCKET, PATH_MAX);
@@ -436,6 +438,16 @@
configs_dest[renderd_section_num].num_threads = sysconf(_SC_NPROCESSORS_ONLN);
}

if (configs_dest[renderd_section_num].request_queue_limit < 1) {
g_logger(G_LOG_LEVEL_CRITICAL, "Specified request_queue_limit (%i) is too small, must be greater than or equal to %i.", configs_dest[renderd_section_num].request_queue_limit, 1);
exit(7);
}

if (configs_dest[renderd_section_num].dirty_queue_limit < 0) {
g_logger(G_LOG_LEVEL_CRITICAL, "Specified dirty_queue_limit (%i) is too small, must be greater than or equal to %i.", configs_dest[renderd_section_num].dirty_queue_limit, 0);
exit(7);
}

if (strnlen(configs_dest[renderd_section_num].socketname, PATH_MAX) >= renderd_socketname_maxlen) {
g_logger(G_LOG_LEVEL_CRITICAL, "Specified socketname (%s) exceeds maximum allowed length of %i.", configs_dest[renderd_section_num].socketname, renderd_socketname_maxlen);
exit(7);
@@ -510,6 +522,8 @@ void process_config_file(const char *config_file_name, int active_renderd_sectio
}

g_logger(G_LOG_LEVEL_DEBUG, "\trenderd(%i): num_threads = '%i'", i, config_slaves[i].num_threads);
g_logger(G_LOG_LEVEL_DEBUG, "\trenderd(%i): request_queue_limit = '%i'", i, config_slaves[i].request_queue_limit);
g_logger(G_LOG_LEVEL_DEBUG, "\trenderd(%i): dirty_queue_limit = '%i'", i, config_slaves[i].dirty_queue_limit);
g_logger(G_LOG_LEVEL_DEBUG, "\trenderd(%i): pid_file = '%s'", i, config_slaves[i].pid_filename);

if (strnlen(config_slaves[i].stats_filename, PATH_MAX)) {
Expand All @@ -526,6 +540,8 @@ void process_config_file(const char *config_file_name, int active_renderd_sectio
}

g_logger(log_level, "\trenderd: num_threads = '%i'", config.num_threads);
g_logger(log_level, "\trenderd: request_queue_limit = '%i'", config.request_queue_limit);
g_logger(log_level, "\trenderd: dirty_queue_limit = '%i'", config.dirty_queue_limit);

if (active_renderd_section_num == 0 && num_slave_threads > 0) {
g_logger(log_level, "\trenderd: num_slave_threads = '%i'", num_slave_threads);