== Design

* Simplicity: Unicorn is a traditional UNIX prefork web server.
  No threads are used at all; this makes applications easier to debug
  and fix. When your application goes awry, a BOFH can just
  "kill -9" the runaway worker process without tearing down all
  clients, just that one. Only UNIX-like systems supporting fork()
  and file descriptor inheritance are supported.

* The Ragel+C HTTP parser is taken from Mongrel. This is the
  only non-Ruby part, and there are no plans to add any more
  non-Ruby components.

* All HTTP parsing and I/O is done much like Mongrel:
  1. read/parse HTTP request headers in full
  2. call Rack application
  3. write HTTP response back to the client

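The three steps above can be sketched as a toy Ruby cycle. The names here are purely illustrative; real Unicorn parses headers with its Ragel-generated C extension and handles many edge cases this sketch ignores:

```ruby
require "stringio"

# Hypothetical, simplified request cycle; NOT Unicorn's real code.
def handle_request(sock, app)
  # 1. read/parse HTTP request headers in full
  method, path, = sock.gets("\r\n").split(" ")
  while (line = sock.gets("\r\n")) && line != "\r\n"
    # header fields would be collected into the Rack env here
  end

  # 2. call Rack application
  env = { "REQUEST_METHOD" => method, "PATH_INFO" => path,
          "rack.input" => StringIO.new("") }
  status, headers, body = app.call(env)

  # 3. write HTTP response back to the client
  sock.write("HTTP/1.1 #{status} OK\r\n")
  headers.each { |k, v| sock.write("#{k}: #{v}\r\n") }
  sock.write("\r\n")
  body.each { |chunk| sock.write(chunk) }
end
```
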
* Like Mongrel, neither keepalive nor pipelining is supported.
  These aren't needed since Unicorn is only designed to serve
  fast, low-latency clients directly. Do one thing, do it well;
  let nginx handle slow clients.

* Configuration is purely in Ruby and eval(). Ruby is less
  ambiguous than YAML and lets lambdas for
  before_fork/after_fork/before_exec hooks be defined inline. An
  optional, separate config_file may be used to make supported
  configuration changes (and also gives you plenty of rope if you RTFS
  :>)

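For illustration, a config_file in this style might look like the following; all paths and counts here are hypothetical examples, and Unicorn::Configurator documents the full set of directives:

```ruby
# Hypothetical unicorn config file; all values are examples.
worker_processes 4
preload_app true
timeout 60
listen "/tmp/unicorn.sock", :backlog => 64

before_fork do |server, worker|
  # disconnect shared handles inherited from the master here,
  # e.g. database connections opened via preload_app
end

after_fork do |server, worker|
  # re-establish per-worker connections here
end
```
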
* One master process spawns and reaps worker processes. The
  Rack application itself is called only within the worker process (but
  can be loaded within the master). A copy-on-write-friendly garbage
  collector like the one found in Ruby 2.0.0dev or Ruby Enterprise Edition
  can be used to minimize memory usage along with the "preload_app true"
  directive (see Unicorn::Configurator).

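A stripped-down sketch of that master/worker split follows. This is an assumption-laden toy: real Unicorn also installs signal handlers, can re-execute itself, and respawns workers that die:

```ruby
# Toy prefork: the master forks workers and reaps them.
# With "preload_app true" the app would be loaded before these
# forks, so workers share its memory copy-on-write.
WORKER_COUNT = 2

pids = WORKER_COUNT.times.map do
  fork do
    # worker process: this is where the Rack app would run in an
    # accept()/dispatch loop until the master says to stop
    exit 0
  end
end

# master process: reap the workers (real Unicorn respawns them)
statuses = pids.map { |pid| Process.waitpid2(pid)[1].exitstatus }
```
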
* The number of worker processes should be scaled to the number of
  CPUs, the amount of memory, or even the spindles you have. If you
  have an existing Mongrel cluster running a single-threaded app,
  using the same number of processes should work. Let a
  full-HTTP-request-buffering reverse proxy like nginx manage
  concurrency to thousands of slow clients for you. Unicorn scaling
  should only be concerned with the limits of your backend system(s).

* Load balancing between worker processes is done by the OS kernel.
  All workers share a common set of listener sockets and do
  non-blocking accept() on them. The kernel decides which worker
  process to give a socket to, and workers sleep if there is
  nothing to accept().

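That kernel-level balancing can be illustrated with a toy script (localhost only; real workers loop forever rather than exiting after one decision):

```ruby
require "socket"

# Two forked "workers" race to accept() on one shared listener;
# the kernel hands the single connection to exactly one of them.
listener = TCPServer.new("127.0.0.1", 0)

workers = 2.times.map do
  fork do
    begin
      listener.accept_nonblock.close
      exit 0                              # won the accept() race
    rescue IO::WaitReadable
      # sleep in the kernel until there is something to accept()
      exit 1 unless IO.select([listener], nil, nil, 2)
      retry
    end
  end
end

TCPSocket.new("127.0.0.1", listener.addr[1]).close   # one client
statuses = workers.map { |pid| Process.waitpid2(pid)[1].exitstatus }
```

Exactly one worker wins the race and exits 0; the other eventually times out in select() and exits 1.
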
* Since non-blocking accept() is used, there can be a thundering
  herd when an occasional client connects while the application
  *is not busy*. The thundering herd problem should not affect
  applications that are running all the time, since worker processes
  will only select()/accept() outside of the application dispatch.

* Additionally, thundering herds are much smaller than with
  configurations using existing prefork servers. Process counts should
  only be scaled to backend resources, _never_ to the number of expected
  clients, as is typical with blocking prefork servers. So while we've
  seen instances of popular prefork servers configured to run many
  hundreds of worker processes, Unicorn deployments are typically only
  2-4 processes per core.

* On-demand scaling of worker processes never happens automatically.
  Again, Unicorn is concerned with scaling to backend limits and should
  never be configured in a fashion where it could be waiting on slow
  clients. For extremely rare circumstances, we provide TTIN and TTOU
  signal handlers to increment/decrement your process counts without
  reloading. Think of it as driving a car with a manual transmission:
  you have a lot more control if you know what you're doing.

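The idea can be demonstrated with a toy "master" that only tracks a worker count when signaled; real Unicorn forks or reaps an actual worker per signal, and the pipes below are merely plumbing so the parent can read the result:

```ruby
# Toy TTIN/TTOU handling: the "master" adjusts a counter per signal.
gate_r, gate_w = IO.pipe   # parent -> child: "signals all sent"
out_r, out_w = IO.pipe     # child -> parent: the final count

master = fork do
  worker_count = 2
  trap(:TTIN) { worker_count += 1 }   # one more worker
  trap(:TTOU) { worker_count -= 1 }   # one fewer worker
  gate_w.close
  gate_r.read                         # park here, like the master loop
  out_w.write(worker_count.to_s)
end

gate_r.close
sleep 0.2                     # give the child time to install traps
Process.kill(:TTIN, master)   # 2 -> 3
sleep 0.1
Process.kill(:TTIN, master)   # 3 -> 4
sleep 0.1
Process.kill(:TTOU, master)   # 4 -> 3
gate_w.close                  # unblock the child so it reports
out_w.close
final = out_r.read.to_i
Process.waitpid(master)
```
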
* Blocking I/O is used for clients. This allows a simpler code path
  to be followed within the Ruby interpreter and fewer syscalls.
  Applications that use threads continue to work if Unicorn
  is only serving LAN or localhost clients.

* SIGKILL is used to terminate timed-out workers from misbehaving apps
  as reliably as possible on a UNIX system. The default timeout is a
  generous 60 seconds (the same default as in Mongrel).

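A minimal sketch of that enforcement, with the 60-second timeout shortened to fractions of a second for the example:

```ruby
# A worker "stuck" in the application is SIGKILLed once the
# (shortened) timeout elapses; SIGKILL cannot be trapped or
# ignored, which is what makes it reliable.
timeout = 0.3
worker = fork { sleep 60 }    # simulates a hung worker

sleep timeout                 # grace period passes with no progress
Process.kill(:KILL, worker)
_, status = Process.waitpid2(worker)
```
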
* The poor performance of select() on large FD sets is avoided,
  as few file descriptors are used in each worker. There should be
  no gain from moving to highly scalable but unportable event
  notification solutions for watching so few file descriptors.

* If the master process dies unexpectedly for any reason,
  workers will notice within :timeout/2 seconds and follow
  the master to its death.

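The mechanism can be approximated by polling the parent pid. Treat this as an assumption-laden sketch (the real implementation differs), with the short poll interval standing in for :timeout/2:

```ruby
# A "master" forks a worker that periodically checks whether it
# has been reparented (i.e. the master died), and exits if so.
r, w = IO.pipe

fake_master = fork do
  master_pid = Process.pid
  fork do                            # the worker (a grandchild here)
    loop do
      if Process.ppid != master_pid  # master is gone: follow it
        w.write("dead")
        exit 0
      end
      sleep 0.05                     # stands in for :timeout/2
    end
  end
  sleep 0.2
  exit!(0)                           # master "dies unexpectedly"
end

Process.waitpid(fake_master)
w.close                              # our copy; worker still holds one
message = r.read(4)                  # blocks until the worker notices
```
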
* There is never any explicit real-time dependency or communication
  between the worker processes, nor with the master process.
  Synchronization is handled entirely by the OS kernel, and shared
  resources are never accessed by the worker while it is servicing
  a client.