Embedded HTTP Server / HTTP Middleware
- Asynchronous I/O and request processing
- Offers response caching for added efficiency
- Autoscaling thread pools: uses less memory when load is light
- Supports multiple listeners for improved load distribution
- Keeps CPU and memory usage low when the server is idle
- File streaming and serving custom streams with async data generation
- Read/Write throttling, both for the server as a whole and for individual connections (see the sketch after this list)
- Conditional request message body fetching & rejection
- Written in modern C++; easy to use and straightforward to customize
- Built-in Profiler to help debug potential performance issues
- Public API documented with Doxygen
- Thoroughly tested with Google Test unit & integration tests
- Stress-tested for memory leaks with Valgrind Memcheck
- Security tested with SlowHTTPTest (DoS simulator)
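To make the throttling feature more concrete, below is a minimal token-bucket sketch of the general technique. This is a generic illustration under assumed names (TokenBucket, Grab), not Chili's actual API or implementation; consult the Doxygen-generated reference for the real interface.

#include <algorithm>
#include <chrono>
#include <cstdint>

// Generic token-bucket limiter (single-threaded sketch; a real limiter
// would need synchronization). Each connection, or the server as a whole,
// gets its own bucket; I/O code asks the bucket how many bytes it may
// transfer right now.
class TokenBucket {
public:
    using Clock = std::chrono::steady_clock;

    explicit TokenBucket(std::uint64_t bytesPerSecond)
        : _rate(bytesPerSecond)
        , _capacity(bytesPerSecond)
        , _tokens(bytesPerSecond)
        , _last(Clock::now()) {}

    // Returns how many bytes may be transferred right now, up to `desired`.
    std::uint64_t Grab(std::uint64_t desired) {
        Refill();
        auto granted = std::min(desired, _tokens);
        _tokens -= granted;
        return granted;
    }

private:
    void Refill() {
        auto now = Clock::now();
        double elapsed = std::chrono::duration<double>(now - _last).count();
        auto refill = static_cast<std::uint64_t>(elapsed * _rate);

        if (refill > 0) { // only advance the clock once whole bytes have accrued
            _tokens = std::min(_capacity, _tokens + refill);
            _last = now;
        }
    }

    std::uint64_t _rate;       // sustained rate, bytes per second
    std::uint64_t _capacity;   // burst capacity, bytes
    std::uint64_t _tokens;     // bytes currently available
    Clock::time_point _last;   // time of the last refill
};

Combining a server-wide bucket with per-connection buckets then amounts to granting the minimum of what each one allows.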
A minimal Hello World server looks like this:

#include <iostream> // for std::cout
// ...plus the Chili headers appropriate to your setup (see the
// repository and the Doxygen docs for the exact include paths).

int main() {
    // Listen on 127.0.0.1, port 3000
    auto endpoint = Chili::IPEndpoint({{127, 0, 0, 1}}, 3000);

    // Respond to every request with a small HTML page
    auto handler = Chili::RequestHandler([](Chili::Channel& c) {
        auto& res = c.GetResponse();
        res.AppendHeader("Content-Type", "text/html");
        res.SetContent("<h1>Hello world!</h1>");
        res.SetStatus(Chili::Status::Ok);
        c.SendResponse();
    });

    auto server = Chili::HttpServer(endpoint, handler);
    auto task = server.Start();

    std::cout << "HelloWorld server started!\n";

    // Block until the server shuts down
    task.wait();
}
Assuming you're on Ubuntu/Debian, run the following:
$ sudo apt-get install cmake libcurl4-gnutls-dev libgtest-dev google-mock libunwind-dev
$ git clone https://github.com/ymarcov/http
$ cd http
$ git submodule update --init
$ cmake .
$ make hello_world
$ bin/hello_world
Then connect to http://localhost:3000 from your browser.
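Alternatively, you can check the response (including its headers) from a terminal:

$ curl -i http://localhost:3000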
Documentation may be generated by running doxygen Doxyfile. After that, it will be available under doc/html/index.html.
The following SIEGE results were obtained on an Intel i7-3630QM (laptop) @ 2.40GHz with 8 GB of memory. In this test, Chili outperformed all of the other servers, with Lighttpd coming in second. Keep in mind, however, that those other servers are large, configurable software systems rather than just a library. The test served Apache's default "It works!" HTML page from each server as many times as possible, mainly to compare the number of requests served per second.
$ bin/sandbox_echo 3000 4 0 & # Run the sandbox server on port 3000 with 4 threads
$ siege -b -c20 -t5S http://localhost:<PORT> # benchmark mode, 20 concurrent clients, 5 seconds
Transactions: 155471 hits
Availability: 100.00 %
Elapsed time: 4.50 secs
Data transferred: 1586.51 MB
Response time: 0.00 secs
Transaction rate: 34549.11 trans/sec
Throughput: 352.56 MB/sec
Concurrency: 19.30
Successful transactions: 155474
Failed transactions: 0
Longest transaction: 0.01
Shortest transaction: 0.00

Transactions: 89438 hits
Availability: 100.00 %
Elapsed time: 4.37 secs
Data transferred: 259.38 MB
Response time: 0.00 secs
Transaction rate: 20466.36 trans/sec
Throughput: 59.36 MB/sec
Concurrency: 19.67
Successful transactions: 89439
Failed transactions: 0
Longest transaction: 0.03
Shortest transaction: 0.00

Transactions: 64993 hits
Availability: 100.00 %
Elapsed time: 4.08 secs
Data transferred: 212.29 MB
Response time: 0.00 secs
Transaction rate: 15929.66 trans/sec
Throughput: 52.03 MB/sec
Concurrency: 19.63
Successful transactions: 64994
Failed transactions: 0
Longest transaction: 0.02
Shortest transaction: 0.00

Transactions: 27975 hits
Availability: 100.00 %
Elapsed time: 4.40 secs
Data transferred: 81.13 MB
Response time: 0.00 secs
Transaction rate: 6357.95 trans/sec
Throughput: 18.44 MB/sec
Concurrency: 19.74
Successful transactions: 27975
Failed transactions: 0
Longest transaction: 0.68
Shortest transaction: 0.00