
releng work for v0.04.

1 parent 5755211 · commit 0398fb1d40b35afdfc804d988ff724369d69c84d · agentzh committed Dec 22, 2009
Showing with 76 additions and 12 deletions.
  1. +42 −9 README
  2. +34 −3 doc/readme.wiki
51 README
@@ -6,9 +6,9 @@ Name
installation instructions.
Version
- This document describes memc-nginx-module v0.03
+ This document describes memc-nginx-module v0.04
(<http://github.com/agentzh/memc-nginx-module/downloads>) released on
- Dec 7, 2009.
+ Dec 22, 2009.
Synopsis
# GET /foo?key=dog
@@ -94,16 +94,40 @@ Description
I've used Ragel (<http://www.complang.org/ragel/>) to generate the
memcached response parsers (in C) for joy :)
+ Keep-alive connections to memcached servers
+ You need Maxim Dounin's ngx_upstream_keepalive module
+ (<http://mdounin.ru/hg/ngx_http_upstream_keepalive/>) together with this
+ module for keep-alive TCP connections to your backend memcached servers.
+
+ Here's a sample configuration:
+
+ http {
+ upstream backend {
+ server 127.0.0.1:11211;
+
+ # a pool with at most 1024 connections
+ # and do not distinguish the servers
+ keepalive 1024 single;
+ }
+
+ server {
+ ...
+ location /memc {
+ set $memc_cmd get;
+ set $memc_key $arg_key;
+ memc_pass backend;
+ }
+ }
+ }
+
How it works
It implements the memcached TCP protocol all by itself, based upon the
- "upstream" mechansim. Everything involving I/O is non-blocking but it
- does not keep TCP connections to the upstream memcached servers across
- requests, just like other upstream modules.
+ "upstream" mechansim. Everything involving I/O is non-blocking.
- You need Maxim Dounin's ngx_upstream_keepalive module
- (<http://mdounin.ru/hg/ngx_http_upstream_keepalive/>) together with this
- module for keep-alive TCP connections to your backend memcached servers
- ;)
+ The module itself does not keep TCP connections to the upstream
+ memcached servers across requests, just like other upstream modules. For
+ a working solution, see section Keep-alive connections to memcached
+ servers.
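
 The sample configuration above only issues "get". As a rough sketch (not
 part of this commit), the same "backend" upstream can also serve the other
 supported commands by letting the client pick the command through query
 arguments; $memc_cmd, $memc_key, $memc_flags, and $memc_exptime are the
 module's own variables, while the location name and argument names below
 are arbitrary:

     location /memc2 {
         # e.g.  GET  /memc2?cmd=get&key=dog
         #       POST /memc2?cmd=set&key=dog&exptime=60
         # (for storage commands the value is normally taken
         #  from the request body)
         set $memc_cmd $arg_cmd;
         set $memc_key $arg_key;
         set $memc_flags $arg_flags;        # defaults to 0
         set $memc_exptime $arg_exptime;    # defaults to 0

         memc_pass backend;
     }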
Memcached commands supported
The memcached storage commands set, add, replace, prepend, and append
@@ -381,6 +405,15 @@ Source Repository
(<http://github.com/agentzh/memc-nginx-module>).
ChangeLog
+ v0.04
+ * to ensure Maxim's ngx_http_upstream_keepalive
+ (<http://mdounin.ru/hg/ngx_http_upstream_keepalive/>) module caches
+ our connections even if "u->headers_in->status" is 201 (Created).
+
+ * updated docs to make it clear that this module can work with
+ "upstream" multi-server backends. thanks Bernd Dorn for reporting
+ it.
+
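
 The second changelog item above notes that the module works with
 multi-server "upstream" backends. A minimal sketch of such a backend (not
 from this commit), assuming a key-hashing upstream module is also used so
 that each key always maps to the same memcached server (plain round-robin
 would scatter set/get pairs across servers); the upstream name and
 addresses are placeholders:

     upstream memcluster {
         server 10.0.0.1:11211;
         server 10.0.0.2:11211;

         # keep-alive connection pool; without "single" the
         # servers are distinguished when caching connections
         keepalive 512;
     }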
v0.03
* fixed a connection leak caused by an extra "r->main->count++"
operation: we should NOT do "r->main->count++" after calling the
37 doc/readme.wiki
@@ -6,7 +6,7 @@
= Version =
-This document describes memc-nginx-module [http://github.com/agentzh/memc-nginx-module/downloads v0.03] released on Dec 7, 2009.
+This document describes memc-nginx-module [http://github.com/agentzh/memc-nginx-module/downloads v0.04] released on Dec 22, 2009.
= Synopsis =
@@ -93,11 +93,38 @@ It allows you to define a custom [http://en.wikipedia.org/wiki/REST REST] interf
This module is not supposed to be merged into the Nginx core because I've used [http://www.complang.org/ragel/ Ragel] to generate the memcached response parsers (in C) for joy :)
+== Keep-alive connections to memcached servers ==
+
+You need Maxim Dounin's [http://mdounin.ru/hg/ngx_http_upstream_keepalive/ ngx_upstream_keepalive module] together with this module for keep-alive TCP connections to your backend memcached servers.
+
+Here's a sample configuration:
+
+<geshi lang="nginx">
+ http {
+ upstream backend {
+ server 127.0.0.1:11211;
+
+ # a pool with at most 1024 connections
+ # and do not distinguish the servers
+ keepalive 1024 single;
+ }
+
+ server {
+ ...
+ location /memc {
+ set $memc_cmd get;
+ set $memc_key $arg_key;
+ memc_pass backend;
+ }
+ }
+ }
+</geshi>
+
== How it works ==
-It implements the memcached TCP protocol all by itself, based upon the <code>upstream</code> mechanism. Everything involving I/O is non-blocking but it does not keep TCP connections to the upstream memcached servers across requests, just like other upstream modules.
+It implements the memcached TCP protocol all by itself, based upon the <code>upstream</code> mechanism. Everything involving I/O is non-blocking.
-You need Maxim Dounin's [http://mdounin.ru/hg/ngx_http_upstream_keepalive/ ngx_upstream_keepalive module] together with this module for keep-alive TCP connections to your backend memcached servers ;)
+The module itself does not keep TCP connections to the upstream memcached servers across requests, just like other upstream modules. For a working solution, see section [[#Keep-alive connections to memcached servers|Keep-alive connections to memcached servers]].
= Memcached commands supported =
@@ -366,6 +393,10 @@ Available on github at [http://github.com/agentzh/memc-nginx-module agentzh/memc
= ChangeLog =
+== v0.04 ==
+* to ensure Maxim's [http://mdounin.ru/hg/ngx_http_upstream_keepalive/ ngx_http_upstream_keepalive] module caches our connections even if <code>u->headers_in->status</code> is 201 (Created).
+* updated docs to make it clear that this module can work with "upstream" multi-server backends. thanks Bernd Dorn for reporting it.
+
== v0.03 ==
* fixed a connection leak caused by an extra <code>r->main->count++</code> operation: we should NOT do <code>r->main->count++</code> after calling the <code>ngx_http_read_client_request_body</code> function in our content handler.
