
Connection pooling for php-fpm with nginx unix sockets and resty mysql #99

Closed
edo888 opened this issue Apr 12, 2020 · 4 comments
Comments

edo888 commented Apr 12, 2020

Hi,

I have implemented a basic setup for connection pooling with resty mysql for PHP and I would like to know your opinion about it. Maybe I'm missing an important part that could break it in production.

Nginx is configured to listen on a unix socket as follows:

server {
    listen unix:/var/run/nginx.sock;
    server_name local_mysql_bucket;

    default_type text/plain;

    location / {
        content_by_lua_block {
            local db = require 'mysql-multiplexing'
            ngx.req.read_body()
            local query = ngx.req.get_body_data()

            ngx.print(db.run_query(query))
        }
    }
}

mysql-multiplexing.lua looks like this:

local _M = {}
local db = require 'db'
local mysql = require 'resty.mysql'
local cjson = require 'cjson'

-- errors are reported back to the caller as JSON
function _M.run_query(query)
    local my_lnk, err = mysql:new()
    if not my_lnk then
        return cjson.encode({error = 'failed to instantiate mysql: ' .. (err or '')})
    end
    my_lnk:set_timeout(1000) -- 1 sec

    local ok, err, errcode, sqlstate = my_lnk:connect(db.my_lnk_params)
    if not ok then
        return cjson.encode({error = err, errcode = errcode, sqlstate = sqlstate})
    end

    -- run the query
    local res, err, errcode, sqlstate = my_lnk:query(query)
    if not res then
        return cjson.encode({error = err, errcode = errcode, sqlstate = sqlstate})
    end

    -- put the mysql connection into a pool of size 20 with a 10000 ms max idle timeout
    local ok, err = my_lnk:set_keepalive(10000, 20)

    -- return the JSON-encoded result
    return cjson.encode(res)
end

return _M
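The `db` module isn't shown above; for reference, `resty.mysql`'s `connect()` accepts an options table along these lines (the field names follow lua-resty-mysql's documented options; the values here are placeholders, not my actual config):

```lua
-- Hypothetical db.lua: connection parameters passed to resty.mysql connect().
return {
    my_lnk_params = {
        host            = "127.0.0.1",  -- placeholder
        port            = 3306,
        database        = "test",       -- placeholder
        user            = "app",        -- placeholder
        password        = "secret",     -- placeholder
        charset         = "utf8mb4",
        max_packet_size = 1024 * 1024,
    },
}
```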

PHP script to run a MySQL query:

function do_query($q) {
    $fp = fsockopen('unix:///var/run/nginx.sock', -1, $errno, $errstr);
    if (!$fp) {
        return json_encode(['error' => "$errstr ($errno)"]);
    }

    $req = "POST / HTTP/1.0\r\n";
    $req .= "Host: local_mysql_bucket\r\n";
    $req .= "Content-Length: " . strlen($q) . "\r\n";
    $req .= "Connection: close\r\n\r\n";
    $req .= $q;

    fwrite($fp, $req);

    $resp = '';
    while (!feof($fp)) {
        $resp .= fgets($fp, 128);
    }
    fclose($fp);

    // strip the HTTP headers; the body is the JSON-encoded result
    return explode("\r\n\r\n", $resp, 2)[1];
}

echo do_query($q);
// prints the JSON-encoded query result

Thanks!

doujiang24 (Member) commented:

> which can break it

@edo888 can you describe it in detail?

edo888 (Author) commented Apr 14, 2020

I'm not sure; that's why I'm asking for your opinion. Can this approach handle thousands of concurrent requests without causing issues? Do you see any pitfalls or disadvantages?

Thanks!

doujiang24 (Member) commented:

@edo888
It should work fine in production.
One thing we could improve is limiting the maximum number of concurrent connections to the MySQL server, but that depends on this TODO:
#100

Even without that TODO, it should be OK.
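In the meantime, one way to cap concurrent MySQL connections is a counter in a shared dict. A sketch (the dict name and limit are illustrative, and it assumes `lua_shared_dict mysql_conn 1m;` in the nginx http block):

```lua
-- Sketch: cap concurrent MySQL connections with a shared-dict counter.
-- Requires `lua_shared_dict mysql_conn 1m;` in the nginx http block.
local dict = ngx.shared.mysql_conn
local MAX_CONN = 50  -- illustrative limit

local function acquire()
    -- atomically increment, initializing the key to 0 if it doesn't exist
    local n, err = dict:incr("active", 1, 0)
    if not n then
        return nil, err
    end
    if n > MAX_CONN then
        dict:incr("active", -1)
        return nil, "too many concurrent MySQL connections"
    end
    return true
end

local function release()
    dict:incr("active", -1)
end

-- run_query() would call acquire() before connect() and release()
-- after set_keepalive() (or on any error path).
```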

edo888 (Author) commented Apr 15, 2020

Thank you for your input! :)

@edo888 edo888 closed this as completed May 22, 2020