Flex changed the second argument of yy_scan_bytes() from int to size_t some time ago, without updating the documentation. To keep the prototype in plproxy in sync, use flex --header-file to generate a header file and include that, instead of maintaining an explicit prototype ourselves.

See also:
- <https://sourceforge.net/p/flex/bugs/184/>
- <https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=750163>
Otherwise it can still hang.
The previous check was too broad and left plproxy hanging.

Reported-By: Tarvi Pillessaar
Refactor some internals to make this possible. Mainly, FunctionCallInfo is not available when validating, so avoid accessing it unless required. Rename plproxy_compile() to plproxy_compile_and_cache() and the previously internal fn_compile() to plproxy_compile(). This matches their purpose better and allows the validator to call plproxy_compile() without invoking execution-time-dependent code.

Many error test cases have changed because the validator now catches errors when the function is created, not when it is called.

Raise the extension version to 2.5.1 to allow upgrading from non-validator installations.
* use installed pg_buildext
* use installed pgxs_debian_control.mk
* don't override the 'clean' target in debian/rules
* 'make deb' forces control file regeneration

This reduces the amount of cruft maintained locally. It also means that to build against a server-dev-X.Y package that is not for Debian's default Postgres version, the following packages need to be installed from the PGDG (wiki.postgresql.org/wiki/Apt) repo:

postgresql-client-common
postgresql-common
postgresql-server-dev-all
Previously, as soon as cancel requests were sent, plproxy re-threw the error without waiting for a reaction from the backend. This behaviour created two problems:

- If the plproxy backend is closed immediately, the bouncer sees the plproxy side close before the cancel reply from the backend, thus seeing a mid-tx close and dropping the connection.
- If a new query comes in to the plproxy backend, plproxy itself sees a dirty connection and closes it, thus also causing the bouncer to close the server connection.

In both cases this can cause a server connection drop in the pooler. The new behaviour of waiting for the query result should fix it.
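The wait-for-result behaviour can be sketched abstractly. The mock below is hypothetical (MockConn, mock_get_result and cancel_and_drain are made-up names, not plproxy code); it stands in for the real libpq flow, where after issuing a cancel request the connection is drained with PQgetResult() until it returns NULL before the error is re-thrown.

```c
#include <stdbool.h>

/* Hypothetical stand-in for a remote connection; in plproxy the real
 * counterpart is a libpq PGconn. */
typedef struct {
    int  pending_results;  /* results the backend will still send */
    bool dirty;            /* true while a query is in flight */
} MockConn;

/* Fetch one result; returns false when nothing is left (the libpq
 * analogue of PQgetResult() returning NULL). */
static bool mock_get_result(MockConn *conn)
{
    if (conn->pending_results == 0) {
        conn->dirty = false;  /* backend acknowledged end of query */
        return false;
    }
    conn->pending_results--;
    return true;
}

/* After sending a cancel request, wait for the backend's reaction by
 * draining the remaining results instead of re-throwing immediately.
 * This leaves the connection clean, so a pooler in between never
 * observes a mid-transaction close. */
static void cancel_and_drain(MockConn *conn)
{
    /* (real code would send the cancel request here, e.g. PQcancel) */
    while (mock_get_result(conn))
        ;  /* discard results until the backend is done */
}
```

Draining until end-of-query is what keeps the pooler from seeing either of the two failure modes above: the connection is returned in a known-clean state rather than abandoned mid-transaction.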