"Cached plan must not change result type" when using PreparedStatement cache across schemas #496

Closed
gmcguire opened this issue Jan 19, 2016 · 9 comments

I'm using 9.4.1207 against a PostgreSQL 9.4 system, with a multi-tenant setup that has a single Postgres schema per customer. Table names may be repeated across customer schemas, but with different columns, datatypes, etc.

The Java application sets the search_path as the first statement in every transaction, but keeps the JDBC Connection open between transactions (via a Tomcat JDBC pool, version 8.0.24).

This appears to lead to errors like this:

SELECT * FROM some_table LIMIT 1
ERROR: cached plan must not change result type
org.postgresql.core.v3.QueryExecutorImpl in receiveErrorResponse at line 2182
org.postgresql.core.v3.QueryExecutorImpl in processResults at line 1911
org.postgresql.core.v3.QueryExecutorImpl in execute at line 173
org.postgresql.jdbc.PgStatement in execute at line 622
org.postgresql.jdbc.PgStatement in executeWithFlags at line 472
org.postgresql.jdbc.PgStatement in executeUpdate at line 429
...

Looking at the code, it seems the problem could be the re-use of PreparedStatements via the cache implemented in #319. The CallableQueryKey only keys off the parsed SQL, when it might also need to take the search_path into account.

All of the above may be moot, however, if the issue is instead with Postgres itself. See the Notes section of the PREPARE documentation for its behavior: http://www.postgresql.org/docs/current/static/sql-prepare.html It seems to imply that statements are re-parsed when the search_path changes (as of 9.3), yet this test case still triggers the issue (again on 9.4):

create schema testa;
create schema testb;

create table testa.test (a int, b int);
create table testb.test (a text, b text);

set search_path to testa;

prepare testplan as select * from test;

execute testplan;

set search_path to testb;

execute testplan; -- fails: ERROR: cached plan must not change result type
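
For illustration, here is a rough JDBC-level sketch of the same scenario against those testa/testb schemas. The connection details are placeholders, and it assumes the driver's statement cache keeps the server-prepared statement across PreparedStatement close() calls once the default prepareThreshold of 5 is reached:

// Rough reproduction sketch (connection details are placeholders). With the default
// prepareThreshold of 5, the SELECT becomes a named server-prepared statement after the
// fifth execution; once search_path switches to testb (where "test" has text columns),
// re-executing the same SQL reuses the cached plan and fails with
// "cached plan must not change result type".
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class CachedPlanRepro {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/testdb", "user", "password")) {
            try (Statement st = conn.createStatement()) {
                st.execute("SET search_path TO testa");
            }
            for (int i = 0; i < 6; i++) {
                try (PreparedStatement ps = conn.prepareStatement("SELECT * FROM test LIMIT 1")) {
                    ps.executeQuery().close();
                }
            }
            try (Statement st = conn.createStatement()) {
                st.execute("SET search_path TO testb");
            }
            try (PreparedStatement ps = conn.prepareStatement("SELECT * FROM test LIMIT 1")) {
                ps.executeQuery().close(); // expected to fail here
            }
        }
    }
}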

Thanks for your help.

vlsi commented Jan 19, 2016

when it might also need to pay attention to the search_path

All of the above may be moot, however, if the issue is instead with Postgres itself

I think it is a driver issue. It should not issue "execute server-prepared statement" commands and simply rely on the backend to properly replan / pick the cached version for the current search_path.

I'm afraid your only workaround is to either avoid altering the search_path, or disable server-prepared statements.

I think #451 would help you, but it is not merged yet.
It would be nice if you could test it in your environment, though.

The CallableQueryKey only keys off the parsed SQL, when it might also need to pay attention to the search_path.

I think you are right here. Adding search_path to the equation should help. Do you want to try implementing the fix?
Do you see any other values that should be added to the cache key?
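
For illustration, a minimal sketch of what such a composite cache key could look like; the class and field names are hypothetical and are not pgjdbc's actual CallableQueryKey:

// Hypothetical sketch: key prepared-statement cache entries on the SQL text *and* the
// search_path, so the same SQL prepared under different schemas gets separate server plans.
import java.util.Objects;

final class StatementCacheKey {
    private final String sql;         // the parsed SQL text
    private final String searchPath;  // current value of the search_path setting

    StatementCacheKey(String sql, String searchPath) {
        this.sql = sql;
        this.searchPath = searchPath;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof StatementCacheKey)) {
            return false;
        }
        StatementCacheKey other = (StatementCacheKey) o;
        return sql.equals(other.sql) && Objects.equals(searchPath, other.searchPath);
    }

    @Override
    public int hashCode() {
        return Objects.hash(sql, searchPath);
    }
}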

vlsi commented Jan 19, 2016

Here's discussion of similar problem in pgsql-hackers: http://www.postgresql.org/message-id/22921.1358876659@sss.pgh.pa.us

gmcguire (author) commented

Thanks for the help. Based on the patch that came out of that discussion, it looks like search_path is the only other major thing that impacts the plan cache, aside from DDL changes.

How would you suggest fixing this? I assume it would be too expensive to get the current search_path from Postgres each time you cache a prepared statement, but relying on Connection.setSchema(String) to pick up search_path changes seems fragile. (For example, we set our search_path to customerX, public to pick up functions in public, and the current implementation of setSchema can't handle this, so we execute it as a Statement instead.)
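
For reference, a minimal sketch of that workaround (setting a multi-entry search_path through a plain Statement rather than Connection.setSchema); the method and schema names are placeholders:

// Minimal sketch, assuming "conn" is the tenant's pooled JDBC Connection.
// Connection.setSchema() accepts a single schema name only, so a multi-entry
// search_path (e.g. "customerX, public") has to be set via a plain Statement.
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

final class SearchPathHelper {
    static void setTenantSearchPath(Connection conn, String tenantSchema) throws SQLException {
        try (Statement st = conn.createStatement()) {
            // tenantSchema is a placeholder (e.g. "customerX"); validate or quote it in real code.
            st.execute("SET search_path TO " + tenantSchema + ", public");
        }
    }
}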

vlsi commented Jan 20, 2016

How would you suggest fixing this?

I've started discussion on hackers list to get idea of "the proper fix": http://www.postgresql.org/message-id/flat/CAB=Je-GtFy=Qa1Eku1mgRZ8RabUfqnUE2Z6ye2CJJZNN9B3Odg@mail.gmail.com#CAB=Je-GtFy=Qa1Eku1mgRZ8RabUfqnUE2Z6ye2CJJZNN9B3Odg@mail.gmail.com

I assume it would be too expensive to get the current search path from postgres each time you cache a prepared statement

I think we can detect SET statements and refresh the search_path value afterwards.
SET statements result in "SET" command-status responses from the database, so they can be detected.

Here's a sample trace:

01:17:09.346 (1) simple execute, handler=org.postgresql.jdbc.PgStatement$StatementResultHandler@497470ed, maxRows=0, fetchSize=0, flags=17
01:17:09.346 (1)  FE=> Parse(stmt=null,query="SET search_path TO "$user",public,schema2",oids={})
01:17:09.346 (1)  FE=> Bind(stmt=null,portal=null)
01:17:09.346 (1)  FE=> Describe(portal=null)
01:17:09.346 (1)  FE=> Execute(portal=null,limit=0)
01:17:09.346 (1)  FE=> Sync
01:17:09.347 (1)  <=BE ParseComplete [null]
01:17:09.347 (1)  <=BE BindComplete [unnamed]
01:17:09.347 (1)  <=BE NoData
01:17:09.347 (1)  <=BE CommandStatus(SET)
01:17:09.347 (1)  <=BE ReadyForQuery(I)

However, a set search_path issued from PL/pgSQL would go unnoticed. Will that impact you?
Another issue is as follows:

  1. set search_path to customer1, customer2, public;
  2. create table customer2.abc (x int);
  3. prepare WTF as select * from abc;
  4. create table customer1.abc (x text);
  5. execute WTF;

Proper dependency tracking should detect that the prepared statement is no longer valid: after step 4, abc resolves to customer1.abc rather than customer2.abc, so the result depends on the order of steps 3 and 4.

However, I don't think this case is widely seen in practice, so it can be neglected.

An alternative direction is to add a connection property that would disable server-side prepared statement caching across PreparedStatement#close() calls.
In my opinion, though, that would effectively disable caching for everyone, since Java applications almost never keep PreparedStatement objects around; everybody just closes them after use.
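
To make the SET-detection idea discussed above a bit more concrete, here is a rough, hypothetical sketch; the hook point and class are made up and do not reflect pgjdbc's actual internals:

// Hypothetical sketch of the "detect SET and refresh search_path" idea; the callback
// (onCommandStatus) is imagined and does not correspond to pgjdbc's real internals.
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

final class SearchPathTracker {
    private String searchPath = "";

    // Imagined callback: invoked whenever the backend reports a CommandStatus, e.g. "SET".
    void onCommandStatus(String commandTag, Connection conn) throws SQLException {
        if ("SET".equals(commandTag)) {
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SHOW search_path")) {
                if (rs.next()) {
                    searchPath = rs.getString(1); // would become part of the statement-cache key
                }
            }
        }
    }

    String currentSearchPath() {
        return searchPath;
    }
}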

vlsi added a commit that referenced this issue Jan 21, 2016
vlsi added this to the 9.4.1210 milestone Aug 12, 2016
vlsi self-assigned this Aug 12, 2016
vlsi closed this as completed in 17c8afb Aug 19, 2016

unix-junkie commented Mar 7, 2017

Guys, I can reliably reproduce the same issue using the latest driver, v42.0.0, server v9.5.1, and our integration tests. If a test class has a parameterized test which is executed multiple times with different parameters, every second test run fails (i.e. odd runs pass, even runs fail). If there's only a single run, the test passes just fine.

I can't share our tests, but I can help diagnose this from my side.

Previously we were using driver v9.2-1002, and everything worked just fine.

Sample stack trace:

org.postgresql.util.PSQLException: ERROR: cached plan must not change result type
	at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2412)
	at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2125)
	at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:297)
	at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:428)
	at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:354)
	at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:169)
	at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:117)

vlsi commented Mar 7, 2017

@unix-junkie, I wonder if you could enable, capture, and share pgjdbc trace logs: https://jdbc.postgresql.org/documentation/head/logging.html#configuration

For instance, add ?loggerLevel=TRACE&loggerFile=pgjdbc.log to the connection properties.

Do you use set search_path=... and/or other set ... connection properties?
Do you use DDL statements (e.g. alter column type, alter view columns) during setup/teardown?

unix-junkie commented Mar 7, 2017

@vlsi First, thanks for the quick response.

I will capture the logs and send them to you this Thursday or Friday.

Speaking of DDL -- yes, we do. Basically, it looks like prepared statements are not invalidated when the tables they refer to are changed in an incompatible way (i.e. the same table gets recreated, but its columns have different, incompatible types).

I observe the same when running parameterized prepared statements (the driver keeps using a stale cached instance instead of preparing a new one), with a slightly different error message:

ERROR: invalid integer value: "1984-11-02 02:00:00+03"

-- this one showed up when inserting a timestamp after a similar test which was inserting an integer.

Forcing prepareThreshold to 0 greatly reduces the chances that the problem occurs. Setting prepareThreshold to 0 along with increased values of preparedStatementCacheQueries and preparedStatementCacheSizeMiB (like 10000 and 30, respectively) eliminates the problem entirely.
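
For reference, a minimal sketch of how those connection properties can be passed on the JDBC URL; the host, database, and credentials are placeholders:

// Minimal sketch: placeholder host/database/credentials; the property values match the
// ones mentioned above (prepareThreshold=0 plus enlarged statement-cache limits).
import java.sql.Connection;
import java.sql.DriverManager;

public class ConnectExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/testdb"
                + "?prepareThreshold=0"
                + "&preparedStatementCacheQueries=10000"
                + "&preparedStatementCacheSizeMiB=30";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            // use the connection as usual...
        }
    }
}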

vlsi commented Mar 7, 2017

Speaking of DDL -- yes we do. Basically, it looks like prepared statements are not invalidated when tables they're related to get changed in an incompatible way

Unfortunately, the backend does not notify clients which prepared statements get invalidated by a particular DDL execution.

If it is just test code that fails, then you might have luck issuing a DEALLOCATE ALL statement after DDL changes; pgjdbc recognizes it and invalidates its statement cache.

In case your production code requires that DDL, you might want to drop a message to the PostgreSQL mailing list.

A third option is to keep adding "invalidate the statement cache on each ALTER, CREATE, etc." rules to the driver. The flip side is that the prepared statement cache improves performance a lot, so invalidating it here and there might hurt.
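
For illustration, a minimal sketch of the DEALLOCATE ALL workaround in test setup/teardown code; the table name, columns, and DDL are placeholders:

// Minimal sketch of the workaround: after incompatible DDL (here, recreating a table with
// different column types), issue DEALLOCATE ALL so pgjdbc drops its cached server-prepared
// statements instead of reusing stale plans.
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

final class TestSchemaReset {
    static void recreateTableAndResetStatementCache(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.execute("DROP TABLE IF EXISTS some_table");           // placeholder DDL
            st.execute("CREATE TABLE some_table (a text, b text)");  // placeholder DDL
            st.execute("DEALLOCATE ALL"); // pgjdbc recognizes this and invalidates its statement cache
        }
    }
}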

unix-junkie commented

@vlsi Thank you Vladimir.

I will use the workarounds you suggested.
