
Postgrest 8 fails to start if standard_conforming_strings=off #1992

Closed
dgtrapeze opened this issue Oct 27, 2021 · 20 comments · Fixed by #1995
Labels
bug · ci (Related to CI setup) · difficulty: beginner (Pure Haskell task) · idea (Needs discussion to become an enhancement, not ready for implementation)

Comments

@dgtrapeze

dgtrapeze commented Oct 27, 2021

Environment

  • PostgreSQL version: 9.6 (although tried postgresql 12 for the same outcome)
  • PostgREST version: 8.0.0
  • Operating system: Tried on RHEL8 and Centos6

Description of issue

I just downloaded postgrest 8.0.0 to try it out. We previously used postgrest 7.0.1. However, on starting up, postgrest just continually output:

27/Oct/2021:08:43:40 +1000: Attempting to connect to the database...
27/Oct/2021:08:43:40 +1000: Connection successful
27/Oct/2021:08:43:40 +1000: Listening on port 3000
27/Oct/2021:08:43:40 +1000: Config re-loaded
27/Oct/2021:08:43:40 +1000: Listening for notifications on the pgrst channel
27/Oct/2021:08:43:40 +1000: An error ocurred when loading the schema cache
27/Oct/2021:08:43:40 +1000: {"hint":null,"details":"Token \"QUERY\" is invalid.","code":"22P02","message":"invalid input syntax for type json"}
The pg logs show the query being executed as the pfkSourceColumns query.
2021-10-27 08:43:59 AEST [2076]: [13-1] db=testdb,user=web_authenticator,app=Postgrest pool connection,client=[local] LOG:  execute 4:
              with recursive
              pks_fks as (
                -- pk + fk referencing col
                select
                  conrelid as resorigtbl,
                  unnest(conkey) as resorigcol
                from pg_constraint
                where contype IN ('p', 'f')
                union
                -- fk referenced col
                select
                  confrelid,
                  unnest(confkey)
                from pg_constraint
                where contype='f'
              ),
              views as (
                select
                  c.oid       as view_id,
                  n.nspname   as view_schema,
                  c.relname   as view_name,
                  r.ev_action as view_definition
                from pg_class c
                join pg_namespace n on n.oid = c.relnamespace
                join pg_rewrite r on r.ev_class = c.oid
                where c.relkind in ('v', 'm') and n.nspname = ANY($1 || $2)
              ),
              transform_json as (
                select
                  view_id, view_schema, view_name,
                  -- the following formatting is without indentation on purpose
                  -- to allow simple diffs, with less whitespace noise
                  replace(
                    replace(
                    replace(
                    replace(
                    replace(
                    replace(
                    replace(
                    replace(
                    regexp_replace(
                    replace(
                    replace(
                    replace(
                    replace(
                    replace(
                    replace(
                    replace(
                    replace(
                    replace(
                    replace(
                      view_definition::text,
                    -- This conversion to json is heavily optimized for performance.
                    -- The general idea is to use as few regexp_replace() calls as possible.
                    -- Simple replace() is a lot faster, so we jump through some hoops
                    -- to be able to use regexp_replace() only once.
                    -- This has been tested against a huge schema with 250+ different views.
                    -- The unit tests do NOT reflect all possible inputs. Be careful when changing this!
                    -- -----------------------------------------------
                    -- pattern           | replacement         | flags
                    -- -----------------------------------------------
                    -- `,` is not part of the pg_node_tree format, but used in the regex.
                    -- This removes all `,` that might be part of column names.
                       ','               , ''
                    -- The same applies for `{` and `}`, although those are used a lot in pg_node_tree.
                    -- We remove the escaped ones, which might be part of column names again.
                    ), '\{'              , ''
                    ), '\}'              , ''
                    -- The fields we need are formatted as json manually to protect them from the regex.
                    ), ' :targetList '   , ',"targetList":'
                    ), ' :resno '        , ',"resno":'
                    ), ' :resorigtbl '   , ',"resorigtbl":'
                    ), ' :resorigcol '   , ',"resorigcol":'
                    -- Make the regex also match the node type, e.g. `{QUERY ...`, to remove it in one pass.
                    ), '{'               , '{ :'
                    -- Protect node lists, which start with `({` or `((` from the greedy regex.
                    -- The extra `{` is removed again later.
                    ), '(('              , '{(('
                    ), '({'              , '{({'
                    -- This regex removes all unused fields to avoid the need to format all of them correctly.
                    -- This leads to a smaller json result as well.
                    -- Removal stops at `,` for used fields (see above) and `}` for the end of the current node.
                    -- Nesting can't be parsed correctly with a regex, so we stop at `{` as well and
                    -- add an empty key for the followig node.
                    ), ' :[^}{,]+'       , ',"":'              , 'g'
                    -- For performance, the regex also added those empty keys when hitting a `,` or `}`.
                    -- Those are removed next.
                    ), ',"":}'           , '}'
                    ), ',"":,'           , ','
                    -- This reverses the "node list protection" from above.
                    ), '{('              , '('
                    -- Every key above has been added with a `,` so far. The first key in an object doesn't need it.
                    ), '{,'              , '{'
                    -- pg_node_tree has `()` around lists, but JSON uses `[]`
                    ), '('               , '['
                    ), ')'               , ']'
                    -- pg_node_tree has ` ` between list items, but JSON uses `,`
                    ), ' '             , ','
                    -- `<>` in pg_node_tree is the same as `null` in JSON, but due to very poor performance of json_typeof
                    -- we need to make this an empty array here to prevent json_array_elements from throwing an error
                    -- when the targetList is null.
                    ), '<>'              , '[]'
                  )::json as view_definition
                from views
              ),
              target_entries as(
                select
                  view_id, view_schema, view_name,
                  json_array_elements(view_definition->0->'targetList') as entry
                from transform_json
              ),
              results as(
                select
                  view_id, view_schema, view_name,
                  (entry->>'resno')::int as view_column,
                  (entry->>'resorigtbl')::oid as resorigtbl,
                  (entry->>'resorigcol')::int as resorigcol
                from target_entries
              ),
              recursion as(
                select r.*
                from results r
                where view_schema = ANY ($1)
                union all
                select
                  view.view_id,
                  view.view_schema,
                  view.view_name,
                  view.view_column,
                  tab.resorigtbl,
                  tab.resorigcol
                from recursion view
                join results tab on view.resorigtbl=tab.view_id and view.resorigcol=tab.view_column
              )
              select
                sch.nspname as table_schema,
                tbl.relname as table_name,
                col.attname as table_column_name,
                rec.view_schema,
                rec.view_name,
                vcol.attname as view_column_name
              from recursion rec
              join pg_class tbl on tbl.oid = rec.resorigtbl
              join pg_attribute col on col.attrelid = tbl.oid and col.attnum = rec.resorigcol
              join pg_attribute vcol on vcol.attrelid = rec.view_id and vcol.attnum = rec.view_column
              join pg_namespace sch on sch.oid = tbl.relnamespace
              join pks_fks using (resorigtbl, resorigcol)
              order by view_schema, view_name, view_column_name;
2021-10-27 08:43:59 AEST [2076]: [14-1] db=testdb,user=web_authenticator,app=Postgrest pool connection,client=[local] DETAIL:  parameters: $1 = '{webservices}', $2 = '{bacchus,public}'
2021-10-27 08:43:59 AEST [2076]: [15-1] db=testdb,user=web_authenticator,app=Postgrest pool connection,client=[local] ERROR:  invalid input syntax for type json
2021-10-27 08:43:59 AEST [2076]: [16-1] db=testdb,user=web_authenticator,app=Postgrest pool connection,client=[local] DETAIL:  Token "QUERY" is invalid.
2021-10-27 08:43:59 AEST [2076]: [17-1] db=testdb,user=web_authenticator,app=Postgrest pool connection,client=[local] CONTEXT:  JSON data, line 1: [QUERY...

Our application started life on PostgreSQL 7 and has been upgraded along the way. However, because it predates standard_conforming_strings, we need to run with standard_conforming_strings=off, as lots of our queries assume the old escape syntax. This is set in postgresql.conf.

The above query assumes standard_conforming_strings=on, and since we have it off, the query does not work as expected: the resulting string that is cast to json is not correctly formatted.
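
For illustration (a minimal sketch based on PostgreSQL's documented escaping rules, not taken from the failing query itself), the plain '\{' literal used by the query parses differently under each setting:

set standard_conforming_strings = on;
select '\{';   -- yields \{  (the backslash is kept literally)

set standard_conforming_strings = off;
select '\{';   -- yields {   (the backslash is consumed as an escape; a warning
               --             may also be raised if escape_string_warning is on)

So with the setting off, the replace() patterns no longer match what they were written to match.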

To get postgrest to work again, I've had to do

alter role web_authenticator set standard_conforming_strings = on;

where web_authenticator is our role that postgrest uses to connect.

I suspect postgrest now assumes standard conforming strings in other places as well (e.g. I saw that escaping in query parameters was added in v8?).

I'm not yet fully sure of the impacts to our application having set this for the postgrest authenticator role. We are usually pretty good at using E'' strings now, but you never know what developers do....

I would assume the fix is for postgrest to use escape string syntax (E'' strings) to insulate it from these sorts of database settings.

It could automatically set standard_conforming_strings=on for its session (like it does for search_path), or you could document the alter role authenticator workaround I had to use, but I suspect those options have side-effects.

(Expected behavior vs actual behavior)
I expected postgrest 8 to pretty much work the same as postgrest 7.

(Steps to reproduce: Include a minimal SQL definition plus how you make the request to PostgREST and the response body)

alter system set standard_conforming_strings = off;

then start postgrest
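
A slightly fuller reproduction sketch (my assumption about the setup, using a superuser session; pg_reload_conf() is needed because ALTER SYSTEM only writes postgresql.auto.conf):

alter system set standard_conforming_strings = off;
select pg_reload_conf();
show standard_conforming_strings;  -- new sessions should now report off

then start postgrest and watch the schema cache fail to load with the error above.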

@wolfgangwalther
Member

I reformatted your post a little bit to make it easier to read.

A few thoughts for now:

The pg logs show the query being executed as the pfkSourceColumns query.

Unrelated to this issue, but just looking at the query output, I'm wondering whether we could avoid sending all the comments in that SQL query to the database. If we could strip comments at compile time, that would be cool.

To get postgrest to work again, I've had to do

alter role web_authenticator set standard_conforming_strings = on;

where web_authenticator is our role that postgrest uses to connect.

This seems like a reasonable solution that we could add to the docs.

I suspect postgrest assumes standard conforming strings in other places as well now (eg escaping in query parameters was added in v8 I saw?).

Maybe we could add some CI tests down the road to run our test suite against different kinds of postgresql configurations and investigate.

It could automatically set standard_conforming_strings=on for its session (like it does for search_path) or you could add docs to do the alter role authenticator I had to do, but I suspect those options have side-effects.

Every set_config comes with a performance penalty, so I'd avoid doing that by default. What kind of side-effects would you expect from that kind of setting? I don't think your legacy applications should access the database via the authenticator role - and PostgREST will work for sure with standard_conforming_strings=on - so I don't see any possible side-effects of altering the role like that.

@wolfgangwalther added the bug, ci (Related to CI setup) and idea (Needs discussion to become an enhancement, not ready for implementation) labels on Oct 27, 2021
@dgtrapeze
Author

The side-effect I'm referring to is that everything executed via postgrest will be done with standard_conforming_strings=on.

Thus all triggers, functions, views, etc. that it executes inside its session will run with the 'wrong' setting relative to what the logic potentially expects. If it ends up executing something in our application logic with a \ in a string that is not an E'' string, it will probably not work. I'm still confirming how big an issue that is for us specifically, but it does mean my workaround is probably not a generic workaround for the issue.

The session settings of the authenticator session are inherited by all the sessions used by postgrest to execute functions/views as far as I am aware. The SET ROLE done to switch to a specific role for a specific request does not undo those session settings.
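
A quick way to see that behaviour (illustrative only; the request role name is made up):

-- as the authenticator, with the role-level setting applied
select current_setting('standard_conforming_strings');  -- on
set role some_request_role;
select current_setting('standard_conforming_strings');  -- still on: SET ROLE keeps the session settings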

Every set_config comes with a performance penalty

Yes, but it is only being done once for each connection in the pool - not for every request. In any case, PostgreSQL is internally doing a set_config based on the role setting I did anyway. So whether the role sets the setting or postgrest does won't be terribly different, bar the round-trip time to do the statement from postgrest. But as I fear it is not a good solution anyway, it is probably a bit moot.

Anyway, putting everything postgrest does into standard_conforming_strings=on mode on an environment that expects it to be off is what I'm concerned about with this workaround.

@dgtrapeze
Author

For the specific query that it is currently failing on, changing the 2 lines

                    ), '\{'              , ''
                    ), '\}'              , ''

to be

                    ), E'\\{'              , ''
                    ), E'\\}'              , ''

would fix that query regardless of the standard_conforming_strings setting. But I'm not sure how many other places in postgrest would also need changing to fully work regardless of this setting.
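
A quick sanity check that the E'' form is independent of the setting (run in psql):

set standard_conforming_strings = off;
select E'\\{';   -- \{
set standard_conforming_strings = on;
select E'\\{';   -- still \{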

@wolfgangwalther
Member

The side-effect I'm referring to is that everything executed via postgrest will be done with standard_conforming_strings=on.

Thus all triggers, functions, views, etc it executes inside its session will be done with the 'wrong' setting relative to how the logic is potentially expecting. If it ends up executing something in our application logic with a \ in a string that is not an E'' string, it will probably not work. I'm still confirming how big an issue that is for us specifically, but it does mean my workaround is probably not a generic workaround for the issue.

Thanks for the explanation. I haven't really looked at what standard_conforming_strings actually does, tbh. But this explanation will certainly allow us to dig deeper and see what kind of side-effects could be expected.

The session settings of the authenticator session are inherited by all the sessions used by postgrest to execute functions/views as far as I am aware. The SET ROLE done to switch to a specific role for a specific request does not undo those session settings.

Correct, SET ROLE does not inherit that role's settings.

Yes, but it is only being done once for each connection in the pool - not for every request. In any case, internally postgresql is doing a set_config based on the role setting I did anyway. So whether the role sets the setting or postgrest does won't be terribly different bar the round trip time to do the statement from postgrest. But as I fear it is not a good solution anyway, it is probably a bit mute.

We do set the search path on every request - and would do so with this setting, too. The role-based setting is done server-side only and should not impact performance in the same way. I don't think we have machinery to do session-based set_config right now, but maybe I'm missing something there, too.
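
For reference, a per-request setting would conceptually be another transaction-local set_config (illustrative; not the exact statement PostgREST sends):

select set_config('standard_conforming_strings', 'on', true);  -- true = local to the current transaction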

@steve-chavez
Member

Unrelated to this issue, but just looking at the query output, I'm thinking whether we could avoid to send all the comments in that SQL query to the database. If we could strip comments at compile time, that would be cool.

We could use qc from Text.InterpolatedString.Perl6 for that. It's already being used here, btw.

@steve-chavez
Member

For the specific query that it is currently failing on, changing the 2 lines
But I'm not sure how many other places in postgrest would also need changing to fully work regardless of this setting.

Pretty sure that's the only place where we use backslashes for a query. So seems like a simple fix.

@wolfgangwalther
Member

Unrelated to this issue, but just looking at the query output, I'm thinking whether we could avoid to send all the comments in that SQL query to the database. If we could strip comments at compile time, that would be cool.

We could use qc from Text.InterpolatedString.Perl6 (qc) for that. It's already being used here, btw.

How would that help with stripping SQL comments?

@wolfgangwalther added the difficulty: beginner (Pure Haskell task) label on Oct 27, 2021
@steve-chavez
Member

How would that help with stripping SQL comments?

I've noted that QuasiQuoters strip comments, can't find a direct reference to that behavior though.

@wolfgangwalther
Member

How would that help with stripping SQL comments?

I've noted that QuasiQuoters strip comments, can't find a direct reference to that behavior though.

Tried that - didn't work. Did so on one of the other queries in DbStructure.hs, because the query in question here is a pain to put into qc|, with all the { that would need to be escaped somehow.

@wolfgangwalther
Member

For the specific query that it is currently failing on, changing the 2 lines
But I'm not sure how many other places in postgrest would also need changing to fully work regardless of this setting.

Pretty sure that's the only place where we use backslashes for a query. So seems like a simple fix.

Confirmed, see PR.

@steve-chavez
Member

steve-chavez commented Oct 27, 2021

I've noted that QuasiQuoters strip comments, can't find a direct reference to that behavior though.
Tried that - didn't work.

Ah, my bad there. I remember at one point we had a quasiquoted SQL query and noticed that its comments didn't appear in the db logs. Perhaps it was a different library or had a workaround.

Edit: I think it was this one (old version):

array_to_string(enum_info.vals, ',') AS enum
FROM (
/*
-- CTE based on pg_catalog to get PRIMARY/FOREIGN key and UNIQUE columns outside api schema
*/
WITH key_columns AS (

@dgtrapeze
Author

It looks like the fix for #1938 assumes standard_conforming_strings=on.

Interestingly, the code for pgFmtLit in SqlFragment.hs seems to use E'' strings, but pgFmtArrayLit does not, although I don't know Haskell, so I could be reading it incorrectly.

@wolfgangwalther
Member

It looks like the fix for #1938 assumes standard_conforming_strings=on.

Interestingly, the code for pgFmtLit in SqlFragment.hs seems to use E'' strings, but pgFmtArrayLit does not although I don't know Haskell so could be reading it incorrectly.

I don't think pgFmtArrayLit is a problem here, because the escaped string will be passed as a value in a parametrized query - not as a string as part of the query itself. Therefore standard_conforming_strings=off doesn't affect this part.

pgFmtLit seems to deal with that correctly - although looking at this brings up a different problem: we are still depending on pgFmtLit in the fallback case for the is. operator. That means that this part of the query is not parametrized and, in theory, still open to SQL injection attacks. @steve-chavez should we just disallow values other than NULL, FALSE and TRUE for is. altogether, instead of falling back to passing the value as ::unknown?

@rinshadka

I have tried the v8.0.0.20211102 and v9.0.0-a1 pre-releases, but faced this same error, although it's mentioned that v8.0.0.20211102 has the fix for this particular issue.

Please find the logs from starting the service below:

[screenshot of the startup logs: 2021-11-12 14_57_44-Window]

I also experimentally changed the standard_conforming_strings PostgreSQL setting for the PostgREST database user, but that failed as well.

alter role postgrest_authenticator set standard_conforming_strings = on;

Whenever I revert to v7.0.1, everything works fine.
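
In case it helps, one way to check whether the role-level setting is actually stored and picked up (role name copied from above; just a sanity check):

select rolname, rolconfig from pg_roles where rolname = 'postgrest_authenticator';
-- and in a new session opened with that role:
show standard_conforming_strings;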

Thanks.

@steve-chavez
Member

@rinshadka Can you confirm that error happens when running the below query?

(Replace $1 || $2 and $1 with ARRAY['your_schema'])

pfkSourceColumns query
with recursive
pks_fks as (
  -- pk + fk referencing col
  select
    conrelid as resorigtbl,
    unnest(conkey) as resorigcol
  from pg_constraint
  where contype IN ('p', 'f')
  union
  -- fk referenced col
  select
    confrelid,
    unnest(confkey)
  from pg_constraint
  where contype='f'
),
views as (
  select
    c.oid       as view_id,
    n.nspname   as view_schema,
    c.relname   as view_name,
    r.ev_action as view_definition
  from pg_class c
  join pg_namespace n on n.oid = c.relnamespace
  join pg_rewrite r on r.ev_class = c.oid
  where c.relkind in ('v', 'm') and n.nspname = ANY($1 || $2)
),
transform_json as (
  select
    view_id, view_schema, view_name,
    -- the following formatting is without indentation on purpose
    -- to allow simple diffs, with less whitespace noise
    replace(
      replace(
      replace(
      replace(
      replace(
      replace(
      replace(
      replace(
      regexp_replace(
      replace(
      replace(
      replace(
      replace(
      replace(
      replace(
      replace(
      replace(
      replace(
      replace(
        view_definition::text,
      -- This conversion to json is heavily optimized for performance.
      -- The general idea is to use as few regexp_replace() calls as possible.
      -- Simple replace() is a lot faster, so we jump through some hoops
      -- to be able to use regexp_replace() only once.
      -- This has been tested against a huge schema with 250+ different views.
      -- The unit tests do NOT reflect all possible inputs. Be careful when changing this!
      -- -----------------------------------------------
      -- pattern           | replacement         | flags
      -- -----------------------------------------------
      -- `,` is not part of the pg_node_tree format, but used in the regex.
      -- This removes all `,` that might be part of column names.
         ','               , ''
      -- The same applies for `{` and `}`, although those are used a lot in pg_node_tree.
      -- We remove the escaped ones, which might be part of column names again.
      ), E'\\{'            , ''
      ), E'\\}'            , ''
      -- The fields we need are formatted as json manually to protect them from the regex.
      ), ' :targetList '   , ',"targetList":'
      ), ' :resno '        , ',"resno":'
      ), ' :resorigtbl '   , ',"resorigtbl":'
      ), ' :resorigcol '   , ',"resorigcol":'
      -- Make the regex also match the node type, e.g. `{QUERY ...`, to remove it in one pass.
      ), '{'               , '{ :'
      -- Protect node lists, which start with `({` or `((` from the greedy regex.
      -- The extra `{` is removed again later.
      ), '(('              , '{(('
      ), '({'              , '{({'
      -- This regex removes all unused fields to avoid the need to format all of them correctly.
      -- This leads to a smaller json result as well.
      -- Removal stops at `,` for used fields (see above) and `}` for the end of the current node.
      -- Nesting can't be parsed correctly with a regex, so we stop at `{` as well and
      -- add an empty key for the followig node.
      ), ' :[^}{,]+'       , ',"":'              , 'g'
      -- For performance, the regex also added those empty keys when hitting a `,` or `}`.
      -- Those are removed next.
      ), ',"":}'           , '}'
      ), ',"":,'           , ','
      -- This reverses the "node list protection" from above.
      ), '{('              , '('
      -- Every key above has been added with a `,` so far. The first key in an object doesn't need it.
      ), '{,'              , '{'
      -- pg_node_tree has `()` around lists, but JSON uses `[]`
      ), '('               , '['
      ), ')'               , ']'
      -- pg_node_tree has ` ` between list items, but JSON uses `,`
      ), ' '             , ','
      -- `<>` in pg_node_tree is the same as `null` in JSON, but due to very poor performance of json_typeof
      -- we need to make this an empty array here to prevent json_array_elements from throwing an error
      -- when the targetList is null.
      ), '<>'              , '[]'
    )::json as view_definition
  from views
),
target_entries as(
  select
    view_id, view_schema, view_name,
    json_array_elements(view_definition->0->'targetList') as entry
  from transform_json
),
results as(
  select
    view_id, view_schema, view_name,
    (entry->>'resno')::int as view_column,
    (entry->>'resorigtbl')::oid as resorigtbl,
    (entry->>'resorigcol')::int as resorigcol
  from target_entries
),
recursion as(
  select r.*
  from results r
  where view_schema = ANY ($1)
  union all
  select
    view.view_id,
    view.view_schema,
    view.view_name,
    view.view_column,
    tab.resorigtbl,
    tab.resorigcol
  from recursion view
  join results tab on view.resorigtbl=tab.view_id and view.resorigcol=tab.view_column
)
select
  sch.nspname as table_schema,
  tbl.relname as table_name,
  col.attname as table_column_name,
  rec.view_schema,
  rec.view_name,
  vcol.attname as view_column_name
from recursion rec
join pg_class tbl on tbl.oid = rec.resorigtbl
join pg_attribute col on col.attrelid = tbl.oid and col.attnum = rec.resorigcol
join pg_attribute vcol on vcol.attrelid = rec.view_id and vcol.attnum = rec.view_column
join pg_namespace sch on sch.oid = tbl.relnamespace
join pks_fks using (resorigtbl, resorigcol)
order by view_schema, view_name, view_column_name;

@steve-chavez reopened this on Nov 15, 2021
@rinshadka

@steve-chavez, thanks. I have 10 schemas in the database and got this error while running the above query with one particular schema. I then temporarily removed only that schema from PostgREST and was able to start the service properly.

What may be the issue with that schema?

@wolfgangwalther
Member

wolfgangwalther commented Nov 16, 2021

@steve-chavez, thanks. I have 10 schemas in the database and got this error while running the above query with one particular schema. I then temporarily removed only that schema from PostgREST and was able to start the service properly.

What may be the issue with that schema?

Ah, that's very interesting. Are you sure you are running with standard_conforming_strings=off? And does the error go away when running with standard_conforming_strings=on? Or did you just add to this issue because the error message is similar?

If it turns out to be unrelated to standard_conforming_strings, we might have our first case of "our json parsing breaks with some special-case view definition". This is something I expected to happen at some point in #1632 (comment). In that case, it would be great if you could reduce the problematic schema to the smallest reproducible example (it's one of the view definitions for sure) - or, if possible, share the schema in question with us privately.
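
One way to narrow it down (a sketch; 'suspect_schema' is a placeholder): list the raw pg_node_tree of every view in the schema, the same way the views CTE above does, and then retry the transform on one view at a time.

select c.relname, r.ev_action
from pg_class c
join pg_namespace n on n.oid = c.relnamespace
join pg_rewrite r on r.ev_class = c.oid
where c.relkind in ('v', 'm') and n.nspname = 'suspect_schema';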

@wolfgangwalther
Member

Ah, that's very interesting. Are you sure you are running standard_conforming_strings=off? And does the error go away when running with standard_conforming_strings=on? Or did you just add to this issue, because the error message is similar?

Ah, I see you already answered that above:

Also experimentally changed the standard_conforming_strings postgreSQL configuration for postgREST database user, but failed.

Let's track this in a new issue, as it's very likely to be independent of this.

@steve-chavez
Member

Interestingly, the code for pgFmtLit in SqlFragment.hs seems to use E'' strings, but pgFmtArrayLit does not although I don't know Haskell so could be reading it incorrectly.

I don't think pgFmtArrayLit is a problem here, because the escaped string will be passed as a value in a parametrized query - not as a string as part of the query itself. Therefore standard_conforming_strings=off doesn't affect this part.

@dgtrapeze Just wanted to reaffirm this in case there's any doubt. The pgFmtArrayLit function doesn't produce a pg string constant but a value that will be parametrized, so there's no need to use the C-style escape.

Also, when testing with standard_conforming_strings = off on #1995, the in filter didn't produce an error.

Btw, I'm removing the pgFmtLit function in #2027 to make sure we don't mess with escaping string constants anymore.

@dgtrapeze
Author

Great. I'm using v8.0.0.20211102 without issue (although we got caught out by the change in #1849, which changed the response structure from array to single object, when we updated from v7 to v8), but I haven't noticed any other issues due to standard_conforming_strings.
