Commits on May 24, 2015
  1. @bmomjian

    pgindent run for 9.5

    bmomjian authored
Commits on May 19, 2015
  1. @anarazel

    Refactor ON CONFLICT index inference parse tree representation.

    anarazel authored
    Defer the lookup of the opfamily and input type of a user-specified opclass
    until the optimizer selects among available unique indexes; and store
    the opclass in the parse-analyzed tree instead.  The primary reason for
    doing this is that for rule deparsing it's easier to use the opclass
    than the previous representation.
    While at it, also rename a variable in the inference code to better fit
    its purpose.
    This is separate from the actual fixes for deparsing, to make review easier.
Commits on May 16, 2015
  1. @anarazel

    Implement GROUPING SETS, CUBE and ROLLUP.

    anarazel authored
    This SQL-standard functionality allows aggregating data by several
    GROUP BY clauses at once. Each grouping set returns rows in which the
    columns grouped by only in other sets are set to NULL.
    This could previously be achieved by doing each grouping as a separate
    query, conjoined by UNION ALLs. Besides being considerably more concise,
    grouping sets will in many cases be faster, requiring only one scan over
    the underlying data.
    The current implementation of grouping sets only supports using sorting
    for input. Individual sets that share a sort order are computed in one
    pass. If there are sets that don't share a sort order, additional sort &
    aggregation steps are performed. These additional passes are sourced by
    the previous sort step; thus avoiding repeated scans of the source data.
    The code is structured in a way that adding support for purely using
    hash aggregation or a mix of hashing and sorting is possible. Sorting
    was chosen to be supported first, as it is the most generic method of
    implementation.
    Instead of, as in earlier versions of the patch, representing the
    chain of sort and aggregation steps as full blown planner and executor
    nodes, all but the first sort are performed inside the aggregation node
    itself. This avoids the need to do some unusual gymnastics to handle
    having to return aggregated and non-aggregated tuples from underlying
    nodes, as well as having to shut down underlying nodes early to limit
    memory usage.  The optimizer still builds Sort/Agg nodes to describe each
    phase, but they're not part of the plan tree; instead they're additional
    data for the aggregation node. They're a convenient and preexisting way
    to describe aggregation and sorting.  The first (and possibly only) sort
    step is still performed as a separate execution step. That retains
    similarity with existing group by plans, makes rescans fairly simple,
    avoids very deep plans (leading to slow explains) and easily allows
    avoiding the sorting step if the underlying data is sorted by other means.
    A somewhat ugly side of this patch is having to deal with a grammar
    ambiguity between the new CUBE keyword and the cube extension/functions
    named cube (and rollup). To avoid breaking existing deployments of the
    cube extension it has not been renamed, neither has cube been made a
    reserved keyword. Instead precedence hacking is used to make GROUP BY
    cube(..) refer to the CUBE grouping sets feature, and not the function
    cube(). To actually group by a function cube(), unlikely as that might
    be, the function name has to be quoted.
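    As a sketch of the feature (the sales table and its columns are
    hypothetical, not from the commit):

```sql
-- Each grouping set yields rows in which the other sets' grouping
-- columns are NULL; previously this required a UNION ALL of queries.
SELECT region, product, sum(amount)
FROM sales
GROUP BY GROUPING SETS ((region), (product), ());

-- CUBE and ROLLUP are shorthand: CUBE (a, b) expands to all subsets,
-- ROLLUP (a, b) to the prefixes (a, b), (a), ().
SELECT region, product, sum(amount)
FROM sales
GROUP BY CUBE (region, product);
```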
    Needs a catversion bump because stored rules may change.
    Author: Andrew Gierth and Atri Sharma, with contributions from Andres Freund
    Reviewed-By: Andres Freund, Noah Misch, Tom Lane, Svenne Krap, Tomas
        Vondra, Erik Rijkers, Marti Raudsepp, Pavel Stehule
Commits on May 8, 2015
  1. @anarazel

    Add support for INSERT ... ON CONFLICT DO NOTHING/UPDATE.

    anarazel authored
    The newly added ON CONFLICT clause allows specifying an alternative to
    raising a unique or exclusion constraint violation error when inserting.
    ON CONFLICT refers to constraints that can either be specified using an
    inference clause (by specifying the columns of a unique constraint) or
    by naming a unique or exclusion constraint.  DO NOTHING avoids the
    constraint violation, without touching the pre-existing row.  DO UPDATE
    SET ... [WHERE ...] updates the pre-existing tuple, and has access to
    both the tuple proposed for insertion and the existing tuple; the
    optional WHERE clause can be used to prevent an update from being
    executed.  The UPDATE SET and WHERE clauses have access to the tuple
    proposed for insertion using the "magic" EXCLUDED alias, and to the
    pre-existing tuple using the table name or its alias.
    This feature is often referred to as upsert.
    This is implemented using a new infrastructure called "speculative
    insertion". It is an optimistic variant of regular insertion that first
    does a pre-check for existing tuples and then attempts an insert.  If a
    violating tuple was inserted concurrently, the speculatively inserted
    tuple is deleted and a new attempt is made.  If the pre-check finds a
    matching tuple the alternative DO NOTHING or DO UPDATE action is taken.
    If the insertion succeeds without detecting a conflict, the tuple is
    deemed inserted.
    To handle the possible ambiguity between the excluded alias and a table
    named excluded, and for convenience with long relation names, INSERT
    INTO now can alias its target table.
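    A minimal sketch of both forms, assuming a hypothetical counters table
    with a unique constraint on name:

```sql
-- DO UPDATE: EXCLUDED refers to the row proposed for insertion,
-- the table name to the pre-existing row.
INSERT INTO counters (name, hits) VALUES ('home', 1)
ON CONFLICT (name) DO UPDATE
SET hits = counters.hits + EXCLUDED.hits;

-- DO NOTHING: skip the row, leaving the pre-existing one untouched.
INSERT INTO counters (name, hits) VALUES ('home', 1)
ON CONFLICT (name) DO NOTHING;
```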
    Bumps catversion as stored rules change.
    Author: Peter Geoghegan, with significant contributions from Heikki
        Linnakangas and Andres Freund. Testing infrastructure by Jeff Janes.
    Reviewed-By: Heikki Linnakangas, Andres Freund, Robert Haas, Simon Riggs,
        Dean Rasheed, Stephen Frost and many others.
Commits on Mar 28, 2015
  1. Better fix for misuse of Float8GetDatumFast().

    Tom Lane authored
    We can use that macro as long as we put the value into a local variable.
    Commit 735cd61 was not wrong on its own terms, but I think this way
    looks nicer, and it should save a few cycles on 32-bit machines.
  2. @adunstan

    Use standard library sqrt function in pg_stat_statements

    adunstan authored
    The stddev calculation included a faster but unportable sqrt function.
    This is not worth the extra effort, and won't work everywhere. If the
    standard library function is good enough for the SQL function it
    should be good enough here too.
Commits on Mar 27, 2015
  1. @adunstan

    Fix portability issues with stddev in pg_stat_statements

    adunstan authored
    Stddev is calculated on the fly, and the code in commit 717f709 was
    using Float8GetDatumFast() inappropriately to convert the result to a
    Datum. Mea culpa. It now uses Float8GetDatum().
  2. @adunstan

    Add stats for min, max, mean, stddev times to pg_stat_statements.

    adunstan authored
    The new fields are min_time, max_time, mean_time and stddev_time.
    Based on an original patch from Mitsumasa KONDO, modified by me. Reviewed by Petr Jelínek.
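    The new columns can be queried directly from the view, for example to
    find the slowest statements on average (a sketch; requires the
    pg_stat_statements extension to be installed and loaded):

```sql
SELECT query, calls, min_time, max_time, mean_time, stddev_time
FROM pg_stat_statements
ORDER BY mean_time DESC
LIMIT 10;
```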
Commits on Jan 22, 2015
  1. Prevent duplicate escape-string warnings when using pg_stat_statements.

    Tom Lane authored
    contrib/pg_stat_statements will sometimes run the core lexer a second time
    on submitted statements.  Formerly, if you had standard_conforming_strings
    turned off, this led to sometimes getting two copies of any warnings
    enabled by escape_string_warning.  While this is probably no longer a big
    deal in the field, it's a pain for regression testing.
    To fix, change the lexer so it doesn't consult the escape_string_warning
    GUC variable directly, but looks at a copy in the core_yy_extra_type state
    struct.  Then, pg_stat_statements can change that copy to disable warnings
    while it's redoing the lexing.
    It seemed like a good idea to make this happen for all three of the GUCs
    consulted by the lexer, not just escape_string_warning.  There's not an
    immediate use-case for callers to adjust the other two AFAIK, but making
    it possible is easy enough and seems like good future-proofing.
    Arguably this is a bug fix, but there doesn't seem to be enough interest to
    justify a back-patch.  We'd not be able to back-patch exactly as-is anyway,
    for fear of breaking ABI compatibility of the struct.  (We could perhaps
    back-patch the addition of only escape_string_warning by adding it at the
    end of the struct, where there's currently alignment padding space.)
Commits on Jan 6, 2015
  1. @bmomjian

    Update copyright for 2015

    bmomjian authored
    Backpatch certain files through 9.0
Commits on Aug 25, 2014
  1. @anarazel

    Fix typos in some error messages thrown by extension scripts when fed to psql.

    anarazel authored
    Some of the many error messages introduced in 458857c missed 'FROM
    unpackaged'. Also e016b72 and 45ffeb7 forgot to quote extension
    version numbers.
    Backpatch to 9.1, just like 458857c which introduced the messages. Do
    so because the error messages thrown when the wrong command is copy &
    pasted aren't easy to understand.
  2. @hlinnaka

    Don't track DEALLOCATE in pg_stat_statements.

    hlinnaka authored
    We also don't track PREPARE, nor do we track planning time in general, so
    let's ignore DEALLOCATE as well for consistency.
    Backpatch to 9.4, but not further than that. Although it seems unlikely that
    anyone is relying on the current behavior, this is a behavioral change.
    Fabien Coelho
Commits on Jul 14, 2014
  1. @nmisch

    Add file version information to most installed Windows binaries.

    nmisch authored
    Prominent binaries already had this metadata.  A handful of minor
    binaries, such as pg_regress.exe, still lack it; efforts to eliminate
    such exceptions are welcome.
    Michael Paquier, reviewed by MauMau.
Commits on Jul 10, 2014
  1. @bmomjian
Commits on Jun 18, 2014
  1. Implement UPDATE tab SET (col1,col2,...) = (SELECT ...), ...

    Tom Lane authored
    This SQL-standard feature allows a sub-SELECT yielding multiple columns
    (but only one row) to be used to compute the new values of several columns
    to be updated.  While the same results can be had with an independent
    sub-SELECT per column, such a workaround can require a great deal of
    duplicated computation.
    The standard actually says that the source for a multi-column assignment
    could be any row-valued expression.  The implementation used here is
    tightly tied to our existing sub-SELECT support and can't handle other
    cases; the Bison grammar would have some issues with them too.  However,
    I don't feel too bad about this since other cases can be converted into
    sub-SELECTs.  For instance, "SET (a,b,c) = row_valued_function(x)" could
    be written "SET (a,b,c) = (SELECT * FROM row_valued_function(x))".
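    The workaround described above can be illustrated with a sketch (table
    and column names are hypothetical):

```sql
-- One correlated sub-SELECT computes several new column values at once;
-- it must return exactly one row per updated row.
UPDATE employees e
SET (salary, grade) =
    (SELECT s.salary, s.grade
     FROM salary_scale s
     WHERE s.level = e.level)
WHERE e.department = 'engineering';
```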
Commits on Jun 4, 2014
  1. Save pg_stat_statements statistics file into $PGDATA/pg_stat directory at shutdown.

    Fujii Masao authored
    187492b changed pgstat.c so that
    the stats files were saved into $PGDATA/pg_stat directory when the server
    was shut down. But it accidentally forgot to change the location of
    pg_stat_statements permanent stats file. This commit fixes pg_stat_statements
    so that its stats file is also saved into $PGDATA/pg_stat at shutdown.
    Since this fix changes the file layout, we don't back-patch it to 9.3
    where this oversight was introduced.
Commits on May 27, 2014
  1. Avoid unportable usage of sscanf(UINT64_FORMAT).

    Tom Lane authored
    On Mingw, it seems that scanf() doesn't necessarily accept the same format
    codes that printf() does, and in particular it may fail to recognize %llu
    even though printf() does.  Since configure only probes printf() behavior
    while setting up the INT64_FORMAT macros, this means it's unsafe to use
    those macros with scanf().  We had only one instance of such a coding
    pattern, in contrib/pg_stat_statements, so change that code to avoid
    the problem.
    Per buildfarm warnings.  Back-patch to 9.0 where the troublesome code
    was introduced.
    Michael Paquier
Commits on May 6, 2014
  1. @bmomjian

    pgindent run for 9.4

    bmomjian authored
    This includes removing tabs after periods in C comments, which was
    applied to back branches, so this change should not affect backpatching.
Commits on Apr 21, 2014
  1. pg_stat_statements forgot to let previous occupant of hook get control too.

    Tom Lane authored
    pgss_post_parse_analyze() neglected to pass the call on to any earlier
    occupant of the post_parse_analyze_hook.  There are no other users of that
    hook in contrib/, and most likely none in the wild either, so this is
    probably just a latent bug.  But it's a bug nonetheless, so back-patch
    to 9.2 where this code was introduced.
Commits on Apr 18, 2014
  1. @petere

    Create function prototype as part of PG_FUNCTION_INFO_V1 macro

    petere authored
    Because of gcc -Wmissing-prototypes, all functions in dynamically
    loadable modules must have a separate prototype declaration.  This is
    meant to detect global functions that are not declared in header files,
    but in cases where the function is called via dfmgr, this is redundant.
    Besides filling up space with boilerplate, this is a frequent source of
    compiler warnings in extension modules.
    We can fix that by creating the function prototype as part of the
    PG_FUNCTION_INFO_V1 macro, which such modules have to use anyway.  That
    makes the code of modules cleaner, because there is one less place where
    the entry points have to be listed, and creates an additional check that
    functions have the right prototype.
    Remove now redundant prototypes from contrib and other modules.
Commits on Feb 23, 2014
  1. Prefer pg_any_to_server/pg_server_to_any over pg_do_encoding_conversion.

    Tom Lane authored
    A large majority of the callers of pg_do_encoding_conversion were
    specifying the database encoding as either source or target of the
    conversion, meaning that we can use the less general functions
    pg_any_to_server/pg_server_to_any instead.
    The main advantage of using the latter functions is that they can make use
    of a cached conversion-function lookup in the common case that the other
    encoding is the current client_encoding.  It's notationally cleaner too in
    most cases, not least because of the historical artifact that the latter
    functions use "char *" rather than "unsigned char *" in their APIs.
    Note that pg_any_to_server will apply an encoding verification step in
    some cases where pg_do_encoding_conversion would have just done nothing.
    This seems to me to be a good idea at most of these call sites, though
    it partially negates the performance benefit.
    Per discussion of bug #9210.
Commits on Feb 3, 2014
  1. Make pg_basebackup skip temporary statistics files.

    Fujii Masao authored
    The temporary statistics files don't need to be included in the backup
    because they are always reset at the beginning of the archive recovery.
    This patch changes pg_basebackup so that it skips all files located in
    $PGDATA/pg_stat_tmp or the directory specified by stats_temp_directory.
Commits on Jan 28, 2014
  1. Update comment.

    Tom Lane authored
    generate_normalized_query() no longer needs to truncate text, but this
    one comment didn't get the memo.  Per Peter Geoghegan.
Commits on Jan 27, 2014
  1. Keep pg_stat_statements' query texts in a file, not in shared memory.

    Tom Lane authored
    This change allows us to eliminate the previous limit on stored query
    length, and it makes the shared-memory hash table very much smaller,
    allowing more statements to be tracked.  (The default value of
    pg_stat_statements.max is therefore increased from 1000 to 5000.)
    In typical scenarios, the hash table can be large enough to hold all the
    statements commonly issued by an application, so that there is little
    "churn" in the set of tracked statements, and thus little need to do I/O
    to the file.
    To further reduce the need for I/O to the query-texts file, add a way
    to retrieve all the columns of the pg_stat_statements view except for
    the query text column.  This is probably not of much interest for human
    use but it could be exploited by programs, which will prefer using the
    queryid anyway.
    Ordinarily, we'd need to bump the extension version number for the latter
    change.  But since we already advanced pg_stat_statements' version number
    from 1.1 to 1.2 in the 9.4 development cycle, it seems all right to just
    redefine what 1.2 means.
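    A sketch of the latter capability, assuming the 1.2 extension's function
    signature with its showtext parameter:

```sql
-- Fetch statistics without the query text; showtext := false leaves
-- the query column NULL and avoids reading the external texts file.
SELECT queryid, calls, total_time
FROM pg_stat_statements(showtext := false);
```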
    Peter Geoghegan, reviewed by Pavel Stehule
  2. Relax the requirement that all lwlocks be stored in a single array.

    Robert Haas authored
    This makes it possible to store lwlocks as part of some other data
    structure in the main shared memory segment, or in a dynamic shared
    memory segment.  There is still a main LWLock array and this patch does
    not move anything out of it, but it provides necessary infrastructure
    for doing that in the future.
    This change is likely to increase the size of LWLockPadded on some
    platforms, especially 32-bit platforms where it was previously only
    16 bytes.
    Patch by me.  Review by Andres Freund and KaiGai Kohei.
Commits on Jan 20, 2014
  1. Remove pg_stat_statements--1.1.sql.

    Tom Lane authored
    Commit 9148440 should have removed this
    file, not just reduced it to zero size.
Commits on Jan 7, 2014
  1. @bmomjian

    Update copyright for 2014

    bmomjian authored
    Update all files in head, and files COPYRIGHT and legal.sgml in all back
    branches.
Commits on Dec 23, 2013
  1. Support ordered-set (WITHIN GROUP) aggregates.

    Tom Lane authored
    This patch introduces generic support for ordered-set and hypothetical-set
    aggregate functions, as well as implementations of the instances defined in
    SQL:2008 (percentile_cont(), percentile_disc(), rank(), dense_rank(),
    percent_rank(), cume_dist()).  We also added mode() though it is not in the
    spec, as well as versions of percentile_cont() and percentile_disc() that
    can compute multiple percentile values in one pass over the data.
    Unlike the original submission, this patch puts full control of the sorting
    process in the hands of the aggregate's support functions.  To allow the
    support functions to find out how they're supposed to sort, a new API
    function AggGetAggref() is added to nodeAgg.c.  This allows retrieval of
    the aggregate call's Aggref node, which may have other uses beyond the
    immediate need.  There is also support for ordered-set aggregates to
    install cleanup callback functions, so that they can be sure that
    infrastructure such as tuplesort objects gets cleaned up.
    In passing, make some fixes in the recently-added support for variadic
    aggregates, and make some editorial adjustments in the recent FILTER
    additions for aggregates.  Also, simplify use of IsBinaryCoercible() by
    allowing it to succeed whenever the target type is ANY or ANYELEMENT.
    It was inconsistent that it dealt with other polymorphic target types
    but not these.
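    A brief sketch of the new aggregates over a hypothetical requests table:

```sql
-- Ordered-set aggregates: the sort happens inside the aggregate, and
-- the array form computes several percentiles in one pass.
SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY response_ms) AS median,
       percentile_disc(ARRAY[0.5, 0.9, 0.99])
           WITHIN GROUP (ORDER BY response_ms) AS p50_p90_p99
FROM requests;
```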
    Atri Sharma and Andrew Gierth; reviewed by Pavel Stehule and Vik Fearing,
    and rather heavily editorialized upon by Tom Lane
Commits on Dec 8, 2013
  1. @mhagander

    Fix pg_stat_statements build on 32-bit systems

    mhagander authored
    Peter Geoghegan
Commits on Dec 7, 2013
  1. Expose query ID in pg_stat_statements view.

    Fujii Masao authored
    The query ID is the internal hash identifier of the statement,
    and was not available in pg_stat_statements view so far.
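    With this change the identifier is visible alongside each tracked
    statement (sketch):

```sql
SELECT queryid, calls, query
FROM pg_stat_statements
ORDER BY calls DESC;
```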
    Daniel Farina, Sameer Thakur and Peter Geoghegan, reviewed by me.
Commits on Nov 22, 2013
  1. Support multi-argument UNNEST(), and TABLE() syntax for multiple functions.

    Tom Lane authored
    This patch adds the ability to write TABLE( function1(), function2(), ...)
    as a single FROM-clause entry.  The result is the concatenation of the
    first row from each function, followed by the second row from each
    function, etc; with NULLs inserted if any function produces fewer rows than
    others.  This is believed to be a much more useful behavior than what
    Postgres currently does with multiple SRFs in a SELECT list.
    This syntax also provides a reasonable way to combine use of column
    definition lists with WITH ORDINALITY: put the column definition list
    inside TABLE(), where it's clear that it doesn't control the ordinality
    column as well.
    Also implement SQL-compliant multiple-argument UNNEST(), by turning
    UNNEST(a,b,c) into TABLE(unnest(a), unnest(b), unnest(c)).
    The SQL standard specifies TABLE() with only a single function, not
    multiple functions, and it seems to require an implicit UNNEST() which is
    not what this patch does.  There may be something wrong with that reading
    of the spec, though, because if it's right then the spec's TABLE() is just
    a pointless alternative spelling of UNNEST().  After further review of
    that, we might choose to adopt a different syntax for what this patch does,
    but in any case this functionality seems clearly worthwhile.
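    A sketch of both spellings as described in this commit (note that the
    TABLE() syntax was renamed to ROWS FROM() before the 9.4 release):

```sql
-- Multi-argument UNNEST: shorter arrays are padded with NULLs.
SELECT * FROM UNNEST(ARRAY[1,2,3], ARRAY['a','b']) AS t(n, c);

-- The same result via the combined-functions FROM-clause entry
-- (spelled ROWS FROM() in released versions):
SELECT * FROM ROWS FROM (unnest(ARRAY[1,2,3]), unnest(ARRAY['a','b'])) AS t(n, c);
```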
    Andrew Gierth, reviewed by Zoltán Böszörményi and Heikki Linnakangas, and
    significantly revised by me
Commits on Jul 18, 2013
  1. Fix typo in update scripts for some contrib modules.

    Fujii Masao authored
Commits on Jul 17, 2013
  1. @nmisch

    Implement the FILTER clause for aggregate function calls.

    nmisch authored
    This is SQL-standard with a few extensions, namely support for
    subqueries and outer references in clause expressions.
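    For illustration (the events table is hypothetical):

```sql
-- FILTER restricts which input rows an aggregate sees, avoiding
-- CASE-expression workarounds.
SELECT count(*) AS total,
       count(*) FILTER (WHERE status = 'error') AS errors
FROM events;
```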
    catversion bump due to change in Aggref and WindowFunc.
    David Fetter, reviewed by Dean Rasheed.
Commits on Apr 28, 2013
  1. Editorialize a bit on new ProcessUtility() API.

    Tom Lane authored
    Choose a saner ordering of parameters (adding a new input param after
    the output params seemed a bit random), update the function's header
    comment to match reality (cmon folks, is this really that hard?),
    get rid of useless and sloppily-defined distinction between
Commits on Jan 1, 2013
  1. @bmomjian

    Update copyrights for 2013

    bmomjian authored
    Fully update git head, and update back branches in ./COPYRIGHT and
    legal.sgml files.