pgTAP 0.14
==========

pgTAP is a collection of TAP-emitting unit testing functions written in
PL/pgSQL and PL/SQL. It is based on the Test::More module distributed with
Perl 5. You could even consider it a port of sorts.

Installation
============

For the impatient, to install pgTAP into a PostgreSQL database, just do this:

    make USE_PGXS=1
    make install USE_PGXS=1
    make installcheck USE_PGXS=1

If you encounter an error such as:

    "Makefile", line 8: Need an operator

You need to use GNU make, which may well be installed on your system as
'gmake':

    gmake USE_PGXS=1
    gmake install USE_PGXS=1
    gmake installcheck USE_PGXS=1

If you encounter an error such as:

    make: pg_config: Command not found

Be sure that you have `pg_config` installed and in your path. If you used a
package management system such as RPM to install PostgreSQL, be sure that the
`-devel` package is also installed. If necessary, add the path to `pg_config`
to your `$PATH` environment variable:

    env PATH=$PATH:/path/to/pgsql/bin USE_PGXS=1 make && make test && make install

And finally, if all that fails, copy the entire distribution directory to the
`contrib/` subdirectory of the PostgreSQL source code and try it there without
the `$USE_PGXS` variable:

    make
    make install
    make installcheck

If you want to schema-qualify pgTAP (that is, install all of its functions
into their own schema), set the `$TAPSCHEMA` variable to the name of the
schema you'd like to be created, for example:

    make TAPSCHEMA=tap
    make install
    make installcheck
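
Once installed this way, the pgTAP functions will live in the schema you
named, which is not in the search path by default. You'll need to either add
the schema to your search path or schema-qualify each call. A quick sketch,
assuming you built with `TAPSCHEMA=tap`:

    -- Add the schema to the search path...
    SET search_path TO tap, public;
    SELECT plan(1);

    -- ...or qualify each call explicitly.
    SELECT tap.plan(1);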

Testing pgTAP with pgTAP
------------------------

In addition to the PostgreSQL-standard `installcheck` target, the `test`
target uses the included `pg_prove` Perl program to do its testing, which
requires TAP::Harness, included in
[Test::Harness](http://search.cpan.org/dist/Test-Harness/ "Test::Harness on
CPAN") 3.x. You'll need to make sure that you use a database with PL/pgSQL
loaded, or else the tests won't work. `pg_prove` supports a number of
environment variables that you might need to use, including all the usual
PostgreSQL client environment variables:

* `$PGDATABASE`
* `$PGHOST`
* `$PGPORT`
* `$PGUSER`

You can use it to run the test suite like so:

    make test USE_PGXS=1 PGUSER=postgres

Of course, if you're running the tests from the `contrib/` directory, you
should omit the `USE_PGXS` variable.
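
For example, to point the tests at a server on another host and a particular
database (the host and database names here are, of course, made up), you can
pass the client environment variables right on the `make` command line:

    make test USE_PGXS=1 PGHOST=db.example.com PGPORT=5433 PGDATABASE=try PGUSER=postgres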

Adding pgTAP to a Database
--------------------------

Once pgTAP has been built and tested, you can install it into a
PL/pgSQL-enabled database:

    psql -d dbname -f pgtap.sql

If you want pgTAP to be available to all new databases, install it into the
"template1" database:

    psql -d template1 -f pgtap.sql

If you want to remove pgTAP from a database, run the `uninstall_pgtap.sql`
script:

    psql -d dbname -f uninstall_pgtap.sql

Both scripts will also be installed in the `contrib` directory under the
directory output by `pg_config --sharedir`. So you can always do this:

    psql -d template1 -f `pg_config --sharedir`/contrib/pgtap.sql

But do be aware that, if you've specified a schema using `$TAPSCHEMA`, that
schema will always be created and the pgTAP functions placed in it.

pgTAP Test Scripts
==================

You can distribute `pgtap.sql` with any PostgreSQL distribution, such as a
custom data type. For such a case, if your users want to run your test suite
using PostgreSQL's standard `installcheck` make target, just be sure to set
variables to keep the tests quiet, start a transaction, load the functions in
your test script, and then roll back the transaction at the end of the script.
Here's an example:

    \set ECHO
    \set QUIET 1
    -- Turn off echo and keep things quiet.

    -- Format the output for nice TAP.
    \pset format unaligned
    \pset tuples_only true
    \pset pager

    -- Revert all changes on failure.
    \set ON_ERROR_ROLLBACK 1
    \set ON_ERROR_STOP true
    \set QUIET 1

    -- Load the TAP functions.
    BEGIN;
        \i pgtap.sql

    -- Plan the tests.
    SELECT plan(1);

    -- Run the tests.
    SELECT pass( 'My test passed, w00t!' );

    -- Finish the tests and clean up.
    SELECT * FROM finish();
    ROLLBACK;

Of course, if you already have the pgTAP functions in your testing database,
you should skip `\i pgtap.sql` at the beginning of the script.

The only other limitation is that the `pg_typeof()` function, which is
written in C, will not be available. You'll want to comment-out its
declaration in the bundled copy of `pgtap.sql` and then avoid using
`cmp_ok()`, since that function relies on `pg_typeof()`. Note that
`pg_typeof()` might be included in PostgreSQL 8.4, in which case you wouldn't
need to avoid it.

Now you're ready to run your test script!

    % psql -d try -Xf test.sql
    1..1
    ok 1 - My test passed, w00t!

You'll need to have all of those variables in the script to ensure that the
output is proper TAP and that all changes are rolled back -- including the
loading of the test functions -- in the event of an uncaught exception.

Using `pg_prove`
----------------

Or save yourself some effort -- and run a batch of test scripts at once -- by
using `pg_prove`. If you're not relying on `installcheck`, your test scripts
can be a lot less verbose; you don't need to set all the extra variables,
because `pg_prove` takes care of that for you:

    -- Start transaction and plan the tests.
    BEGIN;
    SELECT plan(1);

    -- Run the tests.
    SELECT pass( 'My test passed, w00t!' );

    -- Finish the tests and clean up.
    SELECT * FROM finish();
    ROLLBACK;

Now run the tests. Here's what it looks like when the pgTAP tests are run with
`pg_prove`:

    % pg_prove -U postgres sql/*.sql
    sql/coltap.....ok
    sql/hastap.....ok
    sql/moretap....ok
    sql/pg73.......ok
    sql/pktap......ok
    All tests successful.
    Files=5, Tests=216,  1 wallclock secs ( 0.06 usr  0.02 sys +  0.08 cusr  0.07 csys =  0.23 CPU)
    Result: PASS

Yep, that's all there is to it. Call `pg_prove --help` to see other supported
options, and `pg_prove --man` to see its entire documentation.

Using pgTAP
===========

The purpose of pgTAP is to provide a wide range of testing utilities that
output TAP. TAP, or the "Test Anything Protocol", is an emerging standard for
representing the output from unit tests. It owes its success to its format as
a simple text-based interface that allows for practical machine parsing and
high legibility for humans. TAP started life as part of the test harness for
Perl, but now has implementations in C/C++, Python, PHP, JavaScript, and
Perl -- and, with pgTAP, in PostgreSQL.

I love it when a plan comes together
------------------------------------

Before anything else, you need a testing plan. This basically declares how
many tests your script is going to run to protect against premature failure.

The preferred way to do this is to declare a plan by calling the `plan()`
function:

    SELECT plan( 42 );

There are rare cases when you will not know beforehand how many tests your
script is going to run. In this case, you can declare that you have no plan.
(Try to avoid using this as it weakens your test.)

    SELECT * FROM no_plan();

Often, though, you'll be able to calculate the number of tests, like so:

    SELECT plan( COUNT(*) )
      FROM foo;

At the end of your script, you should always tell pgTAP that the tests have
completed, so that it can output any diagnostics about failures or a
discrepancy between the planned number of tests and the number actually run:

    SELECT * FROM finish();

Test names
----------

By convention, each test is assigned a number in order. This is largely done
automatically for you. However, it's often very useful to assign a name to
each test. Would you rather see this?

      ok 4
      not ok 5
      ok 6

Or this?

      ok 4 - basic multi-variable
      not ok 5 - simple exponential
      ok 6 - force == mass * acceleration

The latter gives you some idea of what failed. It also makes it easier to find
the test in your script: simply search for "simple exponential".

All test functions take a name argument. It's optional, but highly suggested
that you use it.

I'm ok, you're not ok
---------------------

The basic purpose of pgTAP--and of any TAP-emitting test framework, for that
matter--is to print out either "ok #" or "not ok #", depending on whether a
given test succeeded or failed. Everything else is just gravy.

All of the following functions return "ok" or "not ok" depending on whether
the test succeeded or failed.

### `ok( boolean, description )` ###
### `ok( boolean )` ###

    SELECT ok( :this = :that, :description );

This function simply evaluates any expression (`:this = :that` is just a
simple example) and uses that to determine if the test succeeded or failed. A
true expression passes, a false one fails. Very simple.

For example:

    SELECT ok( 9 ^ 2 = 81,    'simple exponential' );
    SELECT ok( 9 < 10,        'simple comparison' );
    SELECT ok( 'foo' ~ '^f',  'simple regex' );
    SELECT ok( active = true, name || ' widget active' )
      FROM widgets;

(Mnemonic:  "This is ok.")

The `description` is a very short description of the test that will be printed
out. It makes it very easy to find a test in your script when it fails and
gives others an idea of your intentions. The description is optional, but we
*very* strongly encourage its use.

Should an `ok()` fail, it will produce some diagnostics:

    not ok 18 - sufficient mucus
    #     Failed test 18: "sufficient mucus"

Furthermore, should the boolean test result argument be passed as a `NULL`
rather than `true` or `false`, `ok()` will assume a test failure and attach an
additional diagnostic:

    not ok 18 - sufficient mucus
    #     Failed test 18: "sufficient mucus"
    #     (test result was NULL)

### `is( anyelement, anyelement, description )` ###
### `is( anyelement, anyelement )` ###
### `isnt( anyelement, anyelement, description )` ###
### `isnt( anyelement, anyelement )` ###

    SELECT is(   :this, :that, :description );
    SELECT isnt( :this, :that, :description );

Similar to `ok()`, `is()` and `isnt()` compare their two arguments with `=`
and `<>` respectively and use the result of that to determine if the test
succeeded or failed. So these:

    -- Is the ultimate answer 42?
    SELECT is( ultimate_answer(), 42, 'Meaning of Life' );

    -- foo() doesn't return empty
    SELECT isnt( foo(), '', 'Got some foo' );

are similar to these:

    SELECT ok(   ultimate_answer() =  42, 'Meaning of Life' );
    SELECT ok(   foo()             <> '', 'Got some foo'    );

(Mnemonic: "This is that." "This isn't that.")

*Note:* `NULL`s are not treated as unknowns by `is()` or `isnt()`. That is, if
`:this` and `:that` are both `NULL`, the test will pass, and if only one of
them is `NULL`, the test will fail.
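
A quick sketch of that behavior:

    -- Passes: both arguments are NULL.
    SELECT is( NULL::text, NULL::text, 'NULL is NULL' );

    -- Fails: only one of the arguments is NULL.
    SELECT is( NULL::text, 'waffle', 'NULL is waffle?' );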

So why use these? They produce better diagnostics on failure. `ok()` cannot
know what you are testing for (beyond the description), but `is()` and
`isnt()` know what the test was and why it failed. For example this test:

    \set foo '\'' waffle '\''
    \set bar '\'' yarblokos '\''
    SELECT is( :foo::text, :bar::text, 'Is foo the same as bar?' );

Will produce something like this:

    # Failed test 17:  "Is foo the same as bar?"
    #         have: waffle
    #         want: yarblokos

So you can figure out what went wrong without re-running the test.

You are encouraged to use `is()` and `isnt()` over `ok()` where possible.

### `matches( anyelement, regex, description )` ###
### `matches( anyelement, regex )` ###

    SELECT matches( :this, '^that', :description );

Similar to `ok()`, `matches()` matches `:this` against the regex `/^that/`.

So this:

    SELECT matches( :this, '^that', 'this is like that' );

is similar to:

    SELECT ok( :this ~ '^that', 'this is like that' );

(Mnemonic: "This matches that.")

Its advantages over `ok()` are similar to that of `is()` and `isnt()`: Better
diagnostics on failure.

### `imatches( anyelement, regex, description )` ###
### `imatches( anyelement, regex )` ###

    SELECT imatches( :this, '^that', :description );

These are just like `matches()` except that the regular expression is compared
to `:this` case-insensitively.
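
So, for example, this test should pass even though the case differs:

    SELECT imatches( 'pgTAP', '^pgtap$', 'matches case-insensitively' );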

### `doesnt_match( anyelement, regex, description )` ###
### `doesnt_match( anyelement, regex )` ###
### `doesnt_imatch( anyelement, regex, description )` ###
### `doesnt_imatch( anyelement, regex )` ###

    SELECT doesnt_match( :this, '^that', :description );

These functions work exactly as `matches()` and `imatches()` do, only they
check if `:this` *does not* match the given pattern.

### `alike( anyelement, pattern, description )` ###
### `alike( anyelement, pattern )` ###
### `ialike( anyelement, pattern, description )` ###
### `ialike( anyelement, pattern )` ###

    SELECT alike( :this, 'that%', :description );

Similar to `matches()`, `alike()` matches `:this` against the SQL `LIKE`
pattern 'that%'. `ialike()` matches case-insensitively.

So this:

    SELECT ialike( :this, 'that%', 'this is alike that' );

is similar to:

    SELECT ok( :this ILIKE 'that%', 'this is like that' );

(Mnemonic: "This is like that.")

Its advantages over `ok()` are similar to that of `is()` and `isnt()`: Better
diagnostics on failure.

### `unalike( anyelement, pattern, description )` ###
### `unalike( anyelement, pattern )` ###
### `unialike( anyelement, pattern, description )` ###
### `unialike( anyelement, pattern )` ###

    SELECT unalike( :this, 'that%', :description );

Works exactly as `alike()`, only it checks if `:this` *does not* match the
given pattern.

### `cmp_ok( anyelement, operator, anyelement, description )` ###
### `cmp_ok( anyelement, operator, anyelement )` ###

    SELECT cmp_ok( :this, :op, :that, :description );

Halfway between `ok()` and `is()` lies `cmp_ok()`. This function allows you to
compare two arguments using any binary operator.

    -- ok( :this = :that );
    SELECT cmp_ok( :this, '=', :that, 'this = that' );

    -- ok( :this >= :that );
    SELECT cmp_ok( :this, '>=', :that, 'this >= that' );

    -- ok( :this && :that );
    SELECT cmp_ok( :this, '&&', :that, 'this && that' );

Its advantage over `ok()` is that when the test fails you'll know what `:this`
and `:that` were:

    not ok 1
    #     Failed test
    #     '23'
    #         &&
    #     NULL

Note that if the value returned by the operation is `NULL`, the test will
be considered to have failed. This may not be what you expect if your test
was, for example:

    SELECT cmp_ok( NULL, '=', NULL );

But in that case, you should probably use `is()`, instead.

### `pass( description )` ###
### `pass()` ###
### `fail( description )` ###
### `fail()` ###

    SELECT pass( :description );
    SELECT fail( :description );

Sometimes you just want to say that the tests have passed. Usually that's
because you've got some complicated condition that is difficult to wedge into
an `ok()`. In this case, you can simply use `pass()` (to declare the test ok)
or `fail()` (for not ok). They are synonyms for `ok(1)` and `ok(0)`.

Use these functions very, very, very sparingly.
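
If you do need them, one pattern is to branch on the complicated condition
yourself and report the result (the `widgets` table here is made up):

    SELECT CASE WHEN (SELECT COUNT(*) FROM widgets) > 0
                THEN pass( 'widgets table is populated' )
                ELSE fail( 'widgets table is populated' )
           END;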

To Error is Human
-----------------

Sometimes you just want to know that a particular query will trigger an error.
Or maybe you want to make sure a query *does not* trigger an error. For such
cases, we provide a couple of test functions to make sure your queries are as
error-prone as you think they should be.

### `throws_ok( query, errcode, errmsg, description )` ###
### `throws_ok( query, errcode, errmsg )` ###
### `throws_ok( query, errmsg, description )` ###
### `throws_ok( query, errcode )` ###
### `throws_ok( query, errmsg )` ###
### `throws_ok( query )` ###

    SELECT throws_ok(
        'INSERT INTO try (id) VALUES (1)',
        '23505',
        'duplicate key value violates unique constraint "try_pkey"',
        'We should get a unique violation for a duplicate PK'
    );

When you want to make sure that an exception is thrown by PostgreSQL under
certain circumstances, use `throws_ok()` to test for those circumstances.
Supported by 8.1 or higher.

The first argument should be a string representing the query to be executed.
`throws_ok()` will use the PL/pgSQL `EXECUTE` statement to execute it and
catch any exception.

The second argument should be an exception error code, which is a
five-character string (if it happens to consist only of numbers and you pass
it as an integer, it will still work). If this value is not `NULL`,
`throws_ok()` will check the thrown exception to ensure that it is the
expected exception. For a complete list of error codes, see [Appendix
A.](http://www.postgresql.org/docs/current/static/errcodes-appendix.html
"Appendix A. PostgreSQL Error Codes") in the [PostgreSQL
documentation](http://www.postgresql.org/docs/current/static/).

The third argument is an error message. This will be most useful for functions
you've written that raise exceptions, so that you can test the exception
message that you've thrown. Otherwise, for core errors, you'll need to be
careful of localized error messages.

The fourth argument is of course a brief test description.

For the three- and two-argument forms of `throws_ok()`, if the second argument
is exactly five bytes long, it is assumed to be an error code and the optional
third argument is the error message. Otherwise, the second argument is assumed
to be an error message and the third argument is a description. If for some
reason you need to test an error message that is five bytes long, use the
four-argument form.
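
To illustrate how the shorter forms are disambiguated, here's a sketch using
PostgreSQL's division_by_zero error (code 22012); note that the error-message
variant is subject to the localization caveat above:

    -- Second argument is exactly five bytes, so it's treated as an error code.
    SELECT throws_ok( 'SELECT 1/0', '22012' );

    -- Second argument is longer, so it's treated as an error message.
    SELECT throws_ok( 'SELECT 1/0', 'division by zero' );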

Should a `throws_ok()` test fail it produces appropriate diagnostic messages.
For example:

    not ok 81 - simple error
    # Failed test "This should not die"
    #       caught: 23505: duplicate key value violates unique constraint "try_pkey"
    #       wanted: 23502: null value in column "id" violates not-null constraint

Idea borrowed from the Test::Exception Perl module.

### `lives_ok( query, description )` ###
### `lives_ok( query )` ###

    SELECT lives_ok(
        'INSERT INTO try (id) VALUES (1)',
        'We should not get a unique violation for a new PK'
    );

The inverse of `throws_ok()`, these functions test to ensure that a SQL
statement does *not* throw an exception.
Supported by 8.1 or higher.

Should a `lives_ok()` test fail, it produces appropriate diagnostic messages.
For example:

    not ok 85 - simple success
    # Failed test "simple success"
    #         died: 23505: duplicate key value violates unique constraint "try_pkey"

Idea borrowed from the Test::Exception Perl module.

A Wicked Schema
---------------

Need to make sure that your database is designed just the way you think it
should be? Use these test functions and rest easy.

### `has_table( schema, table, description )` ###
### `has_table( table, description )` ###
### `has_table( table )` ###

    SELECT has_table(
        'myschema',
        'sometable',
        'I got myschema.sometable'
    );

This function tests whether or not a table exists in the database. The first
argument is a schema name, the second is a table name, and the third is the
test description. If you omit the schema, the table must be visible in the
search path. If you omit the test description, it will be set to "Table
`:table` should exist".

### `hasnt_table( schema, table, description )` ###
### `hasnt_table( table, description )` ###
### `hasnt_table( table )` ###

    SELECT hasnt_table(
        'myschema',
        'sometable',
        'There should be no table myschema.sometable'
    );

This function is the inverse of `has_table()`. The test passes if the
specified table does *not* exist.

### `has_view( schema, view, description )` ###
### `has_view( view, description )` ###
### `has_view( view )` ###

    SELECT has_view(
        'myschema',
        'someview',
        'I got myschema.someview'
    );

Just like `has_table()`, only it tests for the existence of a view.

### `hasnt_view( schema, view, description )` ###
### `hasnt_view( view, description )` ###
### `hasnt_view( view )` ###

    SELECT hasnt_view(
        'myschema',
        'someview',
        'There should be no myschema.someview'
    );

This function is the inverse of `has_view()`. The test passes if the specified
view does *not* exist.

### `has_column( schema, table, column, description )` ###
### `has_column( table, column, description )` ###
### `has_column( table, column )` ###

    SELECT has_column(
        'myschema',
        'sometable',
        'somecolumn',
        'I got myschema.sometable.somecolumn'
    );

Tests whether or not a column exists in a given table or view. The first
argument is the schema name, the second the table name, the third the column
name, and the fourth is the test description. If the schema is omitted, the
table must be visible in the search path. If the test description is omitted,
it will be set to "Column `:table`.`:column` should exist".

### `hasnt_column( schema, table, column, description )` ###
### `hasnt_column( table, column, description )` ###
### `hasnt_column( table, column )` ###

    SELECT hasnt_column(
        'myschema',
        'sometable',
        'somecolumn',
        'There should be no myschema.sometable.somecolumn column'
    );

This function is the inverse of `has_column()`. The test passes if the
specified column does *not* exist.

### `col_not_null( schema, table, column, description )` ###
### `col_not_null( table, column, description )` ###
### `col_not_null( table, column )` ###

    SELECT col_not_null(
        'myschema',
        'sometable',
        'somecolumn',
        'Column myschema.sometable.somecolumn should be NOT NULL'
    );

Tests whether the specified column has a `NOT NULL` constraint. The first
argument is the schema name, the second the table name, the third the column
name, and the fourth is the test description. If the schema is omitted, the
table must be visible in the search path. If the test description is omitted,
it will be set to "Column `:table`.`:column` should be NOT NULL". Note that
this test will fail if the table or column in question does not exist.

### `col_is_null( schema, table, column, description )` ###
### `col_is_null( table, column, description )` ###
### `col_is_null( table, column )` ###

    SELECT col_is_null(
        'myschema',
        'sometable',
        'somecolumn',
        'Column myschema.sometable.somecolumn should allow NULL'
    );

This function is the inverse of `col_not_null()`: the test passes if the
column does not have a `NOT NULL` constraint. The first argument is the schema
name, the second the table name, the third the column name, and the fourth is
the test description. If the schema is omitted, the table must be visible in
the search path. If the test description is omitted, it will be set to "Column
`:table`.`:column` should allow NULL". Note that this test will fail if the
table or column in question does not exist.

### `col_type_is( schema, table, column, type, description )` ###
### `col_type_is( table, column, type, description )` ###
### `col_type_is( table, column, type )` ###

    SELECT col_type_is(
        'myschema',
        'sometable',
        'somecolumn',
        'numeric(10,2)',
        'Column myschema.sometable.somecolumn should be type numeric(10,2)'
    );

This function tests that the specified column is of a particular type. If it
fails, it will emit diagnostics naming the actual type. The first argument is
the schema name, the second the table name, the third the column name, the
fourth the type, and the fifth is the test description. If the schema is
omitted, the table must be visible in the search path. If the test description
is omitted, it will be set to "Column `:table`.`:column` should be type
`:type`". Note that this test will fail if the table or column in question
does not exist.

The type argument should be formatted as it is displayed by the `\d` command
in `psql`. For example, if you have a numeric column with a precision of 8,
you should specify "numeric(8,0)". If you
created a `varchar(64)` column, you should pass the type as "character
varying(64)".
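
For instance, to check a hypothetical `users.name` column that was created as
`varchar(64)`:

    SELECT col_type_is( 'users', 'name', 'character varying(64)' );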

If the test fails, it will output useful diagnostics. For example this test:

    SELECT col_type_is( 'pg_catalog', 'pg_type', 'typname', 'text' );

Will produce something like this:

    # Failed test 138: "Column pg_catalog.pg_type.typname should be type text"
    #         have: name
    #         want: text

### `col_default_is( schema, table, column, default, description )` ###
### `col_default_is( table, column, default, description )` ###
### `col_default_is( table, column, default )` ###

    SELECT col_default_is(
        'myschema',
        'sometable',
        'somecolumn',
        'howdy'::text,
        'Column myschema.sometable.somecolumn should default to ''howdy'''
    );

Tests the default value of a column. If it fails, it will emit diagnostics
showing the actual default value. The first argument is the schema name, the
second the table name, the third the column name, the fourth the default
value, and the fifth is the test description. If the schema is omitted, the
table must be visible in the search path. If the test description is omitted,
it will be set to "Column `:table`.`:column` should default to `:default`".
Note that this test will fail if the table or column in question does not
exist.

The default argument must have an unambiguous type in order for the call to
succeed. If you see an error such as 'ERROR: could not determine polymorphic
type because input has type "unknown"', it's because you forgot to cast the
expected value to its proper type. IOW, this will fail:

    SELECT col_default_is( 'tab', 'name', 'foo' );

But this will not:

    SELECT col_default_is( 'tab', 'name', 'foo'::text );

You can also test for functional defaults. Just specify the function call as a
string:

    SELECT col_default_is( 'user', 'created_at', 'now()' );

If the test fails, it will output useful diagnostics. For example, this test:

    SELECT col_default_is(
        'pg_catalog',
        'pg_type',
        'typname',
        'foo'::text,
        'check typname'
    );

Will produce something like this:

    # Failed test 152: "check typname"
    #         have: NULL
    #         want: foo

### `has_pk( schema, table, description )` ###
### `has_pk( table, description )` ###
### `has_pk( table )` ###

    SELECT has_pk(
        'myschema',
        'sometable',
        'Table myschema.sometable should have a primary key'
    );

Tests whether or not a table has a primary key. The first argument is the
schema name, the second the table name, and the third is the test description.
If the schema is omitted, the table must be visible in the search path. If the
test description is omitted, it will be set to "Table `:table` should have a
primary key". Note that this test will fail if the table in question does not
exist.

### `hasnt_pk( schema, table, description )` ###
### `hasnt_pk( table, description )` ###
### `hasnt_pk( table )` ###

    SELECT hasnt_pk(
        'myschema',
        'sometable',
        'Table myschema.sometable should not have a primary key'
    );

This function is the inverse of `has_pk()`. The test passes if the specified
primary key does *not* exist.

### `has_fk( schema, table, description )` ###
### `has_fk( table, description )` ###
### `has_fk( table )` ###

    SELECT has_fk(
        'myschema',
        'sometable',
        'Table myschema.sometable should have a foreign key constraint'
    );

Tests whether or not a table has a foreign key constraint. The first argument
is the schema name, the second the table name, and the third is the test
description. If the schema is omitted, the table must be visible in the search
path. If the test description is omitted, it will be set to "Table `:table`
should have a foreign key constraint". Note that this test will fail if the
table in question does not exist.

### `hasnt_fk( schema, table, description )` ###
### `hasnt_fk( table, description )` ###
### `hasnt_fk( table )` ###

    SELECT hasnt_fk(
        'myschema',
        'sometable',
        'Table myschema.sometable should not have a foreign key constraint'
    );

This function is the inverse of `has_fk()`. The test passes if the specified
foreign key does *not* exist.

### `col_is_pk( schema, table, column, description )` ###
### `col_is_pk( schema, table, column[], description )` ###
### `col_is_pk( table, column, description )` ###
### `col_is_pk( table, column[], description )` ###
### `col_is_pk( table, column )` ###
### `col_is_pk( table, column[] )` ###

    SELECT col_is_pk(
        'myschema',
        'sometable',
        'id',
        'Column myschema.sometable.id should be a primary key'
    );

    SELECT col_is_pk(
        'persons',
        ARRAY['given_name', 'surname']
    );

Tests whether the specified column or columns in a table is/are the primary
key for that table. If it fails, it will emit diagnostics showing the actual
primary key columns, if any. The first argument is the schema name, the second
the table name, the third the column name or an array of column names, and the
fourth is the test description. If the schema is omitted, the table must be
visible in the search path. If the test description is omitted, it will be set
to "Column `:table`.`:column` should be a primary key". Note that this test
will fail if the table or column in question does not exist.

If the test fails, it will output useful diagnostics. For example this test:

    SELECT col_is_pk( 'pg_type', 'id' );

Will produce something like this:

    # Failed test 178: "Column pg_type.id should be a primary key"
    #         have: {}
    #         want: {id}

### `col_isnt_pk( schema, table, column, description )` ###
### `col_isnt_pk( schema, table, column[], description )` ###
### `col_isnt_pk( table, column, description )` ###
### `col_isnt_pk( table, column[], description )` ###
### `col_isnt_pk( table, column )` ###
### `col_isnt_pk( table, column[] )` ###

    SELECT col_isnt_pk(
        'myschema',
        'sometable',
        'id',
        'Column myschema.sometable.id should not be a primary key'
    );

    SELECT col_isnt_pk(
        'persons',
        ARRAY['given_name', 'surname']
    );

This function is the inverse of `col_is_pk()`. The test passes if the
specified column or columns are not a primary key.

### `col_is_fk( schema, table, column, description )` ###
### `col_is_fk( schema, table, column[], description )` ###
### `col_is_fk( table, column, description )` ###
### `col_is_fk( table, column[], description )` ###
### `col_is_fk( table, column )` ###
### `col_is_fk( table, column[] )` ###

    SELECT col_is_fk(
        'myschema',
        'sometable',
        'other_id',
        'Column myschema.sometable.other_id should be a foreign key'
    );

    SELECT col_is_fk(
        'contacts',
        ARRAY['given_name', 'surname']
    );

Just like `col_is_pk()`, except that it tests that the column or array of
columns are a foreign key. The diagnostics on failure are a bit different,
too. Since the table might have more than one foreign key, the diagnostics
simply list all of the foreign key constraint columns, like so:

    #    Table widget has foreign key constraints on these columns:
    #        {thingy_id}
    #        {surname,given_name}

### `col_isnt_fk( schema, table, column, description )` ###
### `col_isnt_fk( schema, table, column[], description )` ###
### `col_isnt_fk( table, column, description )` ###
### `col_isnt_fk( table, column[], description )` ###
### `col_isnt_fk( table, column )` ###
### `col_isnt_fk( table, column[] )` ###

    SELECT col_isnt_fk(
        'myschema',
        'sometable',
        'other_id',
        'Column myschema.sometable.other_id should not be a foreign key'
    );

    SELECT col_isnt_fk(
        'contacts',
        ARRAY['given_name', 'surname']
    );

This function is the inverse of `col_is_fk()`. The test passes if the
specified column or columns are not a foreign key.

### `fk_ok( fk_schema, fk_table, fk_column[], pk_schema, pk_table, pk_column[], description )` ###
### `fk_ok( fk_schema, fk_table, fk_column[], fk_schema, pk_table, pk_column[] )` ###
### `fk_ok( fk_table, fk_column[], pk_table, pk_column[], description )` ###
### `fk_ok( fk_table, fk_column[], pk_table, pk_column[] )` ###
### `fk_ok( fk_schema, fk_table, fk_column, pk_schema, pk_table, pk_column, description )` ###
### `fk_ok( fk_schema, fk_table, fk_column, pk_schema, pk_table, pk_column )` ###
### `fk_ok( fk_table, fk_column, pk_table, pk_column, description )` ###
### `fk_ok( fk_table, fk_column, pk_table, pk_column )` ###

    SELECT fk_ok(
        'myschema',
        'sometable',
        'big_id',
        'myschema',
        'bigtable',
        'id',
        'myschema.sometable(big_id) should reference myschema.bigtable(id)'
    );

    SELECT fk_ok(
        'contacts',
        ARRAY['person_given_name', 'person_surname'],
        'persons',
        ARRAY['given_name', 'surname']
    );

This function combines `col_is_fk()` and `col_is_pk()` into a single test that
also happens to determine that there is in fact a foreign key relationship
between the foreign and primary key tables. To properly test your
relationships, this should be your main test function of choice.

The first three arguments are the schema, table, and column or array of
columns that constitute the foreign key constraint. The schema name is
optional, and the columns can be specified as a string for a single column or
an array of strings for multiple columns. The next three arguments are the
schema, table, and column or columns that constitute the corresponding primary
key. Again, the schema is optional and the columns may be a string or array of
strings (though of course it should have the same number of elements as the
foreign key column argument). The seventh argument is an optional description.
If it's not included, it will be set to ":fk_schema.:fk_table(:fk_column)
should reference :pk_schema.:pk_table(:pk_column)".

If the test fails, it will output useful diagnostics. For example, this test:

    SELECT fk_ok( 'contacts', 'person_id', 'persons', 'id' );

Will produce something like this:

    # Failed test 178: "Column contacts(person_id) should reference persons(id)"
    #         have: contacts(person_id) REFERENCES persons(id)
    #         want: contacts(person_nick) REFERENCES persons(nick)

### `has_unique( schema, table, description )` ###
### `has_unique( table, description )` ###
### `has_unique( table )` ###

    SELECT has_unique(
        'myschema',
        'sometable',
        'Table myschema.sometable should have a unique constraint'
    );

Tests whether or not a table has a unique constraint. The first argument is
the schema name, the second the table name, and the third is the test
description. If the schema is omitted, the table must be visible in the search
path. If the test description is omitted, it will be set to "Table `:table`
should have a unique constraint". Note that this test will fail if the table
in question does not exist.

### `col_is_unique( schema, table, column, description )` ###
### `col_is_unique( schema, table, column[], description )` ###
### `col_is_unique( table, column, description )` ###
### `col_is_unique( table, column[], description )` ###
### `col_is_unique( table, column )` ###
### `col_is_unique( table, column[] )` ###

    SELECT col_is_unique(
        'myschema',
        'sometable',
        'other_id',
        'Column myschema.sometable.other_id should have a unique constraint'
    );

    SELECT col_is_unique(
        'contacts',
        ARRAY['given_name', 'surname']
    );

Just like `col_is_pk()`, except that it tests that the column or array of
columns have a unique constraint on them.

### `has_check( schema, table, description )` ###
### `has_check( table, description )` ###
### `has_check( table )` ###

    SELECT has_check(
        'myschema',
        'sometable',
        'Table myschema.sometable should have a check constraint'
    );

Tests whether or not a table has a check constraint. The first argument is the
schema name, the second the table name, and the third is the test description.
If the schema is omitted, the table must be visible in the search path. If the
test description is omitted, it will be set to "Table `:table` should have a
check constraint". Note that this test will fail if the table in question does
not exist.

### `col_has_check( schema, table, column, description )` ###
### `col_has_check( schema, table, column[], description )` ###
### `col_has_check( table, column, description )` ###
### `col_has_check( table, column[], description )` ###
### `col_has_check( table, column )` ###
### `col_has_check( table, column[] )` ###

    SELECT col_has_check(
        'myschema',
        'sometable',
        'other_id',
        'Column myschema.sometable.other_id should have a check constraint'
    );

    SELECT col_has_check(
        'contacts',
        ARRAY['given_name', 'surname']
    );

Just like `col_is_pk()`, except that it tests that the column or array of
columns have a check constraint on them.

### `has_index( schema, table, index, columns[], description )` ###
### `has_index( schema, table, index, columns[] )` ###
### `has_index( schema, table, index, column/expression, description )` ###
### `has_index( schema, table, index, column/expression )` ###
### `has_index( table, index, columns[], description )` ###
### `has_index( table, index, columns[] )` ###
### `has_index( table, index, column/expression, description )` ###
### `has_index( table, index, column/expression )` ###
### `has_index( schema, table, index )` ###
### `has_index( table, index, description )` ###
### `has_index( table, index )` ###

    SELECT has_index(
        'myschema',
        'sometable',
        'myindex',
        ARRAY[ 'somecolumn', 'anothercolumn' ],
        'Index "myindex" should exist'
    );

    SELECT has_index('myschema', 'sometable', 'anidx', 'somecolumn');
    SELECT has_index('myschema', 'sometable', 'loweridx', 'LOWER(somecolumn)');
    SELECT has_index('sometable', 'someindex');

Checks for the existence of an index associated with the named table. The
`:schema` argument is optional, as is the column name or names or expression,
and the description. The columns argument may be a string naming one column or
an array of column names. It may also be a string representing an expression,
such as `LOWER(foo)`. If you find that the function call seems to be getting
confused, cast the index name to the `NAME` type:

    SELECT has_index( 'public', 'sometab', 'idx_foo', 'name'::name );

If the index does not exist, `has_index()` will output a diagnostic message
such as:

    # Index "blah" ON public.sometab not found

If the index was found but the column specification or expression is
incorrect, the diagnostics will look more like this:

    #       have: "idx_baz" ON public.sometab(lower(name))
    #       want: "idx_baz" ON public.sometab(lower(lname))

### `index_is_unique( schema, table, index, description )` ###
### `index_is_unique( schema, table, index )` ###
### `index_is_unique( table, index )` ###
### `index_is_unique( index )` ###

    SELECT index_is_unique(
        'myschema',
        'sometable',
        'myindex',
        'Index "myindex" should be unique'
    );

    SELECT index_is_unique( 'sometable', 'myindex' );

Tests whether an index is unique.

### `index_is_primary( schema, table, index, description )` ###
### `index_is_primary( schema, table, index )` ###
### `index_is_primary( table, index )` ###
### `index_is_primary( index )` ###

    SELECT index_is_primary(
        'myschema',
        'sometable',
        'myindex',
        'Index "myindex" should be on a primary key'
    );

    SELECT index_is_primary( 'sometable', 'myindex' );

Tests whether an index is on a primary key.

### `is_clustered( schema, table, index, description )` ###
### `is_clustered( schema, table, index )` ###
### `is_clustered( table, index )` ###
### `is_clustered( index )` ###

    SELECT is_clustered(
        'myschema',
        'sometable',
        'myindex',
        'Table sometable should be clustered on "myindex"'
    );

    SELECT is_clustered( 'sometable', 'myindex' );

Tests whether a table is clustered on the given index. A table is clustered on
an index when the SQL command `CLUSTER index_name ON table_name` has been
executed.
Clustering reorganizes the table tuples so that they are stored on disk in the
order defined by the index.

### `index_is_type( schema, table, index, type, description )` ###
### `index_is_type( schema, table, index, type )` ###
### `index_is_type( table, index, type )` ###
### `index_is_type( index, type )` ###

    SELECT index_is_type(
        'myschema',
        'sometable',
        'myindex',
        'gist',
        'Index "myindex" should be a GIST index'
    );

    SELECT index_is_type( 'myindex', 'gin' );

Tests to ensure that an index is of a particular type. At the time of this
writing, the supported types are:

* btree
* hash
* gist
* gin

If the test fails, it will emit a diagnostic message with the actual index
type, like so:

    # Failed test 175: "Index idx_bar should be a hash index"
    #         have: btree
    #         want: hash

### `has_trigger( schema, table, trigger, description )` ###
### `has_trigger( schema, table, trigger )` ###
### `has_trigger( table, trigger )` ###

    SELECT has_trigger(
        'myschema',
        'sometable',
        'sometrigger',
        'Trigger "sometrigger" should exist'
    );

    SELECT has_trigger( 'sometable', 'sometrigger' );

Tests to see if the specified table has the named trigger. The `:description`
is optional, and if the schema is omitted, the table with which the trigger is
associated must be visible.

### `trigger_is( schema, table, trigger, schema, function, description )` ###
### `trigger_is( schema, table, trigger, schema, function )` ###
### `trigger_is( table, trigger, function, description )` ###
### `trigger_is( table, trigger, function )` ###

    SELECT trigger_is(
        'myschema',
        'sometable',
        'sometrigger',
        'myschema',
        'somefunction',
        'Trigger "sometrigger" should call somefunction()'
    );

Tests that the specified trigger calls the named function. If not, it outputs
a useful diagnostic:

    # Failed test 31: "Trigger set_users_pass should call hash_password()"
    #         have: hash_pass
    #         want: hash_password

### `can( schema, functions[], description )` ###
### `can( schema, functions[] )` ###
### `can( functions[], description )` ###
### `can( functions[] )` ###

    SELECT can( 'pg_catalog', ARRAY['upper', 'lower'] );

Checks to be sure that `:schema` has `:functions[]` defined. If `:schema` is
omitted, then `can()` will look for the functions in the schemas in the
search path. No matter how many functions are listed in `:functions[]`, a
single call to `can()` counts as one test. If you want otherwise, call `can()`
once for each function -- or better yet, use `can_ok()`.

If any of the functions are not defined, the test will fail and the
diagnostics will output a list of the functions that are missing, like so:

    # Failed test 52: "Schema pg_catalog can"
    #     pg_catalog.foo() missing
    #     pg_catalog.bar() missing

### `can_ok( schema, function, args[], description )` ###
### `can_ok( schema, function, args[] )` ###
### `can_ok( schema, function, description )` ###
### `can_ok( schema, function )` ###
### `can_ok( function, args[], description )` ###
### `can_ok( function, args[] )` ###
### `can_ok( function, description )` ###
### `can_ok( function )` ###

    SELECT can_ok(
        'pg_catalog',
        'decode',
        ARRAY[ 'text', 'text' ],
        'Function decode(text, text) should exist'
    );

    SELECT can_ok( 'do_something' );
    SELECT can_ok( 'do_something', ARRAY['integer'] );
    SELECT can_ok( 'do_something', ARRAY['numeric'] );

Checks to be sure that the given function exists in the named schema and with
the specified argument data types. If `:schema` is omitted, `can_ok()` will
search for the function in the schemas defined in the search path. If
`:args[]` is omitted, `can_ok()` will see if the function exists without
regard to its arguments.

The `:args[]` argument should be formatted as it would be displayed in the
view of a function using the `\df` command in `psql`. For example, even if you
have a numeric column with a precision of 8, you should specify
`ARRAY['numeric']`. If you created a `varchar(64)` column, you should pass
the `:args[]` argument as `ARRAY['character varying']`.

If you wish to use the two-argument form of `can_ok()`, specifying only the
schema and the function name, you must cast the `:function` argument to
`name` in order to disambiguate it from the `can_ok( function, description )`
form. If you neglect to do so, your results will be unexpected.

Also, if you use the string form to specify the `:args[]` array, be sure to
cast it to `name[]` to disambiguate it from a text string:

    SELECT can_ok( 'lower', '{text}'::name[] );

Diagnostics
-----------

If you pick the right test function, you'll usually get a good idea of what
went wrong when it failed. But sometimes it doesn't work out that way. So here
we have ways for you to write your own diagnostic messages which are safer
than just `\echo` or `SELECT foo`.

### `diag( text )` ###

Returns a diagnostic message which is guaranteed not to interfere with
test output. Handy for this sort of thing:

    -- Output a diagnostic message if the collation is not en_US.UTF-8.
    SELECT diag(
         E'These tests expect LC_COLLATE to be en_US.UTF-8,\n'
      || 'but yours is set to ' || setting || E'.\n'
      || 'As a result, some tests may fail. YMMV.'
    )
      FROM pg_settings
     WHERE name = 'lc_collate'
       AND setting <> 'en_US.UTF-8';

Which would produce:

    # These tests expect LC_COLLATE to be en_US.UTF-8,
    # but yours is set to en_US.ISO8859-1.
    # As a result, some tests may fail. YMMV.

Conditional Tests
-----------------

Sometimes running a test under certain conditions will cause the test script
to die, perhaps because some resource (such as a network connection) isn't
available. Other times you might have tests that you want to pass but haven't
gotten around to implementing, or tests that pass only under certain
circumstances, such as with particular versions of PostgreSQL. In these cases
it's necessary to skip tests, or to declare that they are supposed to fail but
will work in the future (a todo test).

### `skip( why, how_many )` ###
### `skip( how_many, why )` ###
### `skip( why )` ###
### `skip( how_many )` ###

Outputs SKIP test results. Use it in a conditional expression within a
`SELECT` statement to replace the output of a test that you otherwise would
have run.

    SELECT CASE WHEN pg_version_num() < 80100
        THEN skip('throws_ok() not supported before 8.1', 2 )
        ELSE throws_ok( 'SELECT 1/0', 22012, 'division by zero' )
             || E'\n'
             || throws_ok( 'INSERT INTO try (id) VALUES (1)', '23505' )
        END;

Note how use of the conditional `CASE` statement has been used to determine
whether or not to run a couple of tests. If they are to be run, they are
concatenated with newlines, so that we can run a few tests in the same query.
If we don't want to run them, we call `skip()` and tell it how many tests
we're skipping.

If you don't specify how many tests to skip, `skip()` will assume that you're
skipping only one. This is useful for the simple case, of course:

    SELECT CASE current_schema()
        WHEN 'public' THEN is( :this, :that )
        ELSE skip( 'Tests not running in the "public" schema' )
        END;

But you can also use it in a `SELECT` statement that would otherwise return
multiple rows:

    SELECT CASE current_schema()
        WHEN 'public' THEN is( nspname, 'public' )
        ELSE skip( 'Cannot see the public schema' )
        END
      FROM pg_namespace;

This will cause it to skip the same number of rows as would have been tested
had the `WHEN` condition been true.

### `todo( why, how_many )` ###
### `todo( how_many )` ###
### `todo( why )` ###

Declares a series of tests that you expect to fail and why. Perhaps it's
because you haven't fixed a bug or haven't finished a new feature:

    SELECT todo('URIGeller not finished', 2);

    \set card '\'' Eight of clubs '\''
    SELECT is( URIGeller.yourCard(), :card, 'Is THIS your card?' );
    SELECT is( URIGeller.bendSpoon(), 'bent', 'Spoon bending, how original' );

With `todo()`, `:how_many` specifies how many tests are expected to fail. If
`:how_many` is omitted, it defaults to 1. pgTAP will run the tests normally,
but print out special flags indicating they are "todo" tests. The test harness
will interpret these failures as ok. Should any todo test pass, the harness
will report it as an unexpected success. You then know that the thing you had
todo is done and can remove the call to `todo()`.

The nice part about todo tests, as opposed to simply commenting out a block of
tests, is that they're like a programmatic todo list. You know how much work
is left to be done, you're aware of what bugs there are, and you'll know
immediately when they're fixed.

### `todo_start( why )` ###
### `todo_start()` ###

This function allows you to declare all subsequent tests as TODO tests, up
until the `todo_end()` function is called.
the `todo_end()` function is called.

The `todo()` syntax is generally pretty good about figuring out whether or not
we're in a TODO test. However, often we find it difficult to specify the
*number* of tests that are TODO tests. Thus, you can instead use
`todo_start()` and `todo_end()` to more easily define the scope of your TODO
tests.

Note that you can nest TODO tests, too:

    SELECT todo_start('working on this');
    -- lots of code
    SELECT todo_start('working on that');
    -- more code
    SELECT todo_end();
    SELECT todo_end();

This is generally not recommended, but large testing systems often have weird
internal needs.

The `todo_start()` and `todo_end()` functions should also work with the
`todo()` function, although it's not guaranteed and its use is also
discouraged:

    SELECT todo_start('working on this');
    -- lots of code
    SELECT todo('working on that', 2);
    -- Two tests for which the above line applies
    -- Followed by more tests scoped till the following line.
    SELECT todo_end();

We recommend that you pick one style or another of TODO to be on the safe
side.

### `todo_end()` ###

Stops running tests as TODO tests. This function is fatal if called without a
preceding `todo_start()` call.

### `in_todo()` ###

Returns true if the test is currently inside a TODO block.
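
For example, a custom test function might use `in_todo()` to decide whether to
emit extra diagnostics. This is a minimal illustrative sketch (the message
text is arbitrary):

    SELECT CASE WHEN in_todo()
        THEN diag( 'Running inside a TODO block' )
        ELSE diag( 'Running normally' )
        END;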

Writing Test Functions
======================

So, you've been using pgTAP for a while, and now you want to write your own
test functions. Go ahead; I don't mind. In fact, I encourage it. How? Why,
by providing a function you can use to test your tests, of course!

But first, a brief primer on writing your own test functions. There isn't much
to it, really. Just write your function to do whatever comparison you want. As
long as you have a boolean value indicating whether or not the test passed,
you're golden. Then just use `ok()` to ensure that everything is tracked
appropriately by a test script.

For example, say that you wanted to create a function to ensure that two text
values always compare case-insensitively. Sure you could do this with `is()`
and the `LOWER()` function, but if you're doing this all the time, you might
want to simplify things. Here's how to go about it:

    CREATE OR REPLACE FUNCTION lc_eq ( TEXT, TEXT, TEXT )
    RETURNS TEXT AS $$
    DECLARE
        result BOOLEAN;
    BEGIN
        result := LOWER($1) = LOWER($2);
        RETURN ok( result, $3 ) || CASE WHEN result THEN '' ELSE E'\n' || diag(
               '    Have: ' || $1 ||
            E'\n    Want: ' || $2
        ) END;
    END;
    $$ LANGUAGE plpgsql;

Yep, that's it. The key is to always use pgTAP's `ok()` function to guarantee
that the output is properly formatted, uses the next number in the sequence,
and the results are properly recorded in the database for summarization at
the end of the test script. You can also provide diagnostics as appropriate;
just append them to the output of `ok()` as we've done here.

Of course, you don't have to directly use `ok()`; you can also use another
pgTAP function that ultimately calls `ok()`. IOW, while the above example
is instructive, this version is easier on the eyes:

    CREATE OR REPLACE FUNCTION lc_eq ( TEXT, TEXT, TEXT )
    RETURNS TEXT AS $$
         SELECT is( LOWER($1), LOWER($2), $3 );
    $$ LANGUAGE sql;

But either way, let pgTAP handle recording the test results and formatting the
output.

Testing Test Functions
----------------------

Now you've written your test function. So how do you test it? Why, with this
handy-dandy test function!

### `check_test( test_output, is_ok, name, want_description, want_diag )` ###
### `check_test( test_output, is_ok, name, want_description )` ###
### `check_test( test_output, is_ok, name )` ###
### `check_test( test_output, is_ok )` ###

    SELECT * FROM check_test(
        lc_eq('This', 'THAT', 'not eq'),
        false,
        'lc_eq fail',
        'not eq',
        E'    Have: This\n    Want: THAT'
    );

    SELECT * FROM check_test(
        lc_eq('This', 'THIS', 'eq'),
        true
    );

This function runs anywhere between one and three tests against a test
function. For the impatient, the arguments are:

* `:test_output` - The output from your test. Usually it's just returned by a
  call to the test function itself. Required.
* `:is_ok` - Boolean indicating whether or not the test is expected to pass.
  Required.
* `:name` - A brief name for your test, to make it easier to find failures in
  your test script. Optional.
* `:want_description` - Expected test description to be output by the test.
  Optional.
* `:want_diag` - Expected diagnostic message output during the execution of
  a test. Must always follow whatever is output by the call to `ok()`.
  Optional.

Now, on with the detailed documentation. At its simplest, you just pass it the
output of your test (and it must be one and **only one** test function's
output, or you'll screw up the count, so don't do that!) and a boolean value
indicating whether or not you expect the test to have passed. That looks
something like the second example above.

All other arguments are optional, but I recommend that you *always* include a
short test name to make it easier to track down failures in your test script.
`check_test()` uses this name to construct descriptions of all of the tests it
runs. For example, without a short name, the above example will yield output
like so:

    not ok 14 - Test should pass

Yeah, but which test? If you have a lot of these, that won't be much help. So
give each call to `check_test()` a short, succinct name:

    SELECT * FROM check_test(
        lc_eq('This', 'THIS', 'eq'),
        true,
        'Simple lc_eq test'
    );

Then you'll get output more like this:

    not ok 14 - Simple lc_eq test should pass

Which will make it much easier to find the failing test in your test script.

The optional fourth argument is the description you expect to be output. This
is especially important if your test function generates a description when
none is passed to it. You want to make sure that your function generates the
test description you think it should! This will cause a second test to be run
on your test function. So for something like this:

    SELECT * FROM check_test(
        lc_eq( 'this', 'THIS' ),
        true,
        'lc_eq() test',
        'this is THIS'
    );

The output then would look something like this, assuming that the `lc_eq()`
function generated the proper description (the above example does not):

    ok 42 - lc_eq() test should pass
    ok 43 - lc_eq() test should have the proper description

See how there are two tests run for a single call to `check_test()`? Be sure
to adjust your plan accordingly. Also note how the test name was used in the
descriptions for both tests.

If the test had failed, it would output nice diagnostics (internally it just
uses `is()` to compare the strings):

    not ok 43 - lc_eq() test should have the proper description
    # Failed test 43:  "lc_eq() test should have the proper description"
    #         have: 'this is this'
    #         want: 'this is THIS'

The fifth argument, also optional, of course, compares the diagnostics
generated during the test to an expected string. Such diagnostics **must**
follow whatever is output by the call to `ok()` in your test. Your test
function should not call `diag()` until after it calls `ok()` or things will
get truly funky.

Assuming you've followed that rule in your `lc_eq()` test function, to see
what happens when `lc_eq()` fails, write your test to check the diagnostics
like so:

    SELECT * FROM check_test(
        lc_eq( 'this', 'THat' ),
        false,
        'lc_eq() failing test',
        'this is THat',
        E'    Have: this\n    Want: THat'
    );

This of course triggers a third test to run. The output will look like so:

    ok 44 - lc_eq() failing test should fail
    ok 45 - lc_eq() failing test should have the proper description
    ok 46 - lc_eq() failing test should have the proper diagnostics

And of course, if the diagnostic test fails, it will output diagnostics just
like a description failure would, something like this:

    not ok 46 - lc_eq() failing test should have the proper diagnostics
    # Failed test 46:  "lc_eq() failing test should have the proper diagnostics"
    #         have:     Have: this
    #     Want: that
    #         want:     Have: this
    #     Want: THat

I realize that can be a bit confusing, given the various haves and wants, but
it gets the job done. Of course, if your diagnostics use something other than
indented "have" and "want", such failures will be easier to read. But either
way, do test your diagnostics!

Utility Functions
=================

Along with the usual array of testing, planning, and diagnostic functions,
pgTAP provides a few extra functions to make the work of testing more pleasant.

### `pg_version()` ###

    SELECT pg_version();

Returns the server version number against which pgTAP was compiled. This is
the stringified version number displayed in the first part of the core
`version()` function and stored in the "server_version" setting:

    try=% select current_setting( 'server_version'), pg_version();
     current_setting | pg_version
    -----------------+------------
     8.3.4           | 8.3.4
    (1 row)

### `pg_version_num()` ###

    SELECT pg_version_num();

Returns an integer representation of the server version number against which
pgTAP was compiled. This function is useful for determining whether or not
certain tests should be run or skipped (using `skip()`) depending on the
version of PostgreSQL. For example:

    SELECT CASE WHEN pg_version_num() < 80100
        THEN skip('throws_ok() not supported before 8.1' )
        ELSE throws_ok( 'SELECT 1/0', 22012, 'division by zero' )
        END;

The revision level is in the tens position, the minor version in the thousands
position, and the major version in the ten thousands position and above
(assuming PostgreSQL 10 is ever released, it will be in the hundred thousands
position). This value is the same as the "server_version_num" setting available
in PostgreSQL 8.2 and higher, but supported by this function back to
PostgreSQL 8.0:

    try=% select current_setting( 'server_version_num'), pg_version_num();
     current_setting | pg_version_num
    -----------------+----------------
     80304           |          80304

### `os_name()` ###

    SELECT os_name();

Returns a string representing the name of the operating system on which pgTAP
was compiled. This can be useful for determining whether or not to skip tests
on certain operating systems.

This is usually the same as the output of `uname`, but converted to lower case.
There are some semantics in the pgTAP build process to detect other operating
systems, though assistance in improving such detection would be greatly
appreciated.

**NOTE:** The values returned by this function may change in the future,
depending on how good the pgTAP build process gets at detecting an OS.

### `pg_typeof(any)` ###

    SELECT pg_typeof(:value);

Returns a `regtype` identifying the type of value passed to the function. This
function is used internally by `cmp_ok()` to properly construct types when
executing the comparison, but might be generally useful.

    try=% select pg_typeof(12), pg_typeof(100.2);
     pg_typeof | pg_typeof
    -----------+-----------
     integer   | numeric

Compatibility
=============

Here are some notes on how pgTAP is built for particular versions of
PostgreSQL. This helps you to understand any side-effects. If you'd rather not
have these changes in your schema, build `pgTAP` with a schema just for it,
instead:

    make TAPSCHEMA=tap

To see the specifics for each version of PostgreSQL, consult the files in the
`compat/` directory in the pgTAP distribution.

8.3 and Higher
--------------

No changes. Everything should just work.

8.2 and Lower
-------------

A number of casts are added to increase compatibility. The casts are:

* `boolean` to `text`
* `name[]` to `text`
* `regtype` to `text`

An `=` operator is also added that compares `name[]` values.

8.0 and Lower
-------------

A patch is applied that changes how some of the test functions are written.
Also, a few casts are added for compatibility:

* `oidvector` to `regtype[]`
* `int2vector` to `integer[]`

Otherwise, all is the same as for 8.2. Do note, however, that the `throws_ok()`
and `lives_ok()` functions do not work under 8.0. Don't even use them there.

To Do
-----

* Update the Makefile to process pg_prove and set the proper path to Perl on
  the shebang line. Will likely require a `$(PERL)` environment variable.
* Update the Makefile to check for TAP::Harness and warn if it's not
  installed.

Supported Versions
-----------------

pgTAP has been tested on the following builds of PostgreSQL:

* PostgreSQL 8.3.4 on i386-apple-darwin9.5.0
* PostgreSQL 8.2.10 on i386-apple-darwin9.5.0
* PostgreSQL 8.1.14 on i386-apple-darwin9.5.0
* PostgreSQL 8.0.19 on i686-apple-darwin9.5.0

If you know of others, please submit them! Use
`psql -d template1 -c 'SELECT VERSION()'` to get the formal build version and OS.

Author
------
[David E. Wheeler](http://justatheory.com/)

Credits
-------

* Michael Schwern and chromatic for Test::More.
* Adrian Howard for Test::Exception.

Copyright and License
---------------------

Copyright (c) 2008 Kineticode, Inc. Some rights reserved.

Permission to use, copy, modify, and distribute this software and its
documentation for any purpose, without fee, and without a written agreement is
hereby granted, provided that the above copyright notice and this paragraph
and the following two paragraphs appear in all copies.

IN NO EVENT SHALL KINETICODE BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT,
SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS, ARISING
OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF KINETICODE HAS
BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

KINETICODE SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND
KINETICODE HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE, SUPPORT, UPDATES,
ENHANCEMENTS, OR MODIFICATIONS.