Commits on Sep 26, 2012
  1. Bump version to 3.40.0

Commits on Sep 25, 2012
  1. Add a cubrid adapter for accessing CUBRID databases via the cubrid gem

    Due to issues with CUBRID itself and the cubrid ruby gem, I haven't
    bothered to add spec guards for all of the integration spec failures.
    If the situation improves I will probably do so.
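    A minimal connection sketch, assuming the cubrid gem is installed
    (host, credentials, and database name below are placeholders):

      require 'sequel'

      # Connect using the new cubrid adapter (backed by the cubrid gem).
      DB = Sequel.connect('cubrid://user:password@localhost/testdb')

      DB.create_table?(:items) do
        primary_key :id
        String :name
      end

      DB[:items].insert(:name=>'widget')
      puts DB[:items].count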
  2. Support Database#schema, #tables, #views, and #indexes on cubrid

    The jdbc/cubrid adapter currently uses the JDBC implementation,
    but this should work on all adapters that access CUBRID.
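    For example, with DB connected to a CUBRID database (table and index
    names below are illustrative):

      DB.tables            # => [:items, ...]
      DB.views             # => [:active_items, ...]
      DB.indexes(:items)   # => {:items_name_index=>{:columns=>[:name], :unique=>false}}
      DB.schema(:items)    # => [[:id, {:type=>:integer, :primary_key=>true, ...}], ...]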
  3. Recognize string database type as string ruby type

    CUBRID uses the STRING type.
  4. Specifically set the primary key in some tests

    These tests are not testing for whether Sequel can correctly
    parse primary keys from the database, so they shouldn't fail
    just because Sequel doesn't parse the primary keys.
  5. Don't use identifier input/output methods on CUBRID

    CUBRID uses case insensitive identifiers, so it doesn't make sense
    to have an input/output method.
    While here, do some reorganization and add some RDoc.
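    Sketch of the user-visible effect on a cubrid connection (the values
    shown for other databases are assumptions about typical defaults):

      # Databases that fold unquoted identifiers (e.g. Oracle) normally get
      # an input/output method such as :upcase/:downcase; on CUBRID both
      # are now left disabled:
      DB.identifier_input_method    # => nil
      DB.identifier_output_method   # => nil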
  6. Add a jdbc/cubrid adapter for accessing CUBRID databases via JDBC on …

    Note that this doesn't currently have integration spec guards
    when running the integration tests on CUBRID.  This is due to
    the excessive number of bugs in CUBRID itself.  Once CUBRID
    fixes these bugs, I'll consider adding the spec guards.
  7. Refactor jdbc adapter, remove requires_return_generated_keys?

    Only the jdbc/mysql adapter used this feature, so just move
    the generated keys handling into the jdbc/mysql adapter.
    Break out some code into smaller methods for easier overriding.
    Make sure to pass the prepared statement object to last_insert_id
    when inserting.
    In jdbc/mysql, use return generated keys for prepared statement
    inserts as well as regular inserts.
  8. Work around possible spec issue

    On some databases, a.b can refer to c.a.b, which breaks this query.
    I think that behavior is stupid, but as this spec is testing
    EXISTS and not the identifier lookup rules, it's best to just
    rename the table and avoid the issue.
  9. Return non-String database defaults as ruby defaults

    Just in case the database default is a non-String value, we
    should return the value as-is instead of breaking later when
    using a regexp for parsing.
Commits on Sep 24, 2012
  1. Return OCI8::CLOB values as ruby Strings in the Oracle adapter

    This means they are treated similarly to blobs, which are returned
    as instances of Sequel::SQL::Blob (a String subclass).
  2. Use clob for String :text=>true types on Oracle, DB2, HSQLDB, and Derby

    Previously, attempting to use String :text=>true in a migration
    failed on these databases, since it uses the text type by
    default, and these databases don't support that type.
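    For example, a generic-type column like the following now maps to clob
    on those databases (table and column names are illustrative):

      DB.create_table(:articles) do
        primary_key :id
        String :title
        String :body, :text=>true   # text on most databases, clob on Oracle/DB2/HSQLDB/Derby
      end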
  3. Allow marshalling of Sequel::Postgres::HStore (Fixes #556)

    The underlying hash used by the delegate class must use a
    default proc for the symbol=>string conversion, so add
    _dump/_load methods that store the underlying data as
    an array instead of a hash.
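    A sketch of the round-trip this enables, assuming the pg_hstore
    extension is loaded on a PostgreSQL connection:

      Sequel.extension :pg_hstore

      h = Sequel.hstore('a'=>'1', 'b'=>nil)
      restored = Marshal.load(Marshal.dump(h))
      restored['a']   # => "1"
      restored[:a]    # => "1"  (symbol keys delegate to the string keys)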
Commits on Sep 22, 2012
  1. Very minor documentation typo (authored by trevor)
Commits on Sep 20, 2012
  1. Cleanup a schema spec

  2. Only NOT NULL for unique constraint columns on DB2, not foreign key c…

    Foreign key columns are allowed to be NULL, but unique columns
    are not.  The previous code was plain wrong, but wasn't caught
    before as the foreign key tests and the unique tests were combined.
  3. Add Dataset#identifier_list_append private method

    This is used to get a comma-separated list of identifiers, and
    DRYs up some common code.  This fixes an issue where CTE columns
    were not being quoted.
    This also removes the Dataset#argument_list and
    Dataset#argument_list_append private methods.  These methods need
    to be removed as they don't quote their arguments.
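    User-visible effect, sketched with PostgreSQL-style quoting (table and
    column names are illustrative):

      ds = DB[:t].with(:t, DB[:orders].select(:user_id, :amount), :args=>[:user_id, :amount])
      ds.sql
      # => WITH "t"("user_id", "amount") AS
      #    (SELECT "user_id", "amount" FROM "orders") SELECT * FROM "t"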
Commits on Sep 17, 2012
  1. Fix initializing Database objects on PostgreSQL if there are existing…

    This got broken when the conversion procs were moved from the native
    adapter to the shared adapter.  To fix it, just make sure that the
    conversion procs already exist, as they are used themselves when
    resetting the conversion procs if there are named types.
    Basically, if there are existing named types
Commits on Sep 13, 2012
  1. Quote channel identifier names when using LISTEN/NOTIFY on PostgreSQL

    Before, the identifier names were unquoted, which meant that the
    LISTEN/NOTIFY query could break depending on the identifier used.
    Quoting them fixes this issue, but note that this makes the
    identifier case sensitive, and as such can break backwards
    compatibility.  If any of your channels are currently specified
    with uppercase characters when the actual channel is lowercase,
    or you are doing any manual quoting of the channel identifier,
    you need to update your code for these changes.
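    For example (channel name and payload are illustrative; the channel
    name is now case sensitive):

      DB.notify(:my_channel, :payload=>'hello')

      DB.listen(:my_channel, :timeout=>30) do |channel, pid, payload|
        puts "#{channel} (#{pid}): #{payload}"
      end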
  2. Handle nil values when formatting bound variable arguments in the pg_row extension (Fixes #548)

    Basically, pg_row piggy-backs on pg_array bound variable
    formatting, but array types use NULL to represent NULL,
    while row types use the empty string to represent NULL.
    While here, add tests for nil values in bound variables for
    arrays and hstore, even though the behavior there was already correct.
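    A rough sketch of the case this fixes, assuming the pg_row extension's
    Sequel.pg_row helper and a composite-typed column (names and types are
    illustrative):

      ds = DB[:items].where(:address=>:$address)
      # A nil member of the row value is now formatted correctly:
      ds.call(:select, :address=>Sequel.pg_row(['123 Main St', nil, '12345']))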
Commits on Sep 12, 2012
  1. Handle nil values when parsing composite types in the pg_row extension (Fixes #548)

    Conversion procs are not expected to handle nil values, so they
    should not be called if the value is nil.
Commits on Sep 10, 2012
  1. Detect an additional disconnect type in the postgres adapter

    This handles a bad file descriptor error, and should handle more
    types of disconnect errors by using a more general regexp.
    While here, also do a better job of handling disconnects, so that
    the block is not called if a disconnect is detected, since in some
    cases the block can raise a separate PGError.
  2. Add :disconnect=>:retry option to Database#transaction, for automatically retrying the transaction on disconnect

    This has been requested numerous times over the years.  I think it's
    a bad idea, but it's possible that it is the best alternative when
    dealing with a particularly bad database configuration.
    Basically, automatically retrying can cause all sorts of problems.
    The most likely way of getting it working is to only retry
    at transaction boundaries.  For a sane database that rolls back
    transactions on disconnect, this should work OK for pure database
    code.  However, if you do any non-database work inside the
    transaction block, you need to be sure that the work is
    idempotent in addition to making sure that it handles rollbacks.
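    Usage sketch; because the block may run more than once, any
    non-database work inside it must be idempotent:

      DB.transaction(:disconnect=>:retry) do
        DB[:items].where(:id=>1).update(:active=>true)   # safe to repeat
      end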
Commits on Sep 7, 2012
  1. Update CHANGELOG

  2. Change guard for class_table_inheritance integration tests

    The plugin requires support for JOIN USING so that ambiguous
    column name errors are not raised, so check for that support
    instead of listing all adapters that don't support it.
  3. Don't order by column alias in the specs

    Access does not support ordering by column aliases, and these tests
    are checking group_and_count behavior, not support for ordering
    by column aliases.
  4. Split alter table commands on ado/access

    Since it uses schema parsing for some commands, it needs to
    split commands in some cases.
  5. Move the splitting of multiple alter table commands into module

    This removes the code from the shared mysql and mssql adapters,
    and moves it to a new module.  The code wasn't exactly the same,
    but after a more detailed look, there is no problem with using
    the mysql code on mssql.
  6. Emulate set_column_null and set_column_default on ado/access

    Use the schema to get other data related to the column, and then
    rename the column to a backup column, and rename the backup column
    to the original name.
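    The user-facing operations this enables on ado/access (table and
    column names are illustrative):

      DB.alter_table(:items) do
        set_column_null :name, false
        set_column_default :quantity, 0
      end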