Commits on Oct 8, 2008
  1. Make the sqlite adapter respect the Sequel.datetime_class setting, for timestamp and datetime types
  2. Default to using the simple language if no language is specified for a full text index on PostgreSQL
  3. Don't blindly qualify columns when eager graphing

    Qualifying all entries in the :order option when eager_graphing is
    not correct behavior.  For example, it's legitimate to order by a
    function, which shouldn't be qualified.  Also, already qualified
    symbols should not be qualified again.  Change the qualifying to
    only qualify unqualified symbols and SQL::Identifiers, either of
    which may be wrapped in an SQL::OrderedExpression.  Do not qualify
    anything else.
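The qualification rule described above can be sketched in plain Ruby. The Identifier, QualifiedIdentifier, and OrderedExpression classes below are illustrative stand-ins for Sequel's SQL expression classes, not the real implementation:

```ruby
# Stand-ins for Sequel's SQL::Identifier, SQL::QualifiedIdentifier, and
# SQL::OrderedExpression (names follow Sequel, the classes are sketches).
Identifier = Struct.new(:value)
QualifiedIdentifier = Struct.new(:table, :column)
OrderedExpression = Struct.new(:expression, :descending)

# Qualify only bare symbols and identifiers, recursing into ordered
# expressions; leave functions and already-qualified columns untouched.
def qualify_order_entry(entry, table)
  case entry
  when Symbol, Identifier
    QualifiedIdentifier.new(table, entry)
  when OrderedExpression
    OrderedExpression.new(qualify_order_entry(entry.expression, table),
                          entry.descending)
  else
    entry  # e.g. a function call or QualifiedIdentifier: do not qualify
  end
end
```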
Commits on Sep 27, 2008
  1. Modified Sequel::Database#typecast_value to raise Sequel::Error::InvalidValue

    Michal Bugno committed

    The error is raised if the typecast value is invalid. This is needed
    because when typecasting a model column this exception is rescued and
    handled appropriately.
Commits on Sep 26, 2008
  1. Added raise_on_typecast_failure switch

    Michal Bugno committed

    If set to false, it prevents raising an error when trying to typecast
    nil to a column which cannot be NULL.  Defaults to true.
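A minimal sketch of the switch described above. The class and method bodies here are illustrative, not Sequel's code:

```ruby
# Sketch of the raise_on_typecast_failure switch (illustrative class).
class SketchDatabase
  attr_accessor :raise_on_typecast_failure

  def initialize
    @raise_on_typecast_failure = true  # defaults to true, per the commit
  end

  # Typecasting nil for a NOT NULL column either raises or passes nil through.
  def typecast_value(value, allow_null)
    if value.nil? && !allow_null
      raise ArgumentError, "nil not allowed for NOT NULL column" if raise_on_typecast_failure
      return nil
    end
    value
  end
end
```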
Commits on Sep 24, 2008
  1. Make eager_graph respect associations' :order options (use :order_eager_graph=>false to disable)

    This changes eager_graph so that it will add the columns specified in
    each graphed association's :order option to the list of ordered
    columns.  It qualifies each column with the table alias used for that
    association.
    This shouldn't cause problems unless:
    1) You have an already qualified column in the :order option.
    2) You have a column in the :order option that wasn't in the primary
       table (maybe because you were using the :eager_graph association
       option to include the table whose column you were using).
    While unlikely, those problems can happen, and you can turn off the
    inclusion of the association's :order option by setting the
    association's :order_eager_graph option to false.
    This commit also fixes a bug in eager_graph when you were loading
    an association with the same name as the base table in the FROM clause.
Commits on Sep 19, 2008
  1. Allow string keys to be used when using Dataset#multi_insert

    Add Dataset#identifier_list private method taking an array and
    returning a string containing comma separated identifiers.
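The helper described above can be sketched in a few lines. quote_identifier here is a double-quoting stand-in for the dataset's real quoting rules:

```ruby
# Stand-in for the dataset's identifier quoting.
def quote_identifier(name)
  "\"#{name}\""
end

# Sketch of the private Dataset#identifier_list helper: take an array of
# symbols or strings and return comma separated quoted identifiers.
def identifier_list(columns)
  columns.map { |c| quote_identifier(c) }.join(', ')
end
```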
Commits on Sep 17, 2008
  1. Fix a few corner cases in eager_graph

    This commit fixes two separate problems with eager graph. The first
    is that cascading *_to_many associations could cause an exception to
    be raised, because eager_graph_make_associations_unique wasn't
    recursing properly.
    The second is that *_to_many associations that were cascaded after
    a many_to_one association weren't recursed into to eliminate
    duplicates caused by cartesian products.
    Thanks a lot to jarredh on IRC for bringing the second bug to my
    attention, which led me to fix the first bug (even before I fixed
    the second).
Commits on Sep 15, 2008
  1. Use string literals in AS clauses on SQLite

    This commit adds a private Dataset method as_sql, and converts the
    various parts of Dataset that use the AS clause to use this method.
    By default, as_sql uses an identifier for the alias, but the SQLite
    adapter overrides it to use a string literal.
    This commit also changes the remaining case where the AS keyword was
    left out, when aliasing tables with a hash.  Now, the AS keyword is
    used in that case.
    This caused one part of a spec to be removed, the case of
    @dataset.from(:a => :c[:d]).sql => "SELECT * FROM a c(d)". I'm not
    sure that that is valid SQL anyway.
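The as_sql approach can be sketched as a default implementation plus a SQLite-style override. Class names and quoting below are illustrative, not Sequel's code:

```ruby
# Default behavior: alias with a quoted identifier after AS.
class AliasingDataset
  def quote_identifier(name)
    "\"#{name}\""
  end

  def literal_string(s)
    "'#{s}'"
  end

  def as_sql(expression, aliaz)
    "#{expression} AS #{quote_identifier(aliaz)}"
  end
end

# SQLite-style override: alias with a string literal instead.
class SQLiteAliasingDataset < AliasingDataset
  def as_sql(expression, aliaz)
    "#{expression} AS #{literal_string(aliaz)}"
  end
end
```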
Commits on Sep 9, 2008
  1. Refactor schema parsing, include primary key on PostgreSQL and MySQL

    This commit refactors the schema parsing.  It eliminates the shared
    INFORMATION_SCHEMA based parsing that worked OK on PostgreSQL and
    very slowly on MySQL.  The Database#schema public method has been
    kept, but the other shared private methods were removed, with the
    exception of schema_column_type for mapping database types to ruby
    types.  MySQL, PostgreSQL, and SQLite schema parsing all include
    the :primary_key entry that shows whether the column is (part of)
    the primary key.
    PostgreSQL now uses the pg_* system catalogs directly, instead of
    the information_schema.  MySQL uses the DESCRIBE syntax to get a
    table description.  SQLite was cleaned up slightly, but still uses
    the table_info pragma.
    The shared Database#schema method now falls back to getting the
    schema separately for each table if it can't get the schema for
    all tables at once, assuming it knows how to get the schema for
    a single table and it knows how to get a list of tables.
    PostgreSQL now supports specifying a qualified table name (including
    schema) in the call to Database#schema.  Before, the schema had to
    be specified explicitly using the :schema option, which meant it
    didn't work correctly with models.  It still defaults to the public
    schema if no schema is specified (either explicitly or implicitly).
    This means that DB.schema(:s__t) now gives column information for
    table t in schema s.
    The :numeric_precision and :max_chars entries are no longer included
    in the hashes returned by schema.  You should be able to get the same
    or similar information using the :db_type entry, which now includes
    specified lengths (e.g. varchar(255)).
    Since primary keys can now be parsed from the schema, use the schema
    information to set primary keys for schema models.
    In addition, this commit makes schema changing methods such as
    drop_table and alter_table remove the cached schema entry.  It also
    has a small fix to drop_view to allow dropping multiple views at
    once, so it works just like drop_table.  Additionally, it quotes
    the view names.
    The schema hash keys are now always supposed to be symbols.
    This commit also changes some specs to remove the use of set_schema,
    and it adds integration tests to test schema parsing of primary keys,
    NULL/NOT NULL, and defaults.
    This commit makes MySQL Database#tables work on JDBC as well as using
    the native adapter, by using the SHOW TABLES syntax.
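The per-table fallback described in this commit can be sketched as follows. FakeDatabase and its method names are illustrative, not Sequel's API:

```ruby
# Sketch of the Database#schema fallback: ask for the schema of all tables
# at once and, if the adapter can't do that, build it from the table list
# one table at a time.
class FakeDatabase
  def initialize(schemas)
    @schemas = schemas  # table name => column info
  end

  # This adapter can't fetch the schema for all tables in one query.
  def schema_for_all_tables
    raise NotImplementedError
  end

  def tables
    @schemas.keys
  end

  def schema_for_table(table)
    @schemas[table]
  end

  # The shared fallback: all-at-once if possible, else per table.
  def schema
    schema_for_all_tables
  rescue NotImplementedError
    tables.each_with_object({}) { |t, h| h[t] = schema_for_table(t) }
  end
end
```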
Commits on Aug 28, 2008
  1. Add Dataset #set_defaults and #set_overrides, used for scoping the values used in insert/update statements

    Sequel has long been known for its chainable filters.  Before this
    commit, it was not possible to chain the values used in insert or
    update statements.  This commit adds two methods, with slightly
    different features, to accomplish this.
    set_defaults is used to set the default values used in insert or
    update statements, which can be overridden by the values passed to
    insert or update:
      ds = DB[:t].set_defaults(:x=>1)
      ds.insert          # => INSERT INTO t (x) VALUES (1)
      ds.insert(:x=>2)   # => INSERT INTO t (x) VALUES (2)
      ds.insert(:y=>2)   # => INSERT INTO t (x, y) VALUES (1, 2)
    set_overrides is used to set default values that override the values
    given to insert or update statements:
      DB[:t].set_overrides(:x=>1).insert(:x=>2)
      # => INSERT INTO t (x) VALUES (1)
    In addition, chained calls to set_defaults and set_overrides operate
    slightly differently:
      DB[:t].set_defaults(:x=>1).set_defaults(:x=>2).insert
      # => INSERT INTO t (x) VALUES (2)
      DB[:t].set_overrides(:x=>1).set_overrides(:x=>2).insert
      # => INSERT INTO t (x) VALUES (1)
    As shown above, with set_defaults, the last call takes precedence,
    whereas with set_overrides, the first call takes precedence.
    Note that set_defaults and set_overrides are only used when insert or
    update is called with a hash.
    Dataset#insert had to go through some significant refactoring for this
    to work.  All the specs still pass, so hopefully nothing broke because
    of it.
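The merge semantics described above can be captured in a pure-Ruby sketch. ScopedDataset is illustrative, not Sequel's implementation: defaults lose to the hash passed to insert/update, overrides win over it, chained set_defaults is last-call-wins, and chained set_overrides is first-call-wins:

```ruby
# Pure-Ruby sketch of the set_defaults/set_overrides merge semantics.
class ScopedDataset
  def initialize(defaults = {}, overrides = {})
    @defaults, @overrides = defaults, overrides
  end

  def set_defaults(hash)
    ScopedDataset.new(@defaults.merge(hash), @overrides)  # last call wins
  end

  def set_overrides(hash)
    ScopedDataset.new(@defaults, hash.merge(@overrides))  # first call wins
  end

  # The values an insert or update would actually use for a given hash.
  def scoped_values(hash = {})
    @defaults.merge(hash).merge(@overrides)
  end
end
```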
  2. Allow Models to use the RETURNING clause when inserting records on PostgreSQL

    This commit makes saving a new model object check for an insert_select
    method on the dataset.  If it exists, it calls that method and uses the
    result as the values, instead of inserting and then reloading the values
    from the database.  This should be faster as well as less error prone.
    It changes a recent commit to the PostgreSQL adapter so that
    insert_sql does not use the RETURNING clause by default. You need to
    call the insert_returning_sql method if you want the RETURNING
    clause.  That is what Dataset#insert now does on PostgreSQL servers
    8.2 and higher.  This caused some issues with prepared statements,
    but this commit takes care of that as well.
    This commit also contains a small speedup to Dataset#single_value,
    as well as a missing spec for Database#raise_error.
Commits on Aug 27, 2008
  1. Use INSERT ... RETURNING ... with PostgreSQL 8.2 and higher

    This makes the PostgreSQL adapter use INSERT ... RETURNING ... if it
    is connected to an 8.2.0 or later server.  This should provide a
    performance boost, as it doesn't require extra queries being issued
    to determine the value of the inserted primary key.
    This also makes the Database object clear the cache for @primary_keys
    and @primary_key_sequences when drop_table is called.  It adds a
    Database#primary_key public method for determining the primary key
    for a given table.  It also adds a Dataset#server_version private
    method to make things more testable.
  2. Make insert_sql, delete_sql, and update_sql respect the :sql option

    Before this commit, all of the following would raise errors:
      DB["INSERT INTO blah (id) VALUES (1)"].insert_sql
      DB["DELETE FROM blah"].delete_sql
      DB["UPDATE blah SET num = 1"].update_sql
    Now, these just return the SQL given.  This implies:
      ds = DB["..."]
      ds.update_sql.should == ds.delete_sql
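The behavior described above can be sketched as an early return of the stored SQL. RawDataset and its normal-path bodies are illustrative, not Sequel's implementation:

```ruby
# Sketch of respecting the :sql option: when a dataset was built from a raw
# SQL string, the *_sql methods return that string unchanged.
class RawDataset
  def initialize(opts = {})
    @opts = opts
  end

  def insert_sql(values = {})
    @opts[:sql] || "INSERT INTO #{@opts[:from]} ..."  # placeholder normal path
  end

  def delete_sql
    @opts[:sql] || "DELETE FROM #{@opts[:from]}"
  end

  def update_sql(values = {})
    @opts[:sql] || "UPDATE #{@opts[:from]} SET ..."
  end
end
```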
  3. Default to converting 2 digit years

    Before, Sequel would treat 06/07/08 as being in the year 0008 instead
    of 2008, at least when using the Date and DateTime classes.  This
    commit changes that so that 2 digit years are treated as year + 1900
    if year >= 69 or year + 2000 if year < 69.  To get back the old
    behavior:
      Sequel.convert_two_digit_years = false
    If you are using ruby 1.9, you should watch out, as the date parsing
    differs from ruby 1.8:
      Date.parse('01/02/08', true).to_s # ruby 1.8 => "2008-01-02"
      Date.parse('01/02/08', true).to_s # ruby 1.9 => "2001-02-08"
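The 2-digit year rule stated above can be written out directly. The function name is illustrative; Sequel applies this inside its date conversion when the feature is enabled:

```ruby
# Sketch of the 2-digit year rule: 69-99 become 19xx, 0-68 become 20xx.
def convert_two_digit_year(year)
  return year if year >= 100  # only 2-digit years are adjusted
  year >= 69 ? year + 1900 : year + 2000
end
```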
Commits on Aug 22, 2008
  1. Added support for composite primary key, composite foreign key and unique constraints to Schema::Generator and Schema::AlterTableGenerator

    jarredholman committed
Commits on Aug 19, 2008
  1. Disallow abuse of SQL function syntax for types (use :type=>:varchar, :size=>255 instead of :type=>:varchar[255])

    This commit also refactors type_literal to return the entire literal
    value for the type, instead of just the base type (so it includes
    any length and/or UNSIGNED specifiers).
    In addition, this fixes an issue with renaming or changing the type
    of a varchar column on MySQL, if you wanted the size to be different
    than 255.
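The refactored type_literal behavior can be sketched as follows. The hash keys are illustrative stand-ins for the column spec; the point is that the full literal, such as "varchar(255)", comes back as one string:

```ruby
# Sketch of type_literal returning the entire literal for a type, including
# any length and UNSIGNED specifier.
def type_literal(column)
  literal = column[:type].to_s
  literal += "(#{column[:size]})" if column[:size]
  literal += " UNSIGNED" if column[:unsigned]
  literal
end
```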
Commits on Aug 13, 2008
  1. Allow validation of multiple attributes at once, with built in support for uniqueness checking of multiple columns

    This commit allows you to validate multiple attributes at once:
      validates_each([:column1, :column2]) do |obj, attributes, values|
        # attributes = [:column1, :column2]
        # values = [obj.send(:column1), obj.send(:column2)]
      end
    Support was added to validates_uniqueness_of to work on multiple
    values at once:
      validates_uniqueness_of([:column1, :column2])
    This is quite different from the following code:
      validates_uniqueness_of(:column1, :column2)
    which makes sure that each value is unique to its column, instead of
    the combination of values being unique.
    This will give a validation error message if an entry already exists
    in the database that has the same value for column1 and column2.  It
    works for any number of columns.
    validates_uniqueness_of now supports an :allow_nil option that will
    skip checking if the values of all columns are nil.  Previously, it
    automatically skipped the validation if the value was blank.
    There was also a slight memory reduction by reusing the default proc
    if :if is not specified.
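The combination-uniqueness check described above can be sketched in pure Ruby. The dataset here is just an array of row hashes, not a Sequel dataset, and the :allow_nil behavior is folded in as the all-nil early return:

```ruby
# Sketch of checking uniqueness over a combination of columns, as
# validates_uniqueness_of([:column1, :column2]) does.
def combination_unique?(rows, record, columns)
  values = columns.map { |c| record[c] }
  return true if values.all?(&:nil?)  # the :allow_nil behavior
  rows.none? { |row| columns.map { |c| row[c] } == values }
end
```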
Commits on Aug 12, 2008
  1. In PostgreSQL adapter, fix inserting a row with a primary key value inside a transaction

    This bug took quite a while to find and fix.  It was compounded by
    the fact that there were multiple bugs in the underlying code:
    1) Errors raised by PostgreSQL weren't getting reraised inside
       transactions.
    2) Determining the sequence names was not done correctly.
    This caused weird errors, such as the code raising an error stating
    there was already an error on the transaction, without showing
    an error or even that a query was issued.  To fix this and similar
    issues, have the connection log the SQL it issues to find primary
    keys, sequences, and sequence values, just like the Database object
    logs the SQL.
    To make sure that inserting a row with a primary key value works
    inside transactions, figure out the primary key first and see if
    it is contained in the values hash.  If not, figure out what the
    sequence is for the table, and get the last sequence value used.
    This is done because trying to get the last sequence value first
    if the sequence wasn't yet used on the connection causes PostgreSQL
    to abort the transaction.  It would return an invalid result
    instead of aborting the transaction if the connection had previously
    been used to insert a row into the same table, without an easy way
    to detect things.
    Fix the SQL used for finding primary keys and sequences so that
    unnecessary columns aren't returned.
    Keep track of sequences at the Database level instead of the
    connection level.
    Make insert_result, primary_key_for_table, and
    primary_key_sequence_for_table private methods.
    The :table option when inserting is now unquoted (generally a symbol),
    instead of the quoted string used previously.
Commits on Aug 4, 2008
  1. Add support for read-only slave/writable master databases and database sharding

    The support is described in the sharding.rdoc file included with this
    commit. This commit makes significant changes to every adapter in
    order to support this new functionality.  I only have the ability to
    test PostgreSQL, MySQL, and SQLite (both via the native drivers and
    via JDBC), so it's possible I could have broken something on other
    adapters.  If you use another adapter, please test this and see if it
    breaks anything.  I try to be fairly careful whenever I change
    something I can't test, but it's always possible I made an error.
    This commit makes the following internal changes:
    * The Database and Dataset execute and execute_dui methods now take
      an options hash.  The prepared statement support was integrated
      into this hash, resulting in a simpler implementation.
    * The connection pool internals were changed significantly to allow
      connections to different servers.  The previous methods all still
      work the same way, but now take an optional server argument
      specifying which server to use.
    * Many low-level methods (transaction, test_connection, synchronize,
      tables) take an optional server argument to specify the server to
      use.
    * Some adapter database and dataset methods were made private.
    * Adapter Dataset #fetch_rows methods that used Database#synchronize
      explicitly were modified to use Dataset#execute with a block.
      Adapter Database #execute methods were modified for these adapters
      to yield inside of #synchronize.
    * Database#connect now requires a server argument.  The included
      adapters use this with the new private Database#server_opts method
      that allows overriding the default opts with the server-specific
      opts.
    * The JDBC and MySQL adapters were significantly refactored.
    * The PostgreSQL adapter #execute_insert now takes a hash of options
      instead of table and values arguments.
    * Adapters with specific support for named prepared statements now
      consider the use of a symbol as the first argument to execute
      to indicate the call of a prepared statement.  The
      execute_prepared_statement method in these adapters is now private.
    * Adapter execute_select statements were removed in place of execute,
      with the original use of execute changed to execute_dui.  This
      follows the convention of using execute for SELECT queries, and
      execute_dui for DELETE/UPDATE/INSERT queries.
    * Removes adapter_skeleton adapter.  The existing adapters provide
      better examples of how things should be done compared to this
      example file.
    * No longer defines model methods for non-public dataset methods
      specified in plugins.
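The Database#server_opts idea mentioned in the list above can be sketched as a simple merge. The :servers key layout here is illustrative:

```ruby
# Sketch of server_opts: the default connection opts merged with the opts
# for a specific server, with the :servers bookkeeping key dropped.
def server_opts(opts, server)
  specific = (opts[:servers] || {})[server] || {}
  opts.reject { |k, _| k == :servers }.merge(specific)
end
```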
Commits on Jul 31, 2008
  1. Huge changes, mostly to add prepared statement/bound variable support

    This is a huge commit.  I generally prefer to commit in smaller chunks,
    but this is a major feature that will have a large effect on the
    future of Sequel, and I didn't want to commit before I knew that
    the code was flexible enough to work on multiple database types, and
    that it didn't break existing code.
    This commit adds support for prepared statements and bound variables.
    Included is a prepared_statement.rdoc file, review that to get an
    idea of usage.  This has been tested on PostgreSQL, MySQL, and
    SQLite, both with the native drivers and with the JDBC drivers.  For
    other databases, it emulates support using interpolation.
    Along with the prepared statement support comes complete, but not
    necessarily good, documentation for the PostgreSQL, MySQL, SQLite,
    and JDBC adapters.
    There were numerous minor changes made as well:
    * MSSQL should be better supported on JDBC, though I haven't tested it.
    * Statement execution on JDBC and SQLite was refactored to reduce
      code duplication.
    * JDBC support for inserting records was refactored to reduce code
      duplication.
    * Dataset got private execute and execute_dui methods, that send
      the SQL to the database.  The adapters that had special database
      #execute methods had similar changes to their datasets.
    * Mysql::Result#convert_type is now a private method.
    * Mysql::Result#each_array was removed, probably a leftover from the
      old arrays with keys code.
    * All databases now have a @transactions instance variable set on
      initialization, saving code inside #transaction.
    * Native support for prepared statements when using PostgreSQL can be
      determined by seeing if SEQUEL_POSTGRES_USES_PG is defined and true.
    * Postgres::Adapter#connected was removed.
    * #serial_primary_key_options was removed from the MySQL and SQLite
      adapters, since it was the same as the default.
    * Postgres::Database#locks was refactored.
    * Use block_given? and yield instead of block[] in some places.
    * Database#log_info takes an additional args argument, used for
      logging bound variables.
    * The InvalidExpression, InvalidFilter, InvalidJoinType, and
      WorkerStop exceptions were removed.
    * Using Model[true] and Model[false] no longer results in an error.
      This was probably more helpful with the ParseTree expression
      syntax (which used == for equality), but shouldn't be helpful now.
    * Using a full outer join on MySQL raises a generic Error instead of
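The interpolation-based emulation mentioned above (for databases without native prepared statement support) can be sketched as placeholder substitution. The :name placeholder syntax and literalization rules below are simplified stand-ins:

```ruby
# Sketch of emulating bound variables via interpolation: replace :name
# placeholders with escaped literal values from the binds hash.
def interpolate_sql(sql, binds)
  sql.gsub(/:(\w+)/) do
    value = binds.fetch(Regexp.last_match(1).to_sym)
    value.is_a?(String) ? "'#{value.gsub("'", "''")}'" : value.to_s
  end
end
```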
Commits on Jul 29, 2008
  1. Fixed anonymous columns

    Nusco+Beppe committed
  2. Tests run

    Nusco+Beppe committed
  3. Tests run

    Nusco+Beppe committed
Commits on Jul 28, 2008
  1. Fixed problem with LIMIT in ADO adapter

    Nusco+Beppe committed