Commits on Jun 4, 2009
  1. Bump version to 3.1.0

    committed Jun 4, 2009
Commits on Jun 3, 2009
  1. Fix for polymorphic eager loading spec after reciprocal change

    This is probably going to be an issue for any polymorphic
    association.  Since the associated class doesn't exist, the
    reciprocal should be set manually.
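
    As a hedged sketch (the Attachment model, its columns, and the
    association names are assumptions, not taken from this commit),
    setting the reciprocal by hand looks like:

      # The associated class can't be introspected, so :reciprocal is
      # given explicitly instead of being inferred.
      Attachment.many_to_one :attachable, :reciprocal=>:attachments,
        :dataset=>proc{Object.const_get(attachable_type).filter(:id=>attachable_id)}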
    committed Jun 3, 2009
  2. Make the tactical_eager_loading plugin not depend on the identity_map plugin
    
    After giving it some thought, the tactical_eager_loading plugin
    doesn't use the identity map at all, other than to not work when
    there isn't an active identity map.  That seems like a pointless
    restriction to me, so I've eliminated it.  The reason that it works
    without the identity_map plugin is that it basically builds a type
    of identity map internally which is used inside the :eager_loader
    block.
    
    The lazy_attributes plugin still depends on both, since it needs
    the identity_map plugin and the tactical_eager_loading plugin in
    order to load the lazy attribute for a bunch of models at once.
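
    A rough usage sketch (the model and filter are assumptions, not from
    the commit), showing the plugin used on its own:

      Album.plugin :tactical_eager_loading
      albums = Album.filter(:year=>2009).all
      # Calling the association on one instance now loads it for every
      # instance retrieved by the same query.
      albums.first.artist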
    committed Jun 3, 2009
Commits on Jun 2, 2009
  1. Make Migrator work correctly with file names like 001_873465873465873465_some_name.rb (fixes #267)

    committed Jun 2, 2009
Commits on May 28, 2009
  1. Support savepoints when using SQLite

    This commit makes the shared SQLite adapter use the savepoint
    support used by the shared PostgreSQL and MySQL adapters.  It does
    away with the BlockTransactions module, as only the SQLite and
    Amalgalite adapters used that and it doesn't handle savepoints.
    Since both the SQLite and Amalgalite adapters use the shared
    SQLite adapter, they both support savepoints.
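
    A minimal sketch of what this enables (the table and column names
    are assumptions):

      DB.transaction do
        DB.transaction(:savepoint=>true) do
          DB[:items].insert(:name=>'a')
          # Rolls back to the savepoint without aborting the outer
          # transaction.
          raise Sequel::Rollback
        end
      end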
    committed May 28, 2009
Commits on May 26, 2009
  1. Add Dataset#qualify_to and #qualify_to_first_source, for qualifying unqualified identifiers in the dataset
    
    Because Sequel represents SQL constructs as objects and not strings,
    it is possible to go through the tree of objects that represent a
    query, find all of the unqualified identifiers, and qualify them with
    a table.  This is mostly useful when you have a dataset for a single
    table with SELECT, WHERE, ORDER, GROUP, or HAVING clauses that use
    unqualified columns that you want to join to another table/dataset.
    Qualification is not needed on the main dataset, but if the table
    being joined has columns with the same name as columns currently in
    the dataset, you have to qualify those columns.
    
    The main use case for this would be when you are using models that
    have a filter or order by default.  If you try to join them to
    another table/model where the column names overlap, before you'd
    need to redo the filters or order, or make sure that the model uses
    qualified columns by default.  Now, you can just call
    qualify_to_first_source before you join and Sequel will take care
    of things for you.
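
    As a hedged sketch (the Album model and its default filter are
    assumptions, not from this commit):

      # Album's default dataset filters on an unqualified :name column.
      ds = Album.filter(:name.like('A%'))
      # Qualify :name to albums before joining a table that also has a
      # name column, so the query isn't ambiguous.
      ds.qualify_to_first_source.join(:artists, :id=>:artist_id)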
    
    While here, allow sql_subscript to be used on all types of
    expressions, not just symbols.  Since functions, qualified
    identifiers, and basically any other type of expression can be an
    array, it makes sense to give the objects that represent those
    constructs the sql_subscript method.  Also, I did some testing on
    PostgreSQL and found that the subscripts themselves don't have to
    be integers; they can be expressions as well, so allow any
    expression to be used as a subscript.
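
    A small sketch of the broadened behavior (the function and column
    names are made up):

      # Subscript a function result instead of a plain column, using a
      # non-integer subscript expression.
      :regexp_matches.sql_function(:col, 'a(b)').sql_subscript(:i)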
    
    Also while here, add an attr_reader to SQLArray to read the array it
    wraps.
    
    In order to work correctly with the fact that you can specify column
    aliases as hashes (a syntax I've never liked), modify the code to
    transform the hashes to AliasedExpressions before adding them to
    the opts, instead of handling them specially at SQL string creation.
    committed May 26, 2009
  2. Add RDoc for schema plugin

    committed May 26, 2009
  3. Add reflection.rdoc file which explains and gives examples of many of Sequel's reflection methods
    
    This is by request, but I forget who the requester was.
    committed May 26, 2009
Commits on May 25, 2009
  1. Add many_through_many plugin, allowing you to construct an association to multiple objects through multiple join tables
    
    From the RDoc:
    
     For example, assume the following associations:
    
        Artist.many_to_many :albums
        Album.many_to_many :tags
    
     The many_through_many plugin would allow this:
    
        Artist.many_through_many :tags, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_tags, :album_id, :tag_id]]
    
     Which will give you the tags for all of the artist's albums.
    
     Here are some more examples:
    
       # Same as Artist.many_to_many :albums
       Artist.many_through_many :albums, [[:albums_artists, :artist_id, :album_id]]
    
       # All artists that are associated to any album that this artist is associated to
       Artist.many_through_many :artists, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], [:albums_artists, :album_id, :artist_id]]
    
       # All albums by artists that are associated to any album that this artist is associated to
       Artist.many_through_many :artist_albums, [[:albums_artists, :artist_id, :album_id], [:albums, :id, :id], \
        [:albums_artists, :album_id, :artist_id], [:artists, :id, :id], [:albums_artists, :artist_id, :album_id]]
    committed May 25, 2009
  2. Add the :cartesian_product_number option to associations, for specifying if they can cause a cartesian product
    
    Previously this behavior was hard coded into the associations, and
    the user couldn't override it.  Now it should be a number between
    0 and 2:
    
    * 0 - This association cannot cause a cartesian product, as only
      1 row exists in the join table for each row in the current table.
    
    * 1 - This association can cause a cartesian product if used in
      the same query as another association whose cartesian product
      number is also 1, but will not cause a cartesian product if
      used by itself.
    
    * 2 - This association causes a cartesian product even if used by
      itself.
    
    many_to_one associations are 0 by default, and *_to_many
    associations are 1 by default.  You may want to set a many_to_many
    association to 2 if both of the joins it does can cause multiple
    rows for each row in the current table. For example:
    
      Artist.one_to_many :albums
      Album.one_to_many :tracks
      Artist.many_to_many :tracks, :join_table=>:albums, :right_key=>:id,
        :right_primary_key=>:album_id, :cartesian_product_number=>2
    
    While here, fix a description for one of the specs.
    committed May 25, 2009
  3. Make :eager_graph association option work correctly when eagerly loading many_to_many associations
    
    Before, this didn't work correctly because eagerly loading
    many_to_many associations requires a key field in the join table be
    returned, and the :eager_graph association option wiped out the
    selection of that key field.
    
    This commit has an interesting use of add_graph_aliases to map
    a table column into an aliased name in the query and the graphed
    output.
    
    Fixing this issue necessitated a bit of refactoring.  Push the
    necessary code into the association reflection for easy access.
    
    While here, make the :conditions option with a placeholder literal
    string work when eager loading.
    
    Also while here, fix a bug in eager_unique_table_alias by making
    it consider previously joined tables as well as tables in the FROM
    clause.
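
    A hedged example of the option this fixes (the models and the nested
    association are assumptions):

      # When :genres are eagerly loaded, their :parent_genre association
      # is graphed in via a JOIN at the same time.
      Album.many_to_many :genres, :eager_graph=>:parent_genre
      Album.filter(:year=>2009).eager(:genres).all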
    committed May 25, 2009
  4. Fix using :conditions that are a placeholder string in an association (e.g. :conditions=>['a = ?', 42])
    
    Not sure if this ever worked correctly, or, if it did, when it broke.
    There was a spec showing this as an example, but it didn't test
    everything that it should have.
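
    A sketch of the now-working form (the model and condition are made
    up):

      Artist.one_to_many :good_albums, :class=>:Album,
        :conditions=>['copies_sold > ?', 100000]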
    committed May 25, 2009
Commits on May 23, 2009
  1. On MySQL, make Dataset#insert_ignore affect #insert as well as #multi_insert and #import
    
    This refactors Dataset#insert_sql and related methods so that the
    MySQL adapter can override them to add IGNORE if insert_ignore is used.
    
    While here, make insert_default_values_sql on MySQL a private method.
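
    A rough sketch on MySQL (the table and columns are assumptions):

      # Now also affects #insert, producing roughly:
      #   INSERT IGNORE INTO items (id, name) VALUES (1, 'a')
      DB[:items].insert_ignore.insert(:id=>1, :name=>'a')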
    committed May 23, 2009
  2. Make schema_dumper extension ignore errors with indexes unless it is dumping in the database-specific type format
    
    When doing a database independent schema dump, include code to ignore
    errors when creating indexes.  This should allow you to dump the
    schema from a PostgreSQL database with an index on a text column and
    restore it in a MySQL database.
    
    This modifies the bin/sequel tool to use the new feature when copying
    databases, and tweaks the output slightly.
    committed May 23, 2009
  3. Don't dump partial indexes in the MySQL adapter

    Previously, if you had a partial index on a text column in a MySQL
    database and you tried to dump it and restore it on a MySQL database,
    an error would be raised, since Sequel doesn't offer a way to create
    partial indexes.
    
    This treats partial indexes like function indexes on PostgreSQL,
    by not dumping them.  If you are using partial indexes, you are
    definitely in database specific territory and should not expect
    perfect behavior from Sequel's migration dumper/restorer.
    committed May 23, 2009
  4. Add :ignore_index_errors option to Database#create_table and :ignore_errors option to Database#add_index
    
    The schema_dumper extension is going to be using these options to
    make it more likely that the migrations it generates will be
    loadable.  Since indexes generally have no effect on behavior
    (except for unique indexes), the inability to load one shouldn't
    be a reason to cancel the migration.
    
    I'm not happy with this way of doing things, but I can't really
    think of a better idea that will allow dumping of PostgreSQL tables
    with indexes on text columns and attempting to restore them on
    MySQL.
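
    A brief sketch of the new options (the table and columns are
    assumptions):

      DB.create_table(:items, :ignore_index_errors=>true) do
        primary_key :id
        String :name, :text=>true
        index :name   # failure to create this index won't abort the table
      end

      DB.add_index :items, :name, :ignore_errors=>true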
    committed May 23, 2009
Commits on May 22, 2009
  1. Refactor Database#create_table to eliminate some duplication

    Both the Firebird and Oracle adapters overrode Database#create_table
    with some duplication.  Break create_table into more methods so the
    adapters only need to override the parts they need.
    committed May 22, 2009
  2. Make graphing a complex dataset work correctly

    Before, Dataset#graph didn't work correctly if the argument was a
    complex dataset (anything other than SELECT * FROM table).  This
    is because it only passed the first source of the dataset to
    join_table.  You could work around the issue by using from_self,
    but that shouldn't be necessary.
    
    To keep the SQL simple in the common case, check if the passed
    dataset is a simple dataset and use just the table symbol in that
    case.  Otherwise, pass the dataset itself.
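
    A minimal sketch (the tables and keys are assumptions):

      # Graphing a filtered dataset, which previously required from_self:
      DB[:artists].graph(DB[:albums].filter(:year=>2009), :artist_id=>:id)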
    committed May 22, 2009
Commits on May 21, 2009
  1. Fix MySQL command out of sync errors, disconnect from database if they occur
    
    I've found another cause of the command out of sync errors.
    Previously, if you did something like the following:
    
      DB << "SHOW DATABASES"
    
    The next query would give you an error.  That's because the previous
    query returned results and they weren't picked up.  To prevent that
    from happening, I stopped turning off the query_with_result setting,
    so it's back to the default, which is true.
    
    This actually simplifies the logic a bit and makes it so that you
    can use DB#<< and similar methods with queries that would usually
    need to return results.
    
    AFAICT, this error has been present since at least 2007.
    
    If a commands out of sync error does occur, this disconnects the
    connection using DatabaseDisconnectError.  If there are other errors
    that indicate that the connection should be disconnected, please
    let me know.
    committed May 21, 2009
  2. Do a much better job of parsing defaults from the database

    Parsing defaults from the database is an ugly job, but somebody's
    got to do it.  I can't say I'm happy with this commit, but it's a
    big improvement and should fix most of the issues people have.
    I'm sure there are still corner cases that I didn't handle, but
    I'll gladly take patches to improve support.
    
    The default parsing works by looking at both the default string
    value provided by the database as well as the type of value
    that Sequel has already parsed (the symbol value used for
    typecasting, not the class value parsed in the schema dumper).
    
    It runs the string through some regexps and gsubs depending on
    the database type, and then converts values to an appropriate
    class.
    
    For implementation reasons, the DateTime class is always used for
    default values for datetime types even if it isn't the
    Sequel.datetime_class.
    
    In Schema::Generator, recognize default values that can't be
    represented correctly by inspect, and make sure to generate a
    useable string.
    committed May 21, 2009