Commits on Mar 17, 2014
  1. Add Dataset::PlaceholderLiteralizer optimization framework

    Dataset::PlaceholderLiteralizer is a new class that allows one
    to record the changes made to a given dataset with placeholder
    arguments.  The resulting dataset is then literalized, with the
    placeholder arguments recording the current offset in the SQL
    string instead of literalizing themselves into the string.
    
    Using the offsets, the string is split into an array of fragments.
    Using that array, you can then pass arbitrary arguments into the
    loader, and it can build the SQL string just by simple string
    concatenation.
    
    This is about 3 times faster in generating the SQL, and since
    generating the SQL is a good percentage of the time spent for
    simple queries that only return one row (assuming a local database
    connection), this can speed up such queries by about 50%.
    
    There are corner cases that this framework can't handle, but it
    handles the vast majority of cases that Sequel users are likely to
    use.  This includes things such as knowing that strings passed
    to where should be considered literal strings:
    
      def find(arg)
        where(arg)
      end
      find("s = 1")  # WHERE (s = 1)
      find(:s=>1)    # WHERE (s = 1)
    
    as well as cases where the argument is used on the right-hand side
    of a conditions specifier and the operator used changes based
    on the value of the placeholder:
    
      def find_by_id(arg)
        where(:id=>arg)
      end
      find_by_id(1)      # WHERE (id = 1)
      find_by_id([1, 2]) # WHERE (id IN (1, 2))
    
    This framework is similar to but more flexible than prepared
    statements, since the underlying SQL can change from call to call.
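
    A sketch of the intended usage, based on the description above (the
    loader/arg/first method names are assumptions, not necessarily the
    final API):

      loader = Sequel::Dataset::PlaceholderLiteralizer.loader(DB[:items]) do |pl, ds|
        ds.where(:id=>pl.arg)   # record a placeholder instead of a literal value
      end
      loader.first(1)   # SELECT * FROM items WHERE (id = 1) LIMIT 1
      loader.first(2)   # reuses the cached SQL fragments, only 2 is literalized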
Commits on Oct 21, 2013
  1. Use rdoc-ref: instead of link: for links in RDoc

    Using link: is not portable, while rdoc-ref: is portable, according
    to rdoc issue #188.
Commits on Jun 2, 2013
  1. Use a shared frozen options hash for all option hashes by default

    This has two advantages:
    
    1) It makes sure that Sequel code doesn't modify an options hash
       passed in by the user.
    
    2) It should be better for performance as it doesn't create a new
       hash object per call.  On ruby 1.8 this is a significant speedup,
       but in ruby 2.0 it's not a big difference.
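
    A minimal sketch of the pattern (the OPTS constant name and method
    signature are assumptions for illustration):

      OPTS = {}.freeze   # one shared, frozen hash

      # With the frozen hash as the default, no Hash is allocated per call,
      # and any attempt to mutate the default options raises an error.
      def fetch_rows(opts = OPTS)
        opts.frozen?   # => true when called without an options argument
      end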
Commits on Jan 8, 2013
  1. Move the #meta_def support for Database, Dataset, and Model to the meta_def extension
    
    This is a backwards incompatible change in that the #meta_def
    method is no longer available by default on Database, Dataset,
    and Model classes and instances.
    
    Sequel originally depended on the metaid gem that added meta_def
    and a handful of other methods to Object.  Over the years I removed
    usage of most of the other methods, and restricted their definitions
    in Sequel to just Database, Dataset, and Model classes and instances.
    However, the meta_def method was only used in a couple of places
    in Sequel, and it really only saves a few characters in those
    places, so I decided to remove usage of meta_def completely.
    
    As meta_def was no longer used internally, and it was originally
    only used for implementing Sequel, there was no need to load it
    by default and have all Sequel users pay the memory penalty for
    it.
    
    Since some users may need it, I've decided to move it to an
    extension to make it easy to work with old code.
    
    The core and extension specs still use meta_def significantly, so
    the extension is loaded automatically by those specs.  Hopefully
    those specs can be cleaned up in the future.
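
    Assumed usage of the extension (the extension name is taken from the
    commit title; the method defined is illustrative):

      Sequel.extension :meta_def

      DB.meta_def(:application_name){'reports'}   # singleton method on the Database
      DB.application_name                         # => "reports"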
Commits on Jun 8, 2011
  1. Allow treating Datasets as Expressions, e.g. DB[:table1].select(:column1) > DB[:table2].select(:column2)
    
    This includes many of the modules in Sequel::SQL in Sequel::Dataset,
    allowing you to treat a dataset as an expression.  Sequel had
    already implemented Dataset#as; that implementation is being removed,
    and AliasMethods is now used to implement it instead.
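
    A hedged example based on the commit title (the SQL shown is
    approximate):

      DB[:table3].where(DB[:table1].select(:column1) > DB[:table2].select(:column2))
      # SELECT * FROM table3 WHERE
      #   ((SELECT column1 FROM table1) > (SELECT column2 FROM table2))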
Commits on Apr 19, 2010
  1. Do a better job of linking class/method RDocs to the guides

    This links all of the guides except the reflection guide to at least
    one class or method.  There's probably more places that should link
    to the guides, but this is a good start.
Commits on Apr 15, 2010
  1. Move Dataset code into appropriate files, add RDoc sections

    Before, the Sequel::Dataset RDoc page was pretty messy, as it just
    listed the methods in alphabetical order without any sort of
    grouping.
    
    I just recently discovered RDoc sections, which allow you to
    group related methods into different sections on the RDoc page,
    which allows the user to easily focus on only the methods they
    are probably interested in.
    
    This commit adds sections to all of the Dataset code, separating
    the methods the users probably don't care about from the methods
    they probably do care about.  Unfortunately, I haven't yet
    discovered how to set the order of the sections so that the
    most important sections come first, but hopefully that can be
    added later.  Even just this is a big improvement.
    
    To make sure each method was in the correct section, I moved all
    of the methods out of dataset.rb and dataset/convenience.rb and
    placed them in the appropriate file.  I also added dataset/misc.rb
    and dataset/mutation.rb to group the related methods together.
    I discovered some more methods in the wrong section, and moved
    those to the appropriate section.
Commits on Apr 6, 2010
  1. Add Dataset#select_append, which always appends to the existing SELECTed columns
    
    Since before I took over maintenance, the only way to add SELECTed
    columns to a dataset has been to use Dataset#select_more.  However,
    select_more's behavior when no columns are currently selected is
    not always the desired behavior:
    
      ds = DB[:a]
      # SELECT * FROM a
      ds.select_more({:id=>DB[:b].select(:a_id)}.as(:in_b))
      # SELECT id IN (SELECT a_id FROM b) AS in_b FROM a
    
    In this case, you want to select all columns from the FROM table,
    but you also want to include information about whether each row
    in the table has a matching row in another table.  select_more
    doesn't work correctly in this case, because it drops the
    implicitly SELECTed wildcard (*).  Enter select_append:
    
      ds.select_append({:id=>DB[:b].select(:a_id)}.as(:in_b))
      # SELECT *, id IN (SELECT a_id FROM b) AS in_b FROM a
    
    select_append works the same as select_more, unless there is
    nothing currently selected.  In that case, it always selects
    the wildcard first, then the columns provided.
    
    Use select_append in the MSSQL shared adapter to simplify some
    code.
Commits on Dec 14, 2009
  1. Add #each_server to Database and Dataset, for running operations on all shards
    
    Sequel has supported sharding for a long time (since 2.4.0), but it
    hasn't had built-in support for updating all shards at once until
    now.  Database#each_server and Dataset#each_server are designed to
    fill that need.
    
    Database#each_server yields a new Database object for each server,
    and is intended for use when you want to run schema modification
    (DDL) queries, or other custom SQL against all shards.
    
    Dataset#each_server yields copies of the current dataset, each
    of which is tied to a separate server in the connection pool.
    It's intended for when you want to select/insert/update/delete
    from all shards.
    
    To implement this, ConnectionPool#servers and Database#servers
    methods were added.
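
    Hedged examples based on the description above (table names are
    illustrative):

      DB.each_server{|db| db.add_index(:items, :name)}   # DDL on every shard
      DB[:items].each_server{|ds| ds.delete}             # delete rows on every shard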
Commits on Dec 10, 2009
  1. Merge Dataset::FROM_SELF_KEEP_OPTS into Dataset::NON_SQL_OPTIONS

    While used in different places, these constants served pretty much
    the same purpose, and wherever one was used, the values from the
    other should have been taken into account as well.  This breaks
    backwards compatibility slightly, but it's unlikely anyone was
    using them directly.
Commits on Oct 17, 2009
  1. Add generic support for ignoring non-SQL options in simple_select_all?

    Dataset#simple_select_all? is just used to determine whether the
    dataset is a simple select from a table.  Non-SQL options like
    :server should be ignored, since they don't affect the SQL produced.
    
    This adds the Sequel::Dataset::NON_SQL_OPTIONS constant to hold
    the options.  Individual adapters can add their adapter-specific
    non-SQL options to this constant to make sure they work.
    
    Change the MSSQL :disable_insert_output handling to use this
    new feature.
    
    While here, fix Dataset#window on PostgreSQL to work correctly
    with existing named windows.
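
    A hedged sketch of how an adapter could register its option, assuming
    the constant is an Array of option keys:

      Sequel::Dataset::NON_SQL_OPTIONS << :disable_insert_output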
  2. Revert the MSSQL JOIN USING emulation, replace with standard support

    Standard support for emulating JOIN USING is better since multiple
    databases don't support JOIN USING (MSSQL and H2).  Note that
    emulating JOIN USING really sucks, for 2 reasons:
    
    1) It doesn't combine the output columns, so you must qualify any
    column references to the JOIN USING columns.  This makes using this
    support with the class_table_inheritance plugin possible but still
    problematic.
    
    2) The support assumes the last joined table has the column, when it
    could be any table previously joined.  That's a poor assumption, but
    the only way to work around it is to wrap the current dataset in a
    subselect, which could break queries that use qualified identifiers
    with previously joined tables.
Commits on Oct 8, 2009
  1. Add additional join methods to Dataset: (cross|natural|(natural_)?(full|left|right))_join
    
    These are mostly for convenience and consistency.  full_join,
    left_join, and right_join should be equivalent to full_outer_join,
    left_outer_join, and right_outer_join, but shorter method names are
    nicer.
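
    Assumed equivalences (the generated SQL is approximate):

      DB[:a].left_join(:b, :a_id=>:id)
      # SELECT * FROM a LEFT JOIN b ON (b.a_id = a.id)
      DB[:a].natural_join(:b)
      # SELECT * FROM a NATURAL JOIN b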
Commits on Oct 5, 2009
  1. Add emulated support for the lack of multiple column IN/NOT IN support in MSSQL and SQLite
    
    Many SQL databases support the following syntax:
    
      (col1, col2) IN ((1, 2), (3, 4))
    
    Unfortunately, Microsoft SQL Server and SQLite do not support that.
    Support can be emulated by using:
    
      (((col1 = 1) AND (col2 = 2)) OR ((col1 = 3) AND (col2 = 4)))
    
    Which is what this commit does.  The
    Dataset#supports_multiple_column_in? method is added to let Sequel
    know if the database supports the syntax natively.  If not, support
    is emulated using the technique above.
    
    To get this to work easily with SQL::SQLArray objects (used by the
    association composite key support), SQL::SQLArray#to_a was added.
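
    A hedged Sequel-level example; on databases with native support this
    produces the IN form, while on MSSQL and SQLite the OR/AND emulation
    above is generated instead:

      DB[:t].filter([:col1, :col2]=>[[1, 2], [3, 4]])
      # SELECT * FROM t WHERE ((col1, col2) IN ((1, 2), (3, 4)))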
Commits on Sep 14, 2009
  1. Replace Dataset#virtual_row_block_call with Sequel.virtual_row

    This removes the private virtual_row_block_call Dataset instance
    method, replacing with the Sequel.virtual_row module method.
    The API is slightly different, with virtual_row_block_call taking
    the block as a regular argument, and Sequel.virtual_row taking
    it as a block.  This allows easier use of virtual rows
    outside of the select, order, and filter calls.  For example:
    
      net_benefit = Sequel.virtual_row{revenue > cost}
      good_employee = Sequel.virtual_row{num_commendations > 0}
      fire = ~net_benefit & ~good_employee
      demote = ~net_benefit & good_employee
      promote = net_benefit & good_employee
      DB[:employees].filter(fire).update(:employed=>false)
      DB[:employees].filter(demote).update(:rank=>:rank-1)
      DB[:employees].filter(promote).update(:rank=>:rank+1)
    
    There wasn't an easy way to do the above before, without
    creating the Sequel::SQL::VirtualRow instance manually.
Commits on Sep 3, 2009
  1. Always include __FILE__ and __LINE__ when evaling strings

    This makes for better backtraces.
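
    A minimal illustration of the pattern (not the actual Sequel code):

      class Example
        class_eval(<<-END, __FILE__, __LINE__ + 1)
          def answer
            42
          end
        END
      end
      Example.new.answer   # => 42; errors raised inside the evaled string
                           # now report this file and line in the backtrace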
  2. Refactor and improve documentation and specs for Dataset#insert_sql

    The recent refactoring of insert_sql now allows for a much wider
    range of inputs.  This commit simplifies the code by making some
    calls to insert_sql recursive.
    
    Here's the current insert_sql API:
    
    * No arguments - Treat as a single empty hash argument
    * Single argument:
      * Hash - Use keys as columns and values as values
      * Array - Use as values, without specifying columns
      * Dataset - Use a subselect, without specifying columns
      * LiteralString - Use as the values
    * 2 arguments:
      * Array, Array - Use first array as keys, second as values
      * Array, Dataset - Use a subselect, with the array as columns
      * Array, LiteralString - Use LiteralString as the values, with
        the array as the columns
    * Anything else: Treat all given values as an array of values
    
    This adds some additional checking.  It makes sure that if two
    arrays are given, both arrays are the same length.  If given
    an object that responds to values, it will only use those values
    if they are a hash.  This was designed to be used with Model
    objects, without depending on them.
    
    I think the previous implementation may have allowed an Array and
    a Hash argument, ignoring the Array and using the Hash as columns
    and values.  That is no longer supported.
    
    I don't think it is possible to use insert_sql and get values that
    aren't an Array, Dataset, or LiteralString, but just in case someone
    modifies the dataset opts directly, add an appropriate check to
    insert_values_sql.
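
    Examples matching the API described above (SQL output is approximate):

      ds = DB[:items]
      ds.insert_sql                             # INSERT INTO items DEFAULT VALUES
      ds.insert_sql(:a=>1)                      # INSERT INTO items (a) VALUES (1)
      ds.insert_sql([1, 2])                     # INSERT INTO items VALUES (1, 2)
      ds.insert_sql([:a, :b], [1, 2])           # INSERT INTO items (a, b) VALUES (1, 2)
      ds.insert_sql([:a], DB[:old].select(:a))  # INSERT INTO items (a) SELECT a FROM old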
Commits on Sep 2, 2009
  1. Rename and refactor select_clause_order and related methods to use an array of method name symbols
    
    This implementation is faster and simpler.  Instead of building the
    method symbols in every sql call, precompute the array of method
    symbols and just iterate over it.
    
    To simplify the implementation for inserts and updates, create a
    clone of the dataset with the given columns and values, and have
    the clone produce the SQL.  This makes it so you don't have to
    check method arity for every method call.
    
    To DRY up the code, add a clause_sql private instance method to
    Dataset that runs all clause methods for the given type.
    
    While here, in the MSSQL shared adapter, create a manual output!
    mutation method instead of using the included and extended hooks,
    since it is shorter and simpler.
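
    A hedged, self-contained illustration of the technique (not Sequel's
    actual constants or methods):

      class MiniDataset
        SELECT_CLAUSE_METHODS = [:select_columns_sql, :select_from_sql]

        def select_sql
          sql = 'SELECT'
          SELECT_CLAUSE_METHODS.each{|m| send(m, sql)}   # precomputed symbol array
          sql
        end

        def select_columns_sql(sql)
          sql << ' *'
        end

        def select_from_sql(sql)
          sql << ' FROM items'
        end
      end
      MiniDataset.new.select_sql   # => "SELECT * FROM items"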
Commits on Aug 19, 2009
  1. Add Dataset#truncate for truncating tables

    TRUNCATE is like a faster version of DELETE that cuts some corners.
    I'm not sure if it is standard SQL (it's not in SQL-92), but most
    databases support it.  SQLite doesn't support it, but a DELETE
    with no WHERE clause operates like a TRUNCATE on SQLite, so that is
    what is used.
    
    To get this to work, the Dataset#execute_ddl private method was
    added, which operates like the other Dataset#execute* methods,
    except it always returns nil.
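
    A usage sketch (exact SQL varies by adapter):

      DB[:items].truncate
      # most databases: TRUNCATE TABLE items
      # SQLite:         DELETE FROM items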
  1. Add support for converting Time/DateTime to local or UTC time upon storage, retrieval, or typecasting
    
    This commit makes Time and DateTime objects being processed by Sequel
    go through a method that allows them to be converted to or from either
    local or UTC time.
    
    There are three different timezone settings:
    
    * Sequel.database_timezone - The timezone that timestamps use in the
      database.  If the database returns a time without an offset, it
      is assumed to be in this timezone.
    
    * Sequel.typecast_timezone - Similar to database_timezone, but used
      for typecasting data from a source other than the database.  This
      is currently only used by the model typecasting code.
    
    * Sequel.application_timezone - The timezone that the application
      wants to deal with.  All Time/DateTime objects are converted into
      this timezone upon retrieval from the database.
    
    Unlike most things in Sequel, these are only global settings, you
    cannot change them per database.  There are only three valid
    timezone settings:
    
    * nil (the default) - Don't do any timezone conversion.  This is the
      historical behavior.
    
    * :local - Convert to local time/Consider time to be in local time.
    
    * :utc - Convert to UTC/Consider time to be in UTC.
    
    So if you want to store times in the database as UTC, but deal with
    them in local time in the application:
    
      Sequel.application_timezone = :local
      Sequel.database_timezone = :utc
    
    If you want to set all three timezones to the same value:
    
      Sequel.default_timezone = :utc
    
    There are three conversion methods that are called:
    
    * Sequel.database_to_application_timestamp - Called on time objects
      coming out of the database.  If the object coming out of the
      database (usually a string) does not have an offset, assume it is
      already in the database_timezone.  Return a Time/DateTime object
      (depending on Sequel.datetime_class), in the application_timezone.
    
    * Sequel.application_to_database_timestamp - Used when literalizing
      Time/DateTime objects into an SQL string.  Converts the object to
      the database_timezone before literalizing them.
    
    * Sequel.typecast_to_application_timestamp - Called when typecasting
      objects for model datetime columns.  If the object being typecasted
      does not already have an offset, assume it is already in the
      typecast_timezone.  Return a Time/DateTime object (depending on
      Sequel.datetime_class), in the application_timezone.
    
    While preparing these changes, the datetime/time literalization code
    was refactored.  The following settings can now be set in adapter
    dataset classes:
    
    * supports_timestamp_timezones? - Whether to include timezone offsets
      in literal timestamp values (false by default, true for MSSQL,
      Oracle, PostgreSQL, and SQLite).
    
    * supports_timestamp_usecs? - Whether to include fractional seconds
      in literal timestamp values (true by default, false for MySQL).
    
    Since the default settings have changed to use fractional seconds
    in timestamps, it's possible that the untested adapters could be
    broken in regards to timestamps.  If that is the case, please let me
    know so I can turn off fractional seconds in timestamps for that
    adapter.  Also, if you know the database supports timezones in
    timestamps, please let me know so I can turn that on.
    
    The internals have been made very easy to modify to support more
    advanced timezone features via extensions.  Two such possible
    extensions are the ability to use arbitrary named timezones (instead
    of just local time and UTC), and the ability to dynamically modify
    the application timezone based on the environment (such as the
    current user).  The following internal functions are used:
    
    * Sequel.convert_input_timestamp - Converts from a String, Array,
      Date, DateTime, or Time object to an object of
      Sequel.datetime_class. For Strings, if no timezone offset is
      included, assumes the string is already in the input_timezone.
    
    * Sequel.convert_output_timestamp - Converts the Time/DateTime object
      to the given output_timezone.
    
    * Sequel.convert_timestamp - Converts the object using
      convert_input_timestamp and the given input_timezone (either
      database_timezone or typecast_timezone) and passes it to
      convert_output_timestamp with the application_timezone.
    
    The following additional changes have been made:
    
    * ODBC::Time values are now converted to DateTimes if the
      Sequel.datetime_class setting is DateTime.  Also, instead of assuming
      UTC time, they will now respect the database_timezone setting.

    * The default timestamp literalization format now uses a space
      instead of a T to separate the date and time parts.

    * The timestamp storage format for SQLite has changed to now
      include the fractional seconds as well as the timezone offset.  Since
      SQLite just stores ASCII text, this can have significant implications
      if you use something other than Sequel to access the SQLite database.
Commits on Jul 7, 2009
  1. Much better support for Microsoft SQL Server

    This substantial commit provides much better support for Microsoft
    SQL Server.  Specific MSSQL subadapters were added for the ado, odbc,
    and jdbc adapters.  Mostly the subadapters involve getting insert
    working properly to return the last inserted id.
    
    While making these changes I've noticed that the ado adapter has huge
    problems, enough so that I wouldn't recommend that anyone use it.  It
    doesn't use a stable native connection, which means it can't work
    correctly with transactions.  It requires a pretty ugly hack to get
    insert to return the id inserted.
    
    Transactions on ado now unconditionally yield nil.  I thought about
    them raising an exception instead, but that would make the ado
    adapter not work well with models (without fiddling).  It's possible
    the behavior will be changed in the future.
    
    As bad as the ado adapter is now, it's still much better than before.
    Before, the ado adapter would run all queries twice when fetching
    rows, and if you did any nonidempotent actions inside the SQL, you'd
    have problems (as I found out when I used the ugly hack to get
    insert to return the id inserted).
    
    The ado and odbc adapters now catch the native exceptions and raise
    Sequel::DatabaseError exceptions.  Also, the behavior to handle
    blank identifiers has been standardized.  Sequel will now assume an
    identifier of 'untitled' if a blank identifier is given.
    
    The shared MSSQL adapter now supports Database#tables and
    Database#schema, using the INFORMATION_SCHEMA views (very similarly
    to what was used in Sequel 2.0).  Now, it also supports add_column,
    rename_column, set_columns_type, set_column_null, and
    set_column_default.
    
    The shared MSSQL adapter's schema method doesn't include primary key
    info, so some of the model logic was changed so that it doesn't mark
    the model as having no primary key unless all schema hashes include a
    primary key entry.
    
    The shared MSSQL adapter now uses the datetime type instead of the
    timestamp type for generic datetimes, and uses bit and image for
    boolean and file types.  It uses 0 and 1 for false and true, and
    no longer attempts to use IS TRUE.
    
    The odbc adapter's literal_time method has been fixed.
    
    In order to ease the connection to MSSQL servers with instances
    using a connection string, Sequel now will unescape URL parts. So the
    following now works:
    
      Sequel.connect('ado:///db?host=server%5cinstance')
    
    The ado adapter specs were removed, because the ado adapter itself
    doesn't really have any specific behavior that should be tested.  Now
    that Sequel has the generic integration tests, those should be used
    instead.  I removed the spec_ado rake task and replaced it with a
    spec_firebird rake task.
    
    Here's the results for integration testing on MSSQL with each
    adapter:
    
    * ado: 115 examples, 42 failures
    * jdbc: 117 examples, 22 failures
    * odbc: 115 examples, 19 failures
    
    Many of the remaining failures are due to the fact that some tests
    try to insert values into an autoincrementing primary key field,
    which MSSQL doesn't allow.  Those tests should be refactored unless
    they are explicitly testing that feature.
Commits on Jun 29, 2009
  1. Add support for Common Table Expressions, which use the SQL WITH clause

    CTEs are supported by MSSQL 2005+, DB2 7+, Firebird 2.1+, Oracle 9+,
    and PostgreSQL 8.4+.  They allow you to specify inline views that
    the SELECT query can reference, and also support a recursive mode
    that allows tables in the WITH clause to reference themselves. This
    allows things not normally possible in standard SQL, like loading all
    descendants in a tree structure in a single query.
    
    The standard with clause takes an alias and a dataset:
    
      ds = DB[:vw]
      ds.with(:vw, DB[:table].filter{col < 1})
      # WITH vw AS (SELECT * FROM table WHERE col < 1) SELECT * FROM vw
    
    The recursive with clause usage takes an alias, a nonrecursive
    dataset, and a recursive dataset:
    
      ds.with_recursive(:vw,
        DB[:tree].filter(:id=>1),
        DB[:tree].join(:vw, :id=>:parent_id).
                  select(:vw__id, :vw__parent_id))
      # WITH RECURSIVE vw AS (SELECT * FROM "tree"
      #     WHERE ("id" = 1)
      #     UNION ALL
      #     SELECT "vw"."id", "vw"."parent_id"
      #     FROM "tree"
      #     INNER JOIN "vw" ON ("vw"."id" = "tree"."parent_id"))
      # SELECT * FROM "vw"
    
    I've only tested this on a PostgreSQL 8.4 release candidate, but
    based on my research it should support MSSQL, DB2, and Firebird.
    Oracle apparently uses a different method for recursive queries.
    
    This commit contains some fixes for the window function support
    committed previously; for example, it now actually works on ruby 1.9.  The
    Dataset#supports_window_functions? method is now defined and
    defaults to false, and the adapters that support window
    functions override it.  If you attempt to use a window function
    on an unsupported dataset, an Error will be raised.
    
    Sequel's default is to support common table expressions, so the
    databases that don't support CTEs have been modified to not use the
    with clause if they previously used the default clauses.
    
    The MSSQL adapter was already using the :with option for setting
    NOLOCK on the query, so I changed the setting to use :table_options.
    
    Because the clauses the PostgreSQL adapter will use depends on the
    server version, I changed the code for getting the server version
    to use static SQL in order to avoid infinite recursion.
    
    PostgreSQL dataset's explain and analyze methods had been broken
    since 3.0; this commit fixes them and adds a simple spec.
    
    In order to implement the with_recursive method easily, the compound
    dataset methods (union, except, and intersect) now take an options
    hash instead of a true/false flag.  The previous true/false flag is
    still supported for backwards compatibility.  You can now specify it
    with the :all option, and there is now a :from_self option that you
    can set to false if you don't want to return a from_self dataset.
    
    This commit adds the qualify and with dataset methods to
    Sequel::Model.
Commits on Jun 18, 2009
  1. Add supports_distinct_on? method

    This removes the override of distinct in the Oracle and MySQL
    shared adapters and has the standard distinct method raise
    an exception if DISTINCT ON is used but it isn't supported.
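
    A hedged example (the DISTINCT ON SQL is shown for a supporting
    database such as PostgreSQL):

      DB[:a].distinct(:id)
      # PostgreSQL: SELECT DISTINCT ON (id) * FROM a
      # Oracle/MySQL: now raises an exception instead of generating invalid SQL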
  2. Remove SQLStandardDateFormat, replace with requires_sql_standard_datetimes? method
    
    One change here is that before, ADO used SQL standard datetimes,
    which was a mistake as ADO can be used to connect to any database.
    It doesn't appear to be required on Microsoft SQL Server (the default
    when using ADO), so I'm not sure why it was used in the first place.
    If MSSQL does require it, it should be added to the shared adapter
    and not the ADO adapter.
  3. Remove UnsupportedIsTrue module, replace with supports_is_true? method

    This is a fairly straightforward removal of the UnsupportedIsTrue
    module.  Now the default complex_expression_sql code checks the
    supports_is_true? method and operates based on that.
    
    Because the unsupported.rb file is no longer needed, it has been
    removed from the repository and lines requiring it have been removed
    as well.
    
    While here, have complex_expression_sql raise InvalidOperation
    exceptions instead of generic Error exceptions.
  4. Remove UnsupportedIntersectExcept(All)? modules, replace with methods

    This is the first in a series of commits that gets rid of the
    adapters/utils directory.  It removes the modules that overrode
    methods in the dataset subclasses, replacing them with two methods,
    supports_intersect_except? and supports_intersect_except_all?,
    and modifies the standard intersect and except methods to use
    those methods to determine whether to raise an error.
    
    This approach isn't as fast as the previous approach, but it allows
    for more reflection support and is simpler to implement and use.
Commits on Jun 7, 2009
  1. Fix graphing of datasets with dataset sources (Fixes #271)

    This actually took a significant amount of internal restructuring.
    
    Dataset#from now does processing on the arguments to make
    sure that the entries in the :from option are in a consistent
    format.  This is similar to the change made recently to
    Dataset#select.  This is mirrored by changes to Dataset#source_list
    and Dataset#table_ref to not process the arguments.  This should
    only break backwards compatibility if you were modifying the :from
    option directly and using datasets or hashes.
    
    Dataset#graph has been modified a little to handle using
    an alias for datasets.  The implementation is a bit less optimal
    than I would like, as it assumes specific behavior in Dataset#join,
    but that's life.  It's possible that in the future Sequel will be
    able to record all table aliases instead of just keeping track of
    the number of dataset sources used.
    
    Dataset#simple_select_all? is now only true if the only source in
    the dataset is a Symbol, as previously datasets with a single
    dataset source were considered simple select alls.
    
    Dataset#first_source has been renamed to first_source_alias, with
    an alias for the old name.  That's because it gives you the alias,
    not the table/dataset used.
    
    The Dataset#dataset_alias private method was added.  It allows easier
    overriding of the aliases used, in case the default aliases (tX)
    are already used.
    
    This commit removes Dataset#table_exists?, since it never worked
    perfectly.  Database#table_exists? should be used instead.
Commits on May 22, 2009
  1. Make graphing a complex dataset work correctly

    Before, Dataset#graph didn't work correctly if the argument was a
    complex dataset (anything other than SELECT * FROM table).  This
    is because it only passed the first source of the dataset to
    join_table.  You could work around the issue by using from_self,
    but that shouldn't be necessary.
    
    To keep the SQL simple in the common case, check if the passed
    dataset is a simple dataset and use just the table symbol in that
    case.  Otherwise, pass the dataset itself.
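
    A hedged example (the alias and SQL are approximate):

      DB[:a].graph(DB[:b].filter{c > 1}, :a_id=>:id)
      # Before, only the first source (b) was passed to join_table.  Now the
      # complex dataset itself is joined, roughly:
      # SELECT ... FROM a LEFT OUTER JOIN
      #   (SELECT * FROM b WHERE (c > 1)) AS t1 ON (t1.a_id = a.id)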
Commits on Apr 9, 2009
  1. Remove all methods and features deprecated in 2.12

    This huge commit removes the deprecated code and deprecated specs
    (over 300!).  It also makes some other minor changes.
    
    Sequel.virtual_row_instance_eval is now true, but setting it to false
    does nothing.  Using instance_eval for virtual row blocks that don't
    accept arguments is now standard Sequel behavior.
    
    Some error messages in convenience.rb were turned into constants to
    reduce the garbage produced.  More work on this will probably be done
    in the future.
    
    Dataset#import was using each_slice, which was probably provided by
    enumerator.  Change to using a loop and slicing manually.
    
    The built in inflector now calls the string inflection methods if
    the string responds to the method.  The inflector extension no longer
    updates the built in inflector, since it no longer needs to.
    
    Database#blank_object? calls blank? on the object if the object
    responds to it.
    
    The connection pool had a slight code refactoring to make things
    easier to read.
    
    The hook_class_methods plugin instance methods call super, so using
    the plugin doesn't ignore previous instance level hooks.  Among other
    things, this allows you to use the hook_class_methods plugin after
    the caching plugin, which fixes #264.
    
    Make the serialization plugin test require yaml, since it
    uses yaml.  This only seems necessary on ruby 1.9.
    
    Sequel::Deprecation is being moved to the extra directory.  I'm
    moving it out of lib so it won't show up in the RDoc.  I'm not
    removing it completely because I expect it may be used again
    sometime in the future.
    
    One thing that I realize I should have officially deprecated
    was the 4th argument to join_table being a table_alias instead of a
    hash of options.  So Sequel will continue to support that.
Commits on Mar 26, 2009
  1. Deprecate Dataset#transform and Model.serialize, and add a model serialization plugin
    
    This is the last major feature that will be deprecated and removed in
    3.0.  Nobody has responded that they need the dataset transform
    feature, so that is being removed.  It's tied into a lot of different
    places in Dataset, so I'm not going to be providing an extension that
    adds it back.
    
    Model.serialize relies on Dataset#transform, so it is being
    deprecated as well.  The new serialization plugin takes a different
    approach.  It overrides the column reader to deserialize the column
    on request, and adds a before_save hook that serializes the
    deserialized values before saving.  It should do the job and be
    reasonably backward compatible.
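
    Assumed usage of the new plugin (the serialization format and column
    name are illustrative):

      class Post < Sequel::Model
        plugin :serialization, :yaml, :metadata
      end

      post = Post[1]
      post.metadata                # column reader deserializes on request
      post.metadata = {'a'=>1}
      post.save                    # before_save hook serializes the value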
    
    This commit moves the requiring of bigdecimal/util, enumerator, and
    yaml to deprecated.rb, since they will be removed in 3.0.
    
    This commit makes Model#reload an actual method instead of an alias,
    to make overriding it in a plugin easier (which the serialization
    plugin does).
    
    This commit also reuses the DATASET_METHOD_RE in
    Model#def_column_accessor.