Mar 05, 2014

  1. Jeremy Evans

    When using the streaming extension, automatically use streaming to implement paging in Dataset#paged_each

    Bump version to 1.6.9.
    authored March 05, 2014
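The paging that Dataset#paged_each performs without streaming can be sketched in plain Ruby. This is an illustrative stand-in, not Sequel's implementation: the helper name and the `rows` array are hypothetical, and each slice stands in for one LIMIT/OFFSET query; with the streaming extension, the whole iteration becomes a single streamed query instead.

```ruby
# A plain-Ruby sketch of the LIMIT/OFFSET paging Dataset#paged_each
# falls back to without streaming. `rows` plays the role of the table;
# each slice stands in for one page-sized query.
def paged_each(rows, page_size: 1000)
  offset = 0
  loop do
    page = rows[offset, page_size] || []  # stands in for one page query
    break if page.empty?
    page.each { |row| yield row }
    offset += page_size
  end
end
```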

Aug 09, 2013

  1. Jeremy Evans

    Modify text in README (Fixes #14)

    authored August 09, 2013
  2. Alex Vaystikh

    Update README with instructions on how to support tables with more than 256 columns

    authored August 09, 2013 jeremyevans committed August 09, 2013

Aug 05, 2013

  1. Jeremy Evans

    Add license to gemspec

    authored August 05, 2013
  2. Jeremy Evans

    Allow overriding maximum allowed columns in a result set via -- --with-cflags="-DSPG_MAX_FIELDS=1600" (Fixes #12)

    Previously, you couldn't override this limit without modifying
    the source code.
    authored August 05, 2013
  3. Jeremy Evans

    Remove unused struct

    This was left over from the PQsetRowProcessor streaming support.
    authored August 05, 2013

Jun 06, 2013

  1. Jeremy Evans

    Correctly handle fractional seconds in the time type, bump version to 1.6.7

    authored June 06, 2013

May 31, 2013

  1. Jeremy Evans

    Work correctly when using the named_timezones extension, bump version to 1.6.6

    This turns out to be a fairly easy change, since we just call
    Database#to_application_timestamp with the string if we don't
    recognize the timezone.

Mar 15, 2013

  1. Jeremy Evans

    Work around format-security false positive (Fixes #9)

    Even though the rb_raise call is provably safe (static function
    called with all static arguments), format-security still complains.
    To appease it and make sequel_pg work in setups where
    -Werror=format-security is used by default, use "%s" as the format string.
    authored March 15, 2013

Mar 06, 2013

  1. Jeremy Evans

    Handle infinite dates using Database#convert_infinite_timestamps, bump version to 1.6.5

    authored March 06, 2013

Jan 14, 2013

  1. Jeremy Evans

    Remove type conversion of int2vector and money types on PostgreSQL, since previous conversions were wrong (Fixes #8)

    A similar change was made in Sequel about 6 months ago.
    Bump version to 1.6.4.
    authored January 14, 2013

Nov 30, 2012

  1. Jeremy Evans

    Make streaming support not swallow errors when rows are not retrieved, bump version to 1.6.3

    If the user doesn't retrieve all of the rows when streaming, sequel_pg
    is supposed to flush the connection by manually retrieving the other
    rows.  Before, it wasn't checking the result status of each result,
    causing it to miss errors.
    I noticed this after adding an integration test for deferred foreign
    key support to Sequel.
    authored November 30, 2012

Nov 16, 2012

  1. Jeremy Evans

    Bump version to 1.6.2

    authored November 16, 2012

Nov 11, 2012

  1. Dirkjan Bussink

    Don't register variables on the stack as global

    By definition, VALUE pointers on the stack can never be a global variable.
    The location goes out of scope when the function returns and can
    then point at random data.  The error class, on the other hand, is
    used globally, so mark it as such.
    authored November 11, 2012

Oct 25, 2012

  1. Jeremy Evans

    Make PostgreSQL array parser handle string encodings correctly on ruby 1.9, bump version to 1.6.1

    This copies the encoding of the embedded array string to each member
    of the array.  It simplifies the logic somewhat, and also marks each
    string as tainted, similar to the default logic to retrieve values.
    authored October 25, 2012
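The encoding fix, restated in plain Ruby (an illustrative helper, not the C code): each parsed member takes on the encoding of the source array string it was cut from.

```ruby
# Illustrative helper mirroring the fix: give each parsed member the
# encoding of the array literal it came from (the C parser does this
# when it builds each Ruby string).
def with_source_encoding(member, source)
  member.force_encoding(source.encoding)
end
```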

Sep 10, 2012

  1. Jeremy Evans

    Add README note about pg 0.14.1 requirement for streaming

    authored September 10, 2012
  2. Jeremy Evans

    Update README for new streaming setup

    authored September 10, 2012

Sep 04, 2012

  1. Jeremy Evans

    Bump version to 1.6.0

    Bump the dependency on sequel to 3.39.0.  The streaming support
    requires pg 0.14.1, but as most people won't be using it, I am
    leaving the dependency on pg at 0.8.0.
    authored September 04, 2012

Aug 09, 2012

  1. Jeremy Evans

    Replace PQsetRowProcessor streaming with PQsetSingleRowMode streaming introduced in PostgreSQL 9.2beta3

    PostgreSQL decided that PQsetRowProcessor was a bad API, and replaced
    it with PQsetSingleRowMode in 9.2beta3.  I agree with the choice, as
    the new API is simpler and fairly easy to use.
    Currently, this requires a patch to ruby-pg to recognize
    PGRES_SINGLE_TUPLE as an OK result status.  That should hopefully
    be committed soon.  This also requires the master branch of Sequel,
    as it depends on some recent refactoring in the Sequel postgres adapter.
    authored August 08, 2012

Aug 02, 2012

  1. Jeremy Evans

    Fix segfaults in the array parser using RB_GC_GUARD, bump version to …

    authored August 02, 2012

Jul 25, 2012

  1. Jeremy Evans

    Update README with supported ruby versions

    authored July 25, 2012

Jun 29, 2012

  1. Jeremy Evans

    Bump version to 1.5.0

    authored June 29, 2012

Jun 25, 2012

  1. Jeremy Evans

    Add PostgreSQL array parser

    This adds a C-based PostgreSQL array parser.  The original
    implementation is from the pg_array_parser library, but I've
    heavily modified it.  This C-based parser is 5-500 times faster
    than the pure ruby parser that Sequel uses by default.  5 times
    faster for an empty array, and 500 times faster for an array
    with a single 10MB string.
    Because the pg_array extension can be loaded before or after
    sequel_pg, handle the case where it is loaded after by switching
    the Creator class to use the C based parser instead of the pure
    ruby parser.
    authored June 25, 2012
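The format being parsed can be sketched in Ruby for the one-dimensional case. This is only an illustration of PostgreSQL's array-literal syntax (quoting, backslash escapes, NULL); the actual parser is the heavily modified C code described above, which also handles nested arrays and is what delivers the 5-500x speedup.

```ruby
# Minimal, one-dimensional sketch of PostgreSQL array-literal parsing:
# '{a,"b c",NULL}' => ["a", "b c", nil]. Quoted members may contain
# commas and backslash-escaped characters; an unquoted NULL means nil.
def parse_pg_array(literal)
  inner = literal[1..-2]  # strip the outer braces
  return [] if inner.empty?
  result = []
  buf = +""
  in_quotes = quoted = false
  chars = inner.chars
  i = 0
  while i < chars.length
    c = chars[i]
    if in_quotes
      if c == "\\"
        buf << chars[i + 1]  # take the escaped character literally
        i += 1
      elsif c == '"'
        in_quotes = false
      else
        buf << c
      end
    elsif c == '"'
      in_quotes = true
      quoted = true
    elsif c == ","
      result << (!quoted && buf == "NULL" ? nil : buf)
      buf = +""
      quoted = false
    else
      buf << c
    end
    i += 1
  end
  result << (!quoted && buf == "NULL" ? nil : buf)
end
```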

Jun 01, 2012

  1. Jeremy Evans

    Bump version to 1.4.0

    authored June 01, 2012

May 21, 2012

  1. Jeremy Evans

    Include streaming file in the gem, and update README

  2. Jeremy Evans

    Add support for streaming on PostgreSQL 9.2 using PQsetRowProcessor

    PostgreSQL 9.2 adds a new libpq function called PQsetRowProcessor
    that allows the application to set a function that is called with
    every row loaded over the socket.  This is different than the
    standard PostgreSQL API, which collects the entire result in
    memory before returning control to the application.  This API
    makes it possible to easily process large result sets that would
    ordinarily not fit in memory.
    Integrating this API into sequel_pg wasn't simple.  Because you
    need to set the row processing function before execution of the
    query (and unset it afterward), the dataset needs to pass
    additional information to the database, indicating that streaming
    should be used.  It captures the block given to fetch_rows and
    passes both itself and the block to the database.
    Because part of the libpq API is tied to row+column indexing into
    the result set (PGresult), that part can't be reused by the row
    processing code.  So I had to add spg__rp_value, which is sort of
    a duplicate of spg__col_value, but using the row processing API.
    spg__rp_value is probably not going to be as fast, as the row
    processing API doesn't use NUL terminated strings, so in many cases
    spg__rp_value has to create a ruby string where spg__col_value
    does not.
    The column info parsing can be reused between the regular and
    row processing code, so split that into a separate function called
    Because sequel_pg needs to work on older libpq versions, add a
    have_func call to extconf.rb to determine if the row processing
    API is available.  If it is, Sequel::Postgres.supports_streaming?
    will be true.
    The libpq row processing API supports passing information to the
    function via a void* API.  To make memory management easy, a
    C struct is initialized on the C stack and a pointer to it is
    passed to the row processing function. Control is yielded to the
    block, with an ensure block to unset the row processing function
    when the block completes (or raises an error).  So the row
    processing function should only be called when the memory
    referenced by the pointer is valid.  The row processing function
    is reset to the standard one before the C function returns.
    This code needs the master branch of Sequel, since it overrides
    the new Database#_execute method in the postgres adapter.  The
    reason for that refactoring was to enable streaming to work
    easily with prepared statements.
    Currently, streaming is disabled if you attempt to use the map,
    select_map, to_hash, and to_hash_groups optimizations.  In all
    of these cases, you are returning something containing the entire
    result set, so streaming shouldn't be important.  It is possible
    to implement support for streaming with these optimizations, but
    it requires a lot of custom code that I don't want to write. As
    it is, I had to add special support so that streaming worked when
    the optimize_model_load setting was used.
    Streaming is not enabled by default.  You have to specifically
    require sequel_pg/streaming, then extend the database instance
    with Sequel::Postgres::Streaming, then call Dataset#stream on the
    dataset you want to stream.  Alternatively, you can extend the
    database's datasets with Sequel::Postgres::Streaming::AllQueries,
    in which case streaming will be used by default for all queries
    that are streamable.
  3. Jeremy Evans

    Respect DEBUG environment variable when building
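The opt-in usage described above, written out as application code. Nothing below runs a query at load time; `:big_table` and `handle` are placeholders, the module names follow the commit message, and `extend_datasets` is assumed as Sequel's way of extending a database's datasets. Real use needs PostgreSQL 9.2 and the requires shown in the comment.

```ruby
# Usage sketch for opt-in streaming, per the description above.
#
#   require "sequel"
#   require "sequel_pg/streaming"

def stream_big_table(db)
  db.extend(Sequel::Postgres::Streaming)
  db[:big_table].stream.each do |row|
    # rows are yielded as they arrive over the socket, so the full
    # result set is never buffered in memory
    handle(row)
  end
end

# Or make streaming the default for every streamable query:
def stream_by_default(db)
  db.extend_datasets(Sequel::Postgres::Streaming::AllQueries)
end
```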

Apr 02, 2012

  1. Jeremy Evans

    Bump version to 1.3.0

    authored April 02, 2012

Mar 24, 2012

  1. Jeremy Evans

    Build Windows version against PostgreSQL 9.1.1, ruby 1.8.7, and ruby …

    Also, switch build process to piggy-back on ruby-pg's static
    authored March 23, 2012

Mar 20, 2012

  1. Jeremy Evans

    Add speedup for to_hash_groups

    This makes to_hash_groups around 3 times faster for a key value
    symbol and about 50% faster for a key value array, compared to
    the previous code.  Combined with the other optimizations, sequel_pg
    can speed up to_hash_groups by about 7.5x over the default Sequel
    authored March 20, 2012

Mar 18, 2012

  1. Jeremy Evans

    Handle infinite timestamp values using Database#convert_infinite_time…

    This is a fairly slow code path, but since infinite timestamps are fairly
    rare, that shouldn't matter.
    authored March 17, 2012
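The special-casing this commit delegates can be illustrated with a standalone helper. PostgreSQL sends infinite timestamps as the strings below; this mimics a float-style conversion, while Sequel's actual behavior depends on the value given to convert_infinite_timestamps.

```ruby
# Illustrative stand-in for the conversion delegated to
# Database#convert_infinite_timestamps (float-style mapping shown;
# the real setting supports several target representations).
def convert_infinite_timestamp(value)
  case value
  when "infinity"  then Float::INFINITY
  when "-infinity" then -Float::INFINITY
  else value  # ordinary values are parsed as timestamps elsewhere
  end
end
```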

Mar 09, 2012

  1. Jeremy Evans

    Bump version to 1.2.2

    authored March 09, 2012
  2. Jeremy Evans

    Get microsecond accuracy when using datetime_class = DateTime with 1.8-1.9.2 stdlib date library via Rational

    Unfortunately, with the 1.8-1.9.2 stdlib date class, you can't add a
    float to a DateTime instance and get microsecond accuracy, even if the
    float is very small.  For a current DateTime instance, accuracy appears
    to be +/- 33 usecs.
    To work around this issue, check if the 1.8-1.9.2 stdlib date
    implementation is being used via the @ajd instance variable.  If so,
    use a slower but more accurate Rational implementation.
    While here, register spg_SQLTime and spg_Postgres with the garbage
    collector to avoid segfaults if the constants are manually undefined.
    Also, rename a misleading macro to reflect it uses microseconds instead
    of milliseconds.
    authored March 09, 2012
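The Rational approach can be shown with the stdlib date library: expressing the microseconds as an exact fraction of a day preserves them exactly, where the float path on the old date library could not.

```ruby
require "date"

# The fix in miniature: add microseconds to a DateTime as an exact
# Rational fraction of a day (86_400 * 1_000_000 microseconds per day).
# On the 1.8-1.9.2 date library, the float path was only accurate to
# roughly +/- 33 microseconds for a current DateTime.
base = DateTime.new(2012, 3, 9)
usec = 123_456
dt = base + Rational(usec, 86_400 * 1_000_000)

# The microseconds survive exactly:
recovered = (dt.sec_fraction * 1_000_000).to_i
```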

Feb 22, 2012

  1. Jeremy Evans

    Upgrade CHANGELOG and bump version to 1.2.1

    authored February 21, 2012

Feb 21, 2012

  1. Jeremy Evans

    Handle NaN/Infinity correctly

    Before, NaN/Infinity values would be returned as 0.0.
    authored February 21, 2012
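PostgreSQL sends these values as the text strings 'NaN', 'Infinity', and '-Infinity'; correct handling maps them to the corresponding Ruby floats. This is an illustrative Ruby helper, not the C implementation:

```ruby
# Illustrative mapping of PostgreSQL's special float8 text values to
# Ruby floats, which the broken code previously collapsed to 0.0.
def parse_pg_float(s)
  case s
  when "NaN"       then Float::NAN
  when "Infinity"  then Float::INFINITY
  when "-Infinity" then -Float::INFINITY
  else Float(s)
  end
end
```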