Commits on Dec 13, 2013
  1. @ribasushi
Commits on Nov 5, 2013
  1. @ribasushi
Commits on Apr 19, 2013
  1. @SineSwiper @ribasushi

    Support for $val === [ {}, $val ] in literal SQL + bind specs

    SineSwiper authored ribasushi committed
    This wraps up the changes started in 0e77335. Now DBIC bind values can
    be specified just like the ones for SQL::Abstract as long as no bind
    metadata (e.g. datatypes) is needed
    
    Also added an explicit check to catch when a non-scalar, non-stringifiable
    value is passed without bind type metadata.
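A hypothetical sketch of the two equivalent styles after this change ($schema and the Artist source are invented for illustration, not taken from the commit):

```perl
# Sketch only - assumes a hypothetical $schema with an 'Artist' source.
# Plain values in literal SQL can now be bound directly, SQL::Abstract
# style, when no bind metadata (e.g. a datatype) is needed:
my $rs = $schema->resultset('Artist')->search(
  \[ 'LENGTH(name) = ?', 42 ],   # bare bind value
);

# The fully explicit form carrying bind metadata remains available:
my $rs2 = $schema->resultset('Artist')->search(
  \[ 'LENGTH(name) = ?', [ { sqlt_datatype => 'integer' } => 42 ] ],
);
```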
Commits on Apr 17, 2013
  1. @ribasushi

    Overhaul GenericSubq limit - add support for multicol order

    ribasushi authored
    A lot of other cleanup as well - this should be the end of it ;)
  2. @ribasushi
  3. @ribasushi

    Merge branch 'master' into topic/constructor_rewrite

    ribasushi authored
    Add some extra code to enforce the assumption that any bind type constant
    is accessible in _dbi_attrs_for_bind, or in other words that all necessary
    DBDs are already loaded (concept originally introduced in ad7c50f)
    
    Without this the combination of 9930caa (do not recalculate bind attrs
    on dbh_do retry) and a2f2285 (do not wrap iterators in dbh_do) can result
    in _dbi_attrs_for_bind being called before DBI/DBD::* has been loaded at all
  4. @ribasushi

    Remove idiotic RowCountOrGenericSubQ - it will never work as part of as_query

    ribasushi authored
    
    I am not sure how I overlooked this, blame is on me :(
    
    Replacing with a ROWCOUNT fallback at the appropriate place
Commits on Apr 9, 2013
  1. @ribasushi
Commits on Mar 10, 2013
  1. @ribasushi

    Radically rethink complex prefetch - make most useful cases just work (tm)

    ribasushi authored
    
    TL;DR: mst - I AM SORRY!!! I will rebase the dq branch for you when this
    pile of eyebleed goes stable.
    
    The long version - since we now allow arbitrary prefetch, the old
    _prefetch_selector_range mechanism doesn't cut it anymore. Instead we
    recognize prefetch solely based on _related_results_construction.
    Furthermore group_by/limits do not play well with right-side order_by
    (which we now also support, by transforming foreign order criteria into
    aggregates).
    
    Thus a much more powerful introspection is needed to decide what goes on
    the inside and outside of the prefetch subquery. This is mostly done now
    by the augmented _resolve_aliastypes_from_select_args to track
    identifiers it saw (97e130f), and by extra logic considering what
    exactly we are grouping by.
    
    Everything is done while observing the "group over selection +
    aggregates only" rule, which should allow us to remain RDBMS agnostic
    (even for pathological cases of "MySQL-ish aggregates").
    
    As a bonus more cases of "the user knows what they are doing" are now
    correctly recognized and left alone. See the t/prefetch/with_limit.t diff
    for a general idea of the scope of improvements.
    
    Yes - there is more regexing crap in the codebase now, and it is
    possible we will call _resolve_aliastypes_from_select_args up to 4(!!!)
    times per statement preparation. However this allows us to establish a
    set of test cases towards which to write optimizations/flog the dq
    framework.
  2. @ribasushi

    Consider unselected order_by during complex subqueried prefetch

    ribasushi authored
    Augment _resolve_aliastypes_from_select_args to collect the column names
    it sees, allowing it to replace _extract_condition_columns() entirely.
    
    In the process fix a number of *incorrect* limit_dialect tests
Commits on Feb 14, 2013
  1. @ribasushi

    Optimization - order only on lazy prefetch

    ribasushi authored
    If the user wants all() without an order_by she doesn't care anyway
  2. @ribasushi

    Move the infmap verification/exception way earlier

    ribasushi authored
    The thing is deliberately not a method on the source, because I am not
    entirely sure where it belongs *YET*. I just knew it does not belong
    in the (hopefully) extractable parser-gens.
  3. @ribasushi
Commits on Jan 25, 2013
  1. @ribasushi
Commits on Jan 21, 2013
  1. @ribasushi

    Fix self-referential resultset update/delete on MySQL (aggravated by 31073ac)

    ribasushi authored
    
    MySQL is unable to figure out it needs a temp-table when it is trying
    to update a table with a condition it derived from that table. So what
    we do here is give it a helpful nudge by rewriting any "offending"
    subquery to a double subquery post-sql-generation.
    
    Performance seems to be about the same for moderately large sets. If it
    becomes a problem later we can always revisit and add the ability to
    induce "row-by-row" update/deletion instead.
    
    The implementation sucks, but is rather concise and most importantly
    contained to the MySQL codepath only - it does not affect the rest of
    the code flow in any way.
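The rewrite described above can be illustrated with a pair of statements (table, column, and subquery alias names are invented, not DBIC's generated ones):

```perl
# Illustration only - names are invented for the example.
# MySQL refuses to update a table that also appears in a subquery of the
# same statement's WHERE clause (error 1093):
my $broken = q{
  UPDATE artist SET rank = 0
   WHERE artistid IN ( SELECT artistid FROM artist WHERE rank > 10 )
};

# Wrapping the subquery in one more SELECT forces MySQL to materialize it
# as a temporary table first - the post-sql-generation nudge applied on
# the MySQL codepath:
my $works = q{
  UPDATE artist SET rank = 0
   WHERE artistid IN (
     SELECT * FROM ( SELECT artistid FROM artist WHERE rank > 10 ) _subq
   )
};
```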
Commits on Jan 12, 2013
  1. @ribasushi
Commits on Nov 4, 2012
  1. @ribasushi
Commits on Nov 3, 2012
  1. @mattp- @ribasushi

    Let SQLMaker rs_attr 'for' support string literals

    mattp- authored ribasushi committed
    SQLMaker previously only allowed hardcoded values with the 'for' attr,
    overriding in storage specific subclasses. Rather than attempt to provide an
    exhaustive list of possible options, the base class can now take \$scalar as
    an override that is embedded directly in the returned $sql
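A hedged sketch of both usages ($schema, the Artist source, and the search condition are invented for illustration):

```perl
# Sketch only - assumes a hypothetical $schema with an 'Artist' source.
# Previously the 'for' attribute accepted a fixed set of values:
my $locked = $schema->resultset('Artist')->search(
  { rank => { '>' => 10 } },
  { for => 'update' },               # ... FOR UPDATE
);

# With this change a scalar ref is embedded verbatim in the generated SQL,
# allowing storage-specific lock clauses without subclassing SQLMaker:
my $custom = $schema->resultset('Artist')->search(
  { rank => { '>' => 10 } },
  { for => \'UPDATE SKIP LOCKED' },  # ... FOR UPDATE SKIP LOCKED
);
```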
Commits on Aug 23, 2012
  1. @ribasushi @frioux

    Back out constructor/prefetch rewrite introduced mainly by 43245ad

    ribasushi authored frioux committed
    It was shipped against the author's advice, while containing multiple known
    bugs. After the expected bugreports went warnocked for over two weeks by the
    new DBIC release team, it seems that the only way to partially restore the
    release quality DBIC users have come to expect is to throw this
    code away until better times.
    
    Should resolve RT#78456 and the issues reported in these threads:
    http://lists.scsys.co.uk/pipermail/dbix-class/2012-July/010681.html
    http://lists.scsys.co.uk/pipermail/dbix-class/2012-July/010682.html
Commits on Apr 21, 2012
  1. @ribasushi

    Remove realiasing overengineering introduced in 86bb5a2

    ribasushi authored
    mst is right - there is no viable use case for this, cleanse with fire
    Also I managed to misspell subUery... twice
  2. @ribasushi

    More limit torture

    ribasushi authored
Commits on Apr 20, 2012
  1. @ribasushi

    Make sure order realiasing remains in proper sequence on sorting

    ribasushi authored
    We now support 999 realiased order criteria, this is beyond sufficient
Commits on Apr 16, 2012
  1. @ribasushi

    I think we are done here

    ribasushi authored
Commits on Apr 15, 2012
  1. @ribasushi
Commits on Apr 14, 2012
  1. @shadowcat-mst @ribasushi

    me.minyear is not a valid alias, minyear is

    shadowcat-mst authored ribasushi committed
  2. @ribasushi
  3. @ribasushi
  4. @ribasushi

    Do not alias plain column names to the inflator spec, do it only for funcs

    ribasushi authored
    
    This solves a problem with deliberate column renames in complex subqueries
  5. @ribasushi
  6. @ribasushi
Commits on Apr 9, 2012
  1. @ribasushi

    Allow for tests to run in parallel (simultaneously from multiple checkouts)

    ribasushi authored
    
    This is an interim solution and is by no means the final thing. It simply
    was possible to do in a short timeframe and cuts the test run time in half.
    
    If you have DSN envvars set, use at least -s -j8 for best results (the
    shuffling un-bunches similar tests, see discussion below)
    
    Two things are at play:
    
    First of all every SQLite database and every temp work directory is created
    separately using the pid of the *main* test process (there can be children)
    for disambiguation. Extra cleanup passes have been added to ensure t/var
    remains clean between runs.
    
    All other DSNs are reduced to their ->sqlt_type form and the result is used
    for a global lockfile. Said lockfile is kept in /tmp so that multiple
    testruns from multiple directories can be run against the same set of
    databases with no conflicts.
    
    Some of the tests are explicitly exempt from any locking and will run
    regardless of environment, for example t/storage/dbi_env.t
    
    The lockfiles are deliberately placed in File::Spec->tmpdir. This is done
    so that multiple dbic checkouts can run against the same set of DSNs without
    stepping on each other's toes.
    
    Some notes on why this is not a great idea, even though it works flawlessly
    under continuous test cycling: The problem is that our tests are not yet
    ordered in a specific way. This means that multiple tests competing for
    the same resource will inevitably lock all available test threads forming
    several bottlenecks along the path of execution. This issue will be addressed
    in a later patch, with the following considerations:
      - prove -l t/... must continue to work as is
      - test aggregation is something the test suite should try to avoid in
        general - after all DBIC is intended to be usable in CGI (yes, pure CGI)
        environments, so if the tests are getting heavy to run - this is an
        actual problem in need of fixing. Aggregation will instead sweep it under
        the rug
      - general reorganization of test groups / various path changes should only
        be attempted once we have a solid base for multi-db test runs
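The locking scheme described above can be sketched roughly as follows. This is not the actual test-suite code; the function name and lockfile naming are invented, but the mechanism (an exclusive flock on a per-sqlt_type file in the system tmpdir, plus pid-based SQLite/workdir disambiguation) is the one the commit describes:

```perl
use strict;
use warnings;
use File::Spec;
use Fcntl qw(LOCK_EX);

# Sketch only. One lockfile per DSN type lives in the system tmpdir, so
# test runs from *different* checkouts contend on the same lock when
# pointed at the same database type.
sub lock_dsn_type {
  my ($sqlt_type) = @_;   # e.g. 'MySQL', 'PostgreSQL', 'Oracle'
  my $lockfile = File::Spec->catfile(
    File::Spec->tmpdir, "dbictest_$sqlt_type.lock"
  );
  open my $fh, '>', $lockfile or die "Unable to open $lockfile: $!";
  flock $fh, LOCK_EX        or die "Unable to lock $lockfile: $!";
  return $fh;   # lock is held for as long as the filehandle lives
}

# SQLite files and temp workdirs need no global lock - they are instead
# disambiguated with the pid of the *main* test process:
my $workdir = "t/var/dbictest_$$";
```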
Commits on Mar 22, 2012
  1. @ribasushi

    Fix pessimization of offset-less Oracle limits, introduced in 6a6394f

    ribasushi authored
    When there is only one RowNum operator, stability of the order is not relevant
  2. @ribasushi
Commits on Mar 12, 2012
  1. @ribasushi

    Test suite wide leaktesting

    ribasushi authored
Commits on Mar 2, 2012
  1. @ribasushi