Comparing changes

base fork: Cito/sqlalchemy
...
head fork: Cito/sqlalchemy
  • 16 commits
  • 30 files changed
  • 0 commit comments
  • 2 contributors
Commits on Feb 23, 2014
@zzzeek zzzeek - rewrite expire/refresh section 948b14b
Commits on Feb 24, 2014
@zzzeek zzzeek more detail, what actually loads, etc. 8b58c6a
@zzzeek zzzeek - Fixed bug in the versioned_history example where column-level INSERT
defaults would prevent history values of NULL from being written.
537d921
@zzzeek zzzeek - Fixed regression from 0.8 where using an option like
:func:`.orm.lazyload` with the "wildcard" expression, e.g. ``"*"``,
would raise an assertion error in the case where the query didn't
contain any actual entities.  This assertion is meant for other cases
and was catching this one inadvertently.
d966920
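
A minimal sketch of the case this fixes (``User`` and ``session`` are hypothetical stand-ins for a mapped class and an active session), mirroring the test added in test/orm/test_default_strategies.py below::

    from sqlalchemy.orm import lazyload

    # the query names only a column, so it contains no mapped entity;
    # the wildcard option is now ignored here instead of raising
    names = session.query(User.name).options(lazyload('*')).all()
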
@zzzeek zzzeek - we're testing a query here with non-standard aliasing which fails on PG and MySQL.

Leave this test in place as it's ultimately a SQLite use case, but only test on SQLite.
We perhaps should add another test case that works on all platforms.
1122d35
Commits on Feb 25, 2014
@zzzeek zzzeek - Fixed bug where events set to listen at the class
level (e.g. on the :class:`.Mapper` or :class:`.ClassManager`
level, as opposed to on an individual mapped class, and also on
:class:`.Connection`) that also made use of internal argument conversion
(which is most within those categories) would fail to be removable.
fixes #2973
e60529d
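
A rough illustration of the kind of class-level listener this affects (the event name is only an example of one that uses argument conversion)::

    from sqlalchemy import event
    from sqlalchemy.orm import Mapper

    def on_configured(mapper, class_):
        pass

    # listen at the Mapper class level rather than on one mapped class
    event.listen(Mapper, "mapper_configured", on_configured)

    # removal of such class-level listeners previously failed; see #2973
    event.remove(Mapper, "mapper_configured", on_configured)
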
Commits on Feb 26, 2014
@zzzeek zzzeek - The new dialect-level keyword argument system for schema-level
constructs has been enhanced in order to assist with existing
schemes that rely upon addition of ad-hoc keyword arguments to
constructs.
- To suit the use case of allowing custom arguments at construction time,
the :meth:`.DialectKWArgs.argument_for` method now allows this registration.
fixes #2962
33f0720
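
For example, mirroring the changelog entry added below::

    from sqlalchemy import Index

    # ad-hoc keyword arguments are again accepted after construction
    idx = Index('a', 'b')
    idx.kwargs['mysql_someargument'] = True

    # or, register the argument so it is accepted at construction time
    Index.argument_for('mysql', 'someargument', False)
    idx = Index('a', 'b', mysql_someargument=True)
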
@zzzeek zzzeek docs f9492ef
@zzzeek zzzeek - use MutableMapping to make this more succinct, complete 6fadb57
@zzzeek zzzeek - Fixed issue in new :meth:`.TextClause.columns` method where the ordering
of columns given positionally would not be preserved.  This could
have potential impact in positional situations such as applying the
resulting :class:`.TextAsFrom` object to a union.
bf67069
@zzzeek zzzeek - Some changes to how the :attr:`.FromClause.c` collection behaves
when presented with duplicate columns.  The behavior of emitting a
warning and replacing the old column with the same name still
remains to some degree; the replacement in particular is to maintain
backwards compatibility.  However, the replaced column still remains
associated with the ``c`` collection now in a collection ``._all_columns``,
which is used by constructs such as aliases and unions, to deal with
the set of columns in ``c`` more towards what is actually in the
list of columns rather than the unique set of key names.  This helps
with situations where SELECT statements with same-named columns
are used in unions and such, so that the union can match the columns
up positionally and also there's some chance of :meth:`.FromClause.corresponding_column`
still being usable here (it can now return a column that is only
in selectable.c._all_columns and not otherwise named).
The new collection is underscored as we still need to decide where this
list might end up.   Theoretically it
would become the result of iter(selectable.c), however this would mean
that the length of the iteration would no longer match the length of
keys(), and that behavior needs to be checked out.
fixes #2974
- add a bunch more tests for ColumnCollection
302ad62
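
A small sketch of the new behavior, modeled on the ``ColumnCollection`` tests added in test/base/test_utils.py below; the keyed view still collapses duplicate keys while ``._all_columns`` keeps every column positionally::

    from sqlalchemy import sql
    from sqlalchemy.sql import column

    cc = sql.ColumnCollection()
    c1, c2a, c2b = column('c1'), column('c2'), column('c2')

    cc.add(c1)
    cc.add(c2a)
    cc.add(c2b)             # warns and replaces c2a under the key 'c2'

    list(cc)                # [c1, c2b] - unique keys only
    cc._all_columns         # [c1, c2a, c2b] - full positional list
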
@zzzeek zzzeek - Adjusted the logic which applies names to the .c collection when
a no-name :class:`.BindParameter` is received, e.g. via :func:`.sql.literal`
or similar; the "key" of the bind param is used as the key within
.c. rather than the rendered name.  Since these binds have "anonymous"
names in any case, this allows individual bound parameters to
have their own name within a selectable if they are otherwise unlabeled.
fixes #2974
6aeec02
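
A hypothetical sketch of where this is visible; no particular key values are asserted here, only that the literal is keyed by its own anonymous bind-parameter key rather than its rendered name::

    from sqlalchemy import select, literal, column

    s = select([column('a'), literal('x')])

    # the unlabeled literal now lands in s.c under the bind parameter's
    # anonymous key rather than a rendered name such as ":param_1"
    print(s.c.keys())
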
Commits on Feb 27, 2014
@zzzeek zzzeek - Removed stale names from ``sqlalchemy.orm.interfaces.__all__`` and
refreshed with current names, so that an ``import *`` from this
module again works.
fixes #2975
2d146dd
@zzzeek zzzeek - Fixed a regression in association proxy caused by :ticket:`2810` which
caused a user-provided "getter" to no longer receive values of ``None``
when fetching scalar values from a target that is non-present.  The
check for None introduced by this change is now moved into the default
getter, so a user-provided getter will also again receive values of
None.
re: #2810
b642821
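
The restored contract, paraphrased from the patch to ``associationproxy.py`` below: the ``None`` check now lives in the default getter only, so a user-provided getter sees ``None`` targets again. A rough equivalent of the default getter::

    import operator

    def default_getter(value_attr):
        _getter = operator.attrgetter(value_attr)
        # absent (None) targets yield None rather than raising
        return lambda target: _getter(target) if target is not None else None
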
@zzzeek zzzeek restore the contracts of update/extend to the degree that the same column identity
isn't appended to the list.  reflection makes use of this.
c2f86c9
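
Sketch of the preserved contract, following the ``test_extend_existing`` test added below: extending with a column that is already present by identity does not append it again::

    from sqlalchemy import sql
    from sqlalchemy.sql import column

    cc = sql.ColumnCollection()
    c1, c2, c3 = column('c1'), column('c2'), column('c3')

    cc.extend([c1, c2])
    cc.extend([c3, c2])     # c2 is already present by identity

    cc._all_columns         # [c1, c2, c3] - no duplicate entry for c2
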
@Cito Restore coercion to unicode with cx_Oracle.
This feature is now turned off by default.
c4dede6
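
As documented in the dialect change below, the coercion is now opt-in; the DSN here is a placeholder::

    from sqlalchemy import create_engine

    # Python 2: plain VARCHAR values come back as unicode only when requested
    engine = create_engine("oracle+cx_oracle://scott:tiger@dsn",
                           coerce_to_unicode=True)
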
Showing with 1,193 additions and 163 deletions.
  1. +5 −4 doc/build/builder/autodoc_mods.py
  2. +12 −0 doc/build/changelog/changelog_08.rst
  3. +106 −0 doc/build/changelog/changelog_09.rst
  4. +11 −3 doc/build/core/constraints.rst
  5. +3 −0  doc/build/core/sqlelement.rst
  6. +13 −0 doc/build/glossary.rst
  7. +215 −60 doc/build/orm/session.rst
  8. +8 −4 examples/versioned_history/history_meta.py
  9. +30 −1 examples/versioned_history/test_versioning.py
  10. +19 −1 lib/sqlalchemy/dialects/oracle/cx_oracle.py
  11. +20 −10 lib/sqlalchemy/engine/default.py
  12. +1 −1  lib/sqlalchemy/event/attr.py
  13. +3 −5 lib/sqlalchemy/ext/associationproxy.py
  14. +4 −4 lib/sqlalchemy/orm/interfaces.py
  15. +24 −0 lib/sqlalchemy/orm/session.py
  16. +2 −0  lib/sqlalchemy/orm/strategy_options.py
  17. +199 −38 lib/sqlalchemy/sql/base.py
  18. +16 −13 lib/sqlalchemy/sql/elements.py
  19. +8 −7 lib/sqlalchemy/sql/selectable.py
  20. +33 −5 test/base/test_events.py
  21. +186 −1 test/base/test_utils.py
  22. +8 −2 test/dialect/test_oracle.py
  23. +31 −1 test/ext/test_associationproxy.py
  24. +14 −0 test/orm/test_default_strategies.py
  25. +8 −2 test/sql/test_compiler.py
  26. +6 −0 test/sql/test_join_rewriting.py
  27. +133 −0 test/sql/test_metadata.py
  28. +60 −0 test/sql/test_selectable.py
  29. +14 −0 test/sql/test_text.py
  30. +1 −1  test/sql/test_update.py
9 doc/build/builder/autodoc_mods.py
@@ -22,14 +22,15 @@ def autodoc_skip_member(app, what, name, obj, skip, options):
}
_convert_modname_w_class = {
- ("sqlalchemy.engine.interfaces", "Connectable"): "sqlalchemy.engine"
+ ("sqlalchemy.engine.interfaces", "Connectable"): "sqlalchemy.engine",
+ ("sqlalchemy.sql.base", "DialectKWArgs"): "sqlalchemy.sql.base",
}
def _adjust_rendered_mod_name(modname, objname):
- if modname in _convert_modname:
- return _convert_modname[modname]
- elif (modname, objname) in _convert_modname_w_class:
+ if (modname, objname) in _convert_modname_w_class:
return _convert_modname_w_class[(modname, objname)]
+ elif modname in _convert_modname:
+ return _convert_modname[modname]
else:
return modname
12 doc/build/changelog/changelog_08.rst
@@ -9,6 +9,18 @@
:start-line: 5
.. changelog::
+ :version: 0.8.6
+
+ .. change::
+ :tags: orm, bug
+ :versions: 0.9.4
+ :tickets: 2975
+
+ Removed stale names from ``sqlalchemy.orm.interfaces.__all__`` and
+ refreshed with current names, so that an ``import *`` from this
+ module again works.
+
+.. changelog::
:version: 0.8.5
:released: February 19, 2014
106 doc/build/changelog/changelog_09.rst
@@ -15,6 +15,112 @@
:version: 0.9.4
.. change::
+ :tags: bug, ext
+ :tickets: 2810
+
+ Fixed a regression in association proxy caused by :ticket:`2810` which
+ caused a user-provided "getter" to no longer receive values of ``None``
+ when fetching scalar values from a target that is non-present. The
+ check for None introduced by this change is now moved into the default
+ getter, so a user-provided getter will also again receive values of
+ None.
+
+ .. change::
+ :tags: bug, sql
+ :tickets: 2974
+
+ Adjusted the logic which applies names to the .c collection when
+ a no-name :class:`.BindParameter` is received, e.g. via :func:`.sql.literal`
+ or similar; the "key" of the bind param is used as the key within
+ .c. rather than the rendered name. Since these binds have "anonymous"
+ names in any case, this allows individual bound parameters to
+ have their own name within a selectable if they are otherwise unlabeled.
+
+ .. change::
+ :tags: bug, sql
+ :tickets: 2974
+
+ Some changes to how the :attr:`.FromClause.c` collection behaves
+ when presented with duplicate columns. The behavior of emitting a
+ warning and replacing the old column with the same name still
+ remains to some degree; the replacement in particular is to maintain
+ backwards compatibility. However, the replaced column still remains
+ associated with the ``c`` collection now in a collection ``._all_columns``,
+ which is used by constructs such as aliases and unions, to deal with
+ the set of columns in ``c`` more towards what is actually in the
+ list of columns rather than the unique set of key names. This helps
+ with situations where SELECT statements with same-named columns
+ are used in unions and such, so that the union can match the columns
+ up positionally and also there's some chance of :meth:`.FromClause.corresponding_column`
+ still being usable here (it can now return a column that is only
+ in selectable.c._all_columns and not otherwise named).
+ The new collection is underscored as we still need to decide where this
+ list might end up. Theoretically it
+ would become the result of iter(selectable.c), however this would mean
+ that the length of the iteration would no longer match the length of
+ keys(), and that behavior needs to be checked out.
+
+ .. change::
+ :tags: bug, sql
+
+ Fixed issue in new :meth:`.TextClause.columns` method where the ordering
+ of columns given positionally would not be preserved. This could
+ have potential impact in positional situations such as applying the
+ resulting :class:`.TextAsFrom` object to a union.
+
+ .. change::
+ :tags: feature, sql
+ :tickets: 2962, 2866
+
+ The new dialect-level keyword argument system for schema-level
+ constructs has been enhanced in order to assist with existing
+ schemes that rely upon addition of ad-hoc keyword arguments to
+ constructs.
+
+ E.g., a construct such as :class:`.Index` will again accept
+ ad-hoc keyword arguments within the :attr:`.Index.kwargs` collection,
+ after construction::
+
+ idx = Index('a', 'b')
+ idx.kwargs['mysql_someargument'] = True
+
+ To suit the use case of allowing custom arguments at construction time,
+ the :meth:`.DialectKWArgs.argument_for` method now allows this registration::
+
+ Index.argument_for('mysql', 'someargument', False)
+
+ idx = Index('a', 'b', mysql_someargument=True)
+
+ .. seealso::
+
+ :meth:`.DialectKWArgs.argument_for`
+
+ .. change::
+ :tags: bug, orm, engine
+ :tickets: 2973
+
+ Fixed bug where events set to listen at the class
+ level (e.g. on the :class:`.Mapper` or :class:`.ClassManager`
+ level, as opposed to on an individual mapped class, and also on
+ :class:`.Connection`) that also made use of internal argument conversion
+ (which is most within those categories) would fail to be removable.
+
+ .. change::
+ :tags: bug, orm
+
+ Fixed regression from 0.8 where using an option like
+ :func:`.orm.lazyload` with the "wildcard" expression, e.g. ``"*"``,
+ would raise an assertion error in the case where the query didn't
+ contain any actual entities. This assertion is meant for other cases
+ and was catching this one inadvertently.
+
+ .. change::
+ :tags: bug, examples
+
+ Fixed bug in the versioned_history example where column-level INSERT
+ defaults would prevent history values of NULL from being written.
+
+ .. change::
:tags: orm, bug, sqlite
:tickets: 2969
14 doc/build/core/constraints.rst
@@ -431,26 +431,33 @@ name as follows::
Constraints API
---------------
.. autoclass:: Constraint
-
+ :members:
.. autoclass:: CheckConstraint
-
+ :members:
+ :inherited-members:
.. autoclass:: ColumnCollectionConstraint
-
+ :members:
.. autoclass:: ForeignKey
:members:
+ :inherited-members:
.. autoclass:: ForeignKeyConstraint
:members:
+ :inherited-members:
.. autoclass:: PrimaryKeyConstraint
+ :members:
+ :inherited-members:
.. autoclass:: UniqueConstraint
+ :members:
+ :inherited-members:
.. _schema_indexes:
@@ -569,3 +576,4 @@ Index API
.. autoclass:: Index
:members:
+ :inherited-members:
3  doc/build/core/sqlelement.rst
@@ -100,6 +100,9 @@ used to construct any kind of typed SQL expression.
:special-members:
:inherited-members:
+.. autoclass:: sqlalchemy.sql.base.DialectKWArgs
+ :members:
+
.. autoclass:: Extract
:members:
13 doc/build/glossary.rst
@@ -292,6 +292,19 @@ Glossary
:doc:`orm/session`
+ expire
+ expires
+ expiring
+ In the SQLAlchemy ORM, refers to when the data in a :term:`persistent`
+ or sometimes :term:`detached` object is erased, such that when
+ the object's attributes are next accessed, a :term:`lazy load` SQL
+ query will be emitted in order to refresh the data for this object
+ as stored in the current ongoing transaction.
+
+ .. seealso::
+
+ :ref:`session_expire`
+
Session
The container or scope for ORM database operations. Sessions
load instances from the database, track changes to mapped
275 doc/build/orm/session.rst
@@ -1005,79 +1005,234 @@ The :meth:`~.Session.close` method issues a
transactional/connection resources. When connections are returned to the
connection pool, transactional state is rolled back as well.
+.. _session_expire:
+
Refreshing / Expiring
---------------------
-The Session normally works in the context of an ongoing transaction (with the
-default setting of autoflush=False). Most databases offer "isolated"
-transactions - this refers to a series of behaviors that allow the work within
-a transaction to remain consistent as time passes, regardless of the
-activities outside of that transaction. A key feature of a high degree of
-transaction isolation is that emitting the same SELECT statement twice will
-return the same results as when it was called the first time, even if the data
-has been modified in another transaction.
-
-For this reason, the :class:`.Session` gains very efficient behavior by
-loading the attributes of each instance only once. Subsequent reads of the
-same row in the same transaction are assumed to have the same value. The
-user application also gains directly from this assumption, that the transaction
-is regarded as a temporary shield against concurrent changes - a good application
-will ensure that isolation levels are set appropriately such that this assumption
-can be made, given the kind of data being worked with.
-
-To clear out the currently loaded state on an instance, the instance or its individual
-attributes can be marked as "expired", which results in a reload to
-occur upon next access of any of the instance's attrbutes. The instance
-can also be immediately reloaded from the database. The :meth:`~.Session.expire`
-and :meth:`~.Session.refresh` methods achieve this::
-
- # immediately re-load attributes on obj1, obj2
- session.refresh(obj1)
- session.refresh(obj2)
+:term:`Expiring` means that the database-persisted data held inside a series
+of object attributes is erased, in such a way that when those attributes
+are next accessed, a SQL query is emitted which will refresh that data from
+the database.
+
+When we talk about expiration of data we are usually talking about an object
+that is in the :term:`persistent` state. For example, if we load an object
+as follows::
+
+ user = session.query(User).filter_by(name='user1').first()
+
+The above ``User`` object is persistent, and has a series of attributes
+present; if we were to look inside its ``__dict__``, we'd see that state
+loaded::
+
+ >>> user.__dict__
+ {
+ 'id': 1, 'name': u'user1',
+ '_sa_instance_state': <...>,
+ }
- # expire objects obj1, obj2, attributes will be reloaded
- # on the next access:
+where ``id`` and ``name`` refer to those columns in the database.
+``_sa_instance_state`` is a non-database-persisted value used by SQLAlchemy
+internally (it refers to the :class:`.InstanceState` for the instance.
+While not directly relevant to this section, if we want to get at it,
+we should use the :func:`.inspect` function to access it).
+
+At this point, the state in our ``User`` object matches that of the loaded
+database row. But upon expiring the object using a method such as
+:meth:`.Session.expire`, we see that the state is removed::
+
+ >>> session.expire(user)
+ >>> user.__dict__
+ {'_sa_instance_state': <...>}
+
+We see that while the internal "state" still hangs around, the values which
+correspond to the ``id`` and ``name`` columns are gone. If we were to access
+one of these columns and are watching SQL, we'd see this:
+
+.. sourcecode:: python+sql
+
+ >>> print(user.name)
+ {opensql}SELECT user.id AS user_id, user.name AS user_name
+ FROM user
+ WHERE user.id = ?
+ (1,)
+ {stop}user1
+
+Above, upon accessing the expired attribute ``user.name``, the ORM initiated
+a :term:`lazy load` to retrieve the most recent state from the database,
+by emitting a SELECT for the user row to which this user refers. Afterwards,
+the ``__dict__`` is again populated::
+
+ >>> user.__dict__
+ {
+ 'id': 1, 'name': u'user1',
+ '_sa_instance_state': <...>,
+ }
+
+.. note:: While we are peeking inside of ``__dict__`` in order to see a bit
+ of what SQLAlchemy does with object attributes, we **should not modify**
+ the contents of ``__dict__`` directly, at least as far as those attributes
+ which the SQLAlchemy ORM is maintaining (other attributes outside of SQLA's
+ realm are fine). This is because SQLAlchemy uses :term:`descriptors` in
+ order to track the changes we make to an object, and when we modify ``__dict__``
+ directly, the ORM won't be able to track that we changed something.
+
+Another key behavior of both :meth:`~.Session.expire` and :meth:`~.Session.refresh`
+is that all un-flushed changes on an object are discarded. That is,
+if we were to modify an attribute on our ``User``::
+
+ >>> user.name = 'user2'
+
+but then we call :meth:`~.Session.expire` without first calling :meth:`~.Session.flush`,
+our pending value of ``'user2'`` is discarded::
+
+ >>> session.expire(user)
+ >>> user.name
+ 'user1'
+
+The :meth:`~.Session.expire` method can be used to mark as "expired" all ORM-mapped
+attributes for an instance::
+
+ # expire all ORM-mapped attributes on obj1
session.expire(obj1)
- session.expire(obj2)
-When an expired object reloads, all non-deferred column-based attributes are
-loaded in one query. Current behavior for expired relationship-based
-attributes is that they load individually upon access - this behavior may be
-enhanced in a future release. When a refresh is invoked on an object, the
-ultimate operation is equivalent to a :meth:`.Query.get`, so any relationships
-configured with eager loading should also load within the scope of the refresh
-operation.
+it can also be passed a list of string attribute names, referring to specific
+attributes to be marked as expired::
-:meth:`~.Session.refresh` and
-:meth:`~.Session.expire` also support being passed a
-list of individual attribute names in which to be refreshed. These names can
-refer to any attribute, column-based or relationship based::
+ # expire only attributes obj1.attr1, obj1.attr2
+ session.expire(obj1, ['attr1', 'attr2'])
- # immediately re-load the attributes 'hello', 'world' on obj1, obj2
- session.refresh(obj1, ['hello', 'world'])
- session.refresh(obj2, ['hello', 'world'])
+The :meth:`~.Session.refresh` method has a similar interface, but instead
+of expiring, it emits an immediate SELECT for the object's row::
- # expire the attributes 'hello', 'world' objects obj1, obj2, attributes will be reloaded
- # on the next access:
- session.expire(obj1, ['hello', 'world'])
- session.expire(obj2, ['hello', 'world'])
+ # reload all attributes on obj1
+ session.refresh(obj1)
+
+:meth:`~.Session.refresh` also accepts a list of string attribute names,
+but unlike :meth:`~.Session.expire`, expects at least one name to
+be that of a column-mapped attribute::
+
+ # reload obj1.attr1, obj1.attr2
+ session.refresh(obj1, ['attr1', 'attr2'])
-The full contents of the session may be expired at once using
-:meth:`~.Session.expire_all`::
+The :meth:`.Session.expire_all` method allows us to essentially call
+:meth:`.Session.expire` on all objects contained within the :class:`.Session`
+at once::
session.expire_all()
-Note that :meth:`~.Session.expire_all` is called **automatically** whenever
-:meth:`~.Session.commit` or :meth:`~.Session.rollback` are called. If using the
-session in its default mode of autocommit=False and with a well-isolated
-transactional environment (which is provided by most backends with the notable
-exception of MySQL MyISAM), there is virtually *no reason* to ever call
-:meth:`~.Session.expire_all` directly - plenty of state will remain on the
-current transaction until it is rolled back or committed or otherwise removed.
-
-:meth:`~.Session.refresh` and :meth:`~.Session.expire` similarly are usually
-only necessary when an UPDATE or DELETE has been issued manually within the
-transaction using :meth:`.Session.execute()`.
+What Actually Loads
+~~~~~~~~~~~~~~~~~~~
+
+The SELECT statement that's emitted for an object marked with :meth:`~.Session.expire`
+or loaded with :meth:`~.Session.refresh` varies based on several factors, including:
+
+* The load of expired attributes is triggered from **column-mapped attributes only**.
+ While any kind of attribute can be marked as expired, including a
+ :func:`.relationship` - mapped attribute, accessing an expired :func:`.relationship`
+ attribute will emit a load only for that attribute, using standard
+ relationship-oriented lazy loading. Column-oriented attributes, even if
+ expired, will not load as part of this operation, and instead will load when
+ any column-oriented attribute is accessed.
+
+* :func:`.relationship`- mapped attributes will not load in response to
+ expired column-based attributes being accessed.
+
+* Regarding relationships, :meth:`~.Session.refresh` is more restrictive than
+ :meth:`~.Session.expire` with regards to attributes that aren't column-mapped.
+ Calling :meth:`.refresh` and passing a list of names that only includes
+ relationship-mapped attributes will actually raise an error.
+ In any case, non-eager-loading :func:`.relationship` attributes will not be
+ included in any refresh operation.
+
+* :func:`.relationship` attributes configured as "eager loading" via the
+ :paramref:`~.relationship.lazy` parameter will load in the case of
+ :meth:`~.Session.refresh`, if either no attribute names are specified, or
+ if their names are included in the list of attributes to be
+ refreshed.
+
+* Attributes that are configured as :func:`.deferred` will not normally load,
+ during either the expired-attribute load or during a refresh.
+ An unloaded attribute that's :func:`.deferred` instead loads on its own when directly
+ accessed, or if part of a "group" of deferred attributes where an unloaded
+ attribute in that group is accessed.
+
+* For expired attributes that are loaded on access, a joined-inheritance table
+ mapping will emit a SELECT that typically only includes those tables for which
+ unloaded attributes are present. The action here is sophisticated enough
+ to load only the parent or child table, for example, if the subset of columns
+ that were originally expired encompass only one or the other of those tables.
+
+* When :meth:`~.Session.refresh` is used on a joined-inheritance table mapping,
+ the SELECT emitted will resemble that of when :meth:`.Session.query` is
+ used on the target object's class. This is typically all those tables that
+ are set up as part of the mapping.
+
+
+When to Expire or Refresh
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The :class:`.Session` uses the expiration feature automatically whenever
+the transaction referred to by the session ends. Meaning, whenever :meth:`.Session.commit`
+or :meth:`.Session.rollback` is called, all objects within the :class:`.Session`
+are expired, using a feature equivalent to that of the :meth:`.Session.expire_all`
+method. The rationale is that the end of a transaction is a
+demarcating point at which there is no more context available in order to know
+what the current state of the database is, as any number of other transactions
+may be affecting it. Only when a new transaction starts can we again have access
+to the current state of the database, at which point any number of changes
+may have occurred.
+
+.. sidebar:: Transaction Isolation
+
+ Of course, most databases are capable of handling
+ multiple transactions at once, even involving the same rows of data. When
+ a relational database handles multiple transactions involving the same
+ tables or rows, this is when the :term:`isolation` aspect of the database comes
+ into play. The isolation behavior of different databases varies considerably
+ and even on a single database can be configured to behave in different ways
+ (via the so-called :term:`isolation level` setting). In that sense, the :class:`.Session`
+ can't fully predict when the same SELECT statement, emitted a second time,
+ will definitely return the data we already have, or will return new data.
+ So as a best guess, it assumes that within the scope of a transaction, unless
+ it is known that a SQL expression has been emitted to modify a particular row,
+ there's no need to refresh a row unless explicitly told to do so.
+
+The :meth:`.Session.expire` and :meth:`.Session.refresh` methods are used in
+those cases when one wants to force an object to re-load its data from the
+database, in those cases when it is known that the current state of data
+is possibly stale. Reasons for this might include:
+
+* some SQL has been emitted within the transaction outside of the
+ scope of the ORM's object handling, such as if a :meth:`.Table.update` construct
+ were emitted using the :meth:`.Session.execute` method;
+
+* if the application
+ is attempting to acquire data that is known to have been modified in a
+ concurrent transaction, and it is also known that the isolation rules in effect
+ allow this data to be visible.
+
+The second bullet has the important caveat that "it is also known that the isolation rules in effect
+allow this data to be visible." This means that it cannot be assumed that an
+UPDATE that happened on another database connection will yet be visible here
+locally; in many cases, it will not. This is why if one wishes to use
+:meth:`.expire` or :meth:`.refresh` in order to view data between ongoing
+transactions, an understanding of the isolation behavior in effect is essential.
+
+.. seealso::
+
+ :meth:`.Session.expire`
+
+ :meth:`.Session.expire_all`
+
+ :meth:`.Session.refresh`
+
+ :term:`isolation` - glossary explanation of isolation which includes links
+ to Wikipedia.
+
+ `The SQLAlchemy Session In-Depth <http://techspot.zzzeek.org/2012/11/14/pycon-canada-the-sqlalchemy-session-in-depth/>`_ - a video + slides with an in-depth discussion of the object
+ lifecycle including the role of data expiration.
+
Session Attributes
------------------
12 examples/versioned_history/history_meta.py
@@ -32,14 +32,19 @@ def _history_mapper(local_mapper):
polymorphic_on = None
super_fks = []
+ def _col_copy(col):
+ col = col.copy()
+ col.unique = False
+ col.default = col.server_default = None
+ return col
+
if not super_mapper or local_mapper.local_table is not super_mapper.local_table:
cols = []
for column in local_mapper.local_table.c:
if _is_versioning_col(column):
continue
- col = column.copy()
- col.unique = False
+ col = _col_copy(column)
if super_mapper and col_references_table(column, super_mapper.local_table):
super_fks.append((col.key, list(super_history_mapper.local_table.primary_key)[0]))
@@ -80,8 +85,7 @@ def _history_mapper(local_mapper):
# been added and add them to the history table.
for column in local_mapper.local_table.c:
if column.key not in super_history_mapper.local_table.c:
- col = column.copy()
- col.unique = False
+ col = _col_copy(column)
super_history_mapper.local_table.append_column(col)
table = None
31 examples/versioned_history/test_versioning.py
@@ -3,7 +3,7 @@
from unittest import TestCase
from sqlalchemy.ext.declarative import declarative_base
from .history_meta import Versioned, versioned_session
-from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
+from sqlalchemy import create_engine, Column, Integer, String, ForeignKey, Boolean
from sqlalchemy.orm import clear_mappers, Session, deferred, relationship
from sqlalchemy.testing import AssertsCompiledSQL, eq_, assert_raises
from sqlalchemy.testing.entities import ComparableEntity
@@ -142,6 +142,35 @@ class SomeClass(Versioned, self.Base, ComparableEntity):
assert sc.version == 2
+ def test_insert_null(self):
+ class SomeClass(Versioned, self.Base, ComparableEntity):
+ __tablename__ = 'sometable'
+
+ id = Column(Integer, primary_key=True)
+ boole = Column(Boolean, default=False)
+
+ self.create_tables()
+ sess = self.session
+ sc = SomeClass(boole=True)
+ sess.add(sc)
+ sess.commit()
+
+ sc.boole = None
+ sess.commit()
+
+ sc.boole = False
+ sess.commit()
+
+ SomeClassHistory = SomeClass.__history_mapper__.class_
+
+ eq_(
+ sess.query(SomeClassHistory.boole).order_by(SomeClassHistory.id).all(),
+ [(True, ), (None, )]
+ )
+
+ eq_(sc.version, 3)
+
+
def test_deferred(self):
"""test versioning of unloaded, deferred columns."""
20 lib/sqlalchemy/dialects/oracle/cx_oracle.py
@@ -70,7 +70,12 @@
of a unicode-converting outputtypehandler in Python 2 (not Python 3) incurs
significant performance overhead for all statements that deliver string
results, whether or not values contain non-ASCII characters. For this reason,
-SQLAlchemy as of 0.9.2 does not use cx_Oracle's outputtypehandlers for unicode conversion.
+SQLAlchemy as of 0.9.2 does not use cx_Oracle's outputtypehandlers for unicode
+conversion by default. If you want to use this feature anyway, you can enable
+it by passing the flag ``coerce_to_unicode=True`` to :func:`.create_engine`::
+
+ engine = create_engine("oracle+cx_oracle://dsn",
+ coerce_to_unicode=True)
Keeping in mind that any NVARCHAR or NCLOB type is returned as Python unicode
unconditionally, in order for VARCHAR values to be returned as Python unicode
@@ -90,6 +95,9 @@
performance bottleneck. SQLAlchemy's own unicode facilities are used
instead.
+.. versionadded:: 0.9.4
+ Add the ``coerce_to_unicode`` flag.
+
.. _cx_oracle_returning:
RETURNING Support
@@ -602,6 +610,7 @@ def __init__(self,
threaded=True,
allow_twophase=True,
coerce_to_decimal=True,
+ coerce_to_unicode=False,
arraysize=50, **kwargs):
OracleDialect.__init__(self, **kwargs)
self.threaded = threaded
@@ -630,6 +639,11 @@ def types(*names):
self._cx_oracle_binary_types = types("BFILE", "CLOB", "NCLOB", "BLOB")
self.supports_unicode_binds = self.cx_oracle_ver >= (5, 0)
+ self.coerce_to_unicode = (
+ self.cx_oracle_ver >= (5, 0) and
+ coerce_to_unicode
+ )
+
self.supports_native_decimal = (
self.cx_oracle_ver >= (5, 0) and
coerce_to_decimal
@@ -773,6 +787,10 @@ def output_type_handler(cursor, name, defaultType,
255,
outconverter=self._detect_decimal,
arraysize=cursor.arraysize)
+ # allow all strings to come back natively as Unicode
+ elif self.coerce_to_unicode and \
+ defaultType in (cx_Oracle.STRING, cx_Oracle.FIXED_CHAR):
+ return cursor.var(util.text_type, size, cursor.arraysize)
def on_connect(conn):
conn.outputtypehandler = output_type_handler
30 lib/sqlalchemy/engine/default.py
@@ -115,8 +115,7 @@ class DefaultDialect(interfaces.Dialect):
"""Optional set of argument specifiers for various SQLAlchemy
constructs, typically schema items.
- To
- implement, establish as a series of tuples, as in::
+ To implement, establish as a series of tuples, as in::
construct_arguments = [
(schema.Index, {
@@ -127,14 +126,25 @@ class DefaultDialect(interfaces.Dialect):
]
If the above construct is established on the Postgresql dialect,
- the ``Index`` construct will now accept additional keyword arguments
- such as ``postgresql_using``, ``postgresql_where``, etc. Any kind of
- ``postgresql_XYZ`` argument not corresponding to the above template will
- be rejected with an ``ArgumentError`, for all those SQLAlchemy constructs
- which implement the :class:`.DialectKWArgs` class.
-
- The default is ``None``; older dialects which don't implement the argument
- will have the old behavior of un-validated kwargs to schema/SQL constructs.
+ the :class:`.Index` construct will now accept the keyword arguments
+ ``postgresql_using``, ``postgresql_where``, and ``postgresql_ops``.
+ Any other argument specified to the constructor of :class:`.Index`
+ which is prefixed with ``postgresql_`` will raise :class:`.ArgumentError`.
+
+ A dialect which does not include a ``construct_arguments`` member will
+ not participate in the argument validation system. For such a dialect,
+ any argument name is accepted by all participating constructs, within
+ the namespace of arguments prefixed with that dialect name. The rationale
+ here is so that third-party dialects that haven't yet implemented this
+ feature continue to function in the old way.
+
+ .. versionadded:: 0.9.2
+
+ .. seealso::
+
+ :class:`.DialectKWArgs` - implementing base class which consumes
+ :attr:`.DefaultDialect.construct_arguments`
+
"""
2  lib/sqlalchemy/event/attr.py
@@ -135,7 +135,7 @@ def remove(self, event_key):
cls = stack.pop(0)
stack.extend(cls.__subclasses__())
if cls in self._clslevel:
- self._clslevel[cls].remove(event_key.fn)
+ self._clslevel[cls].remove(event_key._listen_fn)
registry._removed_from_collection(event_key, self)
def clear(self):
8 lib/sqlalchemy/ext/associationproxy.py
@@ -243,10 +243,7 @@ def __get__(self, obj, class_):
if self.scalar:
target = getattr(obj, self.target_collection)
- if target is not None:
- return self._scalar_get(target)
- else:
- return None
+ return self._scalar_get(target)
else:
try:
# If the owning instance is reborn (orm session resurrect,
@@ -291,7 +288,8 @@ def _initialize_scalar_accessors(self):
def _default_getset(self, collection_class):
attr = self.value_attr
- getter = operator.attrgetter(attr)
+ _getter = operator.attrgetter(attr)
+ getter = lambda target: _getter(target) if target is not None else None
if collection_class is dict:
setter = lambda o, k, v: setattr(o, attr, v)
else:
8 lib/sqlalchemy/orm/interfaces.py
@@ -31,16 +31,16 @@
'AttributeExtension',
'EXT_CONTINUE',
'EXT_STOP',
- 'ExtensionOption',
- 'InstrumentationManager',
+ 'ONETOMANY',
+ 'MANYTOMANY',
+ 'MANYTOONE',
+ 'NOT_EXTENSION',
'LoaderStrategy',
'MapperExtension',
'MapperOption',
'MapperProperty',
'PropComparator',
- 'PropertyOption',
'SessionExtension',
- 'StrategizedOption',
'StrategizedProperty',
)
24 lib/sqlalchemy/orm/session.py
@@ -1220,6 +1220,14 @@ def refresh(self, instance, attribute_names=None, lockmode=None):
:param lockmode: Passed to the :class:`~sqlalchemy.orm.query.Query`
as used by :meth:`~sqlalchemy.orm.query.Query.with_lockmode`.
+ .. seealso::
+
+ :ref:`session_expire` - introductory material
+
+ :meth:`.Session.expire`
+
+ :meth:`.Session.expire_all`
+
"""
try:
state = attributes.instance_state(instance)
@@ -1258,6 +1266,14 @@ def expire_all(self):
calling :meth:`Session.expire_all` should not be needed when
autocommit is ``False``, assuming the transaction is isolated.
+ .. seealso::
+
+ :ref:`session_expire` - introductory material
+
+ :meth:`.Session.expire`
+
+ :meth:`.Session.refresh`
+
"""
for state in self.identity_map.all_states():
state._expire(state.dict, self.identity_map._modified)
@@ -1288,6 +1304,14 @@ def expire(self, instance, attribute_names=None):
:param attribute_names: optional list of string attribute names
indicating a subset of attributes to be expired.
+ .. seealso::
+
+ :ref:`session_expire` - introductory material
+
+ :meth:`.Session.expire`
+
+ :meth:`.Session.refresh`
+
"""
try:
state = attributes.instance_state(instance)
2  lib/sqlalchemy/orm/strategy_options.py
@@ -431,6 +431,8 @@ def _find_entity_basestring(self, query, token, raiseerr):
"Wildcard loader can only be used with exactly "
"one entity. Use Load(ent) to specify "
"specific entities.")
+ elif token.endswith(_DEFAULT_TOKEN):
+ raiseerr = False
for ent in query._mapper_entities:
# return only the first _MapperEntity when searching
237 lib/sqlalchemy/sql/base.py
@@ -44,12 +44,145 @@ def _generative(fn, *args, **kw):
return self
+class _DialectArgView(collections.MutableMapping):
+ """A dictionary view of dialect-level arguments in the form
+ <dialectname>_<argument_name>.
+
+ """
+ def __init__(self, obj):
+ self.obj = obj
+
+ def _key(self, key):
+ try:
+ dialect, value_key = key.split("_", 1)
+ except ValueError:
+ raise KeyError(key)
+ else:
+ return dialect, value_key
+
+ def __getitem__(self, key):
+ dialect, value_key = self._key(key)
+
+ try:
+ opt = self.obj.dialect_options[dialect]
+ except exc.NoSuchModuleError:
+ raise KeyError(key)
+ else:
+ return opt[value_key]
+
+ def __setitem__(self, key, value):
+ try:
+ dialect, value_key = self._key(key)
+ except KeyError:
+ raise exc.ArgumentError(
+ "Keys must be of the form <dialectname>_<argname>")
+ else:
+ self.obj.dialect_options[dialect][value_key] = value
+
+ def __delitem__(self, key):
+ dialect, value_key = self._key(key)
+ del self.obj.dialect_options[dialect][value_key]
+
+ def __len__(self):
+ return sum(len(args._non_defaults) for args in
+ self.obj.dialect_options.values())
+
+ def __iter__(self):
+ return (
+ "%s_%s" % (dialect_name, value_name)
+ for dialect_name in self.obj.dialect_options
+ for value_name in self.obj.dialect_options[dialect_name]._non_defaults
+ )
+
+class _DialectArgDict(collections.MutableMapping):
+ """A dictionary view of dialect-level arguments for a specific
+ dialect.
+
+ Maintains a separate collection of user-specified arguments
+ and dialect-specified default arguments.
+
+ """
+ def __init__(self):
+ self._non_defaults = {}
+ self._defaults = {}
+
+ def __len__(self):
+ return len(set(self._non_defaults).union(self._defaults))
+
+ def __iter__(self):
+ return iter(set(self._non_defaults).union(self._defaults))
+
+ def __getitem__(self, key):
+ if key in self._non_defaults:
+ return self._non_defaults[key]
+ else:
+ return self._defaults[key]
+
+ def __setitem__(self, key, value):
+ self._non_defaults[key] = value
+
+ def __delitem__(self, key):
+ del self._non_defaults[key]
+
+
class DialectKWArgs(object):
"""Establish the ability for a class to have dialect-specific arguments
- with defaults and validation.
+ with defaults and constructor validation.
+
+ The :class:`.DialectKWArgs` interacts with the
+ :attr:`.DefaultDialect.construct_arguments` present on a dialect.
+
+ .. seealso::
+
+ :attr:`.DefaultDialect.construct_arguments`
"""
+ @classmethod
+ def argument_for(cls, dialect_name, argument_name, default):
+ """Add a new kind of dialect-specific keyword argument for this class.
+
+ E.g.::
+
+ Index.argument_for("mydialect", "length", None)
+
+ some_index = Index('a', 'b', mydialect_length=5)
+
+ The :meth:`.DialectKWArgs.argument_for` method is a per-argument
+ way of adding extra arguments to the :attr:`.DefaultDialect.construct_arguments`
+ dictionary. This dictionary provides a list of argument names accepted by
+ various schema-level constructs on behalf of a dialect.
+
+ New dialects should typically specify this dictionary all at once as a data
+ member of the dialect class. The use case for ad-hoc addition of
+ argument names is typically for end-user code that is also using
+ a custom compilation scheme which consumes the additional arguments.
+
+ :param dialect_name: name of a dialect. The dialect must be locatable,
+ else a :class:`.NoSuchModuleError` is raised. The dialect must
+ also include an existing :attr:`.DefaultDialect.construct_arguments` collection,
+ indicating that it participates in the keyword-argument validation and
+ default system, else :class:`.ArgumentError` is raised.
+ If the dialect does not include this collection, then any keyword argument
+ can be specified on behalf of this dialect already. All dialects
+ packaged within SQLAlchemy include this collection, however for third
+ party dialects, support may vary.
+
+ :param argument_name: name of the parameter.
+
+ :param default: default value of the parameter.
+
+ .. versionadded:: 0.9.4
+
+ """
+
+ construct_arg_dictionary = DialectKWArgs._kw_registry[dialect_name]
+ if construct_arg_dictionary is None:
+ raise exc.ArgumentError("Dialect '%s' does not have keyword-argument "
+ "validation and defaults enabled" %
+ dialect_name)
+ construct_arg_dictionary[cls][argument_name] = default
+
@util.memoized_property
def dialect_kwargs(self):
"""A collection of keyword arguments specified as dialect-specific
@@ -60,19 +193,25 @@ def dialect_kwargs(self):
unlike the :attr:`.DialectKWArgs.dialect_options` collection, which
contains all options known by this dialect including defaults.
+ The collection is also writable; keys are accepted of the
+ form ``<dialect>_<kwarg>`` where the value will be assembled
+ into the list of options.
+
.. versionadded:: 0.9.2
+ .. versionchanged:: 0.9.4 The :attr:`.DialectKWArgs.dialect_kwargs`
+ collection is now writable.
+
.. seealso::
:attr:`.DialectKWArgs.dialect_options` - nested dictionary form
"""
-
- return util.immutabledict()
+ return _DialectArgView(self)
@property
def kwargs(self):
- """Deprecated; see :attr:`.DialectKWArgs.dialect_kwargs"""
+ """A synonym for :attr:`.DialectKWArgs.dialect_kwargs`."""
return self.dialect_kwargs
@util.dependencies("sqlalchemy.dialects")
@@ -85,14 +224,15 @@ def _kw_reg_for_dialect(dialects, dialect_name):
def _kw_reg_for_dialect_cls(self, dialect_name):
construct_arg_dictionary = DialectKWArgs._kw_registry[dialect_name]
+ d = _DialectArgDict()
+
if construct_arg_dictionary is None:
- return {"*": None}
+ d._defaults.update({"*": None})
else:
- d = {}
for cls in reversed(self.__class__.__mro__):
if cls in construct_arg_dictionary:
- d.update(construct_arg_dictionary[cls])
- return d
+ d._defaults.update(construct_arg_dictionary[cls])
+ return d
@util.memoized_property
def dialect_options(self):
@@ -123,11 +263,9 @@ def _validate_dialect_kwargs(self, kwargs):
if not kwargs:
return
- self.dialect_kwargs = self.dialect_kwargs.union(kwargs)
-
for k in kwargs:
m = re.match('^(.+?)_(.+)$', k)
- if m is None:
+ if not m:
raise TypeError("Additional arguments should be "
"named <dialectname>_<argument>, got '%s'" % k)
dialect_name, arg_name = m.group(1, 2)
@@ -139,9 +277,9 @@ def _validate_dialect_kwargs(self, kwargs):
"Can't validate argument %r; can't "
"locate any SQLAlchemy dialect named %r" %
(k, dialect_name))
- self.dialect_options[dialect_name] = {
- "*": None,
- arg_name: kwargs[k]}
+ self.dialect_options[dialect_name] = d = _DialectArgDict()
+ d._defaults.update({"*": None})
+ d._non_defaults[arg_name] = kwargs[k]
else:
if "*" not in construct_arg_dictionary and \
arg_name not in construct_arg_dictionary:
@@ -297,10 +435,10 @@ class ColumnCollection(util.OrderedProperties):
"""
- def __init__(self, *cols):
+ def __init__(self):
super(ColumnCollection, self).__init__()
- self._data.update((c.key, c) for c in cols)
- self.__dict__['_all_cols'] = util.column_set(self)
+ self.__dict__['_all_col_set'] = util.column_set()
+ self.__dict__['_all_columns'] = []
def __str__(self):
return repr([str(c) for c in self])
@@ -321,15 +459,26 @@ def replace(self, column):
Used by schema.Column to override columns during table reflection.
"""
+ remove_col = None
if column.name in self and column.key != column.name:
other = self[column.name]
if other.name == other.key:
- del self._data[other.name]
- self._all_cols.remove(other)
+ remove_col = other
+ self._all_col_set.remove(other)
+ del self._data[other.key]
+
if column.key in self._data:
- self._all_cols.remove(self._data[column.key])
- self._all_cols.add(column)
+ remove_col = self._data[column.key]
+ self._all_col_set.remove(remove_col)
+
+ self._all_col_set.add(column)
self._data[column.key] = column
+ if remove_col is not None:
+ self._all_columns[:] = [column if c is remove_col
+ else c for c in self._all_columns]
+ else:
+ self._all_columns.append(column)
+
def add(self, column):
"""Add a column to this collection.
@@ -359,37 +508,43 @@ def __setitem__(self, key, value):
'%r, which has the same key. Consider '
'use_labels for select() statements.' % (key,
getattr(existing, 'table', None), value))
- self._all_cols.remove(existing)
+
# pop out memoized proxy_set as this
# operation may very well be occurring
# in a _make_proxy operation
util.memoized_property.reset(value, "proxy_set")
- self._all_cols.add(value)
+
+ self._all_col_set.add(value)
+ self._all_columns.append(value)
self._data[key] = value
def clear(self):
- self._data.clear()
- self._all_cols.clear()
+ raise NotImplementedError()
def remove(self, column):
del self._data[column.key]
- self._all_cols.remove(column)
+ self._all_col_set.remove(column)
+ self._all_columns[:] = [c for c in self._all_columns if c is not column]
- def update(self, value):
- self._data.update(value)
- self._all_cols.clear()
- self._all_cols.update(self._data.values())
+ def update(self, iter):
+ cols = list(iter)
+ self._all_columns.extend(c for label, c in cols if c not in self._all_col_set)
+ self._all_col_set.update(c for label, c in cols)
+ self._data.update((label, c) for label, c in cols)
def extend(self, iter):
- self.update((c.key, c) for c in iter)
+ cols = list(iter)
+ self._all_columns.extend(c for c in cols if c not in self._all_col_set)
+ self._all_col_set.update(cols)
+ self._data.update((c.key, c) for c in cols)
__hash__ = None
@util.dependencies("sqlalchemy.sql.elements")
def __eq__(self, elements, other):
l = []
- for c in other:
- for local in self:
+ for c in getattr(other, "_all_columns", other):
+ for local in self._all_columns:
if c.shares_lineage(local):
l.append(c == local)
return elements.and_(*l)
@@ -399,22 +554,28 @@ def __contains__(self, other):
raise exc.ArgumentError("__contains__ requires a string argument")
return util.OrderedProperties.__contains__(self, other)
+ def __getstate__(self):
+ return {'_data': self.__dict__['_data'],
+ '_all_columns': self.__dict__['_all_columns']}
+
def __setstate__(self, state):
self.__dict__['_data'] = state['_data']
- self.__dict__['_all_cols'] = util.column_set(self._data.values())
+ self.__dict__['_all_columns'] = state['_all_columns']
+ self.__dict__['_all_col_set'] = util.column_set(state['_all_columns'])
def contains_column(self, col):
# this has to be done via set() membership
- return col in self._all_cols
+ return col in self._all_col_set
def as_immutable(self):
- return ImmutableColumnCollection(self._data, self._all_cols)
+ return ImmutableColumnCollection(self._data, self._all_col_set, self._all_columns)
class ImmutableColumnCollection(util.ImmutableProperties, ColumnCollection):
- def __init__(self, data, colset):
+ def __init__(self, data, colset, all_columns):
util.ImmutableProperties.__init__(self, data)
- self.__dict__['_all_cols'] = colset
+ self.__dict__['_all_col_set'] = colset
+ self.__dict__['_all_columns'] = all_columns
extend = remove = util.ImmutableProperties._immutable
29 lib/sqlalchemy/sql/elements.py
@@ -588,7 +588,7 @@ class ColumnElement(ClauseElement, operators.ColumnOperators):
primary_key = False
foreign_keys = []
_label = None
- _key_label = None
+ _key_label = key = None
_alt_names = ()
def self_group(self, against=None):
@@ -681,10 +681,14 @@ def _make_proxy(self, selectable, name=None, name_is_truncatable=False, **kw):
"""
if name is None:
name = self.anon_label
- try:
- key = str(self)
- except exc.UnsupportedCompilationError:
- key = self.anon_label
+ if self.key:
+ key = self.key
+ else:
+ try:
+ key = str(self)
+ except exc.UnsupportedCompilationError:
+ key = self.anon_label
+
else:
key = name
co = ColumnClause(
@@ -755,7 +759,6 @@ def anon_label(self):
'name', 'anon')))
-
class BindParameter(ColumnElement):
"""Represent a "bound expression".
@@ -1446,13 +1449,13 @@ def columns(self, selectable, *cols, **types):
"""
- col_by_name = dict(
- (col.key, col) for col in cols
- )
- for key, type_ in types.items():
- col_by_name[key] = ColumnClause(key, type_)
-
- return selectable.TextAsFrom(self, list(col_by_name.values()))
+ input_cols = [
+ ColumnClause(col.key, types.pop(col.key))
+ if col.key in types
+ else col
+ for col in cols
+ ] + [ColumnClause(key, type_) for key, type_ in types.items()]
+ return selectable.TextAsFrom(self, input_cols)
@property
def type(self):
15 lib/sqlalchemy/sql/selectable.py
@@ -342,7 +342,7 @@ def embedded(expanded_proxy_set, target_set):
return column
col, intersect = None, None
target_set = column.proxy_set
- cols = self.c
+ cols = self.c._all_columns
for c in cols:
expanded_proxy_set = set(_expand_cloned(c.proxy_set))
i = target_set.intersection(expanded_proxy_set)
@@ -934,6 +934,7 @@ def __init__(self, selectable, name=None):
or 'anon'))
self.name = name
+
@property
def description(self):
if util.py3k:
@@ -954,7 +955,7 @@ def is_derived_from(self, fromclause):
return self.element.is_derived_from(fromclause)
def _populate_column_collection(self):
- for col in self.element.columns:
+ for col in self.element.columns._all_columns:
col._make_proxy(self)
def _refresh_for_new_column(self, column):
@@ -1738,13 +1739,13 @@ def __init__(self, keyword, *selects, **kwargs):
s = _clause_element_as_expr(s)
if not numcols:
- numcols = len(s.c)
- elif len(s.c) != numcols:
+ numcols = len(s.c._all_columns)
+ elif len(s.c._all_columns) != numcols:
raise exc.ArgumentError('All selectables passed to '
'CompoundSelect must have identical numbers of '
'columns; select #%d has %d columns, select '
- '#%d has %d' % (1, len(self.selects[0].c), n
- + 1, len(s.c)))
+ '#%d has %d' % (1, len(self.selects[0].c._all_columns), n
+ + 1, len(s.c._all_columns)))
self.selects.append(s.self_group(self))
@@ -1876,7 +1877,7 @@ def is_derived_from(self, fromclause):
return False
def _populate_column_collection(self):
- for cols in zip(*[s.c for s in self.selects]):
+ for cols in zip(*[s.c._all_columns for s in self.selects]):
# this is a slightly hacky thing - the union exports a
# column that resembles just that of the *first* selectable.
38 test/base/test_events.py
@@ -669,6 +669,39 @@ def test_listen_override(self):
[call(5, 7), call(10, 5)]
)
+ def test_remove_clslevel(self):
+ listen_one = Mock()
+ event.listen(self.Target, "event_one", listen_one, add=True)
+ t1 = self.Target()
+ t1.dispatch.event_one(5, 7)
+ eq_(
+ listen_one.mock_calls,
+ [call(12)]
+ )
+ event.remove(self.Target, "event_one", listen_one)
+ t1.dispatch.event_one(10, 5)
+ eq_(
+ listen_one.mock_calls,
+ [call(12)]
+ )
+
+ def test_remove_instancelevel(self):
+ listen_one = Mock()
+ t1 = self.Target()
+ event.listen(t1, "event_one", listen_one, add=True)
+ t1.dispatch.event_one(5, 7)
+ eq_(
+ listen_one.mock_calls,
+ [call(12)]
+ )
+ event.remove(t1, "event_one", listen_one)
+ t1.dispatch.event_one(10, 5)
+ eq_(
+ listen_one.mock_calls,
+ [call(12)]
+ )
+
+
class PropagateTest(fixtures.TestBase):
def setUp(self):
class TargetEvents(event.Events):
@@ -1087,8 +1120,3 @@ def test_remove_not_listened(self):
)
event.remove(t1, "event_three", m1)
-
-
-
-
-
187 test/base/test_utils.py
@@ -5,7 +5,7 @@
from sqlalchemy.testing import eq_, is_, ne_, fails_if
from sqlalchemy.testing.util import picklers, gc_collect
from sqlalchemy.util import classproperty, WeakSequence, get_callable_argspec
-
+from sqlalchemy.sql import column
class KeyedTupleTest():
@@ -298,6 +298,191 @@ def test_compare(self):
assert (cc1 == cc2).compare(c1 == c2)
assert not (cc1 == cc3).compare(c2 == c3)
+ @testing.emits_warning("Column ")
+ def test_dupes_add(self):
+ cc = sql.ColumnCollection()
+
+ c1, c2a, c3, c2b = column('c1'), column('c2'), column('c3'), column('c2')
+
+ cc.add(c1)
+ cc.add(c2a)
+ cc.add(c3)
+ cc.add(c2b)
+
+ eq_(cc._all_columns, [c1, c2a, c3, c2b])
+
+ # for iter, c2a is replaced by c2b, ordering
+ # is maintained in that way. ideally, iter would be
+ # the same as the "_all_columns" collection.
+ eq_(list(cc), [c1, c2b, c3])
+
+ assert cc.contains_column(c2a)
+ assert cc.contains_column(c2b)
+
+ ci = cc.as_immutable()
+ eq_(ci._all_columns, [c1, c2a, c3, c2b])
+ eq_(list(ci), [c1, c2b, c3])
+
+ def test_replace(self):
+ cc = sql.ColumnCollection()
+
+ c1, c2a, c3, c2b = column('c1'), column('c2'), column('c3'), column('c2')
+
+ cc.add(c1)
+ cc.add(c2a)
+ cc.add(c3)
+
+ cc.replace(c2b)
+
+ eq_(cc._all_columns, [c1, c2b, c3])
+ eq_(list(cc), [c1, c2b, c3])
+
+ assert not cc.contains_column(c2a)
+ assert cc.contains_column(c2b)
+
+ ci = cc.as_immutable()
+ eq_(ci._all_columns, [c1, c2b, c3])
+ eq_(list(ci), [c1, c2b, c3])
+
+ def test_replace_key_matches(self):
+ cc = sql.ColumnCollection()
+
+ c1, c2a, c3, c2b = column('c1'), column('c2'), column('c3'), column('X')
+ c2b.key = 'c2'
+
+ cc.add(c1)
+ cc.add(c2a)
+ cc.add(c3)
+
+ cc.replace(c2b)
+
+ assert not cc.contains_column(c2a)
+ assert cc.contains_column(c2b)
+
+ eq_(cc._all_columns, [c1, c2b, c3])
+ eq_(list(cc), [c1, c2b, c3])
+
+ ci = cc.as_immutable()
+ eq_(ci._all_columns, [c1, c2b, c3])
+ eq_(list(ci), [c1, c2b, c3])
+
+ def test_replace_name_matches(self):
+ cc = sql.ColumnCollection()
+
+ c1, c2a, c3, c2b = column('c1'), column('c2'), column('c3'), column('c2')
+ c2b.key = 'X'
+
+ cc.add(c1)
+ cc.add(c2a)
+ cc.add(c3)
+
+ cc.replace(c2b)
+
+ assert not cc.contains_column(c2a)
+ assert cc.contains_column(c2b)
+
+ eq_(cc._all_columns, [c1, c2b, c3])
+ eq_(list(cc), [c1, c3, c2b])
+
+ ci = cc.as_immutable()
+ eq_(ci._all_columns, [c1, c2b, c3])
+ eq_(list(ci), [c1, c3, c2b])
+
+ def test_replace_no_match(self):
+ cc = sql.ColumnCollection()
+
+ c1, c2, c3, c4 = column('c1'), column('c2'), column('c3'), column('c4')
+ c4.key = 'X'
+
+ cc.add(c1)
+ cc.add(c2)
+ cc.add(c3)
+
+ cc.replace(c4)
+
+ assert cc.contains_column(c2)
+ assert cc.contains_column(c4)
+
+ eq_(cc._all_columns, [c1, c2, c3, c4])
+ eq_(list(cc), [c1, c2, c3, c4])
+
+ ci = cc.as_immutable()
+ eq_(ci._all_columns, [c1, c2, c3, c4])
+ eq_(list(ci), [c1, c2, c3, c4])
+
+ def test_dupes_extend(self):
+ cc = sql.ColumnCollection()
+
+ c1, c2a, c3, c2b = column('c1'), column('c2'), column('c3'), column('c2')
+
+ cc.add(c1)
+ cc.add(c2a)
+
+ cc.extend([c3, c2b])
+
+ eq_(cc._all_columns, [c1, c2a, c3, c2b])
+
+ # for iter, c2a is replaced by c2b, ordering
+ # is maintained in that way. ideally, iter would be
+ # the same as the "_all_columns" collection.
+ eq_(list(cc), [c1, c2b, c3])
+
+ assert cc.contains_column(c2a)
+ assert cc.contains_column(c2b)
+
+ ci = cc.as_immutable()
+ eq_(ci._all_columns, [c1, c2a, c3, c2b])
+ eq_(list(ci), [c1, c2b, c3])
+
+ def test_dupes_update(self):
+ cc = sql.ColumnCollection()
+
+ c1, c2a, c3, c2b = column('c1'), column('c2'), column('c3'), column('c2')
+
+ cc.add(c1)
+ cc.add(c2a)
+
+ cc.update([(c3.key, c3), (c2b.key, c2b)])
+
+ eq_(cc._all_columns, [c1, c2a, c3, c2b])
+
+ assert cc.contains_column(c2a)
+ assert cc.contains_column(c2b)
+
+ # for iter, c2a is replaced by c2b, ordering
+ # is maintained in that way. ideally, iter would be
+ # the same as the "_all_columns" collection.
+ eq_(list(cc), [c1, c2b, c3])
+
+ def test_extend_existing(self):
+ cc = sql.ColumnCollection()
+
+ c1, c2, c3, c4, c5 = column('c1'), column('c2'), column('c3'), column('c4'), column('c5')
+
+ cc.extend([c1, c2])
+ eq_(cc._all_columns, [c1, c2])
+
+ cc.extend([c3])
+ eq_(cc._all_columns, [c1, c2, c3])
+ cc.extend([c4, c2, c5])
+
+ eq_(cc._all_columns, [c1, c2, c3, c4, c5])
+
+ def test_update_existing(self):
+ cc = sql.ColumnCollection()
+
+ c1, c2, c3, c4, c5 = column('c1'), column('c2'), column('c3'), column('c4'), column('c5')
+
+ cc.update([('c1', c1), ('c2', c2)])
+ eq_(cc._all_columns, [c1, c2])
+
+ cc.update([('c3', c3)])
+ eq_(cc._all_columns, [c1, c2, c3])
+ cc.update([('c4', c4), ('c2', c2), ('c5', c5)])
+
+ eq_(cc._all_columns, [c1, c2, c3, c4, c5])
+
+
class LRUTest(fixtures.TestBase):
10 test/dialect/test_oracle.py
@@ -1218,8 +1218,6 @@ def test_numerics(self):
assert isinstance(row[i], type_), '%r is not %r' \
% (row[i], type_)
-
-
def test_numeric_no_decimal_mode(self):
engine = testing_engine(options=dict(coerce_to_decimal=False))
value = engine.scalar("SELECT 5.66 FROM DUAL")
@@ -1228,6 +1226,14 @@ def test_numeric_no_decimal_mode(self):
value = testing.db.scalar("SELECT 5.66 FROM DUAL")
assert isinstance(value, decimal.Decimal)
+ def test_coerce_to_unicode(self):
+ engine = testing_engine(options=dict(coerce_to_unicode=True))
+ value = engine.scalar("SELECT 'hello' FROM DUAL")
+ assert isinstance(value, util.text_type)
+
+ value = testing.db.scalar("SELECT 'hello' FROM DUAL")
+ assert isinstance(value, util.binary_type)
+
@testing.provide_metadata
def test_numerics_broken_inspection(self):
"""Numeric scenarios where Oracle type info is 'broken',
32 test/ext/test_associationproxy.py
@@ -12,6 +12,7 @@
from sqlalchemy.testing import fixtures, AssertsCompiledSQL
from sqlalchemy import testing
from sqlalchemy.testing.schema import Table, Column
+from sqlalchemy.testing.mock import Mock, call
class DictCollection(dict):
@collection.appender
@@ -602,7 +603,6 @@ def test_basic(self):
p.children.__getitem__, 1
)
-
class ProxyFactoryTest(ListTest):
def setup(self):
metadata = MetaData(testing.db)
@@ -815,6 +815,36 @@ class B(object):
assert a1.a2b_name is None
assert a1.b_single is None
+ def custom_getset_test(self):
+ metadata = MetaData()
+ p = Table('p', metadata,
+ Column('id', Integer, primary_key=True),
+ Column('cid', Integer, ForeignKey('c.id')))
+ c = Table('c', metadata,
+ Column('id', Integer, primary_key=True),
+ Column('foo', String(128)))
+
+ get = Mock()
+ set_ = Mock()
+ class Parent(object):
+ foo = association_proxy('child', 'foo',
+ getset_factory=lambda cc, parent: (get, set_))
+
+ class Child(object):
+ def __init__(self, foo):
+ self.foo = foo
+
+ mapper(Parent, p, properties={'child': relationship(Child)})
+ mapper(Child, c)
+
+ p1 = Parent()
+
+ eq_(p1.foo, get(None))
+ p1.child = child = Child(foo='x')
+ eq_(p1.foo, get(child))
+ p1.foo = "y"
+ eq_(set_.mock_calls, [call(child, "y")])
+
class LazyLoadTest(fixtures.TestBase):
14 test/orm/test_default_strategies.py
@@ -156,6 +156,20 @@ def test_star_must_be_alone(self):
sess.query(User).options, opt
)
+ def test_global_star_ignored_no_entities_unbound(self):
+ sess = self._downgrade_fixture()
+ User = self.classes.User
+ opt = sa.orm.lazyload('*')
+ q = sess.query(User.name).options(opt)
+ eq_(q.all(), [(u'jack',), (u'ed',), (u'fred',), (u'chuck',)])
+
+ def test_global_star_ignored_no_entities_bound(self):
+ sess = self._downgrade_fixture()
+ User = self.classes.User
+ opt = sa.orm.Load(User).lazyload('*')
+ q = sess.query(User.name).options(opt)
+ eq_(q.all(), [(u'jack',), (u'ed',), (u'fred',), (u'chuck',)])
+
def test_select_with_joinedload(self):
"""Mapper load strategy defaults can be downgraded with
lazyload('*') option, while explicit joinedload() option
10 test/sql/test_compiler.py
@@ -2082,6 +2082,10 @@ def test_delayed_col_naming(self):
)
def test_naming(self):
+ # TODO: the part where we check c.keys() are not "compile" tests, they
+ # belong probably in test_selectable, or some broken up
+ # version of that suite
+
f1 = func.hoho(table1.c.name)
s1 = select([table1.c.myid, table1.c.myid.label('foobar'),
f1,
@@ -2098,7 +2102,8 @@ def test_naming(self):
exprs = (
table1.c.myid == 12,
func.hoho(table1.c.myid),
- cast(table1.c.name, Numeric)
+ cast(table1.c.name, Numeric),
+ literal('x'),
)
for col, key, expr, label in (
(table1.c.name, 'name', 'mytable.name', None),
@@ -2108,7 +2113,8 @@ def test_naming(self):
'CAST(mytable.name AS NUMERIC)', 'anon_1'),
(t1.c.col1, 'col1', 'mytable.col1', None),
(column('some wacky thing'), 'some wacky thing',
- '"some wacky thing"', '')
+ '"some wacky thing"', ''),
+ (exprs[3], exprs[3].key, ":param_1", "anon_1")
):
if getattr(col, 'table', None) is not None:
t = col.table
6 test/sql/test_join_rewriting.py
@@ -530,6 +530,12 @@ def _test(self, selectable, assert_):
def test_a_atobalias_balias_c_w_exists(self):
super(JoinExecTest, self).test_a_atobalias_balias_c_w_exists()
+ @testing.only_on("sqlite", "non-standard aliasing rules used at the moment, "
+ "possibly fix this or add another test that uses "
+ "cross-compatible aliasing")
+ def test_b_ab1_union_b_ab2(self):
+ super(JoinExecTest, self).test_b_ab1_union_b_ab2()
+
class DialectFlagTest(fixtures.TestBase, AssertsCompiledSQL):
def test_dialect_flag(self):
d1 = default.DefaultDialect(supports_right_nested_joins=True)