The SQLAlchemy Object Relational Mapper presents a method of associating user-defined Python classes with database tables, and instances of those classes (objects) with rows in their corresponding tables. It includes a system that transparently synchronizes all changes in state between objects and their related rows, called a unit of work, as well as a system for expressing database queries in terms of the user defined classes and their defined relationships between each other.
The ORM is in contrast to the SQLAlchemy Expression Language, upon which the ORM is constructed. Whereas the SQL Expression Language, introduced in sqlexpression_toplevel, presents a system of representing the primitive constructs of the relational database directly without opinion, the ORM presents a high level and abstracted pattern of usage, which itself is an example of applied usage of the Expression Language.
While there is overlap among the usage patterns of the ORM and the Expression Language, the similarities are more superficial than they may at first appear. One approaches the structure and content of data from the perspective of a user-defined domain model which is transparently persisted and refreshed from its underlying storage model. The other approaches it from the perspective of literal schema and SQL expression representations which are explicitly composed into messages consumed individually by the database.
A successful application may be constructed using the Object Relational Mapper exclusively. In advanced situations, an application constructed with the ORM may make occasional usage of the Expression Language directly in certain areas where specific database interactions are required.
The following tutorial is in doctest format, meaning each >>> line represents something you can type at a Python command prompt, and the following text represents the expected return value.
A quick check to verify that we are on at least version 0.7 of SQLAlchemy:
>>> import sqlalchemy
>>> sqlalchemy.__version__ # doctest:+SKIP
0.7.0
For this tutorial we will use an in-memory-only SQLite database. To connect we use create_engine:
>>> from sqlalchemy import create_engine
>>> engine = create_engine('sqlite:///:memory:', echo=True)
The echo flag is a shortcut to setting up SQLAlchemy logging, which is accomplished via Python's standard logging module. With it enabled, we'll see all the generated SQL produced. If you are working through this tutorial and want less output generated, set it to False. This tutorial will format the SQL behind a popup window so it doesn't get in our way; just click the "SQL" links to see what's being generated.
The return value of create_engine is an instance of Engine, and it represents the core interface to the database, adapted through a dialect that handles the details of the database and DBAPI in use, in this case the SQLite dialect. It has not actually tried to connect to the database yet; that happens the first time the Engine is actually used to do something.

Normally, the Engine is passed off to the ORM where it is used behind the scenes. We can execute SQL directly from it however, as we illustrate here:
{sql}>>> engine.execute("select 1").scalar()
select 1
()
{stop}1

We have now tested that the Engine is in fact able to connect to the database.
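What this amounts to at the DBAPI level can be sketched with the standard library's sqlite3 module, the same driver the SQLite dialect wraps. This is only an illustration of the underlying connect/execute steps, not the Engine's actual internals; note that sqlite3 connects eagerly, whereas the Engine defers connecting until first use.

```python
# A rough sketch of what happens beneath engine.execute("select 1"),
# using the stdlib sqlite3 DBAPI directly (no SQLAlchemy involved).
import sqlite3

# Roughly comparable to create_engine('sqlite:///:memory:'), except the
# connection is opened immediately rather than lazily.
conn = sqlite3.connect(':memory:')

# engine.execute("select 1").scalar() corresponds at this level to:
cursor = conn.execute("select 1")
result = cursor.fetchone()[0]
print(result)  # 1
conn.close()
```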
When using the ORM, two key steps which occur before communicating with the database are to tell SQLAlchemy about our tables, as well as about the classes we're defining in our application and how they map to those tables. SQLAlchemy refers to these two steps as defining table metadata and mapping classes. In modern SQLAlchemy usage, these two tasks are performed together, using a system known as declarative_toplevel, which allows us to create our own mapped classes which also define how they will be mapped to an actual database table.

We define our classes in terms of a base class which maintains a catalog of classes and tables relative to that base - this is known as the declarative base class. Our application will usually have just one instance of this base in a commonly imported module. We create this class using the declarative_base function, as follows:

>>> from sqlalchemy.ext.declarative import declarative_base
>>> Base = declarative_base()

Now that we have a "base", we can define any number of mapped classes in terms of it. We will start with just a single table called users, which will store records for the end-users using our application (let's assume it's a website). A new class called User will be the class to which we map this table. The imports we'll need to accomplish this include objects that represent the components of our table, including the Column class which represents a column in a table, as well as the Integer and String type objects that represent basic datatypes used in columns:
>>> from sqlalchemy import Column, Integer, String
>>> class User(Base):
... __tablename__ = 'users'
...
... id = Column(Integer, primary_key=True)
... name = Column(String)
... fullname = Column(String)
... password = Column(String)
...
... def __init__(self, name, fullname, password):
... self.name = name
... self.fullname = fullname
... self.password = password
...
... def __repr__(self):
... return "<User('%s','%s', '%s')>" % (self.name, self.fullname, self.password)
The Non Opinionated Philosophy

Above, the __tablename__ attribute represents the name of the table in the database to which we are mapping, and the Column object referenced by the name id defines the primary key column of our table; the usage of Integer states that it should be of type INT. SQLAlchemy never makes assumptions about decisions like these - the developer using SQLAlchemy must always decide on the specific conventions to be used. However, that doesn't mean the task can't be automated. While this tutorial will keep things explicit, developers are encouraged to make use of helper functions as well as "Declarative Mixins" to automate their tasks in large scale applications. The section declarative_mixins introduces many of these techniques.
With our User class constructed via the Declarative system, we have defined table metadata as well as a mapped class. This configuration is shorthand for what in SQLAlchemy is called a "classical mapping", which would have us first create an object representing the 'users' table using a class known as Table, and then create a mapping to this table through the usage of a function called mapper. Declarative instead performs these steps for us, making available the Table it has created via the __table__ attribute:
>>> User.__table__ # doctest: +NORMALIZE_WHITESPACE
Table('users', MetaData(None),
Column('id', Integer(), table=<users>, primary_key=True, nullable=False),
Column('name', String(), table=<users>),
Column('fullname', String(), table=<users>),
Column('password', String(), table=<users>), schema=None)
and, while rarely needed, making available the mapper object via the __mapper__ attribute:
>>> User.__mapper__ # doctest: +ELLIPSIS
<Mapper at 0x...; User>
The Declarative base class also contains a catalog of all the Table objects that have been defined, called MetaData, available via the metadata attribute. In this example, we are defining new tables that have yet to be created in our SQLite database, so one helpful feature the MetaData object offers is the ability to issue CREATE TABLE statements to the database for all tables that don't yet exist. We illustrate this by calling the MetaData.create_all method, passing in our Engine as a source of database connectivity:

{sql}>>> Base.metadata.create_all(engine) # doctest:+ELLIPSIS,+NORMALIZE_WHITESPACE
PRAGMA table_info("users")
()
CREATE TABLE users (
    id INTEGER NOT NULL,
    name VARCHAR,
    fullname VARCHAR,
    password VARCHAR,
    PRIMARY KEY (id)
)
()
COMMIT
Note

Users familiar with the syntax of CREATE TABLE may notice that the VARCHAR columns were generated without a length; on SQLite and Postgresql, this is a valid datatype, but on others, it's not allowed. So if running this tutorial on one of those databases, and you wish to use SQLAlchemy to issue CREATE TABLE, a "length" may be provided to the String type as below:

    Column(String(50))

The length field on String, as well as similar precision/scale fields available on Integer, Numeric, etc. are not referenced by SQLAlchemy other than when creating tables.

Additionally, Firebird and Oracle require sequences to generate new primary key identifiers, and SQLAlchemy doesn't generate or assume these without being instructed. For that, you use the Sequence construct:

    from sqlalchemy import Sequence
    Column(Integer, Sequence('user_id_seq'), primary_key=True)

A full, foolproof Table generated via our declarative mapping is therefore:

    class User(Base):
        __tablename__ = 'users'

        id = Column(Integer, Sequence('user_id_seq'), primary_key=True)
        name = Column(String(50))
        fullname = Column(String(50))
        password = Column(String(12))

        def __init__(self, name, fullname, password):
            self.name = name
            self.fullname = fullname
            self.password = password

        def __repr__(self):
            return "<User('%s','%s', '%s')>" % (self.name, self.fullname, self.password)
We include this more verbose table definition separately to highlight the difference between a minimal construct geared primarily towards in-Python usage only, versus one that will be used to emit CREATE TABLE statements on a particular set of backends with more stringent requirements.
With our fully specified User class and associated table metadata, let's now create and inspect a User object:
>>> ed_user = User('ed', 'Ed Jones', 'edspassword')
>>> ed_user.name
'ed'
>>> ed_user.password
'edspassword'
>>> str(ed_user.id)
'None'
The id attribute, while not defined by our __init__() method, exists with a value of None on our User instance due to the id column we declared in our mapping. By default, the ORM creates class attributes for all columns present in the table being mapped. These class attributes exist as Python descriptors, and define instrumentation for the mapped class. The functionality of this instrumentation is very rich and includes the ability to track modifications and automatically load new data from the database when needed.

Since we have not yet told SQLAlchemy to persist Ed Jones within the database, its id is None. When we persist the object later, this attribute will be populated with a newly generated value.
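The descriptor mechanism described above can be sketched in plain Python. The class and attribute names below are invented for illustration, and this is only a conceptual sketch of instrumentation, not SQLAlchemy's actual implementation:

```python
# A conceptual sketch of descriptor-based instrumentation: unset column
# attributes read as None, and assignments are recorded so a later flush
# knows what changed. Not SQLAlchemy's real machinery.
class TrackedAttribute(object):
    def __init__(self, name):
        self.name = name

    def __get__(self, instance, owner):
        if instance is None:
            return self
        # Attributes never set return None, like 'id' before a flush.
        return instance.__dict__.get(self.name)

    def __set__(self, instance, value):
        # Record the change for a hypothetical flush step.
        instance.__dict__.setdefault('_history', []).append((self.name, value))
        instance.__dict__[self.name] = value

class FakeUser(object):
    id = TrackedAttribute('id')
    name = TrackedAttribute('name')

u = FakeUser()
print(u.id)        # None - never assigned, like ed_user.id above
u.name = 'ed'
print(u._history)  # [('name', 'ed')]
```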
We're now ready to start talking to the database. The ORM's "handle" to the database is the Session. When we first set up the application, at the same level as our create_engine statement, we define a Session class which will serve as a factory for new Session objects:
>>> from sqlalchemy.orm import sessionmaker
>>> Session = sessionmaker(bind=engine)
In the case where your application does not yet have an Engine when you define your module-level objects, just set it up like this:

>>> Session = sessionmaker()

Later, when you create your engine with create_engine, connect it to the Session using sessionmaker.configure:

>>> Session.configure(bind=engine) # once engine is available

This custom-made Session class will create new Session objects which are bound to our database. Other transactional characteristics may be defined when calling sessionmaker as well; these are described in a later chapter. Then, whenever you need to have a conversation with the database, you instantiate a Session:
>>> session = Session()
The above Session is associated with our SQLite-enabled Engine, but it hasn't opened any connections yet. When it's first used, it retrieves a connection from a pool of connections maintained by the Engine, and holds onto it until we commit all changes and/or close the session object.

To persist our User object, we add it to our Session:
>>> ed_user = User('ed', 'Ed Jones', 'edspassword')
>>> session.add(ed_user)
At this point, we say that the instance is pending; no SQL has yet been issued and the object is not yet represented by a row in the database. The Session will issue the SQL to persist Ed Jones as soon as is needed, using a process known as a flush. If we query the database for Ed Jones, all pending information will first be flushed, and the query is issued immediately thereafter.

For example, below we create a new Query object which loads instances of User. We "filter by" the name attribute of ed, and indicate that we'd like only the first result in the full list of rows. A User instance is returned which is equivalent to that which we've added:
{sql}>>> our_user = session.query(User).filter_by(name='ed').first() # doctest:+ELLIPSIS,+NORMALIZE_WHITESPACE
BEGIN (implicit)
INSERT INTO users (name, fullname, password) VALUES (?, ?, ?)
('ed', 'Ed Jones', 'edspassword')
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
FROM users
WHERE users.name = ?
LIMIT ? OFFSET ?
('ed', 1, 0)
{stop}>>> our_user
<User('ed','Ed Jones', 'edspassword')>
In fact, the Session has identified that the row returned is the same row as one already represented within its internal map of objects, so we actually got back the identical instance as that which we just added:
>>> ed_user is our_user
True
The ORM concept at work here is known as an identity map and ensures that all operations upon a particular row within a Session operate upon the same set of data. Once an object with a particular primary key is present in the Session, all SQL queries on that Session will always return the same Python object for that particular primary key; it also will raise an error if an attempt is made to place a second, already-persisted object with the same primary key within the session.
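The essence of an identity map can be sketched as a dictionary keyed by class and primary key, consulted before any new object is created. This is a conceptual illustration only; the class below is hypothetical and far simpler than SQLAlchemy's identity map:

```python
# A conceptual sketch of an identity map: one Python object per
# (class, primary key) pair within a session's lifetime.
class IdentityMap(object):
    def __init__(self):
        self._objects = {}

    def get_or_add(self, cls, pk, loader):
        # Only call the loader (e.g. a database fetch) on a cache miss;
        # otherwise hand back the very same instance as before.
        key = (cls, pk)
        if key not in self._objects:
            self._objects[key] = loader()
        return self._objects[key]

class User(object):
    def __init__(self, pk):
        self.id = pk

imap = IdentityMap()
u1 = imap.get_or_add(User, 1, lambda: User(1))
u2 = imap.get_or_add(User, 1, lambda: User(1))
print(u1 is u2)  # True - the identical instance both times
```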
We can add more User objects at once using add_all:

>>> session.add_all([
...     User('wendy', 'Wendy Williams', 'foobar'),
...     User('mary', 'Mary Contrary', 'xxg527'),
...     User('fred', 'Fred Flinstone', 'blah')])
Also, Ed has already decided his password isn't too secure, so let's change it:

>>> ed_user.password = 'f8s7ccs'
The Session is paying attention. It knows, for example, that Ed Jones has been modified:

>>> session.dirty
IdentitySet([<User('ed','Ed Jones', 'f8s7ccs')>])
and that three new User objects are pending:

>>> session.new # doctest: +SKIP
IdentitySet([<User('wendy','Wendy Williams', 'foobar')>,
<User('mary','Mary Contrary', 'xxg527')>,
<User('fred','Fred Flinstone', 'blah')>])
We tell the Session that we'd like to issue all remaining changes to the database and commit the transaction, which has been in progress throughout. We do this via commit:

{sql}>>> session.commit()
UPDATE users SET password=? WHERE users.id = ?
('f8s7ccs', 1)
INSERT INTO users (name, fullname, password) VALUES (?, ?, ?)
('wendy', 'Wendy Williams', 'foobar')
INSERT INTO users (name, fullname, password) VALUES (?, ?, ?)
('mary', 'Mary Contrary', 'xxg527')
INSERT INTO users (name, fullname, password) VALUES (?, ?, ?)
('fred', 'Fred Flinstone', 'blah')
COMMIT
commit flushes the remaining changes to the database and commits the transaction. The connection resources referenced by the session are now returned to the connection pool. Subsequent operations with this session will occur in a new transaction, which will again re-acquire connection resources when first needed.
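The commit and rollback semantics the Session builds on are those of the underlying database transaction. They can be seen directly with the stdlib sqlite3 module; this is an illustration of the transactional behavior, not of the Session itself:

```python
# Transaction semantics at the DBAPI level, shown with stdlib sqlite3:
# committed changes persist, rolled-back changes disappear.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name VARCHAR)")

conn.execute("INSERT INTO users (name) VALUES (?)", ('ed',))
conn.commit()      # like Session.commit(): the INSERT becomes permanent

conn.execute("UPDATE users SET name=? WHERE id=?", ('Edwardo', 1))
conn.rollback()    # like Session.rollback(): the UPDATE is discarded

name = conn.execute("SELECT name FROM users").fetchone()[0]
print(name)  # 'ed'
conn.close()
```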
If we look at Ed's id attribute, which earlier was None, it now has a value:

{sql}>>> ed_user.id # doctest: +NORMALIZE_WHITESPACE
BEGIN (implicit)
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
FROM users
WHERE users.id = ?
(1,)
{stop}1
After the Session inserts new rows in the database, all newly generated identifiers and database-generated defaults become available on the instance, either immediately or via load-on-first-access. In this case, the entire row was re-loaded on access because a new transaction was begun after we issued commit. SQLAlchemy by default refreshes data from a previous transaction the first time it's accessed within a new transaction, so that the most recent state is available. The level of reloading is configurable as is described in the chapter on Sessions.
Since the Session works within a transaction, we can roll back changes made too. Let's make two changes that we'll revert; ed_user's user name gets set to Edwardo:

>>> ed_user.name = 'Edwardo'
and we'll add another erroneous user, fake_user:

>>> fake_user = User('fakeuser', 'Invalid', '12345')
>>> session.add(fake_user)
Querying the session, we can see that they're flushed into the current transaction:

{sql}>>> session.query(User).filter(User.name.in_(['Edwardo', 'fakeuser'])).all() #doctest: +NORMALIZE_WHITESPACE
UPDATE users SET name=? WHERE users.id = ?
('Edwardo', 1)
INSERT INTO users (name, fullname, password) VALUES (?, ?, ?)
('fakeuser', 'Invalid', '12345')
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
FROM users
WHERE users.name IN (?, ?)
('Edwardo', 'fakeuser')
{stop}[<User('Edwardo','Ed Jones', 'f8s7ccs')>, <User('fakeuser','Invalid', '12345')>]
Rolling back, we can see that ed_user's name is back to ed, and fake_user has been kicked out of the session:

{sql}>>> session.rollback()
ROLLBACK
{stop}

{sql}>>> ed_user.name #doctest: +NORMALIZE_WHITESPACE
BEGIN (implicit)
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
FROM users
WHERE users.id = ?
(1,)
{stop}u'ed'

>>> fake_user in session
False
Issuing a SELECT illustrates the changes made to the database:

{sql}>>> session.query(User).filter(User.name.in_(['ed', 'fakeuser'])).all() #doctest: +NORMALIZE_WHITESPACE
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
FROM users
WHERE users.name IN (?, ?)
('ed', 'fakeuser')
{stop}[<User('ed','Ed Jones', 'f8s7ccs')>]
A Query is created using the query() function on Session. This function takes a variable number of arguments, which can be any combination of classes and class-instrumented descriptors. Below, we indicate a Query which loads User instances. When evaluated in an iterative context, the list of User objects present is returned:
{sql}>>> for instance in session.query(User).order_by(User.id): # doctest: +NORMALIZE_WHITESPACE
...     print instance.name, instance.fullname
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
FROM users
ORDER BY users.id
()
{stop}ed Ed Jones
wendy Wendy Williams
mary Mary Contrary
fred Fred Flinstone
The Query also accepts ORM-instrumented descriptors as arguments. Any time multiple class entities or column-based entities are expressed as arguments to the query() function, the return result is expressed as tuples:
{sql}>>> for name, fullname in session.query(User.name, User.fullname): # doctest: +NORMALIZE_WHITESPACE
...     print name, fullname
SELECT users.name AS users_name, users.fullname AS users_fullname
FROM users
()
{stop}ed Ed Jones
wendy Wendy Williams
mary Mary Contrary
fred Fred Flinstone
The tuples returned by Query are named tuples, and can be treated much like an ordinary Python object. The names are the same as the attribute's name for an attribute, and the class name for a class:
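The named-tuple behavior can be illustrated with the standard library's collections.namedtuple; SQLAlchemy uses its own row class, so this is only an analogy for how positional and named access coexist:

```python
# Plain-Python illustration of named-tuple style rows: elements are
# reachable both by position and by name, as with row.User / row.name.
from collections import namedtuple

Row = namedtuple('Row', ['User', 'name'])
row = Row(User="<User('ed','Ed Jones', 'f8s7ccs')>", name='ed')

print(row[1])    # 'ed' - positional, like an ordinary tuple
print(row.name)  # 'ed' - by attribute name
print(row.User)  # by class name for a class entity
```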
{sql}>>> for row in session.query(User, User.name).all(): #doctest: +NORMALIZE_WHITESPACE
...     print row.User, row.name
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
FROM users
()
{stop}<User('ed','Ed Jones', 'f8s7ccs')> ed
<User('wendy','Wendy Williams', 'foobar')> wendy
<User('mary','Mary Contrary', 'xxg527')> mary
<User('fred','Fred Flinstone', 'blah')> fred
You can control the names using the label construct for scalar attributes and aliased for class constructs:

>>> from sqlalchemy.orm import aliased
>>> user_alias = aliased(User, name='user_alias')
{sql}>>> for row in session.query(user_alias, user_alias.name.label('name_label')).all(): #doctest: +NORMALIZE_WHITESPACE
...     print row.user_alias, row.name_label
SELECT user_alias.id AS user_alias_id, user_alias.name AS user_alias_name, user_alias.fullname AS user_alias_fullname, user_alias.password AS user_alias_password, user_alias.name AS name_label
FROM users AS user_alias
()
{stop}<User('ed','Ed Jones', 'f8s7ccs')> ed
<User('wendy','Wendy Williams', 'foobar')> wendy
<User('mary','Mary Contrary', 'xxg527')> mary
<User('fred','Fred Flinstone', 'blah')> fred
Basic operations with Query include issuing LIMIT and OFFSET, most conveniently using Python array slices and typically in conjunction with ORDER BY:

{sql}>>> for u in session.query(User).order_by(User.id)[1:3]: #doctest: +NORMALIZE_WHITESPACE
...     print u
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
FROM users
ORDER BY users.id
LIMIT ? OFFSET ?
(2, 1)
{stop}<User('wendy','Wendy Williams', 'foobar')>
<User('mary','Mary Contrary', 'xxg527')>
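The translation from a slice to LIMIT/OFFSET, visible in the bound parameters (2, 1) above, follows a simple rule that can be sketched with a hypothetical helper (not a SQLAlchemy function):

```python
# Hypothetical helper showing how a slice maps to LIMIT/OFFSET:
# query[1:3] means "skip 1 row, return the next 2".
def slice_to_limit_offset(start, stop):
    offset = start or 0
    limit = stop - offset
    return limit, offset

limit, offset = slice_to_limit_offset(1, 3)
print(limit, offset)  # 2 1

sql = "SELECT * FROM users ORDER BY users.id LIMIT %d OFFSET %d" % (limit, offset)
print(sql)
```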
and filtering results, which is accomplished either with filter_by, which uses keyword arguments:

{sql}>>> for name, in session.query(User.name).filter_by(fullname='Ed Jones'): # doctest: +NORMALIZE_WHITESPACE
...     print name
SELECT users.name AS users_name
FROM users
WHERE users.fullname = ?
('Ed Jones',)
{stop}ed
...or filter, which uses more flexible SQL expression language constructs. These allow you to use regular Python operators with the class-level attributes on your mapped class:

{sql}>>> for name, in session.query(User.name).filter(User.fullname=='Ed Jones'): # doctest: +NORMALIZE_WHITESPACE
...     print name
SELECT users.name AS users_name
FROM users
WHERE users.fullname = ?
('Ed Jones',)
{stop}ed
The Query object is fully generative, meaning that most method calls return a new Query object upon which further criteria may be added. For example, to query for users named "ed" with a full name of "Ed Jones", you can call filter twice, which joins criteria using AND:

{sql}>>> for user in session.query(User).filter(User.name=='ed').filter(User.fullname=='Ed Jones'): # doctest: +NORMALIZE_WHITESPACE
...     print user
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
FROM users
WHERE users.name = ? AND users.fullname = ?
('ed', 'Ed Jones')
{stop}<User('ed','Ed Jones', 'f8s7ccs')>
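The generative pattern itself can be sketched in a few lines of plain Python. The class below is invented for illustration and is not Query; it shows only the essential idea that each filter() call returns a new object carrying the accumulated criteria:

```python
# A conceptual sketch of the generative pattern: filter() never mutates,
# it returns a fresh object with one more criterion appended.
class ToyQuery(object):
    def __init__(self, criteria=()):
        self.criteria = tuple(criteria)

    def filter(self, criterion):
        # Return a new ToyQuery; the original stays reusable as-is.
        return ToyQuery(self.criteria + (criterion,))

    def whereclause(self):
        return " AND ".join(self.criteria)

q = ToyQuery().filter("users.name = 'ed'").filter("users.fullname = 'Ed Jones'")
print(q.whereclause())  # users.name = 'ed' AND users.fullname = 'Ed Jones'
```

Because each step is a distinct object, a partially-constructed query can safely be shared and extended in different directions.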
Here's a rundown of some of the most common operators used in filter:
equals:
query.filter(User.name == 'ed')
not equals:
query.filter(User.name != 'ed')
LIKE:
query.filter(User.name.like('%ed%'))
IN:

query.filter(User.name.in_(['ed', 'wendy', 'jack']))

# works with query objects too:
query.filter(User.name.in_(session.query(User.name).filter(User.name.like('%ed%'))))
NOT IN:
query.filter(~User.name.in_(['ed', 'wendy', 'jack']))
IS NULL:
filter(User.name == None)
IS NOT NULL:
filter(User.name != None)
AND:

from sqlalchemy import and_
filter(and_(User.name == 'ed', User.fullname == 'Ed Jones'))

# or call filter()/filter_by() multiple times
filter(User.name == 'ed').filter(User.fullname == 'Ed Jones')
OR:

from sqlalchemy import or_
filter(or_(User.name == 'ed', User.name == 'wendy'))
match:
query.filter(User.name.match('wendy'))
The contents of the match parameter are database backend specific.
The all(), one(), and first() methods of Query immediately issue SQL and return a non-iterator value. all() returns a list:

>>> query = session.query(User).filter(User.name.like('%ed')).order_by(User.id)
{sql}>>> query.all() #doctest: +NORMALIZE_WHITESPACE
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
FROM users
WHERE users.name LIKE ? ORDER BY users.id
('%ed',)
{stop}[<User('ed','Ed Jones', 'f8s7ccs')>, <User('fred','Fred Flinstone', 'blah')>]
first() applies a limit of one and returns the first result as a scalar:

{sql}>>> query.first() #doctest: +NORMALIZE_WHITESPACE
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
FROM users
WHERE users.name LIKE ? ORDER BY users.id
LIMIT ? OFFSET ?
('%ed', 1, 0)
{stop}<User('ed','Ed Jones', 'f8s7ccs')>
one() fully fetches all rows, and if not exactly one object identity or composite row is present in the result, raises an error:

{sql}>>> from sqlalchemy.orm.exc import MultipleResultsFound
>>> try: #doctest: +NORMALIZE_WHITESPACE
...     user = query.one()
... except MultipleResultsFound, e:
...     print e
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
FROM users
WHERE users.name LIKE ? ORDER BY users.id
('%ed',)
{stop}Multiple rows were found for one()
{sql}>>> from sqlalchemy.orm.exc import NoResultFound
>>> try: #doctest: +NORMALIZE_WHITESPACE
...     user = query.filter(User.id == 99).one()
... except NoResultFound, e:
...     print e
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
FROM users
WHERE users.name LIKE ? AND users.id = ? ORDER BY users.id
('%ed', 99)
{stop}No row was found for one()
Literal strings can be used flexibly with Query. Most methods accept strings in addition to SQLAlchemy clause constructs. For example, filter() and order_by():

{sql}>>> for user in session.query(User).filter("id<224").order_by("id").all(): #doctest: +NORMALIZE_WHITESPACE
...     print user.name
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
FROM users
WHERE id<224 ORDER BY id
()
{stop}ed
wendy
mary
fred
Bind parameters can be specified with string-based SQL, using a colon. To specify the values, use the params() method:

{sql}>>> session.query(User).filter("id<:value and name=:name").\
...     params(value=224, name='fred').order_by(User.id).one() # doctest: +NORMALIZE_WHITESPACE
SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
FROM users
WHERE id<? and name=? ORDER BY users.id
(224, 'fred')
{stop}<User('fred','Fred Flinstone', 'blah')>
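The colon-style bind parameters happen to be accepted natively by SQLite's DBAPI as well, so the equivalent statement can be run directly with the stdlib sqlite3 module. The table and data below are invented for illustration:

```python
# Colon-style bind parameters against SQLite directly, via stdlib sqlite3.
# Values are passed in a dict, analogous to params(value=224, name='fred').
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name VARCHAR)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [('ed',), ('wendy',), ('fred',)])

row = conn.execute(
    "SELECT id, name FROM users WHERE id < :value AND name = :name",
    {'value': 224, 'name': 'fred'}).fetchone()
print(row)  # (3, 'fred')
conn.close()
```

Binding values this way, rather than interpolating them into the string, is what keeps the statement cacheable and immune to SQL injection.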
To use an entirely string-based statement, use from_statement(); just ensure that the columns clause of the statement contains the column names normally used by the mapper (below illustrated using an asterisk):

{sql}>>> session.query(User).from_statement(
...     "SELECT * FROM users where name=:name").\
...     params(name='ed').all()
SELECT * FROM users where name=?
('ed',)
{stop}[<User('ed','Ed Jones', 'f8s7ccs')>]
You can use from_statement() to go completely "raw", using string names to identify desired columns:

{sql}>>> session.query("id", "name", "thenumber12").\
...     from_statement("SELECT id, name, 12 as "
...         "thenumber12 FROM users where name=:name").\
...     params(name='ed').all()
SELECT id, name, 12 as thenumber12 FROM users where name=?
('ed',)
{stop}[(1, u'ed', 12)]
Query includes a convenience method for counting called count():

{sql}>>> session.query(User).filter(User.name.like('%ed')).count() #doctest: +NORMALIZE_WHITESPACE
SELECT count(*) AS count_1
FROM (SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password
FROM users
WHERE users.name LIKE ?) AS anon_1
('%ed',)
{stop}2
The count() method is used to determine how many rows the SQL statement would return. Looking at the generated SQL above, SQLAlchemy always places whatever it is we are querying into a subquery, then counts the rows from that. In some cases this can be reduced to a simpler SELECT count(*) FROM table, however modern versions of SQLAlchemy don't try to guess when this is appropriate, as the exact SQL can be emitted using more explicit means.
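The two SQL shapes involved - counting rows of a wrapped subquery versus a plain count(*) - can be compared directly against SQLite with the stdlib sqlite3 module. The data here is invented for illustration:

```python
# Comparing the subquery form count() emits with the simpler count(*),
# run as plain SQL against SQLite via stdlib sqlite3.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name VARCHAR)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [('ed',), ('fred',)])

# The shape Query.count() emits: count the rows of the enclosed SELECT.
subquery_count = conn.execute(
    "SELECT count(*) FROM (SELECT users.id, users.name FROM users "
    "WHERE users.name LIKE '%ed') AS anon_1").fetchone()[0]

# The simpler form, written explicitly when that's what is wanted.
simple_count = conn.execute("SELECT count(*) FROM users").fetchone()[0]

print(subquery_count, simple_count)  # 2 2
conn.close()
```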
For situations where the "thing to be counted" needs to be indicated specifically, we can specify the "count" function directly using the expression func.count(), available from the func construct. Below we use it to return the count of each distinct user name:

>>> from sqlalchemy import func
{sql}>>> session.query(func.count(User.name), User.name).group_by(User.name).all() #doctest: +NORMALIZE_WHITESPACE
SELECT count(users.name) AS count_1, users.name AS users_name
FROM users GROUP BY users.name
()
{stop}[(1, u'ed'), (1, u'fred'), (1, u'mary'), (1, u'wendy')]
To achieve our simple SELECT count(*) FROM table, we can apply it as:

{sql}>>> session.query(func.count('*')).select_from(User).scalar()
SELECT count(?) AS count_1
FROM users
('*',)
{stop}4
The usage of select_from can be removed if we express the count in terms of the User primary key directly:

{sql}>>> session.query(func.count(User.id)).scalar() #doctest: +NORMALIZE_WHITESPACE
SELECT count(users.id) AS count_1
FROM users
()
{stop}4
Now let's consider a second table to be dealt with. Users in our system also can store any number of email addresses associated with their username. This implies a basic one to many association from the users table to a new table which stores email addresses, which we will call addresses. Using declarative, we define this table along with its mapped class, Address:

>>> from sqlalchemy import ForeignKey
>>> from sqlalchemy.orm import relationship, backref

>>> class Address(Base):
...     __tablename__ = 'addresses'
...     id = Column(Integer, primary_key=True)
...     email_address = Column(String, nullable=False)
...     user_id = Column(Integer, ForeignKey('users.id'))
...
...     user = relationship(User, backref=backref('addresses', order_by=id))
...
...     def __init__(self, email_address):
...         self.email_address = email_address
...
...     def __repr__(self):
...         return "<Address('%s')>" % self.email_address
The above class introduces a foreign key constraint which references the users table. This defines for SQLAlchemy the relationship between the two tables at the database level. The relationship between the User and Address classes is defined separately using the relationship() function, which defines an attribute user to be placed on the Address class, as well as an addresses collection to be placed on the User class. Such a relationship is known as a bidirectional relationship. Because of the placement of the foreign key, the relationship from Address to User is many to one, and from User to Address it is one to many. SQLAlchemy determines the many-to-one/one-to-many orientation automatically based on the foreign keys.
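To see the bidirectional behavior in isolation, here is a condensed, runnable sketch of the same mapping pattern. It assumes a recent SQLAlchemy (1.4 or later, with declarative_base and Session imported from sqlalchemy.orm), a throwaway in-memory SQLite database, and simplified constructors; it illustrates the pattern rather than reproducing the tutorial's exact classes.

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, backref, declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    email_address = Column(String, nullable=False)
    user_id = Column(Integer, ForeignKey('users.id'))

    # many-to-one toward User; backref() adds the one-to-many
    # User.addresses collection on the other side
    user = relationship(User, backref=backref('addresses', order_by=id))

engine = create_engine('sqlite://')   # throwaway in-memory database
Base.metadata.create_all(engine)

session = Session(engine)
jack = User(name='jack')

# appending to one side of the bidirectional relationship populates
# the other side purely in memory, before any SQL is emitted
addr = Address(email_address='jack@example.com')
jack.addresses.append(addr)
same_user = addr.user          # visible immediately via the backref

session.add(jack)
session.commit()

loaded = session.query(User).filter_by(name='jack').one()
emails = [a.email_address for a in loaded.addresses]
```

The email address and user name here are placeholders; the key point is that `addr.user` is populated by the backref before any flush occurs.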
Note

The relationship() function was historically known as relation() in the 0.5 series of SQLAlchemy and earlier.
The relationship() function is extremely flexible, and could just as easily have been defined on the User class:

class User(Base):
    # ....
    addresses = relationship(Address, order_by=Address.id, backref="user")
We are also free to not define a backref, and to define the relationship() only on one class and not the other. It is also possible to define two separate relationship() constructs for either direction, which is generally safe for many-to-one and one-to-many relationships, but not for many-to-many relationships.
When using the declarative extension, relationship() gives us the option to use strings for most arguments that concern the target class, in the case that the target class has not yet been defined. This only works in conjunction with declarative:

class User(Base):
    # ....
    addresses = relationship("Address", order_by="Address.id", backref="user")
When declarative is not in use, you typically define your mapper() well after the target classes and Table objects have been defined, so string expressions are not needed.
We'll need to create the addresses table in the database, so we will issue another CREATE from our metadata, which will skip over tables which have already been created:

{sql}>>> Base.metadata.create_all(engine) # doctest: +NORMALIZE_WHITESPACE
PRAGMA table_info("users")
()
PRAGMA table_info("addresses")
()
CREATE TABLE addresses (
    id INTEGER NOT NULL,
    email_address VARCHAR NOT NULL,
    user_id INTEGER,
    PRIMARY KEY (id),
    FOREIGN KEY(user_id) REFERENCES users (id)
)
()
COMMIT
One extra step here is to make the ORM aware that the mapping for User has changed. This is normally not necessary, except that we've already worked with instances of User - so here we call configure_mappers() so that the User.addresses attribute can be established:
>>> from sqlalchemy.orm import configure_mappers
>>> configure_mappers()
Now when we create a User, a blank addresses collection will be present. Various collection types, such as sets and dictionaries, are possible here (see custom_collections for details), but by default, the collection is a Python list.

>>> jack = User('jack', 'Jack Bean', 'gjffdd')
>>> jack.addresses
[]
We are free to add Address objects on our User object. In this case we just assign a full list directly:

>>> jack.addresses = [Address(email_address='jack@google.com'),
...                   Address(email_address='j25@yahoo.com')]
When using a bidirectional relationship, elements added in one direction automatically become visible in the other direction. This is the basic behavior of the backref keyword, which maintains the relationship purely in memory, without using any SQL:
>>> jack.addresses[1] <Address('j25@yahoo.com')>
>>> jack.addresses[1].user <User('jack','Jack Bean', 'gjffdd')>
Let's add and commit Jack Bean to the database. jack as well as the two Address members in his addresses collection are added to the session at once, using a process known as cascading:

>>> session.add(jack)
{sql}>>> session.commit()
INSERT INTO users (name, fullname, password) VALUES (?, ?, ?)
('jack', 'Jack Bean', 'gjffdd')
INSERT INTO addresses (email_address, user_id) VALUES (?, ?)
('jack@google.com', 5)
INSERT INTO addresses (email_address, user_id) VALUES (?, ?)
('j25@yahoo.com', 5)
COMMIT
Querying for Jack, we get just Jack back. No SQL is yet issued for Jack's addresses:
python+sql
{sql}>>> jack = session.query(User).... filter_by(name='jack').one() #doctest: +NORMALIZE_WHITESPACE BEGIN (implicit) SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password FROM users WHERE users.name = ? ('jack',)
{stop}>>> jack <User('jack','Jack Bean', 'gjffdd')>
Let's look at the addresses collection. Watch the SQL:

{sql}>>> jack.addresses #doctest: +NORMALIZE_WHITESPACE
SELECT addresses.id AS addresses_id,
    addresses.email_address AS addresses_email_address,
    addresses.user_id AS addresses_user_id
FROM addresses
WHERE ? = addresses.user_id ORDER BY addresses.id
(5,)
{stop}[<Address('jack@google.com')>, <Address('j25@yahoo.com')>]
When we accessed the addresses collection, SQL was suddenly issued. This is an example of a lazy loading relationship. The addresses collection is now loaded and behaves just like an ordinary list.
To reduce the number of queries (dramatically, in many cases), we can apply an eager load to the query operation, using the joinedload() function. This function is a query option which gives additional instructions to the query on how we would like it to load; in this case, we indicate that we'd like addresses to load "eagerly". SQLAlchemy then constructs an outer join between the users and addresses tables, and loads them at once, populating the addresses collection on each User object if it's not already populated:
>>> from sqlalchemy.orm import joinedload

{sql}>>> jack = session.query(User).\
...     options(joinedload('addresses')).\
...     filter_by(name='jack').one() #doctest: +NORMALIZE_WHITESPACE
SELECT users.id AS users_id, users.name AS users_name,
    users.fullname AS users_fullname, users.password AS users_password,
    addresses_1.id AS addresses_1_id,
    addresses_1.email_address AS addresses_1_email_address,
    addresses_1.user_id AS addresses_1_user_id
FROM users
LEFT OUTER JOIN addresses AS addresses_1 ON users.id = addresses_1.user_id
WHERE users.name = ? ORDER BY addresses_1.id
('jack',)
{stop}>>> jack
<User('jack','Jack Bean', 'gjffdd')>

>>> jack.addresses
[<Address('jack@google.com')>, <Address('j25@yahoo.com')>]
See loading_toplevel for information on joinedload() and its new brother, subqueryload(). We'll also see another way to "eagerly" load in the next section.
Note

The join created by joinedload() is anonymously aliased such that it does not affect the query results. A Query.order_by() or Query.filter() call cannot reference these aliased tables - so-called "user space" joins are constructed using Query.join(). The rationale for this is that joinedload() is applied only in order to affect how related objects or collections are loaded, as an optimizing detail - it can be added or removed with no impact on actual results. See the section zen_of_eager_loading for a detailed description of how this is used, including how to use a single explicit JOIN for filtering/ordering and eager loading simultaneously.
While joinedload() created an anonymous, non-accessible JOIN specifically to populate a collection, we can also work explicitly with joins in many ways. For example, to construct a simple inner join between User and Address, we can just filter() their related columns together. Below we load the User and Address entities at once using this method:

{sql}>>> for u, a in session.query(User, Address).\
...         filter(User.id==Address.user_id).\
...         filter(Address.email_address=='jack@google.com').\
...         all():   # doctest: +NORMALIZE_WHITESPACE
...     print u, a
SELECT users.id AS users_id, users.name AS users_name,
    users.fullname AS users_fullname, users.password AS users_password,
    addresses.id AS addresses_id,
    addresses.email_address AS addresses_email_address,
    addresses.user_id AS addresses_user_id
FROM users, addresses
WHERE users.id = addresses.user_id AND addresses.email_address = ?
('jack@google.com',)
{stop}<User('jack','Jack Bean', 'gjffdd')> <Address('jack@google.com')>
Or we can make a real JOIN construct; the most common way is to use Query.join():

{sql}>>> session.query(User).join(Address).\
...     filter(Address.email_address=='jack@google.com').all() #doctest: +NORMALIZE_WHITESPACE
SELECT users.id AS users_id, users.name AS users_name,
    users.fullname AS users_fullname, users.password AS users_password
FROM users JOIN addresses ON users.id = addresses.user_id
WHERE addresses.email_address = ?
('jack@google.com',)
{stop}[<User('jack','Jack Bean', 'gjffdd')>]
Query.join() knows how to join between User and Address because there's only one foreign key between them. If there were no foreign keys, or several, Query.join() works better when one of the following forms is used:
query.join(Address, User.id==Address.user_id) # explicit condition
query.join(User.addresses) # specify relationship from left to right
query.join(Address, User.addresses) # same, with explicit target
query.join('addresses') # same, using a string
As you would expect, the same idea is used for "outer" joins, using the outerjoin() function:
query.outerjoin(User.addresses) # LEFT OUTER JOIN
The reference documentation for Query.join() contains detailed information and examples of the calling styles accepted.
The "eager loading" capabilities of the joinedload() function and the join-construction capabilities of Query.join() or an equivalent can be combined using the contains_eager() option. This is typically used for a query that is already joining to some related entity (more often than not via many-to-one), when you'd like the related entity to also be loaded onto the resulting objects in one step, without the need for additional queries and without the "automatic" join embedded by the joinedload() function:
>>> from sqlalchemy.orm import contains_eager
{sql}>>> for address in session.query(Address).\
...         join(Address.user).\
...         filter(User.name=='jack').\
...         options(contains_eager(Address.user)): #doctest: +NORMALIZE_WHITESPACE
...     print address, address.user
SELECT users.id AS users_id, users.name AS users_name,
    users.fullname AS users_fullname, users.password AS users_password,
    addresses.id AS addresses_id,
    addresses.email_address AS addresses_email_address,
    addresses.user_id AS addresses_user_id
FROM addresses JOIN users ON users.id = addresses.user_id
WHERE users.name = ?
('jack',)
{stop}<Address('jack@google.com')> <User('jack','Jack Bean', 'gjffdd')>
<Address('j25@yahoo.com')> <User('jack','Jack Bean', 'gjffdd')>
Note that above, the join served both to limit the rows to just those Address objects which have a related User object with the name "jack", and to populate the Address.user attribute of each result; with an inner join this is safe. However, when a join filters on a particular member of a collection, using contains_eager() to populate that related collection may populate it with only part of what the collection actually references, since the collection itself is filtered.
When querying across multiple tables, if the same table needs to be referenced more than once, SQL typically requires that the table be aliased with another name, so that it can be distinguished against other occurrences of that table. The Query supports this most explicitly using the aliased construct. Below we join to the Address entity twice, to locate a user who has two distinct email addresses at the same time:

>>> from sqlalchemy.orm import aliased
>>> adalias1 = aliased(Address)
>>> adalias2 = aliased(Address)
{sql}>>> for username, email1, email2 in \
...     session.query(User.name, adalias1.email_address, adalias2.email_address).\
...     join(adalias1, User.addresses).\
...     join(adalias2, User.addresses).\
...     filter(adalias1.email_address=='jack@google.com').\
...     filter(adalias2.email_address=='j25@yahoo.com'):
...     print username, email1, email2 # doctest: +NORMALIZE_WHITESPACE
SELECT users.name AS users_name,
    addresses_1.email_address AS addresses_1_email_address,
    addresses_2.email_address AS addresses_2_email_address
FROM users JOIN addresses AS addresses_1 ON users.id = addresses_1.user_id
JOIN addresses AS addresses_2 ON users.id = addresses_2.user_id
WHERE addresses_1.email_address = ? AND addresses_2.email_address = ?
('jack@google.com', 'j25@yahoo.com')
{stop}jack jack@google.com j25@yahoo.com
The Query is suitable for generating statements which can be used as subqueries. Suppose we wanted to load User objects along with a count of how many Address records each user has. The best way to generate SQL like this is to get the count of addresses grouped by user ids, and JOIN to the parent. In this case we use a LEFT OUTER JOIN so that we get rows back for those users who don't have any addresses, e.g.:
SELECT users.*, adr_count.address_count FROM users LEFT OUTER JOIN
(SELECT user_id, count(*) AS address_count FROM addresses GROUP BY user_id) AS adr_count
ON users.id=adr_count.user_id
Using the Query, we build a statement like this from the inside out. The statement accessor returns a SQL expression representing the statement generated by a particular Query - this is an instance of a select() construct, which is described in sqlexpression_toplevel:
>>> from sqlalchemy.sql import func
>>> stmt = session.query(Address.user_id, func.count('*').label('address_count')).group_by(Address.user_id).subquery()
The func keyword generates SQL functions, and the subquery() method on Query produces a SQL expression construct representing a SELECT statement embedded within an alias (it's actually shorthand for query.statement.alias()).
Once we have our statement, it behaves like a Table construct, such as the one we created for users at the start of this tutorial. The columns on the statement are accessible through an attribute called c:
{sql}>>> for u, count in session.query(User, stmt.c.address_count).\
...     outerjoin(stmt, User.id==stmt.c.user_id).order_by(User.id): # doctest: +NORMALIZE_WHITESPACE
...     print u, count
SELECT users.id AS users_id, users.name AS users_name,
    users.fullname AS users_fullname, users.password AS users_password,
    anon_1.address_count AS anon_1_address_count
FROM users
LEFT OUTER JOIN (SELECT addresses.user_id AS user_id,
    count(?) AS address_count
    FROM addresses GROUP BY addresses.user_id) AS anon_1
    ON users.id = anon_1.user_id
ORDER BY users.id
('*',)
{stop}<User('ed','Ed Jones', 'f8s7ccs')> None
<User('wendy','Wendy Williams', 'foobar')> None
<User('mary','Mary Contrary', 'xxg527')> None
<User('fred','Fred Flinstone', 'blah')> None
<User('jack','Jack Bean', 'gjffdd')> 2
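The whole round trip above can be condensed into a runnable sketch. This is a sketch under assumptions: modern SQLAlchemy (1.4+) imports, an in-memory SQLite database, and placeholder data (two users, one with addresses); the query shape mirrors the tutorial's subquery-plus-outerjoin pattern.

```python
from sqlalchemy import (Column, ForeignKey, Integer, String,
                        create_engine, func)
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    addresses = relationship('Address')

class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    email_address = Column(String)
    user_id = Column(Integer, ForeignKey('users.id'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

jack = User(name='jack')
jack.addresses = [Address(email_address='jack@google.com'),
                  Address(email_address='j25@yahoo.com')]
session.add_all([jack, User(name='ed')])   # ed has no addresses at all
session.commit()

# inside: per-user address counts, wrapped up as a named subquery
stmt = (session.query(Address.user_id,
                      func.count('*').label('address_count'))
        .group_by(Address.user_id)
        .subquery())

# outside: LEFT OUTER JOIN, so users without addresses still appear
rows = [(name, count)
        for name, count in session.query(User.name, stmt.c.address_count)
                                  .outerjoin(stmt, User.id == stmt.c.user_id)
                                  .order_by(User.id)]
```

Users with no addresses come back with a count of None, exactly as in the tutorial output above.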
Above, we just selected a result that included a column from a subquery. What if we wanted our subquery to map to an entity? For this we use aliased() to associate an "alias" of a mapped class to a subquery:
{sql}>>> stmt = session.query(Address).\
...     filter(Address.email_address != 'j25@yahoo.com').\
...     subquery()
>>> adalias = aliased(Address, stmt)
>>> for user, address in session.query(User, adalias).\
...         join(adalias, User.addresses): # doctest: +NORMALIZE_WHITESPACE
...     print user, address
SELECT users.id AS users_id, users.name AS users_name,
    users.fullname AS users_fullname, users.password AS users_password,
    anon_1.id AS anon_1_id,
    anon_1.email_address AS anon_1_email_address,
    anon_1.user_id AS anon_1_user_id
FROM users JOIN (SELECT addresses.id AS id,
    addresses.email_address AS email_address,
    addresses.user_id AS user_id
    FROM addresses
    WHERE addresses.email_address != ?) AS anon_1
    ON users.id = anon_1.user_id
('j25@yahoo.com',)
{stop}<User('jack','Jack Bean', 'gjffdd')> <Address('jack@google.com')>
The EXISTS keyword in SQL is a boolean operator which returns True if the given expression contains any rows. It may be used in many scenarios in place of joins, and is also useful for locating rows which do not have a corresponding row in a related table.
There is an explicit EXISTS construct, which looks like this:
>>> from sqlalchemy.sql import exists
>>> stmt = exists().where(Address.user_id==User.id)
{sql}>>> for name, in session.query(User.name).filter(stmt): # doctest: +NORMALIZE_WHITESPACE
...     print name
SELECT users.name AS users_name
FROM users
WHERE EXISTS (SELECT *
FROM addresses
WHERE addresses.user_id = users.id)
()
{stop}jack
The Query features several operators which make usage of EXISTS automatically. Above, the statement can be expressed along the User.addresses relationship using any():
{sql}>>> for name, in session.query(User.name).\
...         filter(User.addresses.any()): # doctest: +NORMALIZE_WHITESPACE
...     print name
SELECT users.name AS users_name
FROM users
WHERE EXISTS (SELECT 1
FROM addresses
WHERE users.id = addresses.user_id)
()
{stop}jack
any() takes criterion as well, to limit the rows matched:

{sql}>>> for name, in session.query(User.name).\
...         filter(User.addresses.any(Address.email_address.like('%google%'))): # doctest: +NORMALIZE_WHITESPACE
...     print name
SELECT users.name AS users_name
FROM users
WHERE EXISTS (SELECT 1
FROM addresses
WHERE users.id = addresses.user_id
AND addresses.email_address LIKE ?)
('%google%',)
{stop}jack
has() is the same operator as any() for many-to-one relationships (note the ~ operator here too, which means "NOT"):

{sql}>>> session.query(Address).\
...     filter(~Address.user.has(User.name=='jack')).all() # doctest: +NORMALIZE_WHITESPACE
SELECT addresses.id AS addresses_id,
    addresses.email_address AS addresses_email_address,
    addresses.user_id AS addresses_user_id
FROM addresses
WHERE NOT (EXISTS (SELECT 1
FROM users
WHERE users.id = addresses.user_id AND users.name = ?))
('jack',)
{stop}[]
Here's all the operators which build on relationships - each one is linked to its API documentation which includes full details on usage and behavior:

__eq__() (many-to-one "equals" comparison):

    query.filter(Address.user == someuser)

__ne__() (many-to-one "not equals" comparison):

    query.filter(Address.user != someuser)

IS NULL (many-to-one comparison, also uses __eq__()):

    query.filter(Address.user == None)

contains() (used for one-to-many collections):

    query.filter(User.addresses.contains(someaddress))

any() (used for collections):

    query.filter(User.addresses.any(Address.email_address == 'bar'))

    # also takes keyword arguments:
    query.filter(User.addresses.any(email_address='bar'))

has() (used for scalar references):

    query.filter(Address.user.has(name='ed'))

Query.with_parent() (used for any relationship):

    session.query(Address).with_parent(someuser, 'addresses')
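Several of these operators can be seen working together in one self-contained sketch. Assumptions here: modern SQLAlchemy (1.4+) imports, an in-memory SQLite database, and placeholder data; the operators themselves (any(), contains(), has()) are exactly the ones listed above.

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    email_address = Column(String)
    user_id = Column(Integer, ForeignKey('users.id'))
    user = relationship(User, backref='addresses')

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

jack = User(name='jack')
jack.addresses = [Address(email_address='jack@google.com'),
                  Address(email_address='j25@yahoo.com')]
session.add_all([jack, User(name='ed')])
session.commit()

# any(): EXISTS against the collection, with criterion
googlers = [name for name, in session.query(User.name).filter(
    User.addresses.any(Address.email_address.like('%google%')))]

# contains(): one-to-many membership test against a specific object
someaddress = session.query(Address).filter_by(
    email_address='j25@yahoo.com').one()
owner = session.query(User).filter(
    User.addresses.contains(someaddress)).one()

# has(): EXISTS against a scalar (many-to-one) reference
jacks_addresses = session.query(Address).filter(
    Address.user.has(name='jack')).count()
```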
Let's try to delete jack and see how that goes. We'll mark the object as deleted in the session, then we'll issue a count query to see that no rows remain:

>>> session.delete(jack)
{sql}>>> session.query(User).filter_by(name='jack').count() # doctest: +NORMALIZE_WHITESPACE
UPDATE addresses SET user_id=? WHERE addresses.id = ?
(None, 1)
UPDATE addresses SET user_id=? WHERE addresses.id = ?
(None, 2)
DELETE FROM users WHERE users.id = ?
(5,)
SELECT count(1) AS count_1
FROM users
WHERE users.name = ?
('jack',)
{stop}0
So far, so good. How about Jack's Address objects?

{sql}>>> session.query(Address).filter(
...     Address.email_address.in_(['jack@google.com', 'j25@yahoo.com'])
... ).count() # doctest: +NORMALIZE_WHITESPACE
SELECT count(1) AS count_1
FROM addresses
WHERE addresses.email_address IN (?, ?)
('jack@google.com', 'j25@yahoo.com')
{stop}2
Uh oh, they're still there! Analyzing the flush SQL, we can see that the user_id column of each address was set to NULL, but the rows weren't deleted. SQLAlchemy doesn't assume that deletes cascade; you have to tell it to do so.
We will configure cascade options on the User.addresses relationship to change the behavior. While SQLAlchemy allows you to add new attributes and relationships to mappings at any point in time, in this case the existing relationship needs to be removed, so we need to tear down the mappings completely and start again.
Note

Tearing down mappers with clear_mappers() is not a typical operation, and normal applications do not need to use this function. It is here so that the tutorial code can be executed as a whole.

>>> session.close() # roll back and close the transaction
>>> from sqlalchemy.orm import clear_mappers
>>> clear_mappers() # remove all class mappings
Below, we use mapper() to reconfigure an ORM mapping for User and Address, on our existing but currently un-mapped classes. The User.addresses relationship now has delete, delete-orphan cascade on it, which indicates that DELETE operations will cascade to attached Address objects as well as to Address objects which are removed from their parent:

>>> users_table = User.__table__
>>> mapper(User, users_table, properties={    # doctest: +ELLIPSIS
...     'addresses':relationship(Address, backref='user',
...                              cascade="all, delete, delete-orphan")
... })
<Mapper at 0x...; User>

>>> addresses_table = Address.__table__
>>> mapper(Address, addresses_table) # doctest: +ELLIPSIS
<Mapper at 0x...; Address>
Now when we load Jack (below using Query.get(), which loads by primary key), removing an address from his addresses collection will result in that Address being deleted:

# load Jack by primary key
{sql}>>> jack = session.query(User).get(5) #doctest: +NORMALIZE_WHITESPACE
BEGIN (implicit)
SELECT users.id AS users_id, users.name AS users_name,
    users.fullname AS users_fullname, users.password AS users_password
FROM users
WHERE users.id = ?
(5,)
{stop}

# remove one Address (lazy load fires off)
{sql}>>> del jack.addresses[1] #doctest: +NORMALIZE_WHITESPACE
SELECT addresses.id AS addresses_id,
    addresses.email_address AS addresses_email_address,
    addresses.user_id AS addresses_user_id
FROM addresses
WHERE ? = addresses.user_id
(5,)
{stop}

# only one address remains
{sql}>>> session.query(Address).filter(
...     Address.email_address.in_(['jack@google.com', 'j25@yahoo.com'])
... ).count() # doctest: +NORMALIZE_WHITESPACE
DELETE FROM addresses WHERE addresses.id = ?
(2,)
SELECT count(1) AS count_1
FROM addresses
WHERE addresses.email_address IN (?, ?)
('jack@google.com', 'j25@yahoo.com')
{stop}1
Deleting Jack will delete both Jack and his remaining Address:

>>> session.delete(jack)

{sql}>>> session.query(User).filter_by(name='jack').count() # doctest: +NORMALIZE_WHITESPACE
DELETE FROM addresses WHERE addresses.id = ?
(1,)
DELETE FROM users WHERE users.id = ?
(5,)
SELECT count(1) AS count_1
FROM users
WHERE users.name = ?
('jack',)
{stop}0

{sql}>>> session.query(Address).filter(
...     Address.email_address.in_(['jack@google.com', 'j25@yahoo.com'])
... ).count() # doctest: +NORMALIZE_WHITESPACE
SELECT count(1) AS count_1
FROM addresses
WHERE addresses.email_address IN (?, ?)
('jack@google.com', 'j25@yahoo.com')
{stop}0
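The cascade behavior demonstrated above can be reduced to a small self-contained sketch. Assumptions: modern SQLAlchemy (1.4+), where the cascade is configured directly on the declarative relationship() rather than by tearing down and re-running mapper(); in-memory SQLite; placeholder data.

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    # delete-orphan: an Address removed from the collection is deleted,
    # and deleting a User deletes all of its remaining addresses
    addresses = relationship('Address', backref='user',
                             cascade="all, delete, delete-orphan")

class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    email_address = Column(String)
    user_id = Column(Integer, ForeignKey('users.id'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

jack = User(name='jack')
jack.addresses = [Address(email_address='jack@google.com'),
                  Address(email_address='j25@yahoo.com')]
session.add(jack)
session.commit()

del jack.addresses[1]       # orphaned -> DELETEd at flush time
session.commit()
after_removal = session.query(Address).count()

session.delete(jack)        # cascades to the remaining Address row
session.commit()
after_delete = (session.query(User).count(),
                session.query(Address).count())
```

Without the cascade setting, the same operations would leave orphaned rows with user_id set to NULL, as the earlier flush SQL showed.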
We're moving into the bonus round here, but let's show off a many-to-many relationship. We'll sneak in some other features too, just to take a tour. We'll make our application a blog application, where users can write BlogPost items, which have Keyword items associated with them.
The declarative setup is as follows:
>>> from sqlalchemy import Text

>>> # association table
>>> post_keywords = Table('post_keywords', metadata,
...     Column('post_id', Integer, ForeignKey('posts.id')),
...     Column('keyword_id', Integer, ForeignKey('keywords.id'))
... )

>>> class BlogPost(Base):
...     __tablename__ = 'posts'
...
...     id = Column(Integer, primary_key=True)
...     user_id = Column(Integer, ForeignKey('users.id'))
...     headline = Column(String(255), nullable=False)
...     body = Column(Text)
...
...     # many to many BlogPost<->Keyword
...     keywords = relationship('Keyword', secondary=post_keywords, backref='posts')
...
...     def __init__(self, headline, body, author):
...         self.author = author
...         self.headline = headline
...         self.body = body
...
...     def __repr__(self):
...         return "BlogPost(%r, %r, %r)" % (self.headline, self.body, self.author)

>>> class Keyword(Base):
...     __tablename__ = 'keywords'
...
...     id = Column(Integer, primary_key=True)
...     keyword = Column(String(50), nullable=False, unique=True)
...
...     def __init__(self, keyword):
...         self.keyword = keyword
Above, the many-to-many relationship is BlogPost.keywords. The defining feature of a many-to-many relationship is the secondary keyword argument which references a Table object representing the association table. This table only contains columns which reference the two sides of the relationship; if it has any other columns, such as its own primary key, or foreign keys to other tables, SQLAlchemy requires a different usage pattern called the "association object", described at association_pattern.
The many-to-many relationship is also bidirectional using the backref keyword. This is the one case where usage of backref is generally required, since if a separate posts relationship were added to the Keyword entity, both relationships would independently add and remove rows from the post_keywords table and produce conflicts.
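A stripped-down, runnable version of this many-to-many setup looks as follows. Assumptions: modern SQLAlchemy (1.4+) imports, an in-memory SQLite database, default declarative constructors, and a single placeholder post; the secondary table and the backref mirror the pattern described above.

```python
from sqlalchemy import (Column, ForeignKey, Integer, String, Table,
                        create_engine)
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

# association table: only the two foreign keys, nothing else
post_keywords = Table('post_keywords', Base.metadata,
    Column('post_id', Integer, ForeignKey('posts.id')),
    Column('keyword_id', Integer, ForeignKey('keywords.id')))

class BlogPost(Base):
    __tablename__ = 'posts'
    id = Column(Integer, primary_key=True)
    headline = Column(String, nullable=False)
    # secondary= is what makes this many-to-many; backref adds
    # the reverse Keyword.posts collection
    keywords = relationship('Keyword', secondary=post_keywords,
                            backref='posts')

class Keyword(Base):
    __tablename__ = 'keywords'
    id = Column(Integer, primary_key=True)
    keyword = Column(String, unique=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

post = BlogPost(headline='first post')
post.keywords.append(Keyword(keyword='firstpost'))
session.add(post)
session.commit()

# any() renders an EXISTS against the association + keywords tables
matches = [p.headline for p in session.query(BlogPost)
           .filter(BlogPost.keywords.any(keyword='firstpost'))]

# the backref gives the reverse collection on Keyword
kw = session.query(Keyword).filter_by(keyword='firstpost').one()
reverse = [p.headline for p in kw.posts]
```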
We would also like our BlogPost class to have an author field. We will add this as another bidirectional relationship, except one issue we'll have is that a single user might have lots of blog posts. When we access User.posts, we'd like to be able to filter results further so as not to load the entire collection. For this we use a setting accepted by relationship() called lazy='dynamic', which configures an alternate loader strategy on the attribute. To use it on the "reverse" side of a relationship(), we use the backref() function:
>>> from sqlalchemy.orm import backref
>>> # "dynamic" loading relationship to User
>>> BlogPost.author = relationship(User, backref=backref('posts', lazy='dynamic'))
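The effect of lazy='dynamic' can be seen in isolation with this sketch. Assumptions: modern SQLAlchemy (1.4+), in-memory SQLite, placeholder headlines, and the dynamic backref declared inline on a simple one-to-many mapping; the key point is that the posts attribute produces a filterable query rather than loading the whole collection.

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, backref, declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

class BlogPost(Base):
    __tablename__ = 'posts'
    id = Column(Integer, primary_key=True)
    headline = Column(String, nullable=False)
    user_id = Column(Integer, ForeignKey('users.id'))
    # the "dynamic" backref makes User.posts produce a Query,
    # rather than loading a plain list
    author = relationship(User, backref=backref('posts', lazy='dynamic'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

wendy = User(name='wendy')
session.add_all([BlogPost(headline='hello', author=wendy),
                 BlogPost(headline='second post', author=wendy)])
session.commit()

# User.posts does not load the full collection up front; it yields a
# query object that can be narrowed before any rows are fetched
n_total = wendy.posts.count()
hellos = [p.headline for p in
          wendy.posts.filter(BlogPost.headline.like('hello%'))]
```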
Create new tables:

{sql}>>> metadata.create_all(engine) # doctest: +NORMALIZE_WHITESPACE
PRAGMA table_info("users")
()
PRAGMA table_info("addresses")
()
PRAGMA table_info("posts")
()
PRAGMA table_info("keywords")
()
PRAGMA table_info("post_keywords")
()
CREATE TABLE posts (
    id INTEGER NOT NULL,
    user_id INTEGER,
    headline VARCHAR(255) NOT NULL,
    body TEXT,
    PRIMARY KEY (id),
    FOREIGN KEY(user_id) REFERENCES users (id)
)
()
COMMIT
CREATE TABLE keywords (
    id INTEGER NOT NULL,
    keyword VARCHAR(50) NOT NULL,
    PRIMARY KEY (id),
    UNIQUE (keyword)
)
()
COMMIT
CREATE TABLE post_keywords (
    post_id INTEGER,
    keyword_id INTEGER,
    FOREIGN KEY(post_id) REFERENCES posts (id),
    FOREIGN KEY(keyword_id) REFERENCES keywords (id)
)
()
COMMIT
Usage is not too different from what we've been doing. Let's give Wendy some blog posts:

{sql}>>> wendy = session.query(User).filter_by(name='wendy').one() #doctest: +NORMALIZE_WHITESPACE
SELECT users.id AS users_id, users.name AS users_name,
    users.fullname AS users_fullname, users.password AS users_password
FROM users
WHERE users.name = ?
('wendy',)
{stop}
>>> post = BlogPost("Wendy's Blog Post", "This is a test", wendy)
>>> session.add(post)
We're storing keywords uniquely in the database, but we know that we don't have any yet, so we can just create them:

>>> post.keywords.append(Keyword('wendy'))
>>> post.keywords.append(Keyword('firstpost'))
We can now look up all blog posts with the keyword 'firstpost'. We'll use the any operator to locate "blog posts where any of its keywords has the keyword string 'firstpost'":

{sql}>>> session.query(BlogPost).\
...     filter(BlogPost.keywords.any(keyword='firstpost')).all() #doctest: +NORMALIZE_WHITESPACE
INSERT INTO keywords (keyword) VALUES (?)
('wendy',)
INSERT INTO keywords (keyword) VALUES (?)
('firstpost',)
INSERT INTO posts (user_id, headline, body) VALUES (?, ?, ?)
(2, "Wendy's Blog Post", 'This is a test')
INSERT INTO post_keywords (post_id, keyword_id) VALUES (?, ?)
((1, 1), (1, 2))
SELECT posts.id AS posts_id, posts.user_id AS posts_user_id,
    posts.headline AS posts_headline, posts.body AS posts_body
FROM posts
WHERE EXISTS (SELECT 1
FROM post_keywords, keywords
WHERE posts.id = post_keywords.post_id
    AND keywords.id = post_keywords.keyword_id
    AND keywords.keyword = ?)
('firstpost',)
{stop}[BlogPost("Wendy's Blog Post", 'This is a test', <User('wendy','Wendy Williams', 'foobar')>)]
If we want to look up just Wendy's posts, we can tell the query to narrow down to her as a parent:

{sql}>>> session.query(BlogPost).\
...     filter(BlogPost.author==wendy).\
...     filter(BlogPost.keywords.any(keyword='firstpost')).all() #doctest: +NORMALIZE_WHITESPACE
SELECT posts.id AS posts_id, posts.user_id AS posts_user_id,
    posts.headline AS posts_headline, posts.body AS posts_body
FROM posts
WHERE ? = posts.user_id AND (EXISTS (SELECT 1
FROM post_keywords, keywords
WHERE posts.id = post_keywords.post_id
    AND keywords.id = post_keywords.keyword_id
    AND keywords.keyword = ?))
(2, 'firstpost')
{stop}[BlogPost("Wendy's Blog Post", 'This is a test', <User('wendy','Wendy Williams', 'foobar')>)]
Or we can use Wendy's own posts relationship, which is a "dynamic" relationship, to query straight from there:

{sql}>>> wendy.posts.filter(BlogPost.keywords.any(keyword='firstpost')).all() #doctest: +NORMALIZE_WHITESPACE
SELECT posts.id AS posts_id, posts.user_id AS posts_user_id,
    posts.headline AS posts_headline, posts.body AS posts_body
FROM posts
WHERE ? = posts.user_id AND (EXISTS (SELECT 1
FROM post_keywords, keywords
WHERE posts.id = post_keywords.post_id
    AND keywords.id = post_keywords.keyword_id
    AND keywords.keyword = ?))
(2, 'firstpost')
{stop}[BlogPost("Wendy's Blog Post", 'This is a test', <User('wendy','Wendy Williams', 'foobar')>)]
Query Reference: query_api_toplevel

Mapper Reference: mapper_config_toplevel

Relationship Reference: relationship_config_toplevel

Session Reference: session_toplevel