
pep484 Type annotations, mypy compatibility #4609

Closed
auvipy opened this issue Apr 11, 2019 · 29 comments
@auvipy
Contributor

auvipy commented Apr 11, 2019

No description provided.

@zzzeek zzzeek added the "big job" label (a major change that requires a lot of knowledge and work) Apr 11, 2019
@zzzeek zzzeek added this to the 2.0 milestone Apr 11, 2019
@zzzeek zzzeek added the "alchemy 2" (goes along with the 2.0 milestone to aid in searching) and "pep484" labels Apr 11, 2019
@zzzeek zzzeek added this to 2.0 deliverables in progress in sqlalchemy 2 Apr 20, 2019
@zzzeek zzzeek moved this from 2.0 deliverables in progress to 2.0 deliverables backlog in sqlalchemy 2 Apr 20, 2019
@ilevkivskyi

I have a few questions/comments here:

  • What is the proposed schedule here?
  • Are you going to try applying the existing stubs to sources using retype?
  • Note that the mypy plugin is required for SQLAlchemy types to work correctly. Are you going to distribute it with the main package?
  • We will probably need to migrate open issues/PRs against sqlalchemy-stubs to this repo at some point.

@bryanforbes may also be interested in this.

@JukkaL @gvanrossum @msullivan just FYI

@zzzeek
Member

zzzeek commented Apr 25, 2019

Hi Ivan and thanks for the comments.

I have a few questions/comments here:

* What is the proposed schedule here?

a schedule in open source is, as you know, about as stable as Jell-O. Let's say what I envision is that by Q1/Q2 of 2020, SQLAlchemy 2 will look very much like what it's ultimately going to look like, and a full pass of type annotations can be added.

* Are you going to try applying the existing stubs to sources using `retype`?

I haven't looked at the tooling for this at all. I assumed there would be some way to leverage the stubs you've made, so perhaps, but at the same time a lot of the public-facing APIs are changing in SQLAlchemy 2 as well, much of it inspired by the kind of thinking we use when our types are more static (e.g. it's a code smell when a function returns Union[Foo, Bar], at least back in my Java days it was). I'm also getting the vague sense that when I actually look at the existing stubs more closely, I might be inspired to make the types a lot tighter. That said, there are a lot of lines of code here, so the reality will probably be some combination of doing lots of new typing by hand, running in the stubs with tooling to handle lots of other areas, and then, depending on what kinds of volunteer efforts come in, that might move it either way.

* Note that the mypy plugin is required for SQLAlchemy types to work correctly. Are you going to distribute it with the main package?

So for this part perhaps I need to understand things a bit better because I don't understand what "for the types to work correctly" means. The name of this issue that refers to "mypy" should more likely refer to pep484. I use mypy when doing this to do a static check ahead of time, as I did recently in my "publishthing" project in order to learn it: https://github.com/sqlalchemyorg/publishthing. But IIUC there are any number of type checkers, e.g. https://github.com/ethanhs/python-typecheckers, and these are all only used as checkers against the source code; they aren't needed to actually run software that uses pep484 annotations (SQLAlchemy 2 will be targeting Python 3.5 and above only). Certainly there should be a tox target that runs mypy and pulls it in as a dependency for that task.

* We will probably need to migrate open issues/PRs against `sqlalchemy-stubs` to this repo at some point.

yes, once all of this new stuff is released, which again is hopefully an early 2020 thing, but it depends on how my time constraints work out.

@bryanforbes may also be interested in this.

@JukkaL @gvanrossum @msullivan just FYI

@zzzeek zzzeek changed the title from "Type annotations for mypy" to "pep484 Type annotations, mypy compatibility" Apr 25, 2019
@ilevkivskyi

Let's say what I envision is that by Q1/Q2 of 2020, SQLAlchemy 2 will look very much like what it's ultimately going to look like, and a full pass of type annotations can be added.

This sounds great!

So for this part perhaps I need to understand things a bit better because I don't understand what "for the types to work correctly" means.

SQLAlchemy has some semantics that are impossible to express in the PEP 484 type system (even including the more advanced types added in subsequent typing-related PEPs). So even if you have 100% precise type coverage in the library, the user code that imports it may be hard to type-check precisely. For example, consider this code:

from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "user"
    id = Column(Integer(), primary_key=True)
    name = Column(String())

user = User(id="bad", name=42)

It is impossible to express why the last call is bad, while if you use the existing mypy-SQLAlchemy plugin, mypy will show a reasonable error message. Similar situations also exist for other popular frameworks and libraries, because Python can be very dynamic. This is why there are mypy plugins for attrs, django, zope.interface, etc.

Other type checkers will still be able to use the types from your library, but there might be both false positives and false negatives in the user code. Note that Pyre IIRC has some built-in hardcoded support for Django; maybe they will add something for SQLAlchemy, but mypy decided to go the route of extensible plugins.

(SQLAlchemy 2 will be targeting Python 3.5 and above only).

If you want to have really nice syntax for types, then I would propose being even more aggressive and supporting 3.6+, because in Python 3.6 you can write:

items: List[str] = []
class User:
    id: int
    name: str

etc. Also, Python 3.5 reaches EOL in September 2020.

@zzzeek
Member

zzzeek commented Apr 28, 2019

oh, that plugin, yes, I see that. It's likely better that this is part of SQLAlchemy, if only because as SQLAlchemy's API changes I assume the tests would highlight if we broke anything. We'd need to be able to support this as well as help users with issues, so if we prefer that it eventually live with SQLAlchemy, I would really be grateful if other people had availability to help out with user issues and requests if I reach out. Having user mappings be mypy-compatible wasn't my goal here, but yes, that should be a thing too. I would say that SQLAlchemy 2.0 can launch with or without it as part of the library, but if we can get it integrated with the test suite and all that, that's probably fine.

As far as the 3.6 syntax, I see that, but ORM mappings aren't going to be able to use that syntax anytime soon, because we are still too conservative to assume a mapping from Python types to DB types. We make people spell out the database types because as often as it's a generic Integer, it's something specific like mysql.BIGINT(display_width=8), so keeping it consistent makes things a little clearer IMO.

That said, sure we might just go to Python 3.6 by then.
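
To make the explicitness point above concrete, here is a rough sketch of the kind of mapping meant; the Account class and its columns are purely illustrative:

from sqlalchemy import Column, Integer, String
from sqlalchemy.dialects import mysql
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Account(Base):
    __tablename__ = "account"

    # generic type: any backend's plain integer
    id = Column(Integer, primary_key=True)

    # backend-specific type: a bare Python annotation like "balance: int"
    # could never imply this by itself
    balance = Column(mysql.BIGINT(display_width=8))

    name = Column(String(50))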

@ilevkivskyi

I would really be grateful if other people had availability to help out with user issues and requests if I reach out

Sure, I will be available (and glad) to help with this.

@commentator8

We are using SQLAlchemy as our ORM and I recently converted our code base over to py3.8.
I have added types to almost all of the code, but am having an issue with the following two scenarios:

  • deferred: code trying to use a deferred attribute doesn't seem to recognise that it exists
  • relationships: if it's a one-to-many, then it is seen as a MyTable object and not List[MyTable]

Is this just the current state, and will it have to wait for 2020 as mentioned above? Or am I missing a workaround or existing fix?

Thanks!

@zzzeek
Member

zzzeek commented Dec 11, 2019

We are using SQLAlchemy as our ORM and I recently converted our code base over to py3.8.
I have added types to almost all of the code, but am having an issue with the following two scenarios:

  • deferred: code trying to use a deferred attribute doesn't seem to recognise that it exists
  • relationships: if it's a one-to-many, then it is seen as a MyTable object and not List[MyTable]

Is this just the current state, and will it have to wait for 2020 as mentioned above? Or am I missing a workaround or existing fix?

Thanks!

Whether or not SQLAlchemy uses pep 484 annotations internally is not important for the behavior of the dropbox plugin here. Are you reporting an issue with the plugin? I would post an issue at https://github.com/dropbox/sqlalchemy-stubs#mypy-plugin-and-stubs-for-sqlalchemy .

@commentator8

I am not entirely sure. Basically, what I want is to be able to have a table with a deferred column and not have PyCharm complain that Table.deferred_item doesn't exist :) I saw from the above the existence of the plugin and installed it; it seems that it might do what I want.

Thanks

@pszpetkowski

Is there a chance to broaden the scope of this issue to include support for pyright as well?

@zzzeek
Member

zzzeek commented Aug 18, 2020

I've not looked into the mechanics of how a mypy plugin works yet; I assume there is a .py file somewhere with the right elements inside of it. If the "pyright" tool has some similar mechanism, then we can surely add two .py files with the correct extension elements, sure.
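
For reference, a minimal mypy plugin really is just a .py module that exposes a plugin() entry point returning a Plugin subclass; the bare-bones sketch below only shows that shape, and the hook it overrides is illustrative rather than what a SQLAlchemy plugin would actually need:

from typing import Callable, Optional, Type

from mypy.plugin import ClassDefContext, Plugin


class SketchPlugin(Plugin):
    # a do-nothing skeleton; a real plugin overrides hooks such as this one
    def get_base_class_hook(
        self, fullname: str
    ) -> Optional[Callable[[ClassDefContext], None]]:
        # return a callback here when fullname names a class we want to customize
        return None


def plugin(version: str) -> Type[SketchPlugin]:
    # mypy imports the module named under "plugins =" in its config file
    # and calls plugin(mypy_version) to obtain the Plugin class to use
    return SketchPlugin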

@tiangolo
Sponsor Contributor

Standard Python types as per PEP 484 (and the PEPs that followed it) are what provide compatibility with Pyright. There's no need for anything else.

Pyright just reads those standard Python type annotations. And they are currently focussing on maximum compatibility with the type annotation standards, so they don't expose any type of external plug-in interface as of now.

@zzzeek
Member

zzzeek commented Aug 18, 2020

So there's not really a way to not support pyright if pep-484 is followed; in that case, there is no issue.

However, the reason there's a mypy plugin is to support type annotations working for ORM mapped classes based on SQLAlchemy datatypes. I haven't looked at the existing plugin, but something that would definitely need a lot of mechanics would be figuring out something such as:

stmt = select(User.name, User.id)   # somehow knows it's a select[String, Integer]
result = session.execute(stmt)  # this returns Result
row = result.first()   # it would be known at a pep-484 level that this is a tuple[String, Integer]

Above I'm not sure if that's even possible, but it certainly would be very slick.

I would guess the kind of thing the mypy plugin does right now is:

addresses = some_user.addresses  # pep-484 tools know this is List[Address]
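
For context, a hypothetical mapping behind that line (User and Address here are stand-ins); whether the existing plugin actually infers List[Address] for the relationship without an explicit annotation is exactly the kind of behavior it has to supply:

from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()


class Address(Base):
    __tablename__ = "address"
    id = Column(Integer, primary_key=True)
    user_id = Column(ForeignKey("user_account.id"))
    email = Column(String)


class User(Base):
    __tablename__ = "user_account"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    # without plugin support a checker only sees the RelationshipProperty
    # object assigned here, not the per-instance List[Address] collection
    addresses = relationship("Address")


some_user = User(name="spongebob", addresses=[Address(email="sb@example.com")])
addresses = some_user.addresses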

@henrycjc

Thanks for your response in #5521 @zzzeek. I appreciate not wanting to make such a large change to the stable 1.x architecture just to accommodate typing.
Assuming there is scope with SQLA2 for dynamic typing, I can certainly see room for some sort of generic approach (similar to this PR that I came across adding generics to Pandas DataFrames databricks/koalas@0f7449f)

If typing had some sort of variadic support (discussed here python/typing#193) then I could see something like this?

S = TypeVar('S') # statement
T = TypeVar('T', variadic=True)

class Select(Statement[T*]):
    ...

stmt = select(User.name, User.id)   # SelectStatement is generic on T*, i.e. select[str, int]
result = session.execute(stmt)  # Result is generic on S[T*], i.e. ResultProxy[select[str, int]]
row = result.first()   # result knows T* and can infer ScalarResult[SelectStatement[str, int]] as an alias for Tuple[str, int]

@Dreamsorcerer
Contributor

Having updated my application to 2.0-style, from what I can see, the mypy plugin probably works fine, but most of the typing annotations need replacing.

For example, select() now accepts ORM classes and columns.
The new Engine/Session classes are not recognised, so they just end up as Any, which means that user = ret.scalar() will just return Any and lose the ORM typing. But adding an annotation such as user: User = ret.scalar() gets that information back.
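
A small sketch of the workaround described above; the User mapping and in-memory engine here are stand-ins for the real application code:

from sqlalchemy import Column, Integer, String, create_engine, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class User(Base):
    __tablename__ = "user_account"
    id = Column(Integer, primary_key=True)
    name = Column(String)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(name="spongebob"))

    ret = session.execute(select(User).where(User.name == "spongebob"))
    # with the current stubs the new Session/Result types end up as Any,
    # so this comes back as Any and the ORM typing is lost...
    user_any = ret.scalar()

    ret = session.execute(select(User).where(User.name == "spongebob"))
    # ...whereas an explicit annotation restores the type for later checks
    user: User = ret.scalar()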

@mrahtz

mrahtz commented Feb 20, 2021

@henrycjc I noticed you referenced python/typing#193 on variadic generics in this thread. Heads up that we've been working on a draft of a PEP for this in PEP 646. If this is something you still care about, take a read and let us know any feedback in this thread in typing-sig. Thanks!

@zzzeek zzzeek self-assigned this Feb 20, 2021
@zzzeek
Member

zzzeek commented Feb 20, 2021

Just an update, folks: now that 1.4 is ready for a final release and the 2.0 concept is implemented, the typing stubs and plugin are what I've worked on all week. As the existing plugin approach needs updates for all the new mapping styles in any case, I'm evaluating an altered approach for a plugin based on descriptors. Since SQLAlchemy mapped classes ultimately use descriptors for every attribute, this seems to provide the most accurate representation of a mapped class's behavior, and mypy seems to match the behavior of a descriptor very well so far.

But it's not clear if "the most accurate" representation is what people would actually want, so that remains to be seen. Example: any mapped attribute on a mapped class can be None if the instance is not persistent yet, even if the mapped Column is nullable=False. So does the plugin disregard "nullable" on the Column? It seems like yes, but is that what's actually practical? This kind of thing might need to be configurable, since I can see different people going different directions with it.
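
To illustrate the trade-off being weighed, a purely hypothetical sketch; neither spelling below is a decided plugin behavior:

from typing import Optional


class UserStrict:
    # mirror nullable=False literally: "user.name" type-checks as str, even
    # though a pending, not-yet-flushed instance really can hold None here
    name: str


class UserLenient:
    # disregard nullable=False: "user.name" is Optional[str], so every access
    # needs a None check even after the row has been loaded from the database
    name: Optional[str]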

As I'd like the stubs to be a continued work in progress over the coming months that developers can "opt in" to using, I will be publishing them under a separate PyPI package: https://pypi.org/project/sqlalchemy2-stubs/ . The current sqlalchemy-stubs are in very widespread use despite their alpha status, including that my own Visual Studio Code is using them out of the box with Pylance, so if I want to have something that's in flux, I'd rather not touch the existing sqlalchemy-stubs at all for now. SQLAlchemy 2.0 will hopefully just integrate the stubs so there won't be a separate package (I think; at the same time, separate stubs seem oddly convenient, not sure why). As for the plugin itself, I would prefer it to be in sqlalchemy.ext.mypy, but as it would be dependent on the newer stubs, I'm not totally sure how to best present that; it would have to detect that the "right" stubs are installed. There's probably a way to do that.

@sqla-tester
Collaborator

Mike Bayer referenced this issue:

document declarative base made non-dynamically https://gerrit.sqlalchemy.org/c/sqlalchemy/sqlalchemy/+/2600

@zzzeek
Member

zzzeek commented Feb 20, 2021

In case anyone is following along, here's an example of the descriptor working. The program below, using my work in progress, passes mypy --strict with no errors without using any plugins, but to do that it has the Table set up separately. The necessary part is that there's a generic "descriptor" type called Mapped[_T] (as of this comment, the Mapped type is not committed to master yet, it's in review still).

This is a documented configuration style called "imperative table" and it also is compatible with dataclasses. The plugin, when enabled, will accommodate SQLAlchemy's automatic application of these descriptors in place of Column and other objects when they are declared inline, to support the more traditional "declarative" form.

from typing import List
from typing import Optional

from sqlalchemy import Column
from sqlalchemy import create_engine
from sqlalchemy import ForeignKey
from sqlalchemy import Integer
from sqlalchemy import select
from sqlalchemy import String
from sqlalchemy import Table
from sqlalchemy.orm import registry
from sqlalchemy.orm import relationship
from sqlalchemy.orm import Session
from sqlalchemy.orm.attributes import Mapped
from sqlalchemy.orm.decl_api import DeclarativeMeta


class Base(metaclass=DeclarativeMeta):
    __abstract__ = True
    registry = registry()
    metadata = registry.metadata


class A(Base):
    __table__ = Table(
        "a",
        Base.metadata,
        Column("id", Integer, primary_key=True),
        Column("data", String),
    )

    __mapper_args__ = {"properties": {"bs": relationship("B")}}

    id: Mapped[int]
    data: Mapped[str]
    bs: "Mapped[List[B]]"

    def __init__(
        self,
        id: Optional[int] = None,
        data: Optional[str] = None,
        bs: "Optional[List[B]]" = None,
    ):
        self.registry.constructor(self, id=id, data=data, bs=bs)


class B(Base):
    __table__ = Table(
        "b",
        Base.metadata,
        Column("id", Integer, primary_key=True),
        Column("a_id", ForeignKey("a.id")),
        Column("data", String),
    )
    id: Mapped[int]
    a_id: Mapped[int]
    data: Mapped[str]

    def __init__(
        self,
        id: Optional[int] = None,
        a_id: Optional[int] = None,
        data: Optional[str] = None,
    ):
        self.registry.constructor(self, id=id, a_id=a_id, data=data)


e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

s = Session(e)


a1 = A(data="some data", bs=[B(data="some data")])

s.add(a1)
s.commit()

# illustrate descriptor working at the class level, A.data.in_()
stmt = (
    select(A.data, B.data)
    .join(B)
    .where(A.data.in_(["some data", "some other data"]))
)

for row in s.execute(stmt):
    print(row)

@septatrix

This is already looking great, albeit rather verbose. I definitely like that it feels way more natural to declare the types than to assign them to the attributes.

The necessary part is that there's a generic "descriptor" type called Mapped[_T] (as of this comment, the Mapped type is not committed to master yet, it's in review still).

Have you had a look at Pydantic yet? It also supports the declaration using type hints and has good mypy support¹. Maybe you could elaborate why the Mapped[_T] is currently necessary. It seems like one could get away without it.

This is a documented configuration style called "imperative table" and it also is compatible with dataclasses.

There is also typing.Annotated which might allow one to get away without this. It is only available since 3.9 but maybe it will get added to the typing_extensions package. Those annotations could allow one to use simple hints (displayname: str) to signify a simple column of the same name and corresponding type whereas more complex columns could be annotated like this (id: Annotated[int, primary_key=True, column_name="some_different_name"]).

¹ It also has a mypy plugin but that is mostly necessary for generating the signature for __init__ and advanced features.
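
As a syntax footnote to the Annotated idea above: typing.Annotated takes arbitrary positional metadata objects rather than keyword arguments, so a legal spelling would look more like this hypothetical sketch (ColumnMeta is an invented stand-in, not part of SQLAlchemy or the stubs):

from typing import Optional

from typing_extensions import Annotated  # also in typing itself on Python 3.9+


class ColumnMeta:
    # illustrative metadata carrier; a real library would define its own
    def __init__(
        self, primary_key: bool = False, column_name: Optional[str] = None
    ) -> None:
        self.primary_key = primary_key
        self.column_name = column_name


class User:
    # plain hint: a simple column of the same name and corresponding type
    displayname: str

    # Annotated wraps the Python type plus arbitrary metadata objects;
    # bare keyword arguments inside the [...] are not legal subscript syntax
    id: Annotated[int, ColumnMeta(primary_key=True, column_name="user_id")]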

@zzzeek
Member

zzzeek commented Feb 24, 2021

This is already looking great, albeit rather verbose. I definitely like that it feels way more natural to declare the types than to assign them to the attributes.

OK, just so you know, there's nothing new at all about that; the "imperative table" style is as old as Declarative itself. I'm using it above just to illustrate how to set types for the attributes in a way that won't cause conflicts, in the absence of a plugin.

The necessary part is that there's a generic "descriptor" type called Mapped[_T] (as of this comment, the Mapped type is not committed to master yet, it's in review still).

Have you had a look at Pydantic yet? It also supports the declaration using type hints and has good mypy support¹. Maybe you could elaborate why the Mapped[_T] is currently necessary. It seems like one could get away without it.

It's necessary because SQLAlchemy mapped classes define special behaviors for these attributes at the class level as well as at the instance level, such as the example above, A.data.in_(['a', 'b', 'c']). Pydantic doesn't do anything like this; pydantic does not even apply any descriptors to the class. So there is really not a lot of comparison between how SQLAlchemy and Pydantic actually work or what they provide as far as functionality.

As for verbosity, the plugin that I'm writing will be supplying the "Mapped[]" annotation automatically in every case that it can handle, which currently includes just the traditional declarative style. For the "imperative table" style it would be too involved for the mypy plugin to look for __table__ and then read the whole table. I think Mapped[int] makes it very clear, again noting that this is not just an 'int'; there are class-level behaviors that are public API as well.

There is also typing.Annotated which might allow one to get away without this. It is only available since 3.9 but maybe it will get added to the typing_extensions package. Those annotations could allow one to use simple hints (displayname: str) to signify a simple column of the same name and corresponding type whereas more complex columns could be annotated like this (id: Annotated[int, primary_key=True, column_name="some_different_name"]).

I'm not sure I'm following that; why wouldn't one simply use Column like they do now? That's where stuff like "primary key" and "column name" goes.

The Column also gets the type annotation here, so it's Column[_T], which is not actually my idea; it's what the existing sqlalchemy-stubs does. I wasn't sure I liked that at first, but it is kind of useful to pull through the Python type that is going to correspond to the mapped object, and it works out very well.
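
Roughly, the genericization being referred to looks like this stripped-down sketch; it is heavily simplified and is not the actual signature used by sqlalchemy-stubs:

from typing import Generic, TypeVar

_T = TypeVar("_T")


class TypeEngine(Generic[_T]):
    # stand-in for SQLAlchemy's type objects; Integer() acts as TypeEngine[int]
    pass


class Integer(TypeEngine[int]):
    pass


class String(TypeEngine[str]):
    pass


class Column(Generic[_T]):
    def __init__(self, type_: TypeEngine[_T], primary_key: bool = False) -> None:
        self.type = type_
        self.primary_key = primary_key


# the Python type is "pulled through": a checker sees id_col as Column[int]
id_col = Column(Integer(), primary_key=True)
name_col: Column[str] = Column(String())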

@septatrix

OK, just so you know, there's nothing new at all about that; the "imperative table" style is as old as Declarative itself. I'm using it above just to illustrate how to set types for the attributes in a way that won't cause conflicts, in the absence of a plugin.

Okay, I did not know that, as I have not used SQLAlchemy in depth yet or looked at some of the alternative patterns/styles.

The necessary part is that there's a generic "descriptor" type called Mapped[_T] [...]

Have you had a look at Pydantic yet? It also supports the declaration using type hints and has good mypy support¹. Maybe you could elaborate why the Mapped[_T] is currently necessary. It seems like one could get away without it.

It's necessary because SQLAlchemy mapped classes define special behaviors for these attributes at the class level as well as at the instance level, such as the example above, A.data.in_(['a', 'b', 'c']). Pydantic doesn't do anything like this; pydantic does not even apply any descriptors to the class. So there is really not a lot of comparison between how SQLAlchemy and Pydantic actually work or what they provide as far as functionality.

I had already forgotten that all the methods etc. would lead to errors should one use normal types without such a generic.

As for verbosity, the plugin that I'm writing will be supplying the "Mapped[]" annotation automatically in every case that it can handle, which currently includes just the traditional declarative style. For the "imperative table" style it would be too involved for the mypy plugin to look for __table__ and then read the whole table. I think Mapped[int] makes it very clear, again noting that this is not just an 'int'; there are class-level behaviors that are public API as well.

Yes, I agree. Having a special generic definitely signifies that an attribute is more than just a string, etc.

There is also typing.Annotated [...]

I'm not sure I'm following that; why wouldn't one simply use Column like they do now? That's where stuff like "primary key" and "column name" goes.

The Column also gets the type annotation here, so it's Column[_T], which is not actually my idea; it's what the existing sqlalchemy-stubs does. I wasn't sure I liked that at first, but it is kind of useful to pull through the Python type that is going to correspond to the mapped object, and it works out very well.

Well what I am imagining is something like the following:

class User(Base):
    id: Column[int, primary_key=True]
    name: Column[str]

In general, my problem with envisioning how this could work and what the interface might be is that I mostly imagine tables like dataclasses or named tuples, but always forget that the attributes themselves also have methods. Therefore the type of the attributes passed to the constructor does not match up with the type as which they are accessible on the resulting object.

@zzzeek
Member

zzzeek commented Feb 25, 2021

Well what I am imagining is something like the following:

class User(Base):
    id: Column[int, primary_key=True]
    name: Column[str]

Oh, I see. One would think all of dataclasses would work this way instead of having dataclasses.field()? Is that the plan for dataclasses? Once they move to that, that would be encouraging for this style.

In general, my problem with envisioning how this could work and what the interface might be is that I mostly imagine tables like dataclasses or named tuples, but always forget that the attributes themselves also have methods. Therefore the type of the attributes passed to the constructor does not match up with the type as which they are accessible on the resulting object.

Well, that's the part I'm dealing with; as long as the thing that's up front there has the right runtime signature, it's good. Maybe we could do it like this (someday, I'm not ready for this yet):

class User:
    id: Mapped[Column(Integer, nullable=False)]
    stuff: Mapped[relationship("Stuff", lazy='joined')]

@zzzeek
Member

zzzeek commented Feb 25, 2021

Just gave pydantic more of a spin, and so far it looks like SQLAlchemy would have to circumvent most of what it does in order to get its own instrumentation to work. Not sure if it would be worth it; what I tried and got halfway there is at https://gist.github.com/zzzeek/ee7f5925959b218a7fdc9a637b5efcfb.

@zzzeek
Member

zzzeek commented Feb 25, 2021

Yeah, pydantic's __init__ (https://github.com/samuelcolvin/pydantic/blob/master/pydantic/main.py#L397) takes all the attributes, puts them in a dictionary, and just sets obj.__dict__ wholesale. A SQLAlchemy approach would have to rewrite that method to run the values through the descriptors.
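
In other words, the difference is roughly the following simplified sketch; neither function is either library's actual implementation:

from typing import Any, Dict


def pydantic_style_init(obj: Any, values: Dict[str, Any]) -> None:
    # bulk-assign the values; any descriptors defined on type(obj) for these
    # names are bypassed entirely during the assignment
    object.__setattr__(obj, "__dict__", dict(values))


def descriptor_style_init(obj: Any, values: Dict[str, Any]) -> None:
    # assign one attribute at a time so class-level descriptors, which is what
    # SQLAlchemy's instrumentation relies on, intercept each individual set
    for key, value in values.items():
        setattr(obj, key, value)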

sqlalchemy-bot pushed a commit that referenced this issue Feb 26, 2021
officially document how to make declarative base
non-dynamically such that mypy and similar tools
can process it without plugins

Change-Id: I884f9a7c06c4a8b8111948a2dd64e308e7dce4fc
References: #4609
@septatrix

What prevents you from incorporating the stubs from https://github.com/dropbox/sqlalchemy-stubs (and maybe even shipping the mypy plugin)?

@zzzeek
Member

zzzeek commented Mar 6, 2021

What prevents you from incorporating the stubs from https://github.com/dropbox/sqlalchemy-stubs (and maybe even shipping the mypy plugin)?

Well, the license is different, which was the first part, and the fact that it's copyrighted by a big company like Dropbox makes me hesitate a bit before copying their mypy plugin code. The stubs are generated by stubgen in any case, and most of what's in sqlalchemy-stubs is just plain stubgen output anyway. Since I'm changing the fundamental nature of how typing will be applied, the way the mypy plugin works is totally different, as I'm using different hooks entirely; so in that sense, it was better to write it from scratch rather than have three lines of code from the original making for a more complicated copyright situation.

The existing plugin they have only does a few things, most of which I am doing differently, and I have most of the "hard" part of that working now.

@gvanrossum

It’s open source, Apache 2.0. Don’t worry about it.

@zzzeek
Member

zzzeek commented Mar 7, 2021

It’s open source, Apache 2.0. Don’t worry about it.

my plan worked, I got @gvanrossum to comment on my issue tracker!

Kidding! (At the same time, welcome!) I will be lifting a bunch from the stubs, since I'm only changing those a little bit. For the plugin I used different plugin events; I believe the difference is that I'm doing things more in the "semantic analysis" stage rather than the "type checking" stage (I have a vague notion that type checking comes after semantic analysis?). If people want to see what I'm doing, the plugin is at https://gerrit.sqlalchemy.org/c/sqlalchemy/sqlalchemy/+/2590, where the plugin hooks are in plugin.py and the attribute manipulation is in decl_class.py.

@zzzeek zzzeek modified the milestones: 2.0, 1.4 Mar 12, 2021
@sqla-tester
Collaborator

Mike Bayer has proposed a fix for this issue in the master branch:

Implement Mypy plugin https://gerrit.sqlalchemy.org/c/sqlalchemy/sqlalchemy/+/2590

sqlalchemy 2 automation moved this from 2.0 deliverables backlog to 1.4 -> 2.0 transitional deliverables done Mar 14, 2021