Question: How to use ormar with alembic #53

Closed
soderluk opened this issue Nov 24, 2020 · 4 comments
Labels
documentation (Improvements or additions to documentation), question (Further information is requested)

Comments

@soderluk

Thanks for your work on the library!

One question that came to my mind, and that I can't really find an answer to in the docs, is how this should be used with alembic.
At least --autogenerate produces some really weird results for me.

Disregarding alembic's autogenerate, should one use ormar.* types when creating the tables:

def upgrade():
    alembic.op.create_table(
        "table",
        sa.Column("column", ormar.Integer, primary_key=True),
        sa.Column("name", ormar.String(max_length=50)),
    )

Some examples in the documentation would be nice.

@soderluk
Author

I dug around for quite some time, and found out why the alembic autogenerate didn't work.

I hadn't imported my models in the alembic env.py, so the metadata didn't include all the tables to be created.

This could maybe be added to the documentation: if the metadata instantiation and the models live in separate modules, the models need to be imported so that alembic works correctly.

@collerek added the documentation and question labels on Nov 25, 2020
@collerek
Owner

Hi, thanks for the suggestion.

I will try to update the docs soon.
There are areas that are covered by other libraries' docs (like alembic/fastapi/pydantic/sqlalchemy), and I skipped those when writing the docs due to limited time.

But I guess migrations are important enough that they should be expanded on in the docs.

A quick solution/example would be something like this.

When you have an application structure like:

-> app
    -> alembic (initialized folder - so run alembic init inside app folder)
    -> models (here are the models)
      ->__init__.py
      ->my_models.py
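
then my_models.py could define the shared metadata along these lines (just a sketch, not from this issue: the Course model and the fields are placeholders, your own models go here):

import databases
import sqlalchemy
import ormar

# one MetaData object shared by all models - this is what env.py will import
metadata = sqlalchemy.MetaData()
database = databases.Database("sqlite:///test.db")


class Course(ormar.Model):
    class Meta:
        metadata = metadata
        database = database

    id: int = ormar.Integer(primary_key=True)
    name: str = ormar.String(max_length=100)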

Your env.py file (in the alembic folder) can look something like:

from logging.config import fileConfig
from sqlalchemy import create_engine

from alembic import context
import sys, os

# add app folder to system path (alternative is running it from parent folder with python -m ...)
myPath = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, myPath + '/../../')

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)

# add your model's MetaData object here (the one used in ormar)
# for 'autogenerate' support
from app.models.my_models import metadata
target_metadata = metadata


# set your url here or import from settings
URL = "sqlite:///test.db"


def run_migrations_offline():
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well.  By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.

    """
    context.configure(
        url=URL,
        target_metadata=target_metadata,
        literal_binds=True,
        dialect_opts={"paramstyle": "named"},
    )

    with context.begin_transaction():
        context.run_migrations()


def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.

    """
    connectable = create_engine(URL)

    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=target_metadata
        )

        with context.begin_transaction():
            context.run_migrations()


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
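
Assuming the standard alembic workflow, with this env.py in place you would then run alembic revision --autogenerate -m "..." followed by alembic upgrade head from the app folder to generate and apply the migration.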

You can also include/exclude specific tables with the include_object parameter passed to context.configure. It should be a function returning True/False for a given object.

A sample function excluding tables whose names start with data_, unless the name is 'data_jobs':

def include_object(object, name, type_, reflected, compare_to):
    if name and name.startswith('data_') and name not in ['data_jobs']:
        return False

    return True

And you pass it into the context like this (in both the online and offline variants):

context.configure(
        url=URL,
        target_metadata=target_metadata,
        literal_binds=True,
        dialect_opts={"paramstyle": "named"},
        include_object=include_object
    )
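
Just to illustrate what the filter above does (hypothetical table names, not from any real schema):

# data_ tables are skipped, except data_jobs; everything else is kept
assert include_object(None, "data_temp", "table", False, None) is False
assert include_object(None, "data_jobs", "table", False, None) is True
assert include_object(None, "users", "table", False, None) is True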

@soderluk
Author

Thanks for the thorough answer. I think alembic migrations never get the attention they deserve in documentation. At least on my end, it took a bit more work than just "import the metadata" to get things up and running.
Even just having this answer in the docs would probably improve things a lot.

@collerek
Owner

That's exactly what I did: I included this answer in the documentation. Thanks for the suggestion again!

If you like ormar, please star the repo and help spread the word about it :)
