
sqlalchemy.dialects.postgresql.asyncpg.InterfaceError - cannot perform operation: another operation is in progress #863

Closed
harshitsinghai77 opened this issue Dec 10, 2021 · 13 comments

Comments

@harshitsinghai77

harshitsinghai77 commented Dec 10, 2021

  • asyncpg version: 0.25.0
  • PostgreSQL version: 13
  • Do you use a PostgreSQL SaaS? If so, which? Can you reproduce the issue with a local PostgreSQL install?: I'm using a local PostgreSQL install
  • Python version: 3.8.0
  • Platform: Linux
  • Do you use pgbouncer?: no
  • Did you install asyncpg with pip?: No, Poetry
  • If you built asyncpg locally, which version of Cython did you use?:
  • Can the issue be reproduced under both asyncio and uvloop?:

I'm using async SQLAlchemy with postgresql+asyncpg in a FastAPI application. Here's my code snippet:

from sqlalchemy import MetaData, and_, func
from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine
from sqlalchemy.orm import declarative_base

DATABASE_URL = (
    f"postgresql+asyncpg://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:{DB_PORT}/{DB_NAME}"
)

metadata = MetaData()
Base = declarative_base()
async_engine = create_async_engine(DATABASE_URL, pool_size=20)
async_session = AsyncSession(async_engine)
async with async_session as session:
    query = (
        nemo_user_analytics.select()
        .with_only_columns(
            [
                func.to_char(nemo_user_analytics.c.full_date, "Mon DD").label(
                    "weekday"
                ),
                func.sum(nemo_user_analytics.c.duration).label("total_count"),
            ]
        )
        .where(
            and_(
                nemo_user_analytics.c.full_date > seven_day_interval_before,
                nemo_user_analytics.c.google_id == google_id,
            )
        )
        .group_by("weekday")
        .order_by("weekday")
    )
    result = await session.execute(query)
    result = result.fetchall()
    return result

This works fine when only one request is made at a time, but calling the API concurrently results in this error:

sqlalchemy.exc.InterfaceError: (sqlalchemy.dialects.postgresql.asyncpg.InterfaceError) <class 'asyncpg.exceptions._base.InterfaceError'>: cannot perform operation: another operation is in progress

I'm using a different session for each request with

 async with async_session as session:

but it still doesn't work.

@elprans
Member

elprans commented Jan 3, 2022

@CaselIT, @zzzeek, any ideas what might be going on here?

@zzzeek

zzzeek commented Jan 3, 2022

This is usually caused by using the same AsyncSession object in more than one async task at a time, which is no different from doing the same with the asyncpg connection directly.
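
For reference, a minimal sketch of the session-per-task pattern described above, assuming SQLAlchemy 1.4's async API; DATABASE_URL and the query object are taken from the snippet earlier in the thread, and get_weekly_analytics is a hypothetical wrapper.

from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine
from sqlalchemy.orm import sessionmaker

# Sharing one engine (and its connection pool) per process is fine.
async_engine = create_async_engine(DATABASE_URL, pool_size=20)

# A session factory, not a session: each call returns a fresh AsyncSession.
async_session_factory = sessionmaker(
    async_engine, class_=AsyncSession, expire_on_commit=False
)

async def get_weekly_analytics(query):
    # Each request/task opens its own session, so concurrent requests
    # never issue statements on the same asyncpg connection at once.
    async with async_session_factory() as session:
        result = await session.execute(query)
        return result.fetchall()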

@harshitsinghai77
Author

Actually, I solved this already. I was making parallel requests and using the same async_session. I can provide the code snippet too if you want.

This issue is solved.

elprans closed this as completed on Jan 4, 2022
@ghost

ghost commented Aug 11, 2022

> Actually, I solved this already. I was making parallel requests and using the same async_session. I can provide the code snippet too if you want.
>
> This issue is solved.

I am facing the same error. Can you share your fixed code snippet?

@MarkParker5

@CaselIT

CaselIT commented Aug 25, 2022

The pool class should not influence this issue, so it's rather strange that changing it fixes the issue

@zzzeek

zzzeek commented Aug 25, 2022

> The pool class should not influence this issue, so it's rather strange that changing it fixes the issue

The person who had parallel requests on a single session likely had a different issue than the person who switched to NullPool, whose problem is something else. Nobody ever wants to share reproducing examples, so the best we can do is watch as everyone mis-configures their applications to work around issues that are likely still present in their code.
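
For context, "switching to NullPool" means disabling SQLAlchemy's connection pooling entirely; a minimal sketch of that configuration, assuming the DATABASE_URL from earlier in the thread:

from sqlalchemy.ext.asyncio import create_async_engine
from sqlalchemy.pool import NullPool

# NullPool opens a fresh connection on every checkout and closes it on
# release, so no pooled connection is ever reused from a different event
# loop; as noted above, this can hide the underlying misconfiguration
# rather than fix it.
engine = create_async_engine(DATABASE_URL, poolclass=NullPool)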

@sk-

sk- commented Aug 27, 2022

We were hit by this issue as well, and the cause was pytest creating a different event loop for each test. By forcing a single event loop for all tests (see pytest-dev/pytest-asyncio#38 (comment)) we were able to fix the problem.

Note, though, that the problem does not happen when one uses a NullPool.

@zzzeek do you have any ideas how we could solve this without forcing a single event loop?

Below is code that reproduces the issue:

from sqlalchemy.sql import text
from sqlalchemy.ext.asyncio import create_async_engine
from sqlalchemy.pool import NullPool

engine = create_async_engine(
    "postgresql+asyncpg://user:password@localhost:5432/db_name",
    future=True,  # poolclass=NullPool
)


async def test_echo():
    async with engine.begin() as conn:
        r = await conn.execute(text("SELECT 'hello' as echo"))
        assert r.scalar() == "hello"

async def test_echo2():
    async with engine.begin() as conn:
        r = await conn.execute(text("SELECT 'hello' as echo"))
        assert r.scalar() == "hello"

async def test_echo3():
    async with engine.begin() as conn:
        r = await conn.execute(text("SELECT 'hello' as echo"))
        assert r.scalar() == "hello"
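
The workaround linked above (forcing a single event loop across tests) is commonly done by overriding pytest-asyncio's event_loop fixture with a wider scope in conftest.py; a minimal sketch, assuming a pytest-asyncio version that still allows overriding that fixture:

# conftest.py
import asyncio
import pytest

@pytest.fixture(scope="session")
def event_loop():
    # One event loop for the entire test session, so pooled asyncpg
    # connections are always used from the loop they were created on.
    loop = asyncio.new_event_loop()
    yield loop
    loop.close()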

@zzzeek

zzzeek commented Aug 27, 2022

Dunno; we use pytest and we don't have that problem, but we rolled the async test machinery ourselves. Maybe report this as a bug to whatever pytest extension this is?

@junoriosity

@harshitsinghai77 It would be awesome if you could provide your solution. 🙂

@EwertonDCSilv

This is still a problem. Using poolclass=NullPool makes some of the problems go away.

@harshitsinghai77
Author

harshitsinghai77 commented Sep 18, 2023

I've switched companies twice since I wrote my last comment. I don't remember anything now. Lol.

@brunolnetto

Update: creating the async engine as below makes the tests pass about half of the time, rather than only the first one.

from sqlalchemy.ext.asyncio import create_async_engine

engine = create_async_engine(
    uri,
    pool_size=20,        # adjust pool size based on your workload
    max_overflow=10,     # adjust maximum overflow connections
    pool_recycle=3600,   # periodically recycle connections (optional)
    pool_pre_ping=True,  # check the connection status before using it
)
