
Acquired connections do not return to pool? #126

Closed
serg666 opened this issue Jul 15, 2016 · 4 comments


serg666 commented Jul 15, 2016

Guys! This issue refers to #113. As @asvetlov suggested, I began to use asyncio.wait_for:

import logging
import logging.config
import asyncio

import aiopg.sa
import sqlalchemy as sa

import tornado.web
import tornado.platform.asyncio
from tornado.options import parse_command_line

async def create_engine():
    return await aiopg.sa.create_engine(
        'dbname=gateway user=gateway password=qazwsx host=127.0.0.1',
        echo=True,
    )

log = logging.getLogger(__name__)

tornado.platform.asyncio.AsyncIOMainLoop().install()
loop = asyncio.get_event_loop()

engine = loop.run_until_complete(create_engine())
metadata = sa.MetaData()


t1 = sa.Table('t1', metadata,
              sa.Column('id', sa.Integer, primary_key=True),
              sa.Column('name', sa.String(255), nullable=False))


t2 = sa.Table('t2', metadata,
              sa.Column('id', sa.Integer, primary_key=True),
              sa.Column('name', sa.String(255), nullable=False))


async def fetch_t2():
    log.debug('try to acquire from fetch_t2')
    conn = await asyncio.wait_for(engine.acquire(), 10)
    log.debug('acquired from fetch_t2')
    await conn.execute(t2.select().where(t2.c.id == 4))
    await engine.release(conn)


class ReqHandler(tornado.web.RequestHandler):
    async def post(self):
        log.debug('try to acquire from post')
        conn = await asyncio.wait_for(engine.acquire(), 10)
        log.debug('acquired from post')
        async with conn.begin():
            await conn.execute(t1.select().where(t1.c.id == 1))
            await fetch_t2()
            await conn.execute(t1.insert().values(name='some name'))
        await engine.release(conn)

        self.write("Hello world!\n")


app = tornado.web.Application([
    (r'/', ReqHandler)
])

if __name__ == '__main__':
    app.listen(8080, '192.168.156.101')
    loop.run_forever()

When I applied a load of 100 concurrent requests, I saw that connections timed out, but it seems to me that they were not returned to the pool at all.

My ps command shows 10 idle postgres connections:

1016 ? Ss 0:00 postgres: gateway gateway 127.0.0.1(35420) idle
1020 ? Ss 0:00 postgres: gateway gateway 127.0.0.1(35421) idle
1032 ? Ss 0:00 postgres: gateway gateway 127.0.0.1(35422) idle
1033 ? Ss 0:00 postgres: gateway gateway 127.0.0.1(35423) idle
1034 ? Ss 0:00 postgres: gateway gateway 127.0.0.1(35424) idle
1035 ? Ss 0:00 postgres: gateway gateway 127.0.0.1(35425) idle
1036 ? Ss 0:00 postgres: gateway gateway 127.0.0.1(35426) idle
1037 ? Ss 0:00 postgres: gateway gateway 127.0.0.1(35427) idle
1038 ? Ss 0:00 postgres: gateway gateway 127.0.0.1(35428) idle
1039 ? Ss 0:00 postgres: gateway gateway 127.0.0.1(35429) idle

and any new request to http://192.168.156.101:8080/ leads to concurrent.futures._base.TimeoutError.

It seems to me that once the pool is completely exhausted, connections are never returned to it.
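This behaviour is consistent with a classic pool deadlock rather than lost connections: every handler holds one connection and waits for a second one that can never become free. A toy model (using a plain asyncio.Queue as a hypothetical stand-in for the pool, not aiopg code) reproduces the starvation:

```python
import asyncio

async def handler(pool: asyncio.Queue) -> str:
    # Each handler acquires one connection (like ReqHandler.post)...
    held = await pool.get()
    try:
        # ...then tries to acquire a second one (like fetch_t2).
        await asyncio.wait_for(pool.get(), timeout=0.1)
        return "got second connection"
    except asyncio.TimeoutError:
        # The pool is empty because every other handler also holds
        # one connection: a circular wait, so everybody times out.
        return "timed out"

async def main() -> list:
    pool = asyncio.Queue()
    for i in range(10):              # a pool of 10 connections
        pool.put_nowait(f"conn-{i}")
    # 10 concurrent handlers: each drains one connection first,
    # leaving nothing for anyone's nested acquire.
    return await asyncio.gather(*(handler(pool) for _ in range(10)))

print(asyncio.run(main()))  # every handler times out
```

The 10 connections are all "held but idle" while their handlers wait, which matches the ps output above: the backend sees idle connections, yet every nested acquire starves.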

asvetlov (Member) commented

See my answer in #113 (comment).

I believe it solves your issue.

serg666 (Author) commented Jul 18, 2016

Of course it does, but what about nested connection acquisition, as @mpaolini suggested? Is it possible?

asvetlov (Member) commented

It is an architectural error, nothing more.

serg666 (Author) commented Jul 18, 2016

Okay. In other words, it is not possible to acquire a connection in a nested manner.
