
FATAL: sorry, too many clients already #260

Open
visch opened this issue Dec 19, 2023 · 8 comments

Comments

@visch
Member

visch commented Dec 19, 2023

Hi @visch, @BuzzCutNorman, and @edgarrmondragon,

it looks like this one is tripping again. I am observing it on my workstation and on CI, for example at GH-250. Do you think that would qualify for re-opening this ticket?

@visch said:

Verified that this opens up 100 connections in my use case. This seems like multiple issues.

  1. Pooling doesn't seem to be working properly / set up properly

That is also my personal feeling about the situation. However, I haven't been able to take a closer look until now, and thought it would be a good idea to talk to you beforehand.

With kind regards,
Andreas.

Originally posted by @amotl in #124 (comment)

@visch
Member Author

visch commented Dec 19, 2023

@amotl I created this one instead of reviving the old ticket. It seems like you can replicate this with the code at #250, so we should definitely take a look. If it only happens during testing, that's one level of issue; it sounds like it doesn't happen when running the target in production, which is good and means we're probably limited to a test-setup-related issue.

@amotl
Contributor

amotl commented Dec 19, 2023

Hi Derek,

thank you for refactoring the ticket, and for your reply.

Yeah, it looks like you already invested in proper connection pool behaviour the other day, so there is a good chance this is only related to testing. It looks like @BuzzCutNorman is an expert on this matter, so maybe he could take a look? 🙏

With kind regards,
Andreas.

@amotl
Contributor

amotl commented Dec 19, 2023

Adding 8ec2a15 to GH-250 improves the situation, leaving only one error of this kind behind.

@amotl
Contributor

amotl commented Dec 19, 2023

Adding 87b2437 to GH-254 seems to fix the problem without further ado.

So, to summarize, I think the flaw was introduced by GH-215, which started calling create_engine(target) per test function, and it got amplified by adding more test cases and another test utility function doing the same thing as part of GH-250.
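To illustrate the failure mode, here is a minimal, hypothetical sketch (helper name and connection URL are made up; this is not the repository's test code): every call to create_engine() builds a fresh connection pool, and without a matching dispose() those pools pile up across test functions until PostgreSQL's max_connections limit is exhausted.

```python
# Hypothetical sketch of the failure mode, not the repository's actual test code.
import sqlalchemy


def make_engine(sqlalchemy_url: str) -> sqlalchemy.engine.Engine:
    # Every call builds a brand-new engine, i.e. a brand-new connection pool.
    return sqlalchemy.create_engine(sqlalchemy_url)


def test_something():
    engine = make_engine("postgresql://user:pass@localhost:5432/db")  # placeholder URL
    with engine.connect() as conn:
        conn.execute(sqlalchemy.text("SELECT 1"))
    # Missing: engine.dispose() -- the pool and its checked-in connections
    # outlive the test, so repeated tests accumulate server-side sessions
    # until PostgreSQL answers "FATAL: sorry, too many clients already".
```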

@BuzzCutNorman
Contributor

BuzzCutNorman commented Dec 19, 2023

@amotl thanks for the kind words, but I am definitely not an expert; I just contributed some code. From a quick glance, you might be running into the connection limit because you are creating a new engine/connection pool with each test run.

The code I contributed does the following: when a target class instance adds its first sink, it generates an instance of the default connector class and stores it in a private variable, _target_connector. This connector is available via the property target_connector. The target then shares the connector with each sink instance it generates, so they all share one engine/connection pool. That behaviour is hard to replicate if you are attempting to test multiple connection types.
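A rough sketch of that sharing pattern, with made-up class names (the actual singer-sdk implementation differs in detail): the target creates one connector lazily and hands the same instance to every sink it builds.

```python
# Rough, hypothetical sketch of the shared-connector pattern described above.
# Class and attribute names are illustrative, not the actual singer-sdk API.
import sqlalchemy


class Connector:
    """Wraps a single SQLAlchemy engine, and therefore a single connection pool."""

    def __init__(self, sqlalchemy_url: str):
        self.engine = sqlalchemy.create_engine(sqlalchemy_url)


class Sink:
    def __init__(self, stream_name: str, connector: Connector):
        self.stream_name = stream_name
        self.connector = connector  # received from the target, not created per sink


class Target:
    def __init__(self, sqlalchemy_url: str):
        self._sqlalchemy_url = sqlalchemy_url
        self._target_connector = None  # created lazily when the first sink is added

    @property
    def target_connector(self) -> Connector:
        if self._target_connector is None:
            self._target_connector = Connector(self._sqlalchemy_url)
        return self._target_connector

    def add_sink(self, stream_name: str) -> Sink:
        # Every sink receives the same connector, so all sinks share one
        # engine/connection pool per target instance.
        return Sink(stream_name, connector=self.target_connector)
```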

If I am following the test code correctly, each test creates an engine/connection pool and never closes or disposes of the engine. I think this is what causes you to reach the connection limit. My suggestion would be to dispose of the engine at the end of each test that creates one, and see if that helps.

https://docs.sqlalchemy.org/en/20/core/connections.html#engine-disposal
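In a test suite, that suggestion could look roughly like the following pytest-style sketch (fixture name and connection URL are placeholders, not the repository's test code): whatever creates the engine also disposes of it once the test is done.

```python
# Minimal pytest-style sketch of the disposal suggestion above; the fixture
# name and connection URL are placeholders, not the repository's test code.
import pytest
import sqlalchemy


@pytest.fixture
def engine():
    eng = sqlalchemy.create_engine("postgresql://user:pass@localhost:5432/db")
    yield eng
    # Dispose of the pool once the test finishes so its connections are
    # actually closed instead of lingering for the rest of the test session.
    eng.dispose()


def test_select_one(engine):
    with engine.connect() as conn:
        assert conn.execute(sqlalchemy.text("SELECT 1")).scalar() == 1
```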

@amotl
Contributor

amotl commented Dec 19, 2023

Hi again. I think I've identified the flaw and fixed it per 0eb1e18, making it part of GH-250. As we all expected, it only concerned the test suite here, so we can relax again. Thanks for your reply, Dan. Appreciate it.

@amotl
Contributor

amotl commented Dec 19, 2023

Hm, it looks like it is still tripping on CI, in the same single spot as before. I can promise it works on my machine™, though. Apparently, it needs more digging...

-- https://github.com/MeltanoLabs/target-postgres/actions/runs/7266523429/job/19798429798?pr=250

@amotl
Contributor

amotl commented Dec 19, 2023

Apparently, all of 8b3ea4f, a9d1796, and 723e1fa are needed to remedy the flaw. I am not sure if they are correct, but they resolve the problem at hand: GH-250 passes CI now.

@BuzzCutNorman, @sebastianswms: Maybe you can have a look to check that this isn't all wrong and that it roughly does what you outlined in your comment?
