
Socket/Handle leak in versions 0.2.6 and 0.2.7 #29

Closed
erikd opened this issue Mar 15, 2017 · 9 comments

Comments

@erikd
Contributor

erikd commented Mar 15, 2017

I have a reproducible test case with instructions in https://github.com/erikd-ambiata/test-warp-wai (the GitHub account for my day job).

Following the instructions there should reproduce the Socket/Handle leak within a couple of minutes. If you then edit the cabal file to force connection == 0.2.5 and reinstall/rebuild everything, there will be no Socket/Handle leak even after running for an hour.
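
Concretely, the pin described above corresponds to a build-depends entry along these lines (illustrative only; the actual cabal file lives in the test-warp-wai repo):

   , connection               == 0.2.5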

@ocheron

ocheron commented Mar 18, 2017

Reproduced easily indeed, but not anymore after applying #30.
Do you see that too?

@erikd
Contributor Author

erikd commented Mar 18, 2017

I didn't see that. I'll check when I'm in the office on Monday.

@ocheron

ocheron commented Mar 18, 2017

Yes please, the behavior in test-warp-wai is subtle and may depend on resource limits.

I didn't see much of a difference with regard to TIME_WAIT server sockets on port 2000. They still accumulate, but they stay below a few thousand and don't cause exhaustion.

However, #30 fixes the leak of CLOSED client sockets to port 3000, which is visible not with netstat but only with lsof. On my macOS machine that's what caused the exhaustion.
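
For reference, counts like these can be gathered with commands along the following lines (illustrative; the address format netstat prints differs between Linux and macOS, so adjust the port match accordingly):

   # server sockets stuck in TIME_WAIT on port 2000
   netstat -an | grep TIME_WAIT | grep -c 2000
   # descriptors still open to port 3000; leaked CLOSED sockets show up
   # here even after netstat stops listing them
   lsof -nP -i TCP:3000 | wc -l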

@erikd
Contributor Author

erikd commented Mar 20, 2017

Yep, @snoyberg's fix in PR #30 definitely fixes this problem. Using my test-warp-wai project, without the fix my machine runs out of sockets within two minutes. With the fix it was still running without a problem after 20 minutes. Thanks Michael!
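
The actual diff in #30 is not shown in this thread. As a sketch of the class of fix (deterministically closing the descriptor, even on exceptions), the usual Haskell pattern is bracket from Control.Exception. The helper below, withTcpClient, is hypothetical and only illustrates the idea:

   import Control.Exception (bracket)
   import Network.Socket

   -- Hypothetical sketch, not the change from #30: bracket closes the
   -- socket even if connect or the action throws, so descriptors can
   -- never accumulate in the CLOSED state.
   withTcpClient :: HostName -> ServiceName -> (Socket -> IO a) -> IO a
   withTcpClient host port act = do
       addr : _ <- getAddrInfo (Just defaultHints { addrSocketType = Stream })
                               (Just host) (Just port)
       bracket
           (socket (addrFamily addr) (addrSocketType addr) (addrProtocol addr))
           close
           (\sock -> connect sock (addrAddress addr) >> act sock)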

@vincenthz
Owner

Fix in #30

@erikd
Contributor Author

erikd commented Mar 26, 2017

Since this is now fixed, is it worth deprecating versions 0.2.6 and 0.2.7 because of this file descriptor leak? Specifying

   , connection               >= 0.2.5 && < 0.2.6 || >= 0.2.8 && < 0.2.9

in our cabal files is a bit painful.

@vincenthz
Owner

I don't see why you would maintain such a complicated range of allowed versions when you can just specify the latest (which is the only non-deprecated version since it came out).
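
That is, something along these lines (illustrative):

   , connection               >= 0.2.8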

@erikd
Contributor Author

erikd commented Mar 27, 2017

I don't see why you would maintain such a complicated range

It's because in $day_job we have a very deep and wide set of packages. These packages (over 300 of them) are pulled into dependent packages as git submodules and added (with Mafia, basically cabal sandbox add-source ....) to the build tree.

As it stands, we either need the complicated range I suggested or we need to update (and test) all the submodules in all the projects. That is a huge amount of churn.

@vincenthz
Owner

vincenthz commented Mar 28, 2017

It sounds somewhat odd that, while you have a complicated/advanced setup, you need the upstream developer to mark things deprecated (which cabal doesn't really take into full consideration when solving) in order to do the right thing. Also, am I right to assume that cabal is still in control of what it actually downloads, based on the range of allowed versions in the .cabal file? Does that mean that if you use package X at version 1.0.0 with a range like >= 1.0.0 && < 1.0.1, and someone uploads 1.0.0.1 which is not compatible or has build issues, your builds would be completely borked?
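
For the record, cabal compares versions component by component, so a four-component upload does fall inside such a range. With a hypothetical package-x:

   -- 1.0.0.1 satisfies this range, since 1.0.0 <= 1.0.0.1 < 1.0.1
   , package-x                >= 1.0.0 && < 1.0.1
   -- an exact pin (or an upper bound of < 1.0.0.1) excludes it
   , package-x                == 1.0.0

So a broken 1.0.0.1 would indeed be eligible for selection under the first range above.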

I think you'd have more flexibility, and a better ability to act on packages, if you taught your overall tool (Mafia?) to whitelist/blacklist upstream packages at the whole $day_job level.
