GetListingAsync takes > 30 seconds and can't be canceled #743

Closed
crose-varde opened this issue Jul 25, 2021 · 4 comments
Labels: aws-deployment, closing soon - ancient issue, no response

Comments

@crose-varde

FTP OS: Unix (.NET Core 3.1)

FTP Server: I don't know how to determine this; it's not mine

Computer OS: Whatever AWS Lambda is running

FluentFTP Version: 34.0.0

Occasionally (about 5 times out of 100,000) GetListingAsync will block for a long time. I'm not entirely sure how long, because it exceeds 30 seconds, which is what the hosting AWS Lambda timeout is currently set to, so our process is just killed unceremoniously. (I'm going to increase this for testing, but I don't have the data yet.)

All of the client timeouts are set to 15 seconds, so they're not being respected.

What's worse is that when this happens, the operation can't be canceled, either - whatever is blocking doesn't appear to be checking the passed-in CancellationToken. Based on the logs, it looks like a blocking disposal, but I'm not sure.
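
For context, "all of the client timeouts" means roughly the following (a sketch; the values are illustrative and the property names are the standard FtpClient timeout settings as I understand them in 34.x):

```csharp
using FluentFTP;

// Sketch of our timeout configuration (illustrative values and placeholder
// host/user/pass; property names assumed from the FluentFTP 34.x FtpClient API).
var client = new FtpClient(host, user, pass)
{
    ConnectTimeout = 15000,               // control-connection connect, ms
    ReadTimeout = 15000,                  // control-connection reads, ms
    DataConnectionConnectTimeout = 15000, // data-connection connect, ms
    DataConnectionReadTimeout = 15000     // data-connection reads (the MLSD payload), ms
};
```

None of these appears to bound the hang described above.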

Logs:

(the first line shows the timed-out process; the second is an application-level log indicating that cancellation has been signaled on the token)

2021-07-25T12:07:41.133Z | 2021-07-25T12:07:41.133Z 048e37ff-9788-4cf1-999d-6300c655de8b Task timed out after 30.03 
2021-07-25T12:07:36.666Z | {"MessageTemplate":"Polly is attempting to cancel a directory listing..."}
2021-07-25T12:07:36.666Z | Status:   Testing connectivity using Socket.Poll()...
2021-07-25T12:07:36.666Z | # CloseDataStream()
2021-07-25T12:07:36.666Z | Status:   Disposing FtpSocketStream...
2021-07-25T12:07:36.666Z | Status:   Disposing FtpSocketStream...
2021-07-25T12:07:11.707Z | +---------------------------------------+
2021-07-25T12:07:11.707Z | Response: 150 File status okay; about to open data connection.
2021-07-25T12:07:11.697Z | Command:  MLSD /outgoing
2021-07-25T12:07:11.687Z | Status:   Connecting to 204.4.178.87:61273
2021-07-25T12:07:11.683Z | Response: 229 Entering Passive Mode (|||61273|)
2021-07-25T12:07:11.674Z | Command:  EPSV
2021-07-25T12:07:11.674Z | # OpenPassiveDataStreamAsync(AutoPassive, "MLSD /outgoing", 0)
2021-07-25T12:07:11.673Z | Response: 200 Command TYPE okay.
2021-07-25T12:07:11.664Z | Command:  TYPE I
2021-07-25T12:07:11.664Z | # GetListingAsync("/outgoing", Auto)

@crose-varde
Author

Update: We are still seeing this with a timeout of 90 seconds, although it's much less frequent.

@robinrodricks
Owner

Can you log all the FluentFTP log events to CloudWatch, so you can determine at which command it's hanging?
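
Something like this should work, assuming a version that exposes the OnLogEvent hook on the client (anything written to stdout in Lambda ends up in CloudWatch):

```csharp
using System;
using FluentFTP;

// Sketch: forward every FluentFTP log line to stdout, which Lambda ships to
// CloudWatch. Assumes the OnLogEvent hook exposed by FtpClient in recent versions.
client.OnLogEvent = (FtpTraceLevel level, string message) =>
{
    Console.WriteLine($"[FluentFTP:{level}] {message}");
};
```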

@crose-varde
Author

crose-varde commented Aug 10, 2021

What I've posted is all the FluentFTP log events. +---------------------------------------+ is the last thing FluentFTP logs before the hang. (Note how 25 seconds pass between that and the stream disposal - that's my timeout interval.) The last sent command is MLSD /outgoing and the last received response is 150 File status okay; about to open data connection. Then...nothing.
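
For completeness, the application-level timeout that produces the Polly message is an optimistic timeout around the listing call, roughly like this (a sketch; the names and exact wiring are illustrative):

```csharp
using System;
using System.Threading;
using Polly;
using Polly.Timeout;

// Sketch of the application-level timeout around the listing call.
// Polly's optimistic timeout cancels the CancellationToken it hands to the
// delegate after 25 s and relies on the delegate to observe that token.
var timeout = Policy.TimeoutAsync(TimeSpan.FromSeconds(25), TimeoutStrategy.Optimistic);

var listing = await timeout.ExecuteAsync(
    ct => client.GetListingAsync("/outgoing", ct), // ct is signalled at the 25 s mark
    CancellationToken.None);

// If GetListingAsync (or the stream disposal it triggers) never completes after
// ct is signalled, this await never returns, and Lambda kills the process at 30 s.
```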

@FanDjango
Collaborator

@crose-varde Is this still an issue for you? There has been a lot of work on the async APIs and many code changes for CancellationTokens, so maybe this is actually fixed by now. Also, verbose logging has been much improved, so a new take on this issue with a fresh log might provide new insights?

FanDjango added the "closing soon - ancient issue" label on Dec 23, 2022.
FanDjango closed this as not planned on Jan 1, 2023.