I have noticed some interesting behaviour when deleting a very large number of files from S3 using Cyberduck.
When removing a bucket containing around 14,000 files, a request is sent to delete each file one at a time (as best as I can tell). If you leave this operation running and return after 20 or 30 minutes, you will notice in the Log drawer that the operation is now happening much more slowly, to the point that the log messages are easily readable as they pass through the window at one or two per second.
If you stop the operation, disconnect, reconnect, and start it again, it resumes at full speed, with entries passing through the log window faster than you can read them, at a rate of several per second.
This is repeatable and has happened the last 3 or 4 times I have run this operation.
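For comparison, S3 also offers a multi-object delete call that accepts up to 1,000 keys per request, so ~14,000 objects could in principle be removed in roughly 14 calls rather than 14,000. Below is a minimal sketch of bulk deletion using boto3; the bucket name is hypothetical, and this is not a claim about how Cyberduck implements deletion internally:

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"  # hypothetical bucket name

# list_objects_v2 returns at most 1,000 keys per page by default,
# which matches the per-request limit of delete_objects.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    contents = page.get("Contents", [])
    if not contents:
        continue
    # Delete the whole page of keys in a single request.
    s3.delete_objects(
        Bucket=bucket,
        Delete={"Objects": [{"Key": obj["Key"]} for obj in contents]},
    )
```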
The performance impact is most likely due to the large number of lines in the transcript and the automatic scrolling to the last appended line in the log drawer. Closing the log drawer (⌘-L) should resolve this.