When uploading or downloading a large job (e.g. a directory containing 5,000 files, 75 GB in total) to or from an S3 Cryptomator vault, it can take a very long time (an hour or more) until the job is actually marked as finished. The CPU load during this phase is high.
The behaviour can be observed in both Cyberduck and Duck.sh, on Windows and on Linux (I cannot test on a Mac).
This makes queuing jobs impossible, because the unfinished job still counts as active.
The message displayed while nothing is being transferred but the job is still unfinished looks like this (note the missing "?? minutes remaining" phrase):
This is caused by verifying the checksum of the downloaded file. Related to #10215.
Unfortunately this cannot be the reason, because (as the ticket states) the behaviour also occurs on uploads, where the checksum is calculated before the transfer.
So I did some more testing (on Windows, using the Cyberduck GUI and the file:// protocol) and can provide some more, hopefully helpful, insights:
The waiting time only occurs when working with Cryptomator vaults, not when working directly with an unencrypted local folder.
The waiting time scales quadratically with the number of files, not with the amount of data:
10,000 MB in 3 files produced no noticeable waiting time.
0.8 MB in 100 files produced about 1 second of waiting time.
1.6 MB in 200 files produced about 5 seconds of waiting time.
3.2 MB in 400 files produced about 20 seconds of waiting time.
6.4 MB in 800 files produced about 82 seconds of waiting time.
I did not wait for the job of 80 MB in 10,000 files to finish, but given the observed quadratic growth the waiting time extrapolates to about 3.5 hours.
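The quadratic fit and the 3.5-hour extrapolation can be reproduced with a short script; the measurements are the ones listed above, and the constant c in t = c * n² is a least-squares estimate from them:

```python
# Fit t = c * n^2 to the measured waiting times and extrapolate.
measurements = [
    (100, 1),   # (number of files, seconds of waiting time)
    (200, 5),
    (400, 20),
    (800, 82),
]

# Least-squares estimate of c for the model t = c * n^2.
c = sum(t * n**2 for n, t in measurements) / sum(n**4 for n, _ in measurements)

for n, t in measurements:
    print(f"{n:5d} files: measured {t:3d} s, predicted {c * n**2:6.1f} s")

# Extrapolate to the 10,000-file job.
predicted = c * 10_000**2
print(f"10000 files: predicted {predicted:.0f} s (~{predicted / 3600:.1f} h)")
```

The prediction comes out at roughly 3.5 hours, consistent with the extrapolation above.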
My assumption is that for each file transferred to or from a Cryptomator vault (whether uploaded or downloaded), some data structure is left over that has to be cleaned up after the transfer finishes, and that this cleanup is done in an inefficient manner.
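To illustrate how such cleanup could become quadratic, here is a purely hypothetical sketch (none of the names below come from the Cyberduck code base): if per-file bookkeeping entries are removed from a list one at a time, every removal has to shift (or scan) the remaining entries, so emptying n entries costs O(n²), while dropping the whole structure at once is O(n):

```python
import time

def cleanup_one_by_one(entries):
    """Hypothetical per-file cleanup: each remove() shifts the rest of
    the list, so emptying n entries costs O(n^2) in total."""
    pending = list(entries)
    for e in entries:
        pending.remove(e)
    return pending

def cleanup_all_at_once(entries):
    """Hypothetical bulk cleanup: clearing the structure in one step is O(n)."""
    pending = list(entries)
    pending.clear()
    return pending

# Doubling the entry count roughly quadruples the one-by-one cleanup
# time, matching the quadratic growth observed in the measurements.
for n in (5_000, 10_000, 20_000):
    start = time.perf_counter()
    cleanup_one_by_one(range(n))
    print(f"{n:6d} entries: {time.perf_counter() - start:.4f} s")
```

This is only meant to show the pattern that would produce the observed scaling, not the actual implementation.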