Have shutil.copytree(), copy() and copystat() use cached scandir() stat()s #77876
Comments
The patch in attachment makes shutil.copytree() use os.scandir() and (differently from bpo-33414) passes DirEntry instances around so that cached stat()s are also used from within the copy2() and copystat() functions. The number of times the filesystem gets accessed via os.stat() is therefore reduced considerably. A similar improvement can be done for rmtree() (but that's for another ticket). Patch and benchmark script are in attachment. A minimal sketch of the idea is shown after the benchmark output below.

Linux (+13.5% speedup)

--- without patch:

--- with patch:
$ ./python bench.py
Priming the system's cache...
7956 files and dirs, repeat 1/3... min = 0.481s
7956 files and dirs, repeat 2/3... min = 0.479s
7956 files and dirs, repeat 3/3... min = 0.474s
best result = 0.474s

Windows (+17% speedup)

--- without patch:

--- with patch:
$ ./python bench.py
Priming the system's cache...
7956 files and dirs, repeat 1/3... min = 7.827s
7956 files and dirs, repeat 2/3... min = 7.369s
7956 files and dirs, repeat 3/3... min = 7.153s
best result = 7.153s

Windows SMB share (+30%)

--- without patch:

--- with patch:

Number of stat() syscalls (-38%)

--- without patch:
$ strace ./python bench.py 2>&1 | grep "stat(" | wc -l
324808

--- with patch:
$ strace ./python bench.py 2>&1 | grep "stat(" | wc -l
198768
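For illustration only, here is a minimal sketch (not the actual patch and not the real shutil.copytree()) of the idea described above: the tree is walked with os.scandir(), and each DirEntry's cached type and stat information is reused instead of issuing fresh os.stat() calls; symlink handling, the ignore= argument and error aggregation are omitted.

    import os
    import shutil

    def copytree_sketch(src, dst):
        # Minimal sketch only: no symlink handling, no ignore= support,
        # no error aggregation.
        os.makedirs(dst, exist_ok=True)
        with os.scandir(src) as it:
            for entry in it:
                dstpath = os.path.join(dst, entry.name)
                if entry.is_dir(follow_symlinks=False):
                    # Usually answered from the cached directory entry,
                    # without an extra stat() syscall.
                    copytree_sketch(entry.path, dstpath)
                else:
                    shutil.copyfile(entry.path, dstpath)
                    # entry.stat() caches its result on the DirEntry, so the
                    # metadata used here is not re-fetched with further
                    # os.stat() calls on the source path.
                    st = entry.stat(follow_symlinks=False)
                    os.chmod(dstpath, st.st_mode)
                    os.utime(dstpath, ns=(st.st_atime_ns, st.st_mtime_ns))
        return dst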
PR at: #7874.

Linux (+8.8%)

without patch:

with patch:
$ ./python bench-copytree.py
Priming the system's cache...
7956 files and dirs, repeat 1/3... min = 0.557s
7956 files and dirs, repeat 2/3... min = 0.548s
7956 files and dirs, repeat 3/3... min = 0.548s
best result = 0.548s

Windows (+20.7%)

without patch:

with patch:
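The bench.py / bench-copytree.py script attached to the issue is not reproduced here; a hedged outline of such a harness, matching the output format above, might look like this: prime the cache with one copy, then time shutil.copytree() a few times and keep the best (minimum) wall-clock result.

    import shutil
    import time

    def bench_copytree(src, dst_prefix, repeat=3):
        # Prime the system's cache with a throwaway copy first.
        shutil.copytree(src, dst_prefix + "-prime")
        best = float("inf")
        for i in range(1, repeat + 1):
            dst = "%s-%d" % (dst_prefix, i)   # copytree() needs a fresh target
            start = time.perf_counter()
            shutil.copytree(src, dst)
            elapsed = time.perf_counter() - start
            print("repeat %d/%d... %.3fs" % (i, repeat, elapsed))
            best = min(best, elapsed)
        print("best result = %.3fs" % best)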
Sorry, I meant bpo-33671.
Unless somebody has complaints, I think I'm going to merge this soon.
I'm not convinced that this change should be merged. The benefit is small, and 1) it shows up only for an artificial set of tiny files, and 2) the benchmarking ignores the real I/O: it measures work done against a warm cache. When copying real files (/usr/include or Lib/) with dropped caches, the difference is insignificant. On the other hand, this optimization makes the code more complex. It can make the case where the ignore argument is specified slower.
For dropping disk caches on Linux, run with open('/proc/sys/vm/drop_caches', 'ab') as f: f.write(b'3\n') before every test.
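Following that suggestion, a small helper could look like the sketch below; writing "3" to /proc/sys/vm/drop_caches drops the page cache plus dentries and inodes, and requires root.

    import os

    def drop_caches():
        os.sync()  # flush dirty pages to disk first
        with open('/proc/sys/vm/drop_caches', 'ab') as f:
            f.write(b'3\n')  # 3 = pagecache + dentries and inodes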
I agree the provided benchmark on Linux should be more refined, and honestly I'm not sure whether running "echo 3 | sudo tee /proc/sys/vm/drop_caches" beforehand is enough. The main point here is the reduction of stat() syscalls (-38%), and that can make a considerable difference, especially with network filesystems. That's basically the reason why scandir() was introduced in the first place and used in os.walk(), glob.glob() and shutil.rmtree(), so I'm not sure why we should apply a different rationale to shutil.copytree().
os.walk() and glob.glob() used *only* the stat(), opendir() and readdir() syscalls (and the stat() syscalls dominated), so the effect of reducing the number of stat() syscalls there is significant. shutil.rmtree() also uses the unlink() syscall. Since it is usually cheap (but see bpo-32453), the benefit is still good, but not as large. Actually, I had concerns about using scandir() in shutil.rmtree(). shutil.copytree() needs to open, read, and write files. This is not so cheap, and the benefit of reducing the number of stat() syscalls is hardly noticeable in real cases. shutil.copytree() was intentionally not converted to using scandir().
From when I worked on the os.scandir() implementation, I recall that an interesting test was NFS. Depending on the configuration, stat() on a network filesystem can range from slow to very slow.
Yes, file copy (open() + read() + write()) is of course more expensive than just "reading" a tree (os.walk(), glob()) or deleting it (rmtree()), and the "pure file copy" time adds to the benchmark. And indeed it's not a coincidence that bpo-33671 (which replaced read() + write() with sendfile()) shaved about 5 percentage points off the gain in the benchmark I initially posted for Linux. Still, in an 8k-small-files tree scenario we're seeing a ~9% gain on Linux, 20% on Windows and 30% on an SMB share (localhost vs. VirtualBox). I do not consider this a "hardly noticeable gain" as you imply: it is noticeable and measurable, even with the cache being involved (as it is). Note that the number of stat() syscalls per file is reduced from 6 to 1 (or more if follow_symlinks=False), and that is the real gist here. That *does* make a difference on a regular Windows fs and makes a huge difference with network filesystems in general, since a simple stat() call implies access to the network, not the disk.
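As a small aside (not part of the patch itself), the caching behaviour that makes this possible can be seen directly: os.DirEntry.stat() caches its result on the entry, so repeated metadata lookups on the same entry do not translate into repeated stat() syscalls, unlike repeated os.stat(path) calls on the same path.

    import os

    with os.scandir(".") as it:
        for entry in it:
            first = entry.stat()   # may issue one stat() syscall; result is cached
            second = entry.stat()  # served from the DirEntry's cache, no new syscall
            print(entry.name, first.st_size, second.st_size)
            break                  # one entry is enough for the demonstration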
+1. I also quickly glanced over the patch and I think it looks like a clear win.
@serhiy: I would like to proceed with this. Do you have further comments? Do you prefer to bring this up on python-dev for further discussion?