Commit d476342
For users that have set the required privilege "Lock Pages in Memory", large pages will be enabled automatically (see Readme.md). This expert setting may improve speed by 5%–30%, depending on the hardware, the number of threads, and the hash size; the gain is larger for large hashes, many threads, and NUMA systems. If the operating system cannot allocate large pages (easier right after a reboot), the default allocation is used automatically. The engine log provides details.

closes #2656
fixes #2619

No functional change
Showing 5 changed files with 120 additions and 2 deletions.
```diff
@@ -49,6 +49,7 @@ int main(int argc, char* argv[]) {

   UCI::loop(argc, argv);

   TT.resize(0);
   Threads.set(0);
   return 0;
 }
```
Comment on d476342:
I spot a significant increase in disk I/O in Windows Task Manager after large pages are used. Why?
Comment on d476342:
Please provide more system information: RAM, CPUs, and the UCI commands given to Stockfish. Large pages are not for everyone. A significant amount of disk activity could indicate memory being swapped to disk. I would suggest starting at 2048 MB hash and raising it in 2048 MB increments; whenever you hear the disk swapping, go with less hash. Also, the number of windows and browser pages open can significantly impact your memory usage. For serious analysis, you may want to make sure your machine has all the latest updates and do a clean restart first. The biggest difference between Windows and Linux users is that Linux users religiously keep their OS updated, while most Windows users couldn't care less. A generalization for sure, but it's true.
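The incremental tuning suggested above can be done from any UCI interface. A minimal session might look like this (the values are illustrative, not recommendations for any particular machine):

```
uci
setoption name Hash value 2048
setoption name Threads value 4
isready
go infinite
```

If disk activity appears, stop, halve the `Hash` value, and try again; the engine log will report whether large pages were actually used.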
Comment on d476342:
No disk thrashing here. I agree with MichaelB7's suggestions.
Comment on d476342:
Win10 19628, 16 GB RAM, Intel Core i7-8650U. Unlike your generalization, I religiously keep my Windows updated. I tried lowering Hash to 1024 and the disk I/O disappeared… So you mean large pages are not for impulsive analysis? My laptop was indeed doing other jobs.
Comment on d476342:
I don't think it has to do with Windows being up to date. If a program requests large pages, Windows might have a hard time assembling them from available memory, especially on a machine that has been in use for a while with applications that do lots of allocation as well (e.g. browsers). The result is paging some memory to disk, but that should be temporary if enough memory is available. I'd argue that if you do some quick and impulsive analysis on a smaller machine, the small speed improvement (10%?) is not worth the hassle. That picture probably changes if you have dedicated hardware or long analysis sessions.