
issue with large page commit (second part of the issue). #2681

Closed
tomtor opened this issue May 15, 2020 · 66 comments
@tomtor

tomtor commented May 15, 2020

See 86ee4eb#commitcomment-39204441

@vondele
Member

vondele commented May 15, 2020

Yep, it will be fixed shortly. The compiler doesn't fully support C++11, but we can work around it.

vondele added a commit to vondele/Stockfish that referenced this issue May 15, 2020
gcc < 5 doesn't fully support the C++11 trait `std::is_trivially_copyable<Entry>::value`.
Remove it, as it is not essential.

fixes official-stockfish#2681

No functional change.
@samer707

The Stockfish development build from 14/5/2020 is not working in ChessPartner (UCI).

@vondele
Member

vondele commented May 17, 2020

@samer707 Can you clarify your comment? Are you using the latest master, what error do you see, and which compiler do you use?

@samer707

OK, I have software on my PC called ChessPartner version 6.0.4.0. When I try to import the Stockfish development build from 14/5/2020 as a UCI engine, the program freezes and closes as soon as I select it, but the build from 13/5/2020 works fine.

@vondele
Member

vondele commented May 17, 2020

Are you referring to the abrok builds? There are multiple builds on the 13th and 14th; can you say precisely which one is the last to work and which one is the first to fail?
stockfish_20051319_x64_modern.exe
stockfish_20051320_x64_modern.exe
[...]
It matters because it will help us identify which particular commit might cause it.
In particular, a new feature was introduced in stockfish_20051320_x64_modern which requires a sufficiently recent Windows. Assuming your PC runs Windows, which version do you have?

@vondele vondele reopened this May 17, 2020
@samer707

Beginning from this one, it is not working:
Author: Sami Kiminki
Date: Wed May 13 20:57:47 2020 +0200
Timestamp: 1589396267

Add support for Windows large pages

For users that set the needed privilege "Lock Pages in Memory",
large pages will be automatically enabled (see Readme.md).

This expert setting might improve speed, 5% - 30%, depending
on the hardware, the number of threads and hash size. More for
large hashes, large number of threads and NUMA. If the operating
system can not allocate large pages (easier after a reboot), default
allocation is used automatically. The engine log provides details.

closes #2656

fixes #2619

No functional change

Everything before it works fine.
My systems are Windows 10 and Windows 8.1, and both worked fine with the earlier builds.

@samer707

This one is not working.

@vondele vondele changed the title g++ 4.9 compile error issue with large page commit (second part of the issue). May 17, 2020
@vondele
Member

vondele commented May 17, 2020

@samer707 OK, that's good info. (So it's failing on both Windows 10 and Windows 8.1, I understand.)

So, can you execute:
stockfish_20051320_x64_modern.exe bench
from the command line and see if any error messages are printed?

@skiminki can you help figure out what is wrong?

@samer707

From this one on it does not work (the same "Add support for Windows large pages" commit quoted above).
@d3vv

d3vv commented May 17, 2020

@vondele It seems I have an issue on Win7. When I start a first game against SF with a big hash (>50% of RAM) via the Cutechess GUI, that's OK. But if I abort the game and quickly start a new one, I get a timeout from the GUI or a long freeze. In the latter case I get a message about large pages (can't be allocated).

@samer707

It was not failing on Windows 8.1 or 10 before the last development version.

@vondele
Member

vondele commented May 17, 2020

@samer707 Does it fail outside of the GUI? Can you try in a command-line window?

@samer707

Why does this error appear starting from Sami Kiminki's development build?

@samer707

samer707 commented May 17, 2020

Thank you all for your responses. Please fix the new development builds of Stockfish so they work with ChessPartner version 6.0.4.0. The error appears after the commit by Sami Kiminki, Wed May 13 20:57:47 2020 +0200 (timestamp 1589396267). I usually work in ChessPartner; it is the best one.

@samer707

I do not know how to fix that myself.

@samer707

samer707 commented May 17, 2020

OK, stockfish_20051319_x64_modern.exe works fine with ChessPartner v6.0.4.0, but stockfish_20051320_x64_modern.exe does not. After Sami Kiminki's change, none of the development builds work with ChessPartner v6.0.4.0 (note that this happens on all Windows versions). Please try to make a new Stockfish development build that works with ChessPartner v6.0.4.0.

@skiminki
Contributor

@vondele Seems I have an issue on win7.. When I start a first game against sf with a big hash (>50% of ram) via Cutechess GUI - that's ok.. Then if abort a game and start new one quickly - I have time out from gui or a long freeze..

@d3vv This looks like Windows is still dealing with the hash memory deallocation of the previous SF process while a new SF process is already being launched. To confirm, could you check whether swap is being used? You should see this in Task Manager.

If swapping is the case, I'm not sure what can be done here other than falling back to malloc() for small pages, if the previous versions worked for you. However, for some reason malloc() provides slow memory on some Opteron boxes, so the fix is not so clear.

@skiminki can you help figure out what is wrong?
why when Sami Kiminki make his developer stockfish this error seen ?

My guess is that the root cause is about printing the info string

info string Hash table allocation: Windows large pages [not] used.

before uci is sent. Technically, I think we're violating the UCI protocol by printing an unsolicited info string at launch. This can be misinterpreted as the first reply to the uci command, which must be id.

Here are some ideas how to fix this in order of my preference:

  1. deferred hash allocation (allocate on isready / go)
  2. remove the prints and use an option for large pages, instead
  3. add a global state variable to track whether 'uci' has been sent, and suppress printing the info string before then
  4. fix ChessPartner

But still, this root cause is just an educated guess at the moment. I don't have ChessPartner to confirm.

@samer707

samer707 commented May 18, 2020

I am not a programmer. I want a Stockfish development build that opens and works in ChessPartner v6.0.4.0, because ChessPartner makes importing and playing chess engines easier than other GUIs. Before the 20051320_x64_modern.exe version, all development builds opened in ChessPartner; every version from 20051320_x64_modern.exe onward makes ChessPartner freeze and close.

@samer707

Please, you are programmers and you can make a new Stockfish development build that fixes this case.

@skiminki
Contributor

@samer707 For the time being, I suggest using an older dev version. The fix ideas list was meant for other developers. We need to discuss our options before rushing into coding.

@vondele
Member

vondele commented May 18, 2020

@samer707 we can try to fix if we can understand the problem. Until we have enough info or can reproduce ourselves, there is not much we can do.

So, can you run 20051320_x64_modern.exe in a different GUI or on the command line (type bench if it starts), just to see if it starts on your system? If so, we can relate it to the particular GUI used.

@skiminki I agree that sending the info string early might be an issue.

@samer707

OK, thank you. I am waiting.

@samer707

samer707 commented May 18, 2020

Can I send pictures that explain the case?

@DragonMist

@vondele Seems I have an issue on win7.. When I start a first game against sf with a big hash (>50% of ram) via Cutechess GUI - that's ok.. Then if abort a game and start new one quickly - I have time out from gui or a long freeze.. In last case I have a message about Large Pages (can't allocated)

I did mention that after reload(s) there might be no more LP available.
Btw, and not helping the discussion too much: on my old 4-core without HT I'm getting a 10% speed-up, which is a blast. Great work @skiminki and guys!

@samer707

samer707 commented May 19, 2020

Thank you; we hope to see fixes in the upcoming development builds.

@skiminki
Contributor

@skiminki can you make a pull request with your version?

#2689

@d3vv

d3vv commented May 19, 2020

@skiminki Would it be possible to give me an SF binary of the modern (popcnt) Windows version for testing?

And please be advised that bench 18000 1 1 works fine for me in all cases with 48GB. Problems start near 24000.

@skiminki
Contributor

@d3vv stockfish-test.zip

@d3vv

d3vv commented May 19, 2020

@skiminki Thank you in advance. Output here:

c:\test>stockfish.exe bench 25000 1 1|more
Stockfish 190520 64 POPCNT by T. Romstad, M. Costalba, J. Kiiski, G. Linscott
info string Hash table allocation: Windows large pages used.
info string Hash table allocation: Windows large pages used.
Failed to allocate large pages: err=1450
info string Hash table allocation: Windows large pages not used.
c:\test>stockfish.exe bench 65536 1 1
Stockfish 190520 64 POPCNT by T. Romstad, M. Costalba, J. Kiiski, G. Linscott
info string Hash table allocation: Windows large pages used.
info string Hash table allocation: Windows large pages used.
Failed to allocate large pages: err=1455
info string Hash table allocation: Windows large pages not used.
Failed to allocate 65536MB for transposition table.
c:\test>stockfish.exe bench 9000 1 1|more
Stockfish 190520 64 POPCNT by T. Romstad, M. Costalba, J. Kiiski, G. Linscott
info string Hash table allocation: Windows large pages used.
info string Hash table allocation: Windows large pages used.
info string Hash table allocation: Windows large pages used.

Position: 1/47
info depth 1 seldepth 1 multipv 1 score cp 113 nodes 20 nps 20000 tbhits 0 time

@skiminki
Contributor

skiminki commented May 19, 2020

Seems that we're in luck! Two different error codes here:

Failed to allocate large pages: err=1450 = ERROR_NO_SYSTEM_RESOURCES
Insufficient system resources exist to complete the requested service.

Failed to allocate large pages: err=1455 = ERROR_COMMITMENT_LIMIT
The paging file is too small for this operation to complete.

I'll double-check the error codes on Win10 later. But we may be able to use err=1450 to detect a transient error condition and have SF try again.

vondele pushed a commit that referenced this issue May 19, 2020
Do not send the following info string on the first call to
aligned_ttmem_alloc() on Windows:

  info string Hash table allocation: Windows large pages [not] used.

The first call occurs before the 'uci' command has been received. This
confuses some GUIs, which expect the first engine-sent command to be
'id' as the response to the 'uci' command. (see #2681)

closes #2689

No functional change.
@d3vv

d3vv commented May 19, 2020

@skiminki As far as I know, a more or less solid implementation of Windows LP exists in PostgreSQL. I think here:

https://github.com/postgres/postgres/blob/master/src/backend/port/win32_shmem.c

@samer707

Thank you very much for your concern. The latest 19/5/2020 build is working. Thank you; you are sincere in your job.

@samer707

And all the staff are good, because they care about fixing problems.

@snicolet
Member

I suggest that we output the error code (if there is one) at the end of the info line, like this:

info string Hash table allocation: Windows large pages not used (error=XXXXXX).

This would help debug problems with large pages in the future, while remaining compliant with the UCI protocol.

@skiminki
Contributor

@d3vv From the PostgreSQL code:

    /*
     * When recycling a shared memory segment, it may take a short while
     * before it gets dropped from the global namespace. So re-try after
     * sleeping for a second, and continue retrying 10 times. (both the 1
     * second time and the 10 retries are completely arbitrary)
     */

Coincidentally, this is just what I was about to try: if we encounter error 1450, we keep trying for 5 to 10 seconds.

@skiminki
Contributor

Updated #2687 with a test patch for @d3vv 's issue.

Locally compiled unoptimized binary: stockfish-2681-pending-free-v1.zip

@d3vv Could you give this a try? Unfortunately, I can't quite reproduce this behavior on my Windows 10, even when compatibility modes are used.

Also, it turns out that Windows may return error 1450 (ERROR_NO_SYSTEM_RESOURCES) even when sizes larger than the physical memory size are requested, so this workaround may actually be a regression for people who have enabled large pages on Windows 10 but expect the allocation to fall back to small pages... I'm not sure how important a corner case this is, though.

Finally, it seems there's no trivial way to detect the Windows version, so we can't simply enable this workaround only on pre-Win10 boxes.

@joergoster
Contributor

joergoster commented May 20, 2020

Is supporting large pages really worth all the trouble?
There was a good reason not to implement this stuff in the past, I guess.

Edit and PS: This whole hash business and the related UCI and threading stuff has become way too convoluted and complicated recently, imho.

@d3vv

d3vv commented May 20, 2020

@skiminki

c:\test>systeminfo |find "Available Physical Memory"
Available Physical Memory: 41,191 MB

c:\test>wmic ComputerSystem get TotalPhysicalMemory
TotalPhysicalMemory
51530289152


c:\test>wmic OS get FreePhysicalMemory
FreePhysicalMemory
42191064
c:\test>stockfish.exe bench 24000 1 1
Stockfish 200520 64 POPCNT by T. Romstad, M. Costalba, J. Kiiski, G. Linscott
info string Hash table allocation: Windows large pages used.
info string Hash table allocation failed: Not enough resources for large pages.
Will keep trying for 10 seconds...
info string Hash table allocation: Windows large pages not used. (error 1450)

Position: 1/47
info depth 1 seldepth 1 multipv 1 score cp 113 nodes 20 nps 20000 tbhits 0 time 1 pv e2e3
bestmove e2e3

[... bench output for positions 2/47 through 46/47 ...]

Position: 47/47
info depth 1 seldepth 1 multipv 1 score cp 96 nodes 37 nps 37000 tbhits 0 time 1 pv d1e3
bestmove d1e3

===========================
Total time (ms) : 71
Nodes searched  : 2888
Nodes/second    : 40676

@skiminki
Contributor

OK, thanks. I take it that if you wait for a while after the previous SF process has terminated, then SF is again able to use large pages? For the record, we're doing the same with allocation as PostgreSQL does with the large-page file mappings.

There are all sorts of fallbacks we could try, such as allocating the hash table in smaller chunks when the full allocation fails; then we'd probably be able to allocate the hash table at least mostly with large pages. But I'm not quite sure it's worth the effort, since that kind of code tends to become complicated and possibly fragile. I'd also expect almost zero chance that, if I wrote those 100-300 lines of semi-complex code to work around this Win 7 problem, the code would actually get merged...

At least the large pages seem to work better on Windows 10. I'm not sure how much comfort that is for the Win 7 users.

One more note: Windows 7 extended support officially ended in Jan 2020. Windows 8.1 extended support ends in Jan 2023 (mainstream ended in Jan 2018). Would it make sense to consider an upgrade to Windows 10?

Anyway, I'm open to suggestions on what we should do with d3vv's issue.

@MichaelB7
Contributor

MichaelB7 commented May 20, 2020

There will be issues under Windows when users do not have adequate memory for heavy hash usage, i.e. a hash that exceeds 50% of RAM. You do not need a massive hash to get a 10% speed-up; a 2048 MB hash size works wonderfully for most people with limited RAM. I consider anything under 128 GB limited when using large pages. YMMV, of course.

Edit: Obviously the best case is for Windows 10 users, and it may not be suitable for outdated OSes or for those with a limited amount of RAM.

@vondele
Member

vondele commented May 20, 2020

My 2cents: the code looks OK as it stands now.

If the OS says it can't provide the large pages, we shouldn't negotiate with it; that's it, and we have a functional fallback. After all, it is marked as 'expert use'... experts will quickly figure out what the sweet spot for the (optional) feature is.

@skiminki
Contributor

I kind of agree with that. The only thing I'm not completely sure about is whether there should be a UCI/command-line/build-time option to enable or disable LP. It sounds entirely feasible that someone would want to use SF with both settings without going to the group policy editor to toggle the feature on and off.

For example: Doing the regular analysis: LP ON. Running SF under cutechess: LP OFF.

@vondele
Member

vondele commented May 20, 2020

I still prefer to have no additional option, at least for now. It is easy to add an option, it is very difficult to remove. There will always be use-cases for more options, but the majority is covered with the current setup.

@skiminki
Contributor

Works for me

@d3vv

d3vv commented May 20, 2020

Anyways, I'm open for suggestions wrt what we should do with d3vv's issue.

I think it's not necessary to support Win7 if everything is so complicated. In fact, I have only used Win7 for legacy system tests.

@vondele
Member

vondele commented May 20, 2020

OK, thanks. I'll close the issue, and the testing PR.

@vondele vondele closed this as completed May 20, 2020
@samer707

With your permission, I would like to know how to run the Stockfish bench on Windows.

@vondele
Member

vondele commented May 21, 2020

You need to open a Windows command-line window and in that window execute
C:\PATHTOSTOCKFISH\stockfish.exe bench
where C:\PATHTOSTOCKFISH\ is the full path to the binary.

@samer707

Thank you

@samer707

Hello, I found something important to mention. The build by Joost VandeVondele, Thu Aug 20 21:13:07 2020 +0200 (timestamp 1597950787), works fine with the ChessPartner software, but the build from Thu Aug 20 21:14:32 2020 +0200 (timestamp 1597950872) stops analysing after 2 moves. There is a video about it, and all development versions after that one behave the same, up to now (23/8/2020).

@vondele
Member

vondele commented Aug 24, 2020

You need to configure your GUI so that the network file is found, i.e. set EvalFile to the proper value, including the full path. https://github.com/vondele/Stockfish#evalfile
