Concurrency and CPU usage #3

Open
ChrisWhittington opened this issue Feb 12, 2021 · 13 comments

Comments

@ChrisWhittington

Testing using AMD with 64 cores.

If run at concurrency 50, it works for a budget or two and then gives a 'game incomplete' fail.
Same situation with concurrency at 40 and 30.
Concurrency 20 works (so far).

Observing the CPU usage, there's a demand spike at the end of each game run (one budget), which for concurrency 30, 40, 50 takes CPU usage to 100% and the system gets its knickers in a twist and starts reporting fails.

At concurrency 20, CPU usage is 44%, with 255 processes, 3372 threads, and 79 GB of RAM in use.
At the end of each game run (one budget), CPU usage doubles to 80%, processes to 333, threads to 3590, and RAM to 131 GB. I'm getting away with concurrency 20, so far.

Guessing here, but it looks like, at the end of each budget, for some reason it fires up both master and test engines (2x20) and does some processing, using 40(?) cores for a few seconds?
It fails for high concurrency because 60, 80, 100 core fire-ups are too much for it? Well, too much when using my engine.
Is there any way to lower the increased core demand at the end of each budget?

@fsmosca
Owner

fsmosca commented Feb 12, 2021

Does this happen when using stockfish?

@ChrisWhittington
Author

SF runs okay (so far) with 60 concurrency, but it puts much less load on the CPU: 18% at concurrency 60. It still shows the demand spike at the end of each game cycle (each budget), to about 44%.
I also tried 500 game cycles, and depth 9, to see if it loaded more; yes, but not by much.

Quite probably SF is way more efficiently organised, memory-wise, than mine. Even so, the doubling of CPU demand at cycle end is weird. It would be good if you checked it out, and batched or time-delayed things (?) so that not everything core-wise is being hit at once.

@ChrisWhittington
Author

Because the memory and CPU demand spikes (doubling) happen at the end of each budget, I wonder if you're closing all the engines out, and then opening them up again, such that there's overlap? New invocations opening before the old invocations have finished closing down? That might account for the doubling.

@fsmosca
Owner

fsmosca commented Feb 12, 2021

Because the memory and CPU demand spikes (doubling) happen at the end of each budget, I wonder if you're closing all the engines out, and then opening them up again, such that there's overlap? New invocations opening before the old invocations have finished closing down? That might account for the doubling.

Engines are restarted only after each budget.

@fsmosca
Owner

fsmosca commented Feb 12, 2021

SF runs okay (so far) with 60 concurrency, but it puts much less load on the CPU: 18% at concurrency 60. It still shows the demand spike at the end of each game cycle (each budget), to about 44%.

What time control or depth did you use for this test?

Quite probably SF is way more efficiently organised, memory-wise, than mine. Even so, the doubling of CPU demand at cycle end is weird. It would be good if you checked it out, and batched or time-delayed things (?) so that not everything core-wise is being hit at once.

After a budget, all engines are quit. Then it's time for nevergrad to update the params. I will try to add a log to measure the CPU and RAM usage and the time elapsed while nevergrad is updating.

@ChrisWhittington
Author

Giving back memory from many processes may be taking time?
It’s for sure there are 2xN engine threads running/opening/closing simultaneously because of the RAM usage doubling during the CPU usage spike. It can’t be nevergrad grabbing RAM, the amount is too large and more or less exactly matches what N engines grab.
Maybe it’s possible to await engine close signals before starting up again?
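
Something like this rough psutil sketch is what I mean by awaiting the close signals (engine_name is just a placeholder for whatever the engine executable is called, it's not from the Lakas code):

import time
import psutil

def wait_for_engines_to_exit(engine_name, timeout_s=30.0, poll_s=0.5):
    # Block until no process with the given executable name is still running,
    # or until the timeout expires.
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        leftovers = [p for p in psutil.process_iter(['name'])
                     if p.info['name'] == engine_name]
        if not leftovers:
            return True
        time.sleep(poll_s)
    return False

# e.g. call wait_for_engines_to_exit('myengine.exe') before launching the next budget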

@fsmosca
Owner

fsmosca commented Feb 13, 2021

Created a branch: https://github.com/fsmosca/Lakas/tree/more_logging

You need psutil for this:
pip install psutil

It will log the CPU usage to match_lakas.txt when nevergrad updates its data.

sample:

Initial:

2021-02-13 09:53:42,317 |  12048 | INFO  | starting main()
2021-02-13 09:53:42,581 |  12048 | INFO  | budget 1, after asking recommendation      , proc_id: 12048, cpu_usage%: 0, num_threads: 8, proc_name: python
2021-02-13 09:53:42,584 |  12048 | INFO  | before a match starts                      , proc_id: 12048, cpu_usage%: 0, num_threads: 8, proc_name: python
budget 2, after asking recommendation      , proc_id: 12048, cpu_usage%: 12, num_threads: 8, proc_name: python
budget 3, after asking recommendation      , proc_id: 12048, cpu_usage%: 10, num_threads: 8, proc_name: python

Using stockfish with concurrency 6 on my 4-core/8-thread PC, the optimizer uses around 12% CPU for python alone. In my other tests it reached 25%; this is the highest I observed.
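
For reference, the psutil calls behind a log line like the sample above are roughly like this (a simplified sketch, not necessarily the exact code in the branch):

import logging
import os
import psutil

def log_proc_usage(tag):
    # Log the cpu usage, thread count and name of the current (python) process.
    # Note: the first call to cpu_percent(interval=None) returns 0.0, which is
    # why the first sample line above shows cpu_usage%: 0.
    proc = psutil.Process(os.getpid())
    logging.info('%s, proc_id: %d, cpu_usage%%: %d, num_threads: %d, proc_name: %s',
                 tag, proc.pid, proc.cpu_percent(interval=None),
                 proc.num_threads(), proc.name())

# e.g. log_proc_usage('budget 1, after asking recommendation')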

Note that match_lakas.txt can get very big as it logs the engine output from cutechess-cli.
If you test it, just use a small budget of 4 or so, or interrupt after a couple of budgets.
To reduce the log size, remove the line
command += ' -debug'
from lakas.py.

Later I will log the memory used.

@fsmosca
Owner

fsmosca commented Feb 13, 2021

Giving back memory from many processes may be taking time?

That is possible; I am working on logging memory usage.

It’s for sure there are 2xN engine threads running/opening/closing simultaneously because of the RAM usage doubling during the CPU usage spike. It can’t be nevergrad grabbing RAM, the amount is too large and more or less exactly matches what N engines grab.
Maybe it’s possible to await engine close signals before starting up again?

cutechess-cli has -wait N
Wait N milliseconds between games. The default is 0

I will add it as an option later.

Or you can modify the code in Lakas/lakas.py at line 299 (commit 1c5201e):

command += ' -debug'

Just add
command += ' -wait 5000'
to wait for 5 seconds.
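
In context the change is roughly like this (the surrounding command construction is simplified here and is not the actual lakas.py code):

# simplified sketch of how the cutechess-cli command string is built up
command = 'cutechess-cli'
# ... engine, opening, rounds and concurrency options get appended here ...
command += ' -debug'      # existing line 299; remove it to shrink match_lakas.txt
command += ' -wait 5000'  # added line: wait 5000 ms (5 seconds) between games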

@fsmosca
Owner

fsmosca commented Feb 13, 2021

The more logging branch at https://github.com/fsmosca/Lakas/tree/more_logging is updated:
v0.23.3

commit summary

  • Add --cutechess-debug flag
  • Add memory used by python when optimizer updates its data
  • Add --cutechess-wait option

There is a new v0.25.0 featuring a movetime option.

Example:
--move-time-ms 100
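
A rough usage sketch combining the options mentioned in this thread (option names are taken from the commit summary above; the other required lakas.py options are elided):

python lakas.py --move-time-ms 100 --cutechess-wait 5000 --cutechess-debug ...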

@ChrisWhittington
Author

--move-time-ms 10 seems to work (a budget completes about twice as fast as with --move-time-ms 25).

The default wait (the code looks like it defaults to 5000 ms, if I read it correctly) doesn't help with the CPU load, which is still doubling at the end of each budget.

It may be that I am just being too careless with RAM usage (256 GB available makes you lazy). When a process is ended, it has to clear the RAM it gives back (for Windows security reasons), so an exit/start overlap is going to have concurrency x 2 cores busy nulling out the RAM being given back. No doubt the RAM is segmented all over the place by the concurrent starts.
What is cutechess doing? a 5000 ms wait AFTER it closes an engine? Because that ought to work. Weird.

@fsmosca
Owner

fsmosca commented Feb 14, 2021

The default wait (the code looks like it defaults to 5000 ms, if I read it correctly) doesn't help with the CPU load, which is still doubling at the end of each budget.

Yes, the default wait is 5000 ms in Lakas.
Perhaps you will be able to see which application is using more CPU after a budget using Task Manager. Is it your engine, python, cutechess, or another application?

I will also try to log the CPU usage of all the running processes.

What is cutechess doing? a 5000 ms wait AFTER it closes an engine?

From cutechess:
-wait N Wait N milliseconds between games. The default is 0.

Looks like:

  1. game starts
  2. game ends
  3. wait N
    ...

Don't know what it does after that.

I will update the branch and log the CPU and memory usage of cutechess.
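
The per-process logging could be roughly like this psutil sketch (matching cutechess by process name is an assumption here, and the exact name may differ per platform):

import psutil

def log_cutechess_usage(logger):
    # Log cpu% and resident memory of every running cutechess-cli process.
    for proc in psutil.process_iter(['name', 'cpu_percent', 'memory_info']):
        name = (proc.info['name'] or '').lower()
        if not name.startswith('cutechess-cli'):
            continue
        mem = proc.info['memory_info']
        rss_mb = mem.rss / (1024 * 1024) if mem else 0.0
        logger.info('proc_id: %d, proc_name: %s, cpu_usage%%: %s, mem_mb: %.1f',
                    proc.pid, proc.info['name'], proc.info['cpu_percent'], rss_mb)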

What happens if you try to increase the wait to something like 10 seconds?
--cutechess-wait 10000

@fsmosca
Owner

fsmosca commented Feb 16, 2021

Master is now updated with some changes, including those from the more_logging branch.

@Matthies

Maybe this is related to a known cutechess issue: cutechess/cutechess#630
