psutil.cpu_percent() is not thread safe #1703

Closed · giampaolo opened this issue Feb 21, 2020 · 1 comment · Fixed by #2282

giampaolo (Owner) commented Feb 21, 2020

`psutil.cpu_percent(interval=None)` and `psutil.cpu_times_percent(interval=None)` are non-blocking functions that return the CPU percent consumption since the last time they were called. In order to do so they save the last CPU timings in a global variable, which means they are not thread safe. E.g., if 10 threads call `cpu_percent(interval=None)` once per second, only 1 thread out of 10 will get the right result, as it will "invalidate" the timings for the other 9. The problem can be reproduced with the following script:

```python
import threading, time, psutil

NUM_WORKERS = psutil.cpu_count()

def measure_cpu():
    while 1:
        print(psutil.cpu_percent())
        time.sleep(1)

for x in range(NUM_WORKERS):
    threading.Thread(target=measure_cpu).start()
while 1:
    print()
    time.sleep(1.1)
```

The output looks like this, and it shows how inconsistent the CPU measurements are between different threads (notice the 0.0 values):

```
3.5
3.5
0.0
0.0

2.8
2.8
0.0
0.0

2.5
2.5
0.0
0.0

2.5
2.5
2.5
2.5

3.3
3.3
3.3
50.0

2.8
0.0
0.0
0.0
```

The problem emerged in #1667 (comment), and it's a nasty one to solve in terms of API. The only idea I can think of right now that doesn't require using a class is passing a dict parameter, as in:

```python
>>> map = {}
>>> while 1:
...     print(psutil.cpu_percent(interval=None, map=map))
...     time.sleep(1)
```

If map is passed, the last/internal CPU timings should be stored in the dict and not in the global var. This same approach is used by the (now deprecated) asyncore stdlib module.
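
For illustration only, here is a rough sketch of how such a `map` parameter could behave, written as a standalone helper on top of the public psutil API rather than as a change to `cpu_percent()` itself. The helper name, the `"last_cpu_times"` key, and the busy-time formula are assumptions made for this sketch, not psutil internals:

```python
import psutil

def cpu_percent_with_map(map):
    # Non-blocking CPU percent where the previous timings live in the
    # caller-supplied dict instead of a psutil global, so each caller
    # keeps its own baseline.
    t2 = psutil.cpu_times()
    t1 = map.get("last_cpu_times", t2)
    map["last_cpu_times"] = t2

    def busy(t):
        # Count everything except idle (and iowait, where available) as busy.
        return sum(t) - t.idle - getattr(t, "iowait", 0.0)

    total_delta = sum(t2) - sum(t1)
    if total_delta <= 0:
        # First call (or no measurable time elapsed): nothing to compare yet.
        return 0.0
    return round((busy(t2) - busy(t1)) / total_delta * 100, 1)
```

Each thread (or caller) would then keep its own dict, e.g. a `timings = {}` created inside `measure_cpu()`, so concurrent callers no longer invalidate each other's baselines.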

The same problem would also affect `psutil.process_iter()` if we introduced the new `new_only` parameter, which I reverted precisely for this reason (see #1667 (comment)). So, theoretically, if we decide to stick with this approach, `process_iter()` should also get a `map` parameter (3 functions in total).
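
By way of illustration, the same caller-owned-state idea applied to a hypothetical `new_only` behavior might look like the sketch below; `iter_new_processes` is a made-up helper, not a psutil API, and it only approximates what the reverted `new_only` parameter was meant to do:

```python
import psutil

def iter_new_processes(map):
    # Yield only the processes that have not been seen by previous calls
    # made with this same dict. The "seen" set lives in the caller's dict,
    # so concurrent callers with separate dicts don't interfere.
    # (PID reuse is ignored here for simplicity.)
    seen = map.setdefault("seen_pids", set())
    for proc in psutil.process_iter():
        if proc.pid not in seen:
            seen.add(proc.pid)
            yield proc
```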

giampaolo (Owner, Author) commented:

Note to self: an alternative approach which does not require any new parameter is the following.

Maintain an internal "map" (a dict) in which each key is the id of the current thread (`threading.current_thread()`).
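
A minimal sketch of that idea, assuming `threading.get_ident()` as the per-thread key and a plain module-level dict guarded by a lock; this is just an illustration of the note above, not necessarily how the actual fix in #2282 is implemented:

```python
import threading
import psutil

_last_cpu_times = {}   # {thread id: cpu_times namedtuple}
_lock = threading.Lock()

def cpu_percent_per_thread():
    # Non-blocking cpu_percent() variant where each calling thread keeps
    # its own baseline timings, so concurrent callers cannot "invalidate"
    # each other's measurements.
    tid = threading.get_ident()
    t2 = psutil.cpu_times()
    with _lock:
        t1 = _last_cpu_times.get(tid, t2)
        _last_cpu_times[tid] = t2

    def busy(t):
        return sum(t) - t.idle - getattr(t, "iowait", 0.0)

    total_delta = sum(t2) - sum(t1)
    if total_delta <= 0:
        # First call from this thread: no previous sample to compare against.
        return 0.0
    return round((busy(t2) - busy(t1)) / total_delta * 100, 1)
```

One caveat of keying by thread id is that entries for dead threads are never evicted, so a long-running program that spawns many short-lived threads would slowly grow the dict unless some cleanup is added.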

giampaolo added a commit that referenced this issue Aug 1, 2023

After the patch, the same script produces consistent measurements across threads:

```
0.0
0.0
0.0

0.0
2.0
2.3
2.3
2.3

5.5
5.3
5.5
5.5

3.3
3.3
3.0
3.0

9.0
8.9
9.0
9.4

30.0
30.0
29.6
30.0

24.7
24.7
24.7
24.7
```