Conversation
CLA Signature Pass: hipudding, thanks for your pull request. All authors of the commits have signed the CLA. 👍
Pull request overview
This PR simplifies the thread count calculation in the fetch_ascend_data.py script, hardcoding it to always use 32 worker threads instead of dynamically selecting a value based on the number of available CPU cores (capped at 32).
Changes:
- Replaces `num_threads = min(os.cpu_count() or 4, 32)` with the constant `num_threads = 32`, fixing the thread count at the maximum value.
```diff
  # 创建工作线程(限制最大线程数为32,避免API限流)
- num_threads = min(os.cpu_count() or 4, 32)
+ num_threads = 32
```
The comment on line 318 still says "限制最大线程数为32,避免API限流" (limit max threads to 32 to avoid API rate limiting), but the original code used `min(os.cpu_count() or 4, 32)`, meaning 32 was an upper bound, not a target. By hardcoding `num_threads = 32`, the thread count is now fixed at 32 regardless of the number of available CPUs or tasks, which can cause unnecessary resource consumption on machines with fewer cores (e.g. CI runners with 2–4 CPUs). More critically, the rationale in the comment is now inverted: the original code treated 32 as a ceiling to stay safe, while the new code always spawns the maximum, increasing the risk of hitting rate limits on the external hiascend.com API. Either update the comment to reflect this change or, better, preserve the original `min(os.cpu_count() or 4, 32)` logic so the thread count scales adaptively with the machine.
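If the adaptive behavior is kept, the reviewer's suggestion could be sketched as below. This is a minimal illustration, not the actual code of fetch_ascend_data.py: the `pick_num_threads` helper, the placeholder task list, and the `len` stand-in for the real fetch function are all hypothetical.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def pick_num_threads(num_tasks: int, cap: int = 32) -> int:
    """Scale workers with available CPUs, but never exceed the cap
    (to stay clear of API rate limits) or the number of tasks."""
    cpus = os.cpu_count() or 4  # os.cpu_count() may return None
    return max(1, min(cpus, cap, num_tasks))

# Hypothetical task list standing in for the real list of pages to fetch.
urls = [f"https://example.invalid/doc/{i}" for i in range(8)]

num_threads = pick_num_threads(len(urls))
with ThreadPoolExecutor(max_workers=num_threads) as pool:
    # len() is a stand-in for the real fetch function.
    results = list(pool.map(len, urls))
```

Capping by `num_tasks` as well avoids spawning idle workers when there are fewer tasks than cores, which the hardcoded `32` cannot do.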
No description provided.