
ThreadPoolExecutor doesn't reuse threads until #threads == max_workers #69070

Closed
MattSpitz mannequin opened this issue Aug 17, 2015 · 23 comments
Labels
3.8 (EOL), stdlib (Python modules in the Lib dir), type-bug (an unexpected behavior, bug, or error)

Comments

MattSpitz mannequin commented Aug 17, 2015

BPO 24882
Nosy @brianquinlan, @pitrou, @methane, @Bluehorn, @MojoVampire, @tomMoral, @dmacnet, @iUnknwn, @pierreglaser
PRs
  • bpo-24882: Let ThreadPoolExecutor reuse idle threads before creating new thread #6375
  • bpo-24882: eagerly spawn threads in ThreadPoolExecutor #6530
Files
  • many_threads.py

Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = None
    closed_at = <Date 2019-05-22.21:34:37.217>
    created_at = <Date 2015-08-17.14:59:00.382>
    labels = ['3.8', 'type-bug', 'library']
    title = "ThreadPoolExecutor doesn't reuse threads until #threads == max_workers"
    updated_at = <Date 2019-05-22.21:34:37.216>
    user = 'https://bugs.python.org/MattSpitz'

    bugs.python.org fields:

    activity = <Date 2019-05-22.21:34:37.216>
    actor = 'pitrou'
    assignee = 'none'
    closed = True
    closed_date = <Date 2019-05-22.21:34:37.217>
    closer = 'pitrou'
    components = ['Library (Lib)']
    creation = <Date 2015-08-17.14:59:00.382>
    creator = 'Matt Spitz'
    dependencies = []
    files = ['43061']
    hgrepos = []
    issue_num = 24882
    keywords = ['patch']
    message_count = 23.0
    messages = ['248732', '248749', '266716', '266772', '279144', '279145', '315007', '315090', '315091', '315092', '315260', '315261', '315264', '315265', '315471', '315485', '315715', '341917', '341935', '342796', '342798', '343243', '343244']
    nosy_count = 11.0
    nosy_names = ['bquinlan', 'pitrou', 'methane', 'torsten', 'josh.r', 'Matt Spitz', 'tilsche', 'tomMoral', 'dmacnet', 'iunknwn', 'pierreglaser']
    pr_nums = ['6375', '6530']
    priority = 'normal'
    resolution = 'fixed'
    stage = 'resolved'
    status = 'closed'
    superseder = None
    type = 'behavior'
    url = 'https://bugs.python.org/issue24882'
    versions = ['Python 3.8']

    MattSpitz mannequin commented Aug 17, 2015

    https://hg.python.org/cpython/file/3.4/Lib/concurrent/futures/thread.py#l114

    ThreadPoolExecutor will keep spawning new threads, even if existing threads are waiting for new work. We should check against the queue length when deciding to spawn a new thread to avoid creating unnecessary threads.
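A minimal way to observe the behaviour being reported (a sketch, not attached to the issue; the result depends on the Python version — before the eventual 3.8 fix a pool like the one below typically ends up creating all four workers even though the tasks run strictly one at a time, while later versions usually reuse a single thread):

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

def task():
    time.sleep(0.01)
    return threading.get_ident()

idents = set()
with ThreadPoolExecutor(max_workers=4) as executor:
    # Submit strictly one task at a time, waiting for each result,
    # so at most one worker is ever busy at once.
    for _ in range(10):
        idents.add(executor.submit(task).result())
        time.sleep(0.05)  # give the now-idle worker time to signal availability

print(f"distinct worker threads used: {len(idents)}")
```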

    @MattSpitz MattSpitz mannequin added the type-bug An unexpected behavior, bug, or error label Aug 17, 2015
    @MattSpitz MattSpitz mannequin changed the title ThreadPoolExceutor doesn't reuse threads until #threads == max_workers ThreadPoolExecutor doesn't reuse threads until #threads == max_workers Aug 17, 2015
    MattSpitz mannequin commented Aug 17, 2015

    On further investigation, it appears that we can't just check against the queue length, as it doesn't indicate whether threads are doing work or idle.

    A change here will need a counter/semaphore to keep track of the number of idle/working threads, which may have negative performance implications.

    Bluehorn mannequin commented May 30, 2016

    For demonstration purposes, here is a small example specifically for Linux which shows how each request starts a new thread even though the client blocks for each result.

    MojoVampire mannequin commented May 31, 2016

    Is there a good reason to worry about overeager worker spawning? ProcessPoolExecutor spawns all workers when the first work item is submitted ( https://hg.python.org/cpython/file/3.4/Lib/concurrent/futures/process.py#l361 ), only ThreadPoolExecutor even makes an effort to limit the number of threads spawned. Threads are typically more lightweight than processes, and with the recent GIL improvements, the CPython specific costs associated with threads (particularly threads that are just sitting around waiting on a lock) are fairly minimal.

    It just seems like if eager process spawning isn't a problem, neither is (cheaper) eager thread spawning.

    dmacnet mannequin commented Oct 21, 2016

    If each worker thread ties up other resources in an application, such as handles to server connections, conserving threads could have a significant impact. That's the situation for an application I am involved with.

    I've written and tested a patch to make this change, using a second Queue for the worker threads to notify the executor in the main thread by sending a None when they finish a WorkItem and are therefore idle and ready for more work. It's a fairly simple patch. It does add a little more overhead to executing a job, inevitably. I can submit the patch if there's interest. Otherwise, perhaps the TODO comment in thread.py should be rewritten to explain why it's not worth doing.
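The notification scheme described above can be sketched roughly as follows (function names are hypothetical illustrations, not the actual patch): workers put a token on a second queue whenever they finish a WorkItem, and the executor drains one token before deciding whether to spawn a thread.

```python
import queue

# Second queue on which idle workers announce themselves.
idle_queue = queue.Queue()

def worker_finished_item():
    idle_queue.put(None)  # "I'm idle and ready for more work"

def need_new_thread():
    try:
        idle_queue.get_nowait()  # an idle worker exists: reuse it
        return False
    except queue.Empty:
        return True
```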

    dmacnet mannequin commented Oct 21, 2016

    This issue seems to overlap with 14119.

    iUnknwn mannequin commented Apr 5, 2018

    I've submitted a PR that should resolve this - it uses a simple atomic counter to ensure new threads are not created if existing threads are idle.

    One concern I do have - while writing the patch, I noticed the existing submit method (specifically the adjust_thread_count function) isn't thread safe. I've added more details in the PR.
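The idle-thread accounting idea can be sketched with a semaphore (class and method names here are hypothetical; the executor would try a non-blocking acquire before spawning, so a new thread is only created when no worker is idle):

```python
import threading

class IdleCounter:
    """Sketch of idle-worker accounting: a semaphore counts idle
    workers, and a non-blocking acquire decides whether to spawn."""

    def __init__(self):
        self._idle = threading.Semaphore(0)

    def worker_became_idle(self):
        self._idle.release()

    def need_new_thread(self):
        # acquire() returns False immediately when the count is zero,
        # i.e. when no worker is currently idle.
        return not self._idle.acquire(blocking=False)
```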

    @iUnknwn iUnknwn mannequin added the stdlib Python modules in the Lib dir label Apr 5, 2018
    pitrou commented Apr 8, 2018

I'm not fond of this proposal. The existing behaviour is harmless, especially for a thread pool, since threads are cheap resources. Improving the logic might seem nice, but it also complicates the executor implementation further.

    Besides, once the N threads are spawned, they remain alive until the executor is shut down. So all it takes is a spike in incoming requests and you don't save resources anymore.

    @pitrou pitrou added the 3.8 (EOL) end of life label Apr 8, 2018
    pitrou commented Apr 8, 2018

    If each worker thread ties up other resources in an application, such as handles to server connections, conserving threads could have a significant impact.

    You may want to implement a pooling mechanism for those connections, independent of the thread pool. It is also probably more flexible (you can implement whichever caching and lifetime logic benefits your application).
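A minimal sketch of such an independent resource pool (the factory and sizing policy are placeholders; a real pool would add lifetime and health checks):

```python
import queue

class ConnectionPool:
    """Pool expensive resources separately from the thread pool."""

    def __init__(self, factory, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self):
        # Blocks until a connection is free, capping concurrent use at `size`.
        return self._pool.get()

    def release(self, conn):
        self._pool.put(conn)

# Usage with any executor: workers acquire/release around each job.
pool = ConnectionPool(factory=object, size=2)  # object() stands in for a real connection
conn = pool.acquire()
pool.release(conn)
```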

    pitrou commented Apr 8, 2018

    Side note:

    One concern I do have - while writing the patch, I noticed the existing submit method (specifically the adjust_thread_count function) isn't thread safe.

    True. The executor is obviously thread-safe internally (as it handles multiple worker threads). But the user should not /call/ it from multiple threads.

    (most primitives exposed by the Python stdlib are not thread-safe, except for the simplest ones such as lists, dicts etc.)

    iUnknwn mannequin commented Apr 13, 2018

    The existing behavior seems strange (and isn't well documented). The code had a TODO comment from bquinlan to implement idle thread recycling, so that was why I made the change.

    That said, if threads are cheap, why not just create all the work threads on initialization, and then remove all the logic entirely?

    Also, regarding the executor and thread-safety, there's an example in the current docs showing a job being added to the executor from a worker thread (it's part of the example on deadlocks, but it focuses on max worker count, not on the executor's thread-safety).

    pitrou commented Apr 13, 2018

    That said, if threads are cheap, why not just create all the work threads on initialization, and then remove all the logic entirely?

    That would sound reasonable to me. bquinlan has been absent for a long time, so I wouldn't expect an answer from him on this issue.

    Also, regarding the executor and thread-safety, there's an example in the current docs showing a job being added to the executor from a worker thread

    Actually, looking at the code again, submit() is protected by the shutdown_lock, so it seems it should be thread-safe. That's on git master btw.

    iUnknwn mannequin commented Apr 13, 2018

    Alright - I'll put together another patch that removes the logic, and spins up all threads during initialization.

    Do you want me to create a completely new PR, or just update my existing one?

    pitrou commented Apr 13, 2018

    Creating a new PR would be cleaner IMHO.

    iUnknwn mannequin commented Apr 19, 2018

Done - as recommended, I've opened a new PR that changes the behavior to spawn all worker threads when the executor is created. This eliminates all the thread logic from the submit function.

    methane commented Apr 19, 2018

Why not just remove the TODO comment?
Threads are cheap, but not zero-cost.

    iUnknwn mannequin commented Apr 24, 2018

    I feel like there are two reasonable options here:

    1. We can implement a thread pool with basic resource tracking. This means idle threads get recycled, and threads that have been sitting idle for a while are terminated as demand drops, so resources can be reclaimed. This wouldn't require very much work - we would just need to modify some of the queue operations to have timeouts, and track the number of idle threads (most of that's already in my first PR). We could easily add options like min_threads and idle_thread_timeout as options in kwargs to the init routine.

    2. We can skip all tracking, and spin a fixed number of threads at initialization. This removes the complexity of locks and counts, and means the thread pool executor will work identically to the process pool executor (which also eagerly spawns resources). If we want this, this is ready to go in the second PR.

    I personally like option 1 because it feels closer to other languages I've worked in, but I'd like a bit more guidance from the reviewers before proceeding.
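The idle-timeout part of option 1 could be sketched as a worker loop that waits a bounded time for new work and exits if none arrives (parameter names such as `idle_timeout` are hypothetical, mirroring the `idle_thread_timeout` idea above):

```python
import queue

def worker_loop(work_queue, idle_timeout=60.0):
    while True:
        try:
            item = work_queue.get(timeout=idle_timeout)
        except queue.Empty:
            return  # idle too long: let this thread die so resources are reclaimed
        if item is None:
            return  # shutdown sentinel
        item()

# Demonstration: one job, then a shutdown sentinel.
results = []
q = queue.Queue()
q.put(lambda: results.append(1))
q.put(None)
worker_loop(q, idle_timeout=0.1)
```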

brianquinlan (Contributor) commented:

    When I first wrote and started using ThreadPoolExecutor, I had a lot of code like this:

    with ThreadPoolExecutor(max_workers=500) as e:
      e.map(download, images)

    I didn't expect that images would be a large list but, if it was, I wanted all of the downloads to happen in parallel.

I didn't want to have to explicitly take into account the list size when starting the executor (e.g. max_workers=min(500, len(images))), but I also didn't want to create 500 threads up front when I only needed a few.

    My use case involved transient ThreadPoolExecutors so I didn't have to worry about idle threads.

    In principle, I'd be OK with trying to avoid unnecessary thread creation if the implementation can be simple and efficient enough.

    #6375 seems simple enough but I haven't convinced myself that it works yet ;-)

brianquinlan (Contributor) commented:

    After playing with it for a while, #6375 seems reasonable to me.

    It needs tests and some documentation.

    Antoine, are you still -1 because of the complexity increase?

    tilsche mannequin commented May 18, 2019

    We ran into this issue in the context of asyncio which uses an internal ThreadPoolExecutor to provide an asynchronous getaddrinfo / getnameinfo.

We observed that an async application spawned more and more threads through several reconnects. With a maximum of 5 × CPUs, that amounted to dozens of threads, which easily looked like a resource leak.

    At least in this scenario I would strongly prefer to correctly reuse idle threads.

Spawning all possible threads on initialization in such a transparent case would be quite bad. Imagine a process-parallel daemon running an apparently single-threaded asyncio loop that only uses these executors for a single asyncio.getaddrinfo: run 80 instances on an 80-core machine and you get 32,000 extra implicit threads.

    Now you can argue whether the default executor in asyncio is good as is, but if the executors properly reuse threads, it would be quite unlikely to be a practical problem.
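For context, the path involved can be exercised directly: `loop.getaddrinfo` offloads the blocking lookup to the event loop's default ThreadPoolExecutor, so repeated lookups hit exactly the thread-reuse behaviour discussed in this issue (a sketch; how many worker threads accumulate depends on the Python version).

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    # The blocking socket.getaddrinfo call runs in the loop's default
    # ThreadPoolExecutor behind the scenes.
    return await loop.getaddrinfo("localhost", 80)

infos = asyncio.run(main())
print(f"resolved {len(infos)} address(es) for localhost")
```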

    pitrou commented May 18, 2019

    Thomas, I think that's a good argument, so perhaps we should do this (strive to reuse threads) after all.

    pitrou commented May 22, 2019

    New changeset 904e34d by Antoine Pitrou (Sean) in branch 'master':
    bpo-24882: Let ThreadPoolExecutor reuse idle threads before creating new thread (bpo-6375)

    pitrou commented May 22, 2019

    Thank you for your contribution iunknwn!

    @pitrou pitrou closed this as completed May 22, 2019
    @ezio-melotti ezio-melotti transferred this issue from another repository Apr 10, 2022