
reserve a fraction of idle threads for each priority class #2796

Merged · 2 commits · Jan 15, 2020

Commits on Jan 15, 2020

  1. generalize the worker pool reserve to avoid deadlocks

    Previously, we used a minimum number of idle threads (the reserve) to
    ensure that we do not assign all threads to client requests, leaving
    no threads left over for backend requests.
    
    This was actually only a special case of the more general issue
    exposed by h2: lower-priority tasks depend on higher-priority tasks
    (for h2, sessions need streams, which need requests, which may need
    backend requests).
    
    To solve this problem, we divide the reserve by the number of priority
    classes and schedule lower-priority tasks only if enough idle threads
    remain to eventually run higher-priority tasks (a simplified sketch of
    this check follows after the commit list).
    
    This change does not guarantee any upper limit on the amount of time
    it can take for a task to be scheduled (e.g. backend requests could be
    blocking on arbitrarily long timeouts), so the thread pool watchdog is
    still warranted. But this change should guarantee that we do make
    progress eventually.
    
    With the reserves, thread_pool_min must be no smaller than the
    number of priority classes (TASK_QUEUE__END). Ideally, we would
    require an even higher minimum (@dridi rightly suggested 2 *
    TASK_QUEUE__END), but that would prevent the very useful test
    t02011.vtc from working.
    
    For now, the value of TASK_QUEUE__END (5) is hardcoded as such in the
    parameter configuration and documentation, because auto-generating it
    would require include/macro dances which I consider over the top for
    now. Instead, the respective places are marked, and an assert is in
    place to ensure we do not start a worker pool with too small a number
    of workers. I decided against checks in the manager to avoid include
    pollution from the worker (cache.h) into the manager.
    
    Fixes varnishcache#2418 for real
    nigoroll committed Jan 15, 2020
    Commit 3bb8b84
  2. remove a now pointless vtc

    This test was written to detect a deadlock which no longer exists. IMHO,
    the only sensible way to test for its absence now is a load test,
    which is not what we want in vtc.
    nigoroll committed Jan 15, 2020
    Commit 5fe2a46
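
To make the scheduling rule from the first commit concrete, here is a minimal C sketch of the per-class reserve check. It is not the actual Varnish source: the enum member names, the `struct pool` fields, the default reserve fraction, and the helpers `pool_reserve()` / `pool_may_dispatch()` are illustrative assumptions. Only the number of classes (TASK_QUEUE__END == 5) and the idea of dividing the reserve evenly across priority classes come from the commit message above.

```c
#include <assert.h>

/* Priority classes, highest priority first. Names are illustrative;
 * TASK_QUEUE__END == 5 matches the commit message. */
enum task_prio {
    TASK_QUEUE_BO,      /* backend requests */
    TASK_QUEUE_RUSH,
    TASK_QUEUE_REQ,     /* client requests */
    TASK_QUEUE_STR,     /* h2 streams */
    TASK_QUEUE_VCA,     /* session acceptors */
    TASK_QUEUE__END     /* number of priority classes */
};

/* Hypothetical, simplified pool state. */
struct pool {
    unsigned nidle;     /* currently idle threads */
    unsigned nthr;      /* total threads in the pool */
};

/* Threads kept in reserve, spread over the priority classes.
 * The 5% default is an assumption for illustration only. */
static unsigned
pool_reserve(const struct pool *pp, unsigned thread_pool_reserve)
{
    unsigned reserve = thread_pool_reserve;

    if (reserve == 0)
        reserve = pp->nthr / 20 + 1;    /* assumed default fraction */
    if (reserve < TASK_QUEUE__END)
        reserve = TASK_QUEUE__END;      /* at least one share per class */
    return (reserve);
}

/*
 * A task of class `prio` may only be dispatched if enough idle threads
 * remain for the higher-priority classes: the reserve is divided evenly
 * among the classes, and class `prio` must leave `prio` shares behind.
 * The highest-priority class (prio == 0) may take the last idle thread.
 */
static int
pool_may_dispatch(const struct pool *pp, enum task_prio prio,
    unsigned thread_pool_reserve)
{
    unsigned reserve = pool_reserve(pp, thread_pool_reserve);

    /* mirrors the thread_pool_min >= TASK_QUEUE__END constraint */
    assert(pp->nthr >= TASK_QUEUE__END);
    return (pp->nidle > reserve * (unsigned)prio / TASK_QUEUE__END);
}
```

Under this rule, a lowest-priority task (e.g. accepting a new session) is dispatched only while roughly the whole reserve is still idle, while the highest-priority class can always take the last idle thread. That is what makes eventual progress possible, even though, as the commit message notes, no upper bound on scheduling latency is guaranteed and the thread pool watchdog is still warranted.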