This repository has been archived by the owner on Apr 28, 2022. It is now read-only.
Add background fetch and multiple client streaming support
- Add parameter stream_maxchunksize, which limits the size of storage chunks allocated when streaming. This creates interruption points at which the receiving clients are notified of new data.
- Add parameter stream_tokens, which controls the default number of tokens available when new stream data arrives from the backend.
- Add counter values for use with streaming.
- Token strategy: any thread hitting end of data is marked as a fast writer. It must then acquire a token before calling RES_StreamWrite, so n_tokens limits the number of thundering threads run whenever new data arrives. The fast-writer flag is cleared on each pass through the loop, and set again if the thread hits end of data and waits.
- Expose the number of tokens as VRT functions accessible through VCL in vcl_fetch. This allows the rate throttling to be tuned per connection.
- Make conditional delivery work while streaming.
- Rename parameter default_tokens to stream_tokens.
- Update the streaming documentation.
- Update the transition graph.
- Make r00979.vtc compatible with threaded streaming.
- Add author in files with more than trivial changes.
- Add WRK_QueueFirst(), which schedules a work request first, without taking queue lengths into account, for workloads already committed to.
- Add WRK_QueueSessionFirst(), which queues a session using WRK_QueueFirst().
- Add SES_NewNonVCA() for getting recycled sessions from the list not owned by the VCA.

Conflicts:
	bin/varnishd/cache_center.c