
Improve file read performance #105

Merged 2 commits on Jun 4, 2020
Commits on Jun 4, 2020

  1. Prevent the prefetch from making big HTTP request headers

    During prefetch, the current filesystem implementation splits the prefetch
    range into many small, mostly neighbouring chunks and lists them all in a
    single HTTP Range request header without squashing them enough. This
    sometimes produces a very large header and the request fails. This commit
    fixes that by aggressively squashing neighbouring/overlapping chunks in the
    Range header as much as possible (a sketch of this squashing follows the
    commit list).
    
    Signed-off-by: Kohei Tokunaga <ktokunaga.mail@gmail.com>
    ktock committed Jun 4, 2020
    Commit d8b1b19
  2. Improve file read performance

    Benchmarking in the community showed that file read performance is low,
    especially for random and parallel reads.

    This commit addresses that with the following fixes:
    - minimizing slice allocations on the file read path by leveraging
      sync.Pool (a sketch of this follows the commit list),
    - minimizing memory copies and disk I/O by allowing partial ranges of a
      blob to be fetched from the cache, and
    - minimizing the locked region in the cache.
    
    Signed-off-by: Kohei Tokunaga <ktokunaga.mail@gmail.com>
    ktock committed Jun 4, 2020
    Commit b3c5173
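Below is a minimal Go sketch of the kind of range squashing described in the first commit: sort the requested chunks, merge any that overlap or sit directly next to each other, and only then build the HTTP Range header. The `region` type and the `squash`/`rangeHeader` helpers are hypothetical names used for illustration, not the code actually added in this PR.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// region is a hypothetical closed byte range [b, e].
type region struct{ b, e int64 }

// squash merges neighbouring/overlapping regions so that the resulting
// Range header lists as few byte ranges as possible.
func squash(regs []region) []region {
	if len(regs) == 0 {
		return nil
	}
	sort.Slice(regs, func(i, j int) bool { return regs[i].b < regs[j].b })
	merged := []region{regs[0]}
	for _, r := range regs[1:] {
		last := &merged[len(merged)-1]
		if r.b <= last.e+1 { // overlapping or directly neighbouring
			if r.e > last.e {
				last.e = r.e
			}
			continue
		}
		merged = append(merged, r)
	}
	return merged
}

// rangeHeader builds the value of an HTTP "Range" request header from the
// squashed regions, e.g. "bytes=0-1023,2048-4095".
func rangeHeader(regs []region) string {
	parts := make([]string, len(regs))
	for i, r := range regs {
		parts[i] = fmt.Sprintf("%d-%d", r.b, r.e)
	}
	return "bytes=" + strings.Join(parts, ",")
}

func main() {
	chunks := []region{{0, 511}, {512, 1023}, {256, 700}, {2048, 4095}}
	fmt.Println(rangeHeader(squash(chunks)))
	// Output: bytes=0-1023,2048-4095
}
```

Without the merge step, every small chunk would appear as its own entry in the header, which is what allowed the header to grow past what the server accepts.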
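Below is a minimal Go sketch of the sync.Pool idea from the second commit: read buffers are borrowed from a pool and returned after use, so each file read does not allocate a fresh slice. `chunkSize`, `bufPool`, and `readChunk` are hypothetical names for illustration; the PR's actual read path and cache interface differ.

```go
package main

import (
	"fmt"
	"sync"
)

// chunkSize is a hypothetical fixed chunk size.
const chunkSize = 4096

// bufPool reuses fixed-size read buffers so that each read does not
// allocate a new slice.
var bufPool = sync.Pool{
	New: func() interface{} { return make([]byte, chunkSize) },
}

// readChunk is a hypothetical read helper: it borrows a buffer from the pool,
// fills it with the chunk containing off (stubbed here), copies the requested
// sub-range into p, and returns the buffer to the pool.
func readChunk(p []byte, off int64) (int, error) {
	buf := bufPool.Get().([]byte)
	defer bufPool.Put(buf)

	// ... fetch the chunk containing `off` into buf (from cache or remote) ...

	n := copy(p, buf[off%chunkSize:])
	return n, nil
}

func main() {
	p := make([]byte, 128)
	n, _ := readChunk(p, 4100)
	fmt.Println("read", n, "bytes")
}
```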