
Pack pipeline cache contents using tar/7z #10925

Closed
willsmythe opened this issue Jul 17, 2019 · 10 comments

Comments

@willsmythe
Contributor

Basic information

  • Question, bug, or feature? : Feature

  • Task name: CacheBeta/Cache

Environment

Hosted

Description

For general information on caching in Azure Pipelines, see: https://aka.ms/pipeline-caching-docs

To improve cache restore/save performance, especially for caches with a large number of small files (like node_modules), the Cache task should have built-in support for "packing" the cache contents: consolidating all files under the specified "path" into a single file and storing only that file in the cache on the server. Why?

  1. Reduce number of network connections during cache restore/save
  2. Improve performance (for most scenarios)
  3. Improve reliability

For performance reasons, "tar" should be used on Linux and macOS, and "7z" on Windows.
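As a rough illustration of what such a pack step could look like (all names, paths, and the demo directory below are hypothetical, not the task's actual implementation):

```shell
#!/bin/sh
# Sketch of a "pack" step: consolidate a cache directory into one file
# so restore/save becomes a single sequential transfer instead of many
# small ones. Illustrative only; not the agent plugin's real code.
set -e

CACHE_PATH=demo_node_modules      # stands in for the task's "path" input
mkdir -p "$CACHE_PATH/lodash"
echo '{}' > "$CACHE_PATH/lodash/package.json"

case "$(uname -s)" in
  Linux|Darwin)
    # tar is preferred on posix; no compression keeps CPU cost low.
    tar -cf cache.tar -C "$CACHE_PATH" .
    tar -tf cache.tar
    ;;
  *)
    # A Windows agent would shell out to 7z instead, e.g.:
    #   7z a cache.7z demo_node_modules
    echo "non-posix: would use 7z"
    ;;
esac
```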

Turning on cache content packing

For now, the option for packing a cache's contents should be controlled via an environment variable (e.g. AZDEVOPS_PIPELINECACHE_PACK), with a decision coming later about whether to always pack or give developers the option (likely via an input on the task).

Changes to the generated cache fingerprint

Definitions: the key is the developer-provided identifier for the cache that is typically a mix of strings and file paths; the fingerprint is a hash generated from the key segments and is the actual identifier for the cache on the server.

Since packing changes the actual contents of the cache (i.e. a single tar or 7z file versus many individual files), the task (technically the agent plugin) needs to append an appropriate segment to the developer-provided key to ensure a different fingerprint is produced. This makes sense logically, since the packed cache's contents on the server differ from the "same" cache whose contents weren't packed. We should establish a "namespace" for these key segments injected by the task, and then define different key segments for the different pack formats, for example:

  • microsoft.azure.pipelines.caching.pack=tar (on posix)
  • microsoft.azure.pipelines.caching.pack=7z (on Windows)

Notice the key segment doesn't say anything about the OS, just the format of the contents [which happens to be determined by the OS].

The naming convention for key segments follows the convention for Docker labels and gives us room to support other key segments in the future. Developers should be blocked from specifying key segments in this namespace.
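To make the key-to-fingerprint relationship concrete, here is a sketch of how an injected pack segment changes the resulting hash. The actual hashing scheme is internal to the agent plugin; SHA-256 over the joined segments is an assumption for demonstration only, and the key shown is made up:

```shell
#!/bin/sh
# Illustrative fingerprint computation. Assumption: the fingerprint is
# a hash over the key segments (the real algorithm is not specified in
# this issue).
set -e

KEY='"npm" | $(Agent.OS) | package-lock.json'   # developer-provided key
PACK_SEGMENT='microsoft.azure.pipelines.caching.pack=tar'

# Appending the injected segment changes the hash, so the packed and
# unpacked versions of the "same" cache get different fingerprints.
FP_UNPACKED=$(printf '%s' "$KEY" | sha256sum | cut -d' ' -f1)
FP_PACKED=$(printf '%s|%s' "$KEY" "$PACK_SEGMENT" | sha256sum | cut -d' ' -f1)

echo "unpacked: $FP_UNPACKED"
echo "packed:   $FP_PACKED"
test "$FP_UNPACKED" != "$FP_PACKED" && echo "fingerprints differ"
```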

All of this should be somewhat transparent to the developer (but should still be reported in the logs so developers understand why turning pack on/off impacts the cache's identifier). Developers should continue to use variables like $(Agent.OS) in their cache key when they know the cache's contents are different for different OSes (and not just rely on the auto-injected pack key segment creating this differentiation).

Runtime behavior

When cache packing is enabled ...

On restore

The task (technically the agent plugin) should append an appropriate key segment to the developer-provided key (and optional "restore keys") based on the preferred pack technology for the environment (tar on posix, 7z on Windows).

This generated fingerprint will then be looked up on the server as usual. If there is a cache hit, the downloaded contents will be appropriately unpacked and dropped into the developer-specified path.

On save

Like during restore, the task should append an appropriate key segment based on the preferred pack technology. If a cache with this key doesn't already exist on the server, the task should appropriately pack the files in the specified path and upload this single file as the contents for the new cache.
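The restore and save behavior above can be sketched end to end. Everything here is illustrative: the "server" is a local directory standing in for the cache service, and the fingerprint and paths are made up (save is shown first here so the restore below hits):

```shell
#!/bin/sh
# End-to-end sketch of save/restore when packing is enabled.
set -e

SERVER=./fake-cache-server            # stands in for the hosted cache
FINGERPRINT=abc123                    # hash of key + injected pack segment
CACHE_PATH=./restored_node_modules    # developer-specified "path"
mkdir -p "$SERVER" "$CACHE_PATH"

# --- save: pack the files in the path, upload as the cache entry ---
mkdir -p src_modules/left-pad
echo 'module.exports = s => s' > src_modules/left-pad/index.js
if [ ! -f "$SERVER/$FINGERPRINT.tar" ]; then
  tar -cf "$SERVER/$FINGERPRINT.tar" -C src_modules .
fi

# --- restore: on a hit, download and unpack into the path ---
if [ -f "$SERVER/$FINGERPRINT.tar" ]; then
  tar -xf "$SERVER/$FINGERPRINT.tar" -C "$CACHE_PATH"
  echo "cache hit"
else
  echo "cache miss"
fi
```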

@jneira

jneira commented Jul 19, 2019

I've tried to manually pack and unpack cache files to avoid #10841, but strangely the file.tar.gz is recognized as a directory by the tar utility:

tar: /archive.tar.gz: Cannot read: Is a directory
tar: At beginning of tape, quitting now
tar: Error is not recoverable: exiting now

but its file attributes say that it is a file (-rw-r--r--)!

@willsmythe
Contributor Author

@jneira - to make it easier to test the performance of tar/zip, I created pipeline step templates that handle tarring/untarring files cached with the CacheBeta@0 task. See details here: https://github.com/willsmythe/caching-templates

Feel free to give it a try. If you run into a problem, please report it at willsmythe/caching-templates.

Disclaimer: this is not an official solution from Microsoft. It simply wraps the Microsoft-provided CacheBeta task and provides tar/untar (or zip/unzip on Windows) support.

Alternatively, point me to your repo and I can take a look ...

@jneira

jneira commented Jul 20, 2019

@willsmythe thanks! I'm actually implementing the template steps manually (afaiu: tar the original folder, put the tar file in another folder, and cache that one), so maybe I'll give it a try.

@jneira

jneira commented Jul 22, 2019

I've finally been able to cache the tar files with bash script steps. As I only need packing to work around #10841 temporarily, and only for Linux and macOS, I'll keep the manual hack for now. Thanks anyway @willsmythe
My final configuration was:

# .....
variables:
  STACK_ROOT: /home/vsts/.stack
steps:
  - task: CacheBeta@0
    inputs:
      key: |
        "cache"
        $(Agent.OS)
        $(Build.SourcesDirectory)/$(YAML_FILE)
      path: .azure-cache
      cacheHitVar: CACHE_RESTORED
    displayName: "Download cache"
  - bash: |
      mkdir -p $STACK_ROOT
      tar -xzf .azure-cache/stack-root.tar.gz -C /
      mkdir -p .stack-work
      tar -xzf .azure-cache/stack-work.tar.gz
    displayName: "Unpack cache"
    condition: eq(variables.CACHE_RESTORED, 'true')
# ....
  - bash: |
      mkdir .azure-cache
      tar -czf .azure-cache/stack-root.tar.gz $STACK_ROOT
      tar -czf .azure-cache/stack-work.tar.gz .stack-work
    displayName: "Pack cache"

The final build cached is https://dev.azure.com/jneira/haskell-ide-engine/_build/results?buildId=179

@lukeapage

I tried out 7z, zip, tar using the archive task on node_modules.

For zip and tar the performance is worse; for 7zip it is marginally better:

  • w/o extra task: cache restore 1m 30s (~810 MB)
  • w/ tar (no compression): cache restore 35s (~940 MB), untar 1m
  • w/ 7zip: cache restore 10s (~140 MB), unzip 1m
  • w/ zip: cache restore 20s (~280 MB), unzip 1m 6s

@willsmythe
Contributor Author

This feature is merged and will be available in the v2.157.0 agent, which should be rolling out everywhere this week.

The functionality is currently "opt in": you need to set the AZP_CACHING_TAR variable to true to use it.

IMPORTANT: this variable is only checked on "cache save", which only runs if needed (i.e. a cache entry with the same key doesn't already exist) and the build status is successful. On "cache restore", regardless of this variable's value, the cache's contents are untarred whenever the cache entry metadata indicates the contents are cached.
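One way to opt in is from a script step using the standard Azure Pipelines "##vso" logging command to set the variable for subsequent steps (AZP_CACHING_TAR is the variable named in this thread; setting it in the pipeline's YAML variables block works too):

```shell
#!/bin/sh
# Set AZP_CACHING_TAR for later steps in the job via the
# task.setvariable logging command. Note that, per the comment above,
# the agent only consults this variable on cache save; restore always
# untars when the cache entry metadata indicates packed contents.
echo "##vso[task.setvariable variable=AZP_CACHING_TAR]true"
```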

@lukeapage

We've been using 7z compression (as above) since it gives the best timings. However, the 7z compression step that runs on a cache miss is expensive, and we get cache misses all the time: we might get a cache hit and then a miss 20 minutes later, and we end up compressing again and again on different pipelines for the exact same cache key. In the end that makes the builds a lot slower than if we weren't 7z-ing.

Is it worth me trying the built-in tar? Is it likely quicker than my custom setup above, where I have two separate jobs to untar/fetch from Azure?

@johnterickson
Contributor

@fadnavistanmay Close this out when we've deployed to all rings

@johnterickson
Contributor

We're rolling out TARing as the default with agent 2.160. If this is what you want, you can just remove the AZP_CACHING_TAR env var. If you specifically don't want TARing, set AZP_CACHING_CONTENT_FORMAT to Files. We'll be documenting this new environment variable.

@fadnavistanmay
Contributor

This is released as part of agent 2.160.0. Thanks!
