Let multiple containers share downloaded dependencies #851

Open
bsideup opened this Issue Nov 14, 2016 · 86 comments

bsideup commented Nov 14, 2016

Hi!

Looks like Gradle is locking the global cache when running the tests. We run Gradle in Docker containers, and from what I saw in the logs, it fails to acquire the lock with:

14:09:05.981 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] The file lock is held by a different Gradle process (pid: 1, operation: ). Will attempt to ping owner at port 39422

Expected Behavior

Gradle should release the lock when it executes the tests. Other Gradle instances are failing with:

Timeout waiting to lock Plugin Resolution Cache (/root/.gradle/caches/3.2/plugin-resolution). It is currently in use by another Gradle instance.

Current Behavior

Gradle holds the lock for the entire test run.

Context

Our CI servers are affected. Parallel builds are impossible.

Steps to Reproduce

I managed to reproduce it with a simple Docker-based environment:
https://github.com/bsideup/gradle-lock-bug

Docker and Docker Compose should be installed.

$ git clone git@github.com:bsideup/gradle-lock-bug.git
$ COMPOSE_HTTP_TIMEOUT=7200 docker-compose up --no-recreate
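The reproduction boils down to two containers running the same Gradle build against one shared Gradle user home. A minimal sketch of that kind of compose file (the service names, image, and paths here are illustrative, not the actual repository contents):

```yaml
version: '2'
services:
  build-a:
    image: openjdk:8-jdk
    volumes:
      - gradle-cache:/root/.gradle   # shared Gradle user home
      - .:/project
    working_dir: /project
    command: ./gradlew test
  build-b:
    image: openjdk:8-jdk
    volumes:
      - gradle-cache:/root/.gradle   # same volume, so the same lock files
      - .:/project
    working_dir: /project
    command: ./gradlew test
volumes:
  gradle-cache:
```

Both containers try to lock the same caches under /root/.gradle; whichever starts second times out waiting for the lock, because the lock owner cannot be pinged across the container boundary.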

Your Environment

Gradle 3.2 (tried with 3.1, 3.0 and 2.12 as well)
Docker

oehme (Member) commented Nov 14, 2016

I don't quite understand the use case yet. Are you running several builds at the same time on the same working directory? That will give you many other odd problems besides just the .gradle directory being locked. Just think about what happens when one of those builds runs a clean while another is trying to compile.

If you want to do different builds on the same project at the same time, I'd recommend using separate checkouts for that.

Just a minor piece of terminology: The .gradle directory inside your project is the local cache. The global caches are in the user home by default.

bsideup (Author) commented Nov 14, 2016

@oehme
We don't, I just reused the same project source to demonstrate the issue. We run different projects inside the containers at the same time, with the .gradle folder shared across them (think of a CI environment).

oehme (Member) commented Nov 14, 2016

Got it, thanks. The Gradle user home cannot be shared between different machines. Why do you want to share it? It'll just create contention between your builds, even if this specific issue was solved.

bsideup (Author) commented Nov 14, 2016

@oehme well, it's a bit hard to define "machine" here.
We use Docker containers, on the same host.

Having to create a separate .gradle for each project sounds a bit expensive and breaks the concept of the shared global cache.

oehme (Member) commented Nov 14, 2016

A Docker container is a machine for that matter. Its processes are isolated from the host system.

Having to create a separate .gradle for each project sounds a bit expensive and breaks the concept of the shared global cache.

I don't really understand the use case I guess. What is the reason to run the builds in docker containers, but share the user home? If you don't trust the code, then it absolutely should not have write access to the host's user home. If you trust it, then what do the docker containers buy you?

bsideup (Author) commented Nov 15, 2016

@oehme it's not about the security.

We use Docker containers as a unified way to run different kinds of builds in our CI process; different projects might want to use different Java versions, for instance.

This is a pretty common setup nowadays, I should say. Jenkins is promoting Docker builds a lot, and others are integrating Docker containers as well.

I understand the problem with "the multiple machines issue". However, this issue is more about "Why does the test executor hold a lock for so long?", because AFAIK locks in Gradle are short-lived things.

oehme (Member) commented Nov 15, 2016

Gradle processes will hold locks if they are uncontended (to gain performance). Contention is announced through inter-process communication, which does not work when the processes are isolated in Docker containers.

bsideup (Author) commented Nov 15, 2016

Hm, back in the day it was different: Gradle tried to release the lock as soon as possible, and I really liked that strategy. What happened? :)

oehme (Member) commented Nov 15, 2016

The cross-process caches use file-based locking, so every lock/unlock operation is an I/O operation. Since these are expensive, we try to avoid them as much as possible.

bsideup (Author) commented Nov 15, 2016

Any chance of making this configurable? I would really like to disable this optimization in our CI environments. Otherwise, we just delete the lock file manually to work around the issue when long-running tests are being executed :D

oehme (Member) commented Nov 15, 2016

We could potentially add a system property that tells Gradle to "assume contention". There might be other issues that we haven't yet discovered though, since sharing a user home between machines is not a use case we have designed for.

I'd like to assess the alternatives first: What would be the drawback if you don't share the user home?

bsideup (Author) commented Nov 15, 2016

Huge disk space and network usage. We would have to download the same dependencies for every Gradle job type. Right now the Gradle cache takes a few GBs, but if we don't share it, we have to multiply that by the number of Gradle-based jobs we have, so the result will be tens or maybe even hundreds of GBs, which is not really acceptable for us.

oehme (Member) commented Nov 23, 2016

I think the best next step would be for you to implement a fix for that specific problem and try it out in your environment.

My gut feeling is that there may be other issues waiting when you try to reuse the user home. If there aren't, then we could discuss introducing a flag into Gradle to opt-in to a "docker mode" :)

bsideup (Author) commented Nov 24, 2016

@oehme ok, thanks for the link! I'll try to play around with it and will report back.

Also, there is one more option: on *nix-based systems, Gradle could use sockets to communicate. That way it should work, and Docker will allow us to mount the socket inside a container.

WDYT?

oehme (Member) commented Nov 24, 2016

That could work as well. Let's first make sure, though, that the locking problem is in fact the only problem here.

saiimons commented Feb 8, 2017

@bsideup Did you fix this? I am currently facing this issue with the same kind of setup as yours...
At least it would be nice to have an option to set the timeout.

oehme changed the title from "Test executor locks the global cache for the entire test run" to "Let multiple containers share Gradle caches" Mar 3, 2017

martinda commented Mar 21, 2017

Another use case is when a user runs multiple different builds of different projects on multiple different hosts, all using his/her account. This is typical of environments with network mounted home directories.

Gradle has to proactively release the lock as soon as it is done with the cache. I am willing to pay the price of an I/O operation to save the build from a timeout. Please see the excellent explanation in this GRADLE-3106 comment.

martinda commented Apr 11, 2017

To show how to reproduce this problem, here is a simple build.gradle file:

task sleep() {
    doLast {
        Thread.sleep(100000)
    }
}

Get two terminals on different hosts that mount the same home directory with the same ~/.gradle in it, then type gradle sleep --debug --stacktrace in both terminals. One of them will fail to acquire the lock and die waiting. The failing one will show:

The file lock is held by a different Gradle process (pid: 64549, operation: ). Will attempt to ping owner at port 40291

Of course the other process cannot be notified, it is on another host, resulting in:

Caused by: org.gradle.cache.internal.LockTimeoutException: Timeout waiting to lock file hash cache (/home/martinda/.gradle/caches/3.5/fileHashes). It is currently in use by another Gradle instance.
    Owner PID: 64549
    Our PID: 25504
    Owner Operation: 
    Our operation: 
    Lock file: /home/martinda/.gradle/caches/3.5/fileHashes/fileHashes.lock

Could it be as simple as adding the IP address of the process holding the lock to the lock file and using it in the pingOwner method?

AdrianAbraham commented May 11, 2017

My team is also encountering this issue when dealing with containerized CI builds, forcing us to keep many copies of the Gradle cache. We'd love to see an option to aggressively release the lock.

bsideup (Author) commented May 11, 2017

FYI:
For us, the workaround was to run the container where Gradle is with --net=host, this way Gradle will be able to communicate with other instances.
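For anyone trying this, a sketch of such an invocation (the image name and mount paths are illustrative):

```shell
# Share the host's network namespace (--net=host) so Gradle's lock-contention
# ping (a plain TCP message to the lock owner's port, as seen in the debug log
# above) can reach Gradle processes running outside this container.
docker run --rm \
  --net=host \
  -v "$HOME/.gradle:/root/.gradle" \
  -v "$PWD:/project" -w /project \
  openjdk:8-jdk ./gradlew test
```

Note that --net=host gives up network isolation for the container, which may or may not be acceptable on a shared CI host.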

saiimons commented May 11, 2017

My workaround is setting up a Maven repository acting as a proxy (and cache) and loading an init.gradle in the build container.
This allows us to keep one cache instead of multiple ones, and there is no conflict.

AdrianAbraham commented May 11, 2017

@bsideup That sounds good, though I don't know if I'll be able to convince my CI runner to do that ... I'll try it out.

@saiimons Could you go into more detail about your workaround?

saiimons commented May 11, 2017

@AdrianAbraham I run a Nexus repository with a proxy configuration for the major maven repositories (jcenter, maven central, etc. check your log for the URLs).

[screenshot: Nexus proxy repositories for the major Maven repositories]

All these guys go behind a group, in order to use a single URL:

[screenshot: Nexus repository group combining the proxies behind a single URL]

Then my build container loads an init.gradle file (in /opt/gradle/init.d/, as I am using this image for CI):

allprojects {
  buildscript {
    repositories {
      mavenLocal()
      maven {
        url "http://nexus:8081/repository/global_proxy/"
      }
    }
  }
  repositories {
    mavenLocal()
    maven {
      url "http://nexus:8081/repository/global_proxy/"
    }
  }
}

AdrianAbraham commented May 11, 2017

@saiimons We're running a local Nexus, and our configuration is similar (no mavenCentral(), though); but Gradle still has to download the packages from Nexus into its own cache to run a build. Does your setup just avoid Internet downloads? Or does it avoid the Gradle cache itself?

lptr (Member) commented Feb 8, 2018

@bsideup suppose we separate caches and allow them to be shared between containers. How would you solve the different OS problem?

bsideup (Author) commented Feb 8, 2018

@lptr different OS?

melix (Member) commented Feb 8, 2018

I think we all agree that having a shared cache would be great, for different reasons. However, the fact that our cache is by design limited to a single machine, and not meant to be shared over the network or between Docker containers, is not an arbitrary decision made to make everybody swear. Ideally, if we can find a solution that:

  1. allows concurrent read/write access in the cache
  2. from a single machine (multiple builds executed concurrently on a single machine)
  3. is fine-grained (doesn't lock the cache for the lifetime of a build, like it used to, nor for dependency resolution, like it also used to)
  4. supports different OSes (including, yes, Windows)
  5. supports multiple concurrent hosts (aka the Docker use case here, or different CI agents)
  6. doesn't lock the cache for a single host, blocking all others
  7. is safe against TCP/IP connection failure (in other words, you're not allowed to rely on TCP/IP to detect concurrent access, because Docker isolates everybody)
  8. doesn't kill the performance of the local builds

Then of course, we would accept such a PR. Today, what we have supports 1 to 4.

bsideup (Author) commented Feb 8, 2018

@lptr I wonder why caching of things downloaded from the internet has to be OS-dependent? I'm not talking about Gradle's build cache or anything like that.

oehme (Member) commented Feb 8, 2018

We could reduce the scope of the problem to just the file store (i.e. downloaded POMs and JARs), leaving all other caches private. Since the file store is written infrequently, there shouldn't be a performance problem when sharing it with a more pessimistic locking strategy. Even with a localized dependency mirror that would still be worthwhile, as the agents would save disk space.

The other caches (e.g. compiled scripts, build cache, transformed artifacts etc.) would be much harder to handle, as @melix explained. So we'd need to separate those directories using some new option.

bsideup (Author) commented Feb 8, 2018

@oehme sounds good to me. The issue was about the files downloaded from the internet (POMs, JARs, etc), everything else is not critical for us

oehme changed the title from "Let multiple containers share Gradle caches" to "Let multiple containers share downloaded dependencies" Feb 8, 2018

gesellix commented Feb 8, 2018

+1 from me for scoping this to "dependencies" (JARs, etc.), because we would like to reduce the network traffic of bigger files. Regarding the build cache: I assumed that would already be addressed with the global build cache (and disabling the local build cache)?

ldaley (Member) commented Feb 11, 2018

@bsideup just out of curiosity, why are you having to download dependencies over the Internet? Is it that …

  • You do not have an internal proxy (e.g. Artifactory)?
  • You are using a near proxy but it is still too slow?
  • You are building on a platform (e.g. Travis) that doesn't support that kind of near proxy?
  • Something else?

@gesellix @gayakwad @mkobit @zageyiff @saiimons - I'd appreciate your answers too, if you don't mind. If you prefer, you can email me directly via luke - at - gradle.com. Thanks in advance.

bsideup (Author) commented Feb 12, 2018

@ldaley,

You do not have an internal proxy (e.g. Artifactory)?
You are using a near proxy but it is still too slow?

No, that would only increase the complexity & cost of our build infrastructure.

You are building on a platform (e.g. Travis) that doesn't support that kind of near proxy?

Sometimes, but it's not affected by the issue I described (although you have to delete some files (some .locks) from the cache, otherwise Gradle will fail...).

Something else?

A standard setup with Jenkins where every job is executed inside a Docker container (to avoid having to install the tools on a host), multiple containers per host with mounted ~/.gradle.

vyphan commented Feb 23, 2018

@oehme Can you guess what milestone this feature might be targeted for? Just being able to share those POMs and JARs would be a huge speed improvement for teams that build in Docker containers.

oehme (Member) commented Feb 24, 2018

There is no plan at this point. Our recommendation for now remains to use Artifactory/Nexus to provide fast artifact downloads, both for your CI agents and your team members.

redeamer commented Mar 2, 2018

Regarding this issue in a CI context: for our setup I think I introduced a good workaround, but it depends on your number of executors per build node and the size of a single cache. I map a container volume to keep the Gradle caches and set GRADLE_USER_HOME to <cache_volume_path>/${env.EXECUTOR_NUMBER} (on Jenkins CI; I don't know about other CIs). That way I avoid any parallelism issues, still have the cache around for reuse, and the cache duplication is justifiable/feasible (for us at least).
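Concretely, a pipeline shell step for this can look like the sketch below (the cache volume path is illustrative; EXECUTOR_NUMBER is the variable Jenkins provides for each executor):

```shell
#!/bin/sh
# Give each executor on the node its own Gradle user home under a persistent
# cache volume, so concurrent builds on one node never contend for a lock.
CACHE_VOLUME="${CACHE_VOLUME:-/tmp/gradle-cache}"   # mounted cache volume (illustrative path)
EXECUTOR_NUMBER="${EXECUTOR_NUMBER:-0}"             # set by Jenkins for each executor
GRADLE_USER_HOME="${CACHE_VOLUME}/${EXECUTOR_NUMBER}"
export GRADLE_USER_HOME
mkdir -p "$GRADLE_USER_HOME"
echo "GRADLE_USER_HOME=$GRADLE_USER_HOME"
# ...then run the build as usual, e.g.: ./gradlew build
```

The cost is one full copy of the dependency cache per executor, which trades disk space for lock-free parallelism.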

vyphan commented Mar 2, 2018

@redeamer Have you run into any issues from persisting GRADLE_USER_HOME from build to build? Wondering what else other than the downloaded dependencies is carried over.

AdrianAbraham commented Mar 2, 2018

We use almost exactly the same workaround as @redeamer and haven't had any issues

michellehalliwell commented Mar 5, 2018

Unfortunately a workaround like the one @redeamer mentioned above won't work with Dockerized slaves. The build will always be running on executor 1 since each Docker container is treated as a new node.

Saggi432 commented Apr 16, 2018

@bsideup we are using Kubernetes to launch the Jenkins agents in containers via the Kubernetes plugin. We face a similar issue when we run multiple containers sharing the same NAS. This approach does help us speed up our build times.

As per the discussion above, how can we inject --net=host for the containers in the Kubernetes environment? The kubernetes-plugin does not give us an option to pass arguments when running Docker.

  • ./gradlew clean test --refresh-dependencies
    Starting a Gradle Daemon, 1 busy and 1 incompatible and 3 stopped Daemons could not be reused, use --status for details

FAILURE: Build failed with an exception.

  • What went wrong:
    Could not create service of type FileHasher using GradleUserHomeScopeServices.createCachingFileHasher().

Timeout waiting to lock file hash cache (/nas/jenkins/.gradle/caches/4.0/fileHashes). It is currently in use by another Gradle instance.
Owner PID: 141
Our PID: 141
Owner Operation:
Our operation:
Lock file: /nas/jenkins/.gradle/caches/4.0/fileHashes/fileHashes.lock

vertrost commented Apr 23, 2018

Our cache takes about ~1.3GB, so the decision was to rsync .gradle for every Docker container and then rsync the updates back to the volume. But it would still be nice to have a solution to run Docker straight from the volume; it would be much more convenient and take less time for "syncing".

daggerok commented Jun 2, 2018

A very real problem. On each Docker pipeline build, at the moment I'm:

  • disabling the daemon with org.gradle.daemon=false

  • doing a chown for the non-root user, because all folders are mounted as the root user

  • mounting not the whole ~/.gradle folder, but only the dependency caches:

docker run --rm --name run-my-e2e-tests \
  -v ~/.gradle/caches/modules-2/files-2.1:/home/e2e/.gradle/caches/modules-2/files-2.1 \
  -v ~/.m2/repository:/home/e2e/.m2/repository \
  my-e2e-tests

richfromm commented Aug 16, 2018

Here is our situation:

We use docker for our CI builds. Not for security; we use docker for repeatability, isolation, and ease of management. Each build runs in its own docker container. We have up to 3 build agents running on the same host, each as a separate docker container.

We recently enabled the Gradle build cache, pointing to a remote HttpBuildCache. I hadn't realized that the default behavior, if you enable caching, is that you also get a local cache, at $GRADLE_USER_HOME/caches/build-cache-1. Note that we have $GRADLE_USER_HOME mapped outside of the Docker container, to a directory on the actual host.

It's possible (I think; I haven't tried it yet) to just disable the local cache. But now that I think about it further, I think it's a good idea, and I'd like to keep it.

But having 3 different local build caches on the same host, when they in the long term contain largely the same contents, seems wasteful. And has now led (on two separate occasions) to filling up the local disk.

I will probably work around this by limiting the size of the local cache. Apparently you can no longer limit it by size, but I can change DirectoryBuildCache.removeUnusedEntriesAfterDays to something less than the default of 7. But it would be preferable to just have all of the agents share a cache.
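For anyone wanting to shrink that retention window, it is configured on the local build cache in settings.gradle; a sketch (3 is an arbitrary example value, the default being 7):

```groovy
// settings.gradle -- shorten how long unused local build-cache entries are kept
buildCache {
    local {
        removeUnusedEntriesAfterDays = 3
    }
}
```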

But from this issue it sounds like that's not possible.

mmeijeri commented Aug 30, 2018

@oehme Is the workaround described by bsideup (--net=host Docker option) safe? If not, would it be safe to mount only .gradle/caches/modules-2 as a Docker volume in the container? In that case each container would still have its own .gradle directory, but the modules-2 subdirectory would be mapped from the host.

Another potential workaround: would it be possible to add a command-line option to fall back to using the local Maven cache? I know Gradle already checks the local Maven cache first, but with such an option it could be instructed to write to it as well.

zhleonix commented Aug 31, 2018

.gradle/caches/modules-2 cannot be shared either, as Gradle puts a lock there while using it. What we have done is set up a pool of shared caches that are allocated to containers to accelerate the build.

nniesen commented Nov 1, 2018

We tried a mount to ~/.gradle/caches/modules-2/files-2.1 for just the artifacts (no build or metadata caching, which we didn't want anyway). That seemed to work without causing locking issues. Stable artifacts were not re-downloaded, and snapshots with a 1ms changing-modules timeout were downloaded every time, as expected.

YMMV, but in our case, even though everything was working as expected, it didn't improve our pipeline build times. We are now thinking we have some other issue with our agents' file-system latency. Unlike others, we're not too concerned about disk space, since the Jenkins agent containers are destroyed after every build.

simpss commented Nov 22, 2018

YMMV, but in our case, even though everything was working as expected, it didn't improve our pipeline build times. We are now thinking we have some other issue with our agents' file-system latency. Unlike others, we're not too concerned about disk space, since the Jenkins agent containers are destroyed after every build.

Using a different CI system (GitLab CI), I had similar results. The real cause behind slow builds was I/O. We migrated our GitLab runners (Jenkins agents, in Jenkins terms) to SSDs and made sure every runner VM was on a different host so as not to have I/O spikes at the same time.

roadSurfer commented Nov 29, 2018

We also want to make use of "throwaway" Jenkins build agents that are simply Docker containers, but still share the cached downloads between them to save bandwidth and time.
We will often run multiple builds in parallel, and having to download and store the exact same artefact for each build is wasteful.
Otherwise we would have to use multiple volumes and then run de-duplication/syncing on a schedule, which strikes me as brittle and prone to failure (and potentially blocks builds while it runs).
Even with a Nexus proxy, pulling artefacts can take up to 13 minutes; that simply isn't going to be viable.

Edit: The most concerning scenario is running multiple builds of the same project in parallel. Imagine two devs working on the same project:

  • Dev 1 commits & pushes - this triggers a build & test, which takes time.
  • Dev 2 commits & pushes a few seconds later - this also triggers a build & test.

In order to inform each dev of any failure as fast as possible, forcing Dev 2 to wait for Dev 1's build to complete is less than optimal. For this reason we want to run the Gradle builds in parallel but share all/part of the cache.

roadSurfer commented Nov 29, 2018

We tried a mount to ~/.gradle/caches/modules-2/files-2.1 for just the artifacts (no build or metadata caching which we didn't want anyway). That seemed to work without causing locking issues. Stable artifacts did not re-download and snapshots with a 1ms changing timeout downloaded every time as expected.

@nniesen, how did you achieve that? When I try, I see the ownership of "caches/modules-2" etc. flip from "gradle:gradle" to "root:root", and the builds then fail.

Edit: After a bit of digging: as soon as I add a mount to "~/.gradle/caches/modules-2/files-2.1", the folder path "caches/modules-2/files-2.1" becomes owned by "root", meaning that the user "gradle" has no access.

nniesen commented Nov 29, 2018

@roadSurfer: Unfortunately, I no longer have access to the environment, but I believe everything was running as a build agent user on a Jenkins Kubernetes/Azure build agent. DevOps set up the Jenkins build agent configuration so that the Jenkins agent's Docker image had access to an external volume.

Perhaps your "throwaway" Jenkins build agent Docker image was not properly set up to execute builds as the 'gradle' user. If the Dockerfile doesn't create a user and switch to that user, the image will execute as the root user.

In the Jenkins pipeline, before the Agent ran a Gradle command that downloaded artifacts, I did something like the following:

# In the agent's Gradle user home, create the modules-2 directory.
mkdir -p ~/.gradle/caches/modules-2

# In the agent, create the Gradle-user-home files-2.1 link to the external directory.
# Note: /external-files-2.1 is the volume mount in the agent that points to the external location.
ln -s /external-files-2.1 ~/.gradle/caches/modules-2/files-2.1

Now when the pipeline runs ./gradlew build, Gradle builds the metadata (in the agent's Gradle home directory, ~/.gradle/) for all the existing artifacts in /external-files-2.1 and also downloads any missing artifacts.

Gradle is usually very fast at rebuilding the metadata in the user's and project's .gradle directories, so I assume we were having latency issues accessing and computing file hashes for the artifacts on the external volume.

Note: You can kind of play around with it locally by setting the GRADLE_USER_HOME environment variable to a temporary location. Then you can arbitrarily delete files from that directory or the projects .gradle directory to see how Gradle gracefully rebuilds any missing information.

roadSurfer commented Nov 30, 2018

Thanks @nniesen, I was wondering if the symlink trick would work. I placed the whole lot into a script to make my life easier.
Edit: The previous version of the script below caused issues when it used links and multiple builds were running (random inability to read "module-artifact.bin", etc.), so I had to change to using rsync and copying everything in and out. This takes ~20 seconds per build, which is not really acceptable long term.

#!/bin/bash
set -e

GRADLE_CACHE_NAME=caches
GRADLE_HASHES_NAME=fileHashes
GRADLE_MODULES_NAME=modules-2
GRADLE_NATIVE_NAME=native
GRADLE_WRAPPER_NAME=wrapper

GRADLE_SOURCE=/gradle
GRADLE_CACHE_SOURCE=${GRADLE_SOURCE}/${GRADLE_CACHE_NAME}
GRADLE_VERSION_SOURCE=${GRADLE_CACHE_SOURCE}/${GRADLE_VERSION}

GRADLE_TARGET_USER=/home/gradle/.gradle
GRADLE_CACHE_USER=${GRADLE_TARGET_USER}/${GRADLE_CACHE_NAME}
GRADLE_HASHES_USER=${GRADLE_CACHE_USER}/${GRADLE_VERSION}/${GRADLE_HASHES_NAME}
GRADLE_MODULES_USER=${GRADLE_CACHE_USER}/${GRADLE_MODULES_NAME}
GRADLE_NATIVE_USER=${GRADLE_TARGET_USER}/${GRADLE_NATIVE_NAME}
GRADLE_WRAPPER_USER=${GRADLE_TARGET_USER}/${GRADLE_WRAPPER_NAME}

if [[ "${USE_CI}" == "Yes" ]]; then
	if [[ "$1" == "-u" ]]; then
		if [[ -d ${GRADLE_CACHE_USER} ]]; then
			echo "CI cache already enabled."
		else
			echo "Copying cache into Container."
			rsync -a --include /caches --include /wrapper --include /native --exclude '/*' --exclude '*.lock' ${GRADLE_SOURCE}/ ${GRADLE_TARGET_USER}
		fi
	elif [[ "$1" == "-d" ]]; then
		if [[ -d ${GRADLE_CACHE_USER} ]]; then
			if [[ ! -d ${GRADLE_VERSION_SOURCE} ]]; then
				echo "Minimal source structure did not exist - creating."
				mkdir -p ${GRADLE_VERSION_SOURCE}
			fi
			echo "Copying ${GRADLE_HASHES_USER} to ${GRADLE_VERSION_SOURCE}"
			rsync -au --exclude '*.lock' ${GRADLE_HASHES_USER} ${GRADLE_VERSION_SOURCE}
			echo "Copying ${GRADLE_MODULES_USER} to ${GRADLE_CACHE_SOURCE}"
			rsync -au --exclude '*.lock' ${GRADLE_MODULES_USER} ${GRADLE_CACHE_SOURCE}
			echo "Copying ${GRADLE_NATIVE_USER} to ${GRADLE_SOURCE}"
			rsync -au --exclude '*.lock' ${GRADLE_NATIVE_USER} ${GRADLE_SOURCE}
			if [[ -d ${GRADLE_WRAPPER_USER} ]]; then
				echo "Copying ${GRADLE_WRAPPER_USER} to ${GRADLE_SOURCE}"
				rsync -au --exclude '*.lock' ${GRADLE_WRAPPER_USER} ${GRADLE_SOURCE}
			fi
		else
			echo "CI cache was not enabled."
		fi
	else
		echo "Unknown option: '$1'"
		exit 1
	fi
else
	echo "CI cache not enabled. 'USE_CI' was: '${USE_CI}'"
fi

"USE_CI" should be set to "Yes" either in the image itself or via Container config.
"GRADLE_VERSION" should be set either in the image itself. via Container config or by getting it from Gradle.
The location "/gradle" should be defined as a volume in the Image.
This should then be mapped to a named volume pointing at a full Gradle cache.
Run as "script-name.sh -u" at the start of a Jenkins pipline (for example) and then "script-name.sh -d" at the end.

The only outstanding issue I can see with this approach is having to remember to run the script at the start and end of every pipeline. Still, it's an improvement.

I really hope the team find a proper fix for this CI issue soon.
