
[JENKINS-12763] Reduce lock contention while updating caches #21

Merged

4 commits merged into jenkinsci:master on Jun 15, 2012

Conversation

nimeacuerdo (Contributor)

Use different locks to control the updates of the master and slave node caches.

This reduces lock contention and increases cache update performance, particularly in scenarios where multiple slaves are used for jobs that build branches from several different repositories.

Regards,
David

Use different locks for updating the master and slave node caches.
Reduces lock contention and increases cache update performance when
using multiple slave nodes.
@nimeacuerdo (Contributor, Author)

@@ -94,6 +94,10 @@ public ProcStarter pull() {
         return run("pull");
     }
+
+    public ProcStarter pull(String branch) {
+        return run("pull", "-b", branch);
+    }
Member

For safety the plugin always spells out the long form of every option (--branch).
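Applied to the hunk above, the suggested long form would look something like this (a sketch of the requested change, not the final committed code, especially since pulling specific branches was later dropped from the pull request):

    public ProcStarter pull(String branch) {
        return run("pull", "--branch", branch);
    }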

@nimeacuerdo (Contributor, Author)

ok, no problem

@buildhive

@buildhive

@buildhive

@nimeacuerdo (Contributor, Author)

The idea behind the changes in the repositoryCache method is that, instead of using a single lock for the whole operation (updating both the master and the slave caches), it is possible to use different locks to reduce contention: one lock for updates of the master caches, plus an additional set of per-node locks to control the updates of the slave caches.

This way, once a build acquires the master lock, it releases it (making it available to any subsequent build waiting for it) as soon as the master cache is updated and the update of the corresponding slave cache begins. It also allows different builds to concurrently trigger cache updates on the different nodes where they are running, without stepping on each other's feet.

With the existing implementation that is not possible: every triggered build configured to use the cache has to wait for the currently running build's entire master-plus-slave cache update to finish, because the lock spans the whole method.
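As a rough illustration of that scheme, here is a minimal sketch; the names (CacheCoordinator, updateMasterCache, updateSlaveCache, nodeName) are hypothetical and this is not the plugin's actual code:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: one lock guards the master cache update, and each slave node
// gets its own lock, so builds on different slaves can update their
// caches concurrently instead of serializing on a single global lock.
public class CacheCoordinator {
    private final ReentrantLock masterLock = new ReentrantLock();
    private final Map<String, ReentrantLock> slaveLocks = new ConcurrentHashMap<>();

    public void updateCaches(String nodeName) {
        // Phase 1: update the master cache under the master lock only.
        masterLock.lock();
        try {
            updateMasterCache();
        } finally {
            // Released as soon as the master cache is up to date, so the
            // next waiting build can start its own master update.
            masterLock.unlock();
        }

        // Phase 2: update this node's cache under that node's own lock.
        ReentrantLock nodeLock =
                slaveLocks.computeIfAbsent(nodeName, n -> new ReentrantLock());
        nodeLock.lock();
        try {
            updateSlaveCache(nodeName);
        } finally {
            nodeLock.unlock();
        }
    }

    private void updateMasterCache() { /* e.g. hg pull into the master-side cache */ }
    private void updateSlaveCache(String nodeName) { /* refresh the cache on the node */ }
}

The key point is that masterLock is held only for phase 1, while the per-node locks serialize phase 2 only among builds running on the same node.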

I do not think our usage scenario (one master and many slaves, with several different repositories used during builds) is that unusual, and in this case the overall performance of the CI system benefits from the proposed approach.

@jglick (Member)

jglick commented May 2, 2012

Yes, that is all fine; the JIRA issue gave the motivation. I am just waiting for some minor review comments to be addressed so that the diff is minimal and clean. Also note that, according to GitHub, this does not currently merge cleanly against master.

@buildhive

@nimeacuerdo (Contributor, Author)

I have made some changes to get the smallest possible diff, as you requested (also dropping the pulling of specific branches).

I will run some tests before pushing it, though.

Please note that the indentation of some lines is not ideal; this is just
for the sake of getting the smallest diff, as requested during the review.
@buildhive

jglick pushed a commit that referenced this pull request Jun 15, 2012
[FIXED JENKINS-12763] Reduce lock contention while updating caches
@jglick jglick merged commit dc0bd85 into jenkinsci:master Jun 15, 2012
@jglick (Member)

jglick commented Jun 15, 2012

Thanks and sorry for the slow response on my end.
