
HDDS-2244. Use new ReadWrite lock in OzoneManager. #1589

Merged (2 commits into apache:trunk on Oct 8, 2019)

Conversation

@bharatviswa504 (Contributor) commented Oct 4, 2019:

Use new ReadWriteLock added in HDDS-2223.

Existing tests should cover this.

Ran a few Integration tests.

I have not removed the old methods in OzoneManagerLock, as they are still used by the old write requests in VolumeManagerImpl/BucketManagerImpl/KeyManagerImpl. I have marked those methods as deprecated.
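
For context, here is a minimal sketch of the intended calling pattern (class and method names assumed from HDDS-2223 and this discussion, not copied verbatim from the patch): read paths can now take a shared lock while write paths keep an exclusive lock.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ozone.om.lock.OzoneManagerLock;

class OmLockUsageSketch {
  void example(Configuration conf, String volumeName) {
    OzoneManagerLock lock = new OzoneManagerLock(conf);

    // Read path: shared lock, so concurrent reads no longer serialize.
    lock.acquireReadLock(OzoneManagerLock.Resource.VOLUME_LOCK, volumeName);
    try {
      // look up volume metadata
    } finally {
      lock.releaseReadLock(OzoneManagerLock.Resource.VOLUME_LOCK, volumeName);
    }

    // Write path: exclusive lock, same semantics as before.
    lock.acquireWriteLock(OzoneManagerLock.Resource.VOLUME_LOCK, volumeName);
    try {
      // mutate volume metadata
    } finally {
      lock.releaseWriteLock(OzoneManagerLock.Resource.VOLUME_LOCK, volumeName);
    }
  }
}
```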

@hadoop-yetus commented:

💔 -1 overall

| Vote | Subsystem | Runtime | Comment |
|:----:|:----------|--------:|:--------|
| 0 | reexec | 39 | Docker mode activated. |
| | | | _ Prechecks _ |
| +1 | dupname | 1 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| | | | _ trunk Compile Tests _ |
| 0 | mvndep | 14 | Maven dependency ordering for branch |
| -1 | mvninstall | 31 | hadoop-hdds in trunk failed. |
| -1 | mvninstall | 33 | hadoop-ozone in trunk failed. |
| -1 | compile | 21 | hadoop-hdds in trunk failed. |
| -1 | compile | 16 | hadoop-ozone in trunk failed. |
| +1 | checkstyle | 49 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 854 | branch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 29 | hadoop-hdds in trunk failed. |
| -1 | javadoc | 25 | hadoop-ozone in trunk failed. |
| 0 | spotbugs | 967 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 | findbugs | 34 | hadoop-hdds in trunk failed. |
| -1 | findbugs | 20 | hadoop-ozone in trunk failed. |
| | | | _ Patch Compile Tests _ |
| 0 | mvndep | 24 | Maven dependency ordering for patch |
| -1 | mvninstall | 34 | hadoop-hdds in the patch failed. |
| -1 | mvninstall | 38 | hadoop-ozone in the patch failed. |
| -1 | compile | 25 | hadoop-hdds in the patch failed. |
| -1 | compile | 19 | hadoop-ozone in the patch failed. |
| -1 | javac | 25 | hadoop-hdds in the patch failed. |
| -1 | javac | 19 | hadoop-ozone in the patch failed. |
| +1 | checkstyle | 64 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 723 | patch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 23 | hadoop-hdds in the patch failed. |
| -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
| -1 | findbugs | 32 | hadoop-hdds in the patch failed. |
| -1 | findbugs | 21 | hadoop-ozone in the patch failed. |
| | | | _ Other Tests _ |
| -1 | unit | 28 | hadoop-hdds in the patch failed. |
| -1 | unit | 25 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
| | | 2419 | |
| Subsystem | Report/Notes |
|:----------|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/Dockerfile |
| GITHUB PR | #1589 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 758eb245ea84 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / f209722 |
| Default Java | 1.8.0_222 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/branch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/branch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/branch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/branch-compile-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/branch-javadoc-hadoop-hdds.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/branch-javadoc-hadoop-ozone.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/branch-findbugs-hadoop-hdds.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/branch-findbugs-hadoop-ozone.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/patch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/patch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/patch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/patch-compile-hadoop-ozone.txt |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/patch-compile-hadoop-hdds.txt |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/patch-compile-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/patch-javadoc-hadoop-hdds.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/patch-javadoc-hadoop-ozone.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/patch-findbugs-hadoop-hdds.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/patch-findbugs-hadoop-ozone.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/patch-unit-hadoop-hdds.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/artifact/out/patch-unit-hadoop-ozone.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/testReport/ |
| Max. process+thread count | 447 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager U: hadoop-ozone |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1589/4/console |
| versions | git=2.7.4 maven=3.3.9 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.

@anuengineer (Contributor) left a comment:

I have an uber question on this patch. How do we ensure that writes will not be starved on a resource, since reads allow multiple of them to get through at the same time? Do we have a mechanism in place to avoid write starvation? If not, is it not better to keep simple locks?

```diff
-    manager.lock(resourceName);
-    LOG.debug("Acquired {} lock on resource {}", resource.name,
+    lockFn.accept(resourceName);
+    LOG.debug("Acquired {} {} lock on resource {}", lockType, resource.name,
         resourceName);
```
@anuengineer (Contributor) commented on this diff:

I am trying to read this debug statement. Do you need the resource name twice, once via `resource.name` and again via `resourceName`?

@bharatviswa504 (Contributor, Author) replied:

Here the first placeholder, `resource.name`, prints VOLUME_LOCK/BUCKET_LOCK, and the next, `resourceName`, prints the actual resource name. (I think it is a little confusing here because of how the Resource class's name is defined.)
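
To make that concrete, here is a hypothetical rendering of the statement (the values are invented for illustration, not taken from the patch):

```java
// Format string from the patch:
//   LOG.debug("Acquired {} {} lock on resource {}", lockType, resource.name, resourceName);
// With hypothetical values lockType = READ, resource = VOLUME_LOCK,
// resourceName = "vol1", the emitted line would read:
//   Acquired READ VOLUME_LOCK lock on resource vol1
```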

@anuengineer (Contributor) replied:

Yes, it is very confusing. But thanks for the explanation; it makes sense now.

@bharatviswa504 (Contributor, Author) commented:

> I have an uber question on this patch. How do we ensure that writes will not be starved on a resource, since reads allow multiple of them to get through at the same time? Do we have a mechanism in place to avoid write starvation? If not, is it not better to keep simple locks?

Right now ActiveLock creates the ReadWriteLock in non-fair mode. Do you mean we want to create the RWLock with an option for fair mode? If my understanding is wrong, could you let me know what additional things need to be implemented?

Also, this work is mainly to improve read-heavy workloads: with the current approach of an exclusive lock, all reads are serialized.

@anuengineer (Contributor) commented Oct 7, 2019:

> Right now ActiveLock creates the ReadWriteLock in non-fair mode. Do you mean we want to create the RWLock with an option for fair mode? If my understanding is wrong, could you let me know what additional things need to be implemented?

When you use a reader-writer lock, there is a question of fairness, whereas exclusive locks are first come, first served.

> Also, this work is mainly to improve read-heavy workloads: with the current approach of an exclusive lock, all reads are serialized.

I am afraid this gives so much importance to reads that your writes could end up stalled completely.

@arp7 (Contributor) commented Oct 8, 2019:

I looked through the JDK implementation of read-write locks a couple of years ago. Even in non-fair mode there is protection against starvation. HDFS uses non-fair mode by default and works well even for very busy NameNodes.

However, we can make the lock fair for now and evaluate making it non-fair later.
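
For reference, a minimal standalone sketch of the fairness toggle on the JDK lock this discussion refers to (illustration only; how ActiveLock actually wires this up may differ):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class FairnessDemo {
  public static void main(String[] args) {
    // Non-fair mode (the JDK default): higher throughput, no ordering
    // guarantee, though the JDK still guards queued writers against
    // indefinite barging by new readers, as noted above.
    ReentrantReadWriteLock nonFair = new ReentrantReadWriteLock();

    // Fair mode: threads acquire the lock in roughly arrival order, which
    // bounds how long a writer waits behind a stream of readers.
    ReentrantReadWriteLock fair = new ReentrantReadWriteLock(true);

    System.out.println(nonFair.isFair()); // false
    System.out.println(fair.isFair());    // true
  }
}
```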

@arp7 (Contributor) left a comment:

+1 LGTM.

Thanks for taking care of this critical improvement @bharatviswa504. Will hold off committing in case @anuengineer has additional comments.

@anuengineer (Contributor) commented:

+1, I am fine with this getting committed. Thanks for taking care of this issue.

@bharatviswa504 bharatviswa504 merged commit 87d9f36 into apache:trunk Oct 8, 2019
@bharatviswa504 (Contributor, Author) commented:

Thank you @anuengineer and @arp7 for the review.
I have committed this to trunk.

For fair/non-fair mode, I will make it configurable. I will open a Jira for this.
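
A hypothetical sketch of what that follow-up could look like; the config key name and wiring below are assumptions for illustration, not from this PR or its Jira:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;
import org.apache.hadoop.conf.Configuration;

class ConfigurableFairnessSketch {
  // Hypothetical config key; the real follow-up may choose a different name.
  static ReentrantReadWriteLock newLock(Configuration conf) {
    boolean fair = conf.getBoolean("ozone.om.lock.fair", false);
    return new ReentrantReadWriteLock(fair);
  }
}
```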

@nandakumar131 (Contributor) commented:

Thanks @bharatviswa504 for taking care of this. The change looks good to me.

amahussein pushed a commit to amahussein/hadoop that referenced this pull request Oct 29, 2019
RogPodge pushed a commit to RogPodge/hadoop that referenced this pull request Mar 25, 2020