
HDFS-16283. RBF: reducing the load of renewLease() RPC #4524

Merged
merged 6 commits on Jul 14, 2022

Conversation

@ZanderXu (Contributor) commented Jul 1, 2022

Description of PR

HDFS-16283: RBF: improve renewLease() to call only a specific NameNode rather than make fan-out calls

Currently RBF forwards the renewLease() RPC to all available nameservices, so forwarding efficiency is hurt by any unhealthy downstream nameservice, and the more nameservices RBF monitors, the more serious this problem becomes.

In our production cluster with 70+ nameservices, the renewLease() RPC is frequently blocked.

This patch fixes the problem and has worked well on our cluster. The main ideas are:

  • Carry the nsId back to the client when creating a new file
  • Store this nsId in the DFSOutputStream
  • Have the client's renewLease() RPC carry the nsIds of all open DFSOutputStreams to RBF
  • Have RBF parse the nsIds and forward the renewLease RPC only to the corresponding nameservices
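A minimal, self-contained sketch of the client-side idea. The class and method names here (OutputStreamStub, collectNamespaces) are illustrative assumptions, not from the patch; the real change works with DFSOutputStream and the lease renewer:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Hypothetical stand-in for DFSOutputStream carrying the nameservice id
// returned by the Router when the file was created.
class OutputStreamStub {
    private final String nsId; // may be null when talking to an old Router
    OutputStreamStub(String nsId) { this.nsId = nsId; }
    String getNamespace() { return nsId; }
}

public class RenewLeaseSketch {
    // Collect the distinct nameservice ids of all open streams, so that
    // renewLease() can be forwarded only to those nameservices.
    static List<String> collectNamespaces(List<OutputStreamStub> streams) {
        Set<String> namespaces = new LinkedHashSet<>();
        for (OutputStreamStub s : streams) {
            String ns = s.getNamespace();
            if (ns == null || ns.isEmpty()) {
                // Old Router: no nsId known, signal fall-back to fan-out.
                return null;
            }
            namespaces.add(ns);
        }
        return new ArrayList<>(namespaces);
    }

    public static void main(String[] args) {
        List<OutputStreamStub> streams = List.of(
            new OutputStreamStub("ns0"),
            new OutputStreamStub("ns1"),
            new OutputStreamStub("ns0"));
        System.out.println(collectNamespaces(streams)); // [ns0, ns1]
        streams = List.of(
            new OutputStreamStub("ns0"), new OutputStreamStub(null));
        System.out.println(collectNamespaces(streams)); // null
    }
}
```

Deduplicating with a set keeps the renewLease payload small even when many streams write to the same nameservice.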

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 1m 4s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+0 🆗 buf 0m 0s buf was not available.
+0 🆗 buf 0m 0s buf was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 8 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 14m 58s Maven dependency ordering for branch
+1 💚 mvninstall 32m 6s trunk passed
+1 💚 compile 8m 1s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 compile 7m 8s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 checkstyle 1m 47s trunk passed
+1 💚 mvnsite 4m 11s trunk passed
+1 💚 javadoc 3m 29s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 4m 19s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 spotbugs 9m 5s trunk passed
+1 💚 shadedclient 26m 31s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 27s Maven dependency ordering for patch
+1 💚 mvninstall 3m 23s the patch passed
+1 💚 compile 7m 37s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 cc 7m 37s the patch passed
+1 💚 javac 7m 37s the patch passed
+1 💚 compile 7m 23s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 cc 7m 23s the patch passed
+1 💚 javac 7m 23s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
+1 💚 checkstyle 1m 30s the patch passed
+1 💚 mvnsite 3m 44s the patch passed
-1 ❌ javadoc 0m 50s /results-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 1 new + 98 unchanged - 1 fixed = 99 total (was 99)
+1 💚 javadoc 3m 15s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 spotbugs 7m 52s the patch passed
+1 💚 shadedclient 23m 40s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 2m 29s hadoop-hdfs-client in the patch passed.
-1 ❌ unit 345m 29s /patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt hadoop-hdfs in the patch passed.
+1 💚 unit 32m 38s hadoop-hdfs-rbf in the patch passed.
+1 💚 asflicense 1m 1s The patch does not generate ASF License warnings.
558m 56s
Reason Tests
Failed junit tests hadoop.hdfs.TestDFSClientRetries
hadoop.hdfs.TestLease
Subsystem Report/Notes
Docker ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4524/1/artifact/out/Dockerfile
GITHUB PR #4524
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets cc buflint bufcompat
uname Linux f0e1052376d4 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 802a405
Default Java Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4524/1/testReport/
Max. process+thread count 2386 (vs. ulimit of 5500)
modules C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4524/1/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@ayushtkn (Member) left a comment

Makes sense to me to have the calls go only to the specified namespaces. Had a quick look and dropped some comments.

Comment on lines 1177 to 1180
public void renewLease(String clientName) throws IOException {
public void renewLease(String clientName, String nsIdentifies)
throws IOException {
checkNNStartup();
// just ignore nsIdentifies

Better to check that it is null, to prevent a user from accidentally passing a value to the NameNode and believing it is being honoured.

@@ -579,6 +581,28 @@ void updateLastLeaseRenewal() {
}
}

/**
* Get all nsIdentifies of DFSOutputStreams.

"Identifies" in the method names and arguments doesn't make sense. Can we change it to nsIdentifiers? I am also fine with just namespaces/namespace.

Comment on lines 595 to 596
if (nsIdentify != null && !nsIdentify.isEmpty()) {
allNSIdentifies.add(nsIdentify);

In which case can it be null or empty?

One case I can think of is the Router being at an older version than the client, i.e. the Router doesn't have this change but the client has been upgraded.

That scenario should be handled: if any of the identifiers is null or empty, pass null (or similar) to the Router and make sure the old behaviour of sending the RPC to all namespaces stays intact.

@@ -763,7 +763,7 @@ SnapshotStatus[] getSnapshotListing(String snapshotRoot)
* @throws IOException If an I/O error occurred
*/
@Idempotent
void renewLease(String clientName) throws IOException;
void renewLease(String clientName, String allNSIdentifies) throws IOException;

Add detail about the new argument in the javadoc as well

if (nsIdentifies == null || nsIdentifies.isEmpty()) {
return new ArrayList<>(namenodeResolver.getNamespaces());
}
String[] nsIdList = nsIdentifies.split(",");

First, on the client, we do a String.join, and here we split it back apart. Can't we pass an array/set/list (whichever is possible) and get rid of this join & split overhead during the call?
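For illustration, the join-and-split round trip being questioned looks roughly like this; this is a sketch only, and the actual change would alter the RPC signature so a list is passed directly:

```java
import java.util.Arrays;
import java.util.List;

public class JoinSplitDemo {
    public static void main(String[] args) {
        List<String> namespaces = List.of("ns0", "ns1", "ns2");

        // Client side: flatten the namespaces into one string parameter.
        String joined = String.join(",", namespaces);

        // Router side: split it back apart before resolving nameservices.
        List<String> parsed = Arrays.asList(joined.split(","));

        // A repeated/list RPC field would make both steps unnecessary.
        System.out.println(parsed.equals(namespaces)); // true
    }
}
```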

Comment on lines 787 to 788
namespaceInfo = new FederationNamespaceInfo("", "", nsId);
nsNameSpaceInfoCache.put(nsId, namespaceInfo);

I didn't follow the logic of this new FederationNamespaceInfo creation: you have a cached map which is empty, you do a get which returns null, you reach the if block and create the object explicitly. Why aren't we initialising the cached map from namenodeResolver.getNamespaces(), or, when we don't find an entry in the cached map, why don't we go ahead and try to find it via namenodeResolver.getNamespaces()?

Comment on lines +803 to +804
if (nss.size() == 1) {
rpcClient.invokeSingle(nss.get(0).getNameserviceId(), method);

The nsId is passed from the client; if we receive an array or similar, you can tell up front whether there is only one entry, so you could get rid of getRewLeaseNSs(nsIdentifies) completely in that case?

Comment on lines 1488 to 1489
fsDataOutputStream0.close();
fsDataOutputStream1.close();

Either use finally or try-with-resources for the close.
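The pattern the comment asks for, sketched with plain java.io streams rather than FSDataOutputStream so it runs standalone:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class TryWithResourcesDemo {
    public static void main(String[] args) throws IOException {
        // Both streams are closed automatically, in reverse declaration
        // order, even if the body throws an exception.
        try (OutputStream out0 = new ByteArrayOutputStream();
             OutputStream out1 = new ByteArrayOutputStream()) {
            out0.write('a');
            out1.write('b');
        }
        System.out.println("both streams closed");
    }
}
```

The same shape applies to the test: declare both FSDataOutputStream objects in the try header and drop the explicit close() calls.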

Comment on lines 1481 to 1482
FSDataOutputStream fsDataOutputStream0 = routerFS.create(newTestPath0);
FSDataOutputStream fsDataOutputStream1 = routerFS.create(newTestPath1);

Does this change affect the append flow as well?

dfsRouterFS.getClient().getLeaseRenewer().interruptAndJoin();

Path testPath = new Path("/testRenewLease0/test.txt");
FSDataOutputStream fsDataOutputStream = routerFS.create(testPath);

Test this for both replicated and erasure-coded files.

@ZanderXu (Contributor, Author) commented Jul 2, 2022

Thanks @ayushtkn for your review; I learned a lot from it. Thank you again.
I have modified the patch based on your helpful suggestions; please review it again.

Because the append() RPC is forwarded via invokeSequential, I try to fill in the namespace info in the invokeSequential method as below. Please confirm whether this modification is reasonable:

if (ret instanceof LastBlockWithStatus) {
    ((LastBlockWithStatus) ret).getFileStatus().setNamespace(ns);
}

@ayushtkn (Member) left a comment

Thanks @ZanderXu for the update. I dropped some comments, please give them a check; there may be some checkstyle warnings from Jenkins as well.
The last build also shows some test failures, and I think they look related, so please check them too.
Post that, the rest looks good...

@@ -759,11 +759,19 @@ SnapshotStatus[] getSnapshotListing(String snapshotRoot)
* the last call to renewLease(), the NameNode assumes the
* client has died.
*
* @param namespaces The full Namespace list that the release rpc

seems typo release -> renewLease

A contributor commented:

+1

throws IOException {
if (namespaces != null && namespaces.size() > 0) {
LOG.warn("namespaces({}) should be null or empty "
+ "on NameNode side, please check it.", namespaces);

Throw an exception here. We don't expect namespaces here, and we don't want to silently ignore such an occurrence.
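A sketch of the fail-fast guard being suggested; the helper name checkNamespacesEmpty is an illustrative assumption, not from the patch:

```java
import java.io.IOException;
import java.util.List;

public class RenewLeaseGuard {
    // NameNode-side renewLease() should never receive namespaces; they
    // are only meaningful between the client and the Router.
    static void checkNamespacesEmpty(List<String> namespaces)
            throws IOException {
        if (namespaces != null && !namespaces.isEmpty()) {
            throw new IOException("renewLease does not accept namespaces"
                + " on the NameNode side, got: " + namespaces);
        }
    }

    public static void main(String[] args) throws IOException {
        checkNamespacesEmpty(null);      // accepted
        checkNamespacesEmpty(List.of()); // accepted
        try {
            checkNamespacesEmpty(List.of("ns0"));
        } catch (IOException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```

Throwing instead of logging makes the misconfiguration visible at the caller rather than buried in the NameNode log.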

@@ -1450,6 +1452,95 @@ public void testProxyRestoreFailedStorage() throws Exception {
assertEquals(nnSuccess, routerSuccess);
}

@Test
public void testRewnewLease() throws Exception {

This test has become a little big. Can we split the create & append parts into different tests? We can extract the common stuff into a util method and reuse it.

Comment on lines 1016 to 1018
if (ret instanceof LastBlockWithStatus) {
((LastBlockWithStatus) ret).getFileStatus().setNamespace(ns);
}
@ayushtkn (Member) commented Jul 2, 2022

Is this for append? If so, no, I don't think we should do this for all the other APIs; we should restrict our changes to the append code only.
Check whether changing the append code in RouterClientProtocol helps:

  @Override
  public LastBlockWithStatus append(String src, final String clientName,
      final EnumSetWritable<CreateFlag> flag) throws IOException {
    rpcServer.checkOperation(NameNode.OperationCategory.WRITE);

    List<RemoteLocation> locations = rpcServer.getLocationsForPath(src, true);
    RemoteMethod method = new RemoteMethod("append",
        new Class<?>[] {String.class, String.class, EnumSetWritable.class},
        new RemoteParam(), clientName, flag);
    RemoteResult result = rpcClient
        .invokeSequential(method, locations, LastBlockWithStatus.class, null);
    LastBlockWithStatus lbws = (LastBlockWithStatus) result.getResult();
    lbws.getFileStatus().setNamespace(result.getLocation().getNameserviceId());
    return lbws;
  }

Comment on lines +783 to +784
Map<String, FederationNamespaceInfo> allAvailableNamespaces =
getAvailableNamespaces();

We should have some caching here. For example: initially populate availableNamespaces and check it on every call; if an entry isn't found in the stored/cached availableNamespaces, call getAvailableNamespaces() and update the cache; if we still don't find the entry after that, we can return all the namespaces, as we do now.
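The refresh-on-miss caching suggested above could be sketched as follows; NamespaceCache and the Supplier-based resolver are hypothetical stand-ins for the Router's namenodeResolver.getNamespaces():

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class NamespaceCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Supplier<Map<String, String>> resolver;

    NamespaceCache(Supplier<Map<String, String>> resolver) {
        this.resolver = resolver;
    }

    // Look up nsId in the cache; on a miss, refresh from the resolver
    // once and retry. Returns null if the namespace is still unknown,
    // in which case the caller can fall back to contacting all
    // namespaces (the current behaviour).
    String lookup(String nsId) {
        String info = cache.get(nsId);
        if (info == null) {
            cache.putAll(resolver.get());
            info = cache.get(nsId);
        }
        return info;
    }

    public static void main(String[] args) {
        NamespaceCache c =
            new NamespaceCache(() -> Map.of("ns0", "info0"));
        System.out.println(c.lookup("ns0")); // info0, loaded on first miss
        System.out.println(c.lookup("ns1")); // null
    }
}
```

As the later discussion notes, invalidation for nameservices that disappear or are disabled is the hard part; this sketch only covers additions discovered on a miss.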

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 48s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 1s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+0 🆗 buf 0m 0s buf was not available.
+0 🆗 buf 0m 0s buf was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 8 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 14m 44s Maven dependency ordering for branch
+1 💚 mvninstall 28m 28s trunk passed
+1 💚 compile 6m 45s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 compile 6m 26s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 checkstyle 1m 35s trunk passed
+1 💚 mvnsite 3m 41s trunk passed
+1 💚 javadoc 3m 3s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 3m 39s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 spotbugs 8m 4s trunk passed
+1 💚 shadedclient 23m 36s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 27s Maven dependency ordering for patch
+1 💚 mvninstall 2m 57s the patch passed
+1 💚 compile 6m 39s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 cc 6m 39s the patch passed
+1 💚 javac 6m 39s the patch passed
+1 💚 compile 6m 15s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 cc 6m 15s the patch passed
+1 💚 javac 6m 15s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 1m 19s /results-checkstyle-hadoop-hdfs-project.txt hadoop-hdfs-project: The patch generated 2 new + 334 unchanged - 0 fixed = 336 total (was 334)
+1 💚 mvnsite 3m 8s the patch passed
+1 💚 javadoc 2m 22s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 3m 6s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 spotbugs 7m 55s the patch passed
+1 💚 shadedclient 23m 11s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 2m 29s hadoop-hdfs-client in the patch passed.
-1 ❌ unit 342m 58s /patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt hadoop-hdfs in the patch passed.
+1 💚 unit 32m 57s hadoop-hdfs-rbf in the patch passed.
+1 💚 asflicense 1m 2s The patch does not generate ASF License warnings.
539m 50s
Reason Tests
Failed junit tests hadoop.hdfs.server.mover.TestMover
hadoop.hdfs.TestDFSClientRetries
hadoop.hdfs.TestLease
Subsystem Report/Notes
Docker ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4524/2/artifact/out/Dockerfile
GITHUB PR #4524
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets cc buflint bufcompat
uname Linux c1dc81e47ee5 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / de5120f
Default Java Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4524/2/testReport/
Max. process+thread count 2372 (vs. ulimit of 5500)
modules C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4524/2/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@ZanderXu (Contributor, Author) commented Jul 3, 2022

Thanks @ayushtkn for the good idea; I have updated the patch.

About caching availableNamespaces, there is one point that is not easy to handle, and I'd like your input. The set of available namespaces changes dynamically while RBF runs, e.g. a nameservice becomes unavailable, or an end user manually disables some nameservices. If we want to cache the available FederationNamespaceInfos, we need to handle cache invalidation. To keep the processing simple, I removed the cache.

I'm looking forward to your thoughts, thanks.

@ayushtkn (Member) commented Jul 3, 2022

Hmm, the caching can maybe be a follow-up after this; it might be tricky but doable. You missed a couple of comments; the rest looks almost good to me.

@goiri / @Hexiaoqiao mind giving an additional check..

@ZanderXu (Contributor, Author) commented Jul 3, 2022

Thanks @ayushtkn for your review and ideas.
If needed, I will create a new PR to support caching availableNamespaces.

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 48s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 1s codespell was not available.
+0 🆗 detsecrets 0m 1s detect-secrets was not available.
+0 🆗 buf 0m 1s buf was not available.
+0 🆗 buf 0m 1s buf was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 8 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 14m 30s Maven dependency ordering for branch
+1 💚 mvninstall 28m 5s trunk passed
+1 💚 compile 6m 55s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 compile 6m 27s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 checkstyle 1m 36s trunk passed
+1 💚 mvnsite 3m 42s trunk passed
+1 💚 javadoc 3m 2s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 3m 39s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 spotbugs 8m 5s trunk passed
+1 💚 shadedclient 24m 17s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 30s Maven dependency ordering for patch
+1 💚 mvninstall 3m 17s the patch passed
+1 💚 compile 8m 0s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 cc 8m 0s the patch passed
+1 💚 javac 8m 0s the patch passed
+1 💚 compile 7m 12s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 cc 7m 12s the patch passed
+1 💚 javac 7m 12s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 1m 19s /results-checkstyle-hadoop-hdfs-project.txt hadoop-hdfs-project: The patch generated 1 new + 334 unchanged - 0 fixed = 335 total (was 334)
+1 💚 mvnsite 3m 8s the patch passed
+1 💚 javadoc 2m 21s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 3m 3s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 spotbugs 7m 53s the patch passed
+1 💚 shadedclient 23m 36s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 2m 27s hadoop-hdfs-client in the patch passed.
-1 ❌ unit 351m 10s /patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt hadoop-hdfs in the patch passed.
+1 💚 unit 33m 38s hadoop-hdfs-rbf in the patch passed.
+1 💚 asflicense 1m 1s The patch does not generate ASF License warnings.
552m 28s
Reason Tests
Failed junit tests hadoop.hdfs.TestLease
Subsystem Report/Notes
Docker ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4524/3/artifact/out/Dockerfile
GITHUB PR #4524
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets cc buflint bufcompat
uname Linux 0cbc9b2b2237 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 2d6e01e
Default Java Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4524/3/testReport/
Max. process+thread count 2370 (vs. ulimit of 5500)
modules C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4524/3/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@Hexiaoqiao (Contributor) commented Jul 4, 2022

@ZanderXu @ayushtkn, thanks for the great work here. After a quick glance, it seems like one good solution to improve renewLease for RBF.
I would like to share my own practice on this issue. I also hit this renewLease performance problem when upgrading to the RBF architecture. After observing that renewLease request counts to the NameNodes grew unexpectedly and the time cost was significant, I analysed whether the file path could be passed as a parameter to renewLease. After collecting create and renewLease audit logs, fewer than 3% of renewLease requests renewed leases on different files for a single client (generally fewer than 5 files) in our data warehouse scenario (this may not hold for other scenarios). I then added a new interface, public void renewLease(String path, String clientName), and routed the request based on the path at the Router side, which is common logic. With this interface the results were very good on both the Router and the client side, including Router load and client-side time cost.
Just proposing another choice for this improvement. This is not an objection to this PR, just another solution for discussion. If we reach agreement, I would be glad to give deeper reviews. Thanks again.

@ZanderXu (Contributor, Author) commented Jul 4, 2022

Thanks @Hexiaoqiao for your solution. In the beginning we also tried carrying the paths being written to RBF to fix this issue. After running it for a while, I found some cases that still need to be fixed:

  • Long-running client case: there may be many files being written at the same time
  • Multiple-destination case: RBF always forwards the renewLease RPC to all destination nameservices

Also, the number of renewLease requests between the client and RBF will increase, depending on the number of files being written at the same time.

@Hexiaoqiao (Contributor) commented:

Thanks for the quick response.

Long-running client case: there may be many files being written at the same time.

In my practice, the cost of path-based renewLease stays under control even for long-running applications such as Flink jobs (I have not observed many files being written concurrently; it would be helpful if anyone could offer such cases).

Multiple-destination case: RBF always forwards the renewLease RPC to all destination nameservices.

For both create and renewLease (with a file path), I think the same MountTableResolver applies to the same file, so this does not seem to be an issue for renewLease. Maybe there is some corner case I haven't caught; please correct me if I missed something.

The number of renewLease requests between the client and RBF will also increase.

Yes, that is true; I totally agree. Based on my internal production cluster, the increase will be less than 5%.
BTW, my consideration here is that it is smoother and more understandable to expose only the single RBF namespace to the client, rather than exposing both RBF and all the namespaces behind the Router. On the other hand, renewLease is a lightweight request, so less than 5% overhead is acceptable in my opinion.
Of course, the above is based entirely on my internal practice; other cases may not be covered. Very glad to hear more discussion and suggestions. Thanks.

@ZanderXu (Contributor, Author) commented Jul 4, 2022

For both create and renewLease (with a file path), I think the same MountTableResolver applies to the same file.

Although they all use the MountTableResolver, the create and append RPCs can get the NS a file belongs to, while renewLease can only obtain the full set of NSs the file's path is mounted on. So in this case, the renewLease RPC is always forwarded to some unnecessary nameservices.

@hadoop-yetus

🎊 +1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 40s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 1s codespell was not available.
+0 🆗 detsecrets 0m 1s detect-secrets was not available.
+0 🆗 buf 0m 1s buf was not available.
+0 🆗 buf 0m 1s buf was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 8 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 14m 55s Maven dependency ordering for branch
+1 💚 mvninstall 25m 7s trunk passed
+1 💚 compile 5m 57s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 compile 5m 40s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 checkstyle 1m 19s trunk passed
+1 💚 mvnsite 3m 8s trunk passed
+1 💚 javadoc 2m 36s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 3m 11s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 spotbugs 7m 35s trunk passed
+1 💚 shadedclient 20m 31s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 36s Maven dependency ordering for patch
+1 💚 mvninstall 2m 44s the patch passed
+1 💚 compile 6m 28s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 cc 6m 28s the patch passed
+1 💚 javac 6m 28s the patch passed
+1 💚 compile 5m 57s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 cc 5m 57s the patch passed
+1 💚 javac 5m 57s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
+1 💚 checkstyle 1m 14s the patch passed
+1 💚 mvnsite 2m 48s the patch passed
+1 💚 javadoc 2m 1s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 2m 46s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 spotbugs 7m 31s the patch passed
+1 💚 shadedclient 21m 23s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 2m 24s hadoop-hdfs-client in the patch passed.
+1 💚 unit 239m 35s hadoop-hdfs in the patch passed.
+1 💚 unit 23m 2s hadoop-hdfs-rbf in the patch passed.
+1 💚 asflicense 0m 51s The patch does not generate ASF License warnings.
412m 37s
Subsystem Report/Notes
Docker ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4524/4/artifact/out/Dockerfile
GITHUB PR #4524
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets cc buflint bufcompat
uname Linux 8db473773c03 4.15.0-169-generic #177-Ubuntu SMP Thu Feb 3 10:50:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 642f920
Default Java Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4524/4/testReport/
Max. process+thread count 3297 (vs. ulimit of 5500)
modules C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4524/4/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@Hexiaoqiao (Contributor) commented:

For both create and renewLease (with a file path), I think the same MountTableResolver applies to the same file.

Although they all use the MountTableResolver, the create and append RPCs can get the NS a file belongs to, while renewLease can only obtain the full set of NSs the file's path is mounted on.

Exactly true. For a MultipleDestination mount, the request could be forwarded to a different NS when only the file path is given, especially for DestinationOrder.RANDOM and related orders. cc @ayushtkn @goiri Any more feedback here? Thanks.

@ayushtkn (Member) commented Jul 6, 2022

@Hexiaoqiao I am ok with using path, but if I catch correct, the only save with using path will be like we won't be exposing the namespaces to the end client? but in exchange we will be saving I think a bunch of RPCs, especially in case of multi destination mount points.

May be from the performance point of view, it might be better with namespaces( the present approach). But I don't have any strong objections, if you feel we shouldn't expose the namespaces to end client.

If there is a particular use case where we shouldn't expose namespaces to the end client, we could hide this change behind a config, and this optimisation won't work in that case. But in general ViewFs also knows about all namespaces, and usually a lot of clients have these namespaces defined in their configs too. So that is not a big secret, and this namespace info will also be there in the back-end; I don't even think exposing them via this route can pose any security issue.

But I am OK with whichever approach you folks feel is better.

@Hexiaoqiao
Contributor

@ayushtkn My proposal to use the path as a parameter of renewLease is not related to any security issue. Actually, in my opinion, having both namespaces and the router name at the client side would be confusing and poorly readable, without other strong supporting points.

For MultipleDestinationMount, it could forward to different NS when request with file path only, especially for DestinationOrder.RANDOM and related order.

As mentioned above, for a MultipleDestinationMount it will be difficult to reduce requests to the NameNode at the Router side. (I am limited by my internal case, where no MultipleDestination with DestinationOrder.RANDOM is configured.)
In conclusion, I agree that the current approach (expose namespaces to the client and use the NS list in renewLease) is the more general solution, especially for MultipleDestinationMount cases. Thanks all for your detailed explanations.
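A minimal sketch of the agreed router-side behavior (the names and simplified signatures here are illustrative, not the actual patch code; the real method in the patch is getRenewLeaseNSs on the router RPC server): match the namespaces the client reports against the namespaces the router knows, and fall back to all of them when the client reports none or reports an unknown one.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class RenewLeaseRouting {
  // Returns the namespaces the renewLease RPC should be forwarded to.
  // Falls back to every known namespace when the client reports nothing
  // or reports a namespace the router does not know about.
  public static List<String> getRenewLeaseNSs(
      Set<String> available, List<String> reported) {
    if (reported == null || reported.isEmpty()) {
      return new ArrayList<>(available);
    }
    List<String> result = new ArrayList<>();
    for (String ns : reported) {
      if (!available.contains(ns)) {
        return new ArrayList<>(available);
      }
      result.add(ns);
    }
    return result;
  }
}
```

With a single open stream on one namespace the result has size 1 and the router can use a single-destination invocation; otherwise it invokes the matched namespaces concurrently, which is still far cheaper than fanning out to all 70+ nameservices.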

Member

@ayushtkn ayushtkn left a comment

Thanx @Hexiaoqiao for the details. Makes sense :-)
Changes LGTM.
Will hold for @Hexiaoqiao to have a final look before we conclude this.

checkNNStartup();
// just ignore nsIdentifies
Member

remove this line or change it to // Ignore the namespaces.

Contributor Author

Copy that, I will fix it.

Contributor

@Hexiaoqiao Hexiaoqiao left a comment

@ZanderXu It almost looks good to me. I just left some nit comments, FYI. Will give my +1 once they are fixed. Thanks again.

/**
* Try to get a list of FederationNamespaceInfo for renewLease RPC.
*/
private List<FederationNamespaceInfo> getRewLeaseNSs(List<String> namespaces)
Contributor

This method name should be getRenewLeaseNSs?

getAvailableNamespaces();
for (String namespace : namespaces) {
if (!allAvailableNamespaces.containsKey(namespace)) {
return new ArrayList<>(namenodeResolver.getNamespaces());
Contributor

We should use the result directly rather than creating another ArrayList here?

Contributor Author

namenodeResolver.getNamespaces() is a HashSet. I want a List so that we can use the invokeSingle method to forward this rpc when there is only one namespace.

List<FederationNamespaceInfo> nss = getRenewLeaseNSs(namespaces);
if (nss.size() == 1) {
  rpcClient.invokeSingle(nss.get(0).getNameserviceId(), method);
} else {
  rpcClient.invokeConcurrent(nss, method, false, false);
}

Of course, Set can also achieve this goal.

Contributor

Got it. Makes sense to me.

}
}


Contributor

duplicate blank line.

@@ -759,11 +759,19 @@ SnapshotStatus[] getSnapshotListing(String snapshotRoot)
* the last call to renewLease(), the NameNode assumes the
* client has died.
*
* @param namespaces The full Namespace list that the release rpc
Contributor

+1

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 37s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 1s codespell was not available.
+0 🆗 detsecrets 0m 1s detect-secrets was not available.
+0 🆗 buf 0m 1s buf was not available.
+0 🆗 buf 0m 1s buf was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 8 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 74m 49s Maven dependency ordering for branch
+1 💚 mvninstall 24m 57s trunk passed
+1 💚 compile 5m 56s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 compile 5m 44s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 checkstyle 1m 20s trunk passed
+1 💚 mvnsite 3m 12s trunk passed
+1 💚 javadoc 2m 33s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 3m 8s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 spotbugs 7m 11s trunk passed
+1 💚 shadedclient 20m 1s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 27s Maven dependency ordering for patch
+1 💚 mvninstall 2m 41s the patch passed
+1 💚 compile 5m 54s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 cc 5m 54s the patch passed
+1 💚 javac 5m 54s the patch passed
+1 💚 compile 5m 33s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 cc 5m 33s the patch passed
+1 💚 javac 5m 33s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
+1 💚 checkstyle 1m 6s the patch passed
+1 💚 mvnsite 2m 48s the patch passed
+1 💚 javadoc 2m 3s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 2m 50s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 spotbugs 7m 7s the patch passed
+1 💚 shadedclient 19m 54s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 2m 24s hadoop-hdfs-client in the patch passed.
-1 ❌ unit 234m 55s /patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt hadoop-hdfs in the patch failed.
+1 💚 unit 21m 41s hadoop-hdfs-rbf in the patch passed.
+1 💚 asflicense 0m 51s The patch does not generate ASF License warnings.
462m 18s
Reason Tests
Failed junit tests hadoop.hdfs.TestRollingUpgrade
Subsystem Report/Notes
Docker ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4524/5/artifact/out/Dockerfile
GITHUB PR #4524
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets cc buflint bufcompat
uname Linux db558b1d5a3b 4.15.0-169-generic #177-Ubuntu SMP Thu Feb 3 10:50:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 86ad891
Default Java Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4524/5/testReport/
Max. process+thread count 3345 (vs. ulimit of 5500)
modules C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4524/5/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@hadoop-yetus

🎊 +1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 48s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 1s codespell was not available.
+0 🆗 detsecrets 0m 1s detect-secrets was not available.
+0 🆗 buf 0m 1s buf was not available.
+0 🆗 buf 0m 1s buf was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 8 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 14m 25s Maven dependency ordering for branch
+1 💚 mvninstall 28m 16s trunk passed
+1 💚 compile 6m 55s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 compile 6m 24s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 checkstyle 1m 37s trunk passed
+1 💚 mvnsite 3m 41s trunk passed
+1 💚 javadoc 3m 4s trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 3m 39s trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 spotbugs 8m 5s trunk passed
+1 💚 shadedclient 26m 16s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 31s Maven dependency ordering for patch
+1 💚 mvninstall 3m 25s the patch passed
+1 💚 compile 7m 27s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 cc 7m 27s the patch passed
+1 💚 javac 7m 27s the patch passed
+1 💚 compile 6m 40s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 cc 6m 40s the patch passed
+1 💚 javac 6m 40s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
+1 💚 checkstyle 1m 22s the patch passed
+1 💚 mvnsite 3m 14s the patch passed
+1 💚 javadoc 2m 23s the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
+1 💚 javadoc 3m 7s the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
+1 💚 spotbugs 7m 51s the patch passed
+1 💚 shadedclient 23m 44s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 2m 28s hadoop-hdfs-client in the patch passed.
+1 💚 unit 340m 53s hadoop-hdfs in the patch passed.
+1 💚 unit 33m 11s hadoop-hdfs-rbf in the patch passed.
+1 💚 asflicense 1m 2s The patch does not generate ASF License warnings.
543m 12s
Subsystem Report/Notes
Docker ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4524/6/artifact/out/Dockerfile
GITHUB PR #4524
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets cc buflint bufcompat
uname Linux 89e41ce6bc5b 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 6739c22
Default Java Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4524/6/testReport/
Max. process+thread count 2375 (vs. ulimit of 5500)
modules C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4524/6/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

Contributor

@Hexiaoqiao Hexiaoqiao left a comment

LGTM. +1 from my side.
Thanks @ZanderXu and @ayushtkn for your work.

@ayushtkn ayushtkn merged commit 6f9c435 into apache:trunk Jul 14, 2022
@ZanderXu
Contributor Author

@ayushtkn @Hexiaoqiao Thanks for your discussion and review. I will continue to work hard and submit more patches to the community.

HarshitGupta11 pushed a commit to HarshitGupta11/hadoop that referenced this pull request Nov 28, 2022
… Contributed by ZanderXu.

Reviewed-by: He Xiaoqiao <hexiaoqiao@apache.org>
Signed-off-by: Ayush Saxena <ayushsaxena@apache.org>
LiuGuH pushed a commit to LiuGuH/hadoop that referenced this pull request Mar 26, 2024
… Contributed by ZanderXu.

Reviewed-by: He Xiaoqiao <hexiaoqiao@apache.org>
Signed-off-by: Ayush Saxena <ayushsaxena@apache.org>
LiuGuH pushed a commit to LiuGuH/hadoop that referenced this pull request Mar 26, 2024
…3.2-bzl-hdfs-merge'

HDFS-16283. RBF: reducing the load of renewLease() RPC (apache#4524).

See merge request dap/hadoop!79
NyteKnight pushed a commit to NyteKnight/hadoop that referenced this pull request Jun 25, 2024
… Contributed by ZanderXu.

Reviewed-by: He Xiaoqiao <hexiaoqiao@apache.org>
Signed-off-by: Ayush Saxena <ayushsaxena@apache.org>

With part of HDFS-15535.
