This repository has been archived by the owner on Feb 9, 2021. It is now read-only.

Commit

Merging changes r1090113:r1095461 from trunk to federation
git-svn-id: https://svn.apache.org/repos/asf/hadoop/hdfs/branches/HDFS-1052@1095512 13f79535-47bb-0310-9956-ffa450edef68
Suresh Srinivas committed Apr 20, 2011
1 parent 9ef86e7 commit f529dfe
Showing 61 changed files with 1,801 additions and 1,445 deletions.
54 changes: 53 additions & 1 deletion CHANGES.txt
@@ -258,6 +258,16 @@ Trunk (unreleased changes)
HDFS-1813. Federation: Authentication using BlockToken in RPC to datanode
fails. (jitendra)


HDFS-1630. Support fsedits checksum. (hairong)

HDFS-1606. Provide a stronger data guarantee in the write pipeline by
adding a new datanode when an existing datanode failed. (szetszwo)

HDFS-1442. Api to get delegation token in Hdfs class. (jitendra)

HDFS-1070. Speedup namenode image loading and saving by storing only
local file names. (hairong)

IMPROVEMENTS


HDFS-1510. Added test-patch.properties required by test-patch.sh (nigel)
@@ -327,6 +337,25 @@ Trunk (unreleased changes)
HDFS-1767. Namenode ignores non-initial block report from datanodes HDFS-1767. Namenode ignores non-initial block report from datanodes
when in safemode during startup. (Matt Foley via suresh) when in safemode during startup. (Matt Foley via suresh)


HDFS-1817. Move pipeline_Fi_[39-51] from TestFiDataTransferProtocol
to TestFiPipelineClose. (szetszwo)

HDFS-1760. In FSDirectory.getFullPathName(..), it is better to return "/"
for root directory instead of an empty string. (Daryn Sharp via szetszwo)

HDFS-1833. Reduce repeated string constructions and unnecessary fields,
and fix comments in BlockReceiver.PacketResponder. (szetszwo)

HDFS-1486. Generalize CLITest structure and interfaces to facilitate
upstream adoption (e.g. for web testing). (cos)

HDFS-1844. Move "fs -help" shell command tests from HDFS to COMMON; see
also HADOOP-7230. (Daryn Sharp via szetszwo)

HDFS-1840. In DFSClient, terminate the lease renewing thread when all files
being written are closed for a grace period, and start a new thread when
new files are opened for write. (szetszwo)
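HDFS-1840 above changes DFSClient to stop its lease-renewing thread once no files have been open for write for a grace period, and to start a fresh thread when writing resumes. The pattern can be sketched in a self-contained way as follows; the class name, method names, and grace-period value are illustrative only, not the actual DFSClient internals:

```java
/**
 * Sketch of the HDFS-1840 behavior: a lease-renewing thread that
 * terminates after a grace period with no open files, and is restarted
 * when a new file is opened for write. Illustrative, not DFSClient code.
 */
public class LeaseRenewerSketch {
    private final long gracePeriodMs;
    private int openFiles = 0;
    private Thread renewer = null;

    public LeaseRenewerSketch(long gracePeriodMs) {
        this.gracePeriodMs = gracePeriodMs;
    }

    /** Called when a file is opened for write; starts the thread if needed. */
    public synchronized void openFile() {
        openFiles++;
        if (renewer == null) {
            renewer = new Thread(this::run);
            renewer.setDaemon(true);
            renewer.start();
        }
    }

    /** Called when a file being written is closed. */
    public synchronized void closeFile() {
        openFiles--;
    }

    public synchronized boolean isRunning() {
        return renewer != null;
    }

    private void run() {
        long idleSince = -1;
        while (true) {
            synchronized (this) {
                if (openFiles > 0) {
                    idleSince = -1;       // files open: keep renewing
                } else if (idleSince < 0) {
                    idleSince = System.currentTimeMillis();
                } else if (System.currentTimeMillis() - idleSince >= gracePeriodMs) {
                    renewer = null;       // grace period expired: terminate
                    return;
                }
                // The real client would renew leases with the namenode here.
                try {
                    wait(10);
                } catch (InterruptedException e) {
                    renewer = null;
                    return;
                }
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        LeaseRenewerSketch lr = new LeaseRenewerSketch(100);
        lr.openFile();
        System.out.println("running after open: " + lr.isRunning());
        lr.closeFile();
        Thread.sleep(400);
        System.out.println("running after grace period: " + lr.isRunning());
    }
}
```

Opening a file after the thread has terminated simply starts a new renewer thread, which is the second half of the HDFS-1840 change.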

OPTIMIZATIONS


HDFS-1458. Improve checkpoint performance by avoiding unnecessary image
@@ -397,6 +426,18 @@ Trunk (unreleased changes)
HDFS-1543. Reduce dev. cycle time by moving system testing artifacts from
default build and push to maven for HDFS (Luke Lu via cos)


HDFS-1818. TestHDFSCLI is failing on trunk after HADOOP-7202.
(Aaron T. Myers via todd)

HDFS-1828. TestBlocksWithNotEnoughRacks intermittently fails assert.
(Matt Foley via eli)

HDFS-1824. Delay instantiation of the file system object until it is
needed (linked to HADOOP-7207). (boryas)

HDFS-1831. Fix append bug in FileContext and implement CreateFlag
check (related to HADOOP-7223). (suresh)

Release 0.22.0 - Unreleased


NEW FEATURES
@@ -853,9 +894,20 @@ Release 0.21.1 - Unreleased


HDFS-1781. Fix the path for jsvc in bin/hdfs. (John George via szetszwo)


HDFS-1782. Fix an NPE in FSNamesystem.startFileInternal(..).
(John George via szetszwo)


HDFS-1821. Fix username resolution in NameNode.createSymlink(..) and
FSDirectory.addSymlink(..). (John George via szetszwo)

HDFS-1806. TestBlockReport.blockReport_08() and _09() are timing-dependent
and likely to fail on fast servers. (Matt Foley via eli)

HDFS-1845. Symlink comes up as directory after namenode restart.
(John George via eli)

HDFS-1666. Disable failing hdfsproxy test TestAuthorizationFilter (todd)

Release 0.21.1 - Unreleased


HDFS-1411. Correct backup node startup command in hdfs user guide.
2 changes: 2 additions & 0 deletions src/contrib/build.xml
@@ -46,9 +46,11 @@
<!-- Test all the contribs. -->
<!-- ====================================================== -->
<target name="test">
<!-- hdfsproxy tests failing due to HDFS-1666
<subant target="test">
<fileset dir="." includes="hdfsproxy/build.xml"/>
</subant>
-->
</target>




36 changes: 36 additions & 0 deletions src/java/hdfs-default.xml
@@ -317,6 +317,42 @@ creations/deletions), or "all".</description>
</description>
</property>


<property>
<name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
<value>true</value>
<description>
If there is a datanode/network failure in the write pipeline,
DFSClient will try to remove the failed datanode from the pipeline
and then continue writing with the remaining datanodes. As a result,
the number of datanodes in the pipeline is decreased. This feature
adds new datanodes back to the pipeline.

This is a site-wide property that enables/disables the feature.

See also dfs.client.block.write.replace-datanode-on-failure.policy
</description>
</property>

<property>
<name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
<value>DEFAULT</value>
<description>
This property is used only if the value of
dfs.client.block.write.replace-datanode-on-failure.enable is true.

ALWAYS: always add a new datanode when an existing datanode is removed.

NEVER: never add a new datanode.

DEFAULT:
Let r be the replication number.
Let n be the number of existing datanodes.
Add a new datanode only if r is greater than or equal to 3 and either
(1) floor(r/2) is greater than or equal to n; or
(2) r is greater than n and the block is hflushed/appended.
</description>
</property>
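The DEFAULT policy above reduces to a small predicate. The following sketch expresses it directly; the class and method names are illustrative only and are not the actual DFSClient implementation:

```java
/**
 * Sketch of the DEFAULT replace-datanode-on-failure policy described in
 * the property above. Illustrative names, not the DFSClient code.
 */
public class ReplaceDatanodeOnFailureSketch {

    /**
     * @param r replication number of the file
     * @param n number of datanodes remaining in the pipeline
     * @param hflushedOrAppended whether the block was hflushed/appended
     * @return whether a replacement datanode should be added
     */
    static boolean shouldAddDatanode(int r, int n, boolean hflushedOrAppended) {
        if (r < 3) {
            return false;      // replication below 3: never add a datanode
        }
        // (1) floor(r/2) >= n: half or more of the pipeline is gone, or
        // (2) r > n and the block was hflushed/appended.
        return r / 2 >= n || (r > n && hflushedOrAppended);
    }

    public static void main(String[] args) {
        // r=3, one datanode left: floor(3/2)=1 >= 1, so add.
        System.out.println(shouldAddDatanode(3, 1, false)); // true
        // r=3, two left, not hflushed: neither condition holds.
        System.out.println(shouldAddDatanode(3, 2, false)); // false
        // r=3, two left, hflushed: 3 > 2 and hflushed, so add.
        System.out.println(shouldAddDatanode(3, 2, true));  // true
    }
}
```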

<property>
<name>dfs.blockreport.intervalMsec</name>
<value>21600000</value>
48 changes: 47 additions & 1 deletion src/java/org/apache/hadoop/fs/Hdfs.java
@@ -25,6 +25,8 @@
import java.net.URISyntaxException;
import java.util.ArrayList;
import java.util.EnumSet;
import java.util.List;
import java.util.NoSuchElementException;


import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;
@@ -37,8 +39,13 @@
import org.apache.hadoop.hdfs.protocol.FSConstants;
import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
import org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus;
import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
import org.apache.hadoop.hdfs.server.namenode.NameNode;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.AccessControlException;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.SecretManager.InvalidToken;
import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier;
import org.apache.hadoop.util.Progressable;


@InterfaceAudience.Private
@@ -249,7 +256,7 @@ public HdfsFileStatus getNext() throws IOException {
if (hasNext()) {
return thisListing.getPartialListing()[i++];
}
throw new NoSuchElementException("No more entry in " + src);
}
}


@@ -384,4 +391,43 @@ public void createSymlink(Path target, Path link, boolean createParent)
public Path getLinkTarget(Path p) throws IOException {
return new Path(dfs.getLinkTarget(getUriPath(p)));
}

@Override //AbstractFileSystem
public List<Token<?>> getDelegationTokens(String renewer) throws IOException {
Token<DelegationTokenIdentifier> result = dfs
.getDelegationToken(renewer == null ? null : new Text(renewer));
result.setService(new Text(this.getCanonicalServiceName()));
List<Token<?>> tokenList = new ArrayList<Token<?>>();
tokenList.add(result);
return tokenList;
}

/**
* Renew an existing delegation token.
*
* @param token delegation token obtained earlier
* @return the new expiration time
* @throws InvalidToken
* @throws IOException
*/
@SuppressWarnings("unchecked")
public long renewDelegationToken(
Token<? extends AbstractDelegationTokenIdentifier> token)
throws InvalidToken, IOException {
return dfs.renewDelegationToken((Token<DelegationTokenIdentifier>) token);
}

/**
* Cancel an existing delegation token.
*
* @param token delegation token
* @throws InvalidToken
* @throws IOException
*/
@SuppressWarnings("unchecked")
public void cancelDelegationToken(
Token<? extends AbstractDelegationTokenIdentifier> token)
throws InvalidToken, IOException {
dfs.cancelDelegationToken((Token<DelegationTokenIdentifier>) token);
}
}
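The token methods added above (HDFS-1442) can be exercised roughly as follows. This is only a sketch: it requires a live HDFS cluster and the Hadoop jars on the classpath, and the NameNode URI and renewer name are placeholders, not values from this commit.

```java
import java.net.URI;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.AbstractFileSystem;
import org.apache.hadoop.fs.Hdfs;
import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
import org.apache.hadoop.security.token.Token;

public class DelegationTokenUsage {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder URI; point this at a real NameNode.
        Hdfs fs = (Hdfs) AbstractFileSystem.get(
            URI.create("hdfs://namenode:8020/"), conf);

        // Fetch a delegation token for the given renewer principal.
        List<Token<?>> tokens = fs.getDelegationTokens("jobtracker");

        @SuppressWarnings("unchecked")
        Token<DelegationTokenIdentifier> token =
            (Token<DelegationTokenIdentifier>) tokens.get(0);

        // renewDelegationToken returns the new expiration time.
        long newExpiry = fs.renewDelegationToken(token);
        System.out.println("token renewed until " + newExpiry);

        // Cancel the token when it is no longer needed.
        fs.cancelDelegationToken(token);
    }
}
```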