From e901de5a25fbb513399a3bfa3eeabd34f81a35bd Mon Sep 17 00:00:00 2001
From: Ayush Saxena
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/bzip2/CBZip2InputStream.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/bzip2/CBZip2InputStream.java
index 187fe481588c8..f7a71a9bdec21 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/bzip2/CBZip2InputStream.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/bzip2/CBZip2InputStream.java
@@ -37,13 +37,13 @@
*
* The decompression requires large amounts of memory. Thus you should call the
* {@link #close() close()} method as soon as possible, to force
- * <tt>CBZip2InputStream</tt> to release the allocated memory. See
+ * <code>CBZip2InputStream</code> to release the allocated memory. See
 * {@link CBZip2OutputStream CBZip2OutputStream} for information about memory
 * usage.
- * <tt>CBZip2InputStream</tt> reads bytes from the compressed source stream via
+ * <code>CBZip2InputStream</code> reads bytes from the compressed source stream via
 * the single byte {@link java.io.InputStream#read() read()} method exclusively.
 * Thus you should consider to use a buffered source stream.
 *
- * Although BZip2 headers are marked with the magic <tt>"Bz"</tt> this
+ * Although BZip2 headers are marked with the magic <code>"Bz"</code> this
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
index f57135a7fc664..4fd56928c993e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
@@ -992,7 +992,7 @@ private void stopHttpServer() {
 *
 * The option is passed via configuration field:
- * <tt>dfs.namenode.startup</tt>
+ * <code>dfs.namenode.startup</code>
 *
 * The conf will be modified to reflect the actual ports on which
 * the NameNode is up and running if the user passes the port as
 * zero in the conf.
 *
 * Applications may add additional resources, which are loaded
 * subsequent to these resources in the order they are added.
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
@@ -150,7 +150,7 @@
* </property>
*
 * Administrators typically define parameters as final in
- * <tt>core-site.xml</tt> for values that user applications may not alter.
+ * <code>core-site.xml</code> for values that user applications may not alter.
 *
- * <tt>core-default.xml</tt>: Read-only defaults for hadoop.
- * <tt>core-site.xml</tt>: Site-specific configuration for a given hadoop
+ * <code>core-default.xml</code>: Read-only defaults for hadoop.
+ * <code>core-site.xml</code>: Site-specific configuration for a given hadoop
 * installation.
*
* Variable Expansion
*
@@ -182,19 +182,19 @@
* </property>
*
*
- * When <tt>conf.get("tempdir")</tt> is called, then <tt>${basedir}</tt>
+ * When <code>conf.get("tempdir")</code> is called, then <code>${basedir}</code>
 * will be resolved to another property in this Configuration, while
- * <tt>${user.name}</tt> would then ordinarily be resolved to the value
+ * <code>${user.name}</code> would then ordinarily be resolved to the value
 * of the System property with that name.
- * When <tt>conf.get("otherdir")</tt> is called, then <tt>${env.BASE_DIR}</tt>
- * will be resolved to the value of the <tt>${BASE_DIR}</tt> environment variable.
- * It supports <tt>${env.NAME:-default}</tt> and <tt>${env.NAME-default}</tt> notations.
- * The former is resolved to "default" if <tt>${NAME}</tt> environment variable is undefined
+ * When <code>conf.get("otherdir")</code> is called, then <code>${env.BASE_DIR}</code>
+ * will be resolved to the value of the <code>${BASE_DIR}</code> environment variable.
+ * It supports <code>${env.NAME:-default}</code> and <code>${env.NAME-default}</code> notations.
+ * The former is resolved to "default" if <code>${NAME}</code> environment variable is undefined
 * or its value is empty.
- * The latter behaves the same way only if <tt>${NAME}</tt> is undefined.
+ * The latter behaves the same way only if <code>${NAME}</code> is undefined.
 *
 * By default, warnings will be given to any deprecated configuration
 * parameters and these are suppressible by configuring
* log4j.logger.org.apache.hadoop.conf.Configuration.deprecation in
* log4j.properties file.
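The `${env.NAME:-default}` vs. `${env.NAME-default}` distinction in the hunk above is subtle. A minimal plain-Java sketch of the two notations (not Hadoop's implementation; class and method names here are illustrative only, and a `Map` stands in for the process environment):

```java
import java.util.Map;

public class EnvExpansionSketch {
    /** Resolve a single NAME[:-|-]default expression against an env map. */
    public static String resolve(String expr, Map<String, String> env) {
        int i = expr.indexOf(":-");
        if (i >= 0) {
            String name = expr.substring(0, i);
            String def = expr.substring(i + 2);
            String v = env.get(name);
            // ":-" notation: default when the variable is undefined OR empty
            return (v == null || v.isEmpty()) ? def : v;
        }
        i = expr.indexOf('-');
        if (i >= 0) {
            String name = expr.substring(0, i);
            String def = expr.substring(i + 1);
            String v = env.get(name);
            // "-" notation: default only when the variable is undefined
            return (v == null) ? def : v;
        }
        return env.get(expr);
    }

    public static void main(String[] args) {
        Map<String, String> env = Map.of("EMPTY", "", "DIR", "/data");
        System.out.println(resolve("DIR:-/tmp", env));    // /data
        System.out.println(resolve("EMPTY:-/tmp", env));  // /tmp
        System.out.println(resolve("EMPTY-/tmp", env));   // "" (defined, so kept)
        System.out.println(resolve("MISSING-/tmp", env)); // /tmp
    }
}
```

The empty-string case is the only one where the two notations diverge, mirroring shell parameter expansion.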
*
* Tags
@@ -217,9 +217,9 @@
* <tag>HDFS,SECURITY</tag>
* </property>
*
- * Properties marked with tags can be retrieved with <tt>conf
- * .getAllPropertiesByTag("HDFS")</tt> or <tt>conf.getAllPropertiesByTags
- * (Arrays.asList("YARN","SECURITY"))</tt>.
+ * Properties marked with tags can be retrieved with <code>conf
+ * .getAllPropertiesByTag("HDFS")</code> or <code>conf.getAllPropertiesByTags
+ * (Arrays.asList("YARN","SECURITY"))</code>.
*
* If a key is deprecated in favor of multiple keys, they are all treated as
* aliases of each other, and setting any one of them resets all the others
@@ -601,7 +601,7 @@ public static void addDeprecation(String key, String[] newKeys,
* It does not override any existing entries in the deprecation map.
* This is to be used only by the developers in order to add deprecation of
* keys, and attempts to call this method after loading resources once,
- * would lead to <tt>UnsupportedOperationException</tt>
+ * would lead to <code>UnsupportedOperationException</code>
*
* If you have multiple deprecation entries to add, it is more efficient to
* use #addDeprecations(DeprecationDelta[] deltas) instead.
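The aliasing rule stated above (a deprecated key and its replacements form one group; setting any member resets all the others) can be sketched with a toy class — this is not the real `Configuration` internals, just an illustration of the documented semantics:

```java
import java.util.*;

public class DeprecationSketch {
    private final Map<String, Set<String>> aliasGroups = new HashMap<>();
    private final Map<String, String> props = new HashMap<>();

    /** Register oldKey as deprecated in favor of newKeys; all become aliases. */
    public void addDeprecation(String oldKey, String... newKeys) {
        Set<String> group = new HashSet<>(Arrays.asList(newKeys));
        group.add(oldKey);
        for (String k : group) {
            aliasGroups.put(k, group);
        }
    }

    /** Set a property; every alias of the key receives the same value. */
    public void set(String key, String value) {
        for (String k : aliasGroups.getOrDefault(key, Set.of(key))) {
            props.put(k, value);
        }
    }

    public String get(String key) {
        return props.get(key);
    }

    public static void main(String[] args) {
        DeprecationSketch conf = new DeprecationSketch();
        conf.addDeprecation("old.key", "new.key.a", "new.key.b");
        conf.set("new.key.a", "v1");
        System.out.println(conf.get("old.key"));   // v1
        System.out.println(conf.get("new.key.b")); // v1
    }
}
```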
@@ -621,7 +621,7 @@ public static void addDeprecation(String key, String newKey,
* It does not override any existing entries in the deprecation map.
* This is to be used only by the developers in order to add deprecation of
* keys, and attempts to call this method after loading resources once,
- * would lead to <tt>UnsupportedOperationException</tt>
+ * would lead to <code>UnsupportedOperationException</code>
*
* If a key is deprecated in favor of multiple keys, they are all treated as
* aliases of each other, and setting any one of them resets all the others
@@ -645,7 +645,7 @@ public static void addDeprecation(String key, String[] newKeys) {
* It does not override any existing entries in the deprecation map.
* This is to be used only by the developers in order to add deprecation of
* keys, and attempts to call this method after loading resources once,
- * would lead to <tt>UnsupportedOperationException</tt>
+ * would lead to <code>UnsupportedOperationException</code>
*
* If you have multiple deprecation entries to add, it is more efficient to
* use #addDeprecations(DeprecationDelta[] deltas) instead.
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
index a4737c548c8fa..2d1f7e5ee9d8f 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
@@ -247,7 +247,7 @@ protected static synchronized Map
 * The scheme of the uri
* determines a configuration property name,
- * <tt>fs.AbstractFileSystem.scheme.impl</tt> whose value names the
+ * <code>fs.AbstractFileSystem.scheme.impl</code> whose value names the
* AbstractFileSystem class.
*
* The entire URI and conf is passed to the AbstractFileSystem factory method.
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java
index 0efcdc8022f7b..14c6b5dc1fe25 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java
@@ -686,7 +686,7 @@ boolean apply(Path p) throws IOException {
/**
* Set replication for an existing file.
- * Implement the abstract <tt>setReplication</tt> of <tt>FileSystem</tt>
+ * Implement the abstract <code>setReplication</code> of <code>FileSystem</code>
* @param src file name
* @param replication new replication
* @throws IOException if an I/O error occurs.
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFs.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFs.java
index 4820c5c3045d7..5f3e5d9b8efa9 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFs.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFs.java
@@ -453,7 +453,7 @@ private boolean isDirectory(Path f)
}
/**
* Set replication for an existing file.
- * Implement the abstract <tt>setReplication</tt> of <tt>FileSystem</tt>
+ * Implement the abstract <code>setReplication</code> of <code>FileSystem</code>
* @param src file name
* @param replication new replication
* @throws IOException if an I/O error occurs.
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RemoteIterator.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RemoteIterator.java
index 9238c3f6fb993..06b7728ae3e9d 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RemoteIterator.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RemoteIterator.java
@@ -24,9 +24,9 @@
*/
public interface RemoteIterator<E> {
 /**
 * Returns true if the iteration has more elements.
*
- * @return <tt>true</tt> if the iterator has more elements.
+ * @return <code>true</code> if the iterator has more elements.
* @throws IOException if any IO error occurs
*/
boolean hasNext() throws IOException;
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/EnumSetWritable.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/EnumSetWritable.java
index 4b1dc7513d054..f2c8b76e2ab70 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/EnumSetWritable.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/EnumSetWritable.java
@@ -59,10 +59,10 @@ public boolean add(E e) {
}
/**
- * Construct a new EnumSetWritable. If the <tt>value</tt> argument is null or
- * its size is zero, the <tt>elementType</tt> argument must not be null. If
- * the argument <tt>value</tt>'s size is bigger than zero, the argument
- * <tt>elementType</tt> is not be used.
+ * Construct a new EnumSetWritable. If the <code>value</code> argument is null or
+ * its size is zero, the <code>elementType</code> argument must not be null. If
+ * the argument <code>value</code>'s size is bigger than zero, the argument
+ * <code>elementType</code> is not be used.
*
* @param value enumSet value.
* @param elementType elementType.
@@ -72,7 +72,7 @@ public EnumSetWritable(EnumSet
- * Construct a new EnumSetWritable. Argument <tt>value</tt> should not be null
+ * Construct a new EnumSetWritable. Argument <code>value</code> should not be null
* or empty.
*
* @param value enumSet value.
@@ -83,10 +83,10 @@ public EnumSetWritable(EnumSet
- * reset the EnumSetWritable with specified <tt>value</tt> and <tt>elementType</tt>. If the value argument
- * is null or its size is zero, the elementType argument must not be
- * null. If the argument value's size is bigger than zero, the
- * argument elementType is not be used.
+ * reset the EnumSetWritable with specified <code>value</code> and <code>elementType</code>. If the value argument
+ * is null or its size is zero, the elementType argument must not be
+ * null. If the argument value's size is bigger than zero, the
+ * argument elementType is not be used.
*
* @param value enumSet Value.
* @param elementType elementType.
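The constraint documented above — `elementType` is required exactly when the set is null or empty — follows from plain `java.util.EnumSet` semantics: an empty set carries no element-type information at the value level. A self-contained illustration (not Hadoop code; the helper and enum are hypothetical):

```java
import java.util.EnumSet;

public class EnumSetTypeSketch {
    enum Flag { CREATE, APPEND, OVERWRITE }

    /** Element type to record: taken from the set if non-empty, otherwise the
     *  explicitly supplied class, which then must not be null. */
    static <E extends Enum<E>> Class<?> elementTypeFor(EnumSet<E> value,
                                                       Class<E> elementType) {
        if (value != null && !value.isEmpty()) {
            // a non-empty set reveals its own element type
            return value.iterator().next().getDeclaringClass();
        }
        if (elementType == null) {
            throw new IllegalArgumentException(
                "elementType must not be null when value is null or empty");
        }
        return elementType;
    }

    public static void main(String[] args) {
        System.out.println(elementTypeFor(EnumSet.of(Flag.CREATE), null));
        System.out.println(elementTypeFor(EnumSet.noneOf(Flag.class), Flag.class));
    }
}
```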
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ObjectWritable.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ObjectWritable.java
index 29c06a01ad6e3..831931bdace66 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ObjectWritable.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ObjectWritable.java
@@ -401,8 +401,8 @@ static Method getStaticProtobufMethod(Class> declaredClass, String method,
}
/**
- * Find and load the class with given name <tt>className</tt> by first finding
- * it in the specified <tt>conf</tt>. If the specified <tt>conf</tt> is null,
+ * Find and load the class with given name <code>className</code> by first finding
+ * it in the specified <code>conf</code>. If the specified <code>conf</code> is null,
* try load it directly.
*
* @param conf configuration.
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
index a0b45814f1c77..762c2dac08bd8 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
@@ -91,7 +91,7 @@
 * The actual compression algorithm used to compress key and/or values can be
 * specified by using the appropriate {@link CompressionCodec}.
- * The recommended way is to use the static <tt>createWriter</tt> methods
+ * The recommended way is to use the static <code>createWriter</code> methods
 * provided by the SequenceFile to chose the preferred format.
 *
 * The {@link SequenceFile.Reader} acts as the bridge and can read any of the
 * above SequenceFile formats.
@@ -288,7 +288,7 @@ private void makeMaps() {
* @throws IOException
* if the stream content is malformed or an I/O error occurs.
* @throws NullPointerException
- * if <tt>in == null</tt>
+ * if <code>in == null</code>
*/
public CBZip2InputStream(final InputStream in, READ_MODE readMode)
throws IOException {
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInZlibDeflater.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInZlibDeflater.java
index 739788fa5f5ec..e98980f0f26aa 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInZlibDeflater.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInZlibDeflater.java
@@ -57,7 +57,7 @@ public synchronized int compress(byte[] b, int off, int len)
/**
* reinit the compressor with the given configuration. It will reset the
* compressor's compression level and compression strategy. Different from
- * <tt>ZlibCompressor</tt>, <tt>BuiltInZlibDeflater</tt> only support three
+ * <code>ZlibCompressor</code>, <code>BuiltInZlibDeflater</code> only support three
* kind of compression strategy: FILTERED, HUFFMAN_ONLY and DEFAULT_STRATEGY.
* It will use DEFAULT_STRATEGY as default if the configured compression
* strategy is not supported.
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/file/tfile/Chunk.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/file/tfile/Chunk.java
index 05e3d48a469a2..ec508c020468a 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/file/tfile/Chunk.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/file/tfile/Chunk.java
@@ -219,8 +219,8 @@ static public class ChunkEncoder extends OutputStream {
/**
* The number of valid bytes in the buffer. This value is always in the
- * range 0 through <tt>buf.length</tt>; elements <tt>buf[0]</tt>
- * through <tt>buf[count-1]</tt> contain valid byte data.
+ * range 0 through <code>buf.length</code>; elements <code>buf[0]</code>
+ * through <code>buf[count-1]</code> contain valid byte data.
*/
private int count;
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcClientException.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcClientException.java
index 7f8d9707f9cd7..107899a9c0d4b 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcClientException.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcClientException.java
@@ -38,7 +38,7 @@ public class RpcClientException extends RpcException {
* @param message message.
* @param cause that cause this exception
* @param cause the cause (can be retried by the {@link #getCause()} method).
- * (A <tt>null</tt> value is permitted, and indicates that the cause
+ * (A <code>null</code> value is permitted, and indicates that the cause
* is nonexistent or unknown.)
*/
RpcClientException(final String message, final Throwable cause) {
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcException.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcException.java
index 8141333d717a8..ac687050d7cb1 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcException.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcException.java
@@ -40,7 +40,7 @@ public class RpcException extends IOException {
* @param message message.
* @param cause that cause this exception
* @param cause the cause (can be retried by the {@link #getCause()} method).
- * (A <tt>null</tt> value is permitted, and indicates that the cause
+ * (A <code>null</code> value is permitted, and indicates that the cause
* is nonexistent or unknown.)
*/
RpcException(final String message, final Throwable cause) {
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcServerException.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcServerException.java
index ce4aac54b6cd2..31f62d4f06fe0 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcServerException.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcServerException.java
@@ -39,7 +39,7 @@ public RpcServerException(final String message) {
*
* @param message message.
* @param cause the cause (can be retried by the {@link #getCause()} method).
- * (A <tt>null</tt> value is permitted, and indicates that the cause
+ * (A <code>null</code> value is permitted, and indicates that the cause
* is nonexistent or unknown.)
*/
public RpcServerException(final String message, final Throwable cause) {
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/UnexpectedServerException.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/UnexpectedServerException.java
index f00948d5d5065..c683010a88029 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/UnexpectedServerException.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/UnexpectedServerException.java
@@ -39,7 +39,7 @@ public class UnexpectedServerException extends RpcException {
* @param message message.
* @param cause that cause this exception
* @param cause the cause (can be retried by the {@link #getCause()} method).
- * (A <tt>null</tt> value is permitted, and indicates that the cause
+ * (A <code>null</code> value is permitted, and indicates that the cause
* is nonexistent or unknown.)
*/
UnexpectedServerException(final String message, final Throwable cause) {
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
index c49706d66f27d..9320678439064 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
@@ -83,9 +83,9 @@ public class NetUtils {
/**
* Get the socket factory for the given class according to its
* configuration parameter
- * <tt>hadoop.rpc.socket.factory.class.&lt;ClassName&gt;</tt>. When no
+ * <code>hadoop.rpc.socket.factory.class.&lt;ClassName&gt;</code>. When no
 * such parameter exists then fall back on the default socket factory as
- * configured by <tt>hadoop.rpc.socket.factory.class.default</tt>. If
+ * configured by <code>hadoop.rpc.socket.factory.class.default</code>. If
* this default socket factory is not configured, then fall back on the JVM
* default socket factory.
*
@@ -111,7 +111,7 @@ public static SocketFactory getSocketFactory(Configuration conf,
/**
* Get the default socket factory as specified by the configuration
- * parameter <tt>hadoop.rpc.socket.factory.default</tt>
+ * parameter <code>hadoop.rpc.socket.factory.default</code>
*
* @param conf the configuration
* @return the default socket factory as specified in the configuration or
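The three-level fallback described above (per-class key, then the default key, then the JVM default) can be sketched with a plain `Map` standing in for a Hadoop `Configuration` — this is a hypothetical stand-in, not `NetUtils` itself, and the factory names are illustrative:

```java
import java.util.Map;

public class FactoryLookupSketch {
    static final String JVM_DEFAULT = "jvm-default-factory";

    static String socketFactoryFor(Class<?> clazz, Map<String, String> conf) {
        // 1. hadoop.rpc.socket.factory.class.<ClassName>
        String specific =
            conf.get("hadoop.rpc.socket.factory.class." + clazz.getSimpleName());
        if (specific != null && !specific.isEmpty()) {
            return specific;
        }
        // 2. hadoop.rpc.socket.factory.class.default
        String def = conf.get("hadoop.rpc.socket.factory.class.default");
        if (def != null && !def.isEmpty()) {
            return def;
        }
        // 3. fall back on the JVM default socket factory
        return JVM_DEFAULT;
    }

    public static void main(String[] args) {
        Map<String, String> conf =
            Map.of("hadoop.rpc.socket.factory.class.default", "std-factory");
        System.out.println(socketFactoryFor(String.class, conf));     // std-factory
        System.out.println(socketFactoryFor(String.class, Map.of())); // jvm-default-factory
    }
}
```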
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/AccessControlException.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/AccessControlException.java
index d0a3620d6d4b2..1ed121f9616da 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/AccessControlException.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/AccessControlException.java
@@ -48,10 +48,10 @@ public AccessControlException() {
/**
* Constructs a new exception with the specified cause and a detail
- * message of <tt>(cause==null ? null : cause.toString())</tt> (which
- * typically contains the class and detail message of <tt>cause</tt>).
+ * message of <code>(cause==null ? null : cause.toString())</code> (which
+ * typically contains the class and detail message of <code>cause</code>).
 * @param cause the cause (which is saved for later retrieval by the
- * {@link #getCause()} method). (A <tt>null</tt> value is
+ * {@link #getCause()} method). (A <code>null</code> value is
* permitted, and indicates that the cause is nonexistent or
* unknown.)
*/
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/AuthorizationException.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/AuthorizationException.java
index 79c7d1814da28..e9c3323bb5b12 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/AuthorizationException.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/AuthorizationException.java
@@ -44,10 +44,10 @@ public AuthorizationException(String message) {
/**
* Constructs a new exception with the specified cause and a detail
- * message of <tt>(cause==null ? null : cause.toString())</tt> (which
- * typically contains the class and detail message of <tt>cause</tt>).
+ * message of <code>(cause==null ? null : cause.toString())</code> (which
+ * typically contains the class and detail message of <code>cause</code>).
 * @param cause the cause (which is saved for later retrieval by the
- * {@link #getCause()} method). (A <tt>null</tt> value is
+ * {@link #getCause()} method). (A <code>null</code> value is
* permitted, and indicates that the cause is nonexistent or
* unknown.)
*/
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/InstrumentedReadLock.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/InstrumentedReadLock.java
index 18f6ccfdb176b..85ad7d9a45e71 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/InstrumentedReadLock.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/InstrumentedReadLock.java
@@ -26,7 +26,7 @@
import org.slf4j.Logger;
/**
- * This is a wrap class of a <tt>ReadLock</tt>.
+ * This is a wrap class of a <code>ReadLock</code>.
* It extends the class {@link InstrumentedLock}, and can be used to track
* whether a specific read lock is being held for too long and log
* warnings if so.
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/InstrumentedReadWriteLock.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/InstrumentedReadWriteLock.java
index 758f1ff87cff7..caceb31cfb552 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/InstrumentedReadWriteLock.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/InstrumentedReadWriteLock.java
@@ -28,7 +28,7 @@
/**
* This is a wrap class of a {@link ReentrantReadWriteLock}.
* It implements the interface {@link ReadWriteLock}, and can be used to
- * create instrumented <tt>ReadLock</tt> and <tt>WriteLock</tt>.
+ * create instrumented <code>ReadLock</code> and <code>WriteLock</code>.
*/
@InterfaceAudience.Private
@InterfaceStability.Unstable
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/InstrumentedWriteLock.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/InstrumentedWriteLock.java
index 667b1ca6a4b60..0f99504161109 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/InstrumentedWriteLock.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/InstrumentedWriteLock.java
@@ -26,7 +26,7 @@
import org.slf4j.Logger;
/**
- * This is a wrap class of a <tt>WriteLock</tt>.
+ * This is a wrap class of a <code>WriteLock</code>.
* It extends the class {@link InstrumentedLock}, and can be used to track
* whether a specific write lock is being held for too long and log
* warnings if so.
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
index b620ba73222ad..d76e36a30d021 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
@@ -231,7 +231,7 @@ public static String uriToString(URI[] uris){
/**
* @param str
* The string array to be parsed into an URI array.
- * @return <tt>null</tt> if str is <tt>null</tt>, else the URI array
+ * @return <code>null</code> if str is <code>null</code>, else the URI array
* equivalent to str.
* @throws IllegalArgumentException
* If any string in str violates RFC 2396.
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/MiniRPCBenchmark.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/MiniRPCBenchmark.java
index 2290270bfba1a..f04d978db8728 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/MiniRPCBenchmark.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/MiniRPCBenchmark.java
@@ -54,18 +54,18 @@
* The benchmark supports three authentication methods:
*
 * simple - no authentication. In order to enter this mode
 * the configuration file core-site.xml should specify
- * <tt>hadoop.security.authentication = simple</tt>.
+ * <code>hadoop.security.authentication = simple</code>.
 * This is the default mode.
 * kerberos - kerberos authentication. In order to enter this mode
 * the configuration file core-site.xml should specify
- * <tt>hadoop.security.authentication = kerberos</tt> and
+ * <code>hadoop.security.authentication = kerberos</code> and
 * the argument string should provide qualifying
- * <tt>keytabFile</tt> and <tt>userName</tt> parameters.
+ * <code>keytabFile</code> and <code>userName</code> parameters.
 * delegation token - authentication using delegation token.
 * In order to enter this mode the benchmark should provide all the
 * mentioned parameters for kerberos authentication plus the
- * <tt>useToken</tt> argument option.
+ * <code>useToken</code> argument option.
 *
 * Input arguments:
*
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderLocalLegacy.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderLocalLegacy.java
index e48ace6c22754..18b220b7a584a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderLocalLegacy.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderLocalLegacy.java
@@ -503,7 +503,7 @@ public synchronized int read(ByteBuffer buf) throws IOException {
* byte buffer to write bytes to. If checksums are not required, buf
* can have any number of bytes remaining, otherwise there must be a
* multiple of the checksum chunk size remaining.
- * @return <tt>max(min(totalBytesRead, len) - offsetFromChunkBoundary, 0)</tt>
+ * @return <code>max(min(totalBytesRead, len) - offsetFromChunkBoundary, 0)</code>
* that is, the the number of useful bytes (up to the amount
* requested) readable from the buffer by the client.
*/
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
index a6ca697fa63f7..a79807e73aee5 100755
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
@@ -108,7 +108,7 @@ synchronized List
 /**
- * @return <tt>true</tt> if the queue contains the specified element.
+ * @return <code>true</code> if the queue contains the specified element.
*/
synchronized boolean contains(E e) {
return blockq.contains(e);
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
index 7bf5879971615..2118b1d03fffa 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
@@ -369,7 +369,7 @@ String getFullPathName(Long nodeId) {
}
/**
- * Get the key name for an encryption zone. Returns null if <tt>iip</tt> is
+ * Get the key name for an encryption zone. Returns null if <code>iip</code> is
 * not within an encryption zone.
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DiffList.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DiffList.java
index 80ef538000977..7ad3981d9c4f2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DiffList.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DiffList.java
@@ -100,7 +100,7 @@ public List
 /**
 * @throws IndexOutOfBoundsException if the index is out of range
- * (<tt>index < 0 || index >= size()</tt>)
+ * (<code>index < 0 || index >= size()</code>)
*/
T get(int index);
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FileDistributionCalculator.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FileDistributionCalculator.java
index fbeea0f673c0e..6586d42f92d64 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FileDistributionCalculator.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FileDistributionCalculator.java
@@ -37,20 +37,20 @@
/**
* This is the tool for analyzing file sizes in the namespace image. In order to
- * run the tool one should define a range of integers <tt>[0, maxSize]</tt> by
- * specifying <tt>maxSize</tt> and a <tt>step</tt>. The range of integers is
- * divided into segments of size <tt>step</tt>:
- * <tt>[0, s1, ..., sn-1, maxSize]</tt>, and the visitor
+ * run the tool one should define a range of integers <code>[0, maxSize]</code> by
+ * specifying <code>maxSize</code> and a <code>step</code>. The range of integers is
+ * divided into segments of size <code>step</code>:
+ * <code>[0, s1, ..., sn-1, maxSize]</code>, and the visitor
 * calculates how many files in the system fall into each segment
- * <tt>[si-1, si)</tt>. Note that files larger than
- * <tt>maxSize</tt> always fall into the very last segment.
+ * <code>[si-1, si)</code>. Note that files larger than
+ * <code>maxSize</code> always fall into the very last segment.
*
* Input.
*
 * filename specifies the location of the image file;
 * maxSize determines the range [0, maxSize] of files sizes
 * considered by the visitor;
 * step the range is divided into segments of size step.
 *
 * Output.
 * The output file is formatted as a tab separated two column
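The bucketing rule described in this hunk can be made concrete with a short sketch — this is a worked illustration of the documented segmentation, not the tool's actual code, and the method name is hypothetical:

```java
public class FileDistributionSketch {
    /** Index of the segment a file of the given size falls into, where the
     *  range [0, maxSize] is cut into segments of size `step` and anything
     *  above maxSize is clamped into the very last segment. */
    static int segmentFor(long fileSize, long maxSize, long step) {
        int lastSegment = (int) ((maxSize + step - 1) / step); // ceil(maxSize/step)
        if (fileSize > maxSize) {
            return lastSegment; // oversized files go to the last segment
        }
        return (int) ((fileSize + step - 1) / step); // ceil(fileSize/step)
    }

    public static void main(String[] args) {
        // maxSize = 100, step = 10 → segments [0], (0,10], (10,20], ..., (90,100]
        System.out.println(segmentFor(0, 100, 10));    // 0
        System.out.println(segmentFor(15, 100, 10));   // 2
        System.out.println(segmentFor(100, 100, 10));  // 10
        System.out.println(segmentFor(5000, 100, 10)); // 10 (clamped)
    }
}
```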
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FileDistributionVisitor.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FileDistributionVisitor.java
index 7dcc29998f335..36e61d811b88c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FileDistributionVisitor.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FileDistributionVisitor.java
@@ -28,20 +28,20 @@
* Description.
* This is the tool for analyzing file sizes in the namespace image.
* In order to run the tool one should define a range of integers
- * <tt>[0, maxSize]</tt> by specifying <tt>maxSize</tt> and a <tt>step</tt>.
- * The range of integers is divided into segments of size <tt>step</tt>:
- * <tt>[0, s1, ..., sn-1, maxSize]</tt>,
+ * <code>[0, maxSize]</code> by specifying <code>maxSize</code> and a <code>step</code>.
+ * The range of integers is divided into segments of size <code>step</code>:
+ * <code>[0, s1, ..., sn-1, maxSize]</code>,
 * and the visitor calculates how many files in the system fall into
- * each segment <tt>[si-1, si)</tt>.
- * Note that files larger than <tt>maxSize</tt> always fall into
+ * each segment <code>[si-1, si)</code>.
+ * Note that files larger than <code>maxSize</code> always fall into
* the very last segment.
*
* Input.
*
- *
*
 * filename specifies the location of the image file;
 * maxSize determines the range [0, maxSize] of file sizes
 * considered by the visitor;
 * step the range is divided into segments of size step.
 *
 * Output.
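The bucketing rule this javadoc describes can be sketched as a small standalone class (the class and member names below are illustrative, not the visitor's actual fields): sizes in `[0, maxSize]` index segments by `size / step`, and anything above `maxSize` lands in the very last segment.

```java
// Illustrative sketch of the file-size histogram described in the
// FileDistribution javadoc; not Hadoop's actual FileDistributionVisitor.
public final class FileSizeHistogram {
  private final long maxSize;
  private final long step;
  private final long[] distribution;

  public FileSizeHistogram(long maxSize, long step) {
    this.maxSize = maxSize;
    this.step = step;
    // Segments [0, s1, ..., sn-1, maxSize]: one bucket per step, plus the tail.
    this.distribution = new long[(int) (maxSize / step) + 1];
  }

  /** Segment index for a file; sizes above maxSize go to the last segment. */
  public int segmentOf(long fileSize) {
    if (fileSize > maxSize) {
      return distribution.length - 1;
    }
    return (int) (fileSize / step);
  }

  /** Count one file into its segment. */
  public void visit(long fileSize) {
    distribution[segmentOf(fileSize)]++;
  }

  public long count(int segment) {
    return distribution[segment];
  }
}
```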
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java
index 3e9231f476005..55f926d23c8ea 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java
@@ -110,7 +110,7 @@ public void tearDown() throws IOException {
* Name-node should stay in automatic safe-mode.
* dfs.namenode.safemode.extension and
 * verify that the name-node is still in safe mode.
 *
 * READ_ONLY_SHARED replicas are not counted towards the overall
* replication count, but are included as replica locations returned to clients for reads.
*/
@Test
@@ -221,7 +221,7 @@ public void testReplicaCounting() throws Exception {
}
/**
- * Verify that the NameNode is able to still use READ_ONLY_SHARED replicas even
+ * Verify that the NameNode is able to still use READ_ONLY_SHARED replicas even
* when the single NORMAL replica is offline (and the effective replication count is 0).
*/
@Test
@@ -253,7 +253,7 @@ public void testNormalReplicaOffline() throws Exception {
}
/**
- * Verify that corrupt READ_ONLY_SHARED replicas aren't counted
+ * Verify that corrupt READ_ONLY_SHARED replicas aren't counted
* towards the corrupt replicas total.
*/
@Test
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/preemption/AMPreemptionPolicy.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/preemption/AMPreemptionPolicy.java
index 85211f958d6c3..a49700d8e5587 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/preemption/AMPreemptionPolicy.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/preemption/AMPreemptionPolicy.java
@@ -109,7 +109,7 @@ public abstract class Context {
* TaskId}. Assigning a null is akin to remove all previous checkpoints for
* this task.
* @param taskId TaskID
- * @param cid Checkpoint to assign or null to remove it.
+ * @param cid Checkpoint to assign or null to remove it.
*/
public void setCheckpointID(TaskId taskId, TaskCheckpointID cid);
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileOutputFormat.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileOutputFormat.java
index 3932e5849ea14..a89f1f1cee999 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileOutputFormat.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileOutputFormat.java
@@ -185,7 +185,7 @@ public static Path getOutputPath(JobConf conf) {
* is {@link FileOutputCommitter}. If OutputCommitter is not
* a FileOutputCommitter, the task's temporary output
* directory is same as {@link #getOutputPath(JobConf)} i.e.
- * ${mapreduce.output.fileoutputformat.outputdir}$
+ * ${mapreduce.output.fileoutputformat.outputdir}$
*
 * Some applications need to create/write-to side-files, which differ from
 * the actual job-outputs.
@@ -194,27 +194,27 @@ public static Path getOutputPath(JobConf conf) {
 * (running simultaneously e.g. speculative tasks) trying to open/write-to the
 * same file (path) on HDFS. Hence the application-writer will have to pick
 * unique names per task-attempt (e.g. using the attemptid, say
- * attempt_200709221812_0001_m_000000_0), not just per TIP.
+ * attempt_200709221812_0001_m_000000_0), not just per TIP.
*
* To get around this the Map-Reduce framework helps the application-writer
* out by maintaining a special
- * ${mapreduce.output.fileoutputformat.outputdir}/_temporary/_${taskid}
+ * ${mapreduce.output.fileoutputformat.outputdir}/_temporary/_${taskid}
* sub-directory for each task-attempt on HDFS where the output of the
* task-attempt goes. On successful completion of the task-attempt the files
- * in the ${mapreduce.output.fileoutputformat.outputdir}/_temporary/_${taskid} (only)
- * are promoted to ${mapreduce.output.fileoutputformat.outputdir}. Of course, the
+ * in the ${mapreduce.output.fileoutputformat.outputdir}/_temporary/_${taskid} (only)
+ * are promoted to ${mapreduce.output.fileoutputformat.outputdir}. Of course, the
* framework discards the sub-directory of unsuccessful task-attempts. This
* is completely transparent to the application.
 * The application-writer can take advantage of this by creating any
- * side-files required in ${mapreduce.task.output.dir} during execution
+ * side-files required in ${mapreduce.task.output.dir} during execution
* of his reduce-task i.e. via {@link #getWorkOutputPath(JobConf)}, and the
* framework will move them out similarly - thus she doesn't have to pick
* unique paths per task-attempt.
- * Note: the value of ${mapreduce.task.output.dir} during
+ * Note: the value of ${mapreduce.task.output.dir} during
* execution of a particular task-attempt is actually
- * ${mapreduce.output.fileoutputformat.outputdir}/_temporary/_{$taskid}, and this value is
+ * ${mapreduce.output.fileoutputformat.outputdir}/_temporary/_{$taskid}, and this value is
* set by the map-reduce framework. So, just create any side-files in the
* path returned by {@link #getWorkOutputPath(JobConf)} from map/reduce
* task to take advantage of this feature.
- * The uri can contain 2 special parameters: $jobId and
- * $jobStatus. Those, if present, are replaced by the job's
+ * The uri can contain 2 special parameters: $jobId and
+ * $jobStatus. Those, if present, are replaced by the job's
 * identifier and completion-status respectively.
 *
 * This is typically used by application-writers to implement chaining of
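The promote-on-success behaviour described above (per-attempt `_temporary/_${taskid}` sub-directory, promoted into the output directory only on success) can be sketched with plain `java.nio.file`. The class and method names here are hypothetical; Hadoop's FileOutputCommitter performs the real version of this against HDFS.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Illustrative sketch of the task-attempt commit protocol; not Hadoop code.
public class SideFileCommitSketch {
  /** Per-attempt work dir: outputDir/_temporary/_<taskid>. */
  public static Path attemptDir(Path outputDir, String taskId) {
    return outputDir.resolve("_temporary").resolve("_" + taskId);
  }

  /** On success, promote the attempt's files (only) into the output dir. */
  public static void commit(Path outputDir, String taskId) throws IOException {
    Path work = attemptDir(outputDir, taskId);
    try (DirectoryStream<Path> files = Files.newDirectoryStream(work)) {
      for (Path f : files) {
        Files.move(f, outputDir.resolve(f.getFileName()),
            StandardCopyOption.REPLACE_EXISTING);
      }
    }
  }

  /** Discard the sub-directory of an unsuccessful task-attempt. */
  public static void abort(Path outputDir, String taskId) throws IOException {
    Path work = attemptDir(outputDir, taskId);
    try (DirectoryStream<Path> files = Files.newDirectoryStream(work)) {
      for (Path f : files) {
        Files.delete(f);
      }
    }
    Files.delete(work);
  }
}
```

Because each attempt writes under its own sub-directory, speculative attempts never collide, which is the point the javadoc is making.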
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapRunnable.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapRunnable.java
index 7aa4f336ae522..e5f585e0fbc8f 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapRunnable.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapRunnable.java
@@ -37,7 +37,7 @@ public interface MapRunnable
 * Start mapping input <key, value> pairs.
 *
 * Mapping of input records to output records is complete when this method
 * returns.
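The contract stated here can be sketched as a plain-Java stand-in for the runner loop (this is an illustration of the contract, not the Hadoop interface): the runner hands every input pair to the mapper, and mapping is complete exactly when the method returns.

```java
import java.util.Iterator;
import java.util.Map;
import java.util.function.BiConsumer;

// Illustrative MapRunnable-style loop; not org.apache.hadoop.mapred.MapRunnable.
public class MapRunnerSketch {
  /** Feed each (key, value) pair to the mapper; done when this returns. */
  public static <K, V> int run(Iterator<Map.Entry<K, V>> input,
                               BiConsumer<K, V> mapper) {
    int records = 0;
    while (input.hasNext()) {
      Map.Entry<K, V> e = input.next();
      mapper.accept(e.getKey(), e.getValue());
      records++;
    }
    return records;
  }
}
```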
*
 * @return true if the Job was added.
*/
public synchronized boolean addDependingJob(Job dependingJob) {
return super.addDependingJob(dependingJob);
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/CompositeInputFormat.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/CompositeInputFormat.java
index 40690e7541fdb..226363ac8caae 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/CompositeInputFormat.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/CompositeInputFormat.java
@@ -38,10 +38,10 @@
* and partitioned the same way.
*
* A user may define new join types by setting the property
- * mapred.join.define.<ident> to a classname. In the expression
- * mapred.join.expr, the identifier will be assumed to be a
+ * mapred.join.define.<ident> to a classname. In the expression
+ * mapred.join.expr, the identifier will be assumed to be a
* ComposableRecordReader.
- * mapred.join.keycomparator can be a classname used to compare keys
+ * mapred.join.keycomparator can be a classname used to compare keys
* in the join.
* @see #setFormat
* @see JoinRecordReader
@@ -66,9 +66,9 @@ public CompositeInputFormat() { }
* class ::= @see java.lang.Class#forName(java.lang.String)
* path ::= @see org.apache.hadoop.fs.Path#Path(java.lang.String)
* }
- * Reads expression from the mapred.join.expr property and
- * user-supplied join types from mapred.join.define.<ident>
- * types. Paths supplied to tbl are given as input paths to the
+ * Reads expression from the mapred.join.expr property and
+ * user-supplied join types from mapred.join.define.<ident>
+ * types. Paths supplied to tbl are given as input paths to the
* InputFormat class listed.
* @see #compose(java.lang.String, java.lang.Class, java.lang.String...)
*/
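The expression grammar above composes sources as `tbl(<class>,"<path>")` terms wrapped in a join op. A standalone sketch of assembling such an expression string (method names are illustrative; Hadoop's CompositeInputFormat.compose produces strings of this shape, which are then read back from mapred.join.expr):

```java
// Illustrative builder for mapred.join.expr-style expressions; not the real
// CompositeInputFormat.compose.
public class JoinExprSketch {
  /** One source table: tbl(<class>,"<path>"). */
  public static String tbl(String inputFormatClass, String path) {
    return "tbl(" + inputFormatClass + ",\"" + path + "\")";
  }

  /**
   * Apply a join op ("inner", "outer", "override", or a user-defined
   * identifier registered via mapred.join.define.<ident>) to several tables
   * sharing one InputFormat class.
   */
  public static String compose(String op, String inputFormatClass,
                               String... paths) {
    StringBuilder sb = new StringBuilder(op).append('(');
    for (int i = 0; i < paths.length; i++) {
      if (i > 0) {
        sb.append(',');
      }
      sb.append(tbl(inputFormatClass, paths[i]));
    }
    return sb.append(')').toString();
  }
}
```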
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/CompositeRecordReader.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/CompositeRecordReader.java
index 0684268d2d79f..1bb0745d918da 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/CompositeRecordReader.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/CompositeRecordReader.java
@@ -61,8 +61,8 @@ public abstract class CompositeRecordReader<
protected abstract boolean combine(Object[] srcs, TupleWritable value);
/**
- * Create a RecordReader with capacity children to position
- * id in the parent reader.
+ * Create a RecordReader with capacity children to position
+ * id in the parent reader.
* The id of a root CompositeRecordReader is -1 by convention, but relying
* on this is not recommended.
*/
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/OverrideRecordReader.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/OverrideRecordReader.java
index 1671e6e895684..d36b776a94409 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/OverrideRecordReader.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/OverrideRecordReader.java
@@ -31,7 +31,7 @@
/**
* Prefer the "rightmost" data source for this key.
- * For example, override(S1,S2,S3) will prefer values
+ * For example, override(S1,S2,S3) will prefer values
* from S3 over S2, and values from S2 over S1 for all keys
* emitted from all sources.
*/
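The "rightmost source wins" rule can be sketched in a few lines over in-memory maps (purely illustrative; the real OverrideRecordReader works over sorted record streams, not maps):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative override(S1,S2,S3) semantics: for each key, the value from the
// rightmost source that emits it wins.
public class OverrideSketch {
  public static <K, V> Map<K, V> override(List<Map<K, V>> sources) {
    Map<K, V> result = new LinkedHashMap<>();
    // Later (more rightward) sources overwrite earlier ones.
    for (Map<K, V> source : sources) {
      result.putAll(source);
    }
    return result;
  }
}
```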
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/Parser.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/Parser.java
index 3c7a991fd045e..96792c1e6662a 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/Parser.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/join/Parser.java
@@ -275,7 +275,7 @@ public WNode(String ident) {
/**
* Let the first actual define the InputFormat and the second define
- * the mapred.input.dir property.
+ * the mapred.input.dir property.
*/
public void parse(List
 * It must be the case that for R reduces, there are R-1
* keys in the SequenceFile.
* @deprecated Use
* {@link #setPartitionFile(Configuration, Path)}
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/jobcontrol/ControlledJob.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/jobcontrol/ControlledJob.java
index 16ba22bfb604e..196f731e18a8d 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/jobcontrol/ControlledJob.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/jobcontrol/ControlledJob.java
@@ -205,7 +205,7 @@ public List
 * @return true if the Job was added.
*/
public synchronized boolean addDependingJob(ControlledJob dependingJob) {
if (this.state == State.WAITING) { //only allowed to add jobs when waiting
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/CompositeInputFormat.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/CompositeInputFormat.java
index 6189a271bc3cb..b0b459afe2a0b 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/CompositeInputFormat.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/CompositeInputFormat.java
@@ -41,10 +41,10 @@
* and partitioned the same way.
*
* A user may define new join types by setting the property
- * mapreduce.join.define.<ident> to a classname.
- * In the expression mapreduce.join.expr, the identifier will be
+ * mapreduce.join.define.<ident> to a classname.
+ * In the expression mapreduce.join.expr, the identifier will be
* assumed to be a ComposableRecordReader.
- * mapreduce.join.keycomparator can be a classname used to compare
+ * mapreduce.join.keycomparator can be a classname used to compare
* keys in the join.
* @see #setFormat
* @see JoinRecordReader
@@ -73,9 +73,9 @@ public CompositeInputFormat() { }
* class ::= @see java.lang.Class#forName(java.lang.String)
* path ::= @see org.apache.hadoop.fs.Path#Path(java.lang.String)
* }
- * Reads expression from the mapreduce.join.expr property and
- * user-supplied join types from mapreduce.join.define.<ident>
- * types. Paths supplied to tbl are given as input paths to the
+ * Reads expression from the mapreduce.join.expr property and
+ * user-supplied join types from mapreduce.join.define.<ident>
+ * types. Paths supplied to tbl are given as input paths to the
* InputFormat class listed.
* @see #compose(java.lang.String, java.lang.Class, java.lang.String...)
*/
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/CompositeRecordReader.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/CompositeRecordReader.java
index 40f3570cb59a2..45e3224a3fe08 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/CompositeRecordReader.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/CompositeRecordReader.java
@@ -67,8 +67,8 @@ public abstract class CompositeRecordReader<
protected X value;
/**
- * Create a RecordReader with capacity children to position
- * id in the parent reader.
+ * Create a RecordReader with capacity children to position
+ * id in the parent reader.
* The id of a root CompositeRecordReader is -1 by convention, but relying
* on this is not recommended.
*/
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/OverrideRecordReader.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/OverrideRecordReader.java
index 5678445f11ba8..2396e9daa42da 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/OverrideRecordReader.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/OverrideRecordReader.java
@@ -33,7 +33,7 @@
/**
* Prefer the "rightmost" data source for this key.
- * For example, override(S1,S2,S3) will prefer values
+ * For example, override(S1,S2,S3) will prefer values
* from S3 over S2, and values from S2 over S1 for all keys
* emitted from all sources.
*/
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/Parser.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/Parser.java
index c557e14136622..68cf31025943f 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/Parser.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/Parser.java
@@ -290,7 +290,7 @@ public WNode(String ident) {
/**
* Let the first actual define the InputFormat and the second define
- * the mapred.input.dir property.
+ * the mapred.input.dir property.
*/
@Override
public void parse(List
 * [<child1>,<child2>,...,<childn>]
*/
public String toString() {
StringBuffer buf = new StringBuffer("[");
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputFormat.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputFormat.java
index 2b1f7e37ebe75..5dd572835ccff 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputFormat.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputFormat.java
@@ -208,15 +208,15 @@ public static Path getOutputPath(JobContext job) {
* (running simultaneously e.g. speculative tasks) trying to open/write-to the
* same file (path) on HDFS. Hence the application-writer will have to pick
* unique names per task-attempt (e.g. using the attemptid, say
- * attempt_200709221812_0001_m_000000_0), not just per TIP.
+ * attempt_200709221812_0001_m_000000_0), not just per TIP.
*
* To get around this the Map-Reduce framework helps the application-writer
* out by maintaining a special
- * ${mapreduce.output.fileoutputformat.outputdir}/_temporary/_${taskid}
+ * ${mapreduce.output.fileoutputformat.outputdir}/_temporary/_${taskid}
* sub-directory for each task-attempt on HDFS where the output of the
* task-attempt goes. On successful completion of the task-attempt the files
- * in the ${mapreduce.output.fileoutputformat.outputdir}/_temporary/_${taskid} (only)
- * are promoted to ${mapreduce.output.fileoutputformat.outputdir}. Of course, the
+ * in the ${mapreduce.output.fileoutputformat.outputdir}/_temporary/_${taskid} (only)
+ * are promoted to ${mapreduce.output.fileoutputformat.outputdir}. Of course, the
* framework discards the sub-directory of unsuccessful task-attempts. This
* is completely transparent to the application.
 * If the keytype is BinaryComparable and
 * total.order.partitioner.natural.order is not false, a trie
+ * of the first total.order.partitioner.max.trie.depth(2) + 1 bytes
* will be built. Otherwise, keys will be located using a binary search of
* the partition keyset using the {@link org.apache.hadoop.io.RawComparator}
* defined for this job. The input file must be sorted with the same
@@ -128,7 +128,7 @@ public int getPartition(K key, V value, int numPartitions) {
/**
* Set the path to the SequenceFile storing the sorted partition keyset.
- * It must be the case that for R reduces, there are R-1
+ * It must be the case that for R reduces, there are R-1
* keys in the SequenceFile.
*/
public static void setPartitionFile(Configuration conf, Path p) {
@@ -156,7 +156,7 @@ interface Node
 * total.order.partitioner.max.trie.depth
* bytes.
*/
static abstract class TrieNode implements Node
 * total.order.partitioner.natural.order,
* search the partition keyset with a binary search.
*/
class BinarySearchNode implements Node
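The binary-search fallback described here reduces to `Arrays.binarySearch` over the R-1 sorted split keys: a key found at index i goes to partition i+1, and a missing key goes to its insertion point, so R-1 keys yield R partitions. A standalone sketch over Strings (the real node uses the job's RawComparator; the convention that a key equal to a split key goes to the higher partition is an assumption mirrored from the lookup rule above):

```java
import java.util.Arrays;

// Illustrative partition lookup by binary search; not the real BinarySearchNode.
public class PartitionSketch {
  /** splitKeys must be sorted and contain R-1 keys for R reduces. */
  public static int findPartition(String[] splitKeys, String key) {
    // binarySearch returns the match index, or -(insertionPoint) - 1.
    int pos = Arrays.binarySearch(splitKeys, key) + 1;
    return (pos < 0) ? -pos : pos;
  }
}
```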
- * type:key
+ * type:key
*
 * The values are accumulated according to the types:
 * s: - string, concatenate
 * f: - float, summ
 * l: - long, summ
* The map task is to get the
- * key, which contains the file name, and the
- * value, which is the offset within the file.
+ * key, which contains the file name, and the
+ * value, which is the offset within the file.
*
* The parameters are passed to the abstract method
* {@link #doIO(Reporter,String,long)}, which performs the io operation,
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/JHLogAnalyzer.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/JHLogAnalyzer.java
index 5e3e745f0229c..9eb2d42f5d042 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/JHLogAnalyzer.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/JHLogAnalyzer.java
@@ -76,7 +76,7 @@
* specific attempt A during hour h.
* The tool then sums all slots for all attempts for every hour.
* The result is the slot hour utilization of the cluster:
- * slotTime(h) = SUMA slotTime(A,h).
+ * slotTime(h) = SUMA slotTime(A,h).
*
 * Log analyzer calculates slot hours for MAP and REDUCE
 * attempts separately.
@@ -88,8 +88,8 @@
 *
* Map-reduce clusters are usually configured to have a fixed number of MAP
* and REDUCE slots per node. Thus the maximal possible number of slots on
- * the cluster is total_slots = total_nodes * slots_per_node.
- * Effective slot hour cannot exceed total_slots for successful
+ * the cluster is total_slots = total_nodes * slots_per_node.
+ * Effective slot hour cannot exceed total_slots for successful
* attempts.
*
 * Pending time characterizes the wait time of attempts.
@@ -106,39 +106,39 @@
 * The following input parameters can be specified in the argument string
 * to the job log analyzer:
 *
 * -historyDir inputDir specifies the location of the directory
 * where analyzer will be looking for job history log files.
 * -resFile resultFile the name of the result file.
 * -usersIncluded | -usersExcluded userList slot utilization and
 * pending time can be calculated for all or for all but the specified users.
 * userList is a comma or semicolon separated list of users.
 * -gzip is used if history log files are compressed.
 * Only {@link GzipCodec} is currently supported.
 * -jobDelimiter pattern one can concatenate original log files into
 * larger file(s) with the specified delimiter to recognize the end of the log
 * for one job from the next one. pattern is a java regular expression
 * {@link java.util.regex.Pattern}, which should match only the log delimiters.
 * ".!!FILE=.*!!" matches delimiters, which contain
 * the original history log file names in the following form:
 * "$!!FILE=my.job.tracker.com_myJobId_user_wordcount.log!!"
 * -clean cleans up default directories used by the analyzer.
 * -test test one file locally and exit;
 * does not require map-reduce.
 * -help print usage.
 *
 * Each record in the result file consists of four fields:
 * SERIES, PERIOD, TYPE, SLOT_HOUR.
 * SERIES one of the four statistical series;
 * PERIOD the start of the time interval in the following format:
 * "yyyy-mm-dd hh:mm:ss";
 * TYPE the slot type, e.g. MAP or REDUCE;
 * SLOT_HOUR the value of the slot usage during this
 * time interval.
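The hourly aggregation slotTime(h) = SUM over attempts A of slotTime(A,h) amounts to summing per-attempt slot seconds grouped by hour. A minimal sketch, with attempt records reduced to (hour, seconds) pairs (the real analyzer derives these from job history logs, and also splits them by MAP/REDUCE type):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative slot-hour aggregation; not the real JHLogAnalyzer.
public class SlotHourSketch {
  /**
   * attemptHourSeconds rows are {hour, slotSeconds} for one attempt in that
   * hour; the result maps each hour to the summed slot time of all attempts.
   */
  public static Map<Integer, Long> slotTimePerHour(long[][] attemptHourSeconds) {
    Map<Integer, Long> perHour = new HashMap<>();
    for (long[] rec : attemptHourSeconds) {
      perHour.merge((int) rec[0], rec[1], Long::sum);
    }
    return perHour;
  }
}
```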
+