
HDFS-2069. Incorrect default trash interval value in the docs. Contributed by Harsh J Chouraria

git-svn-id: https://svn.apache.org/repos/asf/hadoop/hdfs/trunk@1134955 13f79535-47bb-0310-9956-ffa450edef68
1 parent a4910f2 commit b2d2a3262c587638db04c2991d48656b3d06275c @elicollins committed Jun 12, 2011
Showing with 6 additions and 4 deletions.
  1. +3 −0 CHANGES.txt
  2. +3 −4 src/docs/src/documentation/content/xdocs/hdfs_design.xml
@@ -732,6 +732,9 @@ Trunk (unreleased changes)
HDFS-2067. Bump DATA_TRANSFER_VERSION constant in trunk after introduction
of protocol buffers in the protocol. (szetszwo via todd)

+ HDFS-2069. Incorrect default trash interval value in the docs.
+ (Harsh J Chouraria via eli)
+
Release 0.22.0 - Unreleased

INCOMPATIBLE CHANGES
@@ -391,7 +391,7 @@
<title> Replication Pipelining </title>
<p>
When a client is writing data to an HDFS file with a replication factor of 3, the NameNode retrieves a list of DataNodes using a replication target choosing algorithm.
- This list contains the DataNodes that will host a replica of that block. The client then writes to the first DataNode. The first DataNode starts receiving the data in small portions (4 KB),
+ This list contains the DataNodes that will host a replica of that block. The client then writes to the first DataNode. The first DataNode starts receiving the data in small portions (64 KB, configurable),
writes each portion to its local repository and transfers that portion to the second DataNode in the list.
The second DataNode, in turn starts receiving each portion of the data block, writes that portion to its
repository and then flushes that portion to the third DataNode. Finally, the third DataNode writes the
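
For context on the 64 KB figure above: the portion ("packet") size is a client-side setting rather than a hardcoded constant. A minimal hdfs-site.xml sketch, assuming the 0.2x-era dfs.write.packet.size property (the property name is an assumption, not part of this commit):

  <!-- hdfs-site.xml, client side: size of each portion ("packet") the client
       sends along the write pipeline. Property name assumed from 0.2x-era
       Hadoop; 65536 bytes = 64 KB, matching the doc text above. -->
  <property>
    <name>dfs.write.packet.size</name>
    <value>65536</value>
  </property>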
@@ -498,9 +498,8 @@
If a user wants to undelete a file that he/she has deleted, he/she can navigate the <code>/trash</code>
directory and retrieve the file. The <code>/trash</code> directory contains only the latest copy of the file
that was deleted. The <code>/trash</code> directory is just like any other directory with one special
- feature: HDFS applies specified policies to automatically delete files from this directory. The current
- default policy is to delete files from <code>/trash</code> that are more than 6 hours old. In the future,
- this policy will be configurable through a well defined interface.
+ feature: HDFS applies specified policies to automatically delete files from this directory.
+ By default, the trash feature is disabled. It can be enabled by setting the <em>fs.trash.interval</em> property in core-site.xml to a non-zero value (set as minutes of retention required). The property needs to exist on both client and server side configurations.
</p>
</section>
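
The corrected text above comes down to a single core-site.xml property. A minimal sketch enabling trash; the property name and minute units come from the doc change itself, while the 360-minute value is only an illustration matching the old 6-hour policy:

  <!-- core-site.xml, needed in both client- and server-side configurations:
       keep deleted files in trash for 360 minutes (6 hours).
       0, the default, disables the trash feature entirely. -->
  <property>
    <name>fs.trash.interval</name>
    <value>360</value>
  </property>

Setting the value back to 0 restores the default behavior, in which deletes are immediate and permanent.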
