
HDFS-2069. Incorrect default trash interval value in the docs. Contributed by Harsh J Chouraria

git-svn-id: https://svn.apache.org/repos/asf/hadoop/hdfs/trunk@1134955 13f79535-47bb-0310-9956-ffa450edef68
elicollins committed Jun 12, 2011
1 parent a4910f2 commit b2d2a32
Showing 2 changed files with 6 additions and 4 deletions.
3 changes: 3 additions & 0 deletions CHANGES.txt
@@ -732,6 +732,9 @@ Trunk (unreleased changes)
     HDFS-2067. Bump DATA_TRANSFER_VERSION constant in trunk after introduction
     of protocol buffers in the protocol. (szetszwo via todd)
 
+    HDFS-2069. Incorrect default trash interval value in the docs.
+    (Harsh J Chouraria via eli)
+
 Release 0.22.0 - Unreleased
 
   INCOMPATIBLE CHANGES
7 changes: 3 additions & 4 deletions src/docs/src/documentation/content/xdocs/hdfs_design.xml
@@ -391,7 +391,7 @@
 <title> Replication Pipelining </title>
 <p>
 When a client is writing data to an HDFS file with a replication factor of 3, the NameNode retrieves a list of DataNodes using a replication target choosing algorithm.
-This list contains the DataNodes that will host a replica of that block. The client then writes to the first DataNode. The first DataNode starts receiving the data in small portions (4 KB),
+This list contains the DataNodes that will host a replica of that block. The client then writes to the first DataNode. The first DataNode starts receiving the data in small portions (64 KB, configurable),
 writes each portion to its local repository and transfers that portion to the second DataNode in the list.
 The second DataNode, in turn starts receiving each portion of the data block, writes that portion to its
 repository and then flushes that portion to the third DataNode. Finally, the third DataNode writes the
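[Editor's note: the corrected passage describes the write pipeline the client triggers. A minimal, hypothetical Java sketch of such a write follows, using the standard FileSystem API; the path, buffer size, and 64 MB block size are assumptions for illustration and are not part of this commit.]

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PipelinedWrite {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // create(path, overwrite, bufferSize, replication, blockSize):
    // replication = 3 asks the NameNode for a three-DataNode pipeline,
    // as described in the passage above.
    FSDataOutputStream out = fs.create(new Path("/user/example/data.txt"),
        true, 4096, (short) 3, 64L * 1024 * 1024);

    // Bytes written here are buffered into small portions and streamed to
    // the first DataNode, which forwards each portion down the pipeline.
    out.writeBytes("streamed through DataNode 1 -> 2 -> 3\n");
    out.close();
    fs.close();
  }
}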
@@ -498,9 +498,8 @@
 If a user wants to undelete a file that he/she has deleted, he/she can navigate the <code>/trash</code>
 directory and retrieve the file. The <code>/trash</code> directory contains only the latest copy of the file
 that was deleted. The <code>/trash</code> directory is just like any other directory with one special
-feature: HDFS applies specified policies to automatically delete files from this directory. The current
-default policy is to delete files from <code>/trash</code> that are more than 6 hours old. In the future,
-this policy will be configurable through a well defined interface.
+feature: HDFS applies specified policies to automatically delete files from this directory.
+By default, the trash feature is disabled. It can be enabled by setting the <em>fs.trash.interval</em> property in core-site.xml to a non-zero value (set as minutes of retention required). The property needs to exist on both client and server side configurations.
 </p>
 </section>

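[Editor's note: as a hedged illustration of the fs.trash.interval behavior described in the added text, the sketch below enables trash programmatically and moves a file into it via org.apache.hadoop.fs.Trash. The file path and the 1440-minute retention value are assumptions chosen for the example; in practice the property would be set in core-site.xml on both client and server, as the documentation states.]

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

public class TrashExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Retain trashed files for 1440 minutes (24 hours); 0 disables trash.
    conf.set("fs.trash.interval", "1440");

    FileSystem fs = FileSystem.get(conf);
    Trash trash = new Trash(fs, conf);

    // Moves the file under the user's trash directory instead of deleting
    // it outright; returns false if trash is disabled.
    boolean moved = trash.moveToTrash(new Path("/user/example/old-report.txt"));
    System.out.println("Moved to trash: " + moved);
    fs.close();
  }
}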
