
Updating batch-wordcount to latest versions

- updating versions

- removing profiles

- cleaning up poms
commit dd61a00db40cc84db604a7b2108b537c8d02fabd (parent: 18525fe)
Authored by Thomas Risberg (trisberg)
batch-hashtag-count/pom.xml (6 changed lines)
@@ -14,15 +14,13 @@
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<hadoop.version>2.2.0</hadoop.version>
- <hadoop.exclude>hadoop-hdfs</hadoop.exclude>
- <mapreduce.framework>yarn</mapreduce.framework>
</properties>
<dependencies>
<dependency>
<groupId>commons-lang</groupId>
<artifactId>commons-lang</artifactId>
- <version>2.4</version>
+ <version>2.6</version>
<scope>provided</scope>
</dependency>
<dependency>
@@ -50,7 +48,7 @@
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
- <artifactId>maven-compiler-plugin</artifactId>
+ <artifactId>maven-compiler-plugin</artifactId>
<version>3.1</version>
<configuration>
<source>1.6</source>
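
Not part of the commit itself, but since commons-lang moves from 2.4 to 2.6 above, a standard way to confirm which version Maven actually resolves is the dependency plugin's tree goal:
```
# filter the dependency tree down to the commons-lang artifact
$ mvn dependency:tree -Dincludes=commons-lang:commons-lang
```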
batch-wordcount/README.md (22 changed lines)
@@ -18,12 +18,6 @@ Build the sample simply by executing:
By default this builds against Apache Hadoop 2.2.0
->````
-> If you would like to build against Apache Hadoop 1.2.1 you can use the provided profile "hadoop12":
->````
-> $ mvn clean assembly:assembly -P hadoop12
->````
-
As a result, you will see the following files and directories created under `target/batch-wordcount-1.0.0.BUILD-SNAPSHOT-bin/`:
```
@@ -36,8 +30,6 @@ As a result, you will see the following files and directories created under `tar
| `-- nietzsche-chapter-1.txt
```
-In the case of hadoop 1.2.1, the `hadoop-examples-1.2.1.jar` will be under the lib directory.
-
The wordcount.xml file defines the location of the file to import, the HDFS directories to use, as well as the name node location. You can verify the settings in the util:properties element:
<util:properties id="myProperties" >
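
A minimal sketch of the kind of settings that element carries; the keys and values below are illustrative assumptions, not the sample's actual contents:
```
<util:properties id="myProperties" >
    <!-- all keys and values here are assumed for illustration -->
    <prop key="wordcount.input.path">/count/in/</prop>              <!-- HDFS input directory -->
    <prop key="wordcount.output.path">/count/out/</prop>            <!-- HDFS output directory -->
    <prop key="hd.fs">hdfs://localhost:8020</prop>                  <!-- name node location -->
    <prop key="localSourceFile">data/nietzsche-chapter-1.txt</prop> <!-- file to import -->
</util:properties>
```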
@@ -58,7 +50,7 @@ In the `batch-wordcount` directory copy the result of the build to the XD instal
Note that the `nietzsche-chapter-1.txt` file is copied to the /tmp directory.
-The wordcount sample is ready to be executed. For ease of use, start up the single node version of Spring XD that combines the admin and container nodes into one process. If it was already running, you must restart it.
+The wordcount sample is ready to be executed. For ease of use, start up the single node version of Spring XD that combines the admin and container nodes into one process. If it was already running, *you must restart it*.
xd/bin>$ ./xd-singlenode
@@ -68,16 +60,6 @@ Now start the *Spring XD Shell* in a separate window:
By default, the hadoop 2.2.0 distribution will be used.
-
->````
-> If you would like to run against Apache Hadoop 1.2.1 pass in the command line option "--hadoopDistro hadoop12"
->
->````
-> xd/bin>$ ./xd-singlenode --hadoopDistro hadoop12
->
-> xd/bin>$ ./xd-shell --hadoopDistro hadoop12
->````
-
You will now create a new Batch Job Stream using the *Spring XD Shell*:
xd:>job create --name wordCountJob --definition "wordcount"
@@ -90,7 +72,7 @@ Alternatively, you can deploy it using the shell command:
We will now create a stream that polls a local directory for files. By default the directory is named after the stream, so in this case it will be `/tmp/xd/input/wordCountFiles`. If the directory does not exist, it will be created. You can override the default directory using the `--dir` option.
- stream create --name wordCountFiles --definition "file --ref=true > queue:job:wordCountJob" --deploy
+ xd:>stream create --name wordCountFiles --definition "file --ref=true > queue:job:wordCountJob" --deploy
If you now drop text files into the `/tmp/xd/input/wordCountFiles/` directory, they will be picked up, copied to HDFS, and their words counted. You can move the supplied .txt file there via
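
For example, assuming the sample file is still in `/tmp` where the earlier copy step put it (the exact source path is an assumption):
```
# drop the sample text into the directory the stream is watching
$ cp /tmp/nietzsche-chapter-1.txt /tmp/xd/input/wordCountFiles/
```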
batch-wordcount/pom.xml (19 changed lines)
@@ -14,32 +14,17 @@
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<hadoop.version>2.2.0</hadoop.version>
- <hadoop.examples>hadoop-mapreduce-examples</hadoop.examples>
- <hadoop.exclude>hadoop-hdfs</hadoop.exclude>
- <mapreduce.framework>yarn</mapreduce.framework>
</properties>
- <profiles>
- <profile>
- <id>hadoop12</id>
- <properties>
- <hadoop.version>1.2.1</hadoop.version>
- <hadoop.examples>hadoop-examples</hadoop.examples>
- <hadoop.exclude>hadoop-core</hadoop.exclude>
- <mapreduce.framework>classic</mapreduce.framework>
- </properties>
- </profile>
- </profiles>
-
<dependencies>
<dependency>
<groupId>org.apache.hadoop</groupId>
- <artifactId>${hadoop.examples}</artifactId>
+ <artifactId>hadoop-mapreduce-examples</artifactId>
<version>${hadoop.version}</version>
<scope>compile</scope>
<exclusions>
<exclusion>
- <artifactId>${hadoop.exclude}</artifactId>
+ <artifactId>hadoop-hdfs</artifactId>
<groupId>org.apache.hadoop</groupId>
</exclusion>
<exclusion>
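
Pieced together from the `+` and context lines above, the resulting dependency block in batch-wordcount/pom.xml should look roughly like this (the second exclusion is cut off in the hunk, so it stays elided):
```
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-examples</artifactId>  <!-- was ${hadoop.examples} -->
    <version>${hadoop.version}</version>                <!-- 2.2.0, from the properties block -->
    <scope>compile</scope>
    <exclusions>
        <exclusion>
            <artifactId>hadoop-hdfs</artifactId>        <!-- was ${hadoop.exclude} -->
            <groupId>org.apache.hadoop</groupId>
        </exclusion>
        <!-- second exclusion elided in the hunk -->
    </exclusions>
</dependency>
```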