overall doc formatting and minor bug fixes
Costin Leau committed Jun 11, 2012
1 parent 2f04ea4 commit 649adac
Showing 6 changed files with 22 additions and 22 deletions.
6 changes: 3 additions & 3 deletions docs/src/reference/docbook/reference/cascading.xml
@@ -61,10 +61,10 @@ public class CascadingAnalysisConfig {
<!-- Tap created through XML rather than code (using Spring's 3.1 c: namespace)-->
<bean id="tap" class="cascading.tap.hadoop.Hfs" c:fields-ref="fields" c:string-path-value="${cascade.input}"/>
<bean id="cascade" class="org.springframework.data.hadoop.cascading.CascadeFactoryBean" p:configuration-ref="hadoop-configuration">
<bean id="cascade" class="org.springframework.data.hadoop.cascading.CascadeFactoryBean" p:configuration-ref="hadoopConfiguration">
<property name="flows"><list>
<bean class="org.springframework.data.hadoop.cascading.HadoopFlowFactoryBean"
p:configuration-ref="hadoop-configuration" p:source-ref="tap" p:sinks-ref="sinks">
p:configuration-ref="hadoopConfiguration" p:source-ref="tap" p:sinks-ref="sinks">
<property name="tails"><list>
<ref bean="tsCountPipe"/>
<ref bean="tmCountPipe"/>
@@ -109,7 +109,7 @@ public class CascadingAnalysisConfig {
<para>which compiles Tutorial1, creates a bundled jar and runs it on a local Hadoop instance. When using the <literal>Tool</literal> support, the compilation and the library provisioning are external tasks
(just as in the case of typical Hadoop jobs). The SHDP configuration to run the tutorial looks as follows:</para>

<programlisting language="xml"><![CDATA[<!-- the tool automatically is injected with 'hadoop-configuration' -->
<programlisting language="xml"><![CDATA[<!-- the tool automatically is injected with 'hadoopConfiguration' -->
<hdp:tool-runner id="scalding" tool-class="com.twitter.scalding.Tool">
<hdp:arg value="tutorial/Tutorial1"/>
<hdp:arg value="--local"/>
4 changes: 2 additions & 2 deletions docs/src/reference/docbook/reference/fs.xml
@@ -130,7 +130,7 @@
name = UUID.randomUUID().toString()
scriptName = "src/test/resources/test.properties"
-]]>// <emphasis role="strong">fs</emphasis> - FileSystem instance based on 'hadoop-configuration' bean
+]]>// <emphasis role="strong">fs</emphasis> - FileSystem instance based on 'hadoopConfiguration' bean
// call FileSystem#copyFromLocal(Path, Path)
<emphasis role="strong">fs</emphasis>.copyFromLocalFile(scriptName, name)
// return the file length
@@ -213,7 +213,7 @@ print Path(name).makeQualified(fs)]]></programlisting>
<row>
<entry>cfg</entry>
<entry><literal>org.apache.hadoop.conf.Configuration</literal></entry>
-<entry>Hadoop Configuration (relies on <emphasis>hadoop-configuration</emphasis> bean or singleton type match)</entry>
+<entry>Hadoop Configuration (relies on <emphasis>hadoopConfiguration</emphasis> bean or singleton type match)</entry>
</row>
<row>
<entry>cl</entry>
28 changes: 14 additions & 14 deletions docs/src/reference/docbook/reference/hbase.xml
@@ -14,8 +14,8 @@

<programlisting language="xml"><![CDATA[<!-- delete associated connections but do not stop the proxies -->
<hdp:hbase-configuration stop-proxy="false" delete-connection="true">
-foo=bar
-property=value
+   foo=bar
+   property=value
</hdp:hbase-configuration>]]></programlisting>
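As a quick consumer-side sketch, the snippet below looks up the resulting HBase Configuration and opens a table with the 0.92-era client API; the 'hbaseConfiguration' bean id (the element's default) and the context file name are assumptions:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.HTable;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class HbaseConfigDemo {
    public static void main(String[] args) throws Exception {
        // hypothetical context file containing the <hdp:hbase-configuration> element above
        ClassPathXmlApplicationContext ctx = new ClassPathXmlApplicationContext("hbase-context.xml");
        Configuration conf = ctx.getBean("hbaseConfiguration", Configuration.class);
        HTable table = new HTable(conf, "MyTable"); // opens a handle to an existing table
        table.close();
        ctx.close();
    }
}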

<para>Notice that, as with the other elements, one can specify additional properties specific to this configuration. In fact <literal>hbase-configuration</literal> provides the same properties
@@ -53,21 +53,21 @@

<programlisting language="java"><![CDATA[// writing to 'MyTable'
template.execute("MyTable", new TableCallback<Object>() {
-@Override
-public Object doInTable(HTable table) throws Throwable {
-    Put p = new Put(Bytes.toBytes("SomeRow"));
-    p.add(Bytes.toBytes("SomeColumn"), Bytes.toBytes("SomeQualifier"), Bytes.toBytes("AValue"));
-    table.put(p);
-    return null;
-}
-}); ]]></programlisting>
+    @Override
+    public Object doInTable(HTable table) throws Throwable {
+        Put p = new Put(Bytes.toBytes("SomeRow"));
+        p.add(Bytes.toBytes("SomeColumn"), Bytes.toBytes("SomeQualifier"), Bytes.toBytes("AValue"));
+        table.put(p);
+        return null;
+    }
+});]]></programlisting>

<programlisting language="java"><![CDATA[// read each row from 'MyTable'
List<String> rows = template.find("MyTable", "SomeColumn", new RowMapper<String>() {
-@Override
-public String mapRow(Result result, int rowNum) throws Exception {
-    return result.toString();
-}
+    @Override
+    public String mapRow(Result result, int rowNum) throws Exception {
+        return result.toString();
+    }
});]]></programlisting>

<para>The first snippet showcases <literal>TableCallback</literal> - the most generic of the callbacks; it handles the table lookup and resource cleanup so that the user code does not have to.
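For completeness, a hedged sketch of standing up the HbaseTemplate behind these snippets programmatically rather than through XML; the setter-based wiring below is an assumption based on the template's accessor base class:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.springframework.data.hadoop.hbase.HbaseTemplate;

public class TemplateSetup {
    public static void main(String[] args) throws Exception {
        // picks up hbase-site.xml from the classpath
        Configuration conf = HBaseConfiguration.create();
        HbaseTemplate template = new HbaseTemplate();
        template.setConfiguration(conf);      // assumed setter inherited from the accessor base class
        template.afterPropertiesSet();        // validate the wiring as the Spring container would
    }
}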
2 changes: 1 addition & 1 deletion docs/src/reference/docbook/reference/hive.xml
@@ -18,7 +18,7 @@
<para>If needed, the Hadoop configuration can be passed in or additional properties specified. In fact <literal>hive-server</literal> provides the same properties configuration <literal>knobs</literal>
as <link linkend="hadoop:config:properties">hadoop configuration</link>:</para>

<programlisting language="xml"><![CDATA[<hdp:hive-server host="some-other-host" port="10001" properties-location="classpath:hive-dev.properties" configuration-ref="hadoop-configuration">
<programlisting language="xml"><![CDATA[<hdp:hive-server host="some-other-host" port="10001" properties-location="classpath:hive-dev.properties" configuration-ref="hadoopConfiguration">
someproperty=somevalue
hive.exec.scratchdir=/tmp/mydir
</hdp:hive-server>]]></programlisting>
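Client-side, the server can be reached over Thrift; a small sketch using the 2012-era Hive JDBC driver (HiveServer1), assuming the driver jar is on the classpath and that host and port match the definition above:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveQuery {
    public static void main(String[] args) throws Exception {
        // register the HiveServer1-era JDBC driver
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
        Connection con = DriverManager.getConnection("jdbc:hive://some-other-host:10001/default", "", "");
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("show tables");
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
        con.close();
    }
}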
2 changes: 1 addition & 1 deletion docs/src/reference/docbook/reference/pig.xml
@@ -12,7 +12,7 @@
executing scripts in <literal>MapReduce</literal> mode. In typical scenarios, however, one might want to connect to a remote Hadoop tracker and register some scripts automatically, so let us take a look at how the
configuration might look:</para>

<programlisting language="xml"><![CDATA[<pig exec-type="LOCAL" job-name="pig-script" configuration-ref="hadoop-configuration" properties-location="pig-dev.properties"
<programlisting language="xml"><![CDATA[<pig exec-type="LOCAL" job-name="pig-script" configuration-ref="hadoopConfiguration" properties-location="pig-dev.properties"
xmlns="http://www.springframework.org/schema/hadoop">
source=${pig.script.src}
<script location="org/company/pig/script.pig">
@@ -149,7 +149,7 @@
shared across many jobs. It is defined in the file
<literal>hadoop-context.xml</literal> and is shown below.</para>

<programlisting language="xml">&lt;!-- default id is 'hadoop-configuration' --&gt;
<programlisting language="xml">&lt;!-- default id is 'hadoopConfiguration' --&gt;
&lt;hdp:configuration register-url-handler="false"&gt;
fs.default.name=${hd.fs}
&lt;/hdp:configuration&gt;</programlisting>
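To make the rename concrete, here is a minimal lookup of the default configuration bean from Java, reusing the hadoop-context.xml file mentioned above:

import org.apache.hadoop.conf.Configuration;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class ConfigLookup {
    public static void main(String[] args) {
        ClassPathXmlApplicationContext ctx = new ClassPathXmlApplicationContext("hadoop-context.xml");
        // <hdp:configuration> registers the bean under the default id 'hadoopConfiguration'
        Configuration conf = ctx.getBean("hadoopConfiguration", Configuration.class);
        System.out.println(conf.get("fs.default.name")); // resolves to ${hd.fs} after property substitution
        ctx.close();
    }
}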
