[SPARK-938][doc] Add OpenStack Swift support
See compiled doc at
http://people.apache.org/~rxin/tmp/openstack-swift/_site/storage-openstack-swift.html

This is based on apache#1010. Closes apache#1010.

Author: Reynold Xin <rxin@apache.org>
Author: Gil Vernik <gilv@il.ibm.com>

Closes apache#2298 from rxin/openstack-swift and squashes the following commits:

ff4e394 [Reynold Xin] Two minor comments from Patrick.
279f6de [Reynold Xin] core-sites -> core-site
dfb8fea [Reynold Xin] Updated based on Gil's suggestion.
846f5cb [Reynold Xin] Added a link from overview page.
0447c9f [Reynold Xin] Removed sample code.
e9c3761 [Reynold Xin] Merge pull request apache#1010 from gilv/master
9233fef [Gil Vernik] Fixed typos
6994827 [Gil Vernik] Merge pull request #1 from rxin/openstack
ac0679e [Reynold Xin] Fixed an unclosed tr.
47ce99d [Reynold Xin] Merge branch 'master' into openstack
cca7192 [Gil Vernik] Removed white spases from pom.xml
99f095d [Reynold Xin] Pending openstack changes.
eb22295 [Reynold Xin] Merge pull request apache#1010 from gilv/master
39a9737 [Gil Vernik] Spark integration with Openstack Swift
c977658 [Gil Vernik] Merge branch 'master' of https://github.com/gilv/spark
2aba763 [Gil Vernik] Fix to docs/openstack-integration.md
9b625b5 [Gil Vernik] Merge branch 'master' of https://github.com/gilv/spark
eff538d [Gil Vernik] SPARK-938 - Openstack Swift object storage support
ce483d7 [Gil Vernik] SPARK-938 - Openstack Swift object storage support
b6c37ef [Gil Vernik] Openstack Swift support
rxin authored and pwendell committed Sep 8, 2014
1 parent f25bbbd commit eddfedd
Showing 2 changed files with 154 additions and 0 deletions.
2 changes: 2 additions & 0 deletions docs/index.md
@@ -103,6 +103,8 @@ options for deployment:
* [Security](security.html): Spark security support
* [Hardware Provisioning](hardware-provisioning.html): recommendations for cluster hardware
* [3<sup>rd</sup> Party Hadoop Distributions](hadoop-third-party-distributions.html): using common Hadoop distributions
* Integration with other storage systems:
  * [OpenStack Swift](storage-openstack-swift.html)
* [Building Spark with Maven](building-with-maven.html): build Spark using the Maven system
* [Contributing to Spark](https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark)

152 changes: 152 additions & 0 deletions docs/storage-openstack-swift.md
@@ -0,0 +1,152 @@
---
layout: global
title: Accessing OpenStack Swift from Spark
---

Spark's support for Hadoop InputFormat allows it to process data in OpenStack Swift using the
same URI formats as in Hadoop. You can specify a path in Swift as input through a
URI of the form <code>swift://container.PROVIDER/path</code>. You will also need to set your
Swift security credentials, through <code>core-site.xml</code> or via
<code>SparkContext.hadoopConfiguration</code>.
The current Swift driver requires Swift to use the Keystone authentication method.
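
For example, once the driver and credentials are configured (see below), reading a file from Swift
is a one-liner. The container name <code>logs</code> and the file <code>data.txt</code> in this
minimal Scala sketch are placeholders:

{% highlight scala %}
// "logs" is a hypothetical container under the provider "SparkTest"
// configured later on this page; any RDD operation works as usual.
val data = sc.textFile("swift://logs.SparkTest/data.txt")
println(data.count())
{% endhighlight %}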

# Configuring Swift for Better Data Locality

Although not mandatory, it is recommended to configure Swift's proxy server with the
<code>list_endpoints</code> middleware for better data locality. More information is
[available here](https://github.com/openstack/swift/blob/master/swift/common/middleware/list_endpoints.py).
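
As a rough illustration only (not an authoritative recipe; consult the Swift documentation for
your release), enabling the middleware typically means adding a filter section to
<code>proxy-server.conf</code> and placing <code>list_endpoints</code> in the proxy pipeline:

{% highlight ini %}
# proxy-server.conf -- illustrative sketch; the actual pipeline
# contents depend on your Swift deployment.
[pipeline:main]
pipeline = healthcheck cache list_endpoints authtoken keystoneauth proxy-server

[filter:list_endpoints]
use = egg:swift#list_endpoints
{% endhighlight %}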


# Dependencies

The Spark application should include the <code>hadoop-openstack</code> dependency.
For example, for Maven, add the following to the <code>pom.xml</code> file:

{% highlight xml %}
<dependencyManagement>
  ...
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-openstack</artifactId>
    <version>2.3.0</version>
  </dependency>
  ...
</dependencyManagement>
{% endhighlight %}
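
For sbt builds, a roughly equivalent dependency declaration (assuming the same version as the
Maven example above) would be:

{% highlight scala %}
// build.sbt -- sbt equivalent of the Maven dependency above
libraryDependencies += "org.apache.hadoop" % "hadoop-openstack" % "2.3.0"
{% endhighlight %}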


# Configuration Parameters

Create <code>core-site.xml</code> and place it inside Spark's <code>conf</code> directory.
There are two main categories of parameters that should be configured: the declaration of the
Swift driver, and the parameters required by Keystone.

Configuring Hadoop to use the Swift file system is achieved via the following property:

<table class="table">
<tr><th>Property Name</th><th>Value</th></tr>
<tr>
  <td><code>fs.swift.impl</code></td>
  <td><code>org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem</code></td>
</tr>
</table>

Additional parameters are required by Keystone (v2.0) and should be provided to the Swift driver.
These parameters are used to authenticate with Keystone and access Swift. The following table
lists them; <code>PROVIDER</code> can be any name.

<table class="table">
<tr><th>Property Name</th><th>Meaning</th><th>Required</th></tr>
<tr>
<td><code>fs.swift.service.PROVIDER.auth.url</code></td>
<td>Keystone Authentication URL</td>
<td>Mandatory</td>
</tr>
<tr>
<td><code>fs.swift.service.PROVIDER.auth.endpoint.prefix</code></td>
<td>Keystone endpoints prefix</td>
<td>Optional</td>
</tr>
<tr>
<td><code>fs.swift.service.PROVIDER.tenant</code></td>
<td>Tenant</td>
<td>Mandatory</td>
</tr>
<tr>
<td><code>fs.swift.service.PROVIDER.username</code></td>
<td>Username</td>
<td>Mandatory</td>
</tr>
<tr>
<td><code>fs.swift.service.PROVIDER.password</code></td>
<td>Password</td>
<td>Mandatory</td>
</tr>
<tr>
<td><code>fs.swift.service.PROVIDER.http.port</code></td>
<td>HTTP port</td>
<td>Mandatory</td>
</tr>
<tr>
<td><code>fs.swift.service.PROVIDER.region</code></td>
<td>Keystone region</td>
<td>Mandatory</td>
</tr>
<tr>
<td><code>fs.swift.service.PROVIDER.public</code></td>
<td>Indicates if all URLs are public</td>
<td>Mandatory</td>
</tr>
</table>

For example, assume <code>PROVIDER=SparkTest</code> and that Keystone contains a user <code>tester</code> with password <code>testing</code>,
defined for tenant <code>test</code>. Then <code>core-site.xml</code> should include:

{% highlight xml %}
<configuration>
  <property>
    <name>fs.swift.impl</name>
    <value>org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.auth.url</name>
    <value>http://127.0.0.1:5000/v2.0/tokens</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.auth.endpoint.prefix</name>
    <value>endpoints</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.http.port</name>
    <value>8080</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.region</name>
    <value>RegionOne</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.public</name>
    <value>true</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.tenant</name>
    <value>test</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.username</name>
    <value>tester</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.password</name>
    <value>testing</value>
  </property>
</configuration>
{% endhighlight %}

Notice that
<code>fs.swift.service.PROVIDER.tenant</code>,
<code>fs.swift.service.PROVIDER.username</code>, and
<code>fs.swift.service.PROVIDER.password</code> contain sensitive information, so keeping them in
<code>core-site.xml</code> is not always a good approach.
We suggest keeping those parameters in <code>core-site.xml</code> only for testing purposes, when running Spark
via <code>spark-shell</code>.
For job submissions they should be provided via <code>SparkContext.hadoopConfiguration</code>.
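
For example, a minimal sketch of setting these values at runtime, reusing the placeholder
credentials from the example above:

{% highlight scala %}
// Set the sensitive Keystone parameters programmatically instead of
// in core-site.xml; values are the placeholders from the example above.
val hadoopConf = sc.hadoopConfiguration
hadoopConf.set("fs.swift.service.SparkTest.tenant", "test")
hadoopConf.set("fs.swift.service.SparkTest.username", "tester")
hadoopConf.set("fs.swift.service.SparkTest.password", "testing")
{% endhighlight %}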
