[ZEPPELIN-1274]Write "Spark SQL" in docs rather than "SparkSQL"
### What is this PR for?
Some of the doc files say "SparkSQL", but the correct spelling is "Spark SQL" (a white space is needed between "Spark" and "SQL").
Let's replace them with the correct one.

### What type of PR is it?
Improvement

### Todos
* [x] - Replace all occurrences of "SparkSQL" in the affected files with "Spark SQL".

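A minimal sketch (my illustration, not the commands actually used for this PR) of how such a rename can be applied with `sed`; the pattern also catches the "SparkSql" variant mentioned in the squashed commits:

```shell
# Hypothetical replacement one-liner, shown on a sample doc line.
echo 'Max number of SparkSQL result to display.' \
  | sed -E 's/SparkS[Qq][Ll]/Spark SQL/g'
# prints: Max number of Spark SQL result to display.
```

In a checkout this would typically be run per file (e.g. via `find ... -exec sed -i`), but the exact invocation is not recorded in the commit.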
### What is the Jira issue?
https://issues.apache.org/jira/browse/ZEPPELIN-1274

### How should this be tested?
Have the changes reviewed by a few people.
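Besides manual review, a mechanical check is possible: after the rename, a recursive grep over the docs and config templates should find no remaining "SparkSQL"/"SparkSql" spellings. A hedged sketch, demonstrated on a temporary directory standing in for the checkout:

```shell
# Verification sketch: grep a stand-in docs tree for leftover spellings.
tmpdir=$(mktemp -d)
printf 'Max number of Spark SQL result to display.\n' > "$tmpdir/spark.md"
if grep -rqn 'SparkS[Qq][Ll]' "$tmpdir"; then
  echo 'stray spellings remain'
else
  echo 'clean'
fi
rm -rf "$tmpdir"
# prints: clean
```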

Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>

Closes apache#1271 from sarutak/ZEPPELIN-1274 and squashes the following commits:

edc9212 [Kousuke Saruta] Further replaced "SparkSQL" and "SparkSql" into "Spark SQL"
14aa2b7 [Kousuke Saruta] Replaced 'SparkSQL' in docs into 'Spark SQL'
sarutak authored and PhilippGrulich committed Aug 8, 2016
1 parent 90ec4dd commit 3ecc2e0
Showing 12 changed files with 17 additions and 17 deletions.
2 changes: 1 addition & 1 deletion conf/zeppelin-env.cmd.template
@@ -62,7 +62,7 @@ REM
REM set ZEPPELIN_SPARK_USEHIVECONTEXT REM Use HiveContext instead of SQLContext if set true. true by default.
REM set ZEPPELIN_SPARK_CONCURRENTSQL REM Execute multiple SQL concurrently if set true. false by default.
REM set ZEPPELIN_SPARK_IMPORTIMPLICIT REM Import implicits, UDF collection, and sql if set true. true by default.
-REM set ZEPPELIN_SPARK_MAXRESULT REM Max number of SparkSQL result to display. 1000 by default.
+REM set ZEPPELIN_SPARK_MAXRESULT REM Max number of Spark SQL result to display. 1000 by default.

REM ZeppelinHub connection configuration
REM
2 changes: 1 addition & 1 deletion conf/zeppelin-env.sh.template
@@ -62,7 +62,7 @@
# export ZEPPELIN_SPARK_USEHIVECONTEXT # Use HiveContext instead of SQLContext if set true. true by default.
# export ZEPPELIN_SPARK_CONCURRENTSQL # Execute multiple SQL concurrently if set true. false by default.
# export ZEPPELIN_SPARK_IMPORTIMPLICIT # Import implicits, UDF collection, and sql if set true. true by default.
-# export ZEPPELIN_SPARK_MAXRESULT # Max number of SparkSQL result to display. 1000 by default.
+# export ZEPPELIN_SPARK_MAXRESULT # Max number of Spark SQL result to display. 1000 by default.
# export ZEPPELIN_WEBSOCKET_MAX_TEXT_MESSAGE_SIZE # Size in characters of the maximum text message to be received by websocket. Defaults to 1024000


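For illustration (the values are examples of mine, not recommendations from the template), the commented settings in the hunk above are enabled by uncommenting and exporting them:

```shell
# Illustrative zeppelin-env.sh settings; 1000 mirrors the documented
# default for the Spark SQL result cap.
export ZEPPELIN_SPARK_MAXRESULT=1000
export ZEPPELIN_SPARK_CONCURRENTSQL=false
echo "ZEPPELIN_SPARK_MAXRESULT=$ZEPPELIN_SPARK_MAXRESULT"
# prints: ZEPPELIN_SPARK_MAXRESULT=1000
```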
2 changes: 1 addition & 1 deletion docs/index.md
@@ -62,7 +62,7 @@ For the further information about Apache Spark in Apache Zeppelin, please see [S
<br />
## Data visualization

-Some basic charts are already included in Apache Zeppelin. Visualizations are not limited to SparkSQL query, any output from any language backend can be recognized and visualized.
+Some basic charts are already included in Apache Zeppelin. Visualizations are not limited to Spark SQL query, any output from any language backend can be recognized and visualized.

<div class="row">
<div class="col-md-6">
2 changes: 1 addition & 1 deletion docs/interpreter/livy.md
@@ -50,7 +50,7 @@ Example: `spark.master` to `livy.spark.master`
<tr>
<td>zeppelin.livy.spark.maxResult</td>
<td>1000</td>
-<td>Max number of SparkSQL result to display.</td>
+<td>Max number of Spark SQL result to display.</td>
</tr>
<tr>
<td>livy.spark.driver.cores</td>
2 changes: 1 addition & 1 deletion docs/interpreter/spark.md
@@ -105,7 +105,7 @@ You can also set other Spark properties which are not listed in the table. For a
<tr>
<td>zeppelin.spark.maxResult</td>
<td>1000</td>
-<td>Max number of SparkSQL result to display.</td>
+<td>Max number of Spark SQL result to display.</td>
</tr>
<tr>
<td>zeppelin.spark.printREPLOutput</td>
2 changes: 1 addition & 1 deletion docs/manual/dynamicform.md
@@ -28,7 +28,7 @@ Custom language backend can select which type of form creation it wants to use.

## Using form Templates

-This mode creates form using simple template language. It's simple and easy to use. For example Markdown, Shell, SparkSql language backend uses it.
+This mode creates form using simple template language. It's simple and easy to use. For example Markdown, Shell, Spark SQL language backend uses it.

### Text input form

4 changes: 2 additions & 2 deletions docs/manual/interpreters.md
@@ -27,7 +27,7 @@ limitations under the License.

In this section, we will explain about the role of interpreters, interpreters group and interpreter settings in Zeppelin.
The concept of Zeppelin interpreter allows any language/data-processing-backend to be plugged into Zeppelin.
-Currently, Zeppelin supports many interpreters such as Scala ( with Apache Spark ), Python ( with Apache Spark ), SparkSQL, JDBC, Markdown, Shell and so on.
+Currently, Zeppelin supports many interpreters such as Scala ( with Apache Spark ), Python ( with Apache Spark ), Spark SQL, JDBC, Markdown, Shell and so on.

## What is Zeppelin interpreter?
Zeppelin Interpreter is a plug-in which enables Zeppelin users to use a specific language/data-processing-backend. For example, to use Scala code in Zeppelin, you need `%spark` interpreter.
@@ -51,7 +51,7 @@ Each notebook can be bound to multiple Interpreter Settings using setting icon o

## What is interpreter group?
Every Interpreter is belonged to an **Interpreter Group**. Interpreter Group is a unit of start/stop interpreter.
-By default, every interpreter is belonged to a single group, but the group might contain more interpreters. For example, Spark interpreter group is including Spark support, pySpark, SparkSQL and the dependency loader.
+By default, every interpreter is belonged to a single group, but the group might contain more interpreters. For example, Spark interpreter group is including Spark support, pySpark, Spark SQL and the dependency loader.

Technically, Zeppelin interpreters from the same group are running in the same JVM. For more information about this, please checkout [here](../development/writingzeppelininterpreter.html).

4 changes: 2 additions & 2 deletions docs/rest-api/rest-interpreter.md
@@ -92,7 +92,7 @@ The role of registered interpreters, settings and interpreters group are describ
"properties": {
"zeppelin.spark.maxResult": {
"defaultValue": "1000",
-"description": "Max number of SparkSQL result to display."
+"description": "Max number of Spark SQL result to display."
}
},
"path": "/zeppelin/interpreter/spark"
@@ -460,4 +460,4 @@ The role of registered interpreters, settings and interpreters group are describ
<td> 500 </td>
</tr>
</table>


2 changes: 1 addition & 1 deletion docs/screenshots.md
@@ -21,7 +21,7 @@ limitations under the License.
<div class="row">
<div class="col-md-3">
<a href="assets/themes/zeppelin/img/screenshots/sparksql.png"><img class="thumbnail" src="assets/themes/zeppelin/img/screenshots/sparksql.png" /></a>
-<center>SparkSQL with inline visualization</center>
+<center>Spark SQL with inline visualization</center>
</div>
<div class="col-md-3">
<a href="assets/themes/zeppelin/img/screenshots/spark.png"><img class="thumbnail" src="assets/themes/zeppelin/img/screenshots/spark.png" /></a>
4 changes: 2 additions & 2 deletions livy/src/main/resources/interpreter-setting.json
@@ -93,7 +93,7 @@
"envName": "ZEPPELIN_LIVY_MAXRESULT",
"propertyName": "zeppelin.livy.spark.sql.maxResult",
"defaultValue": "1000",
-"description": "Max number of SparkSQL result to display."
+"description": "Max number of Spark SQL result to display."
},
"zeppelin.livy.concurrentSQL": {
"propertyName": "zeppelin.livy.concurrentSQL",
@@ -116,4 +116,4 @@
"properties": {
}
}
]
]
4 changes: 2 additions & 2 deletions spark/src/main/resources/interpreter-setting.json
@@ -46,7 +46,7 @@
"envName": "ZEPPELIN_SPARK_MAXRESULT",
"propertyName": "zeppelin.spark.maxResult",
"defaultValue": "1000",
-"description": "Max number of SparkSQL result to display."
+"description": "Max number of Spark SQL result to display."
},
"master": {
"envName": "MASTER",
@@ -77,7 +77,7 @@
"envName": "ZEPPELIN_SPARK_MAXRESULT",
"propertyName": "zeppelin.spark.maxResult",
"defaultValue": "1000",
-"description": "Max number of SparkSQL result to display."
+"description": "Max number of Spark SQL result to display."
},
"zeppelin.spark.importImplicit": {
"envName": "ZEPPELIN_SPARK_IMPORTIMPLICIT",
4 changes: 2 additions & 2 deletions spark/src/main/sparkr-resources/interpreter-setting.json
@@ -46,7 +46,7 @@
"envName": "ZEPPELIN_SPARK_MAXRESULT",
"propertyName": "zeppelin.spark.maxResult",
"defaultValue": "1000",
-"description": "Max number of SparkSQL result to display."
+"description": "Max number of Spark SQL result to display."
},
"master": {
"envName": "MASTER",
@@ -77,7 +77,7 @@
"envName": "ZEPPELIN_SPARK_MAXRESULT",
"propertyName": "zeppelin.spark.maxResult",
"defaultValue": "1000",
-"description": "Max number of SparkSQL result to display."
+"description": "Max number of Spark SQL result to display."
},
"zeppelin.spark.importImplicit": {
"envName": "ZEPPELIN_SPARK_IMPORTIMPLICIT",
