Commit
miscellaneous fixes - wording, formatting, etc.
alexott committed Jun 1, 2018
1 parent 63ca2b0 commit 37a2bb7
Showing 15 changed files with 76 additions and 52 deletions.
4 changes: 3 additions & 1 deletion docs/README.md
@@ -56,8 +56,10 @@ If you wish to help us and contribute to Zeppelin Documentation, please look at
```

2. check out the ASF repo

```
svn co https://svn.apache.org/repos/asf/zeppelin asf-zeppelin
```

3. copy `zeppelin/docs/_site` to `asf-zeppelin/site/docs/[VERSION]`
4. ```svn commit```
4. `svn commit`
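
A sketch of the complete publish flow described in steps 2-4 above (the `[VERSION]` placeholder is kept as a placeholder; the commit message is illustrative):

```bash
# check out the ASF publishing repo
svn co https://svn.apache.org/repos/asf/zeppelin asf-zeppelin

# copy the freshly built site into a versioned docs directory
cp -r zeppelin/docs/_site asf-zeppelin/site/docs/[VERSION]

# add and commit the new files
cd asf-zeppelin
svn add site/docs/[VERSION]
svn commit -m "Publish Zeppelin [VERSION] documentation"
```
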
4 changes: 2 additions & 2 deletions docs/development/helium/writing_visualization_basic.md
@@ -190,15 +190,15 @@ e.g.

#### 4. Run in dev mode

Place your __Helium package file__ in local registry (ZEPPELIN_HOME/helium).
Place your __Helium package file__ in local registry (`ZEPPELIN_HOME/helium`).
Run Zeppelin, and then run zeppelin-web in visualization dev mode.

```bash
cd zeppelin-web
yarn run dev:helium
```

You can browse localhost:9000. Everytime refresh your browser, Zeppelin will rebuild your visualization and reload changes.
You can browse `localhost:9000`. Every time you refresh your browser, Zeppelin will rebuild your visualization and reload your changes.


#### 5. Publish your visualization
10 changes: 5 additions & 5 deletions docs/development/writing_zeppelin_interpreter.md
@@ -235,27 +235,27 @@ println(a)
### 0.6.0 and later
Inside a note, the `%[INTERPRETER_GROUP].[INTERPRETER_NAME]` directive will call your interpreter.

You can omit either [INTERPRETER\_GROUP] or [INTERPRETER\_NAME]. If you omit [INTERPRETER\_NAME], then first available interpreter will be selected in the [INTERPRETER\_GROUP].
Likewise, if you skip [INTERPRETER\_GROUP], then [INTERPRETER\_NAME] will be chosen from default interpreter group.
You can omit either `[INTERPRETER_GROUP]` or `[INTERPRETER_NAME]`. If you omit `[INTERPRETER_NAME]`, the first available interpreter in the `[INTERPRETER_GROUP]` will be selected.
Likewise, if you skip `[INTERPRETER_GROUP]`, `[INTERPRETER_NAME]` will be chosen from the default interpreter group.


For example, if you have two interpreter myintp1 and myintp2 in group mygrp, you can call myintp1 like
For example, if you have two interpreters `myintp1` and `myintp2` in the group `mygrp`, you can call `myintp1` like

```
%mygrp.myintp1
codes for myintp1
```

and you can call myintp2 like
and you can call `myintp2` like

```
%mygrp.myintp2
codes for myintp2
```

If you omit your interpreter name, it'll select first available interpreter in the group ( myintp1 ).
If you omit your interpreter name, the first available interpreter in the group (`myintp1`) will be selected.

```
%mygrp
2 changes: 1 addition & 1 deletion docs/interpreter/cassandra.md
@@ -585,7 +585,7 @@ To configure the **Cassandra** interpreter, go to the **Interpreter** menu and s
The **Cassandra** interpreter uses the official **[Cassandra Java Driver]**, and most of the parameters are used
to configure the Java driver.

Below are the configuration parameters and their default value.
Below are the configuration parameters and their default values.

<table class="table-configuration">
<tr>
4 changes: 2 additions & 2 deletions docs/interpreter/hbase.md
@@ -70,9 +70,9 @@ mvn clean package -DskipTests -Phadoop-2.6 -Dhadoop.version=2.6.0 -P build-distr
If you want to connect to HBase running on a cluster, you'll need to follow the next step.

### Export HBASE_HOME
In **conf/zeppelin-env.sh**, export `HBASE_HOME` environment variable with your HBase installation path. This ensures `hbase-site.xml` can be loaded.
In `conf/zeppelin-env.sh`, export the `HBASE_HOME` environment variable with your HBase installation path. This ensures `hbase-site.xml` can be loaded.

for example
For example:

```bash
export HBASE_HOME=/usr/lib/hbase
14 changes: 9 additions & 5 deletions docs/interpreter/lens.md
@@ -102,9 +102,9 @@ For more interpreter binding information see [here](../usage/interpreter/overvie
### How to use
You can analyze your data by using [OLAP Cube](http://lens.apache.org/user/olap-cube.html) [QL](http://lens.apache.org/user/cli.html), a high-level SQL-like language to query and describe data sets organized in data cubes.
You can see OLAP Cube in action in this [video tutorial](https://cwiki.apache.org/confluence/display/LENS/2015/07/13/20+Minute+video+demo+of+Apache+Lens+through+examples).
As you can see in this video, they are using Lens Client Shell(./bin/lens-cli.sh). All of these functions also can be used on Zeppelin by using Lens interpreter.
As you can see in this video, they are using the Lens Client Shell (`./bin/lens-cli.sh`). All of these functions can also be used in Zeppelin through the Lens interpreter.

<li> Create and Use(Switch) Databases.
<li> Create and Use (Switch) Databases.

```sql
create database newDb
@@ -161,17 +161,21 @@ create fact your/path/to/lens/client/examples/resources/sales-raw-fact.xml
<li> Add partitions to Dimtable and Fact.

```
dimtable add single-partition --dimtable_name customer_table --storage_name local --path your/path/to/lens/client/examples/resources/customer-local-part.xml
dimtable add single-partition --dimtable_name customer_table --storage_name local
--path your/path/to/lens/client/examples/resources/customer-local-part.xml
```

```
fact add partitions --fact_name sales_raw_fact --storage_name local --path your/path/to/lens/client/examples/resources/sales-raw-local-parts.xml
fact add partitions --fact_name sales_raw_fact --storage_name local
--path your/path/to/lens/client/examples/resources/sales-raw-local-parts.xml
```

<li> Now, you can run queries on cubes.

```
query execute cube select customer_city_name, product_details.description, product_details.category, product_details.color, store_sales from sales where time_range_in(delivery_time, '2015-04-11-00', '2015-04-13-00')
query execute cube select customer_city_name, product_details.description,
product_details.category, product_details.color, store_sales from sales
where time_range_in(delivery_time, '2015-04-11-00', '2015-04-13-00')
```

![Lens Query Result]({{BASE_PATH}}/assets/themes/zeppelin/img/docs-img/lens-result.png)
20 changes: 11 additions & 9 deletions docs/interpreter/mahout.md
@@ -29,6 +29,7 @@ Apache Mahout is a collection of packages that enable machine learning and matri

### Easy Installation
To quickly and easily get up and running using Apache Mahout, run the following command from the top-level directory of the Zeppelin install:

```bash
python scripts/mahout/add_mahout.py
```
@@ -39,34 +40,34 @@ This will create the `%sparkMahout` and `%flinkMahout` interpreters, and restart

The `add_mahout.py` script contains several command line arguments for advanced users.
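
For instance, a hypothetical invocation that points the script at existing Zeppelin and Mahout installations and defers the restart, using the flags documented in the table below (both paths are illustrative), could look like:

```bash
python scripts/mahout/add_mahout.py --zeppelin_home /path/to/zeppelin --mahout_home /path/to/mahout_home --restart_later
```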

<table class="table-configuration">
<table class="table-configuration">
<tr>
<th>Argument</th>
<th>Description</th>
<th>Example</th>
</tr>
<tr>
<td>--zeppelin_home</td>
<td>`--zeppelin_home`</td>
<td>This is the path to the Zeppelin installation. This flag is not needed if the script is run from the top-level installation directory or from the `zeppelin/scripts/mahout` directory.</td>
<td>/path/to/zeppelin</td>
<td>`/path/to/zeppelin`</td>
</tr>
<tr>
<td>--mahout_home</td>
<td>`--mahout_home`</td>
<td>If the user has already installed Mahout, this flag can set the path to `MAHOUT_HOME`. If this is set, downloading Mahout will be skipped.</td>
<td>/path/to/mahout_home</td>
<td>`/path/to/mahout_home`</td>
</tr>
<tr>
<td>--restart_later</td>
<td>Restarting is necessary for updates to take effect. By default the script will restart Zeppelin for you- restart will be skipped if this flag is set.</td>
<td>`--restart_later`</td>
<td>Restarting is necessary for updates to take effect. By default the script will restart Zeppelin for you. Restart will be skipped if this flag is set.</td>
<td>NA</td>
</tr>
<tr>
<td>--force_download</td>
<td>`--force_download`</td>
<td>This flag will force the script to re-download the binary even if it already exists. This is useful for previously failed downloads.</td>
<td>NA</td>
</tr>
<tr>
<td>--overwrite_existing</td>
<td>`--overwrite_existing`</td>
<td>This flag will force the script to overwrite existing `%sparkMahout` and `%flinkMahout` interpreters. Useful when you want to just start over.</td>
<td>NA</td>
</tr>
@@ -165,6 +166,7 @@ Resource Pools are a powerful Zeppelin feature that lets us share information be
### Setting up a Resource Pool in Flink

In Spark-based interpreters, resource pools are accessed via the ZeppelinContext API. Putting values into and getting them from the resource pool is simple:

```scala
val myVal = 1
z.put("foo", myVal)
30 changes: 24 additions & 6 deletions docs/interpreter/r.md
@@ -40,12 +40,30 @@ R -e "print(1+1)"

To enjoy plots, install additional libraries with:

```
+ devtools with `R -e "install.packages('devtools', repos = 'http://cran.us.r-project.org')"`
+ knitr with `R -e "install.packages('knitr', repos = 'http://cran.us.r-project.org')"`
+ ggplot2 with `R -e "install.packages('ggplot2', repos = 'http://cran.us.r-project.org')"`
+ Other vizualisation librairies: `R -e "install.packages(c('devtools','mplot', 'googleVis'), repos = 'http://cran.us.r-project.org'); require(devtools); install_github('ramnathv/rCharts')"`
```
+ devtools with

```bash
R -e "install.packages('devtools', repos = 'http://cran.us.r-project.org')"
```

+ knitr with

```bash
R -e "install.packages('knitr', repos = 'http://cran.us.r-project.org')"
```

+ ggplot2 with

```bash
R -e "install.packages('ggplot2', repos = 'http://cran.us.r-project.org')"
```

+ Other visualization libraries:

```bash
R -e "install.packages(c('devtools','mplot', 'googleVis'), repos = 'http://cran.us.r-project.org');
require(devtools); install_github('ramnathv/rCharts')"
```

We also recommend installing the following optional R libraries for happy data analytics:

11 changes: 5 additions & 6 deletions docs/interpreter/scalding.md
@@ -66,20 +66,19 @@ and directories with custom jar files you need for your scalding commands.

**Set arguments to the scalding repl**

The default arguments are: "--local --repl"
The default arguments are: `--local --repl`

For hdfs mode you need to add: "--hdfs --repl"
For hdfs mode you need to add: `--hdfs --repl`

If you want to add custom jars, you need to add:
"-libjars directory/*:directory/*"
If you want to add custom jars, you need to add: `-libjars directory/*:directory/*`

For reducer estimation, you need to add something like:
"-Dscalding.reducer.estimator.classes=com.twitter.scalding.reducer_estimation.InputSizeReducerEstimator"
`-Dscalding.reducer.estimator.classes=com.twitter.scalding.reducer_estimation.InputSizeReducerEstimator`
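
Putting these together, a hypothetical full argument string for HDFS mode with custom jars and reducer estimation (the jar directories are illustrative) could be:

```
--hdfs --repl -libjars directory1/*:directory2/* -Dscalding.reducer.estimator.classes=com.twitter.scalding.reducer_estimation.InputSizeReducerEstimator
```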

**Set max.open.instances**

If you want to control the maximum number of open interpreters, you have to select "scoped" interpreter for note
option and set max.open.instances argument.
option and set `max.open.instances` argument.

## Testing the Interpreter

6 changes: 4 additions & 2 deletions docs/interpreter/spark.md
@@ -361,8 +361,10 @@ This is to make the server communicate with KDC.

3. Add the two properties below to Spark configuration (`[SPARK_HOME]/conf/spark-defaults.conf`):

spark.yarn.principal
spark.yarn.keytab
```
spark.yarn.principal
spark.yarn.keytab
```

> **NOTE:** If you do not have permission to access the above `spark-defaults.conf` file, you can optionally add the above lines to the Spark interpreter setting through the Interpreter tab in the Zeppelin UI.
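
For illustration, such entries in `spark-defaults.conf` might look like the following sketch (the principal and keytab path are hypothetical):

```
spark.yarn.principal  zeppelin@EXAMPLE.COM
spark.yarn.keytab     /etc/security/keytabs/zeppelin.keytab
```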
4 changes: 1 addition & 3 deletions docs/setup/deployment/flink_and_spark_cluster.md
@@ -20,7 +20,7 @@ limitations under the License.

{% include JB/setup %}

# Install with flink and spark cluster
# Install with Flink and Spark cluster

<div id="toc"></div>

@@ -158,9 +158,7 @@ See the [Zeppelin tutorial](../../quickstart/tutorial.html) for basic Zeppelin u
##### Flink Test
Create a new notebook named "Flink Test" and copy and paste the following code.


```scala

%flink // let Zeppelin know what interpreter to use.

val text = benv.fromElements("In the time of chimpanzees, I was a monkey", // some lines of text to analyze
10 changes: 4 additions & 6 deletions docs/setup/deployment/virtual_machine.md
@@ -25,9 +25,7 @@ limitations under the License.

## Overview

Apache Zeppelin distribution includes a script directory

`scripts/vagrant/zeppelin-dev`
Apache Zeppelin distribution includes a script directory `scripts/vagrant/zeppelin-dev`.

This script creates a virtual machine that launches a repeatable, known set of core dependencies required for developing Zeppelin. It can also be used to run an existing Zeppelin build if you don't plan to build from source.
For PySpark users, this script includes several helpful [Python Libraries](#python-extras).
@@ -87,8 +85,8 @@ By default, Vagrant will share your project directory (the directory with the Va

Running the following commands in the guest machine should display these expected versions:

`node --version` should report *v0.12.7*
`mvn --version` should report *Apache Maven 3.3.9* and *Java version: 1.7.0_85*
* `node --version` should report *v0.12.7*
* `mvn --version` should report *Apache Maven 3.3.9* and *Java version: 1.7.0_85*

The virtual machine consists of:

@@ -189,4 +187,4 @@ show(plt)
### R Extras

With Zeppelin running, an R Tutorial notebook will be available. The R packages required to run the examples and graphs in this tutorial notebook were installed by this virtual machine.
The installed R Packages include: Knitr, devtools, repr, rCharts, ggplot2, googleVis, mplot, htmltools, base64enc, data.table
The installed R Packages include: `knitr`, `devtools`, `repr`, `rCharts`, `ggplot2`, `googleVis`, `mplot`, `htmltools`, `base64enc`, `data.table`.
2 changes: 1 addition & 1 deletion docs/setup/operation/trouble_shooting.md
@@ -19,7 +19,7 @@ limitations under the License.
-->
{% include JB/setup %}

# Trouble Shooting
# Troubleshooting

<div id="toc"></div>

6 changes: 3 additions & 3 deletions docs/setup/security/http_security_headers.md
@@ -89,9 +89,9 @@ The following property needs to be updated in the zeppelin-site.xml in order to

You can choose an appropriate value from the list below.

* DENY
* SAMEORIGIN
* ALLOW-FROM _uri_
* `DENY`
* `SAMEORIGIN`
* `ALLOW-FROM uri`
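
A sketch of what such an entry might look like in `zeppelin-site.xml`; the property name used here is an assumption, so use the property referenced earlier in this section:

```xml
<property>
  <name>zeppelin.server.xframe.options</name>  <!-- assumed property name -->
  <value>SAMEORIGIN</value>
</property>
```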

## Setting up Server Header

1 change: 1 addition & 0 deletions docs/usage/display_system/angular_backend.md
@@ -99,6 +99,7 @@ In this section, we will introduce a simpler and more intuitive way of using **A
Here are some usage examples.

### Import

```scala
// In notebook scope
import org.apache.zeppelin.display.angular.notebookscope._
