
Commit

add missing language spec for syntax highlighting
alexott committed Jun 1, 2018
1 parent bb26a29 commit 5a7950e
Showing 35 changed files with 221 additions and 196 deletions.
39 changes: 24 additions & 15 deletions docs/development/contribution/how_to_contribute_code.md
@@ -51,13 +51,13 @@ First of all, you need Zeppelin source code. The official location of Zeppelin i

Get the source code on your development machine using git.

-```
+```bash
git clone git://git.apache.org/zeppelin.git zeppelin
```

You may also want to develop against a specific branch. For example, for branch-0.5.6

-```
+```bash
git clone -b branch-0.5.6 git://git.apache.org/zeppelin.git zeppelin
```

@@ -69,19 +69,19 @@ Before making a pull request, please take a look [Contribution Guidelines](http:

### Build

-```
+```bash
mvn install
```

To skip tests

-```
+```bash
mvn install -DskipTests
```

To build with a specific Spark / Hadoop version

-```
+```bash
mvn install -Dspark.version=x.x.x -Dhadoop.version=x.x.x
```

@@ -93,18 +93,26 @@ For the further

1. Copy the `conf/zeppelin-site.xml.template` to `zeppelin-server/src/main/resources/zeppelin-site.xml` and change the configurations in this file if required
2. Run the following command
-```

+```bash
cd zeppelin-server
-HADOOP_HOME=YOUR_HADOOP_HOME JAVA_HOME=YOUR_JAVA_HOME mvn exec:java -Dexec.mainClass="org.apache.zeppelin.server.ZeppelinServer" -Dexec.args=""
+HADOOP_HOME=YOUR_HADOOP_HOME JAVA_HOME=YOUR_JAVA_HOME \
+mvn exec:java -Dexec.mainClass="org.apache.zeppelin.server.ZeppelinServer" -Dexec.args=""
```

#### Option 2 - Daemon Script

-> **Note:** Make sure you first run ```mvn clean install -DskipTests``` on your zeppelin root directory, otherwise your server build will fail to find the required dependencies in the local repro.
+> **Note:** Make sure you first run
+```bash
+mvn clean install -DskipTests
+```

+in your zeppelin root directory, otherwise your server build will fail to find the required dependencies in the local repo.

or use daemon script

-```
+```bash
bin/zeppelin-daemon start
```

@@ -122,8 +130,7 @@ Some portions of the Zeppelin code are generated by [Thrift](http://thrift.apach

To regenerate the code, install **thrift-0.9.2** and then run the following command to generate thrift code.


-```
+```bash
cd <zeppelin_home>/zeppelin-interpreter/src/main/thrift
./genthrift.sh
```
@@ -132,14 +139,16 @@ cd <zeppelin_home>/zeppelin-interpreter/src/main/thrift

Zeppelin has a [set of integration tests](https://github.com/apache/zeppelin/tree/master/zeppelin-server/src/test/java/org/apache/zeppelin/integration) using Selenium. To run these tests, first build and run Zeppelin, and make sure it is running on port 8080. Then you can run the tests using the following command

-```
-TEST_SELENIUM=true mvn test -Dtest=[TEST_NAME] -DfailIfNoTests=false -pl 'zeppelin-interpreter,zeppelin-zengine,zeppelin-server'
+```bash
+TEST_SELENIUM=true mvn test -Dtest=[TEST_NAME] -DfailIfNoTests=false \
+ -pl 'zeppelin-interpreter,zeppelin-zengine,zeppelin-server'
```

For example, to run [ParagraphActionsIT](https://github.com/apache/zeppelin/blob/master/zeppelin-server/src/test/java/org/apache/zeppelin/integration/ParagraphActionsIT.java),

-```
-TEST_SELENIUM=true mvn test -Dtest=ParagraphActionsIT -DfailIfNoTests=false -pl 'zeppelin-interpreter,zeppelin-zengine,zeppelin-server'
+```bash
+TEST_SELENIUM=true mvn test -Dtest=ParagraphActionsIT -DfailIfNoTests=false \
+ -pl 'zeppelin-interpreter,zeppelin-zengine,zeppelin-server'
```

You'll need the Firefox web browser installed in your development environment. Since the CI server uses [Firefox 31.0](https://ftp.mozilla.org/pub/firefox/releases/31.0/) to run the Selenium tests, it is a good idea to install the same version (disable auto-update to keep that version).
2 changes: 1 addition & 1 deletion docs/development/contribution/how_to_contribute_website.md
@@ -39,7 +39,7 @@ Documentation website is hosted in 'master' branch under `/docs/` dir.
First of all, you need the website source code. The official location of the Zeppelin mirror is [http://git.apache.org/zeppelin.git](http://git.apache.org/zeppelin.git).
Get the source code on your development machine using git.

-```
+```bash
git clone git://git.apache.org/zeppelin.git
cd docs
```
17 changes: 10 additions & 7 deletions docs/development/contribution/useful_developer_tools.md
@@ -37,7 +37,7 @@ Check [zeppelin-web: Local Development](https://github.com/apache/zeppelin/tree/

This script can be helpful when changing the JDK version frequently.

-```
+```bash
function setjdk() {
if [ $# -ne 0 ]; then
# written based on OSX.
@@ -59,7 +59,7 @@ you can use this function like `setjdk 1.8` / `setjdk 1.7`
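
For reference, a complete JDK-switching helper of this kind could look like the sketch below. It assumes macOS and `/usr/libexec/java_home`; the `removeFromPath` helper name is illustrative rather than taken from the excerpt above.

```bash
# Sketch of a full setjdk helper (macOS; relies on /usr/libexec/java_home).
function setjdk() {
  if [ $# -ne 0 ]; then
    # drop any previously selected JDK from PATH
    removeFromPath '/System/Library/Frameworks/JavaVM.framework/Home/bin'
    if [ -n "${JAVA_HOME+x}" ]; then
      removeFromPath "$JAVA_HOME/bin"
    fi
    # point JAVA_HOME at the requested version, e.g. `setjdk 1.8`
    export JAVA_HOME=$(/usr/libexec/java_home -v "$1")
    export PATH="$JAVA_HOME/bin:$PATH"
  fi
}

function removeFromPath() {
  # strip the given directory from PATH
  export PATH=$(echo "$PATH" | sed -E -e "s;:$1;;" -e "s;$1:?;;")
}
```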

### Building Submodules Selectively

-```
+```bash
# build `zeppelin-web` only
mvn clean -pl 'zeppelin-web' package -DskipTests;

@@ -71,15 +71,16 @@ mvn clean package -pl 'spark,spark-dependencies,zeppelin-server' --am -DskipTest

# build spark related modules with profiles: scala 2.11, spark 2.1 hadoop 2.7
./dev/change_scala_version.sh 2.11
-mvn clean package -Pspark-2.1 -Phadoop-2.7 -Pscala-2.11 -pl 'spark,spark-dependencies,zeppelin-server' --am -DskipTests
+mvn clean package -Pspark-2.1 -Phadoop-2.7 -Pscala-2.11 \
+ -pl 'spark,spark-dependencies,zeppelin-server' --am -DskipTests

# build `zeppelin-server` and `markdown` with dependencies
mvn clean package -pl 'markdown,zeppelin-server' --am -DskipTests
```

### Running Individual Tests

-```
+```bash
# run the `HeliumBundleFactoryTest` test class
mvn test -pl 'zeppelin-server' --am -DfailIfNoTests=false -Dtest=HeliumBundleFactoryTest
```
@@ -88,13 +89,15 @@ mvn test -pl 'zeppelin-server' --am -DfailIfNoTests=false -Dtest=HeliumBundleFac

Make sure that a Zeppelin instance is running before executing the integration tests (i.e. the Selenium tests).

-```
+```bash
# run the `SparkParagraphIT` test class
TEST_SELENIUM="true" mvn test -pl 'zeppelin-server' --am -DfailIfNoTests=false -Dtest=SparkParagraphIT
TEST_SELENIUM="true" mvn test -pl 'zeppelin-server' --am \
-DfailIfNoTests=false -Dtest=SparkParagraphIT

# run the `testSqlSpark` test function only in the `SparkParagraphIT` class
# but note that some tests might depend on the previous tests
TEST_SELENIUM="true" mvn test -pl 'zeppelin-server' --am -DfailIfNoTests=false -Dtest=SparkParagraphIT#testSqlSpark
TEST_SELENIUM="true" mvn test -pl 'zeppelin-server' --am \
-DfailIfNoTests=false -Dtest=SparkParagraphIT#testSqlSpark
```


6 changes: 3 additions & 3 deletions docs/development/helium/writing_application.md
@@ -147,15 +147,15 @@ Resource name is a string which will be compared with the name of objects in the

An application may require two or more resources. Required resources can be listed inside the JSON array. For example, if the application requires objects "name1" and "name2" and an object of type "className1" to run, the resources field can be

-```
+```json
resources: [
[ "name1", "name2", ":className1", ...]
]
```

If the application can handle alternative combinations of required resources, the alternative sets can be listed as below.

-```
+```json
resources: [
[ "name", ":className"],
[ "altName", ":altClassName1"],
@@ -165,7 +165,7 @@ resources: [

An easier way to understand this scheme is

-```
+```json
resources: [
[ 'resource' AND 'resource' AND ... ] OR
[ 'resource' AND 'resource' AND ... ] OR
14 changes: 7 additions & 7 deletions docs/development/writing_zeppelin_interpreter.md
@@ -42,7 +42,7 @@ In 'Separate Interpreter(scoped / isolated) for each note' mode which you can se
Creating a new interpreter is quite simple. Just extend the [org.apache.zeppelin.interpreter](https://github.com/apache/zeppelin/blob/master/zeppelin-interpreter/src/main/java/org/apache/zeppelin/interpreter/Interpreter.java) abstract class and implement some methods.
For your interpreter project, you need to make `interpreter-parent` your parent project and use the plugins `maven-enforcer-plugin`, `maven-dependency-plugin`, and `maven-resources-plugin`. Here's a sample pom.xml

-```
+```xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>

@@ -128,7 +128,7 @@ Here is an example of `interpreter-setting.json` on your own interpreter.

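A minimal `interpreter-setting.json` might look like the sketch below; the group, class, and property names are illustrative placeholders, not the contents of any file in this commit.

```json
[
  {
    "group": "myintp",
    "name": "myintp",
    "className": "com.me.MyNewInterpreter",
    "properties": {
      "myintp.url": {
        "envName": "MYINTP_URL",
        "propertyName": "myintp.url",
        "defaultValue": "http://localhost:8080",
        "description": "Illustrative example: URL of the backend this interpreter talks to"
      }
    },
    "editor": {
      "language": "java",
      "editOnDblClick": false
    }
  }
]
```
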
Finally, Zeppelin uses static initialization with the following:

-```
+```java
static {
Interpreter.register("MyInterpreterName", MyClassName.class.getName());
}
@@ -157,7 +157,7 @@ If you want to add a new set of syntax highlighting,
1. Add the `mode-*.js` file to <code>[zeppelin-web/bower.json](https://github.com/apache/zeppelin/blob/master/zeppelin-web/bower.json)</code> (when built, <code>[zeppelin-web/src/index.html](https://github.com/apache/zeppelin/blob/master/zeppelin-web/src/index.html)</code> will be changed automatically).
2. Add a `language` field to the `editor` object. Note that if you don't specify the language field, your interpreter will use plain text mode for syntax highlighting. Let's say you want to set your language to `java`; then add:

-```
+```json
"editor": {
"language": "java"
}
@@ -166,7 +166,7 @@ If you want to add a new set of syntax highlighting,
### Edit on double click
If your interpreter uses a markup language such as Markdown or HTML, set `editOnDblClick` to `true` so that the text editor opens on paragraph double click and closes on paragraph run. Otherwise set it to `false`.

-```
+```json
"editor": {
"editOnDblClick": false
}
@@ -177,7 +177,7 @@ By default, `Ctrl+dot(.)` brings autocompletion list in the editor.
Through `completionKey`, each interpreter can configure its autocompletion key.
Currently `TAB` is the only available option.

-```
+```json
"editor": {
"completionKey": "TAB"
}
@@ -201,7 +201,7 @@ To configure your interpreter you need to follow these steps:
The property value is a comma-separated list of [INTERPRETER\_CLASS\_NAME] entries.
For example,

-```
+```xml
<property>
<name>zeppelin.interpreters</name>
<value>org.apache.zeppelin.spark.SparkInterpreter,org.apache.zeppelin.spark.PySparkInterpreter,org.apache.zeppelin.spark.SparkSqlInterpreter,org.apache.zeppelin.spark.DepInterpreter,org.apache.zeppelin.markdown.Markdown,org.apache.zeppelin.shell.ShellInterpreter,org.apache.zeppelin.hive.HiveInterpreter,com.me.MyNewInterpreter</value>
@@ -225,7 +225,7 @@ Note that the first interpreter configuration in zeppelin.interpreters will be t

For example,

-```
+```scala
%myintp

val a = "My interpreter"
8 changes: 4 additions & 4 deletions docs/interpreter/ignite.md
@@ -42,8 +42,8 @@ In order to use Ignite interpreters, you may install Apache Ignite in some simpl

> **Tip: If you want to run the Ignite examples from the CLI rather than the IDE, you can export an executable JAR file from the IDE and then run it using the command below.**
-```
-$ nohup java -jar </path/to/your Jar file name>
+```bash
+nohup java -jar </path/to/your Jar file name>
```

## Configuring Ignite Interpreter
@@ -96,7 +96,7 @@ In order to execute SQL query, use ` %ignite.ignitesql ` prefix. <br>
Supposing you are running `org.apache.ignite.examples.streaming.wordcount.StreamWords`, you can use the "words" cache (of course, you have to specify this cache name in the Ignite interpreter setting `ignite.jdbc.url` of Zeppelin).
For example, you can select the top 10 words in the words cache using the following query

-```
+```sql
%ignite.ignitesql
select _val, count(_val) as cnt from String group by _val order by cnt desc limit 10
```
@@ -105,7 +105,7 @@ select _val, count(_val) as cnt from String group by _val order by cnt desc limi

As long as your Ignite version and Zeppelin's Ignite version are the same, you can also use Scala code. Please check the Zeppelin Ignite version before you download your own Ignite.

-```
+```scala
%ignite
import org.apache.ignite._
import org.apache.ignite.cache.affinity._
8 changes: 5 additions & 3 deletions docs/interpreter/jdbc.md
@@ -738,16 +738,18 @@ The JDBC interpreter also supports interpolation of `ZeppelinContext` objects in
The following example shows one use of this facility:

####In Scala cell:
-```

+```scala
z.put("country_code", "KR")
// ...
```

####In later JDBC cell:

```sql
%jdbc_interpreter_name
select * from patents_list where
priority_country = '{country_code}' and filing_date like '2015-%'
```

Object interpolation is disabled by default, and can be enabled for all instances of the JDBC interpreter by
2 changes: 1 addition & 1 deletion docs/interpreter/kylin.md
@@ -75,7 +75,7 @@ To get started with Apache Kylin, please see [Apache Kylin Quickstart](https://kyl
## Using the Apache Kylin Interpreter
In a paragraph, use `%kylin(project_name)` to select the **kylin** interpreter and **project name**, and then input **sql**. If no project name is defined, the default project name from the above configuration is used.

-```
+```sql
%kylin(learn_project)
select count(*) from kylin_sales group by part_dt
```
6 changes: 3 additions & 3 deletions docs/interpreter/lens.md
@@ -35,8 +35,8 @@ In order to use Lens interpreters, you may install Apache Lens in some simple st
2. Before running Lens, you have to set HIVE_HOME and HADOOP_HOME. If you want to get more information about this, please refer to [here](http://lens.apache.org/lenshome/install-and-run.html#Installation). Lens also provides Pseudo Distributed mode. [Lens pseudo-distributed setup](http://lens.apache.org/lenshome/pseudo-distributed-setup.html) is done by using [docker](https://www.docker.com/). Hive server and hadoop daemons are run as separate processes in lens pseudo-distributed setup.
3. Now you can start (or stop) the Lens server.

-```
-./bin/lens-ctl start (or stop)
+```bash
+./bin/lens-ctl start # (or stop)
```

## Configuring Lens Interpreter
@@ -106,7 +106,7 @@ As you can see in this video, they are using Lens Client Shell(./bin/lens-cli.sh

<li> Create and Use(Switch) Databases.

-```
+```sql
create database newDb
```

9 changes: 5 additions & 4 deletions docs/interpreter/livy.md
@@ -177,22 +177,22 @@ Basically, you can use

**spark**

-```
+```scala
%livy.spark
sc.version
```


**pyspark**

-```
+```python
%livy.pyspark
print "1"
```

**sparkR**

-```
+```r
%livy.sparkr
hello <- function( name ) {
sprintf( "Hello, %s", name );
@@ -209,7 +209,8 @@ This is particularly useful when multiple users are sharing a Notebook server.

## Apply Zeppelin Dynamic Forms
You can leverage [Zeppelin Dynamic Form](../usage/dynamic_form/intro.html). Form templates are only available for the Livy SQL interpreter.
-```

+```sql
%livy.sql
select * from products where ${product_id=1}
```
6 changes: 3 additions & 3 deletions docs/interpreter/neo4j.md
@@ -75,7 +75,7 @@ In a notebook, to enable the **Neo4j** interpreter, click the **Gear** icon and
In a paragraph, use `%neo4j` to select the Neo4j interpreter and then input the Cypher commands.
For the list of Cypher commands, please refer to the official [Cypher Refcard](http://neo4j.com/docs/cypher-refcard/current/).

-```bash
+```
%neo4j
//Sample the TrumpWorld dataset
WITH
@@ -92,7 +92,7 @@ The Neo4j interpreter leverages the [Network display system](../usage/display_sy

This query:

-```bash
+```
%neo4j
MATCH (vp:Person {name:"VLADIMIR PUTIN"}), (dt:Person {name:"DONALD J. TRUMP"})
MATCH path = allShortestPaths( (vp)-[*]-(dt) )
@@ -104,7 +104,7 @@ produces the following result_
### Apply Zeppelin Dynamic Forms
You can leverage [Zeppelin Dynamic Form](../usage/dynamic_form/intro.html) inside your queries. This query:

-```bash
+```
%neo4j
MATCH (o:Organization)-[r]-()
RETURN o.name, count(*), collect(distinct type(r)) AS types
