[MINOR][DOCS] Make code blocks pretty in README.md
### What changes were proposed in this pull request?

This PR proposes to enable [syntax highlighting](https://docs.github.com/en/github/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks#syntax-highlighting) in code blocks of `README.md`.
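For illustration (this paraphrases the diff below): GitHub renders an indented code block as plain text, while a fenced block with a language tag gets syntax highlighting. The change is of this shape in the Markdown source:

Before (indented four spaces, no highlighting):

    ./build/mvn -DskipTests clean package

After (fenced, with a language tag GitHub can highlight):

```bash
./build/mvn -DskipTests clean package
```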

### Why are the changes needed?

To make the code blocks in `README.md` easier to read.

### Does this PR introduce _any_ user-facing change?

No, dev-only.

**Before:**

<img width="865" alt="Screen Shot 2022-01-05 at 10 31 32 AM" src="https://user-images.githubusercontent.com/6477701/148146477-addc6d9f-4da8-4860-9ead-6baaec442e0b.png">

**After:**

<img width="1067" alt="Screen Shot 2022-01-05 at 10 31 21 AM" src="https://user-images.githubusercontent.com/6477701/148146464-dc3d942f-a857-493c-a438-ebf23aa4c069.png">

### How was this patch tested?

Manually tested via the GitHub viewer.

Closes #35103 from HyukjinKwon/minor-format.

Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
HyukjinKwon authored and dongjoon-hyun committed Jan 5, 2022
1 parent 639d6f4 commit 3c6c690
32 changes: 24 additions & 8 deletions README.md
@@ -25,7 +25,9 @@ This README file only contains basic setup instructions.
Spark is built using [Apache Maven](https://maven.apache.org/).
To build Spark and its example programs, run:

-    ./build/mvn -DskipTests clean package
+```bash
+./build/mvn -DskipTests clean package
+```

(You do not need to do this if you downloaded a pre-built package.)

@@ -38,28 +40,38 @@ For general development tips, including info on developing Spark using an IDE, s

The easiest way to start using Spark is through the Scala shell:

-    ./bin/spark-shell
+```bash
+./bin/spark-shell
+```

Try the following command, which should return 1,000,000,000:

-    scala> spark.range(1000 * 1000 * 1000).count()
+```scala
+scala> spark.range(1000 * 1000 * 1000).count()
+```

## Interactive Python Shell

Alternatively, if you prefer Python, you can use the Python shell:

-    ./bin/pyspark
+```bash
+./bin/pyspark
+```

And run the following command, which should also return 1,000,000,000:

-    >>> spark.range(1000 * 1000 * 1000).count()
+```python
+>>> spark.range(1000 * 1000 * 1000).count()
+```

## Example Programs

Spark also comes with several sample programs in the `examples` directory.
To run one of them, use `./bin/run-example <class> [params]`. For example:

-    ./bin/run-example SparkPi
+```bash
+./bin/run-example SparkPi
+```

will run the Pi example locally.

@@ -70,7 +82,9 @@ locally with one thread, or "local[N]" to run locally with N threads. You
can also use an abbreviated class name if the class is in the `examples`
package. For instance:

-    MASTER=spark://host:7077 ./bin/run-example SparkPi
+```bash
+MASTER=spark://host:7077 ./bin/run-example SparkPi
+```

Many of the example programs print usage help if no params are given.

@@ -79,7 +93,9 @@ Many of the example programs print usage help if no params are given.
Testing first requires [building Spark](#building-spark). Once Spark is built, tests
can be run using:

-    ./dev/run-tests
+```bash
+./dev/run-tests
+```

Please see the guidance on how to
[run tests for a module, or individual tests](https://spark.apache.org/developer-tools.html#individual-tests).
