[SPARK-34494][SQL][DOCS] Move JSON data source options from Python and Scala into a single page

### What changes were proposed in this pull request?

This PR proposes to move the JSON data source options from the Python, Scala and Java documentation into a single page.

### Why are the changes needed?

So far, the documentation for JSON data source options has been duplicated across the API documentation of each language. This makes the many options inconvenient to manage, so it is more efficient to document all options in a single page and link to that page from each language's API documentation.

### Does this PR introduce _any_ user-facing change?

Yes, the documents will be shown below after this change:

- "JSON Files" page
<img width="876" alt="Screen Shot 2021-05-20 at 8 48 27 PM" src="https://user-images.githubusercontent.com/44108233/118973662-ddb3e580-b9ac-11eb-987c-8139aa9c3fe2.png">

- Python
<img width="714" alt="Screen Shot 2021-04-16 at 5 04 11 PM" src="https://user-images.githubusercontent.com/44108233/114992491-ca0cef00-9ed5-11eb-9d0f-4de60d8b2516.png">

- Scala
<img width="726" alt="Screen Shot 2021-04-16 at 5 04 54 PM" src="https://user-images.githubusercontent.com/44108233/114992594-e315a000-9ed5-11eb-8bd3-af7e568fcfe1.png">

- Java
<img width="911" alt="Screen Shot 2021-04-16 at 5 06 11 PM" src="https://user-images.githubusercontent.com/44108233/114992751-10624e00-9ed6-11eb-888c-8668d3c74289.png">

### How was this patch tested?

Manually built the docs and confirmed the pages.

Closes #32204 from itholic/SPARK-35081.

Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
itholic authored and HyukjinKwon committed May 21, 2021
1 parent 0fe65b5 commit 419ddcb
Showing 9 changed files with 263 additions and 719 deletions.
165 changes: 165 additions & 0 deletions docs/sql-data-sources-json.md
@@ -94,3 +94,168 @@ SELECT * FROM jsonTable
</div>

</div>

## Data Source Option

Data source options of JSON can be set via:
* the `.option`/`.options` methods of
* `DataFrameReader`
* `DataFrameWriter`
* `DataStreamReader`
* `DataStreamWriter`
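
A minimal sketch of this mechanism (the `SparkSession` named `spark` and the paths are assumptions for illustration; the option names come from the table below):

```python
# Hedged sketch: `spark` and the paths are illustrative, not prescribed by this page.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("json-options-sketch").getOrCreate()

# Reading: chain options one by one with .option(...) ...
df = (spark.read
      .option("multiLine", "true")
      .option("allowComments", "true")
      .json("examples/src/main/resources/people.json"))

# ... or pass them together with .options(**kwargs).
df = (spark.read
      .options(multiLine=True, allowComments=True)
      .json("examples/src/main/resources/people.json"))

# Writing: the same mechanism applies to DataFrameWriter (and the streaming variants).
df.write.option("compression", "gzip").json("/tmp/people_gzip_json")
```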

<table class="table">
<tr><th><b>Property Name</b></th><th><b>Default</b></th><th><b>Meaning</b></th><th><b>Scope</b></th></tr>
<tr>
<!-- TODO(SPARK-35433): Add timeZone to Data Source Option for CSV, too. -->
<td><code>timeZone</code></td>
<td>None</td>
<td>Sets the string that indicates a time zone ID to be used to format timestamps in the JSON datasources or partition values. The following formats of <code>timeZone</code> are supported:<br>
<ul>
<li>Region-based zone ID: It should have the form 'area/city', such as 'America/Los_Angeles'.</li>
<li>Zone offset: It should be in the format '(+|-)HH:mm', for example '-08:00' or '+01:00'. Also 'UTC' and 'Z' are supported as aliases of '+00:00'.</li>
</ul>
Other short names like 'CST' are not recommended because they can be ambiguous. If it isn't set, the current value of the SQL config <code>spark.sql.session.timeZone</code> is used by default.
</td>
<td>read/write</td>
</tr>
<tr>
<td><code>primitivesAsString</code></td>
<td>None</td>
<td>Infers all primitive values as a string type. If None is set, it uses the default value, <code>false</code>.</td>
<td>read</td>
</tr>
<tr>
<td><code>prefersDecimal</code></td>
<td>None</td>
<td>Infers all floating-point values as a decimal type. If the values do not fit in decimal, then it infers them as doubles. If None is set, it uses the default value, <code>false</code>.</td>
<td>read</td>
</tr>
<tr>
<td><code>allowComments</code></td>
<td>None</td>
<td>Ignores Java/C++ style comments in JSON records. If None is set, it uses the default value, <code>false</code>.</td>
<td>read</td>
</tr>
<tr>
<td><code>allowUnquotedFieldNames</code></td>
<td>None</td>
<td>Allows unquoted JSON field names. If None is set, it uses the default value, <code>false</code>.</td>
<td>read</td>
</tr>
<tr>
<td><code>allowSingleQuotes</code></td>
<td>None</td>
<td>Allows single quotes in addition to double quotes. If None is set, it uses the default value, <code>true</code>.</td>
<td>read</td>
</tr>
<tr>
<td><code>allowNumericLeadingZero</code></td>
<td>None</td>
<td>Allows leading zeros in numbers (e.g. 00012). If None is set, it uses the default value, <code>false</code>.</td>
<td>read</td>
</tr>
<tr>
<td><code>allowBackslashEscapingAnyCharacter</code></td>
<td>None</td>
<td>Allows accepting quoting of all characters using the backslash quoting mechanism. If None is set, it uses the default value, <code>false</code>.</td>
<td>read</td>
</tr>
<tr>
<td><code>mode</code></td>
<td>None</td>
<td>Allows a mode for dealing with corrupt records during parsing. If None is set, it uses the default value, <code>PERMISSIVE</code><br>
<ul>
<li><code>PERMISSIVE</code>: when it meets a corrupted record, puts the malformed string into a field configured by <code>columnNameOfCorruptRecord</code>, and sets malformed fields to <code>null</code>. To keep corrupt records, a user can set a string type field named <code>columnNameOfCorruptRecord</code> in a user-defined schema. If a schema does not have the field, it drops corrupt records during parsing. When inferring a schema, it implicitly adds a <code>columnNameOfCorruptRecord</code> field in the output schema.</li>
<li><code>DROPMALFORMED</code>: ignores whole corrupted records.</li>
<li><code>FAILFAST</code>: throws an exception when it meets corrupted records.</li>
</ul>
</td>
<td>read</td>
</tr>
<tr>
<td><code>columnNameOfCorruptRecord</code></td>
<td>None</td>
<td>Allows renaming the new field that holds the malformed string created by <code>PERMISSIVE</code> mode. This overrides <code>spark.sql.columnNameOfCorruptRecord</code>. If None is set, it uses the value specified in <code>spark.sql.columnNameOfCorruptRecord</code>.</td>
<td>read</td>
</tr>
<tr>
<td><code>dateFormat</code></td>
<td>None</td>
<td>Sets the string that indicates a date format. Custom date formats follow the formats at <a href="https://spark.apache.org/docs/latest/sql-ref-datetime-pattern.html"> datetime pattern</a>. This applies to date type. If None is set, it uses the default value, <code>yyyy-MM-dd</code>.</td>
<td>read/write</td>
</tr>
<tr>
<td><code>timestampFormat</code></td>
<td>None</td>
<td>Sets the string that indicates a timestamp format. Custom date formats follow the formats at <a href="https://spark.apache.org/docs/latest/sql-ref-datetime-pattern.html"> datetime pattern</a>. This applies to timestamp type. If None is set, it uses the default value, <code>yyyy-MM-dd'T'HH:mm:ss[.SSS][XXX]</code>.</td>
<td>read/write</td>
</tr>
<tr>
<td><code>multiLine</code></td>
<td>None</td>
<td>Parses one record, which may span multiple lines, per file. If None is set, it uses the default value, <code>false</code>.</td>
<td>read</td>
</tr>
<tr>
<td><code>allowUnquotedControlChars</code></td>
<td>None</td>
<td>Allows JSON strings to contain unquoted control characters (ASCII characters with value less than 32, including tab and line feed characters) or not.</td>
<td>read</td>
</tr>
<tr>
<td><code>encoding</code></td>
<td>None</td>
<td>For reading, allows forcibly setting one of the standard basic or extended encodings for the JSON files, for example UTF-16BE or UTF-32LE. If None is set, the encoding of the input JSON will be detected automatically when the <code>multiLine</code> option is set to <code>true</code>. For writing, specifies the encoding (charset) of the saved JSON files. If None is set, the default UTF-8 charset will be used.</td>
<td>read/write</td>
</tr>
<tr>
<td><code>lineSep</code></td>
<td>None</td>
<td>Defines the line separator that should be used for parsing. If None is set, it covers all <code>\r</code>, <code>\r\n</code> and <code>\n</code>.</td>
<td>read/write</td>
</tr>
<tr>
<td><code>samplingRatio</code></td>
<td>None</td>
<td>Defines the fraction of input JSON objects used for schema inference. If None is set, it uses the default value, <code>1.0</code>.</td>
<td>read</td>
</tr>
<tr>
<td><code>dropFieldIfAllNull</code></td>
<td>None</td>
<td>Whether to ignore columns of all-null values or empty arrays/structs during schema inference. If None is set, it uses the default value, <code>false</code>.</td>
<td>read</td>
</tr>
<tr>
<td><code>locale</code></td>
<td>None</td>
<td>Sets a locale as a language tag in IETF BCP 47 format. If None is set, it uses the default value, <code>en-US</code>. For instance, <code>locale</code> is used while parsing dates and timestamps.</td>
<td>read</td>
</tr>
<tr>
<td><code>allowNonNumericNumbers</code></td>
<td>None</td>
<td>Allows the JSON parser to recognize a set of "Not-a-Number" (NaN) tokens as legal floating point number values. If None is set, it uses the default value, <code>true</code>.<br>
<ul>
<li><code>+INF</code>: positive infinity, with <code>+Infinity</code> and <code>Infinity</code> as aliases.</li>
<li><code>-INF</code>: negative infinity, with <code>-Infinity</code> as an alias.</li>
<li><code>NaN</code>: other not-a-numbers, like the result of division by zero.</li>
</ul>
</td>
<td>read</td>
</tr>
<tr>
<td><code>compression</code></td>
<td>None</td>
<td>Compression codec to use when saving to file. This can be one of the known case-insensitive shortened names (none, bzip2, gzip, lz4, snappy and deflate).</td>
<td>write</td>
</tr>
<tr>
<td><code>ignoreNullFields</code></td>
<td>None</td>
<td>Whether to ignore null fields when generating JSON objects. If None is set, it uses the default value, <code>true</code>.</td>
<td>write</td>
</tr>
</table>
Other generic options can be found in <a href="https://spark.apache.org/docs/latest/sql-data-sources-generic-options.html"> Generic File Source Options</a>.
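
As a rough illustration of how a few of the options above combine in practice (the schema, the corrupt-record column name, and the input path below are assumptions made only for this sketch):

```python
# Hedged sketch: PERMISSIVE mode with columnNameOfCorruptRecord and a custom dateFormat.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DateType

spark = SparkSession.builder.appName("json-option-usage-sketch").getOrCreate()

schema = StructType([
    StructField("name", StringType()),
    StructField("birthday", DateType()),
    StructField("_corrupt_record", StringType()),  # receives the raw malformed row in PERMISSIVE mode
])

df = (spark.read
      .schema(schema)
      .option("mode", "PERMISSIVE")
      .option("columnNameOfCorruptRecord", "_corrupt_record")
      .option("dateFormat", "yyyy/MM/dd")
      .json("/tmp/people_with_bad_rows.json"))

# Rows that failed to parse keep their raw text in `_corrupt_record` and null elsewhere.
df.filter(df["_corrupt_record"].isNotNull()).show(truncate=False)
```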
4 changes: 3 additions & 1 deletion python/pyspark/sql/functions.py
@@ -3711,7 +3711,9 @@ def schema_of_json(json, options=None):
json : :class:`~pyspark.sql.Column` or str
a JSON string or a foldable string column containing a JSON string.
options : dict, optional
- options to control parsing. accepts the same options as the JSON datasource
+ options to control parsing. accepts the same options as the JSON datasource.
+ See `Data Source Option <https://spark.apache.org/docs/latest/sql-data-sources-json.html#data-source-option>`_ # noqa
+ in the version you use.
.. versionchanged:: 3.0
It accepts `options` parameter to control schema inferring.
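
For reference, a small sketch of how such an `options` dict can be passed to `schema_of_json` (the data and the chosen option are illustrative):

```python
# Hedged sketch: passing a JSON data source option to schema_of_json.
from pyspark.sql import SparkSession
from pyspark.sql.functions import schema_of_json, lit

spark = SparkSession.builder.appName("schema-of-json-sketch").getOrCreate()

# allowUnquotedFieldNames lets the parser accept the unquoted key `a` during inference.
spark.range(1).select(
    schema_of_json(lit("{a: 1}"), {"allowUnquotedFieldNames": "true"}).alias("json_schema")
).show(truncate=False)  # prints the inferred DDL string (a struct with a long/bigint field)
```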
