
Update programming guide
maryannxue committed Jul 26, 2018
1 parent a83b64b commit 027b6c4
Showing 1 changed file with 7 additions and 0 deletions.
7 changes: 7 additions & 0 deletions docs/sql-programming-guide.md
@@ -1435,6 +1435,13 @@ the following case-insensitive options:
The custom schema to use when reading data from JDBC connectors. For example, <code>"id DECIMAL(38, 0), name STRING"</code>. You can also specify partial fields; the remaining columns use the default type mapping. For example, <code>"id DECIMAL(38, 0)"</code>. The column names should be identical to the corresponding column names of the JDBC table. Users can specify the corresponding Spark SQL data types instead of using the defaults. This option applies only to reading.
</td>
</tr>

<tr>
<td><code>pushDownPredicate</code></td>
<td>
The option to enable or disable predicate push-down into the JDBC data source. When set to true, Spark pushes filters down to the JDBC data source as far as possible; when set to false, no filters are pushed down, and all filtering is handled by Spark. Predicate push-down is usually turned off when Spark can evaluate the predicate faster than the JDBC data source can.
</td>
</tr>
</table>
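The two options above can be combined when constructing a JDBC read. A minimal PySpark sketch follows; the connection URL, table name, and credentials are hypothetical placeholders, and running it requires a reachable JDBC source:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("jdbc-options-example").getOrCreate()

df = (spark.read
      .format("jdbc")
      .option("url", "jdbc:postgresql://dbserver:5432/mydb")  # placeholder URL
      .option("dbtable", "schema.tablename")                  # placeholder table
      .option("user", "username")                             # placeholder credentials
      .option("password", "password")
      # Override the default type mapping for selected columns only;
      # columns not listed here keep the default mapping.
      .option("customSchema", "id DECIMAL(38, 0), name STRING")
      # Keep filter evaluation in Spark instead of the database.
      .option("pushDownPredicate", "false")
      .load())

# With pushDownPredicate set to false, this filter is applied by Spark
# after the rows are fetched, not by the JDBC data source.
df.where("id > 100").show()
```

With <code>pushDownPredicate</code> left at its default, Spark would instead translate the <code>where</code> clause into a SQL predicate and let the database evaluate it.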

<div class="codetabs">
