From 2048c39e09a2f73c16e0e1ab4f0124516adf51a3 Mon Sep 17 00:00:00 2001
From: Dilip Biswal
Date: Thu, 18 Oct 2018 16:56:08 -0700
Subject: [PATCH] [SPARK-24499][DOC] Fix some broken links

---
 docs/sql-data-sources.md      | 4 ++--
 docs/sql-programming-guide.md | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/sql-data-sources.md b/docs/sql-data-sources.md
index aa607ec23a56d..636636af6263c 100644
--- a/docs/sql-data-sources.md
+++ b/docs/sql-data-sources.md
@@ -16,8 +16,8 @@ goes into specific options that are available for the built-in data sources.
   * [Manually Specifying Options](sql-data-sources-load-save-functions.html#manually-specifying-options)
   * [Run SQL on files directly](sql-data-sources-load-save-functions.html#run-sql-on-files-directly)
   * [Save Modes](sql-data-sources-load-save-functions.html#save-modes)
-  * [Saving to Persistent Tables](sql-data-sources-load-save-functions.html#run-sql-on-files-directly)
-  * [Bucketing, Sorting and Partitioning](sql-data-sources-load-save-functions.html#run-sql-on-files-directly)
+  * [Saving to Persistent Tables](sql-data-sources-load-save-functions.html#saving-to-persistent-tables)
+  * [Bucketing, Sorting and Partitioning](sql-data-sources-load-save-functions.html#bucketing-sorting-and-partitioning)
 * [Parquet Files](sql-data-sources-parquet.html)
   * [Loading Data Programmatically](sql-data-sources-parquet.html#loading-data-programmatically)
   * [Partition Discovery](sql-data-sources-parquet.html#partition-discovery)
diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index 42b00c9c83681..eca8915dfa975 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -22,7 +22,7 @@ Spark SQL can also be used to read data from an existing Hive installation. For
 configure this feature, please refer to the [Hive Tables](sql-data-sources-hive-tables.html) section.

 When running SQL from within another programming language the results will be returned as a [Dataset/DataFrame](#datasets-and-dataframes).
 You can also interact with the SQL interface using the [command-line](sql-distributed-sql-engine.html#running-the-spark-sql-cli)
-or over [JDBC/ODBC](#sql-distributed-sql-engine.html#running-the-thrift-jdbcodbc-server).
+or over [JDBC/ODBC](sql-distributed-sql-engine.html#running-the-thrift-jdbcodbc-server).

 ## Datasets and DataFrames