Sql docs items (#12530)
* touch up sql refactor

* brush up SQL refactor

* incorporate feedback

* reorder sql

* Update docs/querying/sql.md

Co-authored-by: Victoria Lim <vtlim@users.noreply.github.com>

Co-authored-by: Victoria Lim <vtlim@users.noreply.github.com>
techdocsmith and vtlim committed May 17, 2022
1 parent 177638f commit 3e8d7a6d9f43f889df7d73dedb2a94e03574615e
Showing 17 changed files with 293 additions and 314 deletions.
@@ -54,7 +54,7 @@ The table datasource is the most common type. This is the kind of datasource you
[data ingestion](../ingestion/index.md). They are split up into segments, distributed around the cluster,
and queried in parallel.

-In [Druid SQL](sql-syntax.md#from), table datasources reside in the `druid` schema. This is the default schema, so table
+In [Druid SQL](sql.md#from), table datasources reside in the `druid` schema. This is the default schema, so table
datasources can be referenced as either `druid.dataSourceName` or simply `dataSourceName`.
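
For example, a minimal sketch assuming a table datasource named `wikipedia` (the name is hypothetical), showing both equivalent forms:

```sql
-- Fully qualified reference to a hypothetical "wikipedia" table datasource:
SELECT COUNT(*) FROM druid.wikipedia

-- Equivalent reference relying on the default "druid" schema:
SELECT COUNT(*) FROM wikipedia
```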

In native queries, table datasources can be referenced using their names as strings (as in the example above), or by
@@ -91,7 +91,7 @@ SELECT k, v FROM lookup.countries
```
<!--END_DOCUSAURUS_CODE_TABS-->

-Lookup datasources correspond to Druid's key-value [lookup](lookups.md) objects. In [Druid SQL](sql-syntax.md#from),
+Lookup datasources correspond to Druid's key-value [lookup](lookups.md) objects. In [Druid SQL](sql.md#from),
they reside in the `lookup` schema. They are preloaded in memory on all servers, so they can be accessed rapidly.
They can be joined onto regular tables using the [join operator](#join).
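
For instance, a minimal sketch that joins a hypothetical `wikipedia` table onto the `countries` lookup shown above (the `channel` and `countryIsoCode` columns are assumptions):

```sql
-- Join a hypothetical "wikipedia" table onto the "countries" lookup.
-- Lookup datasources expose their entries as "k" (key) and "v" (value) columns.
SELECT w.channel, c.v AS country_name, COUNT(*) AS edits
FROM wikipedia w
LEFT JOIN lookup.countries c ON w.countryIsoCode = c.k
GROUP BY w.channel, c.v
```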

@@ -139,10 +139,10 @@ FROM (
<!--END_DOCUSAURUS_CODE_TABS-->

Unions allow you to treat two or more tables as a single datasource. In SQL, this is done with the UNION ALL operator
-applied directly to tables, called a ["table-level union"](sql-syntax.md#table-level). In native queries, this is done with a
+applied directly to tables, called a ["table-level union"](sql.md#table-level). In native queries, this is done with a
"union" datasource.

-With SQL [table-level unions](sql-syntax.md#table-level) the same columns must be selected from each table in the same order,
+With SQL [table-level unions](sql.md#table-level) the same columns must be selected from each table in the same order,
and those columns must either have the same types, or types that can be implicitly cast to each other (such as different
numeric types). For this reason, it is more robust to write your queries to select specific columns.
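
As a hedged illustration with hypothetical tables `flights_2021` and `flights_2022`, selecting matching columns explicitly from each table:

```sql
-- Table-level UNION ALL: identical column lists, in the same order, from each table.
SELECT airline, delay
FROM flights_2021
UNION ALL
SELECT airline, delay
FROM flights_2022
```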

@@ -22,10 +22,10 @@ title: "Joins"
~ under the License.
-->

-Druid has two features related to joining of data:
+Apache Druid has two features related to joining of data:

1. [Join](datasource.md#join) operators. These are available using a [join datasource](datasource.md#join) in native
-queries, or using the [JOIN operator](sql-syntax.md) in Druid SQL. Refer to the
+queries, or using the [JOIN operator](sql.md) in Druid SQL. Refer to the
[join datasource](datasource.md#join) documentation for information about how joins work in Druid.
2. [Query-time lookups](lookups.md), simple key-to-value mappings. These are preloaded on all servers that are involved
in queries and can be accessed with or without an explicit join operator. Refer to the [lookups](lookups.md)
@@ -24,7 +24,7 @@ title: "Sorting and limiting (groupBy)"

> Apache Druid supports two query languages: [Druid SQL](sql.md) and [native queries](querying.md).
> This document describes the native
-> language. For information about sorting in SQL, refer to the [SQL documentation](sql-syntax.md#order-by).
+> language. For information about sorting in SQL, refer to the [SQL documentation](sql.md#order-by).
The limitSpec field provides the functionality to sort and limit the set of results from a groupBy query. If you group by a single dimension and are ordering by a single metric, we highly recommend using [TopN Queries](../querying/topnquery.md) instead. The performance will be substantially better. Available options are:

@@ -33,7 +33,8 @@ sidebar_label: "Aggregation functions"
> Apache Druid supports two query languages: Druid SQL and [native queries](querying.md).
> This document describes the SQL language.
-You can use aggregation functions in the SELECT clause of any query.
+You can use aggregation functions in the SELECT clause of any [Druid SQL](./sql.md) query.

Filter any aggregator using the FILTER clause, for example:

```
@@ -26,7 +26,7 @@ sidebar_label: "Druid SQL API"
> Apache Druid supports two query languages: Druid SQL and [native queries](querying.md).
> This document describes the SQL language.
-You can submit and cancel Druid SQL queries using the Druid SQL API.
+You can submit and cancel [Druid SQL](./sql.md) queries using the Druid SQL API.
The Druid SQL API is available at `https://ROUTER:8888/druid/v2/sql`, where `ROUTER` is the IP address of the Druid Router.

## Submit a query
@@ -27,7 +27,7 @@ sidebar_label: "SQL data types"
> This document describes the SQL language.

-Columns in Druid are associated with a specific data type. This topic describes supported data types in Druid SQL.
+Columns in Druid are associated with a specific data type. This topic describes supported data types in [Druid SQL](./sql.md).

## Standard types

@@ -27,7 +27,7 @@ sidebar_label: "JDBC driver API"
> This document describes the SQL language.

-You can make Druid SQL queries using the [Avatica JDBC driver](https://calcite.apache.org/avatica/downloads/). We recommend using Avatica JDBC driver version 1.17.0 or later. Note that as of the time of this writing, Avatica 1.17.0, the latest version, does not support passing connection string parameters from the URL to Druid, so you must pass them using a `Properties` object. Once you've downloaded the Avatica client jar, add it to your classpath and use the connect string `jdbc:avatica:remote:url=http://BROKER:8082/druid/v2/sql/avatica/`.
+You can make [Druid SQL](./sql.md) queries using the [Avatica JDBC driver](https://calcite.apache.org/avatica/downloads/). We recommend using Avatica JDBC driver version 1.17.0 or later. Note that as of the time of this writing, Avatica 1.17.0, the latest version, does not support passing connection string parameters from the URL to Druid, so you must pass them using a `Properties` object. Once you've downloaded the Avatica client jar, add it to your classpath and use the connect string `jdbc:avatica:remote:url=http://BROKER:8082/druid/v2/sql/avatica/`.

Example code:

@@ -74,7 +74,7 @@ Note that the non-JDBC [JSON over HTTP](sql-api.md#submit-a-query) API is statel

## Dynamic parameters

-You can use [parameterized queries](sql-syntax.md#dynamic-parameters) in JDBC code, as in this example:
+You can use [parameterized queries](sql.md#dynamic-parameters) in JDBC code, as in this example:

```java
PreparedStatement statement = connection.prepareStatement("SELECT COUNT(*) AS cnt FROM druid.foo WHERE dim1 = ? OR dim1 = ?");
@@ -28,7 +28,7 @@ sidebar_label: "SQL metadata tables"

Druid Brokers infer table and column metadata for each datasource from segments loaded in the cluster, and use this to
-plan SQL queries. This metadata is cached on Broker startup and also updated periodically in the background through
+plan [SQL queries](./sql.md). This metadata is cached on Broker startup and also updated periodically in the background through
[SegmentMetadata queries](segmentmetadataquery.md). Background metadata refreshing is triggered by
segments entering and exiting the cluster, and can also be throttled through configuration.
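
For example, a minimal sketch of inspecting the inferred schema through the `INFORMATION_SCHEMA` tables documented on this page; the `wikipedia` table name and the specific column names are assumptions based on the standard INFORMATION_SCHEMA layout:

```sql
-- List the columns and data types the Broker has inferred for a hypothetical "wikipedia" table.
SELECT COLUMN_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'druid' AND TABLE_NAME = 'wikipedia'
```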

@@ -35,7 +35,7 @@ sidebar_label: "Multi-value string functions"
> This document describes the SQL language.
Druid supports string dimensions containing multiple values.
-This page describes the operations you can perform on multi-value string dimensions.
+This page describes the operations you can perform on multi-value string dimensions using [Druid SQL](./sql.md).
See [Multi-value dimensions](multi-value-dimensions.md) for more information.

All "array" references in the multi-value string function documentation can refer to multi-value string columns or
@@ -35,7 +35,7 @@ sidebar_label: "Operators"
> This document describes the SQL language.

-Operators in Druid SQL typically operate on one or two values and return a result based on the values. Types of operators in Druid SQL include arithmetic, comparison, logical, and more, as described here.
+Operators in [Druid SQL](./sql.md) typically operate on one or two values and return a result based on the values. Types of operators in Druid SQL include arithmetic, comparison, logical, and more, as described here.
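
As a quick hedged illustration (the `wikipedia` table and its columns are hypothetical), arithmetic, comparison, and logical operators combined in one query:

```sql
-- Arithmetic (+), comparison (>, <), and logical (AND) operators in a single query.
SELECT page, added + deleted AS total_changes
FROM wikipedia
WHERE added > 100 AND deleted < 50
```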

## Arithmetic operators

@@ -26,7 +26,7 @@ sidebar_label: "SQL query context"
> Apache Druid supports two query languages: Druid SQL and [native queries](querying.md).
> This document describes the SQL language.
-Druid supports query context parameters which affect SQL planning.
+Druid supports query context parameters which affect [SQL query](./sql.md) planning.
See [Query context](query-context.md) for general query context parameters for all query types.

## SQL query context parameters
@@ -34,7 +34,7 @@ sidebar_label: "Scalar functions"
> Apache Druid supports two query languages: Druid SQL and [native queries](querying.md).
> This document describes the SQL language.
-Druid SQL includes scalar functions that include numeric and string functions, IP address functions, Sketch functions, and more, as described on this page.
+[Druid SQL](./sql.md) includes scalar functions that include numeric and string functions, IP address functions, Sketch functions, and more, as described on this page.


## Numeric functions
@@ -96,7 +96,7 @@ String functions accept strings, and return a type appropriate to the function.
|`CHAR_LENGTH(expr)`|Alias for `LENGTH`.|
|`CHARACTER_LENGTH(expr)`|Alias for `LENGTH`.|
|`STRLEN(expr)`|Alias for `LENGTH`.|
-|`LOOKUP(expr, lookupName)`|Look up `expr` in a registered [query-time lookup table](lookups.md). Note that lookups can also be queried directly using the [`lookup` schema](sql-syntax.md#from).|
+|`LOOKUP(expr, lookupName)`|Look up `expr` in a registered [query-time lookup table](lookups.md). Note that lookups can also be queried directly using the [`lookup` schema](sql.md#from).|
|`LOWER(expr)`|Returns `expr` in all lowercase.|
|`UPPER(expr)`|Returns `expr` in all uppercase.|
|`PARSE_LONG(string, [radix])`|Parses a string into a long (BIGINT) with the given radix, or 10 (decimal) if a radix is not provided.|
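
A short hedged sketch combining a few of these functions; the `wikipedia` table, its string columns, and the `country_name` registered lookup are all assumptions:

```sql
-- LOWER, LOOKUP, and PARSE_LONG applied to hypothetical string columns;
-- 'country_name' is assumed to be a registered lookup.
SELECT
  LOWER(channel) AS channel_lower,
  LOOKUP(countryIsoCode, 'country_name') AS country_name,
  PARSE_LONG(yearString) AS year_num
FROM wikipedia
```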
