diff --git a/.github/workflows/preview.yml b/.github/workflows/preview.yml
index 9629878b9..0ecb1f135 100644
--- a/.github/workflows/preview.yml
+++ b/.github/workflows/preview.yml
@@ -60,4 +60,27 @@ jobs:
Commit SHA: ${{ github.event.pull_request.head.sha }}
> :package: Build generates a preview & updates link on each commit.
- comment_tag: preview
\ No newline at end of file
+ comment_tag: preview
+
+ validate-links:
+ name: "Validate broken links"
+ runs-on: ubuntu-latest
+
+ steps:
+ - name: Checkout repository
+ uses: actions/checkout@v3
+ with:
+ ref: ${{ github.event.pull_request.head.sha }}
+
+ - name: Setup Node.js
+ uses: actions/setup-node@v3
+ with:
+ node-version: '18'
+ cache: 'yarn'
+
+ - name: Install dependencies
+ run: yarn install --frozen-lockfile
+
+ - name: Build site for broken link validation
+ run: |
+ yarn build
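+
+      # Sketch: `yarn build` alone may only surface broken links if the site
+      # generator fails the build on them. If it does not, a dedicated checker
+      # could run against the build output. This step assumes the site is
+      # emitted to `build/` and uses the `linkinator` CLI; both are
+      # assumptions, not part of the original workflow.
+      - name: Check links in built site
+        run: npx linkinator ./build --recurse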
diff --git a/documentation/clients/ingest-c-and-cpp.md b/documentation/clients/ingest-c-and-cpp.md
index cf5bbe236..0a08eed53 100644
--- a/documentation/clients/ingest-c-and-cpp.md
+++ b/documentation/clients/ingest-c-and-cpp.md
@@ -422,7 +422,7 @@ error:
As you can see, both events use the same timestamp. We recommend using the
original event timestamps when ingesting data into QuestDB. Using the current
timestamp hinders the ability to deduplicate rows which is
-[important for exactly-once processing](#/docs/clients/java_ilp/#exactly-once-delivery-vs-at-least-once-delivery).
+[important for exactly-once processing](/docs/reference/api/ilp/overview/#exactly-once-delivery-vs-at-least-once-delivery).
## Other Considerations for both C and C++
diff --git a/documentation/clients/java_ilp.md b/documentation/clients/java_ilp.md
index bcc775136..b547485d2 100644
--- a/documentation/clients/java_ilp.md
+++ b/documentation/clients/java_ilp.md
@@ -96,11 +96,12 @@ The valid transport protocols are:
- `tcp`: ILP/TCP
- `tcps`: ILP/TCP with TLS encryption
-A [transport protocol](#transport-selection) and the key `addr=host:port` are
-required. The key `addr` defines the hostname and port of the QuestDB server. If
-the port is not specified, it defaults to 9000 for HTTP(s) transports and 9009
-for TCP(s) transports. For a complete list of options, refer to the
-[Configuration Options](#configuration-options) section.
+A [transport protocol](/docs/reference/api/ilp/overview/#transport-selection)
+and the key `addr=host:port` are required. The key `addr` defines the hostname
+and port of the QuestDB server. If the port is not specified, it defaults to
+9000 for HTTP(s) transports and 9009 for TCP(s) transports. For a complete list
+of options, refer to the [Configuration Options](#configuration-options)
+section.
## Example with TLS and Authentication enabled
diff --git a/documentation/concept/designated-timestamp.md b/documentation/concept/designated-timestamp.md
index b1e2918e7..1836311c7 100644
--- a/documentation/concept/designated-timestamp.md
+++ b/documentation/concept/designated-timestamp.md
@@ -13,7 +13,7 @@ leverage time-oriented language features and high-performance functionalities.
A designated timestamp is elected by using the
[`timestamp(columnName)`](/docs/reference/function/timestamp/) function:
-- during a [CREATE TABLE](/docs/reference/sql/create-table/#timestamp) operation
+- during a [CREATE TABLE](/docs/reference/sql/create-table/#designated-timestamp) operation
- during a [SELECT](/docs/reference/sql/select/#timestamp) operation
(`dynamic timestamp`)
- when ingesting data via InfluxDB Line Protocol, for tables that do not already
diff --git a/documentation/concept/geohashes.md b/documentation/concept/geohashes.md
index a8ddc316a..5b52244dc 100644
--- a/documentation/concept/geohashes.md
+++ b/documentation/concept/geohashes.md
@@ -454,7 +454,7 @@ geo_data geohash="46swgj10"
## CSV import
Geohashes may also be inserted via
-[REST API](/docs/guides/import-csv/#import-csv-via-rest/). In order to perform
+[REST API](/docs/guides/import-csv/#import-csv-via-rest). In order to perform
inserts in this way:
1. Create a table with columns of geohash type beforehand:
diff --git a/documentation/concept/indexes.md b/documentation/concept/indexes.md
index aa6964e76..023c567ae 100644
--- a/documentation/concept/indexes.md
+++ b/documentation/concept/indexes.md
@@ -18,7 +18,7 @@ for other types will be added over time.
The following are ways to index a `symbol` column:
- At table creation time using
- [CREATE TABLE](/docs/reference/sql/create-table/#index)
+ [CREATE TABLE](/docs/reference/sql/create-table/#column-indexes)
- Using
[ALTER TABLE ALTER COLUMN ADD INDEX](/docs/reference/sql/alter-table-alter-column-add-index/)
to index an existing `symbol` column
diff --git a/documentation/concept/mat-views.md b/documentation/concept/mat-views.md
index 8f5c7d078..53f36d44b 100644
--- a/documentation/concept/mat-views.md
+++ b/documentation/concept/mat-views.md
@@ -2,23 +2,15 @@
title: Materialized views
sidebar_label: Materialized views
description:
- Overview of QuestDB's materialized views. This feature helps you significantly
- speed up your aggregation queries.
+ Materialized views are designed to maintain the speed of your queries as you scale your data.
+ Understand how to structure your queries to take advantage of this feature.
---
:::info
-Materialized View support is in **beta**.
+Materialized View support is in **beta**. It may not be fit for production use.
-It may not be fit for production use.
-
-To enable **beta** materialized views, set `cairo.mat.view.enabled=true` in
-`server.conf`, or export the equivalent environment variable:
-`QDB_CAIRO_MAT_VIEW_ENABLED=true`.
-
-Please let us know if you run into issues.
-
-Either:
+Please let us know if you run into issues. Either:
1. Email us at [support@questdb.io](mailto:support@questdb.io)
2. Join our [public Slack](https://slack.questdb.com/)
@@ -26,212 +18,117 @@ Either:
:::
-A materialized view is a database object that stores the pre-computed results of
+A materialized view is a special QuestDB table that stores the pre-computed results of
a query. Unlike regular views, which compute their results at query time,
materialized views persist their data to disk, making them particularly
efficient for expensive aggregate queries that are run frequently.
-## Related documentation
-
-
-
-- **SQL Commands**
-
- - [`CREATE MATERIALIZED VIEW`](/docs/reference/sql/create-mat-view/): Create a
- new materialized view
- - [`DROP MATERIALIZED VIEW`](/docs/reference/sql/drop-mat-view/): Remove a
- materialized view
- - [`REFRESH MATERIALIZED VIEW`](/docs/reference/sql/refresh-mat-view/):
- Manually refresh a materialized view
- - [`ALTER MATERIALIZED VIEW RESUME WAL`](/docs/reference/sql/alter-mat-view-resume-wal/):
- Resume WAL for a materialized view
-
-- **Configuration**
- - [Materialized views configs](/docs/configuration/#materialized-views):
- Server configuration options for materialized views from `server.conf`
-
-## Architecture and behaviour
-
-### Storage model
+## What are materialized views for?
-Materialized views in QuestDB are implemented as special tables that maintain
-their data independently of their base tables. They use the same underlying
-storage engine as regular tables, benefiting from QuestDB's columnar storage and
-partitioning capabilities.
+Let's say that your application is ingesting vast amounts of time series data.
+Soon your QuestDB instance will grow from gigabytes to terabytes.
-### Refresh mechanism
-
-:::note
-
-Currently, QuestDB only supports **incremental refresh** for materialized views.
-
-Future releases will include additional refresh types, such as time-interval and
-manual refreshes.
-
-:::
-
-Unlike regular views, which recompute their results at query time, materialized
-views in QuestDB are incrementally refreshed as new data is added to the base
-table. This approach ensures that only the **relevant time slices** of the view
-are updated, avoiding the need to recompute the entire dataset. The refresh
-process works as follows:
-
-1. New data is inserted into the base table.
-2. The time-range of this data is identified.
-3. This data is extracted and used to recompute the materialised view.
-
-This refresh happens asynchronously, minimising any impact on write performance.
-The refresh state of the materialized view is tracked using transaction numbers.
-The transaction numbers can be compared with the base table, for monitoring the
-'refresh lag'.
-
-For example, if a base table receives new rows for `2025-02-18`, only that day's
-relevant time slices are recomputed, rather than reprocessing all historical
-data.
-
-You can monitor refresh status using the `materialized_views()` system function:
-
-```questdb-sql title="Listing all materialized views"
-SELECT
- view_name,
- last_refresh_timestamp,
- view_status,
- base_table_txn,
- applied_base_table_txn
-FROM materialized_views();
+```questdb-sql title="trades ddl"
+CREATE TABLE 'trades' (
+ symbol SYMBOL,
+ side SYMBOL,
+ price DOUBLE,
+ amount DOUBLE,
+ timestamp TIMESTAMP
+) TIMESTAMP(timestamp) PARTITION BY DAY;
```
-Here is an example output:
-
-| view_name | last_refresh_timestamp | view_status | base_table_txn | applied_base_table_txn |
-| ----------- | ---------------------- | ----------- | -------------- | ---------------------- |
-| trades_view | null | valid | 102 | 102 |
-
-When `base_table_txn` matches `applied_base_table_txn`, the materialized view is
-fully up-to-date.
-
-#### Refreshing an invalid view
+Queries that rely on a specific subset of the data (say, the last hour) will
+continue to run fast, but anything that requires scanning large numbers of rows
+or the entire dataset will begin to slow down.
-If a materialized view becomes invalid, you can check its status:
+One of the most common queries for time series data is the `SAMPLE BY` query.
+This query aggregates data into time-window buckets. Here's an example that
+computes the notional value traded per minute, broken down by symbol and side.
-```questdb-sql title="Checking view status"
+```questdb-sql title="SAMPLE BY query"
SELECT
- view_name,
- base_table_name,
- view_status,
- invalidation_reason
-FROM materialized_views();
+ timestamp,
+ symbol,
+ side,
+ sum(price * amount) AS notional
+FROM trades
+SAMPLE BY 1m;
```
-| view_name | base_table_name | view_status | invalidation_reason |
-| ------------- | --------------- | ----------- | -------------------------------------------- |
-| trades_view | trades | valid | null |
-| exchange_view | exchange | invalid | [-105] table does not exist [table=exchange] |
+Each time this query is run, it scans the entire dataset, so it becomes slower
+as the dataset grows. A materialized view runs the query only on a small subset
+of rows of the base table each time new rows are inserted. In other words,
+materialized views are designed to maintain the speed of your queries as you
+scale your data.
-To restore an invalid view, and refresh its data from scratch, use:
+When you create a materialized view, you register a time-based grouping query
+against a base table.
-```questdb-sql title="Restoring an invalid view"
-REFRESH MATERIALIZED VIEW view_name FULL;
-```
-
-This command deletes existing data in the materialized view, and re-runs its
-query.
-
-Once the view is repopulated, the view is marked as 'valid' so that it can be
-incrementally refreshed.
-
-For large base tables, a full refresh may take a significant amount of time. You
-can cancel the refresh using the
-[`CANCEL QUERY`](/docs/reference/sql/cancel-query/) SQL.
-
-For the conditions which can invalidate a materialized view, see the
-[technical requirements](#technical-requirements) section.
-
-### Base table relationship
-
-Every materialized view is tied to a base table that serves as its primary data
-source.
-
-- For single-table queries, the base table is automatically determined.
-- For multi-table queries, one table must be explicitly defined as the base
- table using `WITH BASE`.
-
-The view is automatically refreshed when the base table is changed. Therefore,
-you should make sure the table that you wish to drive the view is defined
-correctly. If you use the wrong base table, then the view may not be refreshed
-at the times you expect.
-
-## Technical requirements
-
-### Query constraints
+
-To create a materialized view, your query:
+Conceptually, a materialized view is an on-disk table tied to a query. As you
+add new data to the base table, the materialized view efficiently updates
+itself. You can then query the materialized view as a regular table, without
+the cost of a full scan of the base table.
-- Must use either `SAMPLE BY` or `GROUP BY` with a designated timestamp column
- key.
-- Must not contain `FROM-TO`, `FILL`, and `ALIGN TO FIRST OBSERVATION` clauses
- in `SAMPLE BY` queries
-- Must use join conditions that are compatible with incremental refreshing.
-- When the base table has [deduplication](/docs/concept/deduplication/) enabled,
- the non-aggregate columns selected by the materialized view query must be a
- subset of the `DEDUP` keys from the base table.
+## Creating a materialized view
-We intend to loosen some of these restrictions in future.
+To create a materialized view, surround your `SAMPLE BY` or time-based `GROUP BY`
+query with a [`CREATE MATERIALIZED VIEW`](/docs/reference/sql/create-mat-view) statement.
-### View invalidation
-
-The view's structure is tightly coupled with its base table.
-
-The main cause of invalidation for a materialised view, is when the table schema
-or underlying data is modified.
-
-These changes include dropping columns, dropping partitions and renaming the
-table.
+```questdb-sql title="trades_notional_1m ddl"
+CREATE MATERIALIZED VIEW 'trades_notional_1m' AS (
+ SELECT
+ timestamp,
+ symbol,
+ side,
+ sum(price * amount) AS notional
+ FROM trades
+ SAMPLE BY 1m
+) PARTITION BY DAY;
+```
-Data deletion or modification, for example, using `TRUNCATE` or `UPDATE`, may
-also cause invalidation.
+Querying a materialized view can be up to hundreds of times faster than
+executing the same query on the base table.
-## Replicated views (Enterprise only)
+```questdb-sql title="querying a materialized view"
+SELECT *
+FROM trades_notional_1m;
+```
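+
+To check that a view is keeping up with its base table, you can query the
+`materialized_views()` system function. A minimal sketch (the columns shown
+may vary while the feature is in beta):
+
+```questdb-sql title="checking refresh status"
+SELECT view_name, view_status, base_table_txn, applied_base_table_txn
+FROM materialized_views();
+```
+
+When `base_table_txn` matches `applied_base_table_txn`, the view is fully up
+to date.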
-Replication of the base table is independent of materialized view maintenance.
+## Roadmap and limitations
-If you promote a replica to a new primary instance, this may trigger a full
-materialized view refresh in the case where the replica did not already have a
-fully up-to-date materialized view.
+We aim to expand the scope of materialized views over time. For now, the
+feature focuses on time-based aggregations. It currently supports JOIN
+operations, but does not yet support all query types.
-## Resource management
+## Continue learning
-Materialized Views are compatible with the usual resource management systems:
+
-### Materialized view with TTL
+- **Guide**
-Materialized Views take extra storage and resources to maintain. If your
-`SAMPLE BY` unit is small (seconds, milliseconds), this could be a significant
-amount of data.
+ - [Materialized views guide](/docs/guides/mat-views/): A
+ comprehensive guide to materialized views, including examples and
+ explanations of the different options available
-Therefore, you can decide on a retention policy for the data, and set it using
-`TTL`:
+- **SQL Commands**
-```questdb-sql title="Create a materialized view with a TTL policy"
-CREATE MATERIALIZED VIEW trades_hourly_prices AS (
- SELECT
- timestamp,
- symbol,
- avg(price) AS avg_price
- FROM trades
- SAMPLE BY 1h
-) PARTITION BY WEEK TTL 8 WEEKS;
-```
+ - [`CREATE MATERIALIZED VIEW`](/docs/reference/sql/create-mat-view/): Create a
+ new materialized view
+ - [`DROP MATERIALIZED VIEW`](/docs/reference/sql/drop-mat-view/): Remove a
+ materialized view
+ - [`REFRESH MATERIALIZED VIEW`](/docs/reference/sql/refresh-mat-view/):
+ Manually refresh a materialized view
+ - [`ALTER MATERIALIZED VIEW RESUME WAL`](/docs/reference/sql/alter-mat-view-resume-wal/):
+ Resume WAL for a materialized view
-In this example, the view stores hourly summaries of the pricing data, in weekly
-partitions, keeping the prior 8 partitions.
+- **Configuration**
+ - [Materialized views configs](/docs/configuration/#materialized-views):
+ Server configuration options for materialized views from `server.conf`
diff --git a/documentation/concept/partitions.md b/documentation/concept/partitions.md
index 2638cc32f..252ad60a7 100644
--- a/documentation/concept/partitions.md
+++ b/documentation/concept/partitions.md
@@ -60,7 +60,7 @@ We recommend partitioning tables to benefit from the following advantages:
- Enables out-of-order indexing. From QuestDB 7.2, heavily out-of-order commits
can [split the partitions](#splitting-and-squashing-time-partitions) into
parts to reduce
- [write amplification](/docs/deployment/capacity-planning/#write-amplification).
+ [write amplification](/docs/operations/capacity-planning/#write-amplification).
## Checking time partition information
diff --git a/documentation/concept/write-ahead-log.md b/documentation/concept/write-ahead-log.md
index db650c4c7..114598072 100644
--- a/documentation/concept/write-ahead-log.md
+++ b/documentation/concept/write-ahead-log.md
@@ -83,7 +83,7 @@ WAL-enabled tables are the default table.
You can choose to use non-WAL tables if it's appropriate for your use case.
For more information, see the
-[`CREATE TABLE`](/docs/reference/sql/create-table/#wal-table-parameter)
+[`CREATE TABLE`](/docs/reference/sql/create-table/#write-ahead-log-wal-settings)
reference.
Other related configurations include:
diff --git a/documentation/configuration-utils/_cairo.config.json b/documentation/configuration-utils/_cairo.config.json
index 9ad85beb9..3e0aaec6e 100644
--- a/documentation/configuration-utils/_cairo.config.json
+++ b/documentation/configuration-utils/_cairo.config.json
@@ -413,7 +413,7 @@
},
"cairo.o3.partition.split.min.size": {
"default": "50MB",
- "description": "The estimated partition size on disk. This setting is one of the conditions to trigger [auto-partitioning](/docs/deployment/capacity-planning/#auto-partitioning)."
+ "description": "The estimated partition size on disk. This setting is one of the conditions to trigger [auto-partitioning](/docs/operations/capacity-planning/#auto-partitioning)."
},
"cairo.o3.last.partition.max.splits": {
"default": "20",
diff --git a/documentation/configuration-utils/_mat-view.config.json b/documentation/configuration-utils/_mat-view.config.json
index 5536a80a9..dff9aaa35 100644
--- a/documentation/configuration-utils/_mat-view.config.json
+++ b/documentation/configuration-utils/_mat-view.config.json
@@ -1,7 +1,7 @@
{
"cairo.mat.view.enabled": {
- "default": "false",
- "description": "Enables SQL support and refresh job for materialized views."
+ "default": "true",
+ "description": "Enables or disables SQL support and refresh job for materialized views."
},
"cairo.mat.view.parallel.sql.enabled": {
"default": "true",
diff --git a/documentation/configuration.md b/documentation/configuration.md
index ea545ad9c..d52cb74e3 100644
--- a/documentation/configuration.md
+++ b/documentation/configuration.md
@@ -96,9 +96,9 @@ export QDB_SHARED_WORKER_COUNT=5
## Reloadable settings
-Certain configuration settings can be reloaded without having to restart
-the server. To reload a setting, edit its value in the `server.conf` file
-and then run the `reload_config` SQL function:
+Certain configuration settings can be reloaded without having to restart the
+server. To reload a setting, edit its value in the `server.conf` file and then
+run the `reload_config` SQL function:
```questdb-sql title="Reload server configuration"
SELECT reload_config();
@@ -145,9 +145,10 @@ configuration) every other subsystem.
### HTTP server
-This section describes configuration settings for the [Web Console](/docs/web-console/) and the REST
-API available by default on port `9000`. For details on the use of this
-component, refer to the [web console documentation](/docs/web-console/) page.
+This section describes configuration settings for the
+[Web Console](/docs/web-console/) and the REST API available by default on port
+`9000`. For details on the use of this component, refer to the
+[web console documentation](/docs/web-console/) page.
@@ -173,16 +174,16 @@ CSV files.
Settings for `COPY`:
#### CSV import configuration for Docker
@@ -238,12 +239,12 @@ PostgresSQL wire protocol.
This section describes ingestion settings for incoming messages using InfluxDB
Line Protocol.
-| Property | Default | Description |
+| Property | Default | Description |
| ---------------------------- | ------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| line.default.partition.by | DAY | Table partition strategy to be used with tables that are created automatically by InfluxDB Line Protocol. Possible values are: `HOUR`, `DAY`, `WEEK`, `MONTH`, and `YEAR`. |
| line.auto.create.new.columns | true | When enabled, automatically creates new columns when they appear in the ingested data. When disabled, messages with new columns will be rejected. |
-| line.auto.create.new.tables | true | When enabled, automatically creates new tables when they appear in the ingested data. When disabled, messages for non-existent tables will be rejected. |
-| line.log.message.on.error | true | Controls whether malformed ILP messages are printed to the server log when errors occur. |
+| line.auto.create.new.tables | true | When enabled, automatically creates new tables when they appear in the ingested data. When disabled, messages for non-existent tables will be rejected. |
+| line.log.message.on.error | true | Controls whether malformed ILP messages are printed to the server log when errors occur. |
#### HTTP specific settings
@@ -263,8 +264,9 @@ ILP over HTTP is the preferred way of ingesting data.
:::note
-The UDP receiver is deprecated since QuestDB version 6.5.2. We recommend ILP over
-HTTP instead, or less frequently [ILP over TCP](/docs/reference/api/ilp/overview/).
+The UDP receiver is deprecated since QuestDB version 6.5.2. We recommend ILP
+over HTTP instead, or less frequently
+[ILP over TCP](/docs/reference/api/ilp/overview/).
:::
@@ -286,7 +288,7 @@ For setup instructions, see the
For an overview of the concept, see the
[replication concept](/docs/concept/replication/) page.
-For a tuning guide see... the
+For a tuning guide, see the
[replication tuning guide](/docs/guides/replication-tuning/).
@@ -295,12 +297,13 @@ For a tuning guide see... the
:::note
-Identity and Access Management is available within [QuestDB Enterprise](/enterprise/).
+Identity and Access Management is available within
+[QuestDB Enterprise](/enterprise/).
:::
-Identity and Access Management (IAM) ensures that data can be accessed only
-by authorized users. The below configuration properties relate to various
+Identity and Access Management (IAM) ensures that data can be accessed only by
+authorized users. The configuration properties below relate to various
authentication and authorization features.
For a full explanation of IAM, see the
@@ -316,8 +319,9 @@ OpenID Connect is [Enterprise](/enterprise/) only.
:::
-OpenID Connect (OIDC) support is part of QuestDB's Identity and Access Management.
-The database can be integrated with any OAuth2/OIDC Identity Provider (IdP).
+OpenID Connect (OIDC) support is part of QuestDB's Identity and Access
+Management. The database can be integrated with any OAuth2/OIDC Identity
+Provider (IdP).
For detailed information about OIDC, see the
[OpenID Connect (OIDC) integration guide](/docs/operations/openid-connect-oidc-integration).
@@ -337,9 +341,10 @@ database to fail its startup.
rows={{
"config.validation.strict": {
default: "false",
- description: "When enabled, startup fails if there are configuration errors.",
- reloadable: false
- }
+ description:
+ "When enabled, startup fails if there are configuration errors.",
+ reloadable: false,
+ },
}}
/>
@@ -356,18 +361,20 @@ information, and we do not share any of this data with third parties.
"telemetry.enabled": {
default: "true",
description: "Enable or disable anonymous usage metrics collection.",
- reloadable: false
+ reloadable: false,
},
"telemetry.hide.tables": {
default: "false",
- description: "Hides telemetry tables from `select * from tables()` output. As a result, telemetry tables will not be visible in the Web Console table view.",
- reloadable: false
+ description:
+ "Hides telemetry tables from `select * from tables()` output. As a result, telemetry tables will not be visible in the Web Console table view.",
+ reloadable: false,
},
"telemetry.queue.capacity": {
default: "512",
- description: "Capacity of the internal telemetry queue, which is the gateway of all telemetry events. This queue capacity does not require tweaking.",
- reloadable: false
- }
+ description:
+ "Capacity of the internal telemetry queue, which is the gateway of all telemetry events. This queue capacity does not require tweaking.",
+ reloadable: false,
+ },
}}
/>
@@ -375,16 +382,9 @@ information, and we do not share any of this data with third parties.
:::info
-Materialized View support is in **beta**.
-
-It may not be fit for production use.
+Materialized View support is in **beta**. It may not be fit for production use.
-To enable **beta** materialized views, set `cairo.mat.view.enabled=true` in `server.conf`, or export the equivalent
-environment variable: `QDB_CAIRO_MAT_VIEW_ENABLED=true`.
-
-Please let us know if you run into issues.
-
-Either:
+Please let us know if you run into issues. Either:
1. Email us at [support@questdb.io](mailto:support@questdb.io)
2. Join our [public Slack](https://slack.questdb.com/)
@@ -392,7 +392,6 @@ Either:
:::
-
The following settings are available in `server.conf`:
@@ -403,4 +402,6 @@ The following settings are available in `server.conf`:
-Further settings are available in `log.conf`. For more information, and details of our Prometheus metrics, please visit the [Logging & Metrics](/docs/operations/logging-metrics/) documentation.
+Further settings are available in `log.conf`. For more information, and details
+of our Prometheus metrics, please visit the
+[Logging & Metrics](/docs/operations/logging-metrics/) documentation.
diff --git a/documentation/deployment/docker.md b/documentation/deployment/docker.md
index 15e6879a3..0e6d02d38 100644
--- a/documentation/deployment/docker.md
+++ b/documentation/deployment/docker.md
@@ -40,7 +40,7 @@ This command starts a Docker container from `questdb/questdb` image. In
addition, it exposes some ports, allowing you to explore QuestDB.
In order to configure QuestDB, it is recommended to mount a
-[volume](#v-parameter-to-mount-storage) to allow data persistance. This can be
+[volume](#-v-parameter-to-mount-storage) to allow data persistence. This can be
done by adding a `-v` flag to the above command:
```
diff --git a/documentation/guides/enterprise-quick-start.md b/documentation/guides/enterprise-quick-start.md
index f6e165409..5ae22fe4c 100644
--- a/documentation/guides/enterprise-quick-start.md
+++ b/documentation/guides/enterprise-quick-start.md
@@ -42,7 +42,7 @@ The following are required prior to following this guide:
- QuestDB Enterprise binary with an active license
- No license? [Contact us](/enterprise/contact/) for more information.
-- Use of a [supported file system](/docs/deployment/capacity-planning/#supported-filesystems)
+- Use of a [supported file system](/docs/operations/capacity-planning/#supported-filesystems)
- A [Zettabyte File System (ZFS)](https://openzfs.org/wiki/Main_Page) is recommended to enable compression
## Installation guide
@@ -453,7 +453,7 @@ of Kubernetes is supported.
QuestDB works together with your server operating system to achieve maximum
performance. Prior to putting your server under heavy loads, consider checking
your
-[kernel-based limitations](/docs/deployment/capacity-planning/#os-configuration).
+[kernel-based limitations](/docs/operations/capacity-planning/#os-configuration).
Specifically, increase the limits for how many files can be opened by your OS
and its users, and the maximum amount of virtual memory allowed. This helps
diff --git a/documentation/guides/import-csv.md b/documentation/guides/import-csv.md
index fb9455197..d14e5d1b5 100644
--- a/documentation/guides/import-csv.md
+++ b/documentation/guides/import-csv.md
@@ -134,7 +134,7 @@ csvstack *.csv > singleFile.csv
### Create the target table schema
If you know the target table schema already, you can
-[skip this section](/docs/guides/import-csv/#import-csv-via-copy-sql/#import-csv).
+[skip this section](#import-csv).
QuestDB could analyze the input file and "guess" the schema. This logic is
activated when target table does not exist.
@@ -175,8 +175,9 @@ process running in the background:
| ---------------- |
| 5179978a6d7a1772 |
-3. In the [Web Console](/docs/web-console/) right click table and select `Copy Schema to Clipboard` -
- this copies the schema generated by the input file analysis.
+3. In the [Web Console](/docs/web-console/), right-click the table and select
+   `Copy Schema to Clipboard` - this copies the schema generated by the input
+   file analysis.
4. Paste the table schema to the code editor:
@@ -202,8 +203,7 @@ process running in the background:
5.1. The generated schema may not be completely correct. Check the log table
and log file to resolve common errors using the id (see also
- [Track import progress](/docs/guides/import-csv/#import-csv-via-copy-sql/#track-import-progress)
- and [FAQ](/docs/guides/import-csv/#import-csv-via-copy-sql/#faq)):
+ [Track import progress](#track-import-progress) and [FAQ](#faq)):
```questdb-sql
SELECT * FROM sys.text_import_log WHERE id = '5179978a6d7a1772' ORDER BY ts DESC;
@@ -675,7 +675,7 @@ curl \
```
More information on the patterns for timestamps can be found on the
-[date and time functions](/docs/reference/function/date-time/#date-and-timestamp-format)
+[date and time functions](/docs/reference/function/date-time/#timestamp-format)
page.
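+
+For example, a pattern such as `yyyy-MM-dd HH:mm:ss` can be tried out directly
+with the `to_timestamp()` function before running an import (a sketch; adjust
+the pattern and literal to match your data):
+
+```questdb-sql title="testing a timestamp pattern"
+SELECT to_timestamp('2024-03-01 15:43:21', 'yyyy-MM-dd HH:mm:ss');
+```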
:::note
diff --git a/documentation/guides/mat-views.md b/documentation/guides/mat-views.md
new file mode 100644
index 000000000..042a3586c
--- /dev/null
+++ b/documentation/guides/mat-views.md
@@ -0,0 +1,737 @@
+---
+title: Materialized views
+description:
+ Overview of QuestDB's materialized views. This feature helps you significantly
+ speed up your time-based aggregation queries.
+---
+
+:::info
+
+Materialized View support is in **beta**. It may not be fit for production use.
+
+Please let us know if you run into issues. Either:
+
+1. Email us at [support@questdb.io](mailto:support@questdb.io)
+2. Join our [public Slack](https://slack.questdb.com/)
+3. Post on our [Discourse community](https://community.questdb.com/)
+
+:::
+
+A materialized view is a special QuestDB table that stores the pre-computed
+results of a query. Unlike regular views, which compute their results at query
+time, materialized views persist their data to disk, making them particularly
+efficient for expensive aggregate queries that are run frequently.
+
+## Related documentation
+
+
+
+- **Concepts**
+
+ - [Introduction to materialized views](/docs/concept/mat-views/):
+ Understanding how to best design queries with materialized views
+
+- **SQL Commands**
+
+ - [`CREATE MATERIALIZED VIEW`](/docs/reference/sql/create-mat-view/): Create a
+ new materialized view
+ - [`DROP MATERIALIZED VIEW`](/docs/reference/sql/drop-mat-view/): Remove a
+ materialized view
+ - [`REFRESH MATERIALIZED VIEW`](/docs/reference/sql/refresh-mat-view/):
+ Manually refresh a materialized view
+ - [`ALTER MATERIALIZED VIEW RESUME WAL`](/docs/reference/sql/alter-mat-view-resume-wal/):
+ Resume WAL for a materialized view
+
+- **Configuration**
+ - [Materialized views settings](/docs/configuration/#materialized-views):
+ Server configuration options for materialized views from `server.conf`
+
+## What are materialized views for?
+
+As data grows in size, the performance of certain queries can degrade.
+Materialized views store the result of a `SAMPLE BY` or time-based `GROUP BY`
+query on disk, and keep it automatically up to date.
+
+The refresh of a materialized view is `INCREMENTAL` and very efficient, and
+using materialized views can offer 100x or higher query speedups. If you require
+the lowest latency queries, for example, for charts and dashboards, use
+materialized views!
+
+For a better understanding of what materialized views are for, read the
+[introduction to materialized views](/docs/concept/mat-views/) documentation.
+
+## Creating a materialized view
+
+There is a fundamental limit to how fast certain aggregation and scanning
+queries can execute, based on the data size, number of rows, disk speed, and
+number of cores.
+
+Materialized views let you bound the runtime of common aggregation queries by
+allowing you to pre-aggregate historical data ahead of time. This means that
+for many queries, you only need to aggregate the latest partition's data, and
+can use already-aggregated results for historical data.
+
+Throughout this document, we will use the [demo](https://demo.questdb.com/)
+`trades` table. This is a table containing crypto trading data, with over 1.6
+billion rows.
+
+```questdb-sql title="trades ddl"
+CREATE TABLE 'trades' (
+ symbol SYMBOL,
+ side SYMBOL,
+ price DOUBLE,
+ amount DOUBLE,
+ timestamp TIMESTAMP
+) TIMESTAMP(timestamp) PARTITION BY DAY;
+```
+
+A full syntax definition can be found in the
+[CREATE MATERIALIZED VIEW](/docs/reference/sql/create-mat-view) documentation.
+
+Here is a materialized view taken from our demo, which calculates OHLC bars for
+a candlestick chart. The view reads data from the base table, `trades`. It then
+calculates aggregate functions such as `first`, `sum`, etc. over 15-minute time
+buckets. The view is incrementally refreshed, meaning it is always up to date
+with the latest `trades` data.
+
+:::note
+
+If you are unfamiliar with the OHLC concept, please see our
+[OHLC guide](https://www.questdb.com/glossary/ohcl-candlestick).
+
+:::
+
+```questdb-sql title="trades_OHLC_15m ddl"
+CREATE MATERIALIZED VIEW 'trades_OHLC_15m'
+WITH BASE 'trades' REFRESH INCREMENTAL
+AS (
+ SELECT
+ timestamp, symbol,
+ first(price) AS open,
+ max(price) as high,
+ min(price) as low,
+ last(price) AS close,
+ sum(amount) AS volume
+ FROM trades
+ SAMPLE BY 15m
+) PARTITION BY MONTH;
+```
+
+In this example:
+
+1. The view is called `trades_OHLC_15m`.
+2. The base table is `trades`
+ - This is the data source, and will trigger incremental refresh when new data
+ is written.
+3. The refresh strategy is `INCREMENTAL`
+ - The data is automatically refreshed and incrementally written; efficient,
+ fast, low maintenance.
+4. The `SAMPLE BY` query contains two key columns (`timestamp`, `symbol`) and
+   five aggregates (`first`, `max`, `min`, `last`, `sum`) calculated in `15m`
+   time buckets.
+5. The view is partitioned by `MONTH`.
+6. No TTL is defined
+ - Therefore, the materialized view will contain a summary of _all_ the base
+ `trades` table's data.
+
+:::tip
+
+This particular example can also be written via the
+[compact syntax](#compact-syntax).
+
+:::
+
+#### The view name
+
+We recommend naming the view with some reference to the base table, its purpose,
+and its sample size.
+
+In our `trades_OHLC_15m` example, we combine:
+
+- `trades` (the base table name)
+- `OHLC` (the purpose)
+- `15m` (the sample unit)
+
+#### The base table
+
+The base table triggers the materialized view's refresh, and is the main source
+of raw data.
+
+The `SAMPLE BY` query can contain a `JOIN`. However, the secondary `JOIN` tables
+will not trigger any sort of refresh.
+
+#### Refresh strategies
+
+Currently, only `INCREMENTAL` refresh is supported. This strategy incrementally
+updates the view when new data is inserted into the base table. This means that
+only new data is written to the view, so there is minimal write overhead.
+
+Upon creation, or when the view is invalidated, a full refresh will occur, which
+rebuilds the view from scratch.
+
+#### SAMPLE BY
+
+Materialized views are populated using `SAMPLE BY` or time-based `GROUP BY`
+queries.
+
+When new data is written into the `base` table, an incremental refresh is
+triggered, which adds this new data to the view.
+
+Not all `SAMPLE BY` syntax is supported. In general, you should keep your view
+query as simple as possible, and move complex transformations into an outer
+query that runs on the down-sampled data.
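+
+For instance, rather than computing a derived value inside the view query, you
+can keep the view down to plain aggregates and derive it at query time. This
+sketch assumes the `trades_OHLC_15m` view from above; the `range_pct` column is
+purely illustrative:
+
+```questdb-sql title="complex transformation in an outer query"
+-- The view stores only simple aggregates; the derived intraday
+-- range percentage is computed at query time, on far fewer rows.
+SELECT
+  timestamp,
+  symbol,
+  close,
+  100.0 * (high - low) / low AS range_pct
+FROM trades_OHLC_15m
+WHERE timestamp IN today();
+```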
+
+#### PARTITION BY
+
+Optionally, you may specify a partitioning scheme.
+
+You should choose a partition unit which is larger than the sampling interval.
+Ideally, the partition unit should be divisible by the sampling interval.
+
+For example, a `SAMPLE BY 8h` clause fits nicely with a `DAY` partitioning
+strategy, with 3 timestamp buckets per day.
+
+#### Default partitioning
+
+If the `PARTITION BY` clause is omitted, the partitioning scheme is
+automatically inferred from the `SAMPLE BY` clause.
+
+| Interval | Default partitioning |
+|----------------|----------------------|
+| > 1 hour | `PARTITION BY YEAR` |
+| > 1 minute | `PARTITION BY MONTH` |
+| <= 1 minute | `PARTITION BY DAY` |
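+
+For example, a view sampled at `15m` falls into the `> 1 minute` bracket, so it
+defaults to `PARTITION BY MONTH`. A minimal sketch (the view name
+`trades_avg_15m` is hypothetical):
+
+```questdb-sql title="inferred partitioning"
+-- No PARTITION BY clause: with a 15m sampling interval,
+-- the view is partitioned by MONTH automatically.
+CREATE MATERIALIZED VIEW 'trades_avg_15m'
+WITH BASE 'trades' REFRESH INCREMENTAL
+AS (
+  SELECT timestamp, symbol, avg(price) AS avg_price
+  FROM trades
+  SAMPLE BY 15m
+);
+```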
+
+#### TTL
+
+Though `TTL` was not included in our example, it can be set on a materialized
+view, and does not need to match the base table's.
+
+For example, if we only wanted to pre-aggregate the last 30 days of data, we
+could add:
+
+```questdb-sql
+PARTITION BY DAY TTL 30 DAYS;
+```
+
+to the end of our materialized view definition.
+
+#### Compact syntax
+
+If you're happy with the defaults and don't need to customize materialized view
+parameters such as `PARTITION BY` or `TTL`, then you can use the compact syntax
+which omits the parentheses.
+
+```questdb-sql title="trades_OHLC_15m compact syntax"
+CREATE MATERIALIZED VIEW trades_OHLC_15m AS
+ SELECT
+ timestamp, symbol,
+ first(price) AS open,
+ max(price) as high,
+ min(price) as low,
+ last(price) AS close,
+ sum(amount) AS volume
+ FROM trades
+ SAMPLE BY 15m;
+```
+
+## Querying materialized views
+
+:::note
+
+The example `trades_OHLC_15m` view is available on our demo, and contains
+real-time crypto data - try it out!
+
+:::
+
+Materialized Views support **all the same queries** as regular QuestDB tables.
+
+Here's how you can check today's trading data:
+
+```questdb-sql title="querying trades_OHLC_15m" demo
+trades_OHLC_15m WHERE timestamp IN today();
+```
+
+| timestamp | symbol | open | high | low | close | volume |
+|-----------------------------|----------|---------|---------|---------|---------|--------------------|
+| 2025-03-31T00:00:00.000000Z | ETH-USD | 1807.94 | 1813.32 | 1804.69 | 1808.58 | 1784.144071999995 |
+| 2025-03-31T00:00:00.000000Z | BTC-USD | 82398.4 | 82456.5 | 82177.6 | 82284.5 | 34.47331241 |
+| 2025-03-31T00:00:00.000000Z | DOGE-USD | 0.16654 | 0.16748 | 0.16629 | 0.16677 | 3052051.6327359965 |
+| 2025-03-31T00:00:00.000000Z | AVAX-USD | 18.87 | 18.885 | 18.781 | 18.826 | 6092.852976000005 |
+| ... | ... | ... | ... | ... | ... | ... |
+
+### How much faster is it?
+
+Let's run the OHLC query without using the view, against our `trades` table:
+
+```questdb-sql title="the OHLC query" demo
+SELECT
+ timestamp, symbol,
+ first(price) AS open,
+ max(price) as high,
+ min(price) as low,
+ last(price) AS close,
+ sum(amount) AS volume
+FROM trades
+SAMPLE BY 15m;
+```
+
+This takes several seconds to execute.
+
+Yet if we query the materialized view instead:
+
+```questdb-sql title="OHLC materialized view unbounded" demo
+trades_OHLC_15m;
+```
+
+This returns in milliseconds, since the database only has to respond with data,
+and not calculate anything - that has all been done efficiently, ahead of time.
+
+### What about for fewer rows?
+
+Let's try this calculation again, but just for one day instead of the entire 1.6
+billion rows.
+
+```questdb-sql title="OHLC query for yesterday" demo
+SELECT
+ timestamp, symbol,
+ first(price) AS open,
+ max(price) as high,
+ min(price) as low,
+ last(price) AS close,
+ sum(amount) AS volume
+FROM trades
+WHERE timestamp IN yesterday()
+SAMPLE BY 15m
+ORDER BY timestamp, symbol;
+```
+
+| timestamp | symbol | open | high | low | close | volume |
+|-----------------------------|-----------|--------|--------|---------|--------|--------------------|
+| 2025-03-30T00:00:00.000000Z | ADA-USD | 0.6732 | 0.6744 | 0.671 | 0.6744 | 132304.36510000005 |
+| 2025-03-30T00:00:00.000000Z | ADA-USDC | 0.6727 | 0.673 | 0.671 | 0.6729 | 15614.750700000002 |
+| 2025-03-30T00:00:00.000000Z | ADA-USDT | 0.6732 | 0.6744 | 0.671 | 0.6744 | 132304.36510000005 |
+| 2025-03-30T00:00:00.000000Z | AVAX-USD | 19.602 | 19.632 | 19.518 | 19.631 | 3741.162465999998 |
+| 2025-03-30T00:00:00.000000Z | AVAX-USDT | 19.602 | 19.632 | 19.518 | 19.631 | 3741.162465999998 |
+| 2025-03-30T00:00:00.000000Z | BTC-USD | 82650 | 82750 | 82563.6 | 82747 | 25.493136499999 |
+| ... | ... | ... | ... | ... | ... | ... |
+
+Calculating the OHLC for a single day takes only `15ms`.
+
+We can get the same data using the materialized view:
+
+```questdb-sql title="OHLC materialized view for yesterday" demo
+trades_OHLC_15m
+WHERE timestamp IN yesterday()
+ORDER BY timestamp, symbol;
+```
+
+| timestamp | symbol | open | high | low | close | volume |
+|-----------------------------|-----------|--------|--------|---------|--------|--------------------|
+| 2025-03-30T00:00:00.000000Z | ADA-USD | 0.6732 | 0.6744 | 0.671 | 0.6744 | 132304.36510000005 |
+| 2025-03-30T00:00:00.000000Z | ADA-USDC | 0.6727 | 0.673 | 0.671 | 0.6729 | 15614.750700000002 |
+| 2025-03-30T00:00:00.000000Z | ADA-USDT | 0.6732 | 0.6744 | 0.671 | 0.6744 | 132304.36510000005 |
+| 2025-03-30T00:00:00.000000Z | AVAX-USD | 19.602 | 19.632 | 19.518 | 19.631 | 3741.162465999998 |
+| 2025-03-30T00:00:00.000000Z | AVAX-USDT | 19.602 | 19.632 | 19.518 | 19.631 | 3741.162465999998 |
+| 2025-03-30T00:00:00.000000Z | BTC-USD | 82650 | 82750 | 82563.6 | 82747 | 25.493136499999 |
+| ... | ... | ... | ... | ... | ... | ... |
+
+This returns the data in just `2ms` - over 7x faster - again, because it
+doesn't have to calculate anything. The data has already been efficiently
+pre-aggregated, cached by the materialized view, and persisted to disk. No
+aggregation is required, and hardly any rows are scanned!
+
+So even for **small amounts of data**, a materialized view can be extremely
+useful.
+
+## Limitations
+
+### Beta
+
+- Not all `SAMPLE BY` syntax is supported, for example, `FILL`.
+- The `INCREMENTAL` refresh strategy currently relies on deduplicated inserts
+  (O3 writes).
+  - In future, we will instead delete a time range and insert the data as an
+    append, which is **much** faster.
+  - This also means that, currently, deduplication keys must be aligned across
+    the `base` table and the view.
+
+### Post-release
+
+- Only `INCREMENTAL` refresh is supported
+  - We intend to add alternatives, such as:
+    - `PERIODIC` (once per partition)
+    - `TIMER` (once per time interval)
+    - `MANUAL` (only when manually triggered)
+- `INCREMENTAL` refresh is only triggered by inserts into the `base` table, not join tables.
+
+## LATEST ON materialized views
+
+`LATEST ON` queries can have variable performance, based on how frequently the
+symbols in the `PARTITION BY` column have new entries written to the table.
+Infrequently updated symbols require scanning more data to find their last
+entry.
+
+For example, suppose you have two symbols, `A` and `B`, across 100 million
+rows. There is a single row with `B`, at the start of the data set, and the
+rest are `A`.
+
+Unfortunately, the database will scan backwards through all 100 million rows of
+data, just to find the `B` entry.
+
+But materialized views offer a solution to this performance issue too!
+
+```questdb-sql title="LATEST ON on demo trades" demo
+trades LATEST ON timestamp PARTITION BY symbol;
+```
+
+| symbol | side | price | amount | timestamp |
+| --------- | ---- | ---------- | ------ | --------------------------- |
+| XLM-BTC | sell | 0.00000163 | 541 | 2024-08-21T16:56:15.038557Z |
+| AVAX-BTC | sell | 0.00039044 | 10.125 | 2024-08-21T18:00:24.549949Z |
+| MATIC-BTC | sell | 0.0000088 | 622.6 | 2024-08-21T18:01:21.607212Z |
+| ADA-BTC | buy | 0.00000621 | 127.32 | 2024-08-21T18:05:37.852092Z |
+| ... | ... | ... | ... | ... |
+
+This takes around `2s` to execute. Now, let's see how much data had to be
+scanned:
+
+```questdb-sql title="filtering for the time range" demo
+SELECT min(timestamp), max(timestamp)
+FROM trades
+LATEST ON timestamp
+PARTITION BY symbol;
+```
+
+| min | max |
+| --------------------------- | --------------------------- |
+| 2024-08-21T16:56:15.038557Z | 2025-03-31T12:55:28.193000Z |
+
+So the database scanned approximately 7 months of data to serve this query. How
+many rows was that?
+
+```questdb-sql title="number of rows the LATEST ON scanned" demo
+SELECT count()
+FROM trades
+WHERE timestamp BETWEEN '2024-08-21T16:56:15.038557Z' AND '2025-03-31T12:55:28.193000Z';
+```
+
+| count |
+| --------- |
+| 766834703 |
+
+Yes, **~767 million rows**, just to serve the most recent **42 rows**, one for
+each symbol.
+
+Let's fix this using a new materialized view.
+
+Observe that we have `42` unique symbols in the dataset.
+
+If we were to take a `LATEST ON` query for a single day, we would therefore
+expect up to `84` rows (`42` buys, `42` sells):
+
+```questdb-sql title="yesterday() LATEST ON" demo
+(trades WHERE timestamp IN yesterday())
+LATEST ON timestamp PARTITION BY symbol, side
+ORDER BY symbol, side, timestamp;
+```
+
+| symbol | side | price | amount | timestamp |
+| --------- | ---- | ------- | ---------- | --------------------------- |
+| ADA-USD | buy | 0.6611 | 686.3557 | 2025-03-30T23:59:59.052000Z |
+| ADA-USD | sell | 0.6609 | 270.8935 | 2025-03-30T23:59:46.585999Z |
+| ADA-USDC | buy | 0.6603 | 109.35 | 2025-03-30T23:57:56.194000Z |
+| ADA-USDC | sell | 0.6607 | 755.9739 | 2025-03-30T23:59:35.635000Z |
+| ADA-USDT | buy | 0.6611 | 686.3557 | 2025-03-30T23:59:59.052000Z |
+| ADA-USDT | sell | 0.6609 | 270.8935 | 2025-03-30T23:59:46.585999Z |
+| AVAX-USD | buy | 18.859 | 9.199842 | 2025-03-30T23:59:47.788000Z |
+| AVAX-USD | sell | 18.846 | 7.70086 | 2025-03-30T23:59:13.130000Z |
+| AVAX-USDT | buy | 18.859 | 9.199842 | 2025-03-30T23:59:47.788000Z |
+| AVAX-USDT | sell | 18.846 | 7.70086 | 2025-03-30T23:59:13.130000Z |
+| BTC-USD | buy | 82398.2 | 0.000025 | 2025-03-30T23:59:59.992000Z |
+| BTC-USD | sell | 82397.9 | 0.00001819 | 2025-03-30T23:59:59.796999Z |
+| ... | ... | ... | ... | ... |
+
+This executes in `40ms`.
+
+A similar `GROUP BY` query looks like this:
+
+```questdb-sql title="LATEST ON as a GROUP BY" demo
+SELECT
+ symbol,
+ side,
+ last(price) AS price,
+ last(amount) AS amount,
+ last(timestamp) AS timestamp
+FROM trades
+WHERE timestamp IN yesterday()
+ORDER BY symbol, side, timestamp;
+```
+
+which executes in `8ms`.
+
+Instead of using the `LATEST ON` syntax, we can use a `SAMPLE BY` equivalent,
+which massively reduces the number of rows we need to query.
+
+Then, we run this `SAMPLE BY` automatically using a materialized view, so we
+always have the fastest possible `LATEST ON` query.
+
+### Pre-aggregating the data
+
+We will pre-aggregate the ~767 million rows into just ~15,000.
+
+Instead of storing the raw data, we will store one row, per symbol, per side,
+per day of data.
+
+```questdb-sql title="down-sampling test query" demo
+SELECT symbol, side, price, amount, "latest" AS timestamp FROM (
+  SELECT
+    timestamp,
+    symbol,
+    side,
+    last(price) AS price,
+    last(amount) AS amount,
+    last(timestamp) AS latest
+  FROM trades
+  WHERE timestamp BETWEEN '2024-08-21T16:56:15.038557Z' AND '2025-03-31T12:55:28.193000Z'
+  SAMPLE BY 1d
+) ORDER BY timestamp;
+```
+
+This result set comprises just `14595` rows, instead of ~767 million. That's
+over 50,000x fewer rows for the database to scan when handling the query.
+
+Here it is as a materialized view:
+
+```questdb-sql title="LATEST ON materialized view"
+CREATE MATERIALIZED VIEW 'trades_latest_1d' WITH BASE 'trades' REFRESH INCREMENTAL AS (
+ SELECT
+ timestamp,
+ symbol,
+ side,
+ last(price) AS price,
+ last(amount) AS amount,
+ last(timestamp) as latest
+ FROM trades
+ SAMPLE BY 1d
+) PARTITION BY DAY;
+```
+
+You can try this view out on our demo:
+
+```questdb-sql title="trades_latest_1d" demo
+trades_latest_1d;
+```
+
+Then, you can query this 'per-day LATEST ON' view to quickly calculate the
+'true' `LATEST ON` result.
+
+```questdb-sql title="LATEST ON over the trades_latest_1d" demo
+SELECT symbol, side, price, amount, "latest" as timestamp FROM (
+ trades_latest_1d
+ LATEST ON timestamp
+ PARTITION BY symbol, side
+) ORDER BY timestamp;
+```
+
+And in just a few milliseconds, we get the result:
+
+| symbol | side | price | amount | timestamp |
+|----------|------|---------|-----------|-----------------------------|
+| ETH-BTC | sell | 0.02196 | 0.005998 | 2025-03-31T14:24:18.916000Z |
+| DAI-USDT | sell | 1.0006 | 53 | 2025-03-31T14:29:19.392999Z |
+| DAI-USD | sell | 1.0006 | 53 | 2025-03-31T14:29:19.392999Z |
+| DAI-USD | buy | 1.0007 | 29.785106 | 2025-03-31T14:30:33.394000Z |
+| DAI-USDT | buy | 1.0007 | 29.785106 | 2025-03-31T14:30:33.394000Z |
+| ... | ... | ... | ... | ... |
+
+Seconds down to milliseconds - **100x, even 1000x faster!**
+
+## Architecture and internals
+
+The rest of this document contains information about how materialized views work
+internally.
+
+### Storage model
+
+Materialized views in QuestDB are implemented as special tables that maintain
+their data independently of their base tables. They use the same underlying
+storage engine as regular tables, benefiting from QuestDB's columnar storage and
+partitioning capabilities.
+
+### Refresh mechanism
+
+:::note
+
+Currently, QuestDB only supports **incremental refresh** for materialized views.
+
+Future releases will include additional refresh types, such as time-interval and
+manual refreshes.
+
+:::
+
+Unlike regular views, which recompute their results at query time, materialized
+views in QuestDB are incrementally refreshed as new data is added to the base
+table. This approach ensures that only the **relevant time slices** of the view
+are updated, avoiding the need to recompute the entire dataset. The refresh
+process works as follows:
+
+1. New data is inserted into the base table.
+2. The time-range of this data is identified.
+3. This data is extracted and used to recompute the materialized view.
+
+This refresh happens asynchronously, minimizing any impact on write performance.
+The refresh state of the materialized view is tracked using transaction
+numbers, which can be compared with the base table's to monitor the 'refresh
+lag'.
+
+For example, if a base table receives new rows for `2025-02-18`, only that day's
+relevant time slices are recomputed, rather than reprocessing all historical
+data.
+
+You can monitor refresh status using the `materialized_views()` system function:
+
+```questdb-sql title="Listing all materialized views"
+SELECT
+ view_name,
+ last_refresh_timestamp,
+ view_status,
+ base_table_txn,
+ applied_base_table_txn
+FROM materialized_views();
+```
+
+Here is an example output:
+
+| view_name | last_refresh_timestamp | view_status | base_table_txn | applied_base_table_txn |
+|-------------|------------------------|-------------|----------------|------------------------|
+| trades_view | null | valid | 102 | 102 |
+
+When `base_table_txn` matches `applied_base_table_txn`, the materialized view is
+fully up-to-date.
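+
+To monitor the lag as a single number, you can compare the two transaction
+columns directly (a simple sketch using the same system function):
+
+```questdb-sql title="computing refresh lag"
+-- A non-zero lag means the base table has committed transactions
+-- that the view has not yet applied.
+SELECT
+  view_name,
+  base_table_txn - applied_base_table_txn AS refresh_lag
+FROM materialized_views();
+```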
+
+#### Refreshing an invalid view
+
+If a materialized view becomes invalid, you can check its status:
+
+```questdb-sql title="Checking view status"
+SELECT
+ view_name,
+ base_table_name,
+ view_status,
+ invalidation_reason
+FROM materialized_views();
+```
+
+| view_name | base_table_name | view_status | invalidation_reason |
+|---------------|-----------------|-------------|----------------------------------------------|
+| trades_view | trades | valid | null |
+| exchange_view | exchange | invalid | [-105] table does not exist [table=exchange] |
+
+To restore an invalid view, and refresh its data from scratch, use:
+
+```questdb-sql title="Restoring an invalid view"
+REFRESH MATERIALIZED VIEW view_name FULL;
+```
+
+This command deletes existing data in the materialized view, and re-runs its
+query.
+
+Once the view is repopulated, the view is marked as 'valid' so that it can be
+incrementally refreshed.
+
+For large base tables, a full refresh may take a significant amount of time. You
+can cancel the refresh using the
+[`CANCEL QUERY`](/docs/reference/sql/cancel-query/) SQL.
+
+For the conditions which can invalidate a materialized view, see the
+[technical requirements](#technical-requirements) section.
+
+### Base table relationship
+
+Every materialized view is tied to a base table that serves as its primary data
+source.
+
+- For single-table queries, the base table is automatically determined.
+- For multi-table queries, one table must be explicitly defined as the base
+ table using `WITH BASE`.
+
+The view is automatically refreshed when the base table changes. Therefore,
+make sure that the table you wish to drive the view is set as the base table.
+If you choose the wrong base table, the view may not refresh at the times you
+expect.
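+
+As a sketch, assuming a hypothetical `exchange_rates` table joined onto
+`trades`, you would declare `trades` as the base table so that new trades
+trigger the refresh (subject to the join constraints listed under
+[technical requirements](#technical-requirements)):
+
+```questdb-sql title="explicit base table for a multi-table query"
+-- 'exchange_rates' is hypothetical; inserts into it do NOT
+-- trigger a refresh - only inserts into 'trades' do.
+CREATE MATERIALIZED VIEW 'trades_usd_1h'
+WITH BASE 'trades' REFRESH INCREMENTAL
+AS (
+  SELECT t.timestamp, t.symbol, avg(t.price * r.usd_rate) AS avg_usd_price
+  FROM trades t
+  ASOF JOIN exchange_rates r
+  SAMPLE BY 1h
+) PARTITION BY DAY;
+```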
+
+## Technical requirements
+
+### Query constraints
+
+To create a materialized view, your query:
+
+- Must use either `SAMPLE BY` or `GROUP BY` with a designated timestamp column
+ key.
+- Must not contain `FROM-TO`, `FILL`, or `ALIGN TO FIRST OBSERVATION` clauses
+  in `SAMPLE BY` queries.
+- Must use join conditions that are compatible with incremental refreshing.
+- When the base table has [deduplication](/docs/concept/deduplication/) enabled,
+ the non-aggregate columns selected by the materialized view query must be a
+ subset of the `DEDUP` keys from the base table.
+
+We intend to loosen some of these restrictions in future.
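+
+As a sketch of the deduplication rule, suppose a hypothetical base table
+`trades_dedup` deduplicates on `(timestamp, symbol)`. A view keyed on those
+columns is allowed; one that also keyed on `side` would not be:
+
+```questdb-sql title="dedup keys and view key columns"
+-- Hypothetical base table with deduplication enabled.
+CREATE TABLE trades_dedup (
+  symbol SYMBOL,
+  side SYMBOL,
+  price DOUBLE,
+  amount DOUBLE,
+  timestamp TIMESTAMP
+) TIMESTAMP(timestamp) PARTITION BY DAY WAL
+DEDUP UPSERT KEYS(timestamp, symbol);
+
+-- OK: the view's non-aggregate columns (timestamp, symbol)
+-- are a subset of the DEDUP keys.
+CREATE MATERIALIZED VIEW 'trades_dedup_1h' AS (
+  SELECT timestamp, symbol, avg(price) AS avg_price
+  FROM trades_dedup
+  SAMPLE BY 1h
+) PARTITION BY DAY;
+```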
+
+### View invalidation
+
+The view's structure is tightly coupled with its base table.
+
+The main cause of invalidation for a materialized view is a change to the base
+table's schema or underlying data.
+
+These changes include dropping columns, dropping partitions, and renaming the
+table.
+
+Data deletion or modification, for example using `TRUNCATE` or `UPDATE`, may
+also cause invalidation.
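+
+For example, renaming the base table is one such schema change (a sketch; do
+not run this against a table with views you want to keep valid):
+
+```questdb-sql title="a schema change that invalidates dependent views"
+-- Renaming the base table marks dependent views as 'invalid';
+-- they then need a REFRESH ... FULL to recover.
+RENAME TABLE trades TO trades_renamed;
+```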
+
+## Replicated views (Enterprise only)
+
+Replication of the base table is independent of materialized view maintenance.
+
+If you promote a replica to a new primary instance, this may trigger a full
+materialized view refresh if the replica's materialized view was not already
+fully up to date.
+
+## Resource management
+
+Materialized views are compatible with the usual resource management features:
+
+- View TTL settings are separate from the base table.
+- TTL deletions in the base table will not be propagated to the view.
+- Partitions are managed separately between the base table and the view.
+- Refresh intervals can be configured independently.
+
+### Materialized view with TTL
+
+Materialized views take extra storage and resources to maintain. If your
+`SAMPLE BY` unit is small (seconds, milliseconds), this could be a significant
+amount of data.
+
+Therefore, you can decide on a retention policy for the data, and set it using
+`TTL`:
+
+```questdb-sql title="Create a materialized view with a TTL policy"
+CREATE MATERIALIZED VIEW trades_hourly_prices AS (
+ SELECT
+ timestamp,
+ symbol,
+ avg(price) AS avg_price
+ FROM trades
+ SAMPLE BY 1h
+) PARTITION BY WEEK TTL 8 WEEKS;
+```
+
+In this example, the view stores hourly summaries of the pricing data, in weekly
+partitions, keeping the prior 8 partitions.
diff --git a/documentation/guides/working-with-timestamps-timezones.md b/documentation/guides/working-with-timestamps-timezones.md
index 50152ce9e..23d3f79b3 100644
--- a/documentation/guides/working-with-timestamps-timezones.md
+++ b/documentation/guides/working-with-timestamps-timezones.md
@@ -46,7 +46,7 @@ my_table;
| 2021-06-08T16:45:45.123456Z | 13 |
When inserting timestamps into a table, it is also possible to use
-[timestamp units](/docs/reference/function/date-time/#date-and-timestamp-format)
+[timestamp units](/docs/reference/function/date-time/#timestamp-format)
to define the timestamp format, in order to process trailing zeros in exported
data sources such as PostgreSQL:
diff --git a/documentation/operations/backup.md b/documentation/operations/backup.md
index 71bf8bb93..5f370b914 100644
--- a/documentation/operations/backup.md
+++ b/documentation/operations/backup.md
@@ -255,7 +255,7 @@ QuestDB supports the following filesystems:
Other file systems are untested and while they may work, we do not officially
support them. See the
-[filesystem compatibility](/docs/deployment/capacity-planning/#supported-filesystems)
+[filesystem compatibility](/docs/operations/capacity-planning/#supported-filesystems)
section for more information.
## Further reading
diff --git a/documentation/operations/capacity-planning.md b/documentation/operations/capacity-planning.md
index 5c273a99b..7b733263f 100644
--- a/documentation/operations/capacity-planning.md
+++ b/documentation/operations/capacity-planning.md
@@ -166,11 +166,11 @@ for other database processes to use.
### CPU cores
By default, QuestDB tries to use all available CPU cores.
-[The guide on shared worker configuration](#shared-workers) explains how to
-change the default settings. Assuming that the disk is not bottlenecked on IOPS,
-the throughput of read-only queries scales proportionally with the number of
-available cores. As a result, a machine with more cores will provide better
-query performance.
+[The guide on shared worker configuration](/docs/configuration/#shared-worker)
+explains how to change the default settings. Assuming that the disk is not
+bottlenecked on IOPS, the throughput of read-only queries scales proportionally
+with the number of available cores. As a result, a machine with more cores will
+provide better query performance.
### Writer page size
diff --git a/documentation/operations/command-line-options.md b/documentation/operations/command-line-options.md
index 0dc5eab07..f348dd0c9 100644
--- a/documentation/operations/command-line-options.md
+++ b/documentation/operations/command-line-options.md
@@ -59,7 +59,7 @@ questdb.exe [start|stop|status|install|remove] \
| Option | Description |
| ------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `-d` | Expects a `dir` directory value which is a folder that will be used as QuestDB's root directory. For more information and the default values, see the [default root](#default-root-directory) section below. |
+| `-d` | Expects a `dir` directory value which is a folder that will be used as QuestDB's root directory. For more information and the default values, see the [default root](#default-root-directory-1) section below. |
| `-t` | Expects a `tag` string value which will be as a tag for the service. This option allows users to run several QuestDB services and manage them separately. If this option is omitted, the default tag will be `questdb`. |
| `-f` | Force re-deploying the [Web Console](/docs/web-console/). Without this option, the [Web Console](/docs/web-console/) is cached and deployed only when missing. |
| `-n` | Do not respond to the HUP signal. This keeps QuestDB alive after you close the terminal window where you started it. |
diff --git a/documentation/operations/design-for-performance.md b/documentation/operations/design-for-performance.md
index 1f945f026..092baa9e1 100644
--- a/documentation/operations/design-for-performance.md
+++ b/documentation/operations/design-for-performance.md
@@ -12,7 +12,7 @@ To monitor various metrics of the QuestDB instances, refer to the
[Prometheus monitoring](/docs/third-party-tools/prometheus/) page or the
[Logging & Monitoring](/docs/operations/logging-metrics/) page.
-Refer to [Capacity planning](/docs/deployment/capacity-planning/) for deployment
+Refer to [Capacity planning](/docs/operations/capacity-planning/) for deployment
considerations.
## Optimizing queries
@@ -101,7 +101,7 @@ This example adds a `symbol` type with:
- **index** for the symbol column with a storage block value
A full description of the options used above for `symbol` types can be found in
-the [CREATE TABLE](/docs/reference/sql/create-table/#symbol) page.
+the [CREATE TABLE](/docs/reference/sql/create-table/#symbols) page.
#### Symbol caching
diff --git a/documentation/operations/logging-metrics.md b/documentation/operations/logging-metrics.md
index 04687f755..07ba5f567 100644
--- a/documentation/operations/logging-metrics.md
+++ b/documentation/operations/logging-metrics.md
@@ -153,7 +153,7 @@ Which one you need depends on your use case.
| ----------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| file | Select from one of the two above patterns. Write to a single log that will grow indefinitely, or write a rolling log. Rolling logs can be split into `minute`, `hour`, `day`, `month` or `year`. |
| stdout | Writes logs to standard output. |
-| http.min | Enabled at port `9003` by default. For more information, see the next section: [minimal HTTP server](#-minimal-http-server). |
+| http.min | Enabled at port `9003` by default. For more information, see the next section: [minimal HTTP server](#minimal-http-server). |
### Minimal HTTP server
@@ -205,7 +205,7 @@ The following configuration options can be set in your `server.conf`:
:::warning
On systems with
-[8 Cores and less](/docs/deployment/capacity-planning/#cpu-cores), contention
+[8 Cores and less](/docs/operations/capacity-planning/#cpu-cores), contention
for threads might increase the latency of health check service responses. If you use
a load balancer thinks the QuestDB service is dead with nothing apparent in the
QuestDB logs, you may need to configure a dedicated thread pool for the health
diff --git a/documentation/operations/rbac.md b/documentation/operations/rbac.md
index e2e3a855c..d41ad904e 100644
--- a/documentation/operations/rbac.md
+++ b/documentation/operations/rbac.md
@@ -17,7 +17,7 @@ It covers:
- A [conceptual overview](/docs/operations/rbac/#rbac-conceptual-review)
- [Permission reference](/docs/operations/rbac/#permissions) with
- [examples](/docs/operations/rbac/#database-vs-table-vs-column-permissions)
+ [examples](/docs/operations/rbac/#permission-levels)
- Full list of related
[SQL statements](/docs/operations/rbac/#full-sql-grammar-list)
- Special cases such as within the
diff --git a/documentation/operations/tls.md b/documentation/operations/tls.md
index 461129d19..dd9881540 100644
--- a/documentation/operations/tls.md
+++ b/documentation/operations/tls.md
@@ -56,7 +56,7 @@ The private key file must contain the key in one of the following formats:
- A SEC1-encoded plaintext private key; as specified in RFC5915
If you need to create a quick `.pem` file for testing, see the
-[below steps](/docs/operations/tls/#generating-a-test-pem-certificate).
+[below steps](/docs/operations/tls/#generating-a-test-pem-certificate-manually).
### Enabling TLS for InfluxDB Line Protocol
diff --git a/documentation/quick-start-utils/_options-not-windows.partial.mdx b/documentation/quick-start-utils/_options-not-windows.partial.mdx
index 486d2f20d..64b02e3c0 100644
--- a/documentation/quick-start-utils/_options-not-windows.partial.mdx
+++ b/documentation/quick-start-utils/_options-not-windows.partial.mdx
@@ -1,6 +1,6 @@
-| Option | Description |
-| ------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `-d` | Expects a `dir` directory value which is a folder that will be used as QuestDB's root directory. For more information and the default values, see the [default root](#default-root-directory) section below. |
-| `-t` | Expects a `tag` string value which will be as a tag for the service. This option allows users to run several QuestDB services and manage them separately. If this option is omitted, the default tag will be `questdb`. |
-| `-f` | Force re-deploying the [Web Console](/docs/web-console/). Without this option, the [Web Console](/docs/web-console/) is cached and deployed only when missing. |
-| `-n` | Do not respond to the HUP signal. This keeps QuestDB alive after you close the terminal window where you started it. |
+| Option | Description |
+| ------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `-d` | Expects a `dir` directory value which is a folder that will be used as QuestDB's root directory. For more information and the default values, see the [default root](/docs/operations/command-line-options/#default-root-directory-1) section below. |
+| `-t`   | Expects a `tag` string value which will be used as a tag for the service. This option allows users to run several QuestDB services and manage them separately. If this option is omitted, the default tag will be `questdb`.                                               |
+| `-f` | Force re-deploying the [Web Console](/docs/web-console/). Without this option, the [Web Console](/docs/web-console/) is cached and deployed only when missing. |
+| `-n` | Do not respond to the HUP signal. This keeps QuestDB alive after you close the terminal window where you started it. |
diff --git a/documentation/quick-start-utils/_options-windows.partial.mdx b/documentation/quick-start-utils/_options-windows.partial.mdx
index 06eb43765..daf17cdb3 100644
--- a/documentation/quick-start-utils/_options-windows.partial.mdx
+++ b/documentation/quick-start-utils/_options-windows.partial.mdx
@@ -1,8 +1,8 @@
-| Option | Description |
-| --------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `install` | Installs the Windows QuestDB service. The service will start automatically at startup. |
-| `remove` | Removes the Windows QuestDB service. It will no longer start at startup. |
-| `-d` | Expects a `dir` directory value which is a folder that will be used as QuestDB's root directory. For more information and the default values, see the [default root](#default-root-directory) section below. |
-| `-t` | Expects a `tag` string value which will be as a tag for the service. This option allows users to run several QuestDB services and manage them separately. If this option is omitted, the default tag will be `questdb`. |
-| `-f` | Force re-deploying the [Web Console](/docs/web-console/). Without this option, the [Web Console](/docs/web-console/) is cached and deployed only when missing. |
-| `-j` | **Windows only!** This option allows to specify a path to `JAVA_HOME`. |
+| Option | Description |
+| --------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `install` | Installs the Windows QuestDB service. The service will start automatically at startup. |
+| `remove` | Removes the Windows QuestDB service. It will no longer start at startup. |
+| `-d` | Expects a `dir` directory value which is a folder that will be used as QuestDB's root directory. For more information and the default values, see the [default root](/docs/operations/command-line-options/#default-root-directory-1) section below. |
+| `-t`      | Expects a `tag` string value which will be used as a tag for the service. This option allows users to run several QuestDB services and manage them separately. If this option is omitted, the default tag will be `questdb`.                                            |
+| `-f` | Force re-deploying the [Web Console](/docs/web-console/). Without this option, the [Web Console](/docs/web-console/) is cached and deployed only when missing. |
+| `-j`      | **Windows only!** This option allows specifying a path to `JAVA_HOME`.                                                                                                                                                                                                |
diff --git a/documentation/quick-start.mdx b/documentation/quick-start.mdx
index 0b89167e8..6fd2596f4 100644
--- a/documentation/quick-start.mdx
+++ b/documentation/quick-start.mdx
@@ -249,7 +249,7 @@ There are several quick options:
For operators or developers looking for next steps to run an efficient instance,
see:
-- **[Capacity planning](/docs/deployment/capacity-planning/) for recommended
+- **[Capacity planning](/docs/operations/capacity-planning/) for recommended
configurations for operating QuestDB in production**
- [Configuration](/docs/configuration/) to see all of the available options in
your `server.conf` file
diff --git a/documentation/reference/api/ilp/advanced-settings.md b/documentation/reference/api/ilp/advanced-settings.md
index bcecd1840..45c5b4c27 100644
--- a/documentation/reference/api/ilp/advanced-settings.md
+++ b/documentation/reference/api/ilp/advanced-settings.md
@@ -386,5 +386,5 @@ things:
```
Refer to
- [InfluxDB Line Protocol's configuration](/docs/configuration/#influxdb-line-protocol)
+ [InfluxDB Line Protocol's configuration](/docs/configuration/#influxdb-line-protocol-ilp)
documentation for more on these configuration settings.
diff --git a/documentation/reference/api/ilp/overview.md b/documentation/reference/api/ilp/overview.md
index 05d491be6..97054d123 100644
--- a/documentation/reference/api/ilp/overview.md
+++ b/documentation/reference/api/ilp/overview.md
@@ -70,7 +70,7 @@ Configure the thread pools, buffer and queue sizes, receiver IP address and
port, load balancing, and more.
For more guidance in how to tune QuestDB, see
-[capacity planning](/docs/deployment/capacity-planning/).
+[capacity planning](/docs/operations/capacity-planning/).
## Transport selection
diff --git a/documentation/reference/api/rest.md b/documentation/reference/api/rest.md
index 66b74f95a..342e0dae9 100644
--- a/documentation/reference/api/rest.md
+++ b/documentation/reference/api/rest.md
@@ -112,8 +112,8 @@ The `/exec` entrypoint takes a SQL query and returns results as JSON.
We can use this for quick SQL inserts too, but note that there's no support for
parameterized queries that are necessary to avoid SQL injection issues. Prefer
-[InfluxDB Line Protocol](#influxdb-line-protocol) if you need high-performance
-inserts.
+[InfluxDB Line Protocol](/docs/configuration/#influxdb-line-protocol-ilp) if
+you need high-performance inserts.
` or `<`](#description).
-
#### Arguments
- `value1` is any data type.
diff --git a/documentation/reference/operators/ipv4.md b/documentation/reference/operators/ipv4.md
index 60f29fda2..7a87525de 100644
--- a/documentation/reference/operators/ipv4.md
+++ b/documentation/reference/operators/ipv4.md
@@ -16,11 +16,11 @@ The following operators support `string` type arguments to permit the passing of
netmasks:
- `<<`
- [Strict IP address contained by](/docs/reference/operators/ipv4/#-strict-ip-address-contained-by)
+ [Strict IP address contained by](/docs/reference/operators/ipv4/#-left-strict-ip-address-contained-by)
- `<<=`
- [IP address contained by or equal](/docs/reference/operators/ipv4/#--ip-address-contained-by-or-equal)
-- [rnd_ipv4(string, int)](/docs/reference/operators/ipv4/#random-address-range-generator---rnd_ipv4string-int)
-- [netmask()](/docs/reference/operators/ipv4/#return-netmask---netmask)
+ [IP address contained by or equal](/docs/reference/operators/ipv4/#-left-ip-address-contained-by-or-equal)
+- [rnd_ipv4(string, int)](/docs/reference/function/random-value-generator/#rnd_ipv4string-int)
+- [netmask()](/docs/reference/operators/ipv4/#return-netmask---netmaskstring)
## `<` Lesser than
diff --git a/documentation/reference/operators/precedence.md b/documentation/reference/operators/precedence.md
index cac3a1847..205e4493d 100644
--- a/documentation/reference/operators/precedence.md
+++ b/documentation/reference/operators/precedence.md
@@ -55,8 +55,8 @@ See the next section for the current precedence.
| [`&`](bitwise.md#-and) | bitwise and | 8 | bitwise AND of two numbers |
| [`^`](bitwise.md#-xor) | bitwise xor | 9 | bitwise XOR of two numbers |
| [`\|`](bitwise.md#-or) | bitwise or | 10 | bitwise OR of two numbers |
-| [`IN`](date-time.md#in) | in | 11 | check if value in list or range |
-| [`BETWEEN`](date-time.md#between) | between | 11 | check if timestamp in range |
+| [`IN`](date-time.md#in-timerange) | in | 11 | check if value in list or range |
+| [`BETWEEN`](date-time.md#between-value1-and-value2) | between | 11 | check if timestamp in range |
| [`WITHIN`](spatial.md#within) | within geohash | 11 | prefix matches geohash |
| [`<`](comparison.md#-lesser-than) | lesser than | 12 | lt comparison |
| [`<=`](comparison.md#-lesser-than-or-equal-to) | lesser than or equal to | 12 | leq comparison |
@@ -69,6 +69,6 @@ See the next section for the current precedence.
| [`!~`](text.md#-regex-doesnt-match) | regex does not match | 13 | regex pattern does not match |
| [`LIKE`](text.md#like) | match string | 13 | pattern matching |
| [`ILIKE`](text.md#ilike) | match string without case | 13 | case insensitive pattern matching |
-| [`NOT`](logical.md#not) | logical not | 14 | logical NOT of two numbers |
-| [`AND`](logical.md#and) | logical and | 15 | logical AND of two numbers |
-| [`OR`](logical.md#or) | logical or | 16 | logical OR of two numbers |
+| [`NOT`](logical.md#not-logical-not) | logical not | 14 | logical NOT of two numbers |
+| [`AND`](logical.md#and-logical-and) | logical and | 15 | logical AND of two numbers |
+| [`OR`](logical.md#or-logical-or) | logical or | 16 | logical OR of two numbers |
diff --git a/documentation/reference/sql/alter-mat-view-resume-wal.md b/documentation/reference/sql/alter-mat-view-resume-wal.md
index 69090c30a..0a3f0c0e4 100644
--- a/documentation/reference/sql/alter-mat-view-resume-wal.md
+++ b/documentation/reference/sql/alter-mat-view-resume-wal.md
@@ -8,16 +8,9 @@ description:
:::info
-Materialized View support is in **beta**.
+Materialized View support is in **beta**. It may not be fit for production use.
-It may not be fit for production use.
-
-To enable **beta** materialized views, set `cairo.mat.view.enabled=true` in `server.conf`, or export the equivalent
-environment variable: `QDB_CAIRO_MAT_VIEW_ENABLED=true`.
-
-Please let us know if you run into issues.
-
-Either:
+Please let us know if you run into issues. Either:
1. Email us at [support@questdb.io](mailto:support@questdb.io)
2. Join our [public Slack](https://slack.questdb.com/)
@@ -25,7 +18,6 @@ Either:
:::
-
`ALTER MATERIALIZED VIEW RESUME WAL` restarts
[WAL table](/docs/concept/write-ahead-log/) transactions after resolving errors.
@@ -33,9 +25,6 @@ Accepts the same optional `sequencerTxn` input as the
[`ALTER TABLE RESUME WAL`](/docs/reference/sql/alter-table-resume-wal/)
operation. Refer to that page for more details.
-
-
-
## Syntax

diff --git a/documentation/reference/sql/alter-table-add-column.md b/documentation/reference/sql/alter-table-add-column.md
index 75861a8ca..098446029 100644
--- a/documentation/reference/sql/alter-table-add-column.md
+++ b/documentation/reference/sql/alter-table-add-column.md
@@ -53,7 +53,7 @@ ALTER TABLE ratings ADD COLUMN IF NOT EXISTS score DOUBLE;
When adding a column of `Symbol` type, optional keywords may be passed which are
unique to this type. These keywords are described in the
-[Symbol type](/docs/reference/sql/create-table/#symbol) section of the
+[Symbol type](/docs/reference/sql/create-table/#symbols) section of the
`CREATE TABLE` documentation.
The following example shows how to add a new `SYMBOL` column with `NOCACHE` and
diff --git a/documentation/reference/sql/alter-table-resume-wal.md b/documentation/reference/sql/alter-table-resume-wal.md
index 6d698bcfe..5659bb890 100644
--- a/documentation/reference/sql/alter-table-resume-wal.md
+++ b/documentation/reference/sql/alter-table-resume-wal.md
@@ -72,7 +72,7 @@ If you have [data deduplication](/concept/deduplication/) enabled on your tables
:::
-Sometimes a table may get suspended due to full disk or [kernel limits](/docs/deployment/capacity-planning/#os-configuration). In this case, an entire WAL segment may be corrupted. This means that there will be multiple transactions that rely on the corrupted segment, and finding the transaction number to resume from may be difficult.
+Sometimes a table may get suspended due to full disk or [kernel limits](/docs/operations/capacity-planning/#os-configuration). In this case, an entire WAL segment may be corrupted. This means that there will be multiple transactions that rely on the corrupted segment, and finding the transaction number to resume from may be difficult.
When you run RESUME WAL on such suspended table, you may see an error like this:
diff --git a/documentation/reference/sql/copy.md b/documentation/reference/sql/copy.md
index a9cac5672..c7f359ea0 100644
--- a/documentation/reference/sql/copy.md
+++ b/documentation/reference/sql/copy.md
@@ -70,7 +70,7 @@ operation. There are two root directories to be defined:
`root_directory/tmp` directory.
Use the [configuration keys](/docs/configuration/) to edit these properties in
-[`COPY` configuration settings](/docs/configuration/#bulk-csv-import):
+[`COPY` configuration settings](/docs/configuration/#csv-import):
```shell title="Example"
cairo.sql.copy.root=/Users/UserName/Desktop
@@ -139,7 +139,7 @@ progress.
imported.
- `FORMAT`: Timestamp column format when the format is not the default
(`yyyy-MM-ddTHH:mm:ss.SSSUUUZ`) or cannot be detected. See
- [Date and Timestamp format](/docs/reference/function/date-time/#date-and-timestamp-format)
+ [Date and Timestamp format](/docs/reference/function/date-time/#timestamp-format)
for more information.
- `DELIMITER`: Default setting is `,`.
- `PARTITION BY`: Partition unit.
@@ -153,7 +153,7 @@ progress.
## Examples
For more details on parallel import, please also see
-[Importing data in bulk via CSV](/docs/guides/import-csv/#import-csv-via-copy-sql/).
+[Importing data in bulk via CSV](/docs/guides/import-csv/#import-csv-via-copy-sql).
```questdb-sql title="COPY"
COPY weather FROM 'weather.csv' WITH HEADER true FORMAT 'yyyy-MM-ddTHH:mm:ss.SSSUUUZ' ON ERROR SKIP_ROW;
diff --git a/documentation/reference/sql/create-mat-view.md b/documentation/reference/sql/create-mat-view.md
index 215688d1c..91fa64cc6 100644
--- a/documentation/reference/sql/create-mat-view.md
+++ b/documentation/reference/sql/create-mat-view.md
@@ -7,17 +7,9 @@ description:
:::info
-Materialized View support is in **beta**.
+Materialized View support is in **beta**. It may not be fit for production use.
-It may not be fit for production use.
-
-To enable **beta** materialized views, set `cairo.mat.view.enabled=true` in
-`server.conf`, or export the equivalent environment variable:
-`QDB_CAIRO_MAT_VIEW_ENABLED=true`.
-
-Please let us know if you run into issues.
-
-Either:
+Please let us know if you run into issues. Either:
1. Email us at [support@questdb.io](mailto:support@questdb.io)
2. Join our [public Slack](https://slack.questdb.com/)
@@ -31,7 +23,8 @@ materialized view.
A materialized view holds the result set of the given query, and is
automatically refreshed and persisted. For more information on the concept, see
-the [reference](/docs/concept/mat-views/) on materialized views.
+the [introduction](/docs/concept/mat-views/) and [guide](/docs/guides/mat-views/)
+on materialized views.
## Syntax
@@ -39,6 +32,12 @@ To create a materialized view, manually enter the parameters and settings:

+:::tip
+
+For simple materialized views, you can alternatively use the [compact syntax](#compact-syntax).
+
+:::
+
## Metadata
To check materialized view metadata, use the `materialized_views()` function,
@@ -115,7 +114,7 @@ AS (
## Partitioning
-`PARTITION BY` allows for specifying the
+`PARTITION BY` optionally allows specifying the
[partitioning strategy](/docs/concept/partitions/) for the materialized view.
Materialized views can be partitioned by one of the following:
@@ -129,6 +128,9 @@ Materialized views can be partitioned by one of the following:
The partitioning strategy **cannot be changed** after the materialized view has
been created.
+If unspecified, the `CREATE MATERIALIZED VIEW` statement will infer the
+[default partitioning strategy](/docs/guides/mat-views/#default-partitioning).
+
## Time To Live (TTL)
A retention policy can be set on the materialized view, bounding how much data
@@ -215,9 +217,28 @@ CREATE MATERIALIZED VIEW trades_hourly_prices AS (
OWNED BY analysts;
```
+## Compact syntax
+
+The `CREATE MATERIALIZED VIEW` statement also supports a compact syntax
+which can be used when the default parameters are sufficient.
+
+
+
+```questdb-sql
+CREATE MATERIALIZED VIEW trades_hourly_prices AS
+SELECT
+ timestamp,
+ symbol,
+ avg(price) AS avg_price
+FROM trades
+SAMPLE BY 1h;
+```
+
+For more on the semantics of the compact syntax, see the [materialized view guide](/docs/guides/mat-views/#compact-syntax).
+
## Query constraints
There is a list of requirements for the queries that are used in materialized
views. Refer to this
-[documentation section](/docs/concept/mat-views/#technical-requirements) to
-learn them.
+[documentation section](/docs/guides/mat-views/#technical-requirements) to learn
+about them.
diff --git a/documentation/reference/sql/drop-mat-view.md b/documentation/reference/sql/drop-mat-view.md
index bdd149d16..8fa4e7f59 100644
--- a/documentation/reference/sql/drop-mat-view.md
+++ b/documentation/reference/sql/drop-mat-view.md
@@ -7,16 +7,9 @@ description:
:::info
-Materialized View support is in **beta**.
+Materialized View support is in **beta**. It may not be fit for production use.
-It may not be fit for production use.
-
-To enable **beta** materialized views, set `cairo.mat.view.enabled=true` in `server.conf`, or export the equivalent
-environment variable: `QDB_CAIRO_MAT_VIEW_ENABLED=true`.
-
-Please let us know if you run into issues.
-
-Either:
+Please let us know if you run into issues. Either:
1. Email us at [support@questdb.io](mailto:support@questdb.io)
2. Join our [public Slack](https://slack.questdb.com/)
@@ -24,13 +17,12 @@ Either:
:::
-
`DROP MATERIALIZED VIEW` permanently deletes a materialized view and its
contents.
The deletion is **permanent** and **not recoverable**, except if the view was
-created in a non-standard volume. In such cases, the view is only logically removed while the underlying data
-remains intact in its volume.
+created in a non-standard volume. In such cases, the view is only logically
+removed while the underlying data remains intact in its volume.
Disk space is reclaimed asynchronously after the materialized view is dropped.
@@ -54,5 +46,6 @@ it exists.
## See also
-For more information on the concept, see the
-[reference](/docs/concept/mat-views/) on materialized views.
+For more information on the concept, see the
+[introduction](/docs/concept/mat-views/) and [guide](/docs/guides/mat-views/) on
+materialized views.
diff --git a/documentation/reference/sql/group-by.md b/documentation/reference/sql/group-by.md
index 446b58b41..fe4e31d54 100644
--- a/documentation/reference/sql/group-by.md
+++ b/documentation/reference/sql/group-by.md
@@ -5,7 +5,7 @@ description: GROUP BY SQL keyword reference documentation.
---
Groups aggregation calculations by one or several keys. In QuestDB, this clause
-is [optional](/docs/concept/sql-extensions/#optionality-of-group-by/).
+is [optional](/docs/concept/sql-extensions/#group-by-is-optional).
## Syntax
diff --git a/documentation/reference/sql/over.md b/documentation/reference/sql/over.md
index 698577e77..03b40ed99 100644
--- a/documentation/reference/sql/over.md
+++ b/documentation/reference/sql/over.md
@@ -49,7 +49,7 @@ They are often used in analytics for tasks such as:
- Finding the maximum or minimum value in a sequence or partition
- Ranking items within a specific category or partition
- Calculating [moving averages](/docs/reference/function/window#avg) or
- [cumulative sums](/docs/reference/function/window#cumulative-sum)
+ [cumulative sums](/docs/reference/function/window#cumulative-bid-size)
Window functions are tough to grok.
@@ -110,7 +110,7 @@ Where:
- [`last_value()`](/docs/reference/function/window#last_value) – Retrieves the last value in a window
-- [`lead()`](/docs/docs/reference/function/window#lead) – Accesses data from subsequent rows
+- [`lead()`](/docs/reference/function/window#lead) – Accesses data from subsequent rows
- [`max()`](/docs/reference/function/window#max) – Returns the maximum value within a window
@@ -120,7 +120,7 @@ Where:
- [`row_number()`](/docs/reference/function/window#row_number) – Assigns sequential numbers to rows
-- [`sum()`](/docs/reference/function/window#cumulative-sum) – Calculates the sum within a window
+- [`sum()`](/docs/reference/function/window#cumulative-bid-size) – Calculates the sum within a window
## Components of a window function
diff --git a/documentation/reference/sql/overview.md b/documentation/reference/sql/overview.md
index 40b2eb7b7..373e4dbbc 100644
--- a/documentation/reference/sql/overview.md
+++ b/documentation/reference/sql/overview.md
@@ -385,13 +385,9 @@ run_query("UPDATE trades SET value = 9876 WHERE name = 'abc'")
:::info
-Apache Parquet support is in **beta**.
+Apache Parquet support is in **beta**. It may not be fit for production use.
-It may not be fit for production use.
-
-Please let us know if you run into issues.
-
-Either:
+Please let us know if you run into issues. Either:
1. Email us at [support@questdb.io](mailto:support@questdb.io)
2. Join our [public Slack](https://slack.questdb.com/)
@@ -447,7 +443,7 @@ And to learn about some of our favourite, most powerful syntax:
date and time
- [`SAMPLE BY`](/docs/reference/sql/sample-by/) to summarize data into chunks
based on a specified time interval, from a year to a microsecond
-- [`WHERE IN`](/docs/reference/sql/where/#time-range) to compress time ranges
+- [`WHERE IN`](/docs/reference/sql/where/#time-range-where-in) to compress time ranges
into concise intervals
- [`LATEST ON`](/docs/reference/sql/latest-on/) for latest values within
multiple series within a table
diff --git a/documentation/reference/sql/refresh-mat-view.md b/documentation/reference/sql/refresh-mat-view.md
index dc58ee1b2..ed0496caa 100644
--- a/documentation/reference/sql/refresh-mat-view.md
+++ b/documentation/reference/sql/refresh-mat-view.md
@@ -7,16 +7,9 @@ description:
:::info
-Materialized View support is in **beta**.
+Materialized View support is in **beta**. It may not be fit for production use.
-It may not be fit for production use.
-
-To enable **beta** materialized views, set `cairo.mat.view.enabled=true` in `server.conf`, or export the equivalent
-environment variable: `QDB_CAIRO_MAT_VIEW_ENABLED=true`.
-
-Please let us know if you run into issues.
-
-Either:
+Please let us know if you run into issues. Either:
1. Email us at [support@questdb.io](mailto:support@questdb.io)
2. Join our [public Slack](https://slack.questdb.com/)
@@ -24,19 +17,18 @@ Either:
:::
-
-`REFRESH MATERIALIZED VIEW` refreshes a materialized view. This is helpful when a view
-becomes invalid, and no longer refreshes incrementally.
+`REFRESH MATERIALIZED VIEW` refreshes a materialized view. This is helpful when
+a view becomes invalid and no longer refreshes incrementally.
When the FULL keyword is specified, this command deletes the data in the target
materialized view and inserts the results of the query into the view. It also
-marks the materialized view as valid, reactivating the incremental
-refresh processes.
+marks the materialized view as valid, reactivating the incremental refresh
+processes.
-When the `INCREMENTAL` keyword is used, the `REFRESH` command schedules an incremental
-refresh of the materialized view. Usually, incremental refresh is automatic, so
-this command is useful only in niche situations when incremental refresh is not working
-as expected, but the view is still valid.
+When the `INCREMENTAL` keyword is used, the `REFRESH` command schedules an
+incremental refresh of the materialized view. Usually, incremental refresh is
+automatic, so this command is useful only in niche situations when incremental
+refresh is not working as expected, but the view is still valid.
## Syntax
@@ -54,5 +46,6 @@ REFRESH MATERIALIZED VIEW trades_1h INCREMENTAL;
## See also
-For more information on the concept, see the
-[reference](/docs/concept/mat-views/) on materialized views.
+For more information on the concept, see the
+[introduction](/docs/concept/mat-views/) and [guide](/docs/guides/mat-views/) on
+materialized views.
diff --git a/documentation/reference/sql/show.md b/documentation/reference/sql/show.md
index 379d27c53..4510bee6f 100644
--- a/documentation/reference/sql/show.md
+++ b/documentation/reference/sql/show.md
@@ -310,5 +310,5 @@ The following functions allow querying tables with filters and using the results
as part of a function:
- [table_columns()](/docs/reference/function/meta/#table_columns)
-- [tables()](/docs/reference/function/meta/#all_tables)
+- [tables()](/docs/reference/function/meta/#tables)
- [table_partitions()](/docs/reference/function/meta/#table_partitions)
diff --git a/documentation/sidebars.js b/documentation/sidebars.js
index e07680764..7b94e5515 100644
--- a/documentation/sidebars.js
+++ b/documentation/sidebars.js
@@ -446,6 +446,7 @@ module.exports = {
"guides/compression-zfs",
"reference/api/java-embedded",
"guides/import-csv",
+ "guides/mat-views",
"guides/modifying-data",
"guides/replication-tuning",
"guides/working-with-timestamps-timezones",
diff --git a/documentation/third-party-tools/kafka.md b/documentation/third-party-tools/kafka.md
index 4d223deba..3c5ddb56a 100644
--- a/documentation/third-party-tools/kafka.md
+++ b/documentation/third-party-tools/kafka.md
@@ -396,7 +396,7 @@ the `timestamp.string.fields` option. Set the timestamp format with the
`timestamp.string.format` option, which adheres to the QuestDB timestamp format.
See the
-[QuestDB timestamp](/docs/reference/function/date-time/#date-and-timestamp-format)
+[QuestDB timestamp](/docs/reference/function/date-time/#timestamp-format)
documentation for more details.
#### Example
diff --git a/documentation/third-party-tools/qstudio.md b/documentation/third-party-tools/qstudio.md
index 198690b1d..049dd11f1 100644
--- a/documentation/third-party-tools/qstudio.md
+++ b/documentation/third-party-tools/qstudio.md
@@ -14,7 +14,7 @@ every database including QuestDB via the PostgreSQL driver.
## Prerequisites
-- A running QuestDB instance (See [Getting Started](/docs/#getting-started))
+- A running QuestDB instance (See [Getting Started](/docs/quick-start/))
## Configure QuestDB connection
diff --git a/documentation/troubleshooting/faq.md b/documentation/troubleshooting/faq.md
index d12708640..dc37ed3d9 100644
--- a/documentation/troubleshooting/faq.md
+++ b/documentation/troubleshooting/faq.md
@@ -76,7 +76,7 @@ io.questdb.cairo.CairoException: [24] could not open read-only [file=/root/.ques
The machine may have insufficient limits for the maximum number of open files.
Try checking the `ulimit` value on your machine. Refer to
-[capacity planning](/docs/deployment/capacity-planning/#maximum-open-files) page
+[capacity planning](/docs/operations/capacity-planning/#maximum-open-files) page
for more details.
## Why do I see `errno=12` mmap messages in the server logs?
@@ -90,7 +90,7 @@ Log messages may appear like the following:
The machine may have insufficient limits of memory map areas a process may have.
Try checking and increasing the `vm.max_map_count` value on your machine. Refer
to
-[capacity planning](/docs/deployment/capacity-planning/#max-virtual-memory-areas-limit)
+[capacity planning](/docs/operations/capacity-planning/#max-virtual-memory-areas-limit)
page for more details.
## Why do I see `async command/event queue buffer overflow` messages when dropping partitions?
@@ -112,7 +112,7 @@ inserted with identical fields. Until then, you need to
## Can I query by time?
Yes! When using the `WHERE` statement to define the time range for a query, the
-[`IN`](/docs/reference/sql/where/#time-range-with-modifier) keyword allows
+[`IN`](/docs/reference/sql/where/#time-range-with-interval-modifier) keyword allows
modifying the range and interval of the search. The range can be tuned to a
second resolution.
diff --git a/documentation/web-console.md b/documentation/web-console.md
index edf4ca617..afc1d565f 100644
--- a/documentation/web-console.md
+++ b/documentation/web-console.md
@@ -35,7 +35,7 @@ running locally, this will be [http://localhost:9000](http://localhost:9000).
It is possible to hide QuestDB system tables (`telemetry` and
`telemetry_config`) in Schema explorer by setting up the following configuration
-option in a [server.conf](/docs/concept/root-directory-structure/#serverconf)
+option in a [server.conf](/docs/concept/root-directory-structure/#conf-directory)
file:
```bash title="/var/lib/questdb/conf/server.conf"
diff --git a/documentation/why-questdb.md b/documentation/why-questdb.md
index 730e3e90c..89e5f77a3 100644
--- a/documentation/why-questdb.md
+++ b/documentation/why-questdb.md
@@ -52,7 +52,7 @@ Greatest hits include:
- [`SAMPLE BY`](/docs/reference/sql/sample-by/) summarizes data into chunks
based on a specified time interval, from a year to a microsecond
-- [`WHERE IN`](/docs/reference/sql/where/#time-range) to compress time ranges
+- [`WHERE IN`](/docs/reference/sql/where/#time-range-where-in) to compress time ranges
into concise intervals
- [`LATEST ON`](/docs/reference/sql/latest-on/) for latest values within
multiple series within a table
@@ -148,7 +148,7 @@ From there, you can learn more about what's to offer.
upload/export functionality
- [Grafana guide](/docs/third-party-tools/grafana/) to visualize your data as
beautiful and functional charts.
-- [Capacity planning](/docs/deployment/capacity-planning/) to optimize your
+- [Capacity planning](/docs/operations/capacity-planning/) to optimize your
QuestDB deployment for production workloads.
## Support
diff --git a/docusaurus.config.js b/docusaurus.config.js
index f9ee5019e..6021ecc2a 100644
--- a/docusaurus.config.js
+++ b/docusaurus.config.js
@@ -20,8 +20,9 @@ const config = {
staticDirectories: ['static'],
projectName: "questdb",
customFields,
- onBrokenLinks: "warn",
- onBrokenMarkdownLinks: "warn",
+ onBrokenLinks: isPreviews ? "warn" : "throw",
+ onBrokenMarkdownLinks: isPreviews ? "warn" : "throw",
+ onBrokenAnchors: isPreviews ? "warn" : "throw",
trailingSlash: true,
stylesheets: [
{
diff --git a/src/components/Guides/index.tsx b/src/components/Guides/index.tsx
index 13a9cbb76..49340e713 100644
--- a/src/components/Guides/index.tsx
+++ b/src/components/Guides/index.tsx
@@ -2,7 +2,7 @@ import { DocButton } from '../DocButton'
const guides = [
{
- href: '/docs/deployment/capacity-planning/',
+ href: '/docs/operations/capacity-planning/',
name: 'Capacity planning',
description: 'Select a storage medium, plan, size and compress your QuestDB deployment.',
},
diff --git a/static/images/docs/concepts/mat-view-agg.svg b/static/images/docs/concepts/mat-view-agg.svg
new file mode 100644
index 000000000..90be9ba99
--- /dev/null
+++ b/static/images/docs/concepts/mat-view-agg.svg
@@ -0,0 +1,222 @@
+
+
+
+
diff --git a/static/images/docs/diagrams/.railroad b/static/images/docs/diagrams/.railroad
index 1b5f7bcf7..3a042de6e 100644
--- a/static/images/docs/diagrams/.railroad
+++ b/static/images/docs/diagrams/.railroad
@@ -373,6 +373,11 @@ createMatViewDef
(viewTargetVolumeDef)?
('OWNED' 'BY' ownerName)?
+createMatViewCompactDef
+ ::= 'CREATE' 'MATERIALIZED' 'VIEW' ('IF' 'NOT' 'EXISTS')? viewName
+ 'AS'
+ (query)
+
alterMatView
::= 'ALTER' 'MATERIALIZED' 'VIEW' viewName
diff --git a/static/images/docs/diagrams/createMatViewCompactDef.svg b/static/images/docs/diagrams/createMatViewCompactDef.svg
new file mode 100644
index 000000000..ffde7d2f2
--- /dev/null
+++ b/static/images/docs/diagrams/createMatViewCompactDef.svg
@@ -0,0 +1,61 @@
+
\ No newline at end of file