From 53d5140da84d82c8da5bc6794d11c2568b30fcb4 Mon Sep 17 00:00:00 2001 From: Philip Krauss <35487337+philkra@users.noreply.github.com> Date: Wed, 29 Oct 2025 14:17:46 +0100 Subject: [PATCH 1/5] Add PG18 to the support matrix (#4528) * Update Alter_Table doc (#4523) * chore: cleanup. (#4527) * chore: cleanup. * Apply suggestions from code review Co-authored-by: Anastasiia Tovpeko <114177030+atovpeko@users.noreply.github.com> Signed-off-by: Iain Cox --------- Signed-off-by: Iain Cox Co-authored-by: billy-the-fish Co-authored-by: Anastasiia Tovpeko <114177030+atovpeko@users.noreply.github.com> * Add PG18 to the support matrix * update --------- Signed-off-by: Iain Cox Co-authored-by: Anastasiia Tovpeko <114177030+atovpeko@users.noreply.github.com> Co-authored-by: Iain Cox Co-authored-by: billy-the-fish Co-authored-by: atovpeko --- _partials/_cloudtrial.md | 2 +- _partials/_cloudtrial_unused.md | 2 +- _partials/_consider-cloud.md | 2 +- ...self_postgres_timescaledb_compatibility.md | 36 +++++++++++-------- _partials/_timescale-cloud-platforms.md | 2 +- 5 files changed, 25 insertions(+), 19 deletions(-) diff --git a/_partials/_cloudtrial.md b/_partials/_cloudtrial.md index 745c0dc9a5..e1a4ed6930 100644 --- a/_partials/_cloudtrial.md +++ b/_partials/_cloudtrial.md @@ -1,4 +1,4 @@ - + Your $CLOUD_LONG trial is completely free for you to use for the first thirty days. This gives you enough time to complete all the tutorials and run a diff --git a/_partials/_cloudtrial_unused.md b/_partials/_cloudtrial_unused.md index b35cabebe3..5d29f295c1 100644 --- a/_partials/_cloudtrial_unused.md +++ b/_partials/_cloudtrial_unused.md @@ -1,4 +1,4 @@ - +
  • Get started at the click of a button
  • diff --git a/_partials/_consider-cloud.md b/_partials/_consider-cloud.md index 27e440e181..55a26f73e4 100644 --- a/_partials/_consider-cloud.md +++ b/_partials/_consider-cloud.md @@ -1,4 +1,4 @@ - + $CLOUD_LONG is a fully managed service with automatic backup and restore, high availability with replication, seamless scaling and resizing, and much more. You diff --git a/_partials/_migrate_self_postgres_timescaledb_compatibility.md b/_partials/_migrate_self_postgres_timescaledb_compatibility.md index 820c5e9584..1719ee5a2d 100644 --- a/_partials/_migrate_self_postgres_timescaledb_compatibility.md +++ b/_partials/_migrate_self_postgres_timescaledb_compatibility.md @@ -1,19 +1,25 @@ + -| $TIMESCALE_DB version |$PG 17|$PG 16|$PG 15|$PG 14|$PG 13|$PG 12|$PG 11|$PG 10| -|-----------------------|-|-|-|-|-|-|-|-| -| 2.22.x |✅|✅|✅|❌|❌|❌|❌|❌|❌| -| 2.21.x |✅|✅|✅|❌|❌|❌|❌|❌|❌| -| 2.20.x |✅|✅|✅|❌|❌|❌|❌|❌|❌| -| 2.17 - 2.19 |✅|✅|✅|✅|❌|❌|❌|❌|❌| -| 2.16.x |❌|✅|✅|✅|❌|❌|❌|❌|❌|❌| -| 2.13 - 2.15 |❌|✅|✅|✅|✅|❌|❌|❌|❌| -| 2.12.x |❌|❌|✅|✅|✅|❌|❌|❌|❌| -| 2.10.x |❌|❌|✅|✅|✅|✅|❌|❌|❌| -| 2.5 - 2.9 |❌|❌|❌|✅|✅|✅|❌|❌|❌| -| 2.4 |❌|❌|❌|❌|✅|✅|❌|❌|❌| -| 2.1 - 2.3 |❌|❌|❌|❌|✅|✅|✅|❌|❌| -| 2.0 |❌|❌|❌|❌|❌|✅|✅|❌|❌ -| 1.7 |❌|❌|❌|❌|❌|✅|✅|✅|✅| +$PG 15 support is deprecated and will be removed from $TIMESCALE_DB in June 2026. + + + +| $TIMESCALE_DB version |$PG 18|$PG 17|$PG 16|$PG 15|$PG 14|$PG 13|$PG 12|$PG 11|$PG 10| +|-----------------------|-|-|-|-|-|-|-|-|-| +| 2.23.x |✅|✅|✅|✅|❌|❌|❌|❌|❌|❌| +| 2.22.x |❌|✅|✅|✅|❌|❌|❌|❌|❌|❌| +| 2.21.x |❌|✅|✅|✅|❌|❌|❌|❌|❌|❌| +| 2.20.x |❌|✅|✅|✅|❌|❌|❌|❌|❌|❌| +| 2.17 - 2.19 |❌|✅|✅|✅|✅|❌|❌|❌|❌|❌| +| 2.16.x |❌|❌|✅|✅|✅|❌|❌|❌|❌|❌| +| 2.13 - 2.15 |❌|❌|✅|✅|✅|✅|❌|❌|❌|❌| +| 2.12.x |❌|❌|❌|✅|✅|✅|❌|❌|❌|❌| +| 2.10.x |❌|❌|❌|✅|✅|✅|✅|❌|❌|❌| +| 2.5 - 2.9 |❌|❌|❌|❌|✅|✅|✅|❌|❌|❌| +| 2.4 |❌|❌|❌|❌|❌|✅|✅|❌|❌|❌| +| 2.1 - 2.3 |❌|❌|❌|❌|❌|✅|✅|✅|❌|❌| +| 2.0 |❌|❌|❌|❌|❌|❌|✅|✅|❌|❌| +| 1.7 |❌|❌|❌|❌|❌|❌|✅|✅|✅|✅| We recommend not using $TIMESCALE_DB with $PG 17.1, 16.5, 15.9, 14.14, 13.17, 12.21. These minor versions [introduced a breaking binary interface change][postgres-breaking-change] that, diff --git a/_partials/_timescale-cloud-platforms.md b/_partials/_timescale-cloud-platforms.md index 91cc6b844c..9f35235d54 100644 --- a/_partials/_timescale-cloud-platforms.md +++ b/_partials/_timescale-cloud-platforms.md @@ -31,7 +31,7 @@ $COMPANY offers the following services for your self-hosted installations: ### $PG, $TIMESCALE_DB support matrix -$TIMESCALE_DB and $TOOLKIT_LONG run on Postgres v10, v11, v12, v13, v14, v15, v16, and v17. Currently Postgres 15 and higher are supported. +$TIMESCALE_DB and $TOOLKIT_LONG run on Postgres v10, v11, v12, v13, v14, v15, v16, v17, and v18. The latest versions support Postgres 15 and higher. 
From e66e4e8331a28a39428fb8b54d4a88a1a1ccb36b Mon Sep 17 00:00:00 2001 From: Anastasiia Tovpeko <114177030+atovpeko@users.noreply.github.com> Date: Wed, 29 Oct 2025 15:19:42 +0200 Subject: [PATCH 2/5] first update (#4526) --- _partials/_timescaledb-gucs.md | 2 +- api/hypertable/create_table.md | 12 ++++-------- 2 files changed, 5 insertions(+), 9 deletions(-) diff --git a/_partials/_timescaledb-gucs.md b/_partials/_timescaledb-gucs.md index e34a0c729e..bd57e7e266 100644 --- a/_partials/_timescaledb-gucs.md +++ b/_partials/_timescaledb-gucs.md @@ -67,7 +67,7 @@ | `enable_tiered_reads` | `BOOLEAN` | `true` | Enable reading of tiered data by including a foreign table representing the data in the object storage into the query plan | | `enable_transparent_decompression` | `BOOLEAN` | `true` | Enable transparent decompression when querying hypertable | | `enable_tss_callbacks` | `BOOLEAN` | `true` | Enable ts_stat_statements callbacks | -| `enable_uuid_compression` | `BOOLEAN` | `false` | Enable uuid compression | +| `enable_uuid_compression` | `BOOLEAN` | `true` | Enable uuid compression | | `enable_vectorized_aggregation` | `BOOLEAN` | `true` | Enable vectorized aggregation for compressed data | | `last_tuned` | `STRING` | `NULL` | records last time timescaledb-tune ran | | `last_tuned_version` | `STRING` | `NULL` | version of timescaledb-tune used to tune | diff --git a/api/hypertable/create_table.md b/api/hypertable/create_table.md index faa96db632..05ce9a269f 100644 --- a/api/hypertable/create_table.md +++ b/api/hypertable/create_table.md @@ -89,26 +89,22 @@ arguments specific to $TIMESCALE_DB. ```sql - -- For optimal compression on the ID column, first enable UUIDv7 compression - SET enable_uuid_compression=true; - -- Then create your table + -- UUIDv7 compression is enabled by default CREATE TABLE events ( id uuid PRIMARY KEY DEFAULT generate_uuidv7(), payload jsonb - ) WITH (tsdb.hypertable, tsdb.partition_column = 'id'); + ) WITH (tsdb.hypertable, tsdb.partition_column = 'id'); ``` ```sql - -- For optimal compression on the ID column, first enable UUIDv7 compression - SET enable_uuid_compression=true; - -- Then create your table + -- UUIDv7 compression is enabled by default CREATE TABLE events ( id uuid PRIMARY KEY DEFAULT uuidv7(), payload jsonb - ) WITH (tsdb.hypertable, tsdb.partition_column = 'id'); + ) WITH (tsdb.hypertable, tsdb.partition_column = 'id'); ``` From f49a858515e66713535e873c58c10daeac17fb5a Mon Sep 17 00:00:00 2001 From: Iain Cox Date: Tue, 4 Nov 2025 13:19:38 +0000 Subject: [PATCH 3/5] chore: compress on INSERT. (#4533) * chore: compress on INSERT. * chore: add missing partials. * Update changelog with new features and enhancements (#4536) * Update changelog with new features and enhancements Added details about Crypto Payments early access and S3 Source Connector improvements. Signed-off-by: Brandon * review and images * crypto payments added --------- Signed-off-by: Brandon Co-authored-by: Anastasiia Tovpeko <114177030+atovpeko@users.noreply.github.com> Co-authored-by: atovpeko * fixes on azure review (#4538) * CLI/MCP fixes (#4537) * Fix erroneous references to --cpu-memory flag * Improve documentation of configuration parameters, env vars, and flags * Add new 'create role' command * Add backticks around output format * Minor tweaks to global CLI flags * List config options, not env vars * MCP tools, not commands * Add new service_fork MCP tool * chore: update on review. 
--------- Signed-off-by: Iain Cox Co-authored-by: billy-the-fish Co-authored-by: Iain Cox * Link to support portal + fix duplication (#4541) * chore: update logo in the readme. (#4542) * Remove table access method option (#4543) This option no longer exists Signed-off-by: Philip Krauss <35487337+philkra@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Philip Krauss <35487337+philkra@users.noreply.github.com> Signed-off-by: Iain Cox * Update _use-case-transport-geolocation.md (#4501) Hinting about the tuple decompression limit error and how to set it to zero. Signed-off-by: Raja Yogan Co-authored-by: Iain Cox --------- Signed-off-by: Brandon Signed-off-by: Iain Cox Signed-off-by: Philip Krauss <35487337+philkra@users.noreply.github.com> Signed-off-by: Raja Yogan Co-authored-by: billy-the-fish Co-authored-by: Brandon Co-authored-by: Anastasiia Tovpeko <114177030+atovpeko@users.noreply.github.com> Co-authored-by: atovpeko Co-authored-by: Nathan Cochran Co-authored-by: Philip Krauss <35487337+philkra@users.noreply.github.com> Co-authored-by: Raja Yogan --- README.md | 4 +- _partials/_aws-features.md | 4 +- _partials/_azure-features.md | 8 +- _partials/_billing-example.md | 4 + _partials/_devops-cli-config-options.md | 15 +++ _partials/_devops-cli-global-flags.md | 4 +- _partials/_devops-cli-reference.md | 107 ++++++++---------- _partials/_devops-mcp-commands.md | 62 +++++----- _partials/_early_access_2_23_0.md | 1 + _partials/_manage-pricing-plan.md | 4 +- _partials/_prometheus-integrate.md | 2 +- _partials/_since_2_23_0.md | 1 + _partials/_support-plans.md | 7 +- _partials/_use-case-transport-geolocation.md | 15 ++- about/changelog.md | 48 ++++++++ about/pricing-and-account-management.md | 4 +- ai/mcp-server.md | 4 +- api/hypercore/alter_table.md | 1 - .../try-key-features-timescale-products.md | 5 + migrate/livesync-for-s3.md | 2 +- .../configuration/timescaledb-config.md | 6 - use-timescale/write-data/insert.md | 34 ++++++ 22 files changed, 232 insertions(+), 110 deletions(-) create mode 100644 _partials/_devops-cli-config-options.md create mode 100644 _partials/_early_access_2_23_0.md create mode 100644 _partials/_since_2_23_0.md diff --git a/README.md b/README.md index d80f6e3900..a583f3a6a2 100644 --- a/README.md +++ b/README.md @@ -1,7 +1,7 @@
    - - + + Tiger Data logo
    diff --git a/_partials/_aws-features.md b/_partials/_aws-features.md index 6b83b8229b..9487a115ad 100644 --- a/_partials/_aws-features.md +++ b/_partials/_aws-features.md @@ -62,4 +62,6 @@ The features included in each [$PRICING_PLAN][pricing-plans] are: For a personalized quote, [get in touch with $COMPANY][contact-company]. [pricing-plans]: https://www.timescale.com/pricing -[contact-company]: https://www.tigerdata.com/contact/ \ No newline at end of file +[contact-company]: https://www.tigerdata.com/contact/ +[hipaa-compliance]: https://www.hhs.gov/hipaa/for-professionals/index.html +[commercial-sla]: https://www.timescale.com/legal/timescale-cloud-terms-of-service \ No newline at end of file diff --git a/_partials/_azure-features.md b/_partials/_azure-features.md index 7a14f11fac..b8aba46031 100644 --- a/_partials/_azure-features.md +++ b/_partials/_azure-features.md @@ -15,7 +15,7 @@ The features included in each [$PRICING_PLAN][pricing-plans] are: | Number of $SERVICE_SHORTs | Up to 2 free services | Up to 2 free and 4 standard services | Up to 2 free and and unlimited standard services | Up to 2 free and and unlimited standard services | | CPU limit per $SERVICE_SHORT | Shared | Up to 8 CPU | Up to 32 CPU | Up to 64 CPU | | Memory limit per $SERVICE_SHORT | Shared | Up to 32 GB | Up to 128 GB | Up to 256 GB | -| Storage limit per $SERVICE_SHORT | 750 MB | Up to 16 TB | Up to 16 TB | Up to 64 TB | +| Storage limit per $SERVICE_SHORT | 750 MB | Up to 16 TB | Up to 16 TB | Up to 16 TB | | Independently scale compute and storage | | Standard services only | Standard services only | Standard services only | | **Data services and workloads** | | | | | Relational | ✓ | ✓ | ✓ | ✓ | @@ -28,7 +28,7 @@ The features included in each [$PRICING_PLAN][pricing-plans] are: | **Storage and performance** | | | | | | IOPS | Shared | 3,000 - 5,000 | 5,000 - 8,000 | 5,000 - 8,000 | | Bandwidth (autoscales) | Shared | 125 - 250 Mbps | 250 - 500 Mbps | Up to 500 mbps | -| I/O boost | | | Add-on:
    Up to 16K IOPS, 1000 Mbps BW | Add-on:
    Up to 32K IOPS, 4000 Mbps BW | +| I/O boost | | | Add-on:
    Up to 16K IOPS, 1000 Mbps BW | Add-on:
    Up to 16K IOPS, 1000 Mbps BW | | **Availability and monitoring** | | | | | | High-availability replicas
    (Automated multi-AZ failover) | | ✓ | ✓ | ✓ | | Read replicas | | | ✓ | ✓ | @@ -58,4 +58,6 @@ The features included in each [$PRICING_PLAN][pricing-plans] are: For a personalized quote, [get in touch with $COMPANY][contact-company]. [pricing-plans]: https://www.timescale.com/pricing -[contact-company]: https://www.tigerdata.com/contact/ \ No newline at end of file +[contact-company]: https://www.tigerdata.com/contact/ +[hipaa-compliance]: https://www.hhs.gov/hipaa/for-professionals/index.html +[commercial-sla]: https://www.timescale.com/legal/timescale-cloud-terms-of-service \ No newline at end of file diff --git a/_partials/_billing-example.md b/_partials/_billing-example.md index 3be33287ba..7b3dfc2eb5 100644 --- a/_partials/_billing-example.md +++ b/_partials/_billing-example.md @@ -1,3 +1,5 @@ +import BillingForInactiveServices from "versionContent/_partials/_billing-for-inactive-services.mdx"; + You are billed at the end of each month in arrears. Your monthly invoice includes an itemized cost accounting for each $SERVICE_LONG and any additional charges. @@ -22,4 +24,6 @@ and consumed high-performance storage for 720 hours total: Some add-ons such as tiered storage, HA replicas, and connection pooling may incur additional charges. These charges are clearly marked in your billing snapshot in $CONSOLE. + + \ No newline at end of file diff --git a/_partials/_devops-cli-config-options.md b/_partials/_devops-cli-config-options.md new file mode 100644 index 0000000000..962e25b6f6 --- /dev/null +++ b/_partials/_devops-cli-config-options.md @@ -0,0 +1,15 @@ + +| Flag | Default | Description | +|---------------------------|-------------------|-----------------------------------------------------------------------------| +| `analytics` | `true` | Set to `false` to disable usage analytics | +| `color ` | `true` | Set to `false` to disable colored output | +| `debug` | No debugging | Enable debug logging | +| `docs_mcp` | `true` | Enable or disable the $COMPANY documentation MCP proxy | +| `output` | `table` | Set the output format to `json`, `yaml`, or `table` | +| `password-storage` string | `keyring` | Set the password storage method. Options are `keyring`, `pgpass`, or `none` | +| `service-id` string | - | Set the $SERVICE_LONG to manage | +| `version_check_interval` | `24h` | Set how often the $CLI_SHORT checks for a new version | + +You can also set these configuration options as environment variables. Environment variables: +* Take precedence over configuration parameters values. +* Are in upper case and use the `TIGER_` prefix. For example, `TIGER_ANALYTICS` \ No newline at end of file diff --git a/_partials/_devops-cli-global-flags.md b/_partials/_devops-cli-global-flags.md index 1fd8c692dc..4bfc3d0bad 100644 --- a/_partials/_devops-cli-global-flags.md +++ b/_partials/_devops-cli-global-flags.md @@ -3,9 +3,9 @@ |-------------------------------|-------------------|-----------------------------------------------------------------------------| | `--analytics` | `true` | Set to `false` to disable usage analytics | | `--color ` | `true` | Set to `false` to disable colored output | -| `--config-dir` string | `.config/tiger` | Set the directory that holds `config.yaml` | +| `--config-dir` string | `~/.config/tiger` | Set the directory that holds `config.yaml` | | `--debug` | No debugging | Enable debug logging | | `--help` | - | Print help about the current command. For example, `tiger service --help` | | `--password-storage` string | keyring | Set the password storage method. 
Options are `keyring`, `pgpass`, or `none` | | `--service-id` string | - | Set the $SERVICE_LONG to manage | -| ` --skip-update-check ` | - | Do not check if a new version of $CLI_LONG is available| \ No newline at end of file +| `--skip-update-check` | - | Do not check if a new version of $CLI_LONG is available | diff --git a/_partials/_devops-cli-reference.md b/_partials/_devops-cli-reference.md index 5af5462e50..4fe8151ff6 100644 --- a/_partials/_devops-cli-reference.md +++ b/_partials/_devops-cli-reference.md @@ -1,77 +1,70 @@ import GLOBALFLAGS from "versionContent/_partials/_devops-cli-global-flags.mdx"; +import CONFIGOPTIONS from "versionContent/_partials/_devops-cli-config-options.mdx"; ## Commands You can use the following commands with $CLI_LONG. For more information on each command, use the `-h` flag. For example: `tiger auth login -h` -| Command | Subcommand | Description | -|---------|----------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| auth | | Manage authentication and credentials for your $ACCOUNT_LONG | -| | login | Create an authenticated connection to your $ACCOUNT_LONG | -| | logout | Remove the credentials used to create authenticated connections to $CLOUD_LONG | -| | status | Show your current authentication status and project ID | -| version | | Show information about the currently installed version of $CLI_LONG | -| config | | Manage your $CLI_LONG configuration | -| | show | Show the current configuration | -| | set `` `` | Set a specific value in your configuration. For example, `tiger config set debug true` | -| | unset `` | Clear the value of a configuration parameter. For example, `tiger config unset debug` | -| | reset | Reset the configuration to the defaults. This also logs you out from the current $PROJECT_LONG | -| service | | Manage the $SERVICE_LONGs in this $PROJECT_SHORT | -| | create | Create a new $SERVICE_SHORT in this $PROJECT_SHORT. Possible flags are:
    • `--name`: service name (auto-generated if not provided)
    • `--addons`: addons to enable (time-series, ai, or none for PostgreSQL-only)
    • `--region`: the region code to deploy the service in. Set an AWS region to deploy the service in AWS, an Azure region for Azure
    • `--cpu-memory`: CPU/memory allocation combination
    • `--replicas`: number of high-availability replicas
    • `--no-wait`: don't wait for the operation to complete
    • `--wait-timeout`: wait timeout duration (for example, 30m, 1h30m, 90s)
    • `--no-set-default`: don't set this service as the default service
    • `--with-password`: include password in output
    • `--output, -o`: output format (`json`, `yaml`, table)

    Possible `cpu-memory` combinations are:
    • shared/shared
    • 0.5 CPU/2 GB
    • 1 CPU/4 GB
    • 2 CPU/8 GB
    • 4 CPU/16 GB
    • 8 CPU/32 GB
    • 16 CPU/64 GB
    • 32 CPU/128 GB
    | -| | delete `` | Delete a $SERVICE_SHORT from this $PROJECT_SHORT. This operation is irreversible and requires confirmation by typing the service ID | -| | fork `` | Fork an existing service to create a new independent copy. Key features are:
    • Timing options: `--now`, `--last-snapshot`, `--to-timestamp`
    • Resource configuration: `--cpu-memory`
    • Naming: `--name `. Defaults to `{source-service-name}-fork`
    • Wait behavior: `--no-wait`, `--wait-timeout`
    • Default service: `--no-set-default`
    | -| | get `` (aliases: describe, show) | Show detailed information about a specific $SERVICE_SHORT in this $PROJECT_SHORT | -| | list | List all the $SERVICE_SHORTs in this $PROJECT_SHORT | -| | update-password `` | Update the master password for a $SERVICE_SHORT | -| db | | Database operations and management | -| | connect `` | Connect to a $SERVICE_SHORT | -| | connection-string `` | Retrieve the connection string for a $SERVICE_SHORT | -| | save-password `` | Save the password for a service | -| | test-connection `` | Test the connectivity to a $SERVICE_SHORT | -| mcp | | Manage $MCP_LONG for AI Assistant integration | -| | install `[client]` | Install and configure $MCP_LONG for a specific client (`claude-code`, `cursor`, `windsurf`, or other). If no client is specified, you'll be prompted to select one interactively | -| | start | Start $MCP_LONG. This is the same as `tiger mcp start stdio` | -| | start stdio | Start $MCP_LONG with stdio transport (default) | -| | start http | Start $MCP_LONG with HTTP transport. Includes flags: `--port` (default: `8080`), `--host` (default: `localhost`) | - - -## Global flags - -You can use the following global flags with $CLI_LONG: - - +| Command | Subcommand | Description | +|---------|-----------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| auth | | Manage authentication and credentials for your $ACCOUNT_LONG | +| | login | Create an authenticated connection to your $ACCOUNT_LONG | +| | logout | Remove the credentials used to create authenticated connections to $CLOUD_LONG | +| | status | Show your current authentication status and project ID | +| version | | Show information about the currently installed version of $CLI_LONG | +| config | | Manage your $CLI_LONG configuration | +| | show | Show the current configuration | +| | set `` `` | Set a specific value in your configuration. For example, `tiger config set debug true` | +| | unset `` | Clear the value of a configuration parameter. For example, `tiger config unset debug` | +| | reset | Reset the configuration to the defaults. 
This also logs you out from the current $PROJECT_LONG | +| service | | Manage the $SERVICE_LONGs in this $PROJECT_SHORT | +| | create | Create a new $SERVICE_SHORT in this $PROJECT_SHORT. Possible flags are:
    • `--name`: service name (auto-generated if not provided)
    • `--addons`: addons to enable. Possible values are `time-series` and `ai`. Set to `none` for vanilla $PG<br/>
    • `--region`: region code where the service will be deployed
    • `--cpu`: CPU allocation in millicores. Set to `shared` to create a free service. See the allowed CPU/memory configurations below.
    • `--memory`: memory allocation in gigabytes. Set to `shared` to create a free service. See the allowed CPU/memory configurations below.
    • `--replicas`: number of high-availability replicas
    • `--no-wait`: don't wait for the operation to complete
    • `--wait-timeout`: wait timeout duration (for example, 30m, 1h30m, 90s)
    • `--no-set-default`: don't set this service as the default service
    • `--with-password`: include password in output
    • `--output, -o`: set the output format to `json`, `yaml`, or `table`

    Allowed CPU/memory configurations are:
    • shared / shared
    • 0.5 CPU (500m) / 2GB
    • 1 CPU (1000m) / 4GB
    • 2 CPU (2000m) / 8GB
    • 4 CPU (4000m) / 16GB
    • 8 CPU (8000m) / 32GB
    • 16 CPU (16000m) / 64GB
    • 32 CPU (32000m) / 128GB
    Note: You can specify both CPU and memory, or only one; the other is configured automatically. | | | delete `` | Delete a $SERVICE_SHORT from this $PROJECT_SHORT. This operation is irreversible and requires confirmation by typing the service ID | | | fork `` | Fork an existing service to create a new independent copy. Key features are:<br/>
    • Timing options:
      • `--now`
      • `--last-snapshot`
      • `--to-timestamp`
    • Resource configuration:
      • `--cpu`: CPU allocation in millicores. Set to `shared` to create a free service. See the allowed CPU/memory configurations in the `create` subcommand
      • `--memory`: memory allocation in gigabytes. Set to `shared` to create a free service. If you do not specify this parameter, `--memory` takes the same value as the source service. See the allowed CPU/memory configurations in the `create` subcommand
    • Naming:
      • `--name `: defaults to `{source-service-name}-fork`
    • Wait behavior:
      • `--no-wait`
      • `--wait-timeout`
    • Default service:
      • `--no-set-default`
    | +| | get ``
    aliases: `describe`, `show` | Show detailed information about a specific $SERVICE_SHORT in this $PROJECT_SHORT | +| | list | List all the $SERVICE_SHORTs in this $PROJECT_SHORT | +| | update-password `` | Update the master password for a $SERVICE_SHORT | +| db | | Database operations and management | +| | connect `` | Connect to a $SERVICE_SHORT | +| | connection-string `` | Retrieve the connection string for a $SERVICE_SHORT | +| | create role `` | Create a new database role. Possible flags are:
    • `--name` (required): the name for the role you are creating
    • `--read-only`: enable permanent read-only enforcement for `--name` using `tsdb_admin.read_only_role`
    • `--from`: inherit grants from one or more roles. For example, `--from app_role`, `--from readonly_role`, `--from app_role,readonly_role`
    • `--statement-timeout`: set the statement timeout for `--name`. For example, `30s`, `5m`
    • `--password`: set the password for `--name`. If not provided, the CLI checks the `TIGER_NEW_PASSWORD` environment variable. If you have not defined `TIGER_NEW_PASSWORD`, the CLI auto-generates a secure random password.<br/>
    • `-o, --output`: set the output format to `json`, `yaml`, or `table`
    | +| | save-password `` | Save the password for a service | +| | test-connection `` | Test the connectivity to a $SERVICE_SHORT | +| mcp | | Manage $MCP_LONG for AI Assistant integration | +| | install `[client]` | Install and configure $MCP_LONG for a specific client (`claude-code`, `cursor`, `windsurf`, or other). If no client is specified, you'll be prompted to select one interactively | +| | start | Start $MCP_LONG. This is the same as `tiger mcp start stdio` | +| | start stdio | Start $MCP_LONG with stdio transport (default) | +| | start http | Start $MCP_LONG with HTTP transport. Includes flags: `--port` (default: `8080`), `--host` (default: `localhost`) | + ## Configuration parameters -By default, $CLI_LONG stores your configuration in `~/.config/tiger/config.yaml`. The name of these -variables matches the flags you use to update them. However, you can override them using the following -environmental variables: - -- **Configuration parameters** - - `TIGER_CONFIG_DIR`: path to configuration directory (default: `~/.config/tiger`) - - `TIGER_API_URL`: $REST_LONG base endpoint (default: https://console.cloud.timescale.com/public/api/v1) - - `TIGER_CONSOLE_URL`: URL to $CONSOLE (default: https://console.cloud.timescale.com) - - `TIGER_GATEWAY_URL`: URL to the $CONSOLE gateway (default: https://console.cloud.timescale.com/api) - - `TIGER_DOCS_MCP`: enable/disable docs MCP proxy (default: `true`) - - `TIGER_DOCS_MCP_URL`: URL to $MCP_SHORT for $COMPANY docs (default: https://mcp.tigerdata.com/docs) - - `TIGER_SERVICE_ID`: ID for the $SERVICE_SHORT updated when you call $CLI_SHORT commands - - `TIGER_ANALYTICS`: enable or disable analytics (default: `true`) - - `TIGER_PASSWORD_STORAGE`: password storage method (keyring, pgpass, or none) - - `TIGER_DEBUG`: enable/disable debug logging (default: `false`) - - `TIGER_COLOR`: set to `false` to disable colored output (default: `true`) - +By default, $CLI_LONG stores your configuration in `~/.config/tiger/config.yaml`. The location of the config +directory can be adjusted via the `--config-dir` flag or the `TIGER_CONFIG_DIR` environment variable. 
+ +- **Configuration options** + + You set the following configuration options using `tiger config set `: + + + +- **Global Flags** + + These flags are available on all commands and take precedence over both environment variables and configuration file values: + + - **Authentication parameters** - To authenticate without using the interactive login, either: + To authenticate without using the interactive login, either: - Set the following parameters with your [client credentials][rest-api-credentials], then `login`: ```shell - TIGER_PUBLIC_KEY= TIGER_SECRET_KEY= TIGER_PROJECT_ID=\ + TIGER_PUBLIC_KEY= TIGER_SECRET_KEY= TIGER_PROJECT_ID=\ tiger auth login ``` - - Add your [client credentials][rest-api-credentials] to the `login` command: + - Add your [client credentials][rest-api-credentials] to the `login` command: ```shell tiger auth login --public-key= --secret-key= --project-id= ``` @@ -81,4 +74,4 @@ environmental variables: [get-project-id]: /integrations/:currentVersion:/find-connection-details/#find-your-project-and-service-id [create-client-credentials]: /integrations/:currentVersion:/find-connection-details/#create-client-credentials [curl]: https://curl.se/ -[rest-api-credentials]: https://console.cloud.timescale.com/dashboard/settings \ No newline at end of file +[rest-api-credentials]: https://console.cloud.timescale.com/dashboard/settings diff --git a/_partials/_devops-mcp-commands.md b/_partials/_devops-mcp-commands.md index 0b91f74756..f2acb7481a 100644 --- a/_partials/_devops-mcp-commands.md +++ b/_partials/_devops-mcp-commands.md @@ -1,32 +1,42 @@ $MCP_LONG exposes the following MCP tools to your AI Assistant: -| Command | Parameter | Required | Description | -|--------------------------|---------------------|----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `service_list` | - | - | Returns a list of the $SERVICE_SHORTs in the current $PROJECT_SHORT. | -| `service_get` | - | - | Returns detailed information about a $SERVICE_SHORT. | -| | `service_id` | ✓ | The unique identifier of the $SERVICE_SHORT (10-character alphanumeric string). | -| | `with_password` | - | Set to `true` to include the password in the response and connection string.
    **WARNING**: never do this unless the user explicitly requests the password. | -| `service_create` | - | - | Create a new $SERVICE_SHORT in $CLOUD_LONG.
    **WARNING**: creates billable resources. | -| | `name` | - | Set the human-readable name of up to 128 characters for this $SERVICE_SHORT. | -| | `addons` | - | Set the array of [addons][create-service] to enable for the $SERVICE_SHORT. Options:
    • `time-series`: enables $TIMESCALE_DB
    • `ai`: enables the AI and vector extensions
    Set an empty array for $PG-only. | -| | `region` | - | Set the [AWS region][cloud-regions] to deploy this $SERVICE_SHORT in. | -| | `cpu_memory` | - | CPU and memory allocation combination.
    Available configurations are:
    • shared/shared
    • 0.5 CPU/2 GB
    • 1 CPU/4 GB
    • 2 CPU/8 GB
    • 4 CPU/16 GB
    • 8 CPU/32 GB
    • 16 CPU/64 GB
    • 32 CPU/128 GB
    | -| | `replicas` | - | Set the number of [high-availability replicas][readreplica] for fault tolerance. | -| | `wait` | - | Set to `true` to wait for $SERVICE_SHORT to be fully ready before returning. | -| | `timeout_minutes` | - | Set the timeout in minutes to wait for $SERVICE_SHORT to be ready. Only used when `wait=true`. Default: 30 minutes | -| | `set_default` | - | By default, the new $SERVICE_SHORT is the default for following commands in $CLI_SHORT. Set to `false` to keep the previous $SERVICE_SHORT as the default. | -| | `with_password` | - | Set to `true` to include the password for this $SERVICE_SHORT in response and connection string.
    **WARNING**: never set to `true` unless user explicitly requests the password. | -| `service_update_password` | - | - | Update the password for the `tsdbadmin` for this $SERVICE_SHORT. The password change takes effect immediately and may terminate existing connections. | -| | `service_id` | ✓ | The unique identifier of the $SERVICE_SHORT you want to update the password for. | -| | `password` | ✓ | The new password for the `tsdbadmin` user. | -| `db_execute_query` | - | - | Execute a single SQL query against a $SERVICE_SHORT. This command returns column metadata, result rows, affected row count, and execution time. Multi-statement queries are not supported.
    **WARNING**: can execute destructive SQL including INSERT, UPDATE, DELETE, and DDL commands. | -| | `service_id` | ✓ | The unique identifier of the $SERVICE_SHORT. Use `tiger_service_list` to find $SERVICE_SHORT IDs. | -| | `query` | ✓ | The SQL query to execute. Single statement queries are supported. | -| | `parameters` | - | Query parameters for parameterized queries. Values are substituted for the `$n` placeholders in the query. | -| | `timeout_seconds` | - | The query timeout in seconds. Default: `30`. | -| | `role` | - | The $SERVICE_SHORT role/username to connect as. Default: `tsdbadmin`. | -| | `pooled` | - | Use [connection pooling][Connection pooling]. This is only available if you have already enabled it for the $SERVICE_SHORT. Default: `false`. | +| Command | Parameter | Required | Description | +|--------------------------|---------------------|----------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `service_list` | - | - | Returns a list of the $SERVICE_SHORTs in the current $PROJECT_SHORT. | +| `service_get` | - | - | Returns detailed information about a $SERVICE_SHORT. | +| | `service_id` | ✓ | The unique identifier of the $SERVICE_SHORT (10-character alphanumeric string). | +| | `with_password` | - | Set to `true` to include the password in the response and connection string.
    **WARNING**: never do this unless the user explicitly requests the password. | +| `service_create` | - | - | Create a new $SERVICE_SHORT in $CLOUD_LONG.
    **WARNING**: creates billable resources. | +| | `name` | - | Set the human-readable name of up to 128 characters for this $SERVICE_SHORT. | +| | `addons` | - | Set the array of [addons][create-service] to enable for the $SERVICE_SHORT. Options:
    • `time-series`: enables $TIMESCALE_DB
    • `ai`: enables the AI and vector extensions
    Set an empty array for $PG-only. | +| | `region` | - | Set the [AWS region][cloud-regions] to deploy this $SERVICE_SHORT in. | +| | `cpu_memory` | - | CPU and memory allocation combination.
    Available configurations are:
    • shared/shared
    • 0.5 CPU/2 GB
    • 1 CPU/4 GB
    • 2 CPU/8 GB
    • 4 CPU/16 GB
    • 8 CPU/32 GB
    • 16 CPU/64 GB
    • 32 CPU/128 GB
    | | | `replicas` | - | Set the number of [high-availability replicas][readreplica] for fault tolerance. | | | `wait` | - | Set to `true` to wait for the $SERVICE_SHORT to be fully ready before returning. | | | `timeout_minutes` | - | Set the timeout in minutes to wait for the $SERVICE_SHORT to be ready. Only used when `wait=true`. Default: 30 minutes | | | `set_default` | - | By default, the new $SERVICE_SHORT is the default for subsequent commands in $CLI_SHORT. Set to `false` to keep the previous $SERVICE_SHORT as the default. | | | `with_password` | - | Set to `true` to include the password for this $SERVICE_SHORT in the response and connection string.<br/>
    **WARNING**: never set to `true` unless the user explicitly requests the password. | | `service_fork` | - | - | Fork an existing $SERVICE_SHORT to create a new independent copy.<br/>
    **WARNING**: creates billable resources. | +| | `service_id` | ✓ | The unique identifier of the $SERVICE_SHORT to fork (10-character alphanumeric string). | +| | `fork_strategy` | ✓ | Fork strategy:
    • `NOW`: fork at the current database state
    • `LAST_SNAPSHOT`: fork at the last existing snapshot. This is the fastest option<br/>
    • `PITR`: create a point-in-time recovery. You must also set the `target_time` parameter for PITR forks.
    | | | `target_time` | - | Set the target time for a `PITR` `fork_strategy` in RFC3339 format. For example, `2025-01-15T10:30:00Z`. | | | `name` | - | Set the human-readable name for the forked $SERVICE_SHORT. Defaults to `{source-service-name}-fork`. | | | `cpu_memory` | - | CPU and memory allocation combination. Inherits from the source $SERVICE_SHORT if not specified.<br/>
    Available configurations are:
    • shared/shared
    • 0.5 CPU/2 GB
    • 1 CPU/4 GB
    • 2 CPU/8 GB
    • 4 CPU/16 GB
    • 8 CPU/32 GB
    • 16 CPU/64 GB
    • 32 CPU/128 GB
    | +| | `wait` | - | Set to `true` to wait for the forked $SERVICE_SHORT to be fully ready before returning. Default: `false`. | +| | `timeout_minutes` | - | Set the timeout in minutes to wait for forked $SERVICE_SHORT to be ready. Only used when `wait=true`. Default: 30 minutes | +| | `set_default` | - | By default, the forked $SERVICE_SHORT is set as the default for following commands in $CLI_SHORT. Set to `false` to keep the previous $SERVICE_SHORT as the default. | +| | `with_password` | - | Set to `true` to include the password for the forked $SERVICE_SHORT in response and connection string.
    **WARNING**: never set to `true` unless the user explicitly requests the password. | | `service_update_password` | - | - | Update the password for the `tsdbadmin` user for this $SERVICE_SHORT. The password change takes effect immediately and may terminate existing connections. | | | `service_id` | ✓ | The unique identifier of the $SERVICE_SHORT you want to update the password for. | | | `password` | ✓ | The new password for the `tsdbadmin` user. | | `db_execute_query` | - | - | Execute a single SQL query against a $SERVICE_SHORT. This command returns column metadata, result rows, affected row count, and execution time. Multi-statement queries are not supported.<br/>
    **WARNING**: can execute destructive SQL including INSERT, UPDATE, DELETE, and DDL commands. | +| | `service_id` | ✓ | The unique identifier of the $SERVICE_SHORT. Use `tiger_service_list` to find $SERVICE_SHORT IDs. | +| | `query` | ✓ | The SQL query to execute. Single statement queries are supported. | +| | `parameters` | - | Query parameters for parameterized queries. Values are substituted for the `$n` placeholders in the query. | +| | `timeout_seconds` | - | The query timeout in seconds. Default: `30`. | +| | `role` | - | The $SERVICE_SHORT role/username to connect as. Default: `tsdbadmin`. | +| | `pooled` | - | Use [connection pooling][Connection pooling]. This is only available if you have already enabled it for the $SERVICE_SHORT. Default: `false`. | [cloud-regions]: /use-timescale/:currentVersion:/regions/ [create-service]: /getting-started/:currentVersion:/services/ diff --git a/_partials/_early_access_2_23_0.md b/_partials/_early_access_2_23_0.md new file mode 100644 index 0000000000..62f3c93c3c --- /dev/null +++ b/_partials/_early_access_2_23_0.md @@ -0,0 +1 @@ +Tech preview: [TimescaleDB v2.23.0](https://github.com/timescale/timescaledb/releases/tag/2.23.0) diff --git a/_partials/_manage-pricing-plan.md b/_partials/_manage-pricing-plan.md index f18d9a5331..dbbf10b846 100644 --- a/_partials/_manage-pricing-plan.md +++ b/_partials/_manage-pricing-plan.md @@ -2,12 +2,12 @@ You handle all details about your $CLOUD_LONG project including updates to your payment methods, and add-ons in the [billing section in $CONSOLE][cloud-billing]: Adding a payment method in Timescale - **Details**: an overview of your $PRICING_PLAN, usage, and payment details. You can add up to three credit cards to your `Wallet`. If you prefer to pay by invoice, - [contact $COMPANY][contact-company] and ask to change to corporate billing. + [contact $COMPANY][contact-company] and ask to change to corporate billing. You can also request early access to paying with crypto. - **Emails**: the addresses $COMPANY uses to communicate with you. Payment confirmations and alerts are sent to the email address you signed up with. diff --git a/_partials/_prometheus-integrate.md b/_partials/_prometheus-integrate.md index 744ea2e3d3..744ac2cc09 100644 --- a/_partials/_prometheus-integrate.md +++ b/_partials/_prometheus-integrate.md @@ -29,7 +29,7 @@ To export your data, do the following: - + diff --git a/_partials/_since_2_23_0.md b/_partials/_since_2_23_0.md new file mode 100644 index 0000000000..d80336c95e --- /dev/null +++ b/_partials/_since_2_23_0.md @@ -0,0 +1 @@ +Since [TimescaleDB v2.23.0](https://github.com/timescale/timescaledb/releases/tag/2.23.0) diff --git a/_partials/_support-plans.md b/_partials/_support-plans.md index 61cbcd1c9c..744bb18156 100644 --- a/_partials/_support-plans.md +++ b/_partials/_support-plans.md @@ -1,8 +1,9 @@ $COMPANY runs a global support organization with Customer Satisfaction (CSAT) scores above 99%. -Support covers all timezones, and is fully staffed at weekend hours. +Support covers all timezones and is fully staffed at weekend hours. All paid $PRICING_PLANs have free Developer Support through email with a target response time of 1 business day; we are often faster. If you need 24x7 responsiveness, talk to us about -[Production Support][production-support]. +[Production Support][production-support]. With Production Support, you can request help at any time at our [Support portal][support-portal]. 
-[production-support]: https://www.timescale.com/support \ No newline at end of file +[production-support]: https://www.timescale.com/support +[support-portal]: https://portal.support.timescale.com/login \ No newline at end of file diff --git a/_partials/_use-case-transport-geolocation.md b/_partials/_use-case-transport-geolocation.md index e92679180a..e9ef5b9f3d 100644 --- a/_partials/_use-case-transport-geolocation.md +++ b/_partials/_use-case-transport-geolocation.md @@ -26,7 +26,20 @@ data by time and location. UPDATE rides SET pickup_geom = ST_Transform(ST_SetSRID(ST_MakePoint(pickup_longitude,pickup_latitude),4326),2163), dropoff_geom = ST_Transform(ST_SetSRID(ST_MakePoint(dropoff_longitude,dropoff_latitude),4326),2163); ``` - This updates 10,906,860 rows of data on both columns, it takes a while. Coffee is your friend. + This updates 10,906,860 rows of data on both columns, it takes a while. Coffee is your friend. + + You might run into this error while the update happens + + `Error: tuple decompression limit exceeded by operation + Error Code: 53400 + Details: current limit: 100000, tuples decompressed: 10906860 + Hint: Consider increasing timescaledb.max_tuples_decompressed_per_dml_transaction or set to 0 (unlimited).` + + To fix this, use + + ```sql + SET timescaledb.max_tuples_decompressed_per_dml_transaction TO 0; + ``` diff --git a/about/changelog.md b/about/changelog.md index 0d29280c72..189b01d973 100644 --- a/about/changelog.md +++ b/about/changelog.md @@ -9,6 +9,54 @@ products: [cloud] All the latest features and updates to $CLOUD_LONG. +## S3 source connector and crypto payments + + +### 🔐 Crypto payments — early access! + +You can now pay your Tiger Cloud invoices with stablecoins through Stripe’s crypto payments. Each month, you can receive a Stripe crypto payment link to pay the prior month’s invoice in USD. Request access via the `Billing` page in Tiger Console. + +![Crypto payments](https://assets.timescale.com/docs/images/tiger-on-azure/tiger-cloud-crypto-payment.png) + +**Note:** Payments must be completed within 7 days. + +Supported currencies: + +- **USDC** (Ethereum, Solana, Polygon, Base) +- **USDP** (Ethereum, Solana) +- **USDG** (Ethereum) + +Eligibility: + +- Paid customer for 1 month or more +- No outstanding invoices +- $500–$10,000 monthly spend + +For a higher spend, simply contact us! + +### ✨ Detailed S3 source connector progress screen + +We’ve introduced major improvements to the S3 source connector on Tiger Cloud, to enhance observability and provide deeper visibility into connector performance. This update will help you quickly understand the overall state of the connector, take action faster, and trace the complete lifecycle of every imported file. + +![S3 connector stats](https://assets.timescale.com/docs/images/tiger-on-azure/tiger-console-s3-connector-import-details.png) + +The improvements include: + +- **Cumulative summary** of total imported, queued, and failed files +- **Search** capability across all files +- **Detailed file statuses** including: + - In-queue + - In-progress + - Completed + - Error + - Retry + - Resolve + - Cancelled +- **Filtering** by file status +- **Bulk retry** option for all failed files +- **Lifecycle history** showing file progression across states and time spent in each +- **Auto-refresh** option (every minute) for real-time updates + ## 🧠 🐅 ☁️ AI and Tiger Cloud major changes! 
diff --git a/about/pricing-and-account-management.md b/about/pricing-and-account-management.md index 75757e0303..baaf802e8f 100644 --- a/about/pricing-and-account-management.md +++ b/about/pricing-and-account-management.md @@ -50,7 +50,7 @@ If you create a $ACCOUNT_LONG from AWS Marketplace, the pricing options are pay- -## $COMPANY support +## $CLOUD_LONG support @@ -103,7 +103,7 @@ When you get $CLOUD_LONG at AWS Marketplace, the following pricing options are a -## $COMPANY support +## $CLOUD_LONG support diff --git a/ai/mcp-server.md b/ai/mcp-server.md index 9e82771484..078cd598e8 100644 --- a/ai/mcp-server.md +++ b/ai/mcp-server.md @@ -207,7 +207,7 @@ start $MCP_LONG: } ``` -## $MCP_LONG commands +## $MCP_LONG tools @@ -230,4 +230,4 @@ You can use the following $CLI_LONG global flags when you run $MCP_SHORT: [cloud-regions]: /use-timescale/:currentVersion:/regions/ [readreplica]: /use-timescale/:currentVersion:/ha-replicas/read-scaling/ [manual-config]: /ai/:currentVersion:/mcp-server/#manually-configure-the-tiger-mcp-server - \ No newline at end of file + diff --git a/api/hypercore/alter_table.md b/api/hypercore/alter_table.md index ffffa3acd5..1289c38f45 100644 --- a/api/hypercore/alter_table.md +++ b/api/hypercore/alter_table.md @@ -66,7 +66,6 @@ ALTER TABLE SET (timescaledb.enable_columnstore, timescaledb.compress_segmentby = ' [, ...]', timescaledb.sparse_index = '(), ()' timescaledb.compress_chunk_time_interval='interval', - SET ACCESS METHOD { new_access_method | DEFAULT }, ALTER SET NOT NULL, ADD CONSTRAINT UNIQUE (, ... ) ); diff --git a/getting-started/try-key-features-timescale-products.md b/getting-started/try-key-features-timescale-products.md index 1a65536eb0..d76c2b8e36 100644 --- a/getting-started/try-key-features-timescale-products.md +++ b/getting-started/try-key-features-timescale-products.md @@ -12,6 +12,7 @@ import HypercoreIntroShort from "versionContent/_partials/_hypercore-intro-short import HypercoreDirectCompress from "versionContent/_partials/_hypercore-direct-compress.mdx"; import NotAvailableFreePlan from "versionContent/_partials/_not-available-in-free-plan.mdx"; import NotSupportedAzure from "versionContent/_partials/_not-supported-for-azure.mdx"; +import SupportPlans from "versionContent/_partials/_support-plans.mdx"; # Try the key features in $COMPANY products @@ -410,6 +411,10 @@ data loss during failover. For more information, see [High availability][high-availability]. +## $CLOUD_LONG support + + + What next? See the [use case tutorials][tutorials], interact with the data in your $SERVICE_LONG using [your favorite programming language][connect-with-code], integrate your $SERVICE_LONG with a range of [third-party tools][integrations], plain old [Use $COMPANY products][use-timescale], or dive into [the API][use-the-api]. diff --git a/migrate/livesync-for-s3.md b/migrate/livesync-for-s3.md index b5c9e91ae1..3ed85198b5 100644 --- a/migrate/livesync-for-s3.md +++ b/migrate/livesync-for-s3.md @@ -144,7 +144,7 @@ To sync data from your S3 bucket to your $SERVICE_LONG using $CONSOLE: 1. To view file import statistics and logs, click `Connectors` > `Source connectors`, then select the name of your connector in the table. - ![S3 connector stats](https://assets.timescale.com/docs/images/tiger-on-azure/tiger-console-s3-connector-import-stats.png) + ![S3 connector stats](https://assets.timescale.com/docs/images/tiger-on-azure/tiger-console-s3-connector-import-details.png) 1. 
**Manage the connector** diff --git a/self-hosted/configuration/timescaledb-config.md b/self-hosted/configuration/timescaledb-config.md index 4b01ae3d76..c94aaf1104 100644 --- a/self-hosted/configuration/timescaledb-config.md +++ b/self-hosted/configuration/timescaledb-config.md @@ -11,12 +11,6 @@ import MultiNodeDeprecation from "versionContent/_partials/_multi-node-deprecati # $TIMESCALE_DB configuration and tuning -Just as you can tune settings in $PG, $TIMESCALE_DB provides a number of configuration -settings that may be useful to your specific installation and performance needs. These can -also be set within the `postgresql.conf` file or as command-line parameters -when starting $PG. -when starting $PG. - ## Distributed hypertables diff --git a/use-timescale/write-data/insert.md b/use-timescale/write-data/insert.md index 7753dc3a58..3d79f95444 100644 --- a/use-timescale/write-data/insert.md +++ b/use-timescale/write-data/insert.md @@ -6,6 +6,8 @@ keywords: [ingest] tags: [insert, write, hypertables] --- +import EarlyAccess2230 from "versionContent/_partials/_early_access_2_23_0.mdx"; + # Insert data Insert data into a hypertable with a standard [`INSERT`][postgres-insert] SQL @@ -65,4 +67,36 @@ time | location | temperature | humidity (1 row) ``` +## Direct compress on INSERT + +This columnar format enables fast scanning and +aggregation, optimizing performance for analytical workloads while also saving significant storage space. In the +$COLUMNSTORE conversion, $HYPERTABLE chunks are compressed by up to 98%, and organized for efficient, large-scale +queries. + +To improve performance, you can compress data during `INSERT` so that it is injected directly into chunks +in the $COLUMNSTORE rather than waiting for the policy. + +To enable direct compress on INSERT, enable the following [GUC parameters][gucs]: + +```sql +SET timescaledb.enable_compressed_insert = true; +SET timescaledb.enable_compressed_insert_sort_batches = true; +SET timescaledb.enable_compressed_insert_client_sorted = true; +``` + +When you set `enable_compressed_insert_client_sorted` to `true`, you must ensure that data in the input +stream is sorted. + + + +[postgres-update]: https://www.postgresql.org/docs/current/sql-update.html +[hypertable-create-table]: /api/:currentVersion:/hypertable/create_table/ +[add_columnstore_policy]: /api/:currentVersion:/hypercore/add_columnstore_policy/ +[remove_columnstore_policy]: /api/:currentVersion:/hypercore/remove_columnstore_policy/ +[create_table_arguments]: /api/:currentVersion:/hypertable/create_table/#arguments +[alter_job_samples]: /api/:currentVersion:/jobs-automation/alter_job/#samples +[convert_to_columnstore]: /api/:currentVersion:/hypercore/convert_to_columnstore/ +[gucs]: /api/:currentVersion:/configuration/gucs/ + [postgres-insert]: https://www.postgresql.org/docs/current/sql-insert.html From 574fbdf86009a0c3b07b464a453e170fb6bd57c4 Mon Sep 17 00:00:00 2001 From: Iain Cox Date: Tue, 4 Nov 2025 13:20:53 +0000 Subject: [PATCH 4/5] 484 create table with start columnstore policy immediately and selecting partition column optional (#4529) * chore: first steps for new create table. * chore: change to using CREATE TABLE with the default columnstore policy rather than add_columnstore_policy in another step. * chore: review updates. 
* Apply suggestions from code review Co-authored-by: Anastasiia Tovpeko <114177030+atovpeko@users.noreply.github.com> Signed-off-by: Iain Cox * chore: update on review * chore: change includes to show new way of working * chore: update after review. * Apply suggestions from code review Co-authored-by: Philip Krauss <35487337+philkra@users.noreply.github.com> Signed-off-by: Iain Cox * chore: update links --------- Signed-off-by: Iain Cox Co-authored-by: billy-the-fish Co-authored-by: Anastasiia Tovpeko <114177030+atovpeko@users.noreply.github.com> Co-authored-by: Philip Krauss <35487337+philkra@users.noreply.github.com> --- .helper-scripts/llms/generate_llms_full.py | 2 +- _partials/_create-hypertable-blockchain.md | 5 +- ...eate-hypertable-columnstore-policy-note.md | 18 ++ _partials/_create-hypertable-energy.md | 7 +- _partials/_create-hypertable-nyctaxis.md | 6 +- .../_create-hypertable-twelvedata-crypto.md | 5 +- .../_create-hypertable-twelvedata-stocks.md | 7 +- _partials/_create-hypertable.md | 7 +- _partials/_dimensions_info.md | 10 +- _partials/_hypercore-intro-short.md | 2 + ...re_create_hypertable_columnstore_policy.md | 64 +++++++ _partials/_hypercore_policy_workflow.md | 43 +---- _partials/_import-data-iot.md | 5 +- _partials/_import-data-nyc-taxis.md | 5 +- _partials/_old-api-create-hypertable.md | 19 +- api/hypercore/add_columnstore_policy.md | 64 +++---- api/hypercore/alter_table.md | 18 +- api/hypercore/chunk_columnstore_stats.md | 10 +- api/hypercore/index.md | 40 +---- api/hypertable/create_table.md | 98 ++++++----- api/hypertable/enable_chunk_skipping.md | 7 +- api/hypertable/index.md | 51 +++--- api/jobs-automation/alter_job.md | 164 ++++++++++-------- .../try-key-features-timescale-products.md | 50 +----- integrations/amazon-sagemaker.md | 7 +- integrations/apache-kafka.md | 7 +- integrations/aws-lambda.md | 7 +- integrations/supabase.md | 7 +- self-hosted/migration/same-db.md | 4 +- .../blockchain-query/blockchain-compress.md | 99 ----------- tutorials/blockchain-query/index.md | 2 - .../financial-tick-compress.md | 104 ----------- tutorials/financial-tick-data/index.md | 3 - tutorials/page-index/page-index.js | 12 -- .../real-time-analytics-energy-consumption.md | 42 ----- tutorials/real-time-analytics-transport.md | 35 ---- tutorials/simulate-iot-sensor-data.md | 8 +- .../about-continuous-aggregates.md | 7 +- .../create-a-continuous-aggregate.md | 2 +- use-timescale/extensions/postgis.md | 7 +- .../real-time-analytics-in-hypercore.md | 2 - use-timescale/hypercore/secondary-indexes.md | 7 +- .../hyperfunctions/counter-aggregation.md | 5 +- use-timescale/hypertables/hypertable-crud.md | 30 +--- .../hypertables-and-unique-indexes.md | 5 +- .../hypertables/improve-query-performance.md | 4 - use-timescale/hypertables/index.md | 1 - .../query-data/advanced-analytic-queries.md | 7 +- .../schema-management/about-constraints.md | 7 +- use-timescale/schema-management/indexing.md | 6 +- 50 files changed, 401 insertions(+), 733 deletions(-) create mode 100644 _partials/_create-hypertable-columnstore-policy-note.md create mode 100644 _partials/_hypercore_create_hypertable_columnstore_policy.md delete mode 100644 tutorials/blockchain-query/blockchain-compress.md delete mode 100644 tutorials/financial-tick-data/financial-tick-compress.md diff --git a/.helper-scripts/llms/generate_llms_full.py b/.helper-scripts/llms/generate_llms_full.py index 37b9a8b26e..1d0336eac9 100644 --- a/.helper-scripts/llms/generate_llms_full.py +++ b/.helper-scripts/llms/generate_llms_full.py @@ -712,7 
+712,7 @@ def process_imports(self, content: str, current_file_path: Path) -> str:
         print(f"Replaced {component_name} using default path: {default_path}")
 
         # Remove or replace components that don't have clear partials
-        orphaned_components = ['Installation', 'Skip', 'OldCreateHypertable', 'PolicyVisualizerDownsampling', 'APIReference', 'Since2180']
+        orphaned_components = ['Installation', 'Skip', 'OldCreateHypertable', 'CreateHypertablePolicyNote', 'PolicyVisualizerDownsampling', 'APIReference', 'Since2180']
         for component_name in orphaned_components:
             # Handle both normal and spaced component tags
             component_tags = [
diff --git a/_partials/_create-hypertable-blockchain.md b/_partials/_create-hypertable-blockchain.md
index bfd672de63..63f0b3c438 100644
--- a/_partials/_create-hypertable-blockchain.md
+++ b/_partials/_create-hypertable-blockchain.md
@@ -1,4 +1,4 @@
-import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx";
+import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx";
 import HypertableIntro from "versionContent/_partials/_tutorials_hypertable_intro.mdx";
 
 ## Optimize time-series data using hypertables
@@ -31,13 +31,12 @@ import HypertableIntro from "versionContent/_partials/_tutorials_hypertable_intr
        details JSONB
    ) WITH (
        tsdb.hypertable,
-       tsdb.partition_column='time',
        tsdb.segmentby='block_id',
        tsdb.orderby='time DESC'
    );
    ```
 
-   <OldCreateHypertable />
+   <CreateHypertablePolicyNote />
 
 1. Create an index on the `hash` column to make queries for individual transactions faster:
 
diff --git a/_partials/_create-hypertable-columnstore-policy-note.md b/_partials/_create-hypertable-columnstore-policy-note.md
new file mode 100644
index 0000000000..c31a73ade6
--- /dev/null
+++ b/_partials/_create-hypertable-columnstore-policy-note.md
@@ -0,0 +1,18 @@
+When you create a $HYPERTABLE using [CREATE TABLE ... WITH ...][hypertable-create-table], the default partitioning
+column is automatically the first column with a timestamp data type. Also, $TIMESCALE_DB creates a
+[columnstore policy][add_columnstore_policy] that automatically converts your data to the $COLUMNSTORE after an interval equal to the [chunk_interval][create_table_arguments], set through `compress_after` in the policy. This columnar format enables fast scanning and
+aggregation, optimizing performance for analytical workloads while also saving significant storage space. During the
+$COLUMNSTORE conversion, $HYPERTABLE chunks are compressed by up to 98% and organized for efficient, large-scale queries.
+
+You can customize this policy later using [alter_job][alter_job_samples]. However, to change `after` or
+`created_before`, the compression settings, or the $HYPERTABLE the policy is acting on, you must
+[remove the columnstore policy][remove_columnstore_policy] and [add a new one][add_columnstore_policy].
+
+You can also manually [convert chunks][convert_to_columnstore] in a $HYPERTABLE to the $COLUMNSTORE.
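A quick way to confirm the behavior this note describes is to create a hypertable and then query the jobs view. This is a minimal sketch, assuming a hypothetical `conditions` table; `timescaledb_information.jobs` and the columns shown are part of $TIMESCALE_DB:

```sql
-- Create a hypertable; the first timestamp column ("time") becomes
-- the partitioning column, and a columnstore policy is added for you.
CREATE TABLE conditions (
    "time"      TIMESTAMPTZ NOT NULL,
    device      TEXT,
    temperature DOUBLE PRECISION
) WITH (
    tsdb.hypertable
);

-- Inspect the automatically created policy (a background job).
SELECT job_id, proc_name, schedule_interval, config
FROM timescaledb_information.jobs
WHERE hypertable_name = 'conditions';
```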
+ +[add_columnstore_policy]: /api/:currentVersion:/hypercore/add_columnstore_policy/ +[remove_columnstore_policy]: /api/:currentVersion:/hypercore/remove_columnstore_policy/ +[create_table_arguments]: /api/:currentVersion:/hypertable/create_table/#arguments +[alter_job_samples]: /api/:currentVersion:/jobs-automation/alter_job/#samples +[convert_to_columnstore]: /api/:currentVersion:/hypercore/convert_to_columnstore/ +[hypertable-create-table]: /api/:currentVersion:/hypertable/create_table/ \ No newline at end of file diff --git a/_partials/_create-hypertable-energy.md b/_partials/_create-hypertable-energy.md index 7ff1cf0f4b..41e454eb9d 100644 --- a/_partials/_create-hypertable-energy.md +++ b/_partials/_create-hypertable-energy.md @@ -1,4 +1,4 @@ -import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx"; +import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx"; import HypertableIntro from "versionContent/_partials/_tutorials_hypertable_intro.mdx"; ## Optimize time-series data in hypertables @@ -15,12 +15,11 @@ import HypertableIntro from "versionContent/_partials/_tutorials_hypertable_intr type_id integer not null, value double precision not null ) WITH ( - tsdb.hypertable, - tsdb.partition_column='time' + tsdb.hypertable ); ``` - + diff --git a/_partials/_create-hypertable-nyctaxis.md b/_partials/_create-hypertable-nyctaxis.md index 585192ea92..77324274a3 100644 --- a/_partials/_create-hypertable-nyctaxis.md +++ b/_partials/_create-hypertable-nyctaxis.md @@ -1,4 +1,4 @@ -import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx"; +import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx"; ## Optimize time-series data in hypertables @@ -15,7 +15,6 @@ same way. You use regular $PG tables for relational data. 1. **Create a $HYPERTABLE to store the taxi trip data** - ```sql CREATE TABLE "rides"( vendor_id TEXT, @@ -38,11 +37,10 @@ same way. You use regular $PG tables for relational data. total_amount NUMERIC ) WITH ( tsdb.hypertable, - tsdb.partition_column='pickup_datetime', tsdb.create_default_indexes=false ); ``` - + 1. 
**Add another dimension to partition your $HYPERTABLE more efficiently**

diff --git a/_partials/_create-hypertable-twelvedata-crypto.md b/_partials/_create-hypertable-twelvedata-crypto.md
index f5bc74f7d0..722aa68dfe 100644
--- a/_partials/_create-hypertable-twelvedata-crypto.md
+++ b/_partials/_create-hypertable-twelvedata-crypto.md
@@ -1,5 +1,5 @@
 import HypertableIntro from "versionContent/_partials/_tutorials_hypertable_intro.mdx";
-import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx";
+import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx";
 
 ## Optimize time-series data in a hypertable
 
@@ -25,12 +25,11 @@ import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypert
        day_volume NUMERIC
    ) WITH (
        tsdb.hypertable,
-       tsdb.partition_column='time',
        tsdb.segmentby='symbol',
        tsdb.orderby='time DESC'
    );
    ```
 
-   <OldCreateHypertable />
+   <CreateHypertablePolicyNote />
 
 
diff --git a/_partials/_create-hypertable-twelvedata-stocks.md b/_partials/_create-hypertable-twelvedata-stocks.md
index 70a431f1ae..1a597397e5 100644
--- a/_partials/_create-hypertable-twelvedata-stocks.md
+++ b/_partials/_create-hypertable-twelvedata-stocks.md
@@ -1,5 +1,5 @@
 import HypertableIntro from "versionContent/_partials/_tutorials_hypertable_intro.mdx";
-import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx";
+import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx";
 
 ## Optimize time-series data in hypertables
 
@@ -20,11 +20,10 @@ import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypert
        price DOUBLE PRECISION NULL,
        day_volume INT NULL
    ) WITH (
-       tsdb.hypertable,
-       tsdb.partition_column='time'
+       tsdb.hypertable
    );
    ```
 
-   <OldCreateHypertable />
+   <CreateHypertablePolicyNote />
 
 1. **Create an index to support efficient queries**
 
diff --git a/_partials/_create-hypertable.md b/_partials/_create-hypertable.md
index 56ed0c630a..46ab0ba998 100644
--- a/_partials/_create-hypertable.md
+++ b/_partials/_create-hypertable.md
@@ -1,5 +1,5 @@
 import HypertableIntro from "versionContent/_partials/_tutorials_hypertable_intro.mdx";
-import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx";
+import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx";
 
 
@@ -22,11 +22,10 @@ To create a hypertable:
        price DOUBLE PRECISION NULL,
        day_volume INT NULL
    ) WITH (
-       tsdb.hypertable,
-       tsdb.partition_column='time'
+       tsdb.hypertable
    );
    ```
 
-   <OldCreateHypertable />
+   <CreateHypertablePolicyNote />
 
    You see the result immediately:
 
diff --git a/_partials/_dimensions_info.md b/_partials/_dimensions_info.md
index 742d28ea66..9a932b6530 100644
--- a/_partials/_dimensions_info.md
+++ b/_partials/_dimensions_info.md
@@ -1,4 +1,4 @@
-import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx";
+import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx";
 
 ### Dimension info
 
@@ -46,12 +46,11 @@ Create a by-range dimension builder. You can partition `by_range` on its own.
        temperature DOUBLE PRECISION NULL,
        humidity DOUBLE PRECISION NULL
    ) WITH (
-       tsdb.hypertable,
-       tsdb.partition_column='time'
+       tsdb.hypertable
    );
    ```
 
-   <OldCreateHypertable />
+   <CreateHypertablePolicyNote />
 
    This is the default partition; you do not need to add it explicitly.
@@ -152,8 +151,7 @@ CREATE TABLE conditions (
    temperature DOUBLE PRECISION NULL,
    humidity DOUBLE PRECISION NULL
 ) WITH (
-   tsdb.hypertable,
-   tsdb.partition_column='time',
+   tsdb.hypertable,
    tsdb.chunk_interval='1 day'
 );
 
diff --git a/_partials/_hypercore-intro-short.md b/_partials/_hypercore-intro-short.md
index a43eba377f..19d6dc9408 100644
--- a/_partials/_hypercore-intro-short.md
+++ b/_partials/_hypercore-intro-short.md
@@ -6,6 +6,8 @@ transactional capabilities.
 
 $HYPERCORE_CAP dynamically stores data in the most efficient format for its lifecycle:
 
+![Move from rowstore to columnstore in hypercore](https://assets.timescale.com/docs/images/hypercore_intro.svg)
+
 * **Row-based storage for recent data**: the most recent chunk (and possibly more) is always stored in the $ROWSTORE, ensuring fast inserts, updates, and low-latency single record queries. Additionally, row-based storage is used as a writethrough for inserts and updates to columnar storage.
 
diff --git a/_partials/_hypercore_create_hypertable_columnstore_policy.md b/_partials/_hypercore_create_hypertable_columnstore_policy.md
new file mode 100644
index 0000000000..018d4c48dd
--- /dev/null
+++ b/_partials/_hypercore_create_hypertable_columnstore_policy.md
@@ -0,0 +1,64 @@
+import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx";
+
+1. **Enable $COLUMNSTORE on a $HYPERTABLE**
+
+   For [efficient queries][secondary-indexes], remember to `segmentby` the column you will
+   use most often to filter your data. For example:
+
+   * **$HYPERTABLE_CAPs**:
+
+     [Use `CREATE TABLE` for a $HYPERTABLE][hypertable-create-table]
+
+     ```sql
+     CREATE TABLE crypto_ticks (
+        "time" TIMESTAMPTZ,
+        symbol TEXT,
+        price DOUBLE PRECISION,
+        day_volume NUMERIC
+     ) WITH (
+        timescaledb.hypertable,
+        timescaledb.segmentby='symbol',
+        timescaledb.orderby='time DESC'
+     );
+     ```
+
+     <CreateHypertablePolicyNote />
+
+   * **$CAGG_CAPs**
+     1. [Use `ALTER MATERIALIZED VIEW` for a $CAGG][compression_continuous-aggregate]:
+        ```sql
+        ALTER MATERIALIZED VIEW assets_candlestick_daily SET (
+          timescaledb.enable_columnstore = true,
+          timescaledb.segmentby = 'symbol');
+        ```
+        Before you say `huh`, a $CAGG is a specialized $HYPERTABLE.
+
+     1. Add a policy to convert $CHUNKs to the $COLUMNSTORE at a specific time interval:
+
+        Create a [columnstore policy][add_columnstore_policy] that automatically converts $CHUNKs in a $HYPERTABLE to
+        the $COLUMNSTORE at a specific time interval. For example:
+        ``` sql
+        CALL add_columnstore_policy('assets_candlestick_daily', after => INTERVAL '1d');
+        ```
+
+   $TIMESCALE_DB is optimized for fast updates on compressed data in the $COLUMNSTORE. To modify data in the
+   $COLUMNSTORE, use standard SQL.
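To make that last point concrete, here is a minimal sketch of modifying compressed data with plain SQL, assuming the `crypto_ticks` hypertable defined above and an illustrative `BTC/USD` symbol in the data:

```sql
-- A standard UPDATE against data that may already live in the
-- columnstore; no manual decompression step is required.
UPDATE crypto_ticks
SET price = price * 1.01
WHERE symbol = 'BTC/USD'
  AND "time" >= '2025-01-01'
  AND "time" <  '2025-01-02';
```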
+ + +[job]: /api/:currentVersion:/actions/add_job/ +[alter_table_hypercore]: /api/:currentVersion:/hypercore/alter_table/ +[compression_continuous-aggregate]: /api/:currentVersion:/continuous-aggregates/alter_materialized_view/ +[convert_to_rowstore]: /api/:currentVersion:/hypercore/convert_to_rowstore/ +[convert_to_columnstore]: /api/:currentVersion:/hypercore/convert_to_columnstore/ +[informational-views]: /api/:currentVersion:/informational-views/jobs/ +[add_columnstore_policy]: /api/:currentVersion:/hypercore/add_columnstore_policy/ +[hypercore_workflow]: /api/:currentVersion:/hypercore/#hypercore-workflow +[alter_job]: /api/:currentVersion:/actions/alter_job/ +[remove_columnstore_policy]: /api/:currentVersion:/hypercore/remove_columnstore_policy/ +[in-console-editors]: /getting-started/:currentVersion:/run-queries-from-console/ +[services-portal]: https://console.cloud.timescale.com/dashboard/services +[connect-using-psql]: /integrations/:currentVersion:/psql/#connect-to-your-service +[insert]: /use-timescale/:currentVersion:/write-data/insert/ +[hypertables-section]: /use-timescale/:currentVersion:/hypertables/ +[hypertable-create-table]: /api/:currentVersion:/hypertable/create_table/ +[hypercore]: /use-timescale/:currentVersion:/hypercore/ +[secondary-indexes]: /use-timescale/:currentVersion:/hypercore/secondary-indexes/ diff --git a/_partials/_hypercore_policy_workflow.md b/_partials/_hypercore_policy_workflow.md index 25cba9c4b7..b7ce2f6835 100644 --- a/_partials/_hypercore_policy_workflow.md +++ b/_partials/_hypercore_policy_workflow.md @@ -1,4 +1,4 @@ -import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx"; +import CreateHypertableProcedure from "versionContent/_partials/_hypercore_create_hypertable_columnstore_policy.mdx"; @@ -6,46 +6,7 @@ import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypert In [$CONSOLE][services-portal] open an [SQL editor][in-console-editors]. You can also connect to your $SERVICE_SHORT using [psql][connect-using-psql]. -1. **Enable $COLUMNSTORE on a $HYPERTABLE** - - Create a [$HYPERTABLE][hypertables-section] for your time-series data using [CREATE TABLE][hypertable-create-table]. - For [efficient queries][secondary-indexes] on data in the columnstore, remember to `segmentby` the column you will - use most often to filter your data. For example: - - * [Use `CREATE TABLE` for a $HYPERTABLE][hypertable-create-table] - - ```sql - CREATE TABLE crypto_ticks ( - "time" TIMESTAMPTZ, - symbol TEXT, - price DOUBLE PRECISION, - day_volume NUMERIC - ) WITH ( - tsdb.hypertable, - tsdb.partition_column='time', - tsdb.segmentby='symbol', - tsdb.orderby='time DESC' - ); - ``` - - - * [Use `ALTER MATERIALIZED VIEW` for a $CAGG][compression_continuous-aggregate] - ```sql - ALTER MATERIALIZED VIEW assets_candlestick_daily set ( - timescaledb.enable_columnstore = true, - timescaledb.segmentby = 'symbol' ); - ``` - Before you say `huh`, a $CAGG is a specialized $HYPERTABLE. - -1. **Add a policy to convert $CHUNKs to the $COLUMNSTORE at a specific time interval** - - Create a [columnstore_policy][add_columnstore_policy] that automatically converts $CHUNKs in a $HYPERTABLE to the $COLUMNSTORE at a specific time interval. For example, convert yesterday's crypto trading data to the $COLUMNSTORE: - ``` sql - CALL add_columnstore_policy('crypto_ticks', after => INTERVAL '1d'); - ``` - - $TIMESCALE_DB is optimized for fast updates on compressed data in the $COLUMNSTORE. 
To modify data in the
-   $COLUMNSTORE, use standard SQL.
+<CreateHypertableProcedure />
 
 1. **Check the $COLUMNSTORE policy**
 
diff --git a/_partials/_import-data-iot.md b/_partials/_import-data-iot.md
index e0daabaa8c..314265b44d 100644
--- a/_partials/_import-data-iot.md
+++ b/_partials/_import-data-iot.md
@@ -1,4 +1,4 @@
-import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx";
+import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx";
 import HypertableIntro from "versionContent/_partials/_tutorials_hypertable_intro.mdx";
 
 
@@ -38,12 +38,11 @@ import HypertableIntro from "versionContent/_partials/_tutorials_hypertable_intr
        value double precision not null
    ) WITH (
        tsdb.hypertable,
-       tsdb.partition_column='created',
        tsdb.segmentby = 'type_id',
        tsdb.orderby = 'created DESC'
    );
    ```
-   <OldCreateHypertable />
+   <CreateHypertablePolicyNote />
 
 1. Upload the dataset to your $SERVICE_SHORT
    ```sql
diff --git a/_partials/_import-data-nyc-taxis.md b/_partials/_import-data-nyc-taxis.md
index 1411b6a434..abf586643a 100644
--- a/_partials/_import-data-nyc-taxis.md
+++ b/_partials/_import-data-nyc-taxis.md
@@ -1,4 +1,4 @@
-import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx";
+import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx";
 import HypertableIntro from "versionContent/_partials/_tutorials_hypertable_intro.mdx";
 
 
@@ -53,13 +53,12 @@ import HypertableIntro from "versionContent/_partials/_tutorials_hypertable_intr
        total_amount NUMERIC
    ) WITH (
        tsdb.hypertable,
-       tsdb.partition_column='pickup_datetime',
        tsdb.create_default_indexes=false,
        tsdb.segmentby='vendor_id',
        tsdb.orderby='pickup_datetime DESC'
    );
    ```
-   <OldCreateHypertable />
+   <CreateHypertablePolicyNote />
 
 1. Add another dimension to partition your $HYPERTABLE more efficiently:
    ```sql
diff --git a/_partials/_old-api-create-hypertable.md b/_partials/_old-api-create-hypertable.md
index b9b5b18b07..7d095acd62 100644
--- a/_partials/_old-api-create-hypertable.md
+++ b/_partials/_old-api-create-hypertable.md
@@ -1,8 +1,23 @@
-If you are self-hosting $TIMESCALE_DB v2.19.3 and below, create a [$PG relational table][pg-create-table],
+For $TIMESCALE_DB [v2.23.0][tsdb-release-2-23-0] and higher, the table is automatically partitioned on the first column
+in the table with a timestamp data type. If multiple columns are suitable candidates as a partitioning column,
+$TIMESCALE_DB throws an error and asks for an explicit definition. For earlier versions, set `partition_column` to a
+time column.
+
+If you are self-hosting $TIMESCALE_DB [v2.20.0][tsdb-release-2-20-0] to [v2.22.1][tsdb-release-2-22-1], to convert your
+data to the $COLUMNSTORE after a specific time interval, you have to call [add_columnstore_policy] after you call
+[CREATE TABLE][hypertable-create-table].
+
+If you are self-hosting $TIMESCALE_DB [v2.19.3][tsdb-release-2-19-3] and below, create a [$PG relational table][pg-create-table],
 then convert it using [create_hypertable][create_hypertable]. You then enable $HYPERCORE with a call to
 [ALTER TABLE][alter_table_hypercore].
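For the oldest path described above, the three steps look roughly like the following. This is a sketch only, assuming a hypothetical `conditions` table; `create_hypertable` and the `ALTER TABLE` columnstore options are the APIs linked in the text:

```sql
-- TimescaleDB v2.19.3 and below: create a plain PostgreSQL table first...
CREATE TABLE conditions (
    "time"      TIMESTAMPTZ NOT NULL,
    device      TEXT,
    temperature DOUBLE PRECISION
);

-- ...then convert it to a hypertable partitioned on "time"...
SELECT create_hypertable('conditions', 'time');

-- ...then enable the columnstore on it.
ALTER TABLE conditions SET (
    timescaledb.enable_columnstore = true,
    timescaledb.segmentby = 'device'
);
```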
-
 [pg-create-table]: https://www.postgresql.org/docs/current/sql-createtable.html
 [create_hypertable]: /api/:currentVersion:/hypertable/create_hypertable/
 [alter_table_hypercore]: /api/:currentVersion:/hypercore/alter_table/
+[add_columnstore_policy]: /api/:currentVersion:/hypercore/add_columnstore_policy/
+[hypertable-create-table]: /api/:currentVersion:/hypertable/create_table/
+[chunk_interval]: /api/:currentVersion:/hypertable/set_chunk_time_interval/
+[tsdb-release-2-23-0]: https://github.com/timescale/timescaledb/releases/tag/2.23.0
+[tsdb-release-2-20-0]: https://github.com/timescale/timescaledb/releases/tag/2.20.0
+[tsdb-release-2-22-1]: https://github.com/timescale/timescaledb/releases/tag/2.22.1
+[tsdb-release-2-19-3]: https://github.com/timescale/timescaledb/releases/tag/2.19.3
\ No newline at end of file
diff --git a/api/hypercore/add_columnstore_policy.md b/api/hypercore/add_columnstore_policy.md
index a460736465..254f6e3327 100644
--- a/api/hypercore/add_columnstore_policy.md
+++ b/api/hypercore/add_columnstore_policy.md
@@ -12,26 +12,33 @@ api:
 
 import Since2180 from "versionContent/_partials/_since_2_18_0.mdx";
 import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx";
+import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx";
 
 # add_columnstore_policy()
 
 Create a [job][job] that automatically moves chunks in a hypertable to the $COLUMNSTORE after a specific time interval.
 
-You enable the $COLUMNSTORE a hypertable or continuous aggregate before you create a $COLUMNSTORE policy.
-You do this by calling `CREATE TABLE` for hypertables and `ALTER MATERIALIZED VIEW` for continuous aggregates. When
-$COLUMNSTORE is enabled, [bloom filters][bloom-filters] are enabled by default, and every new chunk has a bloom index.
-If you converted chunks to $COLUMNSTORE using $TIMESCALE_DB v2.19.3 or below, to enable bloom filters on that data you have
-to convert those chunks to the $ROWSTORE, then convert them back to the $COLUMNSTORE.
+- **$CAGG_CAPs**:
+
+  You first call `ALTER MATERIALIZED VIEW` to enable the $COLUMNSTORE on a $CAGG, then create the job that converts
+  your data to the $COLUMNSTORE with a call to `add_columnstore_policy`.
+
+- **$HYPERTABLE_CAPs**:
-Bloom indexes are not retrofitted, meaning that the existing chunks need to be fully recompressed to have the bloom
-indexes present. Please check out the PR description for more in-depth explanations of how bloom filters in
-TimescaleDB work.
+  <CreateHypertablePolicyNote />
 
-To view the policies that you set or the policies that already exist,
-see [informational-views][informational-views], to remove a policy, see [remove_columnstore_policy][remove_columnstore_policy].
+When $COLUMNSTORE is enabled, [bloom filters][bloom-filters] are enabled by default, and every new chunk has a bloom index.
+Bloom indexes are not retrofitted: existing chunks need to be fully recompressed to have the bloom indexes present. If
+you converted chunks to $COLUMNSTORE using $TIMESCALE_DB [v2.19.3][tsdb-release-2-19-3] or below, to enable bloom filters on that data you have
+to convert those chunks to the $ROWSTORE, then convert them back to the $COLUMNSTORE.
+
+To view the policies that you set or the policies that already exist, see [informational-views][informational-views].
 
-A $COLUMNSTORE policy is applied on a per-chunk basis. If you remove an existing policy and then add a new one, the new policy applies only to the chunks that have not yet been converted to $COLUMNSTORE.
The existing chunks in the $COLUMNSTORE remain unchanged. This means that chunks with different $COLUMNSTORE settings can co-exist in the same $HYPERTABLE. +A $COLUMNSTORE policy is applied on a per-chunk basis. If you remove an existing policy and then add a new one, the new +policy applies only to the chunks that have not yet been converted to $COLUMNSTORE. The existing chunks in the +$COLUMNSTORE remain unchanged. This means that chunks with different $COLUMNSTORE settings can co-exist in the same +$HYPERTABLE. @@ -39,15 +46,18 @@ A $COLUMNSTORE policy is applied on a per-chunk basis. If you remove an existing To create a $COLUMNSTORE job: - - -1. **Enable $COLUMNSTORE** +- **Enable $COLUMNSTORE** - Create a [$HYPERTABLE][hypertables-section] for your time-series data using [CREATE TABLE][hypertable-create-table]. - For [efficient queries][secondary-indexes] on data in the columnstore, remember to `segmentby` the column you will - use most often to filter your data. For example: + For [efficient queries][secondary-indexes] on data in the columnstore, remember to `segmentby` the column you will + use most often to filter your data. + * [Use `ALTER MATERIALIZED VIEW` for a continuous aggregate][compression_continuous-aggregate] + ```sql + ALTER MATERIALIZED VIEW assets_candlestick_daily SET ( + timescaledb.enable_columnstore = true, + timescaledb.segmentby = 'symbol'); + ``` - * [Use `CREATE TABLE` for a $HYPERTABLE][hypertable-create-table] + * [Use `CREATE TABLE` for a $HYPERTABLE][hypertable-create-table]. The columnstore policy is created automatically. ```sql CREATE TABLE crypto_ticks ( @@ -57,21 +67,13 @@ To create a $COLUMNSTORE job: day_volume NUMERIC ) WITH ( tsdb.hypertable, - tsdb.partition_column='time', tsdb.segmentby='symbol', tsdb.orderby='time DESC' ); ``` - * [Use `ALTER MATERIALIZED VIEW` for a continuous aggregate][compression_continuous-aggregate] - ```sql - ALTER MATERIALIZED VIEW assets_candlestick_daily set ( - timescaledb.enable_columnstore = true, - timescaledb.segmentby = 'symbol' ); - ``` - -1. **Add a policy to move chunks to the $COLUMNSTORE at a specific time interval** +- **Add a policy to move chunks to the $COLUMNSTORE at a specific time interval** For example: @@ -114,7 +116,7 @@ To create a $COLUMNSTORE job: ``` -1. **View the policies that you set or the policies that already exist** +- **View the policies that you set or the policies that already exist** ``` sql SELECT * FROM timescaledb_information.jobs @@ -122,7 +124,7 @@ To create a $COLUMNSTORE job: ``` See [timescaledb_information.jobs][informational-views]. 
-
+
 ## Arguments
 
@@ -159,3 +161,7 @@ Calls to `add_columnstore_policy` require either `after` or `created_before`, bu
 [hypercore]: /use-timescale/:currentVersion:/hypercore/
 [secondary-indexes]: /use-timescale/:currentVersion:/hypercore/secondary-indexes/
 [bloom-filters]: https://en.wikipedia.org/wiki/Bloom_filter
+[create_table_arguments]: /api/:currentVersion:/hypertable/create_table/#arguments
+[alter_job_samples]: /api/:currentVersion:/jobs-automation/alter_job/#samples
+[add_columnstore_policy]: /api/:currentVersion:/hypercore/add_columnstore_policy/
+[tsdb-release-2-19-3]: https://github.com/timescale/timescaledb/releases/tag/2.19.3
\ No newline at end of file
diff --git a/api/hypercore/alter_table.md b/api/hypercore/alter_table.md
index 1289c38f45..ecc76e5f49 100644
--- a/api/hypercore/alter_table.md
+++ b/api/hypercore/alter_table.md
@@ -15,9 +15,15 @@ import EarlyAccess from "versionContent/_partials/_early_access_2_18_0.mdx";
 
 # ALTER TABLE ($HYPERCORE)
 
-Enable the $COLUMNSTORE or change the $COLUMNSTORE settings for a $HYPERTABLE. The settings are applied on a per-chunk basis. You do not need to convert the entire $HYPERTABLE back to the $ROWSTORE before changing the settings. The new settings apply only to the chunks that have not yet been converted to $COLUMNSTORE, the existing chunks in the $COLUMNSTORE do not change. This means that chunks with different $COLUMNSTORE settings can co-exist in the same $HYPERTABLE.
+Enable the $COLUMNSTORE or change the $COLUMNSTORE settings for a $HYPERTABLE. The settings are applied on a per-chunk
+basis. You **do not** need to convert the entire $HYPERTABLE back to the $ROWSTORE before changing the settings. The new
+settings apply only to the chunks that have not yet been converted to $COLUMNSTORE; the existing chunks in the
+$COLUMNSTORE do not change. This means that chunks with different $COLUMNSTORE settings can co-exist in the
+same $HYPERTABLE.
 
-$TIMESCALE_DB calculates default $COLUMNSTORE settings for each chunk when it is created. These settings apply to each chunk, and not the entire hypertable. To explicitly disable the defaults, set a setting to an empty string. To remove the current configuration and re-enable the defaults, call `ALTER TABLE RESET ();`.
+$TIMESCALE_DB calculates default $COLUMNSTORE settings for each chunk when it is created. These settings apply to each
+chunk, and not the entire hypertable. To explicitly disable the defaults, set a setting to an empty string. To remove
+the current configuration and re-enable the defaults, call `ALTER TABLE RESET ();`.
 
 After you have enabled the $COLUMNSTORE, either:
 
- [add_columnstore_policy][add_columnstore_policy]: create a [job][job] that automatically moves chunks in a hypertable to the $COLUMNSTORE at a
@@ -28,12 +34,12 @@ After you have enabled the $COLUMNSTORE, either:
 
 ## Samples
 
-To enable the $COLUMNSTORE:
+To enable the $COLUMNSTORE using `ALTER TABLE`:
 
-- **Configure a hypertable that ingests device data to use the $COLUMNSTORE**:
+- **Configure a $HYPERTABLE that ingests device data to use the $COLUMNSTORE**:
 
-  In this example, the `metrics` hypertable is often queried about a specific device or set of devices.
-  Segment the hypertable by `device_id` to improve query performance.
+  In this example, the `metrics` $HYPERTABLE is often queried for a specific device or set of devices.
+  Segment the $HYPERTABLE by `device_id` to improve query performance.
```sql ALTER TABLE metrics SET( diff --git a/api/hypercore/chunk_columnstore_stats.md b/api/hypercore/chunk_columnstore_stats.md index c8147a674e..e9fb181279 100644 --- a/api/hypercore/chunk_columnstore_stats.md +++ b/api/hypercore/chunk_columnstore_stats.md @@ -16,10 +16,11 @@ import Since2180 from "versionContent/_partials/_since_2_18_0.mdx"; Retrieve statistics about the chunks in the $COLUMNSTORE `chunk_columnstore_stats` returns the size of chunks in the $COLUMNSTORE, these values are computed when you call either: -- [add_columnstore_policy][add_columnstore_policy]: create a [job][job] that automatically moves chunks in a hypertable to the $COLUMNSTORE at a - specific time interval. -- [convert_to_columnstore][convert_to_columnstore]: manually add a specific chunk in a hypertable to the $COLUMNSTORE. - +- [CREATE TABLE][hypertable-create-table]: create a $HYPERTABLE with a default [job][job] that automatically + moves chunks in a $HYPERTABLE to the $COLUMNSTORE at a specific time interval. +- [add_columnstore_policy][add_columnstore_policy]: create a [job][job] on an existing $HYPERTABLE that automatically + moves chunks in a $HYPERTABLE to the $COLUMNSTORE at a specific time interval. +- [convert_to_columnstore][convert_to_columnstore]: manually add a specific chunk in a $HYPERTABLE to the $COLUMNSTORE. Inserting into a chunk in the $COLUMNSTORE does not change the chunk size. For more information about how to compute chunk sizes, see [chunks_detailed_size][chunks_detailed_size]. @@ -108,3 +109,4 @@ To retrieve statistics about chunks: [convert_to_columnstore]: /api/:currentVersion:/hypercore/convert_to_columnstore/ [job]: /api/:currentVersion:/jobs-automation/add_job/ [chunks_detailed_size]: /api/:currentVersion:/hypertable/chunks_detailed_size/ +[hypertable-create-table]: /api/:currentVersion:/hypertable/create_table/ diff --git a/api/hypercore/index.md b/api/hypercore/index.md index 06b825f078..af49d8079a 100644 --- a/api/hypercore/index.md +++ b/api/hypercore/index.md @@ -8,9 +8,9 @@ api: license: community --- -import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx"; import Since2180 from "versionContent/_partials/_since_2_18_0.mdx"; import HypercoreIntro from "versionContent/_partials/_hypercore-intro.mdx"; +import CreateHypertableProcedure from "versionContent/_partials/_hypercore_create_hypertable_columnstore_policy.mdx"; # Hypercore @@ -24,43 +24,7 @@ Best practice for using $HYPERCORE is to: -1. **Enable $COLUMNSTORE** - - Create a [$HYPERTABLE][hypertables-section] for your time-series data using [CREATE TABLE][hypertable-create-table]. - For [efficient queries][secondary-indexes] on data in the columnstore, remember to `segmentby` the column you will - use most often to filter your data. For example: - - * [Use `CREATE TABLE` for a $HYPERTABLE][hypertable-create-table] - - ```sql - CREATE TABLE crypto_ticks ( - "time" TIMESTAMPTZ, - symbol TEXT, - price DOUBLE PRECISION, - day_volume NUMERIC - ) WITH ( - tsdb.hypertable, - tsdb.partition_column='time', - tsdb.segmentby='symbol', - tsdb.orderby='time DESC' - ); - ``` - - - * [Use `ALTER MATERIALIZED VIEW` for a continuous aggregate][compression_continuous-aggregate] - ```sql - ALTER MATERIALIZED VIEW assets_candlestick_daily set ( - timescaledb.enable_columnstore = true, - timescaledb.segmentby = 'symbol' ); - ``` - -1. 
**Add a policy to move chunks to the $COLUMNSTORE at a specific time interval**
-
-   For example, 7 days after the data was added to the table:
-   ``` sql
-   CALL add_columnstore_policy('crypto_ticks', after => INTERVAL '7d');
-   ```
-   See [add_columnstore_policy][add_columnstore_policy].
+  <CreateHypertableProcedure />
 
 1. **View the policies that you set or the policies that already exist**
 
diff --git a/api/hypertable/create_table.md b/api/hypertable/create_table.md
index 05ce9a269f..8f777fa6e5 100644
--- a/api/hypertable/create_table.md
+++ b/api/hypertable/create_table.md
@@ -9,6 +9,9 @@ api:
 products: [cloud, mst, self_hosted]
 ---
 
+
+import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx";
+import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx";
 import Since2200 from "versionContent/_partials/_since_2_20_0.mdx";
 import DimensionInfo from "versionContent/_partials/_dimensions_info.mdx";
 import HypercoreDirectCompress from "versionContent/_partials/_hypercore-direct-compress.mdx";
@@ -24,49 +27,52 @@ a $HYPERTABLE is partitioned on the time dimension. To add secondary dimensions
 [add_dimension][add-dimension]. To convert an existing relational table into a $HYPERTABLE, call
 [create_hypertable][create_hypertable].
 
-As the data cools and becomes more suited for analytics, [add a columnstore policy][add_columnstore_policy] so your data
-is automatically converted to the $COLUMNSTORE after a specific time interval. This columnar format enables fast
-scanning and aggregation, optimizing performance for analytical workloads while also saving significant storage space.
-In the $COLUMNSTORE conversion, $HYPERTABLE chunks are compressed by up to 98%, and organized for efficient,
-large-scale queries. This columnar format enables fast scanning and aggregation, optimizing performance for analytical
-workloads. You can also manually [convert chunks][convert_to_columnstore] in a $HYPERTABLE to the $COLUMNSTORE.
+<CreateHypertablePolicyNote />
 
 $HYPERTABLE_CAP to $HYPERTABLE foreign keys are not allowed; all other combinations are permitted.
 
-The [$COLUMNSTORE][hypercore] settings are applied on a per-chunk basis. You can change the settings by calling [ALTER TABLE][alter_table_hypercore] without first converting the entire $HYPERTABLE back to the [$ROWSTORE][hypercore]. The new settings apply only to the chunks that have not yet been converted to $COLUMNSTORE, the existing chunks in the $COLUMNSTORE do not change. Similarly, if you [remove an existing columnstore policy][remove_columnstore_policy] and then [add a new one][add_columnstore_policy], the new policy applies only to the unconverted chunks. This means that chunks with different $COLUMNSTORE settings can co-exist in the same $HYPERTABLE.
+The [$COLUMNSTORE][hypercore] settings are applied on a per-chunk basis. You can change the settings by calling
+[ALTER TABLE][alter_table_hypercore] without first converting the entire $HYPERTABLE back to the [$ROWSTORE][hypercore].
+The new settings apply only to the chunks that have not yet been converted to $COLUMNSTORE; the existing chunks in the
+$COLUMNSTORE do not change. Similarly, if you [remove an existing columnstore policy][remove_columnstore_policy] and then
+[add a new one][add_columnstore_policy], the new policy applies only to the unconverted chunks. This means that chunks
+with different $COLUMNSTORE settings can co-exist in the same $HYPERTABLE.
 
-$TIMESCALE_DB calculates default $COLUMNSTORE settings for each chunk when it is created.
These settings apply to each chunk, and not the entire hypertable. To explicitly disable the defaults, set a setting to an empty string.
+$TIMESCALE_DB calculates default $COLUMNSTORE settings for each chunk when it is created. These settings apply to each
+chunk, and not the entire hypertable. To explicitly disable the defaults, set a setting to an empty string.
 
 `CREATE TABLE` extends the standard $PG [CREATE TABLE][pg-create-table]. This page explains the features and
 arguments specific to $TIMESCALE_DB.
 
+
+
+
+
+
 ## Samples
 
- **Create a $HYPERTABLE partitioned on the time dimension and enable $COLUMNSTORE**:
 
-  1. Create the $HYPERTABLE:
+  ```sql
+  CREATE TABLE crypto_ticks (
+     "time" TIMESTAMPTZ,
+     symbol TEXT,
+     price DOUBLE PRECISION,
+     day_volume NUMERIC
+  ) WITH (
+     tsdb.hypertable,
+     tsdb.segmentby='symbol',
+     tsdb.orderby='time DESC'
+  );
+  ```
 
-     ```sql
-     CREATE TABLE crypto_ticks (
-        "time" TIMESTAMPTZ,
-        symbol TEXT,
-        price DOUBLE PRECISION,
-        day_volume NUMERIC
-     ) WITH (
-        tsdb.hypertable,
-        tsdb.partition_column='time',
-        tsdb.segmentby='symbol',
-        tsdb.orderby='time DESC'
-     );
-     ```
-
-  1. Enable $HYPERCORE by adding a columnstore policy:
-
-     ```sql
-     CALL add_columnstore_policy('crypto_ticks', after => INTERVAL '1d');
-     ```
+  When you create a $HYPERTABLE using `CREATE TABLE WITH`, $TIMESCALE_DB automatically creates a
+  [columnstore policy][add_columnstore_policy] that uses the chunk interval as the compression interval, with a default
+  schedule interval of 1 day. The default partitioning column is automatically selected as the first column with a
+  timestamp or timestamptz data type.
 
- **Create a $HYPERTABLE partitioned on the time dimension with fewer chunks based on time interval**:
 
@@ -77,7 +83,6 @@ arguments specific to $TIMESCALE_DB.
        value float
    ) WITH (
        tsdb.hypertable,
-       tsdb.partition_column='time',
        tsdb.chunk_interval=3453
    );
    ```
 
@@ -109,9 +114,7 @@ arguments specific to $TIMESCALE_DB.
 
-
-
-
+
 
- **Enable data compression during ingestion**:
 
@@ -119,7 +122,7 @@ arguments specific to $TIMESCALE_DB.
   1. Create a $HYPERTABLE:
 
      ```sql
-     CREATE TABLE t(time timestamptz, device text, value float) WITH (tsdb.hypertable,tsdb.partition_column='time');
+     CREATE TABLE t(time timestamptz, device text, value float) WITH (tsdb.hypertable);
     ```
 
  1. Copy data into the $HYPERTABLE: You achieve the highest insert rate using binary format. CSV and text format are also supported.
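The copy step itself falls between hunks of this diff, so as a rough sketch of what it can look like: the file path and CSV layout here are illustrative assumptions, and `FORMAT binary` is the faster option the text mentions.

```sql
-- Bulk-load the hypertable t created above from a server-side CSV file.
-- Substitute FORMAT binary and a binary dump for the highest insert rate.
COPY t (time, device, value)
FROM '/tmp/metrics.csv'
WITH (FORMAT csv, HEADER true);
```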
@@ -157,17 +160,17 @@ WITH ( ) ``` -| Name | Type | Default | Required | Description | -|--------------------------------|------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `tsdb.hypertable` |BOOLEAN| `true` | ✖ | Create a new [hypertable][hypertable-docs] for time-series data rather than a standard $PG relational table. | -| `tsdb.partition_column` |TEXT| `true` | ✖ | Set the time column to automatically partition your time-series data by. | -| `tsdb.chunk_interval` |TEXT| `7 days` | ✖ | Change this to better suit your needs. For example, if you set `chunk_interval` to 1 day, each chunk stores data from the same day. Data from different days is stored in different chunks. | -| `tsdb.create_default_indexes` | BOOLEAN | `true` | ✖ | Set to `false` to not automatically create indexes.
    The default indexes are:
    • On all hypertables, a descending index on `partition_column`
    • On hypertables with space partitions, an index on the space parameter and `partition_column`
    | -| `tsdb.associated_schema` |REGCLASS| `_timescaledb_internal` | ✖ | Set the schema name for internal hypertable tables. | -| `tsdb.associated_table_prefix` |TEXT| `_hyper` | ✖ | Set the prefix for the names of internal hypertable chunks. | -| `tsdb.orderby` |TEXT| Descending order on the time column in `table_name`. | ✖| The order in which items are used in the $COLUMNSTORE. Specified in the same way as an `ORDER BY` clause in a `SELECT` query. Setting `tsdb.orderby` automatically creates an implicit min/max sparse index on the `orderby` column. | -| `tsdb.segmentby` |TEXT| $TIMESCALE_DB looks at [`pg_stats`](https://www.postgresql.org/docs/current/view-pg-stats.html) and determines an appropriate column based on the data cardinality and distribution. If `pg_stats` is not available, $TIMESCALE_DB looks for an appropriate column from the existing indexes. | ✖| Set the list of columns used to segment data in the $COLUMNSTORE for `table`. An identifier representing the source of the data such as `device_id` or `tags_id` is usually a good candidate. | -|`tsdb.sparse_index`| TEXT | $TIMESCALE_DB evaluates the columns you already have indexed, checks which data types are a good fit for sparse indexing, then creates a sparse index as an optimization. | ✖ | Configure the sparse indexes for compressed chunks. Requires setting `tsdb.orderby`. Supported index types include:
  • `bloom()`: a probabilistic index, effective for `=` filters. Cannot be applied to `tsdb.orderby` columns.
  • `minmax()`: stores min/max values for each compressed chunk. Setting `tsdb.orderby` automatically creates an implicit min/max sparse index on the `orderby` column.
  • Define multiple indexes using a comma-separated list. You can set only one index per column. Set to an empty string to avoid using sparse indexes and explicitly disable the default behavior. | +| Name | Type | Default | Required | Description | +|--------------------------------|------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `tsdb.hypertable` |BOOLEAN| `true` | ✖ | Create a new [hypertable][hypertable-docs] for time-series data rather than a standard $PG relational table. | +| `tsdb.partition_column` |TEXT| The first column in the table with a timestamp data type | ✖ | Set the time column to automatically partition your time-series data by. | +| `tsdb.chunk_interval` |TEXT| `7 days` | ✖ | Change this to better suit your needs. For example, if you set `chunk_interval` to 1 day, each chunk stores data from the same day. Data from different days is stored in different chunks. | +| `tsdb.create_default_indexes` | BOOLEAN | `true` | ✖ | Set to `false` to not automatically create indexes.
    The default indexes are:
    • On all hypertables, a descending index on `partition_column`
    • On hypertables with space partitions, an index on the space parameter and `partition_column`
    | +| `tsdb.associated_schema` |REGCLASS| `_timescaledb_internal` | ✖ | Set the schema name for internal hypertable tables. | +| `tsdb.associated_table_prefix` |TEXT| `_hyper` | ✖ | Set the prefix for the names of internal hypertable chunks. | +| `tsdb.orderby` |TEXT| Descending order on the time column in `table_name`. | ✖| The order in which items are used in the $COLUMNSTORE. Specified in the same way as an `ORDER BY` clause in a `SELECT` query. Setting `tsdb.orderby` automatically creates an implicit min/max sparse index on the `orderby` column. | +| `tsdb.segmentby` |TEXT| $TIMESCALE_DB looks at [`pg_stats`](https://www.postgresql.org/docs/current/view-pg-stats.html) and determines an appropriate column based on the data cardinality and distribution. If `pg_stats` is not available, $TIMESCALE_DB looks for an appropriate column from the existing indexes. | ✖| Set the list of columns used to segment data in the $COLUMNSTORE for `table`. An identifier representing the source of the data such as `device_id` or `tags_id` is usually a good candidate. | +|`tsdb.sparse_index`| TEXT | $TIMESCALE_DB evaluates the columns you already have indexed, checks which data types are a good fit for sparse indexing, then creates a sparse index as an optimization. | ✖ | Configure the sparse indexes for compressed chunks. Requires setting `tsdb.orderby`. Supported index types include:
  • `bloom()`: a probabilistic index, effective for `=` filters. Cannot be applied to `tsdb.orderby` columns.
  • `minmax()`: stores min/max values for each compressed chunk. Setting `tsdb.orderby` automatically creates an implicit min/max sparse index on the `orderby` column.
  • Define multiple indexes using a comma-separated list. You can set only one index per column. Set to an empty string to avoid using sparse indexes and explicitly disable the default behavior. | @@ -182,7 +185,7 @@ $TIMESCALE_DB returns a simple message indicating success or failure. [hypertable-docs]: /use-timescale/:currentVersion:/hypertables/ [declarative-partitioning]: https://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE [inheritance]: https://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITIONING-USING-INHERITANCE -[migrate-data]: /api/:currentVersion:/hypertable/create_table/#arguments +[create_table_arguments]: /api/:currentVersion:/hypertable/create_table/#arguments [dimension-info]: /api/:currentVersion:/hypertable/create_table/#dimension-info [chunk_interval]: /api/:currentVersion:/hypertable/set_chunk_time_interval/ [about-constraints]: /use-timescale/:currentVersion:/schema-management/about-constraints @@ -203,6 +206,7 @@ $TIMESCALE_DB returns a simple message indicating success or failure. [add_columnstore_policy]: /api/:currentVersion:/hypercore/add_columnstore_policy/ [convert_to_columnstore]: /api/:currentVersion:/hypercore/convert_to_columnstore/ [bloom-filters]: https://en.wikipedia.org/wiki/Bloom_filter -[add_columnstore_policy]: /api/:currentVersion:/hypercore/add_columnstore_policy/ [remove_columnstore_policy]: /api/:currentVersion:/hypercore/remove_columnstore_policy/ -[uuidv7_functions]: /api/:currentVersion:/uuid-functions/ \ No newline at end of file +[uuidv7_functions]: /api/:currentVersion:/uuid-functions/ +[informational-views]: /api/:currentVersion:/informational-views/jobs/ +[alter_job_samples]: /api/:currentVersion:/jobs-automation/alter_job/#samples \ No newline at end of file diff --git a/api/hypertable/enable_chunk_skipping.md b/api/hypertable/enable_chunk_skipping.md index 0c50ba8338..51262a8c54 100644 --- a/api/hypertable/enable_chunk_skipping.md +++ b/api/hypertable/enable_chunk_skipping.md @@ -10,7 +10,7 @@ api: products: [cloud, mst, self_hosted] --- -import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx"; +import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx"; import EarlyAccess2171 from "versionContent/_partials/_early_access_2_17_1.mdx"; @@ -65,14 +65,13 @@ CREATE TABLE conditions ( temperature DOUBLE PRECISION NULL, humidity DOUBLE PRECISION NULL ) WITH ( - tsdb.hypertable, - tsdb.partition_column='time' + tsdb.hypertable ); SELECT enable_chunk_skipping('conditions', 'device_id'); ``` - + ## Arguments diff --git a/api/hypertable/index.md b/api/hypertable/index.md index 015c15e036..1c7cd2a29d 100644 --- a/api/hypertable/index.md +++ b/api/hypertable/index.md @@ -6,6 +6,7 @@ products: [cloud, mst, self_hosted] --- import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx"; import HypertableOverview from "versionContent/_partials/_hypertable-intro.mdx"; +import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx"; # Hypertables and chunks @@ -14,41 +15,31 @@ import HypertableOverview from "versionContent/_partials/_hypertable-intro.mdx"; For more information about using hypertables, including chunk size partitioning, see the [hypertable section][hypertable-docs]. -## The hypertable workflow +To create a [$HYPERTABLE][hypertables-section] for your time-series data, use [CREATE TABLE][hypertable-create-table]. 
+For [efficient queries][secondary-indexes] on data in the columnstore, remember to `segmentby` the column you will +use most often to filter your data. For example: -Best practice for using a $HYPERTABLE is to: +```sql +CREATE TABLE conditions ( + time TIMESTAMPTZ NOT NULL, + location TEXT NOT NULL, + device TEXT NOT NULL, + temperature DOUBLE PRECISION NULL, + humidity DOUBLE PRECISION NULL +) WITH ( + tsdb.hypertable, + tsdb.segmentby = 'device', + tsdb.orderby = 'time DESC' +); +``` - + -1. **Create a $HYPERTABLE** + - Create a [$HYPERTABLE][hypertables-section] for your time-series data using [CREATE TABLE][hypertable-create-table]. - For [efficient queries][secondary-indexes] on data in the columnstore, remember to `segmentby` the column you will - use most often to filter your data. For example: + - ```sql - CREATE TABLE conditions ( - time TIMESTAMPTZ NOT NULL, - location TEXT NOT NULL, - device TEXT NOT NULL, - temperature DOUBLE PRECISION NULL, - humidity DOUBLE PRECISION NULL - ) WITH ( - tsdb.hypertable, - tsdb.partition_column='time', - tsdb.segmentby = 'device', - tsdb.orderby = 'time DESC' - ); - ``` - - -1. **Set the $COLUMNSTORE policy** - - ```sql - CALL add_columnstore_policy('conditions', after => INTERVAL '1d'); - ``` - - +
    [create_hypertable]: /api/:currentVersion:/hypertable/create_hypertable/ [hypertable-docs]: /use-timescale/:currentVersion:/hypertables/ diff --git a/api/jobs-automation/alter_job.md b/api/jobs-automation/alter_job.md index 965122f02f..e5de389cf5 100644 --- a/api/jobs-automation/alter_job.md +++ b/api/jobs-automation/alter_job.md @@ -23,75 +23,7 @@ scheduled $JOBs, as well as in `timescaledb_information.job_stats`. The `job_stats` view also gives information about when each $JOB was last run and other useful statistics for deciding what the new schedule should be. -## Samples - -Reschedules $JOB ID `1000` so that it runs every two days: - -```sql -SELECT alter_job(1000, schedule_interval => INTERVAL '2 days'); -``` - -Disables scheduling of the compression policy on the `conditions` hypertable: - -```sql -SELECT alter_job(job_id, scheduled => false) -FROM timescaledb_information.jobs -WHERE proc_name = 'policy_compression' AND hypertable_name = 'conditions' -``` - -Reschedules continuous aggregate $JOB ID `1000` so that it next runs at 9:00:00 on 15 March, 2020: - -```sql -SELECT alter_job(1000, next_start => '2020-03-15 09:00:00.0+00'); -``` - -## Required arguments - -|Name|Type|Description| -|-|-|-| -|`job_id`|`INTEGER`|The ID of the policy $JOB being modified| - -## Optional arguments - -|Name|Type| Description | -|-|-|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -|`schedule_interval`|`INTERVAL`| The interval at which the job runs. Defaults to 24 hours. | -|`max_runtime`|`INTERVAL`| The maximum amount of time the job is allowed to run by the background worker scheduler before it is stopped. | -|`max_retries`|`INTEGER`| The number of times the job is retried if it fails. | -|`retry_period`|`INTERVAL`| The amount of time the scheduler waits between retries of the job on failure. | -|`scheduled`|`BOOLEAN`| Set to `FALSE` to exclude this job from being run as background job. | -|`config`|`JSONB`| $JOB_CAP-specific configuration, passed to the function when it runs. This includes:
  • verbose_log: boolean, defaults to false. Enable verbose logging output when running the compression policy.
  • maxchunks_to_compress: integer, defaults to 0 (no limit). The maximum number of chunks to compress during a policy run.
  • recompress: boolean, defaults to true. Recompress partially compressed chunks.
  • compress_after: see [add_compression_policy][add-policy].
  • compress_created_before: see [add_compression_policy][add-policy].
  • |
-|`next_start`|`TIMESTAMPTZ`| The next time at which to run the job. The job can be paused by setting this value to `infinity`, and restarted with a value of `now()`. |
-|`if_exists`|`BOOLEAN`| Set to `true`to issue a notice instead of an error if the job does not exist. Defaults to false. |
-|`check_config`|`REGPROC`| A function that takes a single argument, the `JSONB` `config` structure. The function is expected to raise an error if the configuration is not valid, and return nothing otherwise. Can be used to validate the configuration when updating a job. Only functions, not procedures, are allowed as values for `check_config`. |
-|`fixed_schedule`|`BOOLEAN`| To enable fixed scheduled job runs, set to `TRUE`. |
-|`initial_start`|`TIMESTAMPTZ`| Set the time when the `fixed_schedule` job run starts. For example, `19:10:25-07`. |
-|`timezone`|`TEXT`| Address the 1-hour shift in start time when clocks change from [Daylight Saving Time to Standard Time](https://en.wikipedia.org/wiki/Daylight_saving_time). For example, `America/Sao_Paulo`. |
-
-When a $JOB begins, the `next_start` parameter is set to `infinity`. This
-prevents the $JOB from attempting to be started again while it is running. When
-the $JOB completes, whether or not the job is successful, the parameter is
-automatically updated to the next computed start time.
-
-Note that altering the `next_start` value is only effective for the next
-execution of the $JOB in case of fixed schedules. On the next execution, it will
-automatically return to the schedule.
-
-## Returns
-
-|Column|Type| Description |
-|-|-|---|
-|`job_id`|`INTEGER`| The ID of the $JOB being modified |
-|`schedule_interval`|`INTERVAL`| The interval at which the $JOB runs. Defaults to 24 hours |
-|`max_runtime`|`INTERVAL`| The maximum amount of time the $JOB is allowed to run by the background worker scheduler before it is stopped |
-|`max_retries`|INTEGER| The number of times the $JOB is retried if it fails |
-|`retry_period`|`INTERVAL`| The amount of time the scheduler waits between retries of the $JOB on failure |
-|`scheduled`|`BOOLEAN`| Returns `true` if the $JOB is executed by the TimescaleDB scheduler |
-|`config`|`JSONB`| $JOB_CAPs-specific configuration, passed to the function when it runs |
-|`next_start`|`TIMESTAMPTZ`| The next time to run the $JOB |
-|`check_config`|`TEXT`| The function used to validate updated $JOB configurations |
-
-## Calculation of next start on failure
+### Calculate the next start on failure
 
 When a $JOB run results in a runtime failure, the next start of the $JOB is calculated taking into account both its
 `retry_period` and `schedule_interval`. The `next_start` time is calculated using the following formula:
@@ -100,8 +32,6 @@
 next_start = finish_time + consecutive_failures * retry_period ± jitter
 ```
 where jitter (± 13%) is added to avoid the "thundering herds" effect.
-
-
 To ensure that the `next_start` time is not put off indefinitely and does not produce timestamps so large they end up out of range, it is capped at 5*`schedule_interval`. Also, more than 20 consecutive failures are not considered; if the number of consecutive failures is higher, the multiplier is capped at 20.
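As a rough worked illustration of the formula, with invented values rather than figures from the source:

```
finish_time          = 12:00
retry_period         = 10 minutes
consecutive_failures = 3

next_start ≈ 12:00 + 3 × 10 minutes ± 13%  →  roughly 12:26 to 12:34
```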
@@ -111,7 +41,95 @@ There is a distinction between runtime failures that do not cause the $JOB to cr
 In the event of a $JOB crash, the next start calculation follows the same formula,
 but it is always at least 5 minutes after the $JOB's last finish, to give an operator
 enough time to disable it before another crash.
 
-
-[add-policy]: /api/:currentVersion:/compression/add_compression_policy/#required-arguments
\ No newline at end of file
+## Samples
+
+- **Reschedule $JOB ID `1000` so that it runs every two days**:
+
+  ```sql
+  SELECT alter_job(1000, schedule_interval => INTERVAL '2 days');
+  ```
+
+- **Disable scheduling of the compression policy on the `conditions` hypertable**:
+
+  ```sql
+  SELECT alter_job(job_id, scheduled => false)
+  FROM timescaledb_information.jobs
+  WHERE proc_name = 'policy_compression' AND hypertable_name = 'conditions'
+  ```
+
+- **Reschedule continuous aggregate $JOB ID `1000` so that it next runs at 9:00:00 on 15 March, 2020**:
+
+  ```sql
+  SELECT alter_job(1000, next_start => '2020-03-15 09:00:00.0+00');
+  ```
+
+- **Alter a columnstore policy**:
+
+  You can pause and restart a columnstore policy, and change how often it runs and how it is scheduled.
+  To do this:
+
+  1. Find the job ID for the columnstore policy:
+     ```sql
+     SELECT job_id, hypertable_name, config
+     FROM timescaledb_information.jobs
+     WHERE proc_name = 'policy_compression';
+     ```
+  1. Update the policy:
+
+     For example, to compress chunks after 30 days instead of 7:
+     ```sql
+     SELECT alter_job(1000, config => '{"compress_after": "30 days"}');
+     ```
+     However, to change `after` or `created_before`, the compression settings, or the $HYPERTABLE
+     the policy is acting on, you must [remove the columnstore policy][remove_columnstore_policy] and
+     [add a new one][add_columnstore_policy].
+
+
+## Arguments
+
+| Name | Type | Default | Required | Description |
+|---|---|---|---|---|
+| `job_id` |INTEGER| - | ✔ | The ID of the policy $JOB being modified. |
+| `schedule_interval` |INTERVAL| 24 hours | ✖ | The interval at which the job runs. |
+| `max_runtime` |INTERVAL| - | ✖ | The maximum amount of time the job is allowed to run by the background worker scheduler before it is stopped. |
+| `max_retries` | INTEGER | - | ✖ | The number of times the job is retried if it fails. |
+| `retry_period` |INTERVAL| - | ✖ | The amount of time the scheduler waits between retries of the job on failure. |
+| `scheduled` |BOOLEAN| `true` | ✖ | Set to `false` to exclude this job from being run as a background job.
| +| `config` |JSONB| - | ✖| $JOB_CAP-specific configuration, passed to the function when it runs. This includes:
    • `verbose_log`: boolean, defaults to `false`. Enable verbose logging output when running the compression policy.
    • `maxchunks_to_compress`: integer, defaults to `0` (no limit). The maximum number of chunks to compress during a policy run.
    • `recompress`: boolean, defaults to `true`. Recompress partially compressed chunks.
    • `compress_after`: see [`add_compression_policy`][add-policy].
    • `compress_created_before`: see [`add_compression_policy`][add-policy].
|
+| `next_start` |TIMESTAMPTZ| - | ✖ | The next time at which to run the job. The job can be paused by setting this value to `infinity`, and restarted with a value of `now()`. |
+| `if_exists` |BOOLEAN| `false` | ✖ | Set to `true` to issue a notice instead of an error if the job does not exist. |
+| `check_config` | REGPROC | - | ✖ | A function that takes a single argument, the `JSONB` `config` structure. The function is expected to raise an error if the configuration is not valid, and return nothing otherwise. Can be used to validate the configuration when updating a job. Only functions, not procedures, are allowed as values for `check_config`. |
+| `fixed_schedule` |BOOLEAN| `false` | ✖ | To enable fixed scheduled job runs, set to `true`. |
+| `initial_start` | TIMESTAMPTZ | - | ✖ | Set the time when the `fixed_schedule` job run starts. For example, `19:10:25-07`. |
+| `timezone` |TEXT| `UTC` | ✖ | Address the 1-hour shift in start time when clocks change from [Daylight Saving Time to Standard Time](https://en.wikipedia.org/wiki/Daylight_saving_time). For example, `America/Sao_Paulo`. |
+
+When a $JOB begins, the `next_start` parameter is set to `infinity`. This
+prevents the $JOB from being started again while it is still running. When
+the $JOB completes, whether or not it is successful, the parameter is
+automatically updated to the next computed start time.
+
+For fixed schedules, altering the `next_start` value affects only the next
+execution of the $JOB. After that, the $JOB automatically returns to its
+schedule.
+
+## Returns
+
+| Column | Type | Description |
+|--------|------|-------------|
+|`job_id` |INTEGER | The ID of the $JOB being modified |
+|`schedule_interval` |INTERVAL | The interval at which the $JOB runs. 
Defaults to 24 hours | +|`max_runtime` |INTERVAL | The maximum amount of time the $JOB is allowed to run by the background worker scheduler before it is stopped | +|`max_retries` |INTEGER | The number of times the $JOB is retried if it fails | +|`retry_period` |INTERVAL | The amount of time the scheduler waits between retries of the $JOB on failure | +|`scheduled` |BOOLEAN | Returns `true` if the $JOB is executed by the TimescaleDB scheduler | +|`config` |JSONB | $JOB_CAP-specific configuration, passed to the function when it runs | +|`next_start` |TIMESTAMPTZ | The next time to run the $JOB | +|`check_config` |TEXT | The function used to validate updated $JOB configurations | + + +[add-policy]: /api/:currentVersion:/compression/add_compression_policy/#required-arguments +[remove_columnstore_policy]: /api/:currentVersion:/hypercore/remove_columnstore_policy/ +[add_columnstore_policy]: /api/:currentVersion:/hypercore/add_columnstore_policy/ \ No newline at end of file diff --git a/getting-started/try-key-features-timescale-products.md b/getting-started/try-key-features-timescale-products.md index d76c2b8e36..d3a21d7c3e 100644 --- a/getting-started/try-key-features-timescale-products.md +++ b/getting-started/try-key-features-timescale-products.md @@ -7,10 +7,11 @@ content_group: Getting started import HASetup from 'versionContent/_partials/_high-availability-setup.mdx'; import IntegrationPrereqs from "versionContent/_partials/_integration-prereqs.mdx"; -import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx"; import HypercoreIntroShort from "versionContent/_partials/_hypercore-intro-short.mdx"; import HypercoreDirectCompress from "versionContent/_partials/_hypercore-direct-compress.mdx"; import NotAvailableFreePlan from "versionContent/_partials/_not-available-in-free-plan.mdx"; +import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx"; + import NotSupportedAzure from "versionContent/_partials/_not-supported-for-azure.mdx"; import SupportPlans from "versionContent/_partials/_support-plans.mdx"; @@ -81,7 +82,7 @@ relational and time-series data from external files. To more fully understand how to create a $HYPERTABLE, how $HYPERTABLEs work, and how to optimize them for performance by tuning $CHUNK intervals and enabling chunk skipping, see - [the $HYPERTABLEs documentation][hypertables-section]. + [the $HYPERTABLEs documentation][hypertables-section]. @@ -130,12 +131,10 @@ relational and time-series data from external files. day_volume NUMERIC ) WITH ( tsdb.hypertable, - tsdb.partition_column='time', tsdb.segmentby = 'symbol' ); ``` - - + - For the relational data: @@ -176,47 +175,6 @@ relational and time-series data from external files. -## Enhance query performance for analytics - -$HYPERCORE_CAP is the $TIMESCALE_DB hybrid row-columnar storage engine, designed specifically for real-time -analytics and -powered by time-series data. The advantage of $HYPERCORE is its ability to seamlessly switch between row-oriented and -column-oriented storage. This flexibility enables $TIMESCALE_DB to deliver the best of both worlds, solving the key -challenges in real-time analytics. - -![Move from rowstore to columstore in hypercore](https://assets.timescale.com/docs/images/hypercore.png ) - -When $TIMESCALE_DB converts $CHUNKs from the $ROWSTORE to the $COLUMNSTORE, multiple records are grouped into a single row. -The columns of this row hold an array-like structure that stores all the data. 
Because a single row takes up less disk
-space, you can reduce your $CHUNK size by up to 98%, and can also speed up your queries. This helps you save on storage costs,
-and keeps your queries operating at lightning speed.
-
-$HYPERCORE is enabled by default when you call [CREATE TABLE][hypertable-create-table]. Best practice is to compress
-data that is no longer needed for highest performance queries, but is still accessed regularly in the $COLUMNSTORE.
-For example, yesterday's market data.
-
-
-
-1. **Add a policy to convert $CHUNKs to the $COLUMNSTORE at a specific time interval**
-
-   For example, yesterday's data:
-   ``` sql
-   CALL add_columnstore_policy('crypto_ticks', after => INTERVAL '1d');
-   ```
-   If you have not configured a `segmentby` column, $TIMESCALE_DB chooses one for you based on the data in your
-   $HYPERTABLE. For more information on how to tune your $HYPERTABLEs for the best performance, see
-   [efficient queries][secondary-indexes].
-
-1. **View your data space saving**
-
-   When you convert data to the $COLUMNSTORE, as well as being optimized for analytics, it is compressed by more than
-   90%. This helps you save on storage costs and keeps your queries operating at lightning speed. To see the amount of space
-   saved, click `Explorer` > `public` > `crypto_ticks`.
-
-   ![Columnstore data savings](https://assets.timescale.com/docs/images/tiger-on-azure/tiger-console-columstore-data-savings.png )
-
-
-
 ## Write fast and efficient analytical queries
 
 Aggregation is a way of combining data to get insights from it. Average, sum, and count are all
diff --git a/integrations/amazon-sagemaker.md b/integrations/amazon-sagemaker.md
index bf18a12d26..be3cd98701 100644
--- a/integrations/amazon-sagemaker.md
+++ b/integrations/amazon-sagemaker.md
@@ -6,7 +6,7 @@ keywords: [connect, integrate, amazon, aws, sagemaker]
 ---
 import IntegrationPrereqs from "versionContent/_partials/_integration-prereqs.mdx";
-import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx";
+import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx";
 
 # Integrate Amazon SageMaker with $CLOUD_LONG
 
@@ -44,11 +44,10 @@ Create a table in $SERVICE_LONG to store model predictions generated by SageMake
        model_name TEXT NOT NULL,
        prediction DOUBLE PRECISION NOT NULL
     ) WITH (
-       tsdb.hypertable,
-       tsdb.partition_column='time'
+       tsdb.hypertable
     );
    ```
-
+
 
 
diff --git a/integrations/apache-kafka.md b/integrations/apache-kafka.md
index c1d944ac45..6a3bca0b17 100644
--- a/integrations/apache-kafka.md
+++ b/integrations/apache-kafka.md
@@ -7,7 +7,7 @@ keywords: [Apache Kafka, integrations]
 import IntegrationPrereqs from "versionContent/_partials/_integration-prereqs.mdx";
 import IntegrationApacheKafka from "versionContent/_partials/_integration-apache-kafka-install.mdx";
-import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx";
+import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx";
 
 # Integrate Apache Kafka with $CLOUD_LONG
 
@@ -93,11 +93,10 @@ To prepare your $SERVICE_LONG for Kafka integration:
        name TEXT,
        city TEXT
     ) WITH (
-       tsdb.hypertable,
-       tsdb.partition_column='created_at'
+       tsdb.hypertable
     );
    ```
-
+
 
 
diff --git a/integrations/aws-lambda.md b/integrations/aws-lambda.md
index 875092121b..fa42ea286d 100644
--- a/integrations/aws-lambda.md
+++ b/integrations/aws-lambda.md
@@ -6,7 +6,7 @@ keywords: [connect, integrate, aws, lambda]
 ---
 import
IntegrationPrereqs from "versionContent/_partials/_integration-prereqs.mdx"; -import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx"; +import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx"; # Integrate AWS Lambda with Tiger @@ -46,11 +46,10 @@ Create a table in $SERVICE_LONG to store time-series data. sensor_id TEXT NOT NULL, value DOUBLE PRECISION NOT NULL ) WITH ( - tsdb.hypertable, - tsdb.partition_column='time' + tsdb.hypertable ); ``` - + diff --git a/integrations/supabase.md b/integrations/supabase.md index ecfdd7fb25..7447372b4a 100644 --- a/integrations/supabase.md +++ b/integrations/supabase.md @@ -6,7 +6,7 @@ keywords: [integrate] --- import IntegrationPrereqs from "versionContent/_partials/_integration-prereqs.mdx"; -import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx"; +import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx"; # Integrate Supabase with $CLOUD_LONG @@ -40,11 +40,10 @@ To set up a $SERVICE_LONG optimized for analytics to receive data from Supabase: origin_time timestamptz NOT NULL, name TEXT ) WITH ( - tsdb.hypertable, - tsdb.partition_column='time' + tsdb.hypertable ); ``` - + 1. **Optimize cooling data for analytics** diff --git a/self-hosted/migration/same-db.md b/self-hosted/migration/same-db.md index fdab70175a..9cae3145ff 100644 --- a/self-hosted/migration/same-db.md +++ b/self-hosted/migration/same-db.md @@ -6,7 +6,7 @@ keywords: [data migration, Postgres] tags: [import] --- -import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx"; +import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx"; # Migrate data to TimescaleDB from the same $PG instance @@ -65,7 +65,7 @@ Migrate your data into $TIMESCALE_DB from within the same database. - + 1. Insert data from the old table to the new table. diff --git a/tutorials/blockchain-query/blockchain-compress.md b/tutorials/blockchain-query/blockchain-compress.md deleted file mode 100644 index 4270fce016..0000000000 --- a/tutorials/blockchain-query/blockchain-compress.md +++ /dev/null @@ -1,99 +0,0 @@ ---- -title: Compress your data using hypercore -excerpt: Compress a sample dataset with Tiger Cloud so you can store the Bitcoin blockchain more efficiently -products: [cloud, self_hosted, mst] -keywords: [beginner, crypto, blockchain, Bitcoin, finance, analytics] -layout_components: [next_prev_large] -content_group: Query the Bitcoin blockchain ---- - -import TutorialsHypercoreIntro from "versionContent/_partials/_tutorials-hypercore-intro.mdx" - -# Compress your data using $HYPERCORE - - - -## Optimize your data in the $COLUMNSTORE - -To compress the data in the `transactions` table, do the following: - - - -1. Connect to your $SERVICE_LONG - - In [$CONSOLE][services-portal] open an [SQL editor][in-console-editors]. The in-Console editors display the query speed. - You can also connect to your $SERVICE_SHORT using [psql][connect-using-psql]. - -1. 
Convert data to the $COLUMNSTORE: - - You can do this either automatically or manually: - - [Automatically convert chunks][add_columnstore_policy] in the $HYPERTABLE to the $COLUMNSTORE at a specific time interval: - - ```sql - CALL add_columnstore_policy('transactions', after => INTERVAL '1d'); - ``` - - - [Manually convert all chunks][convert_to_columnstore] in the $HYPERTABLE to the $COLUMNSTORE: - - ```sql - DO $$ - DECLARE - chunk_name TEXT; - BEGIN - FOR chunk_name IN (SELECT c FROM show_chunks('transactions') c) - LOOP - RAISE NOTICE 'Converting chunk: %', chunk_name; -- Optional: To see progress - CALL convert_to_columnstore(chunk_name); - END LOOP; - RAISE NOTICE 'Conversion to columnar storage complete for all chunks.'; -- Optional: Completion message - END$$; - ``` - - - - -## Take advantage of query speedups - -Previously, data in the $COLUMNSTORE was segmented by the `block_id` column value. -This means fetching data by filtering or grouping on that column is -more efficient. Ordering is set to time descending. This means that when you run queries -which try to order data in the same way, you see performance benefits. - - - -1. Connect to your $SERVICE_LONG - - In [$CONSOLE][services-portal] open an [SQL editor][in-console-editors]. The in-Console editors display the query speed. - -1. Run the following query: - - ```sql - WITH recent_blocks AS ( - SELECT block_id FROM transactions - WHERE is_coinbase IS TRUE - ORDER BY time DESC - LIMIT 5 - ) - SELECT - t.block_id, count(*) AS transaction_count, - SUM(weight) AS block_weight, - SUM(output_total_usd) AS block_value_usd - FROM transactions t - INNER JOIN recent_blocks b ON b.block_id = t.block_id - WHERE is_coinbase IS NOT TRUE - GROUP BY t.block_id; - ``` - - Performance speedup is of two orders of magnitude, around 15 ms when compressed in the $COLUMNSTORE and - 1 second when decompressed in the $ROWSTORE. - - - - - -[hypercore]: /use-timescale/:currentVersion:/hypercore/ -[in-console-editors]: /getting-started/:currentVersion:/run-queries-from-console/ -[services-portal]: https://console.cloud.timescale.com/dashboard/services -[connect-using-psql]: /integrations/:currentVersion:/psql#connect-to-your-service -[add_columnstore_policy]: /api/:currentVersion:/hypercore/add_columnstore_policy/ -[convert_to_columnstore]: /api/:currentVersion:/hypercore/convert_to_columnstore/ diff --git a/tutorials/blockchain-query/index.md b/tutorials/blockchain-query/index.md index ade5ca0a6c..9eb1131e87 100644 --- a/tutorials/blockchain-query/index.md +++ b/tutorials/blockchain-query/index.md @@ -28,8 +28,6 @@ This tutorial covers: 1. [Ingest data into a $SERVICE_SHORT][blockchain-dataset]: set up and connect to a $SERVICE_LONG, create tables and $HYPERTABLEs, and ingest data. 1. [Query your data][blockchain-query]: obtain information, including finding the most recent transactions on the blockchain, and gathering information about the transactions using aggregation functions. -1. [Compress your data using $HYPERCORE][blockchain-compress]: compress data that is no longer needed for highest performance queries, but is still accessed regularly - for real-time analytics. When you've completed this tutorial, you can use the same dataset to [Analyze the Bitcoin data][analyze-blockchain], using $TIMESCALE_DB hyperfunctions. 
diff --git a/tutorials/financial-tick-data/financial-tick-compress.md b/tutorials/financial-tick-data/financial-tick-compress.md deleted file mode 100644 index 45fd4f85b4..0000000000 --- a/tutorials/financial-tick-data/financial-tick-compress.md +++ /dev/null @@ -1,104 +0,0 @@ ---- -title: Compress your data using hypercore -excerpt: Compress a sample dataset with Tiger Cloud to store the financial data more efficiently -products: [cloud, self_hosted, mst] -keywords: [tutorials, finance, learn] -tags: [tutorials, beginner] -layout_components: [next_prev_large] -content_group: Analyze financial tick data ---- - -import TutorialsHypercoreIntro from "versionContent/_partials/_tutorials-hypercore-intro.mdx" - -# Compress your data using $HYPERCORE - - - -## Optimize your data in the $COLUMNSTORE - -To compress the data in the `crypto_ticks` table, do the following: - - - -1. Connect to your $SERVICE_LONG - - In [$CONSOLE][services-portal] open an [SQL editor][in-console-editors]. The in-Console editors display the query speed. - You can also connect to your $SERVICE_SHORT using [psql][connect-using-psql]. - -1. Convert data to the $COLUMNSTORE: - - You can do this either automatically or manually: - - [Automatically convert chunks][add_columnstore_policy] in the $HYPERTABLE to the $COLUMNSTORE at a specific time interval: - - ```sql - CALL add_columnstore_policy('crypto_ticks', after => INTERVAL '1d'); - ``` - - - [Manually convert all chunks][convert_to_columnstore] in the $HYPERTABLE to the $COLUMNSTORE: - - ```sql - CALL convert_to_columnstore(c) from show_chunks('crypto_ticks') c; - ``` - -1. Now that you have converted the chunks in your $HYPERTABLE to the $COLUMNSTORE, compare the - size of the dataset before and after compression: - - ```sql - SELECT - pg_size_pretty(before_compression_total_bytes) as before, - pg_size_pretty(after_compression_total_bytes) as after - FROM hypertable_columnstore_stats('crypto_ticks'); - ``` - - This shows a significant improvement in data usage: - - ```sql - before | after - --------+------- - 694 MB | 75 MB - (1 row) - ``` - - - - -## Take advantage of query speedups - -Previously, data in the $COLUMNSTORE was segmented by the `block_id` column value. -This means fetching data by filtering or grouping on that column is -more efficient. Ordering is set to time descending. This means that when you run queries -which try to order data in the same way, you see performance benefits. - - - -1. Connect to your $SERVICE_LONG - - In [$CONSOLE][services-portal] open an [SQL editor][in-console-editors]. The in-Console editors display the query speed. - -1. Run the following query: - - ```sql - SELECT - time_bucket('1 day', time) AS bucket, - symbol, - FIRST(price, time) AS "open", - MAX(price) AS high, - MIN(price) AS low, - LAST(price, time) AS "close", - LAST(day_volume, time) AS day_volume - FROM crypto_ticks - GROUP BY bucket, symbol; - ``` - - Performance speedup is of two orders of magnitude, around 15 ms when compressed in the $COLUMNSTORE and - 1 second when decompressed in the $ROWSTORE. 
- - - - -[hypercore]: /use-timescale/:currentVersion:/hypercore/ -[in-console-editors]: /getting-started/:currentVersion:/run-queries-from-console/ -[services-portal]: https://console.cloud.timescale.com/dashboard/services -[connect-using-psql]: /integrations/:currentVersion:/psql#connect-to-your-service -[add_columnstore_policy]: /api/:currentVersion:/hypercore/add_columnstore_policy/ -[convert_to_columnstore]: /api/:currentVersion:/hypercore/convert_to_columnstore/ diff --git a/tutorials/financial-tick-data/index.md b/tutorials/financial-tick-data/index.md index 1428c43066..fdfe30a7d9 100644 --- a/tutorials/financial-tick-data/index.md +++ b/tutorials/financial-tick-data/index.md @@ -48,9 +48,6 @@ This tutorial shows you how to ingest real-time time-series data into a $SERVICE [Twelve Data][twelve-data] into your $TIMESCALE_DB database. 1. [Query your dataset][financial-tick-query]: create candlestick views, query the aggregated data, and visualize the data in Grafana. -1. [Compress your data using hypercore][financial-tick-compress]: learn how to store and query -your financial tick data more efficiently using compression feature of $TIMESCALE_DB. - To create candlestick views, query the aggregated data, and visualize the data in Grafana, see the [ingest real-time websocket data section][advanced-websocket]. diff --git a/tutorials/page-index/page-index.js b/tutorials/page-index/page-index.js index 4360d04319..00df359bdb 100644 --- a/tutorials/page-index/page-index.js +++ b/tutorials/page-index/page-index.js @@ -40,12 +40,6 @@ module.exports = [ href: "beginner-blockchain-query", excerpt: "Query the Bitcoin blockchain dataset", }, - { - title: "Compress your data using hypercore", - href: "blockchain-compress", - excerpt: - "Compress the dataset so you can store the Bitcoin blockchain more efficiently", - }, ], }, { @@ -81,12 +75,6 @@ module.exports = [ href: "financial-tick-query", excerpt: "Query and visualize financial tick data", }, - { - title: "Compress your data using hypercore", - href: "financial-tick-compress", - excerpt: - "Compress the dataset so you can store the data more efficiently", - }, ], }, { diff --git a/tutorials/real-time-analytics-energy-consumption.md b/tutorials/real-time-analytics-energy-consumption.md index 308493b2c2..5cc5fe35e6 100644 --- a/tutorials/real-time-analytics-energy-consumption.md +++ b/tutorials/real-time-analytics-energy-consumption.md @@ -40,47 +40,6 @@ data optimized for size and speed in the columnstore. -## Optimize your data for real-time analytics - -When $TIMESCALE_DB converts a chunk to the columnstore, it automatically creates a different schema for your -data. $TIMESCALE_DB creates and uses custom indexes to incorporate the `segmentby` and `orderby` parameters when -you write to and read from the columstore. - -To increase the speed of your analytical queries by a factor of 10 and reduce storage costs by up to 90%, convert data -to the columnstore: - - - -1. **Connect to your $SERVICE_LONG** - - In [$CONSOLE][services-portal] open an [SQL editor][in-console-editors]. The in-Console editors display the query speed. - You can also connect to your $SERVICE_SHORT using [psql][connect-using-psql]. - -1. **Add a policy to convert chunks to the columnstore at a specific time interval** - - For example, 60 days after the data was added to the table: - ``` sql - CALL add_columnstore_policy('metrics', INTERVAL '8 days'); - ``` - See [add_columnstore_policy][add_columnstore_policy]. - -1. 
**Faster analytical queries on data in the columnstore** - - Now run the analytical query again: - ```sql - SELECT time_bucket('1 day', created, 'Europe/Berlin') AS "time", - round((last(value, created) - first(value, created)) * 100.) / 100. AS value - FROM metrics - WHERE type_id = 5 - GROUP BY 1; - ``` - On this amount of data, this analytical query on data in the columnstore takes about 250ms. - - - -Just to hit this one home, by converting cooling data to the columnstore, you have increased the speed of your analytical -queries by a factor of 10, and reduced storage by up to 90%. - ## Write fast analytical queries Aggregation is a way of combining data to get insights from it. Average, sum, and count are all examples of simple @@ -177,7 +136,6 @@ You have integrated Grafana with a $SERVICE_LONG and made insights based on visu [alter_table_hypercore]: /api/:currentVersion:/hypercore/alter_table/ [compression_continuous-aggregate]: /api/:currentVersion:/continuous-aggregates/alter_materialized_view/ [informational-views]: /api/:currentVersion:/informational-views/jobs/ -[add_columnstore_policy]: /api/:currentVersion:/hypercore/add_columnstore_policy/ [hypercore_workflow]: /api/:currentVersion:/hypercore/#hypercore-workflow [alter_job]: /api/:currentVersion:/actions/alter_job/ [remove_columnstore_policy]: /api/:currentVersion:/hypercore/remove_columnstore_policy/ diff --git a/tutorials/real-time-analytics-transport.md b/tutorials/real-time-analytics-transport.md index d8430e7990..056b8dc108 100644 --- a/tutorials/real-time-analytics-transport.md +++ b/tutorials/real-time-analytics-transport.md @@ -31,40 +31,6 @@ of data optimized for size and speed in the columnstore. -## Optimize your data for real-time analytics - - -When $TIMESCALE_DB converts a chunk to the columnstore, it automatically creates a different schema for your -data. $TIMESCALE_DB creates and uses custom indexes to incorporate the `segmentby` and `orderby` parameters when -you write to and read from the columstore. - -To increase the speed of your analytical queries by a factor of 10 and reduce storage costs by up to 90%, convert data -to the columnstore: - - - -1. **Connect to your $SERVICE_LONG** - - In [$CONSOLE][services-portal] open an [SQL editor][in-console-editors]. The in-Console editors display the query speed. - You can also connect to your $SERVICE_SHORTusing [psql][connect-using-psql]. - -1. **Add a policy to convert chunks to the columnstore at a specific time interval** - - For example, convert data older than 8 days old to the columstore: - ``` sql - CALL add_columnstore_policy('rides', INTERVAL '8 days'); - ``` - See [add_columnstore_policy][add_columnstore_policy]. - - The data you imported for this tutorial is from 2016, it was already added to the $COLUMNSTORE by default. However, - you get the idea. To see the space savings in action, follow [Try the key $COMPANY features][try-timescale-features]. - - - -Just to hit this one home, by converting cooling data to the columnstore, you have increased the speed of your analytical -queries by a factor of 10, and reduced storage by up to 90%. - - ## Monitor performance over time @@ -131,7 +97,6 @@ your data. 
[alter_table_hypercore]: /api/:currentVersion:/hypercore/alter_table/ [compression_continuous-aggregate]: /api/:currentVersion:/continuous-aggregates/alter_materialized_view/ [informational-views]: /api/:currentVersion:/informational-views/jobs/ -[add_columnstore_policy]: /api/:currentVersion:/hypercore/add_columnstore_policy/ [hypercore_workflow]: /api/:currentVersion:/hypercore/#hypercore-workflow [alter_job]: /api/:currentVersion:/actions/alter_job/ [remove_columnstore_policy]: /api/:currentVersion:/hypercore/remove_columnstore_policy/ diff --git a/tutorials/simulate-iot-sensor-data.md b/tutorials/simulate-iot-sensor-data.md index 14a683373c..c6dd53012d 100644 --- a/tutorials/simulate-iot-sensor-data.md +++ b/tutorials/simulate-iot-sensor-data.md @@ -5,8 +5,7 @@ products: [cloud, self_hosted, mst] keywords: [IoT, simulate] --- - -import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx"; +import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx"; import IntegrationPrereqs from "versionContent/_partials/_integration-prereqs.mdx"; # Simulate an IoT sensor dataset @@ -48,11 +47,10 @@ To simulate a dataset, run the following queries: cpu DOUBLE PRECISION, FOREIGN KEY (sensor_id) REFERENCES sensors (id) ) WITH ( - tsdb.hypertable, - tsdb.partition_column='time' + tsdb.hypertable ); ``` - + 1. **Populate the `sensors` table**: diff --git a/use-timescale/continuous-aggregates/about-continuous-aggregates.md b/use-timescale/continuous-aggregates/about-continuous-aggregates.md index 29697a5de8..1bd4fd2519 100644 --- a/use-timescale/continuous-aggregates/about-continuous-aggregates.md +++ b/use-timescale/continuous-aggregates/about-continuous-aggregates.md @@ -8,6 +8,7 @@ keywords: [continuous aggregates] import CaggsFunctionSupport from "versionContent/_partials/_caggs-function-support.mdx"; import CaggsIntro from "versionContent/_partials/_caggs-intro.mdx"; import CaggsTypes from "versionContent/_partials/_caggs-types.mdx"; +import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx"; # About continuous aggregates @@ -75,11 +76,13 @@ CREATE TABLE conditions ( device_id INTEGER, temperature FLOAT8 ) WITH ( - tsdb.hypertable, - tsdb.partition_column='time' + tsdb.hypertable ); ``` + + + See the following `JOIN` examples on continuous aggregates: - `INNER JOIN` on a single equality condition, using the `ON` clause: diff --git a/use-timescale/continuous-aggregates/create-a-continuous-aggregate.md b/use-timescale/continuous-aggregates/create-a-continuous-aggregate.md index 3b126dea24..69b5f93c97 100644 --- a/use-timescale/continuous-aggregates/create-a-continuous-aggregate.md +++ b/use-timescale/continuous-aggregates/create-a-continuous-aggregate.md @@ -67,7 +67,7 @@ hypertable. Additionally, all functions and their arguments included in end_offset => INTERVAL '1 day', schedule_interval => INTERVAL '1 hour'); ``` - + You can use most $PG aggregate functions in continuous aggregations. 
To diff --git a/use-timescale/extensions/postgis.md b/use-timescale/extensions/postgis.md index 2f620a2e4d..ac7afa866f 100644 --- a/use-timescale/extensions/postgis.md +++ b/use-timescale/extensions/postgis.md @@ -6,7 +6,7 @@ keywords: [services, settings, extensions, postgis] tags: [extensions, postgis] --- -import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx"; +import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx"; # Analyze geospatial data using postgis @@ -65,11 +65,10 @@ particular location. cases INT NOT NULL, deaths INT NOT NULL ) WITH ( - tsdb.hypertable, - tsdb.partition_column='time' + tsdb.hypertable ); ``` - + 1. To support efficient queries, create an index on the `state_id` column: diff --git a/use-timescale/hypercore/real-time-analytics-in-hypercore.md b/use-timescale/hypercore/real-time-analytics-in-hypercore.md index e7f728b356..23934cef9f 100644 --- a/use-timescale/hypercore/real-time-analytics-in-hypercore.md +++ b/use-timescale/hypercore/real-time-analytics-in-hypercore.md @@ -65,7 +65,6 @@ repeated values, [XOR-based][xor] and [dictionary compression][dictionary] is us [create-hypertable]: /use-timescale/:currentVersion:/compression/ -[add_columnstore_policy]: /api/:currentVersion:/hypercore/add_columnstore_policy/ [delta]: /use-timescale/:currentVersion:/hypercore/compression-methods/#delta-encoding [delta-delta]: /use-timescale/:currentVersion:/hypercore/compression-methods/#delta-of-delta-encoding [simple-8b]: /use-timescale/:currentVersion:/hypercore/compression-methods/#simple-8b @@ -73,7 +72,6 @@ repeated values, [XOR-based][xor] and [dictionary compression][dictionary] is us [xor]: /use-timescale/:currentVersion:/hypercore/compression-methods/#xor-based-encoding [dictionary]: /use-timescale/:currentVersion:/hypercore/compression-methods/#dictionary-compression [ingest-data]: /getting-started/:currentVersion:/try-key-features-timescale-products/#optimize-time-series-data-in-hypertables -[add_columnstore_policy]: /api/:currentVersion:/hypercore/add_columnstore_policy/ [run-job]: /api/:currentVersion:/jobs-automation/run_job/ [alter_job]: /api/:currentVersion:/jobs-automation/alter_job/ [informational-views]: /api/:currentVersion:/informational-views/jobs/ diff --git a/use-timescale/hypercore/secondary-indexes.md b/use-timescale/hypercore/secondary-indexes.md index 3ecec43c0a..f6875263bc 100644 --- a/use-timescale/hypercore/secondary-indexes.md +++ b/use-timescale/hypercore/secondary-indexes.md @@ -4,6 +4,8 @@ excerpt: Use segmenting and ordering data in the columnstore to make lookup quer products: [cloud, mst, self_hosted] keywords: [hypertable, compression, row-columnar storage, hypercore] --- +import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx"; + # Improve query and upsert performance @@ -67,11 +69,12 @@ CREATE TABLE metrics ( device_id INT, data JSONB ) WITH ( - tsdb.hypertable, - tsdb.partition_column='time' + tsdb.hypertable ); ``` + + 1. 
**Execute a query on a regular $HYPERTABLE**
 
diff --git a/use-timescale/hyperfunctions/counter-aggregation.md b/use-timescale/hyperfunctions/counter-aggregation.md
index ea10223474..5b4d65f68d 100644
--- a/use-timescale/hyperfunctions/counter-aggregation.md
+++ b/use-timescale/hyperfunctions/counter-aggregation.md
@@ -4,7 +4,7 @@ excerpt: When collecting data from counters, interruptions usually cause the cou
 products: [cloud, mst, self_hosted]
 keywords: [hyperfunctions, Toolkit, gauges, counters]
 ---
-import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx";
+import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx";
 
 # Counter aggregation
 
@@ -113,11 +113,10 @@ going on in each part.
        PRIMARY KEY (measure_id, ts)
     ) WITH (
        tsdb.hypertable,
-       tsdb.partition_column='ts',
        tsdb.chunk_interval='15 days'
     );
    ```
-
+
 
 1. Create a counter aggregate and the extrapolated delta function:
 
diff --git a/use-timescale/hypertables/hypertable-crud.md b/use-timescale/hypertables/hypertable-crud.md
index e116dbc429..8bb0bd826e 100644
--- a/use-timescale/hypertables/hypertable-crud.md
+++ b/use-timescale/hypertables/hypertable-crud.md
@@ -6,8 +6,8 @@ keywords: [hypertables, create]
 ---
 import IntegrationPrereqs from "versionContent/_partials/_integration-prereqs.mdx";
-import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx";
 import HypercoreDirectCompress from "versionContent/_partials/_hypercore-direct-compress.mdx";
+import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx";
 
 # Optimize time-series data in hypertables
 
@@ -24,8 +24,8 @@ time. Typically, you partition hypertables on columns that hold time values.
 ## Create a hypertable
 
 Create a [$HYPERTABLE][hypertables-section] for your time-series data using [CREATE TABLE][hypertable-create-table].
-For [efficient queries][secondary-indexes] on data in the columnstore, remember to `segmentby` the column you will use
-most often to filter your data:
+For [efficient queries][secondary-indexes], set `segmentby` to the column you will use most often to filter your
+data:
 
 ```sql
 CREATE TABLE conditions (
@@ -36,13 +36,13 @@ CREATE TABLE conditions (
    humidity DOUBLE PRECISION NULL
 ) WITH (
     tsdb.hypertable,
-    tsdb.partition_column='time',
     tsdb.segmentby = 'device',
     tsdb.orderby = 'time DESC'
 );
 ```
 
-
+
+
 To convert an existing table with data in it, call `create_hypertable` on that table with [`migrate_data` to `true`][api-create-hypertable-arguments]. However, if you have a lot of data, this may take a long time.
 
 
 
-## Optimize cooling data in the $COLUMNSTORE
-
-As the data cools and becomes more suited for analytics, [add a columnstore policy][add_columnstore_policy] so your data
-is automatically converted to the $COLUMNSTORE after a specific time interval. This columnar format enables fast
-scanning and aggregation, optimizing performance for analytical workloads while also saving significant storage space.
-In the $COLUMNSTORE conversion, $HYPERTABLE chunks are compressed by up to 98%, and organized for efficient,
-large-scale queries. This columnar format enables fast scanning and aggregation, optimizing performance for analytical
-workloads.
- -To optimize your data, add a $COLUMNSTORE policy: - -```sql -CALL add_columnstore_policy('conditions', after => INTERVAL '1d'); -``` - -You can also manually [convert chunks][convert_to_columnstore] in a $HYPERTABLE to the $COLUMNSTORE. - ## Alter a hypertable You can alter a hypertable, for example to add a column, by using the $PG @@ -118,7 +101,6 @@ All data chunks belonging to the hypertable are deleted. [postgres-altertable]: https://www.postgresql.org/docs/current/sql-altertable.html [hypertable-create-table]: /api/:currentVersion:/hypertable/create_table/ -[add_columnstore_policy]: /api/:currentVersion:/hypercore/add_columnstore_policy/ [install]: /getting-started/:currentVersion:/ [postgres-createtable]: https://www.postgresql.org/docs/current/sql-createtable.html [postgresql-timestamp]: https://wiki.postgresql.org/wiki/Don't_Do_This#Don.27t_use_timestamp_.28without_time_zone.29 @@ -129,7 +111,5 @@ All data chunks belonging to the hypertable are deleted. [hypertable-create-table]: /api/:currentVersion:/hypertable/create_table/ [hypercore]: /use-timescale/:currentVersion:/hypercore/ [secondary-indexes]: /use-timescale/:currentVersion:/hypercore/secondary-indexes/ -[convert_to_columnstore]: /api/:currentVersion:/hypercore/convert_to_columnstore/ -[add_columnstore_policy]: /api/:currentVersion:/hypercore/add_columnstore_policy/ [timestamps-best-practice]: https://wiki.postgresql.org/wiki/Don't_Do_This#Don.27t_use_timestamp_.28without_time_zone.29 [uuidv7_functions]: /api/:currentVersion:/uuid-functions/ \ No newline at end of file diff --git a/use-timescale/hypertables/hypertables-and-unique-indexes.md b/use-timescale/hypertables/hypertables-and-unique-indexes.md index 218922a82f..d69200043a 100644 --- a/use-timescale/hypertables/hypertables-and-unique-indexes.md +++ b/use-timescale/hypertables/hypertables-and-unique-indexes.md @@ -5,7 +5,7 @@ products: [cloud, mst, self_hosted] keywords: [hypertables, unique indexes, primary keys] --- -import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx"; +import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx"; # Enforce constraints with unique indexes @@ -46,12 +46,11 @@ To create a unique index on a $HYPERTABLE: value FLOAT ) WITH ( tsdb.hypertable, - tsdb.partition_column='time', tsdb.segmentby = 'device_id', tsdb.orderby = 'time DESC' ); ``` - + 1. **Create a unique index on the $HYPERTABLE** diff --git a/use-timescale/hypertables/improve-query-performance.md b/use-timescale/hypertables/improve-query-performance.md index 8207204f8f..693e476024 100644 --- a/use-timescale/hypertables/improve-query-performance.md +++ b/use-timescale/hypertables/improve-query-performance.md @@ -5,7 +5,6 @@ products: [cloud, mst, self_hosted] keywords: [hypertables, indexes, chunks] --- -import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx"; import ChunkInterval from "versionContent/_partials/_chunk-interval.mdx"; import EarlyAccess2171 from "versionContent/_partials/_early_access_2_17_1.mdx"; @@ -43,13 +42,10 @@ Adjusting your hypertable chunk interval can improve performance in your databas humidity DOUBLE PRECISION NULL ) WITH ( tsdb.hypertable, - tsdb.partition_column='time', tsdb.chunk_interval='1 day' ); ``` - - 1. **Check current setting for chunk intervals** Query the $TIMESCALE_DB catalog for a $HYPERTABLE. 
For example: diff --git a/use-timescale/hypertables/index.md b/use-timescale/hypertables/index.md index 306d5d3ae2..f7c0f8b275 100644 --- a/use-timescale/hypertables/index.md +++ b/use-timescale/hypertables/index.md @@ -100,7 +100,6 @@ For example: ) WITH( timescaledb.hypertable, - timescaledb.partition_column='time', timescaledb.chunk_interval='1 day' ); ``` diff --git a/use-timescale/query-data/advanced-analytic-queries.md b/use-timescale/query-data/advanced-analytic-queries.md index 4e837d3ffd..1995aeab34 100644 --- a/use-timescale/query-data/advanced-analytic-queries.md +++ b/use-timescale/query-data/advanced-analytic-queries.md @@ -5,7 +5,7 @@ products: [cloud, mst, self_hosted] keywords: [queries, hyperfunctions, analytics] --- -import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx"; +import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx"; # Perform advanced analytical queries @@ -353,12 +353,11 @@ CREATE TABLE location ( latitude FLOAT, longitude FLOAT ) WITH ( - tsdb.hypertable, - tsdb.partition_column='time' + tsdb.hypertable ); ``` - + You can use the first table, which gives a distinct set of vehicles, to perform a `LATERAL JOIN` against the location table: diff --git a/use-timescale/schema-management/about-constraints.md b/use-timescale/schema-management/about-constraints.md index 2cd5568419..d383633f34 100644 --- a/use-timescale/schema-management/about-constraints.md +++ b/use-timescale/schema-management/about-constraints.md @@ -5,7 +5,7 @@ products: [cloud, mst, self_hosted] keywords: [schemas, constraints] --- -import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx"; +import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx"; # About constraints @@ -34,12 +34,11 @@ CREATE TABLE conditions ( location INTEGER REFERENCES locations (id), PRIMARY KEY(time, device_id) ) WITH ( - tsdb.hypertable, - tsdb.partition_column='time' + tsdb.hypertable ); ``` - + This example also references values in another `locations` table using a foreign key constraint. diff --git a/use-timescale/schema-management/indexing.md b/use-timescale/schema-management/indexing.md index d1c7fd0a0a..3fca99ae8a 100644 --- a/use-timescale/schema-management/indexing.md +++ b/use-timescale/schema-management/indexing.md @@ -5,6 +5,8 @@ products: [cloud, mst, self_hosted] keywords: [hypertables, indexes] --- +import CreateHypertablePolicyNote from "versionContent/_partials/_create-hypertable-columnstore-policy-note.mdx"; + # Indexing data You can use an index on your database to speed up read operations. 
You can @@ -56,13 +58,11 @@ CREATE TABLE conditions ( humidity DOUBLE PRECISION NULL ) WITH ( tsdb.hypertable, - tsdb.partition_column='time', tsdb.create_default_indexes=false ); ``` - - + ## Best practices for indexing From 5947b417dc38760e98581a20dbc29fe66da1efbc Mon Sep 17 00:00:00 2001 From: Eon <123763385+timescale-automation@users.noreply.github.com> Date: Tue, 4 Nov 2025 09:41:38 -0500 Subject: [PATCH 5/5] Updated list of GUCs from TimescaleDB 2.23.0 (#4530) * [create-pull-request] automated change * Apply suggestions from code review Co-authored-by: Iain Cox Signed-off-by: Philip Krauss <35487337+philkra@users.noreply.github.com> --------- Signed-off-by: Philip Krauss <35487337+philkra@users.noreply.github.com> Co-authored-by: Iain Cox Co-authored-by: Philip Krauss <35487337+philkra@users.noreply.github.com> --- _partials/_timescaledb-gucs.md | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/_partials/_timescaledb-gucs.md b/_partials/_timescaledb-gucs.md index bd57e7e266..8eb327637f 100644 --- a/_partials/_timescaledb-gucs.md +++ b/_partials/_timescaledb-gucs.md @@ -23,6 +23,7 @@ | `enable_bulk_decompression` | `BOOLEAN` | `true` | Increases throughput of decompression, but might increase query memory usage | | `enable_cagg_reorder_groupby` | `BOOLEAN` | `true` | Enable group by clause reordering for continuous aggregates | | `enable_cagg_sort_pushdown` | `BOOLEAN` | `true` | Enable pushdown of ORDER BY clause for continuous aggregates | +| `enable_cagg_wal_based_invalidation` | `BOOLEAN` | `false` | Use WAL to track changes to hypertables for continuous aggregates. This feature is early access from TimescaleDB v2.23.0 | | `enable_cagg_watermark_constify` | `BOOLEAN` | `true` | Enable constifying cagg watermark for real-time caggs | | `enable_cagg_window_functions` | `BOOLEAN` | `false` | Allow window functions in continuous aggregate views | | `enable_chunk_append` | `BOOLEAN` | `true` | Enable using chunk append node | @@ -44,6 +45,9 @@ | `enable_direct_compress_copy` | `BOOLEAN` | `false` | Enable experimental support for direct compression during COPY | | `enable_direct_compress_copy_client_sorted` | `BOOLEAN` | `false` | Correct handling of data sorting by the user is required for this option. | | `enable_direct_compress_copy_sort_batches` | `BOOLEAN` | `true` | Enable batch sorting during direct compress COPY | +| `enable_direct_compress_insert` | `BOOLEAN` | `false` | Enable support for direct compression during INSERT. This feature is early access from TimescaleDB v2.23.0 | +| `enable_direct_compress_insert_client_sorted` | `BOOLEAN` | `false` | Correct handling of data sorting by the user is required for this option. 
| +| `enable_direct_compress_insert_sort_batches` | `BOOLEAN` | `true` | Enable batch sorting during direct compress INSERT | | `enable_dml_decompression` | `BOOLEAN` | `true` | Enable DML decompression when modifying compressed hypertable | | `enable_dml_decompression_tuple_filtering` | `BOOLEAN` | `true` | Recheck tuples during DML decompression to only decompress batches with matching tuples | | `enable_event_triggers` | `BOOLEAN` | `false` | Enable event triggers for chunks creation | @@ -72,7 +76,6 @@ | `last_tuned` | `STRING` | `NULL` | records last time timescaledb-tune ran | | `last_tuned_version` | `STRING` | `NULL` | version of timescaledb-tune used to tune | | `license` | `STRING` | `TS_LICENSE_DEFAULT` | Determines which features are enabled | -| `materializations_per_refresh_window` | `INTEGER` | `10` | The maximal number of individual refreshes per cagg refresh. If more refreshes need to be performed, they are merged into a larger single refresh.
    min: `0`, max: `INT_MAX` | | `max_cached_chunks_per_hypertable` | `INTEGER` | `1024` | Maximum number of chunks stored in the cache
    min: `0`, max: `65536` | | `max_open_chunks_per_insert` | `INTEGER` | `1024` | Maximum number of open chunk tables per insert
    min: `0`, max: `PG_INT16_MAX` | | `max_tuples_decompressed_per_dml_transaction` | `INTEGER` | `100000` | If the number of tuples exceeds this value, an error will be thrown and transaction rolled back. Setting this to 0 sets this value to unlimited number of tuples decompressed.
    min: `0`, max: `2147483647` | @@ -81,4 +84,4 @@ | `skip_scan_run_cost_multiplier` | `REAL` | `1.0` | Default is 1.0 i.e. regularly estimated SkipScan run cost, 0.0 will make SkipScan to have run cost = 0
    min: `0.0`, max: `1.0` | | `telemetry_level` | `ENUM` | `TELEMETRY_DEFAULT` | Level used to determine which telemetry to send | -Version: [2.22.1](https://github.com/timescale/timescaledb/releases/tag/2.22.1) \ No newline at end of file +Version: [2.23.0](https://github.com/timescale/timescaledb/releases/tag/2.23.0) \ No newline at end of file
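
As a usage sketch, the settings in the table above are regular $PG GUCs namespaced under `timescaledb.`, so they can be changed in `postgresql.conf` or per session; the parameter and value below are only an illustration, not a recommendation:

```sql
-- Illustration only: override a GUC from the table above for the current session.
-- Per its description, 0 removes the limit on decompressed tuples.
SET timescaledb.max_tuples_decompressed_per_dml_transaction = 0;

-- Verify the current value.
SHOW timescaledb.max_tuples_decompressed_per_dml_transaction;
```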