diff --git a/docs/development/extensions-core/datasketches-tuple.md b/docs/development/extensions-core/datasketches-tuple.md
index fc4f74d5c81d..c9a05b5ab197 100644
--- a/docs/development/extensions-core/datasketches-tuple.md
+++ b/docs/development/extensions-core/datasketches-tuple.md
@@ -39,19 +39,52 @@ druid.extensions.loadList=["druid-datasketches"]
 "name" : ,
 "fieldName" : ,
 "nominalEntries": ,
- "numberOfValues" : ,
- "metricColumns" :
+ "metricColumns" : ,
+ "numberOfValues" :
 }
 ```

 |property|description|required?|
 |--------|-----------|---------|
 |type|This String should always be "arrayOfDoublesSketch"|yes|
-|name|A String for the output (result) name of the calculation.|yes|
+|name|String representing the output column to store sketch values.|yes|
 |fieldName|A String for the name of the input field.|yes|
 |nominalEntries|Parameter that determines the accuracy and size of the sketch. Higher k means higher accuracy but more space to store sketches. Must be a power of 2. See the [Theta sketch accuracy](https://datasketches.apache.org/docs/Theta/ThetaErrorTable) for details. |no, defaults to 16384|
-|numberOfValues|Number of values associated with each distinct key. |no, defaults to 1|
-|metricColumns|If building sketches from raw data, an array of names of the input columns containing numeric values to be associated with each distinct key.|no, defaults to empty array|
+|metricColumns|When building sketches from raw data, an array of input columns that contain numeric values to associate with each distinct key.|no. If not provided, `fieldName` is assumed to be an `arrayOfDoublesSketch`.|
+|numberOfValues|Number of values associated with each distinct key.|no, defaults to the length of `metricColumns` if provided, or 1 otherwise|
+
+You can use the `arrayOfDoublesSketch` aggregator to:
+
+- Build a sketch from raw data. In this case, set `metricColumns` to an array of input columns.
+- Build a sketch from an existing `ArrayOfDoubles` sketch. In this case, leave `metricColumns` unset and set `fieldName` to a column of `ArrayOfDoubles` sketches with `numberOfValues` doubles. At ingestion time, you must base64 encode `ArrayOfDoubles` sketches.
+
+#### Example building a sketch from raw data
+
+Compute a theta sketch of unique users. For each user, store the `added` and `deleted` scores. The new sketch column is called `users_theta`.
+
+```json
+{
+  "type": "arrayOfDoublesSketch",
+  "name": "users_theta",
+  "fieldName": "user",
+  "nominalEntries": 16384,
+  "metricColumns": ["added", "deleted"]
+}
+```
+
+#### Example ingesting a precomputed sketch column
+
+Ingest a precomputed sketch column called `user_sketches`, whose values are base64 encoded `ArrayOfDoubles` sketches with two doubles per key, and store it in a column called `users_theta`.
+
+```json
+{
+  "type": "arrayOfDoublesSketch",
+  "name": "users_theta",
+  "fieldName": "user_sketches",
+  "nominalEntries": 16384,
+  "numberOfValues": 2
+}
+```

 ### Post Aggregators
diff --git a/docs/multi-stage-query/concepts.md b/docs/multi-stage-query/concepts.md
index 44e5ea43d427..da0e774152d6 100644
--- a/docs/multi-stage-query/concepts.md
+++ b/docs/multi-stage-query/concepts.md
@@ -233,7 +233,8 @@ happens:
 The [`maxNumTasks`](./reference.md#context-parameters) query parameter determines the maximum number of tasks your query will use, including the one `query_controller` task. Generally, queries perform better with more workers. The lowest possible value of `maxNumTasks` is two (one worker and one controller).
Do not set this higher than the number of -free slots available in your cluster; doing so will result in a [TaskStartTimeout](reference.md#error-codes) error. +free slots available in your cluster; doing so will result in a [TaskStartTimeout](reference.md#error_TaskStartTimeout) +error. When [reading external data](#extern), EXTERN can read multiple files in parallel across different worker tasks. However, EXTERN does not split individual files across multiple worker tasks. If you have a diff --git a/docs/multi-stage-query/known-issues.md b/docs/multi-stage-query/known-issues.md index c76ab57aa7ac..648d3c297b47 100644 --- a/docs/multi-stage-query/known-issues.md +++ b/docs/multi-stage-query/known-issues.md @@ -33,16 +33,18 @@ sidebar_label: Known issues - Worker task stage outputs are stored in the working directory given by `druid.indexer.task.baseDir`. Stages that generate a large amount of output data may exhaust all available disk space. In this case, the query fails with -an [UnknownError](./reference.md#error-codes) with a message including "No space left on device". +an [UnknownError](./reference.md#error_UnknownError) with a message including "No space left on device". ## SELECT - SELECT from a Druid datasource does not include unpublished real-time data. - GROUPING SETS and UNION ALL are not implemented. Queries using these features return a - [QueryNotSupported](reference.md#error-codes) error. + [QueryNotSupported](reference.md#error_QueryNotSupported) error. -- For some COUNT DISTINCT queries, you'll encounter a [QueryNotSupported](reference.md#error-codes) error that includes `Must not have 'subtotalsSpec'` as one of its causes. This is caused by the planner attempting to use GROUPING SETs, which are not implemented. +- For some COUNT DISTINCT queries, you'll encounter a [QueryNotSupported](reference.md#error_QueryNotSupported) error + that includes `Must not have 'subtotalsSpec'` as one of its causes. This is caused by the planner attempting to use + GROUPING SETs, which are not implemented. - The numeric varieties of the EARLIEST and LATEST aggregators do not work properly. Attempting to use the numeric varieties of these aggregators lead to an error like diff --git a/docs/multi-stage-query/reference.md b/docs/multi-stage-query/reference.md index a4bcbfc27b1f..ae9bc106ca8c 100644 --- a/docs/multi-stage-query/reference.md +++ b/docs/multi-stage-query/reference.md @@ -249,14 +249,14 @@ The following table lists query limits: | Limit | Value | Error if exceeded | |---|---|---| -| Size of an individual row written to a frame. Row size when written to a frame may differ from the original row size. | 1 MB | `RowTooLarge` | -| Number of segment-granular time chunks encountered during ingestion. | 5,000 | `TooManyBuckets` | -| Number of input files/segments per worker. | 10,000 | `TooManyInputFiles` | -| Number of output partitions for any one stage. Number of segments generated during ingestion. |25,000 | `TooManyPartitions` | -| Number of output columns for any one stage. | 2,000 | `TooManyColumns` | -| Number of cluster by columns that can appear in a stage | 1,500 | `TooManyClusteredByColumns` | -| Number of workers for any one stage. | Hard limit is 1,000. Memory-dependent soft limit may be lower. | `TooManyWorkers` | -| Maximum memory occupied by broadcasted tables. | 30% of each [processor memory bundle](concepts.md#memory-usage). | `BroadcastTablesTooLarge` | +| Size of an individual row written to a frame. 
Row size when written to a frame may differ from the original row size. | 1 MB | [`RowTooLarge`](#error_RowTooLarge) | +| Number of segment-granular time chunks encountered during ingestion. | 5,000 | [`TooManyBuckets`](#error_TooManyBuckets) | +| Number of input files/segments per worker. | 10,000 | [`TooManyInputFiles`](#error_TooManyInputFiles) | +| Number of output partitions for any one stage. Number of segments generated during ingestion. |25,000 | [`TooManyPartitions`](#error_TooManyPartitions) | +| Number of output columns for any one stage. | 2,000 | [`TooManyColumns`](#error_TooManyColumns) | +| Number of cluster by columns that can appear in a stage | 1,500 | [`TooManyClusteredByColumns`](#error_TooManyClusteredByColumns) | +| Number of workers for any one stage. | Hard limit is 1,000. Memory-dependent soft limit may be lower. | [`TooManyWorkers`](#error_TooManyWorkers) | +| Maximum memory occupied by broadcasted tables. | 30% of each [processor memory bundle](concepts.md#memory-usage). | [`BroadcastTablesTooLarge`](#error_BroadcastTablesTooLarge) | @@ -266,30 +266,30 @@ The following table describes error codes you may encounter in the `multiStageQu | Code | Meaning | Additional fields | |---|---|---| -| `BroadcastTablesTooLarge` | The size of the broadcast tables used in the right hand side of the join exceeded the memory reserved for them in a worker task.

Try increasing the peon memory or reducing the size of the broadcast tables. | `maxBroadcastTablesSize`: Memory reserved for the broadcast tables, measured in bytes. | -| `Canceled` | The query was canceled. Common reasons for cancellation:

  • User-initiated shutdown of the controller task via the `/druid/indexer/v1/task/{taskId}/shutdown` API.
  • Restart or failure of the server process that was running the controller task.
| | -| `CannotParseExternalData` | A worker task could not parse data from an external datasource. | `errorMessage`: More details on why parsing failed. | -| `ColumnNameRestricted` | The query uses a restricted column name. | `columnName`: The restricted column name. | -| `ColumnTypeNotSupported` | The column type is not supported. This can be because:

  • Support for writing or reading from a particular column type is not supported.
  • The query attempted to use a column type that is not supported by the frame format. This occurs with ARRAY types, which are not yet implemented for frames.
| `columnName`: The column name with an unsupported type.

`columnType`: The unknown column type. | -| `InsertCannotAllocateSegment` | The controller task could not allocate a new segment ID due to conflict with existing segments or pending segments. Common reasons for such conflicts:

  • Attempting to mix different granularities in the same intervals of the same datasource.
  • Prior ingestions that used non-extendable shard specs.
| `dataSource`

`interval`: The interval for the attempted new segment allocation. | -| `InsertCannotBeEmpty` | An INSERT or REPLACE query did not generate any output rows in a situation where output rows are required for success. This can happen for INSERT or REPLACE queries with `PARTITIONED BY` set to something other than `ALL` or `ALL TIME`. | `dataSource` | -| `InsertCannotOrderByDescending` | An INSERT query contained a `CLUSTERED BY` expression in descending order. Druid's segment generation code only supports ascending order. | `columnName` | -| `InsertCannotReplaceExistingSegment` | A REPLACE query cannot proceed because an existing segment partially overlaps those bounds, and the portion within the bounds is not fully overshadowed by query results.

There are two ways to address this without modifying your query:
  • Shrink the OVERLAP filter to match the query results.
  • Expand the OVERLAP filter to fully contain the existing segment.
| `segmentId`: The existing segment
-| `InsertLockPreempted` | An INSERT or REPLACE query was canceled by a higher-priority ingestion job, such as a real-time ingestion task. | | -| `InsertTimeNull` | An INSERT or REPLACE query encountered a null timestamp in the `__time` field.

This can happen due to using an expression like `TIME_PARSE(timestamp) AS __time` with a timestamp that cannot be parsed. (TIME_PARSE returns null when it cannot parse a timestamp.) In this case, try parsing your timestamps using a different function or pattern.

If your timestamps may genuinely be null, consider using COALESCE to provide a default value. One option is CURRENT_TIMESTAMP, which represents the start time of the job. | -| `InsertTimeOutOfBounds` | A REPLACE query generated a timestamp outside the bounds of the TIMESTAMP parameter for your OVERWRITE WHERE clause.

To avoid this error, verify that the you specified is valid. | `interval`: time chunk interval corresponding to the out-of-bounds timestamp | -| `InvalidNullByte` | A string column included a null byte. Null bytes in strings are not permitted. | `column`: The column that included the null byte | -| `QueryNotSupported` | QueryKit could not translate the provided native query to a multi-stage query.

This can happen if the query uses features that aren't supported, like GROUPING SETS. | | -| `RowTooLarge` | The query tried to process a row that was too large to write to a single frame. See the [Limits](#limits) table for the specific limit on frame size. Note that the effective maximum row size is smaller than the maximum frame size due to alignment considerations during frame writing. | `maxFrameSize`: The limit on the frame size. | -| `TaskStartTimeout` | Unable to launch all the worker tasks in time.

There might be insufficient available slots to start all the worker tasks simultaneously.

Try splitting up the query into smaller chunks with lesser `maxNumTasks` number. Another option is to increase capacity. | `numTasks`: The number of tasks attempted to launch. | -| `TooManyBuckets` | Exceeded the number of partition buckets for a stage. Partition buckets are only used for `segmentGranularity` during INSERT queries. The most common reason for this error is that your `segmentGranularity` is too narrow relative to the data. See the [Limits](#limits) table for the specific limit. | `maxBuckets`: The limit on buckets. | -| `TooManyInputFiles` | Exceeded the number of input files/segments per worker. See the [Limits](#limits) table for the specific limit. | `numInputFiles`: The total number of input files/segments for the stage.

`maxInputFiles`: The maximum number of input files/segments per worker per stage.

`minNumWorker`: The minimum number of workers required for a successful run. | -| `TooManyPartitions` | Exceeded the number of partitions for a stage. The most common reason for this is that the final stage of an INSERT or REPLACE query generated too many segments. See the [Limits](#limits) table for the specific limit. | `maxPartitions`: The limit on partitions which was exceeded | -| `TooManyClusteredByColumns` | Exceeded the number of cluster by columns for a stage. See the [Limits](#limits) table for the specific limit. | `numColumns`: The number of columns requested.

`maxColumns`: The limit on columns which was exceeded.`stage`: The stage number exceeding the limit

| -| `TooManyColumns` | Exceeded the number of columns for a stage. See the [Limits](#limits) table for the specific limit. | `numColumns`: The number of columns requested.

`maxColumns`: The limit on columns which was exceeded. | -| `TooManyWarnings` | Exceeded the allowed number of warnings of a particular type. | `rootErrorCode`: The error code corresponding to the exception that exceeded the required limit.

`maxWarnings`: Maximum number of warnings that are allowed for the corresponding `rootErrorCode`. | -| `TooManyWorkers` | Exceeded the supported number of workers running simultaneously. See the [Limits](#limits) table for the specific limit. | `workers`: The number of simultaneously running workers that exceeded a hard or soft limit. This may be larger than the number of workers in any one stage if multiple stages are running simultaneously.

`maxWorkers`: The hard or soft limit on workers that was exceeded. | -| `NotEnoughMemory` | Insufficient memory to launch a stage. | `serverMemory`: The amount of memory available to a single process.

`serverWorkers`: The number of workers running in a single process.

`serverThreads`: The number of threads in a single process. | -| `WorkerFailed` | A worker task failed unexpectedly. | `errorMsg`

`workerTaskId`: The ID of the worker task. | -| `WorkerRpcFailed` | A remote procedure call to a worker task failed and could not recover. | `workerTaskId`: the id of the worker task | -| `UnknownError` | All other errors. | `message` | +| `BroadcastTablesTooLarge` | The size of the broadcast tables used in the right hand side of the join exceeded the memory reserved for them in a worker task.

Try increasing the peon memory or reducing the size of the broadcast tables. | `maxBroadcastTablesSize`: Memory reserved for the broadcast tables, measured in bytes. | +| `Canceled` | The query was canceled. Common reasons for cancellation:

  • User-initiated shutdown of the controller task via the `/druid/indexer/v1/task/{taskId}/shutdown` API.
  • Restart or failure of the server process that was running the controller task.
| | +| `CannotParseExternalData` | A worker task could not parse data from an external datasource. | `errorMessage`: More details on why parsing failed. | +| `ColumnNameRestricted` | The query uses a restricted column name. | `columnName`: The restricted column name. | +| `ColumnTypeNotSupported` | The column type is not supported. This can be because:

  • Support for writing or reading from a particular column type is not supported.
  • The query attempted to use a column type that is not supported by the frame format. This occurs with ARRAY types, which are not yet implemented for frames.
| `columnName`: The column name with an unsupported type.

`columnType`: The unknown column type. | +| `InsertCannotAllocateSegment` | The controller task could not allocate a new segment ID due to conflict with existing segments or pending segments. Common reasons for such conflicts:

  • Attempting to mix different granularities in the same intervals of the same datasource.
  • Prior ingestions that used non-extendable shard specs.
| `dataSource`

`interval`: The interval for the attempted new segment allocation. | +| `InsertCannotBeEmpty` | An INSERT or REPLACE query did not generate any output rows in a situation where output rows are required for success. This can happen for INSERT or REPLACE queries with `PARTITIONED BY` set to something other than `ALL` or `ALL TIME`. | `dataSource` | +| `InsertCannotOrderByDescending` | An INSERT query contained a `CLUSTERED BY` expression in descending order. Druid's segment generation code only supports ascending order. | `columnName` | +| `InsertCannotReplaceExistingSegment` | A REPLACE query cannot proceed because an existing segment partially overlaps those bounds, and the portion within the bounds is not fully overshadowed by query results.

There are two ways to address this without modifying your query:
  • Shrink the OVERLAP filter to match the query results.
  • Expand the OVERLAP filter to fully contain the existing segment.
| `segmentId`: The existing segment
+| `InsertLockPreempted` | An INSERT or REPLACE query was canceled by a higher-priority ingestion job, such as a real-time ingestion task. | | +| `InsertTimeNull` | An INSERT or REPLACE query encountered a null timestamp in the `__time` field.

This can happen due to using an expression like `TIME_PARSE(timestamp) AS __time` with a timestamp that cannot be parsed. (TIME_PARSE returns null when it cannot parse a timestamp.) In this case, try parsing your timestamps using a different function or pattern.

If your timestamps may genuinely be null, consider using COALESCE to provide a default value. One option is CURRENT_TIMESTAMP, which represents the start time of the job. | +| `InsertTimeOutOfBounds` | A REPLACE query generated a timestamp outside the bounds of the TIMESTAMP parameter for your OVERWRITE WHERE clause.

To avoid this error, verify that the interval you specified is valid. | `interval`: time chunk interval corresponding to the out-of-bounds timestamp |
+| `InvalidNullByte` | A string column included a null byte. Null bytes in strings are not permitted. | `column`: The column that included the null byte |
+| `QueryNotSupported` | QueryKit could not translate the provided native query to a multi-stage query.<br />

This can happen if the query uses features that aren't supported, like GROUPING SETS. | | +| `RowTooLarge` | The query tried to process a row that was too large to write to a single frame. See the [Limits](#limits) table for specific limits on frame size. Note that the effective maximum row size is smaller than the maximum frame size due to alignment considerations during frame writing. | `maxFrameSize`: The limit on the frame size. | +| `TaskStartTimeout` | Unable to launch all the worker tasks in time.

There might be insufficient available slots to start all the worker tasks simultaneously.

Try splitting up the query into smaller chunks with a smaller `maxNumTasks` value. Another option is to increase capacity. | `numTasks`: The number of tasks attempted to launch. |
+| `TooManyBuckets` | Exceeded the maximum number of partition buckets for a stage (5,000 partition buckets).<br /><br />Partition buckets are created for each [`PARTITIONED BY`](#partitioned-by) time chunk for INSERT and REPLACE queries. The most common reason for this error is that your `PARTITIONED BY` is too narrow relative to your data. | `maxBuckets`: The limit on partition buckets. |
+| `TooManyInputFiles` | Exceeded the maximum number of input files or segments per worker (10,000 files or segments).<br />

If you encounter this limit, consider adding more workers, or breaking up your query into smaller queries that process fewer files or segments per query. | `numInputFiles`: The total number of input files/segments for the stage.

`maxInputFiles`: The maximum number of input files/segments per worker per stage.

`minNumWorker`: The minimum number of workers required for a successful run. | +| `TooManyPartitions` | Exceeded the maximum number of partitions for a stage (25,000 partitions).

This can occur with INSERT or REPLACE statements that generate large numbers of segments, since each segment is associated with a partition. If you encounter this limit, consider breaking up your INSERT or REPLACE statement into smaller statements that process less data per statement. | `maxPartitions`: The limit on partitions which was exceeded | +| `TooManyClusteredByColumns` | Exceeded the maximum number of clustering columns for a stage (1,500 columns).

This can occur with `CLUSTERED BY`, `ORDER BY`, or `GROUP BY` with a large number of columns. | `numColumns`: The number of columns requested.

`maxColumns`: The limit on columns which was exceeded.<br /><br />`stage`: The stage number exceeding the limit.<br />

| +| `TooManyColumns` | Exceeded the maximum number of columns for a stage (2,000 columns). | `numColumns`: The number of columns requested.

`maxColumns`: The limit on columns which was exceeded. | +| `TooManyWarnings` | Exceeded the maximum allowed number of warnings of a particular type. | `rootErrorCode`: The error code corresponding to the exception that exceeded the required limit.

`maxWarnings`: Maximum number of warnings that are allowed for the corresponding `rootErrorCode`. | +| `TooManyWorkers` | Exceeded the maximum number of simultaneously-running workers. See the [Limits](#limits) table for more details. | `workers`: The number of simultaneously running workers that exceeded a hard or soft limit. This may be larger than the number of workers in any one stage if multiple stages are running simultaneously.

`maxWorkers`: The hard or soft limit on workers that was exceeded. If this is lower than the hard limit (1,000 workers), then you can increase the limit by adding more memory to each task. | +| `NotEnoughMemory` | Insufficient memory to launch a stage. | `serverMemory`: The amount of memory available to a single process.

`serverWorkers`: The number of workers running in a single process.

`serverThreads`: The number of threads in a single process. | +| `WorkerFailed` | A worker task failed unexpectedly. | `errorMsg`

`workerTaskId`: The ID of the worker task. | +| `WorkerRpcFailed` | A remote procedure call to a worker task failed and could not recover. | `workerTaskId`: the id of the worker task | +| `UnknownError` | All other errors. | `message` | diff --git a/extensions-core/multi-stage-query/src/main/java/org/apache/druid/msq/indexing/error/TooManyBucketsFault.java b/extensions-core/multi-stage-query/src/main/java/org/apache/druid/msq/indexing/error/TooManyBucketsFault.java index 8af20d091910..fdad421e6490 100644 --- a/extensions-core/multi-stage-query/src/main/java/org/apache/druid/msq/indexing/error/TooManyBucketsFault.java +++ b/extensions-core/multi-stage-query/src/main/java/org/apache/druid/msq/indexing/error/TooManyBucketsFault.java @@ -41,7 +41,7 @@ public TooManyBucketsFault(@JsonProperty("maxBuckets") final int maxBuckets) super( CODE, "Too many partition buckets (max = %,d); try breaking your query up into smaller queries or " - + "using a wider segmentGranularity", + + "using a wider PARTITIONED BY", maxBuckets ); this.maxBuckets = maxBuckets; diff --git a/web-console/lib/keywords.js b/web-console/lib/keywords.js index e34b2daf45be..bc81153dd77e 100644 --- a/web-console/lib/keywords.js +++ b/web-console/lib/keywords.js @@ -61,6 +61,9 @@ exports.SQL_KEYWORDS = [ 'REPLACE INTO', 'OVERWRITE', 'RETURNING', + 'OVER', + 'PARTITION BY', + 'WINDOW', ]; exports.SQL_EXPRESSION_PARTS = [ diff --git a/web-console/script/create-sql-docs.js b/web-console/script/create-sql-docs.js index 6af65006f8ef..13ed438915ba 100755 --- a/web-console/script/create-sql-docs.js +++ b/web-console/script/create-sql-docs.js @@ -52,9 +52,7 @@ function convertMarkdownToHtml(markdown) { // Concert to markdown markdown = snarkdown(markdown); - return markdown - .replace(/
/g, '

') // Double up the
s
-    .replace(/<a[^>]*>(.*?)<\/a>/g, '$1'); // Remove links
+  return markdown.replace(/<a[^>]*>(.*?)<\/a>/g, '$1'); // Remove links
 }

 const readDoc = async () => {
diff --git a/web-console/src/bootstrap/react-table-defaults.tsx b/web-console/src/bootstrap/react-table-defaults.tsx
index 4c31928064cd..139a13bcd5a6 100644
--- a/web-console/src/bootstrap/react-table-defaults.tsx
+++ b/web-console/src/bootstrap/react-table-defaults.tsx
@@ -53,12 +53,12 @@ export function bootstrapReactTable() {
         .map((row: any) => row[column.id]);
       const previewCount = countBy(previewValues);
       return (
-
+
{Object.keys(previewCount) .sort() .map(v => `${v} (${previewCount[v]})`) .join(', ')} - +
); }, defaultPageSize: 20, diff --git a/web-console/src/components/segment-timeline/segment-timeline.tsx b/web-console/src/components/segment-timeline/segment-timeline.tsx index c138e82dff25..f8cef06189b6 100644 --- a/web-console/src/components/segment-timeline/segment-timeline.tsx +++ b/web-console/src/components/segment-timeline/segment-timeline.tsx @@ -278,7 +278,7 @@ ORDER BY "start" DESC`; intervals = await queryDruidSql({ query: SegmentTimeline.getSqlQuery(startDate, endDate), }); - datasources = uniq(intervals.map(r => r.datasource)); + datasources = uniq(intervals.map(r => r.datasource).sort()); } else if (capabilities.hasCoordinatorAccess()) { const startIso = startDate.toISOString(); diff --git a/web-console/src/dialogs/compaction-dialog/compaction-dialog.scss b/web-console/src/dialogs/compaction-dialog/compaction-dialog.scss index e3ca37b14ea0..499df985c9e1 100644 --- a/web-console/src/dialogs/compaction-dialog/compaction-dialog.scss +++ b/web-console/src/dialogs/compaction-dialog/compaction-dialog.scss @@ -23,6 +23,11 @@ height: 80vh; } + .legacy-callout { + width: auto; + margin: 10px 15px 0; + } + .form-json-selector { margin: 15px; } diff --git a/web-console/src/dialogs/compaction-dialog/compaction-dialog.tsx b/web-console/src/dialogs/compaction-dialog/compaction-dialog.tsx index d63501b1b0b3..3b5456e7d049 100644 --- a/web-console/src/dialogs/compaction-dialog/compaction-dialog.tsx +++ b/web-console/src/dialogs/compaction-dialog/compaction-dialog.tsx @@ -16,11 +16,16 @@ * limitations under the License. */ -import { Button, Classes, Dialog, Intent } from '@blueprintjs/core'; +import { Button, Callout, Classes, Code, Dialog, Intent } from '@blueprintjs/core'; import React, { useState } from 'react'; import { AutoForm, FormJsonSelector, FormJsonTabs, JsonInput } from '../../components'; -import { COMPACTION_CONFIG_FIELDS, CompactionConfig } from '../../druid-models'; +import { + COMPACTION_CONFIG_FIELDS, + CompactionConfig, + compactionConfigHasLegacyInputSegmentSizeBytesSet, +} from '../../druid-models'; +import { deepDelete, formatBytesCompact } from '../../utils'; import './compaction-dialog.scss'; @@ -55,13 +60,29 @@ export const CompactionDialog = React.memo(function CompactionDialog(props: Comp canOutsideClickClose={false} title={`Compaction config: ${datasource}`} > + {compactionConfigHasLegacyInputSegmentSizeBytesSet(currentConfig) && ( + +

+ Your current config sets the legacy inputSegmentSizeBytes to{' '}
+ {formatBytesCompact(currentConfig.inputSegmentSizeBytes!)}. It is
+ recommended to unset this property.
+

+

+
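
The compaction-dialog hunk above imports `compactionConfigHasLegacyInputSegmentSizeBytesSet` and `deepDelete`, but the diff cuts off before showing how they are defined or used. As a rough sketch only, the predicate presumably just checks whether the legacy `inputSegmentSizeBytes` property is set on the config; the `CompactionConfig` shape and the exact signature below are assumptions for illustration, not the actual `druid-models` implementation.

```typescript
// Hypothetical sketch only -- the real helper and CompactionConfig type live in
// web-console/src/druid-models and are not shown in this diff.
interface CompactionConfig {
  dataSource: string;
  inputSegmentSizeBytes?: number; // legacy property that the warning Callout refers to
  [key: string]: unknown;
}

// True when the legacy inputSegmentSizeBytes property is explicitly set on the
// config, which is the condition the dialog uses to decide whether to render
// the warning Callout.
export function compactionConfigHasLegacyInputSegmentSizeBytesSet(
  config: CompactionConfig | undefined,
): boolean {
  return config != null && config.inputSegmentSizeBytes != null;
}
```

A matching "unset" action in the dialog would then presumably call something like `deepDelete(currentConfig, 'inputSegmentSizeBytes')`, which would line up with the `deepDelete` import added in the same hunk.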