OPTIONS.md

Main options:

  --help                                                                produce help message
  -V [ --version ]                                                      print version information and exit
  --version-clean                                                       print version in machine-readable
                                                                        format and exit
  -C [ --config-file ] arg                                              config-file path
  -q [ --query ] arg                                                    query; can be specified multiple times
                                                                        (--query "SELECT 1" --query "SELECT
                                                                        2"...)
  --queries-file arg                                                    file path with queries to execute;
                                                                        multiple files can be specified
                                                                        (--queries-file file1 file2...)
  -n [ --multiquery ]                                                   If specified, multiple queries
                                                                        separated by semicolons can be listed
                                                                        after --query. For convenience, it is
                                                                        also possible to omit --query and pass
                                                                        the queries directly after
                                                                        --multiquery.
  -m [ --multiline ]                                                    If specified, allow multiline queries
                                                                        (do not send the query on Enter)
  -d [ --database ] arg                                                 database
  --query_kind arg (=initial_query)                                     One of initial_query/secondary_query/
                                                                        no_query
  --query_id arg                                                        query_id
  --history_file arg                                                    path to history file
  --stage arg (=complete)                                               Request query processing up to the
                                                                        specified stage: complete,
                                                                        fetch_columns, with_mergeable_state,
                                                                        with_mergeable_state_after_aggregation,
                                                                        with_mergeable_state_after_aggregation_and_limit
  --progress [=arg(=tty)] (=default)                                    Print query execution progress. To
                                                                        the TTY: tty|on|1|true|yes; to STDERR
                                                                        in non-interactive mode: err;
                                                                        disabled: off|0|false|no; default:
                                                                        shown on an interactive TTY, off
                                                                        otherwise
  -A [ --disable_suggestion ]                                           Disable loading suggestion data. Note
                                                                        that suggestion data is loaded
                                                                        asynchronously through a second
                                                                        connection to the ClickHouse server.
                                                                        It is also reasonable to disable
                                                                        suggestions if you want to paste a
                                                                        query with TAB characters. The
                                                                        shorthand -A is for those used to the
                                                                        mysql client.
  -t [ --time ]                                                         print query execution time to stderr in
                                                                        non-interactive mode (for benchmarks)
  --echo                                                                in batch mode, print query before
                                                                        execution
  --verbose                                                             print query and other debugging info
  --log-level arg                                                       log level
  --server_logs_file arg                                                put server logs into specified file
  --suggestion_limit arg (=10000)                                       Suggestion limit for how many
                                                                        databases, tables and columns to fetch.
  -f [ --format ] arg                                                   default output format
  -E [ --vertical ]                                                     vertical output format, same as
                                                                        --format=Vertical or FORMAT Vertical or
                                                                        \G at end of command
  --highlight arg (=1)                                                  enable or disable basic syntax
                                                                        highlight in interactive command line
  --ignore-error                                                        do not stop processing in multiquery
                                                                        mode
  --stacktrace                                                          print stack traces of exceptions
  --hardware-utilization                                                print hardware utilization information
                                                                        in progress bar
  --print-profile-events                                                Print `ProfileEvents` packets
  --profile-events-delay-ms arg (=0)                                    Delay between printing `ProfileEvents`
                                                                        packets (-1 - print only totals, 0 -
                                                                        print every single packet)
  --processed-rows                                                      print the number of locally processed
                                                                        rows
  --interactive                                                         Process queries-file or --query query
                                                                        and start interactive mode
  --pager arg                                                           Pipe all output into this command (less
                                                                        or similar)
  --max_memory_usage_in_client arg                                      Set memory limit in client/local server
  -N [ --table ] arg                                                    name of the initial table
  -S [ --structure ] arg                                                structure of the initial table (list of
                                                                        column and type names)
  -f [ --file ] arg                                                     path to file with data of the initial
                                                                        table (stdin if not specified)
  --input-format arg                                                    input format of the initial table data
  --output-format arg                                                   default output format
  --logger.console [=arg(=1)]                                           Log to console
  --logger.log arg                                                      Log file name
  --logger.level arg                                                    Log level
  --no-system-tables                                                    do not attach system tables (better
                                                                        startup time)
  --path arg                                                            Storage path
  --only-system-tables                                                  attach only system tables from
                                                                        specified path
  --top_level_domains_path arg                                          Path to lists with custom TLDs
  --dialect arg                                                         Which dialect will be used to parse
                                                                        query
  --min_compress_block_size arg                                         The actual size of the block to
                                                                        compress: if the uncompressed data is
                                                                        less than max_compress_block_size,
                                                                        the block size is no less than this
                                                                        value and no less than the volume of
                                                                        data for one mark.
  --max_compress_block_size arg                                         The maximum size of blocks of
                                                                        uncompressed data before compressing
                                                                        for writing to a table.
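
Taken together, these two compression settings bound when accumulated column data is flushed into a compressed block. Below is a minimal Python sketch of such a flushing rule; it is a hypothetical illustration with invented names and default values, not ClickHouse's actual implementation:

```python
def plan_flushes(chunk_sizes, mark_bytes, min_block=65536, max_block=1048576):
    """Accumulate incoming chunk sizes and emit a compressed-block size
    whenever the buffered data reaches the thresholds: at least min_block
    bytes and one mark's worth of data, or unconditionally at max_block."""
    blocks, buffered = [], 0
    for size in chunk_sizes:
        buffered += size
        if (buffered >= min_block and buffered >= mark_bytes) or buffered >= max_block:
            blocks.append(buffered)
            buffered = 0
    if buffered:
        blocks.append(buffered)  # flush whatever remains at the end
    return blocks
```

With the defaults above, four 30 KB chunks would be flushed as one ~90 KB block (the first size to cross min_block) plus the 30 KB remainder.
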
  --max_block_size arg                                                  Maximum block size for reading
  --max_insert_block_size arg                                           The maximum block size for insertion,
                                                                        if we control the creation of blocks
                                                                        for insertion.
  --min_insert_block_size_rows arg                                      Squash blocks passed to INSERT query to
                                                                        specified size in rows, if blocks are
                                                                        not big enough.
  --min_insert_block_size_bytes arg                                     Squash blocks passed to INSERT query to
                                                                        specified size in bytes, if blocks are
                                                                        not big enough.
  --min_insert_block_size_rows_for_materialized_views arg               Like min_insert_block_size_rows, but
                                                                        applied only during pushing to
                                                                        MATERIALIZED VIEW (default:
                                                                        min_insert_block_size_rows)
  --min_insert_block_size_bytes_for_materialized_views arg              Like min_insert_block_size_bytes, but
                                                                        applied only during pushing to
                                                                        MATERIALIZED VIEW (default:
                                                                        min_insert_block_size_bytes)
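
The squashing behavior described by the min_insert_block_size_* settings can be sketched as follows. This is a hypothetical Python illustration (`squash_blocks` is an invented helper, not ClickHouse code):

```python
def squash_blocks(blocks, min_rows, min_bytes):
    """Merge consecutive small (rows, bytes) blocks and emit a combined
    block once either threshold is reached; pass the remainder through."""
    out, rows, nbytes = [], 0, 0
    for r, b in blocks:
        rows += r
        nbytes += b
        if rows >= min_rows or nbytes >= min_bytes:
            out.append((rows, nbytes))
            rows = nbytes = 0
    if rows:
        out.append((rows, nbytes))  # last block may stay under the thresholds
    return out
```

For example, five 100-row blocks with min_rows=250 would be squashed into a 300-row block followed by a 200-row block.
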
  --max_joined_block_size_rows arg                                      Maximum block size for JOIN result (if
                                                                        join algorithm supports it). 0 means
                                                                        unlimited.
  --max_insert_threads arg                                              The maximum number of threads to
                                                                        execute the INSERT SELECT query.
                                                                        Values 0 or 1 mean that INSERT SELECT
                                                                        is not run in parallel. Higher values
                                                                        will lead to higher memory usage.
                                                                        Parallel INSERT SELECT takes effect
                                                                        only if the SELECT part runs in
                                                                        parallel; see the 'max_threads'
                                                                        setting.
  --max_insert_delayed_streams_for_parallel_write arg                   The maximum number of streams
                                                                        (columns) for which the final part
                                                                        flush is delayed. Default: auto (1000
                                                                        if the underlying storage supports
                                                                        parallel writes, for example S3;
                                                                        disabled otherwise)
  --max_final_threads arg                                               The maximum number of threads to read
                                                                        from table with FINAL.
  --max_threads_for_indexes arg                                         The maximum number of threads to
                                                                        process indices.
  --max_threads arg                                                     The maximum number of threads to
                                                                        execute the request. By default, it is
                                                                        determined automatically.
  --use_concurrency_control arg                                         Respect the server's concurrency
                                                                        control (see the
                                                                        `concurrent_threads_soft_limit_num`
                                                                        and
                                                                        `concurrent_threads_soft_limit_ratio_to_cores`
                                                                        global server settings). If disabled,
                                                                        it allows using a larger number of
                                                                        threads even if the server is
                                                                        overloaded (not recommended for
                                                                        normal usage, and needed mostly for
                                                                        tests).
  --max_download_threads arg                                            The maximum number of threads to
                                                                        download data (e.g. for URL engine).
  --max_download_buffer_size arg                                        The maximal size of buffer for parallel
                                                                        downloading (e.g. for URL engine) per
                                                                        each thread.
  --max_read_buffer_size arg                                            The maximum size of the buffer to read
                                                                        from the filesystem.
  --max_read_buffer_size_local_fs arg                                   The maximum size of the buffer to read
                                                                        from local filesystem. If set to 0 then
                                                                        max_read_buffer_size will be used.
  --max_read_buffer_size_remote_fs arg                                  The maximum size of the buffer to read
                                                                        from remote filesystem. If set to 0
                                                                        then max_read_buffer_size will be used.
  --max_distributed_connections arg                                     The maximum number of connections for
                                                                        distributed processing of one query
                                                                        (should be greater than max_threads).
  --max_query_size arg                                                  The maximum number of bytes of a query
                                                                        string parsed by the SQL parser. Data
                                                                        in the VALUES clause of INSERT queries
                                                                        is processed by a separate stream
                                                                        parser (that consumes O(1) RAM) and not
                                                                        affected by this restriction.
  --interactive_delay arg                                               The interval in microseconds to check
                                                                        if the request is cancelled, and to
                                                                        send progress info.
  --connect_timeout arg                                                 Connection timeout if there are no
                                                                        replicas.
  --handshake_timeout_ms arg                                            Timeout for receiving HELLO packet from
                                                                        replicas.
  --connect_timeout_with_failover_ms arg                                Connection timeout for selecting first
                                                                        healthy replica.
  --connect_timeout_with_failover_secure_ms arg                         Connection timeout for selecting first
                                                                        healthy replica (for secure
                                                                        connections).
  --receive_timeout arg                                                 Timeout for receiving data from the
                                                                        network, in seconds. If no bytes were
                                                                        received in this interval, an
                                                                        exception is thrown. If you set this
                                                                        setting on the client, the
                                                                        'send_timeout' for the socket will
                                                                        also be set on the corresponding
                                                                        connection end on the server.
  --send_timeout arg                                                    Timeout for sending data to the
                                                                        network, in seconds. If the client
                                                                        needs to send some data but was not
                                                                        able to send any bytes in this
                                                                        interval, an exception is thrown. If
                                                                        you set this setting on the client,
                                                                        the 'receive_timeout' for the socket
                                                                        will also be set on the corresponding
                                                                        connection end on the server.
  --tcp_keep_alive_timeout arg                                          The time in seconds the connection
                                                                        needs to remain idle before TCP starts
                                                                        sending keepalive probes
  --hedged_connection_timeout_ms arg                                    Connection timeout for establishing
                                                                        connection with replica for Hedged
                                                                        requests
  --receive_data_timeout_ms arg                                         Connection timeout for receiving first
                                                                        packet of data or packet with positive
                                                                        progress from replica
  --use_hedged_requests arg                                             Use hedged requests for distributed
                                                                        queries
  --allow_changing_replica_until_first_data_packet arg                  Allow HedgedConnections to change
                                                                        replica until receiving first data
                                                                        packet
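
The hedged-request settings above describe, roughly, this strategy: query one replica first and, if nothing arrives within the hedge timeout, start a parallel request to another replica, taking whichever answers first. Below is a simplified Python sketch of the idea; it is hypothetical and much simpler than ClickHouse's actual HedgedConnections machinery:

```python
import concurrent.futures as cf

def hedged_fetch(replicas, hedge_timeout):
    """Call the first replica; if it produces nothing within hedge_timeout
    seconds, launch the next replica in parallel, and so on. Return the
    first result that completes."""
    with cf.ThreadPoolExecutor(max_workers=len(replicas)) as pool:
        futures = [pool.submit(replicas[0])]
        for backup in replicas[1:]:
            done, _ = cf.wait(futures, timeout=hedge_timeout,
                              return_when=cf.FIRST_COMPLETED)
            if done:
                break  # a replica already answered; no need to hedge further
            futures.append(pool.submit(backup))  # hedge with another replica
        done, _ = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
        return next(iter(done)).result()
```

The trade-off is classic hedging: tail latency drops because a slow replica no longer blocks the query, at the cost of occasional duplicate work on a second replica.
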
  --queue_max_wait_ms arg                                               The wait time in the request queue, if
                                                                        the number of concurrent requests
                                                                        exceeds the maximum.
  --connection_pool_max_wait_ms arg                                     The wait time when the connection pool
                                                                        is full.
  --replace_running_query_max_wait_ms arg                               The wait time for running query with
                                                                        the same query_id to finish when
                                                                        setting 'replace_running_query' is
                                                                        active.
  --kafka_max_wait_ms arg                                               The wait time for reading from Kafka
                                                                        before retry.
  --rabbitmq_max_wait_ms arg                                            The wait time for reading from RabbitMQ
                                                                        before retry.
  --poll_interval arg                                                   Block at the query wait loop on the
                                                                        server for the specified number of
                                                                        seconds.
  --idle_connection_timeout arg                                         Close idle TCP connections after
                                                                        specified number of seconds.
  --distributed_connections_pool_size arg                               Maximum number of connections with one
                                                                        remote server in the pool.
  --connections_with_failover_max_tries arg                             The maximum number of attempts to
                                                                        connect to replicas.
  --s3_strict_upload_part_size arg                                      The exact size of part to upload
                                                                        during multipart upload to S3 (some
                                                                        implementations do not support
                                                                        variable-size parts).
  --s3_min_upload_part_size arg                                         The minimum size of part to upload
                                                                        during multipart upload to S3.
  --s3_max_upload_part_size arg                                         The maximum size of part to upload
                                                                        during multipart upload to S3.
  --s3_upload_part_size_multiply_factor arg                             Multiply s3_min_upload_part_size by
                                                                        this factor each time
                                                                        s3_multiply_parts_count_threshold parts
                                                                        were uploaded from a single write to
                                                                        S3.
  --s3_upload_part_size_multiply_parts_count_threshold arg              Each time this number of parts was
                                                                        uploaded to S3,
                                                                        s3_min_upload_part_size is multiplied
                                                                        by s3_upload_part_size_multiply_factor.
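
Together, s3_min_upload_part_size, s3_upload_part_size_multiply_factor, and s3_upload_part_size_multiply_parts_count_threshold produce geometrically growing part sizes, which keeps the total part count bounded for very large uploads. A small Python sketch of the arithmetic (a hypothetical helper; the cap parameter mirrors s3_max_upload_part_size):

```python
def upload_part_sizes(total_parts, min_part_size, multiply_factor,
                      parts_count_threshold, max_part_size):
    """Return the size used for each of total_parts uploaded parts: every
    parts_count_threshold parts, the size is multiplied by multiply_factor,
    capped at max_part_size."""
    size, sizes = min_part_size, []
    for n in range(total_parts):
        if n > 0 and n % parts_count_threshold == 0:
            size = min(size * multiply_factor, max_part_size)
        sizes.append(size)
    return sizes
```

For instance, starting at 16 with factor 2, threshold 3, and cap 64, ten parts would be sized 16, 16, 16, 32, 32, 32, 64, 64, 64, 64.
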
  --s3_max_inflight_parts_for_one_file arg                              The maximum number of concurrently
                                                                        loaded parts in a multipart upload
                                                                        request. 0 means unlimited.
  --s3_max_single_part_upload_size arg                                  The maximum size of object to upload
                                                                        using singlepart upload to S3.
  --azure_max_single_part_upload_size arg                               The maximum size of object to upload
                                                                        using singlepart upload to Azure blob
                                                                        storage.
  --s3_max_single_read_retries arg                                      The maximum number of retries during
                                                                        single S3 read.
  --azure_max_single_read_retries arg                                   The maximum number of retries during
                                                                        single Azure blob storage read.
  --s3_max_unexpected_write_error_retries arg                           The maximum number of retries in case
                                                                        of unexpected errors during S3 write.
  --s3_max_redirects arg                                                Max number of S3 redirect hops
                                                                        allowed.
  --s3_max_connections arg                                              The maximum number of connections per
                                                                        server.
  --s3_max_get_rps arg                                                  Limit on S3 GET request per second rate
                                                                        before throttling. Zero means
                                                                        unlimited.
  --s3_max_get_burst arg                                                Max number of requests that can be
                                                                        issued simultaneously before hitting
                                                                        the requests-per-second limit. By
                                                                        default (0) it equals
                                                                        `s3_max_get_rps`
  --s3_max_put_rps arg                                                  Limit on S3 PUT request per second rate
                                                                        before throttling. Zero means
                                                                        unlimited.
  --s3_max_put_burst arg                                                Max number of requests that can be
                                                                        issued simultaneously before hitting
                                                                        the requests-per-second limit. By
                                                                        default (0) it equals
                                                                        `s3_max_put_rps`
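
The rps/burst pairs above follow the classic token-bucket pattern: tokens accrue at the rps rate up to a ceiling of burst, and each request consumes one token. A minimal Python sketch of the pattern (a generic illustration, not ClickHouse's actual throttler):

```python
class TokenBucket:
    """Tokens accrue at `rate` per second up to `burst`; each request
    consumes one token, or is rejected until tokens accrue again."""
    def __init__(self, rate, burst, now=0.0):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = float(burst), now

    def try_acquire(self, now):
        # replenish tokens for the elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

With rate=10 and burst=2, two requests can be issued back to back, after which roughly one request per 0.1 s is admitted.
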
  --s3_list_object_keys_size arg                                        Maximum number of files that can be
                                                                        returned in a batch by a ListObject
                                                                        request
  --azure_list_object_keys_size arg                                     Maximum number of files that can be
                                                                        returned in a batch by a ListObject
                                                                        request
  --s3_truncate_on_insert arg                                           Enables or disables truncate before
                                                                        insert in s3 engine tables.
  --azure_truncate_on_insert arg                                        Enables or disables truncate before
                                                                        insert in azure engine tables.
  --s3_create_new_file_on_insert arg                                    Enables or disables creating a new file
                                                                        on each insert in s3 engine tables
  --s3_skip_empty_files arg                                             Allow to skip empty files in s3 table
                                                                        engine
  --azure_create_new_file_on_insert arg                                 Enables or disables creating a new file
                                                                        on each insert in azure engine tables
  --s3_check_objects_after_upload arg                                   Check each object uploaded to S3 with
                                                                        a HEAD request to be sure that the
                                                                        upload was successful
  --s3_allow_parallel_part_upload arg                                   Use multiple threads for s3 multipart
                                                                        upload. It may lead to slightly higher
                                                                        memory usage
  --s3_throw_on_zero_files_match arg                                    Throw an error when the ListObjects
                                                                        request cannot match any files
  --s3_retry_attempts arg                                               Setting for Aws::Client::RetryStrategy,
                                                                        Aws::Client does retries itself, 0
                                                                        means no retries
  --s3_request_timeout_ms arg                                           Idleness timeout for sending and
                                                                        receiving data to/from S3. Fail if a
                                                                        single TCP read or write call blocks
                                                                        for this long.
  --s3_http_connection_pool_size arg                                    How many reusable open connections to
                                                                        keep per S3 endpoint. Only applies to
                                                                        the S3 table engine and table function,
                                                                        not to S3 disks (for disks, use disk
                                                                        config instead). Global setting, can
                                                                        only be set in config, overriding it
                                                                        per session or per query has no effect.
  --enable_s3_requests_logging arg                                      Enable very explicit logging of S3
                                                                        requests. Makes sense for debug only.
  --s3queue_default_zookeeper_path arg                                  Default zookeeper path prefix for
                                                                        S3Queue engine
  --s3queue_enable_logging_to_s3queue_log arg                           Enable writing to system.s3queue_log.
                                                                        The value can be overwritten per table
                                                                        with table settings
  --hdfs_replication arg                                                The replication factor can be
                                                                        specified when the HDFS file is
                                                                        created.
  --hdfs_truncate_on_insert arg                                         Enables or disables truncate before
                                                                        insert in hdfs engine tables
  --hdfs_create_new_file_on_insert arg                                  Enables or disables creating a new file
                                                                        on each insert in hdfs engine tables
  --hdfs_skip_empty_files arg                                           Allows skipping empty files in the
                                                                        hdfs table engine
  --hsts_max_age arg                                                    Expiry time for HSTS. 0 means HSTS is
                                                                        disabled.
  --extremes arg                                                        Calculate minimums and maximums of the
                                                                        result columns. They can be output in
                                                                        JSON formats.
  --use_uncompressed_cache arg                                          Whether to use the cache of
                                                                        uncompressed blocks.
  --replace_running_query arg                                           Whether a running query with the same
                                                                        query_id as the new one should be
                                                                        canceled.
  --max_remote_read_network_bandwidth arg                               The maximum speed of data exchange over
                                                                        the network in bytes per second for
                                                                        read.
  --max_remote_write_network_bandwidth arg                              The maximum speed of data exchange over
                                                                        the network in bytes per second for
                                                                        write.
  --max_local_read_bandwidth arg                                        The maximum speed of local reads in
                                                                        bytes per second.
  --max_local_write_bandwidth arg                                       The maximum speed of local writes in
                                                                        bytes per second.
  --stream_like_engine_allow_direct_select arg                          Allow direct SELECT query for Kafka,
                                                                        RabbitMQ, FileLog, Redis Streams and
                                                                        NATS engines. In case there are
                                                                        attached materialized views, SELECT
                                                                        query is not allowed even if this
                                                                        setting is enabled.
  --stream_like_engine_insert_queue arg                                 When a stream-like engine reads from
                                                                        multiple queues, the user needs to
                                                                        select one queue to insert into when
                                                                        writing. Used by Redis Streams and
                                                                        NATS.
  --distributed_directory_monitor_sleep_time_ms arg                     Sleep time for StorageDistributed
                                                                        DirectoryMonitors; in case of errors
                                                                        the delay grows exponentially.
  --distributed_directory_monitor_max_sleep_time_ms arg                 Maximum sleep time for
                                                                        StorageDistributed DirectoryMonitors,
                                                                        it limits exponential growth too.
  --distributed_directory_monitor_batch_inserts arg                     Should StorageDistributed
                                                                        DirectoryMonitors try to batch
                                                                        individual inserts into bigger ones.
  --distributed_directory_monitor_split_batch_on_failure arg            Should StorageDistributed
                                                                        DirectoryMonitors try to split batches
                                                                        into smaller ones in case of failures.
  --optimize_move_to_prewhere arg                                       Allows disabling WHERE to PREWHERE
                                                                        optimization in SELECT queries from
                                                                        MergeTree.
  --optimize_move_to_prewhere_if_final arg                              If the query has `FINAL`, the
                                                                        `move_to_prewhere` optimization is not
                                                                        always correct; it is enabled only if
                                                                        both `optimize_move_to_prewhere` and
                                                                        `optimize_move_to_prewhere_if_final`
                                                                        are turned on
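Any setting in this list can be passed to clickhouse-client as a `--<name>=<value>` flag. A minimal sketch of composing such an invocation for the two PREWHERE settings (the table name and query are hypothetical):

```python
import shlex

# Sketch: passing settings from this list as command-line flags.
settings = {
    "optimize_move_to_prewhere": 1,
    "optimize_move_to_prewhere_if_final": 1,
}
cmd = ["clickhouse-client"]
cmd += [f"--{name}={value}" for name, value in settings.items()]
cmd += ["--query", "SELECT * FROM my_table FINAL WHERE key = 1"]
print(shlex.join(cmd))
```

The same settings could equally go in a `SETTINGS` clause at the end of the query text.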
  --move_all_conditions_to_prewhere arg                                 Move all viable conditions from WHERE
                                                                        to PREWHERE
  --enable_multiple_prewhere_read_steps arg                             Move more conditions from WHERE to
                                                                        PREWHERE and do reads from disk and
                                                                        filtering in multiple steps if there
                                                                        are multiple conditions combined with
                                                                        AND
  --move_primary_key_columns_to_end_of_prewhere arg                     Move PREWHERE conditions containing
                                                                        primary key columns to the end of AND
                                                                        chain. It is likely that these
                                                                        conditions are taken into account
                                                                        during primary key analysis and thus
                                                                        will not contribute a lot to PREWHERE
                                                                        filtering.
  --alter_sync arg                                                      Wait for actions to manipulate the
                                                                        partitions. 0 - do not wait, 1 - wait
                                                                        for execution only of itself, 2 - wait
                                                                        for everyone.
  --replication_alter_partitions_sync arg                               Wait for actions to manipulate the
                                                                        partitions. 0 - do not wait, 1 - wait
                                                                        for execution only of itself, 2 - wait
                                                                        for everyone.
  --replication_wait_for_inactive_replica_timeout arg                   Wait for inactive replica to execute
                                                                        ALTER/OPTIMIZE. Time in seconds, 0 - do
                                                                        not wait, negative - wait for unlimited
                                                                        time.
  --load_balancing arg                                                  Which replicas (among healthy replicas)
                                                                        to preferably send a query to (on the
                                                                        first attempt) for distributed
                                                                        processing.
  --load_balancing_first_offset arg                                     Which replica to preferably send a
                                                                        query to when the FIRST_OR_RANDOM
                                                                        load balancing strategy is used.
  --totals_mode arg                                                     How to calculate TOTALS when HAVING is
                                                                        present, as well as when
                                                                        max_rows_to_group_by and
                                                                        group_by_overflow_mode = 'any' are
                                                                        present.
  --totals_auto_threshold arg                                           The threshold for totals_mode = 'auto'.
  --allow_suspicious_low_cardinality_types arg                          In CREATE TABLE statement allows
                                                                        specifying LowCardinality modifier for
                                                                        types of small fixed size (8 or less).
                                                                        Enabling this may increase merge times
                                                                        and memory consumption.
  --allow_suspicious_fixed_string_types arg                             In CREATE TABLE statement allows
                                                                        creating columns of type FixedString(n)
                                                                        with n > 256. FixedString with length
                                                                        > 256 is suspicious and most likely
                                                                        indicates misuse
  --allow_suspicious_indices arg                                        Reject primary/secondary indexes and
                                                                        sorting keys with identical expressions
  --compile_expressions arg                                             Compile some scalar functions and
                                                                        operators to native code.
  --min_count_to_compile_expression arg                                 The number of identical expressions
                                                                        before they are JIT-compiled
  --compile_aggregate_expressions arg                                   Compile aggregate functions to native
                                                                        code.
  --min_count_to_compile_aggregate_expression arg                       The number of identical aggregate
                                                                        expressions before they are
                                                                        JIT-compiled
  --compile_sort_description arg                                        Compile sort description to native
                                                                        code.
  --min_count_to_compile_sort_description arg                           The number of identical sort
                                                                        descriptions before they are
                                                                        JIT-compiled
  --group_by_two_level_threshold arg                                    The number of keys from which
                                                                        two-level aggregation starts. 0 - the
                                                                        threshold is not set.
  --group_by_two_level_threshold_bytes arg                              The size of the aggregation state in
                                                                        bytes from which two-level
                                                                        aggregation starts. 0 - the threshold
                                                                        is not set. Two-level aggregation is
                                                                        used when at least one of the
                                                                        thresholds is triggered.
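The two thresholds above combine with OR: two-level aggregation kicks in once either the key count or the state size crosses its non-zero threshold. An illustrative sketch of that decision (not ClickHouse code; the function name is made up):

```python
def uses_two_level(num_keys: int, state_bytes: int,
                   threshold_keys: int, threshold_bytes: int) -> bool:
    """True when two-level aggregation would start.

    A threshold of 0 means 'not set'; the two thresholds are
    combined with OR, as the help text above describes.
    """
    by_keys = threshold_keys > 0 and num_keys >= threshold_keys
    by_bytes = threshold_bytes > 0 and state_bytes >= threshold_bytes
    return by_keys or by_bytes

# Either trigger alone is enough; 0 disables a threshold entirely.
print(uses_two_level(100_000, 0, 10_000, 0))  # True  (key count hit)
print(uses_two_level(10, 0, 10_000, 0))       # False (neither hit)
```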
  --distributed_aggregation_memory_efficient arg                        Whether the memory-saving mode of
                                                                        distributed aggregation is enabled.
  --aggregation_memory_efficient_merge_threads arg                      Number of threads to use for merging
                                                                        intermediate aggregation results in
                                                                        memory-efficient mode. The bigger it
                                                                        is, the more memory is consumed. 0
                                                                        means the same as 'max_threads'.
  --enable_memory_bound_merging_of_aggregation_results arg              Enable memory bound merging strategy
                                                                        for aggregation.
  --enable_positional_arguments arg                                     Enable positional arguments in ORDER
                                                                        BY, GROUP BY and LIMIT BY
  --enable_extended_results_for_datetime_functions arg                  Enable date functions like
                                                                        toLastDayOfMonth return Date32 results
                                                                        (instead of Date results) for
                                                                        Date32/DateTime64 arguments.
  --allow_nonconst_timezone_arguments arg                               Allow non-const timezone arguments in
                                                                        certain time-related functions like
                                                                        toTimeZone(), fromUnixTimestamp*(),
                                                                        snowflakeToDateTime*()
  --group_by_use_nulls arg                                              Treat columns mentioned in ROLLUP, CUBE
                                                                        or GROUPING SETS as Nullable
  --max_parallel_replicas arg                                           The maximum number of replicas of each
                                                                        shard used when the query is executed.
                                                                        For consistency (to get different parts
                                                                        of the same partition), this option
                                                                        only works for the specified sampling
                                                                        key. The lag of the replicas is not
                                                                        controlled.
  --parallel_replicas_count arg                                         This is an internal setting that
                                                                        should not be used directly; it
                                                                        represents an implementation detail
                                                                        of the 'parallel replicas' mode. The
                                                                        initiator server sets it automatically
                                                                        for distributed queries to the number
                                                                        of parallel replicas participating in
                                                                        query processing.
  --parallel_replica_offset arg                                         This is an internal setting that
                                                                        should not be used directly; it
                                                                        represents an implementation detail
                                                                        of the 'parallel replicas' mode. The
                                                                        initiator server sets it automatically
                                                                        for distributed queries to the index
                                                                        of the replica participating in query
                                                                        processing among parallel replicas.
  --parallel_replicas_custom_key arg                                    Custom key assigning work to replicas
                                                                        when parallel replicas are used.
  --parallel_replicas_custom_key_filter_type arg                        Type of filter to use with custom key
                                                                        for parallel replicas. default - use
                                                                        modulo operation on the custom key,
                                                                        range - use range filter on custom key
                                                                        using all possible values for the value
                                                                        type of custom key.
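The difference between the two filter types can be sketched in a few lines: `default` buckets rows by the custom key modulo the replica count, while `range` splits the key's whole value domain into contiguous buckets. An illustrative sketch, not ClickHouse's implementation:

```python
def replica_by_modulo(key: int, n_replicas: int) -> int:
    # 'default' filter type: modulo operation on the custom key.
    return key % n_replicas

def replica_by_range(key: int, n_replicas: int, domain: int = 2**64) -> int:
    # 'range' filter type: split [0, domain) into n_replicas equal
    # buckets covering all possible values of the key's type.
    return min(key * n_replicas // domain, n_replicas - 1)

print(replica_by_modulo(10, 3))    # 1
print(replica_by_range(2**63, 2))  # 1 (key is in the upper half)
```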
  --cluster_for_parallel_replicas arg                                   Cluster for the shard in which the
                                                                        current server is located
  --allow_experimental_parallel_reading_from_replicas arg               Use all the replicas from a shard for
                                                                        SELECT query execution. Reading is
                                                                        parallelized and coordinated
                                                                        dynamically. 0 - disabled, 1 - enabled,
                                                                        silently disabled in case of failure;
                                                                        2 - enabled, throw an exception in
                                                                        case of failure
  --parallel_replicas_single_task_marks_count_multiplier arg            A multiplier applied when calculating
                                                                        the minimal number of marks to
                                                                        retrieve from the coordinator. Applied
                                                                        only to remote replicas.
  --parallel_replicas_for_non_replicated_merge_tree arg                 If true, ClickHouse will use parallel
                                                                        replicas algorithm also for
                                                                        non-replicated MergeTree tables
  --parallel_replicas_min_number_of_granules_to_enable arg              If the number of marks to read is less
                                                                        than the value of this setting -
                                                                        parallel replicas will be disabled
  --skip_unavailable_shards arg                                         If true, ClickHouse silently skips
                                                                        unavailable shards and nodes
                                                                        unresolvable through DNS. Shard is
                                                                        marked as unavailable when none of the
                                                                        replicas can be reached.
  --parallel_distributed_insert_select arg                              Process distributed INSERT SELECT query
                                                                        in the same cluster on local tables on
                                                                        every shard; if set to 1 - SELECT is
                                                                        executed on each shard; if set to 2 -
                                                                        SELECT and INSERT are executed on each
                                                                        shard
  --distributed_group_by_no_merge arg                                   If 1, do not merge aggregation states
                                                                        from different servers for distributed
                                                                        queries (shards will process the query
                                                                        up to the Complete stage, the
                                                                        initiator just proxies the data from
                                                                        the shards). If 2, the initiator will
                                                                        apply ORDER BY and LIMIT stages (this
                                                                        is not the case when shards process
                                                                        the query up to the Complete stage).
  --distributed_push_down_limit arg                                     If 1, LIMIT will be applied on each
                                                                        shard separately. Usually you don't
                                                                        need to use it, since this will be done
                                                                        automatically if it is possible, i.e.
                                                                        for simple query SELECT FROM LIMIT.
  --optimize_distributed_group_by_sharding_key arg                      Optimize GROUP BY sharding_key queries
                                                                        (by avoiding costly aggregation on the
                                                                        initiator server).
  --optimize_skip_unused_shards_limit arg                               Limit for the number of sharding key
                                                                        values; turns off
                                                                        optimize_skip_unused_shards if the
                                                                        limit is reached
  --optimize_skip_unused_shards arg                                     Assumes that data is distributed by
                                                                        sharding_key. Optimization to skip
                                                                        unused shards if SELECT query filters
                                                                        by sharding_key.
  --optimize_skip_unused_shards_rewrite_in arg                          Rewrite IN in query for remote shards
                                                                        to exclude values that do not belong
                                                                        to the shard (requires
                                                                        optimize_skip_unused_shards)
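The rewrite above can be illustrated with a toy sharding scheme: for each shard, values in an `IN` list that cannot route to that shard are dropped. A sketch under the assumption that the sharding key is the value itself modulo the shard count (not ClickHouse code):

```python
def rewrite_in_for_shard(values: list[int], shard_index: int,
                         n_shards: int) -> list[int]:
    # Keep only the IN-list values whose sharding key routes
    # to this shard; the rest cannot match any of its rows.
    return [v for v in values if v % n_shards == shard_index]

# Shard 0 of 2 only ever holds even keys, so odd values are dropped.
print(rewrite_in_for_shard([1, 2, 3, 4, 5, 6], 0, 2))  # [2, 4, 6]
```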
  --allow_nondeterministic_optimize_skip_unused_shards arg              Allow non-deterministic functions
                                                                        (includes dictGet) in sharding_key for
                                                                        optimize_skip_unused_shards
  --force_optimize_skip_unused_shards arg                               Throw an exception if unused shards
                                                                        cannot be skipped (1 - throw only if
                                                                        the table has the sharding key, 2 -
                                                                        always throw).
  --optimize_skip_unused_shards_nesting arg                             Same as optimize_skip_unused_shards,
                                                                        but accept nesting level until which it
                                                                        will work.
  --force_optimize_skip_unused_shards_nesting arg                       Same as
                                                                        force_optimize_skip_unused_shards,
                                                                        but accept nesting level until which
                                                                        it will work.
  --input_format_parallel_parsing arg                                   Enable parallel parsing for some data
                                                                        formats.
  --min_chunk_bytes_for_parallel_parsing arg                            The minimum chunk size in bytes, which
                                                                        each thread will parse in parallel.
  --output_format_parallel_formatting arg                               Enable parallel formatting for some
                                                                        data formats.
  --merge_tree_min_rows_for_concurrent_read arg                         If at least this many rows are read
                                                                        from one file, the reading can be
                                                                        parallelized.
  --merge_tree_min_bytes_for_concurrent_read arg                        If at least this many bytes are read
                                                                        from one file, the reading can be
                                                                        parallelized.
  --merge_tree_min_rows_for_seek arg                                    You can skip reading more than that
                                                                        number of rows at the price of one seek
                                                                        per file.
  --merge_tree_min_bytes_for_seek arg                                   You can skip reading more than that
                                                                        number of bytes at the price of one
                                                                        seek per file.
  --merge_tree_coarse_index_granularity arg                             If an index segment can contain the
                                                                        required keys, divide it into that
                                                                        many parts and check them recursively.
  --merge_tree_max_rows_to_use_cache arg                                The maximum number of rows per request,
                                                                        to use the cache of uncompressed data.
                                                                        If the request is large, the cache is
                                                                        not used. (For large queries not to
                                                                        flush out the cache.)
  --merge_tree_max_bytes_to_use_cache arg                               The maximum number of bytes per
                                                                        request, to use the cache of
                                                                        uncompressed data. If the request is
                                                                        large, the cache is not used. (For
                                                                        large queries not to flush out the
                                                                        cache.)
  --do_not_merge_across_partitions_select_final arg                     Merge parts only within one partition
                                                                        in SELECT FINAL
  --allow_experimental_inverted_index arg                               If set to true, allow use of the
                                                                        experimental inverted index.
  --mysql_max_rows_to_insert arg                                        The maximum number of rows in MySQL
                                                                        batch insertion of the MySQL storage
                                                                        engine
  --use_mysql_types_in_show_columns arg                                 Show native MySQL types in SHOW [FULL]
                                                                        COLUMNS
  --mysql_map_string_to_text_in_show_columns arg                        If enabled, the String type will be
                                                                        mapped to TEXT in SHOW [FULL]
                                                                        COLUMNS, otherwise to BLOB. Takes
                                                                        effect only if
                                                                        use_mysql_types_in_show_columns is
                                                                        enabled too
  --mysql_map_fixed_string_to_text_in_show_columns arg                  If enabled, the FixedString type will
                                                                        be mapped to TEXT in SHOW [FULL]
                                                                        COLUMNS, otherwise to BLOB. Takes
                                                                        effect only if
                                                                        use_mysql_types_in_show_columns is
                                                                        enabled too
  --optimize_min_equality_disjunction_chain_length arg                  The minimum length of the expression
                                                                        `expr = x1 OR ... expr = xN` for
                                                                        optimization
  --min_bytes_to_use_direct_io arg                                      The minimum number of bytes for reading
                                                                        the data with O_DIRECT option during
                                                                        SELECT queries execution. 0 - disabled.
  --min_bytes_to_use_mmap_io arg                                        The minimum number of bytes for reading
                                                                        the data with mmap option during SELECT
                                                                        queries execution. 0 - disabled.
  --checksum_on_read arg                                                Validate checksums on reading. It is
                                                                        enabled by default and should be always
                                                                        enabled in production. Please do not
                                                                        expect any benefits in disabling this
                                                                        setting. It may only be used for
                                                                        experiments and benchmarks. The setting
                                                                        only applicable for tables of MergeTree
                                                                        family. Checksums are always validated
                                                                        for other table engines and when
                                                                        receiving data over network.
  --force_index_by_date arg                                             Throw an exception if there is a
                                                                        partition key in a table, and it is not
                                                                        used.
  --force_primary_key arg                                               Throw an exception if there is a
                                                                        primary key in a table, and it is not
                                                                        used.
  --use_skip_indexes arg                                                Use data skipping indexes during query
                                                                        execution.
  --use_skip_indexes_if_final arg                                       If query has FINAL, then skipping data
                                                                        based on indexes may produce incorrect
                                                                        result, hence disabled by default.
  --ignore_data_skipping_indices arg                                    Comma separated list of strings or
                                                                        literals with the name of the data
                                                                        skipping indices that should be
                                                                        excluded during query execution.
  --force_data_skipping_indices arg                                     Comma separated list of strings or
                                                                        literals with the name of the data
                                                                        skipping indices that should be used
                                                                        during query execution, otherwise an
                                                                        exception will be thrown.
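
A usage sketch for the two skip-index settings above (hedged: assumes a local server, and the table `hits` and index name `idx_user` are hypothetical). Settings can be passed per query as command-line flags:

```shell
# Fail the query unless the skip index idx_user is actually used
# (table and index names here are hypothetical).
clickhouse-client \
  --force_data_skipping_indices "idx_user" \
  --query "SELECT count() FROM hits WHERE user_id = 42"

# Conversely, exclude that index from consideration for this query.
clickhouse-client \
  --ignore_data_skipping_indices "idx_user" \
  --query "SELECT count() FROM hits WHERE user_id = 42"
```

The first form is useful in tests that must catch regressions where an index silently stops being applied.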
  --max_streams_to_max_threads_ratio arg                                Allows you to use more sources than the
                                                                        number of threads - to more evenly
                                                                        distribute work across threads. It is
                                                                        assumed that this is a temporary
                                                                        solution, since it will be possible in
                                                                        the future to make the number of
                                                                        sources equal to the number of threads,
                                                                        but for each source to dynamically
                                                                        select available work for itself.
  --max_streams_multiplier_for_merge_tables arg                         Request more streams when reading from
                                                                        a Merge table. Streams will be spread
                                                                        across the tables that the Merge
                                                                        table uses. This allows a more even
                                                                        distribution of work across threads
                                                                        and is especially helpful when the
                                                                        merged tables differ in size.
  --network_compression_method arg                                      Allows you to select the method of data
                                                                        compression when writing.
  --network_zstd_compression_level arg                                  Allows you to select the level of ZSTD
                                                                        compression.
  --zstd_window_log_max arg                                             Allows you to select the max window log
                                                                        of ZSTD (it will not be used for
                                                                        MergeTree family)
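
A sketch of tuning client-server traffic compression with the settings above (values are illustrative, and a running local server is assumed):

```shell
# Use ZSTD for client<->server traffic, raise its compression level,
# and bound the window log accepted when decompressing.
clickhouse-client \
  --network_compression_method zstd \
  --network_zstd_compression_level 3 \
  --zstd_window_log_max 24 \
  --query "SELECT * FROM system.numbers LIMIT 10"
```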
  --priority arg                                                        Priority of the query. 1 - the highest,
                                                                        higher value - lower priority; 0 - do
                                                                        not use priorities.
  --os_thread_priority arg                                              If non-zero, set the corresponding
                                                                        'nice' value for query processing
                                                                        threads. Can be used to adjust query
                                                                        priority for the OS scheduler.
  --log_queries arg                                                     Log requests and write the log to the
                                                                        system table.
  --log_formatted_queries arg                                           Log formatted queries and write the log
                                                                        to the system table.
  --log_queries_min_type arg                                            Minimal type in query_log to log,
                                                                        possible values (from low to high):
                                                                        QUERY_START, QUERY_FINISH,
                                                                        EXCEPTION_BEFORE_START,
                                                                        EXCEPTION_WHILE_PROCESSING.
  --log_queries_min_query_duration_ms arg                               Minimal time for the query to run, to
                                                                        get to the query_log/query_thread_log/q
                                                                        uery_views_log.
  --log_queries_cut_to_length arg                                       If query length is greater than
                                                                        specified threshold (in bytes), then
                                                                        cut query when writing to query log.
                                                                        Also limit length of printed query in
                                                                        ordinary text log.
  --log_queries_probability arg                                         Log queries with the specified
                                                                        probability.
  --log_processors_profiles arg                                         Log Processors profile events.
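
The query-logging settings above can be combined per session; e.g. a sketch that samples 10% of queries slower than one second into query_log (assumes a local server):

```shell
clickhouse-client \
  --log_queries 1 \
  --log_queries_min_type QUERY_FINISH \
  --log_queries_min_query_duration_ms 1000 \
  --log_queries_probability 0.1 \
  --query "SELECT sleep(2)"
```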
  --distributed_product_mode arg                                        How are distributed subqueries
                                                                        performed inside IN or JOIN sections?
  --max_concurrent_queries_for_all_users arg                            The maximum number of concurrent
                                                                        requests for all users.
  --max_concurrent_queries_for_user arg                                 The maximum number of concurrent
                                                                        requests per user.
  --insert_deduplicate arg                                              For INSERT queries in the replicated
                                                                        table, specifies that deduplication of
                                                                        inserted blocks should be performed
  --async_insert_deduplicate arg                                        For async INSERT queries in the
                                                                        replicated table, specifies that
                                                                        deduplication of inserted blocks
                                                                        should be performed
  --insert_quorum arg                                                   For INSERT queries in the replicated
                                                                        table, wait writing for the specified
                                                                        number of replicas and linearize the
                                                                        addition of the data. 0 - disabled,
                                                                        'auto' - use majority
  --insert_quorum_timeout arg                                           If the quorum of replicas was not met
                                                                        within the specified time (in
                                                                        milliseconds), an exception will be
                                                                        thrown and the insertion is aborted.
  --insert_quorum_parallel arg                                          For quorum INSERT queries - enable to
                                                                        make parallel inserts without
                                                                        linearizability
  --select_sequential_consistency arg                                   For SELECT queries from the replicated
                                                                        table, throw an exception if the
                                                                        replica does not have a chunk written
                                                                        with the quorum; do not read the parts
                                                                        that have not yet been written with the
                                                                        quorum.
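
A sketch of pairing a quorum write with a sequentially consistent read, using the settings above (the table name `replicated_events` is hypothetical, and a replicated setup is assumed):

```shell
# Wait for 2 replicas to acknowledge the insert, up to 60 s.
clickhouse-client \
  --insert_quorum 2 \
  --insert_quorum_timeout 60000 \
  --query "INSERT INTO replicated_events VALUES (1, 'click')"

# Only read parts that have been written with the quorum.
clickhouse-client \
  --select_sequential_consistency 1 \
  --query "SELECT count() FROM replicated_events"
```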
  --table_function_remote_max_addresses arg                             The maximum number of different shards
                                                                        and the maximum number of replicas of
                                                                        one shard in the `remote` function.
  --read_backoff_min_latency_ms arg                                     Setting to reduce the number of threads
                                                                        in case of slow reads. Pay attention
                                                                        only to reads that took at least that
                                                                        much time.
  --read_backoff_max_throughput arg                                     Settings to reduce the number of
                                                                        threads in case of slow reads. Count
                                                                        events when the read bandwidth is less
                                                                        than that many bytes per second.
  --read_backoff_min_interval_between_events_ms arg                     Settings to reduce the number of
                                                                        threads in case of slow reads. Ignore
                                                                        the event if less than a certain
                                                                        amount of time has passed since the
                                                                        previous one.
  --read_backoff_min_events arg                                         Settings to reduce the number of
                                                                        threads in case of slow reads. The
                                                                        number of events after which the number
                                                                        of threads will be reduced.
  --read_backoff_min_concurrency arg                                    Settings to try keeping the minimal
                                                                        number of threads in case of slow
                                                                        reads.
  --memory_tracker_fault_probability arg                                For testing of `exception safety` -
                                                                        throw an exception every time you
                                                                        allocate memory with the specified
                                                                        probability.
  --enable_http_compression arg                                         Compress the result if the client over
                                                                        HTTP said that it understands data
                                                                        compressed by gzip, deflate, zstd, br,
                                                                        lz4, bz2, xz.
  --http_zlib_compression_level arg                                     Compression level - used if the client
                                                                        on HTTP said that it understands data
                                                                        compressed by gzip or deflate.
  --http_native_compression_disable_checksumming_on_decompress arg      If you uncompress the POST data from
                                                                        the client compressed by the native
                                                                        format, do not check the checksum.
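
Over the HTTP interface, settings such as enable_http_compression can also be passed as URL parameters (a sketch; assumes a local server on the default HTTP port 8123 and a `zstd` binary on the client):

```shell
# Ask the server to compress the response body with zstd,
# then decompress it locally.
curl -sS 'http://localhost:8123/?enable_http_compression=1' \
  -H 'Accept-Encoding: zstd' \
  --data-binary 'SELECT number FROM system.numbers LIMIT 5' | zstd -d
```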
  --count_distinct_implementation arg                                   What aggregate function to use for
                                                                        implementation of count(DISTINCT ...)
  --add_http_cors_header arg                                            Add the HTTP CORS header to responses.
  --max_http_get_redirects arg                                          Max number of HTTP GET redirect hops
                                                                        allowed. Ensures additional security
                                                                        measures are in place to prevent a
                                                                        malicious server from redirecting your
                                                                        requests to unexpected services.

                                                                        This matters when an external server
                                                                        redirects to another address that
                                                                        turns out to be internal to the
                                                                        company's infrastructure; by sending
                                                                        an HTTP request to an internal server,
                                                                        you could request an internal API from
                                                                        the internal network, bypassing the
                                                                        auth, or even query other services,
                                                                        such as Redis or Memcached. When you
                                                                        don't have an internal infrastructure
                                                                        (including something running on your
                                                                        localhost), or you trust the server,
                                                                        it is safe to allow redirects. Keep in
                                                                        mind, though, that if the URL uses
                                                                        HTTP instead of HTTPS, then you will
                                                                        have to trust not only the remote
                                                                        server but also your ISP and every
                                                                        network in the middle.
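
As a sketch, the safest posture is to disallow redirects entirely when reading remote URLs (the URL and column structure below are hypothetical):

```shell
# Refuse to follow any redirect when fetching a remote file.
clickhouse-client \
  --max_http_get_redirects 0 \
  --query "SELECT * FROM url('https://example.com/data.csv', CSV, 'c1 String')"
```

When the remote host is trusted, a small bounded value (e.g. 1 or 2) is a reasonable middle ground.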
  --use_client_time_zone arg                                            Use client timezone for interpreting
                                                                        DateTime string values, instead of
                                                                        adopting server timezone.
  --send_progress_in_http_headers arg                                   Send progress notifications using
                                                                        X-ClickHouse-Progress headers. Some
                                                                        clients do not support high amount of
                                                                        HTTP headers (Python requests in
                                                                        particular), so it is disabled by
                                                                        default.
  --http_headers_progress_interval_ms arg                               Do not send HTTP headers
                                                                        X-ClickHouse-Progress more frequently
                                                                        than at each specified interval.
  --http_wait_end_of_query arg                                          Enable HTTP response buffering on the
                                                                        server-side.
  --http_write_exception_in_output_format arg                           Write exception in output format to
                                                                        produce valid output. Works with JSON
                                                                        and XML formats.
  --http_response_buffer_size arg                                       The number of bytes to buffer in the
                                                                        server memory before sending a HTTP
                                                                        response to the client or flushing to
                                                                        disk (when http_wait_end_of_query is
                                                                        enabled).
  --fsync_metadata arg                                                  Do fsync after changing metadata for
                                                                        tables and databases (.sql files). Can
                                                                        be disabled in case of poor latency on
                                                                        servers with a high load of DDL
                                                                        queries and a heavily loaded disk
                                                                        subsystem.
  --join_use_nulls arg                                                  Use NULLs for non-joined rows of outer
                                                                        JOINs for types that can be inside
                                                                        Nullable. If false, use default value
                                                                        of corresponding columns data type.
  --join_default_strictness arg                                         Set default strictness in JOIN query.
                                                                        Possible values: empty string, 'ANY',
                                                                        'ALL'. If empty, query without
                                                                        strictness will throw exception.
  --any_join_distinct_right_table_keys arg                              Enable old ANY JOIN logic with
                                                                        many-to-one left-to-right table keys
                                                                        mapping for all ANY JOINs. It leads to
                                                                        confusingly different results for 't1
                                                                        ANY LEFT JOIN t2' and 't2 ANY RIGHT
                                                                        JOIN t1'. ANY RIGHT JOIN needs a
                                                                        one-to-many keys mapping to be
                                                                        consistent with the LEFT one.
  --single_join_prefer_left_table arg                                   For single JOIN in case of identifier
                                                                        ambiguity prefer left table
  --preferred_block_size_bytes arg                                      This setting adjusts the data block
                                                                        size for query processing and provides
                                                                        additional fine-tuning on top of the
                                                                        coarser 'max_block_size' setting. If
                                                                        the columns are large and with
                                                                        'max_block_size' rows the block is
                                                                        likely to be larger than the specified
                                                                        number of bytes, its size will be
                                                                        lowered for better CPU cache locality.
  --max_replica_delay_for_distributed_queries arg                       If set, distributed queries of
                                                                        Replicated tables will choose servers
                                                                        with replication delay in seconds less
                                                                        than the specified value (not
                                                                        inclusive). Zero means do not take
                                                                        delay into account.
  --fallback_to_stale_replicas_for_distributed_queries arg              Suppose max_replica_delay_for_distribut
                                                                        ed_queries is set and all replicas for
                                                                        the queried table are stale. If this
                                                                        setting is enabled, the query will be
                                                                        performed anyway, otherwise the error
                                                                        will be reported.
  --preferred_max_column_in_block_size_bytes arg                        Limit on max column size in block while
                                                                        reading. Helps to decrease cache misses
                                                                        count. Should be close to L2 cache
                                                                        size.
  --parts_to_delay_insert arg                                           If the destination table contains at
                                                                        least that many active parts in a
                                                                        single partition, artificially slow
                                                                        down insert into table.
  --parts_to_throw_insert arg                                           If there are more than this many
                                                                        active parts in a single partition of
                                                                        the destination table, throw a 'Too
                                                                        many parts ...' exception.
  --number_of_mutations_to_delay arg                                    If the mutated table contains at least
                                                                        that many unfinished mutations,
                                                                        artificially slow down mutations of
                                                                        table. 0 - disabled
  --number_of_mutations_to_throw arg                                    If the mutated table contains at least
                                                                        that many unfinished mutations, throw
                                                                        'Too many mutations ...' exception. 0 -
                                                                        disabled
  --insert_distributed_sync arg                                         If this setting is enabled, an insert
                                                                        query into a Distributed table waits
                                                                        until the data is sent to all nodes
                                                                        in the cluster.

                                                                        Enables or disables synchronous data
                                                                        insertion into a `Distributed` table.

                                                                        By default, when inserting data into a
                                                                        Distributed table, the ClickHouse
                                                                        server sends data to cluster nodes in
                                                                        asynchronous mode. When
                                                                        `insert_distributed_sync` = 1, the data
                                                                        is processed synchronously, and the
                                                                        `INSERT` operation succeeds only after
                                                                        all the data is saved on all shards (at
                                                                        least one replica for each shard if
                                                                        `internal_replication` is true).
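
A sketch of a synchronous distributed insert using the setting above together with its timeout (the table names `dist_events` and `local_staging` are hypothetical):

```shell
# Make the INSERT return only after data reaches all shards,
# failing if that takes longer than 30 s.
clickhouse-client \
  --insert_distributed_sync 1 \
  --insert_distributed_timeout 30000 \
  --query "INSERT INTO dist_events SELECT * FROM local_staging"
```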
  --insert_distributed_timeout arg                                      Timeout for an insert query into a
                                                                        Distributed table. Used only with
                                                                        insert_distributed_sync enabled. Zero
                                                                        value means no timeout.
  --distributed_ddl_task_timeout arg                                    Timeout for DDL query responses from
                                                                        all hosts in the cluster. If a DDL
                                                                        request has not been performed on all
                                                                        hosts, the response will contain a
                                                                        timeout error and the request will be
                                                                        executed in async mode. A negative
                                                                        value means infinite. Zero means async
                                                                        mode.
  --stream_flush_interval_ms arg                                        Timeout for flushing data from
                                                                        streaming storages.
  --stream_poll_timeout_ms arg                                          Timeout for polling data from/to
                                                                        streaming storages.
  --final arg                                                           Apply the FINAL modifier to queries by
                                                                        default. If the engine does not
                                                                        support FINAL, it has no effect. In
                                                                        queries with multiple tables, FINAL is
                                                                        applied only to those that support it.
                                                                        It also works on distributed tables
  --partial_result_on_first_cancel arg                                  Allows a query to return a partial
                                                                        result after being cancelled.
  --allow_experimental_partial_result arg                               Enable experimental feature: partial
                                                                        results for running queries.
  --partial_result_update_duration_ms arg                               Interval (in milliseconds) for sending
                                                                        updates with partial data about the
                                                                        result table to the client (in
                                                                        interactive mode) during query
                                                                        execution. Setting to 0 disables
                                                                        partial results. Only supported for
                                                                        single-threaded GROUP BY without key,
                                                                        ORDER BY, LIMIT and OFFSET.
  --max_rows_in_partial_result arg                                      Maximum rows to show in the partial
                                                                        result after every real-time update
                                                                        while the query runs (use partial
                                                                        result limit + OFFSET as a value in
                                                                        case of OFFSET in the query).
  --ignore_on_cluster_for_replicated_udf_queries arg                    Ignore ON CLUSTER clause for replicated
                                                                        UDF management queries.
  --ignore_on_cluster_for_replicated_access_entities_queries arg        Ignore ON CLUSTER clause for replicated
                                                                        access entities management queries.
  --sleep_in_send_tables_status_ms arg                                  Time to sleep in sending tables status
                                                                        response in TCPHandler
  --sleep_in_send_data_ms arg                                           Time to sleep in sending data in
                                                                        TCPHandler
  --sleep_after_receiving_query_ms arg                                  Time to sleep after receiving query in
                                                                        TCPHandler
  --unknown_packet_in_send_data arg                                     Send unknown packet instead of data Nth
                                                                        data packet
  --insert_allow_materialized_columns arg                               If this setting is enabled, allow
                                                                        materialized columns in INSERT.
  --http_connection_timeout arg                                         HTTP connection timeout.
  --http_send_timeout arg                                               HTTP send timeout
  --http_receive_timeout arg                                            HTTP receive timeout
  --http_max_uri_size arg                                               Maximum URI length of HTTP request
  --http_max_fields arg                                                 Maximum number of fields in HTTP header
  --http_max_field_name_size arg                                        Maximum length of field name in HTTP
                                                                        header
  --http_max_field_value_size arg                                       Maximum length of field value in HTTP
                                                                        header
  --http_max_chunk_size arg                                             Maximum value of a chunk size in HTTP
                                                                        chunked transfer encoding
  --http_skip_not_found_url_for_globs arg                               Skip URLs for globs that return an
                                                                        HTTP_NOT_FOUND error
  --optimize_throw_if_noop arg                                          If this setting is enabled and an
                                                                        OPTIMIZE query did not actually assign
                                                                        a merge, an explanatory exception is
                                                                        thrown
  --use_index_for_in_with_subqueries arg                                Try using an index if there is a
                                                                        subquery or a table expression on the
                                                                        right side of the IN operator.
  --use_index_for_in_with_subqueries_max_values arg                     The maximum size of the set on the
                                                                        right-hand side of the IN operator for
                                                                        which the table index is used for
                                                                        filtering. It helps avoid performance
                                                                        degradation and higher memory usage
                                                                        due to the preparation of additional
                                                                        data structures for large queries.
                                                                        Zero means no limit.
  --joined_subquery_requires_alias arg                                  Force joined subqueries and table
                                                                        functions to have aliases for correct
                                                                        name qualification.
  --empty_result_for_aggregation_by_empty_set arg                       Return empty result when aggregating
                                                                        without keys on empty set.
  --empty_result_for_aggregation_by_constant_keys_on_empty_set arg      Return empty result when aggregating by
                                                                        constant keys on empty set.
  --allow_distributed_ddl arg                                           If set to true, a user is allowed to
                                                                        execute distributed DDL queries.
  --allow_suspicious_codecs arg                                         If set to true, allow specifying
                                                                        meaningless compression codecs.
  --allow_experimental_codecs arg                                       If set to true, allow specifying
                                                                        experimental compression codecs (but
                                                                        we don't have those yet and this
                                                                        option does nothing).
  --enable_deflate_qpl_codec arg                                        Enable/disable the DEFLATE_QPL codec.
  --query_profiler_real_time_period_ns arg                              Period for the real clock timer of the
                                                                        query profiler (in nanoseconds). Set
                                                                        to 0 to turn off the real clock query
                                                                        profiler. Recommended value is at
                                                                        least 10000000 (100 times a second)
                                                                        for single queries or 1000000000 (once
                                                                        a second) for cluster-wide profiling.
  --query_profiler_cpu_time_period_ns arg                               Period for the CPU clock timer of the
                                                                        query profiler (in nanoseconds). Set
                                                                        to 0 to turn off the CPU clock query
                                                                        profiler. Recommended value is at
                                                                        least 10000000 (100 times a second)
                                                                        for single queries or 1000000000 (once
                                                                        a second) for cluster-wide profiling.
  --metrics_perf_events_enabled arg                                     If enabled, some of the perf events
                                                                        will be measured throughout queries'
                                                                        execution.
  --metrics_perf_events_list arg                                        Comma separated list of perf metrics
                                                                        that will be measured throughout
                                                                        queries' execution. Empty means all
                                                                        events. See PerfEventInfo in sources
                                                                        for the available events.
  --opentelemetry_start_trace_probability arg                           Probability to start an OpenTelemetry
                                                                        trace for an incoming query.
  --opentelemetry_trace_processors arg                                  Collect OpenTelemetry spans for
                                                                        processors.
  --prefer_column_name_to_alias arg                                     Prefer using column names instead of
                                                                        aliases if possible.
  --allow_experimental_analyzer arg                                     Allow experimental analyzer
  --prefer_global_in_and_join arg                                       If enabled, all IN/JOIN operators will
                                                                        be rewritten as GLOBAL IN/JOIN. It's
                                                                        useful when the to-be-joined tables are
                                                                        only available on the initiator and we
                                                                        need to always scatter their data
                                                                        on-the-fly during distributed
                                                                        processing with the GLOBAL keyword.
                                                                        It's also useful to reduce the need to
                                                                        access the external sources joining
                                                                        external tables.
  --max_rows_to_read arg                                                Limit on rows read from the deepest
                                                                        sources; that is, only in the deepest
                                                                        subquery. When reading from a remote
                                                                        server, it is only checked on the
                                                                        remote server.
  --max_bytes_to_read arg                                               Limit on bytes read (after
                                                                        decompression) from the deepest
                                                                        sources; that is, only in the deepest
                                                                        subquery. When reading from a remote
                                                                        server, it is only checked on the
                                                                        remote server.
  --read_overflow_mode arg                                              What to do when the limit is exceeded.
  --max_rows_to_read_leaf arg                                           Limit on read rows on the leaf nodes
                                                                        for distributed queries. Limit is
                                                                        applied for local reads only excluding
                                                                        the final merge stage on the root node.
                                                                        Note, the setting is unstable with
                                                                        prefer_localhost_replica=1.
  --max_bytes_to_read_leaf arg                                          Limit on read bytes (after
                                                                        decompression) on the leaf nodes for
                                                                        distributed queries. Limit is applied
                                                                        for local reads only excluding the
                                                                        final merge stage on the root node.
                                                                        Note, the setting is unstable with
                                                                        prefer_localhost_replica=1.
  --read_overflow_mode_leaf arg                                         What to do when the leaf limit is
                                                                        exceeded.
  --max_rows_to_group_by arg                                            If aggregation during GROUP BY
                                                                        generates more than the specified
                                                                        number of rows (unique GROUP BY keys),
                                                                        the behavior is determined by
                                                                        'group_by_overflow_mode', which by
                                                                        default is to throw an exception, but
                                                                        it can also be switched to an
                                                                        approximate GROUP BY mode.
  --group_by_overflow_mode arg                                          What to do when the limit is exceeded.
  --max_bytes_before_external_group_by arg                              If memory usage during a GROUP BY
                                                                        operation exceeds this threshold in
                                                                        bytes, activate the 'external
                                                                        aggregation' mode (spill data to
                                                                        disk). Recommended value is half of
                                                                        the available system memory.
  --max_rows_to_sort arg                                                If more than the specified number of
                                                                        records have to be processed for an
                                                                        ORDER BY operation, the behavior is
                                                                        determined by 'sort_overflow_mode',
                                                                        which by default is to throw an
                                                                        exception
  --max_bytes_to_sort arg                                               If more than the specified number of
                                                                        (uncompressed) bytes have to be
                                                                        processed for an ORDER BY operation,
                                                                        the behavior is determined by
                                                                        'sort_overflow_mode', which by default
                                                                        is to throw an exception
  --sort_overflow_mode arg                                              What to do when the limit is exceeded.
  --max_bytes_before_external_sort arg                                  If memory usage during an ORDER BY
                                                                        operation exceeds this threshold in
                                                                        bytes, activate the 'external sorting'
                                                                        mode (spill data to disk). Recommended
                                                                        value is half of the available system
                                                                        memory.
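
As a sketch, both spill-to-disk thresholds can be set together following the half-of-memory recommendation; the values below assume a host with about 32 GiB of RAM, and the table and column names are hypothetical:

```shell
# Illustrative CLI fragment (requires a running ClickHouse server):
# spill GROUP BY and ORDER BY state to disk once each exceeds
# ~16 GiB, i.e. half of an assumed 32 GiB of system memory.
clickhouse-client \
  --max_bytes_before_external_group_by 17179869184 \
  --max_bytes_before_external_sort 17179869184 \
  --query "SELECT user_id, count() FROM hits GROUP BY user_id ORDER BY user_id"
```

Spilling trades query speed for bounded memory: the query keeps running instead of failing with a memory limit error.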
  --max_bytes_before_remerge_sort arg                                   In the case of ORDER BY with LIMIT,
                                                                        when memory usage is higher than the
                                                                        specified threshold, perform
                                                                        additional steps of merging blocks
                                                                        before the final merge to keep just
                                                                        the top LIMIT rows.
  --remerge_sort_lowered_memory_bytes_ratio arg                         If memory usage after a remerge is not
                                                                        reduced by this ratio, remerging will
                                                                        be disabled.
  --max_result_rows arg                                                 Limit on result size in rows. The query
                                                                        will stop after processing a block of
                                                                        data if the threshold is met, but it
                                                                        will not cut the last block of the
                                                                        result, therefore the result size can
                                                                        be larger than the threshold.
  --max_result_bytes arg                                                Limit on result size in bytes
                                                                        (uncompressed).  The query will stop
                                                                        after processing a block of data if the
                                                                        threshold is met, but it will not cut
                                                                        the last block of the result, therefore
                                                                        the result size can be larger than the
                                                                        threshold. Caveats: the result size in
                                                                        memory is taken into account for this
                                                                        threshold. Even if the result size is
                                                                        small, it can reference larger data
                                                                        structures in memory, representing
                                                                        dictionaries of LowCardinality columns,
                                                                        and Arenas of AggregateFunction
                                                                        columns, so the threshold can be
                                                                        exceeded despite the small result size.
                                                                        The setting is fairly low level and
                                                                        should be used with caution.
  --result_overflow_mode arg                                            What to do when the limit is exceeded.
  --max_execution_time arg                                              If the query run time exceeds the
                                                                        specified number of seconds, the
                                                                        behavior is determined by
                                                                        'timeout_overflow_mode', which by
                                                                        default is to throw an exception. Note
                                                                        that the timeout is checked, and the
                                                                        query can stop, only at designated
                                                                        points during data processing. It
                                                                        currently cannot stop during merging
                                                                        of aggregation states or during query
                                                                        analysis, so the actual run time can
                                                                        be higher than the value of this
                                                                        setting.
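
For example, max_execution_time can be combined with timeout_overflow_mode set to 'break' to truncate processing instead of throwing; this is an illustrative fragment, and `big_table` is a hypothetical table name:

```shell
# Illustrative CLI fragment (requires a running ClickHouse server):
# stop after ~10 seconds and return the rows processed so far
# instead of throwing an exception.
clickhouse-client \
  --max_execution_time 10 \
  --timeout_overflow_mode break \
  --query "SELECT count() FROM big_table"
```

Note that a result truncated by 'break' is partial, so it is only appropriate where an approximate answer is acceptable.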
  --timeout_overflow_mode arg                                           What to do when the limit is exceeded.
  --min_execution_speed arg                                             Minimum number of execution rows per
                                                                        second.
  --max_execution_speed arg                                             Maximum number of execution rows per
                                                                        second.
  --min_execution_speed_bytes arg                                       Minimum number of execution bytes per
                                                                        second.
  --max_execution_speed_bytes arg                                       Maximum number of execution bytes per
                                                                        second.
  --timeout_before_checking_execution_speed arg                         Check that the speed is not too low
                                                                        after the specified time has elapsed.
  --max_columns_to_read arg                                             If a query requires reading more than
                                                                        the specified number of columns, an
                                                                        exception is thrown. A zero value
                                                                        means unlimited. This setting is
                                                                        useful for preventing overly complex
                                                                        queries.
  --max_temporary_columns arg                                           If a query generates more than the
                                                                        specified number of temporary columns
                                                                        in memory as a result of intermediate
                                                                        calculations, an exception is thrown.
                                                                        A zero value means unlimited. This
                                                                        setting is useful for preventing
                                                                        overly complex queries.
  --max_temporary_non_const_columns arg                                 Similar to the 'max_temporary_columns'
                                                                        setting but applies only to
                                                                        non-constant columns. This makes sense,
                                                                        because constant columns are cheap and
                                                                        it is reasonable to allow more of them.
  --max_sessions_for_user arg                                           Maximum number of simultaneous sessions
                                                                        for a user.
  --max_subquery_depth arg                                              If a query has more than specified
                                                                        number of nested subqueries, throw an
                                                                        exception. This allows you to have a
                                                                        sanity check to protect the users of
                                                                        your cluster from going insane with
                                                                        their queries.
  --max_analyze_depth arg                                               Maximum number of analyses performed by
                                                                        interpreter.
  --max_ast_depth arg                                                   Maximum depth of query syntax tree.
                                                                        Checked after parsing.
  --max_ast_elements arg                                                Maximum size of query syntax tree in
                                                                        number of nodes. Checked after parsing.
  --max_expanded_ast_elements arg                                       Maximum size of query syntax tree in
                                                                        number of nodes after expansion of
                                                                        aliases and the asterisk.
  --readonly arg                                                        0 - no read-only restrictions. 1 - only
                                                                        read requests, as well as changing
                                                                        explicitly allowed settings. 2 - only
                                                                        read requests, as well as changing
                                                                        settings, except for the 'readonly'
                                                                        setting.
  --max_rows_in_set arg                                                 Maximum size of the set (in number of
                                                                        elements) resulting from the execution
                                                                        of the IN section.
  --max_bytes_in_set arg                                                Maximum size of the set (in bytes in
                                                                        memory) resulting from the execution of
                                                                        the IN section.
  --set_overflow_mode arg                                               What to do when the limit is exceeded.
  --max_rows_in_join arg                                                Maximum size of the hash table for JOIN
                                                                        (in number of rows).
  --max_bytes_in_join arg                                               Maximum size of the hash table for JOIN
                                                                        (in number of bytes in memory).
  --join_overflow_mode arg                                              What to do when the limit is exceeded.
  --join_any_take_last_row arg                                          When disabled (default) ANY JOIN will
                                                                        take the first found row for a key.
                                                                        When enabled, it will take the last row
                                                                        seen if there are multiple rows for the
                                                                        same key.
  --join_algorithm arg                                                  Specify join algorithm.
  --default_max_bytes_in_join arg                                       Maximum size of the right-side table if
                                                                        a limit is required but
                                                                        max_bytes_in_join is not set.
  --partial_merge_join_left_table_buffer_bytes arg                      If not 0, group left-table blocks into
                                                                        bigger ones for the left-side table in
                                                                        partial merge join. It uses up to 2x of
                                                                        the specified memory per joining thread.
  --partial_merge_join_rows_in_right_blocks arg                         Split right-hand joining data in blocks
                                                                        of specified size. It's a portion of
                                                                        data indexed by min-max values and
                                                                        possibly offloaded to disk.
  --join_on_disk_max_files_to_merge arg                                 For MergeJoin on disk, set how many
                                                                        files it is allowed to sort
                                                                        simultaneously. The bigger this value,
                                                                        the more memory is used and the less
                                                                        disk I/O is needed. Minimum is 2.
  --max_rows_in_set_to_optimize_join arg                                Maximal size of the set used to filter
                                                                        joined tables by each other's row sets
                                                                        before joining. 0 means disabled.
  --compatibility_ignore_collation_in_create_table arg                  Compatibility setting: ignore collation
                                                                        in CREATE TABLE
  --temporary_files_codec arg                                           Set compression codec for temporary
                                                                        files (sort and join on disk), e.g.
                                                                        LZ4 or NONE.
  --max_rows_to_transfer arg                                            Maximum size (in rows) of the
                                                                        transmitted external table obtained
                                                                        when the GLOBAL IN/JOIN section is
                                                                        executed.
  --max_bytes_to_transfer arg                                           Maximum size (in uncompressed bytes) of
                                                                        the transmitted external table obtained
                                                                        when the GLOBAL IN/JOIN section is
                                                                        executed.
  --transfer_overflow_mode arg                                          What to do when the limit is exceeded.
  --max_rows_in_distinct arg                                            Maximum number of elements during
                                                                        execution of DISTINCT.
  --max_bytes_in_distinct arg                                           Maximum total size of state (in
                                                                        uncompressed bytes) in memory for the
                                                                        execution of DISTINCT.
  --distinct_overflow_mode arg                                          What to do when the limit is exceeded.
  --max_memory_usage arg                                                Maximum memory usage for processing a
                                                                        single query. Zero means unlimited.
  --memory_overcommit_ratio_denominator arg                             It represents a soft memory limit at
                                                                        the user level. This value is used to
                                                                        compute the query overcommit ratio.
  --max_memory_usage_for_user arg                                       Maximum memory usage for processing all
                                                                        concurrently running queries for the
                                                                        user. Zero means unlimited.
  --memory_overcommit_ratio_denominator_for_user arg                    It represents a soft memory limit at
                                                                        the global level. This value is used to
                                                                        compute the query overcommit ratio.
  --max_untracked_memory arg                                            Small allocations and deallocations are
                                                                        grouped in a thread-local variable and
                                                                        tracked or profiled only when the
                                                                        amount (in absolute value) becomes
                                                                        larger than the specified value. If the
                                                                        value is higher than
                                                                        'memory_profiler_step' it will be
                                                                        effectively lowered to
                                                                        'memory_profiler_step'.
  --memory_profiler_step arg                                            Whenever query memory usage exceeds the
                                                                        next multiple of this step (in number
                                                                        of bytes), the memory profiler collects
                                                                        the allocating stack trace. Zero means
                                                                        the memory profiler is disabled. Values
                                                                        lower than a few megabytes will slow
                                                                        down query processing.
  --memory_profiler_sample_probability arg                              Collect random allocations and
                                                                        deallocations and write them into
                                                                        system.trace_log with 'MemorySample'
                                                                        trace_type. The probability applies to
                                                                        every alloc/free regardless of the size
                                                                        of the allocation (can be changed with
                                                                        `memory_profiler_sample_min_allocation_size`
                                                                        and
                                                                        `memory_profiler_sample_max_allocation_size`).
                                                                        Note that sampling happens only when
                                                                        the amount of untracked memory exceeds
                                                                        'max_untracked_memory'. You may want to
                                                                        set 'max_untracked_memory' to 0 for
                                                                        extra fine-grained sampling.
  --memory_profiler_sample_min_allocation_size arg                      Collect random allocations of size
                                                                        greater than or equal to the specified
                                                                        value with probability equal to
                                                                        `memory_profiler_sample_probability`. 0
                                                                        means disabled. You may want to set
                                                                        'max_untracked_memory' to 0 to make
                                                                        this threshold work as expected.
  --memory_profiler_sample_max_allocation_size arg                      Collect random allocations of size less
                                                                        than or equal to the specified value
                                                                        with probability equal to
                                                                        `memory_profiler_sample_probability`. 0
                                                                        means disabled. You may want to set
                                                                        'max_untracked_memory' to 0 to make
                                                                        this threshold work as expected.
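The sampling settings above work together. A sketch of profiling a single query, assuming a running server with system.trace_log enabled; the query itself is just a stand-in workload:

```shell
# Sample ~1% of allocations of 1 MiB or larger into system.trace_log;
# max_untracked_memory=0 makes the sampling as fine-grained as possible.
clickhouse-client \
  --memory_profiler_sample_probability 0.01 \
  --memory_profiler_sample_min_allocation_size 1048576 \
  --max_untracked_memory 0 \
  --query "SELECT uniqExact(number) FROM numbers(10000000)"
```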
  --trace_profile_events arg                                            Send profile events and the value of
                                                                        each increment to system.trace_log on
                                                                        every increment, with 'ProfileEvent'
                                                                        trace_type
  --memory_usage_overcommit_max_wait_microseconds arg                   Maximum time a thread will wait for
                                                                        memory to be freed in the case of
                                                                        memory overcommit. If the timeout is
                                                                        reached and memory is not freed, an
                                                                        exception is thrown.
  --max_network_bandwidth arg                                           The maximum speed of data exchange over
                                                                        the network in bytes per second for a
                                                                        query. Zero means unlimited.
  --max_network_bytes arg                                               The maximum number of bytes
                                                                        (compressed) to receive or transmit
                                                                        over the network for execution of the
                                                                        query.
  --max_network_bandwidth_for_user arg                                  The maximum speed of data exchange over
                                                                        the network in bytes per second for all
                                                                        concurrently running user queries. Zero
                                                                        means unlimited.
  --max_network_bandwidth_for_all_users arg                             The maximum speed of data exchange over
                                                                        the network in bytes per second for all
                                                                        concurrently running queries. Zero
                                                                        means unlimited.
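A per-query throttle using the bandwidth settings above looks like the following sketch; 'replica-host' is a placeholder for a real remote server:

```shell
# Limit this query's network traffic to ~10 MB/s.
clickhouse-client \
  --max_network_bandwidth 10000000 \
  --query "SELECT * FROM remote('replica-host:9000', system.numbers) LIMIT 1000000" \
  --format Null
```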
  --max_temporary_data_on_disk_size_for_user arg                        The maximum amount of data consumed by
                                                                        temporary files on disk in bytes for
                                                                        all concurrently running user queries.
                                                                        Zero means unlimited.
  --max_temporary_data_on_disk_size_for_query arg                       The maximum amount of data consumed by
                                                                        temporary files on disk in bytes for
                                                                        all concurrently running queries. Zero
                                                                        means unlimited.
  --backup_restore_keeper_max_retries arg                               Max retries for keeper operations
                                                                        during backup or restore
  --backup_restore_keeper_retry_initial_backoff_ms arg                  Initial backoff timeout for [Zoo]Keeper
                                                                        operations during backup or restore
  --backup_restore_keeper_retry_max_backoff_ms arg                      Max backoff timeout for [Zoo]Keeper
                                                                        operations during backup or restore
  --backup_restore_keeper_fault_injection_probability arg               Approximate probability of failure for
                                                                        a keeper request during backup or
                                                                        restore. Valid value is in interval
                                                                        [0.0f, 1.0f]
  --backup_restore_keeper_fault_injection_seed arg                      0 - random seed, otherwise the setting
                                                                        value
  --backup_restore_keeper_value_max_size arg                            Maximum size of data of a [Zoo]Keeper's
                                                                        node during backup
  --backup_restore_batch_size_for_keeper_multiread arg                  Maximum size of batch for multiread
                                                                        request to [Zoo]Keeper during backup or
                                                                        restore
  --max_backup_bandwidth arg                                            The maximum read speed in bytes per
                                                                        second for a particular backup on the
                                                                        server. Zero means unlimited.
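The backup settings above are ordinary session settings and can be passed as client flags. A sketch, assuming a 'backups' disk is configured on the server; the table default.hits is hypothetical:

```shell
# Back up a table with more tolerant [Zoo]Keeper retries and a
# ~50 MB/s read cap on the backup itself.
clickhouse-client \
  --backup_restore_keeper_max_retries 30 \
  --max_backup_bandwidth 50000000 \
  --query "BACKUP TABLE default.hits TO Disk('backups', 'hits.zip')"
```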
  --log_profile_events arg                                              Log query performance statistics into
                                                                        the query_log, query_thread_log and
                                                                        query_views_log.
  --log_query_settings arg                                              Log query settings into the query_log.
  --log_query_threads arg                                               Log query threads into
                                                                        system.query_thread_log table. This
                                                                        setting has an effect only when
                                                                        'log_queries' is true.
  --log_query_views arg                                                 Log query dependent views into
                                                                        system.query_views_log table. This
                                                                        setting has an effect only when
                                                                        'log_queries' is true.
  --log_comment arg                                                     Log comment into system.query_log table
                                                                        and server log. It can be set to
                                                                        arbitrary string no longer than
                                                                        max_query_size.
  --send_logs_level arg                                                 Send server text logs with specified
                                                                        minimum level to client. Valid values:
                                                                        'trace', 'debug', 'information',
                                                                        'warning', 'error', 'fatal', 'none'
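For interactive debugging, the logging settings combine as in this sketch (the comment string is arbitrary):

```shell
# Stream server-side trace logs for this query to the client, and tag
# the query in system.query_log for later lookup via log_comment.
clickhouse-client \
  --send_logs_level trace \
  --log_comment "debug-run-42" \
  --query "SELECT count() FROM numbers(1000000)"
```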
  --send_logs_source_regexp arg                                         Send server text logs whose source name
                                                                        matches the specified regexp. Empty
                                                                        means all sources.
  --enable_optimize_predicate_expression arg                            If set to true, predicates are pushed
                                                                        down to subqueries.
  --enable_optimize_predicate_expression_to_final_subquery arg          Allow pushing predicates to the final
                                                                        subquery.
  --allow_push_predicate_when_subquery_contains_with arg                Allows pushing a predicate when the
                                                                        subquery contains a WITH clause
  --low_cardinality_max_dictionary_size arg                             Maximum size (in rows) of shared global
                                                                        dictionary for LowCardinality type.
  --low_cardinality_use_single_dictionary_for_part arg                  LowCardinality type serialization
                                                                        setting. If true, additional keys will
                                                                        be used when the global dictionary
                                                                        overflows. Otherwise, several shared
                                                                        dictionaries will be created.
  --decimal_check_overflow arg                                          Check overflow of decimal
                                                                        arithmetic/comparison operations
  --allow_custom_error_code_in_throwif arg                              Enable custom error code in function
                                                                        throwIf(). If true, thrown exceptions
                                                                        may have unexpected error codes.
  --prefer_localhost_replica arg                                        If true, queries will always be sent to
                                                                        the local replica (if it exists). If
                                                                        false, the replica to send a query to
                                                                        will be chosen between local and remote
                                                                        ones according to load_balancing
  --max_fetch_partition_retries_count arg                               Number of retries while fetching a
                                                                        partition from another host.
  --http_max_multipart_form_data_size arg                               Limit on size of multipart/form-data
                                                                        content. This setting cannot be parsed
                                                                        from URL parameters and should be set
                                                                        in user profile. Note that content is
                                                                        parsed and external tables are created
                                                                        in memory before the start of query
                                                                        execution, and this is the only limit
                                                                        that has an effect at that stage (limits
                                                                        on max memory usage and max execution
                                                                        time have no effect while reading HTTP
                                                                        form data).
  --calculate_text_stack_trace arg                                      Calculate text stack trace in case of
                                                                        exceptions during query execution. This
                                                                        is the default. It requires symbol
                                                                        lookups that may slow down fuzzing
                                                                        tests when a huge number of invalid
                                                                        queries is executed. In normal cases,
                                                                        you should not disable this option.
  --enable_job_stack_trace arg                                          Output stack trace of a job creator
                                                                        when a job results in an exception
  --allow_ddl arg                                                       If it is set to true, then a user is
                                                                        allowed to execute DDL queries.
  --parallel_view_processing arg                                        Enables pushing to attached views
                                                                        concurrently instead of sequentially.
  --enable_unaligned_array_join arg                                     Allow ARRAY JOIN with multiple arrays
                                                                        that have different sizes. When this
                                                                        setting is enabled, arrays will be
                                                                        resized to the longest one.
  --optimize_read_in_order arg                                          Enable ORDER BY optimization for
                                                                        reading data in corresponding order in
                                                                        MergeTree tables.
  --optimize_read_in_window_order arg                                   Enable ORDER BY optimization in window
                                                                        clause for reading data in
                                                                        corresponding order in MergeTree
                                                                        tables.
  --optimize_aggregation_in_order arg                                   Enable GROUP BY optimization for
                                                                        aggregating data in corresponding order
                                                                        in MergeTree tables.
  --aggregation_in_order_max_block_bytes arg                            Maximal size of block in bytes
                                                                        accumulated during aggregation in order
                                                                        of primary key. A lower block size
                                                                        allows the final merge stage of
                                                                        aggregation to be parallelized more.
  --read_in_order_two_level_merge_threshold arg                         Minimal number of parts to read to run
                                                                        preliminary merge step during
                                                                        multithread reading in order of primary
                                                                        key.
  --low_cardinality_allow_in_native_format arg                          Use LowCardinality type in Native
                                                                        format. Otherwise, convert
                                                                        LowCardinality columns to ordinary for
                                                                        select query, and convert ordinary
                                                                        columns to required LowCardinality for
                                                                        insert query.
  --cancel_http_readonly_queries_on_client_close arg                    Cancel HTTP readonly queries when a
                                                                        client closes the connection without
                                                                        waiting for the response.
  --external_table_functions_use_nulls arg                              If it is set to true, external table
                                                                        functions will implicitly use Nullable
                                                                        type if needed. Otherwise NULLs will be
                                                                        substituted with default values.
                                                                        Currently supported only by 'mysql',
                                                                        'postgresql' and 'odbc' table
                                                                        functions.
  --external_table_strict_query arg                                     If it is set to true, transforming
                                                                        expression to local filter is forbidden
                                                                        for queries to external tables.
  --allow_hyperscan arg                                                 Allow functions that use Hyperscan
                                                                        library. Disable to avoid potentially
                                                                        long compilation times and excessive
                                                                        resource usage.
  --max_hyperscan_regexp_length arg                                     Max length of regexp that can be used
                                                                        in hyperscan multi-match functions.
                                                                        Zero means unlimited.
  --max_hyperscan_regexp_total_length arg                               Max total length of all regexps that
                                                                        can be used in hyperscan multi-match
                                                                        functions (per every function). Zero
                                                                        means unlimited.
  --reject_expensive_hyperscan_regexps arg                              Reject patterns which will likely be
                                                                        expensive to evaluate with hyperscan
                                                                        (due to NFA state explosion)
  --allow_simdjson arg                                                  Allow using simdjson library in 'JSON*'
                                                                        functions if AVX2 instructions are
                                                                        available. If disabled rapidjson will
                                                                        be used.
  --allow_introspection_functions arg                                   Allow functions for introspection of
                                                                        ELF and DWARF for query profiling.
                                                                        These functions are slow and may raise
                                                                        security concerns.
  --splitby_max_substrings_includes_remaining_string arg                Functions 'splitBy*()' with
                                                                        'max_substrings' argument > 0 include
                                                                        the remaining string as the last
                                                                        element in the result
  --allow_execute_multiif_columnar arg                                  Allow columnar execution of multiIf().
  --formatdatetime_f_prints_single_zero arg                             Formatter '%f' in function
                                                                        'formatDateTime()' produces a single
                                                                        zero instead of six zeros if the
                                                                        formatted value has no fractional
                                                                        seconds.
  --formatdatetime_parsedatetime_m_is_month_name arg                    Formatter '%M' in functions
                                                                        'formatDateTime()' and
                                                                        'parseDateTime()' produces the month
                                                                        name instead of minutes.
  --max_partitions_per_insert_block arg                                 Limit maximum number of partitions in
                                                                        single INSERTed block. Zero means
                                                                        unlimited. Throw exception if the block
                                                                        contains too many partitions. This
                                                                        setting is a safety threshold, because
                                                                        using a large number of partitions is a
                                                                        common misconception.
  --throw_on_max_partitions_per_insert_block arg                        Used with
                                                                        max_partitions_per_insert_block. If
                                                                        true (default), an exception will be
                                                                        thrown when
                                                                        max_partitions_per_insert_block is
                                                                        reached. If false, details of the
                                                                        insert query reaching this limit with
                                                                        the number of partitions will be
                                                                        logged. This can be useful if you're
                                                                        trying to understand the impact on
                                                                        users when changing
                                                                        max_partitions_per_insert_block.
  --max_partitions_to_read arg                                          Limit the max number of partitions that
                                                                        can be accessed in one query. <= 0
                                                                        means unlimited.
  --check_query_single_value_result arg                                 Return check query result as single 1/0
                                                                        value
  --allow_drop_detached arg                                             Allow ALTER TABLE ... DROP DETACHED
                                                                        PART[ITION] ... queries
  --postgresql_connection_pool_size arg                                 Connection pool size for PostgreSQL
                                                                        table engine and database engine.
  --postgresql_connection_pool_wait_timeout arg                         Connection pool push/pop timeout on
                                                                        empty pool for PostgreSQL table engine
                                                                        and database engine. By default it will
                                                                        block on empty pool.
  --postgresql_connection_pool_auto_close_connection arg                Close connection before returning
                                                                        connection to the pool.
  --glob_expansion_max_elements arg                                     Maximum number of allowed addresses
                                                                        (for external storages, table
                                                                        functions, etc.).
  --odbc_bridge_connection_pool_size arg                                Connection pool size for each
                                                                        connection settings string in ODBC
                                                                        bridge.
  --odbc_bridge_use_connection_pooling arg                              Use connection pooling in ODBC bridge.
                                                                        If set to false, a new connection is
                                                                        created every time
  --distributed_replica_error_half_life arg                             Time period over which the replica
                                                                        error counter is halved.
  --distributed_replica_error_cap arg                                   Max number of errors per replica,
                                                                        prevents piling up an incredible amount
                                                                        of errors if replica was offline for
                                                                        some time and allows it to be
                                                                        reconsidered in a shorter amount of
                                                                        time.
  --distributed_replica_max_ignored_errors arg                          Number of errors that will be ignored
                                                                        while choosing replicas
  --allow_experimental_live_view arg                                    Enable LIVE VIEW. Not mature enough.
  --live_view_heartbeat_interval arg                                    The heartbeat interval in seconds to
                                                                        indicate that the live query is alive.
  --max_live_view_insert_blocks_before_refresh arg                      Limit maximum number of inserted blocks
                                                                        after which mergeable blocks are
                                                                        dropped and query is re-executed.
  --allow_experimental_window_view arg                                  Enable WINDOW VIEW. Not mature enough.
  --window_view_clean_interval arg                                      The clean interval of window view in
                                                                        seconds to free outdated data.
  --window_view_heartbeat_interval arg                                  The heartbeat interval in seconds to
                                                                        indicate that the watch query is
                                                                        alive.
  --wait_for_window_view_fire_signal_timeout arg                        Timeout for waiting for window view
                                                                        fire signal in event time processing
  --min_free_disk_space_for_temporary_data arg                          The minimum disk space to keep while
                                                                        writing temporary data used in external
                                                                        sorting and aggregation.
  --default_temporary_table_engine arg                                  Default table engine used when ENGINE
                                                                        is not set in CREATE TEMPORARY
                                                                        statement.
  --default_table_engine arg                                            Default table engine used when ENGINE
                                                                        is not set in CREATE statement.
  --show_table_uuid_in_table_create_query_if_not_nil arg                For tables in databases with
                                                                        Engine=Atomic show UUID of the table in
                                                                        its CREATE query.
  --database_atomic_wait_for_drop_and_detach_synchronously arg          When executing DROP or DETACH TABLE in
                                                                        Atomic database, wait for table data to
                                                                        be finally dropped or detached.
  --enable_scalar_subquery_optimization arg                             If it is set to true, prevent scalar
                                                                        subqueries from (de)serializing large
                                                                        scalar values and possibly avoid
                                                                        running the same subquery more than
                                                                        once.
  --optimize_trivial_count_query arg                                    Process trivial 'SELECT count() FROM
                                                                        table' query from metadata.
  --optimize_count_from_files arg                                       Optimize counting rows from files in
                                                                        supported input formats
  --use_cache_for_count_from_files arg                                  Use cache to count the number of rows
                                                                        in files
  --optimize_respect_aliases arg                                        If set to true, respect aliases in
                                                                        WHERE/GROUP BY/ORDER BY; this helps
                                                                        with partition pruning, secondary
                                                                        indexes,
                                                                        optimize_aggregation_in_order,
                                                                        optimize_read_in_order and
                                                                        optimize_trivial_count.
  --mutations_sync arg                                                  Wait for synchronous execution of
                                                                        ALTER TABLE UPDATE/DELETE queries
                                                                        (mutations). 0 - execute
                                                                        asynchronously. 1 - wait for the
                                                                        current server. 2 - wait for all
                                                                        replicas if they exist.
  --enable_lightweight_delete arg                                       Enable lightweight DELETE mutations for
                                                                        mergetree tables.
  --allow_experimental_lightweight_delete arg                           Enable lightweight DELETE mutations for
                                                                        mergetree tables.
  --optimize_move_functions_out_of_any arg                              Move functions out of aggregate
                                                                        functions 'any', 'anyLast'.
  --optimize_normalize_count_variants arg                               Rewrite aggregate functions that are
                                                                        semantically equivalent to count() as
                                                                        count().
  --optimize_injective_functions_inside_uniq arg                        Delete injective functions of one
                                                                        argument inside uniq*() functions.
  --rewrite_count_distinct_if_with_count_distinct_implementation arg    Rewrite countDistinctIf with
                                                                        count_distinct_implementation
                                                                        configuration
  --convert_query_to_cnf arg                                            Convert SELECT query to CNF
  --optimize_or_like_chain arg                                          Optimize multiple OR LIKE into
                                                                        multiMatchAny. This optimization should
                                                                        not be enabled by default, because it
                                                                        defies index analysis in some cases.
  --optimize_arithmetic_operations_in_aggregate_functions arg           Move arithmetic operations out of
                                                                        aggregation functions
  --optimize_redundant_functions_in_order_by arg                        Remove functions from ORDER BY if
                                                                        their argument is also in ORDER BY
  --optimize_if_chain_to_multiif arg                                    Replace if(cond1, then1, if(cond2,
                                                                        ...)) chains with multiIf. Currently
                                                                        it's not beneficial for numeric types.
  --optimize_multiif_to_if arg                                          Replace 'multiIf' having a single
                                                                        condition with 'if'.
  --optimize_if_transform_strings_to_enum arg                           Replaces string-type arguments in If
                                                                        and Transform with enums. Disabled by
                                                                        default because it could make an
                                                                        inconsistent change in a distributed
                                                                        query that would lead to its failure.
  --optimize_monotonous_functions_in_order_by arg                       Replace monotonous function with its
                                                                        argument in ORDER BY
  --optimize_functions_to_subcolumns arg                                Transform functions to subcolumns, if
                                                                        possible, to reduce amount of read
                                                                        data. E.g. 'length(arr)' ->
                                                                        'arr.size0', 'col IS NULL' ->
                                                                        'col.null'
  --optimize_using_constraints arg                                      Use constraints for query optimization
  --optimize_substitute_columns arg                                     Use constraints for column substitution
  --optimize_append_index arg                                           Use constraints in order to append
                                                                        index condition (indexHint)
  --normalize_function_names arg                                        Normalize function names to their
                                                                        canonical names
  --allow_experimental_alter_materialized_view_structure arg            Allow atomic alter on Materialized
                                                                        views. Work in progress.
  --enable_early_constant_folding arg                                   Enable query optimization where we
                                                                        analyze function and subquery results
                                                                        and rewrite the query if constants
                                                                        are found there
  --deduplicate_blocks_in_dependent_materialized_views arg              Should deduplicate blocks for
                                                                        materialized views if the block is not
                                                                        a duplicate for the table. Use true to
                                                                        always deduplicate in dependent tables.
  --materialized_views_ignore_errors arg                                Allows ignoring errors for
                                                                        MATERIALIZED VIEW, and delivering the
                                                                        original block to the table
                                                                        regardless of MVs
  --use_compact_format_in_distributed_parts_names arg                   Changes format of directory names for
                                                                        distributed table insert parts.
  --validate_polygons arg                                               Throw exception if polygon is invalid
                                                                        in function pointInPolygon (e.g.
                                                                        self-tangent, self-intersecting). If
                                                                        the setting is false, the function
                                                                        will accept invalid polygons but may
                                                                        silently return a wrong result.
  --max_parser_depth arg                                                Maximum parser depth (recursion depth
                                                                        of recursive descent parser).
  --allow_settings_after_format_in_insert arg                           Allow SETTINGS after FORMAT, but note,
                                                                        that this is not always safe (note:
                                                                        this is a compatibility setting).
  --periodic_live_view_refresh arg                                      Interval after which periodically
                                                                        refreshed live view is forced to
                                                                        refresh.
  --transform_null_in arg                                               If enabled, NULL values will be
                                                                        matched by the 'IN' operator as if
                                                                        they were equal.
  --allow_nondeterministic_mutations arg                                Allow non-deterministic functions in
                                                                        ALTER UPDATE/ALTER DELETE statements
  --lock_acquire_timeout arg                                            How long locking request should wait
                                                                        before failing
  --materialize_ttl_after_modify arg                                    Apply TTL for old data, after ALTER
                                                                        MODIFY TTL query
  --function_implementation arg                                         Choose function implementation for
                                                                        specific target or variant
                                                                        (experimental). If empty enable all of
                                                                        them.
  --data_type_default_nullable arg                                      Data types without explicit NULL or
                                                                        NOT NULL will be made Nullable
  --cast_keep_nullable arg                                              CAST operator keeps Nullable for the
                                                                        result data type
  --cast_ipv4_ipv6_default_on_conversion_error arg                      CAST operator into IPv4, CAST operator
                                                                        into IPv6 type, toIPv4 and toIPv6
                                                                        functions will return a default value
                                                                        instead of throwing an exception on
                                                                        conversion error.
  --alter_partition_verbose_result arg                                  Output information about affected
                                                                        parts. Currently works only for FREEZE
                                                                        and ATTACH commands.
  --allow_experimental_database_materialized_mysql arg                  Allow creating a database with
                                                                        Engine=MaterializedMySQL(...).
  --allow_experimental_database_materialized_postgresql arg             Allow creating a database with
                                                                        Engine=MaterializedPostgreSQL(...).
  --system_events_show_zero_values arg                                  When querying system.events or
                                                                        system.metrics tables, include all
                                                                        metrics, even with zero values.
  --mysql_datatypes_support_level arg                                   Which MySQL types should be converted
                                                                        to corresponding ClickHouse types
                                                                        (rather than being represented as
                                                                        String). Can be empty or any
                                                                        combination of 'decimal', 'datetime64',
                                                                        'date2Date32' or 'date2String'. When
                                                                        empty MySQL's DECIMAL and
                                                                        DATETIME/TIMESTAMP with non-zero
                                                                        precision are seen as String on
                                                                        ClickHouse's side.
  --optimize_trivial_insert_select arg                                  Optimize trivial 'INSERT INTO table
                                                                        SELECT ... FROM TABLES' query
  --allow_non_metadata_alters arg                                       Allow executing ALTERs which affect
                                                                        not only table metadata, but also
                                                                        data on disk
  --enable_global_with_statement arg                                    Propagate WITH statements to UNION
                                                                        queries and all subqueries
  --aggregate_functions_null_for_empty arg                              Rewrite all aggregate functions in a
                                                                        query, adding -OrNull suffix to them
  --optimize_syntax_fuse_functions arg                                  Allow applying the fuse rewrite for
                                                                        aggregate functions. Available only
                                                                        with `allow_experimental_analyzer`
  --flatten_nested arg                                                  If true, columns of type Nested will
                                                                        be flattened to separate array columns
                                                                        instead of one array of tuples
  --asterisk_include_materialized_columns arg                           Include MATERIALIZED columns for
                                                                        wildcard query
  --asterisk_include_alias_columns arg                                  Include ALIAS columns for wildcard
                                                                        query
  --optimize_skip_merged_partitions arg                                 Skip partitions with one part with
                                                                        level > 0 in OPTIMIZE FINAL
  --optimize_on_insert arg                                              Do the same transformation for inserted
                                                                        block of data as if merge was done on
                                                                        this block.
  --optimize_use_projections arg                                        Automatically choose projections to
                                                                        perform SELECT query
  --allow_experimental_projection_optimization arg                      Automatically choose projections to
                                                                        perform SELECT query
  --optimize_use_implicit_projections arg                               Automatically choose implicit
                                                                        projections to perform SELECT query
  --force_optimize_projection arg                                       If projection optimization is enabled,
                                                                        SELECT queries need to use projection
  --async_socket_for_remote arg                                         Asynchronously read from socket
                                                                        executing remote query
  --async_query_sending_for_remote arg                                  Asynchronously create connections and
                                                                        send query to shards in remote query
  --insert_null_as_default arg                                          Insert DEFAULT values instead of NULL
                                                                        in INSERT SELECT (UNION ALL)
  --describe_extend_object_types arg                                    Deduce concrete type of columns of type
                                                                        Object in DESCRIBE query
  --describe_include_subcolumns arg                                     If true, subcolumns of all table
                                                                        columns will be included into result of
                                                                        DESCRIBE query
  --describe_include_virtual_columns arg                                If true, virtual columns of table will
                                                                        be included into result of DESCRIBE
                                                                        query
  --describe_compact_output arg                                         If true, include only column names and
                                                                        types into result of DESCRIBE query
  --mutations_execute_nondeterministic_on_initiator arg                 If true, nondeterministic functions
                                                                        are executed on the initiator and
                                                                        replaced with literals in UPDATE and
                                                                        DELETE queries
  --mutations_execute_subqueries_on_initiator arg                       If true, scalar subqueries are
                                                                        executed on the initiator and
                                                                        replaced with literals in UPDATE and
                                                                        DELETE queries
  --mutations_max_literal_size_to_replace arg                           The maximum size of a serialized
                                                                        literal in bytes to replace in UPDATE
                                                                        and DELETE queries
  --use_query_cache arg                                                 Enable the query cache
  --enable_writes_to_query_cache arg                                    Enable storing results of SELECT
                                                                        queries in the query cache
  --enable_reads_from_query_cache arg                                   Enable reading results of SELECT
                                                                        queries from the query cache
  --query_cache_store_results_of_queries_with_nondeterministic_functions arg
                                                                        Store results of queries with
                                                                        non-deterministic functions (e.g.
                                                                        rand(), now()) in the query cache
  --query_cache_max_size_in_bytes arg                                   The maximum amount of memory (in bytes)
                                                                        the current user may allocate in the
                                                                        query cache. 0 means unlimited.
  --query_cache_max_entries arg                                         The maximum number of query results the
                                                                        current user may store in the query
                                                                        cache. 0 means unlimited.
  --query_cache_min_query_runs arg                                      Minimum number of times a SELECT query
                                                                        must run before its result is stored
                                                                        in the query cache
  --query_cache_min_query_duration arg                                  Minimum time in milliseconds for a
                                                                        query to run for its result to be
                                                                        stored in the query cache.
  --query_cache_compress_entries arg                                    Compress cache entries.
  --query_cache_squash_partial_results arg                              Squash partial result blocks to blocks
                                                                        of size 'max_block_size'. Reduces
                                                                        performance of inserts into the query
                                                                        cache but improves the compressibility
                                                                        of cache entries.
  --query_cache_ttl arg                                                 After this time in seconds entries in
                                                                        the query cache become stale
  --query_cache_share_between_users arg                                 Allow other users to read entries in
                                                                        the query cache
  --enable_sharing_sets_for_mutations arg                               Allow sharing set objects built for IN
                                                                        subqueries between different tasks of
                                                                        the same mutation. This reduces memory
                                                                        usage and CPU consumption
  --optimize_rewrite_sum_if_to_count_if arg                             Rewrite sumIf() and sum(if())
                                                                        functions to countIf() when logically
                                                                        equivalent
  --optimize_rewrite_aggregate_function_with_if arg                     Rewrite aggregate functions with if
                                                                        expression as argument when logically
                                                                        equivalent. For example, avg(if(cond,
                                                                        col, null)) can be rewritten to
                                                                        avgIf(cond, col)
  --optimize_rewrite_array_exists_to_has arg                            Rewrite arrayExists() functions to
                                                                        has() when logically equivalent. For
                                                                        example, arrayExists(x -> x = 1, arr)
                                                                        can be rewritten to has(arr, 1)
  --insert_shard_id arg                                                 If non-zero, when inserting into a
                                                                        distributed table, the data will be
                                                                        inserted into the shard
                                                                        `insert_shard_id` synchronously.
                                                                        Possible values range from 1 to
                                                                        `shards_number` of the corresponding
                                                                        distributed table
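
For example, a sketch of routing an insert to one specific shard (the table name `distributed_hits` is hypothetical):

```sql
-- Send these rows synchronously to shard 2 of the Distributed table,
-- bypassing the sharding key.
INSERT INTO distributed_hits SETTINGS insert_shard_id = 2
VALUES (1, 'example');
```
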
  --collect_hash_table_stats_during_aggregation arg                     Enable collecting hash table statistics
                                                                        to optimize memory allocation
  --max_entries_for_hash_table_stats arg                                How many entries the hash table
                                                                        statistics collected during
                                                                        aggregation are allowed to have
  --max_size_to_preallocate_for_aggregation arg                         For how many elements it is allowed to
                                                                        preallocate space in all hash tables in
                                                                        total before aggregation
  --kafka_disable_num_consumers_limit arg                               Disable limit on kafka_num_consumers
                                                                        that depends on the number of available
                                                                        CPU cores
  --enable_software_prefetch_in_aggregation arg                         Enable use of software prefetch in
                                                                        aggregation
  --allow_aggregate_partitions_independently arg                        Enable independent aggregation of
                                                                        partitions on separate threads when
                                                                        partition key suits group by key.
                                                                        Beneficial when number of partitions
                                                                        close to number of cores and partitions
                                                                        have roughly the same size
  --force_aggregate_partitions_independently arg                        Force the use of optimization when it
                                                                        is applicable, but heuristics decided
                                                                        not to use it
  --max_number_of_partitions_for_independent_aggregation arg            Maximal number of partitions in table
                                                                        to apply optimization
  --allow_experimental_query_deduplication arg                          Experimental data deduplication for
                                                                        SELECT queries based on part UUIDs
  --engine_file_empty_if_not_exists arg                                 Allows to select data from a file
                                                                        engine table without file
  --engine_file_truncate_on_insert arg                                  Enables or disables truncate before
                                                                        insert in file engine tables
  --engine_file_allow_create_multiple_files arg                         Enables or disables creating a new file
                                                                        on each insert in file engine tables if
                                                                        format has suffix.
  --engine_file_skip_empty_files arg                                    Allows to skip empty files in file
                                                                        table engine
  --engine_url_skip_empty_files arg                                     Allows to skip empty files in url table
                                                                        engine
  --enable_url_encoding arg                                             Allows to enable/disable
                                                                        decoding/encoding path in uri in URL
                                                                        table engine
  --allow_experimental_database_replicated arg                          Allow to create databases with
                                                                        Replicated engine
  --database_replicated_initial_query_timeout_sec arg                   How long initial DDL query should wait
                                                                        for Replicated database to process
                                                                        previous DDL queue entries
  --database_replicated_enforce_synchronous_settings arg                Enforces synchronous waiting for some
                                                                        queries (see also database_atomic_wait_
                                                                        for_drop_and_detach_synchronously,
                                                                        mutation_sync, alter_sync). Not
                                                                        recommended to enable these settings.
  --max_distributed_depth arg                                           Maximum distributed query depth
  --database_replicated_always_detach_permanently arg                   Execute DETACH TABLE as DETACH TABLE
                                                                        PERMANENTLY if database engine is
                                                                        Replicated
  --database_replicated_allow_only_replicated_engine arg                Allow to create only Replicated tables
                                                                        in database with engine Replicated
  --database_replicated_allow_replicated_engine_arguments arg           Allow to create Replicated tables with
                                                                        explicit engine arguments in database
                                                                        with engine Replicated
  --distributed_ddl_output_mode arg                                     Format of distributed DDL query result,
                                                                        one of: 'none', 'throw',
                                                                        'null_status_on_timeout', 'never_throw'
  --distributed_ddl_entry_format_version arg                            Compatibility version of distributed
                                                                        DDL (ON CLUSTER) queries
  --external_storage_max_read_rows arg                                  Limit maximum number of rows when table
                                                                        with external engine should flush
                                                                        history data. Now supported only for
                                                                        MySQL table engine, database engine,
                                                                        dictionary and MaterializedMySQL. If
                                                                        equal to 0, this setting is disabled
  --external_storage_max_read_bytes arg                                 Limit maximum number of bytes when
                                                                        table with external engine should flush
                                                                        history data. Now supported only for
                                                                        MySQL table engine, database engine,
                                                                        dictionary and MaterializedMySQL. If
                                                                        equal to 0, this setting is disabled
  --external_storage_connect_timeout_sec arg                            Connect timeout in seconds. Now
                                                                        supported only for MySQL
  --external_storage_rw_timeout_sec arg                                 Read/write timeout in seconds. Now
                                                                        supported only for MySQL
  --union_default_mode arg                                              Set default mode in UNION query.
                                                                        Possible values: empty string, 'ALL',
                                                                        'DISTINCT'. If empty, query without
                                                                        mode will throw exception.
  --intersect_default_mode arg                                          Set default mode in INTERSECT query.
                                                                        Possible values: empty string, 'ALL',
                                                                        'DISTINCT'. If empty, query without
                                                                        mode will throw exception.
  --except_default_mode arg                                             Set default mode in EXCEPT query.
                                                                        Possible values: empty string, 'ALL',
                                                                        'DISTINCT'. If empty, query without
                                                                        mode will throw exception.
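
As an illustration of the three default-mode settings above, setting a session default removes the need to spell the mode out in every query:

```sql
-- A bare UNION would throw with an empty default; this makes it UNION DISTINCT.
SET union_default_mode = 'DISTINCT';
SELECT 1 UNION SELECT 1;  -- treated as SELECT 1 UNION DISTINCT SELECT 1
```
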
  --optimize_aggregators_of_group_by_keys arg                           Eliminates min/max/any/anyLast
                                                                        aggregators of GROUP BY keys in SELECT
                                                                        section
  --optimize_group_by_function_keys arg                                 Eliminates functions of other keys in
                                                                        GROUP BY section
  --optimize_group_by_constant_keys arg                                 Optimize GROUP BY when all keys in
                                                                        block are constant
  --legacy_column_name_of_tuple_literal arg                             List all names of elements of large
                                                                        tuple literals in their column names
                                                                        instead of a hash. This setting
                                                                        exists only for compatibility
                                                                        reasons. It makes sense to set it to
                                                                        'true' while doing a rolling update
                                                                        of a cluster from a version lower
                                                                        than 21.7.
  --query_plan_enable_optimizations arg                                 Apply optimizations to query plan
  --query_plan_max_optimizations_to_apply arg                           Limit the total number of optimizations
                                                                        applied to query plan. If zero,
                                                                        ignored. If limit reached, throw
                                                                        exception
  --query_plan_filter_push_down arg                                     Allow to push down filter by predicate
                                                                        query plan step
  --query_plan_optimize_primary_key arg                                 Analyze primary key using query plan
                                                                        (instead of AST)
  --query_plan_read_in_order arg                                        Use query plan for read-in-order
                                                                        optimisation
  --query_plan_aggregation_in_order arg                                 Use query plan for aggregation-in-order
                                                                        optimisation
  --query_plan_remove_redundant_sorting arg                             Remove redundant sorting in query plan.
                                                                        For example, sorting steps related to
                                                                        ORDER BY clauses in subqueries
  --query_plan_remove_redundant_distinct arg                            Remove redundant Distinct step in query
                                                                        plan
  --regexp_max_matches_per_row arg                                      Max matches of any single regexp per
                                                                        row, used to safeguard
                                                                        'extractAllGroupsHorizontal' against
                                                                        consuming too much memory with greedy
                                                                        RE.
  --limit arg                                                           Limit the number of rows read from
                                                                        the 'end' of the result for a select
                                                                        query; the default 0 means no limit
  --offset arg                                                          Offset of rows read from the 'end' of
                                                                        the result for a select query
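
The `limit` and `offset` settings compose with any LIMIT/OFFSET written in the query itself; a small sketch:

```sql
-- Skip the first 5 result rows, then return at most 10.
SET limit = 10, offset = 5;
SELECT number FROM numbers(100);  -- yields numbers 5 through 14
```
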
  --function_range_max_elements_in_block arg                            Maximum number of values generated by
                                                                        function `range` per block of data (sum
                                                                        of array sizes for every row in a
                                                                        block, see also 'max_block_size' and
                                                                        'min_insert_block_size_rows'). It is a
                                                                        safety threshold.
  --function_sleep_max_microseconds_per_block arg                       Maximum number of microseconds the
                                                                        function `sleep` is allowed to sleep
                                                                        for each block. If a user called it
                                                                        with a larger value, it throws an
                                                                        exception. It is a safety threshold.
  --short_circuit_function_evaluation arg                               Setting for short-circuit function
                                                                        evaluation configuration. Possible
                                                                        values: 'enable' - use short-circuit
                                                                        function evaluation for functions that
                                                                        are suitable for it, 'disable' -
                                                                        disable short-circuit function
                                                                        evaluation, 'force_enable' - use
                                                                        short-circuit function evaluation for
                                                                        all functions.
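
Short-circuit evaluation matters when a branch of a conditional would error if evaluated eagerly; a sketch:

```sql
-- With short-circuit evaluation, intDiv is only computed for rows where
-- the condition selects it, so no division-by-zero error occurs for number = 0.
SET short_circuit_function_evaluation = 'enable';
SELECT if(number = 0, 0, intDiv(42, number)) FROM numbers(5);
```
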
  --storage_file_read_method arg                                        Method of reading data from storage
                                                                        file, one of: read, pread, mmap. The
                                                                        mmap method does not apply to
                                                                        clickhouse-server (it's intended for
                                                                        clickhouse-local).
  --local_filesystem_read_method arg                                    Method of reading data from local
                                                                        filesystem, one of: read, pread, mmap,
                                                                        io_uring, pread_threadpool. The
                                                                        'io_uring' method is experimental and
                                                                        does not work for Log, TinyLog,
                                                                        StripeLog, File, Set and Join, and
                                                                        other tables with append-able files in
                                                                        presence of concurrent reads and
                                                                        writes.
  --remote_filesystem_read_method arg                                   Method of reading data from remote
                                                                        filesystem, one of: read, threadpool.
  --local_filesystem_read_prefetch arg                                  Should use prefetching when reading
                                                                        data from local filesystem.
  --remote_filesystem_read_prefetch arg                                 Should use prefetching when reading
                                                                        data from remote filesystem.
  --read_priority arg                                                   Priority to read data from local
                                                                        filesystem or remote filesystem. Only
                                                                        supported for 'pread_threadpool' method
                                                                        for local filesystem and for
                                                                        `threadpool` method for remote
                                                                        filesystem.
  --merge_tree_min_rows_for_concurrent_read_for_remote_filesystem arg   If at least this many rows are read
                                                                        from one file, the reading can be
                                                                        parallelized when reading from remote
                                                                        filesystem.
  --merge_tree_min_bytes_for_concurrent_read_for_remote_filesystem arg  If at least this many bytes are read
                                                                        from one file, the reading can be
                                                                        parallelized when reading from remote
                                                                        filesystem.
  --remote_read_min_bytes_for_seek arg                                  Min bytes required for remote read
                                                                        (url, s3) to do seek, instead of read
                                                                        with ignore.
  --merge_tree_min_bytes_per_task_for_remote_reading arg                Min bytes to read per task.
  --merge_tree_use_const_size_tasks_for_remote_reading arg              Whether to use constant size tasks for
                                                                        reading from a remote table.
  --merge_tree_determine_task_size_by_prewhere_columns arg              Whether to use only prewhere columns
                                                                        size to determine reading task size.
  --async_insert arg                                                    If true, data from INSERT query is
                                                                        stored in queue and later flushed to
                                                                        table in background. If
                                                                        wait_for_async_insert is false, INSERT
                                                                        query is processed almost instantly,
                                                                        otherwise client will wait until data
                                                                        will be flushed to table
  --wait_for_async_insert arg                                           If true wait for processing of
                                                                        asynchronous insertion
  --wait_for_async_insert_timeout arg                                   Timeout for waiting for processing
                                                                        asynchronous insertion
  --async_insert_max_data_size arg                                      Maximum size in bytes of unparsed data
                                                                        collected per query before being
                                                                        inserted
  --async_insert_max_query_number arg                                   Maximum number of insert queries before
                                                                        being inserted
  --async_insert_busy_timeout_ms arg                                    Maximum time to wait before dumping
                                                                        collected data per query since the
                                                                        first data appeared
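
The asynchronous-insert settings above work together; a minimal fire-and-forget configuration might look like this (the table `events` is hypothetical):

```sql
-- Queue inserts server-side and flush in the background after at most
-- 200 ms or 1 MiB of unparsed data, whichever comes first; do not wait
-- for the flush before acknowledging the INSERT.
SET async_insert = 1,
    wait_for_async_insert = 0,
    async_insert_busy_timeout_ms = 200,
    async_insert_max_data_size = 1048576;
INSERT INTO events VALUES (now(), 'click');
```
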
  --remote_fs_read_max_backoff_ms arg                                   Max wait time when trying to read data
                                                                        for remote disk
  --remote_fs_read_backoff_max_tries arg                                Max attempts to read with backoff
  --enable_filesystem_cache arg                                         Use cache for remote filesystem. This
                                                                        setting does not turn on/off cache for
                                                                        disks (must be done via disk config),
                                                                        but allows to bypass cache for some
                                                                        queries if intended
  --enable_filesystem_cache_on_write_operations arg                     Write into cache on write operations.
                                                                        To take effect, this setting must
                                                                        also be enabled in the disk config
  --enable_filesystem_cache_log arg                                     Allows to record the filesystem caching
                                                                        log for each query
  --read_from_filesystem_cache_if_exists_otherwise_bypass_cache arg     Allow to use the filesystem cache in
                                                                        passive mode - benefit from the
                                                                        existing cache entries, but don't put
                                                                        more entries into the cache. If you
                                                                        enable this setting for heavy ad-hoc
                                                                        queries and leave it disabled for
                                                                        short real-time queries, this avoids
                                                                        cache thrashing by too-heavy queries
                                                                        and improves overall system
                                                                        efficiency.
  --skip_download_if_exceeds_query_cache arg                            Skip download from remote filesystem if
                                                                        exceeds query cache size
  --filesystem_cache_max_download_size arg                              Max remote filesystem cache size that
                                                                        can be downloaded by a single query
  --throw_on_error_from_cache_on_write_operations arg                   Ignore error from cache when caching on
                                                                        write operations (INSERT, merges)
  --load_marks_asynchronously arg                                       Load MergeTree marks asynchronously
  --enable_filesystem_read_prefetches_log arg                           Log to system.filesystem_prefetch_log
                                                                        during query. Should be used only for
                                                                        testing or debugging, not recommended
                                                                        to be turned on by default
  --allow_prefetched_read_pool_for_remote_filesystem arg                Prefer prefetched threadpool if all
                                                                        parts are on remote filesystem
  --allow_prefetched_read_pool_for_local_filesystem arg                 Prefer prefetched threadpool if all
                                                                        parts are on local filesystem
  --prefetch_buffer_size arg                                            The maximum size of the prefetch buffer
                                                                        to read from the filesystem.
  --filesystem_prefetch_step_bytes arg                                  Prefetch step in bytes. Zero means
                                                                        `auto` - approximately the best
                                                                        prefetch step will be auto deduced, but
                                                                        might not be 100% the best. The actual
                                                                        value might be different because of
                                                                        setting filesystem_prefetch_min_bytes_f
                                                                        or_single_read_task
  --filesystem_prefetch_step_marks arg                                  Prefetch step in marks. Zero means
                                                                        `auto` - approximately the best
                                                                        prefetch step will be auto deduced, but
                                                                        might not be 100% the best. The actual
                                                                        value might be different because of
                                                                        setting filesystem_prefetch_min_bytes_f
                                                                        or_single_read_task
  --filesystem_prefetch_min_bytes_for_single_read_task arg              Do not parallelize a read within one
                                                                        file if it is smaller than this
                                                                        number of bytes, i.e. a reader will
                                                                        not receive a read task smaller than
                                                                        this amount. This setting is
                                                                        recommended to avoid latency spikes
                                                                        for AWS GetObject requests
  --filesystem_prefetch_max_memory_usage arg                            Maximum memory usage for prefetches.
  --filesystem_prefetches_limit arg                                     Maximum number of prefetches. Zero
                                                                        means unlimited. The setting
                                                                        `filesystem_prefetch_max_memory_usage`
                                                                        is more recommended if you want to
                                                                        limit prefetches
  --use_structure_from_insertion_table_in_table_functions arg           Use structure from insertion table
                                                                        instead of schema inference from data.
                                                                        Possible values: 0 - disabled, 1 -
                                                                        enabled, 2 - auto
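
A sketch of how this setting changes schema handling when inserting through a table function (the table `target` and file `data.parquet` are hypothetical):

```sql
-- With value 1, INSERT ... SELECT through the file() table function reuses
-- the target table's structure instead of inferring a schema from the data.
SET use_structure_from_insertion_table_in_table_functions = 1;
INSERT INTO target SELECT * FROM file('data.parquet');
```
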
  --http_max_tries arg                                                  Max attempts to read via http.
  --http_retry_initial_backoff_ms arg                                   Min milliseconds for backoff, when
                                                                        retrying read via http
  --http_retry_max_backoff_ms arg                                       Max milliseconds for backoff, when
                                                                        retrying read via http
  --force_remove_data_recursively_on_drop arg                           Recursively remove data on DROP query.
                                                                        Avoids 'Directory not empty' error, but
                                                                        may silently remove detached data
  --check_table_dependencies arg                                        Check that DDL query (such as DROP
                                                                        TABLE or RENAME) will not break
                                                                        dependencies
  --check_referential_table_dependencies arg                            Check that DDL query (such as DROP
                                                                        TABLE or RENAME) will not break
                                                                        referential dependencies
  --use_local_cache_for_remote_storage arg                              Use local cache for remote storage
                                                                        such as HDFS or S3; used only for
                                                                        remote table engines
  --allow_unrestricted_reads_from_keeper arg                            Allow unrestricted (without condition
                                                                        on path) reads from system.zookeeper
                                                                        table, can be handy, but is not safe
                                                                        for zookeeper
  --allow_deprecated_database_ordinary arg                              Allow creating databases with the
                                                                        deprecated Ordinary engine
  --allow_deprecated_syntax_for_merge_tree arg                          Allow creating *MergeTree tables with
                                                                        the deprecated engine definition
                                                                        syntax
  --allow_asynchronous_read_from_io_pool_for_merge_tree arg             Use background I/O pool to read from
                                                                        MergeTree tables. This setting may
                                                                        increase performance for I/O bound
                                                                        queries
  --max_streams_for_merge_tree_reading arg                              If not zero, limits the number of
                                                                        reading streams for a MergeTree
                                                                        table.
  --force_grouping_standard_compatibility arg                           Make the GROUPING function return 1
                                                                        when an argument is not used as an
                                                                        aggregation key
  --schema_inference_use_cache_for_file arg                             Use cache in schema inference while
                                                                        using file table function
  --schema_inference_use_cache_for_s3 arg                               Use cache in schema inference while
                                                                        using s3 table function
  --schema_inference_use_cache_for_azure arg                            Use cache in schema inference while
                                                                        using azure table function
  --schema_inference_use_cache_for_hdfs arg                             Use cache in schema inference while
                                                                        using hdfs table function
  --schema_inference_use_cache_for_url arg                              Use cache in schema inference while
                                                                        using url table function
  --schema_inference_cache_require_modification_time_for_url arg        Use schema from cache for URL with last
                                                                        modification time validation (for urls
                                                                        with Last-Modified header)
  --compatibility arg                                                   Changes other settings according to
                                                                        the provided ClickHouse version. If a
                                                                        behaviour changed in some ClickHouse
                                                                        version by changing a setting's
                                                                        default, this compatibility setting
                                                                        controls those settings
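As a sketch of how this is typically used, a session can be pinned to the defaults of an older release (the version string below is illustrative):

```sql
-- Sketch: adopt the default settings of an older ClickHouse release.
-- The version string is illustrative.
SET compatibility = '22.3';
```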
  --additional_table_filters arg                                        Additional filter expression that is
                                                                        applied after reading from the
                                                                        specified table. Syntax: {'table1':
                                                                        'expression', 'database.table2':
                                                                        'expression'}
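A sketch of the map syntax (the table and column names here are hypothetical):

```sql
-- Hypothetical table/column names; the filter is applied to table1
-- as if it were appended to that table's WHERE clause.
SELECT count()
FROM table1
SETTINGS additional_table_filters = {'table1': 'x != 2'};
```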
  --additional_result_filter arg                                        Additional filter expression which
                                                                        would be applied to query result
  --workload arg                                                        Name of workload to be used to access
                                                                        resources
  --storage_system_stack_trace_pipe_read_timeout_ms arg                 Maximum time to read from a pipe for
                                                                        receiving information from the threads
                                                                        when querying the `system.stack_trace`
                                                                        table. This setting is used for testing
                                                                        purposes and not meant to be changed by
                                                                        users.
  --rename_files_after_processing arg                                   Rename successfully processed files
                                                                        according to the specified pattern;
                                                                        Pattern can include the following
                                                                        placeholders: `%a` (full original file
                                                                        name), `%f` (original filename without
                                                                        extension), `%e` (file extension with
                                                                        dot), `%t` (current timestamp in µs),
                                                                        and `%%` (% sign)
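A sketch of the placeholder pattern with the file() table function (the file and table names are hypothetical):

```sql
-- Hypothetical file and table; after a successful import, data.csv
-- would be renamed according to the pattern, e.g. using %f (name
-- without extension), %t (timestamp) and %e (extension with dot).
INSERT INTO imported
SELECT * FROM file('data.csv')
SETTINGS rename_files_after_processing = 'processed_%f_%t%e';
```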
  --parallelize_output_from_storages arg                                Parallelize output for reading step
                                                                        from storage. It allows parallelizing
                                                                        query processing right after reading
                                                                        from storage if possible
  --insert_deduplication_token arg                                      If not empty, used for duplicate
                                                                        detection instead of data digest
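A sketch of token-based deduplication (the table name is hypothetical); retrying the same INSERT with the same token lets the server treat it as a duplicate regardless of the data digest:

```sql
-- Hypothetical table; a retried insert carrying the same token is
-- recognized as a duplicate even if the inserted data differs.
INSERT INTO events SETTINGS insert_deduplication_token = 'batch-42'
VALUES (1), (2);
```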
  --count_distinct_optimization arg                                     Rewrite COUNT(DISTINCT ...) into a
                                                                        subquery with GROUP BY
  --throw_if_no_data_to_insert arg                                      Throw an exception on an INSERT with
                                                                        no data to insert; enabled by default
  --compatibility_ignore_auto_increment_in_create_table arg             Ignore AUTO_INCREMENT keyword in column
                                                                        declaration if true, otherwise return
                                                                        error. It simplifies migration from
                                                                        MySQL
  --multiple_joins_try_to_keep_original_names arg                       Do not add aliases to top level
                                                                        expression list on multiple joins
                                                                        rewrite
  --optimize_sorting_by_input_stream_properties arg                     Optimize sorting by sorting properties
                                                                        of input stream
  --insert_keeper_max_retries arg                                       Max retries for keeper operations
                                                                        during insert
  --insert_keeper_retry_initial_backoff_ms arg                          Initial backoff timeout for keeper
                                                                        operations during insert
  --insert_keeper_retry_max_backoff_ms arg                              Max backoff timeout for keeper
                                                                        operations during insert
  --insert_keeper_fault_injection_probability arg                       Approximate probability of failure for
                                                                        a keeper request during insert. Valid
                                                                        value is in interval [0.0f, 1.0f]
  --insert_keeper_fault_injection_seed arg                              0 - random seed, otherwise the setting
                                                                        value
  --force_aggregation_in_order arg                                      Force use of aggregation in order on
                                                                        remote nodes during distributed
                                                                        aggregation. PLEASE, NEVER CHANGE THIS
                                                                        SETTING VALUE MANUALLY!
  --http_max_request_param_data_size arg                                Limit on size of request data used as a
                                                                        query parameter in predefined HTTP
                                                                        requests.
  --function_json_value_return_type_allow_nullable arg                  Allow function JSON_VALUE to return
                                                                        nullable type.
  --function_json_value_return_type_allow_complex arg                   Allow function JSON_VALUE to return
                                                                        complex type, such as: struct, array,
                                                                        map.
  --use_with_fill_by_sorting_prefix arg                                 Columns preceding WITH FILL columns in
                                                                        the ORDER BY clause form a sorting
                                                                        prefix. Rows with different values in
                                                                        the sorting prefix are filled
                                                                        independently
  --allow_experimental_funnel_functions arg                             Enable experimental functions for
                                                                        funnel analysis.
  --allow_experimental_nlp_functions arg                                Enable experimental functions for
                                                                        natural language processing.
  --allow_experimental_hash_functions arg                               Enable experimental hash functions
  --allow_experimental_object_type arg                                  Allow Object and JSON data types
  --allow_experimental_annoy_index arg                                  Allows use of the Annoy index.
                                                                        Disabled by default because this
                                                                        feature is experimental
  --allow_experimental_usearch_index arg                                Allows use of the USearch index.
                                                                        Disabled by default because this
                                                                        feature is experimental
  --allow_experimental_s3queue arg                                      Allows use of the S3Queue engine.
                                                                        Disabled by default because this
                                                                        feature is experimental
  --max_limit_for_ann_queries arg                                       SELECT queries with LIMIT bigger than
                                                                        this setting cannot use ANN indexes.
                                                                        Helps to prevent memory overflows in
                                                                        ANN search indexes.
  --max_threads_for_annoy_index_creation arg                            Number of threads used to build Annoy
                                                                        indexes (0 means all cores, not
                                                                        recommended)
  --annoy_index_search_k_nodes arg                                      SELECT queries search up to this many
                                                                        nodes in Annoy indexes.
  --throw_on_unsupported_query_inside_transaction arg                   Throw exception if unsupported query is
                                                                        used inside transaction
  --wait_changes_become_visible_after_commit_mode arg                   Wait for committed changes to become
                                                                        actually visible in the latest snapshot
  --implicit_transaction arg                                            If enabled and not already inside a
                                                                        transaction, wraps the query inside a
                                                                        full transaction (begin + commit or
                                                                        rollback)
  --grace_hash_join_initial_buckets arg                                 Initial number of grace hash join
                                                                        buckets
  --grace_hash_join_max_buckets arg                                     Limit on the number of grace hash join
                                                                        buckets
  --optimize_distinct_in_order arg                                      Enable DISTINCT optimization if some
                                                                        columns in DISTINCT form a prefix of
                                                                        sorting. For example, prefix of sorting
                                                                        key in merge tree or ORDER BY statement
  --allow_experimental_undrop_table_query arg                           Allow using the UNDROP query to
                                                                        restore a dropped table within a
                                                                        limited time
  --keeper_map_strict_mode arg                                          Enforce additional checks during
                                                                        operations on KeeperMap. E.g. throw an
                                                                        exception on an insert for already
                                                                        existing key
  --extract_kvp_max_pairs_per_row arg                                   Max number of pairs that can be
                                                                        produced by the extractKeyValuePairs
                                                                        function. Used to safeguard against
                                                                        consuming too much memory.
  --session_timezone arg                                                The default timezone for the current
                                                                        session or query; the server default
                                                                        timezone if empty. Experimental and
                                                                        not suitable for production usage;
                                                                        this setting may be removed in the
                                                                        future due to potential caveats.
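A sketch of per-session usage (the timezone value is illustrative):

```sql
-- Experimental; the chosen timezone is illustrative.
SET session_timezone = 'Europe/Berlin';
SELECT timeZone();  -- expected to report the session timezone
```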
  --allow_create_index_without_type arg                                 Allow CREATE INDEX query without TYPE.
                                                                        Query will be ignored. Made for SQL
                                                                        compatibility tests.
  --create_index_ignore_unique arg                                      Ignore UNIQUE keyword in CREATE UNIQUE
                                                                        INDEX. Made for SQL compatibility
                                                                        tests.
  --print_pretty_type_names arg                                         Print pretty type names in DESCRIBE
                                                                        query and toTypeName() function
  --max_memory_usage_for_all_queries arg                                Obsolete setting, does nothing.
  --multiple_joins_rewriter_version arg                                 Obsolete setting, does nothing.
  --enable_debug_queries arg                                            Obsolete setting, does nothing.
  --allow_experimental_database_atomic arg                              Obsolete setting, does nothing.
  --allow_experimental_bigint_types arg                                 Obsolete setting, does nothing.
  --allow_experimental_window_functions arg                             Obsolete setting, does nothing.
  --allow_experimental_geo_types arg                                    Obsolete setting, does nothing.
  --async_insert_stale_timeout_ms arg                                   Obsolete setting, does nothing.
  --handle_kafka_error_mode arg                                         Obsolete setting, does nothing.
  --database_replicated_ddl_output arg                                  Obsolete setting, does nothing.
  --replication_alter_columns_timeout arg                               Obsolete setting, does nothing.
  --odbc_max_field_size arg                                             Obsolete setting, does nothing.
  --allow_experimental_map_type arg                                     Obsolete setting, does nothing.
  --merge_tree_clear_old_temporary_directories_interval_seconds arg     Obsolete setting, does nothing.
  --merge_tree_clear_old_parts_interval_seconds arg                     Obsolete setting, does nothing.
  --partial_merge_join_optimizations arg                                Obsolete setting, does nothing.
  --max_alter_threads arg                                               Obsolete setting, does nothing.
  --background_buffer_flush_schedule_pool_size arg                      User-level setting is deprecated, and
                                                                        it must be defined in the server
                                                                        configuration instead.
  --background_pool_size arg                                            User-level setting is deprecated, and
                                                                        it must be defined in the server
                                                                        configuration instead.
  --background_merges_mutations_concurrency_ratio arg                   User-level setting is deprecated, and
                                                                        it must be defined in the server
                                                                        configuration instead.
  --background_move_pool_size arg                                       User-level setting is deprecated, and
                                                                        it must be defined in the server
                                                                        configuration instead.
  --background_fetches_pool_size arg                                    User-level setting is deprecated, and
                                                                        it must be defined in the server
                                                                        configuration instead.
  --background_common_pool_size arg                                     User-level setting is deprecated, and
                                                                        it must be defined in the server
                                                                        configuration instead.
  --background_schedule_pool_size arg                                   User-level setting is deprecated, and
                                                                        it must be defined in the server
                                                                        configuration instead.
  --background_message_broker_schedule_pool_size arg                    User-level setting is deprecated, and
                                                                        it must be defined in the server
                                                                        configuration instead.
  --background_distributed_schedule_pool_size arg                       User-level setting is deprecated, and
                                                                        it must be defined in the server
                                                                        configuration instead.
  --max_remote_read_network_bandwidth_for_server arg                    User-level setting is deprecated, and
                                                                        it must be defined in the server
                                                                        configuration instead.
  --max_remote_write_network_bandwidth_for_server arg                   User-level setting is deprecated, and
                                                                        it must be defined in the server
                                                                        configuration instead.
  --async_insert_threads arg                                            User-level setting is deprecated, and
                                                                        it must be defined in the server
                                                                        configuration instead.
  --max_replicated_fetches_network_bandwidth_for_server arg             User-level setting is deprecated, and
                                                                        it must be defined in the server
                                                                        configuration instead.
  --max_replicated_sends_network_bandwidth_for_server arg               User-level setting is deprecated, and
                                                                        it must be defined in the server
                                                                        configuration instead.
  --default_database_engine arg                                         Obsolete setting, does nothing.
  --max_pipeline_depth arg                                              Obsolete setting, does nothing.
  --temporary_live_view_timeout arg                                     Obsolete setting, does nothing.
  --async_insert_cleanup_timeout_ms arg                                 Obsolete setting, does nothing.
  --optimize_fuse_sum_count_avg arg                                     Obsolete setting, does nothing.
  --drain_timeout arg                                                   Obsolete setting, does nothing.
  --backup_threads arg                                                  Obsolete setting, does nothing.
  --restore_threads arg                                                 Obsolete setting, does nothing.
  --optimize_duplicate_order_by_and_distinct arg                        Obsolete setting, does nothing.
  --query_plan_optimize_projection arg                                  Obsolete setting, does nothing.
  --format_csv_delimiter arg                                            The character to be considered as a
                                                                        delimiter in CSV data. If set as a
                                                                        string, the string must have a length
                                                                        of 1.
  --format_csv_allow_single_quotes arg                                  If it is set to true, allow strings in
                                                                        single quotes.
  --format_csv_allow_double_quotes arg                                  If it is set to true, allow strings in
                                                                        double quotes.
  --output_format_csv_crlf_end_of_line arg                              If it is set to true, the end of line
                                                                        in CSV format will be \r\n instead of
                                                                        \n.
  --input_format_csv_enum_as_number arg                                 Treat inserted enum values in CSV
                                                                        formats as enum indices
  --input_format_csv_arrays_as_nested_csv arg                           When reading Array from CSV, expect
                                                                        that its elements were serialized in
                                                                        nested CSV and then put into string.
                                                                        Example: "[""Hello"", ""world"",
                                                                        ""42"""" TV""]". Braces around array
                                                                        can be omitted.
  --input_format_skip_unknown_fields arg                                Skip columns with unknown names from
                                                                        input data (it works for JSONEachRow,
                                                                        -WithNames, -WithNamesAndTypes and TSKV
                                                                        formats).
  --input_format_with_names_use_header arg                              For -WithNames input formats this
                                                                        controls whether the format parser
                                                                        assumes that columns appear in the
                                                                        input exactly as they are specified
                                                                        in the header.
  --input_format_with_types_use_header arg                              For -WithNamesAndTypes input formats
                                                                        this controls whether format parser
                                                                        should check if data types from the
                                                                        input match data types from the header.
  --input_format_import_nested_json arg                                 Map nested JSON data to nested tables
                                                                        (it works for JSONEachRow format).
  --input_format_defaults_for_omitted_fields arg                        For input data calculate default
                                                                        expressions for omitted fields (it
                                                                        works for JSONEachRow, -WithNames,
                                                                        -WithNamesAndTypes formats).
  --input_format_csv_empty_as_default arg                               Treat empty fields in CSV input as
                                                                        default values.
  --input_format_tsv_empty_as_default arg                               Treat empty fields in TSV input as
                                                                        default values.
  --input_format_tsv_enum_as_number arg                                 Treat inserted enum values in TSV
                                                                        formats as enum indices.
  --input_format_null_as_default arg                                    Initialize null fields with default
                                                                        values if the data type of this field
                                                                        is not nullable and it is supported by
                                                                        the input format
  --input_format_arrow_case_insensitive_column_matching arg             Ignore case when matching Arrow columns
                                                                        with CH columns.
  --input_format_orc_row_batch_size arg                                 Batch size when reading ORC stripes.
  --input_format_orc_case_insensitive_column_matching arg               Ignore case when matching ORC columns
                                                                        with CH columns.
  --input_format_parquet_case_insensitive_column_matching arg           Ignore case when matching Parquet
                                                                        columns with CH columns.
  --input_format_parquet_preserve_order arg                             Avoid reordering rows when reading from
                                                                        Parquet files. Usually makes it much
                                                                        slower.
  --input_format_parquet_filter_push_down arg                           When reading Parquet files, skip whole
                                                                        row groups based on the WHERE/PREWHERE
                                                                        expressions and min/max statistics in
                                                                        the Parquet metadata.
  --input_format_allow_seeks arg                                        Allow seeks while reading in
                                                                        ORC/Parquet/Arrow input formats
  --input_format_orc_allow_missing_columns arg                          Allow missing columns while reading ORC
                                                                        input formats
  --input_format_orc_use_fast_decoder arg                               Use a faster ORC decoder
                                                                        implementation.
  --input_format_parquet_allow_missing_columns arg                      Allow missing columns while reading
                                                                        Parquet input formats
  --input_format_parquet_local_file_min_bytes_for_seek arg              Min bytes required for a local file
                                                                        read to perform a seek instead of
                                                                        reading with ignore, in the Parquet
                                                                        input format
  --input_format_arrow_allow_missing_columns arg                        Allow missing columns while reading
                                                                        Arrow input formats
  --input_format_hive_text_fields_delimiter arg                         Delimiter between fields in Hive Text
                                                                        File
  --input_format_hive_text_collection_items_delimiter arg               Delimiter between collection (array or
                                                                        map) items in Hive Text File
  --input_format_hive_text_map_keys_delimiter arg                       Delimiter between a pair of map
                                                                        key/values in Hive Text File
  --input_format_msgpack_number_of_columns arg                          The number of columns in inserted
                                                                        MsgPack data. Used for automatic schema
                                                                        inference from data.
  --output_format_msgpack_uuid_representation arg                       The way to output UUID in MsgPack
                                                                        format.
  --input_format_max_rows_to_read_for_schema_inference arg              The maximum number of rows to read for
                                                                        automatic schema inference
  --input_format_max_bytes_to_read_for_schema_inference arg             The maximum number of bytes to read
                                                                        for automatic schema inference
  --input_format_csv_use_best_effort_in_schema_inference arg            Use some tweaks and heuristics to infer
                                                                        schema in CSV format
  --input_format_tsv_use_best_effort_in_schema_inference arg            Use some tweaks and heuristics to infer
                                                                        schema in TSV format
  --input_format_csv_detect_header arg                                  Automatically detect header with names
                                                                        and types in CSV format
  --input_format_csv_allow_whitespace_or_tab_as_delimiter arg           Allow using spaces and tabs (\t) as
                                                                        field delimiters in CSV strings
  --input_format_csv_trim_whitespaces arg                               Trim space and tab (\t) characters at
                                                                        the beginning and end of CSV strings
  --input_format_csv_use_default_on_bad_values arg                      Set the column default value when CSV
                                                                        field deserialization fails on a bad
                                                                        value
  --input_format_csv_allow_variable_number_of_columns arg               Ignore extra columns in CSV input (if
                                                                        file has more columns than expected)
                                                                        and treat missing fields in CSV input
                                                                        as default values
  --input_format_tsv_allow_variable_number_of_columns arg               Ignore extra columns in TSV input (if
                                                                        file has more columns than expected)
                                                                        and treat missing fields in TSV input
                                                                        as default values
  --input_format_custom_allow_variable_number_of_columns arg            Ignore extra columns in CustomSeparated
                                                                        input (if file has more columns than
                                                                        expected) and treat missing fields in
                                                                        CustomSeparated input as default values
  --input_format_json_compact_allow_variable_number_of_columns arg      Ignore extra columns in
                                                                        JSONCompact(EachRow) input (if file has
                                                                        more columns than expected) and treat
                                                                        missing fields in JSONCompact(EachRow)
                                                                        input as default values
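The variable-number-of-columns settings above can be combined with an explicit structure when reading ragged files. A minimal sketch, assuming a `clickhouse-local` binary is on the PATH and `data.csv` is a hypothetical file:

```shell
# Extra CSV columns are ignored; missing trailing fields fall back to
# column defaults. File name and structure are hypothetical.
clickhouse-local \
  --input_format_csv_allow_variable_number_of_columns 1 \
  --query "SELECT * FROM file('data.csv', CSV, 'a String, b String, c String')"
```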
  --input_format_tsv_detect_header arg                                  Automatically detect header with names
                                                                        and types in TSV format
  --input_format_custom_detect_header arg                               Automatically detect header with names
                                                                        and types in CustomSeparated format
  --input_format_parquet_skip_columns_with_unsupported_types_in_schema_inference arg
                                                                        Skip columns with unsupported types
                                                                        during schema inference for the
                                                                        Parquet format
  --input_format_parquet_max_block_size arg                             Max block size for the Parquet reader.
  --input_format_protobuf_skip_fields_with_unsupported_types_in_schema_inference arg
                                                                        Skip fields with unsupported types
                                                                        during schema inference for the
                                                                        Protobuf format
  --input_format_capn_proto_skip_fields_with_unsupported_types_in_schema_inference arg
                                                                        Skip fields with unsupported types
                                                                        during schema inference for the
                                                                        CapnProto format
  --input_format_orc_skip_columns_with_unsupported_types_in_schema_inference arg
                                                                        Skip columns with unsupported types
                                                                        during schema inference for the ORC
                                                                        format
  --input_format_arrow_skip_columns_with_unsupported_types_in_schema_inference arg
                                                                        Skip columns with unsupported types
                                                                        during schema inference for the Arrow
                                                                        format
  --column_names_for_schema_inference arg                               The list of column names to use in
                                                                        schema inference for formats without
                                                                        column names. The format:
                                                                        'column1,column2,column3,...'
  --schema_inference_hints arg                                          The list of column names and types to
                                                                        use in schema inference for formats
                                                                        without column names. The format:
                                                                        'column_name1 column_type1,
                                                                        column_name2 column_type2, ...'
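Taken together, the schema-inference settings above let you name columns and pin individual types while the rest are inferred. A hedged sketch, assuming `clickhouse-local` and a hypothetical headerless `data.csv`:

```shell
# Name the columns, force one type via a hint, and cap how many rows
# are sampled for inference. All file and column names are hypothetical.
clickhouse-local \
  --column_names_for_schema_inference 'id,name,price' \
  --schema_inference_hints 'price Decimal(10, 2)' \
  --input_format_max_rows_to_read_for_schema_inference 1000 \
  --query "DESCRIBE file('data.csv', CSV)"
```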
  --schema_inference_make_columns_nullable arg                          If set to true, all inferred types will
                                                                        be Nullable in schema inference for
                                                                        formats without information about
                                                                        nullability.
  --input_format_json_read_bools_as_numbers arg                         Allow parsing bools as numbers in JSON
                                                                        input formats
  --input_format_json_try_infer_numbers_from_strings arg                Try to infer numbers from string fields
                                                                        during schema inference
  --input_format_json_validate_types_from_metadata arg                  For JSON/JSONCompact/
                                                                        JSONColumnsWithMetadata input formats,
                                                                        this controls whether the format
                                                                        parser should check that data types
                                                                        from the input metadata match the data
                                                                        types of the corresponding table
                                                                        columns
  --input_format_json_read_numbers_as_strings arg                       Allow parsing numbers as strings in
                                                                        JSON input formats
  --input_format_json_read_objects_as_strings arg                       Allow parsing JSON objects as strings
                                                                        in JSON input formats
  --input_format_json_read_arrays_as_strings arg                        Allow parsing JSON arrays as strings
                                                                        in JSON input formats
  --input_format_json_try_infer_named_tuples_from_objects arg           Try to infer named tuples from JSON
                                                                        objects in JSON input formats
  --input_format_json_infer_incomplete_types_as_strings arg             Use type String for keys that contain
                                                                        only Nulls or empty objects/arrays
                                                                        during schema inference in JSON input
                                                                        formats
  --input_format_json_named_tuples_as_objects arg                       Deserialize named tuple columns as JSON
                                                                        objects
  --input_format_json_ignore_unknown_keys_in_named_tuple arg            Ignore unknown keys in a JSON object
                                                                        for named tuples
  --input_format_json_defaults_for_missing_elements_in_named_tuple arg  Insert default value in a named tuple
                                                                        element if it's missing in the JSON
                                                                        object
  --input_format_try_infer_integers arg                                 Try to infer integers instead of floats
                                                                        during schema inference in text formats
  --input_format_try_infer_dates arg                                    Try to infer dates from string fields
                                                                        during schema inference in text formats
  --input_format_try_infer_datetimes arg                                Try to infer datetimes from string
                                                                        fields during schema inference in text
                                                                        formats
  --output_format_markdown_escape_special_characters arg                Escape special characters in Markdown
  --input_format_protobuf_flatten_google_wrappers arg                   Enable Google wrappers for regular
                                                                        non-nested columns, e.g.
                                                                        google.protobuf.StringValue 'str' for
                                                                        String column 'str'. For Nullable
                                                                        columns, empty wrappers are recognized
                                                                        as defaults and missing values as
                                                                        nulls
  --output_format_protobuf_nullables_with_google_wrappers arg           When serializing Nullable columns with
                                                                        Google wrappers, serialize default
                                                                        values as empty wrappers. If turned
                                                                        off, default and null values are not
                                                                        serialized
  --input_format_csv_skip_first_lines arg                               Skip the specified number of lines at
                                                                        the beginning of data in CSV format
  --input_format_tsv_skip_first_lines arg                               Skip the specified number of lines at
                                                                        the beginning of data in TSV format
  --input_format_csv_skip_trailing_empty_lines arg                      Skip trailing empty lines in CSV format
  --input_format_tsv_skip_trailing_empty_lines arg                      Skip trailing empty lines in TSV format
  --input_format_custom_skip_trailing_empty_lines arg                   Skip trailing empty lines in
                                                                        CustomSeparated format
  --input_format_native_allow_types_conversion arg                      Allow data types conversion in Native
                                                                        input format
  --date_time_input_format arg                                          Method to read DateTime from text input
                                                                        formats. Possible values: 'basic',
                                                                        'best_effort' and 'best_effort_us'.
  --date_time_output_format arg                                         Method to write DateTime to text
                                                                        output. Possible values: 'simple',
                                                                        'iso', 'unix_timestamp'.
  --interval_output_format arg                                          Textual representation of Interval.
                                                                        Possible values: 'kusto', 'numeric'.
  --input_format_ipv4_default_on_conversion_error arg                   Deserialization of IPv4 will use
                                                                        default values instead of throwing an
                                                                        exception on conversion error.
  --input_format_ipv6_default_on_conversion_error arg                   Deserialization of IPv6 will use
                                                                        default values instead of throwing an
                                                                        exception on conversion error.
  --bool_true_representation arg                                        Text used to represent the true bool
                                                                        value in TSV/CSV formats.
  --bool_false_representation arg                                       Text used to represent the false bool
                                                                        value in TSV/CSV formats.
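For example, the two bool representation settings above change how Bool values are rendered in CSV/TSV output; a sketch assuming a `clickhouse-local` binary:

```shell
# Output Bool values as yes/no instead of true/false in CSV.
clickhouse-local \
  --bool_true_representation 'yes' \
  --bool_false_representation 'no' \
  --query "SELECT CAST(1, 'Bool') AS b FORMAT CSV"
```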
  --input_format_values_interpret_expressions arg                       For Values format: if the field could
                                                                        not be parsed by the streaming parser,
                                                                        run the SQL parser and try to
                                                                        interpret it as an SQL expression.
  --input_format_values_deduce_templates_of_expressions arg             For Values format: if the field could
                                                                        not be parsed by the streaming parser,
                                                                        run the SQL parser, deduce a template
                                                                        of the SQL expression, try to parse
                                                                        all rows using the template, and then
                                                                        interpret the expression for all rows.
  --input_format_values_accurate_types_of_literals arg                  For Values format: when parsing and
                                                                        interpreting expressions using the
                                                                        template, check the actual type of the
                                                                        literal to avoid possible overflow and
                                                                        precision issues.
  --input_format_values_allow_data_after_semicolon arg                  For Values format: allow extra data
                                                                        after semicolon (used by the client to
                                                                        interpret comments).
  --input_format_avro_allow_missing_fields arg                          For Avro/AvroConfluent format: when a
                                                                        field is not found in the schema, use
                                                                        the default value instead of raising
                                                                        an error
  --input_format_avro_null_as_default arg                               For Avro/AvroConfluent format: insert
                                                                        the default value when a null is read
                                                                        into a non-Nullable column
  --format_binary_max_string_size arg                                   The maximum allowed size for String in
                                                                        RowBinary format. It prevents
                                                                        allocating a large amount of memory in
                                                                        case of corrupted data. 0 means there
                                                                        is no limit
  --format_binary_max_array_size arg                                    The maximum allowed size for Array in
                                                                        RowBinary format. It prevents
                                                                        allocating a large amount of memory in
                                                                        case of corrupted data. 0 means there
                                                                        is no limit
  --format_avro_schema_registry_url arg                                 For AvroConfluent format: Confluent
                                                                        Schema Registry URL.
  --output_format_json_quote_64bit_integers arg                         Controls quoting of 64-bit integers in
                                                                        JSON output format.
  --output_format_json_quote_denormals arg                              Enables '+nan', '-nan', '+inf', '-inf'
                                                                        outputs in JSON output format.
  --output_format_json_quote_decimals arg                               Controls quoting of decimals in JSON
                                                                        output format.
  --output_format_json_quote_64bit_floats arg                           Controls quoting of 64-bit float
                                                                        numbers in JSON output format.
  --output_format_json_escape_forward_slashes arg                       Controls escaping forward slashes for
                                                                        string outputs in JSON output format.
                                                                        This is intended for compatibility with
                                                                        JavaScript. Don't confuse with
                                                                        backslashes that are always escaped.
  --output_format_json_named_tuples_as_objects arg                      Serialize named tuple columns as JSON
                                                                        objects.
  --output_format_json_array_of_rows arg                                Output a JSON array of all rows in
                                                                        JSONEachRow(Compact) format.
  --output_format_json_validate_utf8 arg                                Validate UTF-8 sequences in JSON
                                                                        output formats. Does not affect
                                                                        JSON/JSONCompact/
                                                                        JSONColumnsWithMetadata, which always
                                                                        validate UTF-8
  --format_json_object_each_row_column_for_object_name arg              The name of the column that will be
                                                                        used as object names in the
                                                                        JSONObjectEachRow format. Column type
                                                                        should be String
  --output_format_pretty_max_rows arg                                   Rows limit for Pretty formats.
  --output_format_pretty_max_column_pad_width arg                       Maximum width to pad all values in a
                                                                        column in Pretty formats.
  --output_format_pretty_max_value_width arg                            Maximum width of a value to display in
                                                                        Pretty formats. Wider values are cut.
  --output_format_pretty_color arg                                      Use ANSI escape sequences to paint
                                                                        colors in Pretty formats
  --output_format_pretty_grid_charset arg                               Charset for printing grid borders.
                                                                        Available charsets: ASCII, UTF-8
                                                                        (default one).
  --output_format_parquet_row_group_size arg                            Target row group size in rows.
  --output_format_parquet_row_group_size_bytes arg                      Target row group size in bytes, before
                                                                        compression.
  --output_format_parquet_string_as_string arg                          Use Parquet String type instead of
                                                                        Binary for String columns.
  --output_format_parquet_fixed_string_as_fixed_byte_array arg          Use Parquet FIXED_LENGTH_BYTE_ARRAY
                                                                        type instead of Binary for FixedString
                                                                        columns.
  --output_format_parquet_version arg                                   Parquet format version for output
                                                                        format. Supported versions: 1.0, 2.4,
                                                                        2.6 and 2.latest (default)
  --output_format_parquet_compression_method arg                        Compression method for Parquet output
                                                                        format. Supported codecs: snappy, lz4,
                                                                        brotli, zstd, gzip, none (uncompressed)
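The Parquet output settings above compose; a sketch that writes a small Parquet file with zstd compression, format version 2.6, and Parquet String (rather than Binary) for String columns, assuming `clickhouse-local` and a hypothetical output path:

```shell
# Write ten generated rows to Parquet with the output settings above.
# out.parquet is a hypothetical path.
clickhouse-local \
  --output_format_parquet_compression_method zstd \
  --output_format_parquet_version 2.6 \
  --output_format_parquet_string_as_string 1 \
  --query "SELECT number, toString(number) AS s FROM numbers(10)
           INTO OUTFILE 'out.parquet' FORMAT Parquet"
```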
  --output_format_parquet_compliant_nested_types arg                    In parquet file schema, use name
                                                                        'element' instead of 'item' for list
                                                                        elements. This is a historical artifact
                                                                        of Arrow library implementation.
                                                                        Generally increases compatibility,
                                                                        except perhaps with some old versions
                                                                        of Arrow.
  --output_format_parquet_use_custom_encoder arg                        Use a faster Parquet encoder
                                                                        implementation.
  --output_format_parquet_parallel_encoding arg                         Do Parquet encoding in multiple
                                                                        threads. Requires
                                                                        output_format_parquet_use_custom_encoder.
  --output_format_parquet_data_page_size arg                            Target page size in bytes, before
                                                                        compression.
  --output_format_parquet_batch_size arg                                Check page size every this many rows.
                                                                        Consider decreasing if you have columns
                                                                        with average values size above a few
                                                                        KBs.
  --output_format_avro_codec arg                                        Compression codec used for output.
                                                                        Possible values: 'null', 'deflate',
                                                                        'snappy'.
  --output_format_avro_sync_interval arg                                Sync interval in bytes.
  --output_format_avro_string_column_pattern arg                        For Avro format: regexp of String
                                                                        columns to select as AVRO string.
  --output_format_avro_rows_in_file arg                                 Max rows in a file (if permitted by
                                                                        storage)
  --output_format_tsv_crlf_end_of_line arg                              If set to true, the end of line in TSV
                                                                        format will be \r\n instead of \n.
  --format_csv_null_representation arg                                  Custom NULL representation in CSV
                                                                        format
  --format_tsv_null_representation arg                                  Custom NULL representation in TSV
                                                                        format
  --output_format_decimal_trailing_zeros arg                            Output trailing zeros when printing
                                                                        Decimal values. E.g. 1.230000 instead
                                                                        of 1.23.
  --input_format_allow_errors_num arg                                   Maximum absolute number of errors
                                                                        while reading text formats (like CSV,
                                                                        TSV). On error, if at least the
                                                                        absolute or relative number of errors
                                                                        is lower than the corresponding value,
                                                                        skip until the next line and continue.
  --input_format_allow_errors_ratio arg                                 Maximum relative amount of errors
                                                                        while reading text formats (like CSV,
                                                                        TSV). On error, if at least the
                                                                        absolute or relative number of errors
                                                                        is lower than the corresponding value,
                                                                        skip until the next line and continue.
  --input_format_record_errors_file_path arg                            Path of the file used to record errors
                                                                        while reading text formats (CSV, TSV).
  --errors_output_format arg                                            Method to write Errors to text output.
  --format_schema arg                                                   Schema identifier (used by schema-based
                                                                        formats)
  --format_template_resultset arg                                       Path to file which contains format
                                                                        string for result set (for Template
                                                                        format)
  --format_template_row arg                                             Path to file which contains format
                                                                        string for rows (for Template format)
  --format_template_rows_between_delimiter arg                          Delimiter between rows (for Template
                                                                        format)
  --format_custom_escaping_rule arg                                     Field escaping rule (for
                                                                        CustomSeparated format)
  --format_custom_field_delimiter arg                                   Delimiter between fields (for
                                                                        CustomSeparated format)
  --format_custom_row_before_delimiter arg                              Delimiter before field of the first
                                                                        column (for CustomSeparated format)
  --format_custom_row_after_delimiter arg                               Delimiter after field of the last
                                                                        column (for CustomSeparated format)
  --format_custom_row_between_delimiter arg                             Delimiter between rows (for
                                                                        CustomSeparated format)
  --format_custom_result_before_delimiter arg                           Prefix before result set (for
                                                                        CustomSeparated format)
  --format_custom_result_after_delimiter arg                            Suffix after result set (for
                                                                        CustomSeparated format)
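The CustomSeparated settings compose a row layout from the delimiters above. A hedged sketch (the delimiter characters are arbitrary example choices):

```shell
# Hypothetical example: print each row as [a|b] with the
# CustomSeparated output format.
clickhouse-client \
  --query "SELECT number, number * 2 FROM system.numbers LIMIT 3 FORMAT CustomSeparated" \
  --format_custom_escaping_rule CSV \
  --format_custom_field_delimiter '|' \
  --format_custom_row_before_delimiter '[' \
  --format_custom_row_after_delimiter ']'
```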
  --format_regexp arg                                                   Regular expression (for Regexp format)
  --format_regexp_escaping_rule arg                                     Field escaping rule (for Regexp format)
  --format_regexp_skip_unmatched arg                                    Skip lines unmatched by regular
                                                                        expression (for Regexp format)
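For the Regexp input format, each capture group of the expression is read as one column of the target table. A sketch under assumed names (the table `t`, the file `input.txt`, and the pattern are all placeholders):

```shell
# Hypothetical example: load lines such as "id=1 name=foo" from
# input.txt into an existing table t(id UInt32, name String);
# unmatched lines are skipped rather than raising an error.
clickhouse-client \
  --query "INSERT INTO t FORMAT Regexp" \
  --format_regexp 'id=(\d+) name=(\w+)' \
  --format_regexp_escaping_rule Raw \
  --format_regexp_skip_unmatched 1 < input.txt
```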
  --output_format_enable_streaming arg                                  Enable streaming in output formats that
                                                                        support it.
  --output_format_write_statistics arg                                  Write statistics about read rows,
                                                                        bytes, time elapsed in suitable output
                                                                        formats.
  --output_format_pretty_row_numbers arg                                Add row numbers before each row for
                                                                        pretty output format
  --insert_distributed_one_random_shard arg                             If the setting is enabled, inserting
                                                                        into a distributed table will choose a
                                                                        random shard to write to when there is
                                                                        no sharding key
  --exact_rows_before_limit arg                                         When enabled, ClickHouse will provide
                                                                        an exact value for the
                                                                        rows_before_limit_at_least statistic,
                                                                        at the cost of reading the data
                                                                        before the limit completely
  --cross_to_inner_join_rewrite arg                                     Use an inner join instead of a
                                                                        comma/cross join if there are joining
                                                                        expressions in the WHERE section.
                                                                        Values: 0 - no rewrite; 1 - apply if
                                                                        possible for comma/cross joins; 2 -
                                                                        force rewrite of all comma joins, and
                                                                        rewrite cross joins if possible
  --output_format_arrow_low_cardinality_as_dictionary arg               Enable output LowCardinality type as
                                                                        Dictionary Arrow type
  --output_format_arrow_string_as_string arg                            Use Arrow String type instead of Binary
                                                                        for String columns
  --output_format_arrow_fixed_string_as_fixed_byte_array arg            Use Arrow FIXED_SIZE_BINARY type
                                                                        instead of Binary for FixedString
                                                                        columns.
  --output_format_arrow_compression_method arg                          Compression method for Arrow output
                                                                        format. Supported codecs: lz4_frame,
                                                                        zstd, none (uncompressed)
  --output_format_orc_string_as_string arg                              Use ORC String type instead of Binary
                                                                        for String columns
  --output_format_orc_compression_method arg                            Compression method for ORC output
                                                                        format. Supported codecs: lz4, snappy,
                                                                        zlib, zstd, none (uncompressed)
  --format_capn_proto_enum_comparising_mode arg                         How to map between ClickHouse Enum and
                                                                        CapnProto Enum
  --format_capn_proto_use_autogenerated_schema arg                      Use autogenerated CapnProto schema when
                                                                        format_schema is not set
  --format_protobuf_use_autogenerated_schema arg                        Use an autogenerated Protobuf schema
                                                                        when format_schema is not set
  --output_format_schema arg                                            The path to the file where the
                                                                        automatically generated schema will be
                                                                        saved
  --input_format_mysql_dump_table_name arg                              Name of the table in MySQL dump from
                                                                        which to read data
  --input_format_mysql_dump_map_column_names arg                        Match columns from table in MySQL dump
                                                                        and columns from ClickHouse table by
                                                                        names
  --output_format_sql_insert_max_batch_size arg                         The maximum number of rows in one
                                                                        INSERT statement.
  --output_format_sql_insert_table_name arg                             The name of table in the output INSERT
                                                                        query
  --output_format_sql_insert_include_column_names arg                   Include column names in INSERT query
  --output_format_sql_insert_use_replace arg                            Use REPLACE statement instead of INSERT
  --output_format_sql_insert_quote_names arg                            Quote column names with '`' characters
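The SQLInsert output settings above control how results are rendered as INSERT statements. A hedged sketch of combining them (the table name `my_table` is an arbitrary choice):

```shell
# Hypothetical example: emit results as INSERT statements, two rows
# per statement, with column names included and quoted in backticks.
clickhouse-client \
  --query "SELECT number AS id FROM system.numbers LIMIT 5 FORMAT SQLInsert" \
  --output_format_sql_insert_table_name my_table \
  --output_format_sql_insert_max_batch_size 2 \
  --output_format_sql_insert_include_column_names 1 \
  --output_format_sql_insert_quote_names 1
```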
  --output_format_bson_string_as_string arg                             Use BSON String type instead of Binary
                                                                        for String columns.
  --input_format_bson_skip_fields_with_unsupported_types_in_schema_inference arg
                                                                        Skip fields with unsupported types
                                                                        during schema inference for the BSON
                                                                        format.
  --format_display_secrets_in_show_and_select arg                       Do not hide secrets in SHOW and SELECT
                                                                        queries.
  --regexp_dict_allow_hyperscan arg                                     Allow regexp_tree dictionary using
                                                                        Hyperscan library.
  --regexp_dict_flag_case_insensitive arg                               Use case-insensitive matching for a
                                                                        regexp_tree dictionary. Can be
                                                                        overridden in individual expressions
                                                                        with (?i) and (?-i).
  --regexp_dict_flag_dotall arg                                         Allow '.' to match newline characters
                                                                        for a regexp_tree dictionary.
  --dictionary_use_async_executor arg                                   Execute a pipeline for reading from a
                                                                        dictionary with several threads. It's
                                                                        supported only by DIRECT dictionary
                                                                        with CLICKHOUSE source.
  --precise_float_parsing arg                                           Prefer more precise (but slower) float
                                                                        parsing algorithm
  --input_format_arrow_import_nested arg                                Obsolete setting, does nothing.
  --input_format_parquet_import_nested arg                              Obsolete setting, does nothing.
  --input_format_orc_import_nested arg                                  Obsolete setting, does nothing.