
Fix ipv6 parser #45871

Merged

merged 2 commits into master from fix-ipv6-parser on Feb 1, 2023

Conversation

yakov-olkhovskiy (Member) commented on Feb 1, 2023:

Changelog category (leave one):

  • Bug Fix (user-visible misbehavior in official stable or prestable release)

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

Fix IPv6 parser for mixed addresses with a missing first IPv4 octet (like `::.1.2.3`)

The error was reported as follows:

milovidov-desktop :) create table test (str String) engine = Memory;

CREATE TABLE test
(
   `str` String
)
ENGINE = Memory

Query id: a56da8d1-9653-4eee-a2aa-9db7f6109355

Ok.

0 rows in set. Elapsed: 0.000 sec. 

milovidov-desktop :) insert into test values ('::.1.2.3'), ('::255.255.255.255');

INSERT INTO test FORMAT Values

Query id: 2213c496-67fd-4308-9159-e2bf4f3160e0

Ok.

2 rows in set. Elapsed: 0.001 sec. 

milovidov-desktop :) select toIPv6(str) from test;

SELECT toIPv6(str)
FROM test

Query id: a06da850-3562-4dab-a0c0-af6b43d5b7b1

Aborted (core dumped)
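
For illustration, here is a minimal, hypothetical C++ sketch of the failure mode; it is not the actual ClickHouse parser code. With mixed notation, a parser that reaches a `.` typically backtracks to the start of the current hex group to re-read it as the first IPv4 octet; after `::` that group is empty, so an unchecked backtrack steps outside the input. The guard below is only the conceptual shape of the fix:

```cpp
#include <cstdio>
#include <cstring>

// Hypothetical sketch, not the ClickHouse implementation. On seeing '.',
// an IPv6 parser typically backtracks to the start of the current hex
// group and re-parses it as the first IPv4 octet. For "::.1.2.3" that
// group is empty, so the backtrack must be guarded.
static bool parseMixedTail(const char * group_start, const char * dot)
{
    if (dot == nullptr || dot == group_start)
        return false;  // empty group before '.': reject instead of underflowing
    // ... a real parser would re-parse [group_start, end) as a dotted quad ...
    return true;
}

int main()
{
    const char * s = "::.1.2.3";
    const char * group_start = s + 2;                 // right after "::"
    const char * dot = std::strchr(group_start, '.');
    std::printf("%s -> %s\n", s, parseMixedTail(group_start, dot) ? "ok" : "rejected");
}
```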

@robot-ch-test-poll3 added the pr-bugfix label (Pull request with bugfix, not backported by default) on Feb 1, 2023

alexey-milovidov (Member) commented:

Add this query for improvised fuzzing:

SELECT count() FROM numbers_mt(100000000) WHERE NOT ignore(toIPv6OrZero(randomString(8)))

@alexey-milovidov self-assigned this on Feb 1, 2023
@KochetovNicolai merged commit 6cd0b51 into master on Feb 1, 2023
@KochetovNicolai deleted the fix-ipv6-parser branch on February 1, 2023 13:28
robot-clickhouse added a commit that referenced this pull request on Feb 1, 2023
@robot-ch-test-poll4 added the pr-backports-created label (Backport PRs are successfully created, it won't be processed by CI script anymore) on Feb 1, 2023
alexey-milovidov added a commit that referenced this pull request Feb 2, 2023
liuneng1994 added a commit to Kyligence/ClickHouse that referenced this pull request Feb 22, 2023
* Prefix more typedefs in DB namespace with "Gin"

* Fixed tests

* Update SwapHelper.h

* Move some code around (no other changes)

* "segment file" --> "segment metadata file"

* Cosmetics

* Rename MergeTreeIndexGin.h/cpp to MergeTreeIndexInverted.h/cpp

* Cosmetics

* Remove superfluous check (the same is checked in MergeTreeIndices.cpp)

* Use GinFilters typedef where possible

* Fixing build

* Suffix "GinFilter" --> "Inverted"

* Fix ASan builds for glibc 2.36+ (use RTLD_NEXT for ThreadFuzzer interceptors)

Recently I noticed that clickhouse compiled with ASan does not work with
newer glibc 2.36+. Before, I thought this was only about compiling with
an old glibc and running with a new one; however, that was not correct:
ASan simply does not work with glibc 2.36+.

Here is a simple reproducer [1]:

    $ cat > test-asan.cpp <<EOL
    #include <pthread.h>
    int main()
    {
        // something broken in ASan in interceptor for __pthread_mutex_lock
        // and only since glibc 2.36, and for pthread_mutex_lock everything is OK
        pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
        return __pthread_mutex_lock(&mutex);
    }
    EOL
    $ clang -g3 -o test-asan test-asan.cpp -fsanitize=address
    $ ./test-asan
    AddressSanitizer:DEADLYSIGNAL
    =================================================================
    ==15659==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x000000000000 bp 0x7fffffffccb0 sp 0x7fffffffcb98 T0)
    ==15659==Hint: pc points to the zero page.
    ==15659==The signal is caused by a READ memory access.
    ==15659==Hint: address points to the zero page.
        #0 0x0  (<unknown module>)
        #1 0x7ffff7cda28f  (/usr/lib/libc.so.6+0x2328f) (BuildId: 1e94beb079e278ac4f2c8bce1f53091548ea1584)

    AddressSanitizer can not provide additional info.
    SUMMARY: AddressSanitizer: SEGV (<unknown module>)
    ==15659==ABORTING

  [1]: https://gist.github.com/azat/af073e57a248e04488b21068643f079e

I started digging into the glibc code. There were some changes in glibc
that moved the pthread functions out of libpthread.so.0 into libc.so.6
(somewhere between 2.31 and 2.35), but the problem pops up only with
2.36; 2.35 works fine.

After this I looked into the changes between 2.35 and 2.36 and found
this patch [2], "dlsym: Make RTLD_NEXT prefer default version
definition [BZ #14932]", which fixes this bug [3].

  [2]: https://sourceware.org/git/?p=glibc.git;a=commit;h=efa7936e4c91b1c260d03614bb26858fbb8a0204
  [3]: https://sourceware.org/bugzilla/show_bug.cgi?id=14932

The problem with using the DL_LOOKUP_RETURN_NEWEST flag for RTLD_NEXT is
that it does not resolve hidden symbols (and __pthread_mutex_lock is
indeed hidden).

Here is a sample that will show the difference [4]:

    $ cat > test-dlsym.c <<EOL
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>

    int main()
    {
        void *p = dlsym(RTLD_NEXT, "__pthread_mutex_lock");
        printf("__pthread_mutex_lock: %p (via RTLD_NEXT)\n", p);
        return 0;
    }
    EOL

    # glibc 2.35: __pthread_mutex_lock: 0x7ffff7e27f70 (via RTLD_NEXT)
    # glibc 2.36: __pthread_mutex_lock: (nil) (via RTLD_NEXT)

  [4]: https://gist.github.com/azat/3b5f2ae6011bef2ae86392cea7789eb7

But ThreadFuzzer uses the internal symbols to wrap
pthread_mutex_lock/pthread_mutex_unlock, which are intercepted by ASan,
and this leads to a NULL dereference.

The fix was obvious: just use dlsym(RTLD_NEXT). However, on older
glibc versions this leads to endless recursion (see comments in the code),
but only with jemalloc [5], and even though the sanitizer builds do not use
jemalloc, the code of ThreadFuzzer is generic and I don't want to guard it
with more preprocessor macros.

  [5]: https://gist.github.com/azat/588d9c72c1e70fc13ebe113197883aa2

So we have to use RTLD_NEXT only for ASan.
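
Conceptually, the resulting wrapper looks something like the following sketch; the names are hypothetical stand-ins, not the actual ThreadFuzzer code, and the non-ASan branch assumes a glibc that still exports the internal alias:

```cpp
#if !defined(_GNU_SOURCE)
#    define _GNU_SOURCE  // for RTLD_NEXT
#endif
#include <dlfcn.h>
#include <pthread.h>

// Gate the RTLD_NEXT path on ASan only (clang exposes this via __has_feature).
#if defined(__has_feature)
#    if __has_feature(address_sanitizer)
#        define USE_RTLD_NEXT 1
#    endif
#endif

// The internal alias; hidden (unresolvable via RTLD_NEXT) on glibc 2.36+.
extern "C" int __pthread_mutex_lock(pthread_mutex_t *);

static int wrappedMutexLock(pthread_mutex_t * m)
{
#if defined(USE_RTLD_NEXT)
    // Under ASan, go through dlsym(RTLD_NEXT) so the interceptor chain works.
    using Fn = int (*)(pthread_mutex_t *);
    static Fn next = reinterpret_cast<Fn>(dlsym(RTLD_NEXT, "pthread_mutex_lock"));
    return next(m);
#else
    // Elsewhere, call the internal symbol directly, to avoid the dlsym
    // recursion seen with jemalloc on older glibc.
    return __pthread_mutex_lock(m);
#endif
}

int main()
{
    pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
    return wrappedMutexLock(&mutex);
}
```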

There is also one more interesting issue: if you compile with a clang
that itself had been compiled against a newer glibc (i.e. 2.36), you will
get the following error:

    $ podman run --privileged -v $PWD/.cmake-asan/programs:/root/bin -e PATH=/bin:/root/bin -e --rm -it ubuntu-dev-v3 clickhouse
    ==1==ERROR: AddressSanitizer failed to allocate 0x0 (0) bytes of SetAlternateSignalStack (error code: 22)
    ...
    ==1==End of process memory map.
    AddressSanitizer: CHECK failed: sanitizer_common.cpp:53 "((0 && "unable to mmap")) != (0)" (0x0, 0x0) (tid=1)
        <empty stack>

The problem is that since GLIBC_2.31, `SIGSTKSZ` is a call to
`getconf(_SC_MINSIGSTKSZ)`, but older glibc does not have it, so `-1`
will be returned and used as `SIGSTKSZ` instead.
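
A tiny probe of that failure mode (a hypothetical standalone check, not ClickHouse code): built against newer glibc headers but run on an older glibc runtime, the call below returns -1, which then flows into the sigaltstack sizing:

```cpp
#include <cstdio>
#include <unistd.h>

int main()
{
    // On a runtime glibc that lacks _SC_MINSIGSTKSZ this returns -1,
    // and per the note above -1 then gets used as SIGSTKSZ.
    long v = sysconf(_SC_MINSIGSTKSZ);
    std::printf("_SC_MINSIGSTKSZ -> %ld\n", v);
}
```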

The workaround is to disable the alternative stack:

    $ podman run --privileged -v $PWD/.cmake-asan/programs:/root/bin -e PATH=/bin:/root/bin -e ASAN_OPTIONS=use_sigaltstack=0 --rm -it ubuntu-dev-v3 clickhouse client --version
    ClickHouse client version 22.13.1.1.

Fixes: ClickHouse#43426
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>

* Initial inverted index docs

* Fix

* fix typos

* Fix the case when merge-base does not show the oldest commit

* Fix bad comparison

* Fix typos

* Don't duplicate writing guide, instead point to existing writing guide

* update docs for async insert deduplication

* Fixing build

* fix typo

* Update src/Functions/FunctionsConversion.h

Co-authored-by: Alexander Gololobov <440544+davenger@users.noreply.github.com>

* Make a bit better

* Add docs

* Introduce non-throwing variants of hasToken

* Document functions

* Produce a null map of the correct size

* Update BoundedReadBuffer.cpp

* Better test

* Review fixes

* Fix aborts in arrow lib

* Better comment

* Add more retries to AST Fuzzer

* Update docs/en/operations/settings/merge-tree-settings.md

Co-authored-by: Dan Roscigno <dan@roscigno.com>

* Update docs/en/operations/settings/merge-tree-settings.md

Co-authored-by: Dan Roscigno <dan@roscigno.com>

* Update docs/en/operations/settings/settings.md

Co-authored-by: Dan Roscigno <dan@roscigno.com>

* Update docs/en/operations/settings/settings.md

Co-authored-by: Dan Roscigno <dan@roscigno.com>

* Update docs/en/operations/settings/settings.md

Co-authored-by: Dan Roscigno <dan@roscigno.com>

* address comments

* Fix possible deadlock with allow_asynchronous_read_from_io_pool_for_merge_tree in case of exception from ThreadPool::schedule

* Fix crash when `ListObjects` request fails (ClickHouse#45371)

* Fix schema inference in hdfsCluster

* Cleanup PullingAsyncPipelineExecutor::cancel()

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>

* Catch exception on query cancellation

We still want to join the thread; yes, it will be done in the dtor anyway,
but this looks better.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>

* Fix a possible (likely distributed) query hang

Recently I saw the following: the client executed a long distributed query
and terminated the connection; in this case, query cancellation is done
from the PullingAsyncPipelineExecutor dtor. During cancellation one of the
nodes sent ECONNRESET, which led to an exception from
PullingAsyncPipelineExecutor::cancel(), and this in turn led to a deadlock
in which multiple threads wait for each other, because cancel() for the
LazyOutputFormat was never called.
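
The shape of the remedy, as a minimal self-contained sketch (hypothetical stand-in type, not the real executor): cancellation must swallow exceptions from the remote side so the remaining cancel and join steps still run:

```cpp
#include <cstdio>
#include <stdexcept>

struct Executor  // stand-in for PullingAsyncPipelineExecutor, illustration only
{
    void cancelRemoteQueries() { throw std::runtime_error("Connection reset by peer"); }
    void cancelOutputFormat() noexcept { std::puts("LazyOutputFormat cancelled"); }
    void joinThreads() noexcept { std::puts("threads joined"); }

    void cancel() noexcept
    {
        try
        {
            cancelRemoteQueries();  // may throw, e.g. ECONNRESET while sending cancel
        }
        catch (...)
        {
            std::puts("exception during cancel, logged and ignored");
        }
        cancelOutputFormat();  // without this, producer threads block forever
        joinThreads();
    }
};

int main() { Executor{}.cancel(); }
```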

Here is the relevant portion of the logs:

    2023.01.04 08:26:09.236208 [ 37968 ] {f2ed6149-146d-4a3d-874a-b0b751c7b567} <Debug> executeQuery: (from 10.61.13.253:44266, user: default)  TooLongDistributedQueryToPost
    ...
    2023.01.04 08:26:09.262424 [ 37968 ] {f2ed6149-146d-4a3d-874a-b0b751c7b567} <Trace> MergeTreeInOrderSelectProcessor: Reading 1 ranges in order from part 9_330_538_18, approx. 61440 rows starting from 0
    2023.01.04 08:26:09.266399 [ 26788 ] {f2ed6149-146d-4a3d-874a-b0b751c7b567} <Trace> Connection (s4.ch:9000): Connecting. Database: (not specified). User: default
    2023.01.04 08:26:09.266849 [ 26788 ] {f2ed6149-146d-4a3d-874a-b0b751c7b567} <Trace> Connection (s4.ch:9000): Connected to ClickHouse server version 22.10.1.
    2023.01.04 08:26:09.267165 [ 26788 ] {f2ed6149-146d-4a3d-874a-b0b751c7b567} <Debug> Connection (s4.ch:9000): Sent data for 2 scalars, total 2 rows in 3.1587e-05 sec., 62635 rows/sec., 68.00 B (2.03 MiB/sec.), compressed 0.4594594594594595 times to 148.00 B (4.41 MiB/sec.)
    2023.01.04 08:39:13.047170 [ 37968 ] {f2ed6149-146d-4a3d-874a-b0b751c7b567} <Error> PullingAsyncPipelineExecutor: Code: 210. DB::NetException: Connection reset by peer, while writing to socket (10.7.142.115:9000). (NETWORK_ERROR), Stack trace (when copying this message, always include the lines below):

    0. ./.build/./contrib/libcxx/include/exception:133: Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x1818234c in /usr/lib/debug/usr/bin/clickhouse.debug
    1. ./.build/./src/Common/Exception.cpp:69: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x1004fbda in /usr/lib/debug/usr/bin/clickhouse.debug
    2. ./.build/./src/Common/NetException.h:12: DB::WriteBufferFromPocoSocket::nextImpl() @ 0x14e352f3 in /usr/lib/debug/usr/bin/clickhouse.debug
    3. ./.build/./src/IO/BufferBase.h:39: DB::Connection::sendCancel() @ 0x15c21e6b in /usr/lib/debug/usr/bin/clickhouse.debug
    4. ./.build/./src/Client/MultiplexedConnections.cpp:0: DB::MultiplexedConnections::sendCancel() @ 0x15c4d5b7 in /usr/lib/debug/usr/bin/clickhouse.debug
    5. ./.build/./src/QueryPipeline/RemoteQueryExecutor.cpp:627: DB::RemoteQueryExecutor::tryCancel(char const*, std::__1::unique_ptr<DB::RemoteQueryExecutorReadContext, std::__1::default_delete<DB::RemoteQueryExecutorReadContext> >*) @ 0x14446c09 in /usr/lib/debug/usr/bin/clickhouse.debug
    6. ./.build/./contrib/libcxx/include/__iterator/wrap_iter.h:100: DB::ExecutingGraph::cancel() @ 0x15d2c0de in /usr/lib/debug/usr/bin/clickhouse.debug
    7. ./.build/./contrib/libcxx/include/__memory/unique_ptr.h:300: DB::PullingAsyncPipelineExecutor::cancel() @ 0x15d32055 in /usr/lib/debug/usr/bin/clickhouse.debug
    8. ./.build/./contrib/libcxx/include/__memory/unique_ptr.h:312: DB::PullingAsyncPipelineExecutor::~PullingAsyncPipelineExecutor() @ 0x15d31f4f in /usr/lib/debug/usr/bin/clickhouse.debug
    9. ./.build/./src/Server/TCPHandler.cpp:0: DB::TCPHandler::processOrdinaryQueryWithProcessors() @ 0x15cde919 in /usr/lib/debug/usr/bin/clickhouse.debug
    10. ./.build/./src/Server/TCPHandler.cpp:0: DB::TCPHandler::runImpl() @ 0x15cd8554 in /usr/lib/debug/usr/bin/clickhouse.debug
    11. ./.build/./src/Server/TCPHandler.cpp:1904: DB::TCPHandler::run() @ 0x15ce6479 in /usr/lib/debug/usr/bin/clickhouse.debug
    12. ./.build/./contrib/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x18074f07 in /usr/lib/debug/usr/bin/clickhouse.debug
    13. ./.build/./contrib/libcxx/include/__memory/unique_ptr.h:54: Poco::Net::TCPServerDispatcher::run() @ 0x180753ed in /usr/lib/debug/usr/bin/clickhouse.debug
    14. ./.build/./contrib/poco/Foundation/src/ThreadPool.cpp:213: Poco::PooledThread::run() @ 0x181e3807 in /usr/lib/debug/usr/bin/clickhouse.debug
    15. ./.build/./contrib/poco/Foundation/include/Poco/SharedPtr.h:156: Poco::ThreadImpl::runnableEntry(void*) @ 0x181e1483 in /usr/lib/debug/usr/bin/clickhouse.debug
    16. ? @ 0x7ffff7e55fd4 in ?
    17. ? @ 0x7ffff7ed666c in ?
     (version 22.10.1.1)

And here is the state of the threads:

<details>

<summary>system.stack_trace</summary>

```sql
SELECT
    arrayStringConcat(arrayMap(x -> demangle(addressToSymbol(x)), trace), '\n') AS sym
FROM system.stack_trace
WHERE query_id = 'f2ed6149-146d-4a3d-874a-b0b751c7b567'
SETTINGS allow_introspection_functions=1

Row 1:
──────
sym:
pthread_cond_wait
std::__1::condition_variable::wait(std::__1::unique_lock<std::__1::mutex>&)
bool ConcurrentBoundedQueue<DB::Chunk>::emplaceImpl<DB::Chunk>(std::__1::optional<unsigned long>, DB::Chunk&&)
DB::IOutputFormat::work()
DB::ExecutionThreadContext::executeTask()
DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*)

Row 2:
──────
sym:
pthread_cond_wait
Poco::EventImpl::waitImpl()
DB::PipelineExecutor::joinThreads()
DB::PipelineExecutor::executeImpl(unsigned long)
DB::PipelineExecutor::execute(unsigned long)

Row 3:
──────
sym:
pthread_cond_wait
Poco::EventImpl::waitImpl()
DB::PullingAsyncPipelineExecutor::Data::~Data()
DB::PullingAsyncPipelineExecutor::~PullingAsyncPipelineExecutor()
DB::TCPHandler::processOrdinaryQueryWithProcessors()
DB::TCPHandler::runImpl()
DB::TCPHandler::run()
Poco::Net::TCPServerConnection::start()
Poco::Net::TCPServerDispatcher::run()
Poco::PooledThread::run()
Poco::ThreadImpl::runnableEntry(void*)
```

</details>

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>

* Remove unnecessary getTotalRowCount function calls

* Fix style

* Use new copy s3 functions in S3ObjectStorage.

* Forward declaration of ConcurrentBoundedQueue in ThreadStatus

ThreadStatus is a header that, when changed, recompiles almost all
ClickHouse modules.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>

* Revert "Merge pull request ClickHouse#44922 from azat/dist/async-INSERT-metrics"

There are the following problems with this patch:
- Loses files on exception
- An existing current_batch.txt on startup leads to an ENOENT error and a
  hang of distributed sends without ATTACH/DETACH
- Race between creating the queue for sending at table startup and
  INSERT; if it had been created from INSERT, then it will not be
  initialized from disk

They were addressed in ClickHouse#45491, but that makes the code more
complex, and since the release is likely coming soon, it is better to
revert the change.

This reverts commit 94604f7, reversing
changes made to 80f6a45.

* Fix possible in-use table after DETACH

Right now, in the case of DETACH/ATTACH, there can be a window in which
someone still uses the table after it had been DETACH'ed; the common
example here is MV handling.

It happens because TableExclusiveLockHolder does not guard the
shared_ptr of the IStorage, so whoever holds it can keep using it. If
ATTACH is then done for this table, you can end up with multiple
instances of it.

This is not possible for DROP, because before using a table you must
lock it, and after the table had been DROP'ed you cannot lock it anymore.

So let's do the same for DETACH.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
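
Conceptually, the invariant being added looks like this sketch (hypothetical names, not the real IStorage/TableExclusiveLockHolder API): once a table is detached, taking its exclusive lock fails, so a stale instance cannot be used alongside a newly ATTACH'ed one:

```cpp
#include <cstdio>
#include <mutex>
#include <stdexcept>

struct Storage  // stand-in for IStorage, illustration only
{
    std::mutex mutex;
    bool detached_or_dropped = false;

    std::unique_lock<std::mutex> lockExclusively()
    {
        std::unique_lock<std::mutex> lock(mutex);
        if (detached_or_dropped)  // the same check DROP already relies on
            throw std::runtime_error("Table is detached or dropped");
        return lock;
    }

    void detach()
    {
        std::lock_guard<std::mutex> lock(mutex);
        detached_or_dropped = true;  // later lock attempts now fail
    }
};

int main()
{
    Storage table;
    table.detach();
    try
    {
        auto lock = table.lockExclusively();
    }
    catch (const std::exception & e)
    {
        std::printf("%s\n", e.what());  // user after DETACH gets an error, not a stale instance
    }
}
```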

* Docs: Fix weird formatting

* Formatting fixup

* Docs: Fix link to writing guide

* Performance report: "Partial queries" --> "Backward-incompatible queries"

* Updated backup/restore status when concurrent backups & restores are not allowed
Implementation:
* Moved the concurrent backup/restore check inside the try-catch block which sets the status, so that other nodes in the cluster are aware of failures.
* Renamed backup_uuid to restore_uuid in RestoreSettings.
Testing:
* Updated the test test_backup_and_restore_on_cluster/test_disallow_concurrency to check for a specific backup/restore id.

* Update report.py

* Fix stress test

* fix race in destructor of ParallelParsingInputFormat

* add fields to table system.formats

* Moved settings inside backups section - Updated backup/restore status when concurrent backups & restores are not allowed

* Fix a race between Distributed table creation and INSERT into it

Initializing the queues for pending on-disk files for async INSERT cannot
be done after the table has been attached and made visible to the user,
since it initializes the per-table counter that is used during INSERT.

Otherwise there is a window when this counter is not initialized, so it
will start from the beginning, and this can lead to a CANNOT_LINK error:

    Destination file /data/clickhouse/data/urls_v1/urls_in/shard6_replica1/13129817.bin is already exist and have different inode

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
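
A sketch of the ordering fix (hypothetical names, not the real StorageDistributed code): restore the per-directory file counter from disk during construction, and only then publish the table:

```cpp
#include <algorithm>
#include <atomic>
#include <cstdint>
#include <filesystem>
#include <map>
#include <memory>
#include <string>

// Hypothetical sketch. The counter that numbers pending .bin files must be
// restored from disk before the table becomes visible; otherwise a
// concurrent INSERT numbers files from 1 and collides with existing ones
// (the CANNOT_LINK error above).
struct DistributedQueue
{
    std::atomic<uint64_t> file_counter{0};

    explicit DistributedQueue(const std::filesystem::path & dir)
    {
        uint64_t max_seen = 0;  // error handling for odd file names omitted
        for (const auto & entry : std::filesystem::directory_iterator(dir))
            if (entry.path().extension() == ".bin")
                max_seen = std::max<uint64_t>(max_seen, std::stoull(entry.path().stem().string()));
        file_counter = max_seen;
    }

    uint64_t nextFileIndex() { return ++file_counter; }
};

std::map<std::string, std::shared_ptr<DistributedQueue>> tables;

void attachTable(const std::string & name, const std::filesystem::path & dir)
{
    auto queue = std::make_shared<DistributedQueue>(dir);  // counter restored first
    tables[name] = queue;                                  // only now visible to INSERTs
}
```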

* Update type-conversion-functions.md

* Improve logging for TeePopen.timeout exceeded

* fix test

* fixing join data was released

* Update column.md

* Update column.md

* Update column.md

* Fix special build

* Better comment

* Fix MSan build

* Update test.py

* Fix typo

* Fix and add a test

* remove JSON

* Review fixes

* Update src/Interpreters/GinFilter.h

Co-authored-by: Sergei Trifonov <sergei@clickhouse.com>

* Update src/Storages/MergeTree/MergeTreeIndexInverted.cpp

Co-authored-by: Sergei Trifonov <sergei@clickhouse.com>

* Update src/Storages/MergeTree/MergeTreeIndexInverted.cpp

Co-authored-by: Sergei Trifonov <sergei@clickhouse.com>

* Fix endian issue in transform function for s390x

* Fix cache policy getter

* Better formatting for exception messages (ClickHouse#45449)

* save format string for NetException

* format exceptions

* format exceptions 2

* format exceptions 3

* format exceptions 4

* format exceptions 5

* format exceptions 6

* fix

* format exceptions 7

* format exceptions 8

* Update MergeTreeIndexGin.cpp

* Update AggregateFunctionMap.cpp

* Update AggregateFunctionMap.cpp

* fix

* add docs for PR 33302

* impl (ClickHouse#45289)

* Refine the solution

* Update formats.md

Google has a new website for Protocol Buffers. The old link expires on Jan 31, 2023

* Add DISTINCT to INTERSECT and EXCEPT

* Fix the build with ENABLE_VECTORSCAN disabled

* Add unit test for recursive checkpoints

* Moved concurrency checks inside functions - Updated backup/restore status when concurrent backups & restores are not allowed

* Update StorageReplicatedMergeTree.cpp

* Revert "Merge pull request ClickHouse#45493 from azat/fix-detach"

This reverts commit a182a6b, reversing
changes made to c47a29a.

* Update stress

* Disable the optimization to avoid sort.xml perf test fail in other PRs

* Ignore utf errors in clickhouse-test reportLogStats

* WIP

* Move fsync inside transaction callback in DataPartStorageOnDisk::rename()

Otherwise, it is useless.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>

* Do fsync all files at once for fetched parts to decrease latency

For filesystems like ext4, an fsync of one file also handles all
operations journaled before it, so it can be pretty time consuming.

And if you write multiple files in a loop, syncing each file at the end
of its iteration, then while each file is being written other operations
can land in the journal, and hence each fsync has more work to do.

Let's call fsync for all files at once instead, like
MergedBlockOutputStream does.

Hopefully keeping all file buffers open till the end will not cause
trouble (buffering and so forth).

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
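
A minimal sketch of the batching idea (hypothetical helper with error handling omitted; not the MergedBlockOutputStream code): write all files while keeping the descriptors open, then fsync them in one pass at the end:

```cpp
#include <fcntl.h>
#include <unistd.h>

#include <string>
#include <utility>
#include <vector>

// Write every file first, keeping descriptors open, then fsync once per
// file at the end, so later writes do not inflate the journal work that
// each earlier fsync would otherwise have to flush.
void writeAndSyncAll(const std::vector<std::pair<std::string, std::string>> & files)
{
    std::vector<int> fds;
    for (const auto & [path, data] : files)
    {
        int fd = ::open(path.c_str(), O_WRONLY | O_CREAT | O_TRUNC, 0644);
        ::write(fd, data.data(), data.size());  // error handling omitted for brevity
        fds.push_back(fd);                      // keep open, do not fsync yet
    }
    for (int fd : fds)  // single fsync pass at the end
    {
        ::fsync(fd);
        ::close(fd);
    }
}
```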

* Fsync all small files at once after mutation

Everything else is handled in MergedBlockOutputStream::finalizePart()

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>

* Revert "Revert "Merge pull request ClickHouse#45493 from azat/fix-detach""

This reverts commit 9dc4f02.

* fix

* fix try_shared_lock() in SharedMutex and CancelableSharedMutex

* Update merge-tree-settings.md

* Use ProfileEvents::S3CopyObject again.

* Remove FuseSumCountAggregatesVisitor

* fix

* Fix tests

* Addressed review comments and renamed function to hasConcurrentBackups/Restores - Updated backup/restore status when concurrent backups & restores are not allowed

* Update docs/en/operations/settings/merge-tree-settings.md

Co-authored-by: Alexander Tokmakov <tavplubix@clickhouse.com>

* Remove tests with optimize_fuse_sum_count_avg

* Make retries in copyS3File() more correct.

* Fix cleanup in tests test_replicated_merge_tree_s3_restore.

* Docs: mini semicolon fix

* Document start of week in function date_diff()

* Fix report sending in case of FastTest failure

* split Format settings out

* add new settings for s3 and hdfs

* fix note formatting

* Review suggestions

* Additional check in MergeTreeReadPool (ClickHouse#45515)

* Check ranges

* Check equality just in case

* Check under ndebug

* Fix typo

* Update tests/ci/fast_test_check.py

* Apply suggestions from code review

* update for split of format settings

* Typo: "Granulesis" --> "Granules"

* Docs: fix docs of EXPLAIN PLAN indexes=1

* add PARTITION BY to s3 and hdfs docs

* add PARTITION BY to file and url docs

* Create mongodb.md

* Update mongodb.md

* Fix version in autogenerated_versions.txt

* Added two metrics about memory usage in cgroup to asynchronous metrics (ClickHouse#45301)

* Update version to 23.1.2.1

* Backport ClickHouse#45636 to 23.1: Trim refs/tags/ from GITHUB_TAG in release workflow

* Backport ClickHouse#45603 to 23.1: Fix wiping sensitive info in logs

* Backport ClickHouse#45630 to 23.1: Fix performance of short queries with `Array` columns

* Backport ClickHouse#45686 to 23.1: Fix key description when encountering duplicate primary keys

* Update version to 23.1.3.1

* Backport ClickHouse#45818 to 23.1: Get rid of progress timestamps in release publishing

* Backport ClickHouse#45871 to 23.1: Fix ipv6 parser

* ignore warning

* fix problems

* fix problems

---------

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
Co-authored-by: Robert Schulze <robert@clickhouse.com>
Co-authored-by: Maksim Kita <kitaetoya@gmail.com>
Co-authored-by: Kseniia Sumarokova <54203879+kssenii@users.noreply.github.com>
Co-authored-by: Anton Popov <anton@clickhouse.com>
Co-authored-by: Maksim Kita <maksim@clickhouse.com>
Co-authored-by: Nikolai Kochetov <nik-kochetov@yandex-team.ru>
Co-authored-by: Azat Khuzhin <a3at.mail@gmail.com>
Co-authored-by: kssenii <sumarokovakseniia@mail.ru>
Co-authored-by: Sema Checherinda <104093494+CheSema@users.noreply.github.com>
Co-authored-by: Mikhail f. Shiryaev <felixoid@clickhouse.com>
Co-authored-by: Han Fei <hanfei19910905@gmail.com>
Co-authored-by: Kruglov Pavel <48961922+Avogar@users.noreply.github.com>
Co-authored-by: Alexander Tokmakov <tavplubix@clickhouse.com>
Co-authored-by: Alexander Gololobov <440544+davenger@users.noreply.github.com>
Co-authored-by: Antonio Andelic <antonio2368@users.noreply.github.com>
Co-authored-by: avogar <pav.cruglov@yandex.ru>
Co-authored-by: Nikolay Degterinsky <evillique@gmail.com>
Co-authored-by: ltrk2 <107155950+ltrk2@users.noreply.github.com>
Co-authored-by: Nikita Mikhaylov <nikitamikhaylov@clickhouse.com>
Co-authored-by: vdimir <vdimir@clickhouse.com>
Co-authored-by: robot-ch-test-poll4 <69306974+robot-ch-test-poll4@users.noreply.github.com>
Co-authored-by: Dan Roscigno <dan@roscigno.com>
Co-authored-by: Nikolai Kochetov <KochetovNicolai@users.noreply.github.com>
Co-authored-by: Vitaly Baranov <vitlibar@yandex.ru>
Co-authored-by: Vitaly Baranov <vitlibar@clickhouse.com>
Co-authored-by: Nikolay Degterinsky <43110995+evillique@users.noreply.github.com>
Co-authored-by: Smita Kulkarni <Smita.Kulkarni@clickhouse.com>
Co-authored-by: Sergei Trifonov <sergei@clickhouse.com>
Co-authored-by: Denny Crane <denis.zhuravlov@gmail.com>
Co-authored-by: Dale Mcdiarmid <dale@clickhouse.com>
Co-authored-by: HarryLeeIBM <Harry.Lee@ibm.com>
Co-authored-by: Alexey Milovidov <milovidov@clickhouse.com>
Co-authored-by: Nikita Taranov <nikita.taranov@clickhouse.com>
Co-authored-by: Rich Raposa <richraposa@gmail.com>
Co-authored-by: Igor Nikonov <igor@clickhouse.com>
Co-authored-by: SmitaRKulkarni <64093672+SmitaRKulkarni@users.noreply.github.com>
Co-authored-by: Igor Nikonov <954088+devcrafter@users.noreply.github.com>
Co-authored-by: robot-ch-test-poll1 <47390204+robot-ch-test-poll1@users.noreply.github.com>
Co-authored-by: Dmitry Novik <n0vik@clickhouse.com>
Co-authored-by: sichenzhao <sichen.zhao@clickhouse.com>
Co-authored-by: robot-clickhouse <robot-clickhouse@users.noreply.github.com>
Co-authored-by: alesapin <alesapin@gmail.com>
Labels: pr-backports-created, pr-bugfix, v23.1-must-backport