Binding variables for lists do not work #9

Closed
nickhristov opened this issue Dec 22, 2017 · 2 comments
@nickhristov

I have the following code:

@RequestMapping(value = "/queryByIdAndDate" , method = POST)
public @ResponseBody String isoDateFormat(@RequestBody QueryTestRequest testRequest) {
     ImmutableMap<String, Object> parameters = ImmutableMap.of("ldids", ImmutableList.of(testRequest.getLdId()));
     Statement statement = new SimpleStatement("SELECT * FROM location_data.location_data WHERE ld_id IN :ldids", parameters);
     ResultSet resultSet = session.execute(statement); 
     List<Row> rows = resultSet.all();
     return String.valueOf(rows.size());
}

And kicking it off with:

curl -H 'Content-Type: application/json' -vvv --data '{"ldId": "0a44ea7e-2b26-40ff-b486-7b86c0853a08"}' http://localhost/api/locationUpdates/queryByIdAndDate

I get 0 as a response.

The problem is that the data does exist in the table:

cqlsh:location_data> SELECT ld_id FROM location_data.location_data WHERE ld_id = 0a44ea7e-2b26-40ff-b486-7b86c0853a08 LIMIT 1;

 ld_id
--------------------------------------
 0a44ea7e-2b26-40ff-b486-7b86c0853a08

(1 rows)

Now, if I embed the list values directly in the statement instead of using bind variables, things work:

@RequestMapping(value = "/queryByIdAndDate" , method = POST)
public @ResponseBody String isoDateFormat(@RequestBody QueryTestRequest testRequest) {
	ImmutableMap<String, Object> parameters = ImmutableMap.of();
	String ldIdFragment = Joiner.on(",").join(ImmutableList.of(testRequest.getLdId()));
	Statement statement = new SimpleStatement("SELECT * FROM location_data.location_data WHERE ld_id IN ( " + ldIdFragment + " )", parameters);
	ResultSet resultSet = session.execute(statement);
	List<Row> rows = resultSet.all();
	return String.valueOf(rows.size());
}

With this code I now get 14 as a result, which matches the query in cqlsh:

cqlsh:location_data> SELECT ld_id FROM location_data.location_data WHERE ld_id = 0a44ea7e-2b26-40ff-b486-7b86c0853a08;

 ld_id
--------------------------------------
 0a44ea7e-2b26-40ff-b486-7b86c0853a08
 0a44ea7e-2b26-40ff-b486-7b86c0853a08
 0a44ea7e-2b26-40ff-b486-7b86c0853a08
 0a44ea7e-2b26-40ff-b486-7b86c0853a08
 0a44ea7e-2b26-40ff-b486-7b86c0853a08
 0a44ea7e-2b26-40ff-b486-7b86c0853a08
 0a44ea7e-2b26-40ff-b486-7b86c0853a08
 0a44ea7e-2b26-40ff-b486-7b86c0853a08
 0a44ea7e-2b26-40ff-b486-7b86c0853a08
 0a44ea7e-2b26-40ff-b486-7b86c0853a08
 0a44ea7e-2b26-40ff-b486-7b86c0853a08
 0a44ea7e-2b26-40ff-b486-7b86c0853a08
 0a44ea7e-2b26-40ff-b486-7b86c0853a08
 0a44ea7e-2b26-40ff-b486-7b86c0853a08

(14 rows)
@robertpang
Contributor

@nickhristov Thank you for reporting the issue. There was a small glitch in resolving bind variables for the list type, and we have just fixed it.
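
For anyone verifying the fix, here is a minimal sketch of the two bind-variable forms that should now return the same count as the inlined query above. It assumes the DataStax 3.x-style Java driver API that the original snippet already uses (`SimpleStatement` with named values, plus `PreparedStatement`/`BoundStatement`), a `uuid`-typed `ld_id` column, and a cluster reachable on `127.0.0.1:9042`; the class name `CheckListBinding` is illustrative only.

```java
import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

import java.util.Collections;
import java.util.List;
import java.util.UUID;

public class CheckListBinding {
    public static void main(String[] args) {
        // Assumes a locally running cluster with the YCQL port on 9042.
        try (Cluster cluster = Cluster.builder()
                 .addContactPoint("127.0.0.1").withPort(9042).build();
             Session session = cluster.connect()) {

            List<UUID> ldIds = Collections.singletonList(
                UUID.fromString("0a44ea7e-2b26-40ff-b486-7b86c0853a08"));

            // Form 1: SimpleStatement with a named value, as in the original report.
            SimpleStatement simple = new SimpleStatement(
                "SELECT * FROM location_data.location_data WHERE ld_id IN :ldids",
                Collections.<String, Object>singletonMap("ldids", ldIds));
            System.out.println("simple: " + session.execute(simple).all().size());

            // Form 2: prepared statement with an explicitly typed list binding.
            PreparedStatement prepared = session.prepare(
                "SELECT * FROM location_data.location_data WHERE ld_id IN :ldids");
            BoundStatement bound = prepared.bind().setList("ldids", ldIds);
            ResultSet rs = session.execute(bound);
            System.out.println("prepared: " + rs.all().size());
        }
    }
}
```

Binding the list this way also avoids assembling the IN clause by string concatenation, as the workaround above has to do.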

@kmuthukk
Collaborator

kmuthukk commented Jan 8, 2018

Relevant commit: 0365a89

yugabyte-ci pushed a commit that referenced this issue Feb 2, 2018
… memtable

Summary:
There was a crash during one of our performance integration tests that was caused by Frontiers() not being set on a memtable. That could only possibly happen if the memtable is empty, and it is still not clear how an empty memtable could get into the list of immutable memtables. Regardless of that, instead of crashing, we should just flush that memtable and log an error message.

```
#0  operator() (memtable=..., __closure=0x7f2e454b67b0) at ../../../../../src/yb/tablet/tablet_peer.cc:178
#1  std::_Function_handler<bool(const rocksdb::MemTable&), yb::tablet::TabletPeer::InitTabletPeer(const std::shared_ptr<yb::tablet::enterprise::Tablet>&, const std::shared_future<std::shared_ptr<yb::client::YBClient> >&, const scoped_refptr<yb::server::Clock>&, const std::shared_ptr<yb::rpc::Messenger>&, const scoped_refptr<yb::log::Log>&, const scoped_refptr<yb::MetricEntity>&, yb::ThreadPool*)::<lambda()>::<lambda(const rocksdb::MemTable&)> >::_M_invoke(const std::_Any_data &, const rocksdb::MemTable &) (__functor=..., __args#0=...)  at /n/jenkins/linuxbrew/linuxbrew_2018-01-09T08_28_02/Cellar/gcc/5.5.0/include/c++/5.5.0/functional:1857
#2  0x00007f2f7346a70e in operator() (__args#0=..., this=0x7f2e454b67b0) at /n/jenkins/linuxbrew/linuxbrew_2018-01-09T08_28_02/Cellar/gcc/5.5.0/include/c++/5.5.0/functional:2267
#3  rocksdb::MemTableList::PickMemtablesToFlush(rocksdb::autovector<rocksdb::MemTable*, 8ul>*, std::function<bool (rocksdb::MemTable const&)> const&) (this=0x7d02978, ret=ret@entry=0x7f2e454b6370, filter=...)
    at ../../../../../src/yb/rocksdb/db/memtable_list.cc:259
#4  0x00007f2f7345517f in rocksdb::FlushJob::Run (this=this@entry=0x7f2e454b6750, file_meta=file_meta@entry=0x7f2e454b68d0) at ../../../../../src/yb/rocksdb/db/flush_job.cc:143
#5  0x00007f2f7341b7c3 in rocksdb::DBImpl::FlushMemTableToOutputFile (this=this@entry=0x89d2400, cfd=cfd@entry=0x7d02300, mutable_cf_options=..., made_progress=made_progress@entry=0x7f2e454b709e,
    job_context=job_context@entry=0x7f2e454b70b0, log_buffer=0x7f2e454b7280) at ../../../../../src/yb/rocksdb/db/db_impl.cc:1586
#6  0x00007f2f7341c19f in rocksdb::DBImpl::BackgroundFlush (this=this@entry=0x89d2400, made_progress=made_progress@entry=0x7f2e454b709e, job_context=job_context@entry=0x7f2e454b70b0,
    log_buffer=log_buffer@entry=0x7f2e454b7280) at ../../../../../src/yb/rocksdb/db/db_impl.cc:2816
#7  0x00007f2f7342539b in rocksdb::DBImpl::BackgroundCallFlush (this=0x89d2400) at ../../../../../src/yb/rocksdb/db/db_impl.cc:2838
#8  0x00007f2f735154c3 in rocksdb::ThreadPool::BGThread (this=0x3b0bb20, thread_id=0) at ../../../../../src/yb/rocksdb/util/thread_posix.cc:133
#9  0x00007f2f73515558 in rocksdb::BGThreadWrapper (arg=0xd970a20) at ../../../../../src/yb/rocksdb/util/thread_posix.cc:157
#10 0x00007f2f6c964694 in start_thread (arg=0x7f2e454b8700) at pthread_create.c:333
```

Test Plan: Jenkins

Reviewers: hector, sergei

Reviewed By: hector, sergei

Subscribers: sergei, bogdan, bharat, ybase

Differential Revision: https://phabricator.dev.yugabyte.com/D4044
mbautin pushed a commit that referenced this issue Jun 20, 2019
#7 - Add Forum link to 'Have a technical question' footer
mbautin added a commit that referenced this issue Jul 11, 2019
…ed to the

earlier commit 864e72b

Original commit message:

ENG-2793 Do not fail when deciding if we can flush an empty immutable memtable

Summary:
There was a crash during one of our performance integration tests that was caused by Frontiers() not being set on a memtable. That could only possibly happen if the memtable is empty, and it is still not clear how an empty memtable could get into the list of immutable memtables. Regardless of that, instead of crashing, we should just flush that memtable and log an error message.

```
#0  operator() (memtable=..., __closure=0x7f2e454b67b0) at ../../../../../src/yb/tablet/tablet_peer.cc:178
#1  std::_Function_handler<bool(const rocksdb::MemTable&), yb::tablet::TabletPeer::InitTabletPeer(const std::shared_ptr<yb::tablet::enterprise::Tablet>&, const std::shared_future<std::shared_ptr<yb::client::YBClient> >&, const scoped_refptr<yb::server::Clock>&, const std::shared_ptr<yb::rpc::Messenger>&, const scoped_refptr<yb::log::Log>&, const scoped_refptr<yb::MetricEntity>&, yb::ThreadPool*)::<lambda()>::<lambda(const rocksdb::MemTable&)> >::_M_invoke(const std::_Any_data &, const rocksdb::MemTable &) (__functor=..., __args#0=...)  at /n/jenkins/linuxbrew/linuxbrew_2018-01-09T08_28_02/Cellar/gcc/5.5.0/include/c++/5.5.0/functional:1857
#2  0x00007f2f7346a70e in operator() (__args#0=..., this=0x7f2e454b67b0) at /n/jenkins/linuxbrew/linuxbrew_2018-01-09T08_28_02/Cellar/gcc/5.5.0/include/c++/5.5.0/functional:2267
#3  rocksdb::MemTableList::PickMemtablesToFlush(rocksdb::autovector<rocksdb::MemTable*, 8ul>*, std::function<bool (rocksdb::MemTable const&)> const&) (this=0x7d02978, ret=ret@entry=0x7f2e454b6370, filter=...)
    at ../../../../../src/yb/rocksdb/db/memtable_list.cc:259
#4  0x00007f2f7345517f in rocksdb::FlushJob::Run (this=this@entry=0x7f2e454b6750, file_meta=file_meta@entry=0x7f2e454b68d0) at ../../../../../src/yb/rocksdb/db/flush_job.cc:143
#5  0x00007f2f7341b7c3 in rocksdb::DBImpl::FlushMemTableToOutputFile (this=this@entry=0x89d2400, cfd=cfd@entry=0x7d02300, mutable_cf_options=..., made_progress=made_progress@entry=0x7f2e454b709e,
    job_context=job_context@entry=0x7f2e454b70b0, log_buffer=0x7f2e454b7280) at ../../../../../src/yb/rocksdb/db/db_impl.cc:1586
#6  0x00007f2f7341c19f in rocksdb::DBImpl::BackgroundFlush (this=this@entry=0x89d2400, made_progress=made_progress@entry=0x7f2e454b709e, job_context=job_context@entry=0x7f2e454b70b0,
    log_buffer=log_buffer@entry=0x7f2e454b7280) at ../../../../../src/yb/rocksdb/db/db_impl.cc:2816
#7  0x00007f2f7342539b in rocksdb::DBImpl::BackgroundCallFlush (this=0x89d2400) at ../../../../../src/yb/rocksdb/db/db_impl.cc:2838
#8  0x00007f2f735154c3 in rocksdb::ThreadPool::BGThread (this=0x3b0bb20, thread_id=0) at ../../../../../src/yb/rocksdb/util/thread_posix.cc:133
#9  0x00007f2f73515558 in rocksdb::BGThreadWrapper (arg=0xd970a20) at ../../../../../src/yb/rocksdb/util/thread_posix.cc:157
#10 0x00007f2f6c964694 in start_thread (arg=0x7f2e454b8700) at pthread_create.c:333
```

Test Plan: Jenkins

Reviewers: hector, sergei

Reviewed By: hector, sergei

Subscribers: sergei, bogdan, bharat, ybase

Differential Revision: https://phabricator.dev.yugabyte.com/D4044
mbautin added a commit to mbautin/yugabyte-db that referenced this issue Jul 16, 2019
… memtable

Summary:
There was a crash during one of our performance integration tests that was caused by Frontiers() not being set on a memtable. That could only possibly happen if the memtable is empty, and it is still not clear how an empty memtable could get into the list of immutable memtables. Regardless of that, instead of crashing, we should just flush that memtable and log an error message.

```
#0  operator() (memtable=..., __closure=0x7f2e454b67b0) at ../../../../../src/yb/tablet/tablet_peer.cc:178
yugabyte#1  std::_Function_handler<bool(const rocksdb::MemTable&), yb::tablet::TabletPeer::InitTabletPeer(const std::shared_ptr<yb::tablet::enterprise::Tablet>&, const std::shared_future<std::shared_ptr<yb::client::YBClient> >&, const scoped_refptr<yb::server::Clock>&, const std::shared_ptr<yb::rpc::Messenger>&, const scoped_refptr<yb::log::Log>&, const scoped_refptr<yb::MetricEntity>&, yb::ThreadPool*)::<lambda()>::<lambda(const rocksdb::MemTable&)> >::_M_invoke(const std::_Any_data &, const rocksdb::MemTable &) (__functor=..., __args#0=...)  at /n/jenkins/linuxbrew/linuxbrew_2018-01-09T08_28_02/Cellar/gcc/5.5.0/include/c++/5.5.0/functional:1857
yugabyte#2  0x00007f2f7346a70e in operator() (__args#0=..., this=0x7f2e454b67b0) at /n/jenkins/linuxbrew/linuxbrew_2018-01-09T08_28_02/Cellar/gcc/5.5.0/include/c++/5.5.0/functional:2267
yugabyte#3  rocksdb::MemTableList::PickMemtablesToFlush(rocksdb::autovector<rocksdb::MemTable*, 8ul>*, std::function<bool (rocksdb::MemTable const&)> const&) (this=0x7d02978, ret=ret@entry=0x7f2e454b6370, filter=...)
    at ../../../../../src/yb/rocksdb/db/memtable_list.cc:259
yugabyte#4  0x00007f2f7345517f in rocksdb::FlushJob::Run (this=this@entry=0x7f2e454b6750, file_meta=file_meta@entry=0x7f2e454b68d0) at ../../../../../src/yb/rocksdb/db/flush_job.cc:143
yugabyte#5  0x00007f2f7341b7c3 in rocksdb::DBImpl::FlushMemTableToOutputFile (this=this@entry=0x89d2400, cfd=cfd@entry=0x7d02300, mutable_cf_options=..., made_progress=made_progress@entry=0x7f2e454b709e,
    job_context=job_context@entry=0x7f2e454b70b0, log_buffer=0x7f2e454b7280) at ../../../../../src/yb/rocksdb/db/db_impl.cc:1586
yugabyte#6  0x00007f2f7341c19f in rocksdb::DBImpl::BackgroundFlush (this=this@entry=0x89d2400, made_progress=made_progress@entry=0x7f2e454b709e, job_context=job_context@entry=0x7f2e454b70b0,
    log_buffer=log_buffer@entry=0x7f2e454b7280) at ../../../../../src/yb/rocksdb/db/db_impl.cc:2816
yugabyte#7  0x00007f2f7342539b in rocksdb::DBImpl::BackgroundCallFlush (this=0x89d2400) at ../../../../../src/yb/rocksdb/db/db_impl.cc:2838
yugabyte#8  0x00007f2f735154c3 in rocksdb::ThreadPool::BGThread (this=0x3b0bb20, thread_id=0) at ../../../../../src/yb/rocksdb/util/thread_posix.cc:133
yugabyte#9  0x00007f2f73515558 in rocksdb::BGThreadWrapper (arg=0xd970a20) at ../../../../../src/yb/rocksdb/util/thread_posix.cc:157
yugabyte#10 0x00007f2f6c964694 in start_thread (arg=0x7f2e454b8700) at pthread_create.c:333
```

Test Plan: Jenkins

Reviewers: hector, sergei

Reviewed By: hector, sergei

Subscribers: sergei, bogdan, bharat, ybase

Differential Revision: https://phabricator.dev.yugabyte.com/D4044

Note:
This commit provides additional functionality that is logically related to
the earlier commit yugabyte@864e72b
and supersedes the commit yugabyte@2932b0a
mbautin added a commit that referenced this issue Jan 3, 2020
…up on macOS

Summary:
Add a DNS lookup of the local hostname to postmaster startup to force
macOS network libraries to get initialized before any fork() calls
happen.  This fixes failures of ~20 tests in macOS debug mode. Without
this, PostgreSQL backends would frequently crash with SIGSEGV and dump
cores when trying to do the same DNS lookup.

Here is a SIGSEGV stack trace that we would previously get without this
fix:

```
frame #0: 0x00007fff7bd53e34 libsystem_trace.dylib`_os_log_cmp_key + 4
frame #1: 0x00007fff7bbfcb74 libsystem_c.dylib`rb_tree_find_node + 53
frame #2: 0x00007fff7bd52021 libsystem_trace.dylib`os_log_create + 368
frame #3: 0x00007fff7bc5b127 libsystem_info.dylib`gai_log_init + 23
frame #4: 0x00007fff7bd37ce3 libsystem_pthread.dylib`__pthread_once_handler + 65
frame #5: 0x00007fff7bd2daab libsystem_platform.dylib`_os_once_callout + 18
frame #6: 0x00007fff7bd37c7f libsystem_pthread.dylib`pthread_once + 56
frame #7: 0x00007fff7bc5a4ab libsystem_info.dylib`gai_log + 27
frame #8: 0x00007fff7bc5b33f libsystem_info.dylib`_gai_load_libnetwork_once + 63
frame #9: 0x00007fff7bd37ce3 libsystem_pthread.dylib`__pthread_once_handler + 65
frame #10: 0x00007fff7bd2daab libsystem_platform.dylib`_os_once_callout + 18
frame #11: 0x00007fff7bd37c7f libsystem_pthread.dylib`pthread_once + 56
frame #12: 0x00007fff7bc5b29b libsystem_info.dylib`_gai_load_libnetwork + 27
frame #13: 0x00007fff7bc5b64f libsystem_info.dylib`_gai_nat64_v4_address_requires_synthesis + 31
frame #14: 0x00007fff7bc5aaa0 libsystem_info.dylib`_gai_nat64_second_pass + 512
frame #15: 0x00007fff7bc39847 libsystem_info.dylib`si_addrinfo + 1959
frame #16: 0x00007fff7bc38f77 libsystem_info.dylib`_getaddrinfo_internal + 231
frame #17: 0x00007fff7bc38e7d libsystem_info.dylib`getaddrinfo + 61
frame #18: 0x000000011512f8e5 libyb_util.dylib`yb::GetFQDN(hostname="...") at net_util.cc:371:20
```

Test Plan: Jenkins

Reviewers: mihnea, dmitry

Reviewed By: dmitry

Subscribers: yql

Differential Revision: https://phabricator.dev.yugabyte.com/D7757
hectorgcr added a commit that referenced this issue Sep 5, 2020
Summary:
This reverts commit 4c0a2fe.

There is currently an issue with the new clock introduced with this diff that causes a crash in some scenarios when the metric is queried.
```
#0  Now (this=0x742f680192a46a02) at ../../src/yb/common/clock.h:27
#1  lag_ms (this=0x5dadc40) at ../../src/yb/util/metrics.h:1355
#2  yb::AtomicMillisLag::WriteForPrometheus (this=0x5dadc40, writer=0x7fbdc2f84bb0, attr=..., opts=...) at ../../src/yb/util/metrics.h:1374
#3  0x00007fbdedc2ab83 in yb::MetricEntity::WriteForPrometheus (this=<optimized out>, writer=writer@entry=0x7fbdc2f84bb0, opts=...) at ../../src/yb/util/metrics.cc:351
#4  0x00007fbdedc2cf05 in yb::MetricRegistry::WriteForPrometheus (this=this@entry=0x1a70a80, writer=writer@entry=0x7fbdc2f84bb0, opts=...) at ../../src/yb/util/metrics.cc:491
#5  0x00007fbdf2cfe6d0 in yb::(anonymous namespace)::WriteForPrometheus (metrics=0x1a70a80, req=..., resp=0x7fbdc2f84de0) at ../../src/yb/server/default-path-handlers.cc:278
#6  0x00007fbdf2d2d95c in operator() (__args#1=0x7fbdc2f84de0, __args#0=..., this=<optimized out>)
    at /home/yugabyte/yb-software/yugabyte-2.3.0.0-b88-centos-x86_64/linuxbrew-xxxxxxxxxxxxx/Cellar/gcc/5.5.0_4/include/c++/5.5.0/functional:2267
#7  yb::Webserver::RunPathHandler (this=this@entry=0x1d64000, handler=..., connection=connection@entry=0x7d88000, request_info=request_info@entry=0x7d88000) at ../../src/yb/server/webserver.cc:423
#8  0x00007fbdf2d2e5ea in yb::Webserver::BeginRequestCallback (this=0x1d64000, connection=0x7d88000, request_info=0x7d88000) at ../../src/yb/server/webserver.cc:360
#9  0x00007fbdf2d438f6 in handle_request () from /home/yugabyte/yb-software/yugabyte-2.3.0.0-b88-centos-x86_64/lib/yb/libserver_process.so
#10 0x00007fbdf2d464de in worker_thread () from /home/yugabyte/yb-software/yugabyte-2.3.0.0-b88-centos-x86_64/lib/yb/libserver_process.so
#11 0x00007fbde84c5694 in start_thread (arg=0x7fbdc2f8f700) at pthread_create.c:333
#12 0x00007fbde7c0241d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
```

Test Plan: Build and unit tests

Reviewers: bogdan, amitanand, kannan

Reviewed By: kannan

Subscribers: kannan, ybase

Differential Revision: https://phabricator.dev.yugabyte.com/D9322
ttyusupov added a commit that referenced this issue Oct 5, 2020
Summary:
```
WARNING: ThreadSanitizer: data race (pid=11311)
1762	   Read of size 8 at 0x7b74000cfb58 by thread T155:
1763	     #0 std::__1::unique_ptr<rocksdb::DB, std::__1::default_delete<rocksdb::DB> >::operator bool() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20200829090443-f431681041-centos/installed/tsan/libcxx/include/c++/v1/memory:2619:19 (libtablet.so+0x211067)
1764	     #1 yb::tablet::Tablet::Flush(yb::tablet::FlushMode, yb::tablet::FlushFlags, long) src/yb/tablet/tablet.cc:1847 (libtablet.so+0x211067)
1765	     #2 yb::tserver::MiniTabletServer::FlushTablets(yb::tablet::FlushMode, yb::tablet::FlushFlags)::$_2::operator()(yb::tablet::TabletPeer*) const src/yb/tserver/mini_tablet_server.cc:200:35 (libtserver.so+0x1a0997)
...
1771	     #8 yb::tserver::MiniTabletServer::FlushTablets(yb::tablet::FlushMode, yb::tablet::FlushFlags) src/yb/tserver/mini_tablet_server.cc:196:10 (libtserver.so+0x19f4e8)
1772	     #9 yb::MiniCluster::FlushTablets(yb::tablet::FlushMode, yb::tablet::FlushFlags) src/yb/integration-tests/mini_cluster.cc:369:5 (libintegration-tests.so+0x10cf88)
1773	     #10 yb::client::QLStressTest_LongRemoteBootstrap_Test::TestBody()::$_8::operator()() const src/yb/client/ql-stress-test.cc:972:7 (ql-stress-test+0x4f2f1d)

Previous write of size 8 at 0x7b74000cfb58 by thread T49 (mutexes: write M263877791325157728):
1780	     #0 std::__1::unique_ptr<rocksdb::DB, std::__1::default_delete<rocksdb::DB> >::reset(rocksdb::DB*) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20200829090443-f431681041-centos/installed/tsan/libcxx/include/c++/v1/memory:2632:20 (libtablet.so+0x209c9c)
1781	     #1 yb::tablet::ResetRocksDB(bool, rocksdb::Options const&, std::__1::unique_ptr<rocksdb::DB, std::__1::default_delete<rocksdb::DB> >*) src/yb/tablet/tablet.cc:950 (libtablet.so+0x209c9c)
1782	     #2 yb::tablet::Tablet::ResetRocksDBs(yb::StronglyTypedBool<yb::tablet::Destroy_Tag>, yb::StronglyTypedBool<yb::tablet::DisableFlushOnShutdown_Tag>) src/yb/tablet/tablet.cc:967:27 (libtablet.so+0x209b3b)
1783	     #3 yb::tablet::Tablet::CompleteShutdown(yb::StronglyTypedBool<yb::tablet::IsDropTable_Tag>) src/yb/tablet/tablet.cc:931:3 (libtablet.so+0x205bac)
1784	     #4 yb::tablet::TabletPeer::CompleteShutdown(yb::StronglyTypedBool<yb::tablet::IsDropTable_Tag>) src/yb/tablet/tablet_peer.cc:475:14 (libtablet.so+0x287f71)
1785	     #5 yb::tablet::TabletPeer::Shutdown(yb::StronglyTypedBool<yb::tablet::IsDropTable_Tag>) src/yb/tablet/tablet_peer.cc:529:5 (libtablet.so+0x28896d)
1786	     #6 yb::tserver::TSTabletManager::DeleteTablet(string const&, yb::tablet::TabletDataState, boost::optional<long> const&, boost::optional<yb::tserver::TabletServerErrorPB_Code>*) src/yb/tserver/ts_tablet_manager.cc:1297:16 (libtserver.so+0x22d59e)
1787	     #7 yb::tserver::TabletServiceAdminImpl::DeleteTablet(yb::tserver::DeleteTabletRequestPB const*, yb::tserver::DeleteTabletResponsePB*, yb::rpc::RpcContext) src/yb/tserver/tablet_service.cc:1158:41 (libtserver.so+0x1e1de7)
1788	     #8 yb::tserver::TabletServerAdminServiceIf::Handle(std::__1::shared_ptr<yb::rpc::InboundCall>) src/yb/tserver/tserver_admin.service.cc:130:7 (libtserver_admin_proto.so+0x73cb0)
1789	     #9 yb::rpc::ServicePoolImpl::Handle(std::__1::shared_ptr<yb::rpc::InboundCall>) src/yb/rpc/service_pool.cc:262:19 (libyrpc.so+0x20aa57)
1790	     #10 yb::rpc::InboundCall::InboundCallTask::Run() src/yb/rpc/inbound_call.cc:212:13 (libyrpc.so+0x1745ee)
```

Added a missing `ScopedRWOperation` to `Tablet::Flush` to avoid destroying the tablet during a flush.

Test Plan: `ybd --remote --dltp tsan --cxx-test client_ql-stress-test --gtest_filter QLStressTest.LongRemoteBootstrap -n 500 -- -p 1`

Reviewers: bogdan, mikhail

Reviewed By: mikhail

Subscribers: zyu, ybase

Differential Revision: https://phabricator.dev.yugabyte.com/D9541
ttyusupov added a commit that referenced this issue Jul 16, 2021
Summary:
Some time ago we had optimizations disabled for the debug build type, but they were enabled during the fix of #1291: f710367 ( https://phabricator.dev.yugabyte.com/D6660 ). We no longer have the `retryable_rpc_single_call_timeout_ms` flag, and optimizations in debug builds make it harder to investigate issues because stack traces and variables get optimized out, so we can disable these optimizations again to make debugging easier.

Before (note the <optimized out> values, which are not available for debugging):
```
(gdb) bt
#0  0x00007f97990bfa6b in raise (sig=sig@entry=11) at ../sysdeps/unix/sysv/linux/pt-raise.c:35
#1  0x00007f97a45268b9 in AddHash (num_probes=6, total_bits=523776, num_lines=<optimized out>, data=0x2bdc000 "", h=4266458700) at ../../src/yb/rocksdb/util/bloom.cc:66
#2  rocksdb::(anonymous namespace)::FixedSizeFilterBitsBuilder::AddKey (this=<optimized out>, key=...) at ../../src/yb/rocksdb/util/bloom.cc:463
#3  0x00007f97a44f0818 in rocksdb::FixedSizeFilterBlockBuilder::AddKey (this=this@entry=0x1c87a40, key=...) at ../../src/yb/rocksdb/table/fixed_size_filter_block.cc:97
#4  0x00007f97a44f08b0 in rocksdb::FixedSizeFilterBlockBuilder::Add (this=0x1c87a40, key=...) at ../../src/yb/rocksdb/table/fixed_size_filter_block.cc:91
#5  0x00007f97a44cf295 in rocksdb::BlockBasedTableBuilder::Add (this=0x1cd1c00, key=..., value=...) at ../../src/yb/rocksdb/table/block_based_table_builder.cc:468
#6  0x00007f97a439b9a4 in rocksdb::BuildTable (dbname=..., env=0x7f97a48a2c00 <rocksdb::Env::Default()::default_env>, ioptions=..., env_options=..., table_cache=0x1ccf740, iter=0x7f97969858f8, meta=0x7f97969864a0, internal_comparator=
    std::shared_ptr<const rocksdb::InternalKeyComparator> (use count 3, weak count 0) = {...}, int_tbl_prop_collector_factories=std::vector of length 1, capacity 1 = {...}, column_family_id=0,
    snapshots=std::vector of length 0, capacity 0, earliest_write_conflict_snapshot=72057594037927935, compression=rocksdb::kSnappyCompression, compression_opts=..., paranoid_file_checks=false, internal_stats=0x1d40200,
    boundary_values_extractor=0x1cd03f0, io_priority=rocksdb::Env::IO_HIGH, table_properties=0x7f9796987000) at ../../src/yb/rocksdb/db/builder.cc:160
#7  0x00007f97a4444ced in rocksdb::FlushJob::WriteLevel0Table (this=this@entry=0x7f9796986f40, mems=..., edit=0x1d44268, meta=meta@entry=0x7f97969864a0) at ../../src/yb/rocksdb/db/flush_job.cc:290
#8  0x00007f97a444669c in rocksdb::FlushJob::Run (this=this@entry=0x7f9796986f40, file_meta=file_meta@entry=0x7f9796986d00) at ../../src/yb/rocksdb/db/flush_job.cc:191
#9  0x00007f97a43fb5ba in rocksdb::DBImpl::FlushMemTableToOutputFile (this=this@entry=0x1d24000, cfd=cfd@entry=0x1a7b000, mutable_cf_options=..., made_progress=made_progress@entry=0x7f9796987f47,
    job_context=job_context@entry=0x7f9796987d70, log_buffer=0x7f9796987480) at ../../src/yb/rocksdb/db/db_impl.cc:1873
#10 0x00007f97a43fc505 in rocksdb::DBImpl::BackgroundFlush (this=this@entry=0x1d24000, made_progress=made_progress@entry=0x7f9796987f47, job_context=job_context@entry=0x7f9796987d70, log_buffer=log_buffer@entry=0x7f9796987480,
    cfd=0x1a7b000, cfd@entry=0x0) at ../../src/yb/rocksdb/db/db_impl.cc:3202
#11 0x00007f97a4406cb3 in rocksdb::DBImpl::BackgroundCallFlush (this=this@entry=0x1d24000, cfd=cfd@entry=0x0) at ../../src/yb/rocksdb/db/db_impl.cc:3276
#12 0x00007f97a4406f6d in rocksdb::DBImpl::BGWorkFlush (db=db@entry=0x1d24000) at ../../src/yb/rocksdb/db/db_impl.cc:3132
#13 0x00007f97a4540875 in rocksdb::ThreadPool::BGThread (this=0x1adeb60, thread_id=0) at ../../src/yb/rocksdb/util/thread_posix.cc:126
#14 0x00007f97a4540899 in operator() (__closure=<optimized out>) at ../../src/yb/rocksdb/util/thread_posix.cc:165
#15 std::_Function_handler<void(), rocksdb::ThreadPool::StartBGThreads()::<lambda()> >::_M_invoke(const std::_Any_data &) (__functor=...)
    at /opt/yb-build/brew/linuxbrew-20181203T161736v9-3ba4c2ed9b0587040949a4a9a95b576f520bae/Cellar/gcc/5.5.0_4/include/c++/5.5.0/functional:1871
#16 0x00007f979d804626 in operator() (this=0x1cbdc78) at /opt/yb-build/brew/linuxbrew-20181203T161736v9-3ba4c2ed9b0587040949a4a9a95b576f520bae/Cellar/gcc/5.5.0_4/include/c++/5.5.0/functional:2267
#17 yb::Thread::SuperviseThread (arg=0x1cbdc20) at ../../src/yb/util/thread.cc:771
#18 0x00007f97990b7694 in start_thread (arg=0x7f9796990700) at pthread_create.c:333
#19 0x00007f9798df941d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
```

After:
```
#0  0x00007f1a9109ba6b in raise (sig=11) at ../sysdeps/unix/sysv/linux/pt-raise.c:35
#1  0x00007f1a9edd7343 in rocksdb::(anonymous namespace)::AddHash (h=4266458700, data=0x2d58000 "", num_lines=1023, total_bits=523776, num_probes=6) at ../../src/yb/rocksdb/util/bloom.cc:66
#2  0x00007f1a9edd87c8 in rocksdb::(anonymous namespace)::FixedSizeFilterBitsBuilder::AddKey (this=0x2d30040, key=...) at ../../src/yb/rocksdb/util/bloom.cc:463
#3  0x00007f1a9ed8e757 in rocksdb::FixedSizeFilterBlockBuilder::AddKey (this=0x1ef22d0, key=...) at ../../src/yb/rocksdb/table/fixed_size_filter_block.cc:97
#4  0x00007f1a9ed8e6eb in rocksdb::FixedSizeFilterBlockBuilder::Add (this=0x1ef22d0, key=...) at ../../src/yb/rocksdb/table/fixed_size_filter_block.cc:91
#5  0x00007f1a9ed5b2ea in rocksdb::BlockBasedTableBuilder::Add (this=0x1e4dc00, key=..., value=...) at ../../src/yb/rocksdb/table/block_based_table_builder.cc:468
#6  0x00007f1a9eb80bd4 in rocksdb::BuildTable (dbname="/tmp/yb_tests__2020-12-14T18_15_29__23449.18214.17918/mytestdb-814110369", env=0x7f1a9f35fe40 <rocksdb::Env::Default()::default_env>, ioptions=..., env_options=...,
    table_cache=0x1e4b740, iter=0x7f1a8e961858, meta=0x7f1a8e962480, internal_comparator=std::shared_ptr<const rocksdb::InternalKeyComparator> (use count 3, weak count 0) = {...},
    int_tbl_prop_collector_factories=std::vector of length 1, capacity 1 = {...}, column_family_id=0, snapshots=std::vector of length 0, capacity 0, earliest_write_conflict_snapshot=72057594037927935,
    compression=rocksdb::kSnappyCompression, compression_opts=..., paranoid_file_checks=false, internal_stats=0x1ebc200, boundary_values_extractor=0x1e4c3f0, io_priority=rocksdb::Env::IO_HIGH, table_properties=0x7f1a8e962f70)
    at ../../src/yb/rocksdb/db/builder.cc:160
#7  0x00007f1a9ec8e56b in rocksdb::FlushJob::WriteLevel0Table (this=0x7f1a8e962eb0, mems=..., edit=0x1ec0268, meta=0x7f1a8e962480) at ../../src/yb/rocksdb/db/flush_job.cc:290
#8  0x00007f1a9ec8d767 in rocksdb::FlushJob::Run (this=0x7f1a8e962eb0, file_meta=0x7f1a8e962c70) at ../../src/yb/rocksdb/db/flush_job.cc:191
#9  0x00007f1a9ec10c56 in rocksdb::DBImpl::FlushMemTableToOutputFile (this=0x1ea0000, cfd=0x1bf7000, mutable_cf_options=..., made_progress=0x7f1a8e9640b7, job_context=0x7f1a8e963ee0, log_buffer=0x7f1a8e9635f0)
    at ../../src/yb/rocksdb/db/db_impl.cc:1873
#10 0x00007f1a9ec18bb6 in rocksdb::DBImpl::BackgroundFlush (this=0x1ea0000, made_progress=0x7f1a8e9640b7, job_context=0x7f1a8e963ee0, log_buffer=0x7f1a8e9635f0, cfd=0x1bf7000) at ../../src/yb/rocksdb/db/db_impl.cc:3202
#11 0x00007f1a9ec1914d in rocksdb::DBImpl::BackgroundCallFlush (this=0x1ea0000, cfd=0x0) at ../../src/yb/rocksdb/db/db_impl.cc:3276
#12 0x00007f1a9ec182fa in rocksdb::DBImpl::BGWorkFlush (db=0x1ea0000) at ../../src/yb/rocksdb/db/db_impl.cc:3132
#13 0x00007f1a9ee02747 in rocksdb::ThreadPool::BGThread (this=0x1c5ab60, thread_id=0) at ../../src/yb/rocksdb/util/thread_posix.cc:126
#14 0x00007f1a9ee028c6 in rocksdb::ThreadPool::<lambda()>::operator()(void) const (__closure=0x1e39c78) at ../../src/yb/rocksdb/util/thread_posix.cc:165
#15 0x00007f1a9ee03140 in std::_Function_handler<void(), rocksdb::ThreadPool::StartBGThreads()::<lambda()> >::_M_invoke(const std::_Any_data &) (__functor=...)
    at /opt/yb-build/brew/linuxbrew-20181203T161736v9-3ba4c2ed9b0587040949a4a9a95b576f520bae/Cellar/gcc/5.5.0_4/include/c++/5.5.0/functional:1871
#16 0x00007f1aa1efe732 in std::function<void ()>::operator()() const (this=0x1e39c78) at /opt/yb-build/brew/linuxbrew-20181203T161736v9-3ba4c2ed9b0587040949a4a9a95b576f520bae/Cellar/gcc/5.5.0_4/include/c++/5.5.0/functional:2267
#17 0x00007f1a95e6bcf9 in yb::Thread::SuperviseThread (arg=0x1e39c20) at ../../src/yb/util/thread.cc:771
#18 0x00007f1a91093694 in start_thread (arg=0x7f1a8e96c700) at pthread_create.c:333
#19 0x00007f1a90dd541d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
```

Test Plan:
```
#!/usr/bin/env bash

set -euo pipefail

for i in {1..50}
do
  echo "Iteration: $i"
  rm -rf build/debug-gcc-dynamic-ninja/share/initial_sys_catalog_snapshot
  ybd --sj packaged
  ./bin/yb-ctl destroy
  ./bin/yb-ctl start
  ./bin/ysqlsh -c "SELECT 1"
done
./bin/yb-ctl destroy
```

Reviewers: bogdan, sergei, dmitry, mbautin

Reviewed By: mbautin

Subscribers: eng

Differential Revision: https://phabricator.dev.yugabyte.com/D10121
frozenspider added a commit that referenced this issue Oct 27, 2021
Summary:
When running `org.yb.pgsql.TestYsqlUpgrade#upgradeIsIdempotent` in ASAN, the following (slightly groomed) error is produced:

```
==10620==ERROR: AddressSanitizer: stack-use-after-scope on address 0x7efff21dd790 at pc 0x7f0012f16eaa bp 0x7efff21dcd30 sp 0x7efff21dcd28
READ of size 1 at 0x7efff21dd790 thread T30 (rpc_tp_TabletSe)
    #0 0x7f0012f16ea9 in std::__1::basic_string<...>::__is_long() ...
    #1 0x7f0012f16ea9 in std::__1::basic_string<...>::__get_pointer() ...
    #2 0x7f0012f16ea9 in std::__1::basic_string<...>::data() ...
    #3 0x7f0012f16ea9 in std::__1::basic_string<...>::c_str() ...
    #4 0x7f0012f16ea9 in yb::PgDeriveSocketDir(std::__1::basic_string<...> const&) yb/util/pg_util.cc:29
    #5 0x7f002863cf14 in yb::pgwrapper::YsqlUpgradeHelper::Connect(std::__1::basic_string<...> const&) yb/yql/pgwrapper/ysql_upgrade.cc:233:7
    #6 0x7f002863d6aa in yb::pgwrapper::YsqlUpgradeHelper::Upgrade() yb/yql/pgwrapper/ysql_upgrade.cc:251:24
    #7 0x7f0029043b61 in yb::tserver::TabletServiceAdminImpl::UpgradeYsql(...) yb/tserver/tablet_service.cc:1558:38
    #8 0x7f001d74316e in yb::tserver::TabletServerAdminServiceIf::Handle(...) yb/tserver/tserver_admin.service.cc:490:7
    #9 0x7f001427742d in yb::rpc::ServicePoolImpl::Handle(...) yb/rpc/service_pool.cc:260:19
    #10 0x7f00141283f0 in yb::rpc::InboundCall::InboundCallTask::Run() yb/rpc/inbound_call.cc:218:13
    #11 0x7f00142a4c2b in yb::rpc::(anonymous namespace)::Worker::Execute() yb/rpc/thread_pool.cc:105:15
    #12 0x7f0012fbcdd0 in yb::Thread::SuperviseThread(void*) yb/util/thread.cc:771:3
    #13 0x7f0007547ea4 in start_thread (/lib64/libpthread.so.0+0x7ea4)
    #14 0x7f00072709fc in __clone (/lib64/libc.so.6+0xfe9fc)
```

Apparently, the `server_->pgsql_proxy_bind_address()` value, passed by reference to the `YsqlUpgradeHelper` constructor from `TabletServiceAdminImpl::UpgradeYsql`, somehow goes out of scope.
To fix this, it's now passed by value.

Test Plan: ybd asan --java-test org.yb.pgsql.TestYsqlUpgrade#upgradeIsIdempotent

Reviewers: sergei, mihnea, jason, dmitry

Reviewed By: dmitry

Subscribers: bogdan, yql

Differential Revision: https://phabricator.dev.yugabyte.com/D13557
adithya-kb pushed a commit that referenced this issue Dec 1, 2022
Summary:
We observed a crash while running the TPCC workload with CDCSDK enabled.
The stack trace is:

```
(gdb) bt
#0  0x0000557f25b11910 in yb::DatumMessagePB::MergeFrom(yb::DatumMessagePB const&) ()
#1  0x0000557f258a41ef in yb::cdc::PopulateBeforeImage(std::__1::shared_ptr<yb::tablet::TabletPeer> const&, yb::ReadHybridTime const&, yb::cdc::RowMessage*, std::__1::unordered_map<unsigned int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::hash<unsigned int>, std::__1::equal_to<unsigned int>, std::__1::allocator<std::__1::pair<unsigned int const, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > > const&, std::__1::unordered_map<unsigned int, std::__1::vector<yb::master::PgAttributePB, std::__1::allocator<yb::master::PgAttributePB> >, std::__1::hash<unsigned int>, std::__1::equal_to<unsigned int>, std::__1::allocator<std::__1::pair<unsigned int const, std::__1::vector<yb::master::PgAttributePB, std::__1::allocator<yb::master::PgAttributePB> > > > > const&, yb::docdb::SubDocKey const&, yb::Schema const&, unsigned int) ()
#2  0x0000557f258a7304 in yb::cdc::PopulateCDCSDKIntentRecord(yb::OpId const&, yb::StronglyTypedUuid<yb::TransactionId_Tag> const&, std::__1::vector<yb::docdb::IntentKeyValueForCDC, std::__1::allocator<yb::docdb::IntentKeyValueForCDC> > const&, yb::cdc::StreamMetadata const&, std::__1::shared_ptr<yb::tablet::TabletPeer> const&, std::__1::unordered_map<unsigned int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::hash<unsigned int>, std::__1::equal_to<unsigned int>, std::__1::allocator<std::__1::pair<unsigned int const, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > > const&, std::__1::unordered_map<unsigned int, std::__1::vector<yb::master::PgAttributePB, std::__1::allocator<yb::master::PgAttributePB> >, std::__1::hash<unsigned int>, std::__1::equal_to<unsigned int>, std::__1::allocator<std::__1::pair<unsigned int const, std::__1::vector<yb::master::PgAttributePB, std::__1::allocator<yb::master::PgAttributePB> > > > > const&, yb::cdc::GetChangesResponsePB*, yb::ScopedTrackedConsumption*, unsigned int*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >*, yb::Schema*, unsigned int, unsigned long const&) ()
#3  0x0000557f258aaa27 in yb::cdc::ProcessIntents(yb::OpId const&, yb::StronglyTypedUuid<yb::TransactionId_Tag> const&, yb::cdc::StreamMetadata const&, std::__1::unordered_map<unsigned int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::hash<unsigned int>, std::__1::equal_to<unsigned int>, std::__1::allocator<std::__1::pair<unsigned int const, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > > const&, std::__1::unordered_map<unsigned int, std::__1::vector<yb::master::PgAttributePB, std::__1::allocator<yb::master::PgAttributePB> >, std::__1::hash<unsigned int>, std::__1::equal_to<unsigned int>, std::__1::allocator<std::__1::pair<unsigned int const, std::__1::vector<yb::master::PgAttributePB, std::__1::allocator<yb::master::PgAttributePB> > > > > const&, yb::cdc::GetChangesResponsePB*, yb::ScopedTrackedConsumption*, yb::cdc::CDCSDKCheckpointPB*, std::__1::shared_ptr<yb::tablet::TabletPeer> const&, std::__1::vector<yb::docdb::IntentKeyValueForCDC, std::__1::allocator<yb::docdb::IntentKeyValueForCDC> >*, yb::docdb::ApplyTransactionState*, yb::client::YBClient*, std::__1::shared_ptr<yb::Schema>*, unsigned int*, unsigned long const&) ()
#4  0x0000557f258b00c1 in yb::cdc::GetChangesForCDCSDK(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, yb::cdc::CDCSDKCheckpointPB const&, yb::cdc::StreamMetadata const&, std::__1::shared_ptr<yb::tablet::TabletPeer> const&, std::__1::shared_ptr<yb::MemTracker> const&, std::__1::unordered_map<unsigned int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::hash<unsigned int>, std::__1::equal_to<unsigned int>, std::__1::allocator<std::__1::pair<unsigned int const, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > > const&, std::__1::unordered_map<unsigned int, std::__1::vector<yb::master::PgAttributePB, std::__1::allocator<yb::master::PgAttributePB> >, std::__1::hash<unsigned int>, std::__1::equal_to<unsigned int>, std::__1::allocator<std::__1::pair<unsigned int const, std::__1::vector<yb::master::PgAttributePB, std::__1::allocator<yb::master::PgAttributePB> > > > > const&, yb::client::YBClient*, yb::consensus::ReplicateMsgsHolder*, yb::cdc::GetChangesResponsePB*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >*, std::__1::shared_ptr<yb::Schema>*, unsigned int*, yb::OpId*, long*, std::__1::chrono::time_point<yb::CoarseMonoClock, std::__1::chrono::duration<long long, std::__1::ratio<1l, 1000000000l> > >) ()
#5  0x0000557f2586c448 in yb::cdc::CDCServiceImpl::GetChanges(yb::cdc::GetChangesRequestPB const*, yb::cdc::GetChangesResponsePB*, yb::rpc::RpcContext) ()
#6  0x0000557f25908246 in std::__1::__function::__func<yb::cdc::CDCServiceIf::InitMethods(scoped_refptr<yb::MetricEntity> const&)::$_3, std::__1::allocator<yb::cdc::CDCServiceIf::InitMethods(scoped_refptr<yb::MetricEntity> const&)::$_3>, void (std::__1::shared_ptr<yb::rpc::InboundCall>)>::operator()(std::__1::shared_ptr<yb::rpc::InboundCall>&&) ()
#7  0x0000557f2590a6af in yb::cdc::CDCServiceIf::Handle(std::__1::shared_ptr<yb::rpc::InboundCall>) ()
#8  0x0000557f26227a1e in yb::rpc::ServicePoolImpl::Handle(std::__1::shared_ptr<yb::rpc::InboundCall>) ()
#9  0x0000557f2616db2f in yb::rpc::InboundCall::InboundCallTask::Run() ()
#10 0x0000557f26236583 in yb::rpc::(anonymous namespace)::Worker::Execute() ()
#11 0x0000557f268698cf in yb::Thread::SuperviseThread(void*) ()
#12 0x00007fa6fce89694 in ?? ()
#13 0x0000000000000000 in ?? ()
```

The problem is in the method PopulateBeforeImage.
When we drop a column, the row won't have data for the dropped column, and hence it will not be added to the "old_tuple" member of RowMessage. This means the size of "old_tuple" does not match the number of columns in the schema, so the line "row_message->old_tuple(static_cast<int>(index))" could lead to an out-of-bounds access.
Instead, we now keep track of the columns actually found in the row.

Test Plan: Running existing ctests

Reviewers: srangavajjula, sdash, skumar

Reviewed By: sdash, skumar

Differential Revision: https://phabricator.dev.yugabyte.com/D21338
adithya-kb pushed a commit that referenced this issue Dec 1, 2022
…h before image

Summary:
Original commit: 136e713 / D21338
We observed a crash while running the TPCC workload with CDCSDK enabled.
The stack trace is:
```
(gdb) bt
#0  0x0000557f25b11910 in yb::DatumMessagePB::MergeFrom(yb::DatumMessagePB const&) ()
#1  0x0000557f258a41ef in yb::cdc::PopulateBeforeImage(std::__1::shared_ptr<yb::tablet::TabletPeer> const&, yb::ReadHybridTime const&, yb::cdc::RowMessage*, std::__1::unordered_map<unsigned int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::hash<unsigned int>, std::__1::equal_to<unsigned int>, std::__1::allocator<std::__1::pair<unsigned int const, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > > const&, std::__1::unordered_map<unsigned int, std::__1::vector<yb::master::PgAttributePB, std::__1::allocator<yb::master::PgAttributePB> >, std::__1::hash<unsigned int>, std::__1::equal_to<unsigned int>, std::__1::allocator<std::__1::pair<unsigned int const, std::__1::vector<yb::master::PgAttributePB, std::__1::allocator<yb::master::PgAttributePB> > > > > const&, yb::docdb::SubDocKey const&, yb::Schema const&, unsigned int) ()
#2  0x0000557f258a7304 in yb::cdc::PopulateCDCSDKIntentRecord(yb::OpId const&, yb::StronglyTypedUuid<yb::TransactionId_Tag> const&, std::__1::vector<yb::docdb::IntentKeyValueForCDC, std::__1::allocator<yb::docdb::IntentKeyValueForCDC> > const&, yb::cdc::StreamMetadata const&, std::__1::shared_ptr<yb::tablet::TabletPeer> const&, std::__1::unordered_map<unsigned int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::hash<unsigned int>, std::__1::equal_to<unsigned int>, std::__1::allocator<std::__1::pair<unsigned int const, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > > const&, std::__1::unordered_map<unsigned int, std::__1::vector<yb::master::PgAttributePB, std::__1::allocator<yb::master::PgAttributePB> >, std::__1::hash<unsigned int>, std::__1::equal_to<unsigned int>, std::__1::allocator<std::__1::pair<unsigned int const, std::__1::vector<yb::master::PgAttributePB, std::__1::allocator<yb::master::PgAttributePB> > > > > const&, yb::cdc::GetChangesResponsePB*, yb::ScopedTrackedConsumption*, unsigned int*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >*, yb::Schema*, unsigned int, unsigned long const&) ()
#3  0x0000557f258aaa27 in yb::cdc::ProcessIntents(yb::OpId const&, yb::StronglyTypedUuid<yb::TransactionId_Tag> const&, yb::cdc::StreamMetadata const&, std::__1::unordered_map<unsigned int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::hash<unsigned int>, std::__1::equal_to<unsigned int>, std::__1::allocator<std::__1::pair<unsigned int const, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > > const&, std::__1::unordered_map<unsigned int, std::__1::vector<yb::master::PgAttributePB, std::__1::allocator<yb::master::PgAttributePB> >, std::__1::hash<unsigned int>, std::__1::equal_to<unsigned int>, std::__1::allocator<std::__1::pair<unsigned int const, std::__1::vector<yb::master::PgAttributePB, std::__1::allocator<yb::master::PgAttributePB> > > > > const&, yb::cdc::GetChangesResponsePB*, yb::ScopedTrackedConsumption*, yb::cdc::CDCSDKCheckpointPB*, std::__1::shared_ptr<yb::tablet::TabletPeer> const&, std::__1::vector<yb::docdb::IntentKeyValueForCDC, std::__1::allocator<yb::docdb::IntentKeyValueForCDC> >*, yb::docdb::ApplyTransactionState*, yb::client::YBClient*, std::__1::shared_ptr<yb::Schema>*, unsigned int*, unsigned long const&) ()
#4  0x0000557f258b00c1 in yb::cdc::GetChangesForCDCSDK(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, yb::cdc::CDCSDKCheckpointPB const&, yb::cdc::StreamMetadata const&, std::__1::shared_ptr<yb::tablet::TabletPeer> const&, std::__1::shared_ptr<yb::MemTracker> const&, std::__1::unordered_map<unsigned int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::hash<unsigned int>, std::__1::equal_to<unsigned int>, std::__1::allocator<std::__1::pair<unsigned int const, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > > const&, std::__1::unordered_map<unsigned int, std::__1::vector<yb::master::PgAttributePB, std::__1::allocator<yb::master::PgAttributePB> >, std::__1::hash<unsigned int>, std::__1::equal_to<unsigned int>, std::__1::allocator<std::__1::pair<unsigned int const, std::__1::vector<yb::master::PgAttributePB, std::__1::allocator<yb::master::PgAttributePB> > > > > const&, yb::client::YBClient*, yb::consensus::ReplicateMsgsHolder*, yb::cdc::GetChangesResponsePB*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >*, std::__1::shared_ptr<yb::Schema>*, unsigned int*, yb::OpId*, long*, std::__1::chrono::time_point<yb::CoarseMonoClock, std::__1::chrono::duration<long long, std::__1::ratio<1l, 1000000000l> > >) ()
#5  0x0000557f2586c448 in yb::cdc::CDCServiceImpl::GetChanges(yb::cdc::GetChangesRequestPB const*, yb::cdc::GetChangesResponsePB*, yb::rpc::RpcContext) ()
#6  0x0000557f25908246 in std::__1::__function::__func<yb::cdc::CDCServiceIf::InitMethods(scoped_refptr<yb::MetricEntity> const&)::$_3, std::__1::allocator<yb::cdc::CDCServiceIf::InitMethods(scoped_refptr<yb::MetricEntity> const&)::$_3>, void (std::__1::shared_ptr<yb::rpc::InboundCall>)>::operator()(std::__1::shared_ptr<yb::rpc::InboundCall>&&) ()
#7  0x0000557f2590a6af in yb::cdc::CDCServiceIf::Handle(std::__1::shared_ptr<yb::rpc::InboundCall>) ()
#8  0x0000557f26227a1e in yb::rpc::ServicePoolImpl::Handle(std::__1::shared_ptr<yb::rpc::InboundCall>) ()
#9  0x0000557f2616db2f in yb::rpc::InboundCall::InboundCallTask::Run() ()
#10 0x0000557f26236583 in yb::rpc::(anonymous namespace)::Worker::Execute() ()
#11 0x0000557f268698cf in yb::Thread::SuperviseThread(void*) ()
#12 0x00007fa6fce89694 in ?? ()
#13 0x0000000000000000 in ?? ()
```

The problem is in the method PopulateBeforeImage.
When we drop a column, the row won't have data for the dropped column, and hence it will not be added to the "old_tuple" member of RowMessage. This means the size of "old_tuple" does not match the number of columns in the schema, so the line "row_message->old_tuple(static_cast<int>(index))" could lead to an out-of-bounds access.
Instead, we now keep track of the columns actually found in the row.

Test Plan: Running existing ctests

Reviewers: skumar, srangavajjula, sdash

Reviewed By: sdash

Subscribers: bogdan

Differential Revision: https://phabricator.dev.yugabyte.com/D21339
timothy-e added a commit to timothy-e/yugabyte-db that referenced this issue Dec 13, 2022
* remove unnecessary comment, only prefetch when necessary

* cleanup relcache comment, do not count empty password

* do not inc past cap

* do not close unopened heap
vrajat pushed a commit to timothy-e/yugabyte-db that referenced this issue Dec 14, 2022
* remove unnecessary comment, only prefetch when necessary

* cleanup relcache comment, do not count empty password

* do not inc past cap

* do not close unopened heap
jasonyb pushed a commit that referenced this issue Dec 15, 2023
Summary:
The YB Seq Scan code path is not hit because Foreign Scan is the default and
pg_hint_plan does not work.  The upcoming merge with YB master will bring in
master commit 465ee2c, which changes the
default to YB Seq Scan.

To test YB Seq Scan, a temporary patch is needed (see the test plan).
With that, two bugs are encountered: fix them.

1. FailedAssertion("TTS_IS_VIRTUAL(slot)"

   On simple test case

       create table t (i int primary key, j int);
       select * from t;

   get

       TRAP: FailedAssertion("TTS_IS_VIRTUAL(slot)", File: "../../../../../../../src/postgres/src/backend/access/yb_access/yb_scan.c", Line: 3473, PID: 2774450)

   Details:

       #0  0x00007fd52616eacf in raise () from /lib64/libc.so.6
       #1  0x00007fd526141ea5 in abort () from /lib64/libc.so.6
       #2  0x0000000000af33ad in ExceptionalCondition (conditionName=conditionName@entry=0xc2938d "TTS_IS_VIRTUAL(slot)", errorType=errorType@entry=0xc01498 "FailedAssertion",
           fileName=fileName@entry=0xc28f18 "../../../../../../../src/postgres/src/backend/access/yb_access/yb_scan.c", lineNumber=lineNumber@entry=3473)
           at ../../../../../../../src/postgres/src/backend/utils/error/assert.c:69
       #3  0x00000000005c26bd in ybFetchNext (handle=0x2600ffc43680, slot=slot@entry=0x2600ff6c2980, relid=16384)
           at ../../../../../../../src/postgres/src/backend/access/yb_access/yb_scan.c:3473
       #4  0x00000000007de444 in YbSeqNext (node=0x2600ff6c2778) at ../../../../../../src/postgres/src/backend/executor/nodeYbSeqscan.c:156
       #5  0x000000000078b3c6 in ExecScanFetch (node=node@entry=0x2600ff6c2778, accessMtd=accessMtd@entry=0x7de2b9 <YbSeqNext>, recheckMtd=recheckMtd@entry=0x7de26e <YbSeqRecheck>)
           at ../../../../../../src/postgres/src/backend/executor/execScan.c:133
       #6  0x000000000078b44e in ExecScan (node=0x2600ff6c2778, accessMtd=accessMtd@entry=0x7de2b9 <YbSeqNext>, recheckMtd=recheckMtd@entry=0x7de26e <YbSeqRecheck>)
           at ../../../../../../src/postgres/src/backend/executor/execScan.c:182
       #7  0x00000000007de298 in ExecYbSeqScan (pstate=<optimized out>) at ../../../../../../src/postgres/src/backend/executor/nodeYbSeqscan.c:191
       #8  0x00000000007871ef in ExecProcNodeFirst (node=0x2600ff6c2778) at ../../../../../../src/postgres/src/backend/executor/execProcnode.c:480
       #9  0x000000000077db0e in ExecProcNode (node=0x2600ff6c2778) at ../../../../../../src/postgres/src/include/executor/executor.h:285
       #10 ExecutePlan (execute_once=<optimized out>, dest=0x2600ff6b1a10, direction=<optimized out>, numberTuples=0, sendTuples=true, operation=CMD_SELECT,
           use_parallel_mode=<optimized out>, planstate=0x2600ff6c2778, estate=0x2600ff6c2128) at ../../../../../../src/postgres/src/backend/executor/execMain.c:1650
       #11 standard_ExecutorRun (queryDesc=0x2600ff675128, direction=<optimized out>, count=0, execute_once=<optimized out>)
           at ../../../../../../src/postgres/src/backend/executor/execMain.c:367
       #12 0x000000000077dbfe in ExecutorRun (queryDesc=queryDesc@entry=0x2600ff675128, direction=direction@entry=ForwardScanDirection, count=count@entry=0, execute_once=<optimized out>)
           at ../../../../../../src/postgres/src/backend/executor/execMain.c:308
       #13 0x0000000000982617 in PortalRunSelect (portal=portal@entry=0x2600ff90e128, forward=forward@entry=true, count=0, count@entry=9223372036854775807, dest=dest@entry=0x2600ff6b1a10)
           at ../../../../../../src/postgres/src/backend/tcop/pquery.c:954
       #14 0x000000000098433c in PortalRun (portal=portal@entry=0x2600ff90e128, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true,
           dest=dest@entry=0x2600ff6b1a10, altdest=altdest@entry=0x2600ff6b1a10, qc=0x7fffc14a13c0) at ../../../../../../src/postgres/src/backend/tcop/pquery.c:786
       #15 0x000000000097e65b in exec_simple_query (query_string=0x2600ffdc6128 "select * from t;") at ../../../../../../src/postgres/src/backend/tcop/postgres.c:1321
       #16 yb_exec_simple_query_impl (query_string=query_string@entry=0x2600ffdc6128) at ../../../../../../src/postgres/src/backend/tcop/postgres.c:5060
       #17 0x000000000097b7a5 in yb_exec_query_wrapper_one_attempt (exec_context=exec_context@entry=0x2600ffdc6000, restart_data=restart_data@entry=0x7fffc14a1640,
           functor=functor@entry=0x97e033 <yb_exec_simple_query_impl>, functor_context=functor_context@entry=0x2600ffdc6128, attempt=attempt@entry=0, retry=retry@entry=0x7fffc14a15ff)
           at ../../../../../../src/postgres/src/backend/tcop/postgres.c:5028
       #18 0x000000000097d077 in yb_exec_query_wrapper (exec_context=exec_context@entry=0x2600ffdc6000, restart_data=restart_data@entry=0x7fffc14a1640,
           functor=functor@entry=0x97e033 <yb_exec_simple_query_impl>, functor_context=functor_context@entry=0x2600ffdc6128)
           at ../../../../../../src/postgres/src/backend/tcop/postgres.c:5052
       #19 0x000000000097d0ca in yb_exec_simple_query (query_string=query_string@entry=0x2600ffdc6128 "select * from t;", exec_context=exec_context@entry=0x2600ffdc6000)
           at ../../../../../../src/postgres/src/backend/tcop/postgres.c:5075
       #20 0x000000000097fe8a in PostgresMain (dbname=<optimized out>, username=<optimized out>) at ../../../../../../src/postgres/src/backend/tcop/postgres.c:5794
       #21 0x00000000008c8354 in BackendRun (port=0x2600ff8423c0) at ../../../../../../src/postgres/src/backend/postmaster/postmaster.c:4791
       #22 BackendStartup (port=0x2600ff8423c0) at ../../../../../../src/postgres/src/backend/postmaster/postmaster.c:4491
       #23 ServerLoop () at ../../../../../../src/postgres/src/backend/postmaster/postmaster.c:1878
       #24 0x00000000008caa55 in PostmasterMain (argc=argc@entry=25, argv=argv@entry=0x2600ffdc01a0) at ../../../../../../src/postgres/src/backend/postmaster/postmaster.c:1533
       #25 0x0000000000804ba8 in PostgresServerProcessMain (argc=25, argv=0x2600ffdc01a0) at ../../../../../../src/postgres/src/backend/main/main.c:208
       #26 0x0000000000804bc8 in main ()

       3469    ybFetchNext(YBCPgStatement handle,
       3470                            TupleTableSlot *slot, Oid relid)
       3471    {
       3472            Assert(slot != NULL);
       3473            Assert(TTS_IS_VIRTUAL(slot));

       (gdb) p *slot
       $2 = {type = T_TupleTableSlot, tts_flags = 18, tts_nvalid = 0, tts_ops = 0xeaf5e0 <TTSOpsHeapTuple>, tts_tupleDescriptor = 0x2600ff6416c0, tts_values = 0x2600ff6c2a00, tts_isnull = 0x2600ff6c2a10, tts_mcxt = 0x2600ff6c2000, tts_tid = {ip_blkid = {bi_hi = 0, bi_lo = 0}, ip_posid = 0, yb_item = {ybctid = 0}}, tts_tableOid = 0, tts_yb_insert_oid = 0}

   Fix by making YB Seq Scan always use a virtual slot.  This is similar
   to what is done for YB Foreign Scan.

2. segfault in ending scan

   Same simple test case gives segfault at a later stage.

   Details:

       #0  0x00000000007de762 in table_endscan (scan=0x3debfe3ab88) at ../../../../../../src/postgres/src/include/access/tableam.h:997
       #1  ExecEndYbSeqScan (node=node@entry=0x3debfe3a778) at ../../../../../../src/postgres/src/backend/executor/nodeYbSeqscan.c:298
       #2  0x0000000000787a75 in ExecEndNode (node=0x3debfe3a778) at ../../../../../../src/postgres/src/backend/executor/execProcnode.c:649
       #3  0x000000000077ffaf in ExecEndPlan (estate=0x3debfe3a128, planstate=<optimized out>) at ../../../../../../src/postgres/src/backend/executor/execMain.c:1489
       #4  standard_ExecutorEnd (queryDesc=0x2582fdc88928) at ../../../../../../src/postgres/src/backend/executor/execMain.c:503
       #5  0x00000000007800f8 in ExecutorEnd (queryDesc=queryDesc@entry=0x2582fdc88928) at ../../../../../../src/postgres/src/backend/executor/execMain.c:474
       #6  0x00000000006f140c in PortalCleanup (portal=0x2582ff900128) at ../../../../../../src/postgres/src/backend/commands/portalcmds.c:305
       #7  0x0000000000b3c36a in PortalDrop (portal=portal@entry=0x2582ff900128, isTopCommit=isTopCommit@entry=false)
           at ../../../../../../../src/postgres/src/backend/utils/mmgr/portalmem.c:514
       #8  0x000000000097e667 in exec_simple_query (query_string=0x2582ffdc6128 "select * from t;") at ../../../../../../src/postgres/src/backend/tcop/postgres.c:1331
       #9  yb_exec_simple_query_impl (query_string=query_string@entry=0x2582ffdc6128) at ../../../../../../src/postgres/src/backend/tcop/postgres.c:5060
       #10 0x000000000097b79a in yb_exec_query_wrapper_one_attempt (exec_context=exec_context@entry=0x2582ffdc6000, restart_data=restart_data@entry=0x7ffc81c0e7d0,
           functor=functor@entry=0x97e028 <yb_exec_simple_query_impl>, functor_context=functor_context@entry=0x2582ffdc6128, attempt=attempt@entry=0, retry=retry@entry=0x7ffc81c0e78f)
           at ../../../../../../src/postgres/src/backend/tcop/postgres.c:5028
       #11 0x000000000097d06c in yb_exec_query_wrapper (exec_context=exec_context@entry=0x2582ffdc6000, restart_data=restart_data@entry=0x7ffc81c0e7d0,
           functor=functor@entry=0x97e028 <yb_exec_simple_query_impl>, functor_context=functor_context@entry=0x2582ffdc6128)
           at ../../../../../../src/postgres/src/backend/tcop/postgres.c:5052
       #12 0x000000000097d0bf in yb_exec_simple_query (query_string=query_string@entry=0x2582ffdc6128 "select * from t;", exec_context=exec_context@entry=0x2582ffdc6000)
           at ../../../../../../src/postgres/src/backend/tcop/postgres.c:5075
       #13 0x000000000097fe7f in PostgresMain (dbname=<optimized out>, username=<optimized out>) at ../../../../../../src/postgres/src/backend/tcop/postgres.c:5794
       #14 0x00000000008c8349 in BackendRun (port=0x2582ff8403c0) at ../../../../../../src/postgres/src/backend/postmaster/postmaster.c:4791
       #15 BackendStartup (port=0x2582ff8403c0) at ../../../../../../src/postgres/src/backend/postmaster/postmaster.c:4491
       #16 ServerLoop () at ../../../../../../src/postgres/src/backend/postmaster/postmaster.c:1878
       #17 0x00000000008caa4a in PostmasterMain (argc=argc@entry=25, argv=argv@entry=0x2582ffdc01a0) at ../../../../../../src/postgres/src/backend/postmaster/postmaster.c:1533
       #18 0x0000000000804b9d in PostgresServerProcessMain (argc=25, argv=0x2582ffdc01a0) at ../../../../../../src/postgres/src/backend/main/main.c:208
       #19 0x0000000000804bbd in main ()

       294             /*
       295              * close heap scan
       296              */
       297             if (tsdesc != NULL)
       298                     table_endscan(tsdesc);

   The reason is that the initial merge 55782d5
   incorrectly merges the end of ExecEndYbSeqScan.  Upstream PG commit
   9ddef36278a9f676c07d0b4d9f33fa22e48ce3b5 removes this code, but the
   initial merge duplicates the lines.  Remove those duplicated lines.

Test Plan:
Apply the following patch to activate YB Seq Scan:

    diff --git a/src/postgres/src/backend/optimizer/path/allpaths.c b/src/postgres/src/backend/optimizer/path/allpaths.c
    index 8a4c38a965..854d84a648 100644
    --- a/src/postgres/src/backend/optimizer/path/allpaths.c
    +++ b/src/postgres/src/backend/optimizer/path/allpaths.c
    @@ -576,7 +576,7 @@ set_rel_pathlist(PlannerInfo *root, RelOptInfo *rel,
                     else
                     {
                         /* Plain relation */
    -                    if (IsYBRelationById(rte->relid))
    +                    if (false)
                         {
                             /*
                              * Using a foreign scan which will use the YB FDW by

On almalinux 8,

    ./yb_build.sh fastdebug --gcc11
    pg15_tests/run_all_tests.sh fastdebug --gcc11 --sj --sp --scb

fails the following tests:

- test_D29546
- test_pg15_regress: yb_pg15
- test_types_geo: yb_pg_box
- test_hash_in_queries: yb_hash_in_queries

Manually check that the failures are due to YB Seq Scan explain output
differences.

Reviewers: aagrawal, tfoucher

Reviewed By: tfoucher

Subscribers: yql

Differential Revision: https://phorge.dev.yugabyte.com/D31139
dr0pdb pushed a commit to dr0pdb/yugabyte-db that referenced this issue Jul 24, 2024
ddhodge added a commit that referenced this issue Jul 30, 2024
…23065)

* initial commit for logical replication docs

* title changes

* changes to view table

* fixed line break

* fixed line break

* added content for delete and update

* added more content

* replaced hyperlink todos with reminders

* added snapshot metrics

* added more content

* added more config properties to docs

* added more config properties to docs

* added more config properties to docs

* replaced postgresql instances with yugabytedb

* added properties

* added complete properties

* changed postgresql to yugabytedb

* added example for all record types

* fixed highlighting of table header

* added type representations

* added type representations

* full content in now;

* full content in now;

* changed postgres references appropriately

* added a missing keyword

* changed name

* self review comments

* self review comments

* added section for logical replication

* added section for logical replication

* modified content for monitor page

* added content for monitoring

* rebased to master;

* CDC logical replication overview (#3)


Co-authored-by: Vaibhav Kushwaha <34186745+vaibhav-yb@users.noreply.github.com>

* advanced-topic (#5)


Co-authored-by: Vaibhav Kushwaha <34186745+vaibhav-yb@users.noreply.github.com>

* removed references to incremental and ad-hoc snapshots

* replaced index page with an empty one

* addressed review comments

* added getting started section

* added section for get started

* self review comments

* self review comments

* group review comments

* added hstore and domain type docs

* Advance configurations for CDC using logical replication (#2)

* Fix overview section (#7)

* Monitor section (#4)


Co-authored-by: Vaibhav Kushwaha <34186745+vaibhav-yb@users.noreply.github.com>

* Initial Snapshot content (#6)

* Add getting started (#1)

* Fix for broken note (#9)

* Fix the yaml parsing issue

Summary:
Fixes the YAML parsing issue. We changed the formatting of the YAML list; this diff fixes the
corresponding usage.

Test Plan:
Prepared alma9 node using ynp.
Verified universe creation.

Reviewers: vbansal, asharma

Reviewed By: asharma

Subscribers: yugaware

Differential Revision: https://phorge.dev.yugabyte.com/D36711

* [PLAT-14534]Add regex match for GCP Instance template

Summary:
Added a regex match for the GCP instance template.
The regex is taken from the GCP documentation [[https://cloud.google.com/compute/docs/reference/rest/v1/instanceTemplates | here]].

Test Plan: Tested manually that validation fails with invalid characters.

Reviewers: #yba-api-review!, svarshney

Reviewed By: svarshney

Subscribers: yugaware

Differential Revision: https://phorge.dev.yugabyte.com/D36543

* update diagram (#23245)

* [/PLAT-14708] Fix JSON field name in TaskInfo query

Summary: This was missed when task params were moved out of the details field.

Test Plan: Trivial - existing tests should succeed.

Reviewers: vbansal, cwang

Reviewed By: vbansal

Subscribers: yugaware

Differential Revision: https://phorge.dev.yugabyte.com/D36705

* [#23173] DocDB: Allow large bytes to be passed to RateLimiter

Summary:
RateLimiter has a debug assert that you cannot `Request` more than `GetSingleBurstBytes`. In release mode we do not perform this check, and any such oversized request gets stuck forever. This change allows large byte counts to be requested on RateLimiter by breaking requests larger than `GetSingleBurstBytes` into multiple smaller requests.

This change is a temporary fix to allow xCluster to operate without any issues. The RocksDB RateLimiter has received multiple enhancements over the years that would help avoid this and other starvation issues, e.g. facebook/rocksdb@cb2476a. We should consider pulling in those changes.
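
As a rough illustration of the chunking approach (a sketch only, not the actual patch, and assuming the RocksDB-style `RateLimiter` interface with `GetSingleBurstBytes()` and `Request()`), an oversized request can simply be split into burst-sized pieces:

```
#include <algorithm>
#include <cstdint>

#include "rocksdb/env.h"
#include "rocksdb/rate_limiter.h"

// Sketch: split a request larger than a single burst into chunks that the
// limiter can actually serve, instead of blocking forever on one huge request.
void RequestLargeBytes(rocksdb::RateLimiter* limiter, int64_t bytes) {
  const int64_t burst = limiter->GetSingleBurstBytes();
  while (bytes > 0) {
    const int64_t chunk = std::min(bytes, burst);
    limiter->Request(chunk, rocksdb::Env::IO_HIGH);  // blocks until tokens are granted
    bytes -= chunk;
  }
}
```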

Fixes #23173
Jira: DB-12112

Test Plan: RateLimiterTest.LargeRequests

Reviewers: slingam

Reviewed By: slingam

Subscribers: ybase

Differential Revision: https://phorge.dev.yugabyte.com/D36703

* [#23179] CDCSDK: Support data types with dynamically allotted oids in CDC

Summary:
This diff adds support for data types with dynamically allotted oids in CDC (for example hstore, enum arrays, etc.). Such types have an invalid pg_type_oid for the corresponding columns in the DocDB schema.

In the current implementation, in `ybc_pggate`, while decoding the CDC records we look at the `type_map_` to obtain the YBCPgTypeEntity, which is then used for decoding. However, the `type_map_` does not contain any entries for data types with dynamically allotted oids. As a result, this causes a segmentation fault. To prevent such crashes, CDC prevents the addition of tables with such columns to the stream.

This diff removes that filtering logic and adds the tables to the stream even if they have such a column. A function pointer is now passed to `YBCPgGetCDCConsistentChanges`; it takes the attribute number and the table_oid and returns the appropriate type entity by querying the `pg_type` catalog table. While decoding, if a column with an invalid pg_type_oid is encountered, the passed function is invoked to obtain the type entity for decoding.
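
To make the mechanism concrete, the following is a minimal sketch of the lazy type-resolution idea; every name in it (`TypeEntity`, `TypeEntityProvider`, `ResolveColumnType`) is a hypothetical stand-in rather than the actual `ybc_pggate` API:

```
#include <cstdint>
#include <functional>
#include <iostream>

// Hypothetical stand-in for the real type entity structure.
struct TypeEntity { uint32_t type_oid; };

// Callback: given the table OID and attribute number, return the type entity
// (the real implementation would consult the pg_type catalog).
using TypeEntityProvider =
    std::function<TypeEntity(uint32_t table_oid, int attr_num)>;

constexpr uint32_t kInvalidOid = 0;

// Use the statically known type OID when available; otherwise fall back to
// the callback for columns whose type OID was dynamically allotted.
TypeEntity ResolveColumnType(uint32_t cached_type_oid, uint32_t table_oid,
                             int attr_num, const TypeEntityProvider& resolve) {
  if (cached_type_oid != kInvalidOid) {
    return TypeEntity{cached_type_oid};
  }
  return resolve(table_oid, attr_num);
}

int main() {
  // Pretend resolver; a real one would query pg_type for (table_oid, attr_num).
  TypeEntityProvider resolver = [](uint32_t /*table_oid*/, int /*attr_num*/) {
    return TypeEntity{424242};  // placeholder OID for illustration
  };
  TypeEntity t = ResolveColumnType(kInvalidOid, 16384, 2, resolver);
  std::cout << "resolved type oid: " << t.type_oid << "\n";
  return 0;
}
```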

**Upgrade/Rollback safety:**
This diff adds a field `optional int32 attr_num` to DatumMessagePB. These changes are protected by the autoflag `ysql_yb_enable_replication_slot_consumption` which already exists but has not yet been released.
Jira: DB-12118

Test Plan:
Jenkins: urgent

All the existing cdc tests

./yb_build.sh --java-test 'org.yb.pgsql.TestPgReplicationSlot#replicationConnectionConsumptionAllDataTypesWithYbOutput'

Reviewers: skumar, stiwary, asrinivasan, dmitry

Reviewed By: stiwary, dmitry

Subscribers: steve.varnau, skarri, yql, ybase, ycdcxcluster

Tags: #jenkins-ready

Differential Revision: https://phorge.dev.yugabyte.com/D36689

* [PLAT-14710] Do not return apiToken in response to getSessionInfo

Summary:
**Context**
The GET /session_info YBA API returns:
{
    "authToken": "…",
    "apiToken": "….",
    "apiTokenVersion": "….",
    "customerUUID": "uuid1",
    "userUUID": "useruuid1"
}

The apiToken and apiTokenVersion are supposed to be the last generated token that is still valid. We had the following sequence of changes to this API.

https://yugabyte.atlassian.net/browse/PLAT-8028 - Do not store YBA token in YBA.

After the above fix, YBA does not store the apiToken anymore. So it cannot return it as part of the /session_info. The change for this ticket returned the hashed apiToken instead.

https://yugabyte.atlassian.net/browse/PLAT-14672 - getSessionInfo should generate and return api key in response

Since the hashed apiToken value is not useful to any client, and it broke YBM create cluster (https://yugabyte.atlassian.net/browse/CLOUDGA-22117), the first change for this ticket returned a new apiToken instead.

Note that GET /session_info is meant to get customer and user information for the currently authenticated session. This is useful for automation starting off an authenticated session from an existing/cached API token. It is not necessary for the /session_info API to return the authToken and apiToken: the client already has one of them, having used it to invoke the /session_info API in the first place. In fact, generating a new apiToken whenever /session_info is called invalidates the previous apiToken, which the client would not expect. There is a different API, /api_token, to regenerate the apiToken explicitly.

**Fix in this change**
So the right behaviour is for /session_info to stop sending the apiToken in the response. In fact, the current behaviour of generating a new apiToken every time will break a client (for example, the node-agent usage of /session_info at https://github.com/yugabyte/yugabyte-db/blob/4ca56cfe27d1cae64e0e61a1bde22406e003ec04/managed/node-agent/app/server/handler.go#L19).

**Client impact of not returning apiToken in response of /session_info**

This should not impact any normal client that was using /session_info only to get the user uuid and customer uuid.

However, there might be a few clients (like YBM, for example) that invoked /session_info to get the last generated apiToken from YBA. Unfortunately, this was a misuse of the API. YBA generates the apiToken in response to a few entry-point APIs like /register, /api_login and /api_token. The apiToken is long lived. YBA could choose to expire these apiTokens after a fixed (long) amount of time, but for now there is no expiration. Clients are expected to store the apiToken at their end and use it to reestablish a session with YBA whenever needed. After establishing a new session, clients would call GET /session_info to get the user uuid and customer uuid. This is getting fixed in YBM with https://yugabyte.atlassian.net/browse/CLOUDGA-22117, so this PLAT change should be taken up by YBM only after CLOUDGA-22117 is fixed.

Test Plan:
* Manually verified that session_info does not return authToken
* Shubham verified that node-agent works with this fix. Thanks Shubham!

Reviewers: svarshney, dkumar, tbedi, #yba-api-review!

Reviewed By: svarshney

Subscribers: yugaware

Differential Revision: https://phorge.dev.yugabyte.com/D36712

* [docs] updates to CVE table status column (#23225)

* updates to status column

* review comment

* format

---------

Co-authored-by: Dwight Hodge <ghodge@yugabyte.com>

* [docs] Fix load balance keyword in drivers page (#23253)

[docs] Fix `load_balance` -> `load-balance` in jdbc driver
[docs] Fix `load_balance` -> `loadBalance` in nodejs driver

* fixed compilation

* fix link, format

* format, links

* links, format

* format

* format

* minor edit

* best practice (#8)

* moved sections

* moved pages

* added key concepts page

* added link to getting started

* Dynamic table doc changes (#11)

* icons

* added box for lead link

* revert ybclient change

* revert accidental change

* revert accidental change

* revert accidental change

* fix link block for getting started page

* format

* minor edit

* links, format

* format

* links

* format

* remove reminder references

* Modified output plugin docs (#12)

* Naming edits

* format

* review comments

* diagram

* review comment

* fix links

* format

* format

* link

* review comments

* copy to stable

* link

---------

Co-authored-by: siddharth2411 <43139012+siddharth2411@users.noreply.github.com>
Co-authored-by: Shubham <svarshney@yugabyte.com>
Co-authored-by: asharma-yb <asharma@yugabyte.com>
Co-authored-by: Dwight Hodge <79169168+ddhodge@users.noreply.github.com>
Co-authored-by: Naorem Khogendro Singh <nsingh@yugabyte.com>
Co-authored-by: Hari Krishna Sunder <hari90@users.noreply.github.com>
Co-authored-by: Sumukh-Phalgaonkar <sumukhphalgaonkar@gmail.com>
Co-authored-by: Subramanian Neelakantan <sneelakantan@yugabyte.com>
Co-authored-by: Aishwarya Chakravarthy <ashchakravarthy@gmail.com>
Co-authored-by: Dwight Hodge <ghodge@yugabyte.com>
Co-authored-by: ddorian <dorian.hoxha@gmail.com>
Co-authored-by: Sumukh-Phalgaonkar <61342752+Sumukh-Phalgaonkar@users.noreply.github.com>
myang2021 added a commit that referenced this issue Aug 8, 2024
Summary:
The DDL atomicity stress tests failed more often on the pg15 branch with an error like:

```
WARNING: ThreadSanitizer: data race (pid=180911)
  Write of size 8 at 0x7b2c000257b8 by thread T17 (mutexes: write M0):
    #0 profile_open_file prof_file.c (libkrb5.so.3+0xf45b3)
    #1 profile_init_flags <null> (libkrb5.so.3+0xfb056)
    #2 k5_os_init_context <null> (libkrb5.so.3+0xe5546)
    #3 krb5_init_context_profile <null> (libkrb5.so.3+0xabc90)
    #4 krb5_init_context <null> (libkrb5.so.3+0xabbd5)
    #5 krb5_gss_init_context init_sec_context.c (libgssapi_krb5.so.2+0x448da)
    #6 acquire_cred_from acquire_cred.c (libgssapi_krb5.so.2+0x39159)
    #7 krb5_gss_acquire_cred_from acquire_cred.c (libgssapi_krb5.so.2+0x39072)
    #8 gss_add_cred_from <null> (libgssapi_krb5.so.2+0x1fcd3)
    #9 gss_acquire_cred_from <null> (libgssapi_krb5.so.2+0x1f69d)
    #10 gss_acquire_cred <null> (libgssapi_krb5.so.2+0x1f431)
    #11 pg_GSS_have_cred_cache ${YB_SRC_ROOT}/src/postgres/src/interfaces/libpq/../../../../../../src/postgres/src/interfaces/libpq/fe-gssapi-common.c:68:10 (libpq.so.5+0x543fe)
    #12 PQconnectPoll ${YB_SRC_ROOT}/src/postgres/src/interfaces/libpq/../../../../../../src/postgres/src/interfaces/libpq/fe-connect.c:2909:22 (libpq.so.5+0x359ca)
    #13 connectDBComplete ${YB_SRC_ROOT}/src/postgres/src/interfaces/libpq/../../../../../../src/postgres/src/interfaces/libpq/fe-connect.c:2241:10 (libpq.so.5+0x30807)
    #14 PQconnectdb ${YB_SRC_ROOT}/src/postgres/src/interfaces/libpq/../../../../../../src/postgres/src/interfaces/libpq/fe-connect.c:719:10 (libpq.so.5+0x30af1)
    #15 yb::pgwrapper::PGConn::Connect(string const&, std::chrono::time_point<yb::CoarseMonoClock, std::chrono::duration<long long, std::ratio<1l, 1000000000l>>>, bool, string const&) ${YB_SRC_ROOT}/src/yb/yql/pgwrapper/libpq_utils.cc:348:24 (libpq_utils.so+0x13c5b)
    #16 yb::pgwrapper::PGConn::Connect(string const&, bool, string const&) ${YB_SRC_ROOT}/src/yb/yql/pgwrapper/libpq_utils.h:254:12 (libpq_utils.so+0x1a77e)
    #17 yb::pgwrapper::PGConnBuilder::Connect(bool) const ${YB_SRC_ROOT}/src/yb/yql/pgwrapper/libpq_utils.cc:743:10 (libpq_utils.so+0x1a77e)
    #18 yb::pgwrapper::LibPqTestBase::ConnectToDBAsUser(string const&, string const&, bool) ${YB_SRC_ROOT}/src/yb/yql/pgwrapper/libpq_test_base.cc:54:6 (libpg_wrapper_test_base.so+0x26f34)
    #19 yb::pgwrapper::LibPqTestBase::ConnectToDB(string const&, bool) ${YB_SRC_ROOT}/src/yb/yql/pgwrapper/libpq_test_base.cc:44:10 (libpg_wrapper_test_base.so+0x26b1e)
    #20 yb::pgwrapper::LibPqTestBase::Connect(bool) ${YB_SRC_ROOT}/src/yb/yql/pgwrapper/libpq_test_base.cc:40:10 (libpg_wrapper_test_base.so+0x26b1e)
    #21 yb::pgwrapper::PgDdlAtomicityStressTest::Connect() ${YB_SRC_ROOT}/src/yb/yql/pgwrapper/pg_ddl_atomicity_stress-test.cc:147:25 (pg_ddl_atomicity_stress-test+0x136d6c)
    #22 yb::pgwrapper::PgDdlAtomicityStressTest::TestDdl(std::vector<string, std::allocator<string>> const&, int) ${YB_SRC_ROOT}/src/yb/yql/pgwrapper/pg_ddl_atomicity_stress-test.cc:165:15 (pg_ddl_atomicity_stress-test+0x136df5)
    #23 yb::pgwrapper::PgDdlAtomicityStressTest_StressTest_Test::TestBody()::$_2::operator()() const ${YB_SRC_ROOT}/src/yb/yql/pgwrapper/pg_ddl_atomicity_stress-test.cc:316:5 (pg_ddl_atomicity_stress-test+0x13d2eb)
```

It appears that the function `yb::pgwrapper::LibPqTestBase::Connect` isn't
thread safe. I restructured the code to make the connections in a single thread
and then pass them to various concurrent threads for testing.
Jira: DB-2996
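
A minimal sketch of the restructuring pattern (illustrative only; `Connection`, `MakeConnection`, and `RunDdlWorkload` are hypothetical stand-ins for the real test helpers): all connections are established serially on the main thread, and each worker thread only uses the connection it was handed, so the non-thread-safe connect path is never entered concurrently.

```
#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical stand-in for the real connection object.
struct Connection { int id; };

// Hypothetical stand-in for the connect helper, which is not thread-safe.
Connection MakeConnection(int id) { return Connection{id}; }

// Hypothetical stand-in for the per-thread DDL workload.
void RunDdlWorkload(Connection& conn) {
  std::printf("worker running DDLs on connection %d\n", conn.id);
}

int main() {
  const int kNumThreads = 4;

  // 1. Establish every connection serially on the main thread.
  std::vector<Connection> conns;
  for (int i = 0; i < kNumThreads; ++i) {
    conns.push_back(MakeConnection(i));
  }

  // 2. Hand each ready connection to its own worker; the connect path is
  //    never invoked concurrently, so the reported data race cannot occur.
  std::vector<std::thread> workers;
  for (int i = 0; i < kNumThreads; ++i) {
    workers.emplace_back([&conns, i] { RunDdlWorkload(conns[i]); });
  }
  for (auto& t : workers) {
    t.join();
  }
  return 0;
}
```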

Test Plan:
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/0 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/1 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/2 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/3 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/4 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/5 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/6 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/7 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/8 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/9 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/10 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/11 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/12 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/13 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/14 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/15 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/16 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/17 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/18 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/19 --clang17

Verified that no more tsan errors.

Reviewers: fizaa

Reviewed By: fizaa

Subscribers: yql

Differential Revision: https://phorge.dev.yugabyte.com/D37111
myang2021 added a commit that referenced this issue Aug 8, 2024
… in tsan build

Summary:
The DDL atomicity stress tests failed more often on the pg15 branch with an error like:

```
WARNING: ThreadSanitizer: data race (pid=180911)
  Write of size 8 at 0x7b2c000257b8 by thread T17 (mutexes: write M0):
    #0 profile_open_file prof_file.c (libkrb5.so.3+0xf45b3)
    #1 profile_init_flags <null> (libkrb5.so.3+0xfb056)
    #2 k5_os_init_context <null> (libkrb5.so.3+0xe5546)
    #3 krb5_init_context_profile <null> (libkrb5.so.3+0xabc90)
    #4 krb5_init_context <null> (libkrb5.so.3+0xabbd5)
    #5 krb5_gss_init_context init_sec_context.c (libgssapi_krb5.so.2+0x448da)
    #6 acquire_cred_from acquire_cred.c (libgssapi_krb5.so.2+0x39159)
    #7 krb5_gss_acquire_cred_from acquire_cred.c (libgssapi_krb5.so.2+0x39072)
    #8 gss_add_cred_from <null> (libgssapi_krb5.so.2+0x1fcd3)
    #9 gss_acquire_cred_from <null> (libgssapi_krb5.so.2+0x1f69d)
    #10 gss_acquire_cred <null> (libgssapi_krb5.so.2+0x1f431)
    #11 pg_GSS_have_cred_cache ${YB_SRC_ROOT}/src/postgres/src/interfaces/libpq/../../../../../../src/postgres/src/interfaces/libpq/fe-gssapi-common.c:68:10 (libpq.so.5+0x543fe)
    #12 PQconnectPoll ${YB_SRC_ROOT}/src/postgres/src/interfaces/libpq/../../../../../../src/postgres/src/interfaces/libpq/fe-connect.c:2909:22 (libpq.so.5+0x359ca)
    #13 connectDBComplete ${YB_SRC_ROOT}/src/postgres/src/interfaces/libpq/../../../../../../src/postgres/src/interfaces/libpq/fe-connect.c:2241:10 (libpq.so.5+0x30807)
    #14 PQconnectdb ${YB_SRC_ROOT}/src/postgres/src/interfaces/libpq/../../../../../../src/postgres/src/interfaces/libpq/fe-connect.c:719:10 (libpq.so.5+0x30af1)
    #15 yb::pgwrapper::PGConn::Connect(string const&, std::chrono::time_point<yb::CoarseMonoClock, std::chrono::duration<long long, std::ratio<1l, 1000000000l>>>, bool, string const&) ${YB_SRC_ROOT}/src/yb/yql/pgwrapper/libpq_utils.cc:348:24 (libpq_utils.so+0x13c5b)
    #16 yb::pgwrapper::PGConn::Connect(string const&, bool, string const&) ${YB_SRC_ROOT}/src/yb/yql/pgwrapper/libpq_utils.h:254:12 (libpq_utils.so+0x1a77e)
    #17 yb::pgwrapper::PGConnBuilder::Connect(bool) const ${YB_SRC_ROOT}/src/yb/yql/pgwrapper/libpq_utils.cc:743:10 (libpq_utils.so+0x1a77e)
    #18 yb::pgwrapper::LibPqTestBase::ConnectToDBAsUser(string const&, string const&, bool) ${YB_SRC_ROOT}/src/yb/yql/pgwrapper/libpq_test_base.cc:54:6 (libpg_wrapper_test_base.so+0x26f34)
    #19 yb::pgwrapper::LibPqTestBase::ConnectToDB(string const&, bool) ${YB_SRC_ROOT}/src/yb/yql/pgwrapper/libpq_test_base.cc:44:10 (libpg_wrapper_test_base.so+0x26b1e)
    #20 yb::pgwrapper::LibPqTestBase::Connect(bool) ${YB_SRC_ROOT}/src/yb/yql/pgwrapper/libpq_test_base.cc:40:10 (libpg_wrapper_test_base.so+0x26b1e)
    #21 yb::pgwrapper::PgDdlAtomicityStressTest::Connect() ${YB_SRC_ROOT}/src/yb/yql/pgwrapper/pg_ddl_atomicity_stress-test.cc:147:25 (pg_ddl_atomicity_stress-test+0x136d6c)
    #22 yb::pgwrapper::PgDdlAtomicityStressTest::TestDdl(std::vector<string, std::allocator<string>> const&, int) ${YB_SRC_ROOT}/src/yb/yql/pgwrapper/pg_ddl_atomicity_stress-test.cc:165:15 (pg_ddl_atomicity_stress-test+0x136df5)
    #23 yb::pgwrapper::PgDdlAtomicityStressTest_StressTest_Test::TestBody()::$_2::operator()() const ${YB_SRC_ROOT}/src/yb/yql/pgwrapper/pg_ddl_atomicity_stress-test.cc:316:5 (pg_ddl_atomicity_stress-test+0x13d2eb)
```

It appears that the function `yb::pgwrapper::LibPqTestBase::Connect` isn't
thread safe. I restructured the code to make the connections in a single thread
and then pass them to various concurrent threads for testing.
Jira: DB-2996

Original commit: bd4874b / D37111

Test Plan:
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/0 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/1 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/2 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/3 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/4 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/5 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/6 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/7 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/8 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/9 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/10 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/11 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/12 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/13 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/14 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/15 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/16 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/17 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/18 --clang17
./yb_build.sh tsan --cxx-test pgwrapper_pg_ddl_atomicity_stress-test --gtest_filter PgDdlAtomicityStressTest/PgDdlAtomicityStressTest.StressTest/19 --clang17

Verified that no more tsan errors.

Reviewers: fizaa

Reviewed By: fizaa

Subscribers: yql

Tags: #jenkins-ready

Differential Revision: https://phorge.dev.yugabyte.com/D37167