[DocDB] TEST_docdb_log_write_batches fails with ysql_enable_packed_row=true #16665
Comments
Got the same message on sst_dump:

```
CREATE TABLE t (i int);
INSERT INTO t VALUES (1);
```

```
build/latest/bin/yb-admin flush_table ysql.yugabyte t
build/latest/bin/yb-admin list_tablets ysql.yugabyte t | tail -n +2 | awk '{print $1}' | while read -r line; do
  build/latest/bin/sst_dump --command=scan --output_format=decoded_regulardb --file=~/yugabyte-data/node-1/disk-1/yb-data/tserver/data/rocksdb/table-$(build/latest/bin/yb-admin list_tables include_table_id | grep yugabyte.t\  | awk '{print $2}')/tablet-$line/
done
```
This was alma8, fastdebug, gcc11, recent master commit 5a0324f.
Note: workarounds in the code base need to be removed when this is fixed; see src/yb/integration-tests/xcluster_ysql-test.cc
…ng is used, part I

Summary: Currently, TEST_docdb_log_write_batches fails with ysql_enable_packed_row=true. This is because we make no attempt to obtain the list of packing schemas that would be needed to decode the packed rows. This diff fixes that.

Which packing schemas will we need? The list depends on the table whose rows we are unpacking. This is not simply the primary table of the current tablet; it varies on a row-by-row basis. In particular, because co-location co-mingles rows from different tables in the same tablet, we actually need the ability to get the packing list for any table on the current tablet.

To handle this, instead of passing down a SchemaPackingStorage, I pass down a SchemaPackingProvider. When it comes time to do the logging, we create one of these for the current tablet and pass it down. At the appropriate lower level, we can extract the relevant table from the row key, fetch the right packing list for it, then do the correct decoding.

NOTE: Although this fixes the logging for --TEST_docdb_log_write_batches, there is other debug dump code that is still broken. I will fix that code in the next diff of this stack. In the meantime, I have left the old entry points marked as deprecated, with implementations that use a temporary adapter that I will remove in the next diff.

Jira: DB-6045

Test Plan: Run:
```
yb_build.sh --cxx-test xcluster_ysql-test --gtest_filter '*ReplicationWithPackedColumnsAndSchemaVersionMismatchColocated' --test-args '--gtest_also_run_disabled_tests --TEST_docdb_log_write_batches=true --TEST_dcheck_for_missing_schema_packing=false'
```
and look at the output, particularly for any occurrences of "schema_packing", which indicate a failure to find a packing schema. (That test hits packed rows with cotable, colocation, as well as external intents.)

Discovered the following is also necessary to test a subtle bug fix:
```
yb_build.sh --cxx-test doc_operation-test
```
The following test verifies the output of external intents is good:
```
ybd --cxx-test docdb_docdb-test --gtest_filter 'DocDBTests/DocDBTestWrapper.CompactionWithTransactions/*'
```

Reviewers: sergei, xCluster, qhu
Reviewed By: sergei, qhu
Subscribers: pjain, qhu, sergei, ybase, bogdan
Differential Revision: https://phorge.dev.yugabyte.com/D24917
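The core change described above — replacing a single per-table SchemaPackingStorage with a provider that can answer for any table on the tablet — can be sketched as follows. This is an illustrative model only; the type and method names below are simplified stand-ins, not the actual yb classes:

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// Illustrative stand-ins; the real yb types carry much more information.
using TableId = std::string;
using SchemaVersion = int;
struct SchemaPacking {
  std::vector<std::string> columns;  // column names in packed order
};

// Old shape: one storage holds the packings for a single table, keyed by
// schema version. This cannot decode a co-located tablet, where rows from
// several tables share one RocksDB.
using SchemaPackingStorage = std::map<SchemaVersion, SchemaPacking>;

// New shape: a provider that can look up packings for *any* table on the
// tablet, because the owning table is only known once the row key is decoded.
class SchemaPackingProvider {
 public:
  void Register(const TableId& table, SchemaVersion version, SchemaPacking packing) {
    storage_[table][version] = std::move(packing);
  }

  // Returns nullptr when no packing is known, which corresponds to the
  // "Schema packing not found" failure reported in this issue.
  const SchemaPacking* Find(const TableId& table, SchemaVersion version) const {
    auto table_it = storage_.find(table);
    if (table_it == storage_.end()) return nullptr;
    auto version_it = table_it->second.find(version);
    return version_it == table_it->second.end() ? nullptr : &version_it->second;
  }

 private:
  std::map<TableId, SchemaPackingStorage> storage_;
};
```

The point of the indirection is that the logging code can resolve (table, version) pairs row by row instead of being handed one table's packings up front.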
https://phorge.dev.yugabyte.com/D26682 will remove the workaround among other things
Summary: D25696 introduced a regression where {cql,pgsql}_operation.cc do not have packing information for printing out some verbose information about collisions. The relevant code for CQL is:
```
// Check if a duplicate value is inserted into a unique index.
Result<bool> QLWriteOperation::HasDuplicateUniqueIndexValue(const DocOperationApplyData& data) {
  VLOG(3) << "Looking for collisions in\n" << DocDBDebugDumpToStr(
      data.doc_write_batch->doc_db(), nullptr /*schema_packing_provider*/);
```
The other code is similar, dumping the entire RocksDB contents at verbose level 3 or higher.

This diff fixes this by providing packing information for this code in almost every case. We have to provide the operation objects (e.g., QLWriteOperation) with a SchemaPackingProvider they can use at their leisure to dump RocksDBs.

Because these objects can be around for a while and outlive the underlying Tablet, we cannot just provide them a raw pointer to a SchemaPackingProvider provided by the Tablet, because that raw pointer can become invalid before the operation object uses it. UPDATE: @spolitov convinced me that a raw pointer here will work, because we only actually try to use the provider while the tablet still exists.

Making a copy of the schema information won't work either, because a concurrent operation might well add data using a different packing schema version to the RocksDB after the original operation is created. Note that we can't use the doc read context either, because it only has packing information for the table being written, and we need to dump the entire RocksDB, including other colocated tables.

The only good solution as far as I can tell is to create a new implementation of SchemaPackingProvider that wraps a weak pointer to a Tablet and pass that to the operation objects. This weak pointer is available in the main pathway because WriteQuery (indirectly) keeps a weak pointer to the underlying Tablet being written to.

There are three other pathways:
* bulk loader: a little bit of plumbing work suffices to get a shared_ptr to the Tablet down to the operation object
* restore catalog: similar, but a lot more plumbing work was required
* doc_operation-test.cc: I didn't bother fixing this (it currently has no packing information) because the tests don't actually call this code, even with high verbose levels

UPDATE: switched from using the new implementation of SchemaPackingProvider to just passing raw pointers, as it is simpler.

UPDATE: removed the workaround for #16665 (the previous diff in the stack) as it is no longer needed.

Test Plan: Commands to test the [D]VLOGing:

First command, for CQL:
```
yb_build.sh --cxx-test cassandra_cpp_driver-test --gtest_filter '*.TestCreateUniqueIndexPasses' --test-args '-vmodule=cql_operation=4' >& /tmp/generic.mdl.log
```
used to look like:
```
[ts-1] I0706 13:30:40.722496 13991 cql_operation.cc:616] Looking for collisions in
[ts-1] Not found (yb/docdb/kv_debug.cc:113): No packing information available: . Key: SubDocKey(DocKey(0x675b, ["two"], []), [HT{ days: 19544 time: 13:30:38.905960 }])
[ts-1] Not found (yb/docdb/kv_debug.cc:113): No packing information available: . Key: SubDocKey(DocKey(0xaa65, ["one"], []), [HT{ days: 19544 time: 13:30:38.905960 }])
[ts-1] I0706 13:30:40.722715 13991 cql_operation.cc:688] Found collision while checking at { read: { days: 19544 time: 13:30:40.714147 } local_limit: { days: 19544 time: 13:30:41.214147 } global_limit: { days: 19544 time: 13:30:41.214147 } in_txn_limit: <max> serial_no: 0 } ...
[ts-1] I0706 13:30:40.722759 13991 cql_operation.cc:692] DocDB is now:
[ts-1] Not found (yb/docdb/kv_debug.cc:113): No packing information available: . Key: SubDocKey(DocKey(0x675b, ["two"], []), [HT{ days: 19544 time: 13:30:38.905960 }])
[ts-1] Not found (yb/docdb/kv_debug.cc:113): No packing information available: . Key: SubDocKey(DocKey(0xaa65, ["one"], []), [HT{ days: 19544 time: 13:30:38.905960 }])
```
now looks like (not necessarily the same output point):
```
[ts-3] I0706 13:33:56.859169 20977 cql_operation.cc:620] Looking for collisions in
[ts-3] SubDocKey(DocKey(0xe1fa, ["four"], []), [HT{ days: 19544 time: 13:33:56.832847 }]) -> { 11: 4 }
[ts-3] SubDocKey(DocKey(0xf070, ["three"], []), [HT{ days: 19544 time: 13:33:55.024586 }]) -> { 11: 3 }
[ts-3] I0706 13:33:56.859690 20977 cql_operation.cc:689] Found collision while checking at { read: { days: 19544 time: 13:33:56.855951 } local_limit: { days: 19544 time: 13:33:57.355951 } global_limit: { days: 19544 time: 13:33:57.355951 } in_txn_limit: <max> serial_no: 0 } ...
[ts-3] I0706 13:33:56.859746 20977 cql_operation.cc:692] DocDB is now:
[ts-3] SubDocKey(DocKey(0xe1fa, ["four"], []), [HT{ days: 19544 time: 13:33:56.832847 }]) -> { 11: 4 }
[ts-3] SubDocKey(DocKey(0xf070, ["three"], []), [HT{ days: 19544 time: 13:33:55.024586 }]) -> { 11: 3 }
```

Second command, for PGSQL:
```
yb_build.sh --cxx-test pg_index_backfill-test --gtest_filter '*.Unique' --test-args '-vmodule=pgsql_operation=4' >& /tmp/generic.mdl.log
```
used to look like:
```
[ts-1] I0706 13:32:39.990581 16140 pgsql_operation.cc:559] Looking for collisions in
[ts-1] Not found (yb/docdb/kv_debug.cc:113): No packing information available: . Key: SubDocKey(DocKey([], [11, null]), [HT{ days: 19544 time: 13:32:39.281280 }])
[ts-1] Not found (yb/docdb/kv_debug.cc:113): No packing information available: . Key: SubDocKey(DocKey([], [12, null]), [HT{ days: 19544 time: 13:32:39.281280 }])
[ts-1] Not found (yb/docdb/kv_debug.cc:113): No packing information available: . Key: SubDocKey(DocKey([], [13, null]), [HT{ days: 19544 time: 13:32:39.281280 w: 1 }])
[ts-1] I0706 13:32:39.991027 16140 pgsql_operation.cc:653] Found collision while checking at { read: { days: 19544 time: 13:32:39.281280 } local_limit: { days: 19544 time: 13:32:39.281280 } global_limit: { days: 19544 time: 13:32:39.281280 } in_txn_limit: { days: 19544 time: 13:32:39.989347 } serial_no: 0 } ...
[ts-1] I0706 13:32:39.991113 16140 pgsql_operation.cc:657] DocDB is now:
[ts-1] Not found (yb/docdb/kv_debug.cc:113): No packing information available: . Key: SubDocKey(DocKey([], [11, null]), [HT{ days: 19544 time: 13:32:39.281280 }])
[ts-1] Not found (yb/docdb/kv_debug.cc:113): No packing information available: . Key: SubDocKey(DocKey([], [12, null]), [HT{ days: 19544 time: 13:32:39.281280 }])
[ts-1] Not found (yb/docdb/kv_debug.cc:113): No packing information available: . Key: SubDocKey(DocKey([], [13, null]), [HT{ days: 19544 time: 13:32:39.281280 w: 1 }])
```
now looks like (not necessarily the same output point):
```
[ts-3] I0706 13:35:06.639981 22437 pgsql_operation.cc:559] Looking for collisions in
[ts-3] SubDocKey(DocKey([], [11, null]), [HT{ days: 19544 time: 13:35:05.959051 }]) -> { 12: "G\x18\\SV\xaf\xd2\xc0o^M\xd2\xaaa>H\x8b}\"\xb2\x00\x00!!" }
[ts-3] SubDocKey(DocKey([], [12, null]), [HT{ days: 19544 time: 13:35:05.959051 w: 1 }]) -> { 12: "G\x1bkS\xc9\x1e?\xa24\xa6G\xf2\x9a\xa8r\xfc\x9e\xc6\x14\xbf\x00\x00!!" }
[ts-3] SubDocKey(DocKey([], [13, null]), [HT{ days: 19544 time: 13:35:05.959051 }]) -> { 12: "G#\xe2S\x8d\x1d6\x10#\x17C\xc7\xa6\xf6\xad \xafqY-\x00\x00!!" }
[m-1] I0706 13:35:06.639909 23459 backfill_index.cc:1327] Done backfilling the tablet 0x0000369a7ccf6800 -> 546e6d3545d942278d22ccf2aa9ff378 (table ttt [id=000033f5000030008000000000004000])
[ts-3] I0706 13:35:06.640645 22437 pgsql_operation.cc:653] Found collision while checking at { read: { days: 19544 time: 13:35:05.959051 } local_limit: { days: 19544 time: 13:35:05.959051 } global_limit: { days: 19544 time: 13:35:05.959051 } in_txn_limit: { days: 19544 time: 13:35:06.638627 } serial_no: 0 } ...
[ts-3] I0706 13:35:06.640774 22437 pgsql_operation.cc:657] DocDB is now:
[ts-3] SubDocKey(DocKey([], [11, null]), [HT{ days: 19544 time: 13:35:05.959051 }]) -> { 12: "G\x18\\SV\xaf\xd2\xc0o^M\xd2\xaaa>H\x8b}\"\xb2\x00\x00!!" }
[ts-3] SubDocKey(DocKey([], [12, null]), [HT{ days: 19544 time: 13:35:05.959051 w: 1 }]) -> { 12: "G\x1bkS\xc9\x1e?\xa24\xa6G\xf2\x9a\xa8r\xfc\x9e\xc6\x14\xbf\x00\x00!!" }
[ts-3] SubDocKey(DocKey([], [13, null]), [HT{ days: 19544 time: 13:35:05.959051 }]) -> { 12: "G#\xe2S\x8d\x1d6\x10#\x17C\xc7\xa6\xf6\xad \xafqY-\x00\x00!!" }
```

Reviewers: sergei, xCluster
Reviewed By: sergei
Subscribers: ybase, qhu, bogdan
Differential Revision: https://phorge.dev.yugabyte.com/D26682
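The weak-pointer design considered above (later simplified to raw pointers once it was clear the provider is only used while the tablet exists) is the standard C++ pattern for a consumer that may outlive the object it reads from. A minimal self-contained sketch, with all names invented for illustration — not the actual yb classes:

```cpp
#include <memory>
#include <optional>
#include <utility>

// Hypothetical stand-ins for the real yb types.
struct SchemaPacking { int version; };

struct Tablet {
  SchemaPacking packing{7};
};

// A provider wrapping a weak_ptr to the tablet: operation objects can hold
// it for an arbitrarily long time, and it degrades gracefully (returning
// nullopt) instead of dereferencing a dangling pointer if the tablet has
// already been destroyed.
class WeakTabletPackingProvider {
 public:
  explicit WeakTabletPackingProvider(std::weak_ptr<Tablet> tablet)
      : tablet_(std::move(tablet)) {}

  // lock() atomically promotes the weak pointer to a shared_ptr, keeping
  // the tablet alive for the duration of this call only.
  std::optional<SchemaPacking> GetPacking() const {
    if (auto strong = tablet_.lock()) {
      return strong->packing;
    }
    return std::nullopt;
  }

 private:
  std::weak_ptr<Tablet> tablet_;
};
```

The trade-off the comment thread describes is exactly this: a copy would go stale as concurrent writers add new packing versions, a raw pointer can dangle, while a weak pointer is safe at the cost of a lock() on every use.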
Reactivating for 2.18 and 2.16 backports, if possible.
…ches when packing is used, part I

Summary: Original commit: 2ba0093 / D24917

The only interesting part of the backporting merge is going past the refactoring of some of the docdb folders/namespaces into dockv.

Currently, TEST_docdb_log_write_batches fails with ysql_enable_packed_row=true. This is because we make no attempt to obtain the list of packing schemas that would be needed to decode the packed rows. This diff fixes that.

Which packing schemas will we need? The list depends on the table whose rows we are unpacking. This is not simply the primary table of the current tablet; it varies on a row-by-row basis. In particular, because co-location co-mingles rows from different tables in the same tablet, we actually need the ability to get the packing list for any table on the current tablet.

To handle this, instead of passing down a SchemaPackingStorage, I pass down a SchemaPackingProvider. When it comes time to do the logging, we create one of these for the current tablet and pass it down. At the appropriate lower level, we can extract the relevant table from the row key, fetch the right packing list for it, then do the correct decoding.

NOTE: Although this fixes the logging for --TEST_docdb_log_write_batches, there is other debug dump code that is still broken. I will fix that code in the next diff of this stack. In the meantime, I have left the old entry points marked as deprecated, with implementations that use a temporary adapter that I will remove in the next diff.

Jira: DB-6045

Test Plan: Run:
```
yb_build.sh --cxx-test xcluster_ysql-test --gtest_filter '*ReplicationWithPackedColumnsAndSchemaVersionMismatchColocated' --test-args '--gtest_also_run_disabled_tests --TEST_docdb_log_write_batches=true --TEST_dcheck_for_missing_schema_packing=false'
```
and look at the output, particularly for any occurrences of "schema_packing", which indicate a failure to find a packing schema. (That test hits packed rows with cotable, colocation, as well as external intents.)

Discovered the following is also necessary to test a subtle bug fix:
```
yb_build.sh --cxx-test doc_operation-test
```
The following test verifies the output of external intents is good:
```
ybd --cxx-test docdb_docdb-test --gtest_filter 'DocDBTests/DocDBTestWrapper.CompactionWithTransactions/*'
```

Reviewers: sergei, qhu
Reviewed By: qhu
Subscribers: bogdan, ybase, sergei, qhu, pjain
Differential Revision: https://phorge.dev.yugabyte.com/D27128
Backported to 2.18.2.
…ches when packing is used, part II

Summary: Original commit: 4943644 / D25696

Only a few minor real conflicts:
* DocOperationApplyData read_time is a field instead of a function
* dockv hasn't been pulled out of docdb

The previous diff in this stack, https://phorge.dev.yugabyte.com/D24917, fixed the underlying debugging code but did not ensure we got a good SchemaPackingProvider passed in from all the various call sites. This diff deals with that part. In particular:

* now correctly handling packed rows from any table kind:
  * the dump functions (e.g., TEST_DocDBDumpToLog) in tablet.cc now work correctly because they take the SchemaPackingProvider provided by the tablet
  * src/yb/tools/sst_dump.cc
    * I have added code to retrieve the needed information from the superblock, so this now works in all cases
  * src/yb/tablet/tablet-split-test.cc
  * src/yb/docdb/docdb_util.cc
    * now take the SchemaPackingProvider they already had
* no change (have no packing information, as before):
  * src/yb/consensus/log-dump.cc
    * a future diff could take an extra argument for the superblock path if desired
  * src/yb/tools/data-patcher.cc
    * @bogdan says "I think that was some emergency tooling for #internal-hudson-trading, when their clock went 87 years into the future and we ended up persisting records with that hybrid time and had to patch up their data, to un-mess up their cluster (edited)"
  * src/yb/docdb/cql_operation.cc, src/yb/docdb/pgsql_operation.cc
    * as of this diff these have no packing information
    * the debug output was produced only at VLOG 2/3, to provide information about inserting duplicate values into a unique index
    * I didn't think it was worth the extra plumbing/adding work to the non-VLOG path to fix this
    * THIS IS A STRICT REGRESSION for the CQL case, which did have packing information for this case
    * the YSQL case was already missing packing information

The deprecated code from the previous diff has been removed.

Fixes #17661
Fixes #16665
Jira: DB-6045

Test Plan: I had to construct a fairly elaborate recipe to test the SST dump functionality:
```
alias ybadmin="./build/latest/bin/yb-admin -master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100"
bin/yb-ctl destroy
bin/yb-ctl --rf 3 start --master_flags "ysql_enable_packed_row=true,ysql_enable_packed_row_for_colocated_table=true,ycql_enable_packed_row=true" --tserver_flags "ysql_enable_packed_row=true,ysql_enable_packed_row_for_colocated_table=true,ycql_enable_packed_row=true" --timeout-processes-running-sec 600
bin/ysqlsh <<EOF
CREATE DATABASE mdl_database WITH COLOCATION = true;
\c mdl_database
CREATE TABLE test_mdl (key int, x int, y int, PRIMARY KEY (key));
INSERT INTO test_mdl SELECT x, x*2, x+3 FROM GENERATE_SERIES(1,100) AS x;
CREATE TABLE test_mdl2 (key int, s text, z int, q int, PRIMARY KEY (key));
INSERT INTO test_mdl2 SELECT x, 'foobar', x+3, -x FROM GENERATE_SERIES(1,100) AS x;
CREATE TABLE test_mdl3 (key int, s text, z int, q int, PRIMARY KEY (key)) WITH (colocation = 0);
INSERT INTO test_mdl3 SELECT x, 'foobar', x+3, -x FROM GENERATE_SERIES(1,100) AS x;
EOF
ybadmin list_tables include_table_id | grep test_mdl | sed 's/.* //' | xargs -L1 ./build/latest/bin/yb-admin -master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100 flush_table_by_id
ybadmin flush_sys_catalog
for i in `find /home/mdbridge/yugabyte-data/node-1/disk-1/yb-data/tserver/data/rocksdb/ -name 'tablet-*' -print | grep -E '[0-9a-f]$'`; do
  j=`echo $i | sed 's|.*/tablet-\(.*\)|\1|'`
  echo
  echo "Tablet: $j"
  build/latest/bin/sst_dump --command=scan --output_format=decoded_regulardb --file=$i --formatter_tablet_metadata=/home/mdbridge/yugabyte-data/node-1/disk-1/yb-data/tserver/tablet-meta/$j
done >& /tmp/generic.mdl.log
for i in `find /home/mdbridge/yugabyte-data/node-1/disk-1/yb-data/master/data/rocksdb/ -name 'tablet-*' -print | grep -E '[0-9a-f]$'`; do
  j=`echo $i | sed 's|.*/tablet-\(.*\)|\1|'`
  echo
  echo "Tablet: $j"
  build/latest/bin/sst_dump --command=scan --output_format=decoded_regulardb --file=$i --formatter_tablet_metadata=/home/mdbridge/yugabyte-data/node-1/disk-1/yb-data/master/tablet-meta/$j
done >& /tmp/generic.mdl.log2
```

From the TServer tablets we see that we have correct packing for normal tables and colocated ones:
```
I0524 16:32:28.058239 12449 kv_formatter.cc:35] Found info for table ID 00004000000030008000000000004001 (namespace , table_type PGSQL_TABLE_TYPE, name test_mdl, cotable_id 01400000-0000-0080-0030-000000400000, colocation_id 647238229) in superblock
I0524 16:32:28.058396 12449 kv_formatter.cc:35] Found info for table ID 00004000000030008000000000004007 (namespace , table_type PGSQL_TABLE_TYPE, name test_mdl2, cotable_id 07400000-0000-0080-0030-000000400000, colocation_id 4153788404) in superblock
I0524 16:32:28.058399 12449 kv_formatter.cc:35] Found info for table ID 00004000000030008000000000004004.colocation.parent.uuid (namespace mdl_database, table_type PGSQL_TABLE_TYPE, name 00004000000030008000000000004004.colocation.parent.tablename, cotable_id 00000000-0000-0000-0000-000000000000, colocation_id 0) in superblock
SubDocKey(DocKey(ColocationId=647238229, [], [3]), [HT{ physical: 1684971074684393 w: 2 }]) -> { 1: 6 2: 6 }
SubDocKey(DocKey(ColocationId=647238229, [], [4]), [HT{ physical: 1684971074684393 w: 3 }]) -> { 1: 8 2: 7 }
SubDocKey(DocKey(ColocationId=647238229, [], [5]), [HT{ physical: 1684971074684393 w: 4 }]) -> { 1: 10 2: 8 }
SubDocKey(DocKey(ColocationId=4153788404, [], [2]), [HT{ physical: 1684971074744496 w: 1 }]) -> { 1: "foobar" 2: 5 3: -2 }
SubDocKey(DocKey(ColocationId=4153788404, [], [3]), [HT{ physical: 1684971074744496 w: 2 }]) -> { 1: "foobar" 2: 6 3: -3 }
SubDocKey(DocKey(ColocationId=4153788404, [], [4]), [HT{ physical: 1684971074744496 w: 3 }]) -> { 1: "foobar" 2: 7 3: -4 }
SubDocKey(DocKey(ColocationId=4153788404, [], [5]), [HT{ physical: 1684971074744496 w: 4 }]) -> { 1: "foobar" 2: 8 3: -5 }
----
I0524 16:32:28.091348 12453 kv_formatter.cc:35] Found info for table ID 0000400000003000800000000000400c (namespace mdl_database, table_type PGSQL_TABLE_TYPE, name test_mdl3, cotable_id 00000000-0000-0000-0000-000000000000, colocation_id 0) in superblock
Sst file format: block-based
SubDocKey(DocKey(0xacf5, [100], []), [HT{ physical: 1684971075001899 w: 16 }]) -> { 1: "foobar" 2: 103 3: -100 }
SubDocKey(DocKey(0xae00, [37], []), [HT{ physical: 1684971075001899 w: 5 }]) -> { 1: "foobar" 2: 40 3: -37 }
```

From the master tablet we see that we have correct packing for the cotable case:
```
I0524 16:31:43.201089 12411 kv_formatter.cc:35] Found info for table ID 00004000000030008000000000000e10 (namespace , table_type PGSQL_TABLE_TYPE, name pg_ts_dict, cotable_id 100e0000-0000-0080-0030-000000400000, colocation_id 0) in superblock
I0524 16:31:43.200613 12411 kv_formatter.cc:35] Found info for table ID 00004000000030008000000000000a2a (namespace , table_type PGSQL_TABLE_TYPE, name pg_amop, cotable_id 2a0a0000-0000-0080-0030-000000400000, colocation_id 0) in superblock
SubDocKey(DocKey(CoTableId=100e0000-0000-0080-0030-000000400000, [], [13034]), [HT{ physical: 1684971073824193 w: 1116 }]) -> { 1: "danish_stem" 2: 11 3: 10 4: 13033 5: "language = 'danish', stopwords = 'danish'" }
SubDocKey(DocKey(CoTableId=2a0a0000-0000-0080-0030-000000400000, [], [10812]), [HT{ physical: 1684971073708320 w: 895 }]) -> { 1: 4059 2: 1082 3: 1184 4: 1 5: 115 6: 2358 7: 3580 8: 0 }
```

------

Test case for "[DocDB] IntentsDB debug page not working with packed rows" #17661:
```
yb_build.sh debug
alias ybadmin="./build/latest/bin/yb-admin -master_addresses 127.0.0.1:7100,127.0.0.2:7100,127.0.0.3:7100"
bin/yb-ctl --rf 3 start --master_flags "ysql_enable_packed_row=true,ysql_enable_packed_row_for_colocated_table=true,ycql_enable_packed_row=true" --tserver_flags "ysql_enable_packed_row=true,ysql_enable_packed_row_for_colocated_table=true,ycql_enable_packed_row=true,enable_intentsdb_page=true" --timeout-processes-running-sec 600
bin/ysqlsh
\c yugabyte
CREATE TABLE test_mdl (key int, x int, y int, PRIMARY KEY (key));
START TRANSACTION;
INSERT INTO test_mdl SELECT x, x*2, x+3 FROM GENERATE_SERIES(1,10) AS x;
```
Verify that intents can be viewed at localhost:9000/intentsdb and that they have packing information. For example:
```
SubDocKey(DocKey(0xae50, [3211], []), []) [kStrongRead, kStrongWrite] HT{ days: 19515 time: 10:39:33.009282 w: 31 } -> TransactionId(9b94a398-fe66-4c83-9033-59c6e6092046) WriteId(545) { 11: 6422 12: 3214 }
```

Reviewers: xCluster, sergei, qhu
Reviewed By: qhu
Subscribers: yql, ybase, bogdan, qhu
Differential Revision: https://phorge.dev.yugabyte.com/D27254
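The kv_formatter output above shows the mechanism that makes sst_dump work: the superblock maps each colocation id or cotable id to table metadata, and each row key carries one of those prefixes, so the formatter can pick the right table per row. A toy sketch of that dispatch (all names invented for illustration; the real superblock code is far richer):

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <utility>

// Illustrative only: index table metadata the way the kv_formatter log lines
// above suggest, keyed by ColocationId=... or CoTableId=... key prefixes.
struct TableInfo {
  std::string name;
};

class SuperblockIndex {
 public:
  void AddByColocation(uint32_t colocation_id, TableInfo info) {
    by_colocation_[colocation_id] = std::move(info);
  }
  void AddByCotable(const std::string& cotable_uuid, TableInfo info) {
    by_cotable_[cotable_uuid] = std::move(info);
  }

  // A row key in a colocated user tablet starts with a colocation id; a row
  // in the master's sys catalog starts with a cotable UUID instead.
  const TableInfo* FindByColocation(uint32_t colocation_id) const {
    auto it = by_colocation_.find(colocation_id);
    return it == by_colocation_.end() ? nullptr : &it->second;
  }
  const TableInfo* FindByCotable(const std::string& cotable_uuid) const {
    auto it = by_cotable_.find(cotable_uuid);
    return it == by_cotable_.end() ? nullptr : &it->second;
  }

 private:
  std::map<uint32_t, TableInfo> by_colocation_;
  std::map<std::string, TableInfo> by_cotable_;
};
```

With such an index in hand, the formatter can resolve a row to its table and from there to the table's packing schemas, which is why --formatter_tablet_metadata must point at the tablet's superblock.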
@mdbridge, can we backport this to the 2.16 branch as well?
I have been trying to do that. I ran into some issues that require more debugging, then got pulled into the high-priority Amex investigation. I hope to get back to this next week.
I have resumed working on this; the backport for 2.16 is up for review.
Backport to 2.16 is risky due to additional dependencies. We decided to use the sst_dump tool from the 2.18 branch if needed. Thanks @mdbridge for driving the discussions around this.
For the record, a partially completed backport for 2.16 was https://phorge.dev.yugabyte.com/D28519
Jira Link: DB-6045
Description
When running with `--tserver_flags TEST_docdb_log_write_batches=true`, it fails with:
```
schema_packing.cc:200] Check failed: _s.ok() Bad status: Not found (yb/docdb/schema_packing.cc:197): Schema packing not found: 0, available_versions: []
```
This doesn't happen if ysql_enable_packed_row=false.
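The check failure above is a failed lookup against an empty set of packings: with no packing information plumbed through, there are no available versions, so version 0 of the packed row cannot be resolved. A toy model of the message shape (not the real schema_packing.cc code; the function and types are invented):

```cpp
#include <map>
#include <sstream>
#include <string>

// Toy model: produce the same kind of error string the real lookup logs
// when asked for a schema version it has no packing for.
std::string LookupPacking(const std::map<int, std::string>& packings, int version) {
  if (auto it = packings.find(version); it != packings.end()) {
    return "ok: " + it->second;
  }
  std::ostringstream msg;
  msg << "Schema packing not found: " << version << ", available_versions: [";
  bool first = true;
  for (const auto& [v, packing] : packings) {
    (void)packing;  // only the version numbers appear in the message
    if (!first) msg << ", ";
    msg << v;
    first = false;
  }
  msg << "]";
  return msg.str();
}
```

An empty map yields the exact "available_versions: []" seen in the log, which is the signature of the missing-provider case rather than a genuine version mismatch.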