forked from percona/percona-server
Removed CURL depends on RTMP checks from the CMake files #1
Open
percona-ysorokin wants to merge 26 commits into dutow:mysql8018merge2 from percona-ysorokin:ps-8.0.18-merge-rtmp_fix
Conversation
Also:
* Reverting our patch for check_basedir from the 8.0.17 merge, as upstream fixed it differently.
* Libdbug was removed; it is part of mysys now.
* Cleaning up trivial failures in the main test suite.
* Reverting PS-3410, as upstream implemented it differently.
* Removed encryption assertion: it could be false, as tested by table_encrypt_4. The testcase wasn't aware of compression dictionaries.
* Compilation fixes.
* Rerecorded rocksdb replication results.
* Rerecorded tokudb replication results, as the original test has been shortened (extra/rpl_tests/rpl_mulit_update2.test).
* Rerecorded result files for tokudb_rpl.rpl_tokudb_row_crash_safe and tokudb_rpl.rpl_tokudb_stm_mixed_crash_safe.
* Rerecorded result files for tokudb.ext_key_1_innodb, tokudb.ext_key_1_tokudb, tokudb.ext_key_2_innodb and tokudb.ext_key_2_tokudb (hash_join, default 'on', introduced in 8.0.18).
* Fixed tokudb MTR tests (missing SELECT ... ORDER BY).
* Rerecorded result for the tokudb.type_time MTR test (cast wrappers added in item_cmpfunc.cc).
* Fixed the tokudb.type_temporal_fractional MTR test (zero values are allowed for the TIME type; see item_timefunc.cc date_should_be_null()).
* Rerecorded tokudb.type_newdecimal.result; caused by an upstream change, the fix of Bug#29463760.
… fixed missing signal in test after changes done in Group_action_coordinator::signal_action_terminated() when setting debug sync.
…upstream because of PS-5473 fix.
2) Suppressed warning (needed by changes from PS-3829)
…ling of ON/OFF values for innodb_track_changed_pages variable in debug mode
… end of the list, because they do not fit into spare area provided by upstream anymore.
This test was never executed on 8.0 before.
…odb_row_log_encryption Deadlock between debug sync used in test and exclusive lock added in 8.0.18
The examined_rows_count calculation has changed because of upstream change 4f4466a. Now SELECT_LEX_UNIT::ExecuteIteratorQuery() in sql_union.cc executes FakeSingleRowIterator::Read(), which increments the examined rows counter.
It was failing because, on platforms with TLSv1.3, there are implicitly configured ciphers that were used instead of the tested cipher string. More info: https://jira.percona.com/browse/PS-5996
We rely on page cleaners to do all the flush list flushing. Upstream can do flushing from foreground (user) threads. To request all page cleaners to flush all pages, we use buf_flush_request_force() with LSN_MAX. With the new changes in 044f509, LSN_MAX usage is disallowed. The LSN_MAX usage is valid for us: the intention is to flush all pages from all flush lists, and we don't care about fine-grained flushing across buffer pools.
Assertion is_server_active failed. This bug could have occurred on regular 8.0 after crash recovery as well. After crash recovery, or due to an upgrade, there is activity just before the log file resize. is_server_active considers old server activity as well. If a resize happens just after that activity, the assertion fails. Fix: we force activity to zero for the duration of the log file resize.
dutow force-pushed the mysql8018merge2 branch 3 times, most recently from 4a51fa1 to fd0b440 on November 29, 2019 07:47
dutow force-pushed the mysql8018merge2 branch 2 times, most recently from a7e0029 to 8aa24c2 on December 2, 2019 18:43
dutow pushed a commit that referenced this pull request on Aug 3, 2020
…TIONS

Description: We have a few scenarios that don't behave as expected. All the scenarios involve the options passed to create collection.

1. It is our belief that passing { validation: {} } to create collection should work and leave you with default settings. Currently it throws an invalid-number-of-arguments error.
2. Passing { validation: { level: "off" }} to create collection should also work, since that is the default state. Currently it throws an invalid-number-of-arguments error.
3. Passing { validation: { foo: "bar" }} should throw an error about the unknown keyword 'foo'. Right now it throws an invalid-number-of-arguments error.

It seems this all stems from the requirement that the schema property be included with create collection. #1 and percona#2 are a matter of opinion, and reasonable people can agree they are not bugs.

RB:23277
Reviewed-by: Lukasz Kotula <lukasz.kotula@oracle.com>
Change-Id: I3528dc1dbcc8da97e74341908f999e5d24706396
dutow pushed a commit that referenced this pull request on Aug 3, 2020
Upstream commit ID : fb-mysql-5.6.35/77032004ad23d21a4c386f8136ecfbb071ea42d6
PS-6865 : Merge fb-prod201903

Summary:
Currently, during primary key value encoding, the ttl value can come from one of these 3 cases:
1. ttl column in the primary key
2. non-ttl column
   a. old record (update case)
   b. current timestamp
3. ttl column in a non-key field

Workflow #1: first, in Rdb_key_def::pack_record(), find and store pk_offset; then, during value encoding, try to parse the key slice to fetch the ttl value by using pk_offset.
Workflow percona#3: fetch the ttl value from the ttl column.

The change is to merge #1 and percona#3 by always fetching the TTL value from the ttl column, no matter whether the ttl column is in the primary key or not. Of course, remove pk_offset, since it isn't used anymore. BTW, for secondary keys, the ttl value always comes from m_ttl_bytes, which is stored by primary value encoding.

Reviewed By: yizhang82
Differential Revision: D14662716
fbshipit-source-id: 6b4e5f044fd
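A simplified, hypothetical illustration of the unified workflow described above: the TTL value is always read from the ttl column itself, whether or not that column is part of the primary key, so the pk_offset-based key-slice parsing path goes away. The types and names below are invented for the sketch and do not match the real MyRocks code.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct Column {
  bool is_ttl_column;
  bool part_of_primary_key;
  uint64_t value;
};

// Unified path: no special case for a TTL column inside the primary key.
uint64_t fetch_ttl(const std::vector<Column> &record) {
  for (const Column &col : record)
    if (col.is_ttl_column) return col.value;
  return 0;  // no TTL column: caller would fall back to the current timestamp
}

int main() {
  std::vector<Column> pk_ttl = {{true, true, 1700000000}, {false, false, 42}};
  std::vector<Column> nonkey_ttl = {{false, true, 7}, {true, false, 1700000001}};
  assert(fetch_ttl(pk_ttl) == 1700000000);      // ttl column in the primary key
  assert(fetch_ttl(nonkey_ttl) == 1700000001);  // ttl column in a non-key field
  return 0;
}
```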
dutow pushed a commit that referenced this pull request on Sep 7, 2020
…o: object '/lib64/libtirpc.so' from LD_PRELOAD cannot be preloaded

Problem
=======
Running mtr tests with an ASAN build on Gentoo fails since the path to libtirpc is not /lib64/libtirpc.so, which is the path mtr uses for preloading the library. Furthermore, the libasan path on Gentoo may also contain underscores and minus signs, which mtr safe_process does not recognize.

Fails on Gentoo since /lib64/libtirpc.so does not exist:
+ERROR: ld.so: object '/lib64/libtirpc.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.

Fails on Gentoo since /usr/lib64/libtirpc.so is a GNU LD script:
+ERROR: ld.so: object '/usr/lib64/libtirpc.so' from LD_PRELOAD cannot be preloaded (invalid ELF header): ignored.

Need to preload /lib64/libtirpc.so.3 on Gentoo. When compiling with GNU C++ the libasan path also includes minus signs and underscores:

$ less mysql-test/lib/My/SafeProcess/ldd_asan_test_result
linux-vdso.so.1 (0x00007ffeba962000)
libasan.so.4 => /usr/lib/gcc/x86_64-pc-linux-gnu/7.3.0/libasan.so.4 (0x00007f3c2e827000)

Tests that have been affected in different ways are for example:

$ ./mtr group_replication.gr_clone_integration_clone_not_installed
[100%] group_replication.gr_clone_integration_clone_not_installed w3 [ fail ]
...
ERROR: ld.so: object '/usr/lib/gcc/x86' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
ERROR: ld.so: object '/lib64/libtirpc.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
mysqltest: At line 21: Query 'START GROUP_REPLICATION' failed.
ERROR 2013 (HY000): Lost connection to MySQL server during query
...
ASAN:DEADLYSIGNAL
=================================================================
==11970==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x7f0e5cecfb8c bp 0x7f0e340f1650 sp 0x7f0e340f0dc8 T44)
==11970==The signal is caused by a READ memory access.
==11970==Hint: address points to the zero page.
#0 0x7f0e5cecfb8b in xdr_uint32_t (/lib64/libc.so.6+0x13cb8b)
#1 0x7f0e5fbe6d43 (/usr/lib/gcc/x86_64-pc-linux-gnu/7.3.0/libasan.so.4+0x87d43)
percona#2 0x7f0e3c675e59 in xdr_node_no plugin/group_replication/libmysqlgcs/xdr_gen/xcom_vp_xdr.c:88
percona#3 0x7f0e3c67744d in xdr_pax_msg_1_6 plugin/group_replication/libmysqlgcs/xdr_gen/xcom_vp_xdr.c:852
...

$ ./mtr ndb.ndb_config
[100%] ndb.ndb_config [ fail ]
...
--- /.../src/mysql-test/suite/ndb/r/ndb_config.result 2019-06-25 21:19:08.308997942 +0300
+++ /.../bld/mysql-test/var/log/ndb_config.reject 2019-06-26 11:58:11.718512944 +0300
@@ -30,16 +30,22 @@
== 16 == bug44689
192.168.0.1 192.168.0.2 192.168.0.3 192.168.0.4 192.168.0.1 192.168.0.1
== 17 == bug49400
+ERROR: ld.so: object '/usr/lib/gcc/x86' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
+ERROR: ld.so: object '/lib64/libtirpc.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
ERROR -- at line 25: TCP connection is a duplicate of the existing TCP link from line 14
ERROR -- at line 25: Could not store section of configuration file.

$ ./mtr ndb.ndb_basic
[100%] ndb.ndb_basic [ pass ] 34706
ERROR: ld.so: object '/usr/lib/gcc/x86' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
ERROR: ld.so: object '/lib64/libtirpc.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.

Solution
========
In safe_process use the same trick for libtirpc as for libasan to determine the path to the library for preloading. Also allow underscores and minus signs in paths.

In addition, add some memory leak suppressions for perl.

Change-Id: Ia02e354a20cf8b279eb2573f3f8c2c39776343dc
(cherry picked from commit e88706d)
dutow pushed a commit that referenced this pull request on Apr 18, 2022
*Problem:* ASAN complains about a stack-buffer-overflow in the function `mysql_heartbeat`:

```
==90890==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7fe746d06d14 at pc 0x7fe760f5b017 bp 0x7fe746d06cd0 sp 0x7fe746d06478
WRITE of size 24 at 0x7fe746d06d14 thread T16777215
Address 0x7fe746d06d14 is located in stack of thread T26 at offset 340 in frame
#0 0x7fe746d0a55c in mysql_heartbeat(void*) /home/yura/ws/percona-server/plugin/daemon_example/daemon_example.cc:62

This frame has 4 object(s):
[48, 56) 'result' (line 66)
[80, 112) '_db_stack_frame_' (line 63)
[144, 200) 'tm_tmp' (line 67)
[240, 340) 'buffer' (line 65) <== Memory access at offset 340 overflows this variable
HINT: this may be a false positive if your program uses some custom stack unwind mechanism, swapcontext or vfork (longjmp and C++ exceptions *are* supported)
Thread T26 created by T25 here:
#0 0x7fe760f5f6d5 in __interceptor_pthread_create ../../../../src/libsanitizer/asan/asan_interceptors.cpp:216
#1 0x557ccbbcb857 in my_thread_create /home/yura/ws/percona-server/mysys/my_thread.c:104
percona#2 0x7fe746d0b21a in daemon_example_plugin_init /home/yura/ws/percona-server/plugin/daemon_example/daemon_example.cc:148
percona#3 0x557ccb4c69c7 in plugin_initialize /home/yura/ws/percona-server/sql/sql_plugin.cc:1279
percona#4 0x557ccb4d19cd in mysql_install_plugin /home/yura/ws/percona-server/sql/sql_plugin.cc:2279
percona#5 0x557ccb4d218f in Sql_cmd_install_plugin::execute(THD*) /home/yura/ws/percona-server/sql/sql_plugin.cc:4664
percona#6 0x557ccb47695e in mysql_execute_command(THD*, bool) /home/yura/ws/percona-server/sql/sql_parse.cc:5160
percona#7 0x557ccb47977c in mysql_parse(THD*, Parser_state*, bool) /home/yura/ws/percona-server/sql/sql_parse.cc:5952
percona#8 0x557ccb47b6c2 in dispatch_command(THD*, COM_DATA const*, enum_server_command) /home/yura/ws/percona-server/sql/sql_parse.cc:1544
percona#9 0x557ccb47de1d in do_command(THD*) /home/yura/ws/percona-server/sql/sql_parse.cc:1065
percona#10 0x557ccb6ac294 in handle_connection /home/yura/ws/percona-server/sql/conn_handler/connection_handler_per_thread.cc:325
percona#11 0x557ccbbfabb0 in pfs_spawn_thread /home/yura/ws/percona-server/storage/perfschema/pfs.cc:2198
percona#12 0x7fe760ab544f in start_thread nptl/pthread_create.c:473
```

The reason is that `my_thread_cancel` is used to finish the daemon thread. This is not an orderly way of finishing the thread. ASAN does not register that the stack variables are no longer used, which generates the error above. This is a benign error, as all the variables are on the stack.

*Solution*: Finish the thread in an orderly way by using a signalling variable.
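A minimal standalone sketch of the signalling-variable approach, using std::atomic and a condition variable instead of thread cancellation; it is not the actual daemon_example plugin code, and the names (heartbeat_thread, stop_requested) are illustrative.

```cpp
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

static std::atomic<bool> stop_requested{false};
static std::mutex stop_mutex;
static std::condition_variable stop_cv;

// Daemon loop: wakes up once per interval and exits cleanly when signalled.
static void heartbeat_thread() {
  std::unique_lock<std::mutex> lock(stop_mutex);
  while (!stop_requested.load()) {
    std::cout << "heartbeat\n";
    stop_cv.wait_for(lock, std::chrono::seconds(1),
                     [] { return stop_requested.load(); });
  }
  // All local buffers go out of scope normally, so ASAN sees a clean exit.
}

int main() {
  std::thread t(heartbeat_thread);
  std::this_thread::sleep_for(std::chrono::seconds(3));
  {
    std::lock_guard<std::mutex> lock(stop_mutex);
    stop_requested.store(true);  // signal instead of cancelling the thread
  }
  stop_cv.notify_one();
  t.join();  // orderly shutdown: the thread's stack frame unwinds normally
  return 0;
}
```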
dutow pushed a commit that referenced this pull request on May 18, 2022
This error happens for queries such as:

SELECT ( SELECT 1 FROM t1 ) AS a,
       ( SELECT a FROM ( SELECT x FROM t1 ORDER BY a ) AS d1 );

Query_block::prepare() for query block percona#4 (corresponding to the 4th SELECT in the query above) calls setup_order(), which again calls find_order_in_list(). That function replaces an Item_ident for 'a' in Query_block.order_list with an Item_ref pointing to query block percona#2.

Then Query_block::merge_derived() merges query block percona#4 into query block percona#3. The Item_ref mentioned above is then moved to the order_list of query block percona#3.

In the next step, find_order_in_list() is called for query block percona#3. At this point, 'a' in the select list has been resolved to another Item_ref, also pointing to query block percona#2. find_order_in_list() detects that the Item_ref in the order_list is equivalent to the Item_ref in the select list, and therefore decides to replace the former with the latter. Then find_order_in_list() calls Item::clean_up_after_removal() recursively (via Item::walk()) for the order_list Item_ref (since that is no longer needed).

When calling clean_up_after_removal(), no Cleanup_after_removal_context object is passed. This is the actual error, as there should be a context pointing to query block percona#3 that ensures that clean_up_after_removal() only purges Item_subselect.unit if both of the following conditions hold:

1) The Item_subselect should not be in any of the Item trees in the select list of query block percona#3.
2) Item_subselect.unit should be a descendant of query block percona#3.

These conditions ensure that we only purge Item_subselect.unit if we are sure that it is not needed elsewhere. But without the right context, query block percona#2 gets purged even if it is used in the select lists of query blocks #1 and percona#3.

The fix is to pass a context (for query block percona#3) to clean_up_after_removal(). Both of the above conditions then become false, and Item_subselect.unit is not purged. As an additional shortcut, find_order_in_list() will not call clean_up_after_removal() if real_item() of the order item and the select list item are identical.

In addition, this commit changes clean_up_after_removal() so that it requires the context to be non-null, to prevent similar errors. It also simplifies Item_sum::clean_up_after_removal() by removing window functions unconditionally (and adds a corresponding test case).

Change-Id: I449be15d369dba97b23900d1a9742e9f6bad4355
dutow pushed a commit that referenced this pull request on May 18, 2022
…nt [#1]

Problem
=======
When the coordinator receives a stale schema event, it crashes due to an assert failure.

Description
===========
After the bug#32593352 fix, the client/user thread can now detect a schema distribution timeout by itself and can free the schema object. So, if a stale schema event reaches the coordinator after the client/user thread has freed the schema object, the coordinator will try to get the schema object and will hit the assert failure.

Prior to bug#32593352, the schema distribution timeout could be detected only by the coordinator, so it was assumed that the schema object is always valid inside the coordinator. As there now exists a valid scenario where the schema object can be invalid, the assert check is no longer useful and can be removed.

Fix
===
Fixed by removing the assert check.

Change-Id: I0482ccc940505e83d66cbf2258528fbac6951599
dutow pushed a commit that referenced this pull request on May 18, 2022
…NSHIP WITH THE BUFFER SIZE

Bug #33501541: Unmanageable Sort Buffer Behavior in 8.0.20+

Implement direct disk-to-disk copies of large packed addons during the filesort merge phase; if a single row is so large that its addons do not fit into its slice of the sort buffer during merging (even after emptying that slice of all other rows), but the sort key _does_ fit, simply sort the truncated row as usual, and then copy the rest of the addon incrementally from the input to the output, 4 kB at a time, when the row is to be written to the merge output. This is possible because the addon itself doesn't need to be in RAM for the row to be compared against other rows; only the sort key must. This greatly relaxes the sort buffer requirements for successful merging, especially when it comes to JSON rows or small blobs (which are typically used as packed addons, not sort keys).

The rules used to be:

1. During initial chunk generation: The sort buffer must be at least as large as the largest row to be sorted.
2. During merging: Merging is guaranteed to pass if the sort buffer is at least 15 times as large as the largest row (sort key + addons), but one may be lucky and pass with only the demands from #1.

Now, for sorts implemented using packed addons (which is the common case for small blobs and JSON), the new rules are:

1. Unchanged from #1 above.
2. During merging: Merging is guaranteed to pass if the sort buffer is at least 15 times as large as the largest _sort key_ (plus 4-byte length marker), but one may be lucky and pass with only the demands from #1.

In practice, this means that filesort merging will almost never fail due to insufficient buffer space anymore; the query will either fail because a single row is too large in the sort step, or it will pass nearly all of the time. However, do note that while such merges will work, they will not always be very performant, as having lots of 1-row merge chunks will mean many merge passes and little work being done during the initial in-memory sort. Thus, the main use of this functionality is to be able to do sorts where there are a few rows with large JSON values or similar, but where most fit comfortably into the buffer.

Also note that since requirement #1 is unchanged, one still cannot sort e.g. 500 kB JSON values using the default 256 kB sort buffer.

Older recommendations to keep sort buffers small at nearly any cost are no longer valid, and have not been for a while. Sort buffers should be sized to as much RAM as one can afford without interfering with other tasks (such as the buffer pool, join buffers, or other concurrent sorts), and small sorts are not affected by the maximum sort buffer size being set to a larger value, as the sort buffer is incrementally allocated.

Change-Id: I85745cd513402a42ed5fc4f5b7ddcf13c5793100
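The gap between the old and new merge guarantees can be illustrated with a bit of arithmetic. The sketch below is illustrative only: the 15x factor and the 4-byte length marker come from the commit message above, while the row sizes are hypothetical; it is not code from the server.

```cpp
#include <cstdio>

int main() {
  // Hypothetical row: a 100-byte sort key with a 100 KiB packed JSON addon.
  const long long sort_key_bytes = 100;
  const long long addon_bytes = 100 * 1024;
  const long long buffer_bytes = 256 * 1024;  // default sort buffer size

  // Requirement #1 (unchanged): the buffer must hold the largest full row.
  const bool chunk_ok = buffer_bytes >= sort_key_bytes + addon_bytes;

  // Old merge guarantee: 15 x largest row (sort key + addons).
  const long long old_needed = 15 * (sort_key_bytes + addon_bytes);
  // New merge guarantee: 15 x largest sort key plus 4-byte length marker.
  const long long new_needed = 15 * (sort_key_bytes + 4);

  std::printf("chunk generation ok: %d\n", chunk_ok);           // 1
  std::printf("old guarantee needs %lld bytes\n", old_needed);  // ~1.5 MiB
  std::printf("new guarantee needs %lld bytes\n", new_needed);  // 1560 bytes
  return 0;
}
```

With these numbers the old rule could not guarantee a successful merge inside the default 256 kB buffer, while the new rule does so comfortably.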
dutow pushed a commit that referenced this pull request on Nov 23, 2022
Upstream commit ID : fb-mysql-5.6.35/8cb1dc836b68f1f13e8b2655b2b8cb2d57f400b3
PS-5217 : Merge fb-prod201803

Summary:
Original report: https://jira.mariadb.org/browse/MDEV-15816

To reproduce this bug, just follow the steps below.

client 1:
USE test;
CREATE TABLE t1 (i INT) ENGINE=MyISAM;
HANDLER t1 OPEN h;
CREATE TABLE t2 (i INT) ENGINE=RocksDB;
LOCK TABLES t2 WRITE;

client 2:
FLUSH TABLES WITH READ LOCK;

client 1:
INSERT INTO t2 VALUES (1);

So client 1 acquired the lock and set m_lock_rows = RDB_LOCK_WRITE. Then client 2 calls store_lock(TL_IGNORE) and m_lock_rows was wrongly set to RDB_LOCK_NONE, as below:

```
#0 myrocks::ha_rocksdb::store_lock (this=0x7fffbc03c7c8, thd=0x7fffc0000ba0, to=0x7fffc0011220, lock_type=TL_IGNORE)
#1 get_lock_data (thd=0x7fffc0000ba0, table_ptr=0x7fffe84b7d20, count=1, flags=2)
percona#2 mysql_lock_abort_for_thread (thd=0x7fffc0000ba0, table=0x7fffbc03bbc0)
percona#3 THD::notify_shared_lock (this=0x7fffc0000ba0, ctx_in_use=0x7fffbc000bd8, needs_thr_lock_abort=true)
percona#4 MDL_lock::notify_conflicting_locks (this=0x555557a82380, ctx=0x7fffc0000cc8)
percona#5 MDL_context::acquire_lock (this=0x7fffc0000cc8, mdl_request=0x7fffe84b8350, lock_wait_timeout=2)
percona#6 Global_read_lock::lock_global_read_lock (this=0x7fffc0003fe0, thd=0x7fffc0000ba0)
```

Finally, client 1's "INSERT INTO..." hits the assertion 'm_lock_rows == RDB_LOCK_WRITE' failed in myrocks::ha_rocksdb::write_row().

Fix this bug by not setting m_lock_rows if lock_type == TL_IGNORE.

Closes facebook/mysql-5.6#838
Pull Request resolved: facebook/mysql-5.6#871
Differential Revision: D9417382
Pulled By: lth
fbshipit-source-id: c36c164e06c
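A hedged, self-contained sketch of the guard described above, using simplified stand-in enums and a toy handler type rather than the real MyRocks classes; only the idea (TL_IGNORE must not overwrite a previously acquired lock mode) is taken from the commit message.

```cpp
#include <cassert>

enum thr_lock_type { TL_IGNORE, TL_READ, TL_WRITE };
enum rdb_lock_type { RDB_LOCK_NONE, RDB_LOCK_READ, RDB_LOCK_WRITE };

struct handler_sketch {
  rdb_lock_type m_lock_rows = RDB_LOCK_NONE;

  void store_lock(thr_lock_type lock_type) {
    // The fix: TL_IGNORE leaves the previously acquired lock mode untouched,
    // otherwise a concurrent FLUSH TABLES WITH READ LOCK resets the state and
    // the later write_row() assertion fires.
    if (lock_type == TL_IGNORE) return;
    m_lock_rows = (lock_type >= TL_WRITE) ? RDB_LOCK_WRITE : RDB_LOCK_READ;
  }
};

int main() {
  handler_sketch h;
  h.store_lock(TL_WRITE);   // client 1: LOCK TABLES t2 WRITE
  h.store_lock(TL_IGNORE);  // client 2: FLUSH TABLES WITH READ LOCK path
  assert(h.m_lock_rows == RDB_LOCK_WRITE);  // the write lock mode is preserved
  return 0;
}
```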
dutow pushed a commit that referenced this pull request on Nov 23, 2022
Upstream commit ID : fb-mysql-5.6.35/77032004ad23d21a4c386f8136ecfbb071ea42d6
PS-6865 : Merge fb-prod201903

Summary:
Currently, during primary key value encoding, the ttl value can come from one of these 3 cases:
1. ttl column in the primary key
2. non-ttl column
   a. old record (update case)
   b. current timestamp
3. ttl column in a non-key field

Workflow #1: first, in Rdb_key_def::pack_record(), find and store pk_offset; then, during value encoding, try to parse the key slice to fetch the ttl value by using pk_offset.
Workflow percona#3: fetch the ttl value from the ttl column.

The change is to merge #1 and percona#3 by always fetching the TTL value from the ttl column, no matter whether the ttl column is in the primary key or not. Of course, remove pk_offset, since it isn't used anymore. BTW, for secondary keys, the ttl value always comes from m_ttl_bytes, which is stored by primary value encoding.

Reviewed By: yizhang82
Differential Revision: D14662716
fbshipit-source-id: 6b4e5f044fd
dutow pushed a commit that referenced this pull request on Nov 23, 2022
PS-5741: Incorrect use of memset_s in keyring_vault.

Fixed the usage of memset_s. The arguments should be:

void memset_s(void *dest, size_t dest_max, int c, size_t n)

where the 2nd argument is the size of the buffer and the 3rd argument is the character to fill.

---------------------------------------------------------------------------

PS-7769 - Fix use-after-return error in audit_log_exclude_accounts_validate

*Problem:* `st_mysql_value::val_str` might return a pointer to `buf`, which is deleted after the function returns. Therefore the value in `save`, after returning from the function, is invalid. In this particular case the error does not manifest, as `val_str` returns memory allocated with `thd_strmake` and does not use `buf`.

*Solution:* Allocate memory with `thd_strmake` so the memory in `save` is not local.

---------------------------------------------------------------------------

Fix test main.bug12969156 when WITH_ASAN=ON

*Problem:* ASAN complains about a stack-buffer-overflow in the function `mysql_heartbeat`:

```
==90890==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7fe746d06d14 at pc 0x7fe760f5b017 bp 0x7fe746d06cd0 sp 0x7fe746d06478
WRITE of size 24 at 0x7fe746d06d14 thread T16777215
Address 0x7fe746d06d14 is located in stack of thread T26 at offset 340 in frame
#0 0x7fe746d0a55c in mysql_heartbeat(void*) /home/yura/ws/percona-server/plugin/daemon_example/daemon_example.cc:62

This frame has 4 object(s):
[48, 56) 'result' (line 66)
[80, 112) '_db_stack_frame_' (line 63)
[144, 200) 'tm_tmp' (line 67)
[240, 340) 'buffer' (line 65) <== Memory access at offset 340 overflows this variable
HINT: this may be a false positive if your program uses some custom stack unwind mechanism, swapcontext or vfork (longjmp and C++ exceptions *are* supported)
Thread T26 created by T25 here:
#0 0x7fe760f5f6d5 in __interceptor_pthread_create ../../../../src/libsanitizer/asan/asan_interceptors.cpp:216
#1 0x557ccbbcb857 in my_thread_create /home/yura/ws/percona-server/mysys/my_thread.c:104
percona#2 0x7fe746d0b21a in daemon_example_plugin_init /home/yura/ws/percona-server/plugin/daemon_example/daemon_example.cc:148
percona#3 0x557ccb4c69c7 in plugin_initialize /home/yura/ws/percona-server/sql/sql_plugin.cc:1279
percona#4 0x557ccb4d19cd in mysql_install_plugin /home/yura/ws/percona-server/sql/sql_plugin.cc:2279
percona#5 0x557ccb4d218f in Sql_cmd_install_plugin::execute(THD*) /home/yura/ws/percona-server/sql/sql_plugin.cc:4664
percona#6 0x557ccb47695e in mysql_execute_command(THD*, bool) /home/yura/ws/percona-server/sql/sql_parse.cc:5160
percona#7 0x557ccb47977c in mysql_parse(THD*, Parser_state*, bool) /home/yura/ws/percona-server/sql/sql_parse.cc:5952
percona#8 0x557ccb47b6c2 in dispatch_command(THD*, COM_DATA const*, enum_server_command) /home/yura/ws/percona-server/sql/sql_parse.cc:1544
percona#9 0x557ccb47de1d in do_command(THD*) /home/yura/ws/percona-server/sql/sql_parse.cc:1065
percona#10 0x557ccb6ac294 in handle_connection /home/yura/ws/percona-server/sql/conn_handler/connection_handler_per_thread.cc:325
percona#11 0x557ccbbfabb0 in pfs_spawn_thread /home/yura/ws/percona-server/storage/perfschema/pfs.cc:2198
percona#12 0x7fe760ab544f in start_thread nptl/pthread_create.c:473
```

The reason is that `my_thread_cancel` is used to finish the daemon thread. This is not an orderly way of finishing the thread. ASAN does not register that the stack variables are no longer used, which generates the error above. This is a benign error, as all the variables are on the stack.

*Solution*: Finish the thread in an orderly way by using a signalling variable.

---------------------------------------------------------------------------

PS-8204: Fix XML escape rules for audit plugin

https://jira.percona.com/browse/PS-8204

There was a wrong length specified for some XML escape rules. As a result, the terminating null symbol from the replacement rule was copied into the resulting string. This led to query text truncation in the audit log file.

In addition, empty replacement rules were added for the '\b' and '\f' symbols, which simply remove them from the resulting string. These symbols are not supported in XML 1.0.
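For reference, a hedged example of the memset_s argument order described in the first part of this message (destination, destination size, fill character, count). memset_s is an optional C11 Annex K function, so the sketch guards on its availability; the wipe_secret name and the memset fallback are illustrative only.

```cpp
#define __STDC_WANT_LIB_EXT1__ 1
#include <cstring>

void wipe_secret(char *secret, size_t secret_size) {
#ifdef __STDC_LIB_EXT1__
  // 2nd argument: size of the destination buffer,
  // 3rd argument: the character to fill with,
  // 4th argument: number of bytes to set.
  memset_s(secret, secret_size, 0, secret_size);
#else
  // Fallback sketch only; unlike memset_s, this call may be optimized away.
  memset(secret, 0, secret_size);
#endif
}
```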
dutow pushed a commit that referenced this pull request on Mar 23, 2023
… Signal (get_store_key at sql/sql_select.cc:2383)

These are two related but distinct problems manifested in the shrinkage of key definitions for derived tables or common table expressions, implemented in JOIN::finalize_derived_keys().

The problem in Bug#34572040 is that we have two references to one CTE, each with a valid key definition. The function will first loop over the first reference (cte_a) and move its used key from position 0 to position 1. Next, it will attempt to move the key for the second reference (cte_b) from position 4 to position 2. However, for each iteration, the function will calculate used key information. On the first iteration, the values are correct, but since key value #1 has been moved into position #0, the old information is invalid and provides wrong information. The problem is thus that for subsequent iterations we read data that has been invalidated by earlier key moves. The best solution to the problem is to move the keys for all references to the CTE in one operation. This way, we can calculate used keys information safely, before any move operation has been performed.

The problem in Bug#34634469 is also related to having more than one reference to a CTE, but in this case the first reference (ref_3) has a key in position 5 which is moved to position 0, and the second reference (ref_4) has a key in position 3 that is moved to position 1. However, the key parts of the first key will overlap with the key parts of the second key after the first move, thus invalidating the key structure during the copy. The actual problem is that we move a higher-numbered key (5) before a lower-numbered key (3), which in this case makes it impossible to find an empty space for the moved key. The solution to this problem is to ensure that keys are moved in increasing key order.

The patch changes the algorithm as follows:

- When identifying a derived table/common table expression, ensure to move all its keys in one operation (at least those references from the same query block).
- First, collect information about all key uses: hash key, unique index keys and actual key references. For the key references, also populate a mapping array that enumerates table references with key references in order of increasing key number. Also clear used key information for references that do not use keys.
- For each table reference with a key reference in increasing key order, move the used key into the lowest available position. This will ensure that used entries are never overwritten.
- When all table references have been processed, remove unused key definitions.

Change-Id: I938099284e34a81886621f6a389f34abc51e78ba
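As a toy illustration of the "move keys in increasing key order" rule from the list above, the sketch below compacts a set of (reference, key number) pairs into the lowest free positions. The data model is a plain vector rather than the real TABLE/KEY structures, and the reference names mirror the ref_3/ref_4 example only for readability.

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Returns reference -> new key position, assigned in increasing key order so
// that no move ever lands on a slot still occupied by a key yet to be moved.
std::vector<std::pair<std::string, int>> compact_keys(
    std::vector<std::pair<std::string, int>> used_keys) {
  std::sort(used_keys.begin(), used_keys.end(),
            [](const auto &a, const auto &b) { return a.second < b.second; });
  int next_free = 0;
  for (auto &entry : used_keys) entry.second = next_free++;
  return used_keys;
}

int main() {
  // Bug#34634469 shape: ref_3 used key 5, ref_4 used key 3.
  auto result = compact_keys({{"ref_3", 5}, {"ref_4", 3}});
  // Processing in increasing key order gives ref_4 -> 0 and ref_3 -> 1.
  assert(result[0] == std::make_pair(std::string("ref_4"), 0));
  assert(result[1] == std::make_pair(std::string("ref_3"), 1));
  return 0;
}
```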
dutow pushed a commit that referenced this pull request on Apr 27, 2023
… Signal (get_store_key at sql/sql_select.cc:2383)

These are two related but distinct problems manifested in the shrinkage of key definitions for derived tables or common table expressions, implemented in JOIN::finalize_derived_keys().

The problem in Bug#34572040 is that we have two references to one CTE, each with a valid key definition. The function will first loop over the first reference (cte_a) and move its used key from position 0 to position 1. Next, it will attempt to move the key for the second reference (cte_b) from position 4 to position 2. However, for each iteration, the function will calculate used key information. On the first iteration, the values are correct, but since key value #1 has been moved into position #0, the old information is invalid and provides wrong information. The problem is thus that for subsequent iterations we read data that has been invalidated by earlier key moves. The best solution to the problem is to move the keys for all references to the CTE in one operation. This way, we can calculate used keys information safely, before any move operation has been performed.

The problem in Bug#34634469 is also related to having more than one reference to a CTE, but in this case the first reference (ref_3) has a key in position 5 which is moved to position 0, and the second reference (ref_4) has a key in position 3 that is moved to position 1. However, the key parts of the first key will overlap with the key parts of the second key after the first move, thus invalidating the key structure during the copy. The actual problem is that we move a higher-numbered key (5) before a lower-numbered key (3), which in this case makes it impossible to find an empty space for the moved key. The solution to this problem is to ensure that keys are moved in increasing key order.

The patch changes the algorithm as follows:

- When identifying a derived table/common table expression, ensure to move all its keys in one operation (at least those references from the same query block).
- First, collect information about all key uses: hash key, unique index keys and actual key references. For the key references, also populate a mapping array that enumerates table references with key references in order of increasing key number. Also clear used key information for references that do not use keys.
- For each table reference with a key reference in increasing key order, move the used key into the lowest available position. This will ensure that used entries are never overwritten.
- When all table references have been processed, remove unused key definitions.

Change-Id: I938099284e34a81886621f6a389f34abc51e78ba
dutow pushed a commit that referenced this pull request on May 21, 2023
Introduce class NdbSocket, which includes both an ndb_socket_t and an SSL *, and wraps all socket operations that might use TLS. Change-Id: I20d7aeb4854cdb11cfd0b256270ab3648b067efa
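As a rough sketch of the wrapper idea (one object holding both the plain socket handle and an optional SSL pointer, routing I/O through TLS when it is present), assuming OpenSSL's SSL_read/SSL_write and POSIX send/recv; this illustrates the design only and is not the actual NdbSocket interface.

```cpp
#include <sys/types.h>
#include <sys/socket.h>
#include <openssl/ssl.h>

class SocketWrapper {
 public:
  SocketWrapper(int fd, SSL *ssl) : m_fd(fd), m_ssl(ssl) {}

  // Writes go through SSL_write() when TLS is active, plain send() otherwise.
  ssize_t write(const void *buf, int len) const {
    if (m_ssl != nullptr) return SSL_write(m_ssl, buf, len);
    return send(m_fd, buf, static_cast<size_t>(len), 0);
  }

  // Reads mirror the same dispatch.
  ssize_t read(void *buf, int len) const {
    if (m_ssl != nullptr) return SSL_read(m_ssl, buf, len);
    return recv(m_fd, buf, static_cast<size_t>(len), 0);
  }

 private:
  int m_fd;    // underlying socket handle (ndb_socket_t in the real class)
  SSL *m_ssl;  // non-null once the connection has been upgraded to TLS
};
```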
dutow pushed a commit that referenced this pull request on May 21, 2023
The patch for WL#15130 "Socket-level TLS patch #1: class NdbSocket" re-introduced a -Wcast-qual warning. Use const_cast and reinterpret_cast to fix it, since the corresponding posix version of ndb_socket_writev() is const-correct. Change-Id: Ib446a926b4108edf51eda7d8fd27ada560b67a24
dutow pushed a commit that referenced this pull request on May 21, 2023
The shadow tables used by the ndb_binlog thread when handling data events have no knowledge about whether the table is a foreign key parent in NDB. This is mainly because the NDB dictionary extra metadata for the parent table is not updated when creating the child tables. The information about being a parent table is required in order to properly detect a transaction conflict while writing changes of parent key tables to the binlog.

This is fixed by extending the ndb_binlog thread with a metadata cache that maintains a list of tables in NDB which are foreign key parents. This makes it possible to update the shadow table's knowledge about being a parent table while handling data events.

Change-Id: Ic551fde3e9460e3668d5aa1ac951ee3bec442cf5
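A minimal sketch of the metadata-cache idea, assuming a simple set keyed by table name that is rebuilt from the dictionary and consulted while handling data events; the class and method names (FkParentCache, reload, is_parent) are hypothetical and not the actual ndb_binlog code.

```cpp
#include <cassert>
#include <set>
#include <string>
#include <utility>

class FkParentCache {
 public:
  // Rebuilt whenever foreign key definitions change in the dictionary.
  void reload(std::set<std::string> parents) { m_parents = std::move(parents); }

  // Consulted while handling a data event to decide whether parent-key
  // conflict detection applies to this shadow table.
  bool is_parent(const std::string &db_and_table) const {
    return m_parents.count(db_and_table) != 0;
  }

 private:
  std::set<std::string> m_parents;
};

int main() {
  FkParentCache cache;
  cache.reload({"test.parent_t"});
  assert(cache.is_parent("test.parent_t"));
  assert(!cache.is_parent("test.child_t"));
  return 0;
}
```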
dutow pushed a commit that referenced this pull request on May 21, 2023
If mysql_bind_param() fails, it crashes.

glibc reports: double free or corruption (fasttop)

ASAN reports: heap-use-after-free
#1 0x55ad770cda23 in mysql_bind_param libmysql/libmysql.cc:2477:9
freed by thread T0 here:
#1 0x55ad770cda23 in mysql_bind_param libmysql/libmysql.cc:2477:9

The wrong counter variable is used, which results in the same index being freed again and again.

Change
------
- use the right index variable when freeing ->names if mysql_bind_param() fails.

Change-Id: I580267d5913c55b00151409c25d90f2ffe1b4119
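A standalone sketch of this class of bug: a cleanup loop that frees the same slot on every iteration because it indexes with the wrong variable, next to the corrected loop. The structure and names are hypothetical, not the actual libmysql code.

```cpp
#include <cstdlib>
#include <cstring>

struct params_t {
  char **names;
  unsigned count;
};

void cleanup_buggy(params_t *p, unsigned failed_at) {
  for (unsigned i = 0; i < failed_at; ++i)
    free(p->names[failed_at]);  // BUG: frees the same slot on every iteration
}

void cleanup_fixed(params_t *p, unsigned failed_at) {
  for (unsigned i = 0; i < failed_at; ++i) {
    free(p->names[i]);  // use the loop index: each slot is freed exactly once
    p->names[i] = nullptr;
  }
}

int main() {
  params_t p{};
  p.count = 3;
  p.names = static_cast<char **>(calloc(p.count, sizeof(char *)));
  for (unsigned i = 0; i < p.count; ++i) p.names[i] = strdup("param");
  cleanup_fixed(&p, p.count);  // cleanup_buggy() here would double-free
  free(p.names);
  return 0;
}
```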
dutow pushed a commit that referenced this pull request on May 21, 2023
Compilation warning/error:

storage/ndb/test/src/UtilTransactions.cpp:1544: error: variable 'eof' may be uninitialized when used here [-Werror,-Wconditional-uninitialized]

Manual code inspection also shows there are problems with the retry logic as well as with potentially releasing resources. Fix by refactoring `verifyTableReplicasWithSource` into functions where the retry, scan and compare logic are separated. This makes it possible to make sure that the "eof" variable is always initialized while fetching the next row during a scan.

Change-Id: I111e0a6613622aa2279a5ec5845e48e8ca0e115f
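A small sketch of the initialization point, assuming a hypothetical fetch_next_row() helper in place of the real NDB scan call: `eof` is assigned before the loop, so every retry path sees a defined value.

```cpp
#include <cstdio>
#include <optional>

std::optional<int> fetch_next_row(int i) {  // hypothetical scan helper
  return (i < 3) ? std::optional<int>(i) : std::nullopt;
}

int scan_and_compare() {
  bool eof = false;  // always initialized before use, on every retry
  int rows = 0;
  for (int i = 0; !eof; ++i) {
    std::optional<int> row = fetch_next_row(i);
    if (!row) {
      eof = true;  // end of scan reached
      break;
    }
    ++rows;  // compare logic would go here
  }
  return rows;
}

int main() {
  std::printf("scanned %d rows\n", scan_and_compare());
  return 0;
}
```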
Checked on Jenkins (./mtr main.1st --unit-tests): https://ps80.cd.percona.com/job/percona-server-8.0-param/453/