
Bug 1414221: Warning "The log was not applied to the intended LSN" should optionally be an error #4

Merged
1 commit merged into percona:2.2 on Mar 23, 2015

Conversation

gl-sergei
Contributor

Trivial merge

Bug 1414221: Warning "The log was not applied to the intended LSN" should optionally be an error

Instead of a warning, print an error message.
Print the value of srv_start_lsn right after the error message.
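
A minimal sketch of the change described above, assuming simplified logging: the message text and srv_start_lsn come from the commit, but the helper function and exact output format below are illustrative rather than the actual xtrabackup code.

  #include <cstdint>
  #include <cstdio>

  using lsn_t = std::uint64_t;
  lsn_t srv_start_lsn = 0;  // defined by InnoDB in the real code; stubbed here

  // Before: a warning that was easy to miss. After: an error message,
  // immediately followed by the value of srv_start_lsn that was reached.
  void report_unexpected_lsn(lsn_t intended_lsn) {
    std::fprintf(stderr,
                 "xtrabackup: error: The log was not applied to the intended LSN!\n");
    std::fprintf(stderr,
                 "xtrabackup: intended lsn %llu, last applied lsn %llu\n",
                 static_cast<unsigned long long>(intended_lsn),
                 static_cast<unsigned long long>(srv_start_lsn));
  }
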
@gl-sergei gl-sergei merged commit 5e13258 into percona:2.2 Mar 23, 2015
@gl-sergei gl-sergei deleted the 2.3-xb-bug1414221 branch November 24, 2016 01:27
gl-sergei pushed a commit that referenced this pull request Jul 19, 2019
…E TO A SERVER

Problem
========================================================================
Running the GCS tests with ASAN occasionally reports a use-after-free of
the server reference that the acceptor_learner_task uses.

Here is an excerpt of ASAN's output:

==43936==ERROR: AddressSanitizer: heap-use-after-free on address 0x63100021c840 at pc 0x000000530ff8 bp 0x7fc0427e8530 sp 0x7fc0427e8520
WRITE of size 8 at 0x63100021c840 thread T3
    #0 0x530ff7 in server_detected /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/xcom/xcom_transport.c:962
    #1 0x533814 in buffered_read_bytes /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/xcom/xcom_transport.c:1249
    #2 0x5481af in buffered_read_msg /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/xcom/xcom_transport.c:1399
    #3 0x51e171 in acceptor_learner_task /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/xcom/xcom_base.c:4690
    #4 0x562357 in task_loop /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/xcom/task.c:1140
    #5 0x5003b2 in xcom_taskmain2 /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/xcom/xcom_base.c:1324
    #6 0x6a278a in Gcs_xcom_proxy_impl::xcom_init(unsigned short, node_address*) /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/gcs_xcom_proxy.cc:164
    #7 0x59b3c1 in xcom_taskmain_startup /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/gcs_xcom_control_interface.cc:107
    #8 0x7fc04a2e4dd4 in start_thread (/lib64/libpthread.so.0+0x7dd4)
    #9 0x7fc047ff2bfc in __clone (/lib64/libc.so.6+0xfebfc)

0x63100021c840 is located 64 bytes inside of 65688-byte region [0x63100021c800,0x63100022c898)
freed by thread T3 here:
    #0 0x7fc04a5d7508 in __interceptor_free (/lib64/libasan.so.4+0xde508)
    #1 0x52cf86 in freesrv /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/xcom/xcom_transport.c:836
    #2 0x52ea78 in srv_unref /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/xcom/xcom_transport.c:868
    #3 0x524c30 in reply_handler_task /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/xcom/xcom_base.c:4914
    #4 0x562357 in task_loop /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/xcom/task.c:1140
    #5 0x5003b2 in xcom_taskmain2 /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/xcom/xcom_base.c:1324
    #6 0x6a278a in Gcs_xcom_proxy_impl::xcom_init(unsigned short, node_address*) /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/gcs_xcom_proxy.cc:164
    #7 0x59b3c1 in xcom_taskmain_startup /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/gcs_xcom_control_interface.cc:107
    #8 0x7fc04a2e4dd4 in start_thread (/lib64/libpthread.so.0+0x7dd4)

previously allocated by thread T3 here:
    #0 0x7fc04a5d7a88 in __interceptor_calloc (/lib64/libasan.so.4+0xdea88)
    #1 0x543604 in mksrv /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/xcom/xcom_transport.c:721
    #2 0x543b4c in addsrv /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/xcom/xcom_transport.c:755
    #3 0x54af61 in update_servers /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/xcom/xcom_transport.c:1747
    #4 0x501082 in site_install_action /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/xcom/xcom_base.c:1572
    #5 0x55447c in import_config /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/xcom/site_def.c:486
    #6 0x506dfc in handle_x_snapshot /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/xcom/xcom_base.c:5257
    #7 0x50c444 in xcom_fsm /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/xcom/xcom_base.c:5325
    #8 0x516c36 in dispatch_op /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/xcom/xcom_base.c:4510
    #9 0x521997 in acceptor_learner_task /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/xcom/xcom_base.c:4772
    #10 0x562357 in task_loop /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/xcom/task.c:1140
    #11 0x5003b2 in xcom_taskmain2 /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/xcom/xcom_base.c:1324
    #12 0x6a278a in Gcs_xcom_proxy_impl::xcom_init(unsigned short, node_address*) /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/gcs_xcom_proxy.cc:164
    #13 0x59b3c1 in xcom_taskmain_startup /home/tvale/mysql/plugin/group_replication/libmysqlgcs/src/bindings/xcom/gcs_xcom_control_interface.cc:107
    #14 0x7fc04a2e4dd4 in start_thread (/lib64/libpthread.so.0+0x7dd4)

Analysis
========================================================================
The server structure is reference counted by the associated sender_task
and reply_handler_task.
When they finish, they unreference the server, which leads to its memory
being freed.

However, the acceptor_learner_task keeps a "naked" reference to the
server structure.
If the sender_task and reply_handler_task terminate after the
acceptor_learner_task acquires its reference to the server structure,
but before it uses that reference, the acceptor_learner_task ends up
accessing the server structure after it has been freed.

Solution
========================================================================
Let the acceptor_learner_task also reference count the server structure
so it is not freed while still in use.
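
A minimal sketch of the reference-counting pattern described in the solution, assuming a simplified server struct: srv_unref and freesrv appear in the traces above, while srv_ref, the struct layout and the acceptor_learner_use() helper are assumptions.

  #include <cassert>

  // Illustrative stand-in for xcom's server struct; the real layout differs.
  struct server {
    int refcnt = 0;
  };

  static server *srv_ref(server *s) {  // assumed signature
    s->refcnt++;
    return s;
  }

  static void srv_unref(server *s) {   // name from the traces; signature assumed
    assert(s->refcnt > 0);
    if (--s->refcnt == 0) delete s;    // freesrv() in the real code
  }

  // Before the fix the acceptor_learner_task held a "naked" pointer, so the
  // other tasks dropping their references could free the server under it.
  // After the fix the task pins the server for as long as it uses it:
  void acceptor_learner_use(server *srv) {
    srv_ref(srv);    // take a reference before touching the server
    // ... read from the connection, update server state, etc. ...
    srv_unref(srv);  // release when done; frees only if this was the last reference
  }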

Reviewed-by: André Negrão <andre.negrao@oracle.com>
Reviewed-by: Venkatesh Venugopal <venkatesh.venugopal@oracle.com>
RB: 21209
rahulmalik87 pushed a commit that referenced this pull request Feb 4, 2021
…TH VS 2019 [#4] [noclose]

storage\ndb\src\ndbapi\ObjectMap.hpp(168,1): warning C4302: 'type cast': truncation from 'void *' to 'long'

Change-Id: I11ffba127bc19db15e9a50307b50532941f9fdb2
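
The warning fires because long is 32 bits on 64-bit Windows while void * is 64 bits. The commit message does not show the fix itself; a common remedy, sketched here purely as an assumption, is to cast through uintptr_t instead of long.

  #include <cstdint>

  // Hypothetical illustration of warning C4302 and a portable cast.
  std::uintptr_t pointer_to_key(void *p) {
    // long key = (long)p;  // C4302 on 64-bit Windows: truncates the pointer
    return reinterpret_cast<std::uintptr_t>(p);  // uintptr_t always fits a pointer
  }
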
altmannmarcelo pushed a commit to altmannmarcelo/percona-xtrabackup that referenced this pull request Oct 29, 2021
Increment the "start_stmt_count" variable only when the handler
has been registered successfully, and thus remove two unnecessary
decrement calls. Improve the description of the "start_stmt_count"
variable.

Change-Id: I50f269fb024002a122cf898c84fb578c50007980
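
A hedged sketch of the pattern this commit describes: increment only after successful registration, so failure paths need no compensating decrement. Everything except the start_stmt_count name is a hypothetical stand-in.

  #include <cstddef>

  struct handler_stub {
    std::size_t start_stmt_count = 0;  // counts handlers registered for this stmt

    bool register_for_stmt() { return true; }  // hypothetical registration call

    bool start_stmt() {
      if (!register_for_stmt()) return false;  // nothing incremented, nothing to undo
      ++start_stmt_count;                      // bump only after success
      return true;
    }
  };
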
altmannmarcelo pushed a commit to altmannmarcelo/percona-xtrabackup that referenced this pull request Oct 29, 2021
Patch #4: Wrong results with AND of JSON_CONTAINS and multi-valued indexes

For a multi-valued index on a JSON array field f=[1, 2], this statement
wrongly returned an empty result set:

  SELECT * FROM f WHERE JSON_CONTAINS(f, 1) AND JSON_CONTAINS(f, 2);

The cause is that get_func_mm_tree() converts the two JSON_CONTAINS() calls
to `f_idx = 1 AND f_idx = 2`, which is always false for a single-value index
but is possible for a multi-valued index.

Fixed by ANDing the two key ranges, rather than marking the condition
as always false.

This is a contribution by Yubao Liu.

Change-Id: I535fc6ce8755f4f3b6e8cbd77b4c0ee4aa685cae
altmannmarcelo pushed a commit to altmannmarcelo/percona-xtrabackup that referenced this pull request Jan 26, 2022
…close]

Make the range optimizer return AccessPaths instead of TABLE_READ_PLAN.
This is the first step of getting rid of TABLE_READ_PLAN and moving
everything into AccessPath; currently, it's just a very thin shell:

 1. TRPs are still used internally, and AccessPath is created
    at the very end.
 2. Child TRPs are still child TRPs (i.e., there are no child
    AccessPaths).
 3. All returned AccessPaths are still of the type INDEX_RANGE_SCAN,
    wrapping a TRP.
 4. Some callers still reach directly into the TRP, assuming #3.

Most callers (save for the aforementioned #4) use a set of simple wrapper
functions to access TRP-derived properties from AccessPaths; as we
continue the transformation, this is the main place we'll change the
interaction (i.e., most of the calling code will remain unchanged).

Change-Id: I3d9dc9e33c53d1e5124ea9c47b7d6d9270cd1906
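
A hypothetical sketch of the "thin shell" stage described in points 1-4: every returned path is an INDEX_RANGE_SCAN AccessPath wrapping a TRP, and callers go through small wrapper functions instead of reaching into the TRP. Only the AccessPath, TABLE_READ_PLAN and INDEX_RANGE_SCAN names come from the text; the members and the accessor are illustrative.

  struct TABLE_READ_PLAN {   // stand-in for the real TRP hierarchy
    double cost = 0.0;
  };

  struct AccessPath {
    enum Type { INDEX_RANGE_SCAN };
    Type type = INDEX_RANGE_SCAN;
    TABLE_READ_PLAN *trp = nullptr;  // point 3: the path is just a wrapper for now
  };

  // "Simple wrapper functions": callers ask the AccessPath, so later steps can
  // drop the TRP without touching most of the calling code.
  inline double GetRangeScanCost(const AccessPath &path) {
    return path.trp->cost;  // still reaches into the TRP internally (point 4)
  }
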
altmannmarcelo pushed a commit to altmannmarcelo/percona-xtrabackup that referenced this pull request Jan 26, 2022
This error happens for queries such as:

SELECT ( SELECT 1 FROM t1 ) AS a,
  ( SELECT a FROM ( SELECT x FROM t1 ORDER BY a ) AS d1 );

Query_block::prepare() for query block #4 (corresponding to the 4th
SELECT in the query above) calls setup_order() which again calls
find_order_in_list(). That function replaces an Item_ident for 'a' in
Query_block.order_list with an Item_ref pointing to query block #2.
Then Query_block::merge_derived() merges query block #4 into query
block #3. The Item_ref mentioned above is then moved to the order_list
of query block #3.

In the next step, find_order_in_list() is called for query block #3.
At this point, 'a' in the select list has been resolved to another
Item_ref, also pointing to query block #2. find_order_in_list()
detects that the Item_ref in the order_list is equivalent to the
Item_ref in the select list, and therefore decides to replace the
former with the latter. Then find_order_in_list() calls
Item::clean_up_after_removal() recursively (via Item::walk()) for the
order_list Item_ref (since that is no longer needed).

When calling clean_up_after_removal(), no
Cleanup_after_removal_context object is passed. This is the actual
error, as there should be a context pointing to query block #3 that
ensures that clean_up_after_removal() only purges Item_subselect.unit
if both of the following conditions hold:

1) The Item_subselect should not be in any of the Item trees in the
   select list of query block #3.

2) Item_subselect.unit should be a descendant of query block #3.

These conditions ensure that we only purge Item_subselect.unit if we
are sure that it is not needed elsewhere. But without the right
context, query block #2 gets purged even if it is used in the select
lists of query blocks #1 and #3.

The fix is to pass a context (for query block #3) to clean_up_after_removal().
Both of the above conditions then become false, and Item_subselect.unit is
not purged. As an additional shortcut, find_order_in_list() will not call
clean_up_after_removal() if real_item() of the order item and the select
list item are identical.

In addition, this commit changes clean_up_after_removal() so that it
requires the context to be non-null, to prevent similar errors. It
also simplifies Item_sum::clean_up_after_removal() by removing window
functions unconditionally (and adds a corresponding test case).

Change-Id: I449be15d369dba97b23900d1a9742e9f6bad4355
altmannmarcelo pushed a commit to altmannmarcelo/percona-xtrabackup that referenced this pull request Apr 26, 2022
…warnings.

Change-Id: I579c2691165865101559915239b2cd027c10ab56
patrickbirch added a commit to patrickbirch/percona-xtrabackup that referenced this pull request Jul 26, 2022
PXB-2803 Convert .rst files to .md (8.0)
satya-bodapati pushed a commit that referenced this pull request Aug 18, 2022
The NdbEventOperationImpl owns a NdbDictionary::Event by having a
pointer to the Event's implementation.

Make sure the pointer is always initialized in the constructor and mark
the pointer const. Also mark NdbEventImpl's pointer to the Event as const.
Thus the m_eventImpl->m_facade pointer path should now be enforced by the
compiler.

Change-Id: I469b39d66a4d83daf08307d980d555da1ab79827
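
A reduced sketch of the ownership described above, with the pointer initialized in the constructor and marked const; the class shapes are simplified stand-ins for the real NDB API types.

  class NdbEventImpl;  // stand-in forward declaration

  class NdbEventOperationImpl {
   public:
    // Always initialized in the constructor and never reseated afterwards,
    // so the m_eventImpl -> m_facade path is enforced by the compiler.
    explicit NdbEventOperationImpl(NdbEventImpl *event_impl)
        : m_eventImpl(event_impl) {}

   private:
    NdbEventImpl *const m_eventImpl;  // const pointer to the owned Event impl
  };
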
satya-bodapati pushed a commit that referenced this pull request Aug 18, 2022
-- Patch #1: Persist secondary load information --

Problem:
We need a way of knowing which tables were loaded to HeatWave after
MySQL restarts due to a crash or a planned shutdown.

Solution:
Add a new "secondary_load" flag to the `options` column of mysql.tables.
This flag is toggled after a successful secondary load or unload. The
information about this flag is also reflected in
INFORMATION_SCHEMA.TABLES.CREATE_OPTIONS.

-- Patch #2 --

The second patch in this worklog triggers the table reload from InnoDB
after MySQL restart.

The recovery framework recognizes that the system restarted by checking
whether tables are present in the Global State. If there are no tables
present, the framework will access the Data Dictionary and find which
tables were loaded before the restart.

This patch introduces the "Data Dictionary Worker" - a MySQL service
recovery worker whose task is to query the INFORMATION_SCHEMA.TABLES
table from a separate thread and find all tables whose secondary_load
flag is set to 1.

All tables that were found in the Data Dictionary will be appended to
the list of tables that have to be reloaded by the framework from
InnoDB.

If an error occurs during restart recovery, we will not mark the recovery
as failed. This is done because the types of failures that can occur
when the tables are reloaded after a restart are less critical compared
to previously existing recovery situations. Additionally, this code will
soon have to be adapted for the next worklog in this area so we are
proceeding with the simplest solution that makes sense.

A Global Context variable m_globalStateEmpty is added which indicates
whether the Global State should be recovered from an external source.

-- Patch #3 --

This patch adds the "rapid_reload_on_restart" system variable. This
variable is used to control whether tables should be reloaded after a
restart of mysqld or the HeatWave plugin. This variable is persistable
(i.e., SET PERSIST RAPID_RELOAD_ON_RESTART = TRUE/FALSE).

The default value of this variable is set to false.

The variable can be modified in OFF, IDLE, and SUSPENDED states.

-- Patch #4 --

This patch refactors the recovery code by removing all recovery-related
code from ha_rpd.cc and moving it to separate files:

  - ha_rpd_session_factory.h/cc:
  These files contain the MySQLAdminSessionFactory class, which is used
to create admin sessions in separate threads that can be used to issue
SQL queries.

  - ha_rpd_recovery.h/cc:
  These files contain the MySQLServiceRecoveryWorker,
MySQLServiceRecoveryJob and ObjectStoreRecoveryJob classes which were
previously defined in ha_rpd.cc. This file also contains a function that
creates the RecoveryWorkerFactory object. This object is passed to the
constructor of the Recovery Framework and is used to communicate with
the other section of the code located in rpdrecoveryfwk.h/cc.

This patch also renames rpdrecvryfwk to rpdrecoveryfwk for better
readability.

The include relationship between the files is shown on the following
diagram:

        rpdrecoveryfwk.h◄──────────────rpdrecoveryfwk.cc
            ▲    ▲
            │    │
            │    │
            │    └──────────────────────────┐
            │                               │
        ha_rpd_recovery.h◄─────────────ha_rpd_recovery.cc──┐
            ▲                               │           │
            │                               │           │
            │                               │           │
            │                               ▼           │
        ha_rpd.cc───────────────────────►ha_rpd.h       │
                                            ▲           │
                                            │           │
            ┌───────────────────────────────┘           │
            │                                           ▼
    ha_rpd_session_factory.cc──────►ha_rpd_session_factory.h

Other changes:
  - In agreement with Control Plane, the external Global State is now
  invalidated during recovery framework startup if:
    1) Recovery framework recognizes that it should load the Global
    State from an external source AND,
    2) rapid_reload_on_restart is set to OFF.

  - Addressed review comments for Patch #3, rapid_reload_on_restart is
  now also settable while plugin is ON.

  - Provide a single entry point for processing external Global State
  before starting the recovery framework loop.

  - Change when the Data Dictionary is read. Now we will no longer wait
  for the HeatWave nodes to connect before querying the Data Dictionary.
  We will query it when the recovery framework starts, before accepting
  any actions in the recovery loop.

  - Change the reload flow by inserting fake global state entries for
  tables that need to be reloaded instead of manually adding them to a
  list of tables scheduled for reload. This method will be used for the
  next phase where we will recover from Object Storage so both recovery
  methods will now follow the same flow.

  - Update secondary_load_dd_flag added in Patch #1.

  - Increase timeout in wait_for_server_bootup to 300s to account for
  long MySQL version upgrades.

  - Add reload_on_restart and reload_on_restart_dbg tests to the rapid
  suite.

  - Add PLUGIN_VAR_PERSIST_AS_READ_ONLY flag to "rapid_net_orma_port"
  and "rapid_reload_on_restart" definitions, enabling their
  initialization from persisted values along with "rapid_bootstrap" when
  it is persisted as ON.

  - Fix numerous clang-tidy warnings in recovery code.

  - Prevent the suspended_basic and secondary_load_dd_flag tests from
  running on ASAN builds due to an existing issue when reinstalling the
  RAPID plugin.

-- Bug#33752387 --

Problem:
A shutdown of MySQL causes a crash in queries fired by DD worker.

Solution:
Prevent MySQL from killing DD worker's queries by instantiating a
DD_kill_immunizer before the queries are fired.

-- Patch #5 --

Problem:
A table can be loaded before the DD Worker queries the Data Dictionary.
This means that table will be wrongly processed as part of the external
global state.

Solution:
If the table is present in the current in-memory global state we will
not consider it as part of the external global state and we will not
process it by the recovery framework.

-- Bug#34197659 --

Problem:
If a table reload after restart causes OOM the cluster will go into
RECOVERYFAILED state.

Solution:
Recognize when the tables are being reloaded after restart and do not
move the cluster into RECOVERYFAILED. In that case only the current
reload will fail and the reload for other tables will be attempted.

Change-Id: Ic0c2a763bc338ea1ae6a7121ff3d55b456271bf0
altmannmarcelo pushed a commit to altmannmarcelo/percona-xtrabackup that referenced this pull request May 4, 2023
…reams

New classes SecureSocketInputStream and SecureSocketOutputStream
take a const NdbSocket reference. The existing classes
SocketInputStream and SocketOutputStream are re-implemented
to use the NdbSocket versions; in this case, any SSL * will
be fetched from the SSL socket table.

Change-Id: Iaacb7be5e842bdf7da1b7674eaef90886bac6671
altmannmarcelo pushed a commit to altmannmarcelo/percona-xtrabackup that referenced this pull request May 4, 2023
…ting_sharing

ASAN reports a use-after-free in the exit-handlers of the
routing-sharing integration tests:

AddressSanitizer: heap-use-after-free on address ...
   ...
   #4 0x708d05 in std::default_delete<SharedRestartableRouter>::
      operator()(SharedRestartableRouter*) const
      .../include/c++/9/bits/unique_ptr.h:81:2
   #5 0x708c98 in std::unique_ptr<SharedRestartableRouter,
      std::default_delete<SharedRestartableRouter> >::~unique_ptr()
      .../include/c++/9/bits/unique_ptr.h:292:4
   #6 0x6a7448 in std::array<std::unique_ptr<SharedRestartableRouter,
      std::default_delete<SharedRestartableRouter> >, 3ul>::~array()
      .../include/c++/9/array:94:12
   #7 0x7f160a5e68d6 in __run_exit_handlers
      .../glibc-2.31/stdlib/exit.c:108:8

... as the RestartableRouters are destructed after their process-manager
is destructed.

- The process manager is owned by the test
- The RestartableRouters are 'static' and outlive the test.

Change
------

- explicitly free the intermediate routers at test-suite teardown to
  ensure the proper sequence.

Change-Id: Id405fb26b0519c49820a58b5f70d0ee59d2cb83f
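
A hedged sketch of the teardown change, assuming a googletest fixture: the static routers are released explicitly at suite teardown instead of waiting for exit handlers. Only SharedRestartableRouter is a name from the trace; the fixture and member names are illustrative.

  #include <array>
  #include <memory>
  #include <gtest/gtest.h>

  struct SharedRestartableRouter {};  // stand-in for the real helper class

  class RoutingSharingTest : public ::testing::Test {
   public:
    static void TearDownTestSuite() {
      // explicitly free the intermediate routers at test-suite teardown,
      // instead of relying on static destruction order at process exit
      for (auto &router : routers_) router.reset();
    }

   protected:
    // static, so the routers outlive any single test in the suite
    static std::array<std::unique_ptr<SharedRestartableRouter>, 3> routers_;
  };

  std::array<std::unique_ptr<SharedRestartableRouter>, 3>
      RoutingSharingTest::routers_{};
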
altmannmarcelo pushed a commit to altmannmarcelo/percona-xtrabackup that referenced this pull request May 4, 2023
Potential read of uninitialized dataNodes[0] when no alive nodes are
available. Fix by checking for zero data nodes and failing the test.

trunk/storage/ndb/test/src/UtilTransactions.cpp:1710:9: warning: 3rd
function call argument is an uninitialized value [clang-analyzer-
core.CallAndMessage]

Change-Id: I7c1e362eb0d62bdb560967144ca39966aed8a3c1
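
A small sketch of the guard described above, assuming the alive node ids arrive in a vector; the function name and return values are hypothetical stand-ins for the real test code in UtilTransactions.cpp.

  #include <vector>

  int pick_data_node(const std::vector<int> &dataNodes) {
    if (dataNodes.empty()) {
      return -1;  // zero alive data nodes: fail the test instead of reading junk
    }
    return dataNodes[0];  // safe: the entry is known to be initialized
  }
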
altmannmarcelo pushed a commit to altmannmarcelo/percona-xtrabackup that referenced this pull request May 4, 2023
  # This is the 1st commit message:

  WL#15280: HEATWAVE SUPPORT FOR MDS HA

  Problem Statement
  -----------------
  Currently customers cannot enable heatwave analytics service to their
  HA DBSystem or enable HA if they are using Heatwave enabled DBSystem.
  In this change, we attempt to remove this limitation and provide
  failover support of heatwave in an HA enabled DBSystem.

  High Level Overview
  -------------------
  To support heatwave with HA, we extended the existing feature of auto-
  reloading of tables to heatwave on MySQL server restart (WL-14396). To
  provide seamless failover functionality to tables loaded to heatwave,
  each node in the HA cluster (group replication) must have the latest
  view of tables which are currently loaded to heatwave cluster attached
  to the primary, i.e., the secondary_load flag should be in-sync always.

  To achieve this, we made the following changes -
    1. replicate secondary load/unload DDL statements to all the active
       secondary nodes by writing the DDL into the binlog, and
    2. Control how secondary load/unload is executed when heatwave cluster
       is not attached to node executing the command

  Implementation Details
  ----------------------
  The current implementation depends on two key assumptions -
   1. All MDS DBSystems will have RAPID plugin installed.
   2. No non-MDS system will have the RAPID plugin installed.

  Based on these assumptions, we made certain changes w.r.t. how the
  server handles execution of secondary load/unload statements.
   1. If secondary load/unload command is executed from a mysql client
      session on a system without RAPID plugin installed (i.e., non-MDS),
      instead of an error, a warning message will be shown to the user,
      and the DDL is allowed to commit.
   2. If secondary load/unload command is executed from a replication
      connection on an MDS system without heatwave cluster attached,
      instead of throwing an error, the DDL is allowed to commit.
   3. If no error is thrown from secondary engine, then the DDL will
      update the secondary_load metadata and write a binlog entry.

  Writing to the binlog implies that all consumers of the binlog now need to
  handle this DDL gracefully. This has an adverse effect on Point-in-time
  Recovery. If the PITR backup is taken from a DBSystem with heatwave, it
  may contain traces of secondary load/unload statements in its binlog.
  If such a backup is used to restore a new DBSystem, it will cause failure
  while trying to execute statements from its binlog because
   a) DBSystem will not have a heatwave cluster attached at this time, and
   b) Statements from binlog are executed from standard mysql client
      connection, thus making them indistinguishable from user executed
      command.
  Customers will be prevented (by control plane) from using PITR functionality
  on a heatwave enabled DBSystem until there is a solution for this.

  Testing
  -------
  This commit changes the behavior of secondary load/unload statements, so it
   - adjusts existing tests' expectations, and
   - adds a new test validating new DDL behavior under different scenarios

  Change-Id: Ief7e9b3d4878748b832c366da02892917dc47d83

  # This is the commit message #2:

  WL#15280: HEATWAVE SUPPORT FOR MDS HA (PITR SUPPORT)

  Problem
  -------
  A PITR backup taken from a heatwave enabled system could have traces
  of secondary load or unload statements in binlog. When such a backup
  is used to restore another system, it can cause failure because of
  following two reasons:

  1. Currently, even if the target system is heatwave enabled, heatwave
  cluster is attached only after PITR restore phase completes.
  2. When entries from binlogs are applied, a standard mysql client
  connection is used. This makes it indistinguishable from other user
  session.

  Since secondary load (or unload) statements are meant to throw an error
  when they are executed by a user in the absence of a healthy heatwave
  cluster, PITR restore workflow will fail if binlogs from the backup
  have any secondary load (or unload) statements in them.

  Solution
  --------
  To avoid PITR failure, we are introducing a new system variable
  rapid_enable_delayed_secondary_ops. It controls how load or unload
  commands are to be processed by rapid plugin.

    - When turned ON, the plugin silently skips the secondary engine
      operation (load/unload) and returns success to the caller. This
      allows secondary load (or unload) statements to be executed by the
      server in the absence of any heatwave cluster.
    - When turned OFF, it follows the existing behavior.
    - The default value is OFF.
    - The value can only be changed when rapid_bootstrap is IDLE or OFF.
    - This variable cannot be persisted.

  In PITR workflow, Control Plane would set the variable at the start of
  PITR restore and then reset it at the end of workflow. This allows the
  workflow to complete without failure even when heatwave cluster is not
  attached. Since metadata is always updated when secondary load/unload
  DDLs are executed, when heatwave cluster is attached at a later point
  in time, the respective tables get reloaded to heatwave automatically.

  Change-Id: I42e984910da23a0e416edb09d3949989159ef707

  # This is the commit message #3:

  WL#15280: HEATWAVE SUPPORT FOR MDS HA (TEST CHANGES)

  This commit adds new functional tests for the MDS HA + HW integration.

  Change-Id: Ic818331a4ca04b16998155efd77ac95da08deaa1

  # This is the commit message #4:

  WL#15280: HEATWAVE SUPPORT FOR MDS HA
  BUG#34776485: RESTRICT DEFAULT VALUE FOR rapid_enable_delayed_secondary_ops

  This commit does two things:
  1. Add a basic test for newly introduced system variable
  rapid_enable_delayed_secondary_ops, which controls the behavior of
  alter table secondary load/unload ddl statements when rapid cluster
  is not available.

  2. It also restricts the DEFAULT value setting for the system variable
  So, following is not allowed:
  SET GLOBAL rapid_enable_delayed_secondary_ops = default
  This variable is to be used in restricted scenarios, and the control plane
  only sets it to ON/OFF before and after PITR apply. Allowing it to be set
  to DEFAULT has no practical use.

  Change-Id: I85c84dfaa0f868dbfc7b1a88792a89ffd2e81da2

  # This is the commit message #5:

  Bug#34726490: ADD DIAGNOSTICS FOR SECONDARY LOAD / UNLOAD DDL

  Problem:
  --------
  If secondary load or unload DDL gets rolled back due to some error after
  it had loaded / unloaded the table in heatwave cluster, there is no undo
  of the secondary engine action. Only secondary_load flag update is
  reverted and binlog is not written. From User's perspective, the table
  is loaded and can be seen on performance_schema. There are also no
  error messages printed to notify that the ddl didn't commit. This
  creates a problem to debug any issue in this area.

  Solution:
  ---------
  The partial undo of secondary load/unload ddl will be handled in
  bug#34592922. In this commit, we add diagnostics to reveal if the ddl
  failed to commit, and from what stage.

  Change-Id: I46c04dd5dbc07fc17beb8aa2a8d0b15ddfa171af

  # This is the commit message #6:

  WL#15280: HEATWAVE SUPPORT FOR MDS HA (TEST FIX)

  Since ALTER TABLE SECONDARY LOAD / UNLOAD DDL statements now write
  to binlog, from Heatwave's perspective, SCN is bumped up.

  In this commit, we are adjusting expected SCN values in certain
  tests which do secondary load/unload and expect the SCN to match.

  Change-Id: I9635b3cd588d01148d763d703c72cf50a0c0bb98

  # This is the commit message #7:

  Adding MTR tests for ML in rapid group_replication suite

  Added MTR tests with Heatwave ML queries in
  an HA setup.

  Change-Id: I386a3530b5bbe6aea551610b6e739ab1cf366439

  # This is the commit message #8:

  WL#15280: HEATWAVE SUPPORT FOR MDS HA (MTR TEST ADJUSTMENT)

  In this commit we have adjusted the existing test to work with the
  new MTR test infrastructure, which extends the functionality to the
  HA landscape. With this change, a lot of manual settings have become
  redundant and are thus removed in this commit.

  Change-Id: Ie1f4fcfdf047bfe8638feaa9f54313d509cbad7e

  # This is the commit message #9:

  WL#15280: HEATWAVE SUPPORT FOR MDS HA (CLANG-TIDY FIX)

  Fix clang-tidy warnings found in previous change#16530, patch#20

  Change-Id: I15d25df135694c2f6a3a9146feebe2b981637662

Change-Id: I3f3223a85bb52343a4619b0c2387856b09438265
altmannmarcelo pushed a commit to altmannmarcelo/percona-xtrabackup that referenced this pull request Oct 26, 2023
Part of WL#15135 Certificate Architecture

In Ndb_cluster_connection, this patch provides a new top-level
method configure_tls(). It also implements TLS initialization
in connect(), calling down through the TransporterFacade layer
to TransporterRegistry.

In the MySQL server this adds the new read-only configuration
option ndb-tls-search-path, with a compile-time default that is
configurable in CMake, WITH_NDB_TLS_SEARCH_PATH.

Unmodified API nodes that do not call into configure_tls() will
still be able to make TLS connections if keys are found
somewhere in the default search path.

Change-Id: Id1d046ff3c5a48a30131c3d15274f5ed625933a9
altmannmarcelo pushed a commit to altmannmarcelo/percona-xtrabackup that referenced this pull request Oct 26, 2023
On Win32, every binary needs one instance of openssl/applink.c.

MySQL has one in client_authentication.cc, and this one is present
in the server and in the client library.

This patch adds instances to ndb_sign_keys, ndb_mgmd, ndbd,
and testNodeCertificate-t, and includes fixes for other
assorted compiler errors and warnings on Win32.

Change-Id: I2d7f940b92ddac7d860d2c6fc2d98ead23e195b2
altmannmarcelo pushed a commit to altmannmarcelo/percona-xtrabackup that referenced this pull request Oct 26, 2023
In class Transporter, add two new member variables:
  m_require_tls is a boolean TLS requirement
  m_encrypted is true only when TLS is actually in use

A corresponding change in struct TransporterConfiguration also
adds authMode.

Some application logic is added in IPConfig.cpp to configure
the new variables.

On the server side, TransporterRegistry always uses a TLS
authenticator. On the client side, all Transporter clients
initialize a SocketAuthSimple authenticator, but then TCP
Transporter clients delete this in the TCP_Transporter
constructor and replace it with a TLS authenticator.

Change-Id: I6392eecfc712f8a8f500697f34324eea01d29a8c
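
A reduced sketch of the two new members described above; the rest of the Transporter class is omitted.

  class Transporter {
   protected:
    bool m_require_tls = false;  // configuration: this link must use TLS
    bool m_encrypted = false;    // runtime: true only once TLS is actually in use
  };
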
altmannmarcelo pushed a commit to altmannmarcelo/percona-xtrabackup that referenced this pull request Oct 26, 2023
In the NDB configuration, add boolean options RequireTls and
RequireCertificate to the [MGM] section. Both options default to
false.

Add a new test testMgmd -n MgmdWithoutCertificate

In NdbStdOpt, add the --ndb-mgm-tls command-line option. The allowed
values are "relaxed" and "strict". The default is "relaxed".  This option
will be used for utility programs, allowing the user to specify the
TLS-related behavior of MGM clients.

Change-Id: Id32bb8805ca19a8cf8b52f45c54a7be4d912c5e4
aybek pushed a commit to aybek/percona-xtrabackup that referenced this pull request May 30, 2024
When built with ASAN, a use-after-free is reported for the TcpPortPool.

AddressSanitizer: heap-use-after-free on address 0x60200019f190 at pc
0x00000076a18d bp 0x7fff51e7d1d0 sp 0x7fff51e7d1c0

    #4 0x770b73 in UniqueId::ProcessUniqueIds::erase(unsigned int)
       ../router/tests/helpers/tcp_port_pool.h:112
    #5 0x770c48 in UniqueId::~UniqueId()
       ../router/tests/helpers/tcp_port_pool.cc:234
    ...
    #12 0x82faa3 in testing::UnitTest::~UnitTest()
       ../extra/googletest/googletest-release-1.12.0/googletest/src/gtest.cc:5496
    #13 0x7f5fe085ace8 in __run_exit_handlers (/lib64/libc.so.6+0x39ce8)

0x60200019f190 is located 0 bytes inside of 16-byte region
[0x60200019f190,0x60200019f1a0)
freed by thread T0 here:
    #0 0x7f5fe3cbd10f in operator delete(void*, unsigned long)
       (/lib64/libasan.so.6+0xb710f)
    #1 0x7f5fe085ace8 in __run_exit_handlers (/lib64/libc.so.6+0x39ce8)

Background
==========

__run_exit_handlers destroys "static" and "global" variables in reverse
order of their creation.

googletest's UnitTest singleton is a static, and the TcpPortPool also has
ProcessUniqueIds, which contains the process-wide unique-ids.

At construct: unittest -> tcp-port-pool -> process-unique-ids
At destruct : process-unique-ids -> tcp-port-pool -> 💥

The use-after-free happens as the process-unique-ids static is
destructed before the tcp-port-pool, which then tries to erase its Ids
from the process-unique-ids.

Change
======

- extend the lifetime of the process-unique-ids to after the last use of
  the tcp-port-pool via a std::shared_ptr<>

Change-Id: I75b8b781e1d240f18ca72f2c86182639a7699f06
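
A minimal sketch of the lifetime fix, assuming simplified helpers: the process-wide unique-id registry is reachable only through a std::shared_ptr, so every live UniqueId keeps it alive regardless of static destruction order. The class shapes are simplified from the real router test helpers.

  #include <memory>
  #include <set>

  class ProcessUniqueIds {
   public:
    static std::shared_ptr<ProcessUniqueIds> instance() {
      // each caller shares ownership, extending the registry's lifetime
      static auto ids = std::make_shared<ProcessUniqueIds>();
      return ids;
    }
    std::set<unsigned> ids;
  };

  class UniqueId {
   public:
    explicit UniqueId(unsigned id)
        : process_ids_(ProcessUniqueIds::instance()), id_(id) {
      process_ids_->ids.insert(id_);
    }
    ~UniqueId() { process_ids_->ids.erase(id_); }  // registry is still alive here

   private:
    std::shared_ptr<ProcessUniqueIds> process_ids_;  // keeps the registry alive
    unsigned id_;
  };
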
aybek pushed a commit to aybek/percona-xtrabackup that referenced this pull request May 30, 2024
…nt on Windows and posix [#4]

Introduce quoting functions suitable for POSIX shell (sh) and running
C/C++ programs on Windows via CMD.EXE.

Use them when running a program via ssh. A simple heuristic guesses the
kind of quoting needed on the remote host: if a \ appears in any argument,
use the quoting function for Windows; if a / appears in any argument, use
the quoting function for POSIX.

Change-Id: I851eb3da22d716d181319e825e888631cd16aeb7
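
A hedged sketch of the heuristic described above; the quoting functions are deliberately simplified (the real ones must handle embedded quotes and backslashes) and all names are illustrative.

  #include <string>
  #include <vector>

  std::string quote_posix(const std::string &a) { return "'" + a + "'"; }
  std::string quote_cmd(const std::string &a) { return "\"" + a + "\""; }

  // Guess the quoting style for the remote host: a backslash in any argument
  // suggests Windows/CMD.EXE, otherwise fall back to POSIX sh quoting
  // (the fallback choice is an assumption; the text only gives the two rules).
  std::string quote_remote_command(const std::vector<std::string> &args) {
    bool windows = false;
    for (const auto &a : args)
      if (a.find('\\') != std::string::npos) windows = true;
    std::string cmd;
    for (const auto &a : args)
      cmd += (windows ? quote_cmd(a) : quote_posix(a)) + " ";
    return cmd;
  }
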
aybek pushed a commit to aybek/percona-xtrabackup that referenced this pull request May 30, 2024
Problem:
Starting `ndb_mgmd --bind-address` may potentially cause abnormal
program termination in the MgmtSrvr destructor when ndb_mgmd restarts itself.

  Core was generated by `ndb_mgmd --defa'.
  Program terminated with signal SIGABRT,   Aborted.
  #0  0x00007f8ce4066b8f in raise () from /lib64/libc.so.6
  #1  0x00007f8ce4039ea5 in abort () from /lib64/libc.so.6
  #2  0x00007f8ce40a7d97 in __libc_message () from /lib64/libc.so.6
  #3  0x00007f8ce40af08c in malloc_printerr () from /lib64/libc.so.6
  #4  0x00007f8ce40b132d in _int_free () from /lib64/libc.so.6
  #5  0x00000000006e9ffe in MgmtSrvr::~MgmtSrvr (this=0x28de4b0) at
mysql/8.0/storage/ndb/src/mgmsrv/MgmtSrvr.cpp:890
  #6  0x00000000006ea09e in MgmtSrvr::~MgmtSrvr (this=0x2) at
mysql/8.0/storage/ndb/src/mgmsrv/MgmtSrvr.cpp:849
  #7  0x0000000000700d94 in mgmd_run () at
mysql/8.0/storage/ndb/src/mgmsrv/main.cpp:260
  #8  0x0000000000700775 in mgmd_main (argc=<optimized out>,
argv=0x28041d0) at mysql/8.0/storage/ndb/src/mgmsrv/main.cpp:479

Analysis:
While starting up, the ndb_mgmd will allocate memory for bind_address in
order to potentially rewrite the parameter. When ndb_mgmd restarts itself,
the memory is released and the dangling pointer causes a double free.

Fix:
Drop support for bind_address=[::]; it is not documented anywhere, is
not useful, and doesn't work.
This means the need to rewrite bind_address is gone, and the bind_address
argument needs neither alloc nor free.

Change-Id: I7797109b9d8391394587188d64d4b1f398887e94