sql/memory: Fix build on musl #455
base: 8.0
Conversation
…es [postfix] After the rename of the replication terms for source and replica in the error messages, the router tests started to fail because the expected error messages no longer matched. Change ------ In the binlog related tests, changed the expected error-message texts to accept the updated terms: - allow 'master' and 'source' - allow 'slave' and 'replica' Change-Id: I6784fd8fccc287e5321330d5c6fa9611c6b336e8
…d on PB2 weekly 8.0 Test gr_acf_group_member_maintenance was failing due to an error log message `failed registering replica on source`, though the message is expected since the test triggers source failures and the corresponding asynchronous replication channel reconnection and failover. To solve this, we now suppress the error log message. Change-Id: I44906c06a84c7a2ad313a0af015832a4f665b84c
… Signal (get_store_key at sql/sql_select.cc:2383) These are two related but distinct problems manifested in the shrinkage of key definitions for derived tables or common table expressions, implemented in JOIN::finalize_derived_keys(). The problem in Bug#34572040 is that we have two references to one CTE, each with a valid key definition. The function will first loop over the first reference (cte_a) and move its used key from position 0 to position 1. Next, it will attempt to move the key for the second reference (cte_b) from position 4 to position 2. However, for each iteration, the function will calculate used key information. On the first iteration, the values are correct, but since key #1 has been moved into position #0, the old information is invalid. The problem is thus that for subsequent iterations we read data that has been invalidated by earlier key moves. The best solution to the problem is to move the keys for all references to the CTE in one operation. This way, we can calculate used key information safely, before any move operation has been performed. The problem in Bug#34634469 is also related to having more than one reference to a CTE, but in this case the first reference (ref_3) has a key in position 5 which is moved to position 0, and the second reference (ref_4) has a key in position 3 that is moved to position 1. However, the key parts of the first key will overlap with the key parts of the second key after the first move, thus invalidating the key structure during the copy. The actual problem is that we move a higher-numbered key (5) before a lower-numbered key (3), which in this case makes it impossible to find an empty space for the moved key. The solution to this problem is to ensure that keys are moved in increasing key order. The patch changes the algorithm as follows: - When identifying a derived table/common table expression, ensure that all its keys are moved in one operation (at least for those references from the same query block). - First, collect information about all key uses: hash key, unique index keys and actual key references. For the key references, also populate a mapping array that enumerates table references with key references in order of increasing key number. Also clear used key information for references that do not use keys. - For each table reference with a key reference, in increasing key order, move the used key into the lowest available position. This ensures that used entries are never overwritten. - When all table references have been processed, remove unused key definitions. Change-Id: I938099284e34a81886621f6a389f34abc51e78ba
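A minimal, self-contained sketch of the key-compaction order described above; `KeyUse`, its fields and the vector layout are illustrative only, not the server's actual data structures:

```cpp
#include <algorithm>
#include <vector>

// One CTE/derived-table reference and the key position it currently uses.
struct KeyUse {
  int table_ref;  // which reference to the CTE
  int key_no;     // key position currently used by that reference
};

// Compact all used keys of one derived table in a single operation.
void CompactUsedKeys(std::vector<KeyUse> &uses) {
  // Process references ordered by the key they use, smallest key first.
  std::sort(uses.begin(), uses.end(),
            [](const KeyUse &a, const KeyUse &b) { return a.key_no < b.key_no; });
  int next_free = 0;
  for (KeyUse &use : uses) {
    // The destination slot is always <= use.key_no, so the move can never
    // clobber a key that a later (higher-numbered) reference still needs.
    use.key_no = next_free++;
  }
}
```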
The GSS plugin for SASL appears to have leaks. This causes the LDAP SASL client plugin to fail with ASAN and valgrind. Fixed by: 1. making sure sasl_client_done is called by the client's deinit method. 2. adding valgrind and ASAN suppressions to cover the library leaks. Change-Id: Iceb6fbb2d9483b2fcc51c2a0f004735b288bb4f0
…g on PB2 - Windows Disable test gr_primary_mode_group_operations_net_partition_4 on Windows until the bug is fixed. Change-Id: I32e247363eefab08372989c24670e5238c720f2d
The failing queries are 'semi-joins', which semantically are expected to eliminate join duplicates after the first match has been found - contrary to normal joins, where all matching rows should be returned (in any order). Thus the different semi-join iterators in the mysql server do some kind of skip-read after the first matching set of row(s) has been found for a semi-join nest of tables. This may also skip over result rows from other tables depending on the table(s) being skip-read, i.e. tables in the same query tree branch as the rows being skipped. That is fine when these tables are part of the same semi-join as the one being skipped - then this is intended behavior. However, we sometimes end up with query plans where the semi-joined tables are evaluated first, and the inner-joined tables end up depending on the semi-joined parts. This is usually (only?) seen when the 'duplicate eliminate' iterators are used in the query plan. Note that this effectively turns the table order in the originating SQL query upside down. E.g. the pseudo SQL query: select ... from t1 where <column1> in (select <column2> from t2 where <pred>) might get the query plan: duplicate eliminate (select <column2> from t2) join t1 on <pred> Thus, we have a plan where t1 depends on a semi-joined t2, without being part of the semi-join itself. However, it will have t2 as an ancestor in the SPJ query tree if the query is pushed -> t1 becomes subject to the t2 duplicate elimination, effectively a skip-read operation. Due to the finite size of the batch row buffers when returning SPJ results to the API, we might need to return t1 result rows over multiple batches, with the t2 result rows being reused/repeated. Thus they will appear as duplicated to the iterators and be skipped over, together with the t1 rows which should not have been skipped. The patch identifies such query plans where non-semi-joined tables depend on semi-joined tables, _and_ both tables are scan operations subject to such batching mechanisms. We then reject pushing of depending scan-tables that are not an intended part of the semi-join itself. Note that such query plans seem to be a rare corner case. The patch also changes some test cases: - Added two variants of existing test cases where coverage of duplicate eliminating iterators was not sufficient - Added SEMIJOIN(LOOSESCAN) hint to ensure that the intended plans were produced - Added two test cases for the bug itself. That ^ smoked out a query plan which returned incorrect results after modification. With the patch, pushability was reduced and the result became correct. Change-Id: Iae890ef702cac8a50564d5fb0e493a4715c4dafd
Windows specific: Replaced use of jemalloc for memory management within OpenSSL (on Windows) via the call to CRYPTO_set_mem_functions in mysqld.cc. The OpenSSL memory management functions used on Windows now use std::malloc, std::free and std::realloc instead. The memory management code in my_malloc.cc is refactored using function templates to avoid duplicating the performance schema instrumentation and debugging code. Change-Id: I4df2d3974f215f3a8a9a7bd0fd82dd54c96fecb7
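A hedged sketch of the redirection described above, using OpenSSL's CRYPTO_set_mem_functions() hook; the wrapper names are illustrative and the real code in mysqld.cc differs:

```cpp
#include <cstdlib>

#include <openssl/crypto.h>

// Wrappers matching the CRYPTO_set_mem_functions() contract; the file/line
// arguments OpenSSL passes for debugging are intentionally ignored here.
static void *wrap_malloc(size_t size, const char *, int) { return std::malloc(size); }
static void *wrap_realloc(void *ptr, size_t size, const char *, int) {
  return std::realloc(ptr, size);
}
static void wrap_free(void *ptr, const char *, int) { std::free(ptr); }

void use_std_allocator_for_openssl() {
  // Must be called before OpenSSL makes its first allocation; returns 0 on failure.
  CRYPTO_set_mem_functions(wrap_malloc, wrap_realloc, wrap_free);
}
```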
…n PB2 Test gr_member_actions_error_on_read_on_mpm_to_spm tests how a group mode switch handles a failure during the update of the member actions table, causing the member to leave the group. That is achieved by enabling a debug flag that returns an error when we close the member actions table. However, that flag, which is set on a common code path, can affect other steps of the group mode switch, which will also fail and cause the member to leave the group. The test was failing because the expected error message was not logged into the error log, which means that the group mode switch errored out before reaching the member actions table error. Given that the point at which the group mode switch fails is not deterministic, we remove the error log message assert from the test. Change-Id: I42c9e3564f79c15b80ae99a1c2edee634be0f524
…d on weekly-trunk Test gr_acf_start_failover_channels_error_on_bootstrap was failing due to an error log message ``` [ERROR] [MY-013211] [Repl] Plugin group_replication reported: 'Error while sending message. Context: primary election process.' ``` though the message is expected since the test triggers group bootstrap errors, which include a primary election. To solve this, we now suppress the error log message. Change-Id: I0eb504fec68189191dc0591effd56ba26f8b3283
gr_parallel_start_uninstall forces a race condition between `UNINSTALL PLUGIN group_replication;` and `START GROUP_REPLICATION;` Although the test executes the `UNINSTALL` asynchronously first, there is the possibility that the `START` is executed first. `START` enables `super_read_only`, disabling it after the member joins the group and is a primary. When the `UNINSTALL` is allowed to execute once the `START` is complete, that may happen before `super_read_only` is disabled. If that happens, the `UNINSTALL` will hit the error: ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement Since the above error is possible, we added it to the possible error statuses of `UNINSTALL PLUGIN group_replication;`. Change-Id: I9847def076ec1236a2e273befbef52d3fcdf1376
gr_parallel_stop_dml forces the execution of
`INSERT INTO t1 VALUES(1)`
while
`STOP GROUP_REPLICATION`
is ongoing.
The `INSERT` must fail, throwing one of the errors:
1) `Error on observer while running replication hook
'before_commit'`
when the plugin is stopping.
2) `The MySQL server is running with the
--super-read-only option so it cannot execute this statement`
when the plugin already stopped and enabled `super_read_only`.
The test was not considering the second error, hence we added it.
Change-Id: I1d4e539cea1a37c11c9e133f92add3615f7aabf0
corruption if both are set. Issue : Check table shall check whether both the version and instant bits are set for a compact/dynamic row. This is a corruption scenario and check table shall report it. Fix : Check table checks the INSTANT/VERSION bits in the records and reports corruption if both are set. Change-Id: I551d6d6296d8df052bcca9450e7856a24a2c5416
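A minimal illustration of the check described above, with hypothetical flag values; the real InnoDB record-header constants and accessors may differ:

```cpp
#include <cstdint>

// Hypothetical bit positions in the record-header "info bits" byte; used
// only to illustrate the corruption rule from the commit message.
constexpr uint8_t kInstantFlag = 0x80;
constexpr uint8_t kVersionFlag = 0x40;

// A compact/dynamic record may carry one of the two markers, never both;
// CHECK TABLE treats a record with both bits set as corrupt.
bool record_flags_are_corrupt(uint8_t info_bits) {
  return (info_bits & kInstantFlag) != 0 && (info_bits & kVersionFlag) != 0;
}
```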
When a table is first created with a reference to a non-existing variable, the derived type is text. The second time an identical table is created, the derived type is mediumblob. This is due to an actual variable being created on the first table creation, and this variable is then used on the second table creation. The main problem with this is that the variable is created with a binary character set, whereas the first table creation is given the correct default character set. This problem is fixed by assigning the correct default character set to the source item when creating the user variable. Even after this fix, there is still a minor difference between the two table creations: the first table gets a column with maximum length 262140 bytes, whereas the second table gets a column with maximum length 4294967295 bytes. This is because the first creation utilizes a default character type, whereas the second utilizes the created user variable, and those instances use different maximum lengths. Fixing this will require a large rewrite and is not deemed worthwhile for the time being. Change-Id: I8cd1f946dbf87047c261bfeca9d8ba7d23a9629c
Post push fix, re-recorded spj_rqg_hypergraph.results Change-Id: Ifcb0cfabef31004b5aa2af32f24736810cc2ffec
Post push fix: static inline functions std_realloc and redirecting_realloc are only used when USE_MALLOC_WRAPPER is not defined, so make these functions conditionally compiled to avoid build breakage when compiling in maintainer mode (-Werror). Change-Id: If98ef4bba95289fbdd92c9cf9808ab83e4fe1d42
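A minimal sketch of the guard this describes; the placeholder body below is not the real helper in my_malloc.cc:

```cpp
#include <cstdlib>

#ifndef USE_MALLOC_WRAPPER
// Only the non-wrapped code path references this helper; compiling it
// conditionally keeps maintainer-mode builds (-Werror) from breaking when
// the wrapper path leaves it unused.
static inline void *std_realloc(void *ptr, size_t size) {
  return std::realloc(ptr, size);
}
#endif  // !USE_MALLOC_WRAPPER
```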
DEFAULTS During UPDATE Issue : This is a followup of Bug 34558510, which fixes the cases in which, during UPDATE, we shall not materialize INSTANT ADD columns added in the earlier implementation. If a table has row versions, it indicates it has INSTANT ADD/DROP columns in the new implementation. And in the new implementation it is made sure that the maximum possible row is within the permissible limit, otherwise INSTANT ADD is rejected. Fix: While deciding to materialize, check if the table has INSTANT ADD columns added in a row version. If it does, we can be assured that if the INSTANT DEFAULTs are materialized, the row will be within the permissible limit. Change-Id: Ia22ab7a5aa96966741ee1b95833a5eb6705448d7
…40243300361984
Issue:
When user keeps adding and dropping columns instantly, n_def increases.
When n_def is increased beyond REC_MAX_N_FIELDS, it rotates back to 0
causing the assertion.
Fix:
Alter handler must know if INSTANT is possible. Hence we must check the
value of n_def and the number of columns being added before proceeding with
ALGORITHM=INSTANT. Further, we must ensure that if we cannot use INSTANT,
we:
1. Fall back to INPLACE if algorithm=DEFAULT or not specified.
2. Error out with ER_TOO_MANY_FIELDS (Too many columns) if algorithm=INSTANT.
Note:
Current patch will not allow n_def to cross 1022. This is because when we
add even 1 more column, n_def could become 1023 (which is equal to
REC_MAX_N_FIELDS). Furthermore, this patch will error with ER_TOO_MANY_FIELDS
only when ADDing a new column with INSTANT. We can still drop any number of
columns instantly.
Thanks to Marcelo Altmann (marcelo.altmann@percona.com) and Percona for the contribution
Change-Id: Iff5c7d6e45c294548d515458cddfb35c00aff43e
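A hedged sketch of the decision described in the Fix and Note sections above; REC_MAX_N_FIELDS (1023) and ER_TOO_MANY_FIELDS are named in the commit message, while the signature and enum below are simplified assumptions rather than the server's handler API:

```cpp
#include <cstddef>

// Value named in the commit message; n_def must stay below it.
constexpr std::size_t REC_MAX_N_FIELDS = 1023;

enum class InstantCheck { kOk, kFallBackToInplace, kTooManyFields };

InstantCheck check_instant_add(std::size_t n_def, std::size_t n_added,
                               bool instant_requested_explicitly) {
  // After the ADD, n_def must remain below REC_MAX_N_FIELDS (i.e. at most
  // 1022), otherwise the field count could wrap and hit the assertion.
  if (n_def + n_added < REC_MAX_N_FIELDS) return InstantCheck::kOk;
  // INSTANT is not possible: error out (ER_TOO_MANY_FIELDS) only when the
  // user asked for ALGORITHM=INSTANT explicitly, otherwise fall back to INPLACE.
  return instant_requested_explicitly ? InstantCheck::kTooManyFields
                                      : InstantCheck::kFallBackToInplace;
}
```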
… ONE Post push fix : Adding a wait to fsync. Reviewed by: Mauritz Sundell <mauritz.sundell@oracle.com> Change-Id: I26a19b9c653fd9a46849a2a3af20b9d815fcccdc
Change-Id: I78a4c09a1790d8843b6ca14ba8856c88425966a4
Change-Id: I67c36b3afcc0c1fea40efbea8c8a0b283ccbabd1
- We have much use of sprintf, which is now flagged by clang as unsafe. Silence this, since we have too many uses to rewrite easily. - This version of Xcode also flags loss of precision from 64- to 32-bit integers; silence this also. It is typically seen when x of type size_t is assigned to an int. Change-Id: I3e5f829c7fdb8ddb08c56149bc0db1a5dc277f34
This commit fixes the above bug by making better row estimates for "GROUP BY". We now use (non-hash) indexes and histograms to make row estimates where possible. Otherwise, we use rules-of-thumb based on table sizes and input set sizes. Change-Id: Ibfdd246f7251c29bb6a8b3a641ea067d65b72dbc
Remove all old source files. Change-Id: I82837b85aeafa1f80da66b5f34097be5648783be (cherry picked from commit f8a70670e8b58a2054bd3c26777bae8c00953393)
We have new functionality, implemented by
WL#15131, WL#15133: Innodb: Support Bulk Load
so do not disable the FILE protocol in Curl.
Change-Id: Ib05f4656c2d13c620756518638ef73fa373cf63f
(cherry picked from commit e85db298f4ba0a2de53baa978f452d1107c48f7a)
Unpack source tarball, git add everything. Change-Id: Ib6eb64f8e132ca59539208f7bf69245268804ee5
Remove things we do not need/want. git rm -rf amiga/ contrib/ doc/ examples/ nintendods/ Makefile zconf.h Change-Id: Ibd76884411c6596f2fcfcb6c3fe2f1f4aabadb73
Bump MIN_ZLIB_VERSION_REQUIRED to "1.2.13" and adjust paths to bundled zlib sources. In extra/zlib/zlib-1.2.13/CMakeLists.txt: - apply cumulative patches from previous zlib upgrade - apply fix to MacOS build (bug #34776172) Change-Id: I1a0aeff115a96a0993f2f396c643eda1c1b4900b
Remove all old source files. Change-Id: I456635823feb21faa42b683f0bfb62d353cb80d4
When a socket is shutdown() on both sides, but not closed, AND the socket is still monitored via epoll_wait(), epoll_wait will return EPOLLHUP|EPOLLERR. It will be logged as: after_event_fired(54, 00000000000000000000000000011000) not in 11000000000000000000000000000000 As EPOLLHUP and EPOLLERR are always watched for even if they aren't explicitly requested, not handling them may lead to an infinite loop and high CPU usage until the socket gets closed. Additionally, events may be reported for fds which are already closed, which may happen if: 1. io_context::poll_one() led to epoll_wait() fetching multiple events: [(1, IN|HUP), (2, IN)] 2. when the first event is processed, the event handler (for fd=1) closes fd=2 (which leads to epoll_ctl(DEL, fd=2) and close(2)) 3. io_context::poll_one() processes the next event: (2, IN) ... but no handler for fd=2 exists. This is more problematic if a new connection with fd=2 was opened in the meantime: 1. io_context::poll_one() led to epoll_wait() fetching multiple events: [(1, IN|HUP), (2, HUP)] 2. when the first event is processed, the event handler (for fd=1) closes fd=2 (which leads to epoll_ctl(DEL, fd=2) and close(2)) 3. a new connection with fd=2 gets accepted. 4. io_context::poll_one() processes the next event: (2, HUP) ... sends the event to fd=2, which gets closed even though the HUP event was for the old fd=2, not the current one. Change ====== - expose EPOLLHUP and EPOLLERR as their own, separate events. - if none of EPOLLHUP|EPOLLERR|EPOLLIN|EPOLLOUT is requested, don't pass the fd to epoll_wait(). - remove polled events when the fd is removed from the io-context Change-Id: I145cacd457fa9876112789eb4bfd06fce1722c45
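A small, self-contained sketch of the registration rule in the Change section above (illustrative only, not the router's io-context code):

```cpp
#include <sys/epoll.h>

#include <cstdint>

// Build the epoll interest mask for one fd. Returning 0 means "do not pass
// this fd to epoll_wait() at all": since EPOLLHUP/EPOLLERR are delivered even
// when not requested, an fd nobody is interested in would otherwise keep
// waking the loop (the busy-loop case described above).
uint32_t interest_mask(bool want_in, bool want_out, bool want_hup, bool want_err) {
  if (!want_in && !want_out && !want_hup && !want_err) return 0;

  uint32_t mask = 0;
  if (want_in) mask |= EPOLLIN;
  if (want_out) mask |= EPOLLOUT;
  // HUP/ERR are implicit for epoll, but tracking the caller's interest lets
  // the io-service forward them as their own, separate events.
  if (want_hup) mask |= EPOLLHUP;
  if (want_err) mask |= EPOLLERR;
  return mask;
}
```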
Change ====== Repeat the changes done for linux_epoll in [1/3] - expose POLLHUP and POLLERR as their own, separate events. - if no interest in any of POLLHUP, POLLERR, POLLIN or POLLOUT is registered, don't pass that fd to poll() - treat POLLHUP as POLLIN if only POLLIN is waited for, to handle the connection-close case nicely on windows. - remove queued events if a fd is removed from the registered set. - added unittests for the poll io-service Change-Id: I1311513492fe755d5f23432b34721e0ab1fc88a7
Change ====== linux timestamping reports when a packet stepped through the layers of the linux network stack on the send and receive side: - kernel -> driver - driver -> cable Linux timestamping events are reported as EPOLLERR without EPOLLHUP and serve as a test-bed for the EPOLLERR handling. Change-Id: I083e304d23c72880b974863d29c29aa9d25b8694
Reverting the following WLs and bug fixes: - WL#14772 InnoDB: Parallel Index Build - WL#15131 Innodb: Support Bulk Load with Sorted data - WL#15133 Innodb: Support Bulk Load from OCI Object Store - Bug #34840684 Assertion failure: mtr0log.cc:175:!page || !page_zip || !fil_page_index_page_che - Bug #34819343 Assertion failure: btr0btr.cc:731:ib::fatal triggered thread 140005531973376 - Bug #34646510 innodb.zlob_ddl_big failing on pb2 daily-trunk Reverted Commit-Ids: a8940134dd8d33e7fc25f641d627b640d56769b6 ae9fd03687486b5d01a7dbe766d73993d7c78efa c4388545dc98e472b0f3d96db0e0d19d8231dc56 fd950026c1a4d11294b3448d8bfcd94631618611 ae9fd03687486b5d01a7dbe766d73993d7c78efa 226765401a5daa4a2443e1507343ed264f62f60f Change-Id: I392bda99eeb825174d156fcd169caef7c4b712b0
… statements
for connect_timeout seconds, causing pileups
Description:
------------
This is a regression caused by the fix made for Bug 34094706. When a
connection somehow stalls/blocks during the authentication phase, where a mutex
is held, the other connections that are executing queries on I_S and P_S are
blocked until the first connection releases the mutex.
Fix:
----
Instead of taking the mutex and checking thd->active_vio, we now check the
type of net.vio in the is_secure_transport() check.
Change-Id: I02f50f7e90c6e683a7bbe0b5f99b932e819f1f08
…read to stop Problem ------- In case a binary log dump thread waits for new events with a heartbeat configured and a new event arrives, it is possible that the binary log dump thread will send an EOF packet to the connected client (replica/mysqlbinlog/custom client...) before sending all of the events. Analysis / Root-cause analysis ------------------------------ It happens in case the binary log dump thread exits with a timeout on a condition variable just before the position gets updated. Function 'wait_with_heartbeat' exits with code 1, which is treated later on as the end of the execution. Solution -------- Ignore the code returned from the 'wait' function, since a timeout is not important information for the binary log dump thread. In case a timeout occurs, the binary log dump thread should continue execution, or abort in case the thread was stopped. Return 0 from wait_with_heartbeat, or 1 in case of a send/flush error. Signed-off-by: Karolina Szczepankiewicz <karolina.szczepankiewicz@oracle.com> Change-Id: I027985aafc1234194f0798ba52b65cce36936f24
gcc12 reports:
harness/tests/linux_timestamping.cc:741:15: error: narrowing conversion
of ‘attr_type’ from ‘size_t’ {aka ‘long unsigned int’} to ‘short
unsigned int’ [-Werror=narrowing]
741 | return {attr_type, {payload, payload_len}};
Change-Id: I28fb1a1ca32e6ffd1febe44c704a1ae438b414a2
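One common way to address such a -Werror=narrowing report is an explicit cast at the braced-init site; the struct and function below are hypothetical stand-ins, not the actual types in linux_timestamping.cc:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical attribute type, shaped after the diagnostic above.
struct Attr {
  uint16_t type;  // "short unsigned int" in the gcc message
  const void *payload;
  size_t payload_len;
};

Attr make_attr(size_t attr_type, const void *payload, size_t payload_len) {
  // Braced initialization rejects the implicit size_t -> uint16_t narrowing;
  // an explicit cast documents that the truncation is intended.
  return {static_cast<uint16_t>(attr_type), payload, payload_len};
}
```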
PROBLEM: - In the current version, the pattern for naming a hidden dropped column has changed. - When the cfg file is taken from an older version, the hidden dropped column name follows the old pattern. - When INSTANT operations are done in the current version in exactly the same order as done before creating the cfg file, the server crashes. FIX: - When searching for the dropped column with the older-version name returns null, IMPORT fails with a SCHEMA_MISMATCH error. Change-Id: Ifd93adafb78f0aa7b5ae1980b64a3230f94deae9
_SC_LEVEL1_DCACHE_LINESIZE is not always available on Linux, e.g. with musl libc; it is a glibc-specific extension rather than a standard sysconf() name.
* Adds -DWITH_BUILD_ID=OFF to work around various build issues * Patches the source to work with musl Upstream-PR: mysql/mysql-server#455 Closes: https://bugs.gentoo.org/886474 Closes: https://bugs.gentoo.org/903415 Closes: https://bugs.gentoo.org/885035 Signed-off-by: orbea <orbea@riseup.net>
* Adds -DWITH_BUILD_ID=OFF to work around various build issues * Patches the source to work with musl Upstream-PR: mysql/mysql-server#455 Closes: https://bugs.gentoo.org/886474 Closes: https://bugs.gentoo.org/903415 Closes: https://bugs.gentoo.org/885035 Signed-off-by: orbea <orbea@riseup.net> Closes: #30517 Signed-off-by: Sam James <sam@gentoo.org>
Hi, thank you for submitting this pull request. In order to consider your code we need you to sign the Oracle Contribution Agreement (OCA). Please review the details and follow the instructions at https://oca.opensource.oracle.com/
Hi, thank you for your contribution. Please confirm this code is submitted under the terms of the OCA (Oracle's Contribution Agreement) you have previously signed by cutting and pasting the following text as a comment:
I confirm the code being submitted is offered under the terms of the OCA, and that I am authorized to contribute it.
_SC_LEVEL1_DCACHE_LINESIZE is not always available on Linux, e.g. with musl libc; it is a glibc-specific extension rather than a standard sysconf() name.
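A hedged sketch of the kind of portability guard such a fix typically uses; the fallback value and function name are assumptions for illustration, not necessarily what this PR's patch does:

```cpp
#include <unistd.h>

#include <cstddef>

// Query the L1 data-cache line size where the libc exposes the sysconf name;
// musl does not define _SC_LEVEL1_DCACHE_LINESIZE, so fall back to a common
// default there.
static std::size_t l1_dcache_line_size() {
#if defined(_SC_LEVEL1_DCACHE_LINESIZE)
  const long line = sysconf(_SC_LEVEL1_DCACHE_LINESIZE);
  if (line > 0) return static_cast<std::size_t>(line);
#endif
  return 64;  // assumed typical cache line size, not a measured value
}
```

With a guard like this, glibc builds keep querying sysconf() at runtime while musl builds compile the fallback path.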