Mysql 5.6.11 #3
Commits on Apr 19, 2013
Fixed 5.6.11 code/test/distro errors.
Summary: Corrections to make the upstream release build cleanly.
- Oracle failed to keep the CLIENT_REMEMBER_OPTIONS bit set in the flags for the IO thread's connection, so on a failed connect to the master, mysql_real_connect() calls mysql_close_free_options() and stomps the connection timeout setting (the code had "=" instead of "|=").
- Oracle renamed "thread_running" to "num_thread_running" in 5.6.6, but they apparently didn't fix all the uses of "thread_running".
- Added a limit to a test with hard-coded values for <= 16k page size.
- Oracle included two .gitignore files in their release tarball.
- Oracle included generated sql_yacc.{cc|h} in their release tarball.
- Oracle blindly defined _GNU_SOURCE, which may already be defined.
- Oracle's code was missing a "using" for "isfinite", needed in C++11.
- Added blind_fwrite() for fwrite() calls where mysql apparently doesn't care about the actual success or failure of fwrite().
- Added a DBUG_ONLY define to mark debug-only vars __attribute__((unused)) in non-debug compiles, to avoid gcc unused-variable warnings.
- Since bzero is a macro in newer glibcs, we need to undefine it, if it is already defined, before redefining it.
- In C++11, a string literal immediately followed by an identifier (such as a macro) is parsed as a user-defined literal suffix, so whitespace is now required, like: "A" MACRO instead of "A"MACRO. Added that whitespace everywhere gcc-4.7 found it missing in our builds.
- Corrected all of the gcc warnings from upstream MySQL source. Only two warnings were suppressed with pragmas: 1) intentionally uninitialized variables in sql/sql_planner.cc; 2) array code that triggers a gcc bug in strings/ctype-uca.c.
- Removed the volatile internal hash info from some test results. These should never be output by a test, and thus tested for, since they change any time the yacc spec is changed.
- Disabled a massive myisam test (large_tests.alter_table) in debug mode.
- Bumped the buffer pool RAM used by the test system: 8M should be enough for anyone, except, no, it wasn't. Bloated up to 32M, which beyond a doubt will be enough for anyone ever.
- Fixed the tests which were broken by the increase in buffer pool size.
- Allow more time to restart in tests in 5.6. Debug builds are slow, so some big tests time out on restart. This changes the hard-coded test restart timeout to roughly 5 minutes, instead of the previous value of roughly 50 seconds.
- The "parts" test suite ran with a restricted load path, but loads data from paths outside that restricted path. This just disables that restriction (for now).
- We disable profiling in our release builds, but main.wl6443_deprecation uses it, so this one test was made debug-only.
Test Plan: Built with gcc-4.6 and gcc-4.7, with -std=c++0x for C++. Built both build types with devserver builds, no warnings. Built both build types with Jenkins builds, no warnings. Re-ran the tests; they all now pass. Viewed test results; they no longer include internal hashes. Jenkins' nightly tests will show if this fixes the occasional failures. Reviewers: jtolmer Reviewed By: jtolmer
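The "=" versus "|=" fix above is easiest to see with bit flags; a minimal Python sketch (the constant values and helper names here are illustrative, not MySQL's actual client API):

```python
# Hypothetical capability bits, modeled loosely on MySQL client flags.
CLIENT_COMPRESS         = 1 << 5
CLIENT_REMEMBER_OPTIONS = 1 << 31

def set_flags_buggy(flags):
    # "=" clobbers every previously set capability bit.
    flags = CLIENT_REMEMBER_OPTIONS
    return flags

def set_flags_fixed(flags):
    # "|=" adds the bit while preserving the existing ones, so a failed
    # connect doesn't free the saved options (including the timeout).
    flags |= CLIENT_REMEMBER_OPTIONS
    return flags
```

With assignment, any bit set earlier (compression, SSL, etc.) is silently dropped; OR-ing keeps the full capability set intact.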
Commit: ff9c9e5
Disable binlog.binlog_server_start_options
Summary: The test has a few flaws. First, it does not support running multiple instances concurrently, because it writes and removes a cnf file in a shared location. Second, the datadir it creates assumes that the server was built without specifying a default datadir location other than 'data/'; because it uses the compiled-in datadir, this also prevents running multiple instances of the test in parallel. Finally, if one does try to run multiple instances of the test in parallel, the test may end up doing a rmtree('/data'), which may be bad if '/data' exists. Test Plan: None Reviewers: steaphan Reviewed By: steaphan
Commit: eec604a
Added fb.cmake Facebook build config file.
Summary: This contains the build settings for our environment. Test Plan: Built 5.6 in Debug and Release. All looks good. Built a new 5.6 rpm. All looks good. Reviewers: chip Reviewed By: chip
Commit: 9dd3834
Summary: Updates our cmake rules to turn off perfschema by default. PerfSchema turned out to have too much overhead to be worth it: www.facebook.com/notes/mysql-at-facebook/my-mysql-is-faster-than-your-mysql/10151250402570933 Also, this changed the output of some tests (rows accessed, proc state). All these changes looked legit, so their results files have been updated. And, there was a hard-coded check of perfschema values in: main.mysql_client_test ...so I removed that code, and that check. Test Plan: I confirmed that Jenkins builds this with perfschema off. Reviewers: jtolmer, mcallaghan Reviewed By: jtolmer
Commit: 5aee376
Add -DMY_PTHREAD_FASTMUTEX for 5.6
Summary: Builds MySQL-5.6 with -DMY_PTHREAD_FASTMUTEX now. For both development and rpm builds. This is based on Yoshinori's recommendation, and Mark's ok. Test Plan: Built it each different way, with instrumented code #errors. Confirmed it works in each case (release/debug, clean/not). Reviewers: jtolmer Reviewed By: jtolmer
Commit: e9bfa36
Stop spawning dummy threads on client library initialization (5.6)
Summary: Let's revert the fix for Bug#24507. To quote Monty from 2006: "After 1/2 a year, when all glibc versions are updated, we can delete this code." Test Plan: jenkins Reviewers: steaphan Reviewed By: steaphan
Commit: 057035a
Summary: This is a base change required for lots of stuff, including innodb_fake_changes, stats, etc.... I am factoring this out to allow diff re-ordering. Test Plan: Build is clean, and so are mtr tests. This code was factored out of the existing basic stats diff. There are no overall code changes once that diff is applied. Reviewers: jtolmer, chip Reviewed By: chip
Commit: f37b8e8
Summary: This adds the unmodified xtrabackup distribution from Percona. Files not useful to us (.bzr and patch files) are excluded. Test Plan: This is just the base, it doesn't do or change anything. Reviewers: rudradevbasak Reviewed By: rudradevbasak
Commit: 6eb74f8
Integrate xtrabackup into tree
Summary: This is just the "innodb56.patch" from Percona, done differently. Instead of a separate patch, this is integrated into the main code. All changes are now protected via the XTRABACKUP define. This allows the same code to compile for both mysqld and xtrabackup. This should have a net ZERO effect, until the xtrabackup build is added. Test Plan: Jenkins, build, etc.... All looks good. Reviewers: rudradevbasak, jtolmer Reviewed By: rudradevbasak, jtolmer
Commit: 74b6cc6
Prevent xtrabackup apply-log disk-space bloat
Summary: Prevents xtrabackup apply-log from blowing up disk space by 200% or more. The bug was reported here: https://bugs.launchpad.net/percona-xtrabackup/+bug/950334 Using the fix that Percona provided. Test Plan: Tested that apply-log doesn't blow up disk space any more. Reviewers: steaphan Reviewed By: steaphan
Commit: 922d355
Facebook changes to xtrabackup
Summary: We build this in our tree, beside our normal out-of-dir cmake build. Basically, this can be built just by running this on a clean source tree: cd xtrabackup; ./utils/build.sh innodb56; cd .. Note that this only works right in a clean source tree. But, since we build out-of-dir, our source tree is always clean. Also note that some corner cases (Debug builds of xtrabackup, for example) will probably still require more build magic to work. The real solution here would be to integrate this build into the main cmake build of mysql-5.6 itself, but we didn't go that far with this (yet). Sections not required for mysql-5.6 were removed, for simplicity. Also, some compile/build warnings/errors were cleaned up. And, innobackupex was modified: for apply-log, if the xtrabackup binary is explicitly specified, don't try to autodetect it. Also, changes for 5.6.10 -> 5.6.11:
* The os_file_delete() macro/function got a new argument; added that.
* The call to fil_mtr_rename_log was moved; moved the #ifdef also.
Test Plan: xtrabackup build works (in a CLEAN source dir): cd xtrabackup; ./utils/build.sh innodb56; cd .. Reviewers: rudradevbasak, jtolmer Reviewed By: rudradevbasak
Commit: 38d9cb3
Port v5.1 innodb stress tests to 5.6
Summary: This is a port of all stress, bigstress, and hugestress tests to 5.6. This is a complete port, except:
- Removed all *_gc.test. These tested a group commit mechanism that is not in 5.6, and will never be ported to it. This can stress test secondary indexes as well, but this is disabled.
- Removed extra innodb_file_format_check. It is no longer needed in 5.6.
- Updated results files (partly with --record, partly manually).
- Removed use of vars we've not (yet) ported. These included:
  innodb_zlib_wrap
  innodb_log_compressed_pages
  innodb_background_checkpoint
  innodb_prepare_commit_mutex
  innodb_prefix_index_cluster_optimization
- Updated and verified changes to all results files.
Test Plan: Ran all the non-huge tests, and one huge test, myself; so far so good. Jenkins nightlies will start hammering these after this is pushed. Reviewers: nizamordulu, chip, jtolmer Reviewed By: jtolmer
Commit: c45dad7
Add trx pointer to struct mtr_t
Summary: This is a base change required for lots of stuff, including innodb_fake_changes, table stats, etc.... I am factoring this out to allow diff re-ordering. Test Plan: Build is clean, and so are mtr tests. This code was factored out of the existing table stats diff. There are no overall code changes once table stats diff is applied. Reviewers: jtolmer, chip Reviewed By: jtolmer
Commit: bd4a4bc
Port Percona fake changes patch to 5.6
Summary: Ports Percona's innodb_fake_changes patch, and our fixes, to 5.6. Also adds a simple stress test to verify that innodb_fake_changes=1 prevents writing to the database. Important/TODO: Enabling innodb_fake_changes during online index DDL operations has undefined effects right now. For example, in row_upd_sec_index_entry() there is a "switch (dict_index_get_online_status(index))" statement that ignores whether fake changes are enabled, and it may cause problems. It is not the only place this may happen. The locking logic in that case also needs to be reviewed and tested before using it with fake changes. Test Plan: Ran included tests Reviewers: jtolmer Reviewed By: jtolmer
Commit: 77f5b2b
Parse, but ignore, MEMECACHE_DIRTY keyword.
Summary: Port only the parsing of the MEMECACHE_DIRTY keyword. Test Plan: MTR tests included. Reviewers: chip Reviewed By: chip
Commit: 64c14da
Use both upstream and our crc algo, migrating to upstream.
Summary: Facebook's 5.1 mysql wrote checksums not compatible with mysql 5.6. This change allows us to read either fb or upstream checksums, but always writes the new upstream (standard mysql-5.6) checksums. This calculates both the real crc32c algo, and our addition-based one, and compares to each on read, accepting either as valid. On write, it only writes the real crc32c (as used upstream). This dual implementation should not be much more computationally complex than only calculating one or the other. Also updated extra/innochecksum program. This change handles compressed and uncompressed checksums, which (of course) differ slightly in their codepath. Test Plan: mtr as well as running as a slave Reviewers: chip Reviewed By: chip
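The read-either/write-new scheme described above can be sketched as follows; `crc32c_stub` (plain CRC-32 from zlib, standing in for real crc32c) and the toy additive checksum are assumptions for illustration, not InnoDB's actual page-checksum code:

```python
import zlib

def crc32c_stub(page: bytes) -> int:
    # Stand-in for the real crc32c; plain CRC-32 is used here only
    # to keep the sketch self-contained.
    return zlib.crc32(page) & 0xFFFFFFFF

def additive_checksum(page: bytes) -> int:
    # Toy stand-in for the old addition-based Facebook 5.1 checksum.
    return sum(page) & 0xFFFFFFFF

def checksum_on_write(page: bytes) -> int:
    # Writes always use the upstream algorithm.
    return crc32c_stub(page)

def checksum_valid_on_read(page: bytes, stored: int) -> bool:
    # Reads accept either algorithm, so pages written by the 5.1
    # fork and by upstream 5.6 both verify cleanly.
    return stored in (crc32c_stub(page), additive_checksum(page))
```

After enough page rewrites, every page carries the upstream checksum and the legacy path becomes dead weight that could eventually be dropped.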
Commit: b6311ac
Fix our acl damage in mysql_upgrade
Summary: For Admission Control, we added two columns to the acl table. The code to read this in mysql 5.1 and 5.6 is completely terrible. So, acl table data created by our 5.1 causes 5.6 to explode. Skip reading the last two acl columns if there aren't at least 3 there. This allows us to dodge those we used for admission control in 5.1, since, if the extra columns from 5.6 are present, there will be at least 3 more. This works around reading these tables long enough to run mysql_upgrade. Also, mysql_upgrade now drops these two obsolete Admission Control columns, if present, before adding the three new 5.6 acl columns. For this to work, mysql_upgrade also had to ignore: ERROR 1091: Can't DROP '<column_name>'; ... So, this was also added to its expected-errors list. This, in turn, required updating some perfschema test results, since they include line numbers of errors. Note that if mysql_upgrade is not run, things (like new GRANTs) will cause bad failures in 5.6 if our old 5.1-format tables are used. Test Plan: Deployed 5.1 and 5.6 RPMs on a spare, and upgraded cleanly! Jenkins 'arc unit' likes this too. Reviewers: jtolmer Reviewed By: jtolmer
Commit: e3e2235
Allow reading of headerless compressed pages.
Summary: Our 5.1 writes headerless zlib data blobs. This can read those now, in addition to those with headers. But, this still writes only with zlib headers, not without. Test Plan: Tested upgrade with a replica of prod data, worked. Reviewers: nizamordulu Reviewed By: nizamordulu
Commit: 0a1f919
Port v5.1 SQL_NO_FCACHE to 5.6
Summary: When running full table scans, provide a mechanism to disable caching in flash so that the working set is not evicted. This is particularly useful when doing database dumps, so as not to blow out the entire working set for data that most likely will not be read again. This should also reduce the overall writes performed to the flash, improving endurance. Adds Innodb_dblwr_page_number to SHOW GLOBAL STATUS. This is the page number at which the doublewrite buffer starts in the system tablespace. We need this for flashcache. We turn off caching by default in flashcache and only enable it when a process asks for it. As long as the process group has flashcache enabled, all members will have it enabled.
Test Plan: Set up a slave and did the following:
1. SELECT * FROM TABLE INTO OUTFILE 'foo' and saw writes to flash.
2. SELECT SQL_NO_FCACHE * FROM TABLE2 INTO OUTFILE 'foo2' and saw no writes to flash.
3. SELECT /*!40001 SQL_NO_FCACHE */ * FROM TABLE3 INTO OUTFILE 'foo2' and saw no writes to flash.
4. SELECT /*!40001 SQL_NO_CACHE */ /*!40002 SQL_NO_FCACHE */ * FROM TABLE4 INTO OUTFILE 'foo2' and saw no writes to flash.
5. Ran mysqldump on a host which supports SQL_NO_FCACHE and it auto-detected properly.
6. Ran mysqldump on a host which does not support SQL_NO_FCACHE and it does not add the comment hint to the SQL statement.
Reviewers: chip, mcallaghan Reviewed By: chip
Commit: e5c4318
Summary: Fix a bug with unsigned arithmetic, and a compile warning. There was an SQL_MODE check for underflow on unsigned subtraction, but not for negative overflow on addition. This fixes it by moving the check to the parent arithmetic class. Also, allow a global NO_UNSIGNED_SUBTRACTION to override sql_mode in the replication thread. sql_mode is part of replication events, and therefore almost all sql_mode settings are ignored by the replication thread. This breaks replicating from 5.1 to 5.6 because of the behavior of unsigned underflow. This diff allows NO_UNSIGNED_SUBTRACTION to pierce into the replication thread if it is set globally. Test Plan: mtr; replicate from a 5.1 master, run on a 5.6 replica; test against a failing replica; verify it properly replicates in statement mode. Reviewers: mcallaghan, steaphan Reviewed By: mcallaghan
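A simplified model of the behavior described above; the function and its semantics are an illustrative sketch, not the server's actual Item arithmetic code:

```python
U64 = 2**64

def unsigned_arith(op, a, b, no_unsigned_subtraction=False):
    # Simplified model of BIGINT UNSIGNED arithmetic.  With
    # NO_UNSIGNED_SUBTRACTION set, a result below zero is returned as a
    # signed value; otherwise it is an out-of-range error.  Placing the
    # check in the shared arithmetic path covers both a - b and a + (-b),
    # which is the gap the commit closes.
    result = a - b if op == '-' else a + b
    if result < 0:
        if no_unsigned_subtraction:
            return result
        raise OverflowError("BIGINT UNSIGNED value is out of range")
    return result % U64
```

The same shared check is what lets the globally set mode apply uniformly, including inside the replication thread.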
Commit: a334b43
Port Control Cross-Table Access to 5.6
Summary: Ported our v5.1 sysvar allow_noncurrent_db_rw, which detects when a query accesses data not in the current database. Depending on its setting, when a query tries to read or write to another database, it will:
ON: Allow it.
LOG: Allow it, but log that it occurred.
LOG_WARN: Allow it, log that it occurred, and send an error to the client.
OFF: Block it, and send an error to the client.
Test Plan: mtr test: main.allow_noncurrent_db_rw Reviewers: heyongqiang, mcallaghan, hfisk, steaphan, nponnekanti Reviewed By: nponnekanti
Commit: 386ab02
Summary: Implements the super_read_only global, which activates read_only and also blocks writes by SUPER. Linked to the read_only global as follows:
* Turning read_only off also turns off super_read_only.
* Turning super_read_only on also turns read_only on.
* All other changes to either one of these have no effect on the other.
Test Plan: New test included; passes. Reviewers: rongrong Reviewed By: rongrong
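The coupling rules above, sketched as a tiny state machine (illustrative only, not the actual sys_var code):

```python
class ReadOnlyState:
    # Models the linkage between read_only and super_read_only.
    def __init__(self):
        self.read_only = False
        self.super_read_only = False

    def set_read_only(self, value):
        self.read_only = value
        if not value:
            # Turning read_only off also turns off super_read_only.
            self.super_read_only = False

    def set_super_read_only(self, value):
        self.super_read_only = value
        if value:
            # Turning super_read_only on also turns read_only on.
            self.read_only = True
```

The invariant the rules maintain: super_read_only implies read_only, never the reverse.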
Commit: ef2e91b
Summary: This replaces the fast_timers we added in 5.1, using the timers which are now included in upstream v5.6 for perfschema. Initializes the timers, selects which one to use, points the global "my_timer_now" function pointer to the correct timer, puts the stats for that timer in the global struct "my_timer", and sets the new SHOW STATUS variable "Timer_in_use" to indicate which was chosen.
Added the my_timer_ok mtr test to confirm the timer chosen is not "None". Added the my_timer_good mtr test to confirm the timer chosen is millisecond precision or better. All timers are type ulonglong, and are stored in native timer units.
Provides the my_timer_now() function pointer to get the current time. Provides my_timer_since(t) to get the time passed since time t, automatically adjusted for the estimated timer overhead. Provides my_timer_since_and_update(&t) to get the time passed since time t, adjusted for the estimated timer overhead; it also updates t to the current time, skipping that overhead. To count the time not counted here, add my_timer.overhead for each call to this function. Provides my_timer_to_seconds(t) to convert native timer units to seconds (including fractions, in a double) for the current timer; also adds _milliseconds and _microseconds versions of this.
Adds a new show-variable type "SHOW_TIMER" which stores its value as a 64-bit unsigned integer, in native timer units, but displays it as a double, in seconds, automatically converting it based on the current timer's frequency. Since this conversion is only done for display, no precision is ever lost in the actual variable, but the variable is still shown as seconds with a fractional component, just like doubles.
This is the base for all our various stats which require timers.
Test Plan: Jenkins, including follow-up diffs which use these. All pass. Reviewers: chip, mcallaghan Reviewed By: chip
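The my_timer_since()/my_timer_since_and_update() contract described above, sketched in Python with time.perf_counter_ns() standing in for the selected native timer and an illustrative overhead constant:

```python
import time

# Estimated cost of one timer read, in native units (illustrative value).
TIMER_OVERHEAD_NS = 20

def my_timer_now():
    # Stand-in for the selected native timer.
    return time.perf_counter_ns()

def my_timer_since(t):
    # Elapsed time since t, minus the estimated cost of the timer read.
    return my_timer_now() - t - TIMER_OVERHEAD_NS

def my_timer_since_and_update(state):
    # Like my_timer_since(), but also resets state["t"] to now, so the
    # next interval does not re-count this call's own overhead.
    now = my_timer_now()
    elapsed = now - state["t"] - TIMER_OVERHEAD_NS
    state["t"] = now
    return elapsed
```

Passing a mutable dict stands in for the C `&t` out-parameter in this sketch.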
Commit: 22a8b8c
Port v5.1 Extra Stats: Basic Stats
Summary: Improve InnoDB disk IO performance counters. This renames variables used for InnoDB IO performance counters, cleans up some of the code that updates the counters, and displays aggregate performance for all async requests in SHOW INNODB STATUS.
- Report InnoDB fsync time in SHOW INNODB STATUS: report microseconds waiting for fsync or fdatasync.
- Report time for synchronous IO requests in SHOW INNODB STATUS: time and report the number of synchronous IO requests and the average time per call.
- Report old IO requests in SHOW INNODB STATUS: an old request is one that has been in the IO request array for more than 2 seconds.
- Refactor InnoDB IO performance counters to use a common struct.
- Report per-tablespace IO statistics in SHOW INNODB STATUS: when innodb_file_per_table is used, each table uses a separate tablespace, and this feature then provides per-table IO stats. The results look like:
-------------- TABLESPACE I/O --------------
write: ./ibdata1 564 requests, 0 old, 71316.88 bytes/r, svc: 0.91 secs, 1.62 msecs/r, 60.60 max msecs, wait: 33.05 secs 58.61 msecs/r, 60.60 max msecs
write: ./ib_logfile0 2098 requests, 0 old, 4458.65 bytes/r, svc: 3.13 secs, 1.49 msecs/r, 296.15 max msecs, wait: 3.58 secs 1.71 msecs/r, 296.15 max msecs
- Added counters for the number of pending and completed aio operations.
- Brought over SHOW INNODB FILE STATUS from 5.0.84 and put the data into the information_schema.innodb_file_status table.
Sample of the output follows:
+-------------------------+-----------+----------+-----+----------+------------------+----------+---------------------+---------------+-----------+---------------------+----------------+
| FILE                    | OPERATION | REQUESTS | OLD | BYTES    | BYTES/R          | SVC:SECS | SVC:MSECS/R         | SVC:MAX_MSECS | WAIT:SECS | WAIT:MSECS/R        | WAIT:MAX_MSECS |
+-------------------------+-----------+----------+-----+----------+------------------+----------+---------------------+---------------+-----------+---------------------+----------------+
| /data/mysql/ibdata1     | read      | 320      | 0   | 7307264  | 22835.2          | 1.527591 | 4.773721875         | 17            | 1.527591  | 4.773721875         | 17             |
| /data/mysql/ibdata1     | write     | 118      | 0   | 17367040 | 147178.305084746 | 0.052502 | 0.444932203389831   | 2             | 0.555864  | 4.71071186440678    | 4              |
| /data/mysql/ib_logfile0 | read      | 6        | 0   | 69632    | 11605.3333333333 | 8.6e-05  | 0.0143333333333333  | 0             | 8.6e-05   | 0.0143333333333333  | 0              |
| /data/mysql/ib_logfile0 | write     | 7466     | 0   | 9551360  | 1279.31422448433 | 0.046362 | 0.00620975087061345 | 0             | 0.047235  | 0.00632668095365658 | 0              |
| ./db7733/alerts_log.ibd | read      | 217      | 0   | 3588096  | 16535.0046082949 | 1.34283  | 6.18815668202765    | 20            | 1.688229  | 7.77985714285714    | 76             |
| ./db7733/api_data.ibd   | read      | 93       | 0   | 1523712  | 16384            | 0.517229 | 5.56160215053763    | 14            | 0.517229  | 5.56160215053763    | 14             |
| ./db7733/api_data.ibd   | write     | 4        | 0   | 65536    | 16384            | 0.000575 | 0.14375             | 0             | 0.004231  | 1.05775             | 2              |
+-------------------------+-----------+----------+-----+----------+------------------+----------+---------------------+---------------+-----------+---------------------+----------------+
Changes: Adapts those stats which used doubles for timing to use the new my_timer. Some of this is actually redundant with the new perfschema stuff. However, I included it all anyway, so they can be compared. Test Plan: Jenkins, 'arc unit': all tests passed.
Reviewers: rongrong, chip Reviewed By: rongrong
Commit: 27e4c3b
Port v5.1 Extra Stats: BinLog BaseDirs
Summary: Add global variables that give the directories of the binlog files and the binlog index file. The variables are binlog_file_basedir and binlog_index_basedir. Test Plan: Jenkins Reviewers: chip Reviewed By: chip
Commit: a535b68
Port v5.1 Extra Stats: mutex and rw-lock
Summary: Add mutex and rw-lock stats to SHOW STATUS. Added counters for number of pending and completed aio operations. Test Plan: Jenkins Reviewers: chip Reviewed By: chip
Commit: 83c4e4f
Port v5.1 Extra Stats: InnoDB transaction logging
Summary: Add more details from InnoDB transaction logging to SHOW STATUS and SHOW INNODB STATUS. These are for performance debugging. Test Plan: Jenkins Reviewers: chip, jtolmer Reviewed By: jtolmer
Commit: 79ee703
Port v5.1 Extra Stats: More buffer pool stats.
Summary: Display more buffer pool status in SHOW INNODB STATUS. For SHOW STATUS, display Innodb_buffer_pool_pages_dirty_pct, Innodb_buffer_pool_pages_flushed_lru, Innodb_buffer_pool_pages_flushed_list, Innodb_buffer_pool_pages_flushed_page and Innodb_buffer_pool_pages_lru_old. For SHOW INNODB STATUS, display lines that start with "Read ahead:", "Percent pages dirty:", "Pending writes:" and "Total writes:".
Innodb_buffer_pool_pages_flushed_lru: Number of writes done for pages from the LRU list.
Innodb_buffer_pool_pages_flushed_list: Number of writes done for pages from the flush list.
Innodb_buffer_pool_pages_flushed_page: Number of writes done for other reasons.
Innodb_buffer_pool_pages_lru_old: Number of old pages on the LRU list.
Test Plan: Jenkins Reviewers: chip Reviewed By: chip
Commit: 44bed9b
Port v5.1 Extra Stats: Command Timers
Summary: Adds command, slave_command, pre-exec, parse, and execution timers. This ports these timers from v5.1 to 5.6, using the new timers. I also added an mtr test to confirm they are at least partly working. Test Plan: Jenkins. All tests, including the new one, pass. Reviewers: chip Reviewed By: chip
Commit: a42b065
Port v5.1 Allow innodb_lock_wait_timeout=0
Summary: Allow innodb_lock_wait_timeout=0, the previous min value was 1. To see how it is used, read srv0srv.c Add test innodb.innodb_lock_wait_timeout Test Plan: Includes mtr test. It passes. Reviewers: chip, nizamordulu Reviewed By: chip
Commit: a334bd1
Port v5.1 Extra Stats: records_in_range_seconds
Summary: Added timer for time spent in innodb records_in_range, using new timers. Test Plan: Jenkins, plus: show status like '%records_in_range_seconds'; Innodb_records_in_range_seconds 0.000000 Reviewers: chip Reviewed By: chip
Commit: 141168b
Port v5.1 Extra Stats: callers of os_file_flush
Summary: Report on the callers of os_file_flush in SHOW INNODB STATUS. This is displayed in the TABLESPACE section as 'fsync callers:':
-------------- TABLESPACE I/O --------------
fsync callers: 4 buffer pool, 1 other, 1 checkpoint, 11 log aio, 1622 log sync, 0 archive
Included range check added in a follow-up diff. Test Plan: Jenkins Reviewers: chip Reviewed By: chip
Commit: 7b50c38
Port v5.1 Extra Stats: IO Perf Counters
Summary: Display more InnoDB IO performance counters in SHOW STATUS. The counters are:
Innodb_data_fsync_seconds: Total seconds for fsync calls
Innodb_data_async_read_requests: Number of read requests from background IO threads
Innodb_data_async_read_bytes: Number of bytes read by background IO threads
Innodb_data_async_read_svc_seconds: Seconds for reads by background IO threads
Same as above, but for writes: Innodb_data_async_write_requests, Innodb_data_async_write_bytes, Innodb_data_async_write_svc_seconds
Same as above, but for requests not handled by the background IO threads: Innodb_data_sync_read_requests, Innodb_data_sync_read_bytes, Innodb_data_sync_read_svc_seconds, Innodb_data_sync_write_requests, Innodb_data_sync_write_bytes, Innodb_data_sync_write_svc_seconds
Test Plan: Jenkins Reviewers: chip Reviewed By: chip
Commit: 6fcbe16
Port v5.1 Extra Stats: Insert Buffer and Adaptive Hash
Summary: Add insert buffer and adaptive hash stats to SHOW STATUS:
Innodb_adaptive_hash_hits: Adaptive hash search hits
Innodb_adaptive_hash_misses: Adaptive hash search misses
Innodb_ibuf_merges: Number of secondary index pages updated with merged records
Innodb_ibuf_inserts: Number of records added to the insert buffer
Innodb_ibuf_delete_marks: Number of records added to the delete mark buffer
Innodb_ibuf_deletes: Number of records added to the delete buffer
Innodb_ibuf_size: Size of the insert buffer in pages
Test Plan: Jenkins Reviewers: chip Reviewed By: chip
Commit: a32d118
Port v5.1 Extra Stats: Time Master Thread Tasks
Summary: Add timers for work done by the master thread when active to SHOW STATUS and SHOW INNODB STATUS. Uses the new my_timer stuff. All of this seems totally redundant with the new perfschema stuff; however, I included it all anyway, so they can be compared.
Add these for SHOW STATUS:
Innodb_checkpoint_seconds: seconds doing a checkpoint
Innodb_srv_drop_table_seconds: seconds doing background table drop
Innodb_srv_free_log_seconds: seconds checking log freespace
Innodb_srv_cache_limit_seconds: seconds enforcing dict cache limit
Innodb_srv_log_flush_seconds: seconds flushing logs
Innodb_srv_ibuf_contract_seconds: seconds merging insert buffer entries
Innodb_srv_purge_seconds: seconds running trx_purge() to remove deleted rows
Add this for SHOW INNODB STATUS:
----------------- BACKGROUND THREAD -----------------
Seconds in background IO: 0.00 insert buffer, 0.00 log flush, 0.00 purge, 0.00 cache limit, 0.00 free log, 0.00 drop table, 0.00 checkpoint
Test Plan: Jenkins Reviewers: chip Reviewed By: chip
Commit 24b3d11
Port v5.1 SysVar: innodb_deadlock_detect
Summary: Add innodb_deadlock_detect to disable deadlock detection on row locks. This is set to 1 by default. When set to 0, row lock wait timeouts will break deadlocks. This is done to avoid the performance hit from deadlock detection. The detection code holds kernel_mutex when making checks which prevents anything else from making progress. Test Plan: Follow-up diff includes mtr using this, it passes. Jenkins Reviewers: chip Reviewed By: chip
Commit dbc3c69
Port v5.1 Extra Stats: InnoDB Lock Status
Summary: Add InnoDB lock status details to SHOW [INNODB] STATUS Add these to SHOW STATUS: Innodb_row_lock_deadlocks Number of transaction aborts from deadlock Innodb_row_lock_wait_timeouts Number of transaction aborts from lock wait Innodb_purge_pending Number of transactions for which purge must be done. This is already named 'History list length' in SHOW INNODB STATUS Added to TRANSACTIONS section in SHOW INNODB STATUS: Lock stats: 0 deadlocks, 0 lock wait timeouts Added the innodb_lock_status mtr test. Test Plan: Includes mtr test - it passes. Jenkins. Reviewers: chip Reviewed By: chip
Commit c54c3d3
Port v5.1 Extra Stats: Remove Transactions
Summary: Disable including transactions in SHOW INNODB STATUS. Transactions are already visible in information_schema.innodb_trx. This change removes them from the SHOW INNODB STATUS output in order to keep the size manageable. It also removes the size limit on the output to avoid truncation. Test Plan: mtr, Jenkins Reviewers: chip Reviewed By: chip
Commit a90e2a3
Port v5.1 Extra Stats: Flushed Neighbor Counters
Summary: Add counters for flushed neighbor pages. Innodb writes dirty pages from the same extent when one of the pages in the extent is eligible to be written. This is done to reduce disk seeks. The pages in the same extent are called neighbor pages. Counters in SHOW STATUS are exported for this: Innodb_buffer_pool_neighbors_flushed_from_list Number of neighbor pages flushed from the buffer pool flush list. Innodb_buffer_pool_neighbors_flushed_from_lru Number of neighbor pages flushed from the buffer pool LRU list. This is displayed in SHOW INNODB STATUS: Neighbor pages flushed: 644 from list, 0 from LRU Test Plan: Jenkins. Reviewers: chip Reviewed By: chip
Commit 90429dc
Port v5.1 Extra Stats: SHOW ENGINE INNODB TRANSACTION STATUS
Summary: Add SHOW ENGINE InnoDB TRANSACTION STATUS. Not all of the locking information is available in information_schema.innodb_trx so I put in SHOW InnoDB TRANSACTION STATUS like in 5.0.84. I also updated the partition_innodb test to use the new command (it checks some locking information). Perfschema hard-coded the total number of classes into its test results files, so they all then fail, since this adds a class of statement. This updates that hard-coded number in all of these results files. Test Plan: Jenkins Reviewers: chip Reviewed By: chip
Commit 0dcb3f1
Summary: Add tests for SHOW STATUS. Ensures these commands work: SHOW STATUS SHOW ENGINE INNODB MUTEX SHOW ENGINE INNODB STATUS SHOW ENGINE INNODB TRANSACTION STATUS Test Plan: Ran test, it passes. Jenkins Reviewers: chip Reviewed By: chip
Commit 8068673
Port v5.1 Slow Query Log Changes
Summary: MySQL 5.6's slow query log outputs the wrong timestamp. Instead of the start-of-query time, it outputs the end-of-query time. This fixes that problem and tests for regressions. Also adds more attributes to the slow query log. Query_time, Lock_time, Rows_sent and Rows_examined have always been there. All others are new and improve performance debugging. The new output is: # Query_time: 0 Lock_time: 0 Rows_sent: 0 Rows_examined: 0 \ Thread_id: 3 Errno: 0 Killed: 0 Bytes_received: 110 \ Bytes_sent: 134 Read_first: 0 Read_last: 0 Read_key: 2 \ Read_next: 0 Read_prev: 0 Read_rnd: 0 Read_rnd_next: 0 \ Sort_merge_passes: 0 Sort_range_count: 0 Sort_rows: 0 \ Sort_scan_count: 0 Created_tmp_disk_tables: 0 \ Created_tmp_tables: 0 Start: 9:22:58 End: 9:22:58 Also, improves performance by moving some code out of the block in which the log mutex is locked. Test Plan: Jenkins (new tests included, pass without --record). Reviewers: chip Reviewed By: chip
Commit a4a48c3
Port v5.1 Extra Stats: Pages Evicted
Summary: Adds output to SHOW ENGINE INNODB STATUS. Displays the total number of pages evicted after read ahead without access. Previously only the rate from the last run was displayed. The new output in SHOW ENGINE INNODB STATUS is: Evicted after read ahead without access: X Test Plan: Jenkins Reviewers: chip Reviewed By: chip
Commit 1482a90
Port v5.1 InnoDB Commit and Rollback Counters
Summary: Added counters for innodb commit and rollback. All of this seems totally redundant with the new perfschema stuff. However, I included it all anyway, so they can be compared. Added to SHOW STATUS: Innodb_transaction_commit_all: Counts all commits (readwrite, readonly and autocommit-readonly). Innodb_transaction_commit_with_undo: Counts commits for which undo was created. Innodb_transaction_rollback_total: Counts rollbacks of transactions. Innodb_transaction_rollback_partial: Counts rollbacks to a savepoint or of one statement. Also, display these in the TRANSACTION section of SHOW ENGINE INNODB STATUS. Test Plan: Jenkins (new test included, passes without --record). Reviewers: chip, jtolmer Reviewed By: jtolmer
Commit 64a2cef
Port v5.1 Extra Stats: Per-Table
Summary: This is a full port of the basic table stats patches from v5.1. Adds full support for table stats for: innodb, myisam, blackhole, heap and partitioned tables. Added these to information_schema.table_statistics:
- TABLE_SCHEMA: Database name; mysql is moving to 'SCHEMA' instead of database.
- TABLE_NAME: Name of table.
- TABLE_ENGINE: Engine for the table (InnoDB, MyISAM, ...).
- ROWS_INSERTED: Total number of rows inserted.
- ROWS_UPDATED: Total number of rows updated.
- ROWS_DELETED: Total number of rows deleted.
- ROWS_READ: Total number of rows read.
- ROWS_REQUESTED: Number of requests for rows. This is >= ROWS_READ as some requests don't return a row.
- IO_READ_BYTES: Total bytes read.
- IO_READ_REQUESTS: Total read requests.
- IO_READ_SVC_USECS: Total microseconds spent doing reads.
- IO_READ_SVC_USECS_MAX: Max microseconds doing a single read.
- IO_READ_WAIT_USECS: Total microseconds waiting for reads.
- IO_READ_WAIT_USECS_MAX: Max microseconds waiting for a single read.
- IO_READ_OLD_IOS: Number of reads resulting in "slow" ios.
- IO_WRITE_BYTES: Total bytes written.
- IO_WRITE_REQUESTS: Total write requests.
- IO_WRITE_SVC_USECS: Total microseconds spent doing writes.
- IO_WRITE_SVC_USECS_MAX: Max microseconds doing a single write.
- IO_WRITE_WAIT_USECS: Total microseconds waiting for writes.
- IO_WRITE_WAIT_USECS_MAX: Max microseconds waiting for a single write.
- IO_WRITE_OLD_IOS: Number of writes resulting in "slow" ios.
- IO_READ_BYTES_BLOB: Total bytes blob-read.
- IO_READ_REQUESTS_BLOB: Total blob-read requests.
- IO_READ_SVC_USECS_BLOB: Total microseconds spent doing blob-reads.
- IO_READ_SVC_USECS_MAX_BLOB: Max microseconds doing a single blob-read.
- IO_READ_WAIT_USECS_BLOB: Total microseconds waiting for blob-reads.
- IO_READ_WAIT_USECS_MAX_BLOB: Max microseconds waiting for a single blob-read.
- IO_READ_OLD_IOS_BLOB: Number of blob-reads resulting in "slow" ios.
- IO_INDEX_INSERTS: Total number of inserts into index.
- QUERIES_USED: Total number of times a table has been used in an SQL statement.
- ROWS_INDEX_FIRST: Total number of rows fetched on index scans from first cursor use. Excludes rows counted for ROWS_INDEX_NEXT. Excludes rows fetched from table scans.
- ROWS_INDEX_NEXT: Total number of rows fetched on index scans after the first row. This includes rows fetched from index_next and index_prev. Excludes rows fetched from table scans.
Some of these are also added to the slow query log, when log_slow_extra=true. Example output: Query_time: 2.093879 Lock_time: 0.000050 Rows_sent: 346 Rows_examined: 346 Thread_id: 274595 Errno: 0 Killed: 0 Bytes_received: 313 Bytes_sent: 42659 Read_first: 0 Read_last: 0 Read_key: 2 Read_next: 0 Read_prev: 346 Read_rnd: 0 Read_rnd_next: 0 Sort_merge_passes: 0 Sort_range_count: 0 Sort_rows: 0 Sort_scan_count: 0 Created_tmp_disk_tables: 0 Created_tmp_tables: 0 Start: 12:06:36 End: 12:06:39 Reads: 340 Read_time: 1.967283
These special namespaces are used for special sets of stats:
- sys:innodb/doublewrite: Doublewrite buffer disk IO.
- sys:innodb/log: Logging stats.
- sys:innodb/system: System tablespace stats.
- sys:innodb/other: Unknown system stuff (that doesn't map to a real table).
Test Plan: Jenkins. All tests have been run in all builds with Jenkins. All test result changes have been manually verified as correct. All new tests use the results files copied from 5.1 (not just --record), with changes to results only when manually verified to be correct. Reviewers: rongrong, chip Reviewed By: rongrong
Commit 246d2fa
Port v5.1 os_aio_old/slow changes
Summary: Changed OS_AIO_OLD_USECS to a runtime variable innodb_aio_old_usecs. Set seconds before request is marked "slow" from variable (def: 2) to 1. Any read, write or fsync that takes at least that long in service time is counted and reported as an "old" request. Now reports this in SHOW ENGINE INNODB STATUS, like: File flushes: 1216 requests, 12.37 seconds, 10.17 msecs/r, 0 old Test Plan: Jenkins. Reviewers: chip, nizamordulu Reviewed By: chip
Commit c230745
Port v5.1 Extra Stats: Hide BinLog Threads
Summary: Moves binlog threads from Threads_running to Threads_binlog_client. Test Plan: Jenkins Reviewers: chip Reviewed By: chip
Commit 6fc26a7
Port v5.1 IS.User_Stats to 5.6
Summary: Port of all of our 5.1 user stats diffs. Table columns:
- USER_NAME
- Data stats: BINLOG_BYTES_WRITTEN - number of bytes written to the binlog; BYTES_RECEIVED, BYTES_SENT - number of network bytes received or sent
- Command counts: COMMANDS_DDL - number of DDL commands; COMMANDS_DELETE - number of DELETE and TRUNCATE commands; COMMANDS_HANDLER - number of HANDLER commands; COMMANDS_INSERT - number of INSERT, INSERT SELECT, LOAD and REPLACE commands; COMMANDS_OTHER - number of other commands; COMMANDS_SELECT - number of SELECT commands; COMMANDS_TRANSACTION - number of BEGIN, COMMIT, ROLLBACK commands; COMMANDS_UPDATE - number of UPDATE commands
- Connections: CONNECTIONS_CONCURRENT - concurrent connections for this user; CONNECTIONS_DENIED_MAX_GLOBAL - number of connections denied because the global connection limit was reached; CONNECTIONS_DENIED_MAX_USER - number of connections denied because the per-user limit was reached; CONNECTIONS_LOST - number of connections lost rather than properly closed; CONNECTIONS_TOTAL - total number of connections
- IO perf: DISK_READ_BYTES, DISK_READ_REQUESTS, DISK_READ_SVC_USECS, DISK_READ_WAIT_USECS, DISK_READ_BYTES_BLOB, DISK_READ_REQUESTS_BLOB, DISK_READ_SVC_USECS_BLOB, DISK_READ_WAIT_USECS_BLOB
- Error counts: ERRORS_ACCESS_DENIED - number of access denied errors; ERRORS_TOTAL
- Timers: MICROSECONDS_CPU - number of CPU seconds running commands (only accurate when you have the appropriate libs and kernel); MICROSECONDS_RECORDS_IN_RANGE; MICROSECONDS_WALL - number of wallclock seconds running commands; MICROSECONDS_DDL, MICROSECONDS_DELETE, MICROSECONDS_HANDLER, MICROSECONDS_INSERT, MICROSECONDS_OTHER, MICROSECONDS_SELECT, MICROSECONDS_TRANSACTION, MICROSECONDS_UPDATE
- Empty queries: QUERIES_EMPTY - number of queries that return 0 rows
- Expensive calls: RECORDS_IN_RANGE_CALLS
- Row stats: ROWS_DELETED - number of rows deleted; ROWS_FETCHED - number of rows sent back to a user; ROWS_INSERTED - number of rows inserted; ROWS_READ - number of rows read from a table; ROWS_UPDATED - number of rows updated; ROWS_INDEX_FIRST; ROWS_INDEX_NEXT
- Transaction stats: TRANSACTIONS_COMMIT - number of commits, including auto-commit; TRANSACTIONS_ROLLBACK - number of rollbacks
- Query stats: QUERIES_RUNNING, QUERIES_WAITING
Tests added: user_stats_bytes, user_stats_max_conns, user_stats_records_in_range, user_stats_rows_primary, user_stats_rows, user_stats_commands, user_stats_disk, user_stats_errors, user_stats_more, user_stats_rows_fetched, user_stats_rows_secondary, user_stats_transactions, rpl.rpl_user_stats_rows, rpl.rpl_user_stats_bytes. Tests modified: information_schema, sys_vars.max_user_connections_func. Test results changed: information_schema_db, mysqlshow, funcs_1.is_columns_is, funcs_1.is_tables_is. Test Plan: mtr Reviewers: steaphan Reviewed By: steaphan
Commit 118e571
Log changes to 'read_only' and 'super_read_only'
Summary: Logs changes to these variables to the server log. Test Plan: New test main.super_read_only. Unmodified original test main.read_only. Reviewers: rongrong Reviewed By: rongrong
Commit cffd602
Adding logging for durability changes
Summary: Adds logging for changes to sync_binlog and innodb_flush_log_at_trx_commit. The code for sync_binlog was basically copy and paste from the similar change to the read_only variable. In sys_vars.cc the change for sync_binlog required that Sys_sync_binlog_period be extended to use more of the available parameters in order to be able to use ON_UPDATE.
* The first new parameter notes which mutex protects changes to the variable. If no mutex is to be used (as in this case) the parameter should be NO_MUTEX_GUARD.
* The second new parameter designates whether a change to the variable should go into the binary log. In this case the change should not be logged.
* The third new parameter designates which function is to be called when the variable is accessed. In this case the function pointer is NULL.
* The fourth and final new parameter is a function pointer to the new function log_sync_binlog_change, which actually logs changes to this variable.
Unfortunately the code where innodb_flush_log_at_trx_commit is updated, inside of InnoDB, does not have easy access to the functions for dealing with a thread handler. This patch includes a bunch of glue for InnoDB to get back into MySQL to gain information about the user and host. Test Plan: new tests added to confirm sanity and they pass Reviewers: steaphan, rongrong, jtolmer, mcallaghan Reviewed By: steaphan
Commit e5c8ab5
Port binlog_bytes_written status to 5.6
Summary: Add to SHOW GLOBAL STATUS the counter Binlog_bytes_written that counts all (or most) bytes written to the binlog. Also, change per-user counter updates to not use atomic ops as LOCK_log is held during all increments. Test Plan: mysqltest.sh Reviewers: steaphan Reviewed By: steaphan
Commit 3db5967
Porting per session stats from 5.1 to 5.6
Summary: Collect the following status variables (including some new ones) per-session with global rollup: Collect_stats_seconds (new), Command_seconds, Exec_seconds, Open_tables_seconds (renamed), Parse_seconds, Pre_exec_seconds, Read_requests (new), Read_seconds (new). Test Plan: mysqltest.sh Reviewers: steaphan Reviewed By: steaphan
Commit eb5859e
Port per query stats from 5.1 to 5.6
Summary: Make the log-slow-extra counters per query in the slow query log Test Plan: mysqltest.sh Reviewers: steaphan Reviewed By: steaphan
Commit 92c3a2a
Porting more table stats changes from 5.1 to 5.6
Summary: Improved the determinism of some table stats tests. IO stats are collected with the tablespace split between secondary and PK indexes. Adds a queries_empty column to information_schema.table_statistics. Only select queries are considered so far, as with the empty-query counter in IS.user_statistics. Test Plan: mysqltest.sh Reviewers: steaphan Reviewed By: steaphan
Commit e2b9411
port global per page IO stats from 5.1 to 5.6
Summary: Adds stats per page type for all page types defined in storage/innodb_plugin/include/fil0fil.h:131-144 Test Plan: mysqltest.sh global_page_stats Reviewers: jtolmer Reviewed By: jtolmer
Commit b581af4
Porting secondary index to 5.6
Summary: Add counters to show global status for InnoDB. The counters are: Innodb_secondary_index_record_read_check 8878 this is the number of times the visibility check was done for a secondary index access. Innodb_secondary_index_record_read_sees 8878 this is the number of times the leaf page does not require reading the row from the clustered index to determine visibility Test Plan: mysqltest.sh Reviewers: steaphan Reviewed By: steaphan
Commit 84ee91f
Port v5.1 SHOW MASTER LOGS without SUPER privilege
Summary: Makes SHOW MASTER LOGS work for users without SUPER privilege. Test Plan: New test included, passes. Reviewers: chip Reviewed By: chip
Commit 1c1c30d
Port v5.1 process_can_disable_bin_log
Summary: SUPER has been required to do 'set sql_log_bin=...'. This provides an option that PROCESS is sufficient, but only when process_can_disable_bin_log is set. Test Plan: New test included, passes. Reviewers: chip Reviewed By: chip
Commit 44bb61a
Summary: When sql_log_bin is OFF, don't get S row locks during insert into T select from S statements on rows of S. The code already checks for srv_locks_unsafe_for_binlog. In addition, also check for session level sql_log_bin option. Test Plan: New test included, passes. Reviewers: nponnekanti Reviewed By: nponnekanti
Commit 42346f9
Port replication counter stats to 5.6
Summary: Adds Rpl_count* for event counts and Rpl_seconds* for event timers " show global status like "rpl_count%"; Variable_name Value Rpl_count_append_block 0 Rpl_count_begin_load_query 0 Rpl_count_create_file 0 Rpl_count_delete_file 0 Rpl_count_delete_rows 0 Rpl_count_exec_load 0 Rpl_count_execute_load_query 0 Rpl_count_format_description 0 Rpl_count_incident 0 Rpl_count_intvar 0 Rpl_count_load 0 Rpl_count_new_load 0 Rpl_count_other 0 Rpl_count_pre_ga_delete_rows 0 Rpl_count_pre_ga_update_rows 0 Rpl_count_pre_ga_write_rows 0 Rpl_count_query 0 Rpl_count_rand 0 Rpl_count_rotate 0 Rpl_count_slave 0 Rpl_count_start_v3 0 Rpl_count_stop 0 Rpl_count_table_map 0 Rpl_count_unknown 0 Rpl_count_update_rows 0 Rpl_count_user_var 0 Rpl_count_write_rows 0 Rpl_count_xid 0 " show global status like "rpl_seconds%"; Variable_name Value Rpl_seconds_append_block 0.000000 Rpl_seconds_begin_load_query 0.000000 Rpl_seconds_create_file 0.000000 Rpl_seconds_delete_file 0.000000 Rpl_seconds_delete_rows 0.000000 Rpl_seconds_exec_load 0.000000 Rpl_seconds_execute_load_query 0.000000 Rpl_seconds_format_description 0.000000 Rpl_seconds_incident 0.000000 Rpl_seconds_intvar 0.000000 Rpl_seconds_load 0.000000 Rpl_seconds_new_load 0.000000 Rpl_seconds_other 0.000000 Rpl_seconds_pre_ga_delete_rows 0.000000 Rpl_seconds_pre_ga_update_rows 0.000000 Rpl_seconds_pre_ga_write_rows 0.000000 Rpl_seconds_query 0.000000 Rpl_seconds_rand 0.000000 Rpl_seconds_rotate 0.000000 Rpl_seconds_slave 0.000000 Rpl_seconds_start_v3 0.000000 Rpl_seconds_stop 0.000000 Rpl_seconds_table_map 0.000000 Rpl_seconds_unknown 0.000000 Rpl_seconds_update_rows 0.000000 Rpl_seconds_user_var 0.000000 Rpl_seconds_write_rows 0.000000 Rpl_seconds_xid 0.000000 Test Plan: mysqltest.sh Reviewers: steaphan Reviewed By: steaphan
Commit 08286b9
Port Stats: binlog slave offset to 5.6
Summary: This uses a variation of code from the Google patch to display the binlog name and offset currently downloaded by a replication slave SHOW PROCESSLIST; Id User Host db Command Time State Info 3 root localhost:47190 test Query 1 Writing to net INSERT INTO t1 SELECT * FROM t1 5 root localhost:47194 NULL Binlog Dump 1 NULL slave offset: master-bin.000001 4 Test Plan: mysqltest Reviewers: steaphan Reviewed By: steaphan
Commit 6483211
Port Stats: slow_log_if_rows_examined to 5.6
Summary: add slow_log_if_rows_examined for slow query log Test Plan: mysqltest.sh New test: slow_log_extra_rows_examined_exceed Reviewers: steaphan Reviewed By: steaphan
Commit d456588
Port Per-Tablespace Mutex Contention Fix to 5.6
Summary: Use a separate mutex and hash table for per-tablespace stats. This is done to reduce mutex contention on fil_system->mutex as that mutex is briefly locked before every page read and we don't want to make those operations wait. The new hash is fil_system->stats_hash and it uses an array of mutexes. Entries are searched by tablespace ID. There is a small window right after a tablespace is created and right before it is deleted when there might not be an entry in stats_hash for a valid tablespace. The hash can be searched via the function fil_stats_get_by_id. Test Plan: Tested with mysqltest.sh and passed in both release and debug modes. Reviewers: steaphan Reviewed By: steaphan
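The lock-striping idea above (an array of mutexes guarding a stats hash keyed by tablespace ID) can be sketched as follows. This is a minimal illustration, not the server's code: the stripe count, the dict-based stats records and the lock-return convention are assumptions; only the name fil_stats_get_by_id comes from the summary.

```python
import threading

N_STRIPES = 64  # assumed stripe count; the real array size may differ

stats_hash = {}                                        # space_id -> stats record
stats_mutexes = [threading.Lock() for _ in range(N_STRIPES)]

def fil_stats_get_by_id(space_id):
    """Look up per-tablespace stats without touching a single global mutex.

    Returns (stats, lock) with the stripe lock held; the caller releases it
    when done. stats may be None in the small window after a tablespace is
    created or before it is deleted, as noted in the summary.
    """
    lock = stats_mutexes[space_id % N_STRIPES]  # hash ID onto one stripe
    lock.acquire()
    return stats_hash.get(space_id), lock
```

Because two lookups for different tablespace IDs usually land on different stripes, page reads on distinct tablespaces no longer serialize on one mutex the way they would on fil_system->mutex.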
Commit 2940cda
Port Fuzzy Checkpoint Limits to 5.6
Summary: Adds the Innodb my.cnf variable innodb_sync_checkpoint_limit. This is a percentage and is used to override the sync and async checkpoint limits. This is ignored when set to 0. Those limits determine when dirty pages are flushed to enforce fuzzy checkpoint constraints. Making this greater than 80 might reduce the dirty page flush rate. Assume that max_size is the sum of the transaction log file sizes, then the current code sets the sync checkpoint limit to 0.9 * 0.9 * (15/16) of max_size (about 75% of it) and the async limit to 0.9 * 0.9 * (7/8) of max size (about 70% of it). When this is set to a non-zero value, then the sync limit is set to (sync_checkpoint_limit/100) * 0.95 of max_size and the async limit is set to (sync_checkpoint_limit/100) * 0.90 of max_size. The impact of this is that you get the performance benefits of a larger transaction log file without making it larger. " Below, "age" is Innodb_lsn_current - Innodb_lsn_oldest Also adds SHOW STATUS counters: Innodb_preflush_async_limit - limit at which async page flushes are done for fuzzy checkpoint Innodb_preflush_sync_limit - limit at which sync page flushes are done for fuzzy checkpoint Innodb_preflush_async_margin - age minus preflush_async_limit Innodb_preflush_sync_margin - age minus preflush_sync_limit Innodb_checkpoint_lsn - LSN of last checkpoint Innodb_lsn_current - current LSN Innodb_lsn_current_minus_oldest - current LSN minus oldest LSN for which there is a dirty page in the buffer pool Innodb_lsn_current_minus_last_checkpoint Innodb_lsn_oldest - min of the oldest modification LSN from all dirty pages Innodb_purged_pages - number of pages processed by trx_purge Test Plan: mysqltest.sh Reviewers: steaphan Reviewed By: steaphan
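The limit arithmetic described above can be written out directly. This is a sketch of the calculation only (the helper name and return convention are illustrative, not from the source tree); the constants are the ones quoted in the summary.

```python
def checkpoint_limits(max_size, sync_checkpoint_limit=0):
    """Return (async_limit, sync_limit) in bytes.

    max_size is the sum of the transaction log file sizes.
    sync_checkpoint_limit is the new my.cnf percentage; 0 means
    "use the stock InnoDB limits".
    """
    if sync_checkpoint_limit == 0:
        # Stock limits: about 70% (async) and 75% (sync) of max_size.
        async_limit = 0.9 * 0.9 * (7.0 / 8.0) * max_size
        sync_limit = 0.9 * 0.9 * (15.0 / 16.0) * max_size
    else:
        # Override: scale directly by the configured percentage.
        frac = sync_checkpoint_limit / 100.0
        async_limit = frac * 0.90 * max_size
        sync_limit = frac * 0.95 * max_size
    return async_limit, sync_limit
```

For example, with sync_checkpoint_limit=80 the async and sync limits become 72% and 76% of the combined log file size, versus roughly 70% and 75% with the stock code, so the effective checkpoint headroom can be tuned without resizing the logs.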
Commit dd377be
Port Stats: Log -vs- Double-Write to 5.6
Summary: Adds separate counters for log and doublewrite IO. Add separate performance counters for log and doublewrite writes as these have different characteristics then page IO. This adds new SHOW GLOBAL STATUS counters Innodb_data_log_write_requests 530 Innodb_data_log_write_bytes 9830912 Innodb_data_log_write_svc_seconds 0.010076 Innodb_data_double_write_requests 19 Innodb_data_double_write_bytes 8568832 Innodb_data_double_write_svc_seconds 0.314883 This adds 2 lines to the FILE IO section of SHOW INNODB STATUS Log writes: 535 requests, 0 old, 18386.06 bytes/r, svc: 0.01 secs, 0.02 msecs/r, 3.31 max msecs, wait: 0.01 secs 0.02 msecs/r, 3.31 max msecs Doublewrite buffer writes: 20 requests, 0 old, 421888.00 bytes/r, svc: 0.01 secs, 0.30 msecs/r, 0.81 max msecs, wait: 0.01 secs 0.30 msecs/r, 0.81 max msecs Test Plan: mysqltest.sh Reviewers: steaphan Reviewed By: steaphan
Commit dad6a4d
Port Ignore innodb_thread_concurrency for Replication Thread to 5.6
Summary: Ignore innodb_thread_concurrency for the SQL replication thread. Set trx->always_enter_innodb for the SQL replication thread in the Innodb handler. Ignore the thread queue in srv_conc_enter_innodb when it is set. Give it more tickets to avoid more of the expensive checks in srv_conc_enter_innodb Test Plan: mysqltest.sh Reviewers: steaphan Reviewed By: steaphan
Commit 91ad928
Port Fix Locality of trx_ts to 5.6
Summary: Previously, trx_t's were allocated at random heap locations. This causes performance issues when iterating over all open transactions (such as when creating a new transaction's list of visible transactions). This change allocates larger blocks of trx_t's and tracks the next usable trx_t with a singly linked list. As blocks are filled, new blocks are allocated. This results in a 50% improvement in performance in the 2000 transaction test case. Test Plan: All unit tests pass Reviewers: chip, steaphan Reviewed By: chip
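The allocation scheme above (contiguous blocks of trx structs handed out through a singly linked free list) can be sketched like this. The block size and the dict-based stand-in for trx_t are assumptions for illustration; the point is that live transactions end up clustered in memory instead of at random heap addresses.

```python
class TrxPool:
    BLOCK_SIZE = 1024  # trx structs per block (assumed value)

    def __init__(self):
        self.blocks = []     # keeps every allocated block alive
        self.free_list = []  # stack of reusable trx slots (the "linked list")

    def _new_block(self):
        # Allocate one contiguous block of trx slots and put them
        # all on the free list.
        block = [{"in_use": False} for _ in range(self.BLOCK_SIZE)]
        self.blocks.append(block)
        self.free_list.extend(block)

    def alloc(self):
        if not self.free_list:
            self._new_block()  # current blocks are full: grow by one block
        trx = self.free_list.pop()
        trx["in_use"] = True
        return trx

    def free(self, trx):
        # Slots are recycled, never returned to the heap, so iterating
        # all open transactions walks a few dense blocks.
        trx["in_use"] = False
        self.free_list.append(trx)
```

Recycling through the free list is what gives the locality win: a scan over open transactions (e.g. building a new transaction's visibility list) touches a handful of cache-friendly blocks rather than pointer-chasing across the heap.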
Commit 3bd6c63
Summary: Uses "slow_ios" in place of "old_ios" in SHOW STATUS, SHOW INNODB STATUS and IS tables because the results are for slow IOs as defined here. * slow IOs are requests for which the service time is too long (innodb_aio_slow_usecs) * old IOs are requests that wait too long to be serviced (innodb_aio_old_usecs). This is only possible for async reads and async writes. Adds the my.cnf variable innodb_aio_slow_usecs. File reads, writes and fsyncs that take longer than this are counted as "slow_ios". Changes the default value for innodb_aio_old_usecs from 2 seconds to 0.5 seconds. Adds SHOW STATUS counters: * data_fsync_max_seconds - slowest fsync for InnoDB * data_fsync_slow - number of slow fsyncs These count the number of operations that are slow: * data_async_read_slow_ios * data_sync_read_slow_ios * data_async_write_slow_ios * data_sync_write_slow_ios * data_log_write_slow_ios * data_double_write_slow_ios These count the number of operations that wait too long in request queues for background IO threads: * data_async_read_old_ios * data_async_write_old_ios Test Plan: mysqltest.sh Reviewers: steaphan Reviewed By: steaphan
Commit 5f9cf36
Port Optimizations for os_aio_thread_buffer to 5.6
Summary: Make the number of slots in the IO array a function of innodb_io_capacity. Use os_aio_thread_buffer and os_aio_thread_buffer_size to reduce malloc/free calls. Increment the slot pointer rather than calling os_aio_array_get_nth_slots. Use one pass instead of 2 on the request array to determine the oldest and lowest request. Use one pass instead of a nested loop on the array to determine the requests to be merged. Test Plan: mysqltest.sh Reviewers: steaphan, jtolmer Reviewed By: jtolmer
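The single-pass scan mentioned above (finding both the oldest request and the one at the lowest offset in one loop) is straightforward to sketch. The slot fields here are illustrative stand-ins, not InnoDB's actual struct members.

```python
def pick_request(slots):
    """One pass over the AIO slot array: return (oldest, lowest_offset).

    Replaces two separate scans with a single loop that tracks both
    candidates at once; unreserved slots are skipped.
    """
    oldest = lowest = None
    for s in slots:
        if not s["reserved"]:
            continue
        if oldest is None or s["reservation_time"] < oldest["reservation_time"]:
            oldest = s
        if lowest is None or s["offset"] < lowest["offset"]:
            lowest = s
    return oldest, lowest
```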
Commit 5a4c16f
Port per page type table stats, and fix native_aio data stats
Summary: Read/write per-page stats for tablespaces are collected from os_aio; global per-page stats are collected in a slightly different place (buf_page_io_complete), but these numbers should still be comparable. Native aio stats are only collected to update the IO_READ_XXX and IO_WRITTEN_XXX columns in table_stats; os_async_read_perf and os_async_write_perf are used in the innodb status dump. Test Plan: added test: show_table_stats_per_page_type added test: innodb.innodb_aio_stats Reviewers: rongrong Reviewed By: rongrong
Commit 48af57e
Replication IO thread retries more frequently
Summary: Change get_master_version_and_clock so that it returns a retryable error code when queries to the master fail, regardless of the failure. It will still return a non-retryable error if the master query succeeds but returns data the slave views as bad. Test Plan: mtr Reviewers: steaphan Reviewed By: steaphan
Commit 0cc3917
Summary: Stats added to table_stats: COMPRESSED_PAGE_SIZE COMPRESS_OPS COMPRESS_OPS_OK COMPRESS_PRIMARY_OPS COMPRESS_PRIMARY_OPS_OK COMPRESS_USECS COMPRESS_OK_USECS COMPRESS_PRIMARY_USECS COMPRESS_PRIMARY_OK_USECS UNCOMPRESS_OPS UNCOMPRESS_USECS Stats added to show global status: buffer_pool_pages_unzip_lru, zip_{1024/2048/4096/8192/16384}_compressed, zip_{1024/2048/4096/8192/16384}_compressed_ok, zip_{1024/2048/4096/8192/16384}_compressed_seconds, zip_{1024/2048/4096/8192/16384}_compressed_ok_seconds, zip_{1024/2048/4096/8192/16384}_compressed_primary, zip_{1024/2048/4096/8192/16384}_compressed_primary_ok, zip_{1024/2048/4096/8192/16384}_compressed_primary_seconds, zip_{1024/2048/4096/8192/16384}_compressed_primary_ok_seconds, zip_{1024/2048/4096/8192/16384}_compressed_secondary, zip_{1024/2048/4096/8192/16384}_compressed_secondary_ok, zip_{1024/2048/4096/8192/16384}_compressed_secondary_seconds, zip_{1024/2048/4096/8192/16384}_compressed_secondary_ok_seconds, zip_{1024/2048/4096/8192/16384}_decompressed, zip_{1024/2048/4096/8192/16384}_decompressed_seconds, zip_{1024/2048/4096/8192/16384}_decompressed_primary, zip_{1024/2048/4096/8192/16384}_decompressed_primary_seconds, zip_{1024/2048/4096/8192/16384}_decompressed_secondary, zip_{1024/2048/4096/8192/16384}_decompressed_secondary_seconds Test Plan: mysqltest.sh Reviewers: steaphan Reviewed By: steaphan
Commit: bf47c0a
Make replication and client connection compression configurable.
Summary: Adds the sysvar net_compression_level. Test Plan: mysqltest.sh Reviewers: rongrong, mcallaghan Reviewed By: rongrong
Commit: c1c7d8c
Port disable_slave_update_table_stats change to 5.6
Summary: Adds config disable_slave_update_table_stats to disable table stats in slave thread Test Plan: mysqltest.sh Reviewers: steaphan Reviewed By: steaphan
Commit: 2dd83ca
Port unix datagram socket from 5.1 to 5.6
Summary: This implements the same unix datagram socket functionality as 5.0/5.1. Test Plan: All test cases ported, and all pass. Reviewers: steaphan, jtolmer, zshao Reviewed By: jtolmer
Commit: db8eb38
Ported Working Set Size to 5.6
Summary: Added a new DB-level statistics table. Added the statistic 'working_set_size' to that table. Working set size is computed using a modified version of the HyperLogLog algorithm.
Sysbench results:
Without this change: Transactions: 57.71 per sec; Read/Write requests: 1038.86 per sec
With this change: Transactions: 57.05 per sec; Read/Write requests: 1027.24 per sec
Test Plan: MTR tests included for correctness. Performance tested using Sysbench. Reviewers: steaphan, nponnekanti Reviewed By: steaphan
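As a rough illustration of the estimator family named above, here is a minimal textbook HyperLogLog in Python, with the standard small-range linear-counting correction. This is not the server's "modified version" — the hash function, register count, and corrections here are illustrative choices.

```python
import hashlib
import math

def _hash64(value):
    # Stable 64-bit hash of the value's string form (illustrative choice).
    return int.from_bytes(hashlib.sha1(str(value).encode()).digest()[:8], "big")

def hll_estimate(values, b=10):
    """Estimate the number of distinct items using 2^b HyperLogLog registers."""
    m = 1 << b
    registers = [0] * m
    for v in values:
        h = _hash64(v)
        idx = h & (m - 1)                     # low b bits select a register
        w = h >> b                            # remaining 64-b bits
        rank = (64 - b) - w.bit_length() + 1  # position of the first 1-bit
        if rank > registers[idx]:
            registers[idx] = rank
    alpha = 0.7213 / (1 + 1.079 / m)          # bias correction, valid for m >= 128
    raw = alpha * m * m / sum(2.0 ** -r for r in registers)
    zeros = registers.count(0)
    if raw <= 2.5 * m and zeros:              # small-range correction:
        return m * math.log(m / zeros)        # fall back to linear counting
    return raw
```

Memory is fixed (m small registers) regardless of table size, which is why the sysbench overhead quoted above is only ~1%.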
Commit: f2c05a0
Check how max_binlog_cache_size affects LOAD DATA INFILE
Summary: Check whether LOAD DATA INFILE fails when it tries to create a binlog cache with size greater than max_binlog_cache_size. It fails with an ER_TRANS_CACHE_FULL error. Added a test case so that any future modification of the code that changes this error is noticed. Test Plan: Created a dat file greater than 4k, set max_binlog_cache_size to 4k, and ran LOAD DATA INFILE on that file. Added a test in mtr. Reviewers: jtolmer, steaphan, kkwiatkowski Reviewed By: jtolmer
Commit: 192f35c
Fix an infinite loop while reading compressed pages
Summary: After loading a compressed page in buf_page_get_gen() we allocate a new block for decompression. The problem is that the compressed page is neither buffer-fixed nor I/O-fixed by the time we call buf_LRU_get_free_block(), so it may end up being evicted and returned back as a new block. We then call buf_page_hash_get() to see if the page hash has been modified, and indeed it returns NULL because the original page is not in the buffer pool anymore, in which case we go back to read the page again, causing an infinite loop. Details of the bug are at http://bugs.mysql.com/bug.php?id=61132, and the fix is the same as described there: the block is buffer-fixed first so that it cannot be evicted or relocated. Test Plan: arc unit, mtr. Reviewers: jtolmer, nponnekanti, steaphan Reviewed By: jtolmer
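A toy model of the race (invented structures, not InnoDB's buf0buf code) shows why fixing the page first closes the loop: any unpinned page is fair game for eviction, so the compressed page could be handed back as the "free" block for its own decompression.

```python
class Pool:
    """Toy buffer pool: page_no -> fix (pin) count."""
    def __init__(self):
        self.blocks = {}

    def read_page(self, page_no):
        self.blocks[page_no] = 0      # freshly read, not pinned

    def fix(self, page_no):
        self.blocks[page_no] += 1     # buffer-fix: pin against eviction

    def get_free_block(self):
        # Evict any unfixed page, as the LRU would under memory pressure.
        for page_no, fixes in list(self.blocks.items()):
            if fixes == 0:
                del self.blocks[page_no]
                return page_no
        return None

pool = Pool()
pool.read_page(42)                    # compressed page just loaded
pool.fix(42)                          # the fix: pin it *before* allocating
victim = pool.get_free_block()
assert victim is None                 # pinned page can no longer be evicted
assert 42 in pool.blocks
```

Without the `fix(42)` call, `get_free_block()` would evict page 42 and return it, reproducing the loop described above.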
Commit: adadc00
Port additional session stats in SHOW PROCESSLIST
Summary: stats added: - examined rows - sent rows Test Plan: show_processlist.test Reviewers: jtolmer Reviewed By: jtolmer
Commit: 8bdd21d
Port v5.1 fix for bug #60682 to 5.6
Summary: Fix http://bugs.mysql.com/60682, which is a deadlock when InnoDB calls thd_security_context while holding kernel_mutex. The change is to use pthread_mutex_trylock in thd_security_context and display "::BUSY::" for the query text when the lock is not acquired. Test Plan: Added test with reproduction case from bug 60682 Reviewers: jtolmer, steaphan Reviewed By: steaphan
Commit: 2c08050
Make DROP DATABASE replicate safely with FK constraints
Summary: Currently if you issue a DROP DATABASE dbname, MySQL will actually log to the binary log a multi-table DROP TABLE statement. If a database consists of enough tables, this DROP TABLE statement can be split into 2 or more multi-table statements. If a table in the first multi-table DROP TABLE statement has a FK constraint that declares it a parent of a table not included in the DROP TABLE statement, then when a replication slave or binary log playback of the event occurs, MySQL will reject the DROP TABLE statement saying that it would violate the FK constraint. This bug can be avoided either by putting the DROP TABLE statements in a correct order (child tables are dropped before parent tables, and all tables in a circular reference are in one statement) or by setting the OPTION_NO_FOREIGN_KEY_CHECKS flag in Query_log_event, in which case FK references do not matter when deleting parent tables. I used the latter, since the order does not really matter on the slave: the tables were already deleted on the master. Test Plan: Added a test which reproduces the replication bug. Reviewers: jtolmer, lachlan, chip Reviewed By: jtolmer
Commit: 36a220d
Check for a killed connection while building a previous version.
Summary: There are O(n^2) undo page reads on secondary index lookups without any checks for whether the query has been killed. Right now, when this hits in prod mysqld has to be killed. Add a killed check so that the query will exit without having to restart the server. Test Plan: Adds a new test. Reviewers: steaphan Reviewed By: steaphan
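The shape of the fix can be sketched as follows (hypothetical names, not the InnoDB row0vers code; `check_every` amortizes the cost of polling the kill flag across iterations):

```python
class Killed(Exception):
    """Raised when the connection has been killed; aborts the query only."""

def build_prev_version(undo_pages, is_killed, check_every=64):
    # Walk a potentially very long undo chain, but poll the kill flag
    # periodically so the query can be cancelled without restarting mysqld.
    version = None
    for i, page in enumerate(undo_pages):
        if i % check_every == 0 and is_killed():
            raise Killed()
        version = page        # stand-in for applying one undo record
    return version
```

The point of the commit is exactly this: the O(n^2) walk gains a cheap, periodic exit check, so killing the connection ends the query instead of requiring a server restart.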
Commit: 69645eb
Summary: Add more SHOW STATUS counters for monitoring replication:
Relay_log_io_events, Relay_log_io_bytes: number of events and bytes written by the IO thread
Relay_log_sql_events, Relay_log_sql_bytes: number of events and bytes written by the SQL thread
Relay_log_sql_wait_seconds: number of seconds that the SQL thread waits for the IO thread to provide events. This occurs when the master is idle or the network between master/slave is slow.
Test Plan: mysqltest.sh Reviewers: rongrong Reviewed By: rongrong
Commit: e3c31a0
Summary: This adds the my.cnf variable "rpl_read_size" with a default of 8kb. It must be a multiple of 4kb. This sets the size for reads done from the relay log and binlog. Making it larger might help with IO stalls while reading these files when they are not in the OS buffer cache. Test Plan: Ported test case and added sys_var test. Reviewers: jtolmer, steaphan Reviewed By: jtolmer
Commit: c5369e9
Add reset_seconds_behind_master and Relay_log_io_connected
Summary: Adds my.cnf var reset_seconds_behind_master: When TRUE (default value) reset_seconds_behind_master preserves original behavior which is to reset Seconds_Behind_Master to zero when the SQL thread catches up to the IO thread. When FALSE the reset is not done to avoid SBM flip-flops between zero and the real lag. Adds SHOW STATUS value Relay_log_io_connected: The SHOW STATUS value Relay_log_io_connected counts the number of times the IO thread connects to the master. This might detect network problems from frequent disconnects. Test Plan: mtr Reviewers: steaphan Reviewed By: steaphan
Commit: 95dfb4a
Reduce LOCK_log mutex contention during binlog reads.
Summary: This removes acquisition of the LOCK_log mutex during binlog reads. binlog_last_valid_pos is set whenever a binlog event is written to log_file. When binlog dump threads read from the binlog file, EOF is changed to the previous stable position by using get_binlog_last_valid_pos(), which causes them to read only complete, valid events. If there are no further complete events to read, a thread waits for signal_update() before starting to read from the binlog again. Also, when reading from inactive binlogs (binlogs which have already been rotated and to which no events will be written), the mutex is never held. This also happens to fix: http://bugs.mysql.com/61545 Test Plan: mtr. Need to verify the mutex contention when feasible. innodb_stress_tests are passing. Reviewers: jtolmer Reviewed By: jtolmer
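A minimal model of the publish/snapshot idea (illustrative names, not the server's structures): the writer publishes a "last valid position" only after a complete event is in the buffer, and readers treat a snapshot of that position as EOF, so they never see a partially written event and never need the writer's lock.

```python
import threading

class Binlog:
    def __init__(self):
        self.buf = bytearray()
        self.last_valid_pos = 0       # published only after complete events
        self.lock = threading.Lock()  # serializes writers; readers skip it

    def write_event(self, payload: bytes):
        with self.lock:
            self.buf += payload
            # Publish the new stable position once the whole event is in.
            # (In C this would need an atomic store; here the GIL suffices.)
            self.last_valid_pos = len(self.buf)

    def read_events(self, from_pos):
        end = self.last_valid_pos     # snapshot: our private EOF, no lock
        return bytes(self.buf[from_pos:end]), end
```

A reader that reaches its snapshot EOF would then wait for an update signal, mirroring the signal_update() wait described above.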
Commit: 7f9705c
Summary: Adds peak slave lag measurement, 'peak_lag', in 'show slave status'. Test Plan: mysqltest.sh Reviewers: steaphan Reviewed By: steaphan
Commit: 49ecae0
Fix failure of set relay_log_info_repository = 'file'
Summary: set relay_log_info_repository = 'file' fails when the rli file has the old format. This is because the old relay log info format has no number in the first line, so internal_id (used to differentiate multiple rli instances) is initialized to 0 instead of 1 (the default value). This causes the error "Multiple replication metadata repository instances found with data in them" when trying to set the rli repo mode to file. Test Plan: mtr; also built a new rpm, upgraded a test slave to 5.6, and verified that set @@global.relay_log_info_repository='file' works properly without any error. Reviewers: jtolmer, steaphan Reviewed By: jtolmer
Commit: bc267c3
Fix checksum errors in 5.6 slave when 5.6 master is downgraded to 5.1
Summary: When master and slave are running 5.6 with replication on, after downgrading the master to 5.1 and starting replication, the 5.1 master needs to send the 5.6 FDE and other events (since the slave may ask to dump events in the same binlog file as before the downgrade). When the master sends the FDE and RE it needs to recompute the checksum, since the contents of the events are changed (e.g. log_pos is set to 0 in the fde). The 5.1 master doesn't recompute the checksum, causing replication to break with the error "events are corrupt due to network error". Test Plan: Allocated 2 test instances, set up cascading replication, and ran the inplace_upgrade_downgrade rpmtest to see replication working. Ran mysqlrpmtest.sh fully just to ensure no other problems. Reviewers: jtolmer, steaphan Reviewed By: jtolmer
Commit: 44051a2
Test autoinc intvar not persisted across relay-log events on slave
Summary: Adds a test case to check that autoinc intvars are not persisted across relay-log events on the slave. When master and slave have different schemas, in particular different AUTO_INCREMENT columns, INSERT_ID events logged for a given table on the master may be applied to a different table on the slave under SBR. For example, in the following sequence: 1) Insert into table t3, created only on the slave, with an auto-inc column. 2) Insert into table t1, with an auto-inc column on the master but not on the slave. 3) Insert or update into table t2, which has no auto-inc column and has a trigger only on the slave but not on the master; the trigger inserts a row into table t3. 4) The insert into table t3 gets the INSERT_ID from the previous insert into table t1, causing duplicate key errors. The update on t2 causes inconsistency, since the INSERT_ID value from the previous event (insert into t1) was used for inserting into table t3. This is fixed in the 5.6.10 branch. Added a test case to confirm. Test Plan: mtr Reviewers: jtolmer Reviewed By: jtolmer
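The semantics being tested — an Intvar/INSERT_ID event binds to exactly the next statement and is then forgotten, so a slave-side trigger cannot pick up a stale value — can be modeled like this (invented classes, not the replication applier):

```python
class Applier:
    """Toy relay-log applier: INSERT_ID applies to the next statement only."""
    def __init__(self):
        self.pending_insert_id = None

    def apply(self, event):
        kind, value = event
        if kind == "INSERT_ID":
            self.pending_insert_id = value
            return None
        # kind == "STATEMENT": consume the pending id exactly once,
        # then clear it so later statements (e.g. from a slave-side
        # trigger) cannot reuse it.
        used = self.pending_insert_id
        self.pending_insert_id = None
        return used
```

In the buggy behavior described above, the pending id would survive past the first statement; clearing it after one use is what the 5.6.10 fix guarantees.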
Commit: 5df7a35
Port Warn on Purge with Active Slave to 5.6
Summary: Added a function check_log_in_use which checks if a log is being used specifically by a slave. Calls this function inside purge_logs and purge_logs_before_date and pushes a warning in the true case. Included a test plan for this scenario. Test Plan: Ported the test case. Reviewers: steaphan Reviewed By: steaphan
Commit: cf410f6
Port rpl_event_buffer_size to 5.6
Summary: This pre-allocates a buffer per slave connection to avoid calling malloc & free for every event sent to that slave. Events small enough to fit in the buffer avoid the calls. In 5.6.10 the dump thread sends heartbeat events after it skips sending some transactions (if they are already replicated on the slave) and just before sending the next transaction. These heartbeat events make the slave IO thread update the master positions. Currently I am not using any pre-allocated buffer for this type of event, as I feel it is not that necessary right now (and this could block the port). Test Plan: Ported the test case rpl_event_buffer_size.test to 5.6. Added a new test case rpl_event_buffer_size_basic in the sys_vars suite. Reviewers: steaphan Reviewed By: steaphan
Commit: 382ea2c
Set Default metadata_locks_hash_instances=256
Summary: Based on Mark Callaghan's recommendations, changed this default: metadata_locks_hash_instances: 8 -> 256 Updated the test results files (only changes default value displayed). Test Plan: Jenkins www.facebook.com/notes/mysql-at-facebook/my-mysql-is-faster-than-your-mysql/10151250402570933 Reviewers: jtolmer, mcallaghan Reviewed By: jtolmer
Commit: 57ecb0a
Port Compression: Reduce Calls to btr_cur_optimistic_insert to 5.6
Summary: During the record insertion, InnoDB finds out the page the record should be inserted, acquires the latch for that page and attempts to insert the record to the page. If there is not enough room in the page then the b-tree structure must be changed so it releases the page latch and then gets the b-tree latch. It then re-computes the page to which the record should be inserted. Most of the time this second page is the same page as the previously found page and it still doesn't have room for the record insertion. Rarely, another thread (eg. purge thread) gets a hold of the previously found page and deletes some records on that page so that there is room for insertion in InnoDB's second attempt. This diff eliminates the unnecessary attempts to insert the record into the page by checking if the page has been modified since the last time the insertion had been attempted. Test Plan: innodb.innodb_optimistic_insert_race executes two threads: an inserter and a deleter. The inserter thread inserts records in random order while the deleter deletes in random order. There is a debug-only status variable called 'innodb_optimistic_insert_calls_in_pessimistic_descent' which counts the number of times btr_cur_optimistic_insert() needed to be called during the pessimistic descent. This number is around 13 for the numbers hardcoded in this test which is much less than the number of total pessimistic descents (which is in thousands). Reviewers: nizamordulu Reviewed By: nizamordulu
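The check can be sketched with a per-page modify counter in the spirit of InnoDB's page modify clock (the structures below are invented for illustration): remember the counter when the optimistic insert fails, and during the pessimistic descent retry the optimistic insert only if the counter has changed and the page now has room.

```python
class Page:
    def __init__(self):
        self.modify_clock = 0     # bumped on every change to the page
        self.free_space = 0

    def delete_rec(self, size):
        # e.g. the purge thread freeing a record
        self.free_space += size
        self.modify_clock += 1

def pessimistic_insert(page, rec_size, clock_at_failure, split_page):
    # clock_at_failure: modify_clock observed when optimistic insert failed.
    if page.modify_clock != clock_at_failure and page.free_space >= rec_size:
        return "optimistic_retry"   # rare: another thread made room
    return split_page()             # usual case: restructure the B-tree
```

This captures the commit's point: the second optimistic attempt almost always fails for the same reason as the first, so it is skipped unless the page demonstrably changed in between.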
Commit: 72e95f1
Change the default value for innodb_log_compressed_pages to false
Summary: As the title says. Test Plan: Jenkins Reviewers: nizamordulu Reviewed By: nizamordulu
Commit: 10f24ce
Fix changing from gtid-based to position-based replication
Summary: This fixes changing from the gtid-based replication protocol to the position-based replication protocol. When the command CHANGE MASTER TO MASTER_LOG_FILE='', MASTER_LOG_POS='' is executed, it fails with ERROR 1776 (HY000): Parameters MASTER_LOG_FILE, MASTER_LOG_POS, RELAY_LOG_FILE and RELAY_LOG_POS cannot be set when MASTER_AUTO_POSITION is active. The MySQL 5.6.10 documentation says that "Currently, MASTER_AUTO_POSITION does not accept any value other than 1. To revert to the older file-based replication protocol, you can issue a new CHANGE MASTER TO statement that specifies at least one of MASTER_LOG_FILE or MASTER_LOG_POSITION. This statement should not include any MASTER_AUTO_POSITION clause, as discussed previously." But even though MASTER_AUTO_POSITION is not specified in CHANGE MASTER, the previous MASTER_AUTO_POSITION value is fetched. The fix overwrites the old value of MASTER_AUTO_POSITION in such cases, where MASTER_AUTO_POSITION is not specified in CHANGE MASTER but coordinates are specified. Test Plan: mtr Reviewers: jtolmer Reviewed By: jtolmer
Commit: 64ba92e
create new config option enable_gitd_on_new_slave_with_old_master
Summary: Create new config option enable_gitd_on_new_slave_with_old_master. This option is for testing purposes only, for when one needs to set up a new slave from a production master and enable gtid on the new slave. Test Plan: mtr. Also tested on a 5.6 test instance replicating from a production host. Reviewers: jtolmer Reviewed By: jtolmer
Commit: d830046
Set Default table_open_cache_instances=8
Summary: Based on Mark Callaghan's recommendations, change this default: table_open_cache_instances: 1 -> 8. Also fix tests or results to account for the change. Three of the tests I fixed so that they will continue to pass regardless of the value of table-open-cache-instances with which the server is running. Test table_open_cache_instances_basic is hardcoded to the new default value. For rpl_trigger, there is a new warning generated when executing the SELECT from t1. The source of the warning is that the test replays an old binlog where the trigger did not define a definer. The test didn't previously generate a warning because the warning is only generated when opening a table. Previously in the test the replication SQL thread generated the warning, which isn't visible in any way, but the SELECT did not because the table was served out of the open table cache. When the cache instances > 1, the SELECT hits a different cache, does not find the table and has to reopen it; thus, the warning is generated. I've updated the test so that the warning is also generated for cache instances == 1 by flushing the cache before the SELECT. Test Plan: mtr Reviewers: steaphan, mcallaghan Reviewed By: steaphan
Commit: f172125
Port v5.1 InnoChecksum: Page Type
Summary: Adds options to innochecksum: count pages by type (index, allocated, undo log, ...) and by index id. Sample output is:
0 bad checksum
29481088 FIL_PAGE_INDEX
21746699 FIL_PAGE_UNDO_LOG
255501 FIL_PAGE_INODE
59 FIL_PAGE_IBUF_FREE_LIST
525471 FIL_PAGE_TYPE_ALLOCATED
0 FIL_PAGE_IBUF_BITMAP
0 FIL_PAGE_TYPE_SYS
0 FIL_PAGE_TYPE_TRX_SYS
0 FIL_PAGE_TYPE_FSP_HDR
0 FIL_PAGE_TYPE_XDES
277134 FIL_PAGE_TYPE_BLOB
0 FIL_PAGE_TYPE_ZBLOB
0 other
3800 max index_id
undo type: 785 insert, 21745694 update, 220 other
undo state: 0 active, 26 cached, 21170534 to_free, 557793 to_purge, 0 prepared, 18346 other
Also adds the "-u" option to skip corrupt pages.
Test Plan: Don't know a good way to test this. Reviewers: chip Reviewed By: chip
Commit: 8a133e2
Port v5.1 extra page stats for innochecksum
Summary: Add #leaf_pages, #recs_per_page, #data_bytes_per_page, and page_size_histogram to innochecksum. Add per-page details for non-index pages as well, as previous code prints per-page details only for index pages. Test Plan: Tested on Jenkins Server Reviewers: steaphan Reviewed By: steaphan
Commit: 40f3202
Port compressed, and fix uncompressed, page support in innochecksum
Summary: Ports support for compressed pages in innochecksum to 5.6, and fixes innochecksum for uncompressed pages. 1. Makes innochecksum support compressed tables & different page sizes. 2. 5.6 innochecksum didn't work for 5.6 uncompressed pages: it only checked for the old-format checksum rather than both the old and new forms. Fixed that. Test Plan: Ran innochecksum on 5.6 compressed & uncompressed pages. Reviewers: steaphan, nizamordulu Reviewed By: steaphan
Commit: 6bceb58
Port v5.1 tests for non-blocking read-only
Summary: This ports the tests we used to confirm our non-blocking read-only worked. In 5.6, this behavior is included, so we just use these to confirm this. Test Plan: mysqltest.sh read_only_innodb read_only_innodb_basic Reviewers: steaphan Reviewed By: steaphan
Commit: ee8dedf
Port Add --timeout option for mysqldump to 5.6
Summary: Allow per-session control of server variables for mysqldump, thus avoiding session timeouts on write blocking (mostly in NFS environments). This adds a --timeout option, which will set: * net_write_timeout * wait_timeout for mysqldump's connection to the MySQL server. Test Plan: mysqltest.sh Reviewers: steaphan Reviewed By: steaphan
Commit: 107a4b7
Summary: When innodb_flush_method=ALL_O_DIRECT, O_DIRECT is used for both the data and log files. Test Plan: mysqltest.sh Reviewers: steaphan, mcallaghan Reviewed By: steaphan
Commit: 8c30d8f
Allow mysqldump from 5.6 to dump 5.1 tables.
Summary: Do not throw error if variable gtid_mode not found. Test Plan: Manually dumped and restored a 5.1 table Also checked that dumping 5.6 table was unbroken. Reviewers: wultsch Reviewed By: wultsch
Commit: e19fd7c
Add support for MYSQL_SYSVAR_DOUBLE.
Summary: This diff adds support for MYSQL_SYSVAR_DOUBLE. Test Plan: Tested by creating a double sysvar and reading it in a test file. Also, tested on Jenkins. Reviewers: nizamordulu, jtolmer Reviewed By: jtolmer
Commit: b763904
Port Fast Index Creation to 5.6
Summary: Port Percona's patch for fast index creation and a few bug fixes. Adds the --innodb-optimize-keys option to mysqldump: on using this, secondary keys are skipped during table creation and added to the table at the end via an ALTER statement, after all data has been inserted. This can potentially make the restore from a mysqldump quicker, because it is faster to create a key at the end rather than maintain secondary keys on every insert. This can also slightly reduce the disk space used, because there will be less fragmentation in the secondary key trees. Adds the --expand_fast_index_creation option to mysql, which does the same thing as above (skipping secondary keys and adding them later) on any ALTER TABLE statement that requires a copy of the entire data. Test Plan: mtr, with relevant tests added. Dumped and restored an udb database using --innodb-optimize-keys. Checked that both the schema of the ~450 tables and the data were unchanged. Did a perf test by checking the restore time (and disk space) for a 50G database. Reviewers: mcallaghan, steaphan Reviewed By: steaphan
Commit: 68a7fe1
Port Compression: Making empty space reserved dynamic to 5.6
Summary: * segment_reserve_factor determines the ratio of empty pages the last segment in an ibd can have before innodb creates a new extent. If a new page is needed and the last segment has less than segment_reserve_factor * (current number of pages in this segment) empty pages, then a new extent is created and the new page is allocated from the new extent. The empty pages are useful when the B-tree grows, so that pages in the upper level of the B-tree can be allocated contiguously. An extent is 64 pages, and a segment is 256 extents. Before this diff, this variable was not configurable and it was determined by 1/x, where x was a compile-time constant: if less than 1/8th of the pages were empty, then InnoDB would allocate a new extent, leaving many empty pages in the current extent. This caused lots of fragmentation. The ideal value for this variable needs investigation. The current default will have the previous behavior. A segment has 16384 pages, so, for example, if we are okay with leaving at most 16 empty pages per segment, then we can set innodb_segment_reserve_factor=0.001. innodb_segment_reserve_factor.test loads data into two identical tables but sets a lower segment_reserve_factor before loading data into the second table. This makes the second table about 10% smaller than the first table. The default value for this variable is set to 0.01, which should mean 1% of the pages will be allocated for growth. Test Plan: Tested on Jenkins. Reviewers: nizamordulu, jtolmer Reviewed By: nizamordulu
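The arithmetic quoted above checks out: with 256 extents of 64 pages, a segment has 16384 pages, so a factor of 0.001 reserves at most ~16 pages per segment, while the 0.01 default reserves ~163 (about 1%). A quick back-of-envelope check:

```python
PAGES_PER_EXTENT = 64
EXTENTS_PER_SEGMENT = 256
PAGES_PER_SEGMENT = PAGES_PER_EXTENT * EXTENTS_PER_SEGMENT   # 16384

def reserved_pages(factor, pages=PAGES_PER_SEGMENT):
    # Maximum empty pages the last segment may hold before a new
    # extent is created, per the description above.
    return int(factor * pages)
```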
Commit: 3cfd01f
Add innodb_page_cleaner_interval_millis
Summary: This adds a my.cnf option to set the interval at which the page cleaner thread runs. The default is 1000 millis and the range is 100 to 1000; the thread runs at most once per this many millis. Setting it to a value less than 1000 allows smaller values to be used for innodb_io_capacity and innodb_lru_scan_depth, hopefully reducing mutex contention. Note that the page cleaner thread is responsible for flushing dirty pages from the flush and LRU lists; innodb_io_capacity determines how many pages can be flushed from the flush list, and innodb_lru_scan_depth does the same for the LRU list. Those limits are per buffer pool instance, not global.
This has results for sysbench where a query is update 1 row by PK. The database is ~32gb and the buffer pool is 4gb. The host has flash and the test is IO bound. The results are for MySQL 5.6 with a few patches from Oracle to fix stalls. io=X means that innodb_io_capacity and innodb_lru_scan_depth=X.
Updates/second without my change (columns are 8/16/32/64/128/256 concurrent clients):
fil_flush patch, O_DIRECT, io=1024: 15192 20771 24702 24643 24519 24516
fil_flush patch, O_DIRECT, io=8192: 17231 23661 22663 21892 21230 21401
Updates/second with my change and page_cleaner_interval_millis=125:
fil_flush patch, O_DIRECT, io=1024: 17697 24218 26822 27034 27014 27020
fil_flush patch, O_DIRECT, io=2048: 17486 24316 26651 26721 26493 26719
fil_flush patch, O_DIRECT, io=4096: 17575 24035 25749 25505 25439 25231
fil_flush patch, O_DIRECT, io=8192: 17211 23500 25157 24080 23812 24408
Test Plan: mtr Revert Plan: Reviewers: steaphan Reviewed By: steaphan
Commit: b191c2c
Made innodb_max_dirty_pages_pct my.cnf variable a double
Summary: This change is to fix http://bugs.mysql.com/62534. This makes innodb_max_dirty_pages_pct a double with min,default,max values 0.001, 75, 99.999. This also makes innodb_max_dirty_pages_pct_lwm a double, as these sysvars are inter-dependent. Added more to the BUFFER POOL AND MEMORY section of SHOW INNODB STATUS:
Percent pages dirty: 8.939 --> this is n_dirty_pages / used_pages
Percent all pages dirty: 0.195 --> this is n_dirty_pages / all-pages
Max dirty pages percent: 75.000 --> this is innodb_max_dirty_pages_pct
Also changed all of buf from 2 to 3 digits of precision (%.2f -> %.3f). Test Plan: Jenkins Reviewers: jtolmer, nizamordulu Reviewed By: jtolmer
Commit: 6130989
Summary: Tries to submit multiple aio page read requests together to improve read performance. This code adds an array to buffer aio requests in os_aio_array_t. So far only os_aio_read_array uses it. A new parameter (should_buffer) is added to indicate whether an aio request should be buffered or submitted. If should_submit is true, it will submit all buffered aio requests on the os_aio_array. Only buf_read_ahead_linear is modified to utilize this functionality so far. All other call sites are setting should_submit to true. Other os_aio_array_t arrays will also ignore this. If one thread calling buf_read_ahead_linear is buffering io requests but another thread issues a normal os_aio request, that other request will submit all the buffered requests from buf_read_ahead_linear. This is still better than nothing, I suppose. Test Plan: Perf tests were run manually and approved by Yoshinori. Reviewers: steaphan, jtolmer, yoshinori, mcallaghan Reviewed By: steaphan, nizamordulu
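A toy version of the buffering scheme (invented API, not the os0file code): read requests queue up while buffering is requested, and an unbuffered request — or an explicit flush — submits everything queued so far in one batch.

```python
class AioArray:
    """Toy aio array: buffer read requests, then submit them as one batch."""
    def __init__(self, submit_batch):
        self.pending = []
        self.submit_batch = submit_batch   # e.g. io_submit() underneath

    def aio_read(self, page_no, should_buffer=False):
        self.pending.append(page_no)
        if not should_buffer:
            # An unbuffered request drains the queue, matching the behavior
            # described above where a normal request from another thread
            # submits any reads buffered by buf_read_ahead_linear.
            self.flush()

    def flush(self):
        if self.pending:
            self.submit_batch(self.pending)
            self.pending = []
```

Batching lets the kernel see several adjacent reads at once, which is the point of submitting linear read-ahead requests together.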
Commit: 07ee2aa
Add logical read-ahead to InnoDB.
Summary: When the session variable innodb_lra_size is set to N, we issue async read requests for the next M logical pages, where the total size of the M pages on disk is N megabytes. The max allowed value of innodb_lra_size is 16384, which corresponds to prefetching 16GB of data. We may choose to use smaller values in production. When the flashcache is available, the logical read-ahead tells flashcache to not cache the pages it reads. We always sort the page numbers before issuing read requests for them, because sorting is cheap and this way we don't have to rely on the block layer of the linux kernel to sort our read requests. Another advantage is that if the read array is small then the block layer doesn't get a chance to coalesce reads. I added status variables for the number of pages prefetched by logical read-ahead, the number of pages that are missed (a page is missed if we notice that it was not prefetched while doing the scan; this can happen if the b-tree was modified since the last time we prefetched pages), and the number of pages that were already in the buffer pool. These are:
innodb_logical_read_ahead_prefetched
innodb_logical_read_ahead_missed
innodb_logical_read_ahead_in_buf_pool
There are two more session variables that control the behaviour of logical read ahead:
innodb_lra_n_node_recs_before_sleep: determines how many node pointer records should be traversed by logical read ahead before going to sleep.
innodb_lra_sleep: the amount of time (in milliseconds) that logical read ahead sleeps in order to give other transactions a chance to x-latch the index lock.
I had to make the following modifications:
* Persistent cursor can not be restored on a level other than the leaf level. I provided a way to do this, but it only works for PAGE_CUR_LE mode.
* Make btr_pcur_restore_position_func() always re-traverse the B-tree in debug build. This is to test the above functionality in debug build.
Test Plan:
* innodb_logical_read_ahead.test tests that logical read-ahead fetches all necessary pages when the read-ahead size is large enough. It also tests whether the number of asynchronous IO requests made for the table was the same as the number of prefetches done by logical read-ahead.
* Add a unit test where one thread creates a lot of splits and merges on the B-tree while the other scans the table. This is to stress test the correctness of the changes.
* innodb_logical_read_ahead_correctness.test tests the base cases (empty table, table with one row, table with many rows) and a case where pages are merged while the scan is being performed. If the code was not careful in restoring the cursor in row_read_ahead_logical(), or if row_search_for_mysql() did not check the return value of this function, then we could skip a record. If the return value of row_read_ahead_logical() in row_search_for_mysql() is ignored, this test fails.
* Add a stress test for logical read-ahead: the checksum of the table is computed in a separate thread while the table is modified by a bunch of threads. This makes sure that innodb_lra_size works under concurrency.
* Run on a production table without traffic.
* Run on a production shadow with read/write traffic.
Reviewers: rongrong, mcallaghan, steaphan, yoshinori Reviewed By: steaphan
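The prefetch planning described above ("sort the page numbers before issuing read requests... capped by innodb_lra_size megabytes") can be sketched as follows. This is an illustrative model only; the function name and the flat list of page numbers are assumptions, since the real code walks B-tree node pointer pages inside InnoDB:

```python
# Sketch of the logical read-ahead batching idea (hypothetical names; the
# real implementation traverses node pointer records inside InnoDB).
PAGE_SIZE = 16 * 1024  # default InnoDB page size

def plan_read_ahead(leaf_page_numbers, lra_size_mb):
    """Sort the upcoming leaf page numbers and split them into batches whose
    on-disk size stays within lra_size_mb megabytes, so we never rely on the
    kernel block layer to sort or coalesce the read requests for us."""
    budget = (lra_size_mb * 1024 * 1024) // PAGE_SIZE  # pages per batch
    pages = sorted(leaf_page_numbers)
    return [pages[i:i + budget] for i in range(0, len(pages), budget)]
```

With the 16KB default page size, an lra_size of 1 (MB) gives 64-page batches of already-sorted page numbers.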
Commit: ee024c6
-
Add mysqldump support for logical read ahead
Summary: Adds options to mysqldump:
* --lra-size=X
* --lra-sleep=X
* --lra-n-node-recs-before-sleep=X
These just inject SET statements to set the corresponding session variables.
Test Plan: Ran the mysqldump test on mysqld WITHOUT logical_read_ahead:
* --lra-size=5 failed on "SET innodb_lra_size=5": mysqldump: Couldn't execute 'SET innodb_lra_size=5': Unknown system variable 'innodb_lra_size' (1193)
* --lra-size=5 --lra-sleep=10 failed the same way on "SET innodb_lra_size=5".
* --lra-sleep=10 worked (passed nothing); forcing the code to send anyway got: mysqldump: Couldn't execute 'SET innodb_lra_sleep=10': Unknown system variable 'innodb_lra_sleep' (1193)
...and likewise for --lra-n-node-recs-before-sleep=X.
Ran all of this on mysqld WITH logical_read_ahead: all tests passed without error.
Reviewers: nizamordulu Reviewed By: nizamordulu
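Since the commit says the new flags "just inject SET statements", the mapping can be sketched directly. The helper name is hypothetical; the option and variable names come from the commit message:

```python
# Sketch of how the new mysqldump flags map to injected SET statements
# (helper name is hypothetical; variable names are from the commit message).
def lra_set_statements(lra_size=None, lra_sleep=None,
                       lra_n_node_recs_before_sleep=None):
    """Build the session SET statements mysqldump would send before dumping."""
    opts = {
        'innodb_lra_size': lra_size,
        'innodb_lra_sleep': lra_sleep,
        'innodb_lra_n_node_recs_before_sleep': lra_n_node_recs_before_sleep,
    }
    return ['SET %s=%d' % (name, value)
            for name, value in opts.items() if value is not None]
```

On a server without the feature, each injected statement fails with error 1193 (unknown system variable), which is exactly what the test plan above exercised.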
Commit: fd05d7d
-
Port testing of random compression failures
Summary: Provides a method to test against random compression failures. Test Plan: arc unit / Jenkins Reviewers: nizamordulu, rahulgulati Reviewed By: nizamordulu
Commit: e1ff3c4
-
Add support for srv_unzip_LRU_pct and innodb_lru_io_to_unzip_factor.
Summary: This diff adds support for srv_unzip_LRU_pct (which sets the limit on the length of the unzip_LRU expressed as a percentage of the length of the LRU) and innodb_lru_io_to_unzip_factor (The factor to multiply the IO rate so that the cost of it is equivalent to the cost of the rate of page decompression). Test Plan: arc unit / Jenkins Reviewers: jtolmer, nizamordulu Reviewed By: nizamordulu
Commit: c23127a
-
Implement START TRANSACTION WITH CONSISTENT INNODB SNAPSHOT
Summary: Provide a better alternative to FLUSH TABLES WITH READ LOCK for taking backups. See mysqlha.blogspot.com/2009/10/be-careful-with-flush-tables-with-read.html for context. This diff adds START TRANSACTION WITH CONSISTENT INNODB SNAPSHOT, which starts an InnoDB transaction and returns the master binlog filename and offset corresponding to all transactions visible to the InnoDB view for the transaction. When this is used for backups, replication should be restarted at those return values. The new START command returns 1 row with 2 columns (File, Position) on success. In order to get the correct binlog filename and position for the view, it is required to block all commit progress, which is accomplished by acquiring all the locks that are also acquired by MYSQL_BIN_LOG::ordered_commit. Test Plan: Added new mtr tests. I also plan to add a stress test, which tests this feature in combination with binlog group commits happening concurrently, in a follow-up diff. Reviewers: steaphan, hfisk Reviewed By: steaphan
Commit: a73beda
-
Add the value of padding to table statistics in 5.6.
Summary: This diff adds the value of padding to table statistics in 5.6. Test Plan: 1. arc unit (Jenkins) 2. Also tested to see that the value of COMPRESS_PADDING is shown in table statistics. Reviewers: jtolmer, nizamordulu Reviewed By: nizamordulu
Commit: 86e9ffc
-
Make "FACEBOOK" the default crc setting.
Summary: This sets the default of this var so that upgrading to 5.6 will work, without requiring a new my.cnf var to be set. Also added description of this option to the var's info text. Test Plan: Deployed in an rpm, got past this blocker to the next one. Reviewers: chip Reviewed By: chip
Commit: 9ff3d3f
-
Summary: This writes the old fb crcs, to allow downgrade back to 5.1. This can still read both, so just remove this diff after 5.1 is dead. Test Plan: Deployed 5.1 and 5.6 RPMs on a spare, and upgraded/downgraded freely! Jenkins 'arc unit' run likes this too. Reviewers: jtolmer Reviewed By: jtolmer
Commit: b11c652
Commits on Apr 23, 2013
-
Waste less memory on os_event structs
Summary: This fixes http://bugs.mysql.com/62535.
* Allow os_event structs to be packed into their parent structures.
* Split memory-management linked-list overhead out of the event struct.
* Don't allocate linked-list overhead when it won't be used (when packed).
* Pack event structures into mutex and rw_lock structures.
* Adjust event and fast_mutex counts on exit, since rw_locks are never freed.
* Trim down some wasted space within the event structures.
* Add macros to access sub-elements in optimized packed values.
* Share the pthread mutex and condvar data in a pool for packed events. Still allocate dedicated pthread data for each non-packed os_event_t.
* Add the innodb_sync_pool_size sysvar to select the pool size (def: 1024).
Memory usage test results (with 54G buffer pool, from 5.6.10):
61510268 KB - without this change
60266124 KB - with this change
Space savings ~ 1.2 GB
Sysbench results (from 5.6.10), QPS for 24 cores, 22G database, 1G buffer pool, read-only:
8 16 32 64 128 256 threads
51253 100571 164190 165912 159370 170362 without this change
51717 103426 166108 168247 159133 172756 with this change
Test Plan: Passed all unit tests, including the big and huge stress tests. Reviewers: rudradevbasak Reviewed By: rudradevbasak
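As a quick sanity check on the quoted memory numbers (pure arithmetic, nothing MySQL-specific), the KB figures above do work out to the claimed ~1.2 GB:

```python
# Check that the two quoted memory-usage figures differ by ~1.2 GB.
without_change_kb = 61510268   # from the commit message
with_change_kb = 60266124      # from the commit message
saved_gb = (without_change_kb - with_change_kb) / (1024 * 1024)
# saved_gb is roughly 1.19, matching the "~1.2 GB" claim.
```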
Commit: b723a0a
Commits on Apr 29, 2013
-
Port fix for mysqlbinlog reading big records from stdin
Summary: Fixed a mysqlbinlog error when reading big records from stdin. When mysqlbinlog tries to read a big record (over 128KB) from stdin, the glibc read in my_read will return 128KB instead of the full size. This causes _my_b_read to fail. Ported the following change: in mysqlbinlog.cc, enable MY_FULL_IO, and in my_read.c, fix the MY_FULL_IO logic. Test Plan: Ported the test case, which generates an event of size greater than 128KB and runs mysqlbinlog. Reviewers: steaphan Reviewed By: steaphan
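The MY_FULL_IO idea is the classic full-read loop: keep calling read() until the requested byte count arrives, because a single read() on a pipe may legally return less than was asked for. A minimal sketch (names are illustrative, not the my_read.c code):

```python
import io

def read_full(stream, count):
    """Read exactly `count` bytes, looping over short reads (the MY_FULL_IO
    idea): a single read() on a pipe may return less than requested."""
    chunks = []
    remaining = count
    while remaining > 0:
        chunk = stream.read(remaining)
        if not chunk:          # EOF before `count` bytes: stop with what we have
            break
        chunks.append(chunk)
        remaining -= len(chunk)
    return b''.join(chunks)

class ShortReader:
    """Test double that never returns more than 128KB per read(),
    mimicking the glibc pipe behaviour described in the commit."""
    def __init__(self, data):
        self._buf = io.BytesIO(data)
    def read(self, n):
        return self._buf.read(min(n, 128 * 1024))
```

A plain read() against ShortReader comes back short at 131072 bytes, while the loop recovers the full record.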
Commit: f8c37c4
-
Port streaming binlogs to 5.6.11
Summary: Currently 5.6 doesn't support compressed binlogs as input to mysqlbinlog. The original bug link is http://bugs.mysql.com/bug.php?id=49336. This fix allows mysqlbinlog to take multiple compressed files as direct input, rather than first decompressing the input files and passing them to mysqlbinlog.

Ports the following change: 'Fix mysqlbinlog to take streaming file input'. The changes are:
1) Use read_log_event in check_header rather than my_b_seek if it is a streaming file.
2) After check_header, skip start_position - current_position bytes (rather than start_position bytes).

Ports the following fix: 'Handle multiple files being streamed to mysqlbinlog'. 5.6 takes multiple compressed files as input to mysqlbinlog. The changes in the original diff are:
1) When multiple files are streamed, the binlog magic number occurs in the middle of the stream and causes problems. Skip binlog magic numbers that occur in the middle of the stream. Also, do not allow --stop-position with stream input, because we do not know when the last file starts.

Ports the following fix: rpl.rpl_row_mysqlbinlog has 2 tests using --stop-position with stdin. Change them to expect an error so that rpl.rpl_row_mysqlbinlog passes.

Test Plan: Ported the test cases. Reviewers: steaphan Reviewed By: steaphan
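The "skip binlog magic numbers that occur in the middle of the stream" fix can be sketched over concatenated file contents. This is a simplified model (the real code detects the magic while reading events, not by pre-splitting files); only the 4-byte magic value is taken from the binlog format:

```python
BINLOG_MAGIC = b'\xfebin'  # the 4-byte header at the start of every binlog file

def splice_binlog_streams(files):
    """Concatenate several binlog files into one stream for mysqlbinlog,
    dropping the leading magic of every file after the first, since that
    magic would otherwise appear mid-stream and confuse the reader."""
    out = []
    for i, blob in enumerate(files):
        if i > 0 and blob.startswith(BINLOG_MAGIC):
            blob = blob[len(BINLOG_MAGIC):]  # skip mid-stream magic
        out.append(blob)
    return b''.join(out)
```

The spliced stream keeps exactly one magic header at the front, which is why --stop-position cannot be honored: the reader no longer knows where the last file began.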
Commit: 9d571cb
-
Make mysqldump not lock all tables with --single-transaction
Summary: Change the behavior of --single-transaction to use START TRANSACTION WITH CONSISTENT INNODB SNAPSHOT to begin transactions, which returns the binlog file and offset. This is later used if --master-data is specified and becomes part of the output (otherwise it just avoids the lock tables). Test Plan: mtr tests pass. main.mysqldump contains a test for this; check for Bug#12809202. Not sure if more testing is required. Reviewers: jtolmer, steaphan Reviewed By: steaphan
Commit: 09cb719
-
Make default value of binlog_checksum 'NONE'
Summary: Replication from 5.6 to 5.1 fails because of checksum values in binlog events, which are not handled by 5.1. Setting binlog_checksum to NONE as the default value solves the issue. I think this is better than setting binlog_checksum=NONE explicitly in cnf files while setting up replication from 5.6 to 5.1. It would be good to change the default value back to CRC32 once 5.6 is deployed everywhere. Test Plan: Tested replication from a 5.1 slave with a 5.6 master. Reviewers: jtolmer, steaphan Reviewed By: steaphan
Commit: 71e9e5a
-
Two fixes for logical read-ahead feature
Summary: 1) Do not use adaptive hash index for node pointer records. 2) Fix logical read-ahead for debug build. This diff is meant to fix two bugs related to logical read ahead: 1- Compute the depth of B-tree by looking at the height of the root node. Earlier we were searching for a record and reading the value from btr_cursor.tree_height but this field is not set when adaptive hash index is used. 2- When UNIV_DEBUG is enabled trx->lra_sort_arr array was zeroed before it was filled with new page numbers. A previous revision changed the sort array size but I forgot to change the size parameter used for memset(). Test Plan: * mtr Reviewers: yoshinori, steaphan Reviewed By: yoshinori
Commit: 9261f0f
-
Add config option gtid_deployment_step useful for deployment
Summary: With gtid_deployment_step enabled, a host cannot generate gtids on its own, but if gtid-logged events are received through the replication stream from the master, gtids will be logged. Modified logic so that the sql_thread never generates gtids automatically on its own. This diff is likely to be reverted when Oracle adds support for gtid_mode=UPGRADE_STEP_1 & 2, which may be useful for deployment. Test Plan: Tested that replication works in the following scenarios:
1) Both master and slave with gtid_deployment_step=0 and gtid_mode=OFF.
2) Master with gtid_deployment_step=0 and gtid_mode=OFF, slave with gtid_deployment_step=1 and gtid_mode=ON.
3) Both master and slave with gtid_deployment_step=1 and gtid_mode=ON.
4) Master with gtid_deployment_step=0 and gtid_mode=ON, slave with gtid_deployment_step=1 and gtid_mode=ON.
5) Both master and slave with gtid_deployment_step=0 and gtid_mode=ON.
Reviewers: jtolmer Reviewed By: jtolmer
Commit: 39309ee
Commits on May 1, 2013
-
Summary: The initial port of innodb_deadlock_detect was completely broken. Rather than disabling deadlock detection, it just ignored the results of it. This flaw was found and reported by github.com/zhaiwx1987. It is unclear if this is still a useful feature, but this fixes it. Test Plan: Jenkins Reviewers: jtolmer Reviewed By: jtolmer
Commit: eb621d9
Commits on May 7, 2013
-
Port v5.1 Compression: Reduce Log Records
Summary: InnoDB logs the entire compressed page every time the page is compressed/re-compressed in the redo log. These log records contain the compressed data for the entire page image. Most of the time this is unnecessary.

This revision fixes this partially by eliminating the compression log records caused by INSERTs. Note that there will still be some log records that contain the compressed page image, because an INSERT can cause the page to split; when that happens, the data on the page is split in two and placed on two different pages. After this, the compressed images of both pages are logged instead of the INSERT log records to both pages.

Removes the unnecessary compressed page image logging that was performed because of UPDATEs. DELETEs do not incur any compressed page image log records, because they are performed by just marking records as deleted. (Later a purge operation compacts the pages that have holes in them, but that too doesn't cause unnecessary page image log records.) UPDATEs are performed by first marking the existing record as deleted and then inserting a new record, therefore most of the unnecessary page-image logging was already removed.

Moves page_zip_compress_write_log_no_data() and page_zip_parse_compress_no_data() to page0zip.ic. UNIV_INLINE functions must be put in .ic files. It doesn't cause the build to break in debug mode, but it does in opt mode.

When innodb_log_compressed_pages is OFF, we may run into problems during crash recovery because of innodb_compression_level inconsistencies. Two ways this could happen are:
1- innodb_compression_level may be changed by a DBA during runtime. After a crash, we don't really know which pages were compressed using what level of compression.
2- xtrabackup has no way of knowing what level was used to compress pages in PAGE_COMPRESS_NO_DATA entries.
Page compressions happen during page reorganization too, so we also need to log the compression level in PAGE_REORGANIZE log records. To overcome this, we introduce a new log record called MLOG_ZIP_PAGE_REORGANIZE which contains the compression level used during runtime. When an INSERT triggers page re-compression (when the modification log becomes full), InnoDB logs INSERT and ZIP_PAGE_COMPRESS_NO_DATA. During recovery, while processing the INSERT record, we should not re-compress the page even if the INSERT triggers a re-compression, because the compression level we are using might be wrong. We just wait until the next log record to process the page.

This exposes more zlib config parameters as system vars. The most important of these variables is innodb_zlib_wrap, which makes zlib not write adler32 checksums between compressed blocks. The default value for wrapping is OFF now. This should save some calls to adler32 and some space.

Changes are backwards compatible. In other words, it's ok to copy an old compressed server using xtrabackup to a server with the rpm that has this change. This is because I updated the transaction log records to include the value for wrap and the compression strategy in addition to the compression level. This was fairly straightforward, because the same was done earlier with the compression level. Prior to this change, we stored the compression level in a byte. Now, the spare bits in that byte are used to store values for the new options. innodb_zlib_wrap and innodb_zlib_strategy also affect blob compression. It's ok to change these values dynamically; this won't break xtrabackup or crash recovery.

Needed to backport inflateReset2() from a future version of zlib in order to reset a decompression stream without allocating memory for its internal state and while specifying windowBits. This was necessary to automatically determine whether a compressed page was compressed using adler32 checksums or not. There should not be any issues in terms of backwards compatibility with this. After this, a clean build is required for stress tests to pass.

For some reason, zlib's inflate() reads more than just the header if adler32 checksums were not computed. This causes a problem when a blob that spans multiple overflow pages is decompressed. The problem exists for blobs that are contained within a page too, but in that case InnoDB ignores the error code from zlib and assumes that the blob ends after the first page (which is true). Added error messages and debug assertions when a blob truncation is detected. Change innodb_zlib_wrap randomly in stress tests.

Test Plan: Tested on Jenkins (arc unit) Reviewers: jtolmer, nizamordulu, steaphan Reviewed By: nizamordulu, steaphan, jtolmer
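What innodb_zlib_wrap=OFF buys can be shown with stock zlib: a raw deflate stream (negative windowBits) omits the 2-byte zlib header and the 4-byte adler32 trailer, and skips the adler32 computation entirely. This is a demonstration of the zlib behavior only, not of the InnoDB code:

```python
import zlib

# Compare a zlib-wrapped stream against a raw deflate stream with otherwise
# identical parameters: the deflate body is the same, so the raw stream is
# exactly 6 bytes shorter (2-byte header + 4-byte adler32 trailer).
data = b'compressed page image ' * 1000

wrapped_c = zlib.compressobj(6, zlib.DEFLATED, 15)   # wrapped (adler32 on)
raw_c = zlib.compressobj(6, zlib.DEFLATED, -15)      # raw deflate (no adler32)
wrapped = wrapped_c.compress(data) + wrapped_c.flush()
raw = raw_c.compress(data) + raw_c.flush()

saving = len(wrapped) - len(raw)        # header + checksum bytes saved
assert zlib.decompress(raw, -15) == data  # the raw stream still round-trips
```

The per-stream saving is small; the bigger win described in the commit is skipping the adler32 calls themselves.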
Commit: a4dad83
-
Port v5.1 Cache buf_page_t Memory Allocations
Summary: We cache the malloc'd buf_page_t structs in a linked list and reuse those mallocations next time a buf_page_t needs to be malloc'd. Caching is only enabled for the codepaths that use buf_page_alloc_descriptor and buf_page_free_descriptor. The length of the cache is determined by innodb_malloc_cache_len. Test Plan: mysqltest.sh Reviewers: nizamordulu Reviewed By: nizamordulu
Commit: 1594ae8
-
Port v5.1 Cache mem_block_t Memory Allocations
Summary: We modify mem_block_struct to support caching of the mem_block_t objects. Note that mem_heap_t is a linked list of allocated memory blocks.

A call to mem_heap_create_cached() produces a cached heap only if the requested initial memory size doesn't exceed the cache block size. If that is not the case, then none of the blocks in the returned heap is cached. This makes it important to have good cache block size estimates. If the initial block in a heap is cached (i.e. mem_heap_create_cached returns a heap whose heap->cache is not NULL), then consecutive blocks are cached if the consecutively requested memory sizes do not exceed the cache block size (see the change in the implementation of mem_heap_add_block() for details). It's possible for a heap to have some of its blocks cached and some of them not cached. Non-cached blocks will have block->cache=NULL. The rule of thumb is that if block->cache is not NULL, then the block must be put back to the cache when the heap is freed, unless the cache is full. If the cache is full, then the block gets free()'d.

This diff also fixes the memory usage estimations used in page_zip_compress() and page_zip_decompress(). Earlier estimations did not take into account the size of deflate_state and the size of inflate_state respectively. I went through the ZALLOC() calls in deflate.c and inflate.c to obtain the new estimations. Also fix the tests innodb_log_compressed_pages* and innodb_change_compression_level*, because their result files depended on the default value of innodb_log_compressed_pages.

We use the same malloc_cache object to cache both page compressions and blob compressions. Blob compressions allocate 250K whereas page compressions require about 350K. We also use the same malloc_cache for blob decompressions and page decompressions. Blob decompressions allocate 40K and page decompressions allocate around 50K.

Change memLevel for blob compressions from 7 to 8. I made this change because:
1- It's backwards compatible: we don't need to recompress the data in the existing tables.
2- Since we are using malloc_cache, we already have excess memory in cached malloc blocks. By using a higher memLevel we can get better compression for blobs at no cost.

Test Plan: mysqltest.sh, arc unit (Jenkins) Reviewers: nizamordulu Reviewed By: nizamordulu
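The backwards-compatibility claim about memLevel can be checked with stock zlib: memLevel only affects the compressor's working memory (and possibly the ratio), so any standard inflate round-trips output produced at either level. A small demonstration, not InnoDB code:

```python
import zlib

# Data compressed with memLevel=7 and memLevel=8 both decompress with a
# plain inflate, which is why existing blobs need no recompression.
data = b'blob payload ' * 500

def compress_with(mem_level):
    c = zlib.compressobj(6, zlib.DEFLATED, 15, mem_level)
    return c.compress(data) + c.flush()

old_blob = compress_with(7)   # what existing tables hold
new_blob = compress_with(8)   # what new writes produce
assert zlib.decompress(old_blob) == data
assert zlib.decompress(new_blob) == data
```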
Commit: 5db55fd
-
Stabilize main.partition_innodb test
Summary: Replace the old (restart) test stabilization attempt with a new one. This inserts sleeps to ensure we lose the races with the purge thread. Test Plan: mysqltest.sh --repeat=8 --parallel=8 main.partition_innodb(x8) Reviewers: jtolmer, rongrong Reviewed By: rongrong
Commit: 58e9fbc
Commits on May 13, 2013
-
Move InnoDB message from warning to info
Summary: Jenkins hit a failure in innodb.innodb-wl5522-debug-zip because InnoDB generated a warning that, after the server was intentionally SIGKILL'ed as part of the test, it hit a corrupt page. But in this specific case InnoDB will attempt to recover the page from the doublewrite buffer, so there is no need for the message to be a warning. Test Plan: Jenkins Reviewers: steaphan Reviewed By: steaphan
Commit: 5ef7807
-
Add error messages for main.slow_log_extra_big
Summary: main.slow_log_extra_big fails intermittently. We add a few error messages so that the log tells us more when this test fails. Test Plan: Jenkins (arc unit) Reviewers: jtolmer, nizamordulu Reviewed By: jtolmer
Commit: 8af6176
-
Initialize innodb_sync_pool_size, even without mysql.
Summary: Set the initial value of srv_sync_pool_size to 1024. This has no effect on anything in mysql, as it is overwritten on init. However, when innodb is used without mysql (like xtrabackup), it matters. Test Plan: Jenkins Reviewers: rudradevbasak Reviewed By: rudradevbasak
Commit: 0bb21a2
-
Initialize error messages for xtrabackup
Summary: Without this initialization any access to error messages will cause a segfault. Test Plan: Successfully carried out a full backup on admarketdbs Reviewers: steaphan Reviewed By: steaphan
Commit: 4596d7d
-
xtrabackup: Do not FLUSH TABLES WITH READ LOCK on master
Summary: Enable option_no_lock by default. Run mysql_lockall() always for slaves. Test Plan: Will run a couple of backups to verify, but should be safe enough. Apparently we have been making this change every time a new version of xtrabackup was merged into ours. Reviewers: steaphan Reviewed By: steaphan
Commit: 58f3e15
-
Add GTID info to START TRANSACTION WITH CONSISTENT INNODB SNAPSHOT
Summary: Print gtid information with START TRANSACTION WITH CONSISTENT INNODB SNAPSHOT so that it is easier to take backup on slaves when we deploy gtid feature. Test Plan: Added gtid_mode to rpl_innodb_snapshot mtr test case. Use the Gtid_executed output of START TRANSACTION WITH CONSISTENT INNODB SNAPSHOT and run SET @@global.gtid_purged = "gtid_executed" on slave. Finally verify that both master and slave are consistent with each other. Reviewers: jtolmer Reviewed By: jtolmer
Commit: 7b2bb76
-
Oracle's diff for MTS recovery SEGV
Summary: http://bugs.mysql.com/bug.php?id=68506 contains a one line fix for the SEGV hit when running multi-threaded slave. I tested the fix in our 5.6.11 branch and it fixed the problem with Yoshi's repro case. Test Plan: Jenkins Reviewers: sanpra90, yoshinori Reviewed By: sanpra90
Commit: df04aae
-
Back-port Oracle's stop-sleeping patch from 5.6
Summary: This is a fix for: bugs.mysql.com/68588 This is being used as a stop-gap for the fact that this is not fixed: bugs.mysql.com/45892 (Re-opened as: bugs.mysql.com/68555) This should be part of 5.6.12, so we won't need to port this forward. Hopefully bug # 45892/68555 will be fixed in 5.6.12 too. Test Plan: Jenkins Reviewers: jtolmer, yoshinori Reviewed By: jtolmer
Commit: fc14f6c
-
Port v5.1: Skip fsyncs on O_DIRECT
Summary: Oracle's implementation of this (O_DIRECT_NO_FSYNC) is broken. On xfs, it misses required fsync()s when an ibd file changes size. This changes O_DIRECT to skip fsync()s which are not required on xfs, while leaving O_DIRECT_NO_FSYNC unmodified. Test Plan: Jenkins, Yoshinori Reviewers: jtolmer Reviewed By: jtolmer
Commit: e68ec58
-
Port v5.1 Prefix Index Queries Optimization
Summary: Optimize prefix index queries to skip cluster index lookup when possible. Currently InnoDB will always fetch the clustered index (primary key index) for all prefix columns in an index, even when the value of a particular record is smaller than the prefix length. This change optimizes that case to use the record from the secondary index and avoid the extra lookup. Also adds two status vars that track how effective this is: innodb_secondary_index_triggered_cluster_reads: Times secondary index lookup triggered cluster lookup. innodb_secondary_index_triggered_cluster_reads_avoided: Times prefix optimization avoided triggering cluster lookup. Test Plan: Jenkins, all pass. New test included, fails without this change, passes with it. Random toggle added to stress tests, the small ones all pass, and the nightly Jenkins stress tests should all pass too. Reviewers: chip, hfisk Reviewed By: chip
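The prefix-index shortcut above reduces to a single length check. A sketch with a hypothetical data model (the real optimization operates on InnoDB's record format inside the storage engine):

```python
# If the value found in the secondary index is strictly shorter than the
# index prefix length, the secondary record already holds the complete
# value, so the clustered-index (primary key) lookup can be skipped.
def fetch_via_prefix_index(secondary_value, prefix_len, clustered_lookup):
    """Return (value, did_clustered_read)."""
    if len(secondary_value) < prefix_len:
        return secondary_value, False    # value fits entirely in the prefix
    return clustered_lookup(), True      # may be truncated: must look it up
```

Note the check must be strictly less-than: a value exactly prefix_len long may have been truncated, so the clustered lookup is still required, which is what the two status counters above distinguish.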
Commit: 4dd11b5
Commits on May 20, 2013
-
Create xtrabackup_logfile in the supplied target_dir
Summary: Without this change, xtrabackup_logfile is created in the directory specified as tmpdir in my.cnf file. As the logfile can be quite large (>10 GB) the tmpdir is not always the best option. Hence, create xtrabackup_logfile in the supplied target_dir. Test Plan: Checked with lsof | grep xtrabackup_logfile, that the logfile is created in the directory specified with --tmpdir in innobackupex and --target_dir in xtrabackup Reviewers: steaphan Reviewed By: steaphan
Commit: e8d01a8
-
Xtrabackup: Always use default error messages
Summary: Do not use thread specific locales for error messages. Instead always use the default ones. Test Plan: Artificially threw some warnings in xtrabackup. Reviewers: steaphan Reviewed By: steaphan
Commit: 3bbd76e
-
Change binlog index file format to include previous gtid set.
Summary: The format of the binlog index file is changed to include the previous gtid set. This is used during server start-up, where gtid_executed and gtid_purged are initialized without scanning and opening binary logs. The functions find_log_pos and find_next_log are changed so that they skip the previous gtid set (stored as a binary string) in the index file while reading. The new change will handle both old and new formats.

The initialization of the gtid sets (init_gtid_sets()) is called before opening the binlog (open_binlog()) during server start-up. This is done to avoid the extra work of opening the active file again to write the previous gtid log event. This also avoids writing the previous gtid set again in the index file, which would need to be done if init_gtid_sets() were called after open_binlog().

Added an in-memory mapping from binlog name to the previous gtid set in that binlog (previous_gtid_set_map), so that opening binlog files is not necessary for the dump thread when a slave reconnects. Modified the logic in find_first_log_not_in_gtid_set so that it reverse-iterates over the in-memory data structure previous_gtid_set_map.

Test Plan: Added a test case to check that @@gtid_executed and @@gtid_purged are initialized properly on both master and slave after FLUSH LOGS and PURGE BINARY LOGS commands, and also after restarting the server several times. No extra tests are added for the scenario where gtid_mode=OFF and binlog index files are read, since binlog.binlog_index tests all those scenarios. Reviewers: jtolmer Reviewed By: jtolmer
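The reverse iteration in find_first_log_not_in_gtid_set can be sketched over the in-memory map. GTID sets are modeled here as frozensets of transaction ids purely for illustration, and the function name is simplified from the one in the commit:

```python
# Walk the binlogs newest-to-oldest and pick the first one whose
# "previous gtids" the reconnecting slave has already executed: every
# event the slave is missing lies in that binlog or a later one.
def first_log_for_slave(previous_gtid_set_map, slave_executed):
    """previous_gtid_set_map: list of (binlog_name, previous_gtids),
    oldest binlog first, mirroring the index-file order."""
    for name, previous_gtids in reversed(previous_gtid_set_map):
        if previous_gtids <= slave_executed:
            return name
    raise LookupError('required gtids were purged from the binlogs')
```

Because the map is kept in memory, answering a reconnecting slave no longer requires opening any binlog file.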
Commit: d0aad62
-
Fix CREATE TABLE ... LIKE to handle KEY_BLOCK_SIZE
Summary: When doing a CREATE TABLE ... LIKE where the origin table uses both ROW_FORMAT=COMPRESSED and a KEY_BLOCK_SIZE which is not the default, the created table claims that it is using the same key block size as the origin but in fact uses the default. SHOW CREATE TABLE will report an incorrect key block size, but innochecksum will show that the file on disk is actually using the default. I entered this as http://bugs.mysql.com/bug.php?id=66738. I also corrected the output of innochecksum so that it doesn't report things like 4096K. Test Plan: Wrote a new test. Reviewers: rongrong Reviewed By: rongrong
Commit: 911133c
-
Zero-padding trx log write to OS block size
Summary: When writing transaction logs to disk, zero-pad the log up to a full OS block size. A new sysvar OS_FILE_PAD is added and passed down to fil_io for transaction log writes. We only pad writes of actual log entries, excluding writes to log headers, since log headers are reused and updated. Test Plan: Domas tested this manually. Jenkins tests all pass. Reviewers: steaphan, dmituzas Reviewed By: steaphan
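The padding itself is simple round-up-to-block arithmetic. A sketch, assuming a 4096-byte OS block (the sysvar above controls the real value, and the function name is illustrative):

```python
OS_BLOCK_SIZE = 4096  # assumed OS block size for this sketch

def pad_log_write(payload):
    """Zero-pad a transaction-log write up to a full OS block, so the device
    never has to read-modify-write a partial block. Log-header writes are
    left unpadded, since headers are rewritten in place."""
    remainder = len(payload) % OS_BLOCK_SIZE
    if remainder:
        payload += b'\x00' * (OS_BLOCK_SIZE - remainder)
    return payload
```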
Commit: 0fd6a6d
-
Oracle's fix for flush tables breaking replication
Summary: This is a fix for http://bugs.mysql.com/bug.php?id=69045 This fixes the statement flush tables breaking replication. This should be part of 5.6.12, so we don't need to port this forward. Test Plan: Added a new test case. Reviewers: jtolmer Reviewed By: jtolmer
Commit: cc11939
-
Back-port Oracle's fix for CREATE TABLE IF NOT EXISTS
Summary: This fixes the hang when using CREATE TABLE IF NOT EXISTS. This should become part of mysql-5.6.13, when it's released. Test Plan: mtr test changes included, they all pass. Yoshinori will test further. Reviewers: jtolmer Reviewed By: jtolmer
Commit: 4c478e2