Feature merge request #1
Comments
Unfortunately fallocate is not supported within our production systems. We might take a look at this again in the future.
Ah, I see. Even without fallocate it should still work (there are 2-3 ifdefs for this); steps 2-4 will fall back to normal file extension instead.
darnaut pushed a commit that referenced this issue on Jun 15, 2012
PROBLEM: After WL 4144, when using MyISAM MERGE tables, the routine open_and_lock_tables appends to the list of tables to lock the base tables that make up the MERGE table. This has two side-effects in replication:

1. On the master side, we log additional table maps for the base tables, since they appear in the list of locked tables, even though we don't really use them at the slave.
2. On the slave side, when opening a MERGE table while applying a ROW event, additional tables are appended to the list of tables to lock.

Side-effect #1 is not harmful: when using MyISAM MERGE tables, a few extra table maps may be logged. Side-effect #2 is harmful, because the list rli->tables_to_lock is an extended structure from TABLE_LIST in which the extra fields are filled from the table maps that are processed. Since open_and_lock_tables appends tables to the list after all table map events have been processed, we end up with entries that carry no replication/table map data. When the server then tries to access that info for these extra tables, it crashes.

SOLUTION: We fix side-effect #2 by making sure that we access the replication part of the structure only for entries that were accounted for when processing the corresponding table map events. All in all, we never go beyond rli->tables_to_lock_count. We also deploy an assertion when clearing rli->tables_to_lock, making sure that the base tables are no longer in the list (they were closed in close_thread_tables).

--BZR--
revision-id: luis.soares@oracle.com-20120224160743-pt2iw2esti6vvv47
property-branch-nick: mysql-5.5-security
testament3-sha1: 6763d0e86d59a679bdc27ac25a99b6014fb5c080
liangg pushed a commit that referenced this issue on Jun 11, 2013
Twitter DBA audit logging writes the activities of concerned users to the error log. By default, all users are logged, except users holding a new IGNORE LOGGING privilege; that is, users can be dynamically exempted from audit logging by granting them IGNORE LOGGING. Remove the --general_log_super_only general-log superuser filtering and the --general_log_query_error option that was added for MYSQL-146. Remove the log_super_only test and the sys_vars basic tests.

Requirement #1: A new global system variable --twitter_audit_logging sets the audit log level, with valid range [0,2].
Requirement #2: A new privilege IGNORE LOGGING dynamically exempts a user from audit logging. Root users with all privileges are always logged. For example: GRANT IGNORE LOGGING ON *.* TO user_no_log;
Requirement #3: Eligible activities are logged to the error log file with a new twitter_log_write() function. Enabling and disabling twitter audit logging are themselves logged.
Requirement #4: The log line format is as follows, where the timestamp format is YYYY-MM-DD HH:MM:SS:
local_timestamp<tab>user@host_or_ip<tab>result<tab>query_string
Requirement #5: SELECT and SHOW query commands are not logged, unless --twitter_audit_logging = 2.
Requirement #6: The query result is logged in the log line as an error code: 0 for success, the error code on failure.
Requirement #7: User logins, including failures, are logged for users without the IGNORE LOGGING privilege.
liangg pushed a commit that referenced this issue on Jul 26, 2013
Fixed the get_data_size() methods for multi-point features to check properly for the end of their respective data arrays. Extended the point checking function to take a third optional argument so that cases where there is additional data in each array element (besides the point data itself) can be covered by the helper function. Fixed the three cases where such an offset was present to use the proper checking helper function. Test cases added. Fixed review comments.

--BZR--
revision-id: georgi.kodinov@oracle.com-20130328153729-962aiteezh6yerl2
property-audit-revid: georgi.kodinov@oracle.com-20130328153729-llhy2f5dkqkk6zt3
property-branch-nick: B16451878-addendum-5.1
testament3-sha1: 1c72e38efe5d6af9197ae8a793dc797373961865
kevincrane pushed a commit that referenced this issue on Aug 21, 2014
Removed an unused variable. Fixed overly long (>80 character) lines.

--BZR--
revision-id: georgi.kodinov@oracle.com-20140411074230-vguwmcotl790m29k
property-audit-revid: georgi.kodinov@oracle.com-20140411074230-u3o1oh7muy1jv0jf
property-branch-nick: mysql-5.5
testament3-sha1: f11471105bd64387151be8233702466267fce0a4
kevincrane pushed a commit that referenced this issue on Aug 21, 2014
The test case makes use of the DEBUG_SYNC facility. Furthermore, since it needs synchronization on internal threads (the dump and SQL threads), the server code has DEBUG_SYNC commands internally deployed, activated through the DBUG_EXECUTE_IF macro. The internal DEBUG_SYNC commands are then controlled from the test case through the DEBUG variable.

There were three problems around this DEBUG + DEBUG_SYNC usage:

1. When signaling the SQL thread to continue, the test would immediately reset the DEBUG_SYNC variable. The SQL thread might therefore lose the signal and wait forever.
2. A similar scenario happened with the dump thread on the master: it was instructed to wait, later signaled to continue, but immediately afterwards DEBUG_SYNC would be reset. The dump thread could miss the signal and wait forever.
3. The test was not cleaning up its instrumentation of the dump thread, leaving the conditional execution of an internal DEBUG_SYNC command active (through the usage of DBUG_EXECUTE_IF).

We fix #1 and #2 by waiting for the threads to receive the signal and only then issuing the reset. We fix #3 by resetting the DEBUG variable, thus deactivating the dump thread's internal DEBUG_SYNC command.

--BZR--
revision-id: luis.soares@oracle.com-20140626115427-jvt19v8nn2n8bk55
property-audit-revid: luis.soares@oracle.com-20140626115427-tsxaul0zh6l6pajl
property-branch-nick: mysql-5.5
testament3-sha1: 5f8bb46be17109262dbc24ebeadb1d51f26cbae8
After the release of Twitter MySQL on GitHub, I noticed https://github.com/twitter/mysql/wiki/Table-Options, which allows a pre-determined initial size for per-table tablespaces.
I have some code which should complement it pretty well.
Mainly, the code deals with:
1. Use fallocate in os_file_set_size for the initial creation of InnoDB files.
2. Use #1 for extension of ibdata1 files.
3. Use #1 for .ibd files.
4. Ability to set the increment size for extension of .ibd files through a variable.
The changes start at
http://bazaar.launchpad.net/~raghavendra-prabhu/+junk/mysql-server-fallocate/revision/3547
and run up to
http://bazaar.launchpad.net/~raghavendra-prabhu/+junk/mysql-server-fallocate/revision/3550
(commits after that deal with other issues).
The code is merged and up to date with the latest 5.6 bzr pull, and I have tested it myself with good results.
I can send these with git-format-patch + git send-email if you need it.