
List fragmented tables #8

Closed
ari opened this Issue · 8 comments

4 participants

@ari

It would be very useful for this nice tool to show which tables are fragmented, not just the count. Perhaps in the recommendations at the bottom it could display the actual SQL to defragment all the relevant tables.
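For illustration, such a recommendation block could simply list one ready-to-run statement per fragmented table; this is a hypothetical sketch and the table names below are made up:

OPTIMIZE TABLE mydb.orders;
OPTIMIZE TABLE mydb.sessions;
OPTIMIZE TABLE logs.access_log;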

@arruor

Hello ari,
Are you sure that listing all fragmented tables is a good idea?
Here is a quick example from one of my developer's servers:
-------- Storage Engine Statistics -------------------------------------------
[--] Status: +Archive -BDB +Federated +InnoDB -ISAM -NDBCluster
[--] Data in MyISAM tables: 1G (Tables: 3049)
[--] Data in InnoDB tables: 1G (Tables: 2508)
[--] Data in FEDERATED tables: 0B (Tables: 3)
[--] Data in MEMORY tables: 0B (Tables: 51)
[!!] Total fragmented tables: 2675

I think that listing all 2600+ table names will be a little bit annoying.

@ari

How are you going to defragment them unless you know which ones are fragmented?

@arruor

Hello ari,
I have written some stored procedures to check for fragmented tables and execute OPTIMIZE TABLE queries.
I should rewrite them as user-defined functions before releasing them to the public domain.

To check which tables are fragmented you could use some query like this one:
SELECT CONCAT_WS('.', TABLE_SCHEMA, TABLE_NAME) AS _table FROM information_schema.TABLES WHERE DATA_FREE > 0 ORDER BY TABLE_SCHEMA ASC, TABLE_NAME ASC;
The result will show table names in the database.table format, ordered by database and table name alphabetically.

If you want to check which tables are fragmented in a specific database, you could use this one:
SELECT TABLE_NAME AS _table FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'database_name' AND DATA_FREE > 0 ORDER BY TABLE_NAME;

Or if you want to check whether a single table is fragmented or not, you could use something like this:
SELECT CONCAT_WS('.', TABLE_SCHEMA, TABLE_NAME) AS _table FROM information_schema.TABLES WHERE TABLE_NAME = 'table_name' AND DATA_FREE > 0 ORDER BY TABLE_SCHEMA;
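Building on the queries above, one could also have information_schema emit the defragmentation statements directly; this is a sketch, not from the original comment, and the filter on the information_schema and mysql schemas is an assumption added so that system tables are skipped:
SELECT CONCAT('OPTIMIZE TABLE ', TABLE_SCHEMA, '.', TABLE_NAME, ';') AS _optimize FROM information_schema.TABLES WHERE DATA_FREE > 0 AND TABLE_SCHEMA NOT IN ('information_schema', 'mysql') ORDER BY TABLE_SCHEMA ASC, TABLE_NAME ASC;
Each resulting row is a ready-to-run OPTIMIZE TABLE statement that can be pasted back into the client or fed to a script.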

@mnikhil-git

Thanks arruor. This could be either a separate script or a switch on the existing Perl script that runs the OPTIMIZE TABLE command against the fragmented tables. I will take this forward as a separate script: by design, mysqltuner should only report and advise, not perform any update/write actions on the mysqld server, and it looks much cleaner to have a separate script deal with the fragmented tables.

On the other hand, I would like to clear up my assumption about running OPTIMIZE TABLE on a fragmented table: would the table file merely become sparse, so the space the file occupies on the filesystem is not actually reduced, and would I have to copy it to some other location and back to reclaim the real disk usage? Is that so? If so, I would add a header noting this in the script. Thanks again for the overall idea.

@mnikhil-git

Perhaps it's better to write the list of fragmented tables per database to a separate text output file so that you have a record of it, rather than necessarily printing it to the console/screen.

@mnikhil-git

@arruor @ari @scop

Hi guys,

Is there a quick method I can use to create fragmented tables in a database, just to simulate a fragmented-tables environment and check/test my Perl script? I just need a test environment (databases with fragmented tables) to perform the preliminary checks...
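One quick way to simulate this (a sketch, assuming a throwaway database named frag_test and the MyISAM engine, where deleted rows leave free space inside the data file) is to fill a table and then delete a large fraction of its rows:

CREATE DATABASE IF NOT EXISTS frag_test;
CREATE TABLE frag_test.t1 (id INT AUTO_INCREMENT PRIMARY KEY, payload VARCHAR(255)) ENGINE=MyISAM;
-- generate filler rows by cross-joining an information_schema view against itself
INSERT INTO frag_test.t1 (payload) SELECT REPEAT('x', 255) FROM information_schema.COLUMNS a, information_schema.COLUMNS b LIMIT 20000;
-- deleting every other row leaves holes in the data file, which shows up as DATA_FREE > 0
DELETE FROM frag_test.t1 WHERE id % 2 = 0;
SELECT DATA_FREE FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'frag_test' AND TABLE_NAME = 't1';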

@mnikhil-git

A simple method I had been using earlier is:

for each database in the database_list:
show table status from database where data_free > 0;
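Spelled out for a single database (a sketch; frag_test here stands in for whichever database name comes from the list):

SHOW TABLE STATUS FROM frag_test WHERE Data_free > 0;

This is the SHOW-statement equivalent of the information_schema query above, restricted to one schema.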

@mnikhil-git

Please feel free to check out, download, and put to the test the basic working version of MySQL-Optimize-Tables.pl, available at https://github.com/mnikhil-git/my-db-tools/blob/master/MySQL-Optimize-Tables.pl

Thanks for the idea, @scop

@major major closed this