Log the tables sizes and statistics with db activity #22665
base: master
```diff
@@ -401,7 +401,7 @@ def table_statistics
       , last_autovacuum AS last_autovacuum_date
       , last_analyze AS last_analyze_date
       , last_autoanalyze AS last_autoanalyze_date
-    FROM pg_stat_all_tables
+    FROM pg_stat_user_tables
     WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
     ORDER BY relname ASC ;
   SQL
```
```diff
@@ -462,6 +462,7 @@ def table_size
     FROM pg_class
     WHERE reltuples > 1
       AND relname NOT LIKE 'pg_%'
+      AND relkind = 'r'
     ORDER BY reltuples DESC
           , relpages DESC ;
   SQL
```

Review comments:

> I made the above two changes to eliminate system tables and only include ordinary tables, not indexes, etc.

> may want to

> Instead of

> sorry - didn't see that. Keep what is there. Just saw the example had a bunch of tables that it seems should not display (as they have 0 rows).
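For context on the `relkind = 'r'` filter: in `pg_class`, `relkind` distinguishes ordinary tables (`'r'`) from indexes (`'i'`), sequences (`'S'`), views (`'v'`), materialized views (`'m'`), and TOAST tables (`'t'`). A minimal Ruby sketch of the combined filter, using made-up sample rows (the real query runs in SQL against `pg_class`):

```ruby
# Hypothetical pg_class-style rows; names and counts are illustrative only.
rows = [
  { relname: "vms",         relkind: "r", reltuples: 120 },
  { relname: "vms_pkey",    relkind: "i", reltuples: 120 }, # index, excluded by relkind
  { relname: "miq_seq",     relkind: "S", reltuples: 1   }, # sequence, excluded by relkind
  { relname: "empty_table", relkind: "r", reltuples: 0   }, # 0 rows, excluded by reltuples
]

# Mirror the WHERE clause: ordinary tables only, with more than one estimated row.
tables = rows.select { |r| r[:relkind] == "r" && r[:reltuples] > 1 }

puts tables.map { |r| r[:relname] } # => ["vms"]
```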
```diff
@@ -113,7 +113,7 @@
     end
   end
 
-  describe ".log_statistics" do
+  describe ".log_activity" do
     before do
       @buffer = StringIO.new
       class << @buffer
@@ -123,9 +123,9 @@ class << @buffer
     end
 
     it "normal" do
-      described_class.log_statistics(@buffer)
+      described_class.log_activity(@buffer)
       lines = @buffer.string.lines
-      expect(lines.shift).to eq "MIQ(VmdbDatabaseConnection.log_statistics) <<-ACTIVITY_STATS_CSV\n"
+      expect(lines.shift).to eq "MIQ(VmdbDatabaseConnection.log_csv) <<-ACTIVITY_STATS_CSV\n"
       expect(lines.pop).to eq "ACTIVITY_STATS_CSV"
 
       header, *rows = CSV.parse lines.join
@@ -154,8 +154,93 @@ class << @buffer
 
     it "exception" do
       allow(described_class).to receive(:all).and_raise("FAILURE")
-      described_class.log_statistics(@buffer)
-      expect(@buffer.string.lines.first).to eq("MIQ(VmdbDatabaseConnection.log_statistics) Unable to log stats, 'FAILURE'")
+      described_class.log_activity(@buffer)
+      expect(@buffer.string.lines.first).to eq("MIQ(VmdbDatabaseConnection.log_activity) Unable to log activity, 'FAILURE'")
     end
   end
```

Review comment on lines +161 to +162 (suggested change):
```diff
+
+  describe ".log_table_size" do
+    before do
+      @buffer = StringIO.new
+      class << @buffer
+        alias_method :info, :write
+        alias_method :warn, :write
+      end
+    end
+
+    it "normal" do
+      described_class.log_table_size(@buffer)
+      lines = @buffer.string.lines
+      expect(lines.shift).to eq "MIQ(VmdbDatabaseConnection.log_csv) <<-TABLE_SIZE_CSV\n"
+      expect(lines.pop).to eq "TABLE_SIZE_CSV"
+
+      header, *rows = CSV.parse lines.join
+      expect(header).to eq %w(
+        table_name
+        rows
+        pages
+        size
+        average_row_size
+      )
+
+      expect(rows.length).to be > 0
+      rows.each do |row|
+        expect(row.first).to be_truthy
+      end
+    end
+
+    it "exception" do
+      allow(ApplicationRecord.connection).to receive(:table_size).and_raise("FAILURE")
+      described_class.log_table_size(@buffer)
+      expect(@buffer.string.lines.first).to eq("MIQ(VmdbDatabaseConnection.log_table_size) Unable to log activity, 'FAILURE'")
+    end
+  end
+
+  describe ".log_table_statistics" do
+    before do
+      @buffer = StringIO.new
+      class << @buffer
+        alias_method :info, :write
+        alias_method :warn, :write
+      end
+    end
+
+    it "normal" do
+      described_class.log_table_statistics(@buffer)
+      lines = @buffer.string.lines
+      expect(lines.shift).to eq "MIQ(VmdbDatabaseConnection.log_csv) <<-TABLE_STATS_CSV\n"
+      expect(lines.pop).to eq "TABLE_STATS_CSV"
+
+      header, *rows = CSV.parse lines.join
+      expect(header).to eq %w(
+        table_name
+        table_scans
+        sequential_rows_read
+        index_scans
+        index_rows_fetched
+        rows_inserted
+        rows_updated
+        rows_deleted
+        rows_hot_updated
+        rows_live
+        rows_dead
+        last_vacuum_date
+        last_autovacuum_date
+        last_analyze_date
+        last_autoanalyze_date
+      )
+
+      expect(rows.length).to be > 0
+      rows.each do |row|
+        expect(row.first).to be_truthy
+      end
+    end
+
+    it "exception" do
+      allow(ApplicationRecord.connection).to receive(:table_statistics).and_raise("FAILURE")
+      described_class.log_table_statistics(@buffer)
+      expect(@buffer.string.lines.first).to eq("MIQ(VmdbDatabaseConnection.log_table_statistics) Unable to log activity, 'FAILURE'")
+    end
+  end
 end
```
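The specs above fake a logger by aliasing `info` and `warn` to `write` on a `StringIO`, so the code under test can log to an object with a logger-like interface while the test inspects the captured string. A standalone sketch of that trick (no RSpec; the logged line here is copied from the spec's expectation):

```ruby
require "stringio"

buffer = StringIO.new

# Give the StringIO a logger-like interface, as the specs do.
class << buffer
  alias_method :info, :write
  alias_method :warn, :write
end

# Anything logged via #info or #warn lands in the underlying string.
buffer.info("MIQ(VmdbDatabaseConnection.log_csv) <<-TABLE_SIZE_CSV\n")
buffer.warn("table_name,rows\n")

puts buffer.string.lines.first
# => "MIQ(VmdbDatabaseConnection.log_csv) <<-TABLE_SIZE_CSV\n"
```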
Review comments:

> Probably should be

> Do we even need csv? The only reason we had this was because it was easier for the old log scraper from THenn back in the ManageIQ Inc days. I always thought it was weird that the logs had CSV inside them.

> I think it was done as a CSV to avoid having to print the same keys over and over again. If you do a hash, at least half of each row of data will be printing the keys, unless we abbreviate them.

> @Fryguy I'm fine with changing formats. The reason CSV was used was the single header row, instead of repeating the same header values in other structures such as keys in a hash or JSON. As it is, though, the log line limit gets reached, so we can't even log the CSV in a single line and we'll need to split it up anyway.