
PDB-476 Decorate the terminus code with Puppet profiling blocks

This patch adds some select profiling blocks to the PuppetDB terminus code.

The profiler is provided by Puppet core and has recently become public for our
use. In our own utils library we provide a wrapper, Puppet::Util::Puppetdb#profile,
that can be mixed in to the terminus classes.
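The wrapper can be sketched roughly as follows (a minimal sketch, not the real implementation: the actual wrapper lives in Puppet::Util::Puppetdb and delegates to Puppet's own profiler, while here a simple timer stands in so the sketch is self-contained):

```ruby
# Minimal sketch of the profiling mixin. The real wrapper delegates to
# Puppet's profiler; a hand-rolled timer stands in here for illustration.
module ProfilingSketch
  # Wrap a block in a named profiling span and return the block's value.
  def profile(message)
    start = Time.now
    result = yield
    elapsed = Time.now - start
    # A real profiler would report this through Puppet's debug logging.
    (@profile_events ||= []) << ["PuppetDB: #{message}", elapsed]
    result
  end
end

# Hypothetical terminus class showing how the mixin is used.
class FactsTerminus
  include ProfilingSketch

  def save(payload)
    profile "facts#save" do
      payload.size # stand-in for the real command submission
    end
  end
end
```

Because `profile` returns the block's value unchanged, existing method bodies can be wrapped without restructuring.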

Key areas of our terminus functionality are now profiled:

* Entry points, identified by their entry methods (save, find, search, etc.)
* Remote calls (HTTP GETs and POSTs)
* Encoding/decoding code that may become slow at capacity
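The resulting pattern nests blocks, with the entry point wrapping its whole body and each potentially slow step getting its own span. A sketch with a toy stand-in profiler (illustrative only; the terminus mixes in a wrapper around Puppet's real profiler rather than timing blocks by hand):

```ruby
require 'json'

# Toy stand-in for the profiler: records the prefixed message and the
# elapsed time for each block, then returns the block's value.
TIMINGS = []
def profile(message)
  start = Time.now
  result = yield
  TIMINGS << ["PuppetDB: #{message}", Time.now - start]
  result
end

# The entry point wraps its whole body; the potentially slow decode
# step gets its own nested span, mirroring the terminus changes.
def find_facts(body)
  profile "facts#find" do
    profile "Parse fact query response (size: #{body.size})" do
      JSON.parse(body)
    end
  end
end
```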

The message style follows the existing Puppet profiling examples already in
place, so the two read well together. We have also prefixed our profile
messages with "PuppetDB" so they are easy to search for.

I have added a small FAQ entry that briefly explains the debugging process,
although we have nothing in Puppet to link to for a more detailed explanation.
This should be revisited if better documentation becomes available.
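The FAQ's steps boil down to a `puppet.conf` change along these lines (a sketch; the exact section may vary with your setup, and debugging must also be enabled on the master, e.g. by running it with `--debug`):

```ini
# puppet.conf on the master: turn on Puppet's profiler so the
# PuppetDB terminus profile blocks show up in the debug log.
[master]
    profile = true
```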

Signed-off-by: Ken Barber <ken@bob.sh>
1 parent b876cf9 · commit bfc45a0648458599b4e06a604fc55048af3fbb1e · kbarber committed Feb 27, 2014
@@ -138,3 +138,11 @@ The catalog duplication rate can be found on the
heavier I/O load on the database. Refer to the [Troubleshooting Low
Catalog Duplication guide][low_catalog_dupe] for steps to diagnose the
problem.
+
+## My puppet master has been slow since enabling PuppetDB. How can I profile it?
+
+In Puppet 3.x a new profiling capability was introduced, which we have leveraged in the PuppetDB terminus client code. By adding `profile=true` to your `puppet.conf` you can enable detailed profiling of all aspects of Puppet, including the PuppetDB terminus. For this to work you must also enable debugging on your master instance.
+
+Use your common sense, though: any profiling mechanism adds load, which can compound your problems when capacity is already limited. Enable profiling in production only with care and for a short period of time.
+
+All PuppetDB profiling events are prefixed with `PuppetDB:` so they can be easily searched for. This information also helps our developers debug performance issues, so feel free to include it when raising tickets against PuppetDB.

@@ -1,3 +1,4 @@
+require 'uri'
require 'puppet/node/facts'
require 'puppet/indirector/rest'
require 'puppet/util/puppetdb'
@@ -13,40 +14,51 @@ def initialize
end
def save(request)
- facts = request.instance.dup
- facts.values = facts.values.dup
- facts.stringify
- payload = {"name" => facts.name, "values" => facts.values}.to_pson
+ profile "facts#save" do
+ payload = profile "Encode facts command submission payload" do
+ facts = request.instance.dup
+ facts.values = facts.values.dup
+ facts.stringify
+ {"name" => facts.name, "values" => facts.values}.to_json
+ end
- submit_command(request.key, payload, CommandReplaceFacts, 1)
+ submit_command(request.key, payload, CommandReplaceFacts, 1)
+ end
end
def find(request)
- begin
- response = http_get(request, "/v3/nodes/#{CGI.escape(request.key)}/facts", headers)
- log_x_deprecation_header(response)
-
- if response.is_a? Net::HTTPSuccess
- result = JSON.parse(response.body)
- # Note: the Inventory Service API appears to expect us to return nil here
- # if the node isn't found. However, PuppetDB returns an empty array in
- # this case; for now we will just look for that condition and assume that
- # it means that the node wasn't found, so we will return nil. In the
- # future we may want to improve the logic such that we can distinguish
- # between the "node not found" and the "no facts for this node" cases.
- if result.empty?
- return nil
+ profile "facts#find" do
+ begin
+ url = "/v3/nodes/#{CGI.escape(request.key)}/facts"
+ response = profile "Query for nodes facts: #{url}" do
+ http_get(request, url, headers)
end
- facts = result.inject({}) do |a,h|
- a.merge(h['name'] => h['value'])
+ log_x_deprecation_header(response)
+
+ if response.is_a? Net::HTTPSuccess
+ profile "Parse fact query response (size: #{response.body.size})" do
+ result = JSON.parse(response.body)
+ # Note: the Inventory Service API appears to expect us to return nil here
+ # if the node isn't found. However, PuppetDB returns an empty array in
+ # this case; for now we will just look for that condition and assume that
+ # it means that the node wasn't found, so we will return nil. In the
+ # future we may want to improve the logic such that we can distinguish
+ # between the "node not found" and the "no facts for this node" cases.
+ if result.empty?
+ return nil
+ end
+ facts = result.inject({}) do |a,h|
+ a.merge(h['name'] => h['value'])
+ end
+ Puppet::Node::Facts.new(request.key, facts)
+ end
+ else
+ # Newline characters cause an HTTP error, so strip them
+ raise "[#{response.code} #{response.message}] #{response.body.gsub(/[\r\n]/, '')}"
end
- Puppet::Node::Facts.new(request.key, facts)
- else
- # Newline characters cause an HTTP error, so strip them
- raise "[#{response.code} #{response.message}] #{response.body.gsub(/[\r\n]/, '')}"
+ rescue => e
+ raise Puppet::Error, "Failed to find facts from PuppetDB at #{self.class.server}:#{self.class.port}: #{e}"
end
- rescue => e
- raise Puppet::Error, "Failed to find facts from PuppetDB at #{self.class.server}:#{self.class.port}: #{e}"
end
end
@@ -62,40 +74,47 @@ def find(request)
# `operator` may be one of {eq, ne, lt, gt, le, ge}, and will default to 'eq'
# if unspecified.
def search(request)
- return [] unless request.options
- operator_map = {
- 'eq' => '=',
- 'gt' => '>',
- 'lt' => '<',
- 'ge' => '>=',
- 'le' => '<=',
- }
- filters = request.options.sort.map do |key,value|
- type, name, operator = key.to_s.split('.')
- operator ||= 'eq'
- raise Puppet::Error, "Fact search against keys of type '#{type}' is unsupported" unless type == 'facts'
- if operator == 'ne'
- ['not', ['=', ['fact', name], value]]
- else
- [operator_map[operator], ['fact', name], value]
+ profile "facts#search" do
+ return [] unless request.options
+ operator_map = {
+ 'eq' => '=',
+ 'gt' => '>',
+ 'lt' => '<',
+ 'ge' => '>=',
+ 'le' => '<=',
+ }
+ filters = request.options.sort.map do |key,value|
+ type, name, operator = key.to_s.split('.')
+ operator ||= 'eq'
+ raise Puppet::Error, "Fact search against keys of type '#{type}' is unsupported" unless type == 'facts'
+ if operator == 'ne'
+ ['not', ['=', ['fact', name], value]]
+ else
+ [operator_map[operator], ['fact', name], value]
+ end
end
- end
- query = ["and"] + filters
- query_param = CGI.escape(query.to_json)
+ query = ["and"] + filters
+ query_param = CGI.escape(query.to_json)
- begin
- response = http_get(request, "/v3/nodes?query=#{query_param}", headers)
- log_x_deprecation_header(response)
+ begin
+ url = "/v3/nodes?query=#{query_param}"
+ response = profile "Fact query request: #{URI.unescape(url)}" do
+ http_get(request, url, headers)
+ end
+ log_x_deprecation_header(response)
- if response.is_a? Net::HTTPSuccess
- JSON.parse(response.body).collect {|s| s["name"]}
- else
- # Newline characters cause an HTTP error, so strip them
- raise "[#{response.code} #{response.message}] #{response.body.gsub(/[\r\n]/, '')}"
+ if response.is_a? Net::HTTPSuccess
+ profile "Parse fact query response (size: #{response.body.size})" do
+ JSON.parse(response.body).collect {|s| s["name"]}
+ end
+ else
+ # Newline characters cause an HTTP error, so strip them
+ raise "[#{response.code} #{response.message}] #{response.body.gsub(/[\r\n]/, '')}"
+ end
+ rescue => e
+ raise Puppet::Error, "Could not perform inventory search from PuppetDB at #{self.class.server}:#{self.class.port}: #{e}"
end
- rescue => e
- raise Puppet::Error, "Could not perform inventory search from PuppetDB at #{self.class.server}:#{self.class.port}: #{e}"
end
end
@@ -1,6 +1,7 @@
require 'puppet/indirector/rest'
require 'puppet/util/puppetdb'
require 'json'
+require 'uri'
class Puppet::Resource::Puppetdb < Puppet::Indirector::REST
include Puppet::Util::Puppetdb
@@ -11,46 +12,55 @@ def initialize
end
def search(request)
- type = request.key
- host = request.options[:host]
- filter = request.options[:filter]
- scope = request.options[:scope]
-
- # At minimum, we want to filter to the right type of exported resources.
- expr = ['and',
- ['=', 'type', type],
- ['=', 'exported', true],
- ['not',
- ['=', 'certname', host]]]
-
- filter_expr = build_expression(filter)
- expr << filter_expr if filter_expr
-
- query_param = CGI.escape(expr.to_json)
-
- begin
- response = http_get(request, "/v3/resources?query=#{query_param}", headers)
- log_x_deprecation_header(response)
-
- unless response.is_a? Net::HTTPSuccess
- # Newline characters cause an HTTP error, so strip them
- raise "[#{response.code} #{response.message}] #{response.body.gsub(/[\r\n]/, '')}"
+ profile "resource#search" do
+ type = request.key
+ host = request.options[:host]
+ filter = request.options[:filter]
+ scope = request.options[:scope]
+
+ # At minimum, we want to filter to the right type of exported resources.
+ expr = ['and',
+ ['=', 'type', type],
+ ['=', 'exported', true],
+ ['not',
+ ['=', 'certname', host]]]
+
+ filter_expr = build_expression(filter)
+ expr << filter_expr if filter_expr
+
+ query_param = CGI.escape(expr.to_json)
+
+ begin
+ url = "/v3/resources?query=#{query_param}"
+ response = profile "Resources query: #{URI.unescape(url)}" do
+ http_get(request, url, headers)
+ end
+ log_x_deprecation_header(response)
+
+ unless response.is_a? Net::HTTPSuccess
+ # Newline characters cause an HTTP error, so strip them
+ raise "[#{response.code} #{response.message}] #{response.body.gsub(/[\r\n]/, '')}"
+ end
+ rescue => e
+ raise Puppet::Error, "Could not retrieve resources from the PuppetDB at #{self.class.server}:#{self.class.port}: #{e}"
end
- rescue => e
- raise Puppet::Error, "Could not retrieve resources from the PuppetDB at #{self.class.server}:#{self.class.port}: #{e}"
- end
- resources = JSON.load(response.body)
+ resources = profile "Parse resource query response (size: #{response.body.size})" do
+ JSON.load(response.body)
+ end
- resources.map do |res|
- params = res['parameters'] || {}
- params = params.map do |name,value|
- Puppet::Parser::Resource::Param.new(:name => name, :value => value)
+ profile "Build up collected resource objects (count: #{resources.count})" do
+ resources.map do |res|
+ params = res['parameters'] || {}
+ params = params.map do |name,value|
+ Puppet::Parser::Resource::Param.new(:name => name, :value => value)
+ end
+ attrs = {:parameters => params, :scope => scope}
+ result = Puppet::Parser::Resource.new(res['type'], res['title'], attrs)
+ result.collector_id = "#{res['certname']}|#{res['type']}|#{res['title']}"
+ result
+ end
end
- attrs = {:parameters => params, :scope => scope}
- result = Puppet::Parser::Resource.new(res['type'], res['title'], attrs)
- result.collector_id = "#{res['certname']}|#{res['type']}|#{res['title']}"
- result
end
end
@@ -17,7 +17,9 @@
def process
- submit_command(self.host, report_to_hash, CommandStoreReport, 2)
+ profile "report#process" do
+ submit_command(self.host, report_to_hash, CommandStoreReport, 2)
+ end
end
# TODO: It seems unfortunate that we have to access puppet_version and
@@ -44,36 +46,40 @@ def puppet_version
#
# @api private
def report_to_hash
- add_v4_fields_to_report(
- {
- "certname" => host,
- "puppet-version" => puppet_version,
- "report-format" => report_format,
- "configuration-version" => configuration_version.to_s,
- "start-time" => Puppet::Util::Puppetdb.to_wire_time(time),
- "end-time" => Puppet::Util::Puppetdb.to_wire_time(time + run_duration),
- "resource-events" => build_events_list
- })
+ profile "Convert report to wire format hash" do
+ add_v4_fields_to_report(
+ {
+ "certname" => host,
+ "puppet-version" => puppet_version,
+ "report-format" => report_format,
+ "configuration-version" => configuration_version.to_s,
+ "start-time" => Puppet::Util::Puppetdb.to_wire_time(time),
+ "end-time" => Puppet::Util::Puppetdb.to_wire_time(time + run_duration),
+ "resource-events" => build_events_list
+ })
+ end
end
# @api private
def build_events_list
- filter_events(resource_statuses.inject([]) do |events, status_entry|
- _, status = *status_entry
- if ! (status.events.empty?)
- events.concat(status.events.map { |event| event_to_hash(status, event) })
- elsif status.skipped
- events.concat([fabricate_event(status, "skipped")])
- elsif status.failed
- # PP-254:
- # We have to fabricate resource events here due to a bug/s in report providers
- # that causes them not to include events on a resource status that has failed.
- # When PuppetDB is able to make a hard break from older version of Puppet that
- # have this bug, we can remove this behavior.
- events.concat([fabricate_event(status, "failure")])
- end
- events
- end)
+ profile "Build events list (count: #{resource_statuses.count})" do
+ filter_events(resource_statuses.inject([]) do |events, status_entry|
+ _, status = *status_entry
+ if ! (status.events.empty?)
+ events.concat(status.events.map { |event| event_to_hash(status, event) })
+ elsif status.skipped
+ events.concat([fabricate_event(status, "skipped")])
+ elsif status.failed
+ # PP-254:
+ # We have to fabricate resource events here due to a bug/s in report providers
+ # that causes them not to include events on a resource status that has failed.
+ # When PuppetDB is able to make a hard break from older version of Puppet that
+ # have this bug, we can remove this behavior.
+ events.concat([fabricate_event(status, "failure")])
+ end
+ events
+ end)
+ end
end
# @api private
@@ -162,7 +168,9 @@ def add_v4_fields_to_event(resource_status, event_hash)
# @api private
def filter_events(events)
if config.ignore_blacklisted_events?
- events.select { |e| ! config.is_event_blacklisted?(e) }
+ profile "Filter blacklisted events" do
+ events.select { |e| ! config.is_event_blacklisted?(e) }
+ end
else
events
end
