Puppet Summary is a web interface providing reporting features for Puppet; it replaces the Puppet Dashboard project.

Puppet Summary

This is a simple golang-based project which is designed to offer a dashboard of your current puppet-infrastructure:

  • Listing all known-nodes, and their current state.
  • Viewing the last few runs of a given system.
  • etc.

This project is directly inspired by the puppet-dashboard project; reasons why you might prefer this project include:

  • It is actively maintained.
  • Deployment is significantly simpler.
    • This project only involves deploying a single binary.
  • It allows you to submit metrics to a carbon-receiver.
    • The metrics include a distinct count of each state, allowing you to raise alerts when nodes in the failed state are present.
  • The output can be used for scripting, and automation.
    • All output is available as JSON/XML in addition to human-viewable HTML.

You can get a good idea of what the project does by looking at the online demo.

Puppet Reporting

The puppet-server has integrated support for submitting reports to a central location via HTTP POSTs, and this project is designed to be a target for such submissions:

  • Your puppet-master submits reports to this software.
    • The reports are saved locally, as YAML files, beneath ./reports
    • They are parsed and a simple SQLite database keeps track of them.
  • The SQLite database is used to present a visualization layer.

The reports are expected to be pruned over time, but as the SQLite database only contains a summary of the available data it will not grow excessively.

The software has been reported to cope with 16k reports per day, archiving approximately 27GB of data over 14 days!

Installation

Providing you have a working go-installation you should be able to install this software by running:

go get -u

NOTE: If you've previously downloaded the code this will update your installation to the most recent available version.

If you don't have a golang environment set up you should be able to download a binary for GNU/Linux from the github release page.


Once installed you can launch it directly like so:

$ puppet-summary serve
Launching the server on

If you wish to change the host/port you can do so like this:

$ puppet-summary serve -host -port 4321
Launching the server on

Other sub-commands are described later, or can be viewed via:

$ puppet-summary help

Importing Puppet State

Once you've got an instance of puppet-summary installed and running, the next step is to populate it with report data. The expectation is that you'll update your puppet-server to send reports to it directly, by editing puppet.conf on your puppet-master:

reports = store, http
reporturl = http://localhost:3001/upload
  • If you're running the dashboard on a different host you'll need to use the external IP/hostname here.
  • Once you've changed your master's configuration don't forget to restart the service!

If you don't wish to change your puppet-server initially you can test what it would look like by importing the existing YAML reports from your puppet-master. Something like this should do the job:

# cd /var/lib/puppet/reports
# find . -name '*.yaml' -exec \
   curl --data-binary @{} http://localhost:3001/upload \;
  • That assumes that your reports are located beneath /var/lib/puppet/reports, but that is a reasonable default.
  • It also assumes you're running the puppet-summary instance upon the puppet-master; if you're on a different host remember to change the URI.

Maintenance

Over time your reports will start consuming ever-increasing amounts of disk-space, so they should be pruned. To prune (read: delete) old reports run:

puppet-summary prune -days 7 -prefix ./reports/

That will remove the saved YAML files from disk which are over 7 days old, and it will also remove the associated database entries that refer to them.
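If you'd like to preview which report files a prune would affect before deleting anything, the following sketch lists the YAML files older than seven days (./reports matches the default storage location; the seven-day cutoff mirrors the example above):

```shell
REPORTS=./reports          # default report-storage location
mkdir -p "$REPORTS"        # ensure the directory exists so find doesn't error

# List report files last modified more than 7 days ago - these are the
# files that "puppet-summary prune -days 7 -prefix ./reports/" would delete.
find "$REPORTS" -name '*.yaml' -mtime +7 -print
```

Note that this only previews the on-disk YAML files; the prune sub-command additionally removes the matching database entries.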

If you're happy with the default pruning behaviour you can prune old reports automatically, once per week, without the need to add a cron-job, by launching the server with the -auto-prune flag. This is particularly useful when you're running this software in a container:

puppet-summary serve -auto-prune [options..]

If you don't do this you'll need to add a cronjob to ensure that the prune-subcommand runs regularly.
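If you do rely upon cron, a weekly crontab entry might look like the following sketch (the schedule, retention, and report-prefix path here are illustrative assumptions - match them to your installation):

```
# m h dom mon dow   command
0 3  *   *   0      puppet-summary prune -days 7 -prefix /var/lib/puppet-summary/reports/
```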

Nodes which had previously submitted reports to your puppet-master, and your puppet-summary service, but which have failed to do so recently will be listed in the web-based user-interface in the "orphaned" column. Orphaned nodes will be reaped over time, via the -days option just discussed, but if you wish to clean up removed hosts explicitly you can do so via:

puppet-summary prune -verbose -orphaned

Metrics

If you have a carbon-server running locally you can also submit metrics to it:

puppet-summary metrics \
  -host \
  -port 2003 \
  -prefix puppet.example_com  [-nop]

The metrics include the count of nodes in each state (changed, unchanged, failed, and orphaned) and can be used to raise alerts when things fail. When running with -nop the metrics will be dumped to the console instead of being submitted.
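Carbon receivers accept metrics in the plaintext protocol: one "path value timestamp" line per metric. As a sketch of what a submission looks like (the metric path shown here is an illustrative guess at the prefix/state naming, and carbon.example.com is a placeholder host), you could construct and send one manually:

```shell
# Carbon's plaintext protocol is one line per metric:
#   <metric path> <value> <unix timestamp>
line="puppet.example_com.failed 0 $(date +%s)"
echo "$line"

# Submitting it to a carbon-receiver on port 2003 would look like:
#   echo "$line" | nc -q1 carbon.example.com 2003
```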

Notes On Deployment

If you can run this software upon your puppet-master then that's the ideal: the puppet-master can be configured to upload its reports to the loopback address, and the dashboard itself may be viewed via a reverse-proxy.

The appeal of allowing submissions only from the loopback is that your reverse-proxy can deny access to the upload end-point, ensuring nobody else can submit details. A simple nginx configuration might look like this:

 server {
     listen [::]:80  default ipv6only=off;

     ## Puppet-master is the only host that needs access here;
     ## it is configured to POST to localhost:3001 directly,
     ## so we can disable access via the proxy.
     location /upload {
        deny all;
     }

     ## send all other traffic to the back-end
     location / {
       proxy_pass              http://127.0.0.1:3001;
       proxy_redirect          off;
       proxy_set_header        X-Forwarded-For $remote_addr;
     }
 }

  • Please don't run this application as root.
  • The defaults are sane: YAML files are stored beneath ./reports, and the SQLite database is located at ./ps.db.
    • Both of these values can be changed, but if you change them you'll need to remember to do so for all appropriate sub-commands.
      • For example "puppet-summary serve -db-file ./new.db", "puppet-summary metrics -db-file ./new.db", and "puppet-summary prune -db-file ./new.db".