Merge branch 'jan_2013_external_sources'

Conflicts:
	.gitignore
	Rakefile
commit 897d8b2eb38d930c5c2632185122c561998c6cd6 (2 parents: c87a8c8 + 662a0fe)
Authored by Nick Fagerlund

Showing 46 changed files with 228 additions and 4,821 deletions.

  1. +2 1  .gitignore
  2. +127 58 Rakefile
  3. +0 1  marionette-collective
  4. +17 0 source/_config.yml
  5. +0 1  source/mcollective
  6. +0 99 source/puppetdb/1.1/api/commands.markdown
  7. +0 91 source/puppetdb/1.1/api/index.markdown
  8. +0 71 source/puppetdb/1.1/api/query/curl.markdown
  9. +0 76 source/puppetdb/1.1/api/query/experimental/event.markdown
  10. +0 72 source/puppetdb/1.1/api/query/experimental/report.markdown
  11. +0 310 source/puppetdb/1.1/api/query/tutorial.markdown
  12. +0 62 source/puppetdb/1.1/api/query/v1/facts.markdown
  13. +0 148 source/puppetdb/1.1/api/query/v1/metrics.markdown
  14. +0 61 source/puppetdb/1.1/api/query/v1/nodes.markdown
  15. +0 107 source/puppetdb/1.1/api/query/v1/resources.markdown
  16. +0 39 source/puppetdb/1.1/api/query/v1/status.markdown
  17. +0 38 source/puppetdb/1.1/api/query/v2/fact-names.markdown
  18. +0 111 source/puppetdb/1.1/api/query/v2/facts.markdown
  19. +0 150 source/puppetdb/1.1/api/query/v2/metrics.markdown
  20. +0 181 source/puppetdb/1.1/api/query/v2/nodes.markdown
  21. +0 180 source/puppetdb/1.1/api/query/v2/operators.markdown
  22. +0 69 source/puppetdb/1.1/api/query/v2/query.markdown
  23. +0 182 source/puppetdb/1.1/api/query/v2/resources.markdown
  24. +0 231 source/puppetdb/1.1/api/wire_format/catalog_format.markdown
  25. +0 32 source/puppetdb/1.1/api/wire_format/facts_format.markdown
  26. +0 107 source/puppetdb/1.1/api/wire_format/report_format.markdown
  27. +0 31 source/puppetdb/1.1/community_add_ons.markdown
  28. +0 406 source/puppetdb/1.1/configure.markdown
  29. +0 137 source/puppetdb/1.1/connect_puppet_apply.markdown
  30. +0 132 source/puppetdb/1.1/connect_puppet_master.markdown
  31. BIN  source/puppetdb/1.1/images/perf-dash-large.png
  32. BIN  source/puppetdb/1.1/images/perf-dash-small.png
  33. +0 103 source/puppetdb/1.1/index.markdown
  34. +0 107 source/puppetdb/1.1/install_from_packages.markdown
  35. +0 247 source/puppetdb/1.1/install_from_source.markdown
  36. +0 40 source/puppetdb/1.1/install_via_module.markdown
  37. +0 20 source/puppetdb/1.1/known_issues.markdown
  38. +0 84 source/puppetdb/1.1/maintain_and_tune.markdown
  39. +0 119 source/puppetdb/1.1/postgres_ssl.markdown
  40. +0 93 source/puppetdb/1.1/puppetdb-faq.markdown
  41. +0 537 source/puppetdb/1.1/release_notes.markdown
  42. +0 73 source/puppetdb/1.1/repl.markdown
  43. +0 92 source/puppetdb/1.1/scaling_recommendations.markdown
  44. +0 87 source/puppetdb/1.1/upgrade.markdown
  45. +0 35 source/puppetdb/1.1/using.markdown
  46. +82 0 vendor/bin/git-new-workdir
3  .gitignore
@@ -12,4 +12,5 @@ log
12 12 pdf_output
13 13 pdf_source
14 14 puppetdocs-latest.tar.gz*
15   -.bundle
  15 +.bundle
  16 +externalsources
185 Rakefile
@@ -19,18 +19,86 @@ end
19 19 $LOAD_PATH.unshift File.expand_path('lib')
20 20
21 21 references = %w(configuration function indirection metaparameter report type developer)
  22 +top_dir = Dir.pwd
  23 +
  24 +namespace :externalsources do
  25 +
  26 + # For now, we're using things in the _config.yml, just... because it's there I guess.
  27 + def load_externalsources
  28 + require 'yaml'
  29 + all_config = YAML.load(File.open("source/_config.yml"))
  30 + return all_config['externalsources']
  31 + end
  32 +
  33 + def repo_name(repo_url)
  34 + repo_url.split('/')[-1].sub(/\.git$/, '')
  35 + end
  36 +
  37 + # "Update all working copies defined in source/_config.yml"
  38 + task :update do
  39 + Rake::Task['externalsources:clone'].invoke
  40 + externalsources = load_externalsources
  41 + Dir.chdir("externalsources") do
  42 + externalsources.each do |name, info|
  43 + unless File.directory?(name)
  44 + puts "Making new working directory for #{name}"
  45 + system ("#{top_dir}/vendor/bin/git-new-workdir #{repo_name(info['repo'])} #{name} #{info['commit']}")
  46 + end
  47 + Dir.chdir(name) do
  48 + puts "Updating #{name}"
  49 + system ("git fetch origin && git checkout --force #{info['commit']} && git clean --force .")
  50 + end
  51 + end
  52 + end
  53 + end
  54 +
  55 + # "Clone any external documentation repos (from externalsources in source/_config.yml) that don't yet exist"
  56 + task :clone do
  57 + externalsources = load_externalsources
  58 + repos = []
  59 + externalsources.each do |name, info|
  60 + repos << info['repo']
  61 + end
  62 + Dir.chdir("externalsources") do
  63 + repos.uniq.each do |repo|
  64 + system ("git clone #{repo}") unless File.directory?("#{repo_name(repo)}")
  65 + end
  66 + end
  67 + end
  68 +
  69 + # "Symlink external documentation into place in the source directory"
  70 + task :link do
  71 + Rake::Task['externalsources:clean'].invoke # Bad things happen if any of these symlinks already exist, and Jekyll will run FOREVER
  72 + Rake::Task['externalsources:clean'].reenable
  73 + externalsources = load_externalsources
  74 + externalsources.each do |name, info|
  75 + # Have to use absolute paths for the source, since we have no idea how deep in the hierarchy info['url'] is (and thus how many ../..s it would need).
  76 + FileUtils.ln_sf "#{top_dir}/externalsources/#{name}/#{info['subdirectory']}", "source#{info['url']}"
  77 + end
  78 + end
  79 +
  80 + # "Clean up any external source symlinks from the source directory" # In the current implementation, all external sources are symlinks and there are no other symlinks in the source. This means we can naively kill all symlinks in ./source.
  81 + task :clean do
  82 + system("find ./source -type l -print0 | xargs -0 rm")
  83 + end
  84 +end
22 85
23 86 desc "Generate the documentation"
24 87 task :generate do
  88 + Rake::Task['externalsources:update'].invoke # Create external sources if necessary, and check out the required working directories
  89 + Rake::Task['externalsources:link'].invoke # Link docs folders from external sources into the source at the appropriate places.
  90 +
25 91 system("mkdir -p output")
26 92 system("rm -rf output/*")
27 93 system("mkdir output/references")
28   - Dir.chdir("source")
29   - system("bundle exec jekyll ../output")
  94 + Dir.chdir("source") do
  95 + system("bundle exec jekyll ../output")
  96 + end
  97 +
30 98 Rake::Task['references:symlink'].invoke
31   - Dir.chdir("..")
32   - puts Dir.pwd
33   -
  99 +
  100 + Rake::Task['externalsources:clean'].invoke # The opposite of externalsources:link. Delete all symlinks in the source.
  101 + Rake::Task['externalsources:clean'].reenable
34 102 end
35 103
36 104
@@ -50,23 +118,25 @@ task :generate_pdf do
50 118 system("cp -rf source pdf_source")
51 119 system("cp -rf pdf_mask/* pdf_source") # Copy in and/or overwrite differing files
52 120 # The point being, this way we don't have to maintain separate copies of the actual source files, and it's clear which things are actually different for the PDF version of the page.
53   - Dir.chdir("pdf_source")
54   - system("bundle exec jekyll ../pdf_output")
  121 + Dir.chdir("pdf_source") do
  122 + system("bundle exec jekyll ../pdf_output")
  123 + end
55 124 Rake::Task['references:symlink:for_pdf'].invoke
56   - Dir.chdir("../pdf_output")
57   - pdf_targets = YAML.load(File.open("../pdf_mask/pdf_targets.yaml"))
58   - pdf_targets.each do |target, pages|
59   - system("cat #{pages.join(' ')} > #{target}")
60   - if target == 'puppetdb1.html'
61   - content = File.read('puppetdb1.html')
62   - content.gsub!('-puppetdb-1-install_from_source-html-step-3-option-b-manually-create-a-keystore-and-truststore', '-puppetdb-1-install_from_source-html-step-3-option-b-manuallu-create-a-keystore-and-truststore')
63   - File.open('puppetdb1.html', "w") {|pdd1| pdd1.print(content)}
64   - # Yeah, so, I found the magic string that, when used as an element ID and then
65   - # linked to from elsewhere in the document, causes wkhtmltopdf to think an
66   - # unthinkable thought and corrupt the output file.
67   - # Your guess is as good as mine. #doomed #sorcery #wat
68   - # >:|
69   - # -NF
  125 + Dir.chdir("../pdf_output") do
  126 + pdf_targets = YAML.load(File.open("../pdf_mask/pdf_targets.yaml"))
  127 + pdf_targets.each do |target, pages|
  128 + system("cat #{pages.join(' ')} > #{target}")
  129 + if target == 'puppetdb1.html'
  130 + content = File.read('puppetdb1.html')
  131 + content.gsub!('-puppetdb-1-install_from_source-html-step-3-option-b-manually-create-a-keystore-and-truststore', '-puppetdb-1-install_from_source-html-step-3-option-b-manuallu-create-a-keystore-and-truststore')
  132 + File.open('puppetdb1.html', "w") {|pdd1| pdd1.print(content)}
  133 + # Yeah, so, I found the magic string that, when used as an element ID and then
  134 + # linked to from elsewhere in the document, causes wkhtmltopdf to think an
  135 + # unthinkable thought and corrupt the output file.
  136 + # Your guess is as good as mine. #doomed #sorcery #wat
  137 + # >:|
  138 + # -NF
  139 + end
70 140 end
71 141 end
72 142 # system("cat `cat ../pdf_source/page_order.txt` > rebuilt_index.html")
@@ -74,31 +144,30 @@ task :generate_pdf do
74 144 # system("mv rebuilt_index.html index.html")
75 145 puts "Remember to run rake serve_pdf"
76 146 puts "Remember to run rake compile_pdf (while serving on localhost:9292)"
77   - Dir.chdir("..")
78 147 end
79 148
80 149 desc "Temporary task for debugging PDF compile failures"
81 150 task :reshuffle_pdf do
82 151 require 'yaml'
83   - Dir.chdir("pdf_output")
84   - pdf_targets = YAML.load(File.open("../pdf_mask/pdf_targets.yaml"))
85   - pdf_targets.each do |target, pages|
86   - system("cat #{pages.join(' ')} > #{target}")
87   - if target == 'puppetdb1.html'
88   - content = File.read('puppetdb1.html')
89   - content.gsub!('-puppetdb-1-install_from_source-html-step-3-option-b-manually-create-a-keystore-and-truststore', '-puppetdb-1-install_from_source-html-step-3-option-b-manuallu-create-a-keystore-and-truststore')
90   - File.open('puppetdb1.html', "w") {|pdd1| pdd1.print(content)}
91   - # Yeah, so, I found the magic string that, when used as an element ID and then
92   - # linked to from elsewhere in the document, causes wkhtmltopdf to think an
93   - # unthinkable thought and corrupt the output file.
94   - # Your guess is as good as mine. #doomed #sorcery #wat
95   - # >:|
96   - # -NF
  152 + Dir.chdir("pdf_output") do
  153 + pdf_targets = YAML.load(File.open("../pdf_mask/pdf_targets.yaml"))
  154 + pdf_targets.each do |target, pages|
  155 + system("cat #{pages.join(' ')} > #{target}")
  156 + if target == 'puppetdb1.html'
  157 + content = File.read('puppetdb1.html')
  158 + content.gsub!('-puppetdb-1-install_from_source-html-step-3-option-b-manually-create-a-keystore-and-truststore', '-puppetdb-1-install_from_source-html-step-3-option-b-manuallu-create-a-keystore-and-truststore')
  159 + File.open('puppetdb1.html', "w") {|pdd1| pdd1.print(content)}
  160 + # Yeah, so, I found the magic string that, when used as an element ID and then
  161 + # linked to from elsewhere in the document, causes wkhtmltopdf to think an
  162 + # unthinkable thought and corrupt the output file.
  163 + # Your guess is as good as mine. #doomed #sorcery #wat
  164 + # >:|
  165 + # -NF
  166 + end
97 167 end
98 168 end
99 169 puts "Remember to run rake serve_pdf"
100 170 puts "Remember to run rake compile_pdf (while serving on localhost:9292)"
101   - Dir.chdir("..")
102 171 end
103 172
104 173
@@ -121,10 +190,10 @@ end
121 190 desc "Create tarball of documentation"
122 191 task :tarball do
123 192 tarball_name = "puppetdocs-latest.tar.gz"
124   - FileUtils.cd 'output'
125   - sh "tar -czf #{tarball_name} *"
126   - FileUtils.mv tarball_name, '..'
127   - FileUtils.cd '..'
  193 + FileUtils.cd('output') do
  194 + sh "tar -czf #{tarball_name} *"
  195 + FileUtils.mv tarball_name, '..'
  196 + end
128 197 sh "git rev-parse HEAD > #{tarball_name}.version" if File.directory?('.git') # Record the version of this tarball, but only if we're in a git repo.
129 198 end
130 199
@@ -141,7 +210,7 @@ namespace :references do
141 210
142 211 namespace :symlink do
143 212
144   - desc "Show the versions that will be symlinked"
  213 + # "Show the versions that will be symlinked"
145 214 task :versions do
146 215 require 'puppet_docs'
147 216 PuppetDocs::Reference.special_versions.each do |name, (version, source)|
@@ -149,11 +218,11 @@ namespace :references do
149 218 end
150 219 end
151 220
152   - desc "Symlink the latest & stable directories when generating a flat page for PDFing"
  221 + # "Symlink the latest & stable directories when generating a flat page for PDFing"
153 222 task :for_pdf do
154 223 require 'puppet_docs'
155 224 PuppetDocs::Reference.special_versions.each do |name, (version, source)|
156   - Dir.chdir '../pdf_output/references' do
  225 + Dir.chdir 'pdf_output/references' do
157 226 FileUtils.ln_sf version.to_s, name.to_s
158 227 end
159 228 end
@@ -162,11 +231,11 @@ namespace :references do
162 231
163 232 end
164 233
165   - desc "Symlink the latest & stable directories"
  234 + # "Symlink the latest & stable directories"
166 235 task :symlink do
167 236 require 'puppet_docs'
168 237 PuppetDocs::Reference.special_versions.each do |name, (version, source)|
169   - Dir.chdir '../output/references' do
  238 + Dir.chdir 'output/references' do
170 239 FileUtils.ln_sf version.to_s, name.to_s
171 240 end
172 241 end
@@ -175,7 +244,7 @@ namespace :references do
175 244 namespace :puppetdoc do
176 245
177 246 references.each do |name|
178   - desc "Write references/VERSION/#{name}"
  247 + # "Write references/VERSION/#{name}"
179 248 task name => 'references:check_version' do
180 249 require 'puppet_docs'
181 250 PuppetDocs::Reference::Generator.new(ENV['VERSION'], name).generate
@@ -189,7 +258,7 @@ namespace :references do
189 258
190 259 namespace :index do
191 260
192   - desc "Generate a stub index for VERSION"
  261 + # "Generate a stub index for VERSION"
193 262 task :stub => 'references:check_version' do
194 263 filename = Pathname.new('source/references') + ENV['VERSION'] + 'index.markdown'
195 264 filename.parent.mkpath
@@ -218,9 +287,9 @@ namespace :references do
218 287 end
219 288
220 289 task :fetch_tags do
221   - Dir.chdir("vendor/puppet")
222   - sh "git fetch --tags"
223   - Dir.chdir("../..")
  290 + Dir.chdir("vendor/puppet") do
  291 + sh "git fetch --tags"
  292 + end
224 293 end
225 294
226 295 desc "Update the contents of source/man/{app}.markdown" # Note that the index must be built manually if new applications are added. Also, let's not ever have a `puppet index` command.
@@ -257,10 +326,10 @@ end
257 326
258 327 task :default => :spec
259 328
260   -require 'rdoc/task'
261   -Rake::RDocTask.new do |rdoc|
262   - rdoc.rdoc_dir = 'rdoc'
263   - rdoc.title = "puppet-docs"
264   - rdoc.rdoc_files.include('README*')
265   - rdoc.rdoc_files.include('lib/**/*.rb')
266   -end
  329 +# require 'rdoc/task'
  330 +# Rake::RDocTask.new do |rdoc|
  331 +# rdoc.rdoc_dir = 'rdoc'
  332 +# rdoc.title = "puppet-docs"
  333 +# rdoc.rdoc_files.include('README*')
  334 +# rdoc.rdoc_files.include('lib/**/*.rb')
  335 +# end
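For reference, the new tasks can also be invoked on their own. A minimal usage sketch, assuming Rake is run from the repository root under Bundler (as the Rakefile's own `bundle exec jekyll` calls suggest); note that `rake generate` now runs the update, link, and clean steps itself:

    # Fetch or refresh the external repos and working copies listed in source/_config.yml
    bundle exec rake externalsources:update

    # Symlink each external documentation subdirectory into place under source/
    bundle exec rake externalsources:link

    # Build the site; generate invokes externalsources:update and :link first,
    # and externalsources:clean afterwards
    bundle exec rake generate

    # Remove the symlinks again by hand if needed
    bundle exec rake externalsources:clean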
1  marionette-collective
... ... @@ -1 +0,0 @@
1   -Subproject commit fabf25c7235f8a5099c2634a3416fa86bf2d110c
17 source/_config.yml
@@ -18,4 +18,21 @@ defaultnav:
18 18 /hiera/1: hiera1.html
19 19 destination: ../output
20 20 url: "http://docs.puppetlabs.com"
  21 +externalsources:
  22 + puppetdb_1.1:
  23 + url: /puppetdb/1.1
  24 + repo: git://github.com/puppetlabs/puppetdb.git
  25 + # Change this to origin/1.1.x once puppetdb team cuts a branch
  26 + commit: origin/master
  27 + subdirectory: documentation
  28 + puppetdb_master:
  29 + url: /puppetdb/master
  30 + repo: git://github.com/puppetlabs/puppetdb.git
  31 + commit: origin/master
  32 + subdirectory: documentation
  33 + marionette-collective:
  34 + url: /mcollective
  35 + repo: git://github.com/puppetlabs/marionette-collective.git
  36 + commit: origin/master
  37 + subdirectory: website
21 38 ---
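To make the new config concrete, this is roughly what `externalsources:update` ends up doing for the `puppetdb_1.1` entry above, written out as plain git commands. The `externalsources/` directory, the `git-new-workdir` helper, and the checkout/clean flags come from the Rakefile diff; treat this as an illustrative sketch rather than the exact commands the task runs:

    cd externalsources
    # externalsources:clone: one clone per unique repo URL
    git clone git://github.com/puppetlabs/puppetdb.git    # creates externalsources/puppetdb

    # externalsources:update: one named working copy per entry, pinned to its commit
    ../vendor/bin/git-new-workdir puppetdb puppetdb_1.1 origin/master

    # on later runs the working copy is just refreshed
    cd puppetdb_1.1
    git fetch origin && git checkout --force origin/master && git clean --force .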
1  source/mcollective
99 source/puppetdb/1.1/api/commands.markdown
... ... @@ -1,99 +0,0 @@
1   ----
2   -title: "PuppetDB 1.1 » API » Commands"
3   -layout: default
4   -canonical: "/puppetdb/1.1/api/commands.html"
5   ----
6   -
7   -[facts]: ./wire_format/facts_format.html
8   -[catalog]: ./wire_format/catalog_format.html
9   -[report]: ./wire_format/report_format.html
10   -
11   -Commands are used to change PuppetDB's
12   -model of a population. Commands are represented by `command objects`,
13   -which have the following JSON wire format:
14   -
15   - {"command": "...",
16   - "version": 123,
17   - "payload": <json object>}
18   -
19   -`command` is a string identifying the command.
20   -
21   -`version` is a JSON integer describing what version of the given
22   -command you're attempting to invoke.
23   -
24   -`payload` must be a valid JSON object of any sort. It's up to an
25   -individual handler function to determine how to interpret that object.
26   -
27   -The entire command MUST be encoded as UTF-8.
28   -
29   -## Command submission
30   -
31   -Commands are submitted via HTTP to the `/commands/` URL and must
32   -conform to the following rules:
33   -
34   -* A `POST` is used
35   -* There is a parameter, `payload`, that contains the entire command object as
36   - outlined above. (Not to be confused with the `payload` field inside the command object.)
37   -* There is an `Accept` header that contains `application/json`.
38   -* The POST body is url-encoded
39   -* The content-type is `x-www-form-urlencoded`.
40   -
41   -Optionally, there may be a parameter, `checksum`, that contains a SHA-1 hash of
42   -the payload which will be used for verification.
43   -
44   -When a command is successfully submitted, the submitter will
45   -receive the following:
46   -
47   -* A response code of 200
48   -* A content-type of `application/json`
49   -* A response body in the form of a JSON object, containing a single key 'uuid', whose
50   - value is a UUID corresponding to the submitted command. This can be used, for example, by
51   - clients to correlate submitted commands with server-side logs.
52   -
53   -The terminus plugins for puppet masters use this command API to update facts, catalogs, and reports for nodes.
54   -
55   -## Command Semantics
56   -
57   -Commands are processed _asynchronously_. If PuppetDB returns a 200
58   -when you submit a command, that only indicates that the command has
59   -been _accepted_ for processing. There are no guarantees as to when
60   -that command will be processed, nor that when it is processed it will
61   -be successful.
62   -
63   -Commands that fail processing will be stored in files in the "dead
64   -letter office", located under the MQ data directory, in
65   -`discarded/<command>`. These files contain the command and diagnostic
66   -information that may be used to determine why the command failed to be
67   -processed.
68   -
69   -## List of Commands
70   -
71   -### "replace catalog", version 1
72   -
73   -The payload is expected to be a Puppet catalog, as a JSON string, including the
74   -fields of the [catalog wire format][catalog]. Extra fields are
75   -ignored.
76   -
77   -### "replace catalog", version 2
78   -
79   -The payload is expected to be a Puppet catalog, as either a JSON string or an
80   -object, conforming exactly to the [catalog wire
81   -format][catalog]. Extra or missing fields are an error.
82   -
83   -### "replace facts", version 1
84   -
85   -The payload is expected to be a set of facts, as a JSON string, conforming to
86   -the [fact wire format][facts]
87   -
88   -### "deactivate node", version 1
89   -
90   -The payload is expected to be the name of a node, as a JSON string, which will be deactivated
91   -effective as of the time the command is *processed*.
92   -
93   -## Experimental commands
94   -
95   -### "store report", version 1
96   -
97   -The payload is expected to be a report, containing events that occurred on Puppet
98   -resources. It is structured as a JSON object, conforming to the
99   -[report wire format][report].
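Pulling the submission rules on the removed page together, a command could be sent with curl along these lines. This is only a sketch: `localhost:8080` and the node name are placeholders, and the exact payload encoding for each command is described in its own section above. On success, PuppetDB answers with a 200 and a JSON body containing a `uuid` key:

    # curl sends a url-encoded POST (Content-Type: application/x-www-form-urlencoded)
    # whenever --data-urlencode is used without -G
    curl -H "Accept: application/json" \
      'http://localhost:8080/commands/' \
      --data-urlencode 'payload={"command": "deactivate node", "version": 1, "payload": "host.example.com"}'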
91 source/puppetdb/1.1/api/index.markdown
... ... @@ -1,91 +0,0 @@
1   ----
2   -title: "PuppetDB 1.1 » API » Overview"
3   -layout: default
4   -canonical: "/puppetdb/1.1/api/index.html"
5   ----
6   -
7   -[commands]: ./commands.html
8   -[terminus]: ../connect_puppet_master.html
9   -
10   -Since PuppetDB collects lots of data from Puppet, it's an ideal platform for new tools and applications that use that data. You can use the HTTP API described in these pages to interact with PuppetDB's data.
11   -
12   -Summary
13   ------
14   -
15   -PuppetDB's API uses a Command/Query Responsibility Separation (CQRS) pattern. This means:
16   -
17   -* Data can be **queried** using a standard REST-style API. Queries are processed immediately.
18   -* When **making changes** to data (facts, catalogs, etc), you must send an explicit **command** (as opposed to submitting data without comment and letting the receiver determine intent). Commands are processed asynchronously in FIFO order.
19   -
20   -The PuppetDB API consists of the following parts:
21   -
22   -* [The REST interface for queries](#queries)
23   -* [The HTTP command submission interface](#commands)
24   -* [The wire formats that PuppetDB requires for incoming data](#wire-formats)
25   -
26   -Queries
27   ------
28   -
29   -PuppetDB 1.1 supports versions 1 and 2 of the query API. Version 1 is backwards-compatible with PuppetDB 1.0.x, but version 2 has significant new capabilities, including subqueries.
30   -
31   -PuppetDB's data can be queried with a REST API.
32   -
33   -* [Specification of the General Query Structure](./query/v2/query.html)
34   -* [Available Operators](./query/v2/operators.html)
35   -* [Query Tutorial](./query/tutorial.html)
36   -* [Curl Tips](./query/curl.html)
37   -
38   -The available query endpoints are documented in the pages linked below.
39   -
40   -### Query Endpoints
41   -
42   -#### Version 2
43   -
44   -Version 2 of the query API adds new endpoints, and introduces subqueries and regular expression operators for more efficient requests and better insight into your data. The following endpoints will continue to work for the foreseeable future.
45   -
46   -* [Facts Endpoint](./query/v2/facts.html)
47   -* [Resources Endpoint](./query/v2/resources.html)
48   -* [Nodes Endpoint](./query/v2/nodes.html)
49   -* [Fact-Names Endpoint](./query/v2/fact-names.html)
50   -* [Metrics Endpoint](./query/v2/metrics.html)
51   -
52   -#### Version 1
53   -
54   -Version 1 of the query API works with PuppetDB 1.1 and 1.0. It isn't deprecated, but we encourage you to use version 2 if you can.
55   -
56   -In PuppetDB 1.0, you could access the version 1 endpoints without the `/v1/` prefix. This still works but **is now deprecated,** and we currently plan to remove support in PuppetDB 2.0. Please change your version 1 applications to use the `/v1/` prefix.
57   -
58   -* [Facts Endpoint](./query/v1/facts.html)
59   -* [Resources Endpoint](./query/v1/resources.html)
60   -* [Nodes Endpoint](./query/v1/nodes.html)
61   -* [Status Endpoint](./query/v1/status.html)
62   -* [Metrics Endpoint](./query/v1/metrics.html)
63   -
64   -#### Experimental
65   -
66   -These endpoints are not yet set in stone, and their behavior may change at any time without regard for normal versioning rules. We invite you to play with them, but you should be ready to adjust your application on your next upgrade.
67   -
68   -* [Report Endpoint](./query/experimental/report.html)
69   -* [Event Endpoint](./query/experimental/event.html)
70   -
71   -Commands
72   ------
73   -
74   -Commands are sent via HTTP but do not use a REST-style interface.
75   -
76   -PuppetDB supports a relatively small number of commands. The command submission interface and the available commands are all described at the commands page:
77   -
78   -* [Commands (all commands, all API versions)][commands]
79   -
80   -Unlike the query API, these commands are generally only useful to Puppet itself, and all format conversion and command submission is handled by the [PuppetDB terminus plugins][terminus] on your puppet master.
81   -
82   -The "replace" commands all require data in one of the wire formats described below.
83   -
84   -Wire Formats
85   ------
86   -
87   -All of PuppetDB's "replace" commands contain payload data, which must be in one of the following formats. These formats are also linked from the [commands](#commands) that use them.
88   -
89   -* [Facts wire format](./wire_format/facts_format.html)
90   -* [Catalog wire format](./wire_format/catalog_format.html)
91   -* [Report wire format (experimental)](./wire_format/report_format.html)
71 source/puppetdb/1.1/api/query/curl.markdown
... ... @@ -1,71 +0,0 @@
1   ----
2   -layout: default
3   -title: "PuppetDB 1.1 » API » Query » Curl Tips"
4   -canonical: "/puppetdb/1.1/api/query/curl.html"
5   ----
6   -
7   -[Facts]: ./v2/facts.html
8   -[Nodes]: ./v2/nodes.html
9   -[fact-names]: ./v2/fact-names.html
10   -[Resources]: ./v2/resources.html
11   -[Metrics]: ./v2/metrics.html
12   -[curl]: http://curl.haxx.se/docs/manpage.html
13   -[dashboard]: ../../maintain_and_tune.html#monitor-the-performance-dashboard
14   -[whitelist]: ../../configure.html#certificate-whitelist
15   -
16   -
17   -You can use [`curl`][curl] to directly interact with PuppetDB's REST API. This is useful for testing, prototyping, and quickly fetching arbitrary data.
18   -
19   -The instructions below are simplified. For full usage details, see [the curl manpage][curl]. For additional examples, please see the docs for the individual REST endpoints:
20   -
21   -* [facts][]
22   -* [fact-names][]
23   -* [nodes][]
24   -* [resources][]
25   -* [metrics][]
26   -
27   -## Using `curl` From `localhost` (Non-SSL/HTTP)
28   -
29   -With its default settings, PuppetDB accepts unsecured HTTP connections at port 8080 on `localhost`. This allows you to SSH into the PuppetDB server and run curl commands without specifying certificate information:
30   -
31   - curl -H "Accept: application/json" 'http://localhost:8080/v2/facts/<node>'
32   - curl -H "Accept: application/json" 'http://localhost:8080/v2/metrics/mbean/java.lang:type=Memory'
33   -
34   -If you have allowed unsecured access to other hosts in order to [monitor the dashboard][dashboard], these hosts can also use plain HTTP curl commands.
35   -
36   -## Using `curl` From Remote Hosts (SSL/HTTPS)
37   -
38   -To make secured requests from other hosts, you will need to supply the following via the command line:
39   -
40   -* Your site's CA certificate (`--cacert`)
41   -* An SSL certificate signed by your site's Puppet CA (`--cert`)
42   -* The private key for that certificate (`--key`)
43   -
44   -Any node managed by puppet agent will already have all of these and you can re-use them for contacting PuppetDB. You can also generate a new cert on the CA puppet master with the `puppet cert generate` command.
45   -
46   -> **Note:** If you have turned on [certificate whitelisting][whitelist], you must make sure to authorize the certificate you are using.
47   -
48   - curl -H "Accept: application/json" 'https://<your.puppetdb.server>:8081/v2/facts/<node>' --cacert /etc/puppet/ssl/certs/ca.pem --cert /etc/puppet/ssl/certs/<node>.pem --key /etc/puppet/ssl/private_keys/<node>.pem
49   -
50   -### Locating Puppet Certificate Files
51   -
52   -Locate Puppet's `ssldir` as follows:
53   -
54   - $ sudo puppet config print ssldir
55   -
56   -Within this directory:
57   -
58   -* The CA certificate is found at `certs/ca.pem`
59   -* The corresponding private key is found at `private_keys/<name>.pem`
60   -* Other certificates are found at `certs/<name>.pem`
61   -
62   -
63   -## Dealing with complex query strings
64   -
65   -Many query strings will contain characters like `[` and `]`, which must be URL-encoded. To handle this, you can use `curl`'s `--data-urlencode` option.
66   -
67   -If you do this with an endpoint that accepts `GET` requests, **you must also use the `-G` or `--get` option.** This is because `curl` defaults to `POST` requests when the `--data-urlencode` option is present.
68   -
69   - curl -G -H "Accept: application/json" 'http://localhost:8080/v2/nodes' --data-urlencode 'query=["=", ["node", "active"], true]'
70   -
71   -
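As a convenience, the certificate paths described above can be derived from `puppet config print ssldir` in one go. A hedged sketch, assuming the command runs on a puppet-managed node whose certname matches `hostname -f`, with `puppetdb.example.com:8081` standing in for your PuppetDB server:

    SSLDIR=$(sudo puppet config print ssldir)
    NODE=$(hostname -f)
    curl -H "Accept: application/json" \
      "https://puppetdb.example.com:8081/v2/facts/$NODE" \
      --cacert "$SSLDIR/certs/ca.pem" \
      --cert   "$SSLDIR/certs/$NODE.pem" \
      --key    "$SSLDIR/private_keys/$NODE.pem"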
76 source/puppetdb/1.1/api/query/experimental/event.markdown
... ... @@ -1,76 +0,0 @@
1   ----
2   -title: "PuppetDB 1.1 » API » Experimental » Querying Events"
3   -layout: default
4   -canonical: "/puppetdb/1.1/api/query/experimental/event.html"
5   ----
6   -
7   -[curl]: ../curl.html#using-curl-from-localhost-non-sslhttp
8   -[report]: ./report.html
9   -
10   -# Events
11   -
12   -Querying events from reports is accomplished by making an HTTP request to the
13   -`/events` REST endpoint.
14   -
15   -# Query format
16   -
17   -* The HTTP method must be `GET`.
18   -
19   -* There must be an `Accept` header specifying `application/json`.
20   -
21   -* The `query` parameter is a JSON array of query predicates, in prefix
22   - form, conforming to the format described below.
23   -
24   -The `query` parameter is described by the following grammar:
25   -
26   - query: [ {match} {field} {value} ]
27   - field: string
28   - match: "="
29   -
30   -`field` may be any of:
31   -
32   -`report`: the unique id of the report; this is a hash built up from the contents
33   - of the report which allow us to distinguish it from other reports. These ids
34   - can be acquired via the [`/reports`][report] query endpoint.
35   -
36   -For example, for all events in the report with id
37   -'38ff2aef3ffb7800fe85b322280ade2b867c8d27', the JSON query structure would be:
38   -
39   - ["=", "report", "38ff2aef3ffb7800fe85b322280ade2b867c8d27"]
40   -
41   -# Response format
42   -
43   - The response is a JSON array of events that matched the input parameters.
44   - The events are sorted by their timestamps, in descending order:
45   -
46   -`[
47   - {
48   - "old-value": "absent",
49   - "property": "ensure",
50   - "timestamp": "2012-10-30T19:01:05.000Z",
51   - "resource-type": "File",
52   - "resource-title": "/tmp/reportingfoo",
53   - "new-value": "file",
54   - "message": "defined content as '{md5}49f68a5c8493ec2c0bf489821c21fc3b'",
55   - "report": "38ff2aef3ffb7800fe85b322280ade2b867c8d27",
56   - "status": "success"
57   - },
58   - {
59   - "old-value": "absent",
60   - "property": "message",
61   - "timestamp": "2012-10-30T19:01:05.000Z",
62   - "resource-type": "Notify",
63   - "resource-title": "notify, yo",
64   - "new-value": "notify, yo",
65   - "message": "defined 'message' as 'notify, yo'",
66   - "report": "38ff2aef3ffb7800fe85b322280ade2b867c8d27",
67   - "status": "success"
68   - }
69   - ]`
70   -
71   -
72   -# Example
73   -
74   -[You can use `curl`][curl] to query information about events like so:
75   -
76   - curl -G -H "Accept: application/json" 'http://localhost:8080/experimental/events' --data-urlencode 'query=["=", "report", "38ff2aef3ffb7800fe85b322280ade2b867c8d27"]'
72 source/puppetdb/1.1/api/query/experimental/report.markdown
... ... @@ -1,72 +0,0 @@
1   ----
2   -title: "PuppetDB 1.1 » API » Experimental » Querying Reports"
3   -layout: default
4   -canonical: "/puppetdb/1.1/api/query/experimental/report.html"
5   ----
6   -
7   -[curl]: ../curl.html#using-curl-from-localhost-non-sslhttp
8   -
9   -# Reports
10   -
11   -Querying reports is accomplished by making an HTTP request to the `/reports` REST
12   -endpoint.
13   -
14   -# Query format
15   -
16   -* The HTTP method must be `GET`.
17   -
18   -* There must be an `Accept` header specifying `application/json`.
19   -
20   -* The `query` parameter is a JSON array of query predicates, in prefix
21   - form, conforming to the format described below.
22   -
23   -The `query` parameter is described by the following grammar:
24   -
25   - query: [ {match} {field} {value} ]
26   - field: string
27   - match: "="
28   -
29   -`field` may be any of:
30   -
31   -`certname`
32   -: the name of the node associated with the report
33   -
34   -For example, for all reports run on the node with certname 'example.local', the
35   -JSON query structure would be:
36   -
37   - ["=", "certname", "example.local"]
38   -
39   -# Response format
40   -
41   -The response is a JSON array of report summaries for all reports
42   -that matched the input parameters. The summaries are sorted by
43   -the completion time of the report, in descending order:
44   -
45   -`[
46   - {
47   - "end-time": "2012-10-29T18:38:01.000Z",
48   - "puppet-version": "3.0.1",
49   - "receive-time": "2012-10-29T18:38:04.238Z",
50   - "configuration-version": "1351535883",
51   - "start-time": "2012-10-29T18:38:00.000Z",
52   - "id": "d4bcb35a-fb7b-45da-84e0-fceb7a1df713",
53   - "certname": "foo.local",
54   - "report-format": 3
55   - },
56   - {
57   - "end-time": "2012-10-26T22:39:32.000Z",
58   - "puppet-version": "3.0.1",
59   - "receive-time": "2012-10-26T22:39:35.305Z",
60   - "configuration-version": "1351291174",
61   - "start-time": "2012-10-26T22:39:31.000Z",
62   - "id": "5ec13ff5-c6fd-43fb-b5b1-59a00ec8e1f1",
63   - "certname": "foo.local",
64   - "report-format": 3
65   - }
66   -]`
67   -
68   -# Example
69   -
70   -[You can use `curl`][curl] to query information about reports like so:
71   -
72   - curl -G -H "Accept: application/json" 'http://localhost:8080/experimental/reports' --data-urlencode 'query=["=", "certname", "example.local"]'
310 source/puppetdb/1.1/api/query/tutorial.markdown
... ... @@ -1,310 +0,0 @@
1   ----
2   -title: "PuppetDB 1.1 » API » Query Tutorial"
3   -layout: default
4   -canonical: "/puppetdb/1.1/api/query/tutorial.html"
5   ----
6   -
7   -This page is a walkthrough for constructing several types of PuppetDB queries. It uses the **version 2 API** in all of its examples; however, most of the general principles are also applicable to the version 1 API.
8   -
9   -If you need to use the v1 API, note that it lacks many of v2's capabilities, and be sure to consult the v1 endpoint references before attempting to use these examples with it.
10   -
11   -## How to query
12   -
13   -Queries are performed by performing a GET request to an endpoint URL and supplying a querystring parameter called `query`,
14   -which contains the query to execute. Results are always returned in
15   -`application/json` form. A curl command like the following can be used to
16   -easily try queries from the command line:
17   -
18   -`curl -G -H 'Accept: application/json' http://puppetdb:8080/v2/<resources-or-facts> --data-urlencode query@<filename>`
19   -
20   -where `filename` contains the query to execute.
21   -
22   -## Resources Walkthrough
23   -
24   -### Our first query
25   -
26   -Let's start by taking a look at a simple resource query. Suppose we want to
27   -find the user "nick" on every node. We can use this query:
28   -
29   - ["and",
30   - ["=", "type", "User"],
31   - ["=", "title", "nick"]]
32   -
33   -This query has two `"="` clauses, which both must be true.
34   -
35   -In general, the `"="` operator follows a specific structure:
36   -
37   -`["=", <attribute to compare>, <value>]`
38   -
39   -In this case, the attributes are "type" and "title", and the values are "User"
40   -and "nick".
41   -
42   -The `"and"` operator also has a well-defined structure:
43   -
44   -`["and", <query clause>, <query clause>, <query clause>, ...]`
45   -
46   -The query clauses can be any legal query (including another `"and"`). At least
47   -one clause has to be specified, and all the clauses have to be true for the
48   -`"and"` clause to be true. An `"or"` operator is also available, which looks
49   -just like the `"and"` operator, except that, as you'd expect, it's true if
50   -*any* specified clause is true.
51   -
52   -The query format is declarative; it describes conditions the results must
53   -satisfy, not how to find them. So the order of the clauses is irrelevant.
54   -Either the type clause or the title clause could come first, without affecting
55   -the performance or the results of the query.
56   -
57   -If we execute this query against the `/resources` route, we get results that
58   -look something like this:
59   -
60   - [{
61   - "parameters" : {
62   - "comment" : "Nick Lewis",
63   - "uid" : "1115",
64   - "shell" : "/bin/bash",
65   - "managehome" : false,
66   - "gid" : "allstaff",
67   - "home" : "/home/nick",
68   - "groups" : "developers",
69   - "ensure" : "present"
70   - },
71   - "sourceline" : 111,
72   - "sourcefile" : "/etc/puppet/manifests/user.pp",
73   - "exported" : false,
74   - "tags" : [ "firewall", "default", "node", "nick", "role::base", "users", "virtual", "user", "account", "base", "role::firewall::office", "role", "role::firewall", "class", "account::user", "office", "virtual::users", "allstaff" ],
75   - "title" : "nick",
76   - "type" : "User",
77   - "resource" : "0ae7e1230e4d540caa451d0ade2424f316bfbf39",
78   - "certname" : "foo.example.com"
79   - }]
80   -
81   -Our results are an array of "resources", where each resource is an object with
82   -a particular set of keys.
83   -
84   -parameters: this field is itself an object, containing all the parameters and values of the resource
85   -sourceline: the line the resource was declared on
86   -sourcefile: the file the resource was specified in
87   -exported: true if the resource was exported by this node, or false otherwise
88   -tags: all the tags on the resource
89   -title: the resource title
90   -type: the resource type
91   -resource: this is an internal identifier for the resource used by PuppetDB
92   -certname: the node that the resource came from
93   -
94   -There will be an entry in the list for every resource. A resource is specific
95   -to a single node, so if the resource is on 100 nodes, there will be 100 copies
96   -of the resource (each with at least a different certname field).
97   -
98   -### Excluding results
99   -
100   -We know this instance of the user "nick" is defined on line 111 of
101   -/etc/puppet/manifests/user.pp. What if
102   -we want to check whether or not we define the same resource somewhere else?
103   -After all, if we're repeating ourselves, something may be wrong! Fortunately,
104   -there's an operator to help us:
105   -
106   - ["and",
107   - ["=", "type", "User"],
108   - ["=", "title", "nick"],
109   - ["not",
110   - ["and",
111   - ["=", "sourceline", "/etc/puppet/manifests/user.pp"],
112   - ["=", "sourcefile", 111]]]]
113   -
114   -The `"not"` operator wraps another clause, and returns results for which the
115   -clause is *not* true. In this case, we want resources which aren't defined on
116   -line 111 of /etc/puppet/manifests/user.pp.
117   -
118   -### Resource attributes
119   -
120   -So far we've seen that we can query for resources based on their `certname`,
121   -`type`, `title`, `sourcefile`, and `sourceline`. There are a few more available:
122   -
123   - ["and",
124   - ["=", "tag", "foo"],
125   - ["=", "exported", true],
126   - ["=", ["parameter", "ensure"], "present"]]
127   -
128   -This query returns resources whose set of tags *contains* the tag
129   -"foo", and which are exported, and whose "ensure" parameter is
130   -"present". Because the parameter name can take any value (including
131   -that of another attribute), it must be namespaced using
132   -`["parameter", <parameter name>]`.
133   -
134   -The full set of queryable attributes can be found in [the resource
135   -endpoint documentation](./v2/resources.html) for easy reference.
136   -
137   -### Regular expressions
138   -
139   -What if we want to restrict our results to a certain subset of nodes? Certainly, we could do something like:
140   -
141   - ["or",
142   - ["=", "certname", "www1.example.com"],
143   - ["=", "certname", "www2.example.com"],
144   - ["=", "certname", "www3.example.com"]]
145   -
146   -And this works great if we know exactly the set of nodes we want. But what if
147   -we want all the 'www' servers, regardless of how many we have? In this case, we
148   -can use the regular expression match operator `~`:
149   -
150   - ["~", "certname", "www\\d+\\.example\\.com"]
151   -
152   -Notice that, because our regular expression is specified inside a string, the
153   -backslash characters must be escaped. The rules for which constructs can be
154   -used in the regexp depend on which database is in use, so common features
155   -should be used for interoperability. The regexp operator can be used on every
156   -field of resources except for parameters, and `exported`.
157   -
158   -## Facts Walkthrough
159   -
160   -In addition to resources, we can also query for facts. This looks similar,
161   -though the available fields and operators are a bit different. Some things are
162   -the same, though. For instance, suppose you want all the facts for a certain
163   -node:
164   -
165   - ["=", "certname", "foo.example.com"]
166   -
167   -This gives results that look something like this:
168   -
169   - [ {
170   - "certname" : "foo.example.com",
171   - "name" : "architecture",
172   - "value" : "amd64"
173   - }, {
174   - "certname" : "foo.example.com",
175   - "name" : "fqdn",
176   - "value" : "foo.example.com"
177   - }, {
178   - "certname" : "foo.example.com",
179   - "name" : "hostname",
180   - "value" : "foo"
181   - }, {
182   - "certname" : "foo.example.com",
183   - "name" : "ipaddress",
184   - "value" : "192.168.100.102"
185   - }, {
186   - "certname" : "foo.example.com",
187   - "name" : "kernel",
188   - "value" : "Linux"
189   - }, {
190   - "certname" : "foo.example.com",
191   - "name" : "kernelversion",
192   - "value" : "2.6.32"
193   - } ]
194   -
195   -### Fact attributes
196   -
197   -In the last query, we saw that a "fact" consists of a "certname", a "name", and
198   -a "value". As you might expect, we can query using "name" or "value".
199   -
200   - ["and",
201   - ["=", "name", "operatingsystem"],
202   - ["=", "value", "Debian"]]
203   -
204   -This will find all the "operatingsystem = Debian" facts, and their
205   -corresponding nodes. As you see, "and" is supported for facts, as are "or" and
206   -"not".
207   -
208   -### Fact operators
209   -
210   -As with resources, facts also support the `~` regular expression match
211   -operator, for all their fields. In addition to that, numeric comparisons are
212   -supported for fact values:
213   -
214   - ["and",
215   - ["=", "name", "uptime_seconds"],
216   - [">=", "value", 100000],
217   - ["<", "value", 1000000]]
218   -
219   -This will find nodes for which the uptime_seconds fact is in the half-open
220   -range [100000, 1000000). Numeric comparisons will *always be false* for fact
221   -values which are not numeric. Importantly, version numbers such as 2.6.12 are
222   -not numeric, and the numeric comparison operators can't be used with them at
223   -this time.
224   -
225   -## Nodes Walkthrough
226   -
227   -We can also query for nodes. Once again, this is quite similar to resource and
228   -fact queries:
229   -
230   - ["=", "name", "foo.example.com"]
231   -
232   -The result of this query is:
233   -
234   - ["foo.example.com"]
235   -
236   -This will find the node foo.example.com. Note that the results of a node query
237   -contain only the node names, rather than an object with multiple fields as with
238   -resources and facts.
239   -
240   -### Querying on facts
241   -
242   -Nodes can also be queried based on their facts, using the same operators as for
243   -fact queries:
244   -
245   - ["and",
246   - ["=", ["fact", "operatingsystem"], "Debian"],
247   - ["<", ["fact", "uptime_seconds"], 10000]]
248   -
249   -This will return Debian nodes with uptime_seconds < 10000.
250   -
251   -## Subquery Walkthrough
252   -
253   -The queries we've looked at so far are quite powerful and useful, but what if
254   -your query needs to consider both resources *and* facts? For instance, suppose
255   -you need the IP address of your Apache servers, to configure a load balancer.
256   -You could find those servers using this resource query:
257   -
258   - ["and",
259   - ["=", "type", "Class"],
260   - ["=", "title", "Apache"]]
261   -
262   -This will find all the Class[Apache] resources, which each knows the certname
263   -of the node it came from. Then you could put all those certnames into a fact
264   -query:
265   -
266   - ["and",
267   - ["=", "name", "ipaddress"],
268   - ["or",
269   - ["=", "certname", "a.example.com"],
270   - ["=", "certname", "b.example.com"],
271   - ["=", "certname", "c.example.com"],
272   - ["=", "certname", "d.example.com"],
273   - ["=", "certname", "e.example.com"]]]
274   -
275   -But this query is lengthy, and it requires some logic to assemble and run the
276   -second query. Surely there has to be a better way. What if we could find the
277   -Class[Apache] servers and use the results of that directly to find the
278   -certname? It turns out we can, with this fact query:
279   -
280   - ["and",
281   - ["=", "name", "ipaddress"],
282   - ["in", "certname",
283   - ["extract", "certname", ["select-resources",
284   - ["and",
285   - ["=", "type", "Class"],
286   - ["=", "title", "Apache"]]]]
287   -
288   -This may appear a little daunting, so we'll look at it piecewise.
289   -
290   -Let's start with "select-resources". This operator takes one argument, which is
291   -a resource query, and returns the results of that query, in exactly the form
292   -you would expect to see them if you did a plain resource query.
293   -
294   -We then use an operator called "extract" to turn our list of resources into
295   -just a list of certnames. So we now conceptually have something like
296   -
297   - ["in", "certname", ["foo.example.com", "bar.example.com", "baz.example.com"]]
298   -
299   -The "in" operator matches facts whose "certname" is in the supplied list. (For
300   -now, that list has to be generated from a subquery, and can't be supplied
301   -directly in the query, so if you want a literal list, you'll unfortunately
302   -still have to use a combination of "or" and "="). At this point, our query
303   -seems a lot like the one above, except we didn't have to specify exactly which
304   -certnames to use, and instead we get them in the same query.
305   -
306   -Similarly, there is a "select-facts" operator which will perform a fact
307   -subquery. Either kind of subquery is usable from every kind of query (facts,
308   -resources, and nodes), subqueries may be nested, and multiple subqueries may be
309   -used in a single query. Finding use cases for some of those combinations is
310   -left as an exercise to the reader.
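By analogy with the `select-resources` example above, a `select-facts` subquery might look like the following, wrapped in the same curl idiom the tutorial uses. This is a sketch rather than an example from the original page; the Debian and Class[Apache] values are placeholders:

    # Find Class[Apache] resources, but only on nodes whose operatingsystem fact is Debian
    curl -G -H "Accept: application/json" 'http://localhost:8080/v2/resources' \
      --data-urlencode 'query=["and",
        ["=", "type", "Class"],
        ["=", "title", "Apache"],
        ["in", "certname",
          ["extract", "certname", ["select-facts",
            ["and",
              ["=", "name", "operatingsystem"],
              ["=", "value", "Debian"]]]]]]'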
62 source/puppetdb/1.1/api/query/v1/facts.markdown
... ... @@ -1,62 +0,0 @@
1   ----
2   -title: "PuppetDB 1.1 » API » v1 » Querying Facts"
3   -layout: default
4   -canonical: "/puppetdb/1.1/api/query/v1/facts.html"
5   ----
6   -
7   -[curl]: ../curl.html#using-curl-from-localhost-non-sslhttp
8   -
9   -Querying facts occurs via an HTTP request to the
10   -`/facts` REST endpoint.
11   -
12   -
13   -## Query format
14   -
15   -Facts are queried by making a request to a URL in the following form:
16   -
17   -The HTTP request must conform to the following format:
18   -
19   -* The URL requested is `/facts/<node>`
20   -* A `GET` is used.
21   -* There is an `Accept` header containing `application/json`.
22   -
23   -The supplied `<node>` path component indicates the certname for which
24   -facts should be retrieved.
25   -
26   -## Response format
27   -
28   -Successful responses will be in `application/json`. Errors will be returned as
29   -non-JSON strings.
30   -
31   -The result is a JSON object containing two keys, "name" and "facts". The
32   -"facts" entry is itself an object mapping fact names to values:
33   -
34   - {"name": "<node>",
35   - "facts": {
36   - "<fact name>": "<fact value>",
37   - "<fact name>": "<fact value>",
38   - ...
39   - }
40   - }
41   -
42   -If no facts are known for the supplied node, an HTTP 404 is returned.
43   -
44   -## Example
45   -
46   -[Using `curl` from localhost][curl]:
47   -
48   - curl -H "Accept: application/json" 'http://localhost:8080/facts/<node>'
49   -
50   -Where `<node>` is the name of the node from which you wish to retrieve facts.
51   <