
Allow users to define and run custom reports to validate data within NetBox #1511

Closed
jeremystretch opened this issue Sep 19, 2017 · 3 comments

@jeremystretch
Member

Issue type

[x] Feature request
[ ] Bug report
[ ] Documentation

Environment

  • Python version: 3.4.3
  • NetBox version: 2.1.4

Description

NetBox is intended to serve as the "source of truth" for a network, acting as the authoritative source for IP addressing, interface connections, and so on. To help guarantee the integrity of data within NetBox, I'd like to establish a mechanism by which users can write and run custom reports to inspect NetBox objects and alert on any deviations from the norm.

For example, a user might write reports to validate the following:

  • Every top-of-rack switch has a console and out-of-band connection defined in NetBox
  • There is a minimum amount of free IP space available within each site
  • All devices whose names match a pattern have been assigned the same functional role
  • Every router has a loopback interface configured with an IP address

A report would take the form of a Python class saved to a file within a parent reports directory (which would not be tracked by git) in the NetBox installation path. Each report class can have several methods, each of which performs specific validation relevant to the report's purpose. This arrangement closely mimics the implementation of Python unit tests; the major difference is that we are validating data rather than code.
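To make the unit-test analogy concrete, here is a minimal, self-contained sketch of the proposed pattern. All names here (the `Report` base class, the `log_*` methods, the `test_` prefix, and the example data) are illustrative assumptions for this sketch, not NetBox's actual API:

```python
class Report:
    """Hypothetical base class: collects log messages and a pass/fail result."""

    def __init__(self):
        self.logs = []      # (level, message) tuples
        self.failed = False

    def log_success(self, message):
        self.logs.append(("success", message))

    def log_failure(self, message):
        self.logs.append(("failure", message))
        self.failed = True

    def run(self):
        # Run every method whose name starts with "test_", in definition
        # order, mirroring how Python unit tests discover their test methods.
        for name, method in vars(type(self)).items():
            if name.startswith("test_") and callable(method):
                method(self)
        return not self.failed


class DeviceConnectionsReport(Report):
    """Example report: every device in a toy inventory needs a console port."""

    # Stand-in data; a real report would query NetBox objects instead.
    devices = [
        {"name": "tor-switch-1", "console": True},
        {"name": "tor-switch-2", "console": False},
    ]

    def test_console_connection(self):
        for device in self.devices:
            if device["console"]:
                self.log_success(f"{device['name']}: console connected")
            else:
                self.log_failure(f"{device['name']}: no console connection")


report = DeviceConnectionsReport()
print("passed" if report.run() else "failed")  # prints: failed
```

As with unit tests, adding another check is just a matter of defining another `test_` method on the class.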

Reports would be executed via the API, with individual methods being run in the order they are defined. A management command (e.g. manage.py runreport <name>) will also be provided for development purposes and for execution by cron jobs.

Each report method can produce logs and ultimately yield a pass or fail status; if one or more tests fail, the report is marked as failed. Results of the most recent test runs will be stored in the database as raw JSON, but no historical tracking will be provided. The web UI will provide a view showing the latest results of each report.
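A stored result might look something like the following; the report name, field names, and overall shape are purely illustrative (the proposal only specifies "raw JSON," not a schema):

```python
import json

# Hypothetical shape of one stored report result: an overall pass/fail flag
# plus per-method status and log entries. All keys are assumptions.
result = {
    "report": "DeviceConnectionsReport",
    "failed": True,
    "methods": {
        "test_console_connection": {
            "status": "failure",
            "log": [
                ["success", "tor-switch-1: console connected"],
                ["failure", "tor-switch-2: no console connection"],
            ],
        },
    },
}

# Per the proposal, only the latest run is kept, serialized as raw JSON.
serialized = json.dumps(result)
```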

@jeremystretch jeremystretch added this to the v2.2 milestone Sep 20, 2017
@candlerb
Contributor

Each report method can produce logs and ultimately yield a pass or fail status

A couple of queries.

  1. Regarding reports which highlight data inconsistencies as described above. Rather than returning a single "FAIL" status, or a message about the first failed instance, are you expecting it would be possible to return a structured table of all failure instances - e.g. as a CSV? (Aside: for some use cases see Report on duplicate addresses and prefixes #801, Report on missing primary addresses #863)

  2. Are you expecting that reports could be used as a general data summary or export facility - e.g. you could use this to run a query to generate tables of stats grouped by site, rack etc, but no "failures" as such? Or is that considered a different type of "report"?

@jeremystretch
Member Author

Rather than returning a single "FAIL" status, or a message about the first failed instance, are you expecting it would be possible to return a structured table of all failure instances - e.g. as a CSV?

Users will be able to log arbitrary messages within a report. Each message can be associated with a log level: success, info, warning, or failure. A report with one or more failures logged is considered to have failed.
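The key semantic here is that only failure-level entries affect the report's outcome. A tiny sketch of that rule (the tuple representation is an assumption for illustration):

```python
# The four log levels described in the proposal.
LEVELS = ("success", "info", "warning", "failure")

def report_failed(logs):
    """A report with one or more failures logged is considered failed."""
    return any(level == "failure" for level, _ in logs)

logs = [
    ("info", "checked 120 devices"),
    ("warning", "device edge-1 has no description"),
]
assert not report_failed(logs)   # warnings alone do not fail a report

logs.append(("failure", "device edge-2 missing loopback IP"))
assert report_failed(logs)       # one failure entry fails the whole report
```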

Are you expecting that reports could be used as a general data summary or export facility - e.g. you could use this to run a query to generate tables of stats grouped by site, rack etc, but no "failures" as such?

Probably not. I mean, you could use reports for that, but it would be impractical to accommodate the plethora of different output formats and structures people might want to use. I think the primary focus here will be validation of the data within NetBox, in support of its function as the "source of truth." The reports page will provide a quick summary of any data in NetBox which does not conform to rules the user has defined.

jeremystretch added a commit that referenced this issue Sep 28, 2017
@jeremystretch
Member Author

The reports branch has been merged into develop-2.2 and will be included in v2.2-beta2.

@lock lock bot locked as resolved and limited conversation to collaborators Jan 18, 2020