Allow users to define and run custom reports to validate data within NetBox #1511
Issue type
[x] Feature request
[ ] Bug report
[ ] Documentation
Environment
Description
NetBox is intended to serve as the "source of truth" for a network, acting as the authoritative source for IP addressing, interface connections, and so on. To help guarantee the integrity of data within NetBox, I'd like to establish a mechanism by which users can write and run custom reports to inspect NetBox objects and alert on any deviations from the norm.
For example, a user might write reports to validate the following:
A report would take the form of a Python class saved to a file within a parent `reports` directory (which would not be tracked by git) in the NetBox installation path. Each report class can have several methods, each of which might perform a specific validation relevant to the report's purpose. This arrangement closely mimics the implementation of Python unit tests; the major difference is that we are validating data rather than code.

Reports would be executed via the API, with individual methods being run in the order they are defined. A management command (e.g. `manage.py runreport <name>`) will also be provided for development purposes and for execution by cron jobs.

Each report method can produce logs and ultimately yield a pass or fail status; if one or more tests fail, the report is marked as failed. Results of the most recent test runs will be stored in the database as raw JSON, but no historical tracking will be provided. The web UI will provide a view showing the latest results of each report.
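To make the proposed structure concrete, here is a minimal sketch of what such a report class might look like. The `Report` base class, its import path, and the `log_success()`/`log_failure()` helpers are assumptions for illustration; this issue does not pin down the final API.

```python
# reports/hostnames.py -- a hypothetical file in the reports/ directory.
# The Report base class and the log_* helpers are assumptions for
# illustration; the final API is not defined in this issue.
import re

from dcim.models import Device
from extras.reports import Report  # assumed import path


class DeviceHostnameReport(Report):
    """Validate that every device name follows a naming convention."""

    # Purely illustrative pattern, e.g. "nyc-leaf-01".
    HOSTNAME_PATTERN = re.compile(r"^[a-z]{3}-[a-z]+-\d{2}$")

    def test_hostname_convention(self):
        for device in Device.objects.all():
            if device.name and self.HOSTNAME_PATTERN.match(device.name):
                self.log_success(device)
            else:
                # Any failure-level entry marks the whole report as failed.
                self.log_failure(device, "Hostname does not match convention")
```

Under the proposal, a report like this could then be invoked via the API or, for development and cron use, with something like `manage.py runreport hostnames`.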
Comments

A couple of queries.

Users will be able to log arbitrary messages within a report. Each message can be associated with a log level: success, info, warning, or failure. A report with one or more failures logged is considered to have failed.

Probably not. I mean, you could use reports for that, but it would be impractical to accommodate the plethora of different output formats and structures people might want to use. I think the primary focus here will be validation of the data within NetBox, in support of its function as the "source of truth." The reports page will provide a quick summary of any data in NetBox which does not conform to rules the user has defined.

Closes #1511: Implemented reports

The reports branch has been merged into develop-2.2 and will be included in v2.2-beta2.
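The log-level behavior described in the comments above could look something like the following sketch. The `log_*` method names and the `Report` base class are assumptions carried over from the earlier example, not an API confirmed by this thread.

```python
from dcim.models import Device
from extras.reports import Report  # assumed import path


class PrimaryIPReport(Report):
    """Check that every device has a primary IP address assigned."""

    def test_primary_ip(self):
        for device in Device.objects.all():
            if device.primary_ip4:
                self.log_success(device)
            elif device.primary_ip6:
                # Warning level: noteworthy, but does not fail the report.
                self.log_warning(device, "IPv6-only device; no primary IPv4 address")
            else:
                # A single failure-level entry fails the report as a whole.
                self.log_failure(device, "No primary IP address assigned")
```

In this reading, success and failure entries drive the overall pass/fail status, while info and warning entries would surface in the logs without affecting it.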