[Suggestion] baseline diff #1286
I am in the process of testing a number of Apache servers across a company.
As expected, I am seeing subtle (and sometimes not so subtle) differences in "hardness" between the various deployments, stemming from the usual culprits: host OS, version defaults, and sysadmin configuration effort and skill. This got me thinking.
I have a baseline machine that has been crafted to be "as desired", but the actual human process of diffing results from other machines against this baseline is quite laborious.
I was wondering if there was interest in discussing ideas how to make this process less painful and potentially more powerful.
I can imagine being able to do something like
That'll be one of the features for the future, once I finally find time to finish this release. But I am happy about any code or discussion before then.
With the current means/version I'd suggest using post-processing, probably via a machine-readable output (CSV or JSON). Then use that as a template to diff against. Up to a year ago I did something similar to what you describe -- in a network, for some kind of manual unit tests. It cost some work whenever either the code or the server side changed.
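To illustrate the post-processing idea, here is a minimal Python sketch that diffs one machine's run against the baseline machine's run using the CSV output. The column names (`id`, `severity`, `finding`) are a simplification of what testssl.sh actually emits, so verify them against your version's output before relying on this:

```python
import csv

def load_findings(path):
    """Load a testssl.sh CSV run into {check id: (severity, finding)}.
    Assumed columns: id, severity, finding -- adjust to your version."""
    findings = {}
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            findings[row["id"]] = (row["severity"], row["finding"])
    return findings

def diff_against_baseline(baseline_path, target_path):
    """Report every check whose result deviates from the baseline run."""
    baseline = load_findings(baseline_path)
    target = load_findings(target_path)
    deviations = []
    for check_id, expected in baseline.items():
        actual = target.get(check_id)
        if actual is None:
            deviations.append((check_id, expected, "MISSING"))
        elif actual != expected:
            deviations.append((check_id, expected, actual))
    return deviations
```

Run once against the crafted baseline machine to produce the reference CSV, then loop the script over the other servers; only the deviating check ids need human attention.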
Doing this within testssl.sh the current
What I learnt from my experiment is that every feature/change in testssl.sh requires work to re-adjust the template. This should be avoided or minimized. That means e.g. that the keys and values in the template should be fairly static. Also, a new check in testssl.sh should not lead to a complaint every time it is executed.
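The "static keys, tolerant to new checks" point could be sketched like this in Python: compare only the check ids the template knows about, so a freshly added check never raises a complaint on its own. The flat list of `{id, severity, ...}` objects mirrors testssl.sh's JSON output in spirit, but the exact field names are an assumption:

```python
import json

def compare_to_template(template_path, run_path):
    """Compare a JSON run against a baseline template, ignoring any
    check ids the template does not contain, so that newly added
    checks in testssl.sh do not break existing templates."""
    with open(template_path) as fh:
        template = {entry["id"]: entry["severity"] for entry in json.load(fh)}
    with open(run_path) as fh:
        run = {entry["id"]: entry["severity"] for entry in json.load(fh)}
    complaints = []
    for check_id, expected in template.items():
        actual = run.get(check_id, "MISSING")
        if actual != expected:
            complaints.append((check_id, expected, actual))
    return complaints
```

The trade-off is deliberate: new checks stay silent until the sysadmin re-baselines, which keeps the template stable across testssl.sh releases at the cost of not flagging newly covered issues automatically.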
I have not taken screen (and HTML) output into account yet. At the moment those outputs normally happen close to the check itself, which means there's no hook or general function like
1 That is a simplified view: there are functions, but it's often not a one-liner like
Very interesting answer and beyond anything I had considered.
I have tried to think of a way these results could be compared in a simpler manner (post-processing), and as you highlight, it is not simple and by its very nature can never be simple. The target result set is just too complex, with too many interrelated parts, and it evolves constantly.
I did at one point ponder the viability of a fingerprint-based approach, but I could not get it to the point where it made enough sense to document. My only takeaway from that avenue of thinking was that fingerprinting, where the community could submit validated examples, is another interesting conversation in itself. I am not sure it would add any value over, say, the established nmap approach, other than that it could perhaps be used to beat some obfuscation and as a tool to establish a version and security level.
JSON seems the sensible way to go, although I always find vast skill differences between sysops when it comes to manipulating JSON/XML. I myself have never truly got to grips with it, but I suspect the community will step up with complementary tools and examples.