
[Suggestion] baseline diff #1286

Open
anoma opened this issue Jun 25, 2019 · 2 comments


@anoma
Contributor

commented Jun 25, 2019

I am in the process of testing a number of Apache servers across a company.

As expected, I am seeing subtle (and sometimes not so subtle) differences in "hardness" between the various deployments, stemming from the usual culprits: host OS, version defaults, and sysadmin configuration effort and skill. This got me thinking.

I have a baseline machine that has been crafted to be "as desired", but the manual process of comparing results from other machines against this baseline is quite laborious.

I was wondering if there was interest in discussing ideas on how to make this process less painful and potentially more powerful.

I can imagine that being able to do something like `./testssl.sh --baseline IIS7.0_high target` might be an interesting area to expand into.

@drwetter

Owner

commented Jun 27, 2019

That sounds good and is actually a known feature request, see #1085 / #1108.

That'll be one of the features for the future, once I finally find time to finish this release. But I am happy to see any code or discussion before then.

With the current version I'd suggest using post-processing, probably via machine-readable output (CSV or JSON), and using that as a template to diff against. Up to a year ago I did something similar to what you describe -- in a network, for some kind of manual unit tests. It cost some work whenever either the code or the server side changed.
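A minimal sketch of that post-processing idea, assuming the JSON output produced by `--jsonfile`, where each finding carries `id` and `severity` fields (the field names are my recollection of the format, and the function names here are hypothetical, not part of testssl.sh):

```python
import json

# Severity ranking so regressions can be told apart from improvements.
# These levels are assumed from testssl.sh's "severity" field.
SEVERITY = {"OK": 0, "INFO": 1, "LOW": 2, "MEDIUM": 3, "HIGH": 4, "CRITICAL": 5}

def load_findings(path):
    """Map check id -> severity from a testssl.sh --jsonfile result."""
    with open(path) as f:
        return {item["id"]: item["severity"] for item in json.load(f)}

def diff_against_baseline(baseline_path, target_path):
    """Return checks where the target is worse than the baseline."""
    baseline = load_findings(baseline_path)
    target = load_findings(target_path)
    regressions = []
    for check_id, base_sev in baseline.items():
        tgt_sev = target.get(check_id)
        if tgt_sev is None:
            continue  # check missing on target; could also be reported
        if SEVERITY.get(tgt_sev, 0) > SEVERITY.get(base_sev, 0):
            regressions.append((check_id, base_sev, tgt_sev))
    return regressions
```

Run once against the baseline host's JSON file and once against each target's, and only the deviations need a human look.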

To do this within testssl.sh, the current fileout() functions would need to be changed (or another function added) so that file output is produced only when the baseline isn't met. In other cases: nope :) . That would require the user to provide a template, and testssl.sh to provide a parser for that template.

What I learnt from my experiment is that every feature/change in testssl.sh requires work to re-adjust the template. That should be avoided or minimized, which means e.g. that the keys and values in the template should be rather static. Also, a new check in testssl.sh should not lead to a complaint every time that check is executed.
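One way to keep a static template from complaining about every new check could be a severity floor for unknown check ids -- a sketch under the same assumptions as above (`id`/`severity` fields; all names hypothetical):

```python
# Assumed severity levels from testssl.sh's JSON "severity" field.
SEVERITY = {"OK": 0, "INFO": 1, "LOW": 2, "MEDIUM": 3, "HIGH": 4, "CRITICAL": 5}

def check_target(template, target_findings, new_check_floor="HIGH"):
    """Compare target findings against a static {id: severity} template.

    Checks unknown to the template (e.g. added in a newer testssl.sh)
    are only reported when at least `new_check_floor` severe, so a tool
    upgrade does not produce a complaint for every freshly added check.
    """
    floor = SEVERITY[new_check_floor]
    complaints = []
    for item in target_findings:
        check_id, sev = item["id"], item["severity"]
        if check_id in template:
            # Known check: complain only if worse than the template allows.
            if SEVERITY.get(sev, 0) > SEVERITY.get(template[check_id], 0):
                complaints.append((check_id, sev))
        elif SEVERITY.get(sev, 0) >= floor:
            # New check: complain only above the floor.
            complaints.append((check_id, sev))
    return complaints
```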

I have not taken screen (and HTML) output into account yet. At the moment those outputs are normally produced closer to the check, which means there's no hook or general function like fileout() [1] that could be used. As this would require more work, I'd rather see this postponed.

[1] That is a simplified view: there are functions, but it's often not a one-liner like fileout(). There are pr_srvrty_* functions with and without linefeed, and some comment or debugging text goes via *out* functions. And: there's always a headline or the name of the check in screen and HTML output.

@drwetter drwetter added this to the 3.1dev milestone Jun 27, 2019
@anoma

Contributor Author

commented Jul 2, 2019

Very interesting answer and beyond anything I had considered.

I have tried to think of a way these results could be compared more simply (post-processing), and as you highlight, it is not simple and by its very nature cannot ever be. The target result set is just too complex, with too many interrelated parts, and it evolves constantly.

I did at one point ponder the viability of a fingerprint-based approach, but I could not get it to the point where it made enough sense to document. My only takeaway from that avenue of thinking was that fingerprinting, where the community could submit validated examples, is another interesting conversation in itself. I am not sure it would add any value over, say, the established nmap approach, other than that it could perhaps be used to beat some obfuscation and as a tool to establish a version and security level.

JSON seems the sensible way to go, although I always find vast skill differences between sysops when it comes to manipulating JSON/XML. I myself have never truly got to grips with it, but I suspect the community will step up with complementary tools and examples.
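For sysops who would rather not touch JSON at all, one community-style helper could flatten each result into sorted plain-text lines, so the comparison becomes an ordinary `diff baseline.txt target.txt`. A sketch, again assuming `id` and `severity` fields per finding:

```python
def to_diffable_lines(findings):
    """Flatten testssl.sh JSON findings into sorted "id<TAB>severity" lines.

    Writing these lines to one text file per host makes baseline comparison
    a plain `diff` between two files -- no JSON skills required.
    """
    return sorted(f'{item["id"]}\t{item["severity"]}' for item in findings)
```

E.g. load a result file with `json.load()`, print the lines to `target.txt`, and diff that against the baseline's file.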
