Lint for RSA close prime Fermat factorization susceptibility #674
Conversation
It should be fine to use 100 rounds; quoting from https://fermatattack.secvuln.info/:
This is the same value used by crt.sh, and I thought it was the default for Boulder.
I'm nervous about including this as a default-on lint given the performance implications (or without a major version release, so that folks have the chance to disable it). Even with 100 rounds, it sounds like it will have a significant performance hit. I think we should consider how we note or segment these. Maybe there's a new category?
Well, bumping the rounds down to 100 at least becomes workable. On my local machine, without this lint the corpus takes 85s; with the lint at 100 rounds it takes 163s. We see an even more drastic difference in CICD: the test was timing out after 20 minutes, but it now appears to complete in ~6-7 minutes (a doubling from the 3-4 minutes it usually takes).
Wow yeah, I see the same test-time explosion that you see. I had no idea the zlint integration test corpus was so large. I think this check, or something very similar, is still worth adding. And I think the default should be 100 rounds, at least to match crt.sh and Boulder. To me the only question is whether it is worth landing now, or whether it should wait until after configurable lints become a thing, and then the integration tests could configure this one to (e.g.) 10 rounds to save on test execution time.
I suspect it's actually a fair amount worse than that, because that cost includes all the parsing, which is non-negligible compared to the lints themselves. A long while back, someone ran https://github.com/zmap/zlint/blob/master/v3/benchmarks_test.go against all our lints. It may be worth running again. If this more than doubles the cost of running lints, that's going to be tremendous for folks like Censys.
Howdy folks @aarongable @zakird With our merge of the feature for configurable lints, we can now make this lint, in particular, configurable. I set the default to
@zakird 34% is not much of a burden on this repo, as it is relatively small in absolute terms. However, as you pointed out, there are those out there (namely Censys) that do indeed consume certs in mass quantities, for whom a 34% slowdown is not acceptable. Fortunately, this lint is indeed configurable by any party that deems the default setting unacceptable. The question is: whose desired default should win out? The way I see this is that either:
The current CA/BF weak key ballot proposes enshrining the "100 rounds" language directly in the Baseline Requirements, so I'd lean toward having that be the default and letting mass certificate consumers adjust it downwards if they desire. |
If CABF formalizes 100 rounds, I agree then that makes this a pretty easy decision. |
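For anyone who ends up needing a different value: with configurable lints, tuning this down should be a one-line config change. A hypothetical sketch, assuming a TOML-style zlint configuration keyed by lint name; the actual section and field names may differ in the released implementation:

```toml
# Hypothetical: lower the Fermat factorization check from the
# proposed-BR default of 100 rounds to 10, e.g. for high-volume
# certificate consumers.
[e_rsa_fermat_factorization]
Rounds = 10
```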
@aarongable
@vanbroup
Adds a lint that checks for RSA key pairs that are susceptible to Fermat's factorization method.
https://fermatattack.secvuln.info/
https://nvd.nist.gov/vuln/detail/CVE-2022-26320
https://en.wikipedia.org/wiki/Fermat%27s_factorization_method
What I find most tricky about this change is that we still seem a bit off from being able to merge support for configuring individual lints (#648). This makes the value for the number of rounds a hardcoded shot in the dark. My first stab is 1000, but I'm checking to see what the perf impact is on the test corpus. On the one hand, we don't want a hardcoded value exploding our scan time. On the other hand, we don't want a hardcoded value that is too low and utterly useless at detecting dangerous primes.
EDIT:
Indeed, this lint alone has exploded the runtime of the test corpus on my local by 16x 😦 . CICD also appears to be choking on this setting.
Addresses #673