Lint for RSA close prime Fermat factorization susceptibility #674

Merged · 6 commits · Jul 31, 2022

Conversation

@christopher-henderson (Member) commented May 8, 2022

@aarongable
@vanbroup

Adds a lint that checks for RSA key pairs that are susceptible to Fermat's factorization method.

https://fermatattack.secvuln.info/
https://nvd.nist.gov/vuln/detail/CVE-2022-26320
https://en.wikipedia.org/wiki/Fermat%27s_factorization_method

What I find most tricky about this change is that we still seem a bit off from being able to merge support for configuring individual lints (#648). This makes the value for the number of rounds a hardcoded shot in the dark. My first stab is 1000, but I'm checking to see what the perf impact is on the test corpus. On the one hand, we don't want a hardcoded value exploding our scan time. On the other hand, we don't want a hardcoded value that is too low and utterly useless in detecting dangerous primes.

EDIT:
Indeed, this lint alone has exploded the runtime of the test corpus on my local by 16x 😦 . CICD also appears to be choking on this setting.

Addressed #673

@christopher-henderson christopher-henderson changed the title Lint for close prime Fermat factorization Lint for RSA close prime Fermat factorization susceptibility May 8, 2022
@vanbroup (Contributor) commented May 8, 2022

It should be fine to use 100 rounds, quoted from https://fermatattack.secvuln.info/:

With common RSA key sizes (2048 bit) in our tests the Fermat algorithm with 100 rounds reliably factors numbers where p and q differ up to 2^517.

This is the same value used by crt.sh and I thought it was the default for boulder.
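As a back-of-the-envelope check on that 2^517 figure (my own derivation, not part of the thread): Fermat's method steps a upward from √n toward (p+q)/2, so the number of rounds needed is roughly

```latex
\frac{p+q}{2} - \sqrt{pq}
\;=\; \frac{(\sqrt{p}-\sqrt{q})^2}{2}
\;=\; \frac{(p-q)^2}{2(\sqrt{p}+\sqrt{q})^2}
\;\approx\; \frac{(p-q)^2}{8\sqrt{n}}.
```

With √n ≈ 2^1024 for a 2048-bit modulus, 100 rounds therefore cover |p − q| ≲ √(800 · 2^1024) ≈ 2^517, matching the quoted bound.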

@zakird (Member) commented May 8, 2022

I'm nervous about including this as a default-on lint given the performance implications (or without a major version release so that folks have a chance to disable it). Even with 100 rounds, it sounds like it will have a significant performance hit. I think we should consider how we note or segment these. Maybe there's a new category?

@christopher-henderson (Member, Author) commented May 8, 2022

Well, bumping the rounds down to 100 at least makes this workable.

On my local without this lint it takes 85s. With the lint at 100 rounds it takes 163s.

We see an even more drastic difference in CICD, seeing as the test was timing out after 20 minutes but now appears to complete in ~6-7 minutes (which is a doubling of the 3-4 minutes it usually takes).

@aarongable (Contributor) commented

Wow yeah, I see the same test-time explosion that you see. I had no idea the zlint integration test corpus was so large.

I think this check, or something very similar, is still worth adding. And I think the default should be 100 rounds, at least to match crt.sh and Boulder.

To me the only question is whether it is worth landing now, or whether it should wait until after configurable lints become a thing, and then the integration tests could configure this one to (e.g.) 10 rounds to save on test execution time.

@zakird (Member) commented May 23, 2022

I suspect it's actually a fair amount worse than that, because that cost includes all the parsing, which is non-negligible compared to the lints themselves. A long while back, someone ran https://github.com/zmap/zlint/blob/master/v3/benchmarks_test.go against all our lints. It may be worth running again. If this more than doubles the cost of running lints, that's going to be a tremendous cost for folks like Censys.

@christopher-henderson (Member, Author) commented

Howdy folks @aarongable @zakird

With our merge of the feature for configurable lints we can now make this lint, in particular, configurable.

I set the default to 100 as per Aaron's suggestion and added a benchmark test around that setting. It is worth noting that, at least in GitHub Actions, this does expand the time of running the test corpus by approximately 34% (last merge to master at 5m35s vs this PR at 7m29s).

@zakird 34% is not much of a burden on this repo as it is relatively small in absolute terms. However, as you pointed out, there are those out there (namely Censys) that do indeed consume certs in mass quantities, for whom a 34% slowdown is not acceptable.

Fortunately, this lint is indeed configurable by any party that deems the default setting to be unacceptable. The question is - whose desired default should win out?

The way I see this is that either:

  1. The default is something like 100 and mass consumers such as Censys can configure it all the way down to 0 if they like.
    1a. Pro: It casts a wider net and gives the community at large a better shot at catching these bad key pairs.
    1b. Con: A potentially burdensome computation cost for a scenario that seems unlikely.
  2. The default is 0-10 and consumers that wish to take a deeper look (LE/Boulder) can set it to any desired goal.
    2a. Pro: Still catches some obviously bad pairs while keeping mass lints fast.
    2b. Con: Intentionally reduces the odds of finding a bad key pair by an order of magnitude.
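For illustration, an override under the new configurable-lints machinery might look something like this (the section and key names below are hypothetical, not the lint's actual configuration schema):

```toml
# Hypothetical zlint config fragment: a high-volume consumer lowering the
# Fermat round count. Section and key names are illustrative only.
[e_rsa_fermat_factoring]
rounds = 10
```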

@aarongable (Contributor) commented

The current CA/BF weak key ballot proposes enshrining the "100 rounds" language directly in the Baseline Requirements, so I'd lean toward having that be the default and letting mass certificate consumers adjust it downwards if they desire.

@zakird (Member) commented Jul 27, 2022

If CABF formalizes 100 rounds, I agree then that makes this a pretty easy decision.
