Add fips indicator requirements doc #23609
base: master
Suggest adding the third option mooted by OTC yesterday (which amounts to implicit indicators): the flag could be set globally in `fipsmodule.cnf` or per algorithm context (but mind backward compatibility).
I must admit I don't fully grasp the reasons behind this approach. If someone is going through the trouble of running a validated version of the FIPS provider, do they even have the liberty of setting these flags to the "non-compliant" option? Would the security policy not mandate that these flags are always set to the "compliant" options? It's my understanding that compliance is black and white: either the module is compliant, or it is not.
But perhaps the various programs that use FIPS (e.g. FedRAMP) have a different view.
If it were just black and white, we wouldn't need indicators, right?
I think that's a little blunt. Indicators would indicate a non-approved service, which is explicitly within the scope of FIPS. E.g. there are requirements specifically for non-approved services, they are tested and listed in the security policy. They basically shift the responsibility to the operator who is supposed to check the indicator and then do... something.
No, @slontis is spot on. FIPS has a number of grey areas and permits backward compatibility that isn't within current guidelines.
FIPS allows old algorithms to be used in legacy situations -- e.g. triple DES was (until recently) allowed for decryption but not encryption.
Similarly for public key operations with short key sizes, and likewise for MACs with short keys. OpenSSL often (but not always) cannot distinguish an operation that supports something legacy from an operation that must be strictly FIPS approved as per the validated standard. I.e. there must be a way for the caller to distinguish the two.
The only difference between setting the flag to allow non-approved and the explicit indicators (unless we implement both) is that without an explicit indicator, setting the flag automatically means the operation counts as non-approved even though it may actually have been an approved operation.
On the other hand, having to set such a flag for non-approved operations makes it much cleaner for applications to maintain FIPS compliance. Otherwise they would have to add an indicator check to each and every call into OpenSSL. And how is an application supposed to do that when it uses the FIPS module indirectly, e.g. via libssl or some third-party library?
No, setting the non-approved flag doesn't make any subsequent operation automatically non-approved. As long as only approved algorithms are used in accordance with the standards after setting the non-approved flag, the operation will still be approved. It's up to the user to adhere to the standards if they set the non-approved flag. This needs to be documented.
Explicit indicators are also quite a bit more effort: every algorithm beyond those in doubt needs to be instrumented to return "approved".
Agreed about the flag making compliance more obvious. If you want non-FIPS, you must set the flag explicitly; if you don't, whatever you do will either be approved or will error. Checking the indicator after the operations is a far larger onus on the application IMO, and one that will be ignored.
Upgrading the FIPS provider will cause applications to fail due to the changing standards. That's part of FIPS compliance, and application/library writers will have to deal with it. While I agree that we should provide a workaround, I'm far less sure that it should be enabled by default.
So these flags would be set programmatically by the caller, rather than in the config file? Would the default behavior (i.e. the flag is not set) be compliant?
Compliant by default would be my expectation, however the OTC might decide compatible by default instead.
Easy to detect with the helper functions.
I feel like this isn't a problem in practice if you make the getter a tri-state of {APPROVED, UNAPPROVED, UNDETERMINED} and just return the latter if it is not clear whether the use is approved at the time of the call, possibly even in a way that makes UNDETERMINED the default value (i.e., a value of 0). This is what Red Hat did for a signature context when PSS padding is chosen but no digest is known yet – see #19724 for discussion of why an indicator could not provide data without knowing the digest.
Nothing requires adding the indicator params to the default provider, so I'd just make the `OSSL_FIPS_PARAM_APPROVED` param only work in the FIPS provider by wrapping its use in the `CTX_get_params()` function in `#ifdef FIPS_MODULE`. The `ossl_xxx_fips_approved()` function would then only have to be implemented in the FIPS provider.
Maybe????
The thinking being that it won't handle the OSSL_PARAM, so it is not approved.
I believe the FIPS provider has many more algorithms which are approved rather than non-approved. Would it not be easier to have an OSSL_PARAM indicating the non-approved status? If this parameter is present and set to `true`, the service would be non-approved. The absence of the parameter, or (if it is present) a value of `false`, would mean the service is approved. That way you don't have to add the parameter to every single API, but only the ones where non-approvedness is possible. On the other hand, this becomes more difficult to explain in the Security Policy.
I was thinking this applies to all providers, and in that case undefined is better to mean not approved, especially if we apply this to any provider that doesn't understand this. The set of approved algorithms will always be a small subset of all algorithms.
From that point of view that's correct – however keep in mind that only the FIPS provider is certified anyway. So from a certification point of view, it doesn't matter what values other providers do or don't return for this param. If you want to be FIPS compliant, you already have to make sure that you are using the FIPS provider.
Red Hat went for the "if it doesn't have an indicator and is in the FIPS provider, it's approved" default.
I believe you are missing KMAC here.
KDFs also need a key size >= 112 bits per Section 8 of SP 800-131Ar2, including HKDF, SSKDF, KBKDF.
KMAC is only checking > 4 currently, so you are correct (I was thinking it was sufficient that it uses the KECCAK_KMAC algorithm, but it is not).
HKDF, SSKDF and KBKDF fall under the HMAC category (the HMAC check applies to all algorithms that use HMAC).
Currently Hash and HMAC DRBGs are the only impacted algorithms.
Nonetheless, this is a concern going forwards.
I disagree. The DRBGs are the only algorithms that are doing the right thing currently, because you added code to check which digests are allowed. Unless the FIPS certificate lists all the digests that match all the FIPS digest algorithms, we are not currently doing it correctly for a FIPS 140-3 validation.
The 140-3 submission should list everything. I've not seen the actual submission, so I don't know whether this is correct or not.
I find it hard to believe it was done for absolutely everything (including things like SSKDF and HKDF).
It does not make sense not to test and validate things that can be tested and validated.
I am not sure I understand your point; look at the 3.0.8 validation, for example, and you will see that the digests tested vary wildly. This could be because there was not even an option to test them at that point in time; I don't know, since I wasn't part of that process.
As @jvdsn has pointed out, there are cases where only subsets are allowed (such as the TLS 1.3 PRF).
The commit message in https://gitlab.com/redhat/centos-stream/rpms/openssl/-/blob/c9s/0078-KDF-Add-FIPS-indicators.patch?ref_type=heads has a nice overview of which combinations can be tested, and which ones we were told to consider unapproved (and thus marked with an indicator saying so).
In OpenSSL 3.2.0, it seems like SSKDF supports using SHAKE128, SHAKE256, KECCAK-KMAC128, and KECCAK-KMAC256 as digests. Those are not valid auxiliary functions according to Section 4.2 of SP 800-56Cr2, so they should probably be removed (I do not know why someone would actually use them).
The general rule is that the KECCAK ones are only for KMAC.
We will have to figure out which algorithms will allow SHAKE (i.e. even where it is allowed, we can optionally choose whether to support it).
Right now, I can't think of any KDF that allows SHAKE. Only recently has NIST been allowing SHAKE in higher-level algorithms. FIPS 140-3 IG C.C still states "The SHAKE128 and SHAKE256 extendable-output functions may only be used as the standalone algorithm", but that's obviously outdated now with SP 800-208 (LMS) and FIPS 186-5 allowing SHAKE for RSA/ECDSA/EdDSA signatures.
After lengthy discussion, Red Hat has been advised that SHA-1 cannot be used with TLS1PRF and TLS1.3PRF (no surprise here), and X9.63KDF(!). I recall that the latter was a surprise, but I don't have a reference for you with the details at the moment. Feel free to ping me if it's needed later on, though.
The discussion on X9.63 KDF with CAVP can be found here: usnistgov/ACVP#1403.
Some other items I can think of:
(I do not know what "a careful risk analysis" would entail for a general-purpose cryptographic library like OpenSSL)
Thanks for the input...
I would be fine with the K and B curves being removed altogether, but if we keep them then what you say applies.
I do mention salt length in the Algorithm Transitions; it will need to be checked. I think it needs an indicator.
Looks like KMAC needs an indicator on the lower bound.
According to the specification (Table 12), ACVP testing for OAEP is restricted to SHA-1, SHA-2, or SHA-3. This can also be confirmed by looking at the ACVP server source code.
I think it is sufficient to say that if we can't select it as a tick box on the submission paperwork then it's not supported.
Right now it has `FIPS_UNAPPROVED_PROPERTIES` in fipsprov.c; would that not be sufficient?
If we are going to use indicators everywhere, then it is probably better not to mix implicit and explicit indicators. Querying anything in the default provider, for example, should return not approved (i.e. keep the mechanism simple).