Add fips indicator requirements doc #23609 (doc/designs/fips_indicator.md)

OpenSSL FIPS Indicators
=======================

References
----------

- [1] FIPS 140-3 Standards: <https://csrc.nist.gov/projects/cryptographic-module-validation-program/fips-140-3-standards>
- [2] Approved Security Functions: <https://csrc.nist.gov/projects/cryptographic-module-validation-program/sp-800-140-series-supplemental-information/sp800-140c>
- [3] Approved SSP generation and Establishment methods: <https://csrc.nist.gov/projects/cryptographic-module-validation-program/sp-800-140-series-supplemental-information/sp800-140d>
- [4] Key transitions: <https://csrc.nist.gov/pubs/sp/800/131/a/r2/final>
- [5] FIPS 140-3 Implementation Guidance: <https://csrc.nist.gov/csrc/media/Projects/cryptographic-module-validation-program/documents/fips%20140-3/FIPS%20140-3%20IG.pdf>

Requirements
------------

The following information was extracted from the FIPS 140-3 IG [5], Section “2.4.C Approved Security Service Indicator”:

- A module must have an approved mode of operation that requires at least one service to use an approved security function (defined by [2] and [3]).
- A FIPS 140-3 compliant module requires a built-in service indicator capable of indicating the use of approved security services.
- If a module only supports approved services in an approved manner, an implicit indicator can be used (e.g. successful completion of a service is an indicator).
- An approved algorithm is not considered to be an approved implementation if it does not have a CAVP certificate or does not include its required self-tests. (My interpretation of this is that if the CAVP certificate lists an algorithm with only a subset of the key sizes, digests, and/or ciphers supported by the implementation, the differences ARE NOT APPROVED. In many places we have no restrictions on the digest or cipher selected.)
- Documentation is required to demonstrate how to use indicators for each approved cryptographic algorithm.
- Testing is required to execute all services and verify that the indicator provides an unambiguous indication of whether the service utilizes an approved cryptographic algorithm, security function or process in an approved manner or not.
- The Security Policy may require updates related to indicators. AWS/Google have added a table to their security policy called ‘Non-Approved Algorithms not allowed in the approved mode of operation’. An example is RSA with a key size of < 2048 bits (which has been enforced by [4]).

Legacy Support
--------------

Due to key transitions [4] we may have some legacy algorithms that are only approved for processing (verification, decryption, validation) and not for protection (signing, encryption, key generation).
For example, DSA.

The options are:

- Completely remove the algorithm from the FIPS provider. This is simple but means older applications can no longer process existing data, which is not ideal.
- Allow the algorithm, but mark it as not approved using a context-specific indicator.

It is safer to make the protection operations fail rather than use an indicator.
The processing operation for DSA would set the indicator to approved.
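
As an illustration of that split, the sketch below rejects the protection operation outright while the processing operation records an approved indicator. The context structure and the fips_approved field are hypothetical names, not existing provider code.

``` C
/*
 * Hypothetical sketch: LEGACY_SIG_CTX and fips_approved are illustrative
 * names, not part of the OpenSSL FIPS provider.
 */
typedef struct {
    int fips_approved;
} LEGACY_SIG_CTX;

static int legacy_dsa_sign(LEGACY_SIG_CTX *ctx)
{
    /* Protection operation: fail outright rather than rely on an indicator */
    (void)ctx;
    return 0;
}

static int legacy_dsa_verify(LEGACY_SIG_CTX *ctx)
{
    /* Processing operation: still approved, so record that in the indicator */
    ctx->fips_approved = 1;
    /* ... perform the actual verification here ... */
    return 1;
}
```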

Security Checks
---------------

OpenSSL currently defines configurable FIPS options.
These options are supplied via the FIPS configuration file, which is normally set up via fipsinstall.

- FIPS_FEATURE_CHECK(FIPS_security_check_enabled, fips_security_checks, 1)
- FIPS_FEATURE_CHECK(FIPS_tls_prf_ems_check, fips_tls1_prf_ems_check, 0)
- FIPS_FEATURE_CHECK(FIPS_restricted_drbg_digests_enabled, 0)
- OSSL_PROV_FIPS_PARAM_CONDITIONAL_ERRORS selftest_params.conditional_error_check

The following functions are available in providers/common/security_check.c.

- ossl_rsa_check_key()
- ossl_ec_check_key()
- ossl_dsa_check_key()
- ossl_dh_check_key()
- ossl_digest_get_approved_nid_with_sha1()
- ossl_digest_is_allowed()

Anywhere these functions are called, an indicator MAY be required.
Because these options are available I do not think it is sufficient to
document this in the security policy.

Each of these functions contains code of the following form:

``` c
#if !defined(OPENSSL_NO_FIPS_SECURITYCHECKS)
    if (ossl_securitycheck_enabled(ctx)) {
        /* Do some checks, and maybe return 0 for a hard failure */
        ...
    }
#endif
```

OPENSSL_NO_FIPS_SECURITYCHECKS is also a configurable build option.
If the security checks are not enabled, is the operation then unapproved?

Implementation options
----------------------

The above requirements suggest two options.
Contributor:

Suggest adding the third option mooted by OTC yesterday (which amounts to implicit indicators):

- Detect non-FIPS operations and return an error.
  - For situations where we cannot know, err generously seems sensible (but up for discussion).
- Have a flag param to stop this behaviour to permit legacy access.
  - The flag could be either global (from fipsmodule.cnf) or per algorithm context.
    - A global flag is easier but dangerous.
    - A per context flag makes it obvious when the caller is possibly bypassing FIPS standards (because setparam).
    - A per context flag is not an indicator, it only indicates that usage might not be FIPS approved.
    - IMO per context is the better option.
  - "legacy" isn't a suitable name for this param IMO.
  - The flag could replace the ad-hoc settings in fipsmodule.cnf (but backward compatibility).
    - Nonetheless, it could replace all ad-hoc settings moving forwards.
- The complexity lies between the two other options because we still need to instrument all FIPS mandated checks but we don't need to add an indicator param everywhere.
- Non-FIPS algorithms are still possible either because they aren't FIPS approved (e.g. 3DES wouldn't need the flag param) or the flag is deliberately set.
- The default and other providers will ignore this flag param, so the solution won't break them.
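
A minimal sketch of the per-context flag approach described in the comment above, using the existing EVP_PKEY_CTX_set_params() API; the param key "allow-non-approved" is purely hypothetical and not an existing OpenSSL setting.

``` C
#include <openssl/evp.h>
#include <openssl/params.h>

/* Hypothetical: explicitly permit legacy / non-approved use on this context */
static int allow_non_approved_use(EVP_PKEY_CTX *ctx)
{
    int allow = 1;
    OSSL_PARAM params[2];

    params[0] = OSSL_PARAM_construct_int("allow-non-approved", &allow);
    params[1] = OSSL_PARAM_construct_end();

    /* Providers that do not understand the param would simply ignore it */
    return EVP_PKEY_CTX_set_params(ctx, params);
}
```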

Contributor:

I must admit I don't fully grasp the reasons behind this approach. If someone is going through the trouble of running a validated version of the FIPS provider, do they even have the liberty of setting these flags to the "non-compliant" option? Would the security policy not mandate that these flags are always set to the "compliant" options? It's my understanding that compliance is black and white: either the module is compliant, or it is not.

But perhaps the various programs that use FIPS (e.g. FedRAMP) have a different view.

Member Author:

If it is just black and white we wouldn't need indicators, right?

Contributor:

I think that's a little blunt. Indicators would indicate a non-approved service, which is explicitly within the scope of FIPS. E.g. there are requirements specifically for non-approved services, they are tested and listed in the security policy. They basically shift the responsibility to the operator who is supposed to check the indicator and then do... something.

Contributor:

No, @slontis is spot on. FIPS has a number of grey areas and permits backward compatibility that isn't within current guidelines.

FIPS allows old algorithms to be used in legacy situations -- e.g. triple DES was (until recently) allowed for decryption but not encryption.

Similarly for public key operations with short key sizes. Likewise for MACs with short keys. OpenSSL often (but not always) cannot distinguish between an operation to support something legacy from an operation that must be strictly FIPS approved as per the validated standard. I.e. there must be a way for the caller to distinguish the two.

- Indicators are one approach: a caller can ask if what I just did was within the letter of the standards or not.
- The approach suggested by the OTC is the reverse: the caller has to intentionally say that the following might not meet the standards but allow it nonetheless.

Member:

The only difference between setting the flag to allow non-approved operations and having explicit indicators (unless we implement both) is that, without an explicit indicator, setting the flag automatically means the operation is treated as non-approved even though it may actually have been an approved operation.

On the other hand, having to set such a flag for non-approved operations makes it much cleaner for applications to keep FIPS compliance. Otherwise they would have to add the indicator check to each and every call into OpenSSL. And how is an application supposed to do that when it, for example, uses the FIPS module indirectly, i.e. via libssl or some third party library?

Contributor:

No, setting the non-approved flag doesn't make any subsequent operation automatically non-approved. So long as only approved algorithms are used in accordance with the standards, after setting the non-approved flag, the operation will still be approved. It's up to the user to adhere if they set the non-approved flag. This needs to be documented.

Explicit indicators are also quite a bit more effort, every algorithm beyond those in doubt needs to be instrumented to return "approved".

Agreed about the flag making compliance more obvious. If you want non-FIPS, you must set the flag explicitly; if you don't, whatever you do will either be approved or will error. Checking the indicator after the operations is a far larger onus on the application IMO and one that will be ignored.

Upgrading the FIPS provider will cause applications to fail due to the changing standards. That's part of FIPS compliance and application/library writers will have to deal with it. While I agree that we should provide a workaround, I'm far less sure that it should be enabled by default.

Contributor:

So these flags would be set programmatically by the caller, rather than in the config file? Would the default behavior (i.e. the flag is not set) be compliant?

Contributor:

Compliant by default would be my expectation, however the OTC might decide compatible by default instead.


### Option 1

Don't allow ANY non-approved algorithms; then an indicator is not required.

- Pros: Simple
- Cons: Problematic since we already have configurable options that are used for security checks etc.
- Cons: We would need to return errors anywhere an algorithm is not approved, which would cause compatibility issues.

### Preferred Option

Add an indicator everywhere that it is required.

- Pros: Flexible solution
- Cons: Requires a lot more effort to add the indicator to all the required places.

Note that in order for a service to be ‘FIPS approved’ the following requirements would need to be met:

- All algorithms used must come from the FIPS provider.
- A service is a series of one or more API calls that must all succeed.
- An extra API call is needed after the service succeeds, which should return 1 if the service is FIPS approved.

Solutions for the preferred Option
----------------------------------

### Solution 1 (Using an indicator everywhere)

Use a per-thread global counter that is incremented when an algorithm is approved. AWS/Google have added this in places where a service is at the point of completing its output (e.g. digest_final). This design is complicated by the fact that a service may call another service (e.g. HMAC using SHA256) that also wants to increment the approved counter. To get around this issue they have a second variable that is used for nesting: if that variable is non-zero then the approved counter is not incremented. This also allows non security-relevant functions to not increment the approved count. Another variation of this would be to use flags instead of a counter.

- Cons: At the fips provider level this would require some plumbing to go from the core to the fips provider, which seems overly complex.
- Cons: The query can only be done after the output is set.
- Cons: The indicator code would end up having to be set in different places depending on the algorithm after the output is finalized. This would be fairly messy as the point where it is set could differ between algorithms.
- Cons: The locking increment seems messy.
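
A minimal sketch of this counter approach, assuming GCC/Clang-style thread-local storage; the names are illustrative and this is not the actual AWS/Google code.

``` C
#include <stdint.h>

/* Illustrative per-thread state; not existing OpenSSL code */
static __thread uint32_t fips_approved_counter = 0;
static __thread uint32_t fips_service_depth = 0;    /* nesting guard */

static void fips_service_begin(void)
{
    fips_service_depth++;
}

static void fips_service_end_approved(void)
{
    /*
     * Only the outermost service bumps the approved counter, so an inner
     * call (e.g. SHA-256 used inside HMAC) is not counted twice.
     */
    if (--fips_service_depth == 0)
        fips_approved_counter++;
}

/*
 * A caller samples this value before and after a service and treats the
 * operation as approved if the counter advanced.
 */
static uint32_t fips_approved_counter_value(void)
{
    return fips_approved_counter;
}
```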

### Proposed Solution (Using an indicator everywhere)

Add an OSSL_PARAM getter to each provider algorithm context.
By default, if the getter is not handled, it would return not approved.
Contributor:

Easy to detect with the helper functions.


- Pros: The code is easier to find since it is part of the get_ctx_params function.
- Pros: The getter can be called at any point after the setting is done.
Contributor:

I feel like this isn't a problem in practice if you make the getter a tri-state of {APPROVED, UNAPPROVED, UNDETERMINED} and just return the latter if it is not clear whether the use is approved at the time of the call, possibly even in a way that makes UNDETERMINED the default value (i.e., a value of 0).

This is what Red Hat did for a signature context when PSS padding is chosen, but no digest is known yet – see #19724 for discussion of why an indicator could not provide data without knowing the digest.


Any FIPS algorithm that is approved would then need a helper function that, at a minimum, contains code similar to the following:

``` C
int ossl_xxx_fips_approved(void)
{
#ifdef FIPS_MODULE
    /* conditional code would go here for each algorithm if required */
    return 1;
#else
    return 0;
#endif
}
```

Contributor:

Nothing requires adding the indicator params to the default provider, so I'd just make the OSSL_FIPS_PARAM_APPROVED param only work in the FIPS provider by wrapping its use in the CTX_get_params() function in #ifdef FIPS_MODULE. The ossl_xxx_fips_approved() function would then only have to be implemented in the FIPS provider.

and in the algorithm's get_ctx_params() function:

``` C
int xxx_get_fips_approved(OSSL_PARAM params[])
{
    OSSL_PARAM *p;

    p = OSSL_PARAM_locate(params, OSSL_FIPS_PARAM_APPROVED);
    if (p != NULL && !OSSL_PARAM_set_int(p, ossl_xxx_fips_approved()))
        return 0;
    return 1;
}
```

### APIs that would be used to support this

- EVP_PKEY Keygen, Encryption, Signatures, Key Exchange, KEM

``` C
EVP_PKEY_CTX_get_params(ctx, params);
```

(Note that this would mean you could not use functions that hide the ctx such as EVP_PKEY_Q_keygen()!)

- Ciphers

``` C
EVP_CIPHER_CTX_get_params()
```

- Digests

``` C
EVP_MD_CTX_get_params()
```

- KDF’s

``` C
EVP_KDF_CTX_get_params()
```

- MAC’s

``` C
EVP_MAC_CTX_get_params()
```

- RAND

``` C
EVP_RAND_CTX_get_params()
```
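
As an illustration of the calling pattern (not an existing API): the sketch below runs a digest service and then queries the proposed indicator through EVP_MD_CTX_get_params(). The param key "fips-approved" and the treatment of an unhandled getter as "not approved" are assumptions taken from this design, not defined OpenSSL behaviour.

``` C
#include <openssl/evp.h>
#include <openssl/params.h>

static int sha256_was_fips_approved(OSSL_LIB_CTX *libctx,
                                    const unsigned char *msg, size_t msglen,
                                    unsigned char *out, unsigned int *outlen)
{
    EVP_MD *md = EVP_MD_fetch(libctx, "SHA2-256", "provider=fips");
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    int approved = 0;               /* default: not approved */
    OSSL_PARAM params[2];

    if (md == NULL || ctx == NULL)
        goto err;
    if (!EVP_DigestInit_ex2(ctx, md, NULL)
            || !EVP_DigestUpdate(ctx, msg, msglen)
            || !EVP_DigestFinal_ex(ctx, out, outlen))
        goto err;

    /* Query the proposed indicator once the service has completed */
    params[0] = OSSL_PARAM_construct_int("fips-approved", &approved);
    params[1] = OSSL_PARAM_construct_end();
    if (!EVP_MD_CTX_get_params(ctx, params))
        approved = 0;               /* getter not handled: not approved */

 err:
    EVP_MD_CTX_free(ctx);
    EVP_MD_free(md);
    return approved;
}
```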

### Backwards Compatibility

Providers that predate this design do not support this param, so queries will return not approved because the getter is not handled.
Contributor:

Maybe????

Member Author:

The thinking being that it won't handle the OSSL_PARAM, so it is not approved.

Contributor:

I believe the FIPS provider has many more algorithms which are approved rather than non-approved. Would it not be easier to have an OSSL_PARAM indicating the non-approved status? If this parameter is present and set to true, the service would be non-approved. The absence of the parameter, or (if it is present) a value of false would mean the service is approved. That way you don't have to add the parameter to every single API, but only the ones where non-approvedness is possible.

On the other hand, this becomes more difficult to explain in the Security Policy.

Member Author (@slontis, Feb 21, 2024):

I was thinking this applies to all providers, and in that case undefined is better to mean not approved, especially if we apply this to any provider that doesn't understand this. The set of approved algorithms will always be a small subset of all algorithms.

Contributor:

From that point of view that's correct – however keep in mind that only the FIPS provider is certified anyway. So from a certification point of view, it doesn't matter what values other providers do or don't return for this param. If you want to be FIPS compliant, you already have to make sure that you are using the FIPS provider.

Red Hat went for the "if it doesn't have an indicator and is in the FIPS provider, it's approved" default.


### Alternate Solution

If we had different kinds of compliance requirements (something other than FIPS), either a separate getter could be added or the getter could return an int type instead of just 0 or 1
(e.g. 1 = FIPS approved, 2 = some other compliance approved).
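
A minimal sketch of how such a multi-valued result could be encoded; the names and values are illustrative only and not defined anywhere in OpenSSL.

``` C
/* Illustrative values only */
enum compliance_status {
    COMPLIANCE_NOT_APPROVED = 0,
    COMPLIANCE_FIPS_APPROVED = 1,
    COMPLIANCE_OTHER_APPROVED = 2
};
```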

Changes Required for indicators
-------------------------------

### key size >= 112 bits

There are a few places where we do not enforce key size that need to be addressed.

- HMAC, which also applies to all algorithms that use HMAC (e.g. HKDF, SSKDF, KBKDF)
- CMAC
- KMAC

Contributor:

I believe you are missing KMAC here.

KDFs also need a key size >= 112 bits per Section 8 of SP 800-131Ar2, including HKDF, SSKDF, KBKDF.

Member Author:

KMAC is only checking > 4 currently so you are correct (I was thinking it was sufficient that it uses the KECCAK_KMAC algorithm - but it is not).
HKDF, SSKDF and KBKDF fall under the HMAC category (which applies to all algorithms that use HMAC).
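
As an illustration of the key size requirement discussed above (the context structure and field are hypothetical, not existing provider code), a MAC init could record "not approved" for keys shorter than 112 bits instead of failing outright.

``` C
#include <stddef.h>

/* Hypothetical context; the fips_approved field is illustrative only */
struct mac_ctx_st {
    int fips_approved;
    /* ... other fields ... */
};

static void mac_check_key_size(struct mac_ctx_st *ctx, size_t keylen_bytes)
{
    /* SP 800-131Ar2 requires MAC and KDF keys of at least 112 bits */
    if (keylen_bytes * 8 < 112)
        ctx->fips_approved = 0;
}
```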

### Algorithm Transitions

Should we remove these algorithms completely from the fips provider, or use indicators?

- DES_EDE3_ECB. Disallowed for encryption, allowed for legacy decryption
- DSA. Keygen and Signing are no longer approved, verify is still allowed.
- ECDSA B & K curves are deprecated, but still approved according to (IG C.K Resolution 4). Should we remove these? If not we need to check that OSSL_PKEY_PARAM_USE_COFACTOR_ECDH is set for key agreement if the cofactor is not 1.
- ED25519/ED448 is now approved.
- X25519/X448 is not approved currently. Keygen and key exchange would also need an indicator if we allow it?
- RSA encryption (transport) using PKCS#1 v1.5 is no longer allowed. (Note that this breaks TLS 1.2 using the RSA key exchange.) Padding mode updates required. Check RSA KEM also.
- RSA signing using X9.31 is no longer allowed (still allowed for verification). Check if PSS saltlen needs an indicator (note FIPS 186-4 Section 5.5 bullet (e)). Padding mode updates required in rsa_check_padding(). Check if SHA-1 is allowed?
- RSA - (From SP 800-131Ar2) RSA >= 2048 bits is approved for keygen, signatures and key transport. Verification allows 1024 bits also. Note also that according to IG Section C.F, FIPS 186-2 verification is also allowed (so this may need either testing OR an indicator - it also mentions the modulus size must be 1024 + 256*s). Check that rsa_keygen_pairwise_test() and the RSA self tests are all compliant with the above RSA restrictions.

- TLS1_PRF: If we are only trying to support TLS 1.2 here then we should remove the TLS 1.0/1.1 code from the FIPS module.

### Digest Checks

Any algorithms that use a digest need to make sure that the CAVP certificate lists all supported FIPS digests otherwise an indicator is required.
Contributor:

Currently Hash and HMAC DRBGs are the only impacted algorithms.
Nonetheless, this is a concern going forwards.

Member Author:

I disagree. The DRBGs are the only algorithms that are doing the right thing currently, because you added code to check which digests are allowed. Unless the FIPS certificate lists all the digests matching all the FIPS digest algorithms, we are not currently doing it correctly for a FIPS 140-3 validation.

Contributor:

The 140-3 should list everything. I've not seen the actual submission so I don't know if this is correct or not.

Member Author:

I find it hard to believe it was done for absolutely everything (including things like SSKDF and HKDF).

Member:

It does not make sense to not test and validate things that can be tested and validated.

Member Author:

I am not sure I understand your point; look at the 3.0.8 validation for example and you will see that the digests tested vary wildly. This could be because there was not even an option to test them at that point in time; I don't know since I wasn't part of that process.

Member Author:

As @jvdsn has pointed out, there are cases where only subsets are allowed (such as the TLS 1.3 PRF).

Contributor:

The commit message in https://gitlab.com/redhat/centos-stream/rpms/openssl/-/blob/c9s/0078-KDF-Add-FIPS-indicators.patch?ref_type=heads has a nice overview of which combinations can be tested, and which ones we were told to consider unapproved (and thus marked with an indicator saying so).

This applies to the following algorithms:

- SSKDF
Contributor (@jvdsn, Feb 20, 2024):

In OpenSSL 3.2.0, it seems like SSKDF supports using SHAKE128, SHAKE256, KECCAK-KMAC128, and KECCAK-KMAC256 as digests. Those are not valid auxiliary functions according to Section 4.2 of SP 800-56Cr2, so they should probably be removed (I do not know why someone would actually use them).

Member Author:

The general rule is the KECCAK ones are only for KMAC.
We will have to figure out which algorithms will allow SHAKE (i.e. even if they are allowed we can optionally choose to support them).

Contributor:

Right now, I can't think of any KDF that allows SHAKE. Only recently has NIST been allowing SHAKE in higher-level algorithms. FIPS 140-3 IG C.C still states "The SHAKE128 and SHAKE256 extendable-output functions may only be used as the standalone algorithm", but that's obviously outdated now with SP 800-208 (LMS) and FIPS 186-5 allowing SHAKE for RSA/ECDSA/EdDSA signatures.

- TLS_1_3_KDF (Only SHA256 and SHA384 Are allowed due to RFC 8446 Appendix B.4)
- SSHKDF
- X963KDF
- X942KDF
- PBKDF2
- HKDF
- TLS1_PRF
- HMAC
- KBKDF
- KMAC
- Any signature algorithms such as RSA, DSA, ECDSA.

The FIPS 140-3 IG Section C.B & C.C have notes related to Vendor affirmation.

Note many of these (such as KDFs) will not support SHAKE. ECDSA and RSA-PSS signatures allow use of SHAKE.
KECCAK-KMAC-128 and KECCAK-KMAC-256 should not be allowed for anything other than KMAC.
Do we need to check which algorithms allow SHA1 also?
Contributor:

After lengthy discussion, Red Hat has been advised that SHA-1 cannot be used with TLS1PRF and TLS1.3PRF (no surprise here), and X9.63KDF(!). I recall that the latter was a surprise, but I don't have a reference for you with the details at the moment. Feel free to ping me if it's needed later on, though.

Contributor:

The discussion on X9.63 KDF with CAVP can be found here: usnistgov/ACVP#1403.


Test that Deterministic ECDSA does not allow SHAKE (IG C.K Additional Comments 6)

### Cipher Checks

- CMAC
- KBKDF CMAC
- GMAC

We should only allow AES. We currently just check the mode.
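
A minimal sketch of restricting these MACs to AES rather than only checking the mode; the name-prefix test is purely illustrative and not how the provider code is structured.

``` C
#include <string.h>
#include <openssl/evp.h>

/*
 * Illustrative only: approve CMAC/GMAC/KBKDF-CMAC usage only when the
 * underlying cipher is an AES variant, rather than just checking the mode.
 */
static int mac_cipher_is_approved(const EVP_CIPHER *cipher)
{
    const char *name = EVP_CIPHER_get0_name(cipher);

    return name != NULL && strncmp(name, "AES-", 4) == 0;
}
```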

### Configurable options

- PBKDF2 'lower_bound_checks' needs to be part of the indicator check
- See the "security checks" Section. Anywhere using ossl_securitycheck_enabled() may need an indicator

Other Changes
-------------
Contributor:

Some other items I can think of:

- SP 800-56Ar3 Section 5.7.1.2 specifically defines the EC Cofactor Diffie-Hellman primitive. Therefore, it seems like OSSL_PKEY_PARAM_USE_COFACTOR_ECDH must be set to 1 for an approved EC key exchange. This doesn't impact the P curves (as their cofactor is always 1), but it does impact K and B curves.
- FIPS 186-5 Section 5.4, bullet (g) has some restrictions on the salt length for RSA-PSS signatures. Are those enforced? In the case of a 1024-bit modulus, FIPS 186-4 Section 5.5, bullet (e) has an additional restriction.
- X9.31 padding would need to be removed or marked non-approved for signature generation, as it was removed by FIPS 186-5.
- It seems like RSA OAEP allows SHAKE128, SHAKE256, KECCAK-KMAC128, and KECCAK-KMAC256 to be used as the md. Is that well-defined?
- KMAC128 and KMAC256 currently allow the output of very short tag lengths, less than 32 bits. According to Section 8.4.2 of SP 800-185:

  > When used as a MAC, applications of this Recommendation shall not select an output length L that is less than 32 bits, and shall only select an output length less than 64 bits after a careful risk analysis is performed

(I do not know what "a careful risk analysis" would entail for a general-purpose cryptographic library like OpenSSL)

Member Author:

Thanks for the input...
I would be fine with K & B curves being removed altogether - but if we keep them then what you say applies.
I do mention salt length in the Algorithm Transitions - it will need to be checked. I think it needs an indicator.

- RSA OAEP digests should be determined by what we can actually test.
- Looks like KMAC needs an indicator on the lower bound.

Contributor:

According to the specification (Table 12), ACVP testing for OAEP is restricted to SHA-1, SHA-2, or SHA-3. This can also be confirmed by looking at the ACVP server source code.

Member Author:

I think it is sufficient to say that if we can't select it as a tick box on the submission paperwork then it's not supported.


- AES-GCM: The Security Policy must list the AES-GCM IV generation scenarios.
- TEST_RAND is not approved.
Contributor:

Right now it has FIPS_UNAPPROVED_PROPERTIES in fipsprov.c, would that not be sufficient?

Member Author:

If we are going to use indicators everywhere then it is probably better to not mix implicit and explicit indicators. Querying anything in the default provider, for example, should return not approved (i.e. keep the mechanism simple).

- SSKDF: The security policy needs to be specific about what it supports, i.e. hash, KMAC 128/256, HMAC-hash. There are also currently no limitations on the digest for hash and HMAC.
- KBKDF: The Security Policy should list KMAC-128 and KMAC-256, otherwise an indicator is required.
- KMAC may need a lower bound check on the output size (SP800-185 Section 8.4.2)
- HMAC (FIPS 140-3 IG Section C.D has notes about the output length when using a Truncated HMAC)