This is a punch list of open questions from the 2021-12-07 Tech Team meeting. Please comment on this issue with any discussion, proposed answers, or additional questions you have:
Do we want to have signatures as integrity methods?
How are signatures going to be used (for example, element vs artifact)?
How does this model fit with best practices for signing and verification?
What is the minimum set of capabilities we should include?
How does integrity apply to element vs. collections vs. documents?
Can integrity be applied to anything other than sequence of bytes?
How can we be sure that references to other SPDX documents can keep integrity intact?
Hashes and signatures can be computed only over artifacts - serialization means putting information into a specific sequence of bytes. A single Element can be serialized into a file and a signature or hash computed over the bytes of that file. This is true even if that Element is also serialized into a file containing many Elements - the multi-Element file will have one signature that validates all of the Elements within it. A different multi-Element file will have a different signature. But when a single Element is extracted from both files, its value, and thus its single-Element file hash or deterministic signature value, must be identical. (The signature value of a probabilistic signature scheme will be different each time the same to-be-signed value is signed, but each signature must validate that to-be-signed data.)
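A minimal sketch of the point above, assuming a hypothetical sorted-key JSON canonicalization (the `element_hash` helper and the field names are illustrative, not part of any SPDX spec): two multi-Element documents hash differently, but the same Element canonically serialized on its own hashes identically regardless of which document it was extracted from.

```python
import hashlib
import json

def element_hash(element: dict) -> str:
    """Hash of a single Element's canonical serialization.

    Canonicalization here is a stand-in: sorted-key, compact JSON.
    """
    canonical = json.dumps(element, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(canonical).hexdigest()

# Two multi-Element "documents" that both contain the same Element.
elem = {"spdxId": "SPDXRef-A", "name": "libfoo"}
doc1 = [elem, {"spdxId": "SPDXRef-B", "name": "libbar"}]
doc2 = [{"spdxId": "SPDXRef-C", "name": "libbaz"}, elem]

# The two document serializations produce different hashes...
h1 = hashlib.sha256(json.dumps(doc1, sort_keys=True).encode()).hexdigest()
h2 = hashlib.sha256(json.dumps(doc2, sort_keys=True).encode()).hexdigest()
assert h1 != h2

# ...but the Element extracted from either document hashes to the same value.
assert element_hash(doc1[0]) == element_hash(doc2[1])
```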
Integrity does not apply to non-serialized Elements, including Collection Elements. Integrity does apply to documents (the serialized value of one or more Elements of any type).
We can only be sure that the integrity of referenced elements is intact up to the strength of the integrity mechanism. Although MD5 collision resistance has long been broken, I don't know whether second-preimage attacks are currently practical. The prudent way to "be sure" is to use stronger hash algorithms where even collision attacks are impractical (and, of course, to ensure that the integrity validation code is uncompromised). Reducing opportunities for preimage fuzzing (such as by canonicalizing documents into an information-dense serialization and validating both length and hash value) can improve the security of even weak hash algorithms.
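A sketch of the "validate both length and hash value" idea, with a hypothetical `validate` helper (SHA-256 chosen as an example of a stronger algorithm):

```python
import hashlib

def validate(data: bytes, expected_len: int, expected_sha256: str) -> bool:
    """Check both length and digest.

    The length check rejects any forged preimage of a different size,
    shrinking the space available for preimage fuzzing.
    """
    if len(data) != expected_len:
        return False
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Usage: record length and digest when the document is published,
# then re-check both on every reference.
doc = b'{"spdxId":"SPDXRef-A","name":"libfoo"}'
recorded_len = len(doc)
recorded_digest = hashlib.sha256(doc).hexdigest()
assert validate(doc, recorded_len, recorded_digest)
assert not validate(doc + b" ", recorded_len, recorded_digest)
```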
The subgroup will look at verification of Elements themselves, independent of the 3.x timeline. verifiedUsing will continue to mean verification of what the Element references (e.g. an artifact).