# Risks facing Distributed Security, and how they are mitigated

Scope: Clearly we cannot guarantee that a computer will continue to work after, say, an asteroid hits the earth. In our case, the asteroid is the possibility that quantum computing or other advances will break encryption. We're not discussing that here.

Encryption: Distributed Security uses the standard, recommended encryption algorithms and parameters. Note that when we say "key", we are really referring to a set consisting of an RSA-OAEP keypair for encrypt/decrypt, and an ECDSA keypair for sign/verify using ES384. Encryption of messages and wrapping of keys is done with a one-time use AES-GCM symmetric key for the payload (A256GCMKW), and an RSA-OAEP asymmetric encryption (RSA-OAEP-256) of the symmetric key for each intended decryptor. The internal formats and methodologies are from the JOSE standards: JWK, JWS, and JWE, including general-form key sets, signatures, and encryptions that carry multiple keys, signatures, and recipients. These are produced with the widely used Panva JOSE library. See implementation.md.
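
To make the moving parts concrete, here is a minimal sketch using the Panva JOSE library directly. It demonstrates the algorithms named above, not Distributed Security's actual internals:

```js
import * as jose from 'jose';

// Each logical "key" is really two keypairs:
const signing = await jose.generateKeyPair('ES384');           // ECDSA sign/verify
const encrypting = await jose.generateKeyPair('RSA-OAEP-256'); // RSA-OAEP encrypt/decrypt

// General-form JWS signature over a payload.
const jws = await new jose.GeneralSign(new TextEncoder().encode('a message'))
  .addSignature(signing.privateKey)
  .setProtectedHeader({ alg: 'ES384' })
  .sign();

// General-form JWE: the payload is encrypted once with a fresh AES-GCM key,
// and that key is encrypted (RSA-OAEP-256) for each intended decryptor.
const jwe = await new jose.GeneralEncrypt(new TextEncoder().encode('a secret'))
  .setProtectedHeader({ enc: 'A256GCM' })
  .addRecipient(encrypting.publicKey)
  .setUnprotectedHeader({ alg: 'RSA-OAEP-256' })
  .encrypt();

const { plaintext } = await jose.generalDecrypt(jwe, encrypting.privateKey);
```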

Durability: If a user's key is lost, they can't do anything. The same is true, and more expensively, for a key representing some role or authority in an enterprise. Distributed Security does not follow the popular approach of trusting a third party to hold an unencrypted custodial copy of the key. Such arrangements create a risk to the user that the custodian might not keep the key safe, and a risk to the custodian that they may be forced to divulge the key under threat of violence or legal action. Instead, we ask only that an application provide durable storage that verifies signatures before writes. We then encrypt the keys such that the storage provider cannot themselves make use of them, and sign the encryption. This allows the application to construct or join some long-term, highly-durable distributed general key-value store. Our use is constructed in such a way that the performance demands are minimal.
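
As an illustration, a storage provider's write path might look like the following sketch, where the store interface and payload shape are assumptions rather than Distributed Security's actual protocol:

```js
import * as jose from 'jose';

// Sketch: accept a write only if the signed payload verifies against the
// public key associated with the tag being written. 'store' is hypothetical.
async function put(store, tag, signedPayload, publicKeyForTag) {
  try {
    await jose.generalVerify(signedPayload, publicKeyForTag);
  } catch {
    throw new Error(`Rejected unverified write to ${tag}.`);
  }
  return store.set(tag, signedPayload); // hypothetical key-value write
}
```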

Loss and Recovery: Users change their devices fairly often, and individual devices are lost or otherwise fail to work. In such cases, a user still needs to be able to access their individual user key.

  • The most convenient safeguard for this is to simply have multiple devices. Any member of a team - or any device of an individual - can unlock the team key and update it for a new set of members.
  • However, not everyone can have multiple devices, and in the case of total catastrophic loss, it is best to be able to reclaim one's identity (one's key) based on something you know rather than only on something you have. To this end, applications are encouraged to insist that each individual have not only one or more device keys, but also a virtual device based on security questions. Distributed Security allows for the creation of a team-like key whose "member" is not a public tag associated with a key, but rather a "security question" tag (or a hash of a standardized question), associated with a key derived from the text of a "security question answer", as sketched below. In the simplest form, any correct answer would unlock the key. Multiple question/answer pairs can also be concatenated, and multiple recovery members can be added with different valid pairs.
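
Here is a sketch of how such an answer-derived recovery member could work, assuming PBKDF2 for the derivation and symmetric AES-GCM key wrapping (A256GCMKW) for the recovery recipient; the normalization, salt, and iteration count are illustrative assumptions:

```js
import * as jose from 'jose';

// Derive a 32-byte secret from the (normalized, concatenated) answers.
async function recoverySecret(answers, salt) {
  const text = answers.map(a => a.trim().toLowerCase()).join('\n');
  const material = await crypto.subtle.importKey(
    'raw', new TextEncoder().encode(text), 'PBKDF2', false, ['deriveBits']);
  const bits = await crypto.subtle.deriveBits(
    { name: 'PBKDF2', hash: 'SHA-256', iterations: 1000000,
      salt: new TextEncoder().encode(salt) },
    material, 256);
  return new Uint8Array(bits);
}

// The derived secret acts as one more recipient of the wrapped key, using
// symmetric AES-GCM key wrapping (A256GCMKW) instead of RSA-OAEP-256.
const secret = await recoverySecret(['Fluffy', 'Main Street'], 'hash-of-questions');
const wrapped = await new jose.GeneralEncrypt(
  new TextEncoder().encode('...the individual key, exported as JWK...'))
  .setProtectedHeader({ enc: 'A256GCM' })
  .addRecipient(secret)
  .setUnprotectedHeader({ alg: 'A256GCMKW' })
  .encrypt();
```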

Team Key Theft: Having an unencrypted copy of a key would allow the possessor to sign for a user or team, and to read text that was encrypted for the sole use of that user or team. This is not possible as long as the encryption holds and no member device key has been stolen. All team keys are wrapped (encrypted) within the vault when they are created or when their membership is modified. Only the wrapped key is sent through the application code to storage provided by the application, and it can then only be decrypted within a vault that holds one of the member keys.
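
A sketch of that wrapping, assuming the team key has already been exported to bytes and that memberPublicKeys holds each member tag's public encryption key:

```js
import * as jose from 'jose';

// Wrap the team key so that any single member can unwrap it.
async function wrapTeamKey(teamKeyBytes, memberPublicKeys) {
  const encrypter = new jose.GeneralEncrypt(teamKeyBytes)
    .setProtectedHeader({ enc: 'A256GCM' });
  // One recipient per member: each gets its own RSA-OAEP-256 encryption
  // of the one-time AES key that protects the payload.
  for (const publicKey of memberPublicKeys) {
    encrypter.addRecipient(publicKey)
      .setUnprotectedHeader({ alg: 'RSA-OAEP-256' });
  }
  return encrypter.encrypt();
}

// Within a vault holding any one member's private key:
// const { plaintext } = await jose.generalDecrypt(wrapped, memberPrivateKey);
```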

Device Key Theft: Device keys are not used by the application, but are used within the vault to gain access to the encrypted team keys that are stored in the cloud. Device keys are created within the vault and spend their entire lifetime there.

Vault Penetration: The vault provides runtime isolation and persistent storage isolation:

  • If external software could be made to alter the vault software or gain access to its data structures, keys could be stolen. Ordinarily, such software could be poorly-written or malicious application code, or such code introduced by a library dependency, a cross-site scripting attack, or a browser extension. Runtime isolation prevents this through a multi-layer defense:
    • The vault runs in an iframe dynamically created by the distributed security code itself, with the API accessed only through postMessage. Browsers do not expose any iframe code or data to other frames that are in different domains, nor to extensions.
    • Applications must host the iframe in a second https domain that is different from the application. The distributed security code verifies this at runtime and, if the check fails, displays an explanation warning the user not to proceed. (Such warnings can be ignored during development.) The code also confirms that each postMessage originated from the parent frame.
    • Code and data are then further isolated by running in a worker within the iframe.
    • The distributed security software itself is open source and has no external dependencies other than Panva JOSE.
  • The device key is persisted from one session to the next using browser storage. If an attacker could gain access to this, they could unlock the individual user's key, and then their teams. This is prevented through several layers of persistent storage isolation:
    • Browsers do not let application code or extensions read persisted data from workers and iframes that are in a different domain. (See runtime isolation, above.)
    • While separate applications can share the same team tags (for individuals and teams) if they share the same module domain, they still cannot access each other's device tags. Browsers are beginning to enforce dynamic state partitioning, which prevents one domain that embeds our vault from hijacking the keys that the same vault holds for another domain. While not every browser does this yet, the distributed security software stores the referrer origin alongside the device keys, effectively creating its own dynamic state partitioning.
    • Finally, we don't trust the browser's persistent storage on disk - we encrypt each device key using a secret provided by the application. The application could do a poor job, of course, but we recommend using some combination of the Credentials API with a token provided by the application's server. Our weak requirement is only that the application always provide Distributed Security with the same secret string for the same user+device. We use the string to create a PBKDF2-derived key, which is then used to encrypt the device key for persistent storage, and to decrypt it for use in the next session.
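
For instance, that last layer might look like this sketch, in which the salt handling, storage location, and key formats are assumptions:

```js
// Turn the application-supplied secret into an AES-GCM wrapping key.
async function storageKey(secretString, salt) {
  const material = await crypto.subtle.importKey(
    'raw', new TextEncoder().encode(secretString), 'PBKDF2', false, ['deriveKey']);
  return crypto.subtle.deriveKey(
    { name: 'PBKDF2', hash: 'SHA-256', iterations: 1000000, salt },
    material, { name: 'AES-GCM', length: 256 }, false, ['wrapKey', 'unwrapKey']);
}

// Encrypt the device private key for browser storage. Without the same
// secret string next session, the persisted bytes are useless to a thief.
// (The device key must have been created extractable to wrap it as JWK.)
async function persistDeviceKey(deviceKey, secretString) {
  const salt = crypto.getRandomValues(new Uint8Array(16));
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const wrappingKey = await storageKey(secretString, salt);
  const wrapped = await crypto.subtle.wrapKey(
    'jwk', deviceKey, wrappingKey, { name: 'AES-GCM', iv });
  return { wrapped, salt, iv }; // persist all three, e.g., in IndexedDB
}
```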

Note:

  • Browsers and operating systems do have bugs. However, to exploit such a bug, an attacker would need to simultaneously penetrate each of these layers. One is not enough.
  • Browsers have developer tools by which iframes and workers may be debugged. While an application might be created at a time when the above isolation techniques keep, e.g., runtime data secret, it is not impossible that some browser maker will introduce an "improvement" that makes users susceptible to a phishing attack (e.g., by cutting and pasting to or from a developer tool). Existing such tools reveal only the encrypted device keys.
  • Browsers and operating systems may also have back doors (e.g., for administrative or government use). Distributed Security has no special protection for this case.
  • Finally, developers could accidentally obtain a malicious or incompetent copy of Distributed Security that has none of these provisions, but a similar API. We want to encourage forks and improvements, but want to try to protect developers against crapware. Our license and terms of service (TBD) are intended to make improper implementations prosecutable if they are publicly shared.

Unauthorized Changes to Membership: We depend on team keys being decryptable only by the members, and on the members being changeable over time without changing the key or tag itself. The key wrapping technique (see implementation.md) works in part by encrypting N copies of text for N members, each using the public key of the Nth member tag. This ensures that only a member can read the key, but allows any member to wrap the key for a new set of members. However, accepting a new wrapped key for storage is up to the application. The system automatically signs the wrapping as the team and as the particular member of the team, and includes a timestamp, in a standard JOSE format. Distributed-Security's verify will make sure that such signatures are indeed by a current member of the team and that the timestamp is greater than that of the currently stored version. The application's implementation of cloud storage should do the same (using Distributed-Security's verify itself, or any JOSE equivalent). This protects against replay by a bad actor who would undo a change by restoring a previous version, and against actions by non-current-members.
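
A sketch of such a storage-side check follows; the store interface, the publicKeyForTag lookup, and the use of an iat protected-header claim are assumptions, and the member-signature check is elided:

```js
import * as jose from 'jose';

async function acceptMembershipChange(store, teamTag, incoming) {
  const teamKey = await publicKeyForTag(teamTag); // hypothetical lookup
  // 1. The new wrapping must verify as the team itself...
  const { protectedHeader } = await jose.generalVerify(incoming, teamKey);
  // 2. ...and as a *current* member of the team (elided here).
  // 3. Its timestamp must be later than the stored version's, so that a
  //    replayed old wrapping cannot undo a membership change.
  const existing = await store.get(teamTag);
  const prior = existing
    ? (await jose.generalVerify(existing, teamKey)).protectedHeader.iat
    : 0;
  if (!(protectedHeader.iat > prior)) {
    throw new Error('Rejected stale or replayed membership change.');
  }
  return store.set(teamTag, incoming);
}
```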

However, while the unwrapped team key is not available to application software through the vault, the storage format is standardized, and so a soon-to-be-removed member could manually unwrap the team key while still a member and attempt to use it after being removed from the team. Thus, with sufficient planning, a former member could sign for the team after being kicked out. If this is a problem for some use case, we recommend that the application require the member tag to also sign important documents or transactions whenever a team signature is required. (Indeed, this is what Distributed Security itself requires for modifying a team, above.) Of course, nothing can stop a member from decrypting resources before getting kicked out, and some applications may need additional read-time tracking, which would then also be useful for new documents created after a member is removed from the team.

Privacy: A key tag is pseudonymous, in that it does not by itself indicate a particular individual's identity. However, tags are stable over time, and with additional information, a tag could come to be associated with a particular human, making their activities known via public documents bearing their tag's signature. This can be desirable in some cases, and not in others. Applications can mitigate this by creating multiple identities, each associated with the same set of devices or with multiple devices. The Distributed Security API allows the creation of either, and allows multiple device keys to be created on the same physical device for the same vault URL. It is up to the application as to how this is managed. (For example, an application could create a new device tag and a new individual tag for each transaction, as is done in Bitcoin.)

Confused Deputy: While the keys are secured by encryption and the vault, the use of the keys is entirely under the control of the application. One form of this risk is a malicious actor managing to insert their own code into the application itself, and from there calling the Distributed Security operations using the keys that had been generated for the application. This is the biggest risk, and its prevention - through good application programming and deployment policies - is outside the scope of the Distributed Security API. Another variation is a separate malicious application attempting to directly use the same vault as the application (e.g., by importing the same url as the application's vault module). This can be prevented by having the application host its own https vault (i.e., not the demo vault at github) with same-origin headers, and by supplying its own application-specific getUserDeviceSecret response.
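
As a sketch of that last point, an application-specific secret callback might combine a server-held token with the tag, so that neither browser storage nor the server alone suffices to reconstruct the secret. The parameter names and fetchApplicationToken are assumptions; the only hard requirement is that the same user+device always yields the same string:

```js
// Hypothetical application-supplied callback. promptString (for
// security-question flows) is unused in this sketch.
async function getUserDeviceSecret(tag, promptString) {
  const token = await fetchApplicationToken(tag); // hypothetical server call
  return `${token}:${tag}`;
}
```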