Threat Modeling for Decentralized Identities #115

Open
simoneonofri opened this issue May 14, 2024 · 18 comments
simoneonofri commented May 14, 2024

[Threat Modeling mode, consider this a work in progress, and it may be moved somewhere. Contributors welcome!!!]

Introduction

Status of this document

An outline of the many concerns related to these areas of work, intended to start discussion, along with initial principles for addressing user considerations.

Editor: Simone Onofri, simone@w3.org

Scope

This document is the living Threat Model related to Decentralized Identities.

The topic is vast and intricate. Our primary focus is on Digital Credentials, particularly the cases associated with the proposed Digital Credentials API for the FedID WG.

Considering the four-layered SSI Technology Stack from ToIP, we are starting from Layer 3: "Credentials", and specifically the Credential-Presentation phase, using the architecture related to Verifiable Credentials, which is an open standard and can therefore be analyzed by everybody as a reference architecture.

As the Threat Model is a living document, it can be expanded on the other parts of the architecture and at a different level of detail, e.g., going deep into cryptographic aspects of a specific profile.

In any case, particularly when analyzing broader contexts such as Security, Privacy and Harm, the various mitigations and the scope of the analysis also span the other elements of the stack.

It is intended to be a collective analysis. It stems from the need for a Threat Model to guide the development of Decentralized Identities in a secure and privacy-preserving way and to avoid harm. It starts from the Digital Credentials API at a high-level perspective.

Terminology

In ISO/IEC 24760-1:2019, "IT Security and Privacy - A framework for identity management", an identity is "a set of attributes related to an entity", where an entity is something "that has recognizably distinct existence" and can be "logical or physical", such as "a person, an organization, a device, a group of such items, a human subscriber to a telecom service, a SIM card, a passport, a network interface card, a software application, a service or a website".

Taking the example of a person, these characteristics can be physical appearance, voice, a set of beliefs, habits, and so on. It is important to distinguish identity from the identifier (e.g., a user name).

It is common to think of Digital Credentials as those related to humans, particularly those issued by a government, also known as "Real-world Identities".

This leads to a broader consideration of the Threat Model as it also brings in Privacy as a right and also Harm components.

As indicated by ISO, digital identities can refer to others such as devices (i.e., IoT), prototypes in the supply chain, or even pets.

Related Work

Methodology

Security is a function of separation between the asset and the threat, and a threat can have different impacts, such as on security, privacy, or harm.

There are many approaches to Threat Modeling. The first approach we will use is based on Adam Shostack's Four Question Frame:

  • What are we working on? (understanding the architecture, actors involved, etc...)
  • What can go wrong? (threats, threat actors, attacks, etc...)
  • What are we going to do about it? (countermeasures, residual risk, etc...)
  • Did we do a good job? (reiterating until we are happy with the result)

For the central questions, we can use (as in Risk Management) prompt lists or checklists of threats, attacks, or controls.

It is useful to frame the analysis with OSSTMM. OSSTMM controls allow us both to analyze what can go wrong (e.g., loss of control) and to check whether there are problems in individual controls. Even if it is control-oriented and seems security-oriented, it has Privacy as one of the Operational Controls and can glue the different pieces together.

Channel and Vector

OSSTMM is very precise about the scope of analysis, so it defines a channel and a vector.

For an accurate analysis, we are considering the COMSEC Data Networks Channel in the specific Internet/Web vector for this issue.

Although different digital credentials may have a different channel/vector (e.g., Wireless), they can still be analyzed similarly.

Analysis

What are we building?

To begin to create a good Threat Model, we can first consider the components of the Decentralized Identity architecture (which in this context is synonymous with Self-Sovereign Identity) as defined in W3C's Verifiable Credentials Data Model.

Architecture and Actors

  • We have a Holder who, inside a Wallet, has its credentials.
  • We have an Issuer who issues the credentials to the Holder and manages the revocation.
  • We have a Verifier who verifies the Holder's credentials to give access to a resource or a service.
  • We also have a Verifiable Data Registry (VDR) that stores identifiers and schemas.

Interactions between actors typically occur through software or other technological mediums. We will generically call such components Agents. One agent might be embedded in a Wallet (the component that contains the Holder's credentials), and another might be a Browser (which, by definition, is a user agent).

Flows

We can consider three general flows, comprising four "ceremonies" where the various actors interact.

  • Credential-Issuing
  • Credential-Presentation and Credential Verification
  • Credential Revocation

It is important to note that the flow does not stop here and can be safely continued in several ways. For example, the Holder receives credentials from an Issuer and uses them to identify themself to a Verifier to buy a physical object or a ticket to an event. The Verifier could then become an Issuer, issuing a certificate of authenticity for the good or issuing the ticket directly into the Holder's Wallet.

Credential Issuing (CI)

  1. The Issuer requests a certain authentication mechanism from the Holder. Typically, the higher the level of assurance, the stronger the authentication required.
  2. After authentication, the Holder asks the Issuer for the credential, or the Issuer submits it.
  3. If both parties agree, the Issuer sends the credential to the Holder in a specific format.
  4. The Holder stores the credential in the Wallet.

Credential-Presentation (CP)

  1. The Holder requests access to a specific resource or service from the Verifier.
  2. The Verifier then presents a request for proof to the Holder. This can be done either actively (e.g., the Verifier presents a QR code that the Holder has to scan) or passively (e.g., the Holder accesses a web page and is asked for a credential).
  3. Through the Wallet, the Holder's Agent determines whether there are credentials that can generate the required Proof.
  4. The Holder explicitly approves the use of the credential if they possess it.
  5. The Holder's Agent then prepares the Presentation - which can contain the full credential or part of it - and sends it to the Verifier.

Credential-Verification (CV)

  1. The Agent of the Verifier verifies the Presentation (e.g., if the Presentation and the contained Credentials are signed correctly, issued by an Issuer they trust, compliant with their policy, the Holder is entitled to hold it, and that it has not been revoked or expired). The revocation check can be done using the methods defined by the specific credential.
  2. If the verification is successful, the Verifier grants the Holder access.
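The verification steps above can be sketched in Python. This is a toy illustration, not an implementation: it stands in for real digital signatures with an HMAC over the canonicalized credential, and the trust list, revocation set, and field names (`issuer`, `expirationDate`) are hypothetical.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical trust list and revocation source; a real Verifier would use the
# Issuer's published public keys and the credential's own revocation method.
TRUSTED_ISSUER_KEYS = {"did:example:issuer-1": b"issuer-1-secret"}
REVOKED_IDS = {"urn:uuid:revoked"}

def verify_presentation(credential: dict, signature: bytes):
    """Run the Credential-Verification checks in order, failing fast."""
    key = TRUSTED_ISSUER_KEYS.get(credential.get("issuer"))
    if key is None:
        return False, "issuer not trusted"
    # Canonicalize the credential before checking the (toy) signature.
    payload = json.dumps(credential, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False, "signature invalid"
    if datetime.fromisoformat(credential["expirationDate"]) <= datetime.now(timezone.utc):
        return False, "credential expired"
    if credential["id"] in REVOKED_IDS:
        return False, "credential revoked"
    return True, "ok"
```

A real implementation would also check policy compliance and Holder binding, which are omitted here.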

Credential-Revocation (CR)

  1. The Issuer can revoke a credential in various ways.
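As an illustration of one family of revocation methods, a status list publishes a packed bitstring in which each credential is assigned an index; checking a bit is cheap, though fetching the list can expose the Holder to linking, which is why Cryptographic Accumulators come up later as a more private alternative. A minimal sketch:

```python
def revoke(status_list: bytearray, index: int) -> None:
    """Set the bit for a credential's index (1 = revoked)."""
    byte, bit = divmod(index, 8)
    status_list[byte] |= 0x80 >> bit

def is_revoked(status_list: bytes, index: int) -> bool:
    """Check the bit for a credential's index in the published list."""
    byte, bit = divmod(index, 8)
    return bool(status_list[byte] & (0x80 >> bit))
```

A 16 KB list covers 131,072 credentials, which is why this approach scales well despite its privacy trade-offs.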

Trust and Trust Boundaries

Trust is a key element in threat modeling. In fact, in OSSTMM, it is an element of privileged access to the asset, which, by trusting, lowers the various operational controls.

At the Process level, trust relationships are:

  • The Holder trusts the Issuer during issuance.
  • The Holder trusts their Agents and Wallet, always.
  • The Holder trusts the Verifier during the Presentation.
  • The Verifier must trust the Issuer during Verification.
  • All actors trust the Verifiable Data Registry.
  • Both the Holder and the Verifier must trust the Issuer to revoke VCs that have been compromised or are no longer true.

At the Software level, Trust boundaries are documented in the Data Model in section 8.2:

  • An issuer's user agent (issuer software), such as an online education platform, is expected to issue verifiable credentials only to individuals that the issuer asserts have completed their educational program.
  • A verifier's user agent (verification software), such as a hiring website, is expected to only allow access to individuals with a valid verification status for the verifiable credentials and verifiable presentations they provide to the platform.
  • A holder's user agent (holder software), such as a digital wallet, is expected to divulge information to a verifier only after the holder has consented to its release.

Data Model, Formats, Protocols

Modeling Decentralized Credentials and, in particular, Verifiable Credentials at a high level, we must consider that the technology is structured with the following:

  • Data Models: for Credentials and Presentation (e.g., the Verifiable Credentials Data Model).
  • Identifiers: in particular DIDs, a decentralized type of URI with different methods, which may be more or less privacy-preserving.
  • Formats: for encoding Data Models (e.g., JSON-LD, JWT, SD-JWT), which can be cryptographically agile (supporting multiple cryptographic formats) and thus have different features (e.g., JWT does not support Selective Disclosure, SD-JWT does).
  • Signature Algorithms: Formats can support different signature algorithms (e.g., BBS, CL, ECDSA), which may or may not be ZKP-capable, support different privacy features, and be quantum-resistant.
  • Revocation Algorithms: Issuers can implement several credential revocation methodologies (e.g., a Revocation List, Cryptographic Accumulators, etc.).
  • Protocols: for the different phases of Issuance and Presentation (e.g. OID4VCI, OID4VP, SIOPv2).

We can combine these elements in a technology implementation. Therefore, it is natural to think about how some profiles may be more secure or privacy-preserving than others.
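As a purely hypothetical illustration, a profile can be thought of as one choice per element; the specific combination below is invented for the example, not a recommendation:

```python
# An invented example profile: one choice per element of the stack above.
profile = {
    "data_model": "W3C Verifiable Credentials Data Model",
    "identifiers": "did:key",
    "format": "SD-JWT",           # supports Selective Disclosure
    "signature": "ECDSA",         # not a ZKP scheme; BBS would add unlinkability
    "revocation": "status list",  # linkable; an accumulator would be more private
    "protocols": {"issuance": "OID4VCI", "presentation": "OID4VP"},
}
```

Comparing such profiles element by element is one way to reason about which is more secure or privacy-preserving.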

Assets

Assuming that the main asset is the credential and the information derived during its life cycle, we can take as a basis the protection of its three Privacy Properties, as defined by Ben Laurie:

  • Verifiable
  • Minimal
  • Unlinkable

These properties were defined for a very specific case of Decentralized Identities: those related to people and, even more specifically, those issued by governments. They are grounded in the concept of Privacy, specifically the protection of the Holder.

While we can therefore consider the Minimal and Unlinkable properties as elements protecting the Holder, the Verifiable property is of interest to all. Verifiable means that the Verifier can confirm who issued the credential, that it has not been tampered with, expired, or revoked, that it contains the required data, and that it is correctly associated with the Holder.

Therefore, the Threat Model starts from this specific use case, that of government-issued credentials for people, considering that it is one of the most complex.

Minimization and Unlinkability are generally interrelated (e.g., the less data I provide, the less they can be related). They must coexist with Verifiability (e.g., if I need to know that the credential has been revoked, I usually need to contact the Issuer, who has a list of revoked credentials, but in this way, it is possible to link the credential).

Minimization Scale

To try to qualify Minimization, we can use a scale defined by the various cryptographic techniques developed for Digital Credentials:

  • Full Disclosure (e.g., I show the whole passport).
  • Selective Disclosure (e.g., I show only the date of birth).
  • Predicate Disclosure (e.g., I show only the age).
  • Range Disclosure (e.g., I show only that I am an adult).
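The information content of each level can be illustrated with a sketch. Real systems prove these statements cryptographically (e.g., with ZKPs) without handing the Verifier the raw attributes; this toy code, with hypothetical attribute names, only shows how much each level reveals:

```python
from datetime import date

# Hypothetical credential attributes, for illustration only.
credential = {"name": "Alice Example", "birth_date": date(1990, 5, 1)}

def selective_disclosure(cred: dict, fields: list) -> dict:
    """Selective Disclosure: reveal only the requested attributes."""
    return {k: cred[k] for k in fields}

def age_of(cred: dict, today: date) -> int:
    b = cred["birth_date"]
    return today.year - b.year - ((today.month, today.day) < (b.month, b.day))

def predicate_disclosure(cred: dict, today: date) -> dict:
    """Predicate Disclosure: reveal a derived value (the age), not the birth date."""
    return {"age": age_of(cred, today)}

def range_disclosure(cred: dict, today: date) -> dict:
    """Range Disclosure: reveal only a boolean fact (adult or not)."""
    return {"is_adult": age_of(cred, today) >= 18}
```

Each step down the scale strictly reduces what the Verifier learns, which is why the ordering matters when a Verifier chooses what to request.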

Unlinkability Scale

To try to qualify Unlinkability, we can use the Nymity Slider, which classifies credentials by:

  • Verinymity (e.g., Legal name or Government Identifier).
  • Persistent Pseudonymity (e.g., Nickname).
  • Linkable Anonymity (e.g., Bitcoin/Ethereum Address).
  • Unlinkable Anonymity (e.g., Anonymous Remailers).

Therefore, it might be possible to consider "moving" the slider toward Unlinkable Anonymity, as per the properties.

What can go wrong?

After reasoning about assets, what we must protect, and from whom, becomes obvious.

Threat Actors

We have mentioned before that one of the key points is the protection of the Holder. Still, by simplifying and referring to the well-known practice of "trust-no-one", we can easily get the list of actors involved:

Holder, Issuer, Verifier, and their agents/software components (e.g., Browser, Wallet, Web Sites). Which of these can be a Threat Actor? To simplify the question, each actor is potentially a Threat Actor for the others. So, all against all. Indeed, one is often also a Threat Actor to oneself (e.g., Alert fatigue).

In addition, although there are trust relationships between the various actors and their software (valid in the various steps), such software can also be malicious. Specifically, it can track the Holders, the type of credential they have, and how and where they use it through telemetry and statistical collection, and it can push the user to certain attitudes.

One must also consider a possible external threat actor, who could also be an observer or use active techniques, and who wants to track the three main actors or their agents, such as Marketers, Data brokers, Stalkers, Identity thieves, intelligence and law enforcement agencies (laws often constrain them), and OSINT investigators.

A further case is the combination of such actors, such as multiple Verifiers in cahoots or a potential collaboration between Issuer and Verifier to track the Holder.

Evil user stories

Using the information we now have, we can create some generic Evil User Stories:

  • A malicious Verifier who wants to collect too much data from the Holder.
  • A malicious Holder who wants to get from the Verifier what they are not entitled to.
  • A malicious Issuer who wants to track its holders.
  • A malicious Agent who wants to track its holder.
  • An external Adversary who wants to track the Issuer, how a Verifier works, or a specific Holder.

Finding the Threats

One effective though inefficient approach to threat modeling is to cycle through the various lists of threats, attacks, controls, and objectives in a brainstorming session to assess how they may affect architecture components, actors, assets, and the flow in general. Using multiple frameworks, some elements may repeat.

Ben's Privacy Properties (Objectives)

  • Verifiable:

    • Description: There’s often no point in making a statement unless the relying party has some way of checking it is true. Note that this isn’t always a requirement—I don’t have to prove my address is mine to Amazon because it's up to me where my goods get delivered. But I may have to prove I’m over 18 to get alcohol delivered.
    • Analysis: This brings us to the concept of integrity (via cryptographic means), authenticated and trusted at the level of the Issuer and of what it issues.
  • Minimal:

    • Description: This is the privacy-preserving bit - I want to tell the relying party the very least he needs to know. I shouldn’t have to reveal my date of birth or prove I’m over 18 somehow.
    • Analysis: We must release only what is strictly necessary. Since this is an interaction, we can consider Subjugation interactive OSSTMM control. For example, if we need to verify age with our credentials, it is one thing to show the whole document or the date of birth (Selective Disclosure) or to answer a specific query with true-false (Predicate Proofs). This property also helps Unlinkability (the less data I have, the less correlation I can do).
  • Unlinkable:

    • Description: If the relying party or parties, or other actors in the system, can, either on their own or in collusion, link together my various assertions, then I’ve blown the minimality requirement out of the water.
    • Analysis: This builds on minimality. It should not be possible to link a Presentation back to its issuance (Blind Signatures), to have to contact the Issuer to learn whether the credential has been revoked (e.g., revocation via Cryptographic Accumulators), or to use revocation lists that expose the list of credentials. Generally, if an identifier can be exploited to link identities, it should rotate (Rotational Identifiers), as with PANs when using Apple Pay.
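Rotational and pairwise identifiers, as mentioned above, can be sketched as follows; the function names are hypothetical, and a real wallet would derive pseudonyms inside a secure key store:

```python
import hashlib
import hmac
import secrets

def new_session_identifier() -> str:
    """A fresh random identifier for each Holder-Verifier interaction session."""
    return secrets.token_urlsafe(16)

def pairwise_pseudonym(holder_secret: bytes, verifier_id: str) -> str:
    """A per-Verifier pseudonym: stable toward one Verifier, unlinkable across Verifiers."""
    return hmac.new(holder_secret, verifier_id.encode(), hashlib.sha256).hexdigest()
```

Two Verifiers comparing pseudonyms derived this way cannot correlate them without the Holder's secret.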

LINDDUN (Threats)

  • Linking:

    • Description: Learning more about an individual or a group by associating data items or user actions. Linking may lead to unwanted privacy implications, even if it does not reveal one's identity.
    • Threat: We are generally driven to think of this as a threat to the Holder and linking its attributes, but per se, even an Issuer can have the problem of tracking its users. This applies to both a Verifier (or a group of Verifiers) and an external third party observing the various exchanges or otherwise any Revocation list.
    • Mitigations:
      • Use Blinded Signatures.
      • The Verifier should request, in order of preference: Range Proof, Predicate Proof, Selective Disclosure, and only then the full credential.
      • The Issuer should use an anonymous revocation method such as Cryptographic Accumulators.
      • The Issuer should use random identifiers when generating the credential.
      • The Holder should use, when interacting with the Verifier, rotating and always-random identifiers specific to that interaction session.
      • The Issuer should use privacy-preserving identifiers (e.g., DIDs) that, once resolved, do not generate a connection to a system controlled directly or indirectly by the Issuer itself.
  • Identifying:

    • Description: Identifying threats arises when the identity of individuals can be revealed through leaks, deduction, or inference in cases where this is not desired.
    • Threat: The threat is the ability to identify an individual using his credentials.
    • Mitigations:
      • The Verifier should request, in order of preference: Range Proof, Predicate Proof, Selective Disclosure, and only then the full credential.
      • The Issuer and the Holder must not write personally identifiable information (PII) or linkable identifiers in the VDR.
      • The Issuer should use an anonymous revocation method.
  • Non-Repudiation:

    • Description: Non-repudiation threats pertain to situations where an individual can no longer deny specific claims.
    • Threat: The inability of an actor to deny the issuance or presentation of a credential; an example is a use case from DHS.
    • Mitigations
      • The Issuer must use proper Authentication during the issuing process depending on the Levels of Assurance (LOAs).
      • The Issuer, the Verifier, and the Holder (and their agents) need proper logging, e.g., following OWASP ASVS 7.1: each log entry must contain enough metadata for an investigation, a timestamp with timezone reference, no PII but session identifiers in hashed format, in a common machine-readable format, and possibly signed.
  • Detecting:

    • Description: Detecting threats pertains to situations where an individual's involvement, participation, or membership can be deduced through observation.
    • Threat: In this case, the threat can occur at several stages: when a credential is requested, when it is presented, and when it is verified.
    • Mitigations:
      • When a proof or a credential is requested, the Holder's agent must return the same message and behavior (including timing, to avoid side-channel attacks) whether or not a wallet is present, whether or not the wallet has a credential, whether or not the credential is valid, and whether or not the user accepts. The same applies whether or not the user gives the browser access to the wallet.
      • When a credential's validity is verified, there should be no direct connections to systems controlled by the Issuer (e.g., when a DID is resolved), to avoid back-channel connections.
  • Data Disclosure:

    • Description: Data disclosure threats represent cases in which disclosures of personal data to, within, and from the system are considered problematic.
    • Threat: The threat is data disclosure during presentation and verification.
    • Mitigations:
      • The Verifier should request, in order of preference: Range Proof, Predicate Proof, Selective Disclosure, and only then the full credential.
      • The Issuer and the Holder must not write personally identifiable information (PII) or linkable identifiers in the VDR.
      • The Issuer should use an anonymous revocation method.
  • Unawareness & Unintervenability:

    • Description: Unawareness and unintervenability threats occur when individuals are insufficiently informed, involved, or empowered concerning the processing of their data.
    • Threat: The Holder is unaware of how their credentials are used or shared.
    • Mitigations:
      • The Holder must be informed when a Verifier asks for the credential's Full Disclosure or Selective Disclosure.
      • The Holder must be informed when their credential is phoning home or when back-channel connections are possible.
      • The Holder must consent to each use of their credential and must be able to identify the Verifier, the Proof requested (at the moment of the request), and which credentials and information are shared with the Verifier after the selection.
  • Non-Compliance:

    • Description: Non-compliance threats arise when the system deviates from legislation, regulation, standards, and best practices, leading to incomplete risk management.
    • Threat: The risk of credentials not complying with legal, regulatory, or policy requirements. This element can also be read as the need for minimal training for the Holder, particularly if they are in a protected or at-risk category, so that they are aware of what they are doing and of the risks associated with Social Engineering.
    • Mitigations:
      • Provide Security Awareness Training to the Holder.
      • Verifiers and Issuers must be subject to regular audits.
      • The standards and their implementations must contain mitigations for Harms such as Surveillance, Discrimination, Dehumanization, Loss of Autonomy, and Exclusion.
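The logging mitigation above (hashed session identifiers, timezone-aware timestamps, machine-readable format, no PII) can be sketched as follows; this is an assumption-laden illustration with invented field names, not ASVS-mandated code:

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_log_entry(event: str, session_id: str, metadata: dict) -> str:
    """A machine-readable log line: UTC timestamp, hashed session id, no raw PII."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        # The raw session identifier never reaches the log, only its hash.
        "session": hashlib.sha256(session_id.encode()).hexdigest(),
        **metadata,
    }
    return json.dumps(entry, sort_keys=True)
```

Signing each line (or a hash chain over lines) could be layered on top for tamper evidence.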

RFC 6973 (Threats)

  • Surveillance:

    • Description: Surveillance observes or monitors an individual's communications or activities.
    • Threat: Although we can semantically link this threat to surveillance by governments (of the Holder, or by an adversary), we can actually consider surveillance also in relation to profiling for targeted advertising (and thus from software agents trusted by the Holder) or even from threat actors such as stalkers or similar.
    • Mitigations
      • refer to LINDDUN's Linking, Identifying, Data Disclosure
  • Stored Data Compromise:

    • Description: End systems that do not take adequate measures to secure stored data from unauthorized or inappropriate access expose individuals to potential financial, reputational, or physical harm.
    • Threat: All actors can be compromised. This must be considered especially in the implementation of wallets and agents (for the Holder), the compromise of the end-user device, and the signature keys of the Issuer.
    • Mitigations:
      • Keys must be stored securely and protected from compromise of the device or location where they are contained (e.g., Secure Enclave, Keystore, HSMs).
      • At the Issuer's organizational level, the Incident Response Plan must include what to do in case of compromise of private keys or underlying device technology.
  • Intrusion:

    • Description: Intrusion consists of invasive acts that disturb or interrupt one's life or activities. Intrusion can thwart individuals' desires to be left alone, sap their time or attention, or interrupt their activities. This threat is focused on intrusion into one's life rather than direct intrusion into one's communications.
    • Threat: Intrusive and multiple data requests by Verifier
    • Mitigations:
      • refer to LINDDUN's Linking, Identifying, Data Disclosure
      • Implement time-based throttling of requests
  • Misattribution:

    • Description: Misattribution occurs when data or communications related to one individual are attributed to another.
    • Threat: Incorrect issuance or verification of credentials.
    • Mitigations:
      • refer to LINDDUN's Non-Repudiation
  • Correlation:

    • Description: Correlation is the combination of various information related to an individual or that obtains that characteristic when combined.
    • Threats: Linking multiple credentials or interactions to profile or track a Holder, or linking individuals through the same Issuer.
    • Mitigations:
      • refer to LINDDUN's Linking, Identifying, Data Disclosure
  • Identification:

    • Description: Identification is linking information to a particular individual to infer an individual's identity or to allow the inference of an individual's identity.
    • Threats: Verifiers asking more information than necessary during credential verification.
    • Mitigations:
      • refer to LINDDUN's Unawareness & Unintervenability and Identifying.
  • Secondary Use:

    • Description: Secondary use is the use of collected information about an individual without the individual's consent for a purpose different from that for which the information was collected.
    • Threat: Unauthorized use of collected information, e.g., for targeted advertising or to create profiles, and Abuse of Functionality on collected data.
    • Mitigations:
      • refer to LINDDUN's Non-Compliance.
  • Disclosure:

    • Description: Disclosure is the revelation of information about an individual that affects how others judge the individual. Disclosure can violate individuals' expectations of the confidentiality of the data they share.
    • Threat: A Verifier that asks for more data than needed.
    • Mitigations:
      • refer to LINDDUN's Data Disclosure
  • Exclusion:

    • Description: Exclusion is the failure to let individuals know about the data that others have about them and participate in its handling and use. Exclusion reduces accountability on the part of entities that maintain information about people and creates a sense of vulnerability about individuals' ability to control how information about them is collected and used.
    • Threats: Lack of transparency in using the data provided.
    • Mitigations:
      • refer to LINDDUN's Unawareness & Unintervenability.

RFC 3552 (Attacks)

  • Passive Attacks:

    • Description: In a passive attack, the attacker reads packets off the network but does not write them, which can bring Confidentiality Violations, Password Sniffing, and Offline Cryptographic Attacks.
    • Mitigations:
      • Encrypt Traffic.
      • Use Quantum-Resistant Algorithms.
      • Use Key Management practices to rotate keys.
  • Active Attacks:

    • Description: In an active attack, the attacker writes data to the network. This can bring Replay Attacks (e.g., recording a message and resending it), Message Insertion (e.g., forging a message and injecting it into the network), Message Deletion (e.g., removing a legitimate message from the network), Message Modification (e.g., copying a message, deleting the original, modifying the copy, and reinjecting it into the flow), and Man-In-The-Middle (e.g., a combination of all the previous attacks).
    • Mitigations:
      • Use a nonce to prevent replay attacks
      • Use Message Authentication Codes/Digital Signatures for message integrity and authenticity
      • Use a specific field to bind the request to a specific interaction between the Verifier and the Holder, and between the Issuer and the Holder.
      • Encrypt Traffic.
      • Use Quantum-Resistant Algorithms.
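The nonce and message-authentication mitigations above can be sketched together. This toy Verifier uses an HMAC where a real deployment would verify a digital signature over the presentation, and the function names are hypothetical:

```python
import hashlib
import hmac
import secrets

# The Verifier's record of challenges already consumed.
SEEN_NONCES: set = set()

def new_challenge() -> str:
    """The Verifier sends a fresh random nonce with each proof request."""
    return secrets.token_hex(16)

def accept_presentation(nonce: str, payload: bytes, tag: bytes, key: bytes) -> bool:
    """Reject replays (nonce already seen) and forgeries (bad MAC over nonce + payload)."""
    if nonce in SEEN_NONCES:
        return False  # replayed message
    expected = hmac.new(key, nonce.encode() + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return False  # modified, forged, or rebound message
    SEEN_NONCES.add(nonce)
    return True
```

Because the MAC covers the nonce, a captured presentation cannot be replayed under a new challenge, and because the nonce is consumed, it cannot be replayed under the old one.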

STRIDE (Threats)

  • Spoofing (Threats to Authentication):

    • Description: Pretending to be something or someone other than yourself.
    • Mitigations:
      • Implement Digital Signatures
      • During the presentation, Indicate proper messages for identifying the Verifier to limit Phishing Attacks.
      • During issuing, use proper LOAs depending on the issued credentials.
  • Tampering (Threats to Integrity):

    • Description: Modifying something on disk, network, memory, or elsewhere.
    • Mitigations:
      • Implement Digital Signatures in transit and at rest.
  • Repudiation (Threats to Non-Repudiation):

    • Description: Claiming that you didn't do something or were not responsible; the claim can be honest or false.
    • Mitigations:
      • refer to LINDDUN's Non-Repudiation
  • Information disclosure (Threat to Confidentiality and Privacy):

    • Description: Someone obtaining information they are not authorized to access.
    • Mitigations:
      • refer to LINDDUN Data Disclosure
  • Denial of service (Threats to Availability and Continuity):

    • Description: Exhausting resources needed to provide service
    • Mitigations:
      • Use a decentralized VDR for verification
  • Elevation of privilege (Threats to Authorization):

    • Description: Allowing someone to do something they are not authorized to do
    • Mitigations:
      • During issuing, use proper LOAs depending on the issued credentials.

OSSTMM (Controls)

  • Visibility:

    • Description: Police science places "opportunity" as one of the three elements that encourage theft, along with "benefit" and "diminished risk". Visibility is a means of calculating opportunity: it is each of the target's assets known to exist within the scope. Unknown assets are only in danger of being discovered, as opposed to being in danger of being targeted.
    • Analysis: In the specific case of a (request for) presentation, the visibility of a specific wallet credential or assertion should be limited as much as possible when the website requests it. The whole thing must be handled at the user-agent level or, even better, hidden from it and passed directly to the Wallet.
  • Access

    • Description: Access in OSSTMM is precisely when you allow interaction.
    • Analysis: In this case, interaction should occur only through the available API subset, via a specific request.
  • Trust:

    • Description: Trust in OSSTMM is when we leverage an existing trust relationship to interact with the asset. Normally, this involves a "relaxation" of the security controls that otherwise manage the interaction.
    • Analysis: There should be no trusted access in this specific case. However, the whole thing could be triggered when asking permission for powerful features. Consider avoiding or limiting this over time (balancing Trust with Subjugation).
  • Authentication:

    • Description: Authentication is control through the challenge of credentials based on identification and authorization.
    • Analysis: This can be considered the Trust of the issuers and the signatures (in the OSSTMM definition, Identity, Authentication, and Authorization are collapsed in the Authentication).
  • Indemnification:

    • Description: Indemnification is a control through a contract between the asset owner and the interacting party. This contract may be a visible warning as a precursor to legal action if posted rules are not followed, specific public legislative protection, or a third-party assurance provider in case of damages, like an insurance company.

    • Analysis: This is the agreement between the interacting parties, such as contracts. In this case, Notifications can describe what happens in a "secure" context (e.g., Payments API); all operations must be specifically authorized with Informed Consent. The holder must be notified if the Verifier asks for Full Disclosure, if the Issued Credentials do not support Selective Disclosure, or if it is phoning home.

      Note: this can be used as a nudge (famous in Behavioural Economics) and then can be used to educate the Verifiers, Holders, and Issuers to do the right thing.

  • Resilience:

    • Description: is a control over all interactions to protect assets in the event of corruption or failure.
    • Analysis: In this context, it means failing securely: for example, terminating the interaction in the case of cryptographic problems rather than falling back to a weaker mode.
  • Subjugation:

    • Description: It is a control that assures that interactions occur only according to defined processes. The asset owner defines how the interaction occurs, which removes the interacting party's freedom of choice and liability for loss.
    • Analysis: This is a particularly interesting aspect depending on the context. As mentioned earlier, one would need to make sure that only the minimum information is ever requested, if and when available (e.g., priority to Predicate Proofs, then to Selective Disclosure, and only as a last resort the whole credential), somewhat like cipher negotiation in SSL/TLS.
  • Continuity:

    • Description: controls all interactions to maintain interactivity with assets in the event of corruption or failure.
    • Analysis: This is about maintaining service in the event of problems; here, however, a failing interaction should simply be terminated. In general terms, though, the Holder needs a secure backup of the credentials.
  • Non-Repudiation:

    • Description: is a control that prevents the interacting party from denying its role in any interactivity.
    • Analysis: Non-repudiation raises the question of logging: where it should be done, by whom, and how; it requires a strong trust element. In general, it is useful for the Holder to keep a log of what happened, and probably for the Verifier as well (e.g., to check that access to a certain service was given only to those entitled to it by presenting a credential), but what is retained must also be weighed against privacy.
  • Confidentiality:

    • Description: is a control for assuring an asset displayed or exchanged between interacting parties cannot be known outside of those parties.
    • Analysis: One of cryptography's important aspects is how effectively it can guarantee confidentiality. One consideration is post-quantum cryptography (PQC) readiness; a countermeasure is limiting a credential's lifetime and re-issuing it with a more secure cryptosuite.
  • Privacy:

    • Description: is a control for assuring the means of how an asset is accessed, displayed, or exchanged between parties cannot be known outside of those parties.
    • Analysis: mainly unlinkability and minimization, as described before. In the context of the Digital Credentials API, this also means preventing third parties from unnecessarily learning anything about the end-user's environment (e.g., which wallets are available, their brand, and their capabilities).
  • Integrity:

    • Description: It is a control to ensure that interacting parties know when assets and processes have changed.
    • Analysis: The credential and its presentation to be verified must be cryptographically signed.
  • Alarm:

    • Description: is a control to notify that an interaction is occurring or has occurred.
    • Analysis: It is important to notify users, and obtain their approval, when an interaction happens, particularly when it scores low on the Unlinkability Scale or Minimization Scale; this can happen for several reasons. For example, the Issuer uses a technology that "calls home," or the Verifier asks for data that could instead be minimized.
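Several of the controls above (Subjugation in particular) converge on always preferring the most minimizing presentation available, much like cipher-suite negotiation in TLS. A minimal sketch of such a preference ordering (hypothetical mode names, not taken from any specification):

```python
# Hypothetical presentation-mode negotiation, analogous to TLS cipher-suite
# preference: always pick the most privacy-preserving mode both sides support.
PREFERENCE = ["predicate_proof", "selective_disclosure", "full_disclosure"]

def choose_presentation(wallet_supports: set, verifier_accepts: set):
    for mode in PREFERENCE:
        if mode in wallet_supports and mode in verifier_accepts:
            return mode
    return None   # no acceptable mode: fail securely and terminate

assert choose_presentation(
    {"predicate_proof", "full_disclosure"},
    {"selective_disclosure", "full_disclosure"},
) == "full_disclosure"
```

Note that when no common mode exists, the sketch returns `None` rather than downgrading silently, matching the "fail securely" analysis under Resilience.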

Other Threats and Harms

Considering the specific case of government credentials issued to people, it is useful to think about some use cases:

  • In some countries, at-risk workers who are taken abroad have their passports seized by those who exploit them, as a means of control. Digital Credentials can generally mitigate this: being intangible, they can be regenerated in case of theft. A further consideration is how the threat agent will act when faced with this situation and what mitigations (more process-based than merely technical) governments can implement.
  • Normally, we assume that the Holder of the credential is also the Subject to whom the credential refers. This is not necessarily the case.
    • One particularly useful and interesting aspect is the delegation of a credential (we use the term delegation loosely, as concepts such as Guardianship have a precise legal meaning). This prevents abuse and identity theft and should be modeled properly as Issuer rules on the upper layers of the architecture.
    • Also, delegation could be a crucial feature if the government generates a credential at the organizational level, which is then used by legal representatives (who are people).

What should you do about those things that can go wrong?

Countermeasures/Features:

  • Blinded Signature: a type of digital signature in which the content of the message is concealed before it is signed. With public-private key cryptography, a signature can be correlated with whoever signed it, specifically with their public key (an important feature when we want to be sure of the sender, as with GPG). Zero-knowledge cryptographic methods do not reveal the actual signature: with ZKP, we can send cryptographic proof of the signature without giving the verifier any other information about who signed, thus protecting the Holder's public key.
  • Selective Disclosure: the ability to show only a part (claim) of the credential, not all of it, or to show only possession of the credential, as needed in the context of the transaction. For example, we can show only the date of birth rather than the entire driver's license that contains it. This allows us to further minimize the data sent to the verifier.
  • Predicate Proofs and Range Proofs: the ability to respond with a boolean assertion (true/false) to a specific request; an additional step for privacy and minimization. For example, to prove being of age, the Holder does not have to show the date of birth at all: the answer is computed instead.
  • Anonymous Revocation: A credential has its life cycle: it is issued, it is used, and then it can be revoked for various reasons. A verifier must therefore be able to check whether the credential has been revoked, but without gaining the ability to correlate information about other revoked credentials. Different techniques exist.
  • Rotational Identifiers: As indicated by the Security and Privacy Questionnaire, identifiers can be used to correlate, so it is important that they are as short-lived as possible, scoped to a session and changed after use. In this context, the identifiers that can be exploited to correlate can be present at different levels.
  • No Phoning Home or back-channel communication: Software often "calls home" for several reasons, normally to collect usage or crash statistics (which could indicate a vulnerability). The problem is that this feature, often critical to software improvement and security, has privacy implications for the user, in this case the Holder. At the Credentials level, this call can be made at different times and by different agents: for example, the Verifier may contact the Issuer to check the revocation of a credential, or the Wallet may contact its vendor to collect usage statistics. We can consider these countermeasures:
    • Do not phone home or use back-channel communication: This could also be an operational necessity (several use cases require the presentation to be made in offline environments or with limited connectivity) or a choice of the Holder, who should always consent to telemetry and external connections to third parties.
    • Minimize and anonymize the data: Limit the data passed or, even better, use cryptographic privacy-preserving techniques like STAR, which implements k-anonymity for telemetry.
    • Use privacy-preserving DIDs: When resolving a DID, the method may connect to a system for resolution. If this system is under the direct or indirect control of the Issuer, it creates potential privacy issues. For example, this typically happens with did:web, as mentioned in section 2.5.2, where a GET is generated that retrieves a file, effectively exposing the requesting user agent and allowing the Issuer to collect statistics.
  • Notification/Alerts: An important aspect, particularly for interactions where the user presents a credential over the Internet, is communication with the user, which occurs at the following times:
    • Before the request for the proof: for example, when a website requests age verification, permission must first be given to the website to access the functionality. When the user decides whether or not to grant access, the URL, the type of credential requested, and the level of Minimization (to discourage the Verifier and the Holder from using Full Disclosure) must be indicated in a secure context.
    • Before sending the proof: the user selects the Wallet of their choice, the credential or set of credentials from the Wallet, and the specific claims from the credentials. The Holder must be notified and asked for confirmation and consent, particularly when the proposed type of presentation has phone-home or back-channel communication features (to discourage the Issuer and Verifier from these practices).
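To make the blinded-signature flow concrete, here is a toy sketch using textbook RSA with tiny primes. This is purely illustrative and not cryptographically secure; real deployments would use a vetted scheme such as RFC 9474 RSA blind signatures or BBS+:

```python
import math
import secrets

# Hypothetical tiny RSA key pair held by the Issuer (illustration only,
# NOT secure: real keys are thousands of bits and use padded hashing).
p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def blind(message: int) -> tuple[int, int]:
    """Holder hides the message with a random blinding factor r."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if math.gcd(r, n) == 1:
            return (message * pow(r, e, n)) % n, r

def sign_blinded(blinded: int) -> int:
    """Issuer signs without ever seeing the original message."""
    return pow(blinded, d, n)

def unblind(blind_sig: int, r: int) -> int:
    """Holder strips the blinding factor, leaving a valid signature."""
    return (blind_sig * pow(r, -1, n)) % n

def verify(message: int, signature: int) -> bool:
    return pow(signature, e, n) == message

m = 42
blinded, r = blind(m)
sig = unblind(sign_blinded(blinded), r)
assert verify(m, sig)   # valid, yet the Issuer never saw m or sig
```

The unblinding works because (m · rᵉ)ᵈ = mᵈ · r mod n, so multiplying by r⁻¹ leaves mᵈ, an ordinary signature on m.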
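The selective-disclosure mechanism can be illustrated with a salted-hash scheme in the style of SD-JWT, heavily simplified here (no JWT envelope or issuer signature; claim names and values are hypothetical):

```python
import hashlib
import json
import secrets

def make_disclosure(claim: str, value: str) -> tuple[str, str]:
    """Return (disclosure, digest): the digest goes into the signed
    credential, the disclosure stays with the Holder."""
    salt = secrets.token_hex(16)
    disclosure = json.dumps([salt, claim, value])
    return disclosure, hashlib.sha256(disclosure.encode()).hexdigest()

# Issuer side: the credential carries only the digests.
claims = {"name": "Alice", "birth_date": "1990-01-01", "address": "Wonderland 1"}
disclosures = {c: make_disclosure(c, v) for c, v in claims.items()}
signed_digests = {digest for _, digest in disclosures.values()}

# Holder side: reveal only the birth_date disclosure.
revealed = disclosures["birth_date"][0]

# Verifier side: recompute the digest and check it is covered
# by the issuer-signed set; other claims stay hidden behind salted hashes.
assert hashlib.sha256(revealed.encode()).hexdigest() in signed_digests
_salt, claim, value = json.loads(revealed)
```

The random salt per claim is what prevents a verifier from brute-forcing undisclosed claims by hashing guessed values.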
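For revocation, a verifier-side check against a bitstring status list can be sketched as follows. The encoding here is simplified (gzip plus padded base64url), whereas the VC Bitstring Status List specification uses a multibase-prefixed, unpadded encoding:

```python
import base64
import gzip

def is_revoked(encoded_list: str, status_index: int) -> bool:
    """Test a single bit of the shared status list; the verifier learns
    nothing specific about any other credential in the list."""
    bits = gzip.decompress(base64.urlsafe_b64decode(encoded_list))
    byte_pos, bit_pos = divmod(status_index, 8)
    return bool((bits[byte_pos] >> (7 - bit_pos)) & 1)

# Issuer side: build a toy list with 16,000 slots and revoke index 3.
bits = bytearray(2000)
bits[3 // 8] |= 1 << (7 - 3 % 8)
encoded = base64.urlsafe_b64encode(gzip.compress(bytes(bits))).decode()

assert is_revoked(encoded, 3)
assert not is_revoked(encoded, 4)
```

Because every verifier fetches the same large list, the issuer cannot tell from the fetch alone which credential is being checked, which is the herd-privacy property discussed in the spec's privacy considerations.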
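The rotational-identifier idea can be sketched as a wallet deriving a fresh identifier per presentation. This is a hypothetical construction for illustration, not taken from any profile:

```python
import hashlib
import hmac
import secrets

wallet_secret = secrets.token_bytes(32)   # never leaves the wallet

def session_identifier(verifier_origin: str) -> str:
    """Derive a one-time identifier: a fresh nonce per presentation makes
    two sessions unlinkable, even toward the same verifier."""
    nonce = secrets.token_bytes(16)
    mac = hmac.new(wallet_secret, verifier_origin.encode() + nonce,
                   hashlib.sha256)
    return mac.hexdigest()

first = session_identifier("https://verifier.example")
second = session_identifier("https://verifier.example")
assert first != second   # no stable value for the verifier to correlate on
```

The same pattern (stable secret plus per-session randomness) applies at the other levels mentioned above where correlatable identifiers can appear.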
@OR13
Contributor

OR13 commented May 14, 2024

The strategy link issue is 404.

There are solutions for dynamic state, that rely on getting a fresh status from the issuer at some interval, but where the holder requests the status, not the verifier.

This kind of approach is especially useful for enumerated status values that are not boolean, since you don't need to do bit logic on the status list, and storing enumerations in cryptographic accumulators can be complicated.

@simoneonofri
Author

@OR13 Thank you for the 404 check; it was fixed. And thanks also for the status attestations.

@simoneonofri simoneonofri changed the title Privacy properties Threat Modeling May 19, 2024
@simoneonofri
Author

reasoning about the issue, I began to structure a Threat Model, to which I would then apply privacy (or other) techniques

@OR13
Contributor

OR13 commented May 20, 2024

Nick also mentioned threat model here: w3c/strategy#458

I'm generally supportive of revising the threat model, as the API aligns to support protocols and credential formats.

Especially because some threats will be out of scope for the API, but maybe in scope for a protocol or format.

It can be helpful to identify what the API can change, and where the API's security or privacy considerations are subject to constraints from the choice of supporting a protocol or format.

@simoneonofri
Author

thanks @OR13 I've just updated it (and maybe I need to move it from an issue to something different).

Clarified the scope a bit; I agree it is broader than the API, but for the model I think we also need to analyze the full flow/architecture, and then part of the model can be applied to the API.

@simoneonofri
Author

@weizman do you have some insight, as you have experience with Wallets-in-the-Browsers (even if probably more high-level than your article), and also on user experience?

@simoneonofri
Author

simoneonofri commented May 23, 2024

by @peppelinux on linkedin

  1. Foundational digital identity systems streamline and secure identification across platforms, enhancing security, reducing fraud, and improving service access, which cuts costs beyond just money.

  2. Driven by urgent modernization and security needs, these programs use strict data protection, encryption, and audits, adhering to international standards. Despite their robustness, challenges may arise. Open, inclusive processes are vital for evolving these technologies, allowing wide, representative contributions and enhancing global understanding.

  3. Mandatory enrollment ensures universal coverage and effectiveness, facilitating comprehensive integration across services and platforms for equal access to essential services.

Inclusivity and transparency are crucial; time should not be an enemy, nor haste an ally. Empiricism and field experience are essential for making informed decisions.

@simoneonofri
Author

simoneonofri commented May 23, 2024

by Stephan Engberg on linkedin

It depends on how you define Digital Identity.

If it is vulnerable to loss, theft, renting, man-in-the-middle, tracking, lock-in, etc., or simply weak, we are very likely talking about identity as an attack on citizens and society rather than security.

If it is empowering, i.e., mainly non-linkable, digital identity can be the critical enabler. The key question is whether citizens can share data WITHOUT LOSING CONTROL.

If not, we are talking about various forms of means that always feed surveillance, e.g., payment cards or smartphones that are invasive by design.

In this, it should be noted that Verifiable Credentials are not an enabler in themselves, even though they can be an element. You need to determine linkability on the entire transaction, not just the single attribute credential.

And politely, World Bank has for a long time appeared to be working on behalf of some shady commercial agenda where human rights are reduced to the right to be commoditized, i.e. the opposite - e.g. biometric identity is the opposite of freedom, it is designing for oppression.

I made this roadmap in 2007 - getting close to being state-of-the-art. But notice how e.g. EU 2.0 ARF is far down to the right.

Image: https://media.licdn.com/dms/image/D4D2CAQElLaerjehW7w/comment-image-shrink_8192_480/0/1716361039335?e=1717052400&v=beta&t=01c6j5dk0i2CcNuNz2F034nc9bLdjOT3IFkZ4V0O9d0

@simoneonofri
Author

I updated the model by improving the scope, architectural, and flow analysis parts, then wrote the various prompt lists to brainstorm on.

@simoneonofri simoneonofri changed the title Threat Modeling Threat Modeling for Decentralized Credentials May 28, 2024
@simoneonofri simoneonofri changed the title Threat Modeling for Decentralized Credentials Threat Modeling for Decentralized Identities May 28, 2024
@TomCJones

this is more of a threat meta-model, as the details about the vulnerabilities, costs, mitigations, and justifications are missing. That is ok as a start, although mixing models makes it harder to come up with a single list of vulnerabilities for the analysis. Or is the point to create the model and then every implementer would build the analysis?

One item missing is the verifier proof of identity and purpose. As one example, you don't say that the holder must trust the verifier.

There is an implicit assumption here, that the holder is the subject with a slight nod to delegation in "One particularly useful and interesting aspect is that of delegation of a credential (I use the term delegation loosely, as questionies such as Guardianship have a precise legal meaning), which prevents much abuse and identity theft and should be modeled properly as Issuer rules." If you mean this you need much more detail and the separation of the holder from the subject.

@simoneonofri
Author

simoneonofri commented Jun 3, 2024

Hi @TomCJones

this is more of a threat meta-model as the details about the vulnerabilities, costs, mitigations, and justifications are missing. That is ok as a start, although mixing models makes it harder to come up with a single lists of vulnerabilities for the analysis. Or is the point to create the model and then every implementer would build the analysis?

Yes, it is still a meta-model to be codified in the clear structure "threat > mitigation > residual threat", as in a common final form.

There are several "profiles" and elements that change the setup a lot (e.g., one can choose formats that do not support selective disclosure, whereas formats that do support it should be preferred). Also, because I didn't find that reasoning elsewhere, it seemed like a good place to start.

The mix between Security and Privacy in this case comes from the fact that I started doing the analysis with the OSSTMM. Still, yes, in a later round (the fateful fourth step) the various analyses have to be harmonized.

One item missing is the verifier proof of identity and purpose. As one example, you don't say that the holder must trust the verifier.

Great finding, I will update it. Thank you!

There is an implicit assumption here, that the holder is the subject with a slight nod to delegation in "One particularly useful and interesting aspect is that of delegation of a credential (I use the term delegation loosely, as questionies such as Guardianship have a precise legal meaning), which prevents much abuse and identity theft and should be modeled properly as Issuer rules." If you mean this you need much more detail and the separation of the holder from the subject.

Yes, you are right. I added that sentence as my memento since I thought the concept was interesting. Lately, at least in Europe, we talk a lot about government-issued credentials for people, but there are many use cases where the subject is not the holder (animals, supply chain, etc.).

Thanks again for your comment!

@csuwildcat

In the section that pertains to the Status List revocation approach, it claims the approach discloses personal information, but that doesn't seem accurate. Assuming the revocation document only contains flipped bits at positions that can only be tied to a given credential if you had been privy to disclosure of their association, what personal information does the author of that passage think is contained in this revocation document?

@simoneonofri
Author

hi @csuwildcat , thank you for the feedback.

I was thinking more of correlation issues, as specified here:

https://www.w3.org/TR/vc-bitstring-status-list/#privacy-considerations

For sure, I am going to explain the concept better.

@simoneonofri
Author

We've also started working with Fondazione Bruno Kessler, as they have a Threat Model on the Wallet/Protocol side: https://drive.google.com/drive/folders/1mgwhZ0jTAeGIE8Ewf3kK34dLjPwOTM5L

@bvandersloot-mozilla

One thing I think is missing from the "threat>mitigation>residual threat" form that this is taking is a discussion of the actors/assets/invariants that are useful to describe the assumptions made in constructing the model. That sort of discussion can help us get on common ground, which could be really nice for defining exactly the security/privacy relationship of the wallet and browser. I'm still thinking about how to approach this, but this may be an additional section worth adding.

@simoneonofri
Author

@bvandersloot-mozilla I agree with you. A refinement is needed, "numbering" the threats and the mitigations so that we can understand the residual part (and understand what to do).

@simoneonofri
Author

By @slistrom and @peppelinux

A classification of the Wallet types, considering that each wallet, at a specific level, can have a different Threat Model.

Wallet Instance Types

There are many ways to technically implement Wallet Instances to manage Digital Credentials. There are typically two types of Wallet End-Users: one is a natural person and the other is an Organisational Entity, such as a legal person. These two types of users may have different usage and functional requirements.

Below is a non-exhaustive list of the different Wallet Instance types.

Mobile Wallet Native Application:

Also known simply as a Mobile Wallet, this is an application that runs natively on a Personal Device under the sole control of an End-User and is provided through a platform-vendor-specific app store on behalf of the Wallet Solution. In some cases the End-User, as a natural person, uses the Mobile Wallet representing a legal person.
Web Wallet Native Application:

Also known as a Cloud Wallet or simply a Web Wallet, this is a Wallet that uses native web technologies for its components, such as UI components. Cloud Wallets are typically suited for Organisational Entities that require automated Digital Credential operations (request, issuance, storage, presentation, revocation) in unsupervised flows, therefore without any human control. Web Wallets are divided into two additional subtypes: Custodial Web Wallets and Non-Custodial Web Wallets.

Custodial Web Wallet
Cloud Wallets that depend on a cloud infrastructure, not necessarily hosted by the Wallet Provider, are typically classified as Custodial Web Wallets; in this case, the cryptographic keys used and the Digital Credentials are stored in the cloud infrastructure.

Non-Custodial Web Wallet
A Web Wallet where the cryptographic keys are stored and managed on media in the possession of the End-User, and the Digital Credentials can only be used by the End-User, e.g. using a FIDO-enabled hardware security token, no matter whether the Credentials are stored locally on a Personal Device or in cloud storage.
Progressive Web Application Wallet (PWAW)

A PWAW is a web application that looks like a native app. It can be installed on a Personal Device, not necessarily through the operating-system-specific app store. The advantage of a PWAW is that it gives the End-User the same experience as a Native Mobile Wallet Application while also offering the benefits of a web application. A PWAW can be Custodial or Non-Custodial.
