
Trustchain design principles

Tim Hobson edited this page Jun 22, 2023 · 6 revisions

Trustchain Phase 0: Design Principles [DRAFT]

Motivation

Digital identity systems that have been proposed or are currently in use often have problematic characteristics. Among other issues, they rely heavily on trusted authorities and trusted third parties. Yet authorities are not necessarily trustworthy, and even if one is trustworthy at a given point in time, there is no guarantee that it will remain so in the future. Moreover, most systems are based on centralised designs, in which one or a small number of influential organisations hold control over the system (and hence also the sensitive data it holds), making it susceptible to failures and attacks. The data aggregated by such systems can also easily be misused to profile individuals and to enable mass surveillance.

These security concerns highlight the weaknesses inherent in such systems and their stark consequences, both for individuals (e.g. exclusion from services, or leaks of sensitive information such as biometrics that compromise their privacy and safety) and for organisations that depend on them to operate securely and function properly.

Before a trustworthy identity system can be designed, let alone implemented, we first need to establish design principles that put the user and their rights at the centre of the system architecture. These principles aim to inform and guide design choices, and to provide a framework within which to discuss and scrutinise any proposed solution. This document briefly outlines the proposed design principles.

Principles

Mainly based on Goodell & Aste.

  1. Privacy:
    • Privacy is a human right and must be at the centre of any proposed system architecture.
    • The individual must be in control of managing their identity and personal information in a multitude of contexts. This includes the ability to create multiple unrelated identities.
  2. Data minimisation:
    • Favour sharing verifiable attributes over individual identification.
    • No collection of data beyond what is minimally required for operation.
    • The credential provider must not learn which services are used.
  3. Trust:
    • The system should minimise dependence on artificial trust relationships.
    • Minimise the required trust in third parties.
    • Every link in the trust chain should be a recognisable entity.
    • No non-consensual trust relationships should be imposed.
    • Users at all levels should be able to independently and cryptographically verify the information on which trust relationships are based.
    • Where rules and constraints exist they should be enforced cryptographically.
  4. Institutional identity should be unique and public whereas individual identity need not be either.
    • Where strong non-transferability (of credentials) is required, it should be imposed interactively and at the edge of the network.
    • Credentials for many everyday services (e.g. library, public transport) do not require strong non-transferability, and hence it should not be imposed on them.
  5. Transparency
    • Open source code
    • Open standards
    • No hypothetical or vapourware dependencies.
    • Users must know that identifiers do not contain personally identifying information.
    • No exceptional access for authorities (no "backdoors").
  6. Accessibility without coercion
    • "Protocols, not platforms" (Report, add link)
    • Open access for all
    • Consensual trust relationships should be realisable without needing permission from an authority
    • Identity services should be opt-in
    • Opting-out may reduce convenience but should not be penalised
  7. Prevent monopolies.
    • Avoid proprietary code and platforms
    • No gatekeepers
  8. Isolate participants of the system (e.g. no sharing of information between issuer of credentials and service provider).
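
The idea that "every link in the trust chain should be a recognisable entity" and that users should be able to "independently and cryptographically verify" trust relationships can be sketched with a minimal hash-linked chain of records. This is an illustrative toy only, not Trustchain's actual design; the record fields and entity names are hypothetical.

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 hash of a record (keys sorted for stability)."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def make_link(issuer: str, payload: str, prev_hash: str) -> dict:
    """One link in the chain: names a recognisable entity and commits to its predecessor."""
    return {"issuer": issuer, "payload": payload, "prev": prev_hash}

def verify_chain(chain: list[dict]) -> bool:
    """Anyone can re-walk the chain and check every hash link independently,
    without asking any authority for permission."""
    for prev, link in zip(chain, chain[1:]):
        if link["prev"] != record_hash(prev):
            return False
    return True

# Hypothetical three-link chain.
root = make_link("root-entity", "genesis", prev_hash="")
mid = make_link("intermediate-entity", "attestation", prev_hash=record_hash(root))
leaf = make_link("end-entity", "credential", prev_hash=record_hash(mid))
chain = [root, mid, leaf]
assert verify_chain(chain)

# Tampering with any intermediate record breaks the downstream hash link.
tampered = [root, dict(mid, payload="forged"), leaf]
assert not verify_chain(tampered)
```

The point of the sketch is that verification requires only public data and a hash function, so the trust relationship is checkable by anyone, consistent with the principle of minimising required trust in third parties.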

Maxims:

  • Digital signatures do not create trust; they transport it. Proof of Work creates trust.
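
The maxim can be illustrated with a toy Proof-of-Work loop (hypothetical parameters, not Trustchain code): trust is created by the cost of the search for a valid nonce, while anyone can verify the result with a single cheap hash.

```python
import hashlib
from itertools import count

DIFFICULTY = 4  # required number of leading hex zeros (toy setting)

def pow_hash(data: str, nonce: int) -> str:
    return hashlib.sha256(f"{data}:{nonce}".encode()).hexdigest()

def mine(data: str) -> int:
    """Costly search: the trust 'created' here is the work expended."""
    for nonce in count():
        if pow_hash(data, nonce).startswith("0" * DIFFICULTY):
            return nonce

def verify(data: str, nonce: int) -> bool:
    """Cheap, independent check: anyone can confirm the work was done."""
    return pow_hash(data, nonce).startswith("0" * DIFFICULTY)

nonce = mine("example-block")
assert verify("example-block", nonce)
# Changing the data invalidates the proof: the same nonce will almost
# certainly fail verification for any other input.
```

By contrast, a digital signature only transfers whatever trust the verifier already places in the signing key; it adds no cost or evidence of its own.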