formatting to update
AoifeHughes committed Aug 3, 2023
1 parent 2a72482 commit 86dd1d9
Showing 5 changed files with 220 additions and 113 deletions.
44 changes: 31 additions & 13 deletions site/docs/guidance/assurance-ecosystem.md
@@ -8,22 +8,40 @@ tags:

# Understanding the Assurance Ecosystem

In 2021, the UK Government's Centre for Data Ethics and Innovation released their [AI Assurance Roadmap](https://www.gov.uk/government/publications/the-roadmap-to-an-effective-ai-assurance-ecosystem).
This publication set out an agenda and a series of recommendations for how to build an effective AI Assurance ecosystem, including specifying key roles and responsibilities.
The following diagram shows a key set of actors identified in this report:

![This diagram depicts the AI assurance ecosystem, illustrating interactions between AI supply chain participants, AI Assurance Service Providers, Independent Researchers, and Supporting Structures like regulators and standards bodies.](../assets/images/actors.png)

As the diagram depicts, certain actors have a direct influence on the (simplified) supply chain for AI systems.
For instance, organisations may have dedicated internal teams who are responsible for quality assurance of products or services (e.g. compliance with safety standards, adherence to data privacy and protection legislation).
However, there is a growing marketplace of independent assurance providers who offer consultancy or services to other companies or organisations.[^market]

[^market]: For example, [Credo AI](https://www.credo.ai/) offer a paid-for service that comprises an interactive dashboard and set of tools to help companies comply with existing and emerging policies and regulation. Other organisations, such as the [Ada Lovelace Institute](https://www.adalovelaceinstitute.org/project/algorithmic-impact-assessment-healthcare/), have developed open-source tools for teams to implement within their own projects.

This is a helpful starting point for gaining some purchase on the complex set of interacting roles and responsibilities that collectively make up what is, admittedly, a hard-to-delineate assurance ecosystem.
Rather than trying to build a map of this ecosystem, we can instead focus on some of the typical roles and responsibilities that the different actors have.

<!-- add a table of actors and their roles and responsibilities (see https://cdeiuk.github.io/ai-assurance-guide/needs-and-responsibilities#ai-supply-chain)

To what extent does the TEA methodology and platform align with the CDEI's 5 elements of assurance: https://cdeiuk.github.io/ai-assurance-guide/five-elements
-->
130 changes: 83 additions & 47 deletions site/docs/guidance/components.md
@@ -11,12 +11,19 @@ tags:

In this section we will look at the core elements of an assurance case and how they relate to one another.

There are many ways to construct an assurance case and several standards exist to help users adopt shared practices.
For instance, the [Goal Structuring Notation](https://scsc.uk/r141C:1?t=1) has thorough and comprehensive documentation for building assurance cases that align with their community standard.

Trustworthy and Ethical Assurance is inspired by GSN's form of argument-based assurance, but aims to simplify the process of developing, communicating, and evaluating an argument and the evidence that justifies it, making the process more open and inclusive for a broader community of stakeholders and users.
That is, we prioritise *accessibility* and *simplicity*.
The trade-off is that assurance cases developed using our platform are *less expressive* than others, but (hopefully) easier to understand.

!!! warning "A Note on Terminology"

@@ -27,7 +34,7 @@
All assurance cases contain the following core elements:
```mermaid
flowchart TD
G1[Goal Claim]
C1([Context])
S1[\Strategy\]
P1[Property Claim];
E1[(Evidence)];
```
@@ -46,44 +53,57 @@
There are two types of claims:

#### Goal Claims

A *goal claim* serves to direct the process of developing an assurance case towards some value or principle that is desirable or significant.
For instance, it may be important to communicate how a product is 'Sustainable', how an algorithmic decision-making system is 'Explainable', or how the deployment of some service is 'Fair'.
The type of goal chosen will determine the set of lower-level property claims and evidence that are *relevant* and *necessary* for the overall assurance case.
As such, a goal claim should be the first element to be established, although, like all elements, it can be iteratively revised and refined as the assurance process develops.

Because a goal claim will be *high-level*, it will not have the necessary specificity to link directly to evidence.
Consider the following example.

```mermaid
graph TD
G1["`**Goal**
The outputs of our system are *explainable*.`"];
```

Here, *explainable* is a broad goal that is insufficiently operationalised or specified.
Resolving this requires the use of additional elements.

!!! info "Multiple Goals and Modular Arguments"

In this section, we only discuss arguments with a single goal. However, nested (or, modular) assurance cases can also be developed where multiple goal claims serve as sub-claims into a broader argument that subsumes the lower-level arguments.

#### Property Claim(s)

Goal claims need to be succinct and accessible.
However, this comes at the cost of *specificity*.
For instance, what does it mean to deploy a service in a fair manner, or to develop an explainable system?
Property claims help to answer such questions.

In one respect, property claims can be treated as lower-level goals[^gsn].
That is, when formulated they represent aspirations that may need to be established and justified through linking to evidence.

[^gsn]: In the GSN standard, all claims are treated as goals and no distinction is made between goal claims and property claims. Our methodology maintains consistency with this standard, which is why property claims have the same type as goal claims, but adds an additional descriptive layer to better represent the ethical process of deliberation and reflection (see section on [Operationalising Principles](operationalising-principles.md))

An assurance case may have only one goal claim[^modularity], but multiple property claims.
Collectively, the property claims set out the central argument, detailing properties of a project or system that help justify why the top-level goal has been sufficiently established.
That is, they are the additional premises that support the conclusion.
Consider the following example.
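As an illustrative sketch (the specific claims shown here are hypothetical, and links are drawn parent-to-child), a single goal supported by several property claims could look as follows:

```mermaid
graph TD
    G1["`**Goal**
    The outputs of our system are *explainable*.`"];
    P1["`**Property Claim**
    Model outputs are accompanied by feature importance scores.`"];
    P2["`**Property Claim**
    End users receive guidance on how to read the explanations.`"];
    G1 --> P1;
    G1 --> P2;
```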

```mermaid
graph TD
```

@@ -103,12 +123,16 @@

### Evidence

Evidence is what grounds an assurance case.
Whereas goal claims orient a case and property claims help specify and establish an argument, evidence is what provides the basis for trusting the validity of the case as a whole.

The types of evidence that need to be communicated will depend on the claims being put forward.
For instance, if a claim is made about users' attitudes towards some technology or system, then findings from a user workshop may be needed.
Alternatively, if the claim is about a model's performance exceeding some threshold, then evidence about the test will be needed (e.g. benchmarking scores and methodology).
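A minimal sketch of how evidence grounds a property claim (the claim and evidence contents are hypothetical):

```mermaid
graph TD
    P1["`**Property Claim**
    The model's F1 score exceeds 0.9 on the held-out test set.`"];
    E1[("`**Evidence**
    Benchmarking report and methodology.`")];
    P1 --> E1;
```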

```mermaid
graph TD
```

@@ -119,15 +143,15 @@
!!! info "Evidential Standards"

Similar to a legal case, where evidence needs to be admissible, relevant, and reliable, there are also standards for which types of evidence are appropriate in a given context.

In some cases, technical standards may exist that can help bolster the trustworthiness of an argument, by allowing a project team to show how their actions adhere to standards set by an external community.

In other cases, consensus may only emerge through the communication and evaluation of the evidence itself.

## Context

There are various types of context statements that can be added to the core elements of an assurance case.
For instance, consider the following example:
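As an illustrative sketch (the context statement is hypothetical), a context element attached to a goal claim could look as follows:

```mermaid
graph RL
    C1(["`**Context**
    The system is deployed only within a constrained information environment.`"]);
    G1["`**Goal**
    The deployment of our service is *fair*.`"];
    C1 --> G1;
```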

```mermaid
graph RL
```

@@ -140,8 +164,9 @@
There are two types of links that are used in Trustworthy and Ethical Assurance.

### Support Links

The primary link used in Trustworthy and Ethical assurance cases is a *support link*.
These links represent a uni-directional relationship between two elements, such that the parent element is *supported by* the child element.

<!-- add more information -->

@@ -157,12 +182,15 @@
They are rendered as follows:
- Goal Claim to Property Claim
- Strategy to Property Claim
- Property Claim to Property Claim
- Property Claim to Evidence
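A minimal sketch chaining a subset of the support links listed above (element contents omitted):

```mermaid
graph TD
    G1[Goal Claim] --> P1[Property Claim];
    P1 --> P2[Property Claim];
    P2 --> E1[(Evidence)];
```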

### Context Links

Context links provide additional information for relevant elements, which has a constraining effect on the scope of the claim being made.
For instance, goal claims made about a system may be constrained by a specific use context (e.g. an algorithm may operate fairly in the context of a highly constrained information environment where input data follow a particular structure).

They are rendered as follows:

@@ -185,11 +213,19 @@
Some examples of contextual information that could be added include:

#### Evidential Claims

If the rationale for selecting some evidence to support a specific property claim (or set of claims) is not clear, an intermediate 'evidential claim' may be required.

For instance, the relevance of a partial dependency plot as supporting evidence for how a machine learning model is interpretable may be clear to some stakeholders, but (a) this depends on prior expertise, and (b) it may not address further questions, such as why individual feature importance is sufficient for establishing interpretability.

An evidential claim would help provide further clarity, by making explicit any assumptions made by the project team (e.g. interpretations of the system's behaviour will only be undertaken by trained experts).
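An illustrative sketch (the claims are hypothetical) of an intermediate evidential claim sitting between a property claim and its evidence:

```mermaid
graph TD
    P1["`**Property Claim**
    The model is *interpretable*.`"];
    EC1["`**Evidential Claim**
    Interpretations of the system's behaviour are only undertaken by trained experts.`"];
    E1[("`**Evidence**
    Partial dependency plots.`")];
    P1 --> EC1;
    EC1 --> E1;
```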

```mermaid
graph TD
```
