
3 Building trustworthy systems using coded rules


When you build a system based on coded rules (especially a system that makes decisions), you should be aiming to build a system that models the characteristics of an ideal fair human-mediated process.

Key principles

Your system should be:

  • Transparent - your rules should be visible.
  • Traceable - the steps in the decision-making process should be explainable/auditable.
  • Accountable - you should stand by the decision made by the system as a valid decision of your organisation.
  • Appealable - the subject of the decision should be able to seek a review of the decision (for example, regarding an error of fact, or if they believe a rule has been incorrectly applied).

A system that doesn’t meet these criteria is unlikely to be considered trustworthy; if that is the case, consider redesigning.

Transparent

If your rules are coded, your code should be visible

When we write laws, those laws are published and publicly accessible. The same should be true of the rules that your system runs on. The code should be inspectable and testable, so the community can determine for themselves whether the rules and the code are correct.

This is one reason why open systems like OpenFisca are popular for Rules as Code projects.
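To make this concrete, here is a minimal sketch (in Python, using an entirely fictional rule and threshold - not drawn from OpenFisca or any real legislation) of what an inspectable, testable coded rule can look like when it is published alongside the provision it encodes:

```python
# A minimal, hypothetical coded rule: eligibility for a fictional seniors'
# concession. Publishing code like this alongside the legislation it encodes
# lets anyone read, run, and test the rule for themselves.

AGE_THRESHOLD = 65  # hypothetical threshold, cited to the fictional provision it encodes


def eligible_for_concession(age: int, is_resident: bool) -> bool:
    """Return True if the person meets the (fictional) s 5(1) criteria."""
    return is_resident and age >= AGE_THRESHOLD


# Because the rule is plain, published code, anyone can test it:
assert eligible_for_concession(age=70, is_resident=True) is True
assert eligible_for_concession(age=64, is_resident=True) is False
assert eligible_for_concession(age=70, is_resident=False) is False
```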

Failure to make the rules transparent - specifically, failure to make the coded rules accessible independently of the system that implements them - can contribute to adverse outcomes if there is a flaw in the coding of the rules or in the system. For example, in 2021 it was reported that a 'software bug' in inmate management software used by the Arizona Department of Corrections led to hundreds of inmates being incarcerated longer than their sentences required, because the software could not interpret a 2019 amendment to sentencing laws. As the coded rules were not publicly viewable, the issue only came to light as a result of whistleblower action.

If you automate decision-making, that should be clear

If your system doesn't involve human input, you should also be open about that. Ideally, you should declare that your system or your decision is automated, and provide a way for the user to involve a human (such as a human-mediated appeals process; see below under 'Appealable').

Failure to be transparent will compromise public trust in the system.

In some jurisdictions, this is a legislative requirement. For example, under the EU General Data Protection Regulation (GDPR), EU data subjects must be informed where they will be subject to automated decision-making (Art. 22).
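As an illustration only - the field names and structure below are assumptions, not requirements of the GDPR or of any particular platform - a decision notice can carry the automation declaration, and a route to a human, alongside the decision itself:

```python
# Hypothetical decision-notice metadata: the notice itself declares that the
# decision was automated and tells the recipient how to reach a human.
from dataclasses import dataclass


@dataclass
class DecisionNotice:
    decision: str
    automated: bool              # declared up front, not buried in fine print
    human_review_contact: str    # how the recipient can get a person into the loop


notice = DecisionNotice(
    decision="Application refused",
    automated=True,
    human_review_contact="reviews@agency.example",  # hypothetical contact point
)

print(f"{notice.decision} (automated decision: {notice.automated}). "
      f"To request human review, contact {notice.human_review_contact}.")
```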

Traceable

'Computer says no' is something of a cliché when dealing with decisions made by systems. It's a frustrating experience for the user to be denied and not understand why. It is also completely avoidable.

It is a foundational principle of administrative decision-making that, when a decision is made, the decision maker must give reasons for it. A decision is a decision, whether it is made by a person or delegated to a machine. So when we create decision-making systems, we should adhere to that same principle of traceability.

Using systems that consume coded rules gives us the opportunity to easily and accurately detail how and why decisions have been made. We can design systems to deliver:

  • the decision,
  • the rules applying to that decision, and
  • the evidence to which the rules were applied.

This better enables the recipient or subject of the decision to assure themselves that the right rules and evidence were applied and, therefore, that the decision is correct.

Further, if the recipient considers that the wrong rules were applied, or that the evidence was incorrect or incomplete, then a traceable decision better enables them to appeal the decision and, ideally, makes it easier to determine the validity of the appeal.
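As a rough sketch of what such a traceable decision record might look like (the structure and field names here are assumptions for illustration, not a standard), the decision, the rules applied and the evidence can travel together:

```python
# Hypothetical traceable decision record: the outcome, the rules that were
# applied (with citations), and the evidence each rule was applied to are
# bundled together, so the recipient can check the reasoning end to end.
from dataclasses import dataclass, field


@dataclass
class RuleApplication:
    citation: str      # e.g. the section of legislation the coded rule encodes
    rule_text: str     # the rule as the recipient would read it
    evidence: dict     # the facts the rule was applied to
    outcome: bool      # how this rule resolved on that evidence


@dataclass
class DecisionRecord:
    decision: str
    applied_rules: list = field(default_factory=list)

    def explain(self) -> str:
        """Render the decision, rules and evidence as a plain-language report."""
        lines = [f"Decision: {self.decision}"]
        for r in self.applied_rules:
            lines.append(f"- {r.citation}: {r.rule_text}")
            lines.append(f"  evidence: {r.evidence} -> {'met' if r.outcome else 'not met'}")
        return "\n".join(lines)


record = DecisionRecord(decision="Concession granted")
record.applied_rules.append(RuleApplication(
    citation="s 5(1)(a) (fictional)",
    rule_text="The applicant must be 65 years or older.",
    evidence={"age": 70},
    outcome=True,
))
print(record.explain())
```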

Examples

Australia: The Australasian Legal Information Institute (AustLII) has built a rule-based legal inferencing platform called DataLex, along with a proof-of-concept chatbot that interprets s 44 of Australia’s Constitution and can answer questions on whether a person is eligible to stand for Parliament. The chatbot delivers a report that sets out its conclusion, together with the relevant rules and the evidence applied.

Accountable

A decision is a decision. Ideally, a government agency that makes a decision should be accountable for that decision, whether it was made by a person or by an agency system. Generally speaking, users do not care about the internal workings of your agency, but they want your systems to work well and they want you to be true to your word.

Legally, courts are still grappling with the effects of machine-mediated decisions.

In October 2018, the Australian Federal Court ruled that a letter from the Australian Taxation Office (ATO) advising a taxpayer of their tax debt could not be relied on, as it was automatically generated by a system and there was no 'mental' element to the decision, even though the recipient had no way of knowing that the decision was automated and the letter was system-generated. Leave to appeal to the High Court of Australia was denied, but the ATO has since taken steps to clarify the language of their communications.

Appealable

Systems are made by people, and anything made by people will have flaws. It is inevitable that your system will make an incorrect decision or create an incorrect result at some point, whether due to a coding error, poor data, or user input error.

That's why it's critical that there be a clear appeals process - a way to get a human into the decision-making loop, someone the user can communicate with and to whom they can explain why they believe the decision is wrong.

If you have built your system to deliver transparent, traceable decisions, then the appeal should be easy to resolve.
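Continuing the hypothetical decision-record sketch above (again, the fields and identifiers are invented for illustration), an appeal can point back to the specific rule application or piece of evidence the recipient disputes, which is what makes the review easy to scope and resolve:

```python
# Hypothetical appeal request: it references the original decision record and
# pins down exactly which rule application or evidence item is disputed, so a
# human reviewer knows where to look.
from dataclasses import dataclass


@dataclass
class AppealRequest:
    decision_id: str        # identifier of the decision record being appealed
    disputed_citation: str  # which rule application the recipient disputes
    grounds: str            # the recipient's explanation, in their own words
    assigned_reviewer: str  # a named human, not another algorithm


appeal = AppealRequest(
    decision_id="DEC-2021-0042",                      # hypothetical identifier
    disputed_citation="s 5(1)(a) (fictional)",
    grounds="My recorded age is wrong; I am 67, not 63.",
    assigned_reviewer="duty.review.officer@agency.example",
)
print(f"Appeal of {appeal.decision_id} routed to {appeal.assigned_reviewer}.")
```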

Without an effective, accessible appeals process, some users will receive incorrect outcomes and have a negative experience; this poses a risk to overall trust in your system. No one wants to be at the mercy of an uncaring machine, let alone one that is wrong.

Other Issues

Closely related to these concerns is the challenge of fairness.

Fairness

See also:

MIT Technology Review, Can you make AI fairer than a judge? Play our courtroom algorithm game link

Inherent Trade-Offs in the Fair Determination of Risk Scores, Jon Kleinberg, Sendhil Mullainathan, Manish Raghavan link

Regulation

Some jurisdictions are now moving to regulate the use of automated decision-making.

  • In the EU, Art. 22 of the EU General Data Protection Regulation (GDPR) prohibits ‘wholly automated decisions’ (i.e., where there is no 'human in the loop'), except in listed circumstances (such as where the individual explicitly consents or the process is authorised by law). The GDPR requires that measures be implemented to safeguard the rights, freedoms and legitimate interests of the individual. This includes but is not limited to the right to appeal an automated decision to a human arbiter.
  • New Zealand has implemented an algorithm charter which includes similar principles to those set out above, plus additional governance measures such as formalised risk assessments.
  • The Government of Canada has established a set of Digital Standards, to guide the development of digital government products and services. Standard 9 is Design ethical services - this requires that systems and services be designed to ensure that everyone receives fair treatment, and that there is compliance with ethical guidelines in the design and use of systems which automate decision-making. The Standard includes a mandatory algorithmic impact assessment for any automated decision system.