Futurice Principles for Ethical AI


Our Futurice Principles for Ethical AI are also available as PDF & RTF.

The Futurice Principles for Ethical AI

Ethics is an integral part of our way of working. We will always uphold our responsibility to identify and raise ethical implications and concerns related to our work, and to help our clients deal with ethical questions related to autonomous systems in a responsible way.

Autonomous systems do what they do as a result of countless technological, economic, ethical and political decisions by human beings. As designers and builders of autonomous systems, we must never relinquish our responsibility for the greater good in the pursuit of business, governmental or political outcomes by us or our clients. We remain committed to retaining human control and the greatest possible degree of transparency in the systems we build.

The following ethical principles are meant to support and guide our decision-making when creating autonomous systems and dealing with data and algorithms.

1. Purpose and Impact

Focus on the purpose and impact.

  • Respect and be mindful of the impact on people affected by the system.
  • Ensure that the systems we design and build have a clear purpose and can be trusted to behave as expected.
  • Consider the system’s impact beyond its immediate users, including any positive and negative consequences it might have.

2. Transparency & Trust

Prioritize transparency in the systems we design and build, and strive to increase trust in all of them.

  • Aim for maximum transparency and openness in the systems whenever possible.
  • Be mindful of how the systems impact people’s behavior.
  • When justifying the system’s working principles and outcomes is paramount, design explainability in from the beginning.
  • Build systems that are ready for auditing.

3. Inclusion & Fairness

Aim for inclusion by striving to understand whom the system we are designing and building will impact.

  • Design the system carefully from the beginning, with input from as diverse a group of people as possible.
  • Avoid creating or reinforcing bias that can lead to unfair outcomes.
  • Use diverse and inclusive training and test data to ensure fairness and inclusivity.
  • Create use cases that represent everyone the system impacts.

4. Privacy and Safety

Collect, store and use personal data safely, and default to high privacy.

  • Make it explicit to users what kind of personal data is being used and how.
  • Collect and store as little sensitive data as possible.
  • Make it as easy as possible for users to exercise their data privacy rights (e.g. under the GDPR).
  • Anonymise data as much as possible.
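The data-minimisation and anonymisation bullets above can be sketched in code. This is a minimal illustration, not Futurice's implementation: the field names, the salt handling, and the `minimise_and_pseudonymise` helper are all hypothetical, and salted hashing is pseudonymisation rather than full anonymisation (under the GDPR, pseudonymised data is still personal data).

```python
import hashlib

# Hypothetical field list for illustration; a real system would define
# this per use case during a privacy review.
REQUIRED_FIELDS = {"user_id", "country"}
SENSITIVE_ID = "user_id"
# A real deployment would keep the salt in a secrets store, not in code.
SALT = b"replace-with-a-secret-salt"

def minimise_and_pseudonymise(record: dict) -> dict:
    """Keep only the fields the use case needs, then replace the direct
    identifier with a salted hash so records can still be linked
    without storing the raw identifier."""
    kept = {k: v for k, v in record.items() if k in REQUIRED_FIELDS}
    if SENSITIVE_ID in kept:
        digest = hashlib.sha256(SALT + str(kept[SENSITIVE_ID]).encode()).hexdigest()
        kept[SENSITIVE_ID] = digest
    return kept

record = {"user_id": "alice@example.com", "country": "FI", "phone": "+358 ..."}
print(minimise_and_pseudonymise(record))  # phone is dropped, user_id is hashed
```

For genuinely anonymous data, stronger techniques (aggregation, k-anonymity, differential privacy) would be needed on top of this.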

5. Don’ts

Do not work on systems that go against human rights.

Do not manipulate.

  • Do not use private data to promote ideas or actions that impacted people might consider unwanted or harmful.
  • Do not use manipulative features or design, or exploit human biases – instead, design for understanding.

Do not harm people or the environment.

  • The systems we build should never pose a direct threat to people or the environment. The systems we build must always guarantee the protection of the physical, psychological, and social safety of individuals.

Do not incite violence.

  • Violence is sparked by disrespect and distrust between individuals and groups. The systems we build should never promote the division of societies or social groups.


References:
  1. The original blog post