---
title: "Use or be Used: Regaining Control of AI"
abstract: >
  It's said that Henry Ford's customers wanted "a faster horse". If Henry Ford
  were selling us artificial intelligence today, what would the customer call
  for, "a smarter human"? That's certainly the picture of machine intelligence
  we find in science fiction narratives, but the reality of what we've
  developed is far more mundane. Car engines produce prodigious power from
  petrol. Machine intelligences deliver decisions derived from data. In both
  cases the scale of consumption enables a speed of operation that is far
  beyond the capabilities of their natural counterparts. Unfettered energy
  consumption has consequences in the form of climate change. Does unbridled
  data consumption also have consequences for us? If we devolve decision
  making to machines, we depend on those machines to accommodate our needs. If
  we don't understand how those machines operate, we lose control over our
  destiny. Much of the debate around AI makes the mistake of seeing machine
  intelligence as a reflection of our intelligence. In this talk we argue that
  to control the machine we need to understand the machine, but to understand
  the machine we first need to understand ourselves.
author:
- family: Lawrence
  given: Neil D.
  gscholar: r3SJcvoAAAAJ
  institute: University of Cambridge
  twitter: lawrennd
date: 2023-05-02
venue: Strachey Lecture, Oxford
transition: None
---

\include{_ai/includes/henry-ford-intro.md}

\notes{In Greek mythology, Panacea was the goddess of the universal remedy. One consequence of the pervasive potential of AI is that it is positioned, like Panacea, as the purveyor of a universal solution. Whether it is overcoming industry’s productivity challenges, or as a salve for strained public sector services, or a remedy for pressing global challenges in sustainable development, AI is presented as an elixir to resolve society’s problems.

In practice, translation of AI technology into practical benefit is not simple. Moreover, a growing body of evidence shows that risks and benefits from AI innovations are unevenly distributed across society.

When carelessly deployed, AI risks exacerbating existing social and economic inequalities.}

\include{_ai/includes/embodiment-factors-short.md}
\include{_ai/includes/baby-shoes.md}
\include{_data-science/includes/lies-damned-lies.md}
\include{_ai/includes/evolved-relationship-ai.md}

\include{_ai/includes/cuneiform.md}

\include{_books/includes/the-future-of-professions.md}

\notes{And this is very likely true, but in practice we know that even if the disruption is initially felt by the professional classes, those groups tend to be protected by their ability to adapt, an ability that is correlated with higher education.}

\notes{Whether this remains true this time is another question. I'm particularly struck by the "convergent evolution" of ChatGPT. The model is trained by reinforcement learning, with the feedback provided by people. ChatGPT's answers are highly plausible, make use of sophisticated language in an intelligent-sounding way, and are often incorrect. The similarity to fresh Oxbridge graduates is striking. I wonder if this is also an example of convergent evolution.}

\include{_policy/includes/coin-pusher.md}
\include{_ml/includes/rs-report-machine-learning.md}
\include{_ml/includes/rs-report-mori-poll-art.md}
\include{_ml/includes/chat-gpt-mercutio.md}
\include{_physics/includes/d-day-weather.md}

\include{_ai/includes/the-great-ai-fallacy.md}

\notes{So far we haven't seen AI that does a very good job of understanding humans, but with large generative models we're starting to see AI that, through consuming vast quantities of our data, has begun to understand something about our culture.}

\include{_ai/includes/p-n-fairness.md}

\include{_books/includes/a-question-of-trust.md}

\include{_ai/includes/naca-proving.md}

\subsection{AI Proving Grounds}

\slides{* Understand the nature of the tool.
  * What is the potential, what are the pitfalls?
  * Build societal AI capability.}

\notes{We need mechanisms to rapidly understand the capabilities of these new tools: what is the potential of the technology, and what are the pitfalls? With this in mind we can build a societal AI capability that makes understanding pervasive.}

\notes{Innovating to serve science and society requires a pipeline of interventions. As well as advances in the technical capabilities of AI technologies, engineering know-how is required to safely deploy and monitor those solutions in practice. Regulatory frameworks need to adapt to ensure trustworthy use of these technologies. Aligning technology development with public interests demands effective stakeholder engagement to bring diverse voices and expertise into technology design.}

\notes{Building this pipeline will take coordination across research, engineering, policy and practice. It also requires action to address the digital divides that influence who benefits from AI advances. These include digital divides across socioeconomic strata that need to be overcome: AI must not exacerbate existing inequalities or create new ones. In addressing these challenges, we can be hindered by the divides that exist between traditional academic disciplines. We need to develop a common understanding of the problems and a shared knowledge of possible solutions.}

\notes{\subsection{Making AI equitable}}

\include{_data-science/includes/data-science-africa.md}
\notes{\include{_governance/includes/data-trusts.md}}
\include{_governance/includes/data-trusts-initiative.md}
\include{_accelerate/includes/accelerate-programme.md}
\include{_ai/includes/ai-at-cam.md}

\thanks

\references