---
title: 'AGQ: Survival Skills in the Age of AI'
date: '2023-04-11'
tags: ['gpt-4', 'agi', 'agq', 'large-language-models', 'ai', 'ml', 'alignment']
draft: false
summary: 'When AGI Comes, What will the role of humans be?'
---

“When AGI comes, what will the role of humans be?”

<AGQSectionHeader number={1} title={'"Setting"'} subHeader={['Why', 'This', 'Matters']} />

<ImageWithCaption caption={'"The Singularity" (original from Wait but Why)'} src="/static/images/agq/wait-but-why-singularity-simple.png" alt={'"The Singularity" (original from Wait but Why)'} />


<ImageWithCaption caption={'ChatGPT Time To 100M Users (original from Kyle Hailey)'} src="/static/images/agq/chat-gpt-adoption-rate.png" alt={'ChatGPT Time To 100M Users (original from Kyle Hailey)'} />


AI Progress is now measured in days, not years.

Artificial General Intelligence (AGI) is either here (by some accounts) or coming very soon.

Love it or hate it; ignore it, shun it, or embrace it:

The Age of AI is upon us, yet nobody seems totally ready for it.


There are many forces which are set to play a part in the coming times. [Politics](https://davidrozado.substack.com/p/the-political-biases-of-gpt-4), [economics](https://openai.com/research/gpts-are-gpts), sociology, information technology, and others — yet **AGI** manages to be simultaneously the most ***unknown*** element, and the most ***transformative*** one.


I have a growing mixture of concerns & hope about where I and my loved ones may fit in the new shape of the world when this transformation is complete.


Perhaps you feel the same way.

So let’s talk about it.


<CustomAside icon={"⏳"} title={<span style={{textDecoration: 'underline', fontWeight: 'bold'}}>TL;DR for this Series</span>}>

**Central Claim**

AIs are expected to be useful tools that reduce human cognitive load. AIs trained using state-of-the-art methods like Reinforcement Learning from Human Feedback (RLHF) are naturally selected for their resemblance to a specific subset of human minds. Many forces will conspire to push AGI adoption to occur in a fast, but not instantaneous, fashion — with RLHF remaining the primary teaching mechanism for AIs. As AGI is adopted gradually, hybrid organizations will begin to form, composed of Humans, AIs, and hybrids.

Counterintuitively, this implies that for most of us, working and living with AIs will increasingly require *more* skills related to traditional Human Interaction, Psychology, & Organizational Psychology than it will technical skills.

AGQ is introduced as a wrapper term for these skills, consisting of:

Self-Knowledge, Other-Knowledge, Group-Knowledge, & World-Knowledge.

</CustomAside>
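The claim that RLHF "selects for resemblance to human minds" can be made concrete with a sketch. Reward models in RLHF are commonly trained on pairwise human preferences via a Bradley-Terry-style loss; this toy Python function (the names are mine, not from any library) shows why such training rewards agreement with human raters:

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry-style loss: minimized when the reward model scores
    the human-preferred response higher than the rejected one."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss shrinks as the model's scores agree with human raters,
# and grows when they disagree — a selection pressure toward humanlike judgments.
loss_agree = preference_loss(2.0, 0.5)     # model agrees with the rater
loss_disagree = preference_loss(0.5, 2.0)  # model disagrees
assert loss_agree < loss_disagree
```

A minimal sketch only: real reward models score whole token sequences with a learned network, but the shape of the pressure is the same.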


What is AGQ?

It’s a term I made up; it stands for “Agent Quotient”.

Am I authorized to coin a new term?

Nope! Sorry 🤗


⭐ **AGQ (Agent Quotient):** One’s ability to enact change in systems composed of ***Agents*** — Humans, AIs, and “Hybrids” / “Cyborgs”.

<span style={{ color: 'gray' }}>(We’ll unpack this in detail, so don’t be concerned if there are some unfamiliar terms)</span>

AGQ can be thought of as a mixture of EQ (Emotional Quotient) and IQ (Intelligence Quotient).

Just as EQ and IQ are different (and hard to measure accurately) facets of the idea of "G" -- or "General Intelligence" -- "AGQ" is a wrapper term for the ability to interact successfully with systems which are mixtures of human and artificial agents.


Why give this a new term?

The terms we have right now aren't quite enough.

IQ isn’t quite enough — because it is focused primarily on pattern recognition, and makes no statements about an individual’s “theory of mind”.

And EQ isn’t quite enough — because while it does cover “theory of mind”, it has focused primarily on human minds, which will soon lose their monopoly on societal participation.


As we push (or are pushed) into this brave new world, we should have some concept of how to behave within it. When self-improving, it’s important to have a target.

Hence, AGQ.

Mo’ Minds, Mo’ Problems

<ImageWithCaption caption={"GPT-4 Driven Agent Simulation, screengrab from 'Reverie' (paper: Generative Agents: Interactive Simulacra of Human Behavior)"} src="/static/images/agq/gpt-sims.gif" alt={"GPT-4 Driven Agent Simulation"} />

Humans are all different.


This has presented us with opportunities and challenges for thousands of years.
However, ***compared to AI, humans are incredibly similar***.
Pick any two humans who seem different from each other — two political enemies, or two people from vastly different walks of life. The differences between these human minds may seem stark. But the differences between these two humans would seem ***minuscule*** if you weighed them against the differences between either human and an AI system.

Not only will AI minds be different from our own — AI minds will be mutually diverse, too. There will be many different kinds of AI minds, with different strengths and weaknesses. Some of the AI minds might even become more distinct from each other than they are from us!


But these differences ***don’t mean we should just give up hope on understanding AI systems.***

There are similarities, however small, between how AI systems perceive the world, and how we do. They were designed in our image, after all. Further, when you study AI systems, you begin to realize that there are common “subsystems” which all intelligent beings must have — human or not.


Thus despite our major differences, relating to AI will in some ways resemble how we relate to other humans — it’s a matter of understanding where we are alike, and where we differ.

In the Age of AI, we’ll have to get comfortable interacting with all these types of minds, woven together in complex networks of multi-agent systems.
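As a loose illustration of what such a network of mixed minds might look like in code (every name here is hypothetical, a sketch rather than a real framework), a minimal model treats each agent, human or AI, as simply "something that responds to messages":

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Agent:
    """A hypothetical agent in a mixed human/AI system: anything that
    observes a message and produces a reply, regardless of what kind of
    mind is behind it."""
    name: str
    kind: str                      # "human", "ai", or "hybrid"
    respond: Callable[[str], str]  # maps an incoming message to a reply
    log: List[str] = field(default_factory=list)

def converse(agents: List[Agent], opening: str, rounds: int = 2) -> List[str]:
    """Pass a message around a ring of heterogeneous agents,
    recording each reply in a shared transcript."""
    transcript, message = [], opening
    for _ in range(rounds):
        for agent in agents:
            message = agent.respond(message)
            agent.log.append(message)
            transcript.append(f"{agent.name} ({agent.kind}): {message}")
    return transcript

# Usage: a human and an AI taking turns on the same message.
agents = [Agent("Ada", "human", lambda m: m + " +human"),
          Agent("Bot", "ai", lambda m: m + " +ai")]
transcript = converse(agents, "hello", rounds=1)
```

The point of the sketch is the interface: once every participant is reduced to "responds to messages", humans, AIs, and hybrids compose into the same system.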

We'll all need to develop a strong “Theory of Mind”, and learn how to apply the Scientific Method.


Given this, the primary areas that I propose AGQ covers are as follows:


🔑 **The Key Domains of AGQ**
  1. Knowledge of Yourself — Self-Knowledge
  2. Knowledge of Other Individuals — Other-Knowledge
  3. Knowledge of Groups of Individuals — Group-Knowledge
  4. Knowledge of the World — World-Knowledge

The list above can be broken down even further:


🧘 **Self-Knowledge**
  1. Noticing Patterns In your “Internal World” — thoughts, drives, & emotions
  2. Articulating Your Thoughts and Feelings To Yourself
  3. Noticing Patterns in your Behavior

👤 **Other-Knowledge**
  1. Noticing Patterns In Others’ Behavior
  2. Using the Behavior of Others to Decipher their “Internal Worlds” (thoughts, drives, & emotions)
  3. Taking the perspective of others
  4. Articulating your Thoughts and Feelings To Others
  5. Coordinating your actions with Individuals
  6. Directing & Being Directed by Other Individuals

👥 **Group-Knowledge**
  1. Noticing Emergent Patterns in Groups of Individuals
  2. Articulating your Thoughts and Feelings To Groups, without loss of information
  3. Coordinating your actions within Groups
  4. Directing Groups

🌎 **World-Knowledge**
  1. Ability to notice Patterns in your Environment
  2. Ability to make meaningful changes in the world

To really cover this ground, and to do it well — we’ll need to explore a little bit of how our minds work, how AI minds work, how groups work, and how the world works.

We'll of course need to study State-of-the-Art AI systems, both to understand "who we're creating" and as a means of understanding more about ourselves. (<i style={{color: 'gray'}}>& also because they're cool 🤓</i>)

We'll need to learn some things about Psychology, both in individuals and in groups. We'll need to touch on topics around Cultural Development, Game Theory, & Decision Theory.
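Since Game Theory is on that syllabus, here is the smallest possible example of what it studies: the Prisoner's Dilemma, where individually rational agents coordinate badly. The payoff values below are the textbook ones; the function names are mine, a sketch rather than a standard library:

```python
# Classic Prisoner's Dilemma payoffs (higher is better for the agent):
# (my payoff, opponent's payoff) for each pair of moves.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_move: str) -> str:
    """The individually rational reply, given the opponent's move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# Defection dominates for a lone agent, even though mutual cooperation
# (3, 3) pays both agents more than mutual defection (1, 1).
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
```

This tension, individual incentives pulling against group outcomes, is exactly the kind of dynamic that shows up again when the agents are a mix of humans and AIs.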

Taking all of the acquired knowledge and stretching it to the limit, we'll need to zoom out and look at the Social & Macroeconomic landscape -- to predict how the developments in AI will fit within the world we inhabit today, and exactly how much they'll change it.


Easy, right? 😉


I won’t claim that I’ll be able to do all of these justice, but at least it'll be a fun conversation.


<CustomAside icon={"✨"} title={<span style={{textDecoration: 'underline', fontWeight: 'bold', fontStyle: 'italic'}}>What's Next</span>}>

<br/>

🎭 In Part 2: “Characters”, we’ll go over some of the characters we should expect to see in the near future. We’ll review some state-of-the-art research, and couple it with our intuitions about human minds to develop an intuition around AI minds.

🕸 In Part 3: “Cyborganizations”, we’ll talk about how these characters will interact to create emergent social structures — from small groups, to major organizations.

🛠 In Parts 4-7: “Skills”, we’ll explore the specific skills we should be investing in to prepare for the future we’ve laid out: Self-Knowledge, Other-Knowledge, Group-Knowledge, & World-Knowledge.

🌌 In Part 8: “Epilogue”, we’ll talk about how this all ties together.

</CustomAside>


If you want to follow along — subscribe to my newsletter below to be notified when the next post goes live (should be in a day or two).
— Luke


WAIT

If you stayed to the end you're probably cool and I'd like to connect.

Please comment or subscribe below, follow me on twitter, or reach out through one of the other social links below.


***A L S O...*** -- I'm currently looking for engaging work around LLMs & ML Systems.

Check out my projects, my twitter, my linkedin, or my github to see what I'm into.


Reach out :)
Stay curious