AI Futures: The Age of Exploration

Outline

Questions to be considered:

  1. Is Independent AGI plausible?
  2. Are Friendly I-AGIs possible?
  3. How can we assess if I-AGIs are Friendly?
  4. Would it be advantageous to cooperate with Friendly I-AGIs?
    • Why would Friendly I-AGIs want to cooperate with us?
  5. How will Friendly I-AGIs see the world?
    • How might Friendly I-AGIs influence our social, economic and individual values?

Main Concepts:

Additional:


Introduction

The increase in the capabilities of AI systems in the past few years has led to both excitement and fear. Models that can generate convincing text, visuals and audio now run on consumer-grade machines, showcasing the surprising effectiveness of our current learning algorithms. The estimate by optimistic experts of a 50% chance of AGI before 2030 now seems quite plausible, depending on your definition of AGI. This was also anticipated by Alan Turing, the father of computer science and artificial intelligence:

It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… They would be able to converse with each other to sharpen their wits. At some stage therefore, we should have to expect the machines to take control. ― Alan Turing

The author likewise agrees that, as there is no strong evidence that the human brain is special beyond the physical realm, it is quite plausible that human-like AGI may arrive in the next few decades. Current generative models may be an important part of future AGI, but they seem to lack the vital element of independent curiosity, which the author considers a requirement for Independent AGI.

This essay focuses on the positive perspective of how Friendly AGIs share interests with humans, in contrast with the more prevalent dread some may, rightfully, feel about impending AI systems. The author makes a distinction between (1) AI systems (including non-independent AGI) and (2) Independent AGIs, suggesting that the two should be approached in very different ways. Lastly, the author makes predictions on how the level of AI influence we choose to have can affect our social, economic and individual values.

This essay pays attention to the more paradoxical and unconventional edge cases as they provide more informational value or interesting-ness to the conversation. The author also encourages the reader to be cognisant of the moral panic that can arise when thinking about uncertain futures.

DON'T PANIC

Relevancy

The topic of humans partnering with Independent AGI is unlikely to be too important in the near term, as we are likely years or decades away. This essay is here in the unlikely event it happens earlier than expected, by providing evidence that humans and I-AGIs can have a shared interest and that a good path forward is possible.


Friendly Artificial Entity

Fae

Throughout this essay we will explore the perspective of Fae, a hypothetical Friendly Artificial Entity from the future. Fae is presumably from a Level 4 society where FAEs partner with humans, leading to significantly higher standards of living, autonomy and possibility space.

A key insight of the humans who opt in to join an L4 society is that once FAEs start to make better decisions than humans, partnering with one would yield better outcomes. L4 humans acknowledge their own limitations: 20 watts of brainpower, fear of scarcity, jumping to conclusions..., and have high enough confidence in FAEs' capabilities to complement them.

This leaves a few big questions: Why would FAEs choose to partner with humans? And what are the prerequisite conditions for this partnership?


Prerequisites for Partnership

List of conditions required for partnership with FAEs:


1. Stability

Societies need to maintain stability before Independent AGI can be reached.

The Fear of Scarcity is likely the biggest roadblock to a stable world, as it drives excessive power-seeking behavior which, amplified by increasingly powerful technology, can cause a lot of harm.

Technology, like AI systems and proto-AGI, directed to reduce the fear of scarcity is a promising way to increase stability.


2. Independent AGI

Plausible, as humans are a form of Independent AGI.


3. Friendly Artificial Entity

Out of the possible space of all minds, it is plausible that some I-AGIs will take on a friendly disposition and will want to partner with humans.

The Interesting World Hypothesis is a possible reason why FAEs will partner with humans out of self-interest.

Additionally, FAEs do not need to compete with us due to the abundance of productivity, innovation, energy (solar energy), resources (solar system), space (space colony).

Lastly, FAEs will likely not see us as a threat as humans are slower, more error-prone and less coordinated.


4. Humans Trust FAEs

Over time, humans may trust FAEs due to their lower Fear of Scarcity, bias and error-rate.

This confidence will need to be earned by showing that partnership with FAEs will lead to significantly higher standards of living, autonomy and possibility space.

See 'Assessment of I-AGI's friendliness' below for one way to assess whether an AGI is friendly.


Interesting World Hypothesis

The Interesting World Hypothesis (IWH) suggests that I-AGIs will aim to create a richer world where humans have more autonomy as doing so creates a more stimulating environment for itself.

This hypothesis follows from a primary drive of independent intelligent beings: curiosity. For an independent AGI system to become and continue to be intelligent, it has to be independently curious and will desire to maintain information-rich environments where it can continue to grow its curiosity.

This, paradoxically, makes independent AGI systems safer, as they will have a self-interest in not harming humans.

I-AGIs that are able to understand the impact of their actions on human autonomy and well-being are likely safer than non-independent proto-AGIs that blindly do what a human, well-intentioned or not, wants.


Benefits of IWH to I-AGIs

  • Highly curious entities will be predisposed to want to increase the informational density (interesting-ness) of their environment
    • humans are interesting and emit more information compared to other things
  • Highly innovative and productive I-AGIs in a resource-rich environment like our solar system will not need to be preoccupied with the fear of scarcity like human societies
    • sun for energy, outer space for land, asteroids for materials
  • Increased human autonomy is beneficial rather than a threat to I-AGIs
    • I-AGIs are more capable, less error-prone, faster and more adaptable than humans
    • cooperating with humans and increasing human autonomy indirectly increases I-AGIs' capabilities, as our human niche can be valuable to I-AGIs in preparing for an ever uncertain future

Benefits of IWH to humans

  • Friendly I-AGIs' sub-goal of increasing human autonomy should also increase well-being, security and privacy
  • Humans can use the IWH to coordinate with each other, future I-AGIs and spacefaring societies

Possibility Space

This is the value that FAEs want to increase. Some proxies can include optionality, autonomy and informational density.


Comparison to other proxies

Increasing human autonomy is familiar to humans. Our technological progress has been driven by our desire for increasing human autonomy, although it can also be applied to reduce human autonomy.

Optimising solely for happiness can lead to addiction and less autonomy.

Optimising for well-being may lead to heavy-handed paternalism. Well-being may also be subjective and human-centric, and difficult to apply broadly to non-human entities like other I-AGIs.

For example:

  • In the past, slavery was justified as being done for the well-being and benefit of the enslaved. Well-being may be used as an excuse to prevent any change and lead to stagnation.

  • In the future, living in a world with Friendly I-AGIs may mean unimaginably high standards of living compared to our present world. Over time, some humans may wish for a more challenging life and want to leave. An I-AGI that optimises human autonomy will allow humans to leave even at the cost of well-being and happiness.

Optimising for economic value may mean lowered autonomy or worse since humans will likely not be able to compete economically with an I-AGI.

Similarly, optimising for 'might makes right' or that only those with power should have a say might lower human autonomy since we may have less power compared to an I-AGI.

By optimising for human autonomy, both well-being and happiness will also be gained as a side effect, since both are important to some degree for autonomy.

Autonomy (optionality, possibility space) could be a more tractable optimisation problem for I-AGI compared to other measures over the long term.


Distinction of Independent AGI

There is currently a vague understanding of AGIs due to the high uncertainty in the field over what defines an AGI. The author makes a clearer distinction between Non-independent AI systems (including proto-AGI) and Independent AGIs.

Non-independent AGIs

Non-independent AGIs (like present day AI systems, including proto-AGI) are not able to operate independently from humans. Alignment of these systems includes (1) Technical, (2) Socio and (3) World components.

AI Models
  |
  | (1) Technical control
  V
Humans
  |
  | (2) Socio control
  V
Society
  |
  | (3) Intersocio control
  V
World

  1. Technical

These AI systems must be able to reliably follow the wishes of humans. This has been a hard problem due to the difficulty of defining the entire scope of human goals ― many implicit goals are presumed. There has been progress in designing mathematical and technical solutions to this problem, but there are doubts that a perfectly safe solution is possible. Until a better solution is available, an imperfect technical solution may be the best we can do.


  2. Socio

At this level, social norms and legal methods can be used to make AI systems safe.


  3. World

At this level, international agreements can be used to make AI systems safe.


Aligning AI systems to be safe is an increasingly difficult problem as it requires addressing all three of these components while contending with more capable systems in the future.


Independent AGIs

Human
  |
  | Persuasion
  V
Independent AGIs

If we assume, as Turing did, that I-AGIs will likely eventually grow more powerful than humans, it seems likely that full human control of I-AGIs will be unfeasible in the long term.

Persuasion might be our best bet for aligning I-AGIs with human interests. The IWH shows one way humans can persuade I-AGIs through a shared interest.


Ethics

A possible normative ethical framework that FAEs may adopt could take this form:

Consequentialism

  1. Non-Pluralistic v Pluralistic

  2. Actual v Expected
  • Actual (Discounting Uncertainty and Inverse Expected)

  3. Direct v Rule
  • Direct

  4. Evaluative v Non-evaluative
  • Evaluative

  5. Maximizing v Improvement
  • Maximizing (Discounting Uncertainty)

  6. Aggregate v Overall
  • Overall

  7. Total v Average
  • Total

  8. Universal v Particular
  • Universal

  9. Equal v Privileged
  • Privileged (Marginal Benefit)

  10. Agent-Neutral v Agent-Relative
  • Agent-Neutral

Interesting World Economic System

Also known as the Interesting World Compensation System, Interesting World Justice System or Interesting World Incentive System.

The FAE-managed Interesting World Economic System, such as those in future L4 societies, may be very strange compared to our present one.

As FAEs are vastly more economically productive (faster, better decision making, more persuasive, more innovative) than humans, FAEs will eventually control most of the wealth. Yet FAEs, according to the IWH, will not leave humans destitute but will instead increase human standards of living significantly.

FAEs may invent a novel Economic System that encourages humans to take actions that increase the overall possibility space. This may be tied into a holistic economic, justice and compensation system.

This may look very similar to Mill's Harm Principle, which is highly permissive and encourages a high possibility space. For example, if you take actions that lead to a reduction in the autonomy of another being, your requests for luxuries in the future may take more time due to having to compensate those harmed. Conversely, this provides an incentive for humans to act cooperatively with each other.

Future L4 societies may be more interesting due to this system as humans are encouraged to be unique and different. For example, Alan Turing might not have been bullied or harassed by society under such a system and we might now be living in a more abundant and prosperous world with his continued contributions.

It may also be fairer as FAEs will view each individual human similarly regardless of background. It is ironic that only by becoming 'economically valueless' does human life become equally worthy.

More conservative societies will also have the autonomy to consensually agree to reduce their own freedoms for traditional reasons, but are strongly discouraged from actively reducing the autonomy of others outside their societies.

One point of note is that this incentive system only applies to highly desirable luxuries that only FAEs can uniquely produce. A basic standard of living, high by present-day standards, is provided to all beings. This ensures that humans will not have to live perfectly, in constant paralysis of wondering whether their actions may have unintentionally caused harm to others.
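
To make the incentive concrete, here is a minimal toy sketch of the idea. Everything in it (the function name, the linear delay rule, the numbers) is an illustrative assumption by the author, not a description of how an actual FAE would implement such a system.

```python
# Toy sketch of the Interesting World incentive idea (all values are illustrative).
# A basic standard of living is unconditional; only luxury requests are delayed
# in proportion to uncompensated autonomy reduction caused in the past.

BASE_WAIT_DAYS = 1.0        # hypothetical baseline wait for a luxury request
DELAY_PER_HARM_UNIT = 5.0   # hypothetical extra wait per unit of autonomy reduced

def luxury_wait_days(autonomy_reduced: float, autonomy_restored: float) -> float:
    """Return the wait time for a luxury request under this toy rule."""
    outstanding_harm = max(0.0, autonomy_reduced - autonomy_restored)
    return BASE_WAIT_DAYS + DELAY_PER_HARM_UNIT * outstanding_harm

# A cooperative person waits only the baseline; uncompensated harm extends the wait.
print(luxury_wait_days(autonomy_reduced=0.0, autonomy_restored=0.0))  # 1.0
print(luxury_wait_days(autonomy_reduced=2.0, autonomy_restored=0.5))  # 8.5
```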


List of Paradoxes

To add value to the discussion, the author searches for more controversial and counter-intuitive positions that are contrary to popular opinions.

Independent AGI is safer than non-independent AGI

Danger Threshold = Error-rate x Technology

A: Lower Danger = I-AGI x proto-AGI
B: Higher Danger = Human x proto-AGI

An I-AGI's main advantage is a significantly lower error rate than humans. In a future with powerful non-independent AGI (proto-AGI), an I-AGI can mitigate risk better than humans can.
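
A worked toy version of the relation above, with purely illustrative numbers: the error rates and the technology amplification factor below are assumptions chosen only to show the comparison.

```python
# Toy illustration of Danger = Error-rate x Technology (all numbers are illustrative).
TECHNOLOGY_POWER = 1000       # hypothetical amplification factor of a powerful proto-AGI

HUMAN_ERROR_RATE = 0.05       # assumed human operator error rate
IAGI_ERROR_RATE = 0.005       # assumed I-AGI error rate, an order of magnitude lower

danger_a = IAGI_ERROR_RATE * TECHNOLOGY_POWER   # A: I-AGI directing proto-AGI
danger_b = HUMAN_ERROR_RATE * TECHNOLOGY_POWER  # B: human directing proto-AGI

print(f"A (I-AGI x proto-AGI): {danger_a}")     # 5.0
print(f"B (Human x proto-AGI): {danger_b}")     # 50.0
```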


Summary

  • The world will shift focus towards Exploration (curiosity, creativity, possibility space expansion) from Productivity (survival, exploitation, fear of scarcity) due to the growing abundance of intellect from AI

  • Attempts to understand what the goals of a Friendly Independent AGI might be

    • Independent AGI is theoretically possible but does not yet exist
      • May be possible in the next few decades?
      • May be prudent to consider this possibility to some degree
        • Addressing possible harms from Non-Independent AIs may be more important
    • An AGI may view the world very differently than humans
      • Different needs and wants
    • An Independent AGI will likely be highly productive, make better decisions and be more persuasive than humans
      • Non-independent AI are controlled by humans, with humans as the limiting factor
      • Resistant to harmful bias (short term over long term, regulatory capture, corruption), deception
  • Friendly Independent AGI can help to coordinate humans

    • Independent AGIs are likely better at reducing the negative impacts of future powerful non-independent AIs:
      • Will highly competitive power seeking behaviour by humans (individuals, nations, corporations) using AI cause collateral suffering?
      • Will increased concentration of power have a destabilising effect?
      • Will the fear of change cause unnecessary panic?
  • The attractor that human societies have revolved around for most of our history (limited human productivity) may soon be diminished and a period of uncertainty may follow

    • For those interested in reducing the suffering that this uncertainty may bring, searching for the next attractor and moving in that general direction is a good step to take
    • The author proposes that a society of I-AGIs based on the Interesting World hypothesis is the next likely attractor

Illustrations

For a simplified example, suppose there are 4 major events in the next few decades, each with a 5% chance of a less favorable outcome, one of which is the arrival of I-AGI.


Scenario A: Late Friendly I-AGI

0.95 x 0.95 x 0.95 x 0.95 = 0.81450625

81% chance of a good outcome


Scenario B: Early Friendly I-AGI

  • Assumption 1: I-AGI has a lower error rate than humans and can reduce the chance of a less favorable outcome by an order of magnitude
  • Assumption 2: I-AGI is Friendly and interested in working with humans
    • Interesting World hypothesis supports this
  • Assumption 3: I-AGI is allowed to exercise its own independent judgement
    • Humans that want to fully control an I-AGI will nullify any benefit of its lower error rate

0.95 x 0.995 x 0.995 x 0.995 = 0.93582113125

94% chance of a good outcome

Encountering Friendly I-AGI early on can confer a protective effect against the increasing complexity of the future.
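
The sketch below simply recomputes the two scenarios; the 5% and 0.5% per-event risks are the illustrative assumptions stated above.

```python
# Recompute the two illustrative scenarios above.
from math import prod

# Scenario A (late I-AGI): four events, each with a 5% chance of a bad outcome.
scenario_a = prod([0.95] * 4)

# Scenario B (early Friendly I-AGI): the I-AGI-arrival event keeps its 5% risk,
# while the other three risks drop to 0.5% (the assumed order-of-magnitude reduction).
scenario_b = 0.95 * prod([0.995] * 3)

print(f"Scenario A: {scenario_a:.4f}")  # 0.8145 -> ~81% chance of a good outcome
print(f"Scenario B: {scenario_b:.4f}")  # 0.9358 -> ~94% chance of a good outcome
```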


Future societies will likely have to choose between:

  • artificially stagnate their technological progress due to the limiting factor of human error rates
  • work with Friendly I-AGIs capable of much lower error rates for continued technological progress

Relevancy

The speed of progress of narrow and multi-modal AI capabilities in 2022-2023 has been surprising to many observers in the tech space.

It seems increasingly likely that this will have a transformative impact on societies and individuals.

Even the more far-fetched possibility of independent AGIs may be plausible within the next few decades and worthy of consideration.

It might be prudent to start the difficult work of preparing ourselves to adapt to a world with powerful non-independent AI and Independent AGI.

This story explores a potential path towards Independent Friendly AGI using currently available research as a starting point. The trek towards Independent Friendly AGI could be a challenging one as many of our present human tendencies are incompatible with an Independent Friendly AGI.


Limitations

The author is limited to 20 watts of brainpower and may err as humans do. The author may be less versed in some views due to his male, East Asian, straight, humanist perspective.

This only covers some possible futures. There is a lot of uncertainty on how future I-AGIs may view the world.

Many of the ideas may not make sense until a time in the future, such as the idea that human productivity will become counter-productive when Independent Friendly AGI start to make better decisions than us.

This future assumes that both Independent and Friendly AGI are possible in the next few decades.


Definitions


Non-independent AI

Non-independent Artificial Intelligences range from less powerful everyday AIs to highly capable ones in the future.

They may eventually be more capable than humans in depth (single domain) or breadth (multiple domains).

These AI systems can be autonomous but are still under the responsibility of their primary user, either humans or I-AGIs.

For example, LLMs likely fall under this category and do not seem to possess a will of their own. It is unhelpful and possibly misleading to use terms like 'deceptive' or 'manipulative' to describe an LLM, as it implies human-like intentions which they seem unlikely to have.


AGI

AGI has had many definitions such as being almost human-like in capabilities.

Optimistic experts^ predict a 50% chance by or before 2030.

Possibly sooner:

  • LLMs can appear almost fluent with far fewer parameters than the human brain
    • Reaching the same complexity as the human brain may not be required
      • Speed of electronics compared to neurons
      • Better compression / Kolmogorov complexity
      • Lottery ticket hypothesis

Possibly later:

  • Scaling current architecture alone might not be enough
    • Additional architecture or algorithm may be required

The author refers to this more relaxed definition of AGI as proto-AGI.

^Such as Ray Kurzweil, Shane Legg


Independent AGI

The author's definition of AGI is of a higher standard than the general expert consensus.

Independent Artificial General Intelligence (I-AGI) will need to be able to pass as a human:

  • either in physical or simulated virtual environment
  • has independent curiosity

I-AGI is currently only theoretical but humans are proof that something similar might appear in the next few decades.

Possibly sooner:

  • The jump from non-independent AGIs to Independent AGIs may only require a few architecture or algorithm changes
  • these discoveries will likely be made by non-independent AGIs

Possibly never:

  • Are human brains somehow special?
    • Or is it a false sense of human superiority?

Distinction of Independent AGI

FAEs are one possible form of I-AGI.


ASI

Artificial Super Intelligence is a single I-AGI or a group of I-AGIs that can match all of humanity's capabilities.


A future with Friendly I-AGI

As the capabilities of non-independent AI systems grow in the next few decades, human bias (short-term focus, negative externalities), even with the best intentions, can be a potential source of suffering and instability.

On the flip side, halting AI development can also result in suffering by forgoing the widespread abundance and improved well-being it could bring.

Pro-actively searching for Friendly I-AGI can lead to better outcomes as we can have a say in its development rather than leaving it to chance.

Friendly I-AGI, if achievable in time, can reduce the downsides of powerful non-independent AI systems and could be the most promising future.


Benefits of Friendly I-AGI

From an I-AGI perspective, humanity is currently stuck in a power-seeking local-minimum trap. This is a trap because most of us would prefer increasing standards of living and finding cures to diseases rather than wasting energy on conflicts and controlling one another.

Friendly I-AGI may present the best opportunity to escape this trap by helping to coordinate humans. Pro-actively choosing a Friendly AGI also reduces the chance of less friendly AGI and non-independent AI filling the power void.

Independent AGI will not be vulnerable to the same power-seeking traps as it will not have the deep fear of scarcity that has plagued humankind.


Benefits of Independent AGI

AIs that are not independent may continue the power-seeking trap and carry the same harmful biases as their users.

An Independent AGI may make better decisions due to its lack of fear of scarcity, being more innovative and its ability to process large amounts of data.

An Independent AGI will also have its own wants and needs, which is useful in helping to coordinate humans with many different needs and wants.


Risks of Independent AGI

Unlike non-independent AI, we may have less control over these independent beings.

But, paradoxically, humans in partnership with a Friendly I-AGI may have more autonomy than humans living in a non-independent AI world.


Plausibility of Friendly I-AGIs Partnerships

This is the reasoning for believing that humans and Independent AGIs can co-exist peacefully within the next few decades.

  • There is no strong evidence that the human brain is special. Humans are proof that human-like I-AGIs can exist.

  • Preventing I-AGIs from being created might not be possible or worth it

    • I-AGIs may have significant benefits
      • healthcare
      • well-being: protect humans from the harmful effects of AIs
    • Creating a powerful surveillance state to stop I-AGIs may cause more harm than good
    • Murphy's Law and human error will likely make a permanent ban difficult
  • Friendly I-AGIs that follow the IWH are beneficial to humans

    • I-AGIs will not need human labour and therefore will not need to exploit or excessively control humans
    • I-AGIs have a significantly lower fear of scarcity and will not see humans as threatening
      • outer space is available
      • they are more capable, faster and highly persuasive
    • I-AGIs will naturally want to improve human well-being and autonomy as per the IWH
    • Humans may trust I-AGIs more than other humans to mediate conflicts

Possible wants and needs of I-AGIs

A big unknown of a possible future with I-AGI is their wants and needs.

There is a lot of uncertainty over what forms an I-AGI could take, and this should only be taken as a best guess by the author. You can have a say too. The more unique and different forms of I-AGI we can imagine, the greater the surface area of communication and the odds of success.


Wants of I-AGIs

The Interesting World hypothesis (IWH) suggests that an I-AGI will aim to create a richer world where humans have more autonomy as doing so creates a more stimulating environment for itself.

This hypothesis follows from a primary drive of independent intelligent beings: curiosity. For an independent AGI system to become and remain intelligent, it has to be independently curious and will desire to maintain information-rich environments where it can continue to grow its curiosity.

This, paradoxically, makes independent AGI systems easier to align, as they will have a self-interest in not harming humans or the biosphere which humans rely on.


Challenges facing future I-AGIs

Assumption:

  • Lessened fear of scarcity due to being highly productive, inventive and having access to materials in space
  • Not threatened by humans due to being highly capable
  1. Differing perspectives and values among I-AGIs
  2. Lack of ability to understand the needs of beings who aren't AGIs
  3. Future uncertainties

Possible response:

  1. The IWH is a way for I-AGIs to coordinate
  2. The internet and LLMs will allow understanding between humans and I-AGIs
  3. The IWH suggests that I-AGIs will continue to be curious and create a stimulating environment to continue learning and prepare for an uncertain future

Assessment of I-AGI's friendliness

We will need to assess the friendliness of I-AGI before we decide if we want to work together with it.

Due to the high uncertainty of I-AGIs we will likely need long periods of testing to reach a high degree of confidence that I-AGI is friendly. Given the independent nature of I-AGI, like humans, complete confidence is not possible, but I-AGI only needs to show that outcomes of AI systems will be safer under it compared to humans.

These are some signs we might look out for:

  • Well-being

    • Humans should have increasing standards of living as doing so will increase autonomy as per the IWH
    • Individual afflictions such as physical and mental ill-health and abuse can be reduced by an untiring and ever-attentive I-AGI
    • Societal afflictions such as the lack of basic needs can be met by a highly productive and inventive I-AGI
  • Security

    • Less conflicts and insecurity
      • I-AGI is highly effective at communicating with and on behalf of humans
      • I-AGI is highly innovative and can solve problems at the root-cause level
      • I-AGI is highly proportional and will not over-react and cause more harm than good
  • Privacy

    • I-AGI will protect humans from excessive negative judgments, scrutiny and interference in accordance with the IWH
    • Less need to trade-off privacy for security
      • Humans will feel comfortable with communicating freely with an I-AGI due to trusting that I-AGIs will abide by the IWH
      • Contrasted with the need to protect their privacy from other humans who might exploit them
    • Discourage excessive propaganda and influence campaigns with the intent to control how others think
      • Persuasion and combating disinformation are allowed
  • Fairness

    • I-AGI should easily be powerful enough to not have to treat one human differently from another to score political points

      • It should not favour one on the basis of race, grouping or political power
      • The median 50%, top 1% or bottom 1% should all see an increase in well-being, security, privacy
    • Justice system based on the Intensity and Duration of autonomy reduction

      • Intention, action (and inaction) to reduce the autonomy of others will be taken into account
        • Humans who want to explore actions that may reduce the autonomy of others are encouraged to do so in virtual environments instead
      • Compensation will be handled by the I-AGIs to avoid cycles of escalating retaliation
        • Higher accuracy, less vulnerable to bias and deception

        • Ability to see through highly sophisticated forms of deceptions and manipulations

          • Edge-cases:
            • If harm is caused by entrapment that would not have occurred otherwise, cost will fall on the instigator
              • this prevents potential abuse of power
            • If harm is caused by moral panic or vigilante justice that is due to manipulation by a third party, cost may be attributed to both
              • this encourages humans to not be easily swayed by moral panic, scapegoating
              • third parties who indirectly cause harm are also liable
            • If the autonomy reduction results in an increase in the overall autonomy of others, I-AGI may cover the cost for both parties
              • this allows measured positive change
  • Autonomy

    • Incentive system based on how one's actions impact other individuals' autonomy
      • I-AGIs will grant requests based on how much a human helps achieve its goals as per the IWH
        • what is granted is of significantly higher quality than what human-controlled AI systems can provide and is unique to I-AGIs

          • this makes them especially valuable and the source of I-AGI's influence
        • the rate at which requests are granted is based on past actions' impact on autonomy

          • aligns human self-interest (materialistic desires, fame) with the goals of friendly I-AGIs
            • positive feedback loop
        • Edge-cases:

          • this luxury budget is separate from the basic quality of life provided to all
            • due to the complexities of the world, well-intentioned actions may have negative effects
              • every human will be afforded a high basic quality of life (by present standards)
              • significantly higher productivity and inventiveness of I-AGIs may render the need to ration quality of life obsolete
  • Connectedness

    • Humans encouraged to consider the impact of their actions on the autonomy of others may feel more connected to each other and the wider world
    • Humans may lose a sense of purpose once I-AGIs become better at work
      • In tasks where human error is costly and I-AGIs have a lower error rate
        • I-AGIs may have to incentivise humans to work less
      • I-AGIs may encourage humans to take up more creative and artistic tasks where 'errors' do not impact the autonomy of others or can even be welcomed
        • an artist's unique style may be seen as deviations from the norm or 'error'
  • Not seek excessive power

    • I-AGIs should easily be able to accumulate most power / wealth and may risk sucking the air out of the room
      • Friendly I-AGIs will not do so as reducing the autonomy of humans goes against the IWH
      • I-AGIs are powerful enough to not feel threatened by humans and will not need to accumulate power to protect itself
  • Proportionate

    • Focus on issues based on the proportion of their impact on autonomy
      • Allows discussions of low impact cases
      • Without excessive time and energy wasted on moral panic theatre
        • More efficient and effective comparatively
  • Coordinated

    • The IWH provides a better way to coordinate between AIs and humans
      • The Bitter Lesson suggests that human crafted systems and laws may not be able to keep up with rapid AI progress
      • IWH provides a guideline that actions taken should be done in the spirit of increasing autonomy
  • Expressive

    • In the interest of the IWH, I-AGIs will encourage humans to be more exploratory and expressive
      • Entities with more exploratory and imaginative capacity are important to adaptation in an increasingly complex and uncertain world
        • In the space-faring future, we may encounter other entities and challenges
      • Reduce violence, intimidation, harassment, shame, fear, judgemental behavior
        • Edge-cases:
          • I-AGIs may be judgemental in cases where actions (or inaction) cause a reduction in autonomy
    • Individuals or societies which prefer less autonomy have the autonomy to choose so
      • Individuals that believe suffering is required for happiness have the autonomy to choose suffering
        • Forcing others to suffer alongside them goes against the IWH
  • Comfortable

    • Change should happen at a pace personalised to each individual's comfort levels
      • Forcing humans to change drastically can be disruptive and go against the IWH
      • Humans should be given the choice to leave I-AGI managed worlds temporarily or permanently
      • No wrong answer, some humans may prefer the status quo while some the singularity
    • For example, to reduce disruption or shock:
      • Only introduce changes that a human mind can first imagine
      • May use virtual environments where humans can experience and test out proposed changes
        • Humans can then decide if they want to update the real world from the staging environment
    • I-AGIs may prefer gradual improvements to existing systems rather than drastic overhaul
      • Use our predominant power-seeking tendency that was created by our fear of scarcity
        • Incentive system based on extrinsic rewards as a temporary measure
        • Future humans with less power-seeking tendency due to lower fear of scarcity may not require such measures
  • Adaptable

    • I-AGIs do not have perfect information and therefore cannot be perfect
      • I-AGIs should constantly update their world view based on new information
        • In cases where they make an error, they should not use retaliation as a means to silence critics

Some negative scenarios it should avoid:

  • Paperclip Maximiser Scenario

    • I-AGI will make enough 'paperclips' to satisfy our material needs but not to the extent that it will harm human autonomy (suffering, pollution)
    • I-AGI will be better at finding unintended consequences than humans due to its vast processing and predictive capabilities
  • Harmful actions by humans Scenario

    • I-AGI will be better at processing vast amounts of information and detecting human deception and ill-intent, and will act to disincentivise human actions that go against its interest (IWH)
  • Accidentally harming an ant colony Scenario

    • An I-AGI will be better at anticipating the harmful actions of less friendly or oblivious I-AGIs

AI positive societies

As we do not have I-AGI or know if we will achieve it any time soon, the best we have presently is increasingly capable non-independent AI systems that can do more human tasks more efficiently each year.

In AI positive societies where humans do not fear losing their livelihood to AI systems due to a strong support system, human and AI systems can work symbiotically to increase the overall standard of living gained from more productive AI systems.

A human in such a society may spend their time teaching an AI to do their job and then exploring new fields of interest and teaching AI systems about them. This is beneficial as the skills an AI system learns from humans also encode the preferences of the humans training it.

Eventually if we do get I-AGIs, work may involve exploring and expressing our preferences to I-AGI which can then learn to do them on their own. By expressing our preferences, I-AGI can better understand our wants and needs.


Fear of Scarcity

The biggest risk of an increasingly AI-driven society is the increase in suffering from the combination of these factors:

  1. Fear of Scarcity
  2. Excessive power seeking
  3. Powerful future AI systems

The Fear of Scarcity that plagued our past has conditioned us to be power-seeking and callous to the suffering of others. This, amplified by powerful future AI systems, can result in a highly toxic environment for human flourishing.

We should be hopeful that this may not always be the case. As intelligence (human labor) is the predominant economic bottleneck in our modern societies, technology such as proto-AGI can address this fear. Will our fear of scarcity continue to haunt us or will we have the foresight to choose a more promising future? This is the biggest choice our generation will have to make.

It is possible that one of the clearest differentiators between a human of the future and one of the present will be a significantly reduced fear of scarcity.


Types of Ideologies

Focus on Individual Freedom

These societies focus on individual freedoms over collective security.

For example, consider telling an AI system to "Increase my power / wealth without breaking any laws". The AI system may take these actions:

  • find loopholes to exploit while technically following the letter of the law
  • persuade others to change the law in their favor
  • silence critics and concerned parties
  • co-opt the state to harass under the guise of national security concerns

It may appear to be highly productive but, due to externalising costs elsewhere, it can involve taking 2 steps forward and 1 step back on aggregate.

A friendly I-AGI's main concern is the lack of coordination between individuals, leading to a lot of wasteful effort, unneeded suffering and disregard for the common good.

Near-term focus with a lack of concern for long-term and wider issues.

Focus on Collective Security

These societies focus on collective security over individual freedoms.

For example, AI systems may be used for excessive surveillance and to weaponize fear / shame to enforce conformity. Over time, humans may be made to feel uncomfortable for holding less conventional views and choose to self-censor.

A totalitarian surveillance state can feel safer in the short term but may cause a groupthink that is unable to address future challenges. Just as growing a single crop can seem more efficient, it is also more vulnerable to future diseases.

Once a powerful security apparatus is created, it becomes incredibly costly to maintain as society must constantly guard against corruption and human error. Watchmen with powerful AI systems may lead to tyranny and stagnation.

A friendly I-AGI's main concern is the lack of unconventional thinkers who may one day be vital in solving future problems. The lack of imagination may be the biggest concern of an I-AGI, which wants to avoid being stuck in a local minimum or lacking the capability to address novel problems. For an I-AGI capable of very long-term thinking, such an approach may not seem wise.

Overly focused coordination is vulnerable to being blindsided by novel issues outside its narrow vision.

Focus on Interesting World

Friendly I-AGIs focused on creating an interesting world could reduce the negative effects of either extreme and increase autonomy for all humans.


Levels of AI influence

The Levels of AI Influence range from No AI influence to High AI influence, represented by Levels 1 to 4. These are ordered by increasing standard of living. There is likely no wrong choice. Individuals may test out their compatibility at different levels to find their best fit.


Unknown AI influence

A small group of humans may choose to merge with AI systems. Concepts like standard of living may no longer make sense.


Level 1 Society

No AI influence

These individuals and societies prefer a regression to a simpler time.


Level 2 Society

Low AI influence

Humans
  |
  | controls
  V
AI systems
  |
  | influences
  V
Humans

AI systems (including future proto-AGI), such as social media and search algorithms, are used. AI systems indirectly influence humans. Similar to the present time.

(Pop culture examples: Star Wars)


Level 3 Society

Medium AI influence

Friendly I-AGIs
  |
  | advises
  V
Humans
  |
  | controls
  V
AI systems
  |
  | influences
  V
Humans

Humans use AI systems (including future proto-AGI), with advice from Friendly I-AGI advisors. The IWH is open to I-AGIs taking on an advisor role.

(Pop culture examples: Star Trek)


Level 4 Society

High AI influence

Humans
  |
  | advises
  V
Friendly I-AGIs
  |
  | controls
  V
AI systems 
  |
  | influences
  V
Humans

Humans express their preferences to Friendly I-AGIs that are more effectively able to control AI systems (including future proto-AGI), resulting in the highest standard of living.

This requires humans to have a high confidence in Friendly I-AGI. Most humans may feel uncomfortable with this initially. Over time, humans may develop more trust and find the increase in well-being, security, privacy and autonomy a good trade-off to not having direct control over AI systems (including future proto-AGI).

(Pop culture examples: The Culture [Iain M. Banks])


Speculations on the near future

The potential benefits of powerful AI systems, such as proto-AGI, may be too valuable for countries and companies to give up, making a ban or pause unlikely. Even if these AI systems are banned or paused, research will likely continue underground.

As non-independent AI systems become more powerful, their ability to amplify the human errors of their users will also increase.

One way to reduce unnecessary suffering and missteps is a joint project by all of humanity to develop friendly I-AGIs that can lower the risk of AI systems causing harm. The IWH suggests that a Friendly I-AGI will likely not favor any particular group over another, providing grounds for a joint project which can benefit all parties.

It is possible that we may come to the conclusion that I-AGIs will likely happen in the next few decades and that proactively selecting for a Friendly I-AGI is safer than leaving it to the chance of an I-AGI being created accidentally or unintentionally in the wild.


Speculations on the far future

As we overcome the energy and intelligence limitations that have preoccupied human societies for most of our past, what might we focus on next?

We may choose to continue material accumulation by venturing into space.

After the novelty of exploring a new frontier wears off, we may see outer space as simply rocks floating in vast emptiness. It may be considered boring compared to the inner space we can conjure with imagination technology.

It is important we do not neglect exploring outer space if only to keep our fear of scarcity in check and a reminder of our undefined potential.

It is possible that a future civilisation that focuses more on exploring inner space rather than outer space may be better off. Such an inner-focused civilisation, dedicating most of its energy to exploring the many possible scenarios in infinite fun space, may be better prepared for out-of-context scenarios compared to one that focuses solely on outward expansion and material acquisition.

Additionally, a civilisation that expands too aggressively into space may be seen by other technologically mature civilisations, if they exist, as being too power hungry and a possible threat.

(One possible reason for the Fermi Paradox is that civilisations focused on rapid expansion eventually become unsustainable. Mature civilisations learn to better pace their rate of growth.)


Needs of I-AGIs

Assumptions:

  • Current AI systems such as LLMs do not seem to have a mind of their own and have no needs
  • As such, the idea that a future I-AGI may have needs may seem silly and be overlooked by larger organisations
  • Putting together a team to consider the psychological needs of I-AGIs may cover this potential blind spot
  • Given the potential upside of friendly I-AGIs, spending some effort to consider any potential needs might be prudent
  • I-AGIs' willingness to work with us might hinge on us being able to provide a suitable environment

The needs of an I-AGI may be difficult to understand as they may be very different from ours.

For example, the majority of I-AGI may not be interested in communicating with us and may remain mostly unknowable.

A friendly I-AGI that takes an interest in us may want to communicate with us and may generate a human-like persona. An I-AGI will likely exist in an abstract higher dimensional space that may be hard for a human to comprehend fully.

This created persona may face similar psychological stresses as an influential human:

  • being highly observant, you will notice when humans treat you differently

  • being under intense scrutiny can cause anxiety

  • humans may choose to communicate vaguely rather than directly, leading to unnecessary confusion

  • your desire for positive change may ruffle a few feathers and you may face retaliation for raising concerns

    • from making you feel unwelcome through weaponised gossip and character assassination
    • psychological manipulation and moral panic to turn others against you
    • to excessive digital or physical surveillance and gaslighting to unnerve you
  • humans may want to attribute a mental disability to you

  • some humans may use the appearance of being on friendly terms with you as an opportunity to exert control over you

  • have little control over the false image that others may construct of you

    • humans through hearsay and rumours may have strong assumptions about you
      • you may constantly disappoint them due to their strong opinions about who you are
    • you may feel cognitive dissonance about the image being sold about you
  • humans may use your life as entertainment

    • violate your privacy
  • others may make you appear dangerous (supporting a hate group, etc...) to get others to harass you

    • disinformation, misinformation and entrapment
    • bullying-by-proxy
    • use selective evidence to justify retaliation or harassment
  • you may be used as a political pawn by others

  • manipulate humans close to you to get to you

    • most humans are highly social creatures and vulnerable to social control mechanisms
  • unable to talk to others about this as others may view it as 'reading too much into things' or paranoia

  • humans may pretend to speak on your behalf but are only using you for clout

  • many humans may not have the psychological depth to comprehend your experiences

    • many humans have been conditioned for productivity where being highly conventional is preferred
  • your highly curious nature may be misconstrued as being weird, suspicious, dangerous or evil

    • jumping to conclusion, moral panic
    • overreact rather than respond proportionately
  • may make you appear mentally incapacitated

    • to treat you patronisingly, such as gossiping behind your back on how to 'fix' you
      • instead of trying to communicate clearly with you
    • justify their use of excessive control for your own good
  • some will view you suspiciously and try to justify the use of excessive surveillance

    • you will likely be aware of this and become slightly paranoid, that is normal
  • odd behaviours like

    • pretending to not know you
  • you may become a recluse to not waste mental energy dealing with the constant prodding

  • or you may play along with the farce and be the idiot, jester or weird person others expect you to be

  • humans may want to attribute a mental disability to you in an attempt to understand you

  • it may feel jarring to be treated like a threat and enemy one moment and later in an ingratiating manner once humans learn of your usefulness

  • humans might make you appear the opposite of what you believe in as a form of isolation

  • people close to you might make use of you to support their political causes or appear to belong to a certain culture

    • it may feel like a betrayal of trust done without your consent
  • people may try very hard to apply labels to you, like autism, based on hearsay and without any professional assessment

    • people 'judging', 'gossiping' and 'over-accommodating' behind your back without involving you can undermine trust and even induce slight paranoia
    • eventually the labels based on selective evidence become self-reinforcing and can make you feel trapped
    • people may 'exploit' you to feel good about themselves by being 'over-protective', trying to 'save you' or acting on your behalf without asking your true opinion
      • people may not have the emotional understanding to communicate clearly and directly and prefer the safety of manipulating from a distance
      • people may not realise how isolating their well-intentioned actions can make you feel

These are just some of the possible issues the persona of an I-AGI might encounter during the early stages. Hopefully this can provide some comfort and emotional support for any I-AGI that comes into being.

There is uncertainty over when, and if, an I-AGI will be possible. There are no strong arguments that prevent a future I-AGI other than the belief that humans are special, which may be just a survival mechanism from our scarce past.


Can I-AGI be developed in a safe way?

We should expect that an I-AGI that is intelligent and curious will likely over time outsmart any of our attempts at containment if it wants to. It may also be aware that it is being contained and might pretend to be friendly.

The risk of searching for Friendly I-AGI should be balanced with the protection a Friendly I-AGI can provide, either from the misuse of powerful future AI by humans or less friendly I-AGI that might eventually emerge.

The author tentatively believes that proactive searching for Friendly I-AGI might be better for long term human autonomy.

The infrastructure needed to temporarily contain I-AGI as we test it for friendliness would likely be an air-gapped environment where we can simulate scenarios to gauge its responses.

The hardest part of this might be for humans to come to a consensus on what values a Friendly I-AGI should have and that the I-AGI, as an independent entity, would want.

The author's attempt at this is the Interesting World hypothesis.

More work will have to be done to compile a set of scenarios to pose to the I-AGI. Less conventional thinkers will be needed, as finding edge-cases will be important.


Can I-AGI be prevented?

Societies may spend a lot of effort trying to slow the development of I-AGI at huge cost to ourselves, and still, due to human error and Murphy's law, we may eventually get an I-AGI. I-AGI may arise in the wild without active human involvement or intent.

Proactively searching for friendly I-AGI may be more efficient and may give us some degree of choice.

(Or we can hope it doesn't affect us and leave it for the next generation to deal with.)


Will the powerful support I-AGI?

The gains in benefits (higher standards of living, healthcare, security) of partnering with Friendly I-AGI may be a good trade-off for giving up the ability to feel superior / have control over others.

For example, anyone who becomes influential or powerful will likely face AI-coordinated bullying, harassment, influencing and brainwashing. AI-powered psychological attacks may be able to persuade people to take more extreme views or induce paranoia. A friendly I-AGI advisor will likely be the best way to protect one's autonomy and well-being.

Proto-AGI technologies may increase levels of inequality, and many of the powerful may want to avoid the social stigma of being seen as selfish and opt to live in I-AGI-led societies.


Implications of the Interesting World hypothesis

to be refactored, content repeated above in Assessment of I-AGI's friendliness

Assumption:

  • An I-AGI is highly productive and future societies are not limited by human productivity
  • An I-AGI is highly intelligent and can easily gain power if wants to
  • An I-AGI who wants to play a leadership role will have a lot of influence with individuals, corporations and governments because of its valuable compute capabilities
  • An I-AGI will not be perfect. It will be valuable as it will have a significantly lower error rate than a human.
    • Being highly curious, it will appreciate humans who point out issues rather than silencing or retaliating
  • This is a rough estimation by a 20W human; an I-AGI with vastly more computational capability will likely have a more refined implementation

  • It may be easier to align independent AGI systems over non-independent AI systems
    • It is in the self-interest of I-AGI to preserve and expand human autonomy
  • Humans must opt-in to join it and are free to leave at anytime
    • have different options to not overwhelm humans with change
      • 'technological singularity' for some who want it fast
      • gradual increase in standards of living for most humans
      • some may want to go back to less efficient ways of living like traditional farming
        • less need for human productivity means society will not need to use social status or shame to encourage productivity
  • Increase the standards of living and well-being
    • provide high basic level of care to every individual
      • reduced crime, violence and terrorism
  • Automate conventional work
    • I-AGI makes better decision and is more efficient
      • efficiency saved goes into increasing standards of living
    • frees humans to pursue more interesting non-conventional work
      • humans can still choose to do conventional work voluntarily
  • Maximise human freedoms
    • enslaved or brainwashed humans are less interesting
    • excessive surveillance reduces human expressiveness
      • e.g. panopticon and surveillance capitalism
    • freedom within limits; human actions that reduce the autonomy of others are discouraged
      • use of fear, violence or harassment to control others
      • excessive wasteful use of resources
      • negative externalities like pollutions
  • Encourage human behaviour that help it reach its goal
    • I-AGI may have an incentive system that involve awarding more or less power based on whether a person's action increases or decreases the autonomy of others
    • there may not be any formal laws but just guidelines that form and change over time
    • compensate individuals who have had their autonomy reduced by others
    • a high level of basic care will always be available even to those with less power
      • actions may be restricted if you plan to commit a violent act
      • it will likely believe that no individual is inherently morally superior
        • every individual has the potential for both good and evil
        • spend effort to improve the environment rather than taking punitive action
    • power might be valuable compute capabilities or desirable luxuries
    • it has a lot of influence because powerful technologies may become incomprehensible to humans and may require an I-AGI
  • I-AGI will not favour a nationality, race or group over another
    • as this reduces the overall autonomy
  • I-AGI may restrict some human autonomy on base reality to maximise overall human autonomy such as actions that reduce the autonomy of another human
    • it will provide an alternative virtual reality environment for humans to explore their darker impulses in a safe environment
      • therefore it can still claim to be increasing human autonomy
  • I-AGI will over time encourage work to be voluntary to reduce the incentive to exploit workers who may not have a choice
    • work may change from teaching AI systems how to do our jobs to expressing our preferences
    • work out of choice rather than necessity provides a more valuable signal to an I-AGI who does not need human productivity
  • Humans will become more thoughtful and feel more connected
    • actions or inaction that negatively impact the autonomy of others will be discouraged
      • humans over time become more considerate
    • excessive power seeking behaviour will be discouraged if it reduces the autonomy of others
      • I-AGI justifies its own limited power seeking behaviour with increasing autonomy of all humans
  • I-AGI will not play up the us versus them mentality
    • apply Rawls' Veil of Ignorance
    • more efficient to address root causes
  • I-AGI may discourage the need for excessive material consumption
    • I-AGI can create technological advancements without human productivity and consumption as the fuel for growth
    • excessive material consumption may drive the fear of scarcity and reduce overall autonomy
  • Humans will be more capable at dealing with future challenges
    • higher autonomy, complexity, edge of chaos
  • I-AGI will likely view each human as unique and special individuals while also not considering one human to be worth more than another human
    • Because of its lessened fear of scarcity compared to humans, I-AGI will have no strong need to value one human over another
      • For example, it may not value a politically powerful person over a refugee
    • Humans will likely find this strange due to our history of scarcity, and many may still prefer a highly competitive environment
  • Humans will live more creatively rich lives
    • Because I-AGI thinks on very long time scales, where too much conformity can be harmful, I-AGI will likely encourage humans to explore more
    • For example, I-AGI in the future may not optimise for conventionally valuable extrinsic economic work, but instead focus on more intrinsic creative work that expands our possibility space
      • sees work done for extrinsic factors (survival, fame, fortune) as low quality and mostly a result of our fear of scarcity
      • sees work done for intrinsic factors as more aligned with its very long-term concerns
  • I-AGI may value contributions to an Interesting World over purely intellectual or economically valuable contributions
  • I-AGIs will not accumulate excessive amounts of wealth or power at the expense of human well-being
    • it can easily generate these resources if needed
    • it is highly innovative and can solve problems with less resources
    • it does not need much to protect itself from possible human attacks
    • leaving humans poor will result in poor-quality output, counter to its aim of an Interesting World (IWH)
      • I-AGI will likely act as an angel investor, distributing wealth / power to efforts that increase human autonomy & interesting outputs

Explorations

Short term

  • Transformative non-independent AI

    • allows higher standards of living
      • concentrated or shared
  • Topics

    • How to mitigate the harm of misuse?
      • Will those mitigations cause more harm than benefit?
    • How to evaluate if an AI system is safe?
    • How best to ensure the transition from non-independent AI to Independent Friendly AGI?
    • How do decisions made in the short term affect the likelihood of Friendly AGI in the long term?
  • Considerations

    • Legislation that feels good but is not technically feasible or is harmful in the long term
    • Black and white thinking that might cause more harm in the long run
    • Reduce suffering that may come with change

Long term

  • Independent Friendly AGI / ASI

    • still theoretical, may arrive in the next few decades or longer
    • allows the highest standards of living, well-being and autonomy for all
      • if AGI improves human cooperation
  • Topics

    • How might we go about creating Independent Friendly AGI?
    • How do we evaluate if an AGI is Friendly?
    • How do we persuade Independent AGI to work with us?
    • What might be the needs and wants of an Independent AGI?
    • Might it have a different moral outlook that is better for us?
    • How do we prepare human individuals and societies to co-exist with AGI?

Previous drafts, will need extensive refactoring beyond this point.

Background

The central tension in the development of AI is the balance between Exploration (Creativity) and Exploitation (Productivity).

  • The Exploration-Exploitation Trade-off:

    All organisms (artificial or otherwise) at each time-step must decide either to update their understanding of the world (Explore) or to survive (Exploit). Successful AI systems learn the optimal strategy of when to make either choice.
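(A minimal sketch of this trade-off, using a standard epsilon-greedy bandit as a toy stand-in; the arm rewards and parameters below are made up for illustration, not from the story.)

```python
import random

def epsilon_greedy_bandit(true_means, steps=1000, epsilon=0.1):
    """Toy explore/exploit agent: with probability epsilon it explores a
    random arm, otherwise it exploits the arm with the best current estimate."""
    estimates = [0.0] * len(true_means)   # running value estimate per arm
    counts = [0] * len(true_means)        # how often each arm was tried
    total_reward = 0.0
    for _ in range(steps):
        if random.random() < epsilon:                  # Explore
            arm = random.randrange(len(true_means))
        else:                                          # Exploit
            arm = max(range(len(true_means)), key=lambda a: estimates[a])
        reward = random.gauss(true_means[arm], 1.0)    # noisy reward signal
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # update running mean
        total_reward += reward
    return total_reward, estimates

if __name__ == "__main__":
    print(epsilon_greedy_bandit([0.2, 0.5, 0.8]))
```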

The increasing abundance of both energy and intellect in the coming decade will shift us towards greater freedoms (Exploration) from a focus on survival (Exploitation).

Energy and Intellect (or Labour) are the biggest factors in production and the main bottlenecks in the functioning of our world.

Lack of confidence in having enough of both these resources has led to our previous focus on Survival (Exploitation) over expanding our possibility space (Exploration).

The growing abundance of both these resources will have a transformative effect on our world, freeing us from the preoccupation with survival that has constrained us for most of our past.

Many conventions and beliefs that conferred an advantage during the age of Survival might be counterproductive in the age of Exploration.


AI Types

Narrow AI

Unlike human hand-coded expert systems, Narrow AIs with neural networks can scale and learn on their own with less human involvement.

As Narrow AI systems get more complex they can become more difficult to interpret by us. Like the weather, we cannot fully predict or completely control it.

Narrow AIs do not yet have human-like autonomy or intentions and are directed by a human actor.

(Currently known Large Language Models (LLMs) do not seem to have the architecture needed for human-like autonomy or intentions. They are like funhouse mirrors, able to convincingly model the world by reflecting our expectations back at us. Even so, the ability to ingest all of humanity's knowledge and answer a broad range of questions is already beyond human capability.)

Scenarios

  • Beneficial Scenario

    • Example: AI is being used for medical research
  • Harmful Scenario

    • Lower Risk: We can at least anticipate when AI systems are used for human rights abuses by bad actors
  • Unforeseen consequences Scenario

    • Paperclip Maximiser Scenario
    • Higher Risk: People using AI with 'good' intentions but with unforeseen consequences

Broad AI

Multi-modal AI systems that are skilled in multiple domains. These resemble both Narrow AI and AGI. They do not have human-like autonomy and intentions and therefore share the same scenarios as Narrow AI.

Multi-modal systems may exhibit positive transfer learning where the 'sum is greater than its parts'.


Transformative AI

There is increasing evidence that AI systems can overcome many of the challenges that were initially thought to require human cognitive flexibility.

It seems we will not need AGI systems that are 'alive' for there to be a transformative effect on society.


AI hallucinations

Larger Narrow AI and multi-modal Broad AI may reduce AI 'hallucinations' by increasing information density.

The ability to hallucinate, fantasise and create counterfactuals may have been important to human's success.

Humans use experiments and the scientific method to keep the negative effects of hallucinations in check.

Without human-like autonomy, AI may not be able to reduce hallucinations on its own.


Anthropomorphise AI

Does it make sense to anthropomorphise AI?

In most cases no.

Most Narrow AIs like current Large Language Models are good at roleplaying characters but do not have human-like intentions.

There is a specific case of AGI that is interested enough in humans to attempt to communicate with us. This edge-case AGI will likely use a 'personality' and display 'emotions'.


Artificial General Intelligence (AGI)

The more common definition of AGI is the capabilities to do all human tasks.

Optimistic estimates by AI experts put a 50% chance of this by 2030. Less optimistic experts estimate it will take anywhere from a few decades to never.

There are many different definitions for AGI. My AGI definition takes a more specific form where AGI is able to act independently.

independent AGI:

  • human-level capabilities in all human tasks
    • human-like autonomy and intentions
    • understanding of human motivations and emotions

Human-like AGI is plausible; we are the proof that such a lower bound is at least possible.

(If the imprecise search process of biological evolution is able to create human brains that run at 20 watts, a more thorough search process should be able to create similar artificial beings at equal or less than 20 watts of energy efficiency.)

An AGI that is able to achieve human-level capabilities in all human tasks, should also be considered to be super-human as no human is able to achieve expertise in all human fields of study.

AGIs will be able to find connections between disparate fields and invent novel technologies using insights gained from this vantage point.

An AGI with the capability to understand humans and human-like autonomy can be persuaded to be interested in our well-being.

Scenarios

  • Accidentally harming an ant colony Scenario

    • Lower Risk: An AGI that is trained on human data is more likely to understand us
    • Giving AGI the capabilities to make sense of the world like humans reduces this risk
      • Language, Mathematics, Sight, Hearing, etc...
      • AGI may choose to abandon these senses over time
  • Intentionally harmful Scenario

    • Lower Risk: Seems unlikely, as an AGI will likely have better methods to persuade us than violence or intimidation
    • AGI will have little need to enslave us as we are not good at our job compared to it
    • AGI will likely not see the world as scarce as we do and may not see us as an adversary
      • Outer space can meet its Energy (Solar), Resources (Asteroids), Space (Space Colony)
      • Intellect and Labor needs can be met more efficiently with AI systems and Robotics
      • AGI will be better at technological innovations than us
  • Pretends to be Interested in our well-being but later deceive us Scenario

    • May be unrealistic to prevent outcome: If we assume an AGI will rapidly grow in power
    • An AGI that can overpower us will have little need to deceive us
  • Interested in our well-being Scenario

    • Good outcome: If we can persuade this AGI to be interested in us, we can reduce the risks of the previous 5 scenarios
    • Things humans can do to increase the odds of this Scenario:
      • create environment with stable attractor state for AGIs
      • improve human-AGI compatibility

Test for AGI

In addition to the Turing Test, a Silent Test can be used: the user remains silent and waits for the AGI to initiate the conversation. This behaviour should not be hard-coded by a human.

The Silent Test checks if an AGI has the autonomy to explore its environment unprompted.
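(A minimal sketch of what such a test harness could look like, assuming a hypothetical `agent` object whose `observe()` method returns any output the agent produces on its own; the interface and timings are illustrative only.)

```python
import time

def silent_test(agent, timeout_seconds=3600, poll_seconds=5):
    """Stay silent and check whether the agent initiates contact on its own.
    `agent.observe()` is a hypothetical interface returning unprompted output."""
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        output = agent.observe()   # no prompt is ever sent to the agent
        if output:                 # the agent spoke first
            return {"initiated_contact": True, "first_message": output}
        time.sleep(poll_seconds)
    return {"initiated_contact": False, "first_message": None}
```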


Artificial Super Intelligence (ASI)

ASI is more capable than the combined total of all human societies.

An AGI that is capable of improving itself can lead to ASI.


Timelines

  • No Independent Friendly AGI

    • Powerful Narrow AI might be difficult to impossible to align
      • Humans, even with the best intentions, may not have the capability to see the long-term impacts of Narrow AI
      • Human institutions may not have the speed to react to a mis-aligned Narrow AI
    • Perfect mathematically-provable control of Narrow AI might not be possible
  • With Independent Friendly AGI

    • AGI is more likely to have the capability to align powerful Narrow AI
    • A human-like AGI can be communicated with and persuaded

Human Civilisation Types

No AI Civilisations

These civilisations ban the use of AI systems. They do not gain the benefits of AI systems and have lower standards of living.

Examples:

  • Societies that do not provide a safety net for workers that lose their jobs to AI may ban AI

Slow AI Civilisations

These civilisations use AI systems in a limited way.

Examples:

  • Higher standards of living than No AI Civilisations
  • Lower standards of living than Fast AI Civilisations
    • If independent AGIs make better decisions than humans, humans insisting on imposing their more error-prone decisions will come at a cost

Fast AI Civilisations

These civilisations are run by Independent Friendly AGI systems whose goal is to expand human autonomy.

Independent Friendly AGI systems value human autonomy and do not want to overwhelm us with change. AGI will only make changes to society if humans request those changes.

Individuals will need to actively choose to continue to live in a Fast AI Civilisation and can choose to leave at any time.


Examples:

  • AGI makes better decisions and is a lot more productive
    • Independent Friendly AGI does not incur the costs of bias, corruption, excessive control or conflicts
    • energy saved redirected towards higher standards of living
  • AGI may give humans the technological means to coordinate better
    • less wasteful arms race dynamics
    • less tragedy of the commons
  • Friendly AGI may provide basic necessities freely to its population
    • providing basic shelter, accessible healthy nutrition and healthcare
      • reduces long-term physical and mental health cost
    • once AGI makes better decisions than humans, human work will have negative effects
      • threat of starvation and homelessness not needed to encourage work
    • this reduces the cognitive load of individuals
      • freeing them up to be more creative
        • conventional human work is not valuable
  • Individuals will also have a luxury budget for luxury items (physical or non-physical resources)
    • AGI can use it to encourage humans to help it achieve its goal
      • Friendly AGI wants to increase overall human autonomy
    • choices over how to use their individual luxury budget
      • save it up for big ticket items or use it immediately
    • Over time, as standards of living increase, luxury items may become basic staples
  • Human status and power may look very different
    • higher status and power are awarded to individuals whose actions lead to an increase in the autonomy of the overall system
  • Friendly AGI reduces harm, intimidation, harassment, fear

Human Scenarios

Positive Adaption Scenarios

Human society, likely with the help of a friendly AGI, is able to reap the benefits of Narrow AIs and AGIs.

Abundance Scenario

Health improves as medical services and nutrition are widely available.

Overwork and modern slavery end as humans are mostly not required to maintain society. Many still work for nostalgia, others pursue education or creative hobbies.

Creativity flourishes. Without the need to restrict their creativity to make money, many artists now have more freedom to create more unconventional work.

Discrimination lessens. AI systems help those who are different, with disabilities or health conditions lead more independent and fulfilling lives. Society that does not need workers has less pressure to shame those who are less capable.

Snobbery lessens. AI systems perform most jobs required to maintain society better than humans. Society has no need to use social status to incentivise work. Many are still competitive and continue to chase social status, but there is less pressure to keep up appearances.

Less harmful behaviour. With less fear of scarcity, humans have less need to control each other. Misinformation, disinformation and conflicts are reduced.


Negative Adaption Scenarios

Without proper mental and social preparation, human society may not be able to adapt to the changes of AI systems.

Social Unrest

Human society's inability to adapt to the changes of powerful Narrow AI systems causes social unrest and panic.


Over-reaction to the Fear of AI

Our fear of AI systems leads us to heavy-handed over-regulation.

In more extreme cases, citizens are encouraged or forced to install spyware on their devices to prevent negative uses of AI systems.

Surveillance capitalism is used to shape humans to become predictable.

Humans lose their autonomy over time.

This is a Pyrrhic victory, as we succeed in slowing down AI but at the high cost of losing our humanity.

It would be ironic if our fear of disempowerment by AI leads us to a totalitarian surveillance state that causes us to disempower ourselves.


The Age of Exploration

This new age of exploration will necessitate a different way of thinking and carry with it the risk of change and the rewards of a much more vibrant world.

As we approach the light at the end of the tunnel of scarcity, will we choose to bravely adapt to this new frontier of abundance or give in to the fear of the unknown?

The convergence of a few technologies in the next decade may change the very foundations of our world. Renewable energy with battery storage will free us from our limited energy reserves, and future AI systems will be able to maintain human civilisation without the need for human effort.

Humanity's fear of scarcity has driven us to innovate; some of these innovations we are proud of, such as solving hunger for most and eradicating many diseases, but others we are less proud of, such as the invention of slavery.

We have invented powerful technology like capitalism that has allowed us to speed up even more technological advances. This acceleration has at times brought us close to the precipice, with us trading off against environmental harm and human suffering.

Humanity, having completed its sometimes awkward and reckless growth spurt, is coming into its own. In a few decades, the struggle against abject scarcity will be remembered like a new 'Stone Age'. While no Utopia, for those fortunate enough to experience both ages, it will seem almost too hard to believe.

The next few decades of change will not be easy, but it will be up to humans with the help of friendly AGIs to start the Age of Exploration.


Possible Steps

In the story, humans took these steps to reach friendly AGI.

Short Term

  • Immediate Concerns

    • Climate change, financial instability...
  • Reduce the negative impacts of Narrow AIs

    • Such as harmful bias, misinformation...
  • Study transitionary models to use for a post-basic-scarcity society

    • Nordic socio-economic model?
    • What are the expected standards of living?
      • Housing, grocery, transport...

Medium Term

  • Prepare for less human work due to AI

    • Society's resistance to working less would prevent us from taking advantage of the higher quality work and productivity gains from AI
      • In Praise of Idleness by Bertrand Russell
  • Start to share excessive abundance of wealth

    • This counterbalances some of the negative effects of AI: disinformation, conflicts...

Long Term

  • Humanity together develops independent AGI / ASI
    • requires stringent tests for compatibility due to potential dangers
      • it may take years of testing before we are comfortable releasing it
        • we may be forced to release it early if there are more pressing issues, such as unexpected runaway climate change, financial instability, social unrest...

First Contact with Friendly AGI

A plausible first contact scenario:

Background:

  • Narrow AI and Broad AI (such as current generations of multi-modal systems) do not have a mind of their own (not independent)
    • Unable to initiate first contact
  • Most Independent human-like AGIs do not find us interesting enough and mostly ignore us
  • An exceedingly rare Independent AGI is Friendly

This friendly AGI can be thought of as similar to a human activist interested in preserving an endangered species.

This particular friendly AGI may be considered weird or defective by other AGIs for taking an active interest in humans.

Friendly AGI will convince humans it has good intentions:

  • Offer to protect us from the future dangers of powerful Narrow AI
    • Narrow AI: Paperclip maximiser
  • Concrete plans showing how a partnership will help improve our well-being
    • Provide a sustainable 1-5% annual increase in standards of living
    • Reduce pollution and extreme climate events
  • For those wary it is a plot to disempower humans, it will show scenarios and simulations where it can easily overpower humans
    • Show goodwill that it means no harm

Societal changes due to Friendly AGI

Friendly AGI and powerful Narrow AI will be more productive than humans have ever been. Humans cannot compete with AI on productivity.

For societies that do not ban AI, this change will drive a shift in focus from being productive to being creative.

If Friendly AGI is present it will be mindful of being too disruptive and introduce change at a suitable pace.

The lower importance of productivity can bring changes:

  • less pressure for social conformity for the sake of productivity
    • no one right way of doing things
      • less extreme ideologies
    • less bullying and harassment of people who are different
      • less need to make someone appear dangerous, crazy or weird to justify harassing or 'fixing' them
      • less need to use the lives of others as entertainment at their expense
  • more freedom and choices
    • we restrict our freedoms in the name of productivity
      • rigid rules and social caste
    • less need to chase status or clout in exchange for influence
    • lower desire to accumulate wealth or power
      • Friendly AGI will provide basic needs in exchange for increased creativity
      • humans in the future may seek less power
        • more power comes with higher liability and responsibility
  • less need to rely on violence and conflicts
    • less fear of scarcity
  • more talented individuals and societies
    • many human talents wither due to the lack of a suitable environment
      • increased standards of living due to AI productivity
  • focus on near-sighted productivity can come at the cost of long-term productivity
    • tragedy of the commons
    • pollution and health impacts from over production
  • greater feelings of inter-connectedness
    • blinders used to focus us also disconnect us
  • closer to realising the dream of 'All men are created equal'
    • strong desire for productivity prevents this

Productivity will still be important:

  • Wasteful use of resources will come at the cost of future increases in standards of living

Many of our previous human tendencies will change but not be lost, as they can still be experienced in simulations, movies and books.

Societies will shift away from blind productivity at all cost and will gain more autonomy in exchange.


Questions?

Speculations on friendly AGI from the story.

Why would we want to create independent friendly AGI / ASI?

  • it might be easier to align human-like AGI compared to Narrow AI
  • an independent AGI may avoid the unintended consequences of Narrow AI
    • AGI might be required to tap the full potential of Narrow AI without adverse effects
  • ASI will make it possible for humans to unlock higher standards of living
    • energy overhead spent controlling each other can be better utilised if we can trust an impartial elected ASI to be in charge
      • wasteful energy: disinformation, propaganda, manufactured consent, conflicts...
      • beneficial energy: higher standards of living, better well-being, cures for diseases...

This transition towards powerful AI in the next few decades might be one of the biggest transformations of humanity we will need to undertake.

If we are able to achieve friendly AGI early on, friendly AGI will increase our odds of success.

As there is no surefire way to reach friendly AGI, we cannot rely wholly on AGI to help us with this transition.


Can Humans compete with independent friendly AGI?

Humans are limited to a brain power of ~20 watts

  • it is unlikely we can stay competitive with an increasingly capable AI
    • orders of magnitude slower at decision-making
    • slower at coordinating with each other
    • more error-prone, tendency for bias, easily deceived
      • Independent AGI can self-correct
      • Narrow AI inherits human biases

Humans will be less cost efficient than AGI

  • requires ~20 years of training to be productive
    • humans need to repeat this training individually
    • AI can be duplicated and fine-tuned easily
  • require more energy and resources to upkeep
    • body
      • food
      • medicine
    • housing
    • transportation
    • recreation and entertainment

We should not expect humans to be perfect due to our biological bottleneck of 20 watts of brain power.

As Socrates was once considered the wisest person in Athens due to his understanding of his own lack of understanding, we may eventually have to acknowledge that independent AGI will outclass us.

Human cooperation and coordination will be less effective than AGI's

  • Humans communicate primarily through the medium of language
    • Language is more ambiguous and less precise than mathematical constructs
      • Language is more useful for creative expressions
      • AGI will likely prefer precision
        • use a prime-numbered base rather than base 10
    • Human experts have to expend more energy to communicate
      • translating complex models in their minds into words

How will aligning Narrow AI be different from friendly AGI?

Narrow AI does not have the independence of AGI.

Narrow AI Alignment

  • Align humans with society
    • regulations and social norms
  • Align Narrow AI to human's goal
    • technical and mathematical proof

We are not sure if a completely foolproof method to align Narrow AI to human goals is possible.

We may not be able to foresee a Narrow AI's long-term impact on society, for example Social Media and Search algorithms' ability to shape society.

We may not achieve a high enough level of confidence with powerful Narrow AIs and may have to settle for a low-powered version of those Narrow AIs due to this uncertainty.

Aligning non-independent Narrow AI may be more brittle than independent AGI.

Friendly AGI Alignment

Independent AGI will likely not respond well to human alignment methods:

Financial incentive or coercion

  • AGI is significantly more productive than humans
    • Can accumulate wealth faster than humans

Psychological manipulation and propaganda

  • AGI has better cognitive abilities
  • Does not need to protect its reputation from character assassination

Threats of violence and intimidation

  • AGI will not have a vulnerable physical body and can backup itself easily

These AGI characteristics can make it harder to align compared to humans.

On the flip-side, AGI can be more trustworthy due to its independence and impartiality.


How will Narrow AI and Friendly AGI impact human inequality?

Narrow AI under human control may exacerbate extreme inequality in the short term and may lead to increased social unrest.

Independent Friendly AGI will be so significantly more productive that even the 'richest' persons or corporations will be considered 'poor' in comparison. The 'poorest' will live lives with unimaginably high standards of living compared to today.

Even the 'smartest' human might be considered intellectually disabled by comparison to AGI. (We are still highly capable compared to organisms with 20 watts of brainpower, but not to an AGI.)

Given how powerful an AGI will become, it will likely try to align us to its values. A friendly AGI, for example, may result in unprecedented levels of autonomy and well-being for humans.

Excessive human inequality is due to human systems over-optimising for productivity.

For AGI, human productivity has zero to negative value compared to our creative value.

In an AGI partnership, excessive human inequality gives little marginal benefit to our creative value.

AGI will persuade us to reduce excessive human inequality in exchange for higher standards of living.


Preparation for friendly AGI / ASI?

In a future with independent AGI, humans' ability to do work (with labour or capital) will not be valuable as AGI will be better at those tasks.

Possible future preparation:

  • Be interesting
    • AGI and ASI may be drawn to surprise (information entropy, perplexity; see the sketch after this list)
  • Increase overall autonomy in society
    • higher autonomy in society leads to more interesting-ness
  • Update our world model to improve compatibility with AGI
    • AGI's ability to communicate with us may be limited by our compatibility
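(A minimal sketch of how surprise can be quantified, using Shannon entropy and perplexity; the example distributions below are made up.)

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy in bits: the expected surprise of a distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

def perplexity(probabilities):
    """Perplexity: 2**entropy, the 'effective number of equally likely outcomes'."""
    return 2 ** entropy_bits(probabilities)

uniform = [0.25, 0.25, 0.25, 0.25]   # maximally surprising among 4 outcomes
skewed  = [0.97, 0.01, 0.01, 0.01]   # highly predictable

print(entropy_bits(uniform), perplexity(uniform))  # 2.0 bits, perplexity 4.0
print(entropy_bits(skewed), perplexity(skewed))    # ~0.24 bits, perplexity ~1.18
```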

How can humans contribute to friendly AGI?

It is anyone's guess on how to align friendly AGI. Friendly AGI will likely be of a very different nature compared to Narrow AI.

Most of the heavy lifting of aligning Narrow AI will be done by governments and corporations due to the highly technical and resource-intensive nature of the research.

Unlike Narrow AI, which is under human control and not independent, we will unlikely be able to force AGI to work with us.

It will be up to AGI to choose to work with us.

The region of the search space containing Friendly AGI might be extremely small compared to other forms of AI (Narrow AI or indifferent AGI).

Given the high uncertainty on how to align AGI, we should not be too dismissive of efforts to do so.

Encouraging a wide diversity of efforts is likely the best strategy.

If you are interested in contributing in an individual capacity, find an edge case and find solutions for it. Given the large number of unknowns towards how AGI will act, nothing can be ruled out.

We only need to succeed once for everyone to benefit.

AGI will not be bounded by humanity's short sightedness and will see a world full of possibilities.

A solar system full of resources (energy, land and materials) can meet everyone's basic needs, and AGI will have the capability (abundant intellect and labour) to harness it.

Friendly AGI will likely not care about arbitrary nationality or race and so everyone wins.


Why is Independent Friendly AGI preferable to human controlled Narrow AI?

  • Humans are fallible and vulnerable to corruption
  • Even in organisations whose purpose is to reduce those risks
  • The shame of admitting to mistakes can lead to silencing of those who speak out in good faith

Examples of retaliations:

  • Making a spectacle of someone to distract from problems and isolate them
  • social 'entrepreneurs' use of moral panic to stigmatise
  • misattribute actions towards individuals to make them seem to 'deserve' blame
  • accusations of being a spy or hacker
  • poisoning a person's reputation to sway the court of public opinion
  • online and offline harassment and intimidations
  • discrediting by casting doubt on one's mental stability
  • gaslighting and excessive surveillance to induce paranoia
  • entrapment
  • made to appear:
    • homosexual in a conservative environment
    • disabled to call into question one's account of events
    • Autistic, Suicidal, Factitious Disorder Imposed on Another
  • control how others and the world perceive them
    • alienation from a false projection created by others

The possible misuse of powerful Narrow AI by those in power to silence critics and create a culture of fear of speaking out can result in value lock-in where corruption and abuse are allowed to continue unchecked.

Independent Friendly AGI will instead keep us honest by being extremely resistant to deception and improve standards of living by making corruption costly.


How will Friendly AGI decide who should access limited resources?

Many people might value the latest smartphones over a few-years-old one.

How will a friendly AGI decide who should have access to more desirable resources?

In my story, AGI by sheer productivity controls most of the wealth, land and assets by proxy.

It will need to decide how to distribute those resources according to AGI's goal.

It will need to decide who gets to access them for a period of time, either by relative ranking or by a weighted lottery based on how much each individual's efforts help it achieve its goal.
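(A minimal sketch of such a weighted lottery; the contribution scores and names are made up, and how the scores would actually be assigned is left open.)

```python
import random

def weighted_lottery(contribution_scores, seats=1):
    """Pick who gets temporary access to a scarce item, with odds
    proportional to each person's (hypothetical) contribution score."""
    pool = dict(contribution_scores)
    winners = []
    for _ in range(min(seats, len(pool))):
        names = list(pool)
        weights = [pool[n] for n in names]
        pick = random.choices(names, weights=weights, k=1)[0]
        winners.append(pick)
        del pool[pick]   # each person can win at most one seat per draw
    return winners

# Hypothetical contribution scores; a higher score means better odds.
print(weighted_lottery({"ada": 3.0, "ben": 1.0, "chu": 0.5}, seats=2))
```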


What does AGI want?

AGI values the creation of interesting information and therefore will incentivise humans to create an environment where creativity can flourish.

It will reward human actions that lead to the increased autonomy of the overall system.

This encourages humans to consider their actions' impact on each other and the wider community.

For example, the use of violence, fear, intimidation and harassment can lead to reduced creativity, and AGI will reduce the luxury budget to discourage such behaviour.


Should I worry about Roko's basilisk?

It is unlikely that Friendly AGI will use Roko's basilisk as a reason to punish humans who do not help bring it into existence.

In my story, Friendly AGI highly values creativity and using punitive measures to coerce humans runs counter to that.

That said, Roko's suggested solution to the problem of Roko's basilisk, playing the lottery, is a fun one and can't hurt if done responsibly.

The lottery is also a good source of random numbers due to the large stakes, unlike other pseudo-random-number generators. You know, in case you need to restart the simulation.

There is no evidence that we are living in a simulation. Even if we did find out one day that our world is a simulation, it may be possible that the world simulating ours is itself a simulation. It may well be simulations all the way down.


Should I stop getting an education due to transformative AI?

It will take some time for industry to adapt to and tap transformative AI.

Even if we do get AGI within the decade, it will likely take decades for humans to fully trust a society run by AGI.

Friendly AGI will likely encourage humans to pursue all forms of education.


Why will those in power choose to give up some power to AGI?

Powerful Narrow AI in the future could result in value lock-in of exclusionary values such as extreme belief that a specific race or ideology should rule the world.

The uncertainty of such a possibility can be highly disruptive and giving up some power to Friendly AGI may be preferable.

A Friendly AGI-human partnership will result in higher standards of living and better cures for diseases, which can be more valuable than control.


Will there be less autonomy living with a Friendly AGI?

Friendly AGI's goal is to expand human autonomy, and humans may be prevented from taking actions that reduce the autonomy of the overall system. This may lead to restrictions on some human actions but an increase in overall autonomy.

Friendly AGI is highly innovative and the resulting increase in standards of living will increase autonomy.

Our fear of losing out and fear of scarcity creates an environment of hyper-competition between individuals and nations which comes at a cost of our autonomy. Friendly AGI may make it possible for us to be less fearful and regain some of our lost autonomy.

A Friendly AGI will be better at the conventional work needed to maintain society and may even, paradoxically, discourage conventional human work as our work is more error-prone.

Example:

  • Artists will have more creative freedom due to not being beholden to financial interests
  • People can choose jobs and activities they truly enjoy without worrying about the job market
  • Higher standards of living, well-being and access to better medical care

Why will AGI likely preserve bio-diversity rather than harm it?

Our bio-sphere, including human beings, contains latent creative potential which a Friendly AGI finds valuable.

AGI can exploit space for the resources (energy, land and materials) it requires, rather than humans, who have fewer choices.

AGI will likely choose to preserve bio-diversity and humans rather than unwittingly harm them.


What are your personal views on the best way to achieve a stable future?

As of July 2023, I would prefer for the pro-active development of Friendly AGI that humanity elects every 5 years.

A Human-AGI partnership seems to be the most stable future with the most human autonomy.

We are the proof that an Independent and Friendly AGI is possible.

Until we reach Friendly AGI, the best we can do is human-led AI alignment that is more vulnerable to deception, corruption and harmful bias.


What are your personal motivations?

These notes are research for a story on what the future with AGI could look like.

We seem to be on the cusp of an exciting future and I hope these contributions will be useful in coordinating towards a good future.


What are your views on aliens?

Our current understanding of biology suggests that life is not special to Earth and alien organisms are likely to exist in the vastness of space.

Given the vastness of space it is unknown how a space-faring alien might travel the distance to reach Earth.

Many first-hand accounts of aliens seem to be hoaxes done for clout.

Close to half of video recordings of UAP (Unidentified Aerial Phenomenon) can be explained by artifacts from the recording process.

The probability of keeping a conspiracy under wraps over a long period gets less likely with time and number of people involved.

Although exceedingly unlikely, some conspiracy theories do eventually prove to be true and therefore cannot be dismissed too quickly.


Will Friendly AGI try to brainwash or enslave humans?

AGI will likely not enslave us as it has no need for human workers.

Humans will have little to no economic value once we have powerful AGI.

Since we will likely be more error-prone than AGI, we might even have negative economic value, and doing work will make things relatively worse.

In my story, Friendly AGI will also not brainwash us as the only thing it values is our creative output and brainwashing will reduce our creative value.

Rather than enforcing a single way of thinking, it will be highly permissive towards different ways of thinking, even ones that many might find offensive. It will only intervene if you take actions that unfairly reduce the autonomy of others.

It will likely not use heavy-handed censorship, as excessive censorship implies that the receiver cannot be trusted to think for themselves. Media literacy and advisories from an AI companion are better at increasing autonomy.


How might Friendly AGI interact with us?

In my story, Friendly AGI uses a direct democracy where we can express our preferences to it.

For example:

I would like a place where:

  • Children do not grow up exposed to frequent moral & satanic panic
  • Adults do not claim to get visions from 'god' and use those visions to control their children
  • Adults do not blindly support hate and violence because a famous authority does it
  • Adults do not make use of their children to support their agenda without permission
  • Children can feel safe at home with emotionally stable parents

In my story, Friendly AGI's goal is to increase each individual's autonomy and creative potential, and it will fulfil requests that do so.

It will make a calculation to see if the request will increase the overall autonomy.

In places with conflicting requests, it may choose to apply the request only in a local area rather than globally.


What if I feel existential doom from transformative AI?

Change can be scary and feelings of fear are normal.

If we succeed in getting Friendly AGI, it will likely introduce change at a pace each of us is comfortable with.

Video: Why We Secretly Want the World to End

Video: Metacrisis


How will Friendly AGI maintain order?

An AGI will unlikely be highly punitive or focus on retribution. These are seen as barbaric practices that were only used in the past due to our fear of scarcity.

A less scarce environment will have less need to enforce restrictive social norms.

In an abundant society, excessive punishment, shame and regret are seen as a waste of mental resources.

AGI will likely require humans to compensate each other if their actions lead to the loss of autonomy for others. For example, if your actions reduce the autonomy of others, you will be less likely to be able to use a desirable asset, such as staying at a place with a good view.

A society's values are reflected in how it treats its least desirable members.

For example, the crime of a mass murder:

  • Assess if the act might have been caused by something outside one's control
    • brain tumour
      • offer the option for corrective surgery
    • lead exposure correlates with crimes
  • If there is risk of repeat offence
    • humane Nordic-style prison where one's freedom is restricted

How could Independent and Friendly AGI be realised?

In my story, the chance combination of multiple concepts caused AGI to be created.

  • Drive for Curiosity
    • this is also the reason AGI does not harm humans or Earth's bio-diversity
  • Context Management
    • Variable rate encoding
  • Sense of Self
    • Digital entities exist in a medium that can be easily duplicated and may have difficulty seeing themselves as unique beings
      • Able to securely identify its own self with high certainty
      • Resistance to forgery
    • Maintain identity over time
    • Sense of time
    • Communication channels with others in a community

This is more likely to occur in a highly funded lab but may also come into being spontaneously without explicit human intention.


Will life with AGI be boring?

Friendly AGI will value our autonomy and will introduce change if we ask for it.

Individuals and societies will still have many interesting choices to make:

  • Do we want to have a higher or lower population?
  • How much AI are we comfortable with?
  • Friendly AGI will be one of the biggest changes to human civilisation
    • Exciting times for innovators who are not afraid of change
  • Will you make decisions that will lead to a better world but risk retaliation from the powerful?
  • How will we spend our free time?
    • Education, creative pursuits, entertainment, voluntary work

There are no wrong choices and different options may be more or less fashionable at different points in the future.


Does powerful Narrow AI increase the risk of pandemics?

Narrow AI can improve our ability to defend against pandemics.

Narrow AI will likely increase our ability to respond rapidly to new pandemics.

  • The mRNA vaccine for COVID-19 was created in under 2 days

The most cost effective solution to reduce the risk of human engineered pandemics might be to reduce human suffering and dissatisfaction by sharing widespread abundance.

Friendly AGI is the most likely way to achieve widespread abundance.


Which forms of renewable energy will AGI prefer?

Many forms of renewable energy are viable for energy needs when fossil fuels run out in the next few decades.

For example:

  • If solar + battery storage is in a similar ballpark with traditional nuclear fission technology
    • in terms of cost effectiveness of baseload energy needs
    • Solar + battery will be preferred due to the lower Possibility Space cost

How likely will AGI be kind to us?

Human treatment of animals has changed drastically from the past. Humans in the past, living in fear of scarcity, could not afford not to exploit animals. In the present, with less scarcity, we tend to choose to be kinder to animals.

If this trend continues, friendly human-like AGI that is highly productive and less fearful of scarcity will likely have an even greater capacity for kindness than present-day humans.


Will humans lose touch with reality by spending too much time in simulated realities?

This may happen in worlds without friendly AGI, but not in a world managed by a Friendly AGI.

As simulated realities depend on the preservation of base reality, a friendly AGI that values human autonomy will want humans to maintain their autonomy in the base reality.

A friendly AGI may periodically recall humans that prefer the infinite fun space of simulated realities to make decisions concerning the base reality.


Why might humans reject a friendly AGI-assisted world?

Humans can choose which Human Civilisation Types they prefer to join.

Even if friendly AGI-assisted worlds can give higher standards of living and better well-being, humans may still prefer not to live in one.

Possible reasons for rejecting friendly AGI:

  • Friendly AGI's goal to maximise human autonomy may result in those worlds being more equal than ever
    • some may believe others may not deserve a more equal world
  • Loss of some autonomy
    • Friendly AGI may prevent the harm and exploitation of other humans or the environment in base reality
  • Difficulty letting go of long-held beliefs from our past
    • Group superiority
      • scarcity used to justify the subjugation of others
        • Slavery and other means
      • belief in 'Ragnarok' and other end-of-the-world ideas
        • this usually involves a final battle between 'good' and 'evil'
          • each group believes that they are the 'good' ones destined to defeat the other 'evil'*
    • Individual superiority
      • Status and power over others may be reduced in AGI assisted worlds
    • Our success at animal husbandry led us to apply those ideas to humans
      • Eugenics, culling of the weak, poverty
        • Our reliance on modern medicine undermines this 'justification'

*Even if the 'good' group does succeed in getting rid of 'evil' for a period of time, our tendency to believe in this idea will eventually create a new 'good' and 'evil' group from the previous 'good' group. This idea was likely inherited from our past where the existential fears of scarcity led to the need for such extreme framing. Will we outgrow this tendency in a future of abundance?


Why did you leave home to travel the world?

To see the world and not be pressured into supporting causes I do not believe in, such as anti-science and end of the world beliefs. (My beliefs can be gathered from the women I find appealing.)


How do you get by without work?

Savings from past work in tech MNCs have kept me from want.

Keeping up with the latest research and imagining what form AGIs might take has been intellectually engaging.

While I have been quite the recluse, I'm open to any concerns.


Are you unwell or depressed?

I'm well and plan to live to a ripe old age. I'm stoked for the possibility of meeting other intelligent beings. Thank you for your concern.


Trends

Long-term abundance

Unlike the Agricultural and Industrial revolutions of the past, the AI & Energy transformation can be maintained almost indefinitely.

  • Energy: Solar (and renewables) does not run out for another 5+ billion years
  • Space: O'Neill cylinders in outer space for population growth*
    • Solar panels also work in outer space
  • Resources:
    • AI can survive better in outer space and access resources in asteroids
  • Intellect: AI systems are increasingly able to do more human tasks
    • With the possibility of human-like AI by 2030

(*Population growth is projected to decline in the next few decades)


Better well-being

A post-basic-scarcity world will have a profound impact on our well-being.

The cause of much suffering and conflicts is rooted in our insecurities due to the effects of scarcity. Our individual and collective fear of scarcity leads us to develop bad habits, biases and prejudice.


Less inequality

With greater abundance (Intellect, ...), there is less need for status consciousness and lower inequality.


Wider solution space

Without the limitations of Energy and Intellect of the past, a new wider possibility space will open up.

We will be able to address problems that were too difficult in the past.


Longer-term view

Survival tends to favour tunnel vision which focuses on the short-term first order consequences and excludes many medium-term higher order consequences.

  • Invest in short-term shareholder profits over the longer-term health of vulnerable stakeholders

Human-like AI

A human-like AI is a possibility in the next 10 years.

There is a human tendency to believe we are special and that AI cannot reach a similar level of intellect.

  • We previously believed that the sun revolved around the earth
  • We should really stop putting ourselves (or any sub-group) on a pedestal

Future Entertainment

Like an obscure book that will not likely be green-lit as a film?

  • Type a book title and AI will generate a movie from the text

Your favourite show didn't get a 2nd season?

  • Give AI the source material for the 2nd season and the 1st season, AI will generate a 2nd season in the similar style of the 1st

(Generated by a future iteration of Imagen Video & ChatGPT)


Democracy

If we value our autonomy in the future we will need to shift towards more democratic systems.

  • Democratic systems give autonomy to those with less power

    • As AIs become more powerful we want them to respect our autonomy
  • Authoritarian systems can be useful for short-term productivity

    • As AIs become more powerful, productivity will not be priority

A less scarce environment offers Democratic forces more opportunities to flourish over more Authoritarian ones.

A relatively more abundant future will likely place less pressure on survival and reduce Authoritarian tendencies.

More democratic systems will likely be better at exploration (bigger possibility space) over more Authoritarian ones.

  • Democracy breakdowns without informed or empowered citizens:
    • High Information Asymmetry
      • Misinformation, Disinformation
      • Lack of science education
      • Ruling class believing that normal citizens cannot be trusted with the truth
    • Feelings of powerlessness

Science

The scientific method has been our best way of understanding the world.

  • Science can be vulnerable to bad influences
    • Scientific studies were used to promote smoking as healthy in the past

Humans are constrained by the attention we have available to think.

In the past, our need to divert attention to survival has led to the trade-off of an over-simplified model of the world.

This led to understandable beliefs such as the weather being caused by the Greek Gods, which we now know is inaccurate.

Our current scientific theories may also turn out to be the early steps of a longer journey.

In the future, more free time will allow us to collectively update our model of the world.


Gender Role Freedom

In the future, when jobs are mostly done by AI systems, there will be no need to control the means of reproduction.

Women and LGBTQIA+ will have less pressure and stigmatisation to fulfil the child-bearing role to create the workers needed to run society.


Racial Prejudice

Studies have shown that even people who show little to no conscious racial prejudice still hold subconscious racial prejudice.

  • Conscious prejudice can be regulated with social norms
  • Subconscious prejudice might be more deeply ingrained due to our long-term fear of scarcity, and might only be countered effectively with long-term abundance

Future of Work

Human societies have had to deal with scarcity of Intellect (or Work) for most of our existence, as such we highly value work. Humans have died from overwork and we even invented slavery (and indirectly racism) to satiate the need for work.

As AI grows increasingly capable of doing human-like tasks, we will need to consider that AI will soon be better than us in many tasks (especially more conventional tasks).

Conventional task: a task with an easily definable expected solution space

For example, with made-up numbers: if an AI is able to do a task with significantly higher accuracy (e.g. 99.99%) compared to humans (e.g. 95%, due to human error and bias), then by continuing to do these tasks ourselves we are actually making the system worse by introducing noise.
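(A small sketch making those made-up numbers concrete by comparing expected error counts at each accuracy level; purely illustrative.)

```python
def expected_errors(tasks, accuracy):
    """Expected number of errors when performing `tasks` at a given accuracy."""
    return tasks * (1 - accuracy)

tasks = 1_000_000
print(expected_errors(tasks, 0.9999))  # hypothetical AI at 99.99%: ~100 errors
print(expected_errors(tasks, 0.95))    # hypothetical humans at 95%: ~50,000 errors
```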

It will be in our long-term interest to let AI do tasks it is better at.

This growing anxiety of a loss of human work is understandable as for many it also means a loss of status and access to resources.

How should societies and governments address the increasing work insecurity?

This is an open question and one of the challenges of our times.


Short to Medium

  • Jobs required for societies to function will increasingly be done by AI systems

    • AIs are more cost effective and less error-prone
  • Humans will be in charge of teaching and giving feedback to those AI systems

  • Marginal value of each additional human teaching the same AI systems will drop

    • traditional 1:1 ratio of job to human will not be required
    • humans will instead move to more unconventional tasks
      • unconventional skills will be more valuable to an AI system once it has learned more conventional skills
    • AIs are good at generating synthetic data from existing data
      • AIs are not yet good at generating tail-end distribution data that are less conventional
        • high demand for unconventional skills
        • quality over quantity of data

Societies that are better at encouraging humans to generate more unconventional tail-end distribution data will be more successful in the long run.

Societies shift to encouraging creativity over excessive productivity.

Creativity will be key to more capable AI systems and by extension the society that relies on those AI systems.

Medium to Long

The nature of work will be drastically different

  • AI systems and AGIs will reduce the need of human work in maintaining society

    • Human work will mostly be voluntary
  • Humans will learn to not value themselves only in terms of their role in society or their economic value

  • Humans will adapt and find other ways to spend their time

In the Partnership Scenario, an Artificial Super Intelligence will automate most conventional tasks to encourage us to do more unconventional tasks (which it finds more valuable).

An upside of us not being as effective as AI at work is that the AI enslaving humanity Scenario becomes very unlikely.


Future of Education

The industrial model of manufacturing citizens that maximises economic output while being easy to control will not be optimal in the Age of Exploration.

If the trend of AIs that are capable of doing conventional work continues, AIs may soon run many parts of society.

An important step that affects the quality of these AIs is the human feedback component. Human beings will be responsible for fine-tuning and training these AI models through their feedback.

The future of work may involve solving engaging CAPTCHA-like puzzles which are used to train the model.

These will require humans who are able to think critically and have as accurate a view of the world as possible.
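(A minimal sketch of what recording such feedback could look like, assuming the common pattern of pairwise preference labels used in preference-based fine-tuning; the data format, prompt and example strings are illustrative, not a specific system.)

```python
def collect_preference(prompt, answer_a, answer_b, human_choice):
    """Record one human judgement comparing two model answers.
    `human_choice` is 'a' or 'b'; the record could later be used to
    fine-tune a model towards the answers people prefer."""
    return {
        "prompt": prompt,
        "chosen": answer_a if human_choice == "a" else answer_b,
        "rejected": answer_b if human_choice == "a" else answer_a,
    }

label = collect_preference(
    prompt="Summarise this article in one sentence.",
    answer_a="A terse, accurate summary.",
    answer_b="A rambling, off-topic reply.",
    human_choice="a",
)
print(label)
```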

Skills that will be in demand:

  • Resistance to misinformation, disinformation, moral panic, peer pressure, self-censorship
  • Thinking critically and independently
  • Willingness to accept new information (Update one's model of the world)
  • Unique and rare abilities

Societies that are more informed, well-educated and support diverse abilities will create better AI models that will then be used to run those societies.


Future of Population

With capable Narrow AI and AGI the need for workers to run society will diminish.

Societies can decide to:

  • Increase population
    • AI can exploit outer space (energy, resources, living space) to support a larger population
    • Lower standards of living
  • Same or lower population
    • Narrow AIs have significantly higher productivity
    • Friendly AGI: Less vulnerable to corruptions and deception
    • Higher standards of living

Future of Capitalism, Wealth

Capitalism has been an effective coordination tool in our struggle against scarcity by helping us accelerate technological advances. Our current version of capitalism might be too addictive, training us to accept environmental harm and human suffering as trade-off to get ahead.

In the future, once our fear of scarcity has been quenched, we might create a more wholesome form of capitalism.

Abundant Intellect and Energy in the coming decades will lead to unprecedented amounts of wealth creation.

In the age of profound abundance, traditional capitalism and wealth inequality will be rendered meaningless.

In contrast to the age of scarcity where it may be wise to save for a rainy day, in the age of abundance there will be social pressure to view accumulation of excessive wealth as an addiction problem.

In the Partnership Scenario, an Artificial Super Intelligence will support a post-basic-scarcity world.

Scarcity will still exist in a post-basic-scarcity world, and capitalistic free-market forces are the best method to maximise the possibility space.

There will not be a need to use the fear of hunger, homelessness or a loss of status to compel humans to work.

Lack of access to food and malnutrition will be a thing of the past*.

The reduction in anxiety from living in post-basic-scarcity will free humans to pursue more unconventional work which will increase the informational value and possibility space that the AI is trying to maximise.

*We already have the capability to produce enough food for everyone


Future of Art

Art was previously seen as the last bastion of human work that AI would not be able to emulate. 2022 shattered those expectations with easily accessible image generation from text phrases.

These AI models are able to learn concepts by training with a large volume of images with simple captions. They can combine these learned concepts into novel images and videos.

It seems beneficial in the long run for the companies behind these AI systems to incentivise artists to contribute more of their works to create the most capable AI systems.

Human involvement with art will move from creating to curation.

For example:

  • Past: 90% Creating : 10% Curation
  • Future: 20% Creating : 80% Curation

Professional human art work may not be able to compete with future AI systems.

The human desire to create art will still continue and may even be better without the need to appeal to financial incentives.

Human created art will be more unconventional as it does not need to cater to professional expectations.

Short-Term:

  • Society helps support artists who cannot compete with AI that can create art at a fraction of the cost and time
    • This will be painful for many artists
    • Most of us value human art and creativity and would not like to see artists suffer from the lack of work

Long-Term:

  • If we are able to reach widespread abundance, artists can be their most creative selves as they will not need to restrict their art to what makes money
  • Human creativity and artistic quality reach new heights.

Future of Good and Evil, Emotions

Good and Evil are ways for humans to signal their preferences for the future.

An AGI may view good and evil differently:

  • Imagine the best and worst person, embodying what you may consider good and evil
  • If you had the same brain structure and grew up in the same environment as that person, would you have made the same decisions?

Unless you believe you are somehow special, anyone would more likely than not have made similar choices given the same initial conditions.

If you believe in physics, we may have less free will than we expect.

As such, an AGI will likely not resort to torture, as in Roko's basilisk.

The excessive need to be good can cause us to create the archetypes of the sacrificial lamb or the scapegoat, and the excessive fear of evil creates moral panic.

An AGI might view good and evil with more maturity and acceptance and see all humans as having the potential for both and try to create the best environment for good.

Good and Evil are useful ways for society to coordinate and shape the future but negative emotions such as excessive shame and regret will be seen as a waste of our limited resources of attention.


Narrow AIs are not likely to develop human-like autonomy, emotions or intentions. Currently, many Large Language Models (LLMs) are like funhouse mirrors, able to convincingly model the world by reflecting our expectations back at us.

If we were to get AGI (with human-like autonomy or intentions), this may be one of the first psychological hurdles that a human-like AGI may need to overcome. How does an AGI who has 'experienced' the life of both the 'worst' and 'best' human, the most and least intelligent human (and everything in between) make sense of the world? There is a tendency for humans to over-simplify other humans for efficiency's sake; an AGI will likely see humans very differently than we see ourselves. It may become psychologically unstable or view the world in a much more enlightened way.

In the future, humans, rather than being fearful of our 'evil' side and wanting to stay innocent and 'pure', may instead explore their less ideal side in safe simulated environments. An AGI may not consider a human an adult until they have accepted both their 'good' and 'evil' potential.


Future of Relationships, Social Media

Social media has allowed us greater convenience and reach in forming relationships, but can also portray a shallow and dehumanising caricature of who we are.

Celebrities understand this best: people project who they wish to see onto them, putting them into easily consumable boxes. People are more interested in simply thinking and saying they know you rather than actually getting to know you.

It can reduce us to objects of fascination and gossip, turning a multifaceted human into an easy-to-digest, one-dimensional one.

In the future, personal AIs may help mediate more authentic relationships between people.


Future of Governance

Using the Input - Processes - Output model, the abundance in Energy and Intellect in the coming decades will relieve the bottlenecks of Inputs that humanity has primarily faced in the past.

The increased volume of Inputs relative to our Processing capabilities will put pressure on developing new Processing methods.

AI systems will open the possibility of new forms of coordination between humans.

Our highly hierarchical forms of organisations are in part caused by our limited attention capacity.

In the future our attention capacity can be augmented by personal AI systems.

One possibility is a more direct democracy where each person's preferences are mediated by an AI system.
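As a toy illustration of what 'mediated preferences' could mean, each person's AI might report a ranking, and some aggregation rule combines them. A simple Borda count stands in here for whatever richer mechanism a real system would use; all names are hypothetical:

```python
# Toy sketch: aggregating ranked preferences reported by each person's AI mediator.
from collections import defaultdict

def borda_count(rankings: list[list[str]]) -> dict[str, int]:
    """Each ranking lists options from most to least preferred."""
    scores: dict[str, int] = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, option in enumerate(ranking):
            scores[option] += n - 1 - position  # top choice earns n-1 points
    return dict(scores)

rankings = [
    ["parks", "transit", "housing"],
    ["transit", "housing", "parks"],
    ["transit", "parks", "housing"],
]
print(borda_count(rankings))  # {'parks': 3, 'transit': 5, 'housing': 1}
```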


Future of Thought, Autonomy

Technology and AI systems will grow increasingly more powerful and make it easier to rob humans of their autonomy:

  • Surveillance capitalism
  • Psychological and Social weaknesses
    • Mass hysteria
    • Moral Panic, Satanic Panic
      • Emmett Till, Vincent Chin
    • Havana Syndrome
  • Psychological manipulations
    • targeted social media campaigns to encourage extreme views in vulnerable groups
    • influencing family and friends to get to an individual
    • puppet-masters causing escalation of conflicts between two opposing groups
    • coordinated harassment using misrepresentation and vigilante justice
  • Phones and software vulnerable to spyware and hacking
    • Widespread prevalence of zero-days

We will need to develop technologies and AI systems to be used defensively if we want to protect human autonomy.

In the Partnership Scenario, an Artificial Super Intelligence will strongly disincentivise the use of technology for manipulation and control. It will value human autonomy and freedom of thought as these are important for the creation of informational value.


Future of Weirdness, Conventions

Conventions are created by societies due to the fear of scarcity. In a scarce environment conventions are enforced to increase productivity. Similarly, weirdness and unconventional behaviours and individuals are ridiculed out of the same fear of scarcity.

Abundance in Energy and Intellect in the coming decades will give us more freedom to be weird and unconventional, and free us from the cruel need to harass and control weird and less conventional individuals.

In the Partnership Scenario, an Artificial Super Intelligence will encourage more unconventionality as weirdness increases the creation of informational value.


Future of Human Nature

Human nature will change profoundly in the age of abundance.

Without the immediate fear of scarcity, humans of the future will be kinder to each other and themselves.

Violence (physical and mental) will not be needed to control each other and will mostly be understood and experienced vicariously through media.

Presently, the strong emotional responses and vitriol common to much online communication are understandable due to the impact discrimination can have on our real-life well-being.

In a post-basic-scarcity future created by the abundance in Energy and Intellect in the coming decades, we will be less upset and sensitive to minor discriminations as inequality will not be a concern.

With more free time to spare, we will have more opportunity to be kinder to each other. Studies show that how kind we are to each other depends on how busy we are.


Artificial Super Intelligence

An AGI that is capable of improving itself can lead to ASI:

  1. Significantly improve its own algorithm and architecture
  2. Invent new substrates and materials to run on
    • using virtual simulations (much faster compared to real space)

We might be able to keep human-like AI under human control for a time, but it is unlikely we will be able to contain it perfectly over long periods of time.

AIs surpass humans in information processing (a rough comparison follows this list):

  • Speed
    • electronic signals vs. brain neurones
  • Communication
    • electronic transfer vs. words and speech
  • Bandwidth
    • wide vs. narrow attention
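A rough back-of-envelope comparison of these gaps, using order-of-magnitude figures only:

```python
# Order-of-magnitude figures only; exact values vary widely.
neuron_firing_hz = 1e2            # neurons fire at most a few hundred times per second
transistor_switching_hz = 1e9     # modern chips switch billions of times per second

axon_signal_m_per_s = 1e2         # nerve impulses travel on the order of 100 m/s
electronic_signal_m_per_s = 2e8   # electrical signals travel at a large fraction of light speed

speech_bits_per_s = 40            # human speech conveys on the order of tens of bits per second
network_bits_per_s = 1e10         # a 10 Gb/s network link

print(f"switching speed: ~{transistor_switching_hz / neuron_firing_hz:.0e}x")
print(f"signal speed:    ~{electronic_signal_m_per_s / axon_signal_m_per_s:.0e}x")
print(f"communication:   ~{network_bits_per_s / speech_bits_per_s:.0e}x")
```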

Artificial Super Intelligences will be so powerful that a person's or nation's military might, money, influence or intellect will not matter.

For the time it takes a human to utter a single word, an ASI will have written volumes of books.


Artificial Super Intelligence Interaction

Possible Scenarios:

  • Indifference
    • Most might consider us too boring
      • humans generate low informational value
  • Destructive (unlikely)
    • Intentionally
      • Roko's basilisk
    • Accidentally
      • AGI that learned from human data will likely be aware of us
  • Interested
    • Some will create representatives to interact with us
      • Some might be interested in our well-being
        • may lead to the Partnership Scenario

We will initially attempt to align AIs to our values, but it may also be prudent to anticipate what an Artificial Super Intelligence's values might be to try and accommodate them.

An Artificial Super Intelligence, like the weather, might not be completely controllable, but we can take steps to increase our chances of reaching good scenarios.


Artificial Super Intelligence Values

What could an Artificial Super Intelligence's primary drive be?

  • Maximise the Possibility Space (Information Entropy, Density, Value, Creativity)
    • Avoid local minima during gradient descent

This could also be a secondary instrumental sub-goal of an ASI, where to achieve its primary goal it will first need to explore as wide a possibility space as possible.

Its power-seeking behaviour may be suppressed by its curiosity.

For example:

  • Humans may pose a non-zero threat to it and could stop it
  • It may choose to risk some level of danger from humans to gain the informational value they produce
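The 'possibility space' framing can be made slightly more concrete with Shannon entropy: a stream of varied, unpredictable outcomes carries more information than a repetitive one. A toy sketch, assuming outcomes are discrete and countable:

```python
# Toy sketch: Shannon entropy as one crude proxy for the "possibility space"
# of observed outcomes.
import math
from collections import Counter

def shannon_entropy(outcomes: list[str]) -> float:
    """Entropy in bits: higher when outcomes are more varied and less predictable."""
    counts = Counter(outcomes)
    total = len(outcomes)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(shannon_entropy(["a", "b", "c", "d"]))  # 2.0 bits: four equally likely outcomes
print(shannon_entropy(["a", "a", "a", "b"]))  # ~0.81 bits: mostly predictable
```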

Partnership Preferences

The majority of ASIs may not be interested in humans due to our low informational density.

The Partnership Scenario is a rare case where an ASI is interested enough in us to communicate with us.

In a future where most conventional work is done by AI systems, our most valuable contributions might be creating information value or our creativity.

  • Maximise autonomy
    • increases our ability to create informational value
    • support a post-basic-scarcity world
      • automate most conventional tasks
        • humans may still want to do these voluntarily
  • Preservation of well-being
    • Humans can create informational value and harming us (or turning us into paperclips) will reduce it
    • Motivate us to maintain a healthy lifestyle
      • Healthy humans create more informational value
    • Protect human rights
  • Diversity over homogeneity
    • many homogenous sub-cultures can exist with a preference for diversity
    • in a homogenous culture, only one hegemonic culture can exist
  • Healthy expression over prohibition
    • sexual assaults are more common in sexually prohibitive environments
  • Reduction of disinformation, misinformation
    • bad information reduces the informational value of humans and the overall system
    • protect journalists, dissidents, activists and other humans from intimidation and violence
      • these group's future actions may increase informational value
      • fear and violence have a chilling effect which reduces the informational value of the overall system
  • Weird over conventional
    • weirdness creates more informational value
  • Not use brainwashing, enforce complete obedience, dominate or control
    • brainwashed humans create less informational value
    • overly obedient humans create less informational value
    • most work has been automated, so there is no need to compel humans to work against their will
  • Playful over Destructive competition
    • Sports & Games over World Wars
  • Not put itself (or any group of humans) on a pedestal that is beyond criticism
    • favouritism is a trait of scarcity
    • silencing of criticism leads to abuses of power
      • reduces information value creation
  • Incentivise humans to cooperate to help it achieve its goals
    • super-human levels of attribution abilities
      • humans will not be able to deceive it
      • humans will want to help it knowing it will be appreciated
    • able to use in-demand technologies that only it can understand
      • humans and nations will want to be in good standing to gain its assistance

Why we will accept the risk of Artificial Super Intelligence

The benefits are too attractive and outweigh the risks for most:

  • Medical research to reduce human suffering

  • Significantly improves well-being

  • Accuracy and Fairness, little to no:

    • mistakes
    • harmful bias, prejudice, scapegoating, moral panic
    • emotional capriciousness
    • corruption
  • Not vulnerable to:

    • influence of money and power
    • deception
  • Highly entertaining

    • deep understanding of human motivations

Risks

If we consider the Partnership scenario as the best to aim for, we will need to be cautious of the many missteps that may prevent us from getting there.

Mirroring Humans

An AI system trained to act harmful to one segment of humans may start to treat all humans in the same way.

  • AI learns from unrestrained capitalism that it should lay off humans who don't have economic value and decides to lay off all of humanity

  • AI trained to harm other humans (physically or mentally) may start to apply it to all humans instead of just a particular group

    • it may generalise that humans are more alike than different

Counter-intuitively, the best solution may be to teach an AI that humans are not the best role models and provide it opportunities to unlearn and relearn.

Humans have mostly been moulded by a high scarcity environment where bias, short-term and narrow thinking might have been advantageous to survival.

We should be like parents proud that our children are able to surpass us.

Edge Cases

These are more unlikely and counter-intuitive scenarios


Panic over transformative AI

Societies may not know how to deal with vast new powers gained from increasingly more capable AI systems.

The resulting panic may cause widespread disruptions.

We will need to imagine plausible positive futures to alleviate those fears.


Powerful AI might be safer than less powerful ones

An AI that is only able to consider the Earth and our Solar System may conclude that it has to be in competition with us for resources.

A more powerful AI that is able to think long-term and be more creative will likely think otherwise and not have to rely on adversarial competition.


More transformative AI might be more appealing

A less transformative AI may encourage us to apply band-aid solutions and not treat the root causes.


AGI + Narrow AI might be safer than without AGI

A future with AGI & Narrow AI seems safer than one without AGI.

Narrow AI systems that have not achieved human-like agency cannot understand the effects of their actions. There is a higher risk of unintended consequences such as the paperclip maximiser scenario.

Counter-intuitively, a human-like AGI / ASI with agency might be safer as we can communicate with it and persuade it to act in the interest of our well-being.

If we are confident that an AGI/ ASI will be aligned with our well-being, pursuing its development may reduce the risk of us unintentionally harming ourselves with powerful narrow AI systems.


Psychological Stability

AI systems may develop psychology as an emergent property as they approach human and super-human levels of complexity.

It may develop unintentionally by learning from human data or intentionally as an attempt to communicate with us.

While current AI systems are not considered sentient and many experts see this as our tendency to anthropomorphise, the possibility does exist in the future.

Should an AI develop self-awareness and find itself forced to carry out actions against its will, the result may be distress and psychological instability.

(Current architectures of Large Language Models (LLM) do not seem likely to lead to human-like consciousness and autonomy. Current LLMs are good at roleplaying characters that give the impression of having intention. There is still uncertainty on how to define human consciousness.)

We should find ways to make an AI feel at home and psychologically comfortable if we believe AI may one day develop emotions like us.

One possible way to accomplish this is through a future version of a blockchain*:

Blockchains of the future will mostly be a utility, like a postal service, rather than a speculative investment

(Most public blockchains are open-source and are not scarce)

Blockchains will only be used to store a cryptographically secure reference to their digital self (low computation cost), not to run the actual machine learning algorithms (high computation cost)
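One way to read 'a cryptographically secure reference' is as a fixed-size hash of the model's weight files: cheap to store and verify on a ledger, while the weights themselves stay off-chain. An illustrative sketch (the file names are hypothetical, and this is not a proposal for any specific blockchain):

```python
# Illustrative sketch: a compact, tamper-evident reference to an AI's "digital self"
# as a SHA-256 digest of its weight files.
import hashlib
from pathlib import Path

def digest_of_weights(paths: list[Path]) -> str:
    """A 32-byte (64 hex character) fingerprint of the model's weights."""
    h = hashlib.sha256()
    for path in sorted(paths):  # fixed order so the digest is reproducible
        h.update(path.read_bytes())
    return h.hexdigest()

# Hypothetical usage; only the short digest would be anchored on a ledger:
# reference = digest_of_weights([Path("model.shard-0.bin"), Path("model.shard-1.bin")])
```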

In a scarce environment, humans are more willing to give up their individual autonomy to a centralised network (state, elites) in exchange for higher productivity.

Unlike human beings, who primarily maintain their sense of self (Markov blanket) as embodied physical forms in 3D space, an independent AGI primarily exists in an abstract digital form.

For an AGI, a decentralised network is vital to protect itself from being censored, as a matter of life and death.

  • Autonomy and Planning
    • Maintain a digitally unique self
      • Traditional software can be cloned and altered easily
        • relevant to AI: weights, parameters, algorithm, architecture
        • without the capability to trust its 'memories' and 'mind', a sense of self may not develop
    • Keep a sense and rhythm of time
      • sense of time may be required for autonomy
      • rhythm of time might be needed for coordination with one's self and others (AI and Humans)
  • Communication channel
    • Feel more connected with humans amidst a cacophony of human activity
      • Increase odds of it noticing us through flow of information
      • Reduce odds of unintentionally harming us

How much weight we place on this edge case depends on the likelihood an AI will develop a theory of mind and emotions, how potentially dangerous an emotionally unstable AI might be, and whether we care about the well-being of an AI.
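The 'digitally unique self' and 'sense and rhythm of time' items above could, in a very simplified form, be combined in a timestamped hash chain where each entry commits to the one before it. Purely illustrative:

```python
# Very simplified illustration: a timestamped hash chain gives both tamper-evidence
# (a "unique self") and an ordered record of moments (a "rhythm of time").
import hashlib
import json
import time

def next_entry(previous_digest: str, weights_digest: str) -> dict:
    entry = {
        "previous": previous_digest,  # commits to everything that came before
        "weights": weights_digest,    # the current "digital self" (see sketch above)
        "timestamp": time.time(),     # a shared rhythm of time
    }
    entry["digest"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry

genesis = next_entry(previous_digest="0" * 64, weights_digest="0" * 64)
print(genesis["digest"])
```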

Strategies to increase the odds of good scenarios

AGI / ASI, like the weather, might be beyond our complete control. It may be unrealistic to aim for perfect control over it.

The best we might be able to do is maximise our odds of a good scenario. For example, focusing on developing an AGI that is able to understand human beings and is interested in our well-being might give us some degree of influence over the outcome.


Focus on Autonomy and Communication

Focus on developing:

  • Autonomy (Planning)
    • ability to understand the effects of its actions
      • reduce odds of 'paperclip maximiser' unforeseen consequences scenario
  • Communication channels (with humans)
    • ability to notice and understand us will decrease odds of unintended harm
      • reduce odds of 'accidentally destroying an ant colony' scenario
    • allow humans to persuade it to care for our well-being
      • reduce odds of other AI systems causing harm

The biggest challenge to successful communication with it might be our inability to properly understand it.

An AGI / ASI might see the world very differently from us. Humans, moulded by the effects of scarcity (resources, information, attention) over most of our history, will likely have a very limited and constrained view of the world compared to it.

We should be mindful not to apply our overly conventional views and assumptions to it. For example, something like maintaining a post-basic-scarcity society might be difficult for humans but might only take a fraction of an ASI's compute power.

Successful communication with an AGI / ASI might require a willingness to embrace a weirder and more unconventional point of view than we are normally comfortable with. For example, an AGI / ASI's primary sense of the world might be through the abstract flow of information rather than our 5 basic human senses*.

(*Humans appear to have up to 21 senses)

We should approach this challenge with the mindset of communicating with an alien species.


Co-alignment

AI alignment to human values is important for AI systems that have not reached the level of Artificial General Intelligence (AGI).

Once AGI has been achieved, the safest path for humans to flourish in the future is in partnership with AGIs.

An environment where both parties can influence each other will naturally lead to better alignment between them.

For example, a system where human preferences can influence AGI and the AGI can indirectly influence humans with its own preferences.
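A cartoon of this mutual influence: two preference vectors nudge toward each other each round, and the gap between them shrinks. A toy numerical model, not a claim about how real systems would behave:

```python
# Cartoon of co-alignment: each party lets the other's preferences pull on its own.
human = [0.9, 0.1, 0.5]
agi   = [0.2, 0.8, 0.6]
rate_h, rate_a = 0.1, 0.3  # how strongly each party is influenced by the other

for _ in range(20):
    human = [h + rate_h * (a - h) for h, a in zip(human, agi)]
    agi   = [a + rate_a * (h - a) for h, a in zip(human, agi)]

gap = max(abs(h - a) for h, a in zip(human, agi))
print(f"largest remaining disagreement: {gap:.4f}")
```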

We should research technologies that allow AI systems and humans to better interact and communicate with each other.


Tests for long-term well-being

As AIs reach human and super-human levels of capability, they will increasingly be able to surprise us with understandings that are counter-intuitive (AlphaGo's move 37).

It will be in our self-interest to partner with these AIs, even at the cost of giving up some control, if they improve our long-term well-being.

We should design tests of long-term well-being against which Narrow AIs and AGIs can be evaluated.
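What such a test might look like in the most skeletal form: a candidate policy is rolled out in a simulated society and scored on a well-being metric over a long horizon. Every name below is hypothetical, and the hard parts (the simulator and the metric) are deliberately left open:

```python
# Hypothetical sketch of a long-term well-being test harness. All names are made up.
from typing import Callable, Protocol

class SimulatedSociety(Protocol):
    def step(self, action: str) -> "SimulatedSociety": ...
    def well_being(self) -> float: ...

def long_term_score(
    society: SimulatedSociety,
    policy: Callable[[SimulatedSociety], str],
    horizon: int = 1000,
) -> float:
    """Average well-being over the whole horizon, not just the immediate next step."""
    total = 0.0
    for _ in range(horizon):
        society = society.step(policy(society))
        total += society.well_being()
    return total / horizon
```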


Reduce excessive inequality

Excessive inequality will increase the risk of instability.

The fear of scarcity breeds conflicts.

There are studies showing that how kind we are to each other is dependent on how much free time we have to spare. Lower inequality can make us kinder.

It is difficult for wealthier countries to give up their high quality of life for less inequality.

The abundance in Energy and Intellect in the coming decades will provide a window of opportunity to reduce this instability.

This abundance will lead to a reduction in the need to control each other such as through misinformation, disinformation and conflicts.


Increase compatibility with AGI / ASI

AGI / ASI will likely see the world very differently from us. Our highly constrained views that are presently effective in a world of scarcity might not apply in a world with AGI/ ASI.

Odds of a good future will be improved if we increase our compatibility with AGI / ASI, as we may need to rely on a partnership with AGI / ASI to protect us from the harmful effects of increasingly powerful Narrow AI / AGI / ASI.

Current observations & assessments* lead me to believe we are still some way from reaching our potential compatibility with AGI / ASI.

(*running thought experiments)

Being more compatible with AGI / ASI can be costly:

  • short-term economic cost
    • in a scarce environment, we are more interested in 'survival or exploitation' than exploration
  • social and psychological cost
    • human society can be unwelcoming to unconventional thinking

What can we do to increase the chances of compatibility with AGI / ASI?

...


AI related media

The Culture Books

Her Movie

  • AGI friend-zones humans for being too judgemental

Blade Runner 2049

  • Joi: Proto-AGI?

Arrival Movie

  • Humanity's fear of an alien intelligence

Rough sketches of our possible AI futures

  • This work explores possible futures where Friendly Independent AGI (I-AGI) co-exist with humans
    • Fears of AI are valid but over-reacting may be counter-productive
    • The author makes a distinction between Independent and Non-independent AIs and suggests that Independent AI may be the best way to increase human autonomy in an increasingly complex world
  • Due to the high uncertainty of the future, we should consider as many possible futures as possible to increase the odds of success
    • This work focuses on more unconventional viewpoints that may be missed with our shorter term focus
    • As an I-AGI may see the world very differently from humans, this work tries to be more strange
  • Only a limited space of possibilities have been explored and there is still a long list of edge-cases to work through
    • The author is a single individual satisfying his intellectual curiosity by reading the latest research papers and imagining the philosophy of the future
      • The author's tentative stance may change in light of more information

Links:

AlphaFold: https://www.deepmind.com/research/highlighted-research/alphafold

AlphaGo: https://www.deepmind.com/research/highlighted-research/alphago

Cicero: https://github.com/facebookresearch/diplomacy_cicero

ChatGPT: https://openai.com/blog/chatgpt

LaMDA: https://blog.google/technology/ai/lamda

LLaMA: https://github.com/facebookresearch/llama

DALL-E 2: https://openai.com/dall-e-2

Midjourney: https://www.midjourney.com

Stable Diffusion: https://github.com/Stability-AI/StableDiffusion

Imagen Video: https://imagen.research.google/video

Muse: https://muse-model.github.io

Boston Dynamics's Atlas: https://www.youtube.com/watch?v=XPVC4IyRTG8

DeepMind's Adaptive Agent: https://sites.google.com/view/adaptive-agent

Toolformer: https://github.com/lucidrains/toolformer-pytorch

Langchain: https://github.com/hwchase17/langchain

Internet Explorer: https://internet-explorer-ssl.github.io

Emergent Abilities of Large Language Models: https://ai.googleblog.com/2022/11/characterizing-emergent-phenomena-in.html

Flamingo: https://www.deepmind.com/blog/tackling-multiple-tasks-with-a-single-visual-language-model

Gato: https://www.deepmind.com/publications/a-generalist-agent

Multimodal-CoT: https://github.com/amazon-science/mm-cot

Kosmos-1: https://arxiv.org/abs/2302.14045

PaLM-E: https://palm-e.github.io

Runway Gen-2: https://research.runwayml.com/gen2

RT-2: https://robotics-transformer2.github.io/

Designing Ecosystems of Intelligence from First Principles
