Think using a priority checklist of mental models
Table of Contents
History of this Repository
In 2011 I watched Jeff Hawkins's talk on artificial general intelligence (AGI) and became obsessed with building AGI to help humanity. I taught myself how to code and built an OO Java implementation of Numenta's cortical learning algorithm (CLA) v2. After running unsuccessful vision experiments using the CLA v2, I began to think there must be a better approach to building AGI. I found one in Dileep George's PhD thesis, and the project changed into an OO Python implementation of his thesis. While researching other approaches to AGI, I came across Elon Musk's perspective on AGI, read Superintelligence by Nick Bostrom (my notes are here), and realized I was completely wrong to try to build AGI to help humanity in the first place. The goal then changed to researching how to increase human intelligence faster than code-based intelligence. Since then, I've concluded that because AGI will not be limited by slow biological processes, it isn't possible for human intelligence to increase faster than code-based intelligence in the long term. Now I'm using 101 mental models to brainstorm alternative solutions to the AGI problem. If you're interested in thinking about this together, e-mail me your CV at firstname.lastname@example.org :)
~ Q94 Liu
Why and Goals
Between 2011 and 2016 I was so focused on how to build AGI that it was easy for me to have confirmation bias toward only the potential positive effects of building AGI, while a part of me avoided the question:
Q0: How do you control something that is smarter than all of humanity combined?
The answer is that you cannot. I now believe AGI should not be built, privately or publicly. Instead, one possible solution is to increase human intelligence with a privately built neural lace like Neuralink, which you can read about here.
Today, many groups are trying to build AGI to help humanity. However, I believe AGI should not be built because:
- Humans are the dominant species on Earth because of our intelligence.
- If we make another species (code-based AGI) smarter than us, then there is no way for us to control it, because you cannot control something that is more intelligent than all of humanity combined.
- There is a very high chance that it will use humans for purposes we do not want. Just look at how we treat species that are less intelligent than we are.
- We cannot stop the humans who are researching AGI, so now the question is:
Q1: How do we solve the AGI control problem?
Using a checklist of 101 mental models, I've brainstormed the following possible answers:
- Increase human intelligence faster than AGI.
Problem: AGI will not be limited by slow biological processes, so I don't think this is possible in the long term.
- Give everyone who wants one a neural lace.
Problem: I'm not sure this is healthy for any human, as it might cause insanity since we don't fully understand the brain yet.
- AGI development regulation.
Problem: I don't think this is scalable, since it will not be possible to monitor every AGI developer the way it is possible to monitor every nuclear bomb developer. And as knowledge of how to build AGI spreads, more people with self-serving motives will be able to create an AGI without understanding the consequences.
- Figure out how to travel backwards in time ethically.
Problem: I don't know anyone who will take me seriously yet :)
- Something I haven't or can't even imagine.
Option 2 is the answer with the fewest assumptions, so now the question becomes:
Q2: How do you create a safe neural lace for anyone who wants one?
- First use it to help the mentally disabled.
- I'm not sure giving a neural lace to a mentally healthy person is safe, as our lack of a full understanding of the brain may cause unexpected insanity in the wearer.
- Privately research the best cybersecurity practices for people with a neural lace.
Q3: How do you fully understand the human brain without building an improved version of it in code?
- Non-Answer: In all other cases, to truly understand a thing you rebuild a better version of it. However, in this case that means building an AGI.
~ Q102 Liu