---
title: My Mistakes
description: Things I have changed my mind about.
---
> "One does not care to acknowledge the mistakes of one's youth."^[[Char Aznable](!Wikipedia), _[Mobile Suit Gundam](!Wikipedia)_; this line stayed with me after watching _[Otaku no Video](!Wikipedia)_ - one does not care, indeed.]
It is [salutary for the soul]( to review past events and perhaps keep a list of things one no longer believes, since [such crises]( are rare^["...Once we have taken on a definite form, we do not lose it until death." --Chapter 2 of the _[Chuang-tzu](!Wikipedia)_; [Thomas Cleary](!Wikipedia)'s translation in _Vitality, Energy, Spirit: A Taoist Sourcebook_ (1991), ISBN 978-0877735199] and so easily pass from memory (there is no feeling of *being* wrong, only of having *been* wrong[^schulz]). One does not need an [elaborate ritual]( (fun as they are to read about) to change one's mind, but the changes must happen. If you are not changing, you are not growing[^leary]; no one has won the belief lottery and has a monopoly on truth[^lottery]. To the honest inquirer, all surprises are pleasant ones[^jaynes].
[^schulz]: One of the few good bits of Kathryn Schulz's 2011 book _Being Wrong_ (part 1) is where she does a more readable version of Wittgenstein's observation (_PI_ Pt II, p. 162), "One can mistrust one's own senses, but not one's own belief. If there were a verb meaning "to believe falsely," it would not have any significant first person, present indicative." Her version goes:
> "But before we can plunge into the experience of being wrong, we must pause to make an important if somewhat perverse point: there *is* no experience of being wrong.
> There is an experience of *realizing* that we are wrong, of course. In fact, there is a stunning diversity of such experiences. As we'll see in the pages to come, recognizing our mistakes can be shocking, confusing, funny, embarrassing, traumatic, pleasurable, illuminating, and life-altering, sometimes for ill and sometimes for good. But by definition, there can't be any particular feeling associated with simply *being* wrong. Indeed, the whole reason it's possible to be wrong is that, while it is happening, you are oblivious to it. When you are simply going about your business in a state you will later decide was delusional, you have no idea of it whatsoever. You are like the coyote in the [_Road Runner_](!Wikipedia "Wile E. Coyote and Road Runner") cartoons, after he has gone off the cliff but before he has looked down. Literally in his case and figuratively in yours, you are already in trouble when you feel like you're still on solid ground. So I should revise myself: it does feel like something to be wrong. It feels like being right."
[^leary]: "You're only as young as the last time you changed your mind." --[Timothy Leary](!Wikipedia) (quoted in _Office Yoga: Simple Stretches for Busy People_ (2000) by Darrin Zeer, p. 52)
[^lottery]: "Everyone thinks they've won the Magical Belief Lottery. Everyone thinks they more or less have a handle on things, that they, as opposed to the billions who disagree with them, have somehow _lucked_ into the one true belief system." --[R. Scott Bakker](!Wikipedia), _Neuropath_
[^jaynes]: From E.T. Jaynes's ["Bayesian Methods: General Background"](
> "As soon as we look at the nature of inference at this many-moves-ahead level of perception, our attitude toward probability theory and the proper way to use it in science becomes almost diametrically opposite to that expounded in most current textbooks. We need have no fear of making shaky calculations on inadequate knowledge; for if our predictions are indeed wrong, then we shall have an opportunity to improve that knowledge, an opportunity that would have been lost had we been too timid to make the calculations.
> Instead of fearing wrong predictions, we look eagerly for them; it is only when predictions based on our present knowledge fail that probability theory leads us to fundamental new knowledge."
From Wittgenstein's _Culture and Value_, MS 117 168 c: 17.2.1940:
> "You can't be reluctant to give up your lie & still tell the truth."
# Changes
> "Only the most clever and the most stupid cannot change."[^chess]
[^chess]: From page 207 of Genna Sosonko's _Russian Silhouettes_, on [Mikhail Botvinnik](!Wikipedia):
> "He did not dissolve and he did not change. On the last pages of the book he is still the same Misha Botvinnik, pupil of the 157^th^ School of United Workers in Leningrad and Komsomol member. He had not changed at all for seventy years, and, listening to his sincere and passionate monologue, one involuntarily thinks of Confucius: 'Only the most clever and the most stupid cannot change.'"
This list is not for specific facts, of which there are too many to record, nor is it for falsified predictions like my belief that George W. Bush would not be elected (for those, see [Prediction markets]() or my [ page]()), nor mistakes in my private life (which go into a private file), nor things I never had an initial strong position on (Windows vs Linux, Java vs Haskell). The following are some major ideas or sets of ideas that I have changed my mind about:
## Religion
> "For I count being refuted a greater good, insofar as it is a greater good to be rid of the greatest evil from oneself than to rid someone else of it. I don't suppose that any evil for a man is as great as false belief about the things we're discussing right now..."^[Socrates, Plato's _[Gorgias](!Wikipedia "Gorgias (dialogue)")_, 458a (Zeyl translation)]
I think religion was the first subject in my life that I took seriously. It was not that I ever believed - the stories in the Bible or at my Catholic church were interesting, but they were obviously fiction to some degree. My biggest problems with religion were that:
1. My prayers received no answers of any kind, not even a voice in my head
2. I didn't see any miracles or intercessions like I expected from an omnipotent loving god.
The latter was probably due to the cartoons I watched on TV, which seemed quite sensible to me: a powerful figure like a god would act in all sorts of ways. If there really was a god, that was something that ought to be quite obvious to anyone who 'had eyes to see'. I had more evidence that China existed than did God, which seemed backwards. I have seen these reasons [mocked as simplistic and puerile](, and I was certainly aware that there were subtle arguments which intelligent philosophers believed resolved the [theodicy](!Wikipedia) (such as [Alvin Plantinga's free will defense](!Wikipedia), which is valid but not sound since it requires free will) and that Christians of various stripes had various complicated explanations for why this world was consistent with there being a God (if for no other reason than that I observed there were theists as intelligent or more intelligent than me). But the basic concept seemed confused, free will was an even more dubious plank to go on, and in general the entire complex of historical claims, metaphysics, and activities of religious people did not seem convincing. (Richard Carrier's 2011 _Why I Am Not A Christian_ expresses the general tenor of my misgivings, especially after I checked out everything the library had on [higher Biblical criticism](!Wikipedia), [Josephus](!Wikipedia), the Gnostics, and early Christianity - _Je n'avais pas besoin de cette hypothèse-là_, basically.)
So I never believed (although it was obvious enough that there was no point in discussing this since it might just lead to me going to church more and sitting on the hard wooden pews), but there was still the troubling matter of Heaven & Hell: those infinities meant I couldn't simply dismiss religion and continue reading about dinosaurs or Alcatraz. If I got religion wrong, I would have gotten literally the most important possible thing wrong! Nothing else was as important - if you're wrong about a round earth, at worst you will never be a good geographer or astronomer; if you're wrong about believing in astrology, at worst you waste time and money; if you're wrong about evolution and biology, at worst you endanger your life; and so on. But if you're wrong about religion, wasting your life is about the least of the consequences. And *everyone* accepts a religion or at least the legitimacy of religious claims, so it would be unspeakably arrogant of a kid to dismiss religion entirely - the evidence is simply not there[^Moldbug]. (Oddly enough, atheists - who are not immediately shown to [be mistaken]( or [fools]( - are even rarer in books and cartoons than they are in real life.)
[^Moldbug]: Whatever the truth may be, I stand staunchly by this point: a kid sees so much evidence and belief in God that he *ought* to rationally believe. [Mencius Moldbug](
> "Most people are theists not because they were 'reasoned into' believing in God, but because they applied Occam's razor at too early an age. Their simplest explanation for the reason that their parents, not to mention everyone else in the world, believed in God, was that God actually existed. The same could be said for, say, Australia. Dennett's approach, which of course is probably ineffective in almost all cases, is to explain why, if God doesn't exist, everyone knows who He is. How did this whole God thing happen? Why is it not weird that people believed in Him for 2000 years, but actually they were wrong?"
Kids actually are kind of skeptical if they have reason to be skeptical, and likewise will believe all sorts of strange things if the source was previously trustworthy[^etiology]. This is as it should be! Kids cannot come prewired with 100% correct beliefs, and must be able to learn all sorts of strange (but true) things from reliable authorities; these strategies are exactly what one would advise. It is not their fault that some of the most reliable authorities in their lives (their parents) are mistaken about one major set of beliefs. They simply have bad [epistemic luck]( "Internet Encyclopedia of Philosophy").
[^etiology]: From a theology blog, ["Trust in testimony and miracles"](
> "...Harris found that children do not fall into either pattern. Pace the Humean account, he found that young children are readily inclined to believe extraordinary claims, such as that there are invisible organisms on your hands that can make you ill and that you need to wash off, and that there is a man who visits you each 24th December to bring presents and candy if you are nice (see e.g., [Harris & Koenig, 2006]( "Trust in Testimony: How Children Learn About Science and Religion"), _Child Development_, 77, 505-524). But children are not blindly credulous either, as Reid supposed. In a series of experiments, Harris could show that even children of 24 months pay attention to the reliability of the testifier. When they see two people, one of which systematically misnames known objects (e.g., saying "that's a bear", while presenting a bottle), toddlers are less likely to trust later utterances by these unreliable speakers (when they name unfamiliar objects), and more likely to trust people who systematically gave objects their correct names (see e.g., [Paul L. Harris and Kathleen H. Corriveau]( "Young children's selective trust in informants") _Phil. Trans. R. Soc._ B 2011 366, 1179-1187.) Experiments by Mills and Keil show that 6-year-olds already take into account a testifier's self-interest: they are more likely to believe someone who says he lost a race than someone who says he won it ([Candice M. Mills and Frank C. Keil]( "The Development of Cynicism") _Psychological Science_ 2005 16: 385)."
So I read the Bible, which veered from boring to incoherent to disgusting. (I became a fan of the [Wisdom literature](!Wikipedia), however, and still periodically read the Book of Job, Ecclesiastes, and Proverbs.) That didn't help much. Well, maybe Christianity was not the right religion? My elementary school library had a rather strange selection of books which included various Eastern texts or anthologies (I remember in particular one anthology on meditation, which was a hodge-podge of religious instruction manuals, essays, and scientific studies on meditation - that took me a long time to read, and it was only in high school and college that I really became comfortable reading psychology papers). I continued reading in this vein for years, in between all my more normal readings. The Koran was interesting and in general much better than the Bible. Shinto texts were worthless mythologizing. Taoism had some very good early texts (the _Chuang-tzu_ in particular) but then bizarrely degenerated into alchemy. Buddhism was strange: I rather liked the general philosophical approach, but there were many populist elements in Mahayana texts that bothered me. Hinduism had a strange beauty, but my reaction was similar to that of the early translators, who condemned it for sloth and lassitude. I also considered the Occult seriously and began reading the Skeptical literature on that and related topics (see the [later section](#the-occult)).
By this point in my reading, I had reached middle school; this summary makes my reading sound more systematic than it was. I still hadn't found any especially good reason to believe in God or any gods, and had a jaundiced view of many texts I had read. At some point, I shrugged and gave up and decided I was an atheist^[I sometimes wonder if this had anything to do with my later philosophy training; atheists make up something like 70% of respondents to the [Philpapers survey](, and a critical 'reflective' style both correlates with and causes [lower belief in God](; another interesting correlation is that people on the autism spectrum (which I have often been told I must surely be on) seem to be [heavily agnostic or atheistic](]. Theology was interesting to some extent, but there were better things to read about. (My literary interest in Taoism and philosophical interest in Buddhism remain, but I put no stock in any supernatural claims they might make.)
## The American Revolution
In middle school, we were assigned a pro-con debate about the American Revolution; I happened to be on the pro side, but as I read through the arguments, I became increasingly disturbed and eventually decided that the pro-Revolution arguments were weak or fallacious. The Revolution was a bloodbath with ~100,000 casualties or fatalities followed by 62,000 Loyalists [fleeing the country](!Wikipedia "Loyalist (American Revolution)#Emigration"); this is a butcher's bill that did not seem justified in the least by anything in Britain or America's subsequent history (what, were the British going to randomly massacre Americans for fun?), even now with a population of >300 million, and much less back when the population was 1/100th the size. Independence was granted to similar English colonies at the far smaller price of waiting a while: Canada was essentially autonomous by 1867 (less than a century later) and Australia was first settled in 1788 with autonomous colonies not long behind and the current Commonwealth formed by 1901. In the long run, independence *may* have been good for the USA, but this would be due to sheer accident: the British were holding the frontier at the Appalachians (see [Royal Proclamation of 1763](!Wikipedia)), and Napoleon likely would not have been willing to engage in the [Louisiana Purchase](!Wikipedia) with English colonies inasmuch as he was at war with England.
Neither of these is a very strong argument; the British could easily have revoked the Proclamation in the face of colonial resistance (and in practice *did*[^Flavell]), and Napoleon could not hold onto New France for very long against the British fleets. The argument from 'freedom' is a buzzword or unsupported by the facts - Canada and Australia are hardly bastions of totalitarianism and are ranked in 2011 by [Freedom House](!Wikipedia) as being as free as the USA. And there are important counter-arguments - Britain [ended slavery](!Wikipedia "Slavery Abolition Act 1833") very early on and likely would have ended slavery in the colonies as well. The South crucially depended on England's tacit support, so the [American Civil War](!Wikipedia) would either never have started or have been suppressed very quickly. The Civil War would also have lacked its intellectual justification of [states' rights](!Wikipedia) if the states had remained Crown colonies. The Civil War was so bloody and destructive^[From Wikipedia: 'It remains the deadliest war in American history, resulting in the deaths of 620,000 soldiers and an undetermined number of civilian casualties. According to John Huddleston, "Ten percent of all Northern males 20–45 years of age died, as did 30 percent of all Southern white males aged 18–40."'] that avoiding it is worth a great deal indeed. And then there come WWI and WWII. It is not hard to see how America remaining a colony would have been better for both Europe and America.
Since that paradigm shift in middle school, my view has changed little:
- Crane Brinton's _[The Anatomy of Revolution](!Wikipedia)_ confirmed my beliefs with statistics about the economic class of participants: naked financial self-interest is not a very convincing argument for plunging a country into war, given that England had incurred substantial debt defending and expanding the colonies, and the tax burden the colonists complained of was almost comically tiny compared to England proper's.
- Mencius Moldbug discussed a good deal of [primary source]( [material]( which supported my interpretation.
I particularly enjoyed [his description]( of the Pulitzer-winning _[The Ideological Origins of the American Revolution](!Wikipedia)_, a study of the popular circulars and essays (of which Thomas Paine's _[Common Sense](!Wikipedia "Common Sense (pamphlet)")_ is only the most famous) finding that the rebels and their leaders believed there was a conspiracy by English elites to strip them of their freedoms and crush the Protestants under the yoke of the [Church of England](!Wikipedia). Bailyn points out that no traces of any such conspiracy have ever been found in the diaries or memorandums or letters of said elites and hence the Founding Fathers were, as Moldbug claimed, *exactly* analogous to [9/11 Truthers](!Wikipedia) or [Birthers](!Wikipedia). Moldbug further points out that reality has directly contradicted their predictions, as both the Monarchy and Church of England have seen their power continuously decreasing to their present-day ceremonial status, a diminution in progress long before the American Revolution.
- Possibly on Moldbug's advice, I then read volume 1 of Murray Rothbard's _[Conceived in Liberty](!Wikipedia)_. I was unimpressed. Rothbard seems to think he is justifying the Revolution as a noble libertarian thing (except for those other scoundrels who just want to take over); but all I saw were scoundrels.
[^Flavell]: pg 120, _When London was Capital of America_, Julie Flavell 2011:
> "The British government hoped that a west sealed off from encroachments by whites, and where traders had to operate under the watchful eye of a British army detachment, would bring about good relations with the Indians. To the great discontent of speculators, in 1761 it was announced that all applications for land grants now had to go to London; no colonial government could approve them. The Proclamation of 1763 banned westward settlement altogether and instead encouraged colonists who wanted new lands to settle to the north in Quebec, and to the south in Florida. Within just a few years British ministers would be retreating from the Proclamation and granting western lands."
| I just read the things you changed your mind about and found it fascinating. I'm curious about your opinion of the American Revolution, though. I agree with you that a world without the AR would be a different one, better in some ways (stopping slavery sooner would probably have improved American demographics and the lives of those alive at the time, avoiding the civil war and prolonging the British Empire might have both done good things), but I'm curious what you think might have been worse if the AR hadn't happened.
| For example, I doubt America would have had the same levels of cross-European immigration if it remained a British colony (Australia is 30% English, 9% Irish, 8% Scottish, 4% German; Canada is 21% English, 15% Scottish, 14% Irish, 10% German; whereas the US is 15% German, 11% Irish, 9% English, 2% Scottish). I suspect that was overall good for Europe and the US, but I can't decide whether or not that's actually the case. It seems unlikely to me that Britain could have collected and concentrated the ambitious/bright of the world in similar ways (since a German scientist might emigrate to America but not to British Canada), but maybe without losing the US Britain would have gone on to global domination, which probably would have been awesome for everyone involved.
I actually have a different impression: that England was a refuge for controversial types of all sorts - for the French fleeing the Revolution, or for Germans escaping repression of their own (Karl Marx particularly comes to mind).
And it seems that physical proximity is important for research and development; could America have slowed down progress by spreading its researchers out over a continent and encouraging decentralization with things like the landgrant universities? With France and England, the acknowledged pre-eminence of Paris and London concentrates the intellectuals - and it's interesting to note that Murray's Human Accomplishment index seems to point to a per capita decline in scientific/artistic achievement around the 1890s/1900s...
Also, keep in mind that immigration to America was hugely economically motivated. The American intellectual elite had no especial love of immigrants distinct from the British intellectual elite - after the big wave in the 1800s, there was an equal backlash, part of the same backlash that gave us nativism and the Know-Nothings and eugenics, leading to almost a shutdown of legal immigration, eg. the Gentlemen's Agreement of 1907 with Japan.
If you're willing to buy that the British would back down from their ban on settling the West, I see very little reason to believe the British would have shut down immigration more strongly than an independent America did.
## Communism
In roughly middle school as well, I was very interested in economic injustice and guerrilla warfare, which naturally led me straight into the communist literature. I grew out of this when I realized that while I might not be able to pinpoint the problems in communism, a lot of that was due to the sheer obscurity and bullshitting in the literature (I finally gave up with [_Empire_](!Wikipedia "Empire (book)"), concluding the problem was not me, Marxism was really that intellectually worthless), and the practical results with economies & human lives spoke for themselves: the ideas were tried in so many countries by so many groups in so many different circumstances over so many decades that if there were anything to them, at least one country would have succeeded. In comparison, even with the broadest sample including hellholes like the Belgian Congo, capitalism can still point to success stories like Japan.
(Similar arguments can be used for science and religion: after early science got the basic inductive empirical formula right, it took off and within 2 or 3 centuries had conquered the intellectual world and assisted the conquest of much of the real world too; in contrast, 2 or 3 centuries after Christianity began, its texts were only beginning to congeal into a canon, it was minor, and the Romans were still making occasional efforts to exterminate this irksome religion. Charles Murray, whom I otherwise like a lot, attempted to argue in _Human Accomplishment_ that Christianity was a key factor in the great accomplishments of Western science & technology by some gibberish involving human dignity; the argument is intrinsically absurd - Greek astronomy and philosophy were active when Christianity started, St. Paul literally debated the Greek philosophers in Athens, and yet Christianity did not spark any revolution in the 100s, or 200s, or 300s, or for the next millennium, nor the next millennium and a half. It would literally be fairer to attribute science to William the Conqueror, because that's a gap one-third the size and there's at least a direct line from William the Conqueror to the Royal Society! If we try to be fairer and say it's *late* Christianity as exemplified by the philosophy of Thomas Aquinas - as influenced by non-Christian thought like Aristotle as it is - that still leaves us a gap of something like 300-500 years. Let us say I would find Murray's argument of more interest if it were coming from a non-Christian...)
## The Occult
This is not a particular error but a whole class of them. I was sure that the overall theistic explanations were false, but surely there were real phenomena going on? I'd read up on individual things like Nostradamus's prophecies or the Lance of Longinus, check the skeptics literature, and disbelieve; rinse and repeat until I finally dismissed the entire area, with some exceptions like the mental & physical benefits of meditation. One might say my experience was a little like [Susan Blackmore](!Wikipedia)'s career as recounted in ["The Elusive Open Mind: Ten Years of Negative Research in Parapsychology"](, _sans_ the detailed experiments. (I am still annoyed that I was unable to disbelieve the research on [Transcendental Meditation](!Wikipedia) until I read more about the corruption, deception, and falsified predictions of the TM organization itself.) Fortunately, I had basically given up on occult things by high school, before I read Eco's _[Foucault's Pendulum](!Wikipedia)_, so I don't feel *too* chagrined about this.
## Fiction
I spend most of my time reading; I also spent most of my time in elementary, middle, and high school reading. What has changed is *what* I read - I now read principally nonfiction (philosophy, economics, random sciences, etc.), whereas I used to read almost exclusively fiction. (I would include one nonfiction book in my stacks of books to check out, on a sort of 'vegetables' approach. Eat your vegetables and you can have dessert.) I, in fact, aspired to be a novelist. I thought fiction was a noble task, the highest production of humanity, and writers some of the best people around, producing immortal works of truth. Slowly this changed. I realized fiction changed nothing, and when it did change things, it was as often as not for the worse. Fiction promoted simplification, focus on sympathetic examples, and I recognized how much of my own infatuation with the Occult (among other errors) could be traced to fiction. What a strange belief, that you could find truths in lies.^[Nietzsche writes this summary of traditional philosophers to mock them, but isn't there a great deal of truth in it? "How could anything originate out of its opposite? Truth out of error or the pure and sunlike gaze of the sage out of lust? Such origins are impossible; whoever dreams of them is a fool."] And there are so many of them, too! So very many. (I wrote one essay on this topic, [Culture is not about Esthetics]().) I still produce [some fiction](index#fiction) these days, but mostly when I can't help it or as a writing exercise.
## Nicotine
I changed my mind about [nicotine](Nicotine) in 2011. I had naturally assumed, in line with the usual American cultural messages, that there was nothing good about tobacco and that smoking is deeply shameful, proving that you are a selfish lazy short-sighted person who is happy to commit slow suicide (taking others with him via second-hand smoke) and cost society a fortune in medical care. Then some mentions of nicotine as useful came up and I began researching it. I'm still not a fan of *smoking*, and I regard any tobacco with deep trepidation, but [the research literature](Nicotine#performance) seems pretty clear: nicotine enhances mental performance in multiple domains and may have some minor health benefits to boot. Nicotine sans tobacco seems like a clear win. (It amuses me that of the changes listed here, this is probably the one people will find most revolting and bizarre.)
# Potential changes
> "The mind cannot foresee its own advance."^[Friedrich Hayek, _The Constitution of Liberty_ (1960)]
There are some things I used to be certain about, but I am no longer certain either way; I await future developments which may tip me one way or the other.
## Near Singularity
I am no longer certain that [the Singularity](!Wikipedia) is near.
In the 1990s, all the numbers seemed to be ever-accelerating. Indeed, I could feel with Kurzweil that _[The Singularity is Near](!Wikipedia)_. But an odd thing happened in the 2000s (a dreary decade, distracted by the dual dissipation of Afghanistan & Iraq). The hardware kept getting better mostly in line with Moore's Law (troubling as the flight to parallelism is), but the AI software didn't seem to keep up. I am only a layman, but it looks as if all the AI applications one might cite in 2011 as progress are just old algorithms now practical with newer hardware. And economic growth slowed down, and the stock market ticked along, barely maintaining itself. The Human Genome Project completely fizzled out, with interesting insights and not much else. (It's great that genome sequencing has improved exactly as promised, but what about *everything else*? Where are our embryo selections, our germ-line engineering, our universal genetic therapies, our customized drugs?[^drugs]) The pharmaceutical industry has reached such diminishing returns that even the optimists have noticed the problems in the drug pipeline, problems so severe that it's hard to wave them away as due to that dratted FDA or ignorant consumers. As of 2007, the increases in longevity for the elderly[^infant] in the US have continued to be smaller each year and ["are probably getting slower"](, which isn't good news for those hoping to reach [Aubrey de Grey](!Wikipedia)'s "escape velocity"; and medicine has been a repeated disappointment even to [forecasting-savvy predictors]( (the '90s and the genetic revolution being especially remarkable for their lack of concrete improvements). Kurzweil published [an evaluation of his predictions]( up to ~2009 with great fanfare and self-congratulation, but reading through them, I was struck by how many he weaseled out on (claiming as a hit anything that existed in a lab or a microscopic market segment, even though in context he had clearly expected it to be widespread) and how often they failed due to unintelligent software.
[^drugs]: The rhetoric in the 1990s and early 2000s is amazing to read in retrospect; some of the claims were about as wrong as it is possible to be. For example, the CEO of [Millennium Pharmaceuticals](!Wikipedia) - not at all a small or fly-by-night pharmacorp - said [in 2000 it had high hopes for 6 drugs in human trials]( and [claimed in 2002]( that thanks to genetic research it would have 1-2 drugs entering trials every year within 3 years, for 6-12 new drugs by 2011. As of October 2011, it has exactly 1 approved drug.
[^infant]: The subsequently cited review covers this; almost all of the famous increase in longevity by decades is due to the young:
> "[Table 1]( shows the average number of years of life remaining from 1900 to 2007 from various ages, combining both sexes and ethnic groups. From birth, life expectancy increased from 49.2 years (previously estimated at 47.3 years in these same sources) in 1900 to 77.9 in 2007, a gain of life expectancy of nearly 29 years and a prodigious accomplishment. The increase was largely due to declines in perinatal mortality and reduction in infectious diseases which affected mainly younger persons. Over this period, developed nations moved from an era of acute infectious disease to one dominated by chronic illness. As a result, life extension from age 65 was increased only 6 years over the entire 20th century; from age 75 gains were only 4.2 years, from age 85 only 2.3 years and from age 100 a single year. From age 65 over the most recent 20 years, the gain has been about a year [[16]](
> Much confusion in longevity predictions comes from using projections of life expectancy at birth to estimate future population longevity [[18]]( For example, “If the pace of increase in life expectancy (from birth) for developed countries over the past two centuries continues through the 21st century, most babies born since 2000 will celebrate their 100th birthdays” [[29]]( Note from the 100-year line of [Table 1]( that life expectancies for centenarians would be projected to rise only one year in the 21st century, as in the 20th. Such attention-grabbing statements follow from projecting from birth rather than age 65, thus including infant and early life events to project “senior” aging, using data from women rather than both genders combined, cherry-picking the best data for each year, neglecting to compute effects of in-migration and out-migration, and others. "
Remarkably, some groups show a *decrease* in longevity; a centenarian in 1980 had an average remaining lifespan of 2.7 years, but by 2000, that had fallen to 2.6. There was an even larger reversal from 1940 (2.1) to 1960 (1.9). Younger groups show larger gains (eg. 85-year-olds had 6.0 years in 1980 and 6.3 in 2000), evidence for [compression of morbidity](!Wikipedia).
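A toy calculation makes the review's point vivid (these numbers are invented for illustration, not taken from the cited tables): eliminating most early-life deaths adds decades to life expectancy *at birth* while adding nothing to the expected remaining years of a 65-year-old.

```python
# Toy illustration (all numbers invented) of why projecting "senior" longevity
# from life expectancy *at birth* misleads: a big fall in early-life mortality
# raises the at-birth figure by decades while leaving expected remaining years
# at 65 essentially unchanged.

def life_expectancy_at_birth(early_mortality, age_at_early_death, adult_age_at_death):
    """Cohort average lifespan: a fraction dies young, the rest die at the adult age."""
    return (early_mortality * age_at_early_death
            + (1 - early_mortality) * adult_age_at_death)

adult_age_at_death = 72  # assumed average age at death for those surviving childhood

e0_then = life_expectancy_at_birth(0.30, 1, adult_age_at_death)  # ~30% dying in childhood
e0_now  = life_expectancy_at_birth(0.01, 1, adult_age_at_death)  # ~1% dying in childhood

print(f"at-birth life expectancy: {e0_then:.0f} -> {e0_now:.0f} (+{e0_now - e0_then:.0f} years)")
print(f"remaining years at 65 in both cases: {adult_age_at_death - 65}")
```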
And there are many troubling long-term metrics. I was deeply troubled to read [Charles Murray](!Wikipedia)'s _[Human Accomplishment](!Wikipedia)_ pointing out a long-term decline in discoveries per capita (despite ever-increasing scientists and artists per capita!), even after he corrected for everything he could think of. I didn't see any obvious mistakes. [Tyler Cowen](!Wikipedia)'s _[The Great Stagnation](!Wikipedia)_ twisted the knife further, and then I read [Joseph Tainter](!Wikipedia)'s _[The Collapse of Complex Societies](!Wikipedia)_. I have kept notes since and see little reason to expect a general exponential upwards over all fields, including the ones minimally connected to computing. ([Peter Thiel](!Wikipedia)'s ["The End of the Future"]( makes a distinction between "the progress in computers and the failure in energy"; he also makes an interesting link between the lack of progress and the many recent speculative bubbles in ["The Optimistic Thought Experiment"]().) The Singularity is still more likely than not, but these days, I tend to look towards emulation of human brains as the cause. Whole brain emulation is not likely for many decades, given the extreme computational demands (even if we are optimistic and take the Whole Brain Emulation Roadmap figures, one would not expect an upload until the 2030s) and it's not clear how useful an upload would be in the first place. It seems entirely possible that the mind will run slowly, be able to self-modify only in trivial ways, and in general be more a curiosity akin to the Space Shuttle than a pivotal moment in human history deserving of the title Singularity.
The difficulty is that a 'pure' AI is perfectly possible, and if the AI is not run at the exact instant that there is enough processing power available, ever more computing power in excess of what is needed (by definition) builds up. It is like a dry forest roasting in the summer sun: the longer the wait until the match, the faster and hotter the wildfire will burn[^overhang]. Perhaps paradoxically, the longer I live without seeing an AI of any kind, the wider my forecasts become - I will predict with increasingly high confidence normality (because the non-appearance makes it increasingly likely AI will not appear in the next time-unit and also increasingly likely AI is not possible, see [the hope function & AI](, but (in the increasingly improbable event of AI) the changes I predict become ever more radical.
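To make that widening concrete, here is a minimal Bayesian sketch along the lines of the hope-function reasoning; the prior, the annual arrival chance, and the Moore's-law overhang proxy are all invented assumptions for illustration, not estimates:

```python
# Each AI-free year does two things in this toy model: it lowers the posterior
# probability that human-level AI is possible at all (so "normality" becomes the
# confident default), while the hardware overhang - compute available in excess
# of what is needed - keeps compounding, so the conditional scenario, if AI does
# arrive, is ever more abrupt and radical.

PRIOR_POSSIBLE = 0.8   # assumed prior probability that human-level AI is possible
ANNUAL_HAZARD  = 0.02  # assumed chance AI arrives in any given year, if possible

def posterior_possible(years_without_ai: int) -> float:
    """P(AI is possible | no AI seen for `years_without_ai` years), by Bayes' rule."""
    likelihood_if_possible = (1 - ANNUAL_HAZARD) ** years_without_ai
    numerator = PRIOR_POSSIBLE * likelihood_if_possible
    return numerator / (numerator + (1 - PRIOR_POSSIBLE))

for years in (0, 10, 25, 50, 100):
    overhang = 2 ** (years / 2)  # crude Moore's-law proxy: compute doubles every ~2 years
    print(f"after {years:3d} AI-free years: P(AI possible) = {posterior_possible(years):.2f}, "
          f"hardware overhang ~ {overhang:,.0f}x")
```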
## IQ & race
[This one](!Wikipedia "Race and intelligence") may be even more inflammatory than supporting nicotine, but it's an important entry on any honest list. I never doubted that IQ was in part hereditary (Stephen Jay Gould aside, this is too obvious - what, everything from drug responses to skin and eye color would be heritable *except* the most important things which would have a huge effect on reproductive fitness?), but all the experts seemed to say that diluted over entire populations, any tendency would be non-existent. Well, OK, I could believe that; visible traits consistent over entire populations like skin color might differ systematically because of sexual selection or something, but why not leave IQ following the exact same bell curve in each population? There was no specific thing here that made me start to wonder, more a gradual undermining (Gould's work like _[The Mismeasure of Man](!Wikipedia)_ being [completely dishonest]( is one example - with enemies like that...) as I continued to read studies and wonder why Asian model minorities did so well, and a lack of really convincing counter-evidence (like one would expect the last two decades to have produced given the politics involved).
The massive fall in genome sequencing costs (projected to be <$1000 by ~2014) means that large human datasets will inevitably be produced, and the genetics directly examined, eliminating entire areas of objections to the previous heredity studies. By 2030 or 2040, I expect the issue will be definitively settled. I don't spend too much time thinking about this issue - the results will come in regardless of my opinion, and, unlike other issues here, it does not materially affect my worldview or suggest action. (Switching from occultism/theism to atheism implies many changed choices; a near vs far Singularity has considerable consequences for retirement planning, if nothing else; Neo-Luddism has implications for both career choice and retirement planning; fiction and nicotine also cash out in obvious ways. Of the topics here, perhaps only Communism and the American Revolution are as sterile in practical application.)
## Neo-Luddism
> "Almost in the same way as earlier physicists are said to have found suddenly that they had too little mathematical understanding to be able to master physics; we may say that young people today are suddenly in the position that ordinary common sense no longer suffices to meet the strange demands life makes. Everything has become so intricate that for its mastery an exceptional degree of understanding is required. For it is not enough any longer to be able to play the game well; but the question is again and again: what sort of game is to be played now anyway?"^[Wittgenstein's _Culture and Value_, MS 118 20r: 27.8.1937]
The idea of [technological unemployment](!Wikipedia "Luddite#In contemporary thought") - permanent [structural unemployment](!Wikipedia) and a [jobless recovery](!Wikipedia) - used to be dismissed contemptuously as the [Luddite fallacy](!Wikipedia). (There are models where technology *does* produce permanent unemployment, and quite plausible ones too; see [Autor et al]( and [Autor & Hamilton]([^hansongrow] and Krugman's [commentary]( pointing to [recent data]( showing the 'hollowing out' and 'deskilling' predicted by the Autor model, which is also consistent with the [long-term decline in teenage employment due to immigration]( Martin Ford has [some graphs]( explaining the complementation-substitution model.) But ever since the Internet bubble burst, it's been looking more and more likely, with scads of evidence for it since the housing bubble like the otherwise peculiar changes in the value of college degrees[^wsjCensus]. (This is closely related to my grounds for believing in a distant Singularity.) When I look around, it seems to me that we have been suffering tremendous unemployment for a long time. When Alex Tabarrok writes "If the Luddite fallacy were true we would all be out of work because productivity has been increasing for two centuries", I think, isn't that correct? If you're not a student, you're retired; if you're not retired, you're disabled^[Charles Murray reportedly cites statistics in _Coming Apart: The State of White America 1960-2010_ that the disability rate for men - working class - was 2% in 1960; with more than half a century of medical progress, the rate has not fallen but risen to 10%.]; if you're not disabled, perhaps you are institutionalized; if you're not that, maybe you're on welfare, or just unemployed.
[^wsjCensus]: Intuitively, one would guess that the value of education and changes in its value would follow some sort of linear or exponential trend - more is better, less is worse. If the value of a high school diploma increases, an undergraduate degree ought to increase more, and postgraduate degrees even more, right? A 'hollowing-out' model, on the other hand, would seem to predict a sort of U-curve, where a mediocre education is not worth what it costs and one would be better off either not bothering with more education or sticking it out and getting a 'real' degree. With that in mind, it is interesting [to look at the Census data](
> "In fact, new Census Bureau data show that if you divide the population by education, *on average* wages have risen only for those with graduate degrees over the past 10 years. (On average, of course, means that some have done better and some have done worse.) Here (thanks to economist Matthew Slaughter of Dartmouth College's Tuck School of Business) are changes in U.S. workers wages as reported in the latest Census Bureau report, adjusted for inflation using the CPI-U-RS measure recommended by the Bureau of Labor Statistics:"
> ![]( "Change between 2000 and 2010 in inflation-adjusted average earnings by educational attainment")
Compare now to most of human history, or just the 1300s:
- every kid in special ed would be out working on the farm; there would, if only from reduced [moral hazard](!Wikipedia)^[And there have always been rumors that the moral hazard is *substantial*; eg. the psychiatrist Steve Balt, ["How To Retire At Age 27"]( [and commentary](], be fewer disabled than now (federal [Supplemental Security Income](!Wikipedia) alone supports 8 million Americans)
- everyone in college would be out working (because the number of students was a rounding error and they didn't spend very long in higher education to begin with)
    Indeed, education and healthcare are a huge chunk of the US economy - and both have serious questions about how much good, exactly, they do and whether they are grotesquely inefficient or just inefficient.
- retirees didn't exist outside the tiny nobility
- 'guard labor' - people employed solely to control and ensure the productivity of others - has increased substantially ([Bowles & Jayadev 2006]( claim US guard labor has gone from 6% of the 1890 labor force to 26% in 2002; this is not due to manufacturing declines[^knowledge]); examples of guard labor:
- standing militaries were unusual (although effective when needed^[A key advantage of the [Byzantine Empire](!Wikipedia), according to [Edward Luttwak](!Wikipedia), was that it had an efficient tax system which enabled it to support a standing military, which was able to be trained in horse-archery all the way up to steppe-nomad standards - a task which took years for the trainees who could manage it at all. (In contrast, the US military is happy to send many soldiers into combat with only a few months of training.)]); the US maintains the [second-largest active](!Wikipedia "List of countries by number of troops") military in the world - ~1.5m (~0.5% of the population), which employs millions more with its $700 billion budget^[If you think that's the *whole* military-industrial-intelligence budget, you are quite naive.] and is a key source of pork^[Witness the massive fights over the [Base Realignment and Closure](!Wikipedia) and unusual measures required; the Congressmen aren't stupid, they understand how valuable the military-industrial welfare is for their communities.] and make-work
- prisons were mostly for *temporary* incarceration pending trial or punishment^[Imprisonment as a *permanent* punishment was [used rarely](!Wikipedia "Prison#History") prior to the Industrial Revolution, and what prisons there were often were primarily a mine or other facility of that kind; it is very expensive to imprison and only imprison someone, which is why techniques like fines (eg. Northern Europe), torture (China), exile (Greece) or [penal transportation](!Wikipedia) (England & Australia), or execution (everyone) were the usual methods.]; the US [currently](!Wikipedia "Incarceration in the United States") [has]( ~2.3m (nearly 1% of the population!), and perhaps another 4.9m on parole/probation. (See also [the relationship]( "An Institutionalization Effect: The Impact of Mental Hospitalization and Imprisonment on Homicide in the United States, 1934-2001") of psychiatric imprisonment with criminal imprisonment.) That's impressive enough, but as with the military, consider how many people are tied down solely *because* of the need to maintain and supply the prison system - prison wardens, builders, police etc.
- people worked *hard*; the [8-hour day](!Wikipedia) and 5-day workweek were major hard-fought changes (a plank of the *[Communist Manifesto](!Wikipedia)*!). Switching from a 16-hour to an 8-hour day means we are half-retired already and need many more workers than otherwise.
In contrast, Americans now spend most of their lives not working.
The unemployment rate looks good - 9% is surely a refutation of the Luddite fallacy! - until you look into the meat factory and see that that is the best rate - for college graduates actively looking for jobs - and not the rate for the overall population, including those who have given up. Economist [Alan Krueger]( writes of the employment-to-population ratio (which covers *only* 15-64 year olds):
> "Tellingly, the [employment-to-population rate](!Wikipedia "Employment-to-population ratio") has hardly budged since reaching a low of 58.2 percent in December 2009. Last month it stood at just 58.4 percent. Even in the expansion from 2002 to 2007 the share of the population employed never reached the peak of 64.7 percent it attained before the March-November 2001 recession."
What do you suppose the rate was in 1300 in the poorer 99% of the world population (remembering how homemaking and raising children is effectively a full-time job)? I'd bet it was a lot higher than the world record in 2005, Iceland's 84%. And Iceland is a very brainy place. What are the merely average with IQs of 100-110 supposed to do? (Heck, what is the half of America with IQs in that region or below supposed to do? Learn C++ and statistics so they can work on Wall Street?) If you want to see the future, look at our youth; where are [summer jobs]( these days? Gregory Clark comments sardonically (although he was likely not thinking of [whole brain emulation]()) in _[Farewell to Alms](!Wikipedia)_:
> "Thus, while in preindustrial agrarian societies half or more of the national income typically went to the owners of land and capital, in modern industrialized societies their share is normally less than a quarter. Technological advance might have been expected to dramatically reduce unskilled wages. After all, there was a class of workers in the preindustrial economy who, offering only brute strength, were quickly swept aside by machinery. By 1914 most horses had disappeared from the British economy, swept aside by steam and internal combustion engines, even though a million had been at work in the early nineteenth century. When their value in production fell below their maintenance costs they were condemned to the knacker's yard."
Technology may increase total wealth under many models, but there's a key loophole in the idea of 'Pareto-improving' gains - *they don't **ever** have to make some people better off*. And a Pareto-improvement is a good result! Many models don't guarantee even that - it's perfectly possible to become worse off (see the horses above and the fate of humans in [Robin Hanson](!Wikipedia)'s 'crack of a future dawn' scenario). This is closely related to what I've dubbed the '"Luddite fallacy" fallacy'^[Along the lines of the [Pascal's Wager Fallacy Fallacy](]: technologists who are extremely intelligent and have worked most of their life only with fellow potential [Mensans](!Wikipedia "MENSA") confidently say that "if there is structural unemployment (and I'm being generous in granting you Luddites even this contention), well, better education and training will fix that!" It's a little hard to appreciate what a stupendous mixture of availability bias, infinite optimism, and plain denial of intelligence differences this all is. [Marc Andreessen]( offers an example in 2011:
> "Secondly, many people in the U.S. and around the world lack the education and skills required to participate in the great new companies coming out of the software revolution. This is a tragedy since every company I work with is absolutely starved for talent. Qualified software engineers, managers, marketers and salespeople in Silicon Valley can rack up dozens of high-paying, high-upside job offers any time they want, while national unemployment and underemployment is sky high. This problem is even worse than it looks because many workers in existing industries will be stranded on the wrong side of software-based disruption and may never be able to work in their fields again. There's no way through this problem other than education, and we have a long way to go."
I see. So all we have to do with all the people with <120 IQs, who struggled with algebra and never made it to calculus (when they had the self-discipline to learn it at all), is just to train them into world-class software engineers and managers who can satisfy Silicon Valley standards; and we have to do this for the first time in human history. Gosh, is that all? Why didn't you say so before - we'll get on that *right away*!
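To put toy numbers on the Pareto point a few paragraphs back (all figures invented for illustration): one innovation can qualify as a Pareto improvement while leaving the unskilled exactly where they started, and another can grow total wealth while making them absolutely worse off, as with Clark's horses.

```python
# Toy incomes for two groups under two hypothetical innovations (all numbers invented).
before       = {"skilled": 50, "unskilled": 20}
automation_a = {"skilled": 90, "unskilled": 20}    # nobody worse off, but the unskilled gain nothing
automation_b = {"skilled": 140, "unskilled": 5}    # bigger total pie, unskilled absolutely worse off

def is_pareto_improvement(old, new):
    """No one worse off, and at least one person strictly better off."""
    return all(new[g] >= old[g] for g in old) and any(new[g] > old[g] for g in old)

for name, scenario in [("automation A", automation_a), ("automation B", automation_b)]:
    print(f"{name}: total wealth {sum(scenario.values())} (was {sum(before.values())}), "
          f"Pareto improvement: {is_pareto_improvement(before, scenario)}, "
          f"unskilled change: {scenario['unskilled'] - before['unskilled']:+d}")
```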
It's always a little strange to read an economist remark that potential returns to education have been rising and so more people should get an education, while this same economist somehow fails to realize that the *continued presence of this free lunch indicates it is not free at all*. Look at how the trend of increasing education has stalled out:
!["Education attainment climbed dramatically in the 20th century, but its growth has flattened recently (source: Census)"]( <!-- -->
Apparently markets work and people respond to incentives - *except* when it comes to education, where people simply aren't picking up those $100 bills lying on the ground and haven't been picking them up for decades, for some reason[^Acemoglu]. I see. (In England, there's evidence that college graduates were still being successfully absorbed in the '90s and earlier, although apparently there were relatively few of them during those periods[^englandOvereducation].)
[^knowledge]: From pg 13/340 of Bowles & Jayadev 2006:
> "Other differences in technology (or different distributions of labor across sectors of the economy) may account for some of the differences. However, the data on supervision intensity by manufacturing sector in five sub-Saharan African countries shown in Table 4 suggest large country effects independent of the composition of output. Supervisory intensities in Zambia’s 'wood and furniture' and 'food processing' industries, are twice and five times Ghana’s respectively. A country-and-industry fixed effects regression indicates that Zambia’s supervision intensity conditioned on industrial structure is two and a half times Ghana’s. Of course these differences could reflect within sector variation among countries in output composition or technologies, but there is no way to determine how much (if any) of the estimated country effects are due to this. We also explored if supervision intensity was related to more advanced technologies generically. However, in the advanced economy dataset (shown in Table 2) the value added of knowledge intensive sectors as a share of gross value added was substantially uncorrelated with the supervisory ratio (r = 0.14).
> While the data are inadequate to provide a compelling test of the hypothesis, we thus find little evidence that the increase in guard labor in the U.S. or the differences across the countries is due to differences in output composition and technology. A more likely explanation is what we term 'enforcement specialization'. Economic development proceeds through a process of specialization and increasing division of labor; the work of perpetuating a society’s institutions is no exception to this truism....Our data indicate that the United States devotes well over twice as large a fraction of its labor force to guard labor as does Switzerland. This may occur in part because peer monitoring and informal sanctioning play a larger role in Switzerland, as well as the fact that ordinary Swiss citizens have military defense capacities and duties and are not counted in our data as soldiers."
[^Acemoglu]: For example, MIT economist "[Daron Acemoglu](!Wikipedia) [on Inequality](" has all the pieces but somehow escapes the obvious conclusion:
> "*Let’s go through your books. Your first choice is **The Race between Education and Technology**, published by Harvard University Press. You mentioned in an earlier email to me that it is “a must-read for anyone interested in inequality”. Tell me more.*
> This is a really wonderful book. It gives a masterful outline of the standard economic model, where earnings are proportional to contribution, or to productivity. It highlights in a very clear manner what determines the productivities of different individuals and different groups. It takes its cue from a phrase that the famous Dutch economist, Jan Tinbergen coined. The key idea is that technological changes often increase the demand for more skilled workers, so in order to keep inequality in check you need to have a steady increase in the supply of skilled workers in the economy. He called this “the race between education and technology”. If the race is won by technology, inequality tends to increase, if the race is won by education, inequality tends to decrease.
> The authors, Claudia Goldin and Larry Katz, show that this is actually a pretty good model in terms of explaining the last 100 years or so of US history. They give an excellent historical account of how the US education system was formed and why it was very progressive, leading to a very large increase in the supply of educated workers, in the first half of the century. This created greater equality in the US than in many other parts of the world.
> They also point to three things that have changed that picture over the last 30 to 40 years. One is that technology has become even more biased towards more skilled, higher earning workers than before. So, all else being equal, that will tend to increase inequality. Secondly, we’ve been going through a phase of globalisation. Things such as trading with China – where low-skill labour is much cheaper – are putting pressure on low wages. Third, and possibly most important, is that the US education system has been failing terribly at some level. We haven’t been able to increase the share of our youth that completes college or high school. It’s really remarkable, and most people wouldn’t actually guess this, but in the US, the cohorts that had the highest high-school graduation rates were the ones that were graduating in the middle of the 1960s. Our high-school graduation rate has actually been declining since then. If you look at college, it’s the same thing. This is hugely important, and it’s really quite shocking. It has a major effect on inequality, because it is making skills much more scarce then they should be.
> *Do Goldin and Katz go into the reasons why education is failing in the US?*
> They do discuss it, but nobody knows. It’s not a monocausal, simple story. It’s not that we’re spending less. In fact, we are spending more. It’s certainly not that college is not valued, it’s valued a lot. The college premium – what college graduates earn relative to high-school graduates – has been increasing rapidly. It’s not that the US is not investing enough in low-income schools. There has been a lot of investment in low-income schools. Not just free lunches, but lots of grants and other forms of spending from both states and the federal government."
The failure of education to increase may be [masked by the dying of the uneducated elderly]( "'The Myth of the Education Plateau', by Bryan Caplan"), but that is an effect that can only last so long. And then we will see something that looks [more like this]( "'Goldin-Katz and the Education Plateau', by Bryan Caplan"), a log graph which may begin petering out soon (and which looks like a diminishing-returns graph - every time unit sees less and less increase squeezed out as additional efforts or larger returns are applied to the populace):
![Log-Relative Supply of College/non-College Labor, 1963-2008]( "by Lawrence F. Katz")
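To make the 'race' metaphor concrete, the standard formalization (the Katz-Murphy/Goldin-Katz framework; my own summary of the textbook version, not a quotation from the book) treats college and non-college labor as imperfect substitutes in a CES production function, so the log college wage premium is

$$\ln\left(\frac{w_H}{w_L}\right) = \text{constant} + \frac{\sigma-1}{\sigma}\,\ln\left(\frac{A_H}{A_L}\right) - \frac{1}{\sigma}\,\ln\left(\frac{H}{L}\right)$$

where $H/L$ is the relative supply of college labor (education), $A_H/A_L$ is skill-biased technology (demand), and $\sigma$ is the elasticity of substitution between the two kinds of labor (most estimates put it somewhere around 1.4-2). Inequality widens when the technology term grows faster than the supply term and narrows when supply keeps pace; a plateau in the supply graph above means technology wins the race by default.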
Or there is economist [Alex Tabarrok](!Wikipedia), who in a [podcast]( identifies the problem and blames it on a decrease in teacher quality!
> "You argue that the American education system, both K-12 and at the college levels, has got some serious problems. Let's talk about it. What's wrong with it? And of course, as a result, education is a key part of innovation and productivity. If you don't have a well-educated populace you are not going to have a very good economy. What's wrong with our education system?
> Let's talk about K-12. Here's two remarkable facts, which have just blown me away. Right now, in the United States, people 55-64 years old, they are more likely to have had a high school education than 25-34 year olds. Just a little bit, but they are more likely. So, you look everywhere in the world and what do you see? You see younger people having more education than older people. Not true in the United States. That is a shocking claim. Incredible. And the reason is that the drop-out rate has increased? Exactly. So, the high school dropout rate has increased. Now, 25% of males in the United States drop out of high school. And that's increased since the 1960s, even as the prospects for a high school dropout have gotten much worse. We've seen an increase, 21st century--25% of males not graduating high school. That's mind-boggling. Why? One of the underlying facts relating to education, which is [?], which is that the more education you get on average--and I'm going to talk about why on average can be very misleading--high school graduates do better than high school dropouts; people with some college do better than high school graduates; people graduating from college do better than people with some college; people with graduate degrees do better than college grads. And the differences are large. Particularly if you compare a college graduate to a high school dropout, there is an enormous difference.
> So, normally we would say: Well, this problem kind of solves itself. There's a natural incentive to stay in school, and I wouldn't worry about it. Why should we be worrying about it? It doesn't seem to be working. Why isn't it working and what could be done? I think there's a few problems. One is the quality of teachers I think has actually gone down. So I think that's a problem. This is a case of every silver lining has a cloud, or something like that, in that in the 1970s about half of college-educated women became teachers. This is at a time when there's maybe 4% are getting an MBA, less than 10% are going to medical school, going to law school. These smart women, they are becoming teachers. Well, as we've opened up, by 1980 you've got 30% or so of the incoming class of MBAs, doctors, lawyers, are women. Which is great. Their comparative advantage, moving into these fields, productivity, and so forth. And yet that has meant that on average, the quality of teachers, the quality pool we are drawing from, has gone down in terms of their SAT levels and so forth. So, I think we need to fix that."
[^englandOvereducation]: ["Over-Education and the Skills of UK Graduates"]( (Chevalier & Lindley 2006):
> "Before the Eighties, Britain had one of the lowest participation rates in higher education across OECD countries. Consequently, increasing participation in higher education became the mantra of British governments. The proportion of school leavers reaching higher education began to slowly increase during the early Eighties, until it suddenly increased rapidly towards the end of the decade. As illustrated in Figure 1, the proportion of a cohort participating in higher education doubled over a five year period, from 15% in 1988 to 30% by 1992...we analyse the early labour market experience of the 1995 cohort, since these people graduated at the peak of the higher education expansion period. We find a reduction in the proportion of matched graduates, compared to the 1990 cohort. This suggests that the labour market could not fully accommodate the increased inflow of new graduates, although this did not lead to an increased wage penalty associated with over-education. Hence, the post-expansion cohort had the appropriate skills to succeed in the labour market. Secondly, we are the first to investigate whether the over-education wage penalty remains even after controlling for observable graduate skills, skill mismatch, as well as unobservable characteristics. We find some evidence that genuinely over-educated individuals lack 'graduate skills'; mostly management and leadership skills. Additionally, the longitudinal element of the dataset is used to create a measure of time-invariant labour market unobservable characteristics which are also found to be an important determinant of the probability to be over-educated. Over-education impacts negatively on the wages of graduates, over and above skill levels (observed or not) which suggests that the penalty cannot be solely explained by a lack of skills but also reflects some job idiosyncratic characteristics. It also increases unemployment by up to three months but does not lead to an increase in job search, as the numbers of job held since graduation is not affected by the current over-education status.
> ...Most of the UK literature has relied on self-assessment of over-education, and typically finds that 30% of graduates are overeducated^4^. Battu et al. (2000) provide one of the most comprehensive studies of over-education. The average proportion of over-educated individuals across the 36 estimates of their analysis was around one-quarter, with estimates ranging between one-fourteenth and as high as two-thirds. For the UK, Battu et al (2000) concluded that over-education has not increased in the early Nineties.
> This result is supported by Groot and Maassen van den Brink (2000) whose meta-analysis of 25 studies found no tendency for a world-wide increase in the incidence of over-education despite the general improvement in the level of education, although they do suggest it has become increasingly concentrated among lower ability workers, suggesting the over-education is not solely due to mismatch of workers and jobs.
> Freeman's pioneering work on over-education (1976) suggests that over-education is a temporary phenomenon due to friction in the labour market, although UK evidence is contrary to this assumption. Dolton and Vignoles (2000) found that 38% of 1980 UK graduates were over-educated in their first job and that 30% remained in that state six years later. Over a longer period there is also evidence that over-education is a permanent feature of some graduates' career (Dolton and Silles, 2003). For graduates the wage penalty associated with over-education ranges between 11 and 30 percent, however, contrary to Freeman's view over-education has not led to a decrease in the UK return to education in general (Machin, 1999 and Dearden et al., 2002) even if recent evidence by Walker and Zhu (2005) report lower returns for the most recent cohort of graduates.
> The general consensus is that after controlling for differences in socio-economic and institutional factors, over-education is a consequence of unobservable elements such as heterogeneous ability and skills. There is evidence to support this from studies by Büchel and Pollmann-Schult (2001), Bauer (2002), Chevalier (2003) and Frenette (2004). Most over-educated workers are efficiently matched into appropriate jobs and after accounting for the unobserved heterogeneity, the wage penalty for over-education is reduced. However, a remaining group of workers appear over-skilled for their jobs and suffer from substantial wage penalties."
Most economists continue to dismiss this line of thought, saying that the technological changes [are real]( but that things will work themselves out somehow. Robin Hanson, for example, [seems to think that](, and he's a better economist than me and has thought a great deal about AI and [the economic implications]( Their opposition to Neo-Luddism is about the only reason I remain uncertain, because otherwise the data for the economic troubles starting in 2007, and especially the unemployment data, seem to match nicely. From a [Federal Reserve brief]( (principally arguing that the data is better matched by a model in which the longer a worker remains unemployed, the longer they are likely to remain unemployed):
> "For most of the post–World War II era, unemployment has been a relatively short-lived experience for the average worker. Between 1960 and 2010, the average duration of unemployment was about 14 weeks. The duration always rose during recessions, but relatively quick upticks in hiring after recessions kept the long-term unemployment rate fairly low. Even during the two “jobless recoveries” that followed the 1990–91 and 2001 recessions, the peak shares of long-term unemployment were 21 percent and 23 percent, respectively. But the 2007–09 recession represents a marked departure from previous experience: the average duration has increased to 40 weeks, and the share of long-term unemployment remains high more than two years after the official end of the recession.^[The Bureau of Labor Statistics revised its data-collection method for unemployment duration in January 2011. Based on the previous method, the average unemployment duration would be about 37 weeks rather than 40 weeks. For more information, see <>.] Never before in the postwar period have the unemployed been unemployed for so long."
The American oddities began before the current recession:
> "Unemployment increased during the 2001 recession, but it subsequently fell almost to its previous low (from point A to B and then back to C). In contrast, job openings plummeted—much more sharply than unemployment rose—and then failed to recover. In previous recoveries, openings eventually outnumbered job seekers (where a rising blue line crosses a falling green line), but during the last recovery a labor shortage never emerged. The anemic recovery was followed in 2007 by an increase in unemployment to levels not seen since the early 1980s (the rise after point C). However, job openings fell only a little—and then recovered. The recession did not reduce hiring; it just dumped a lot more people into an already weak labor market."[^brookings]
[^brookings]: ["A Decade of Slack Labor Markets"](, Scott Winship, [Brookings Institution](!Wikipedia) Fellow; other good quotes:
> "From 1951 through 2007, there were never more than three unemployed workers for each job opening, and it was rare for that figure even to hit two-to-one. In contrast, there have been more than three jobseekers per opening in every single month since September 2008. The ratio peaked somewhere between five-to-one and seven-to-one in mid-2009. It has since declined but we have far to go before we return to “normal” levels.
> The bleak outlook for jobseekers has three immediate sources. The sharp deterioration beginning in early 2007 is the most dramatic feature of the above chart (the rise in job scarcity after point C in the chart, the steepness of which depends on the data source used). But two less obvious factors predated the recession. The first is the steepness of the rise in job scarcity during the *previous* recession in 2001 (from point A to point B), which rivaled that during the deep downturn of the early 1980s. The second is the failure between 2003 and 2007 of jobs per jobseeker to recover from the 2001 recession (the failure of point C to fall back to point A)."
And then there is the well-known example of Japan. Yet overall, Japanese, American, and global wealth alike continue to grow. The hopeful scenario is that all we are suffering is temporary pains which will eventually be grown out of, as [John Maynard Keynes](!Wikipedia) forecast in his 1930 essay ["Economic Possibilities for our Grandchildren"](
> "At the same time technical improvements in manufacture and transport have been proceeding at a greater rate in the last ten years than ever before in history. In the United States factory output per head was 40 per cent greater in 1925 than in 1919. In Europe we are held back by temporary obstacles, but even so it is safe to say that technical efficiency is increasing by more than 1 per cent per annum compound...For the moment the very rapidity of these changes is hurting us and bringing difficult problems to solve. Those countries are suffering relatively which are not in the vanguard of progress. We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come--namely, technological unemployment. This means unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour. But this is only a temporary phase of maladjustment. All this means in the long run that mankind is solving its economic problem. I would predict that the standard of life in progressive countries one hundred years hence will be between four and eight times as high as it is to-day. There would be nothing surprising in this even in the light of our present knowledge. It would not be foolish to contemplate the possibility of afar greater progress still."
[^hansongrow]: Or Robin Hanson's paper, ["Economic Growth Given Machine Intelligence"](
> "Machines complement human labor when they become more productive at the jobs they perform, but machines also substitute for human labor by taking over human jobs. At first, expensive hardware and software does only the few jobs where computers have the strongest advantage over humans. Eventually, computers do most jobs. At first, complementary effects dominate, and human wages rise with computer productivity. But eventually substitution can dominate, making wages fall as fast as computer prices now do. An intelligence population explosion makes per-intelligence consumption fall this fast, while economic growth rates rise by an order of magnitude or more."
[^overhang]: This is known as the 'overhang' argument. Its development and canonical form are unclear; it may simply be Singularitarian folklore-knowledge. Eliezer Yudkowsky, from the 2008 ["Hard Takeoff"](
> "Or consider the notion of sudden resource bonanzas. Suppose there's a semi-sophisticated Artificial General Intelligence running on a cluster of a thousand CPUs. The AI has not hit a wall - it's still improving itself - but its self-improvement is going so *slowly* that, the AI calculates, it will take another fifty years for it to engineer / implement / refine just the changes it currently has in mind. Even if this AI would go FOOM eventually, its current progress is so slow as to constitute being flatlined...
> So the AI turns its attention to examining certain blobs of binary code - code composing operating systems, or routers, or DNS services - and then takes over all the poorly defended computers on the Internet. This may not require what humans would regard as genius, just the ability to examine lots of machine code and do relatively low-grade reasoning on millions of bytes of it. (I have a saying/hypothesis that a *human* trying to write *code* is like someone without a visual cortex trying to paint a picture - we can do it eventually, but we have to go pixel by pixel because we lack a sensory modality for that medium; it's not our native environment.) The Future may also have more legal ways to obtain large amounts of computing power quickly.
> ...A subtler sort of hardware overhang, I suspect, is represented by modern CPUs having a 2GHz *serial* speed, in contrast to neurons that spike 100 times per second on a good day. The "hundred-step rule" in computational neuroscience is a rule of thumb that any postulated neural algorithm which runs in realtime has to perform its job in less than 100 *serial* steps one after the other. We do not understand how to efficiently use the computer hardware we have now, to do intelligent thinking. But the much-vaunted "massive parallelism" of the human brain, is, I suspect, [mostly cache lookups]( to make up for the sheer awkwardness of the brain's *serial* slowness - if your computer ran at 200Hz, you'd have to resort to all sorts of absurdly massive parallelism to get anything done in realtime. I suspect that, if *correctly designed*, a midsize computer cluster would be able to get high-grade thinking done at a serial speed much faster than human, even if the total parallel computing power was less.
> So that's another kind of overhang: because our computing hardware has run so far ahead of AI *theory*, we have incredibly fast computers we don't know how to use *for thinking*; getting AI *right* could produce a huge, discontinuous jolt, as the speed of high-grade thought on this planet suddenly dropped into computer time.
> A still subtler kind of overhang would be represented by human [failure to use our gathered experimental data efficiently](^[A better link on inefficient human induction is probably Phil Goetz's ["Information Theory and FOOM"](]."
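For a sense of the scale of the serial-speed gap he is pointing at (simple arithmetic on the figures in the quote, nothing more):

```python
cpu_serial_hz = 2e9   # ~2 GHz serial clock, per the quote
neuron_hz = 100       # neurons spiking ~100 times/second "on a good day"

print(f"serial speed ratio: {cpu_serial_hz / neuron_hz:,.0f}x")  # 20,000,000x
# So where the "hundred-step rule" allows a realtime brain algorithm fewer than 100
# sequential steps, a single 2 GHz core executes ~2 billion sequential steps per second.
```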
[Anders Sandberg & Carl Shulman]( gave a 2010 talk on it; from the blog post:
> "We give an argument for why - if the AI singularity happens - an early singularity is likely to be slower and more predictable than a late-occurring one....
> If you are on the hardware side, how much hardware do you believe will be available when the first human level AI occurs? You should expect the first AI to be pretty close to the limits of what researchers can afford: a project running on the future counterpart to Sequoia or the Google servers. There will not be much extra computing power available to run more copies. An intelligence explosion will be bounded by the growth of more hardware.
> If you are on the software side, you should expect that hardware has continued to increase after passing "human equivalence". When the AI is finally constructed after all the human and conceptual bottlenecks have passed, hardware will be much better than needed to just run a human-level AI. You have a "hardware overhang" allowing you to run many copies (or fast or big versions) immediately afterwards. A rapid and sharp intelligence explosion is possible.
> This leads to our conclusion: if you are an optimist about software, you should expect an early singularity that involves an intelligence explosion that at the start grows "just" as Moore's law (or its successor). If you are a pessimist about software, you should expect a late singularity that is very sharp. It looks like it is hard to coherently argue for a late but smooth singularity.
> ...Note that sharp, unpredictable singularities are dangerous. If the breakthrough is simply a matter of the right insights and experiments to finally cohere (after endless disappointing performance over a long time) and then will lead to an intelligence explosion nearly instantly, then most societies will be unprepared, there will be little time to make the AIs docile, there are strong first-mover advantages and incentives to compromise on safety. A recipe for some nasty dynamics."
[Jaan Tallinn]( in 2011:
> "It's important to note that with every year the AI algorithm remains unsolved, the hardware marches to the beat of Moore's Law – creating a massive hardware overhang. The first AI is likely to find itself running on a computer that's several orders of magnitude faster than needed for human level intelligence. Not to mention that it will find an Internet worth of computers to take over and retool for its purpose."
# See Also
- [Predictions](Prediction markets)