description: Misc. thoughts, memories, proto-essays, musings, etc.
"And on that dread day, the Ineffable One will summon the artificers and makers of graven images, and He will command them to give life to their creations, and failing, they and their creations will be dedicated to the flames..."
"Some say that a god lives on in the faith and memory of its believers. They point to computers and say, 'Behold, they need but think all together in a particular & precise mode, and from nowhere appear things real and greater than any they thought. Might not the same be true of humans, who are so much greater?' But this is no more true than a painting of a flower is the flower itself."
# _Evangelion_'s influence on _RahXephon_
- Paper idea: "The anxiety of influence: RahXephon's response to Neon Genesis Evangelion"
- [The Anxiety of Influence](!Wikipedia). every artist makes his predecessors... Borges. RahXephon and Eva... [RahXephon#Neon Genesis Evangelion](!Wikipedia) (comparison). Eva ineluctably influenced RahXephon... RahXephon's manga began in 2001, Evangelion's TV series in 1995. mecha anime are remixes... variations. the pleasure of watching one is seeing the variation on the Truth, of trying to see each one get closer and closer to the heart of the matter. "I've taken on a risk: 'It's just an imitation'. And for now I can only write this explanation. But perhaps our 'original' lies somewhere within there." (Hideaki Anno, from his story treatment "What were we trying to make here?" written before NGE began being produced by Gainax, as recorded on page 171 of _Neon Genesis Evangelion Volume 1_, Yoshiyuki Sadamoto, translated by Fred Burke. August 2003. ISBN 1-56931-294-X).
There is a deep relation here to Japanese poetry, in which originality is not necessarily valued. every change in a mecha anime from its predecessors is a reply, an ongoing dialog back and forth. how do mecha change? where have they gone to? look for the RahXephon bibles. Evangelion drew its names from Greek; in "RahXephon", "-ephon" serves as a suffix for instrument. RahXephon's creators wanted to create something new... track down references 2-5 of [RahXephon#Notes and reference](!Wikipedia).
# Long term investment
> "That is, from January 1926 through December 2002, when holding periods were 19 years or longer, the cumulative real return on stocks was never negative..."
How does one engage in extremely long investments? On a time-scale of centuries, investment is a difficult task, especially if one seeks to avoid erosion of returns by the costs of active management.
'Unit Investment Trust (UIT) is a US investment company offering a fixed (unmanaged) portfolio of securities having a definite life.'
'A closed-end fund is a collective investment scheme with a limited number of shares'
In long-term investments, one must become concerned about biases in the data used to make decisions. Many of these biases fall under the general rubric of "observer biases" - the canonical example being that stocks look like excellent investments if you only consider America's stock market, where returns over long periods have been quite good. For example, if you had invested by tracking the major indices in any period from January 1926 through December 2002 and had held onto your investment for at least 19 years, you were guaranteed a positive real return. Of course, the specification of place (America) and time period (before the Depression and after the Internet bubble) should alert us that this guarantee may not hold elsewhere. Had a long-term investor in the middle of the 19th century decided to invest in a large up-and-coming country with a booming economy and strong military (much like the United States has been for much of the 20th century), they would have reaped excellent returns - that is, until the hyperinflation of the Weimar Republic. Had their returns survived the inflation and the imposition of a new currency, the destruction of the Third Reich would surely have rendered their shares and Reichsmarks worthless. Similarly for another up-and-coming nation - Japan. Mention of Russia need not even be made.
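The "no negative 19-year holding period" claim is mechanical to check against any annual return series; a minimal sketch (the return figures below are illustrative toy data, not the actual 1926-2002 record):

```python
# Sketch: minimum cumulative real return over all rolling N-year holding
# periods of an annual real-return series. Toy data, for illustration only.

def worst_holding_period(annual_returns, years):
    """Return the worst cumulative growth over any `years`-long window."""
    worst = None
    for start in range(len(annual_returns) - years + 1):
        total = 1.0
        for r in annual_returns[start:start + years]:
            total *= (1 + r)
        growth = total - 1.0
        if worst is None or growth < worst:
            worst = growth
    return worst

# Toy series: one crash year embedded in otherwise steady positive returns.
returns = [0.07] * 10 + [-0.40] + [0.07] * 15

print(worst_holding_period(returns, 5))   # short windows can be negative
print(worst_holding_period(returns, 19))  # long windows recover past the crash
```

The point of the exercise is that the guarantee is a property of one particular series; feed in a series ending in hyperinflation or war and no holding period saves you.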
Clearly, diversifying among companies in a sector, or even sectors in a national economy, is not enough. Disaster can strike an entire nation. Rosy returns for stocks quietly ignore those bloody years in which exchanges lost essentially all their real value, and whose records burned in the flames of war. Over a timespan of a century, it is impossible to know whether such destruction will be visited on a given country, or even whether it will still exist as a unit. How could Germany, the preeminent power on the Continent, with a burgeoning navy rivaling Britain's, with the famous Prussian military and Junkers, with an effective industrial economy still famed for the quality of its mechanisms, and with a large homogeneous population of hardy people, possibly fall so low as to be utterly conquered? And by the United States and others, for that matter? How could Japan, with its fanatical warriors and equally fanatical populace, its massive fleet and some of the best airplanes in the world - a combination that had humbled Russia, that had occupied Korea for nigh on 40 years, which easily set up puppet governments in Manchuria and China when and where it pleased - how could it have been defeated so wretchedly as to see its population decimated and its governance wholly supplanted? How could a god be dethroned?
It is perhaps not too much to say that investors in the United States, who say that the Treasury Bond has never failed to be redeemed and that the United States can never fall, are overconfident in their assessment. Inflation need not be hyper to cause losses. Greater nations have been destroyed quickly. Who remembers the days when the Dutch fought the English and the French to a standstill and ruled over the shipping lanes? Remember that Nineveh is one with the dust.
In short, our data on returns is biased. This bias indicates that stocks and cash are much more risky than most people think, and that this risk inheres in exogenous shocks to economies - it may seem odd to invest globally, in multiple currencies, just to avoid the rare black swans of total war and hyperinflation. But these risks are catastrophic risks. Even one may be too many.
This risk is more general. Governments can die, and their bonds and other instruments (such as cash) be rendered worthless; how many governments have died or defaulted over the last century? Many. The default assumption must be that the governments with good credit, who are not in that number, may simply have been lucky. And luck runs out.
In general, entities die unpredictably, and one has no guarantee that a (say) 1500-year-old Korean construction company will honor its bills in another 500 years, because all it takes is one bubble to drive it into bankruptcy. When one looks at securities turning into money, of course all one sees are those entities which survived. This is 'survivorship bias': our observations are biased because we aren't looking at all of the past, but only at the part which led to the present. This can be exploited, however. Obviously if an entity perishes, it has no need for assets.
Suppose one wishes to make a very long-term investment. One groups with a large number of other investors who wish to make similar investments, in a closed-end mutual fund with a share per investor, which is set to liquidate at some remote period. This fund would invest in assets all over the world and of various kinds, seeking great diversification. The key ingredient would be that shares are not allowed to be transferred. Should an investor perish, the value of their share would be split up amongst the other investors' shares (a percentage could be used to pay for management, perhaps). Because of this ingredient, the expected return for any individual investor would be extremely high - the potential loss is 100%, but the investor by definition will never be around for that loss. Because the identity and number of investments is fixed, potential control of the assets could be dispersed among the investors so as to avoid the situation where war destroys the headquarters of whomever is managing the assets. The technical details are unimportant; cryptography has many ingenious schemes for such matters (one can easily heavily encrypt a file and then distribute n keys such that any k of them suffice to decrypt the file).
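The threshold scheme alluded to is classically Shamir's secret sharing: any k of n shares reconstruct the secret, fewer reveal nothing. A toy sketch over a prime field (illustrative only, not production cryptography):

```python
# Toy Shamir secret sharing: split a secret integer into n shares such
# that any k of them reconstruct it. Not hardened for real use.
import random

P = 2**127 - 1  # a Mersenne prime, large enough for a toy secret

def split(secret, n, k):
    """Return n shares (x, f(x)) of a random degree-(k-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 over GF(P) recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        # pow(den, P-2, P) is the modular inverse (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, n=5, k=3)
assert reconstruct(shares[:3]) == 123456789   # any 3 of 5 suffice
assert reconstruct(shares[1:4]) == 123456789
```

Dispersing such shares among the fund's investors means no single bombed headquarters, or single deceased trustee, forfeits control of the assets.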
'Suppose you think that gold will become worthless on April 27th, 2020 at between four and four-thirty in the morning. I, on the other hand, think this event will not occur until 2030. We can sign a contract in which I pay you one ounce of gold per year from 2010 to 2020, and then you pay me two ounces of gold per year from 2020 to 2030. If gold becomes worthless when you say, you will have profited; if gold becomes worthless when I say, I will have profited. We can have a prediction market on a generic apocalypse, in which participants who believe in an earlier apocalypse are paid by believers in a later apocalypse, until they pass the date of their prediction, at which time the flow reverses with interest. I don't see any way to distinguish between apocalypses, but we can ask the participants why they were willing to bet, and probably receive a decent answer.'
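The quoted contract's cashflows are easy to tabulate; a sketch assuming flat annual payments and ignoring the interest the quote mentions but leaves unspecified:

```python
# Sketch of the quoted gold swap: A (who expects gold worthless in 2030)
# pays B (who expects 2020) one ounce/year 2010-2019; then B pays A two
# ounces/year 2020-2029. Simple sums, no discounting.

def net_to_A(gold_worthless_year):
    """Ounces of value A nets if gold loses all value in the given year."""
    net = 0
    for year in range(2010, 2020):   # A pays out 1 oz/year
        if year < gold_worthless_year:
            net -= 1
    for year in range(2020, 2030):   # A receives 2 oz/year
        if year < gold_worthless_year:
            net += 2
    return net

print(net_to_A(2020))  # B's scenario: A paid 10 oz, received nothing
print(net_to_A(2030))  # A's scenario: A paid 10 oz, received 20 oz
```

Each party profits exactly when their own apocalypse date is right, which is what makes the structure a prediction market on the date itself.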
# American light novels
I think one of the more interesting trends in anime is the massive number of adaptations of light novels done in the '90s and 00s; it is interesting because no such trend exists in American media as far as I can tell (the closest I can think of are comic book adaptations, but of course those are analogous to the many manga -> anime adaptations). Now, American media absolutely adapts many novels, but they are all normal Serious Business Novels. We do not seem to even have the light novel medium - young adult novels do not cut the mustard. Light novels are odd as they are kind of like speculative fiction novellas. The success of comic book movies has been much noted - could *comic books* be the American equivalent of light novels? There are attractive similarities in subject matter and even medium, light novels including a fair number of color manga illustrations.
- Question for self: if America doesn't have the light novel category, is that a claim that the _Twilight_ novels, and everything published under the James Patterson brand, are regular novels?
Answer: The _Twilight_ novels are no more light novels than the _Harry Potter_ novels were. The Patterson novels may fit, however; they have some of the traits such as very short chapters, simple literary style, and very quick moving plots, even though they lack a few less important traits (such as including illustrations). It might be better to say that there is no recognized and successful light novel *genre* rather than individual light novels - there are only unusual examples like the Patterson novels and other works uncomfortably listed under the Young Adult/Teenager rubric.
# Cultural growth through diversity
Leaving aside the corrosive effects on social solidarity documented by Putnam and Amy Chua's 'market minorities', I've wondered about the *artistic* consequences of substantial diversity to a country or perhaps civilization. In _[Human Accomplishment](!Wikipedia)_, one of the strongest indicators for genius is contact with a foreign culture. This foreign contact can be pretty minimal - Thomas Malthus drew on threadbare descriptions of China's teeming population, and the French _philosophes_ had little more to go on when drawing inspiration in Confucianism, as did the later rococo and _chinoiserie_ artists; much of American design and art traces back to interpretations of East Asian art based on few works, and the sprawling American cults or New Age movements and everything that umbrella term influenced post-'60s were not based on deep scholarship. They did much with little, one might say. This seems fairly true of many fertile periods: the foreigners make up, at most, a few percent of the population.
However, the modern era is likely the most globalized one: population movements are vaster than ever, and English-speakers have access to primary sources as they have never had before (compare how much classic Japanese & Chinese literature has been translated and stored in libraries as of 2009 to what was available when Waley began translating _Genji Monogatari_ in 1921!). This would seem to be something of a contradiction: if a little foreign contact was enough to inspire all the foregoing, then why wouldn't all the Asian immigrants and translations and economic contact with America spark even greater revolutions? There has been influence, absolutely; but the influence is striking for how much a little contact helped (how many haiku did the Imagists have access to?) and how little more a lot accomplished - perhaps even less. There's no obvious reason that more would not be better, and obvious reasons why it would be (less overhead and isolation for the foreigners; sheer better odds of getting access to the right master or specialist that a promising native artist needs). But nevertheless, I seem to discern a U-shaped curve.
> "We are doubtless deluding ourselves with a dream when we think that equality and fraternity will some day reign among human beings without compromising their diversity. However, if humanity is not resigned to becoming the sterile consumer of values that it managed to create in the past...capable only of giving birth to bastard works, to gross and puerile inventions, [then] it must learn once again that all true creation implies a certain deafness to the appeal of other values, even going so far as to reject them if not denying them altogether. For one cannot fully enjoy the other, identify with him, and yet at the same time remain different. When integral communication with the other is achieved completely, it sooner or later spells doom for both his and my creativity. The great creative eras were those in which communication had become adequate for mutual stimulation by remote partners, yet was not so frequent or so rapid as to endanger the indispensable obstacles between individuals and groups or to reduce them to the point where overly facile exchanges might equalize and nullify their diversity."^[[Claude Levi-Strauss](!Wikipedia), _The View from Afar_ pg 23; quoted in Clifford Geertz's ["The Uses of Diversity"](), a [Tanner Lecture](!Wikipedia)]
In schools, one sees students move in cliques and especially so with students who share a native language and are non-native English speakers - one can certainly understand why they would do such a thing, or why immigrants would congregate in ghettos or Chinatowns or Koreatowns where they can speak freely and talk of the old country; perhaps this homophily drives the reduced cross-fertilizing by reducing the chances of crossing paths. (If one is the only Yid around, one must interact with many goyim, but not so if there are many others around.) Is this enough? It doesn't seem like enough to me.
This is a little perplexing. What's the explanation? Could it be that as populations build up, all the early artists sucked out the novelty available in hybridizing native material with the foreign material? Or is there something stimulating about having only a few examples - does one draw faulty but fruitful inferences based on idiosyncrasies of the small data set? In machine learning, the more data available, the less wild the guesses are, but in art, wildness is a way of jumping out of a local minimum to somewhere new. If Yeats had available the entire Chinese corpus, would he produce better new English poems than when he pondered obsessively a few hundred verses, or would he simply produce better English pastiches of Chinese poems? Knowledge can be a curse by making it difficult or impossible to think new thoughts and see new angles. (Or perhaps the foreign material is important only as a *hint* to what the artist was trying already to achieve; in psychology, there is an interesting 'key' effect where one hears only static noise in a recording, is given a hint at the sentence spoken in the recording, and then one can suddenly hear it through the noise.)
# Decluttering
[Ego depletion](!Wikipedia):
> 'Ego depletion refers to the idea that self-control and other mental processes that require focused conscious effort rely on energy that can be used up. When that energy is low (rather than high), mental activity that requires self-control is impaired. In other words, using one's self-control impairs the ability to control one's self later on. In this sense, the idea of (limited) willpower is correct.'
Wonder whether this has any connection with minimalism? Clutter might damage [executive functions](!Wikipedia); from ["Henry Morton Stanley's Unbreakable Will"]() by Roy F. Baumeister and John Tierney:
> You might think the energy spent shaving in the jungle would be better devoted to looking for food. But [Stanley's](!Wikipedia "Henry Morton Stanley") belief in the link between external order and inner self-discipline has been confirmed recently in studies. In one experiment, a group of participants answered questions sitting in a nice neat laboratory, while others sat in the kind of place that inspires parents to shout, “Clean up your room!” The people in the messy room scored lower on self-control, such as being unwilling to wait a week for a larger sum of money as opposed to taking a smaller sum right away. When offered snacks and drinks, people in the neat lab room more often chose apples and milk instead of the candy and sugary colas preferred by their peers in the pigsty.
> In a similar experiment online, some participants answered questions on a clean, well-designed website. Others were asked the same questions on a sloppy website with spelling errors and other problems. On the messy site, people were more likely to say that they would gamble rather than take a sure thing, curse and swear, and take an immediate but small reward rather than a larger but delayed reward. The orderly websites, like the neat lab rooms, provided subtle cues guiding people toward self-disciplined decisions and actions helping others.
It's striking how cluttered a big city is when one visits from a rural area; it's also striking how mental disease seems to [correlate with cities](The Melancholy of Subculture Society#fn26) and how mental performance improves with natural vistas but not urban ones.
See also [latent inhibition](!Wikipedia):
> "Latent inhibition is a process by which exposure to a stimulus of little or no consequence prevents conditioned associations with that stimulus being formed. The ability to disregard or even inhibit formation of memory, by preventing associative learning of observed stimuli, is an automatic response and is thought to prevent information overload. Latent inhibition is observed in many species, and is believed to be an integral part of the observation/learning process, to allow the self to interact successfully in a social environment."
> "Most people are able to shut out the constant stream of incoming stimuli, but those with low latent inhibition cannot. It is hypothesized that a low level of latent inhibition can cause either psychosis, a high level of creativity or both, which is usually dependent on the subject's intelligence. Those of above average intelligence are thought to be capable of processing this stream effectively, an ability that greatly aids their creativity and ability to recall trivial events in incredible detail and which categorizes them as almost creative geniuses. Those with less than average intelligence, on the other hand, are less able to cope, and so as a result are more likely to suffer from mental illness."
Interesting decluttering approach: "100 Things Challenge"
- <>
- <>
- <,9171,1812048,00.html>
- <>
# _The Count of Zarathustra_
The count of Monte Cristo as a Nietzschean hero?
# Title
Good poem title: 'The Scarecrow Appeals to Glenda the Good'
# Idea for Twitter SF
novel idea: an ancient British family has a 144 character (no spaces) string which encodes the political outcomes of the future eg. the restoration, the Glorious Rebellion, Napoleon, Nazis etc. thus the family has been able to pick the winning side every time and maintain its place. but they cannot interpret the remaining characters pertaining to our time. they hire researcher/librarians to crack it. one of them is our narrator. in the course of figuring it out, he becomes one of the sides mentioned. possible plot device: he has a corrupted copy?
# Misc. haiku
Down on the grasses,
I gaze at the summer sun -
And it gazes back!
Death poems are all just
falling blossoms and nonsense -
dying is dying
# Somatic genetic engineering
What's the killer app for non-medical genetic engineering in humans?
How about germ-line engineering of hair color? think about it. hair color is controlled by relatively few, and well-understood, genes. hair color is a dramatic change. there is massive demand for hair dye as it is, even with the extra effort and impermanence and unsatisfactory results. how many platinum blonds would jump at the chance to have kids who are *truly* enviably blond? or richly red-headed (and not washed-out Irish red)? A heck of a lot, I'd say. The health risks need not be enormous - aside from the intervention itself, what risk could swapping a brunette gene for blond cause? (There apparently is just 1 relevant gene: "Frost's theory is also backed up by a separate scientific analysis of north European genes carried out at three Japanese universities, which has isolated the date of the genetic mutation that resulted in blond hair to about 11,000 years ago." <>)
What sort of market could we expect? [Demographics of the United States](!Wikipedia)
103,129,321 women between 15 and 64; these are women who could be using dye themselves, so appreciate the benefit, and are of child-bearing years.
Likely, the treatment will only work if there's natural variation to begin with - that is, for Caucasians only. We'll probably want to exclude Hispanics and Latin Americans, who are almost as homogeneous in hair color as blacks and Asians, so that leaves us 66% of the total US population; 66% of 103,129,321 gives a rough estimate of 68,065,352.
<> claims that "One study estimated that of the 30% of North American women who are blonde, 5/6^ths^ had some help from a bottle." (0.3 * (5/6) = 0.25 or 25%)
[Demographics of Mexico](!Wikipedia) says 53,013,433 females
[Canada 2006 Census#Age and sex](!Wikipedia) 16,136,925
or 172,279,679 when you sum up Mexico/Canada/USA (the remaining NA states are too small to care about); 25% of 172,279,679 is 43,069,919. 43 million dye users.
Here's a random report <> saying hair dye is worth 1 billion USD a year. Let's assume that this is all consumed domestically by women. (So $1,000,000,000 / 43,069,919 ≈ $23 per user per year.)
A woman using hair dye on a permanent basis will be dyeing every month or so, or 12 times a year. Assume that one dye job is ~20 USD* (she's not doing it herself); then ((1 billion / 20) / 12) gives us ~4,166,666 women using hair dye regularly, or roughly 1/24 (~4%) of US women of dyeing age. This seems rather low to me, based on observations, but I suppose it may be that elderly women do not use much hair dye, or it reflects the trend toward highlights and less-than-complete dye jobs. But 4% seems like a rather safe lower end. That's a pretty large market - 4 million potential customers, who are regularly expressing their financial commitment to their desire to have some hair color other than their natural one.
If each is spending even $100 a year on it, a genetic engineering treatment could pay for itself very quickly: at $1,000, in just 10 years. (And women can expect to live to ~80.) Not to mention, one would expect the natural hair simply to look better than a dye job.
There's a further advantage to this: it seems reasonable to expect that early forms of this sort of therapy will simply not work for minorities such as blacks or Hispanics - their markets wouldn't justify the research to make it work for them; their dark hair colors seem to be very dominant genetically, and likely the therapy would be working with recessive alleles (at least, it seems intuitively plausible that there is less 'distance' between making a Caucasian embryo, who might even have a recessive blonde allele already, fully blond, as compared to making a black baby, who would never ever come anywhere near a non-black hair color, blond). So marketing would benefit from an implicit racism and classism: racism in that one might need to be substantially Caucasian to benefit, and classism to be able to pony up the money up front.
* I think this price is a low-ball estimate by at least 50%; hopefully it will give us a margin of error, since I'm not sure how often dye-jobs need to be done.
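The back-of-the-envelope above can be carried through in one place; every figure below is the text's own estimate, not independent data:

```python
# Back-of-the-envelope hair-dye market sizing, using only the text's figures.

us_women_15_64 = 103_129_321   # US women aged 15-64
mx_women       = 53_013_433    # Mexican females
ca_women       = 16_136_925    # Canadian 2006 census count

caucasian_share = 0.66         # text's rough Caucasian share of the US
eligible = us_women_15_64 * caucasian_share   # ~68M potential US customers

blonde_share = 0.30 * 5 / 6    # 30% blonde, 5/6 "from a bottle" -> 25%
na_women  = us_women_15_64 + mx_women + ca_women
dye_users = na_women * blonde_share           # ~43M North American dye users

market   = 1_000_000_000       # ~$1B/year US dye market (cited report)
per_user = market / dye_users  # ~$23/year if all 43M dyed
regulars = market / 20 / 12    # $20/job, monthly -> ~4.2M regular dyers

print(round(eligible), round(dye_users), round(per_user), round(regulars))
```

Note the two denominators in play: the ~$23/year figure spreads the market over all 43 million occasional dyers, while the ~4.2 million figure counts only monthly regulars at $20 a job.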
# Games with a purpose
There doesn't seem to be any good method of crowd-sourcing translation, despite excellent tools like Google Translate. Perhaps there could be a variant on the [ESP Game](!Wikipedia)? Not entirely sure how it works, but: use Google Translate as a base line, and compete to improve it? or maybe, the players could be given a word, then a sentence, then a paragraph?
# Esoteric story of _Aria_
See [_Aria_'s past, present, and future](Aria's past, present, and future).
# The Camel Has Two Humps
Why does the camel have 2 humps? <> "All teachers of programming find that their results display a 'double hump'. It is as if there are two populations: those who can, and those who cannot, each with its own independent bell curve." Though Alan Kay seems a little skeptical <>, and replications of the test have had issues; from <>:
> "We now report that after six experiments, involving more than 500 students at six institutions in three countries, the predictive effect of our test has failed to live up to that early promise."
And <>
> "A test was designed that apparently examined a student's knowledge of assignment and sequence before a first course in programming but in fact was designed to capture their reasoning strategies. An experiment found two distinct populations of students: one could build and consistently apply a mental model of program execution; the other appeared either unable to build a model or to apply one consistently. The first group performed very much better in their end-of-course examination than the second in terms of success or failure. The test does not very accurately predict levels of performance, but by combining the results of six replications of the experiment, five in the UK and one in Australia, we show that consistency does have a strong effect on success in early learning to program; background programming experience, on the other hand, has little or no effect."
There's something to this; your first computer language is really hard no matter your experience, but the second is almost trivial, unless it's a truly alien paradigm.
This suggests some questions to me. Obviously on a raw information level, a natural language is *much* more complex than a computer language (the former are almost indefinitely complex with vocabulary, while the latter are engineered to be simple). Is it, relatively speaking, easier to learn a second natural language, or a second computer language? That is, if the difficulty of learning a second computer language is perhaps 10% of learning the first, is that better or worse than the difficulty of learning a second natural language after one's native language? My own impression is that learning Haskell after I knew some Java was a lot easier than my first attempt at learning Haskell; when I learned some French after learning Haskell, it seemed easier than before but not *that* much easier.
If this is so, it suggests that computer languages share more deep similarities than natural languages.
what is the knack of programming? Why do people never seem to cease being programmers - what irreversible paradigm shift happens in their heads?
# _The Peace War_ game
> "Tellman initialized the Celest board to level nine, Rosas noticed. The kid studied the setup with a calculating look. Tellman's display was a flat, showing a hypothetical solar system as seen from above the plane of rotation. The three planets were small disks of light moving around the primary. Their size gave a clue to mass, but the precise values appeared near the bottom of the display. Departure and arrival planets moved in visibly eccentric orbits, the departure planet at one rev every five seconds — fast enough so precession was clearly occurring. Between it and the destination planet moved a third world, also in eccentric orbit. Rosas grimaced. No doubt the only reason Tellman left the problem coplanar was that he didn't have a holo display for his Celest. Mike had never seen anyone without a symbiotic processor play the departure/destination version of Celest at level nine. The timer on the display showed that the player — the kid — had ten seconds to launch his rocket and try to make it to the destination. From the fuel display, Rosas was certain that there was not enough energy available to make the flight in a direct orbit. A cushion shot on top of everything else!
> The kid laid all his bank notes on the table and squinted at the screen. Six seconds left. He grasped the control handles and twitched them. The tiny golden spark that represented his spacecraft fell away from the green disk of the departure world, inward toward the yellow sun about which all revolved. He had used more than nine-tenths of his fuel and had boosted in the wrong direction. The children around him murmured their displeasure, and a smirk came over Tellman's face. The smirk froze:
> As the spacecraft came near the sun, the kid gave the controls another twitch, a boost which — together with the gravity of the primary — sent the glowing dot far out into the mock solar system. It edged across the two-meter screen, slowing at the greater remove, heading not for the destination planet but for the intermediary. Rosas gave a low, involuntary whistle. He had played Celest, both alone and with a processor. The game was nearly a century old and almost as popular as chess; it made you remember what the human race had almost attained. Yet he had never seen such a two-cushion shot by an unaided player.
> Tellman's smile remained but his face was turning a bit gray. The vehicle drew close to the middle planet, catching up to it as it swung slowly about the primary. The kid made barely perceptible adjustments in the trajectory during the closing period. Fuel status on the display showed 0.001 full. The representation of the planet and the spacecraft merged for an instant, but did not record as a collision, for the tiny dot moved quickly away, going for the far reaches of the screen.
> Around them, the other children jostled and hooted. They smelled a winner, and old Tellman was going to lose a little of the money he had been winning off them earlier in the day. Rosas and Naismith and Tellman just watched and held their breaths. With virtually no fuel left, it would be a matter of luck whether contact finally occurred.
> The reddish disk of the destination planet swam placidly along while the mock spacecraft arced higher and higher, slower and slower, their paths becoming almost tangent. The craft was accelerating now, falling into the gravity well of the destination, giving the tantalizing impression of success that always comes with a close shot. Closer and closer. And the two lights became one on the board.
> "Intercept," the display announced, and the stats streamed across the lower part of the screen. Rosas and Naismith looked at each other. The kid had done it."
--Vernor Vinge, _The Peace War_
Visual presentation: basic problem, how to represent 4D trajectories, since players simply can't be given all the necessary information, must be computed. The trajectories are orbital paths. Plot paths as visible lines, with *color*: time is represented as shades of color, from red->purple. Each time unit is one pixel change, for example. 2 lines/paths intersect/collide only if they have the same color when crossing. Perhaps color intersections black or white to denote collision or miss? (And grey to indicate near-miss? Or closeness of approach?)
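The "same color when crossing" rule reduces to "same place at the same time"; a sketch with a hypothetical path format of `(x, y, t)` grid samples (the paths and grid here are made up for illustration):

```python
# Sketch of the time-as-color idea: a path is a list of (x, y, t) samples.
# Two paths truly collide only where they occupy the same cell at the same
# time, i.e. where their crossing points share the same time "hue".

def time_to_color(t, t_max):
    """Map time 0..t_max onto a red->purple hue fraction 0..1."""
    return t / t_max

def crossings(path_a, path_b):
    """Split cells both paths visit into real collisions vs. near misses."""
    collisions, misses = [], []
    b_times = {}
    for x, y, t in path_b:
        b_times.setdefault((x, y), []).append(t)
    for x, y, t in path_a:
        for tb in b_times.get((x, y), []):
            (collisions if t == tb else misses).append((x, y, t, tb))
    return collisions, misses

a = [(0, 0, 0), (1, 1, 1), (2, 2, 2)]
b = [(2, 0, 0), (1, 1, 1), (0, 2, 2)]   # meets `a` at (1,1) at the same time
c = [(2, 0, 0), (1, 1, 2), (0, 2, 3)]   # crosses (1,1) one tick later

print(crossings(a, b))  # a real collision: colors match at the crossing
print(crossings(a, c))  # same crossing cell, different colors: near miss
```

Rendering would then draw each sample in `time_to_color`'s hue and mark matched-color crossings in black or white, per the note above.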
# Who wrote the _Death Note_ script?
So recently (October 2009) there appeared online a PDF file claiming to be a script for the Hollywood remake of the _[Death Note](!Wikipedia)_ anime (see Wikipedia, or my own little [Death Note Ending]() essay, for a description). Such a leak raises the question: is it genuine?
I was skeptical at first - how many unproduced screenplays get leaked? it's rare even in this Internet age - so I downloaded a copy and read it.
The first thing I noticed was that the 2 claimed authors, "Charley and Vlas Parlapanides", were correct: they were the 2 brothers who, it had been quietly [announced]( in April 2009, had been hired to write it.
The second thing I did was take a look at the metadata. The creator tool checks out: "DynamicPDF v5.0.2 for .NET" is part of a commercial suite, and it was pirated well before July 2009, although I could not figure out when the commercial release was.
The date, though, is "Thu 09 Apr 2009 09:32:47 PM EDT". Keep in mind, this leak was October, and the original announcement was 30 April or so. If one were faking such a script, wouldn't one through either sheer carelessness & omission or by natural assumption (the Parlapanides signed a contract, the press release went out, and they started work) set the date well after the announcement? Why would you set it close to a month before? Wouldn't you take pains to show everything is exactly as an outsider would expect it to be? As Borges writes in "The Argentine Writer and Tradition":
> "Gibbon observes [in the _[Decline and Fall of the Roman Empire](!Wikipedia)_] that in the Arab book _par excellence_, the Koran, there are no camels; I believe that if there were ever any doubt as to the authenticity of the Koran, this lack of camels would suffice to prove it Arab. It was written by Mohammed, and Mohammed as an Arab had no reason to know that camels were particularly Arab; they were for him a part of reality, and he had no reason to single them out, while the first thing a forger or tourist or Arab nationalist would do is to bring on the camels - whole caravans of camels on every page; but Mohammed, as an Arab, was unconcerned. He knew he could be Arab without camels."
Another small point is the 'EDT', or Eastern Daylight Time. The Parlapanides have long been based out of New Jersey.
Then there is the corporate address quietly listed at the bottom of the page. It is widely available on Google if you can search for it, but one has to know about it in the first place. Easier to just leave it out. Another interesting detail.
What of the actual play? Well, it is written like a screenplay, properly formatted, and the scene descriptions are brief but occasionally detailed like the other screenplays I've read (such as the _Star Wars_ trilogy's scripts). It is quite long and detailed. I could easily see a 2 hour movie being filmed from it. There are no obvious red flags.
The plot is curious. Ryuk and other [shinigami](!Wikipedia) are entirely omitted. Light is renamed 'Luke', and now lives in New York City, already in college. (Again, an appropriate setting for 2 screenwriters who grew up in New Jersey.) The plot is generally simplified.
What is more interesting is the changed emphases. Luke has been given a murdered mother, and much of his efforts go to tracking down the murderer (who, of course, escaped conviction for that murder). The Death Note is unambiguously depicted as a tool for evil, and a malign influence in its own right. There is minimal interest in the idea that Kira might be good. The Japanese aspects are minimized and treated as exotic curios, in the worst Hollywood tradition (Luke goes to a Japanese acquaintance for a translation of the kanji for 'shinigami', who, of course, being a primitive native, shudders in fear and flees the memsahib... oh, sorry, wrong era. But the description is still accurate.)
The ending shows Luke using the memory-wiping gambit to elude L (who from the script seems much the same, although things not covered by the script, such as casting, will be critically important to making L, L), and finding the hidden message from his old self - but destroying the message before he learns where he had hidden the Death Note. It is implied that Luke has redeemed himself, and L is letting him go. So the ending is classic Hollywood pap.
The ending indicates someone who doesn't love DN for its shades-of-gray mentality, its constant ambiguity and complexity. Any DN fan feels deep sympathy for Light, even if they root for L and co. I suspect that if a fan were to pen a script, the ending would be of the 'Light wins everything' variety, and not this hackneyed sop. I know I couldn't bring myself to write such a thing, even as a parody of Hollywood.
In general, the dialogue is short and hackneyed. There are no excellent megalomaniac speeches about creating a new world; one can expect a dearth of ominous choral chanting in the movie. Even the veriest tyro of fanfiction could write more DN-like dialogue than this script did.
Further, the complexities of ratiocination are largely absent, remaining only in the TV trick of L and the famous chips scene by Light. The tricks are even written incompetently - as written, on the bus, the crucial ID is seen by accident, whereas in DN, Light had written in the ID quite specifically. The moral subtlety of DN is gone; you cannot argue that Luke is a new god like Light. He is only an angry boy with a good heart lashing out, but by the end he has returned to the straight and narrow of conventional morality.
So much for the inside evidence; all suggestive, none damning. A forger *could* have randomly changed Charles to Charlie, looked up an appropriate address, edited the metadata, come up with all the Hollywood touches, written the whole damn thing (quite an endeavour since relatively little material is borrowed from DN), and put it online.
But is there any external evidence? Well, the timeline is right. Figure about 2 months for both brothers to read through the DN manga or watch the anime twice, clear up their other commitments, a month to brainstorm, 3 months to write the first draft, a month to edit it up and run it by the studio, and we're at 7 months or around February 2009. That leaves a good 6 months for it to float around offices and get leaked, and then come to the wider attention of the Internet.
And then there is the fact that Warner Brothers has filed multiple take-down notices for hosts of the script. Not the 2 brothers, who would have a legal right to order the take-down of material falsely attributed to them, but the commissioning studio. Needless to say, they do not have a standing RIAA-style war against DN fanfiction or fan-art or even torrents of the anime or scanlations of the manga; just this script. Arguably, if the script were not the studio's property, it wouldn't have any legal ground to demand take-downs - their license likely covers just the movie rights, and so fanfiction in the form of a script (for example) would infringe on the Japanese rights-holder, not the studio.
I find this external legal argument fairly compelling, and in conjunction with the internal evidence and oddities best explained by the leaked script being authentically by the Hollywood scriptwriters, I've come to believe the script real. Perhaps an early draft, but still genuine. I suppose an American DN movie could be much much worse; just consider _[Dragon Ball Evolution](!Wikipedia)_!
# The advantage of an uncommon name
Theory: as time passes, it becomes more and more costly to have a 'common' name: a name which frequently appears either in history or in born-digital works. In the past, having a name like 'John Smith' may have not been a disadvantage - connections were personal, no one confused one John Smith with another, textual records were only occasionally used. It might sometimes be an issue with bureaucracy such as taxes or the legal system, but nowhere else.
But online, it is important to be findable. You want your friends on Facebook to find you with the first hit. You want potential employers doing surreptitious Google searches before an interview to see your accomplishments and not others' demerits; you do not want, as Abigail Garvey discovered when she married a Wilson, [employers thinking your resume fraudulent]( because you are no longer ranking highly in Google searches. As Kevin Kelly has since [put it](
> "With such a common first/last name attached to my face, I wanted my children to have unique names. They were born before Google, but the way I would put it today, I wanted them to have Google-unique names."
[Clive Thompson](!Wikipedia) [says]( that search rankings were why he originally started blogging:
> "Today's search engines reward people who have online presences that are well-linked-to. So the simplest way to hack Google to your advantage is to blog about something you find personally interesting, at which point other people with similar interests will begin linking to you — and the upwards cascade begins.
> This is precisely one of the reasons I started Collision Detection: I wanted to 0wnz0r the search string "Clive Thompson". I was sick of the British billionaire and Rentokil CEO Lord Clive Thompson getting all the attention, and, frankly, as a freelance writer, it's crucially important for anyone who wants to locate me — a source, an editor, old friends — to be able to do so instantly with a search engine. Before my blog, a search for "Clive Thompson" produced a blizzard of links dominated by the billionaire; I appeared only a few times in the first few pages, and those were mostly just links to old stories I'd written that didn't have current email addresses. But after only two months of blogging, I had enough links to propel my blog onto the first page of a Google search for my name."
This isn't obvious. It's easy to raise relatively rare risks as objections (but how many cases of identity theft are made possible solely by a relatively unique name making a person google-able? Surely few compared to the techniques of mass identity theft: corporate espionage, dumpster diving, cracking, skimming etc.) To appreciate the advantages, you have to be a 'digital native'. Until you've tried to Google friends or acquaintances, the hypothesis that unique names might be important will never occur to you. Until then, as long as your name was unique inside your school classes, or your neighborhood, or your section of the company, you would never notice. Even researchers spend their time researching unimportant correlations like people named Baker becoming bakers more often, or people tending to move to a state whose name they share (like Georgia).
What does one do? One avoids as much as possible choosing any name which is in, say, the top 100 most popular names. People with especially rare surnames may be able to get away with common personal names, but not the Smiths. (It's easy to check how common names are with [online tools]( drawing on US Census data. My own name pair is unique at the expense of the Dutch surname being 12 letters long, and difficult to remember.)
But one doesn't wake up and say "I will name myself Zachariah today because John is so damn common". After 20 years or more, one is heavily invested in one's name. It's acceptable to change one's surname (women do it all the time), but not the first name.
One *does* decide the first name of one's children, though, and it's iron tradition that one does so. So we can expect digital natives to shy away from common names when naming their kids. But remember who are the 'digital natives' - kids and teenagers of the '00s, at the very earliest. If they haven't been on, say, Facebook for years, they don't count. Let's say their ages are 0-20 during 2008 when Facebook really picked up steam in the non-college population; and let's say that they won't have kids until ~30. The oldest of this cohort will reach child-bearing age at around 2018, and every one after that can be considered a digital native from osmosis if nothing else. 2018 is when we will see a growing '[long tail](!Wikipedia "Heavy-tailed distribution")' of baby names.
So this is a good story: we have a suboptimal situation (too many collisions in the new global namespace of the Internet) and a predicted adjustment with specific empirical consequences.
But there are issues.
- Rare names may come with comprehensibility issues; [Zooko's triangle](!Wikipedia) in cryptography says that names cannot be unique, globally valid, *and* short or human-meaningful. You have to compromise on some aspect.
- There's already a decline in popular names, according to [Wikipedia](!Wikipedia "Given names#Popularity distribution of given names"):
> "Since about 1800 in England and Wales and in the U.S., the popularity distribution of given names has been shifting so that the most popular names are losing popularity. For example, in England and Wales, the most popular female and male names given to babies born in 1800 were Mary and John, with 24% of female babies and 22% of male babies receiving those names, respectively. In contrast, the corresponding statistics for England and Wales in 1994 were Emily and James, with 3% and 4% of names, respectively. Not only have Mary and John gone out of favor in the English speaking world, also the overall distribution of names has changed significantly over the last 100 years for females, but not for males."
(The female trend has continued through to 2010: "The 1,000 top girl names accounted for only 67 percent of all girl names last year, down from 91 percent in 1960 and compared with 79 percent for boys last year."^[["Say Goodnight, Grace (and Julia and Emma, too)"]( _New York Times Magazine_]) The theory could probably be rescued by saying that the advantage of having a unique given name (and thus a relatively unique full name) goes that far back, but then we would need to explain why the advantage would be there for women, but *not* men.
- Pop culture is known to have a very strong influence on baby names (cf. the popularity of _Star Wars_ and the subsequent massive spike in 'Luke'). The counter-arguments to [The Long Tail](!Wikipedia) marketing theory say that pop culture is becoming ever more monolithic and hit-driven. The fewer hits, and the more mega-hits, the more we could expect a few names to spike and drive down the rate of other names. The effect on a rare name can be incredible even from relatively small hits (the song in question was only a Top 10):
> "Kayleigh became a particularly popular name in the United Kingdom following the release of a song by the British rock group Marillion. Government statistics in 2005 revealed that 96% of Kayleighs were born after 1985, the year in which Marillion released "Kayleigh"."^[Wikipedia again]
- Given names follow a power-law distribution already where a few names dominate, and so small artifacts can make it appear that there is a shift towards unpopular names. Immigration or ethnic groups can distort the statistics and make us think we see a decline in popular names when we're actually seeing an increase in popular names elsewhere - imagine all the Muhammeds and Jesuses we might see in the future. Those will show up as decreases in the percentages of 'John' or 'James' or 'Emily' or 'William', and fool us, even though Muhammed and Jesus are 2 of the most popular names in the world.
(The above appears to be pretty common knowledge among people interested in baby names and onomastics in general; for example, a _Washington Post_ editorial by Laura Wattenberg, "Are our unique baby names that unique?", 16 Sunday May 2010, argues much of the above.)
# Optimizing the alphabet
Here's an interesting idea: the glyphs of the Phoenician-style alphabet are not optimized in any sense. They are bad in several ways, and modern glyphs are little better. For example, v and w, or m and n. People confuse them all the time, both in reading and in writing.
So that's one criterion: glyphs should be as distinct from all the rest as possible.
What's a related criterion? m and w are another pair which seem suboptimal, yet they are as dissimilar as, say, a and b, under many reasonable metrics. m and w are related via *symmetry*. Even though they share relatively few pixels, they are still identical under rotation, and we can see that. We could confuse them if we were reading upside down, or at an angle, or just confuse them period.
So that's our next criterion: the distinctness must also hold when the glyph is rotated by any degree and then compared to the rest.
OK, so we now have a set of unique and dissimilar glyphs that are unambiguous about their orientation. What else? Well, we might want them to be easy to write as well as read. How do we define 'easy to write'? We could have a complicated physiological model about what strokes can easily follow what movements and so on, but we will cop out and say: it is made of as few straight lines and curves as possible. Rather than unwritable pixels in a grid, our primitives will be little geometric primitives.
The fewer the primitives and the closer to integers or common fractions the positioning of said primitives, the simpler and the better.
We throw all these rules in, add a random starting population or better yet a population modeled after the existing alphabet, and begin our genetic algorithm. What 26 glyphs will we get?
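The search described above can be sketched as a toy program, under heavy simplifying assumptions that I should flag: glyphs here are pixel sets on a 5×5 grid rather than true stroke primitives, only 90° rotations are checked, and a simple hill-climbing loop stands in for a full genetic algorithm with crossover. All names are mine, not from any real system.

```python
import random

GRID = 5  # glyphs live on a 5x5 pixel grid (a crude stand-in for strokes)

def rotate(cells):
    """Rotate a glyph 90 degrees clockwise within the grid."""
    return frozenset((c, GRID - 1 - r) for (r, c) in cells)

def distance(a, b):
    """Dissimilarity = smallest symmetric difference over rotations of b,
    so 'm vs w'-style rotational near-duplicates score as similar."""
    best = len(a ^ b)
    rb = b
    for _ in range(3):
        rb = rotate(rb)
        best = min(best, len(a ^ rb))
    return best

def fitness(alphabet):
    """Reward mutual distinctness (even under rotation), penalize complexity."""
    pair_score = sum(distance(a, b)
                     for i, a in enumerate(alphabet)
                     for b in alphabet[i + 1:])
    complexity = sum(len(g) for g in alphabet)
    return pair_score - 0.5 * complexity

def mutate(glyph):
    cell = (random.randrange(GRID), random.randrange(GRID))
    return glyph ^ {cell}  # toggle one pixel

random.seed(0)
alphabet = [frozenset({(random.randrange(GRID), random.randrange(GRID))
                       for _ in range(4)}) for _ in range(26)]
for _ in range(500):  # accept any mutation that doesn't hurt fitness
    i = random.randrange(26)
    candidate = alphabet[:]
    candidate[i] = mutate(candidate[i])
    if fitness(candidate) >= fitness(alphabet):
        alphabet = candidate
```

The interesting modeling choice is in `distance`: by minimizing over rotations, the fitness function automatically punishes m/w-style pairs, implementing the second criterion above for free.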
Problem: our current glyphs may be optimal in a deep sense:
> Dehaene describes some fascinating and convincing evidence for the first kind of innateness. In one of the most interesting chapters, he argues that the shapes we use to make written letters mirror the shapes that primates use to recognize objects. After all, I could use any arbitrary squiggle to encode the sound at the start of "Tree" instead of a T. But actually the shapes of written symbols are strikingly similar across many languages.
> It turns out that T shapes are important to monkeys, too. When a monkey sees a T shape in the world, it is very likely to indicate the edge of an object — something the monkey can grab and maybe even eat. A particular area of its brain pays special attention to those significant shapes. Human brains use the same area to process letters. Dehaene makes a compelling case that these brain areas have been "recycled" for reading. "We did not invent most of our letter shapes," he writes. "They lay dormant in our brains for millions of years, and were merely rediscovered when our species invented writing and the alphabet."
# Meta
A: But who is to say that a butterfly could not dream of a man? You are not the butterfly to say so!
B: No. Better to ask what manner of beast could dream of a man dreaming a butterfly, and a butterfly dreaming a man.
# Why IQ doesn't matter and how points mislead
One common anti-IQ argument is that IQ does nothing and may be actively harmful past 120 or 130 or so; the statistical evidence is there to support a loss of correlation with success, and commentators can adduce [William Sidis](!Wikipedia) if they don't themselves know any such 'slackers', or the [Terman report](!Wikipedia)'s [similar findings]( (viz. that personality factors matter more after ~130+).
This is a reasonable objection. But it is rarely proffered by people really familiar with IQ, who also rarely respond to it. Why? I believe they have an intuitive understanding that IQ is a *percentile ranking*, not an *absolute measurement*. (IQ is ordinal, not cardinal.)
It is plausible that the 20 points separating 100 and 120 represent far more cognitive power and ability than that separating 120 and 140, or 140 and 160. To move from 100 to 120 with a standard deviation of 15, one must surpass ~41% of the population; to move from 120 to 140 requires surpassing a smaller percentage (~8.7%), and 140-160 smaller yet - which makes sense, since the higher the IQ, the smaller the percentage of the overall population to begin with!
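These percentile gaps follow directly from the normal distribution, and can be checked with nothing beyond Python's standard library:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)  # IQ normed to mean 100, SD 15

def fraction_between(lo, hi):
    """Fraction of the population falling between two IQ scores."""
    return iq.cdf(hi) - iq.cdf(lo)

print(round(fraction_between(100, 120) * 100, 1))  # 40.9 (% of population)
print(round(fraction_between(120, 140) * 100, 1))  # 8.7
print(round(fraction_between(140, 160) * 100, 1))  # 0.4
```

Each successive 20-point band covers an ever-thinner slice of people, which is the whole point: equal point gaps are not equal population gaps.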
Similarly it should make us wonder how much absolute ability is being measured at the upper ranges when we reflect that, while normal (relatively low) adult IQs are stable over years, they are unstable in the short-term and test results can vary dramatically even if there are no distorting factors like emotional disturbance or varying caffeine consumption. If one question at the end of an IQ test is the difference between an IQ of 170 and 160, wouldn't one expect a great deal of variance and reduced reliability? (I'm not familiar with the high-normed IQ literature; this may be utterly obvious and well-supported experimentally.)
Another thought: are the kids in your local [special ed](!Wikipedia) program mentally closer to chimpanzees, or to Albert Einstein/[Terence Tao](!Wikipedia)? Pondering all the things we expect even special ed kids to learn or already know (vision, natural language, eye-hand coordination - all the stuff of [Moravec's paradox](!Wikipedia)), I think those kids are vastly closer to Einstein than monkeys.
And if retarded kids are closer to Einstein than the smartest non-human animal, that indicates human intelligence is very 'narrow', and that there is a vast spectrum of stupidity stretching below us all the way down to viruses (which only 'learn' through evolution). (Current IQ tests are designed for, tested against, and normed on fine distinctions among humans. It is [very hard]( to test animal intelligence because of differing incentives and sensory systems, but *if* one deals with those problems, there ought to be some general intelligence of prediction and problem solving; the approach I favor is [AIXI-style IQ tests](
A gap like 20 points looks very impressive from our narrow compressed human perspective, but it reflects very little *absolute* difference; to a sheep, other sheep are each distinctive. In [Big O](!Wikipedia) computer terms, we might say that geniuses are a [constant factor](!Wikipedia) faster than their dimmer brethren, but not [asymptotically](!Wikipedia) faster.
It is expected, then, that someone measured at 180 doesn't make the rest of us look like a nigh-comatose retard of 20 IQ points. To be so smart requires thousands of factors (mental & biological) to click just right (genetically correlating with [thousands of variations](, and not a few master genes); if ordinary people luck out on 900 factors, then those geniuses' scores are trying to secern differences of 2 or 3 factors. The practical impact of a few factors out of thousands may be minimal, and explain the findings without denying the existence of such differences.
# Backups: life and death
Consider the plight of an upload - a human mind running on a computer rather than a brain. It has the advantage of all digital data: perfect fidelity in replication, fast replication - replication period. An upload could well be immortal. But an upload is also very fragile. It needs storage at every instant of its existence, and it needs power for every second of thought. It doesn't carry with it any reserves - a bit is a bit, there are no bits more durable than other bits, nor bits which carry small batteries or [UPSes](!Wikipedia "Uninterruptible power supply") with themselves.
So reliable backups are literally life and death for uploads.
But backups are a double-edged sword for uploads. If I backup my photos to [Amazon S3](!Wikipedia) and a bored employee pages through them, that's one thing; annoying or career-ending as it may be, pretty much the worst thing that could happen is that I get put in jail for a few decades for child pornography. But for an upload? If an enemy got a copy of its full backups, the upload has essentially been kidnapped. The enemy can now run copies and torture them for centuries, or use them to attack the original running copy (as hostages, in [false flag attacks](!Wikipedia), or simply to understand & predict what the original will do). The negative consequences of a leak are severe.
So backups need to be both reliable and secure. These are conflicting desires, though.
One basic principle of long-term storage is '[LOCKSS](!Wikipedia)': "lots of copies keeps stuff safe". Libraries try to distribute copies of books to as many holders as possible, on the premise that each holder's failure to preserve a copy is a random event independent of all the other holders; thus, increasing the number of holders can give arbitrarily high assurances that *a* copy will survive. But the more copies, the more risk one copy will be misused. That's fine if 'misuse' of a book is selling it to a book collector or letting it rot in a damp basement; but 'misuse' of a conscious being is unacceptable.
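The independence premise makes the LOCKSS arithmetic easy: if each holder loses its copy with probability p, then at least one of n copies survives with probability 1 - p^n. A quick sketch:

```python
# LOCKSS premise: each holder fails independently. If each loses its copy
# with probability p, at least one of n copies survives with 1 - p**n.

def survival(p_loss, n_copies):
    return 1 - p_loss ** n_copies

def copies_needed(p_loss, assurance):
    """Smallest n giving at least the desired survival probability."""
    n = 1
    while survival(p_loss, n) < assurance:
        n += 1
    return n

# Even coin-flip-unreliable holders give strong guarantees in numbers:
print(copies_needed(0.5, 0.999999))   # 20
```

Twenty holders who each flip a coin suffice for 'six nines' of survival - which is exactly why the copy-count, and hence the misuse surface, grows so fast.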
Suppose one encrypts the copies? Suppose one uses a [one-time pad](!Wikipedia), worrying that an encrypted copy which is bullet-proof *today* may be copied and saved for centuries until the encryption has been broken - with a one-time pad, one can be perfectly certain the backups are 'secure'. Now one has 2 problems: making sure the backups survive until one needs them, and making sure the one-time pad survives as well! If the future upload is missing either one, nothing works.
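A one-time pad is just a bytewise XOR against an equally long string of truly random bits, which is what creates the two-problems trap: a minimal sketch (using Python's `secrets` for the pad) shows that the ciphertext and the pad are useless - and, usefully, meaningless - apart.

```python
import secrets

# One-time pad: XOR the backup against an equally long pad of random bytes.
# Information-theoretically secure even against unbounded future attackers,
# but now BOTH pieces must survive; either alone is indistinguishable
# from noise and cannot restore the upload.

def otp_encrypt(backup: bytes):
    pad = secrets.token_bytes(len(backup))
    ciphertext = bytes(b ^ k for b, k in zip(backup, pad))
    return ciphertext, pad

def otp_decrypt(ciphertext: bytes, pad: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, pad))

backup = b"upload state, snapshot 2439"
ciphertext, pad = otp_encrypt(backup)
assert otp_decrypt(ciphertext, pad) == backup  # need both halves to restore
```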
The trade-off is unfortunate, but let's consider secure backups. The first and most obvious level is physical security. Most systems are highly vulnerable to attackers who have physical access; desktop computers are trivially hacked, and [DRM](!Wikipedia) is universally a failure.
Any backup ought to be as inaccessible as possible. [Security through obscurity](!Wikipedia) might work, but let's imagine *really* inaccessible backups. How about hard drives in orbit? No, that's too close: commercial services can reach orbit easily, to say nothing of governments. And orbit doesn't offer too much hiding space. How about orbit not around the Earth, but around the Solar System? Say, past the orbit of Pluto?
That offers an enormous volume: the Kuiper Belt is roughly 1.95 × 10^30^ cubic kilometers[^volume]. The lightspeed delay is at least ~4 hours each way (30 AU at ~8.3 light-minutes per AU), but [latency](!Wikipedia) isn't an issue; a backup protocol on Earth could fire off one request to an orbiting device and the device would then transmit back everything it stored without waiting for any replies or confirmations (somewhat like [UDP](!Wikipedia)).
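The shell-volume arithmetic in the footnote is easy to double-check in a few lines:

```python
from math import pi

AU_KM = 149.60e6   # kilometers per astronomical unit

def sphere_volume(r_km):
    return (4 / 3) * pi * r_km ** 3

# Kuiper Belt approximated as a spherical shell from ~30 AU out to ~55 AU:
shell = sphere_volume(55 * AU_KM) - sphere_volume(30 * AU_KM)
print(f"{shell:.3e} km^3")   # ~1.955e+30 cubic kilometers
```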
10^30^ cubic kilometers is more than enough to hide small stealthy devices in. But once it sends a message back to Earth, its location has been given away - the Doppler effect will yield its velocity and the message gives its location at a particular time. This isn't enough to specify its orbit, but it cuts down significantly on where the device could be. 2 such messages and the orbit is known. A restore would require more than 2 messages.
The device could self-destruct after sending off its encrypted payload. But that is very wasteful. We want the orbit to change unpredictably after each broadcast.
If we imagine that at each moment the device chooses between firing a thruster to go 'left' or 'right', then we could imagine the orbit as being a message encrypted with a one-time pad - a one-time pad, remember, being a string of random bits. The message is the original orbit; the one-time pad is a string of random bits shared by Earth and the device. Given the original orbit, and knowing when and how many messages have been sent by the device, Earth can compute what the new orbit is and where the device will be in the future. ('It started off on this orbit, then the random bit-string said at time X to go left, then at X+1, go left again, then at X+Y, go right; remembering how fast it was going, that means it should now be... there in the constellation of Virgo.')
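The orbit-as-pad scheme can be sketched as a toy program (the bit-string and the 'left'/'right' encoding are illustrative assumptions): both sides hold the same secret bits and consume one per broadcast, so their orbit models stay in lockstep while an eavesdropper sees only unpredictable maneuvers.

```python
# Both Earth and the device hold the same pre-shared random bit-string
# (the 'one-time pad'); after each broadcast, each side consumes the next
# bit to pick the burn direction, so their predicted orbits stay identical
# without ever transmitting the maneuver plan.

shared_pad = [1, 0, 0, 1, 1, 0, 1, 0]   # toy-sized pre-shared random bits

def burn_direction(pad, broadcast_number):
    """Course change to apply after the Nth broadcast (0-indexed)."""
    return 'left' if pad[broadcast_number] == 1 else 'right'

# Earth and the device independently derive identical maneuver plans:
earth_plan  = [burn_direction(shared_pad, n) for n in range(len(shared_pad))]
device_plan = [burn_direction(shared_pad, n) for n in range(len(shared_pad))]
assert earth_plan == device_plan
```

Note the limitation the text goes on to raise: an observer who can reconstruct the actual maneuvers from tracking data learns the consumed bits, which is what motivates the move to ciphers proper.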
The next step up is a symmetric cipher: a shared secret used not to determine future orbit changes, but to send messages back and forth - 'go this way next; I'm going this way next; start a restore' etc. But an enemy can observe where the messages are coming from, and can work out that 'the first message must've been X, since if it was at point N and then showed up at point O, only one choice fits, which means this encrypted message meant X, which lets me begin to figure out the shared secret'.
A public-key system would be better: the device encrypts all its messages against Earth's private key, and vice versa. Now the device can randomly choose where to go and tell Earth its choice so Earth knows where to aim its receivers and transmitters next.
But can we do better?
[^volume]: The volume of a sphere is given by the equation: $\frac{4}{3} \times \pi \times r^3$\
1 AU = $149.60 \times 10^6$ kilometers\
30 AU = $30 \times 149.60 \times 10^6$, or $4.488 \times 10^9$ km\
55 AU = $55 \times 149.60 \times 10^6$, or $8.228 \times 10^9$ km\
So the shell is the volume of the outer sphere minus the inner sphere:\
$(\frac{4}{3} \times \pi \times (8.228 \times 10^9)^3) - (\frac{4}{3} \times \pi \times (4.488 \times 10^9)^3)$, or $1.9546466984296578 \times 10^{30}$.
# A secular humanist reads _The Tale of Genji_
After several years, I finished reading Edward Seidensticker's translation of _[The Tale of Genji](!Wikipedia)_. Many thoughts occurred to me towards the end, when the novelty of the Heian era began to wear off and I could be more critical.
The prevalence of poems & puns is quite remarkable. It is also remarkable how tired they all feel; in _Genji_, poetry has lost its magic and has simply become another stereotyped form of communication, as codified as a letter to the editor or small talk. I feel fortunate that my introductions to Japanese poetry have usually been small anthologies of the greatest poets; had I first encountered court poetry through _Genji_, I would have been disgusted by the mawkish sentimentality & repetition.
The gender dynamics are remarkable. Toward the end, one of the two then main characters becomes frustrated and casually has sex with a serving lady; it's mentioned that he liked sex with her better than with any of the other servants. Much earlier in _Genji_ (it's a good thousand pages, remember), Genji simply rapes a woman, and the central female protagonist, Murasaki, is kidnapped as a girl and he marries her while still what we would consider a child. (I forget whether Genji sexually molests her before the _pro forma_ marriage.) This may be a matter of non-relativistic moral appraisal, but I get the impression that in matters of sexual fidelity, rape, and children, Heian-era morals were not much different from my own, which makes the general immunity all the more remarkable. (This is the 'shining' Genji?) The double-standards are countless.
The power dynamics are equally remarkable. Essentially every speaking character is nobility, low or high, or Buddhist clergy (and very likely nobility anyway). The characters spend next to no time on 'work' like running the country, despite many main characters ranking high in the hierarchy and holding minister-level ranks; the Emperor in particular does nothing except party. All the households spend money like mad, and just expect their land-holdings to send in the cash. (It is a signal of their poverty that the Uji household ever even mentions how much less money is coming from their lands than used to.) The Buddhist clergy are remarkably greedy & worldly; after the death of the father of the Uji household, the abbot of the monastery he favored sends the grief-stricken sisters a note - which I found remarkably crass - reminding them that he wants the customary gifts of valuable textiles.
The medicinal practices are utterly horrifying. They seem to consist, one and all, of the following algorithm: 'while sick, pay priests to chant.' If chanting doesn't work, hire more priests. (One freethinker suggests that a sick woman eat more food.) Chanting is, at least, not outright harmful like bloodletting, but it's still sickening to read through *dozens* of people dying amidst chanting. In comparison, the bizarre superstitions (such as characters being confined to their houses on inauspicious days) that guide many characters' activities are unobjectionable.
The 'ending' is so abrupt, and so clearly unfinished; many chapters have been spent on the 3 daughters of the Uji householder, 2 are disposed of, and the last one has just been discovered in her nunnery by 1 of the 2 protagonists (and the other protagonist suspects). The arc is not over until the would-be nun has been confronted, yet the book ends. Given that [Murasaki Shikibu](!Wikipedia) was writing an episodic entertainment for her court friends, and the overall lack of plot, I agree with Seidensticker that the abrupt mid-sentence ending is due either to Shikibu dying or abandoning her tale - not to any sort of deliberate plan.
# Measuring multiple times in a sandglass
How does one make a sand hourglass measure multiple times?
One could just watch it and measure fractions by eye - when a 10-minute timer is down to 1/2, it has measured 5 minutes. One could mark the outside and measure fractions that way.
Or perhaps one could put in two-toned sand - when the white has run out and there's only black sand, then 5 minutes has passed.
But the sand would inevitably start to mix, and then you just have a 10-minute timer with grey sand. Perhaps some sort of plastic sheet separating them? But it would get messed up when it passes through the funnel.
Then, perhaps the black sand could be given a positive electric charge, and the white sand a negative one? But unlike charges attract: if the black is positive and white negative, they'll clump together even more effectively than random mixing would.
So charging each color uniformly fails. Perhaps we could charge just the black negative, and put positively-charged plates at the roof and floor? The bias might be enough over time to counteract any mixing effect - the random walk of grains would have a noticeable bias for black. But if the attraction is strong, then some black sand would never move, and if it's weak, then most of the sand will never be affected; either way, it doesn't work well.
Perhaps we could make half the black sand positive and half negative, while all white is neutral? Black will clump to black everywhere in the hourglass, without any issues about going through the funnel or affecting white.
How might this fail? Well, why would there be only *2* layers? There could be several alternating layers of black and white, and this could be a stable system.
We might be able to remedy this by combining magnetized black sand with magnets on the roof/floor, imparting an overall bias - the layers form, but slowly get compacted together.
The real question is whether an attraction strong enough to usefully sort is also strong enough to clump the sand together and defeat the gravity-based timing.
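The roof/floor-attraction scheme can at least be sanity-checked with a toy simulation. This is only a sketch under crude, non-physical assumptions - each grain does a 1-D random walk along the hourglass height, the charged black sand gets a fixed downward step-bias while the neutral white sand is unbiased, and the walls reflect; every number is made up:

```python
import random

def simulate(n_grains=200, n_steps=2000, bias=0.1, height=100, seed=0):
    """Each grain random-walks along the hourglass height [0, height].
    Black grains step toward the floor with probability 0.5 + bias
    (the attraction); white grains are unbiased.
    Returns the mean height of each colour after n_steps."""
    rng = random.Random(seed)
    black = [height // 2] * n_grains   # both colours start fully mixed mid-glass
    white = [height // 2] * n_grains
    for _ in range(n_steps):
        for grains, b in ((black, bias), (white, 0.0)):
            for i, pos in enumerate(grains):
                step = -1 if rng.random() < 0.5 + b else 1
                grains[i] = min(max(pos + step, 0), height)  # reflecting walls
    mean = lambda g: sum(g) / len(g)
    return mean(black), mean(white)

black_mean, white_mean = simulate()
print(black_mean, white_mean)
```

Even this crude model shows the qualitative point: a small per-step bias separates the charged grains toward the floor over many steps while the neutral grains stay spread out - but whether a physically realistic bias can do that without also defeating the gravity-based timing is exactly the open question above.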
# Measuring social trust by offering free lunches
People can be awfully suspicious of free lunches. I'd like to try a little experiment or stunt sometime to show this. Here's how it'd go.
I'd grab myself a folding table, make a big poster saying 'Free Money! $1 or $2' and in fine print, 'one per person per day'. Then, anyone who came up and asked would get $2. Eventually, someone would ask for $1 - they would get it, but be asked first *why* they declined the larger amount.
I think their answers would be interesting.
Even more fun would be giving the $2 as a single 2-dollar bill rather than 2 dollar bills. They're rare enough that it would be quite a novelty to people.
# Leaf burgers
One thing I was known for in Boy Scouts (or so I thought) was my trick of cooking hamburgers with leaves rather than racks or pans. I had learned it long ago at a camporee, and made a point of cooking my hamburger that way and no other.
The way it works is you take several large green leaves straight from the tree, and sandwich your burger. Ideally you only need 2, one leaf on top and the other on bottom. (I was originally taught using just one leaf, and carefully flipping the burger on top of its leaf, but that's error-prone - one bad flip and your burger is irretrievably dirty and burned.) Then you put your green sandwich on top of a nice patch of coals - no flames! - and flip it in 10 minutes or so.
You'll see it smoke, but not burn. The green leaves themselves don't want to burn, and the hamburger inside is giving off lots of water, so you don't need to worry unless you're overcooking it. At about 20 minutes, the leaves should have browned and you can pull it out and enjoy.
What's the point of this? Well, it saves on dishes. Given how difficult it is to clean dishes out there where there are no dishwashers or sinks, this should not be lightly ignored. It cooks better: much more evenly and with less char or burning of the outside. Given many scouts' cooking skills, this is no mean consideration either. It's a much more interesting way to cook. And finally, the hamburger ends up with a light sort of 'leafy' taste on the outside, which is quite good and not obtainable any other way.
# Stories
## Priorities
Grandma Birch once recounted how uncle Tom had beautiful hair as a child which she refused to allow to be cut; one day, Grandad Birch took him to have it cut anyway while she was away. That same day, uncle Tom was playing in the street when a local girl ran into him with her car, hurling him back up onto the lawn and leaving a scar on his face that one can still see as a dimple.
When she returned, she remembers that her chief concern was what had been done to his beautiful hair!
## Things kids say
Aunt Sally recounted 2 stories:
One of her elementary school colleagues was named 'Thomas Magwood'. She asked him whether anyone had called him Maggy Maggot. He answered yes. She wondered which student. Magwood replied: 'My granddaughter.' Aunt Sally: 'Thomas, I was not expecting that answer.'
Another student had lost her father while young, and one day was asking where he had gone. Her mother eloquently spoke of how he now lived on in their hearts and would remain in their house forever. The child acquiesced, and some time later, announced that she had remembered her father's name. What is it, the mother asked. Quite firmly she replied: 'Jesus.'
My parents told me another: a kindergarten acquaintance of mine apparently convinced his parents to let him start hockey, so they bought him a complete set of gear, paying hundreds of dollars, registered him, got him on a team, and he did well. His enthusiasm trailed off, though, and a year later he asked if he could drop hockey altogether. His mother asked him why on earth he wanted to quit, when he had been so enthusiastic about it initially, and he said: 'But when are we gonna learn how to *fight*?'
## Breakfast
My little sister Molly went to Hawkin's Path Elementary much the same as all of us did. Early on, one morning before classes began, she went to the cafeteria and got on the line for the breakfasts. Pretty much the only people who ate breakfast at Hawkin's Path were the (very) poor kids who qualified for the Federal free breakfast program. Molly, as it happens, was not on the list, nor did she have a card. But the lunch lady was new to the job & school, so when her turn came up, she said, "I'm Molly!" with such straightforwardness and assurance - as though *of course* that explained everything, who could not know about Molly? - that the flummoxed lady simply gave her a breakfast.
Some time later, Molly's kindergarten teacher Mrs. MacNamara would approach Mom & Dad at a book fair (or something) and shock them by inquiring as to whether Molly qualified for the program.
## On promises
When I was in elementary school, another family friend was named Patti, with sons Nick & Joe (the husband was not apparent). They were perhaps middle school aged, but we got along well, I thought. They had an interesting house. It was by the fire station, roughly in the same part of town (Centereach) as my old blue house. That propinquity and Patti's Dutch heritage explain the original connection, I think. Their (rented) house had a large piece of land, and a U-shaped driveway that went through the front. In the middle was a veritable island-mountain, with a giant pine in the middle. Underneath it was a mass of boulders poking through the thick drifts of needles. I had a Swiss army knife, and delighted in scraping sparks against the stone.
Behind the small red house was an orchard in advanced desuetude. I only ever noticed grapes in its arbors. They were purple, I think, and utterly untended. They were bitter - very foxy. The previous owner had loved grapes.
Joe was older. He liked video games, I remember. At this time in the '90s, there was only Nintendo & Sega, with oddball also-rans like the Lynx or Neo-Geo that kids like us scoffed at. The distinction was that Sega was known for its capable hardware and more adult games, but a smaller overall game library, and Nintendo was known for its odd controllers, its 1st-party games, and a large library of games (I understand that the SNES game library would be surpassed only in the 2000s, by the PS2 with its backwards compatibility). Joe was a Sega fan, and a diehard one - he demonstrated to me that he had bought the ill-starred 'Mega-CD' and also the poor Sega Saturn, though he had little to play on them but a _Sonic the Hedgehog_ game.
One year, we took Nick & Joe with us down south to visit our grandparents & Washington D.C., with our customary visits to the Smithsonian Museum of Natural History and the Air & Space Museum. We stayed at the Birch townhouse in Richmond, Virginia. This was a neighborhood of townhouses, with a genteel air and many pines behind the rows of nigh-identical townhouses. There was a little park not far from us. It wasn't used much (there were few enough children in the area) and one day Joe and I had gone there - kicked out from the TV and the flat, I recall, by an adult - and were skirmishing & discussing Taekwondo in the silly boastful way of kids who watched too much _Teenage Mutant Ninja Turtles_. Joe claimed to know a great deal about the martial arts, but he said he could not teach me; I and Allison were but green belts and not ready. But, he said, in a few years when I had become a brown (or, hazy memory avers, red) belt, then he would teach me.
A few years later, I had persevered (Allison had stopped) and reached the agreed-upon rank. But by then Patti had ceased to be a good family friend and the last I heard of Joe was in some military.
## Upsides to child abuse
Once, after a meeting of the RIT anime club, I got into a heated argument with another fellow. He and I had a running series of insults and arguments (often centering on how I had a problem with his face).
At some point I asserted that 'there is nothing funny about child abuse!'
He begged to differ.
Very well then, I said, tell me a funny child abuse joke. He craved 5 minutes, which I readily granted.
4 minutes in, he lifted his head from deep in thought and told me the following joke:
> "What's more fun than beating your child with a board game?"
> "I don't know, what?"
> "Beating them with anything else."
I paused, and conceded defeat.
## Revealed preferences
A teacher of mine, although for the life of me I cannot remember where or whom, once told the class a story about European customs.
When he was a younger man, he said, he went to a restaurant in Amsterdam, when a fellow American walked in. She was a beautiful young woman and the teacher noticed her immediately, as did the virile waiters.
She walked to one of the bistro's tables and sat down, clearly expecting to be served. The waiters were greedily looking at her from over in their corner, but made no move.
It is the custom, the teacher explained, that in America the waiter approaches the customer, but in Europe the customer must summon the waiter. They were at an impasse. It was amusing, he said - one wanted to be served and the other to serve, but their mutual ignorance frustrated them.
The young lady's impatience boiled over after a score of minutes, and she left, much to the dismay of the waiters. And all for want of a nail.
## Milgram authority experiments
Professor Grim tells a story about an acquaintance of his, who had once accepted a job at Yale's psychology department. He was driving into New Haven on his way to it, when he stopped for gas. The fellow working the station began filling up his car and they fell to chatting. (This was long ago, when such things still happened.) When he mentioned that he was going to his new job there, the attendant stiffened up, removing the gas pump, and retreated into the far side of the garage. Incensed, he followed the attendant and demanded an answer. The attendant eventually stopped the cold shoulder and quietly said that he had once participated in Milgram's infamous obedience experiments, and was one of those who had gone through with it all. "And", he added, "I have not slept well since."
(Grim added in clarification that the entire psychology department was held in opprobrium by the New Haven population.)
## One lazy dog
Today I noted to my grandfather Gerald Birch that a lot of the local Marylanders seem to have both pick-up trucks and dogs in the cabs. He told me an anecdote about how his cousin Dill's life was saved by this habit.
One day Dill was driving down a road with his dog. For whatever reason, he veered off into the woods, crashed, and was knocked unconscious. His dog, more sensible than he, picked itself up and walked out to the road, where it sat down and waited.
Eventually, some friends of Dill came driving by and recognized the dog instantly. "What's Dill's dog doing sitting there?" they asked one another as they stopped and got out. This led to them going into the woods where they found Dill in his wreck.
## True dreams
One curious event, that well illustrates the uncanny hold that coincidences can exert on our minds, is worth recording. One day, I had read part of Frank Herbert's _Dune_ and run into the word '[Spannungsbogen](!Wiktionary)' - a kind of self-discipline or restraint as Herbert described it. The word had an entry in [Wiktionary](!Wikipedia) but the entry lacked sources & examples, and was at risk of deletion. So I went to one of my favorite sources - _The New York Times_ - and searched their archives, found one useful hit (later, I would not remember anything about what the hit said; just that it existed), and listed it on the talk page (since I'm not familiar with Wiktionary conventions and prefer to let the regulars integrate new references into entries).
Then I woke up.
Some time later, I remembered the dream and thought to myself that I ought to check whether I had not actually added it yesterday and was mis-remembering; I had not - the talk page was devoid of my contribution - but the entry did need work. I then thought it would be amusing to see what the NYT *did* have, so I went and searched - and found one useful hit. Disquieted, I edited the talk page as in my dream, and moved on.
## On lying and not lying
A small gem of equivocation:
An old and somewhat estranged family friend abandoned 2 cats with us when she went to seek her fortune in the West (it turned out her brother there was only offering her a room because he hoped to get her kidney); the cats lived relatively happily with us until one day, the black one made the mistake of taking a nap behind a wheel. Backing up, my mother ran him over. Yowling, he ran into the garage where we had kept them early on, and in a corner, expired of his injuries.
6 weeks later, the friend called and asked for news of that cat. My mother had previously consulted with her sister and replied - very carefully - that 'We found it dead in the garage.'
# Night watch
> "The gloom of dusk. \
> An ox from out in the fields \
> comes walking my way; \
> and along the hazy road \
> I encounter no one."^[Shōtetsu; 59 'An Animal in Spring'; _Unforgotten Dreams: Poems by the Zen monk Shōtetsu_; trans. Steven D. Carter, ISBN 0-231-10576-2]
Night watch is not devoid of intellectual interest. The night is quite beautiful in its own right, and during summer, I find it superior to the day. It is cooler, and often windier. Contrary to expectation, it is less buggy than the day. Fewer people are out, of course.
My own paranoia surprises me. At least once a night, I hear noises or see light, and become convinced that someone is prowling or seeks to break in. Of course, there is no one there. This is true despite it being my 4th year. I reflect that if it is so for me, then what might it be like for a primitive heir to millennia of superstition? There is a theory that spirits and gods arise from overly active imaginations, or pattern-recognition as it is more charitably termed. My paranoia has made me more sympathetic to this theory. I am a staunch atheist, but even so!
The tempo at night varies as well. It seems to me that the first 2 years, cars were coming and going every night. Cars would meet, one would stay and the other go; or a car would enter the lot and not leave for several days (with no one inside); or they would simply park for a while. School buses would congregate, as would police-cars, sometimes 4 or 5 of them. In the early morning around 5 AM, the tennis players would come. Sometimes when I left at 8 AM, all 4 or 5 courts would be busy - and some of the courts hosted 4 players. I would find 5 or 6 tennis balls inside the pool area, and would see how far I could drop-kick them. Now, I hardly ever find tennis balls, since I hardly ever see tennis players. A night in which some teenagers congregate around a car and smoke their cigarettes is a rarity. Few visit my lot.
I wonder, does this have to do with the recession which began in 2008?
## Fiction
> "Another year gone by \
> And still no spring warms my heart. \
> It's nothing to me \
> But now I am accustomed \
> To stare at the sky at dawn."^[[Fujiwara no Teika](!Wikipedia); pg 663 of [Donald Keene](!Wikipedia) (1999), _Seeds in the Heart: Japanese Literature from Earliest Times to the Late Sixteenth Century_, Columbia University Press, ISBN 0-231-11441-9]
The night has, paradoxically, sights one cannot see during the day. What one can see takes on greater significance, becoming new and fresh. I recall one night long ago; on this cool dark night, the fogs lay heavy on the ground, light-grey and densely soupy. In the light, one could watch banks of fog swirl and mingle in myriads of meetings and mutations; it seemed a thing alive. I could not have seen this under the sun. It has no patience for such ethereal and undefinable things. It would have burned off the fog, driven it along, not permitted it to linger. And even had it existed and been visible, how could I have been struck by it if my field of view were not so confined?
One feels an urge to do strange things. The night has qualities all its own, and they demand a reflection in the night watcher. It is strange to be awake and active in the wrong part of the day, and this strangeness demands strangeness on one's own part. Often when doing my rounds I have started and found myself perched awkwardly on a bench or fence. I stay for a time, ruminating on nothing in particular. The night is indefinite, and my thoughts are content to be that way as well. And then something happens, and I hop down and continue my rounds.
For I am the sole inhabitant of this small world. The pool is bounded by blackened fences, and lies prostrate under tall towers bearing yellowed flood-lights. The darkness swallows all that is not pool, and returns a feeling of isolation. As if nothing besides remains. I circumambulate to recreate the park, to assure me it abides, that it is yet there to meet my eyes - a sop to conscience, a token of duty; an act of creation.
I bring the morning.
# Two cows: philosophy
Philosophy [two-cows](!Wikipedia "You have two cows") jokes:
Free will: you have 2 cows; in an event entirely independent of all previous events & predictions, they devour you alive; this makes no sense as cows are herbivores, but you are no longer around to notice this.
Fatalism: you have 2 cows; whether they survive or not is entirely up to the inexorable and deterministic course of the universe, and what you do or not likewise, so you don't feed your cows and they starve to death; you reflect that the universe really has it in for you.
Compatibilism: you have 1 cow which is free and capable of making decisions, and 1 cow that is determined and bound to follow the laws of physics; they are the same cow. But you get 2 cows' worth of milk anyway.
Existentialism: You have two cows; one is a metaphor for the human condition. You kill the other and in bad faith claim hunger made you do it.
Ethics: You have two cows, and as a Utilitarian, does it suit the best interests of yourself and both cows to milk them, or could it be said that the interests of yourself, as a human, come above those of the cows, who are, after all, inferior to the human race? Aristotle would claim that this is correct, although Peter Singer would disagree.
Sorites: you have 2 cows who produce a bunch of milk; but if you spill a drop, it's still a bunch of milk; and so on until there's no more milk left. Obviously it's impossible to have a bunch of milk, and as you mope over how useless your cows are, you die of thirst.
Nagarjuna: You have 2 cows; they are 'empty', of course, since they are dependent on grass; you milk them and get empty-milk (dependent on the cow), which tastes empty; you sell them both and go get some real cows. _Moo mani hum_...
Descartes: You have 2 cows, therefore you are (since deceive me howbeit the demon may, he can never make it so that I have 2 cows yet am not); further, there are an infinite # of 2-cows jokes, and where could this conception of infinity have come from but God? Therefore he exists. You wish you had some chocolate milk.
Bentham: no one has a natural right to anything, since that would be '2 cows walking upon stilts'; everything must be decided by the greatest good for the greatest number; you get a lobotomy and spend the rest of your life happily grazing with your 2 cows.
Tocqueville: Cows are inevitable, so we must study the United Cows of America; firstly, we shall take 700 pages to see how this nation broke free of the English Mooarchy, and what factors contribute to their present demoocracy...
Gettier: You see 2 cows in your field - actually, what you see is 2 cow-colored mounds of dirt, but there really are 2 cows over there; when you figure this out, your mind is blown and >2000 years of epistocowlogy shatters.
Heidegger: [dasein](!Wikipedia) dasein apophantic being-in cow being-in-world milk questioning proximate science thusly Man synthesis time, thus, 2 cows.
Husserl: You have 2 cows, but do you really *see* them?
# Waking up
In neuroscience, there's a model of consciousness called the 'workspace' model. The idea is that the various modules in the brain, like the auditory or visual or long-term memory modules normally operate on their own, doing their things, predicting & perceiving what they can; but sometimes something goes wrong: the predictions are suddenly all wrong, or there's unusual & urgent input. The modules panic and emit a summary of the situation over to the single global workspace, where it sits side by side with all the other summaries, and the slow linear prefrontal cortex ponders all the situations & weighs their importance (perhaps issuing some requests to various memories) & sends out orders. In other words, one is only conscious when there is conflict between modules; otherwise, one is unconscious and the modules continue their work. When carrying a dish from the kitchen to the table, one is largely unconscious - one isn't really thinking, one can't remember much, because not much is happening in consciousness. But if the plate is burning hot? Then all of a sudden there is conflict: the arm neurons are frantically trying to execute the 'flinch' reflex, another part is frantically saying don't drop it we're almost there! and the multiple summaries arrive in consciousness, one suddenly 'wakes up' and decides to drop it or not to drop it, and the deed is done.
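The workspace story above can be caricatured in a few lines of code. This is only a toy illustration of the idea as described here, not any actual neuroscience model; the module names, the prediction-error threshold, and the urgency weighting are all invented:

```python
from dataclasses import dataclass

@dataclass
class Summary:
    """What a panicking module broadcasts to the global workspace."""
    module: str
    urgency: float
    message: str

class Module:
    """A module predicts its input and handles it locally;
    it only emits a summary when the prediction is badly wrong."""
    def __init__(self, name, expected):
        self.name = name
        self.expected = expected
    def perceive(self, actual):
        error = abs(actual - self.expected)
        if error > 1.0:  # surprising input -> broadcast to the workspace
            return Summary(self.name, error,
                           f"{self.name}: expected {self.expected}, got {actual}")
        return None      # routine input -> nothing reaches consciousness

def workspace(summaries):
    """The slow, linear deliberator: weigh whatever summaries arrived."""
    live = [s for s in summaries if s is not None]
    if not live:
        return "unconscious routine"           # no conflict, no awareness
    return max(live, key=lambda s: s.urgency).message

touch = Module("touch", expected=37.0)  # skin temperature while carrying a plate
grip  = Module("grip", expected=1.0)    # plate still held

# carrying a normal plate: nothing surprising, nothing conscious
print(workspace([touch.perceive(37.2), grip.perceive(1.0)]))
# the plate is burning hot: touch panics and its summary wins the workspace
print(workspace([touch.perceive(60.0), grip.perceive(1.0)]))
```

The plate example plays out as in the text: routine carrying generates no summaries and 'consciousness' stays empty, while the burning plate produces a high-urgency summary that dominates the workspace and demands a decision.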
Why do people ride roller-coasters? Why do they go into haunted houses? They say it makes them feel alive, that it's vivid and unusual, that it's very exciting... that it wakes them up.
# _Full Metal Alchemist_
TODO: there's some general essay I could write about FMA, especially the manga versus anime+movie
> What do you think about Mustang using the philosopher's stone entirely to get his vision back?
Kosher. Mustang didn't ask to see the Gate, and that stone would otherwise have been wasted. He didn't merit his punishment.
> About him not taking the seat as the fuhrer?
With the corrupt establishment toppled, there's no longer any compelling reason for him to be fuhrer. Indeed, his personal failings may mean that it's better for him to not be fuhrer. (What would he do?)
> Ed transmuting the literal gate to break the rules?
That wasn't rule-breaking; that was awesome. It was tremendously satisfying.
One of FMA's running themes was the narrowness of those interested in alchemy. They were interested in it, in using it, in getting more of it. Obviously folks like Shou Tucker or Kimbly sold their soul for alchemy, but less obviously, the other alchemists have been corrupted to some degree by it. Even heroes like Izumi or the Elrics transgressed. Consider Mustang; his connection with Hawkeye was alchemy-based, and only after years did the connection blossom. Consider how little time he spent with Hughes, in part due to his alchemy-based position. Mustang didn't learn until Hughes was gone just how much his friends meant.
Similarly, Greed. His epiphany at the end hammers in the lesson about the value of friends. How did he lose *his* friends? By pursuit of alchemy-based methods of immortality.
That is why Ed was the real hero. Because he realized the Truth of FMA: your relationships are what really matter. No alchemist ever escaped the Gate essentially intact before he did. Why? Because it would never even occur to them to give up their alchemy or what they learned at the Gate.
Have you ever heard of a monkey trap made of a hole and a collar of spikes sticking down? The monkey reaches in and grabs the fruit inside, but his fist is too big to pass back out. If only the stupid monkey would let go of the fruit, he could escape. But he won't. And then the hunter comes.
The alchemists are the monkey, alchemy is the fruit, and the Truth is the hunter. The monkeys put the fruit above their lives, because they think they can have it all. Ed doesn't.
Were there things I disliked? Yes; the whole god thing struck me as strange and ill-thought-out. I also disliked the mechanism for alchemy - some sort of Earth energy. I thought the movie's idea that alchemy was powered by deaths in an alternate Earth really fit the whole theme of Equivalent Exchange - TANSTAAFL. It's good that the Amestrian alchemy turns out to be powered by human sacrifice (TANSTAAFL), but that turns out to be due to the Father character blocking the 'real' alchemy, and so non-Amestrian alchemy turns out to be a free lunch!
# Fake explanation of cats
21:47:59 < gwern> I often think that cat psychology is harder than dog psychology
21:48:14 < gwern> then I reflect that dogs have co-evolved with us for much longer than cats, and dogs have bigger brains as well
21:48:15 < cwillu> no, the dominance hierarchy is firmly established
21:48:25 < gwern> so maybe I only think I understand dogs
21:49:08 < gwern> perhaps under the evolved tricks like eye-following or pointing-understand lies a psychology as or more alien than cats
21:49:23 < gwern> *pointing-understanding
21:49:36 < AngryParsley> cat psychology is that they do whatever the hell they want
fake explanation ^ <>
Cats mimic children: <> <>
# Let's nuke Africa
w/r/t existential threats, when is it a better idea to bomb failed nations/continents back to the stone age? <>
> The absence of rule of law, democratic checks on the military, continual conflict and overall incompetence also increases the chances of lab error or misuse of high-tech weaponry, as technology becomes more accessible while social, economic and political conditions do not improve.
I just had a fun idea: take this premise, and the demonstrated difficulty of improving Africa, and the idea that the development vs. likeliness-to-screw-everybody-over-with-WMDs curve would be an inverted U, and calculate the point at which it would be better to cut off all aid & begin bombing Africa into (or within) the Stone Age.
> There is a high moral cost to beginning bombing Africa.
There is no moral cost by definition; at the point at which we would want to start bombing, the immoral thing is to not bomb. We've bombed many countries for far less than existential threats (arguably, every US bombing campaign back to WWII).
Further, I think you drastically overestimate the chances of homegrown terrorism. Vietnam was long ago. Reports like millions of Iraqi refugees or hundreds of thousands of excess Iraqi deaths merely spark muted partisan arguments about whether the Lancet's statistics are right or not. It's a long way to Tipperary.
> The Global economy would tailspin and the existential risk situation would get a lot worse as a result.
I think you badly overestimate how important Africa is. Even assuming resources cannot be extracted while also bombing the place, Africa isn't that important.
The continental GDP is just $2.7 trillion. Several percent of that is foreign aid ([Economy of Africa](!Wikipedia)) and their exports to the rest of the world are small enough that their balance of payments (with the rest of the world) is negative by billions.
Now, if Africa disappeared or was suddenly destroyed, I would expect the global financial markets to drop considerably; but they are so skittish they drop at the drop of a hat. The long-term economic impact wouldn't be so bad outside of commodities like coltan. Certainly not so bad as some grey goo getting loose.
(I'd count things like AIDS as further debits to Africa, but obviously that's a sunk cost as far as this suggestion is concerned.)
# Geneva culinary crimes tribunal
'King Krryllok stated that Crustacistan had submitted a preliminary indictment of Gary Krug, "the butcher of Boston", laying out in detail his systematic genocide of lobsters, shrimp, and others conducted in his Red Lobster franchisee; international law experts predicted that Krug's legal team would challenge the origin of the records under the poisoned tree doctrine, pointing to news reports that said records were obtained via industrial espionage of Red Lobster Inc. When reached for comment, Krug evinced confusion and asked the reporter whether he would like tonight's special on fried scallops'
# Multiple interpretations theory of humor
My theory is that humor is when there is a connection between the joke & punchline which is obvious to the person in retrospect, but not initially.
Hence, a pun is funny because the connection is unpredictable in advance but clear in retrospect; Eliezer's joke about the motorist and the asylum inmate is funny because we were predicting some response other than the logical one; similarly, 'why did the duck cross the road? to get to the other side' is not funny to someone who has never heard any of the road jokes, but to someone who has, and is thinking of zany explanations, the reversion to normality is unpredicted.
Your theory doesn't work with absurdist humor. There isn't initially 1 valid decoding, much less 2.
Mm. This might work for some proofs - Lewis Carroll, as we all know, was a mathematician - but a proof of something you already believe, conducted via tedious steps, is not humorous by anyone's lights. Proving P≠NP is not funny, but proving 2+2=3 is funny.
'A man walks into a bar and says "Ow."'
> How many surrealists does it take to change a lightbulb? Two. One to hold the giraffe, and one to put the clocks in the bathtub.
Exactly. What are the 2 valid decodings of that? I struggle to come up with just 1 valid decoding involving giraffes and bathtubs; like the duck crossing the road, the joke is the frustration of our attempt to find the connection.
# Mr. T(athagata)
idea: Mr. T as modern Bodhisattva. He remains in the world because he pities da fools trapped in the Wheel of Reincarnation.
# Musical instruments are not about music
To get a rough estimate of how many musical instruments (like the piano) there are, we can look through Wikipedia's [Category:Musical instruments](!Wikipedia). The category includes lots of non-instruments and merely notable examples of kinds of instruments, but it makes the numbers pretty clear - there are hundreds of instruments if not thousands, from most cultures, even if we compress the variations.
Suppose one had a well-defined aesthetic preference - a [total ordering](!Wikipedia) (or at least a [partially ordered set](!Wikipedia) with a [greatest element](!Wikipedia)) - so we can speak of an 'ideal instrument' for that person, an instrument which gives them the greatest aesthetic gratification of all known instruments.
If we picked a random instrument for them from our set of thousands of instruments, obviously the odds aren't good we'll pick the ideal one. Thousands to one, after all. If a parent inflicted such a choice on their kid, the kid ought to believe the choice is suboptimal from his aesthetic point of view (with a confidence of >99%). If he cares about the matter, then he should probably go looking as an adult for a better choice.
Depending on how much he cares and how easy it is to 'search' through thousands of instruments, he might search quite a bit.
Strangely, you don't see much of this. Most people seem pretty happy with their current instrument, and even music nerds don't spend as much time as one might expect sampling instruments and pondering their merits. How to explain this? [Sunk cost](!Wikipedia) into learning the inferior instrument? Maybe the aesthetic difference between an average instrument and the ideal isn't that great (despite a theremin sounding very different from a synthesizer or a keyboard or a piano, or even violas and violins sounding quite different, and the [revealed preference](!Wikipedia) of antique highly-regarded individual instruments going for hundreds of thousands or millions of dollars to performers)? Or maybe it's... status.
Maybe people don't search through all manner of rare instruments because musical instruments aren't about aesthetics as much as they are about social [signals](!Wikipedia "Signalling theory") and [status](!Wikipedia "Social status") and prestige. There can only be a few prestigious instruments (perhaps less than 10; surely not as high as 20), after all, and we all hear them quite a bit. By the time a kid hits middle school, he's spent many years watching movies and TV where there's a lot of instrumental background music and he's learned whether he likes piano better than violin or cello.
There's just not many options to think about. If you aspire to WASPy high society, you learn piano; if you aspire to prestige among young people, the guitar or drums. And so on. This is so obvious and ingrained it can be difficult to see; Western society does not, that I know of, have any standard expression like the [Four Arts of the Chinese Scholar](!Wikipedia) (which mandates a scholar know the [guqin](!Wikipedia) and such rules of etiquette as the [Seven Should-not-plays](!Wikipedia) or [Six Avoidances](!Wikipedia)). But nevertheless, the [bongo drums](!Wikipedia) are not prestigious and similarly one can point out the middling status of the [harmonica](!Wikipedia) (which only avoids being low by its use in the blues and jazz). Note that in the stereotype of Asian parents in America forcing their kids to learn instruments, the parents are not choosing oddball instruments you've never heard of (you know, one of the thousands of instruments *not* included in your standard Western-style orchestra), they're choosing ones as familiar as dirt:
> "Let's go back to her crazy list of why her parenting is better. #9: violin or piano, no other instruments. If Chua is so Chinese, and has full executive control over her kids, why does she--and the real Chinese parents out there--make their kids play violin, play Bach and not Chinese music? They'd be happy to educate you on the beauty of Chinese music, I'm sure, but they don't make their kids learn that. Why not?
> She wants them learning this because the Western culture deems classical music as high culture, and therefore anyone who can play it is cultured. Someone said Beethoven is great music so they learn that. There is no sense of understanding, it is purely a technical accomplishment. Why Beethoven and not Beethoven's contemporaries? The parents have no idea. Can her kids write new music? Do they want to write music? It's all mechanics. This isn't a slander on Asian musicianship, it is an observation that the parents who push their kids into these instruments are doing it for its significance to other people (e.g. colleges) and not for itself. Why not guitar? Why not painting? Because it doesn't impress admissions counselors. What if the kid shows some interest in drama? Well, then kid can go live with his white friends and see how far he gets in life.
> That's why it's in the _WSJ_. The _Journal_ has no place for, "How a [Fender Strat](!Wikipedia) Changed My Life." It wants piano and violin, it wants Chua's college-resume worldview." --["Are Chinese Mothers Superior To American Mothers?"](, [_The Last Psychiatrist_](
William Weir in _The Atlantic_, ["Why Is It So Hard for New Musical Instruments to Catch On?"](
> 'As composer Edgard Varese put it in 1936, "It is because new instruments have been constantly added to the old ones that Western music has such a rich and varied patrimony." So what happened? Why has there been such a drought of new instruments—especially in rock and pop, which thrive on novelty?
> Inventor Aaron Andrew Hunt blames it in part on the "music industrial complex." He created the Tonal Plexus in 1996 and has since sold, by his count, "not many." With 1,266 keys, the instrument is designed especially for microtonal composition, so it would be a tough sell at just about any time. But Hunt said the deck is particularly stacked against new instruments now that a standard repertoire has been locked in, as has the popular idea of what a proper instrument is. "The biggest barrier is the institutionalization of Western music and the mass marketing of all the instruments," he says. "The problem is that no one can break though this marketing barrier and this education barrier because it's become this machine."
> In the past, support from the establishment has made a difference in whether new instruments find a market. The research and backing of universities and corporations like RCA helped make the synthesizer happen. In Hector Berlioz, the saxophone got a major boost from a major composer. But many instruments have risen from very humble origins. The steel drum evolved from frying pans and oil cans after the Trinidadian government banned other musical instruments. Folks of limited means also turned household objects into music makers with washboards and turntables.'
Or Amy Chua herself:
> "That's one of the reasons I insisted [her two daughters] do classical music. I knew that I couldn't artificially make them feel like poor immigrant kids. ... But I *could* make sure that [daughter #1] and [daughter #2] were deeper and more cultivated than my parents and I were. Classical music was the opposite of decline, the opposite of laziness, vulgarity, and spoiledness. It was a way for my children to achieve something I hadn't. But it was also a tie-in to the high cultural tradition of my ancestors."
It's simple logic that the less popular an instrument, the easier it is to become world-class in it. Standardizing on just a few instruments and turning them into [positional good](!Wikipedia)s also tragically turns them into an arms race (and anecdotally, admissions officers have begun to disregard them *because* of their popularity[^HN-admissions]):
> "On the whole, discipline makes life easier and better. On the other hand, who the fuck cares about the piano and violin? If all tiger mothers push the piano, say, the winner-take-all race for piano becomes utterly brutal, and the tiger-mothered pianist will likely get less far in the piano race than a bunny-mothered [bassoonist](!Wikipedia). That just seems dumb! Gamble on the [flugelhorn](!Wikipedia)! The Western ethos of hyper-individuation produces less of the sort of hugely inefficient positional pileup (not that there aren't too many guitarists) that comes from herding everybody onto the same rutted status tracks. It also produces less discipline and thus less virtuosity, but a greater variety of excellence by generating the cultural innovation that opens up new fields of endeavor and new status games. It's just way better to be the world's best acrobatic kite-surfer than the third best pianist in Cleveland." --["Amy Chua"](, [Will Wilkinson](!Wikipedia)
[^HN-admissions]: Given that this is all common knowledge, any advantage to musical instruments in college admissions would constitute a kind of inefficiency (see also [Goodhart's law](!Wikipedia)); hence it does not surprise me to read observations to that effect (even if they are only pseudonymous online anecdotes):
> "When I was an admissions office for a short time, my advice to asian applicants looking to be noticed was to go to clown school, perform as a semi-professional magician, or even excel at sports. Violin, cello, piano, essays about translating for your immigrant parents, computers, math, science...all that stuff blends together after awhile and makes it hard for an admissions office to remember you when sitting around the table voting on applicants...This is pretty close to how we did things at Princeton. You can only admit so many violin playing science hopefuls." --[brandnew]([low](
> "Full disclaimer: I'm a sophomore at Yale, my adviser last year was an admissions officer, and a friend of mine works in the admissions...There is no EXPLICIT comparison of Asians to Asians. Nobody looks at your application and says "Oh, another Asian, let me turn on my asian scale!" What happens, subconsciously, is that the stereotypical asian profile is "high scoring, high gpa, piano/violin, tennis, math/science." So a lot of qualified asians get rejected because their admission officer can't find enough good arguments for them. Regardless of how qualified you are individually, Yale is trying to build a diverse class, so if you do the same thing as 1000 other candidates, it's very hard to vouch for you. "What do you bring to the campus that this other kid doesn't?", and that's the end of it." --[BlackJack](
The defense for these practices?
> "There are definitely aspects of my upbringing that I'd like to replicate. I'm never going to be a professional pianist, but the piano has given me confidence that totally shapes my life. I feel that if I work hard enough, I can do anything. I know I can focus on a given task for hours at a time. And on horrible days when I'm lost and a mess, I can say to myself, "I'm good at something that I really, really love." --["Q&A: elves, dirt, and college decisions"](, Sophia Rubenfeld-Chua
> "The point of learning the piano is NOT about acquiring the skill of playing the piano so that the student can earn a living as a pianist. It is about building the character of the person. Here is the thing about character -- you can't build it by explicitly setting out to build it. Character is not a skill like tying your shoelaces. If it must be put in terms of "skill", character is a "meta-skill" -- a foundational human skill that is necessary to perfect any number of mechanical skills. And the only way to develop this meta-skill is to develop at least one highly sophisticated mechanical skill, such that the student may acquire the meta-skill in the course of building the mechanical skill.
> So, once again: the point of learning the piano is NOT about acquiring the skill of playing the piano. As Rubenfeld-Chua put it, it is about acquiring genuine confidence and iron discipline. With such confidence and discipline, she can move on and do anything she wants in her life because there is no task in life in which confidence and discipline hinder success. THIS is the whole point of Tiger Parenting, and the reason why Tiger Parenting is so successful." --["Confucianism and Korea - Part V: What Can Confucianism Do For America?"](, The Korean
The cynical questions almost ask themselves. Would Sophia love piano so much if she hadn't had [to practice](!Wikipedia "Cognitive dissonance") [so much](!Wikipedia "Stockholm syndrome")? How unlikely is it that piano would just happen to be the perfect instrument for her? And like the old argument that learning Latin was worthwhile because it sped up subsequent language learning, does piano practice actually build character? [Juvenile boot camps](!Wikipedia "Boot camp (correctional)") have generally failed to show any significant improvement in their inmates, and soldiers frequently discuss the difficulty of adapting to civilian life (despite decades of self-discipline)[^rand]. Were we to grant the character-building nature of piano, that raises a further question - don't other instruments build character as well, and so why not learn the flugelhorn and gain *both* benefits - character *and* a useful skill? Why must we all pile into the same high-prestige occupations like being a rock star or actor[^sailer]? This may be good for the tiny subset of "insiders: pianists, concert presenters and pianophiles" who are actually able to notice the differences and value small improvements highly, though even they seem to be a bit jaded and no longer very interested in technical proficiency[^dimeadozen] - but everyone else?
[^dimeadozen]: ["Virtuosos Becoming a Dime a Dozen"](, Anthony Tommasini, _New York Times_; besides inadvertently making the point that we truly do not need more pianists, Tommasini also adds some fodder to the notion that this is not *just* an East Asian or Asian-American arms race but includes Europe and Russia as well:
> "...Ms. Wang's virtuosity is stunning. But is that so unusual these days? Not really. That a young pianist has come along who can seemingly play anything, and easily, is not the big deal it would have been a short time ago. The overall level of technical proficiency in instrumental playing, especially on the piano, has increased steadily over time. Many piano teachers, critics and commentators have noted the phenomenon, which is not unlike what happens in sports...Something similar has long been occurring with pianists. And in the last decade or so the growth of technical proficiency has seemed exponential. Yes, Ms. Wang, who will make her New York recital debut at Carnegie Hall in October, can play anything. But in China alone, in recent years, there have been Lang Lang and Yundi Li. Russia has given us Kirill Gerstein, born in 1979, the latest recipient of the distinguished Gilmore Artist Award
> ...Because so many pianists are so good, many concertgoers have simply come to expect that any soloist playing the Tchaikovsky First Concerto with the New York Philharmonic will be a phenomenal technician. A new level of technical excellence is expected of emerging pianists. I see it not just on the concert circuit but also at conservatories and colleges. In recent years, at recitals and chamber music programs at the Juilliard School and elsewhere, particularly with contemporary-music ensembles, I have repeatedly been struck by the sheer level of instrumental expertise that seems a given. ...The first several decades of the 20th century are considered a golden era by many piano buffs, a time when artistic imagination and musical richness were valued more than technical perfection. There were certainly pianists during that period who had exquisite, impressive technique, like Josef Lhevinne and Rachmaninoff himself. And white-hot virtuosos like the young Vladimir Horowitz wowed the public. But audiences and critics tolerated a lot of playing that would be considered sloppy today. Listen to 1920s and '30s recordings of the pianist Alfred Cortot, immensely respected in his day. He would probably not be admitted to Juilliard now. Despite the refinement and élan in his playing, his recording of Chopin's 24 études from the early 1930s is, by today's standards, littered with clinkers.
> ...I would place essential artists today like Richard Goode, Mitsuko Uchida and Andras Schiff among the group with all the technique they need. Among younger pianists, this club would include Jonathan Biss, a sensitive, musically scrupulous player; and one of my new favorites, the young Israeli David Greilsammer, who played an inspiring program at the Walter Reade Theater last year in which he made connections among composers from Monteverdi to John Adams, with stops at Rameau, Janacek, Ligeti and more. He may not be a supervirtuoso. But I find his elegant artistry and pianism more gratifying than the hyperexpressive virtuosity of Lang Lang, whose astonishing technique I certainly salute. Besides, the group of play-anything pianists, of which Mr. Lang is a leader, is getting pretty big. Among them you would have to include Garrick Ohlsson, who not only plays with resourceful mastery but seems to play everything, including all the works of Chopin. I would include Leif Ove Andsnes, an artist I revere, who does not call attention to himself but plays with exquisite technique and vibrant musicality. This list goes on. Martha Argerich can be a wild woman at the piano, but who cares? She has stupefying technique and arresting musical ideas. I would add Krystian Zimerman, Marc-André Hamelin and probably Jean-Yves Thibaudet to this roster. There are others, both older and younger pianists....After Mr. Kissin's Liszt Sonata a piano enthusiast sitting near me asked, "Have you ever heard the piece played so magnificently?" I said that the performance was indeed amazing, but that actually, yes, I had heard a comparably magnificent performance on the same stage a few months earlier during a recital by Stephen Hough. Mr. Hough's playing was just as prodigious technically, and I found his conception more engrossing. He reconciled the episodic sections of this teeming work into an awesome entity. Mr. Hough is another pianist who can play anything. Join the club."
But hey, there is *one* benefit to all this futile status signaling. We get a ton of anime operas[^higurashi] and music played with classical instruments on YouTube![^touhou]
[^rand]: Mental disorders like post-traumatic stress disorder and suicide, perhaps the ultimate indicator of unhappiness, are increasingly common in former American soldiers thanks to the perpetual War on Terror:
- From 1999-2004, [70,000 new veterans]( received the highest level [PTSD](!Wikipedia) benefits, for 216,000 diagnosed PTSD cases in total; "17 percent of troops returning from duty in Iraq met the strict screening criteria for mental problems such as PTSD" (which is still better than 30% in Vietnam, but worse than 10% in WWII)
- A [2007 study]( found a 12% PTSD rate for the Iraq & Afghanistan theaters (>1.8m Americans deployed)
- [162 active-duty soldiers]( committed suicide in 2009 (with 101 drug deaths 2006-2009, some of which may be suicides); estimates of PTSD now start at 300,000
- 2009 and 2010 [active-duty suicides]( exceeded combat deaths in Iraq and Afghanistan
- The VA deals with [950 suicide attempts]( as of April 2010. See also the RAND study ["The War Within: Preventing Suicide in the U.S. Military"]( which covers the 50% rise in suicide rates 2001-2008, and tactics like [Battle buddy](!Wikipedia).
With all that in mind, it is interesting to consider the positives and negatives of being a [military brat](!Wikipedia "Military brat (US subculture)#Studies of military brats").
[^sailer]: [Steve Sailer](!Wikipedia) [retells one]( of many jokes expressing this observation:
> "I can't get a date, Doc," the new patient griped to his psychiatrist. "See, I sweep up the circus elephants' droppings and can never wash the stench off me."
> "Perhaps you should get a different job."
> "What, and quit show business?"
[^higurashi]: A [_Higurashi_](!Wikipedia "Higurashi When They Cry") ['doujin opera']( ([homepage]( was performed, to apparently [good reviews](, although I can't evaluate the [recorded scene online]( myself, not being an opera person.
[^touhou]: This is a general assertion that is fairly hard to prove, but an example may be suggestive. The [Touhou](!Wikipedia "Touhou Project") doujinshi-game phenomenon has a [fair amount of music](, but to get a sense of the true scale, we can look at some numbers. From the talk ["Riding on Fans' Energy: Touhou, Fan Culture, and Grassroot Entertainment"]( ([Barcamp](!Wikipedia) Bangkok 2 on August 31, 2008):
> "Touhou is [ZUN](!Wikipedia "Team Shanghai Alice#Member")'s work as much as it is a gigantic repertoire of fan-made manga, games, music, and video clips. I estimate that there are roughly at least three thousands short manga, five hundred music rearrangement albums, and one hundred derivative games created since 2003. These works are traded mainly in conventions dedicated to them, and some commercial firms are starting to capitalize on their popularity. Doujinshi shops like [Tora no Ana](!Wikipedia "Comic Toranoana") and [Mandarake](!Wikipedia) have shelves dedicate to Touhou comics. And are carrying CDs of arranged/sampled Touhou music (but not ZUN's originals). More and more people are attracted to the franchise because its diverse derivative works provide a variety of entry points for potential fans. In fact, Touhou's popularity skyrocketed when it became one of the killer content of [Nico Nico Douga](!Wikipedia), a Japanese equivalent of YouTube launched one year and a half earlier. There, Touhou content spread like wild fire and gave rise to many recurring memes and tens of thousands of mashup videos. To give a sense of how popular Touhou is in Nico Nico Douga, 18 of 100 most viewed videos are Touhou-related, and the best Touhou video ranks the 6th. [[5]]("
For a recent estimate, we can turn to [TV Tropes](!Wikipedia)'s [article]( on Touhou music:
> "The Touhou Project really gets a lot of great pieces of music for [the music] being [originally] made up by a single guy with a synthesizer. To put the sheer number of remix CDs in perspective, there is a torrent with over 870.4 gigabytes of over 3000 Touhou remixes, and that only includes the ones that the (English-speaking) maintainers of the torrent have added."
(This is outdated; the October 2011 lossless torrent is [1,020 gigabytes]( Personally, [I]( enjoy the orchestral pieces like the [WAVE]( group's [Luna Forest (第七楽章)](
# Lip reading website
As far as I can tell, there is no free resource for learning how to [lip read](!Wikipedia), much less free online resources.
Learning to lip read is basically:
1. watch a video with obscured audio
2. guess what you think they said
3. be corrected
4. go to #1
This is eminently doable as a website: YouTube for hosting videos, [Amazon Mechanical Turk](!Wikipedia) or similar services for generating Free videos, and perhaps an [SRS algorithm](!Wikipedia "Spaced repetition") for scheduling periodic reviews of videos of particular words or sentences.
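The four-step loop above is simple enough to sketch. Here is a minimal illustration in Python, using a Leitner-box scheme as a stand-in for a full SRS algorithm; everything here is hypothetical, and the video-playing and guessing steps are stubbed out with a simulated user (on the real site they would be a YouTube embed plus a text form):

```python
import random

def leitner_drill(clips, answer_fn, boxes=5, rounds=200, seed=0):
    """clips: {clip_id: transcript}; answer_fn plays the 'user', returning
    a guess for a clip. Correct guesses promote a clip to a higher box
    (so it is reviewed less often); wrong guesses demote it to box 0."""
    box = {c: 0 for c in clips}
    rng = random.Random(seed)
    for _ in range(rounds):
        # Steps 1-2: pick a muted clip to show, weighted toward the
        # low (weak) boxes, and collect the user's guess.
        weights = [2.0 ** -box[c] for c in clips]
        clip = rng.choices(list(clips), weights=weights)[0]
        guess = answer_fn(clip)
        # Step 3: correct the guess against the known transcript.
        if guess.strip().lower() == clips[clip].strip().lower():
            box[clip] = min(box[clip] + 1, boxes - 1)
        else:
            box[clip] = 0
        # Step 4: repeat.
    return box

# Simulated user who can read clip "a" but not clip "b":
clips = {"a": "hello there", "b": "good morning"}
result = leitner_drill(clips, lambda c: "hello there")
print(result)  # "a" gets promoted out of box 0; "b" never leaves it
```

A real scheduler (SM-2, as in Anki or Mnemosyne) would track per-item intervals and ease factors instead of boxes, but the shape of the loop is the same.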
Lip reading is useful to know. There are roughly [28 million]( people in the US with hearing issues and as the [Baby Boomers](!Wikipedia) age and lose hearing, many will want to learn; estimates of Baby Boomers who will have any degree of hearing loss range from 20-60%. See:
- <>
- <>
- <>
- <>
For older Americans, the rate is [63%](
Nor is the loss limited to Baby Boomers; a report in the August 2010 issue of the _Journal of the American Medical Association_ estimated that over roughly 15 years, the teen hearing-loss rate increased 30%^[Cited in _The Futurist_ November 2010], to a total of 19.5% of 12-19 year olds with detectable hearing loss (and a similar increase in the number with mild hearing loss).[^nytimesloss]
[^nytimesloss]: ["Childhood: Hearing Loss Grows Among Teenagers"](, _New York Times_:
> "The new study, published Wednesday in _The Journal of the American Medical Association_, analyzed data on about 1,771 youngsters aged 12 to 19 who participated in the National Health and Nutrition Examination Survey of 2005-6, and compared the prevalence of hearing loss with that of youngsters who took part in the survey in 1988-94. The percentage with at least slight hearing loss increased by 30 percent, to 19.5 percent from 14.9 percent in the earlier study. For most the hearing loss is slight enough they may not even notice."
The small industry of lip reading and the international scattering of lip reading classes shows that people will pay hundreds of dollars and go places to learn it.
## Costs
Optimistically: a one-time cost of >$100 for content, ~$20/month thereafter, and a substantial time investment in putting together a site and a process for acquiring or creating video.
### Technical
#### Hosting
Assuming videos are hosted on YouTube or [Amazon S3](!Wikipedia), the website would require extremely little bandwidth and could accommodate >1000 users at ~$20 a month:
Assume a webpage requires 100KB to be loaded (very pessimistic), and that a user spends 1 hour a day using the website (30 hours a month), going to a new page every minute. That user will use $100\text{KB} \times (30 \times 60)$, or 180,000KB, or 180MB of bandwidth. [Linode's]( cheapest offering at $20/month pays for 200GB of bandwidth; $\frac{200\text{GB}}{180\text{MB}}$ ≈ 1111 users. A domain name costs ~$10 a year, or ~$1 a month.
More reasonable would be assuming 10KB per pageload, and 10 hours a month, cutting the per-user bandwidth down to $10 \times 10 \times 60$, or 6MB, and assuming fewer than 1000 users; then hosting could be even cheaper. [DreamHost]( is known for screwing over its more-demanding customers, but should be reliable enough here; their hosting is $9 a month.
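As a sanity check on the arithmetic, the two scenarios can be computed directly (the figures are the same rough assumptions as above, not measurements):

```python
def monthly_bandwidth_mb(kb_per_page, hours_per_month, pages_per_hour=60):
    """MB of HTML bandwidth one user consumes per month; video is
    served by YouTube/S3, so only page loads count against the host."""
    return kb_per_page * hours_per_month * pages_per_hour / 1000

pessimistic = monthly_bandwidth_mb(100, 30)  # 100KB pages, 30h/month
reasonable = monthly_bandwidth_mb(10, 10)    # 10KB pages, 10h/month

# Users a 200GB/month plan (e.g. Linode's $20 tier) can support:
print(pessimistic, reasonable)           # 180.0 MB vs 6.0 MB per user
print(int(200_000 / pessimistic))        # ~1111 users, worst case
print(int(200_000 / reasonable))         # ~33333 users, likely case
```

Even the pessimistic case leaves the $20/month plan comfortably able to serve >1000 users, which is the point of the estimate.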
#### Coding
Obviously a site custom-made for lip reading & very user-friendly doesn't exist. I'd have to code one or reuse some framework, though offhand I don't know of any really suited for the task. It'd be a big coding task - at least dozens of hours to learn the specific technologies and build a prototype. But then, I can't really count my own time as a cost - I'd just spend the time reading elsewise.
### Marketing
Unknown. These sorts of sites seem to do best with word of mouth marketing, so who knows? Maybe just time.
### Content
The content is the wildcard. There are a couple possible sources:
- There's a cottage industry of books and occasional CDs/DVDs, whose copyright obviously would be far too expensive to purchase.
- Hiring professionals to record lip movements is also obviously right out. To make it worth their while and to get at least 10 hours of material would take thousands of dollars.
- Online freelancing sites. I have a theory that one doesn't *want* professionals because one intends to use lip reading in real life, to read the lips of the 'amateurs' one interacts with.
- I mentioned Mechanical Turk, but that may not be appropriate; many Turkers do not have cameras or webcams, and it may not be doable to ask them to submit videos through Amazon, but Turkers could definitely be used to verify that the person in a clip is saying the things they are supposed to say. (This would cost ~10¢ per review, and usually one double-checks with multiple Turkers, so 20¢ a clip.)
- Other freelancing sites like list video/photo people working for $20-40 an hour. I figure that means that amateurs in both departments will charge no more than half that, $10/hour. 10 hours of content would then be ~$100.
## Revenue
Ads, obviously. A competitor would be; so that's a reasonable starting point. With zero effort at doing anything other than selling a DVD, some estimates of its ad revenue are [44¢]( to [$2.22]( a day. At hosting costs of $21 a month or <75¢ a day, the site could at least pay its ongoing expenses.
## Links
Random links that may be of interest:
- <>
- <>
- <>
- <>
- <>
- <>
- <>
# Venusian Revolution
> "Venus is a great example. It does pretty well in the equation, and actually gets a value of about one and a half quadrillion dollars if you tweak its reflectivity a bit to factor in its bright clouds. This echoes what unfolded for Venus in the first half of the 20th century, when astronomers saw these bright clouds and thought they were water clouds, and that it was really humid and warm on the surface. It gave rise to this idea in the 1930s that Venus was a jungle planet. So you put this in the formula, and it has an explosive valuation. Then you'd show up and face the reality of lead melting on the surface beneath sulfuric-acid clouds, and everyone would want their money back!
> If Venus is valued using its actual surface temperature, it's like 10^-12^ of a single cent. was valued on the order of a billion dollars for its market cap, and the stock is now literally worth zero. Venus is unfortunately the of planets.
> It's tragic, amazing, and extraordinary, to think that there was a small window, in 1956, 1957, when it wasn't clear yet that Venus was a strong microwave emitter and thus was inhospitably hot.
> The scientific opinion was already going against Venus having a clement surface, but in those years you could still credibly imagine that Venus was a habitable environment, and you had authors like Ray Bradbury writing great stories about it. At the same time, the ability to travel to Venus was completely within our grasp in a way that, shockingly, it may not be now. Think what would have happened, how history would've changed, if Venus had been a quadrillion-dollar world: we'd have had a virgin planet sitting right next door. Things would have unfolded in an extremely different way. We'd be living in a very different time."
--Greg Laughlin, interviewed in ["Cosmic Commodities: How much is a new planet worth? "](
Sounds like a good alternate history novel. The space race heats up in the 1950s, with a new planet at stake. Of course, along the lines of [Peter Thiel's reasoning]( about France & John Law & the Louisiana Territory, the 'winner' of such a race would probably suffer the [winner's curse](!Wikipedia). (Don't go mine for gold yourself; sell pick-axes to the miners instead.)
# Pseudonymity
'Gwern Branwen' is the pseudonym I use to traffick online; I was long paranoid and sought to cleanly separate my online and offline life. (With the exception of commercial transactions - I didn't particularly mind if was able to link my credit card under my real name with my email account.) This was a good idea because I picked up the occasional online enemy who could annoy me.
As Maru Dubshinki, I made an enemy of Daniel Brandt; and more recently, there was an attempted harassment of me in real life, /b/-style, over Neil Gaiman's [Scientology connections](!Wikipedia "Neil Gaiman#Early life") which failed miserably when their investigation dead-ended at calling up [RIT](!Wikipedia) and asking whether a Gwern Branwen happened to work there. Which of course he did not. My biggest mistake, when I still cared about solid pseudonymity, was occasionally joining the [#wikipedia IRC channel](irc:// without my [IRC cloak](!Wikipedia) and thereby exposing my IP address; Daniel Brandt was able to narrow me to someone living in [Suffolk County, New York](!Wikipedia).
So, I had always expected a break in my pseudonymity to come from the online direction. I hadn't expected it to come from the other direction. Then one day in 2008 or 2009, it did...
I was idling in #wikipedia, discussing Wikipedia matters, as one does on the weekend, when [Andrew Garrett](!Wikipedia "User:Werdna") and [Jennifer Boriss](!Wikipedia "User:FlyingToaster") ([homepage]( happened to also be hanging out with my older sister at Carnegie-Mellon; they fell to discussing Wikipedia and she mentioned that her younger brother did an awful lot of Wikipedia editing (as I [do](Links#wikis)), and Boriss inquired as to who I was when I was not at home. It was some strange nickname she couldn't remember. But she *did* remember that I frequently edited Japanese literature articles and the nick started with 'g' or something. (I'm particularly proud of [Fujiwara no Teika](!Wikipedia), incidentally.)
Well, she says, I don't know many Japanese literature editors; but I do chat with this guy named Gwern on IRC who does a lot of that sort of thing. That's it!, my sister says (as she casually destroys my pseudonymity), the odd nickname was 'Gwern'. They both have a good laugh about what a small world it is, and then they go on IRC and knock me for a loop by 'guessing' personal details until they finally reveal that my 'hot sister' is there providing information. And of course since my sister now works in San Francisco, and Garrett is a PHP developer contracted to the Wikimedia Foundation & Boriss a developer for the Mozilla Foundation (both headquartered in or near SF), she or he occasionally visits & stays with my sister and I have to hear about it online from them. Oy vey.
# Efficient natural language
A single English character can be expressed (in [ASCII](!Wikipedia)) using a byte, or ignoring the wasted high-order bit, a full 7 bits. But English is pretty predictable, and isn't using those 7 bits to good effect. [Claude Shannon](!Wikipedia) found that each character was carrying more like 1 bit of unguessable information[^shannon]; Hamid Moradi found 1.62-2.28 bits on various books[^moradi], and Cover came up with 1.3 bits[^cover]. In practice, existing algorithms can get down to just 2 bits to represent a character, and theory suggests the true entropy is around 0.8 bits per character.[^grassberger] (This, incidentally, implies that the highest bandwidth human speech can attain is around 55 bits per second.[^rapping]) Languages can vary in how much they convey in a single 'word' - ancient Egyptian conveying ~7 bits per word and modern Finnish around 10.4[^plos] (word ordering adding at least another ~3 bits in most languages); but we'll ignore those complications.
[^shannon]: Claude E. Shannon, ["Prediction and entropy of printed English"](, _Bell Systems Technical Journal_, pp. 50-64, Jan. 1951
[^cover]: T. M. Cover, ["A convergent gambling estimate of the entropy of English"](, _IEEE Trans. Information Theory_, Volume IT-24, no. 4, pp. 413-421, 1978
[^moradi]: H. Moradi, ["Entropy of English text: Experiments with humans and a machine learning system based on rough sets"](, _Information Sciences, An International Journal_ 104 (1998), 31-47
[^grassberger]: Peter Grassberger, ["Data Compression and Entropy Estimates by Non-sequential Recursive Pair Substitution"]( (2002)
[^rapping]: Rapper Ricky Brown apparently set a rapping speed record in 2005 with ["723 syllables in 51.27 seconds"](, which is 14.1 syllables a second; if we assume that a syllable is 3 characters on average, and go with an estimate of 1.3 bits per character, then the bits per second (b/s) is $14.1 \times 3 \times 1.3$, or 55 b/s. This is something of a lower bound; Korean rapper [Outsider](!Wikipedia "Outsider (rapper)") claims [17]( syllables a second, which would be 66 b/s.
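The footnote's estimate is just a product of three factors, which can be sketched as a trivial helper (the name `bitsPerSecond`, and the 3-characters-per-syllable and 1.3-bits-per-character figures, are this footnote's assumptions, not measured constants):

```haskell
-- Speech bandwidth estimate used in the footnote above:
-- syllables/second x characters/syllable x bits/character.
bitsPerSecond :: Double -> Double -> Double -> Double
bitsPerSecond syllablesPerSec charsPerSyllable bitsPerChar =
    syllablesPerSec * charsPerSyllable * bitsPerChar

-- bitsPerSecond 14.1 3 1.3 ~ 55; bitsPerSecond 17 3 1.3 ~ 66
```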
[^plos]: See ["Universal Entropy of Word Ordering Across Linguistic Families"](, Montemurro 2011
Whatever the true entropy, it's clear existing English spelling is pretty wasteful. How many characters could we get away with? We could ask, how many bits does it take to uniquely specify 1 out of, say, 100,000 words? Well, _n_ bits can uniquely specify 2^_n_^ items; we want at least 100,000 items covered by our bits, and as it happens, 2^17^ is 131072, which gives us some room to spare. (2^16^ only gives us 65536, which would be enough for a pidgin or something.) We already pointed out that a character can be represented by 7 bits (in ASCII), so each character accounts for 7 of those 17 bits. 7+7+7 > 17, so 3 characters suffice. In this encoding, one of our 100,000 words would look like 'AxC' (and we'd have ~31,000 unused triplets to spare). That's not so bad.
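The back-of-the-envelope above can be checked mechanically; here is a small sketch in Haskell (the language used for the snippet later in this section; the helper names are mine, not a standard API):

```haskell
-- Smallest number of bits b such that 2^b covers n distinct items.
bitsToIndex :: Integer -> Int
bitsToIndex n = head [b | b <- [0 ..], 2 ^ b >= n]

-- 7-bit ASCII characters needed to carry b bits.
charsToCarry :: Int -> Int
charsToCarry b = (b + 6) `div` 7

-- bitsToIndex 100000 == 17, charsToCarry 17 == 3,
-- and 2^17 - 100000 == 31072 unused triplets to spare.
```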
But as has often been [pointed out](, one of the advantages of our verbose system which can take as many as 9 characters to express a word like 'advantage' is that the waste also lets us understand partial messages. The example given is a disemvoweled sentence: 'y cn ndrstnd Nglsh txt vn wtht th vwls'. Word lengths themselves correspond roughly to frequency of use^[Which would be a sort of [Huffman encoding](!Wikipedia); see also ["Entropy, and Short Codes"](] or average information content.^[["Word lengths are optimized for efficient communication"]( "We demonstrate a substantial improvement on one of the most celebrated empirical laws in the study of language, [Zipf's](!Wikipedia "Zipf's law") 75-y-old theory that word length is primarily determined by frequency of use. In accord with rational theories of communication, we show across 10 languages that average information content is a much better predictor of word length than frequency. This indicates that human lexicons are efficiently structured for communication by taking into account interword statistical dependencies. Lexical systems result from an optimization of communicative pressures, coding meanings efficiently given the complex statistics of natural language use."]
The answer given when anyone points out that a compressed file can be turned to nonsense by a single error is that errors aren't that common, and the 'natural' redundancy is *very* inefficient in correcting for errors[^vowel], and further, while there are some reasons to expect languages to have evolved towards efficiency, we have at least 2 arguments that they may yet be very inefficient:
1. natural languages differ dramatically in almost every way, as evidenced by the difficulty Chomskyians have in finding the [deep structure](!Wikipedia) of language; for example, average word length differs considerably from language to language. (Compare German and English; they are closely related, yet one is shorter.)
    And specifically, natural languages seem to vary considerably in how much they can convey in a given time-unit; speakers make up for low-entropy syllables by speaking faster (and vice-versa), but even after multiplying the number of syllables by the rate, the languages still differ by as much as 30%[^idr].
2. speakers may prefer a concise short language with powerful error-detecting and correction, since speaking is so tiring and metabolically costly; but listeners would prefer not to have to think hard and prefer that the speaker do all the work for them, and would thus prefer a less concise language with less powerful error-detection and correction[^cancho]
[^idr]: An early study found that reading speed in Chinese and English were similar when the information conveyed was similar (["Comparative patterns of reading eye movement in Chinese and English"](; ["A cross-language perspective on speech information rate"]( investigated exactly how a number of languages traded off number of syllables versus talking speed by recording a set of translated stories by various native speakers, and found that the two parameters did not counter-balance exactly:
> "Information rate is shown to result from a density/rate trade-off illustrated by a very strong negative correlation between the ID~L~ and SR~L~. This result confirms the hypothesis suggested fifty years ago by Karlgren (1961:676) and reactivated more recently (Greenberg and Fosler-Lussier (2000); Locke (2008)): ‘It is a challenging thought that general optimalization rules could be formulated for the relation between speech rate variation and the statistical structure of a language. Judging from my experiments, there are reasons to believe that there is an equilibrium between information value on the one hand and duration and similar qualities of the realization on the other' (Karlgren 1961). However, IR~L~ exhibits more than 30% of variation between Japanese (0.74) and English (1.08), invalidating the first hypothesis of a strict cross-language equality of rates of information."
[^cancho]: ["Least effort and the origins of scaling in human language"](, Cancho 2002. From the abstract:
> "...In this article, the early hypothesis of Zipf of a principle of least effort for explaining the law is shown to be sound. Simultaneous minimization in the effort of both hearer and speaker is formalized with a simple optimization process operating on a binary matrix of signal–object associations. Zipf's law is found in the transition between referentially useless systems and indexical reference systems. Our finding strongly suggests that Zipf's law is a hallmark of symbolic reference and not a meaningless feature. The implications for the evolution of language are discussed..."
One interesting natural experiment in binary encoding of languages is the [Kele language](!Wikipedia "Kele language (Congo)"); its high and low tones add 1 bit to each syllable, and when the tones are translated to drumbeats, it takes about 8:1 repetition:
> "Kele is a tonal language with two sharply distinct tones. Each syllable is either low or high. The drum language is spoken by a pair of drums with the same two tones. Each Kele word is spoken by the drums as a sequence of low and high beats. In passing from human Kele to drum language, all the information contained in vowels and consonants is lost. In a tonal language like Kele, some information is carried in the tones and survives the transition from human speaker to drums. The fraction of information that survives in a drum word is small, and the words spoken by the drums are correspondingly ambiguous. A single sequence of tones may have hundreds of meanings depending on the missing vowels and consonants. The drum language must resolve the ambiguity of the individual words by adding more words. When enough redundant words are added, the meaning of the message becomes unique.
> ...She [his wife] sent him a message in drum language...the message needed to be expressed with redundant and repeated phrases: "White man spirit in forest come come to house of shingles high up above of white man spirit in forest. Woman with yam awaits. Come come." Carrington heard the message and came home. On the average, about eight words of drum language were needed to transmit one word of human language unambiguously. Western mathematicians would say that about one eighth of the information in the human Kele language belongs to the tones that are transmitted by the drum language."^[["How We Know"](, by [Freeman Dyson](!Wikipedia) in _[The New York Review of Books](!Wikipedia)_ (review of James Gleick's _The Information: A History, a Theory, a Flood_)]
With a good [FEC](!Wikipedia "Forward error correction"), you can compress and eat your cake too. Exactly how much error we can detect or correct is given by the [Shannon limit](!Wikipedia):
[^vowel]: If we argue that vowels are serving a useful purpose, then there's a problem. There are only 5 vowels and some semi-vowels, so we have at the very start given up at least 20 letters - tons of possibilities. To make a business analogy, you can't burn 90% of your revenue on booze & parties, and make it up on volume. Even the most trivial error-correction is better than vowels. For example, the last letter of every word could specify how many letters there were and what fraction are vowels; 'a' means there was 1 letter and it was a vowel, 'A' means 1 consonant, 'b' means 2 vowels, 'B' means 2 consonants, 'c' means 1 vowel & 1 consonant (in that order), 'C' means the reverse, etc. So if you see 'John looked _tc Julian', the trailing 'c' implies the missing letter is a vowel, which in context could only be 'a'.
This point may be clearer if we look at systems of writing. Ancient Hebrew, for example, was an [abjad](!Wikipedia) script, with vowel-indications (like the [niqqud](!Wikipedia)) coming much later. Ancient Hebrew also became a dead language, not spoken in the vernacular by its descendants until the [Revival of the Hebrew language](!Wikipedia) as [Modern Hebrew](!Wikipedia), so oral tradition would not help much. But nevertheless, the Bible is still very well-understood, and the lack of vowels rarely an issue; even the complete absence of modern punctuation didn't cause very many problems. The examples I know of are striking for their unimportance - the exact pronunciation of the [Tetragrammaton](!Wikipedia) or whether the [thief crucified](!Wikipedia "Saint Dismas#Today... in paradise") with Jesus immediately went to heaven.
> $\text{usableBits} = \text{channelCapacity} \times (1 - (-(\text{mistakeRate} \times \log_2(\text{mistakeRate}) + (1 - \text{mistakeRate}) \times \log_2(1 - \text{mistakeRate}))))$

If we suppose that each word is 3 characters long, and we get 1 error every 2 words on average, our channel capacity is 6 characters' worth of bits (7 × 6, or 42); if each error flips a single bit, the mistake rate is 1/42 of the bits. Substituting in, we get:

> $42 \times (1 - (-(\frac{1}{42} \times \log_2(\frac{1}{42}) + (1 - \frac{1}{42}) \times \log_2(1 - \frac{1}{42}))))$

Or in Haskell, we evaluate (using [logBase](!Hoogle) because [log](!Hoogle) is the natural logarithm, not the binary logarithm used in information theory):

    42 * (1 - (-(1/42 * logBase 2 (1/42) + (1 - 1/42) * logBase 2 (1 - 1/42))))

Which evaluates to ~35. In other words, we started with 42 bits of possibly corrupted information, assumed a certain error rate, and asked how much we could communicate given that error rate; the difference - roughly 7 bits, about one character's worth - is what we had to spend on ECC. Try comparing that to a vowel-scheme. The vowel would not guarantee detection or correction (you may be able to decode 'he st' as 'he sat', but can you decode 'he at' correctly?), and even worse, vowels demand an entire character, a single block of 7/8 bits, and can't be subtly spread over all the characters. So if our 2 words had one vowel, we blew 7 bits of information on that alone without buying any such guarantee, and if there were more than 1 vowel...

Of course, the Shannon limit is a theoretical ideal and requires complex codes humans couldn't mentally calculate on the fly. In reality, we would have to use something much simpler and hence couldn't get away with devoting only ~7 bits to the FEC. But hopefully it demonstrates that vowels are a really atrocious form of error-correction. What would be a good compromise between humanly possible simplicity and inefficiency (compared to the Shannon limit)? I don't know.
[Richard Hamming](!Wikipedia), who invented much of early error-correcting coding, once devised such a scheme for AT&T (similar to the [ISBN check-digit](!Wikipedia "International Standard Book Number#Check digits")): give the 37 letters, digits & space the values 0-36, add them up weighted by position modulo 37, and prepend the check symbol which brings the total to 0 modulo 37. This checksum catches what Hamming considered the most common human errors, like mistyping or swapping adjacent digits.[^hamming] A related idea is encoding bits into audible words which are as phonetically distant as possible, so a binary string (such as a cryptographic hash) can be spoken and heard with minimum possibility of error; see [PGP word list](!Wikipedia) or the 32-bit [Mnemonic encoder]( scheme.
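The mod-37 scheme is concrete enough to sketch in code. A minimal Haskell version, assuming (per Hamming's account in the footnote) Gilbert's 0-36 values for '0'-'9', 'A'-'Z' and space, and position weights 1, 2, 3, ...; the function names are mine:

```haskell
import Data.List (elemIndex)
import Data.Maybe (fromJust)

-- The 37-symbol alphabet: digits, letters, space, valued 0-36.
alphabet :: String
alphabet = ['0'..'9'] ++ ['A'..'Z'] ++ " "

value :: Char -> Int
value c = fromJust (elemIndex c alphabet)

-- Position-weighted sum of symbol values, modulo the prime 37.
weightedSum :: String -> Int
weightedSum s = sum (zipWith (*) [1..] (map value s)) `mod` 37

-- Prepend the check symbol (position 1) making the whole message sum to 0.
encode :: String -> String
encode msg = check : msg
  where rest  = sum (zipWith (*) [2..] (map value msg)) `mod` 37
        check = alphabet !! ((37 - rest) `mod` 37)

-- A message validates iff its weighted checksum is 0 mod 37.
valid :: String -> Bool
valid s = weightedSum s == 0
```

For example, `encode "A12"` yields `"6A12"`; altering any one symbol, or swapping two adjacent differing symbols, changes the weighted sum, and because 37 is prime no position weight can cancel the change, so `valid` rejects the corrupted string.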
[^hamming]: from "Coding Theory II" in _The Art of Doing Science and Engineering_, Richard W. Hamming 1997:
> "...I was once asked by AT&T how to code things when humans were using an alphabet of 26 letters, ten decimal digits, plus a 'space'. This is typical of inventory naming, parts naming, and many other naming of things, including the naming of buildings. I knew from telephone dialing error data, as well as long experience in hand computing, humans have a strong tendency to interchange adjacent digits, a 67 is apt to become a 76, as well as change isolated ones, (usually doubling the wrong digit, for example a 556 is likely to emerge as 566). Thus single error detecting is not enough...Ed Gilbert suggested a weighted code. In particular he suggested assigning the numbers (values) 0, 1, 2, ..., 36 to the symbols 0,1,..., 9, A, B, ..., Z, space.
> ...To encode a message of n symbols leave the first symbol, _k_=1, blank and whatever the remainder is, which is less than 37, subtract it from 37 and use the corresponding symbol as a check symbol, which is to be put in the first position. Thus the total message, with the check symbol in the first position, will have a check sum of exactly 0. When you examine the interchange of any two different symbols, as well as the change of any single symbol, you see it will destroy the weighted parity check, modulo 37 (provided the two interchanged symbols are not exactly 37 symbols apart!). Without going into the details, it is essential the modulus be a prime number, which 37 is.
> ...If you were to use this encoding, for example, for inventory parts names, then the first time a wrong part name came to a computer, say at transmission time, if not before (perhaps at order preparation time), the error will be caught; you will not have to wait until the order gets to supply headquarters to be later told that there is no such part or else they have sent the wrong part! Before it leaves your location it will be caught and hence is quite easily corrected at that time. Trivial? Yes! Effective against human errors (as contrasted with the earlier white noise), yes!"
# Charitable supercomputing
See [Charity is not about helping]().
# A Bitcoin+BitTorrent-driven economy for creators (Artcoin)
< foucist> sipa: gwern> idea is to convert the hashes BitTorrent does already into Bitcoin style hashes, which are only assemble-able by the server; if the
creator=server, then we just invented a way to turn downloads into nanopayments with zero mental transaction costs
< gwern> sipa: well, let's call this Artcoin. suppose there were a Artcoin-based economy but there were no upper limit but a built-in steady inflation. if downloads
          solve hashes along the way, we've made a system which gives originators of popular torrents some bitcoins, and the more popular they are, the more bitcoins they get; structured right, this might be the most popular way of downloading - solving the issue of 'BitTorrent/p2p is killing ...
< gwern> ... creators' livelihoods!'
< foucist> it might be entirely possible to do that with the current Bitcoin system, modify BitTorrent to use a different combo hash that ties into bitcoins somehow
< gwern> sipa: I thought about the Bitcoin increasing difficulty thing, but it seems to me that this would discourage creators. what happens when generating a new
Bitcoin becomes a rare event, like once a year or something? the incentive for the creator disappears.
< foucist> gwern: well the argument is that bitcoins isn't about mining
< gwern> 'even if I release my awesome new movie, I probably won't get this year's Bitcoin. so why bother?'
< gwern> sipa: paying creators using inflation isn't that bad an idea. inflation is invisible and doesn't require assent. assent is why microtransaction schemes have
failed in their thousands
< foucist> i think the base idea is to find a way to tie bitcoins into BitTorrent such that it's an automatic micropayment for creators of that content
< sipa> but you're describing two ideas
< gwern> (or at least, hundreds. micropayments seems like the sort of thing Internet folks have been trying for ages and failing miserably every time.)
< sipa> 1. using bitcoins to pay for downloads
< sipa> you mean content creators
< sipa> 2. reusing the hashing done by BitTorrent for Bitcoin
< gwern> the users mine as a side-effect of downloading for the benefit of the server/tracker, who may be the creator (given the creator's obvious first-mover advantage)
< sipa> furthermore, i don't see how you are going to reuse the hashing power... BitTorrent uses it in a deterministic way to verify whether data is identical, while
Bitcoin uses it as a pseudorandom function to look for a lot of zeroes
< gwern> sipa: #1 is boring. it's the same damn failed idea that has been tried hundreds of times. #2 is more interesting. is there any variant which does both?
< gwern> that's where I fail because I don't know the math
< gwern> (is there any way to force mining as part of the download? of course the users/downloaders could volunteer some hashing, but then someone will write a client
which does no hashing because 'it was making my computer slow' and the scheme collapses)
< gwern> foucist: I don't think that'd work, because suppose you have no bitcoins? you have an incentive to find a plaintext torrent
< sipa> and that you pay with Bitcoin hashes to get the data
< gwern> foucist: 'oh, but my backup server doesn't have access to my Bitcoin account'
< gwern> sipa: hm, that sounds more sensible. what measures could be put in place to prevent colluding clients?
< sipa> but that's little more than a using-bitcoins-to-pay-for-data system
< foucist> gwern: OK so you think only a purely generative way would work ? hmm
< sipa> and it's very wasteful
< sipa> since the profitability of CPU mining is already long gone
< gwern> sipa: in the current Bitcoin system, yeah, not some new Bitcoin system
< sipa> explain me why people would value your Artcoin's?
< gwern> foucist: well, you need to keep the Bitcoin+BitTorrent not much more expensive in time than plain BitTorrent. if the new system took 3 times as long to
download, then people have an incentive to use a parallel system. but if it were only, say, a 20% penalty, then lots of people would use it. look at iTunes
< foucist> gwern: Bitcoin mining of any kind is still kinda wasteful though, those CPU power could be used for solving real problems etc
< sipa> indeed, do that as payment in your paying-for-creators BitTorrent system
< gwern> sipa: suppose creators began using it. it would have pretty much the same advantages as regular Bitcoin, it'd give you access to the new stuff, you'd not feel
guilty about piracy, that sort of thing. (again, why does anyone download from iTunes?)
< sipa> pay with folding@home work units or some
< sipa> gwern: but it wouldn't have limited total supply
< gwern> sipa: would people suddenly stop being interested in Bitcoin if the guarantee weren't 0% inflation but, say, 0.1% inflation?
< sipa> some, certainly
< gwern> (US GDP is 14.6 trillion, so 0.1% inflation would be a lot.)
< gwern> foucist: wouldn't surprise me if the research has already been done, actually; this is starting to remind me of the 'proof of work' subfield of crypto
< Necr0s> I need to understand a bit more about crypto.
< foucist> a quick google on 'micropayment hash' reveals a variety of research papers on micropayments with hash-chains
< Necr0s> Particularly asymmetric public/private key systems.
< foucist> "Micro-payment Protocol Based on Multiple Hash Chains" 'A Study of Micro-payment Based on One-Way Hash Chain'
< Necr0s> And their use in signing content.
< foucist> gwern: "Floodgate: A Micropayment Incentivized P2P Content Delivery Network" it talks about hash-chains a bit and such, so it could be relevant to bitcoins
Alternate blockchains are not an impossible idea. The [Namecoin]( network is up and running with another blockchain, specialized for registering and paying for domain names.
And there's already a quasi-implementation of this: [Bitcoin Plus]( It is a piece of JavaScript that does the SHA-256 mining like the regular GPU miners. The idea is that one includes a link to it on one's website and then all one's website visitors' browsers will be bitcoin mining while they visit. In effect, they are 'paying' for their visit with their computational power. This is more efficient than [parasitic computing](!Wikipedia) (although visitors could simply disable JavaScript and so it is more avoidable than parasitic computing), but from the global view, it's still highly inefficient: JavaScript is not the best language in which to write tight loops and even if browser JavaScript were up to snuff, CPU mining in general is extremely inefficient compared to GPU mining. Bitcoin Plus works because the costs of electricity and computers are externalized to the visitors. Reportedly, CPU mining is no longer able to even pay for the cost of the electricity involved, so Bitcoin Plus would be an example of [negative externalities](!Wikipedia). A good Artcoin scheme should be Pareto-improving.
# Good governance & Girl Scouts
See [Girl Scouts and good governance]().
# Hard problems in utilitarianism
The Nazis believed many sane things, like exercise and the value of nature and [animal welfare](!Wikipedia "Animal welfare in Nazi Germany") and the harmful nature of smoking.
Possible rationalist [exercise](
1. Read _The Nazi War on Cancer_
2. Assemble modern demographic & mortality data on cancer & obesity.
3. Consider this hypothetical: 'If the Nazis had not attacked Russia and negotiated a peace with Britain, and remained in control of their territories, would the lives saved by the [health benefits](!Wikipedia "Anti-tobacco movement in Nazi Germany") of their policies outweigh the genocides they were committing?'
4. Did you answer yes, or no? Why?
5. As you pondered these questions, was there ever *genuine* doubt in your mind? Why was there or not?
# Who lives longer, men or women?
Do men or women live longer? Everyone knows [women live a few years longer](!Wikipedia "Life expectancy#Sex differences"); if we look at America and Japan (from the 2011 [CIA World Factbook](!Wikipedia)):
1. America: 75.92 (men) vs 80.93 (women)
2. Japan: 78.96 (men) vs 85.72 (women)
5-7 years additional bulk longevity is definitely in favor of women. But maybe what we are really interested in is whether women have longer *effective* lives: the amount of time which they have available to pursue those goals, whatever they may be, from raising children to pursuing a career. To take the Japanese numbers, women may live 8.6% longer, but if those same women had to spend 2 hours a day (or 1/12th a life, or 8.3%) doing something utterly useless or distasteful, then maybe one would rather trade off that last 0.3%.
But notice how much we had to assume to bring the female numbers down to male: 2 hours a day! That's a lot. I had not realized how much of a lifetime those extra years represented: it was a larger percentage than I had assumed.
The obvious criticism is that social expectations that women appear as attractive as possible will use up a lot of women's time. It's hard to estimate this, but men have to maintain their appearance as well; a random guess would be that men spend half an hour and women spend an hour a day on average, but that only accounts for a fourth of the extra women's time. Let's say that this extra half hour covers make-up, menstruation, waiting in female bathroom lines, and so on. (This random guess may understate the impact; the pill aside, menstruation reportedly can be pretty awful.)
Sleep patterns don't entirely account for the extra time either; one guide says ["duration of sleep appears to be slightly longer in females"](, and [Zeo, Inc.](!Wikipedia)'s [sleep dataset]( indicates a difference of women sleeping 19 minutes more on average. If we round to 20 minutes and add to the half hour for cosmetics, we're still not even half the way.
And then there are considerations like men becoming disabled at a higher rate than women (from the dangerous jobs or manual labor, if for no other reason). Unfortunately, the data doesn't seem to support this; while women have longer lifespans, they also seem to have more illnesses than men[^angus].
[^angus]: [Angus Deaton](!Wikipedia), ["What does the empirical evidence tell us about the injustice of health inequalities?"]( (January 2011):
> "Men die at higher rates than women at all ages after conception. Although women around the world report higher morbidity than men, their mortality rates are usually around half of those of men. The evidence, at least from the US, suggests that women experience similar suffering from similar conditions, but have higher prevalence of conditions with higher morbidity, and lower prevalence of conditions with higher mortality so that, put crudely, women get sick and men get dead, [Case and Paxson (2005)]( [[abstract](]."
Pregnancy and raising children is a possible way to even things out. The US census [reports a 2000 figure]( that 19% of women 40-44 did not have children. So the overwhelming majority of women will at some point bear the burden of at least 1 pregnancy. So that's 9 months there, and then...?
That's not even 1 year, so a quarter of the time is left over if we assume the pregnancy is a total time-sink but the women involved do not spend any further time on it (but also that the average male expenditure is zero time, which was never true and is increasingly less true as time passes). That leaves a decent advantage for women of ~2 years.
If you wanted to erase the female longevity advantage, you could argue that between many women having multiple children, and many raising kids full-time at the expense of their careers or non-family goals, that represents a good decade of lost productivity, and averaging it out (81% of 10 years) reduces their effective lives by 8.1 years; then taking into account the sleep and toiletry issues reduces the number by another 2 years, and now women's effective lifetimes are shorter than men's.
So at least as far as this goes, your treatment of childbearing will determine whether the longevity advantage is simply a fair repayment, as it were, for childbearing and rearing, or whether it really is a gift to the distaff side.
# Considerations upon a weekend in July
To learn to build sandcastles on the beach is to learn to live and die as an atheist.
# Poems on the theme of _Genshiken_
See [Genshiken]().
# Poems on the theme of _Rurouni Kenshin_
[For ANN](
None remain to see
how under the pouring skies,
blood runs down the blade
None remain to see
now, under the pouring skies,
blood pooling with rain.
Friends & lives are rivers
that endlessly flow into
the dark sea of death
Our lives are rivers
that endlessly flow into
quiet seas of death
With the summer sun
my birthday comes, and it goes;
and the leftover
presents' discarded wrappings
remind me of my own fate.
# Two poems
What zeal!
the wild nights spent burning
running up mountains,
churning through paper.
With such zeal and joy
did I burn those wild nights
in the candle light,
bounding up paper piles
and scaling mountains of thought
# Chinese Kremlinology
> "I'm not suggesting that any of the news pieces above are false, I'm more worrying about my own ability to be a good consumer of news. When I read about Wisconsin, for example, I have enough context to know why certain groups would portray a story in a certain way, and what parts of the story they won't be saying. When I'm interested in national (US) news, I know where to go to get multiple main-stream angles, and I know where to go to get fringe analysis. Perhaps these tools don't amount to much, but I have them and I rely on them. But I really know very little about how news from China gets to me, and it is filtered through a lot more layers than when I read about things of local interest." --[Antoine Latter](
It *is* dangerous to judge a very large and complex country with truly formidable barriers to understanding and internal opacity. As best as I can judge the numbers and facts presented for myself, there are things rotten in Denmark. (The question is whether they are rotten enough.)
But at the same time, we can't afford to buy into the China-as-the-next-threat hype. When I was much younger, I read every book my library had on Japan's economics and politics, and many online essays & articles & op-eds besides. They were profoundly educational, but not just in the way that their authors had intended - because they were all from the _Japan as Number One_ ([Ezra Vogel](!Wikipedia)) / [_Rising Sun_](!Wikipedia "Rising Sun (novel)") ([Michael Crichton](!Wikipedia)) period of the [bubble '80s](!Wikipedia "Japanese asset price bubble"), and they were profoundly confident about how Japan would rule the world, and quite convincing; yet *even as I read them*, Japan's bubble had popped brutally and it [continued to stagnate](!Wikipedia "Lost Decade (Japan)"). This dissonance, and my own susceptibility to the authors I had read, was not lost on me. (There was another sobering example from that same period for me - I had read [Frank Herbert](!Wikipedia)'s [_Dune_](!Wikipedia "Dune (novel)") with avidity, thoroughly approving of [Paul's](!Wikipedia "Paul Atreides") actions; then I read _[Dune Messiah](!Wikipedia)_ and some of Herbert's essays and interviews, only to realize that I had been cheering on a mass murderer and had fallen right into Herbert's trap - "I am showing you the superhero syndrome and your own participation in it.")
Years later, I came across [Paul Krugman](!Wikipedia)'s ["The Myth of Asia's Miracle"](, which told me about an *economic* (as opposed to military or geopolitical) parallel to Japan's ascension that I'd never heard of - Soviet Russia! (And it's worth noting that one of the other '[Asian Tigers](!Wikipedia "Four Asian Tigers")', [South Korea](!Wikipedia), despite its extraordinary growth and its own mini-narratives, is still $3k or so below Japan's per-capita income.)
Ever since, I have been curious about China's fate (will its total wealth end up much greater than, or merely comparable to, the US's?), skeptical of the optimistic forecasts, and mindful of my own fallibility. Falling into the narrative once, with Japan, is understandable; fool me twice with Soviet Russia, that's forgivable; fool me three times with China, and I prove myself a fool.
# The hidden Library of the Long Now
[Mike Darwin](!Wikipedia) told [an interesting story]( in August 2011 of a long-term project that is under way or may have been completed:
> "...he publicly posts them [his predictions], time stamps them and does statistics. That's just brilliant, and it is something I started doing privately in March of 2006. Within a year I found out that I was useless at predicting the kinds of events I thought I would be best at – such as, say, developments in the drug treatment of diseases which I knew a lot about. What I turned out to be really good at was anything to do with failure analysis, where a lot of both quantitative and qualitative data were available. For reasons I'll mention only elliptically here, I became interested in econometric data, and I also had the opportunity to travel the world specifically for the purpose of doing "failure analysis reports" on various kinds of infrastructure: the health care system in Mexico, food distribution and pricing in North Africa, the viability of the cruise (ship) industry over the period from 2010 to 2010, potential problems with automated, transoceanic container shipping… The template I was given was to collect data from a wide range of sources – some of which no self respecting academic would, or could approach. There were lots of other people in the study doing the same thing, sometimes in parallel.
> I got "recruited" because "the designer" of the study had a difficult problem he and his chosen experts could not solve, namely, how to encode vast amounts of information in a substrate that would, verifiably, last tens of millions of years. One of the participants in this working group brought me along as a guest to one of their sessions. There were all kinds of proposals, from the L. Ron Hubbard-Scientology one of writing text on stainless steel plates, to nanolithography using gold… The discussion went on for hours and what impressed me was that no one had any real data or any demonstrated experience with (or for) their putative technology. At lunch, I was introduced to "the designer" and his first question was, "What are you here for?" I told him I was there to solve his problem and that, if he liked, I could tell him how to do what he wanted absent any wild new technology or accelerated aging tests. I said one word to him: GLASS. Organisms trapped in [amber](!Wikipedia) are, of course, the demonstrated proof that even a very fragile and perishable substrate can be stabilized and retain the information encoded in it for tens of millions of years, if not longer. Pick a stable glass, protect it properly, and any reasonable information containing substrate will be stable over geological time periods^[Darwin discusses this further in the context of brain preservation in his ["Science Fiction, Double Feature, 2: Part 2"](; see also my [plastination]() essay.]. There were a lot of pissed off people who didn't get to stay for the expected (and elaborate) evening meal. As it turned out, "the designer" had another passion, and that was that he collected and used people whom he deemed (and ultimately objectively tested) to be "brilliant" at failure analysis. Failure analysis can be either prospective or retrospective, but what it consists of is someone telling you what's likely to go wrong with whatever it is you are doing, or why things went wrong after they already have."
Darwin enlarges in an email:
> "My second concern is pretty well addressed in my last post, ["Fucked."]( The geopolitical situation is atrocious; much worse than the media sense and vastly, vastly worse than most of the politicians sense. At the very top, in a few places, such as B of A, Citicorp and the IMF, there are people shitting themselves morning, noon and night. Before, they were just shitting themselves in the morning. The "study" I allude to in my response was the work of a really bizarrely interesting human being who is richer than Croesus and completely obsessed with information. He has spent tens of millions analyzing the "planetary social, economic & geopolitical situation" for failure. He wanted a timeline to failure and was smart enough to understand he could never get precision. He wanted, and I think he got, a "best completed by" date for his project. By now, I would guess that there are massive packets of glass going into very, very deep holes in a number of places...
> Let's just say I read the final report of the study group and I think I have every good reason to be in one hell of a hurry."
This all is quite interesting. Can one guess who this mysterious figure is? Let's do some _ad hoc_ reasoning in the spirit of [Fermi calculation](!Wikipedia)s!
Let's see, tens of millions just on the preliminary studies rules out millionaires; add in land purchases and fabrication costs and the project would run into hundreds of millions (eg. it cost [Jeff Bezos](!Wikipedia) something like $50m for his [Clock of the Long Now](!Wikipedia) and most of the work had already been done by the Foundation!), so we can rule out multi-millionaires, leaving just billionaire-class wealth.
Private philanthropy is almost non-existent in China^[You might get the opposite impression reading articles like this [_New York Times_ article](, but consider the flip side of large percentage growth in philanthropy - they must be starting off from a small absolute base!], Russia, and India so although they have many billionaires we can rule out those nationalities. [Australian billionaires](!Wikipedia "Category:Australian billionaires") are fairly rare and mostly in business or the extractive industries, so we can probably rule out Australia too. Combined with Darwin being an English monolingual (as far as I can tell), one can restrict the search to America and England, European at the most exotic.
To strike Darwin - a cryonicist - as weird and extremely intelligent, he probably has a high [Openness](!Wikipedia "Openness to experience") personality rating, suggesting he either inherited his money or made it in tech or finance. Being obsessed with information fits the latter two better than the former. He implies starting in 2006 or 2007, and it's unlikely he was brought in on the ground floor or that the obsession started only then, so our billionaire's wealth was probably made in the '80s or '90s or early 2000s at the very latest, in the first or second dot-com boom. This describes a relatively small subset of the 400 or so [American billionaires](!Wikipedia "Category:American billionaires").
Without trawling through Wikipedia's categories, the most obvious suspects for a weird extremely intelligent tech billionaire interested in information are Jeff Bezos, [Larry Page](!Wikipedia) & [Sergey Brin](!Wikipedia), [Larry Ellison](!Wikipedia), [Charles Simonyi](!Wikipedia)[^simonyi], [Jay S. Walker](!Wikipedia)[^jay], [Peter Thiel](!Wikipedia), and [Jerry Yang](!Wikipedia). Of those, I think I would rank them by plausibility as follows:
[^simonyi]: Charles Simonyi is actually the first person to come to mind when I think about 'weird wealthy American technologist interested in old and long-term information who has already demonstrated philanthropy on a large scale'
[^jay]: Walker was the second, due to his [library](!Wikipedia "Jay S. Walker#The Walker Library of the History of Human Imagination"). Information on his net wealth isn't too easy to come by, but he was solidly a [billionaire in 2000](, at least...
1. Jeff Bezos
Scattering glass capsules of information is an extremely Long Now idea and Bezos has already bought into the Long Now to the tune of [dozens of millions]( This alone makes him the most plausible candidate, although his plausibility is damaged by the fact that he is a very busy CEO and has been for the last 2 decades and presumably would have difficulties devoting a lot of time to such a project.
2. Peter Thiel
He has no direct Long Now links I know of, but he fits the described man even better than Bezos in some respects: he is acutely aware of upcoming [existential threats](!Wikipedia) and [anthropic biases](!Wikipedia)[^thiel] and scatters [his philanthropy](!Wikipedia "Peter Thiel#Philanthropy") widely over highly speculative investments ([seasteading](!Wikipedia), [SIAI](!Wikipedia), 20 under 20, the [Methuselah Mouse Prize](!Wikipedia) etc.). An additional point in his favor is that he lives in San Francisco, near Darwin and Long Now figures like [Stewart Brand](!Wikipedia).
3. Charles Simonyi; similar to Jay S. Walker
4. Page & Brin; while I generally get a utopian Singularitarian vibe off them and their projects and they seem to like publicizing their works, Google Books is relatively impressive and I could see them interested in this sort of thing as an 'insurance policy'.
5. Yang; I don't see anything especially *implausible* about him, but nothing in favor either.
6. Jay S. Walker; his Library quite impressed me when I saw it, indicating considerable respect for the past, a respect conducive to such a project. I initially ranked him at #3 based on old information about his fortune being at $6-7 billion in 2000, but [_Time_](,9171,57731,00.html) reported that the dot-com crash had reduced his fortune to $0.33 billion.
7. Ellison; like Jobs, his heart is cold, but he does seem to donate[^ellison] and claims to donate large sums quietly, consistent with the story. As someone who made his billions off databases rented long-term, hopefully he has an appreciation of information and a longer-term perspective than most techies.
[^thiel]: See Thiel's essay ["The Optimistic Thought Experiment: In the long run, there are no good bets against globalization"](
[^ellison]: And I use 'seem' advisedly; it's remarkable how selfish [his donations all appear to be](!Wikipedia "Larry Ellison#Charitable donations").
(I do not include [Steve Jobs](!Wikipedia) although he is famous and matches a few criteria; as far as I [or others]( can tell, his past charity has been trivial[^jobsdonations], he has essentially never used his wealth for anything but his own good [like]( [buying]( [new]( [organs](, and he comes off in _[iWoz](!Wikipedia)_ as having sociopathic characteristics; an anonymous Jobs adviser remarked in 2010 ["Steve expresses contempt for everyone - unless he's controlling them."]( It's interesting that Apple's current [matching gift](!Wikipedia) program was instituted *after* Jobs resigned, [by Tim Cook](; Apple's original philanthropic programs were shut down in 1997 by Jobs within weeks of his return[^jobsCNN]. I would be shocked if Jobs was the former employer.)
[^jobsCNN]: ["The trouble with Steve Jobs"](, [](!Wikipedia), 5 March 2008:
> "Last year the founder of the Stanford Social Innovation Review called Apple one of "America's Least Philanthropic Companies." Jobs had terminated all of Apple's long-standing corporate philanthropy programs within weeks after returning to Apple in 1997, citing the need to cut costs until profitability rebounded. But the programs have never been restored.
> Unlike Bill Gates - the tech world's other towering figure - Jobs has not shown much inclination to hand over the reins of his company to create a different kind of personal legacy. While his wife is deeply involved in an array of charitable projects, Jobs' only serious foray into personal philanthropy was short-lived. In January 1987, after launching Next, he also, without fanfare or public notice, incorporated the Steven P. Jobs Foundation. "He was very interested in food and health issues and vegetarianism," recalls Mark Vermilion, the community affairs executive Jobs hired to run it. Vermilion persuaded Jobs to focus on "social entrepreneurship" instead. But the Jobs foundation never did much of anything, besides hiring famed graphic designer Paul Rand to design its logo. (Explains Vermilion: "He wanted a logo worthy of his expectations.") Jobs shut down the foundation after less than 15 months."
[^jobsdonations]: I initially couldn't find anything whatsoever on charitable giving by Jobs. Eventually I found a [_The Times_ interview]( with Jobs where the reporter says "Jobs had volunteered himself as an advisor to John Kerry's unsuccessful campaign for the White House. He and his wife, Lauren, had given hundreds of thousands of dollars to Democratic causes over the last few years." [_Ars Technica_]( mentions a few others, but conflates Jobs with Apple. A large $150m donation speculated to be Jobs has been [confirmed to not be]( from him. (In 2004, [_Fortune_ estimated]( Jobs's fortune at $2.1 billion.) And in general, [absence of evidence]( is [evidence of absence]( Isaacson's 2011 _Steve Jobs_ biography was finished before he died and so includes nothing on Jobs's will, but does occasionally discuss Jobs's few acts of philanthropy:
> "He was not particularly philanthropic. He briefly set up a foundation, but he discovered that it was annoying to have to deal with the person he had hired to run it, who kept talking about "venture" philanthropy and how to "leverage" giving. Jobs became contemptuous of people who made a display of philanthropy or thinking they could reinvent it. Earlier he had quietly sent in a $5,000 check to help launch Larry Brilliant's Seva Foundation to fight diseases of poverty, and he even agreed to join the board. But when Brilliant brought some board members, including Wavy Gravy and Jerry Garcia, to Apple right after its IPO to solicit a donation, Jobs was not forthcoming. He instead worked on finding ways that a donated Apple II and a VisiCalc program could make it easier for the foundation to do a survey it was planning on blindness in Nepal....His biggest personal gift was to his parents, Paul and Clara Jobs, to whom he gave about $750,000 worth of stock. They sold some to pay off the mortgage on their Los Altos home, and their son came over for the little celebration. "It was the first time in their lives they didn't have a mortgage," Jobs recalled. "They had a handful of their friends over for the party, and it was really nice." Still, they didn't consider buying a nicer house. "They weren't interested in that," Jobs said. "They had a life they were happy with." Their only splurge was to take a Princess cruise each year. 
The one through the Panama Canal "was the big one for my dad," according to Jobs, because it reminded him of when his Coast Guard ship went through on its way to San Francisco to be decommissioned...[Mona Simpson's novel] depicts Jobs's quiet generosity to, and purchase of a special car for, a brilliant friend who had degenerative bone disease, and it accurately describes many unflattering aspects of his relationship with Lisa, including his original denial of paternity...Bono got Jobs to do another deal with him in 2006, this one for his Product Red campaign that raised money and awareness to fight AIDS in Africa. Jobs was never much interested in philanthropy, but he agreed to do a special red iPod as part of Bono's campaign. It was not a wholehearted commitment. He balked, for example, at using the campaign's signature treatment of putting the name of the company in parentheses with the word "red" in superscript after it, as in (APPLE)^RED^. "I don't want Apple in parentheses," Jobs insisted. Bono replied, "But Steve, that's how we show unity for our cause." The conversation got heated—to the F-you stage—before they agreed to sleep on it. Finally Jobs compromised, sort of. Bono could do what he wanted in his ads, but Jobs would never put Apple in parentheses on any of his products or in any of his stores. The iPod was labeled (PRODUCT)^RED^, not (APPLE)^RED^."
Of course, if we really want to rescue Jobs's reputation, we still can. It *could* be the case that Jobs was very charitable but gave completely anonymously, or that he preferred to reinvest his wealth in gaining more wealth and to donate only after his death - a Buffett-like strategy that, _ex post_, would seem to be a very wise one given the stock performance of `AAPL`. Jobs's death in October 2011 means that this theory is falsifiable sooner than I had expected while writing this essay. Based on Jobs's previous charitable giving, the general impression I have from the hagiographic press coverage is that Apple itself is Jobs's charitable gift to the world (a view which I can't help but suspect either influenced or was influenced by the man himself). My own general expectation is that he will definitely not will ~99% of his wealth to charity like Buffett or Gates ([80%](, probably not >50% ([70%](, and more likely somewhere in the 0-10% range ([60%]( If any philanthropy comes of Jobs's Pixar billions, I expect it to be at the behest of his widow, [Laurene Powell Jobs](!Wikipedia), who has long been involved in non-profits; to quote Isaacson again:
> "Jobs's relationship with his wife was sometimes complicated but always loyal. Savvy and compassionate, Laurene Powell was a stabilizing influence and an example of his ability to compensate for some of his selfish impulses by surrounding himself with strong-willed and sensible people. She weighed in quietly on business issues, firmly on family concerns, and fiercely on medical matters. Early in their marriage, she cofounded and launched College Track, a national after-school program that helps disadvantaged kids graduate from high school and get into college. Since then she had become a leading force in the education reform movement. Jobs professed an admiration for his wife's work: "What she's done with College Track really impresses me." But he tended to be generally dismissive of philanthropic endeavors and never visited her after-school centers."
All this said, I am well aware I haven't looked at even a small percentage of American billionaires, and I could be wrong in focusing on techies - finance is equally plausible (look at [James Harris Simons](!Wikipedia)! if he isn't a plausible candidate, no one is), and inherited wealth is still common enough not to be ignored. Pondering the imponderables, I'd give a [15% chance]( that one of those candidates was the employer, and perhaps [a 9% chance]( that the employer was either Bezos, Thiel, or Simonyi, with [Bezos being 4%](, [Thiel ~3%]( and [Simonyi 2%](
And indeed, Darwin said he didn't recognize several of those names, and implied they were all wrong. Well, it would have been fairly surprising if 15% confidence assertions derived through such dubious reasoning *were* right.
# Things I have changed my mind about
See [Mistakes]().
# William Carlos Williams
so much depends
upon

a red wheel
barrow

glazed with rain
water

beside the white
chickens
Have you ever tried to change the oil in your car? Or stared perplexed at a computer error for hours, only for a geek to resolve it in a few keystrokes? Or tried to do yardwork with the wrong tool? (Bare hands rather than a shovel; a shovel rather than a rake, etc.)
So much depends on the right tool or the right approach. Think of a man lost in a desert. The right direction is such a trivial small bit of knowledge, almost too small a thing to even be called 'data'. But it means the entire world to that man – the entire world.
So much depends on little pieces of metal being 0.451mm wide and not 0.450mm, and on countless other dimensions. (Think of the insides of a jet engine, of thousands of planes and even more tens of thousands of people not falling screaming out of the sky.)
Williams is sharing with us, in true Imagist style, a sudden realization, an epiphany in a previously mundane image.
Here is a farm. It seems robust and eternal and sturdy. Nothing about this neglected wheelbarrow, glazed with rain and noticed only by fowl, draws our attention – until we suddenly realize how fragile everything is, how much everything has to go right 99.999% of the time, how without a wheelbarrow, we cannot do critical tasks and the whole complex farm ecosystem loses homeostasis and falls apart.
(I sometimes have this feeling on the highway. Oh my god! I could die in so many ways right now, with just a tiny movement of my steering wheel or anyone else's steering wheel! How can I possibly still be alive after all these trips?)
# Fermi calculations
I really like [Fermi problems](!Wikipedia) - it's like [dimensional analysis](!Wikipedia) for everything outside of physics^[This is a little misleading; dimensional analysis is much more like [type-checking](!Wikipedia) a program in a language with a good type system like Haskell. Given certain data types as *inputs* and certain allowed transformations on those data types, what data types *must* be the resulting output? But the analogy is still useful.].
Not only are they fun to think about, they can be amazingly accurate, and are extremely cheap to do - because they are so easy, you do them in all sorts of situations you wouldn't do a 'real' estimate for, and they are a fun part of a [physics education]( The common distaste for them baffles me; even if you never work through [_Street-Fighting Mathematics_]( or read [Douglas Hofstadter](!Wikipedia)'s essay ["On Number Numbness"]( (collected in _[Metamagical Themas](!Wikipedia)_), it's something you can teach yourself by asking: what information is publicly available, what can I compare this to, how can I put various boundaries around the true answer?^[eg. if someone asks you how many piano tuners there are in Chicago, don't look blank, start thinking! 'Well, there must be fewer than 7 billion, because the human race isn't made of piano tuners, and likewise fewer than 300 million (the population of the United States), and heck, Wikipedia says Chicago has only 2.6 million people and piano tuners are rare, so there must be many fewer than *that*...'] You especially want to do Fermi calculations in areas where the data is unavailable; I ponder these areas frequently, eg. [is a lip-reading website a good idea?](#lip-reading-website), how many women [dye their hair blonde](#somatic-genetic-engineering), how many [people does Folding@home kill](Charity is not about helping), what's [the entropy of natural language](#efficient-natural-language) or [how big a computer](Simulation inferences) is needed to compute the universe, do [men have shorter real lives than women](#who-lives-longer-men-or-women), how much do Girl Scout cookies [cost and earn them](Girl Scouts and good governance#cookie-prices-and-inflation), what is the cheapest we can expect to find [modafinil for](Modafinil#margins), or simply while quickly assessing the rough probability of some event as I make one of my [thousands of predictions](Prediction markets).
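The piano-tuner estimate in the footnote can be run through explicitly; a minimal sketch, where every input is the usual rough guess rather than a measured datum:

```python
# Classic Fermi problem: roughly how many piano tuners work in Chicago?
# Every input below is an order-of-magnitude guess, not a measurement.
population = 2_600_000                   # Chicago (per Wikipedia, as in the footnote)
people_per_piano = 100                   # guess: ~1 piano per 100 people
tunings_per_piano_per_year = 1           # guess: each piano tuned about once a year
tunings_per_tuner_per_year = 2 * 5 * 50  # guess: 2/day, 5 days/week, 50 weeks/year

pianos = population / people_per_piano
tunings_needed = pianos * tunings_per_piano_per_year
tuners = tunings_needed / tunings_per_tuner_per_year
print(round(tuners))  # prints 52: dozens, not thousands - the right ballpark
```

The point is not the exact figure but that four crude guesses already pin the answer well inside the bounds the footnote reasons out.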
To look further afield, here's a quick and nifty application by investor John Hempton to the [Sino-Forest fraud](!Wikipedia "Sino-Forest Corporation#Fraud allegations and share suspension"): ["Risk management and sounding crazy"](
What I personally found most interesting about this post was not the overall theme that the whistleblowers were discounted before and after they were proven right (we see this in many bubbles, for example the housing bubble), but how one could use a sort of [Outside View]( calculation to sanity-check the claims. If Sino-Forest was really causing 17m cubic meters of wood to be processed a year, where was all the processing? With medicine, there is one simple question one can always ask too - where is the increased longevity? (This is an important question to ask of studies, such as a [recent caloric restriction study]( Simple questions tell us a lot.
# Politicians are not unethical
[Dominique Strauss-Kahn](!Wikipedia), while freed of the charge of rape, stands convicted in the court of public opinion as an immoral philanderer; after all, even by his account he cheated on his wife with the hotel maid, and he has been accused in France by [a writer](!Wikipedia "Tristane Banon#Banon's allegations against Dominique Strauss-Kahn") of raping her; where there is smoke there is fire, so Kahn has probably slept with quite a few women^[Tristane Banon came forward only after the maid, and Kahn's calm behavior after the maid incident suggests he considered it routine. Both facts suggest that the 'probability of public revelation', if you will, is fairly low, and so we ought to expect numerous previous unreported such liaisons. (An analogy: a manager catches an employee stealing from the till. The employee claims this was the first time ever and he'll be honest thenceforth. Should the manager believe him?)]. This is as people expect - politicians sleep around and are immoral. Power corrupts. To be a top politician, one must be a risk-taking alpha male reeking of testosterone, to fuel [status-seeking behavior](/docs/2011-eisenegger.pdf "The role of testosterone in social interaction").^[If this is such common knowledge, one wonders what the *wives* think; during sex scandals, they seem to remain faithful, when other women divorce over far less than such public humiliation. Why would Kahn's wife - the wealthy and extremely successful [Anne Sinclair](!Wikipedia) - remain linked with him? I've seen it suggested that such marriages are 'open' relationships, where neither party expects fidelity of the other, and like many aristocratic marriages of convenience, the heart of the agreement is to not be *caught* cheating. In Kahn's case, perhaps Sinclair judges him not fatally politically wounded, with still a chance at the French presidency.
It is an interesting question how conscious such considerations are; [Keith Henson](!Wikipedia) has [an evolutionary theory]( somewhat relevant - that women (in particular) can transfer their affections to powerful males such as captors to safeguard their future reproductive prospects.] And then it's an easy step to say that the testosterone causes this classically _hubristic_ behavior of ultimately self-destructive streaks of abuse:
> "Toward the end of my two-week [testosterone injection] cycle, I can almost feel my spirits dragging. In the event of a just-lost battle, as Matt Ridley points out in his book _The Red Queen_, there's a good reason for this to occur. If you lose a contest with prey or a rival, it makes sense not to pick another fight immediately. So your body wisely prompts you to withdraw, filling your brain with depression and self-doubt. But if you have made a successful kill or defeated a treacherous enemy, your hormones goad you into further conquest. And people wonder why professional football players get into postgame sexual escapades and violence. Or why successful businessmen and politicians often push their sexual luck."^[[Andrew Sullivan](!Wikipedia), ["The He Hormone"](]
Power corrupts, unconsciously, leading to abuse of power and an inevitable fall - the [paradox of power]( Such conventional wisdom almost dares examination. Politicians being immoral and sleeping around is a truism - people in general are immoral and sleep around. What's really being said is that politicians do *more* immorality and sleeping-around than another group, presumably upper-class^[National-level legislators usually being well-educated and well-off, when they are not mega-millionaires like John Kerry or millionaires like Barack Obama.] but still non-politician white men^[Minorities and women being rare even now.].
## Revealed moralities
But is this true? I don't think I've ever seen anyone actually ask this question, much less offer any evidence. It's a simple question: do white male politicians (and national politicians in particular) sleep around more than upper-class white males in general? It's [easy](!Wikipedia "Representativeness heuristic") to come up with examples of politicians who stray - paying prostitutes, having a 'wide stance', sending photographs online (possibly to young pages), or impregnating mistresses - but those are anecdotes, not statistics. Consider how *many* 'national-level' politicians there are that could earn coverage with their infidelities: Congress alone, between the [House](!Wikipedia "United States House of Representatives") and the [Senate](!Wikipedia "United States Senate"), has 535 members; then one could add in the 9 Justices; the President & Vice-President and [Cabinet](!Wikipedia "Cabinet of the United States") make another 17; and then there are the governors of each of the 50 states, for a total of 611 people.
### A priori rates
If those 611 were merely ordinary, what would we expect? Lifetime estimates of adultery seem to center around 20%[^adultery] although Kinsey put it at 50% [for men](!Wikipedia "Adultery#United States"). So we might expect *122 to 305* of the current set of national politicians to be unfaithful eventually! That's 4-10 sex scandals a year on average (assuming a 30-year career), each of which might be covered for weeks on national TV. I do not know about you, but either end of that range seems high, if anything; it's not every other month that a politician goes down in flames. (Who went down as scheduled in September or August 2011? No one?) Why does it feel the opposite way, though? We might chalk it up to the [base rate fallacy](!Wikipedia) - saying 'that's a lot' while forgetting what we are comparing to.
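The arithmetic above can be checked in a couple of lines (same figures as in the text: 611 officials, 20-50% lifetime rates, 30-year careers):

```python
# Base-rate sanity check: how many unfaithful national politicians to expect?
officials = 611                    # 535 Congress + 9 Justices + 17 executive + 50 governors
low_rate, high_rate = 0.20, 0.50   # lifetime adultery estimates cited in the text
career_years = 30                  # assumed average political career

low, high = officials * low_rate, officials * high_rate
print(int(low), int(high))                                 # prints: 122 305
print(int(low / career_years), int(high / career_years))   # prints: 4 10 (scandals/year)
```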
And 611 is a very low estimate. After all, everyone lives *somewhere*. The 8 million inhabitants of New York City will read about and be disgusted by the Lieutenant Governor of New York, the Mayor of New York City and his flunkies, the New York State legislature (212 members); and then there are the nearby counties like Nassau or Suffolk which are covered by newspapers in circulation in NYC like _Newsday_. We could plausibly double or triple this figure. (I had not heard of many politicians involved in sex scandals - like Kahn, come to think of it - so they do not even need to be famous.)
So we have noticed that there are 'too few' sex scandals in politics; the same reasoning seems to work for ordinary crimes like murder - there are too few! In fact, it seems that politicians are uncannily honest; the only category I can think of where politicians are normally unethical would be finance (bribes, conflicts of interest, insider trading by [Representatives]( & [Senators](, etc). Why is this?
## Why?
Self-discipline seems like an obvious key. A reputation is built over decades, and can be destroyed in one instant. But that seems a little too friendly - we're praising our politicians for morality and we're also going to claim it's because they are more disciplined (with all the positive moral connotations)?
Maybe the truth is more sinister: they whore around as much as the rest of us, they're just covering it up better.
And we need a cover-up which actually reduces the number of scandals going public to make this all go away and leave our prejudices alone.
## Investigating
If all the media were doing was *delaying* reporting on said scandals, we'd still see the same total number of scandals - just shifted around in time. To some extent, we do see delays. For example, we seem to now know a lot about [John F. Kennedy's womanizing](, but his contemporaries [ignored]( even a determined attempt to spread the word; similar stories seem true of other Presidents & presidential candidates ([FDR & Wendell Willkie]( & [John Edwards](!Wikipedia)^[Edwards is a good example because the news about his mistress [Rielle Hunter](!Wikipedia) was broken by the _[National Enquirer](!Wikipedia)_ - in 2008, 1 year before he admitted the relationship and 3 years before he admitted being the illegitimate child's father as well.]). This suggests a way to distinguish the permanent cover-up from the delayed cover-up theory: hit the history books and see how many politicians in a political cohort turn out to have had mistresses and credible rumors of affairs. Take every major politician from, say, 1930, and check into their affairs: how many were then known to have affairs? How many were revealed to have affairs decades later? This will give us the delay figure and let us calculate the 'shadow scandals' - how many sex scandals there *ought* to be right now but aren't.
(One could probably even automate this. Take a list of politicians from Wikipedia and feed them into Google Books, looking for proximity to keywords like 'sex'/'adultery'/'mistress', etc.)
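The automation could be roughed out with the public Google Books API; a sketch of the idea (the endpoint is real, but treat the pipeline as a back-of-envelope heuristic, and the keyword list and threshold as my own assumptions):

```python
# For each politician, count Google Books hits pairing the name with
# scandal keywords. Crude, but enough to rank a cohort for manual review.
import json
import urllib.parse
import urllib.request

KEYWORDS = ["mistress", "adultery", "affair"]

def books_query_url(politician: str, keyword: str) -> str:
    """Google Books API search URL pairing a name with a scandal keyword."""
    q = urllib.parse.quote(f'"{politician}" {keyword}')
    return f"https://www.googleapis.com/books/v1/volumes?q={q}"

def scandal_hits(politician: str) -> int:
    """Total Google Books result count across all scandal keywords."""
    total = 0
    for kw in KEYWORDS:
        with urllib.request.urlopen(books_query_url(politician, kw)) as resp:
            total += json.load(resp).get("totalItems", 0)
    return total

# politicians = [...]  # e.g. scraped from a Wikipedia list of 1930s senators
# ranked = sorted(politicians, key=scandal_hits, reverse=True)
```

Result counts are noisy (common names, false matches), so this would only flag candidates for a human to check against the history books.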
### Uses
The shadow rate is interesting since the mass media audience finds sex scandals interesting to a nauseating degree. (Why does the media spend so much time on something like Weiner? Because it sells.) The shadow rate ought to be *negative* if anything: there is so much incentive to report on sex scandals one might expect the media to occasionally make up a scandal, on the same principle as [William Randolph Hearst](!Wikipedia) and the [Spanish-American War](!Wikipedia "Propaganda of the Spanish-American War") - it sells well. Any positive shadow rate shows something very interesting: that the media values the politicians' interests more than its own, to the point where they are *collectively* (it only takes one story to start the frenzy) willing to conceal something their customers avidly demand.
In other words, the shadow rate is a proxy for how corrupt the media is.
[^adultery]: ["Married, With Infidelities"]()
> "In 2001, The Journal of Family Psychology summarized earlier research, finding that "infidelity occurs in a reliable minority of American marriages." Estimates that "between 20 and 25 percent of all Americans will have sex with someone other than their spouse while they are married" are conservative, the authors wrote. In 2010, NORC, a research center at the University of Chicago, found that, among those who had ever been married, 14 percent of women and 20 percent of men admitted to affairs."
Baumeister 2010, _Is There Anything Good About Men?_ pg 242 puts it much higher:
> "According to the best available data, in an average year, well over 90% of husbands remain completely faithful to their wives. In that sense, adultery is rare. Then again, if you aggregate across all the years, something approaching half of all husbands will eventually have sex with someone other than their wives...There are many sources on adultery and extramarital sex. The best available data are in Laumann, E. O., Gagnon, J. H., Michael, R. T., & Michaels, S. (1994). _The social organization of sexuality: Sexual practices in the United States_. Chicago, IL: University of Chicago Press. For an older, but thoughtful and readable introduction, see Lawson, A. (1988). _Adultery: An analysis of love and betrayal_. New York: Basic Books."
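Baumeister's two figures are mutually consistent; a quick illustration (the 2% annual rate is my assumption, chosen only to match "well over 90% remain completely faithful" in a given year, and the model naively assumes independence across years):

```python
# A small per-year infidelity rate compounds over a long marriage
# to something approaching one-half.
annual_infidelity = 0.02   # assumed: ~98% of husbands faithful in any year
years = 35                 # assumed length of marriage
ever_unfaithful = 1 - (1 - annual_infidelity) ** years
print(round(ever_unfaithful, 2))  # ≈ 0.51
```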
Taormino 2008, _Opening Up_:
> "There's another significant indicator that monogamous marriages and relationships aren't working: cheating is epidemic. The Kinsey Report was the first to offer statistics on the subject from a large study published in 1953; it reported that 26 percent of wives and 50 percent of husbands had at least one affair by the time they were 40 years old. Other studies followed, with similar findings. According to the Janus Report of 1993, more than one-third of men and more than one-quarter of women admit to having had at least one extramarital sexual experience. Forty percent of divorced women and 45 percent of divorced men reported having had more than one extramarital sexual relationship while they were still married. In a 2007 poll conducted by MSNBC and iVillage, half of more than 70,000 respondents said they've been unfaithful at some point in their lives, and 22 percent have cheated on their current partner."
# Defining 'but'
The word 'but' is pretty interesting. It seems to be short hand for a pretty complex logical argument, which isn't *just* [modus tollens](!Wikipedia) but something else, in much the same way that natural language's [if-then](!Wikipedia "Material conditional#Philosophical problems with material conditional") is not just the material conditional.
(As a quick refresher: modus tollens is 'A ~> B; not B; therefore, not A'. Its counterpart is [modus ponens](!Wikipedia): 'A ~> B; A; therefore, B'.)
Most arguments proceed by repeated modus ponens; 'this' implies 'that' which implies 'the other', and 'this' is in fact the case, so you must agree with me about 'the other'. It's fairly rare to try to dispute an argument immediately by denying 'this' but conceding the rest of the argument; instead, one replies with a 'but'. But what?
Having thought about it, I believe we can formalize 'but' as a probabilistic modus tollens. Usually we know we're dealing in slippery probabilities and inductions; if I make an argument about GDP and tax rates, I only get a reliable conclusion if I am not talking about the cooked books of Greece. My conclusion is always less reliable than my premises because probability intervenes at every step: the probability of both A and B must be less than or equal to the probability of either one alone. So, when we argue by repeated modus ponens, what we are actually saying (although we pretend to be using good old syllogisms and deductive logic) is something more like: 'A implies B; probably A; therefore (less) probably B'.
When someone replies with 'But C!', what they are saying is: 'C implies ~B; both A implies B and C implies ~B cannot be true as it is a contradiction, and C is more likely than A, so we should really conclude that C and ~A, and therefore, ~B'.
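Under the simplifying assumption that both conditionals are near-certain, the whole dispute reduces to comparing the priors of the two premises; a sketch of the formalization (the notation is mine, not standard):

```latex
% 'But C!' as a comparison of premise probabilities.
% Given A \Rightarrow B and C \Rightarrow \neg B, with both implications
% treated as near-certain:
\[
  P(B) \approx P(A), \qquad P(\neg B) \approx P(C)
\]
% so the audience should conclude \neg B (and hence \neg A)
% exactly when P(C) > P(A).
```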
They are setting up an unstated parallel chain of arguments. Imagine a physicist discussing FTL neutrinos; 'this observation therefore that belief therefore this conclusion that the neutrinos arrived faster than light'. And someone speaks up, saying 'But there was no burst of neutrinos *before* we saw light from that recent supernova!' What is going on here is the audience is weighing the probabilities of two premises, which then work backwards to the causal chains. One might conclude that it is more likely that the supernova observations were correct than the FTL observations were correct, and thus reason with modus tollens about the FTL - 'FTL-Correct ~> (seeing neutrino burst)^[To be clear, '~Seeing-neutrino-burst' means something like 'the equipment or staff screwed up and the lack of observation is some mistake or bad luck'. In this case, both theories think that the neutrino burst *does* exist.]; ~(seeing neutrino burst); therefore, ~FTL-Correct'. But if it goes the other way, then one would reason, 'Seeing-neutrino-burst ~> ~FTL; FTL; therefore, ~Seeing-neutrino-burst'.
You don't really find such probabilistic inference in English except in 'but'. Try to explain it without 'but'. Here's an example:
1. 'Steve ran by with a bloody sword, but he likes to role-play games so I don't think he's a serial killer' versus
2. 'Steve ran by clutching a sword which is consistent with the theory that he is a serial killer and also consistent with the theory that he is role-playing a game; I have a low prior for him ever being a serial killer and a high prior for him carrying a sword, bloody or otherwise, for reasons like role-playing and when I multiply them out, the role-playing explanation has a higher probability than the serial killer explanation'
I exaggerate a little here; nevertheless, I think this shows 'but' is a little more complex and sophisticated than one would initially suspect.
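The Steve example can be multiplied out numerically; every prior and likelihood below is invented purely for illustration:

```python
# Two hypotheses for why Steve ran past with a bloody sword.
# All numbers are invented illustrations, not measurements.
p_roleplayer    = 0.05    # prior: Steve is a live-action role-player
p_serial_killer = 1e-6    # prior: Steve is a serial killer

# Likelihood of the observation (running by with a bloody-looking sword):
p_sword_given_roleplayer = 0.30   # fake blood is standard role-play kit
p_sword_given_killer     = 0.50   # even killers rarely parade the weapon

# Unnormalized posteriors (Bayes' rule, dropping the shared denominator):
post_roleplayer = p_roleplayer * p_sword_given_roleplayer      # 0.015
post_killer     = p_serial_killer * p_sword_given_killer       # 5e-7

print(post_roleplayer / post_killer)  # role-play ~30,000x more likely
```

This is exactly the computation sentence 2 spells out in words, and which 'but' compresses into one syllable in sentence 1.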
# Cryonics cluster
When one looks at cryonics enthusiasts, there's an interesting cluster of beliefs. There's psychological materialism, as one would expect (it's possible to believe your personal identity is your soul and also that cryonics works, but that's a rather unstable and unusual combination), since the mind cannot be materially preserved if it is not material. Then there's libertarianism, with its appeal to free markets and invisible entities like deadweight loss. And then there is ethical utilitarianism, usually act utilitarianism[^ethics]. Cryonicists are often accused of being nerdy and specifically autistic or Asperger's - with considerable truth. Most have programming experience, or have read a good deal about logic and math and computers.
[^ethics]: In one [LessWrong]() survey, 94 (73.4%) were consequentialists, and those who didn't believe in morality were only one fewer than the deontologists! (There were 5 virtue ethicists, to cover the last major division of modern secular ethics.)
This clustering could be due solely to social networks and whatnot. But suppose it's not. Is there any perspective which explains this, and cryonics' "hostile wife phenomenon" as well?
Let's look at the key quotes about that phenomenon, and a few quotes giving the reactions:
> "The authors of this article know of a number of high profile cryonicists who need to hide their cryonics activities from their wives and ex-high profile cryonicists who had to choose between cryonics and their relationship. We also know of men who would like to make cryonics arrangements but have not been able to do so because of resistance from their wives or girlfriends. In such cases, the female partner can be described as nothing less than hostile toward cryonics. As a result, these men face certain death as a consequence of their partner's hostility. While it is not unusual for any two people to have differing points of view regarding cryonics, men *are* more interested in making cryonics arrangements. A recent membership update from the Alcor Life Extension Foundation reports that 667 males and 198 females have made cryonics arrangements. Although no formal data are available, it is common knowledge that a substantial number of these female cryonicists signed up after being persuaded by their husbands or boyfriends. For whatever reason, males are more interested in cryonics than females. These issues raise an obvious question: are women more hostile to cryonics than men?
> ...Over the 40 years of his active involvement, one of us (Darwin) has kept a log of the instances where, in his personal experience, hostile spouses or girlfriends have prevented, reduced or reversed the involvement of their male partner in cryonics. This list (see appendix) is restricted to situations where Darwin had direct knowledge of the conflict and was an Officer, Director or employee of the cryonics organization under whose auspices the incident took place. This log spans the years 1978 to 1986, an 8 year period...The 91 people listed in this table include 3 whose deaths are directly attributable to hostility or active intervention on the part of women. This does not include the many instances since 1987 where wives, mothers, sisters, or female business partners have materially interfered with a patient's cryopreservation(3) or actually caused the patient not to be cryopreserved or removed from cryopreservation(4). Nor does it reflect the doubtless many more cases where we had no idea...
> ...The most immediate and straightforward reasons posited for the hostility of women to cryonics are financial. When the partner with cryonics arrangements dies, life insurance and inheritance funds will go to the cryonics organization instead of to the partner or their children. Some nasty battles have been fought over the inheritance of cryonics patients, including attempts of family members to delay informing the cryonics organization that the member had died, if an attempt was made at all(5). On average, women live longer than men and can have a financial interest in their husbands' forgoing cryonics arrangements. Many women also cite the "social injustice" of cryonics and profess to feel guilt and shame that their families' money is being spent on a trivial, useless, and above all, selfish action when so many people who could be saved are dying of poverty and hunger now...Another, perhaps more credible, but unarguably more selfish, interpretation of this position is what one of us (Darwin) has termed "post reanimation jealousy." When women with strong religious convictions who give "separation in the afterlife" as the reason they object to their husbands' cryopreservation are closely questioned, it emerges that this is not, in fact, their primary concern. The concern that emerges from such discussion is that if cryonics is successful for the husband, he will not only resume living, he may well do so for a vast period of time during which he can reasonably be expected to form romantic attachments to other women, engage in purely sexual relationships or have sexual encounters with other women, or even marry another woman (or women), father children with them and start a new family. This prospect evokes obvious insecurity, jealousy and a nearly universal expression on the part of the wives that such a situation is unfair, wrong and unnatural. 
Interestingly, a few women who are neither religious nor believers in a metaphysical afterlife have voiced the same concerns. The message here may be "If I've got to die then you've got to die too!" As La Rochefoucauld famously said, with a different meaning in mind, "Jealousy is always born with love, but does not always die with it."...While cryonics is mostly a male pursuit, there are women involved and active, and many of them are single. Wives (or girlfriends) justifiably worry that another woman who shares their husbands' enthusiasm for cryonics, shares his newly acquired world view and offers the prospect of a truly durable relationship – one that may last for centuries or millennia – may win their husbands' affections. This is by no means a theoretical fear because this has happened a number of times over the years in cryonics. Perhaps the first and most [publicly acknowledged]() instance of this was the divorce of Fred Chamberlain from his wife (and separation from his two children) and the break-up of the long-term relationship between Linda McClintock (née Linda Chamberlain) and her long-time significant other as a result of Fred and Linda working together on a committee to organize the Third National Conference On Cryonics (sponsored by the Cryonics Society of California)."^[["Is That What Love is? The Hostile Wife Phenomenon in Cryonics"](), by Michael G. Darwin, Chana de Wolf, and Aschwin de Wolf; [HTML version]()]
Eliezer Yudkowsky, remarking on the number of women in one cryonics gathering, inadvertently demonstrates that the gender disparity is still significant:
> "This conference was just young people who took the action of signing up for cryonics, and who were willing to spend a couple of paid days in Florida meeting older cryonicists. The gathering was 34% female, around half of whom were single, and a few kids. This may sound normal enough, unless you've been to a lot of contrarian-cluster conferences, in which case you just spit coffee all over your computer screen and shouted "WHAT?" I did sometimes hear "my husband persuaded me to sign up", but no more frequently than "I persuaded my husband to sign up". Around 25% of the people present were from the computer world, 25% from science, and 15% were doing something in music or entertainment - with possible overlap, since I'm working from a show of hands. I was *expecting* there to be some nutcases in that room, people who'd signed up for cryonics for just the same reason they subscribed to homeopathy or astrology, i.e., that it sounded cool. *None* of the younger cryonicists showed any sign of it. There were a couple of older cryonicists who'd gone strange, but none of the young ones that I saw. Only three hands went up that did *not* identify as atheist/agnostic, and I think those also might have all been old cryonicists."^[["Normal Cryonics"](), [Eliezer Yudkowsky](!Wikipedia)]
Some female perspectives:
> "Well, as a woman, I do have the exact same gut reaction [to cryonics]. I'd never want to be involved with a guy who wanted this. It just seems horribly inappropriate and wrong, and no it's nothing to do at all with throwing away the money, I mean I would rather not throw away money but I could be with a guy who spent money foolishly without these strong feelings. I don't know that I can exactly explain why I find this so distasteful, but it's a very instinctive recoil. And I'm not religious and do not believe in any afterlife. It's sort of like being with a cannibal, even a respectful cannibal who would not think of harming anyone in order to eat them would not be a mate I would ever want."^[['Anne'](), commenting on _Overcoming Bias_]
> ""You have to understand," says Peggy, who at 54 is given to exasperation about her husband's more exotic ideas. "I am a hospice social worker. I work with people who are dying all the time. I see people dying All. The. Time. And what's so good about me that I'm going to live forever?"
> ...Peggy finds the quest an act of cosmic selfishness. And within a particular American subculture, the pair are practically a cliché. Among cryonicists, Peggy's reaction might be referred to as an instance of the "hostile-wife phenomenon," as discussed in a 2008 paper by Aschwin de Wolf, Chana de Wolf and Mike Federowicz. "From its inception in 1964," they write, "cryonics has been known to frequently produce intense hostility from spouses who are not cryonicists." The opposition of romantic partners, Aschwin told me last year, is something that "everyone" involved in cryonics knows about but that he and Chana, his wife, find difficult to understand. To someone who believes that low-temperature preservation offers a legitimate chance at extending life, obstructionism can seem as willfully cruel as withholding medical treatment. Even if you don't want to join your husband in storage, ask believers, what is to be lost by respecting a man's wishes with regard to the treatment of his own remains? Would-be cryonicists forced to give it all up, the de Wolfs and Federowicz write, "face certain death."
> ...Cryonet, a mailing list on "cryonics-related issues," takes as one of its issues the opposition of wives. (The ratio of men to women among living cryonicists is roughly three to one.) "She thinks the whole idea is sick, twisted and generally spooky," wrote one man newly acquainted with the hostile-wife phenomenon. "She is more intelligent than me, insatiably curious and lovingly devoted to me and our 2-year-old daughter. So why is this happening?"...A small amount of time spent trying to avoid certain death would seem to be well within the capacity of a healthy marriage to absorb. The checkered marital history of cryonics suggests instead that a violation beyond nonconformity is at stake, that something intrinsic to the loner's quest for a second life agitates against harmony in the first...But here he doesn't expect to succeed, and as with most societal attitudes that contradict his intuitions, he's got a theory as to why. "Cryonics," Robin says, "has the problem of looking like you're buying a one-way ticket to a foreign land." To spend a family fortune in the quest to defeat cancer is not taken, in the American context, to be an act of selfishness. But to plan to be rocketed into the future — a future your family either has no interest in seeing, or believes we'll never see anyway — is to begin to plot a life in which your current relationships have little meaning. Those who seek immortality are plotting an act of leaving, an act, as Robin puts it, "of betrayal and abandonment.""^[["Until Cryonics Do Us Part"](), _NYT_, Kerry Howley]
> "As the spouse of someone who is planning on undergoing cryogenic preservation, I found this article to be relevant to my interests! My first reactions when the topic of cryonics came up (early in our relationship) were shock, a bit of revulsion, and a lot of confusion. Like Peggy (I believe), I also felt a bit of disdain. The idea seemed icky, childish, outlandish, and self-aggrandizing. But I was deeply in love, and very interested in finding common ground with my then-boyfriend (now spouse). We talked, and talked, and argued, and talked some more, and then I went off and thought very hard about the whole thing...Ultimately, my struggle to come to terms with his decision has been more or less successful. Although I am not (and don't presently plan to be) enrolled in a cryonics program myself, although I still find the idea somewhat unsettling, I support his decision without question. If he dies before I do, I will do everything in my power to see that his wishes are complied with, as I expect him to see that mine are. Anything less than this, and I honestly don't think I could consider myself his partner."^[[C]()]
[Quentin's]() explanation is even more extreme:
> "What follows below is the patchwork I have stitched together of the true female objections to a mate undergoing cryonic suspension. I believe many women have a constant low-level hatred of men at a conscious or subconscious level and their narcissistic quest for entitlement and significance begrudges him any pursuit that isn't going to lead directly to producing, providing, protecting, and problem solving for her. It would evolutionarily be in her best interest to pull as many emotional and physical levers to bend as much of his energies toward her and their offspring as she can get away with and less away from himself. That would translate as a feeling of revulsion toward cryonics that is visceral but which she dares not state directly to avoid alerting her mate to her true nature.
> She doesn't want him to live for decades, centuries, or millennia more in a possibly healthier and more youthful state where he might meet and fall in love with new mates. She doesn't want her memory in his mind to fade into insignificance as the fraction of time she spent with him since she has died to be a smaller and smaller fraction of his total existence; reduced to the equivalent in his memory of an interesting conversation with a stranger on the sidewalk one summer afternoon. She doesn't want him to live for something more important than HER. So why not just insist she join him in cryonic suspension? Many of these same wives and girlfriends hate their life even when they are succeeding. Everyone is familiar with the endless complaints, tears, and heartache that make up the vast majority of the female experience stemming from frustration of her hypergamous instinct to be the princess she had always hoped to be and from resentment of his male nature, hopes, dreams, and aspirations. She thinks: "He wasn't sexually satisfying! He isn't romantic enough! He never took me anywhere! He didn't pay attention to me! Our kids aren't successes! We live in a dump! His hobbies are a waste of time and money! My mother always told me I can do better, and his mother will never stop criticizing me! I am fat, ugly, unsuccessful, old, tired, and weary of my responsibilities, idiosyncrasies, insecurities, fears, and pain. My life sucked but at least it could MEAN something to those most important to me." But if they are around for too long it shrinks in importance over time. She wants you to die forever because she hates what you are. She wants to die too, because she hates what she is. She wants us all to die because she hates what the world is and has meant to her."
In the same vein:
>> "But why not go with him then [into cryonics]?
> Show me the examples of the men who asked, or even insisted that their wives go with them, and said "If you don't go with me, I won't go". The fact that men generally don't do this, is likely a big contributor to the female reaction. Imagine your husband or boyfriend telling you, "I just scheduled a 1 year vacation in Pattaya, and since I know you hate Thai food, I didn't buy you tickets. I'll remember you fondly." That's very different from the man who says, "I've always dreamed of living in Antarctica, but I won't do it without you, so I'm prepared to spend the next 5 years convincing you that it's a great idea"."^[[JS Allen](), commenting on Katja Grace's post on hostile wives, ["Why do 'respectable' women want dead husbands?"]()]
> "Indeed, I buy the "one way ticket away from here" explanation. If I bought a one-way ticket to France, and was intent on going whether my wife wanted to come with me or not, then there would be reason for her to be miffed. If she didn't want to go, the "correct" answer is "I won't go without you". But that is not the answer the cryonicist gives to his "hostile" wife. It's like the opposite of "I would die for you" – he actually got a chance to take that test, and failed."^[[Thom Blake]()]
Robin Hanson tries to explain it in terms of evolutionary incentives:
> "Mating in mammals has a basic asymmetry – females must invest more in each child than males. This can lead to an equilibrium where males focus on impressing and having sex with as many females as possible, while females do most of the child-rearing and choose impressive males.
> ...And because they are asymmetric, their betrayal is also asymmetric. Women betray bonds more by temporarily having fertile sex with other men, while men betray bonds more by directing resources more permanently to other women. So when farmer husbands and wives watch for signs of betrayal, they watch for different things. Husbands watch wives more for signs of a temporary inclination toward short-term mating with other men, while wives watch husbands more for signs of an inclination to shift toward a long-term resource-giving bond with other women. This asymmetric watching for signs of betrayal produces asymmetric pressures on appearances. While a man can be more straight-forward and honest with himself and others about his inclinations toward short-term sex, he should be more careful with the signs he shows about his inclinations toward long term attachments with women. Similarly, while a woman can be more straight-forward and honest with herself and others about her inclinations toward long-term attachments with men, she should be more careful with the signs she shows about her inclinations toward short term sex with men.
> ...Standard crude stereotypes of gender differences roughly fit these predictions! That is, when the subject is one's immediate lust and sexual attraction to others, by reputation men are more straight-forward and transparent, while women are more complex and opaque, even to themselves. But when the subject is one's inclination toward and feelings about long-term attachments, by reputation women are more self-aware and men are more complex and opaque, even to themselves...if cryonics is framed as abandonment, women should be more sensitive to that signal."^[["Why Men Are Bad At 'Feelings'"](), [Robin Hanson](!Wikipedia)]
## Reductionism is the common thread?
The previously listed 'systems of thought', as it were, all seem to share a common trait: they are made of millions or trillions of deterministic interacting pieces. Any higher-level entity is not an ontological atom, and those higher-level illusions can be manipulated in principle nigh-arbitrarily given sufficient information.
That the higher-level entities really are nothing but the atomic units interacting is the fundamental _pons asinorum_ of these ideologies, and the one that nonbelievers have not crossed.
We can apply this to each system.
- Many doubters of cryonics doubt that a bunch of atoms vitrified in place is *really* 'the self'.
- Many users of computers anthropomorphize them and can't accept that a computer is really just a bunch of bits (this is related to the thesis that [the camel has two humps](#the-camel-has-two-humps), the test being, basically, whether a sample program will be executed as-is by the (dumb) computer)
- Many doubters of materialist philosophy of mind are not willing to say that an extremely large complex enough system can constitute a consciousness
- Many doubters of utilitarianism doubt that there really is a best choice or good computable approximations to the ideal choice, and either claim utilitarianism fails basic ethical dilemmas by [forcing the utilitarian to make the stupid choice]() or instead vaunt as the end-all be-all of ethics what can easily be formulated as simply heuristics and approximations, like [virtue ethics](!Wikipedia)^[I always wondered - suppose one cultivates a character of generosity, bravery, etc. How does *that* character decide? Virtue ethics seems like buck-passing to me.]
- Many doubters of libertarianism doubt that prices can coordinate multifarious activities, that the market really will find a level, etc. Out of the chaos of the atoms interacting is supposed to come all good things...? This seems arbitrary, unfair, and unreasonable.
- The same could be said of evolution. Like the profit motive, how can mere survival generate "from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved"^[Charles Darwin, _On the Origin of Species_, (1st ed.)]?
- Finally, atheism. A faith of its own in the power of reductionist approaches across *all* fields. What is a God, but the ultimate complex high-level irreducible ontological entity?
In all, there is incredulity at sheer numbers. An ordinary person can accept a few layers of structure, since that is what they are used to - a car is made of a few dozen systems with a few thousand discrete parts, a dinner is made of 3 or 4 dishes with no more than a dozen ingredients, etc. The ordinary mind quails at systems with *millions* of components (the number of generations evolution can act on), much less billions (the length of programs, the number of processor cycles in a second) or trillions (the number of cells in the human body, the number of bits on consumer hard drives).
If one doesn't deal first-hand with this, if one has never worked with them at any level, how does one *know* that semiconductor physics is the sublayer for circuits; circuits the sublayer for logic gates; logic gates the sublayer for memory and digital operations, which then support the processor with its fancy instructions like `add` or `mov`, which enables machine code, which we prefer to write as assembler (to be compiled and then linked into machine code), which can be targeted by programming languages - at which point we have only begun to bring in the operating system, libraries, and small programs, which let us begin to think about how to write something like a browser, and a decade later, we have Firefox, which will let Grandma go to AOL Mail?
(To make a mapping, the utilitarian definition is like defining a logic gate; the ultimate decisions in a particular situation are like an instance of Firefox, depending on trillions of intermediate steps/computations/logic gates. Non-programmers can't see how to work backwards from Firefox to individual logic gates, and their blindness is so profound that they can't even see that there *is* a mapping. Compare all the predictions that 'computers will never X'; people can't see how trillions of steps or pieces of data could result in computers doing X, so - 'argument from incredulity' - they then believe there is no such way.)
A programmer will have a hard time being knowledgeable about programming and debugging, and also not appreciative of reductionism in his bones. If you tell him that a given system is actually composed of millions of interacting dumb bits - he'll believe you. Because that's all his programs are. If you tell a layman that his mortgage rate is being set by millions of interacting dumb bits (or his mind...) - he'll probably think you're talking bullshit.
Religious belief seems to [correlate and causate]() with quick intuitive thinking (and deontological judgments [as well]()), and what is more counterintuitive than reductionism?
I don't know if this paradigm is correct, but it does explain a lot of things. For example, it correctly predicts that evolutionism will be almost universally accepted among the specified groups, even though logically, there's no reason cryonicists have to be evolutionists or libertarians, and vice-versa, no reason libertarians would have any meaningful correlation with utilitarianism.
I would be deeply shocked & fascinated if there were data showing that they were uncorrelated or even inversely correlated; I could understand libertarianism correlating inversely with atheism, at least in the peculiar circumstances of the United States, but I would expect all of the others to be positively correlated. The only other potential counterexample I can think of would be engineers and terrorism, and that is a relatively small and rare correlation.
# Domain-squatting externalities
In developing my [custom search engine]( for finding [sources](!Wikipedia "WP:RS") for Wikipedia articles, one of its chief benefits turned out to be nothing other than filtering out mirrors of Wikipedia! Since one is usually working on an *existing* article, that means there may be hundreds or thousands of copies of the article floating around the Internet, all of which match very well the search term one is using, but which contribute nothing. This is one of the hidden costs of having a FLOSS license: the additional copying imposes an overhead^[This is also true of new content in general; they are not a pure win, but impose additional costs on catalogers and collectors and libraries and whatnot. This is true even when they do not take a common name or word as their title, as lamentably many new works do. New works in general are hard to justify; see [Culture is not about Esthetics]().]. This cost is not borne by the copier, who may be making quite a bit of money on their Wikipedia mirror, even now that Google has penalized such mirrors. In other words, cluttering up searches is a *[negative externality](!Wikipedia)*. (One could say the same thing of the many mirrors or variant versions of social news sites like Hacker News. Who are they imposing costs upon unilaterally?)
Domain-squatters are another nuisance; so often I have gone to an old URL and found nothing but a parking domain, with maybe the URL plugged into a Google search underneath a sea of random ads. But, the libertarian objects, clearly these domain-squatters are providing a service since otherwise there would be no advertising revenue and the domain-squatters could not afford to annually renew the domain, much less turn a profit.
But here is another clear case of externalities.
On parking domains, only 1 person out of thousands is going to click on an ad (at best), find something useful to them, and make the ads a paying proposition. But those other thousands are going to be slowed down - the page has to be loaded, they have to look at it, analyze it, and realize that it's not what they wanted and try something else like a differently spelled domain or a regular search. A simple domain-not-found error would have been faster by a second at least, and cost less mental effort. The wasted time, the cost to those thousands, is *not* borne by the domain-squatter, the ad-clicker, or the advertiser. They are externalizing the costs of their existence.
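The externality argument above is easy to make concrete with a back-of-the-envelope calculation. All the numbers below are assumptions for illustration (traffic, click rate, ad payout, time valuation), not measured figures:

```python
# Hypothetical parked-domain economics: the squatter's ad revenue vs. the
# aggregate time cost dumped on visitors. Every number here is an assumption.
visitors = 10_000            # monthly hits on the parked domain
click_rate = 1 / 10_000      # ~1 visitor in thousands clicks an ad
revenue_per_click = 0.50     # dollars to the squatter per ad click
seconds_wasted = 1           # extra time per visitor vs. a domain-not-found error
value_of_time = 20 / 3600    # dollars per second, at $20/hour

squatter_gain = visitors * click_rate * revenue_per_click
visitor_cost = visitors * seconds_wasted * value_of_time

print(f"squatter gains ${squatter_gain:.2f}; visitors lose ${visitor_cost:.2f}")
# Under these assumptions the squatter nets 50 cents while visitors
# collectively lose over $50 of time - two orders of magnitude more.
```

Tweak the assumptions as you like; the asymmetry survives because the squatter captures only the rare click while *every* visitor pays the time tax.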
# Worldbuilding: The Lights in the Sky are Sacs
On page 217 of evolutionary biologist [Geoffrey Miller](!Wikipedia)'s 2011 book _Spent_, in the middle of some [fairly interesting]( material on [Openness to Experience](!Wikipedia), one reads:
> '...Our six verbal creativity tasks included questions like: "Imagine that all clouds had really long strings hanging from them - strings hundreds of feet long. What would be the implications of that fact for nature and society?"...
To make the obvious point: strings hundreds of feet long strong enough to support themselves and any real weight are better termed 'ropes'. And ropes are heavy. There's no obvious way to change physics to permit just ropes to not be heavy, in the same way you can't [remove fire & keep cellular respiration]( (If we insist on the 'string' language and the implication that the strings are weak and thin, we can take some sort of [arachnid tack](!Wikipedia "Ballooning (spider)"), which would be either creepy or awesome.) So let's engage in a little [worldbuilding](!Wikipedia) exercise and imagine alternatives.
A cloud with a rope dangling is an awful lot like a balloon or lighter-than-air vehicles in general. How do they work? Usually by using hot air, or with an intrinsically lighter gas like helium or hydrogen. Both need good seals, though, which is something a biological organism can do. But where is an organism going to get enough heat to be a living hot air balloon? So maybe it uses helium instead, but then, where does it get helium? We get helium by applying hundreds of billions of dollars in R&D to digging deep narrow holes in the ground, which is not a viable strategy for a global population of clouds. So hydrogen? That'd work actually; hydrogen is very easy to obtain, just crack water! Even better, the organisms creating this hydrogen to obtain flight could reuse the hydrogen for energy - just burn it with oxygen! The Laws of Thermodynamics say that burning wouldn't *generate* any new energy, so this isn't what they feed on. But the answer presents itself - if you're in the *sky* or better yet, above the cloud layer, there's something very valuable up there - sunlight. Trees grow so big and engage in chemical warfare just to get access to the sun, but our hydrogen sacs soar over the groundlings. There might be a similar competition, but the sacs have their own problems: as altitude increases, ambient pressure decreases (which is good) but temperatures plunge (bad) and other forms of radiation increase (ultraviolet?). As well, if our sacs are photosynthetic, they need inputs: water & carbon dioxide for photosynthesis, and the usual organic bulk materials & rarer elements for themselves. Which actually explains where our ropes are coming from: they are the sacs' "roots".
How could such a lifeform evolve? I have no idea. There are animals which glide (eg. flying squirrel), others which are dispersed by wind (spiders), and so on, but none that actually crack water into hydrogen & oxygen or exploit hydrogen for gliding or buoyancy. And there are serious issues with the hydrogen sacs: lightning would seem to be a problem... Still, we could reuse our 'competition for solar radiation' idea; maybe a tree, striving to be taller but running into serious engineering issues to do with [power laws](!Wikipedia), tweaked its photosynthesis to divert some of the split hydrogen to storage vacuoles which would make it lighter and able to grow a little taller. Rinse and repeat for millions of years to obtain something which is free-floating and has shed much of its old tree-form for a new spherical shape.
Imagine that a plant or animal did so evolve, and evolved before humanity did. Millions of floating creatures around the world, each one with a lifting capacity of a few pounds; or since they could probably grow very large without the same engineering limitations as trees, perhaps hundreds to thousands of pounds. When humanity gets a clue, they will seize on the sacs without hesitation! Horses changed history, and the sacs are *better* than horses. The sacs are mobile over land and sea, hang indefinitely, allow aerial assaults, and would be common. It's hard to imagine a Great Wall of China effective against a sac-mounted nomad force! There are [barrage balloons](!Wikipedia), but those are impossibly expensive on any large scale.
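Those lifting capacities are easy to sanity-check with standard buoyancy arithmetic. A quick sketch, using sea-level gas densities and ignoring the mass of the sac's own tissue (so these are best-case sizes):

```python
import math

# Buoyancy check for the hydrogen-sac idea. Sea-level densities; the
# sac's own body mass is ignored, so real sacs would need to be larger.
rho_air = 1.225    # kg/m^3, air at sea level
rho_h2 = 0.0899    # kg/m^3, hydrogen at sea level
net_lift = rho_air - rho_h2   # ~1.14 kg of payload per cubic meter

def sac_radius(payload_kg):
    """Radius (m) of a spherical hydrogen sac lifting the given payload."""
    volume = payload_kg / net_lift
    return (3 * volume / (4 * math.pi)) ** (1 / 3)

print(f"one 80 kg rider: sphere of radius {sac_radius(80):.1f} m")
print(f"1000 kg of cargo: sphere of radius {sac_radius(1000):.1f} m")
```

A sac a few meters across suffices for a single rider, and even a thousand-pound payload needs only a sphere about twelve meters in diameter - well within the plausible size range for a large organism, which is why the sacs-as-horses scenario isn't physically absurd.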
More troubling, early states had major difficulties maintaining control. When you read about ancient Egypt or China or Rome, again and again one encounters barbarians or nomads invading or conquering entirely the state, and how they were, man for man, superior to the soldiers of the government. Relatively modest technical innovations meant that when the Mongols got their act together and refined their strategy, they conquered most of the world. Formal empires and states are not inevitable outcomes, as much as they dominate our thinking in modern times - they didn't exist for most of human history, didn't control most territory or people for much of the period they could be said to exist, and it's unclear how much longer they will survive even in this age of their triumph & universalization. History is shot through with contingency and luck. That China did not have an Industrial Revolution and oddball England did is a matter to give us pause.
What happens when we give nomadic humans, in the un-organized part of history, a creature unparalleled in mobility? At the very least, I think we can expect any static agriculture-based empire (the Indus, Yang-tze, Nile) to be strangled in its cradle. Without states, history would be completely different with few recognizable entities except perhaps ethnicities. The English state seemed closely involved in the Industrial Revolution (funding the Age of Exploration, patents, etc.) and also the concurrent scientific revolution (it is the *Royal* society, and even Newton worked much of his life for the Crown). No state, no Revolution? As cool as it would be to ride a sac around the world, I wouldn't trade them for science and technology.
But optimistically, could we expect something else to arise - so that the sac variant of human history not be one damn thing after another, happy savages until a pandemic or asteroid finally blots out the human world? I think so. If a sac can lift one person, then can't we tie together sacs and lift multiple people? Recycling ropes from dead sacs, we could bind together hundreds of sacs and suspend buildings from them. (I say suspend because to put them 'on top' of the sac-structure would cut off the light that the sacs need and might be unstable as well.) A [traveling village](fiction/Missing Cities#i) would naturally be a trading village - living in the air is dangerous, so I suspect there will always be villages planted firmly on the ground (even if they keep a herd of sacs of their own). This increased mobility and trade might spark a global economy of its own.
I failed to mention earlier that the sacs, besides being a potent tool of mobility exceeding horses, could also constitute a weapon of their own: a highly refined and handy package of hydrogen. Hydrogen burns very well. If nothing else, it makes arson and torching a target very handy. Could sacs be *weaponized*? Could a nomad take a sac, poke a spigot into it, light a match and turn the sac into a rocket with a fiery payload on impact? If they can be, then things look very dim indeed for states. But on the flip side, hydrogen burns hot and [oxyhydrogen](!Wikipedia) was one of the first mixtures for welding. Our nomads will be able to easily melt and weld tough metals like iron. Handy.
I leave the thought exercise at this point, having overseen the labefaction of the existing world order and pointed at a potential iron-using airborne anarchy. Which of the two is a better world, I leave to the unknowable unfolding of the future.
# On meta-ethical optimization
When I or another utilitarian point out (eg. in [Charity is not about helping]()) that it costs only a few hundred/thousand dollars to reliably save a human life, and then note that one choosing to spend money on something else is choosing to not save that life, one of the common reactions is that this is true of every expenditure and that this implies we ought to donate most or all of our wealth.
This is quite true. If you have $10,000 and you donate it all, there will be say 10 more humans alive than in the counterfactual scenario where you spend $10,000 on a daily cup of coffee at Starbucks. This is a simple fact about how the world works. To deny it requires quibbling about probabilities and expected value (despite one accepting them in every other part of one's life) or engaging in desperate postulations about infinitely precise counter-balancing mechanisms ('maybe if I donate, that means someone somewhere will donate that much less! So it conveniently doesn't matter whether or not I do, I don't make a difference!'). Fundamentally, if to give a little helps, then for non-billionaires, giving a lot helps more, and giving even more helps even more. What a dull point to make.
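The counterfactual arithmetic here is as simple as it sounds. A minimal sketch, assuming the round figure of ~$1,000 per life saved that the paragraph uses (actual cost-effectiveness estimates vary by charity and year):

```python
# Counterfactual comparison from the paragraph above. The $1,000-per-life
# figure is the essay's illustrative round number, not a measured estimate.
cost_per_life = 1_000
budget = 10_000

lives_if_donated = budget // cost_per_life   # donate it all
lives_if_spent = 0                           # daily Starbucks instead

print(lives_if_donated - lives_if_spent)  # → 10
```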
But the *reaction* to this dull point is interesting. Apparently for many people, this shows that utilitarianism is not correct! I saw this particularly in the reception to [Peter Singer](!Wikipedia)'s book _[The Life You Can Save](!Wikipedia)_ - that Singer to some extent lives up to his proposed standards seems to make it even worse.
It seems that people intuitively think that the true ethical theory will not be *too* demanding. This is rather odd.
A few criteria are common in meta-ethics, that the One True Ethics should satisfy. For example, universalizability: the One True Ethics should apply to Pluto just as much as it does Earth, or work a few galaxies over just like we would apply it in the Milky Way. Similarly for time: it'd be an odd and unsatisfying ethics which said casual murder was forbidden before 2050 AD but OK afterwards. (Like physics, the rules stay the same, but different input, different output.) It ought to cover all actions and inactions, if only to classify them as morally neutral. (It would be odd if one were pondering the morality of something and asked, only to be told in a very Buddhist way, that the action was: not moral, not immoral, not neither moral nor immoral, not both moral and immoral...) And finally, the ethical theory has to do *work*: it has to make relatively specific suggestions, and ideally those suggestions would be specific enough that it permits little and forbids much. (For example, could one base a satisfactory ethical theory on the Ten Commandments and nothing else? If all one had to do to be moral was to not violate a commandment? That would be not that hard, but I suspect, as we watch our neighbors fornicate with their goats and sheep, we will suspect that it is immoral even though nowhere in the Ten Commandments did God forbid bestiality - or many other things, for that matter, like child molestation.) The theory may not specify a *unique* action, but that's OK. (You see two strangers drowning and can save only one; your ethical theory says you can randomly pick, because saving either stranger is equally good. That seems fine to me, even though your ethics did not give you just one moral option, but two.)
Given that every person faces, at every moment, a mindboggling number of possible actions and inactions, even an ethics which permitted thousands of moral actions in a given circumstance is ruling out countless more. And since there are a lot of moments in a lifetime, that's a lot of actions too. Considering this, it would not be a surprise if people frequently chose immoral or amoral actions: no one bats a thousand and even Homer nods, as the sayings go. So there is a lot of room for improvement. If this were true of ethics, that would only mean ethics is like every other field of human endeavour in having an ideal that is beyond attainment - no doctor never makes a mistake, no chess player never overlooks an easy checkmate, no artist never messes up a drawing, and so on. There is no end to moral improvement:
> "Disquiet in philosophy may be said to arise from looking at philosophy wrongly, seeing it wrong, namely as if it were divided into (infinite) longitudinal strips instead of into (finite) cross strips. This inversion in our conception produces the *greatest* difficulty. So we try, as it were, to grasp the unlimited strips and complain that it cannot be done piecemeal. To be sure it cannot, if by a piece one means an infinite longitudinal strip. But it may well be done, if one means a cross-strip. --But in that case we never get to the end of our work! --Of course not, for it has no end."^[Ludwig Wittgenstein's _[Zettel](!Wikipedia)_, 447]
Yet, people seem to expect moral perfection to be easy! When utilitarianism tells them that they are far from being morally *perfect* (like they are not perfect writers or car drivers), they say that utilitarianism is stupid and sets unobtainable goals. Well, yes. Wouldn't it be awfully odd if goodness were as attainable as, say, playing a perfect game of tic-tac-toe? If all one had to do to be a good person on par with paragons like Jonas Salk or Norman Borlaug was to simply not do anything awful and be nice to people around you? Why would one expect morality to be easy? Most human endeavors are hard, and ethics covers all our endeavors. To object to utilitarianism because it points to a very high ideal is reminiscent, to me, of rejecting heliocentrism because it makes the universe much bigger and the earth much smaller.
The small-minded want an equally small-minded ethics.
# The Narrowing Circle
One sometimes sees arguments for vegetarianism which play on the idea of moral progress following a predictable trend of valuing ever more creatures, which leads to vegetarianism (not eating animals) among other ethical positions; if one wishes not to incur the opprobrium of posterity, one ought to 'skate where the puck will be' and beat the mainstream in becoming vegetarian. This seems plausible: Thomas Jefferson comes to mind as someone who surely saw that slavery was on the way out - for which we congratulate him - but also lacked the courage of his convictions, keeping and wenching his slaves - for which we condemn him. You can do better! All you have to do is abandon eating meat and animal products... The standard for this would be [Peter Singer](!Wikipedia)'s _The Expanding Circle: Ethics, Evolution, and Moral Progress_, which opens with the epigraph:
> "The moral unity to be expected in different ages is not a unity of standard, or of acts, but a unity of tendency...At one time the benevolent affections embrace merely the family, soon the circle expanding includes first a class, then a nation, then a coalition of nations, then all humanity, and finally, its influence is felt in the dealings of man with the animal world."^[W.E.H. Lecky, _The History of European Morals_]
In a way, it's kind of odd that one can predict future moral progress, as opposed to something like future population growth. Presumably we are doing the best we can with our morals and at any moment, might change our position on an issue (or not change at all). If one knew in advance what progress *would* be made, why has it not already been made? (A little like 'efficient markets'.) If one *knew* that one was going to get one's guess about a coin coming up heads wrong, why doesn't one immediately update to guess it will be tails?[^vonFraassen] But then, perhaps one is especially intelligent or especially attentive to empirical trends, or perhaps one has the benefit of being young & uncommitted while the rest of the populace is ossified.
[^vonFraassen]: It's accepted that theories should be consistent. It'd also be good if one's beliefs were consistent over time as well, otherwise one gets things like [Moore's question]( (or [a quote ascribed to Leonardo Da Vinci](, appropriately on vegetarianism), "I went to the pictures last Tuesday, but I don't believe that I did", a sort of inconsistency which seems to render one vulnerable to [a Dutch book exploit]( (How exactly the inconsistency is to be resolved is a [bit unclear]( Reflection principles have been [much discussed](
Of course, progress could be an illusion. Random data can look patterned, and especially patterned if one edits the data just a little bit. Biological evolution looks like an impressive multi-billion-year cascade of progress towards ever more complexity, but how can we prove Stephen Jay Gould wrong if he tells us that is due solely to evolution being a drunkard's walk with an intrinsic lower bound (no complexity = no life)? If we were to find that the appearance of progress were due to *omissions* in the presented data, that would certainly shake our belief that there's progress as opposed to some sort of random walk or periodic cycle or more complicated relationship with elapsed time. (For example, torture can be cyclical - northern European countries going from minimal torture under their indigenous governments to extensive torture under Roman dominion back to juries and financial punishments after Rome to torture *again* with the revival of Roman law by rising modern centralized states and then torture's abandonment when those states modernized and liberalized even further. China has gone through even more cycles of judicial torture, with its dynastic cycle.)
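Gould's drunkard's-walk point is easy to demonstrate in simulation: give a directionless random walk a reflecting floor (no complexity = no life) and the *record* complexity still climbs over time, looking for all the world like progress. A toy sketch (the walk, floor, and step counts are all arbitrary choices for illustration):

```python
import random

# A drunkard's walk with a reflecting lower bound: each step is
# directionless, yet the running maximum drifts upward, mimicking
# "progress" toward complexity without any driving force.
random.seed(0)  # fixed seed so the run is reproducible

def walk(steps):
    x, trace = 1, []
    for _ in range(steps):
        x = max(1, x + random.choice([-1, 1]))  # reflect at the floor
        trace.append(x)
    return trace

trace = walk(100_000)
early_max = max(trace[:1_000])
late_max = max(trace)
print(f"record after 1k steps: {early_max}; after 100k steps: {late_max}")
```

The record value keeps rising (roughly as the square root of elapsed steps) purely because the floor blocks drift in one direction, which is exactly why an upward trend in *maximum* complexity, taken alone, cannot distinguish progress from a random walk.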
And there may be points omitted from the picture drawn by the expanding circle.
An acquaintance, trying to come up with ways in which the moral circle might have *narrowed*, failed to. His failure was understandable because he was an atheist. When one doesn't believe religion deals with real things at all, it's hard to take it seriously. But nevertheless, when one compares modern with ancient society, the religious differences are striking: almost every single supernatural entity (place, personage, or force) has been excluded from the circle of moral concern, where they used to be huge parts of the circle and one could almost say the entire circle. One really has to read source texts to understand how vivid and important the gods were. In Herodotus, one reads of the gods on almost every page, it seems; the oracles were not consulted by the superstitious but were key parts of statecraft[^silence] (and so was bribing them either directly by sacrifices or indirectly by bribing the clergy); a messenger could meet the god Pan along the road, who berates him for his city's neglect of his sacrifices, and relate their conversation in detail to the legislature; the gods would guide their favorites in daily matters with useful little omens, and would routinely protect their temples and sacred places by such efficacious means as insanity and destroying entire families. (If one repeated the Roman maxim today that 'offenses to the gods are the concern of the gods', it would come out as ironic and mocking - physician, cure thyself! - but I suspect the Romans meant it quite literally.) Indeed, the gods were immanent and not transcendent. Their expressed wishes were respected and honored, as were their avatars, possessions (Herodotus's pages are as crowded with artwork given to Delphi as the temple precincts must have been), slaves, and food.
This blind spot is based on different facts, to some extent; as C.S. Lewis remarked, if many people are against burning witches, it's because they don't believe witches exist but if the witches existed and acted as described, they would howl for the witches' blood[^lewis]. (Is the US more moral for no longer executing for treason[^treason]?) But even more, it is based on weaker, less virulent^[The analogy is to contagious diseases, which cannot afford to be too deadly if transmission becomes more difficult.] religions, where the believers tolerate amazing things. Today? Iceland is mocked when construction is held up to expel elves - but the construction goes forward. Japan keeps its temples on sacred places - when they earn their keep and do not block housing. Lip service is paid, at most. In the West, ordinary use of the supernatural (eg. [trial by ordeal](!Wikipedia)) has been receding since well before any scientific revolution, since the 1300s[^ordeal]. The wishes of supernatural entities are not respected, even if - as many aboriginals might argue of their sacred places and spirits - those entities will be killed by whatever profitable human activity. And to think, atheists are a small minority in most every nation! If this is how believers treat their gods, the gods are fallen gods indeed. The circle may have widened for the human and less-than-human, but in what way has the circle not narrowed for the greater-than-human? (Singer focuses on animals; religion gives us a perspective on them - what have they lost by none of them being connected to divinities and by becoming subject to modern factory farming and agriculture? If you could ask snakes, one of the most common sacred animals, what they made of the world over the last millennia, would they regard themselves as better or worse off for becoming merely animals in the expanded circle? If India abandoned Hinduism, what would happen to [the cows](!Wikipedia "Cattle in religion")?) 
Let us not forget that the Nazis - who usefully replace devils and demons in the Western consciousness - were great [lovers of animals](!Wikipedia "Animal welfare in Nazi Germany"). I note with interest that the hormone [oxytocin](!Wikipedia) is associated with empathy, and *also* simultaneously increasing in-group favoritism and out-group hostility or xenophobia[^xenophobia].
[^silence]: The Greeks did not [believe in belief]( and the 'retreat to commitment' would have been sheer heresy; the oracles were taken very seriously by Greco-Roman culture and were not 'compartmentalized' aspects of their religion to be humored and ignored. Even the most skeptical educated elite often believed visions reported by people if they were awake (see ["Kooks and Quacks of the Roman Empire: A Look into the World of the Gospels"](; as has been observed by anthropologists, modern Western societies are extraordinarily atheistic *in practice*. Discussing the historical context of early Christian missionaries in his 2009 _Not the Impossible Faith_ (pg 283-284), Richard Carrier writes:
> 'In the other case, cultural presuppositions subconsciously guide the prophet’s mind to experience exactly what he needs to in order to achieve his goals. Such “experiences are found among 90 percent of the world’s population today, where they are considered normal and natural, even if not available to all individuals,” whereas “modern Euro-American cultures offer strong cultural resistance” to such “experiences, considering them pathological or infantile while considering their mode of consciousness as normal and ordinary.” So moderns like Holding stubbornly reject such a possibility only by ignoring the difference between modern and ancient cultures—for contrary to modern hostility to the idea, “to meet and converse with a god or some other celestial being is a phenomenon which was simply not very surprising or unheard of in the Greco-Roman period,” and the biology and sociology of altered states of consciousness is sufficient to explain this. [Malina & Pilch 2000, _Social Science Commentary on the Book of Revelation_ pg 5, 43]
> ...As it happens, schizotypal personalities (who experience a relatively common form of non-debilitating schizophrenia) would be the most prone to hallucinations guided by such a subconscious mechanism, and therefore the most likely to gravitate into the role of “prophet” in their society (as Malina himself argues). Paul, for example, so often refers to hearing voices in his letters (often quoting God’s voice verbatim) that it’s quite possible he was just such a person, and so might many of the original Christian leaders have been (like Peter). Indeed, the “Angel of Satan” that Paul calls a “thorn in his flesh” (2 Corinthians 12:6-10) could have been an evil voice he often heard and had to suppress (though Holding is right to point out that other interpretations are possible). But there are many opportunities even for normal people to enter the same kind of hallucinatory state, especially in religious and vision-oriented cultures: from fasting, fatigue, sleep deprivation, and other ascetic behaviors (such as extended periods of mantric prayer), to ordinary dreaming and hypnagogic or hypnopompic events (a common hallucinatory state experienced by normal people between waking and sleep). [On all of this see references in note 14 in Chapter 8 (and note 25 above).]
The gradual failure of the oracles was a spiritual crisis, memorialized by Plutarch in his dialogue [_De Defectu Oraculorum_](*.html). (An overview and connection to modern Christian concerns is Benno Zuiddam's ["Plutarch and 'god-eclipse' in Christian theology: when the gods ceased to speak"]( The dialogue is interesting on many levels (I am struck by the multiple refutations of the suggestion that an eternal flame burning less proves the year is shrinking yet all uncritically believe in the gods & oracles, or see [Elijah's theological experiment](; the speakers do not consult the remaining oracle but attempt to explain the decline of the oracles as a divine response to the decline of Greece itself (fewer people need fewer oracles), divine will (or whim?), corruption among humans, or deaths among the lesser supernatural entities (_daemons_) who might handle the oracles for the major gods like Apollo or Zeus:
> "Let this statement be ventured by us, following the lead of many others before us, that co-incidentally with the total defection of the guardian spirits assigned to the oracles and prophetic shrines, occurs the defection of the oracles themselves; and when the spirits flee or go to another place, the oracles themselves lose their power....but when the spirits return many years later, the oracles, like musical instruments, become articulate, since those who can put them to use are present and in charge of them."
Hopeful, but Plutarch concludes with a more depressing message:
> "The power of the spirit does not affect all persons nor the same persons always in the same way, but it only supplies an enkindling and an inception, as has been said, for them that are in a proper state to be affected and to undergo the change. The power comes from the gods and demigods, but, for all that, it is not unfailing nor imperishable nor ageless, lasting into that infinite time by which all things between earth and moon become wearied out, according to our reasoning. And there are some who assert that the things above the moon also do not abide, but give out as they confront the everlasting and infinite, and undergo continual transmutations and rebirths."
This idea that the gods might die, and the general silence and reduction in miracles was fortunate for the upstart mystery cult Christianity as the silence could be and was interpreted as a victory of the Christian god over the Olympians: Christian [Eusebius of Caesarea](!Wikipedia) writes in his [_Praeparatio Evangelica_](!Wikipedia "Praeparatio Evangelica") (313 AD) of Plutarch's dialogue:
> "Hear therefore how Greeks themselves confess that their oracles have failed, and never so failed from the beginning until after the times when the doctrine of salvation in the Gospel caused the knowledge of the one God, the Sovereign and Creator of the universe, to dawn like light upon all mankind. We shall show then almost immediately that very soon after His manifestation there came stories of the deaths of daemons, and that the wonderful oracles so celebrated of old have ceased."
(The oracles would occasionally be restored and supported by various emperors but the efforts never took and they were finally outlawed as pagan remnants.) Eusebius goes further, saying the [death of Pan](!Wikipedia "Pan (god)#The .22Death.22 of Pan") (related in the dialogue by Cleombrotus) was due directly to Jesus:
> " is important to observe the time at which he says that the death of the daemon [Pan] took place. For it was the time of Tiberius, in which our Savior, making His sojourn among men, is recorded to have been ridding human life from daemons of every kind, so that there were some of them now kneeling before Him and beseeching Him not to deliver them over to the Tartarus that awaited them. You have therefore the date of the overthrow of the daemons, of which there was no record at any other time; just as you had the abolition of human sacrifice among the Gentiles as not having occurred until after the preaching of the doctrine of the Gospel had reached all mankind. Let then these refutations from recent history suffice"
[^lewis]: C.S. Lewis, _[Mere Christianity](!Wikipedia)_ (1952), Bk1, ch2:
> "I have met people who exaggerate the differences [between the morality of different cultures], because they have not distinguished between differences of morality and differences of belief about facts. For example, one man said to me, "Three hundred years ago people in England were putting witches to death. Was that what you call the Rule of Human Nature or Right Conduct?" But surely the reason we do not execute witches is that we do not believe there are such things. If we did-if we really thought that there were people going about who had sold themselves to the devil and received supernatural powers from him in return and were using these powers to kill their neighbors or drive them mad or bring bad weather, surely we would all agree that if anyone deserved the death penalty, then these filthy quislings did.
> There is no difference of moral principle here: the difference is simply about matter of fact. It may be a great advance in knowledge not to believe in witches: there is no moral advance in not executing them when you do not think they are there. You would not call a man humane for ceasing to set mousetraps if he did so because he believed there were no mice in the house."
One wonders how much of the 'expanding circle' is due solely to additional facts.
[^treason]: The USA has not executed anyone for treason in decades and barely ever [convicts anyone](!Wikipedia "List of people convicted of treason#United States"), despite people committing plenty of treasonous acts like spying for Israel or Russia. Does this reflect an expanding circle that one should not kill people? Or does it instead reflect the understanding that with the largest and most sophisticated military in the world, a huge population, 2 very friendly bordering states, oceans on either side and a navy to match, there is not the slightest possibility of any other country invading the USA, much less occupying it? We worry about overseas allies like South Korea - but how many other countries are so incredibly stable and secure that an *ally* is their chief headache? Terrorism is the chief security concern - and utterly ridiculous given the [actual facts](Terrorism is not Effective). The one real military threat to the USA is nuclear war with Russia, which relied on a balance of terror no traitor could single-handedly affect (the key information, the location of nuclear weapons, still wouldn't enable a pain-free first strike); [Julius and Ethel Rosenberg](!Wikipedia) were, incidentally, executed for nuclear espionage. [Benedict Arnold](!Wikipedia) could hand over to the British the key fort of West Point, enabling Britain to invade & occupy the commercial heart of the colonies, New York State, but what could a modern Arnold do? (Britain no longer rules the waves.)
[^ordeal]: Kugel 2011, pg 165-166:
> "The omens continued to exist long after Europe was Christianized; indeed, Christianity was often the omens' close friend, a frequent feature in tales of the saints. But then, slowly at first, their sphere of influence began to shrink. The whole realm of the supernatural underwent a marked contraction in Western Europe - not, as one might suppose, with the scientific revolution, but well before it, during the period of, roughly, 1000 to 1500 of the common era.^6^ The supernatural of course continued to exist, but, as I mentioned, the very act of distinguishing the natural from the supernatural was a distinction that bespoke mankind's growing power over occult forces.
> One indication of this change is the phenomenon of 'trial by ordeal'. In many societies, supernatural means were used to determine a person's guilt or innocence, or the appropriateness or inappropriateness of a given course of action: lots were cast, entrails were scrutinized, arrows were shot, and so forth, and the results determined what was to be done. This was not, it should be stressed, like our flipping a coin nowadays, where the utterly random nature of the outcome is generally recognized by the participants. Instead, the results here were taken to be an expression of the divine will...Christian trials by ordeal continued long after this time [first century CE], in fact, well into the Middle Ages. And they were no joke: indeed, they were known by the somewhat more ominous name of 'the Judgment of God' (_iudicium Dei_)...The interesting thing is that such trials virtually disappeared from Western Europe by the year 1300, and it seems this was part of a wider trend that limited (but certainly did not eliminate entirely) the role of the supernatural in human affairs. It may not be a coincidence that this was also the time when the writings of Plato and Aristotle, as well as the other Greek scientific and mathematical treatises, were making their way into Latin, often via earlier translations into Arabic. (Greek had been largely unknown in Western Europe.) A whole new attitude to the formerly supernatural world was emerging, what the sociologist [Max Weber](!Wikipedia) called "breaking the magic spell" of the world.^8^ The uncanny was receding.
> - 6: This is the subject of a recent study from which some of the following examples are taken: Robert Bartlett, _The Natural and the Supernatural in the Middle Ages_ (Cambridge: Cambridge University Press, 2008). See also Peter Brown, "Society and the Supernatural: A Medieval Change", _[Daedalus](!Wikipedia "Daedalus (journal)")_ 104 (1975), 133-151
> - 8: _Entzauberung der Welt_ [_[Disenchantment](!Wikipedia) of the World_]: see Bartlett, 32-33. Even then, however, the movement was not unidirectional. While Aristotle's treatises on logic were uncontroversial, his writings on physics, biology, and other _libri naturales_ were regarded with some suspicion and even, briefly, banned."
[^xenophobia]: "Oxytocin promotes human ethnocentrism", De Dreu et al 2010 (criticism; counter-criticism):
> "Grounded in the idea that ethnocentrism also facilitates within-group trust, cooperation, and coordination, we conjecture that ethnocentrism may be modulated by brain oxytocin, a peptide shown to promote cooperation among in-group members. In double-blind, placebo-controlled designs, males self-administered oxytocin or placebo and privately performed computer-guided tasks to gauge different manifestations of ethnocentric in-group favoritism as well as out-group derogation. Experiments 1 and 2 used the Implicit Association Test to assess in-group favoritism and out-group derogation. Experiment 3 used the infrahumanization task to assess the extent to which humans ascribe secondary, uniquely human emotions to their in-group and to an out-group. Experiments 4 and 5 confronted participants with the option to save the life of a larger collective by sacrificing one individual, nominated as in-group or as out-group. Results show that oxytocin creates intergroup bias because oxytocin motivates in-group favoritism and, to a lesser extent, out-group derogation. These findings call into question the view of oxytocin as an indiscriminate "love drug" or "cuddle chemical" and suggest that oxytocin has a role in the emergence of intergroup conflict and violence. "
Continuing the religious vein, many modern practices reflect a narrowing circle from some points of view: abortion and contraception come to mind. Abortion could be a good example for cyclical or random-walk theses, as in many areas the moral status of abortion or [infanticide](!Wikipedia) has varied widely over recorded history, from normal to religiously mandated to banned to permitted again. Consider Greece: Sparta routinely tossed infants [off a cliff](!Wikipedia "Taygetus#History"), and exposure of the deformed existed in other cities - Thebes exposed Oedipus and Athens discarded valueless females[^Athens] - while examples like Agamemnon sacrificing [Iphigenia](!Wikipedia) prove only that such sacrifice was regarded ambivalently in some periods, not that it never existed. Into recorded history, sacrifice disappears[^questionable-sacrifice] and infanticide becomes rarer while abortion remains permitted within limits; early Rome's [Twelve Tables](!Wikipedia) mandated infanticide of the deformed[^Twelve-Tables], but imperial Rome seems to have eventually banned abortion and infanticide (to little effect[^Tertullian]), while early Christianity seems to [have permitted abortion](!Wikipedia "History of early Christian thought on abortion") and banned infanticide, with many dissenters and different positions, followed by a [gradual hardening to the present day](!Wikipedia "Christianity and abortion") where majority sects like the Catholic Church and many Protestants flatly oppose it as murder. And Greece came under Turkish dominion, so it was then governed by the entirely different set of changing Islamic beliefs on those matters (consistently opposed to infanticide and all human sacrifice), which [may permit some abortions](!Wikipedia "Islam and abortion"). Is there any consistent trend here?
If one accepts the basic premise that a fetus is human, then the annual rate (as pro-life activists never tire of pointing out) of millions of abortions worldwide would negate centuries of 'moral progress'. If one does not accept the premise, then per C.S. Lewis, we have a change in facts as to what is 'human', but nothing one could call an expanding circle. Continuing the judicial vein, can we really say that our use of incarceration is so superior to our ancestors' use of metered-out torture? I am chilled when I read and agree with Adam Gopnik:
> "Every day, at least fifty thousand men—a full house at Yankee Stadium—wake in solitary confinement, often in “[supermax](!Wikipedia)” prisons or prison wings, in which men are locked in small cells, where they see no one, cannot freely read and write, and are allowed out just once a day for an hour’s solo “exercise.” (Lock yourself in your bathroom and then imagine you have to stay there for the next ten years, and you will have some sense of the experience.) Prison rape is so endemic—more than seventy thousand prisoners are raped each year — that it is routinely held out as a threat, part of the punishment to be expected. The subject is standard fodder for comedy, and an uncooperative suspect being threatened with rape in prison is now represented, every night on television, as an ordinary and rather lovable bit of policing. The normalization of prison rape — like eighteenth-century japery about watching men struggle as they die on the gallows — will surely strike our descendants as chillingly sadistic, incomprehensible on the part of people who thought themselves civilized."
At least spectators could count how many lashes were administered; who counts the anal rapes and gives time off for extra?
[^questionable-sacrifice]: I have not read much about Greek sacrificing; Richard Hamilton's review of Hughes's _Human Sacrifice in Ancient Greece_ relays Hughes's skepticism and alternate explanations of "twenty-five archaeological sites are discussed in detail and over a hundred literary testimonia" describing human sacrifices.
[^Athens]: As with much about the life of Greek city-states, the existing evidence only whets our appetite and does not give a full picture. Mark Golden in "Demography and the exposure of girls at Athens" says the female infanticide rate may have ranged as high as 20%, while Donald Engels in "The problem of female infanticide in the Greco-Roman world" deprecates this as demographically impossible and says there is a single-digit percentage upper bound. The impression I get from Mindy Nichol's senior thesis, "Did Ancient Romans Love Their Children? Infanticide in Ancient Rome", is that the picture is complicated but there was probably significant infanticide, although surely not as high as the $\frac{2}{3}$-$\frac{3}{4}$ rate (both genders) ascribed to pre-contact Polynesians by observers in the 1800s (Clark, ch5 _Farewell to Alms_).
[^Tertullian]: The Christian [Tertullian](!Wikipedia), writing in mid-empire, claims the infanticide laws are ignored in _Libri duo ad Nationes_, ch 15 ("The Charge of Infanticide Retorted on the Heathen"):
> "Meanwhile, as I have said, the comparison between us [Christians & pagans] does not fail in another point of view. For if we are infanticides in one sense, you also can hardly be deemed such in any other sense; because, although you are forbidden by the laws to slay new-born infants, it so happens that no laws are evaded with more impunity or greater safety, with the deliberate knowledge of the public, and the suffrages of this entire age. Yet there is no great difference between us, only you do not kill your infants in the way of a sacred rite, nor (as a service) to God. But then you make away with them in a more cruel manner, because you expose them to the cold and hunger, and to wild beasts, or else you get rid of them by the slower death of drowning."
[^Twelve-Tables]: From the Twelve Tables:
> "A father shall immediately put to death a son recently born, who is a monster, or has a form different from that of members of the human race."
Another possible oversight is the way in which the dead and past are no longer taken into consideration. This is due in part to the expanding circle itself: if moral progress is indeed being made, and the weight of one's voice is related to how moral one was, then it follows that past people (by being immoral) are also to be ignored. We pay attention to Jefferson in part because he was partially moral, and we pay no attention to a Southern planter who was not even partially moral by our modern lights. More dramatically, we dishonor our ancestors by neglecting their graves, by not offering any sacrifices or even performing any rituals, by forgetting their names (can you name your great-grandparents?), by selling off the family estate when we think the market has hit the peak, and so on. Even if the dead sacrifice and save up a large estate to be used after their death for something they greatly valued, we freely ignore their will when it suits us, assuming the courts will execute the will at all ('perpetuities' being outright forbidden by law despite being highly desirable[^perpetual], although the Muslim world's practical experience had both good and bad aspects[^waqf]). Contrast this with the ability of the wealthy in bygone eras to endow eternal flames, or masses continually said or sutras recited for their soul, or add conditions to their property like 'no Duke in my line shall marry a Catholic', or set up perpetual charities (as in the Muslim or Indian worlds). The dead are ill-respected, and are not even secure in their graves (what shame to hand over remains to be destroyed by alchemists in their bizarre unnatural procedures, whatever those 'scientists' claim to be doing). The 'dead hand of the past' was once more truly the 'live hand' - a vital component of society and the world[^African].
The dead have been ejected utterly from the 'expanding circle' and indeed, [exhumed in the thousands](!Wikipedia "Mummy#Treatment of ancient mummies in modern times") from the Egyptian sands to be used as [paper](!Wikipedia "Mummy paper"), burned as convenient fuel, turned into [folk remedies](!Wikipedia "Mellified man#Similar medicine practices"), or made into the lovely paint colors [caput mortuum](!Wikipedia) & [mummy brown](!Wikipedia). One might say that it has never been a worse time to be dead. This is particularly amusing given that one of the primary purposes of property was to honor and support the dead, and be honored by subsequent generations in turn[^Fukuyama].
[^Fukuyama]: From [Francis Fukuyama](!Wikipedia)'s _[The Origins of Political Order](!Wikipedia)_ (2011):
> "According to [Fustel de Coulanges](!Wikipedia), it was in no way comparable to Christian worship of saints: "The funeral obsequies could be religiously performed only by the nearest relative … They believed that the dead ancestor accepted no offerings save from his own family; he desired no worship save from his own descendants." Moreover, each individual has a strong interest in having male descendants (in an [agnatic](!Wikipedia) system), since it is only they who will be able to look after one's soul after one's death. As a result, there is a strong imperative to marry and have male children; celibacy in early Greece and Rome was in most circumstances illegal.
> The result of these beliefs is that an individual is tied both to dead ancestors and to unborn descendants, in addition to his or her living children. As Hugh Baker puts it with regard to Chinese kinship, there is a rope representing the continuum of descent that "stretches from Infinity to Infinity passing over a razor which is the Present. If the rope is cut, both ends fall away from the middle and the rope is no more. If the man alive now dies without heir, the whole continuum of ancestors and unborn descendants dies with him … His existence as an individual is necessary but insignificant beside his existence as the representative of the whole."^39^
> ...The emergence of modern property rights was then postulated to be a matter of economic rationality, in which individuals bargained among themselves to divide up the communal property, much like Hobbes's account of the emergence of the Leviathan out of the state of nature. There is a twofold problem with this scenario. The first is that many alternative forms of customary property existed before the emergence of modern property rights. While these forms of land tenure may not have provided the same incentives for their efficient use as do their modern counterparts, very few of them led to anything like the tragedy of the commons. The second problem is that there aren't very many examples of modern property rights emerging spontaneously and peacefully out of a bargaining process. The way customary property rights yielded to modern ones was much more violent, and power and deceit played a large role.^5^
> ...The earliest forms of private property were held not by individuals but by lineages or other kin groups, and much of their motivation was not simply economic but religious and social as well. Forced collectivization by the Soviet Union and China in the twentieth century sought to turn back the clock to an imagined past that never existed, in which common property was held by nonkin.
> Greek and Roman households had two things that tied them to a particular piece of real estate: the hearth with its sacred fire, which resided in the household, and nearby ancestral tombs. Land was desired not simply for its productive potential but also because it was where dead ancestors and the family's unmovable hearth resided. Property needed to be private: strangers or the state could not be allowed to violate the resting place of one's ancestors. On the other hand, these early forms of private property lacked a critical characteristic of what we regard today as modern property: rights were generally [usufructuary](!Wikipedia) (that is, they conveyed the right to use land but not to own it), making it impossible for individuals to sell or otherwise alienate it.^6^ The owner is not an individual landlord, but a community of living and dead kin. Property was held as a kind of trust on behalf of the dead ancestors and the unborn descendants, a practice that has parallels in many contemporary societies. As an early twentieth-century Nigerian chief said, "I conceive that land belongs to a vast family of which many are dead, few are living and countless members are still unborn."^7^ Property and kinship thus become intimately connected: property enables you to take care of not only preceding and succeeding generations of relatives, but of yourself as well through your ancestors and descendants, who can affect your well-being.
> In some parts of precolonial Africa, kin groups were tied to land because their ancestors were buried there, much as for the Greeks and Romans.^8^ But in other long-settled parts of West Africa, religion operated differently. There, the descendants of the first settlers were designated Earth Priests, who maintained Earth Shrines and presided over various ritual activities related to land use. Newcomers acquired rights to land not through individual buying and selling of properties but through their entry into the local ritual community. The community conferred access rights to planting, hunting, and fishing not in perpetuity but as a privilege of membership in the community.^9^
> In tribal societies, property was sometimes communally owned by the tribe. As the historical anthropologist [Paul Vinogradoff](!Wikipedia) explained of the Celtic tribes, "Both the free and the unfree are grouped in [agnatic] kindreds. These kindreds hold land in communal ownership, and their possessions do not as a rule coincide with the landmarks [boundaries] of the villages, but spread spider-like through different settlements."^10^ Communal ownership never meant that land was worked collectively, however, as on a twentieth-century Soviet or Chinese collective farm. Individual families were often allocated their own plots. In other cases, properties were individually owned but severely entailed by the social obligations that individuals had toward their kin—living, dead, and yet to be born.^11^ Your strip of land lies next to your cousin's, and you cooperate at harvest-time; it is unthinkable to sell your strip to a stranger. If you die without male heirs, your land reverts to the kin group. Tribes often had the power to reassign property rights. According to Vinogradoff, "On the borders of India, conquering tribes have been known to settle down on large tracts of land without allowing them to be converted into separate property even among clans or kindreds. Occasional or periodical redivisions testified to the effective overlordship of the tribe."^12^
> Customary property held by kin groups still exists in contemporary [Melanesia](!Wikipedia). Upward of 95 percent of all land is tied up in customary property rights in [Papua New Guinea](!Wikipedia) and the [Solomon Islands](!Wikipedia). When a mining or palm oil company wants to acquire real estate, it has to deal with entire descent groups (_wantoks_).^13^ Each individual within the descent group has a potential veto over the deal, and there is no statute of limitations. As a result, one group of relatives may decide to sell their land to the company; ten years later, another group may show up and claim title to the same property, arguing that the land had been unjustly stolen from them in previous generations.^14^ Many individuals are unwilling to sell title to their land under any conditions, since the spirits of their ancestors dwell there.
> But the inability of individuals within the kin group to fully appropriate their property's resources, or to be able to sell it, does not necessarily mean that they neglect it or treat it irresponsibly. Property rights in tribal societies are extremely well specified, even if that specification is not formal or legal.^15^
> - 39: Hugh Baker, _Chinese Family and Kinship_ (New York: Columbia University Press, 1979), p. 26.
> - 5: Such rights were said to have spontaneously emerged during the [California gold rush](!Wikipedia "California Gold Rush#Legal rights") of 1849-1850, when miners peacefully negotiated among themselves an allocation of the claims they had staked out. See Pipes, _Property and Freedom_, p. 91. This account ignores two important contextual factors: first, the miners were all products of an Anglo-American culture where respect for individual property rights was deeply embedded; second, these rights came at the expense of the customary rights to these territories on the part of the various indigenous peoples living there, which were not respected by the miners.
> - 6: Charles K. Meek, _Land Law and Custom in the Colonies_, 2d ed. (London: Frank Cass, 1968), p. 26.
> - 7: Quoted in Elizabeth Colson, "The Impact of the Colonial Period on the Definition of Land Rights," in Victor Turner, ed., _Colonialism in Africa 1870–1960_. Vol. 3: "Profiles in Change: African Society and Colonial Rule" (New York: Cambridge University Press, 1971), p. 203.
> - 8: Meek, _Land Law and Custom_, p. 6.
> - 9: Colson, "Impact of the Colonial Period," p. 200.
> - 10: Paul Vinogradoff, _Historical Jurisprudence_ (London: Oxford University Press, 1923), p. 327.
> - 11: Meek, _Land Law and Custom_, p. 17.
> - 12: Vinogradoff, _Historical Jurisprudence_, p. 322.
> - 13: For a discussion of the pros and cons of traditional land tenure, see Curtin, Holzknecht, and Larmour, _Land Registration in Papua New Guinea_.
> - 14: For a detailed account of the difficulties of negotiating property rights in Papua New Guinea, see Wimp, "Indigenous Land Owners and Representation in PNG and Australia."
> - 15: The modern economic theory of property rights does not specify the social unit over which individual property rights extend for the system to be efficient. The unit is often presumed to be the individual, but families and firms are often posited as holders of property rights, whose constituent members are assumed to have common interests in the efficient exploitation of the resources they together own. See Jennifer Roback, "Exchange, Sovereignty, and Indian-Anglo Relations," in Terry L. Anderson, ed., Property Rights and Indian Economies (Lanham, MD: Rowman and Littlefield, 1991)."
[^perpetual]: English common law explicitly bans wills or trusts that operate indefinitely through a [rule against perpetuities](!Wikipedia); the application can be very tricky, forbidding even apparently legitimate short-term specifications. Under a basic economic analysis of [compound interest](!Wikipedia), respecting the wishes of even distant ancestors is valuable - we should hardly quibble about the odd billion devoted to an eternal flame for Ahura Mazda or child sacrifice to Moloch if it means additional trillions of dollars of growth in the economy (a conclusion which as stated may seem objectionable, but when hidden as a parable seems sensible). Nor is the suggestion of very long-term investments and perpetuities purely theoretical: [Benjamin Franklin](!Wikipedia "Benjamin Franklin#Bequest") succeeded in exactly this, turning 2,000 pounds into $7,000,000+ over 2 centuries; Anna C. Mott's $1000 turned into only $215,000 in 2002 due to a shorter maturity; and Wellington R. Burt succeeded in turning his few millions into $100 million. Very old continuous organizations like the Catholic Church or [Fuggerei](!Wikipedia) are more common than one might think; see Wikipedia on the oldest [companies](!Wikipedia "List of oldest companies") & [newspapers](!Wikipedia "List of the oldest newspapers"), [universities](!Wikipedia "List of oldest universities in continuous operation"), [churches](!Wikipedia "List of oldest churches#Oldest continuous church congregations"), and [madrasahs](!Wikipedia "List of oldest madrasahs in continuous operation").
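The arithmetic behind Franklin's bequest is easy to check; a minimal sketch using the round figures above (and ignoring the pound-to-dollar conversion, so this is only an order-of-magnitude estimate of the implied growth rate):

```python
# Implied compound annual growth rate of Benjamin Franklin's bequest:
# ~2,000 (pounds) growing to ~$7,000,000 over ~200 years.
# Currency conversion is ignored, so treat this as a rough estimate only.
principal = 2_000
final_value = 7_000_000
years = 200

# Solve final = principal * (1 + r)^years for r.
rate = (final_value / principal) ** (1 / years) - 1
print(f"Implied annual growth rate: {rate:.2%}")  # roughly 4% per year
```

Even a modest real return, compounded undisturbed for two centuries, multiplies a bequest thousands of times over, which is the whole force of the argument for respecting perpetuities.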
Sadly, when we look at subsequent history, the chief risk to such philanthropy is not inflation, taxes, or any of the other failure modes triumphantly suggested as refutations, but legal hostility. The estate of Franklin's first imitator, [Peter Thellusson](!Wikipedia) (who sought to benefit his descendants), was embroiled in the [Thellusson Will Case](!Wikipedia) on which more than 100 lawyers earned their daily bread (paid out of the interest of course) for the next 62 years; would-be philanthropist Jonathan Holden's millions were likewise eaten up, the trusts broken by the living, and nothing even named after Holden. The lack of perpetuities endangers arrangements one might want; [Richard Dawkins](!Wikipedia) in _[The God Delusion](!Wikipedia)_ describes an example of only partially-kept religious perpetuities and draws the appropriate lesson for (secular) long-term projects like [cryonics](!Wikipedia) or the [Long Now](!Wikipedia):
> "Even in the Middle Ages, money was not the only currency in which you could buy parole from purgatory [indulgences]. You could pay in prayers too, either your own before death or the prayers of others on your behalf, after your death. And money could buy prayers. If you were rich, you could lay down provision for your soul in perpetuity. My own Oxford College, [New College](!Wikipedia "New College, Oxford"), was founded in 1379 (it was new then) by one of that century's great philanthropists, [William of Wykeham](!Wikipedia), Bishop of Winchester. A medieval bishop could become the Bill Gates of the age, controlling the equivalent of the information highway (to God), and amassing huge riches. His diocese was exceptionally large, and Wykeham used his wealth and influence to found two great educational establishments, one in Winchester and one in Oxford. Education was important to Wykeham, but, in the words of the official New College history, published in 1979 to mark the sixth centenary, the fundamental purpose of the college was 'as a great [chantry](!Wikipedia) to make intercession for the repose of his soul. He provided for the service of the chapel by ten chaplains, three clerks and sixteen choristers, and he ordered that they alone were to be retained if the college's income failed.' Wykeham left New College in the hands of the Fellowship, a self-electing body which has been continuously in existence like a single organism for more than six hundred years. Presumably he trusted us to continue to pray for his soul through the centuries.
> Today the college has only one chaplain and no clerks, and the steady century-by-century torrent of prayers for Wykeham in purgatory has dwindled to a trickle of two prayers per year. The choristers alone go from strength to strength and their music is, indeed, magical. Even I feel a twinge of guilt, as a member of that Fellowship, for a trust betrayed. In the understanding of his own time, Wykeham was doing the equivalent of a rich man today making a large down payment to a cryogenics company which guarantees to freeze your body and keep it insulated from earthquakes, civil disorder, nuclear war and other hazards, until some future time when medical science has learned how to unfreeze it and cure whatever disease it was dying of. Are we later Fellows of New College reneging on a contract with our Founder? If so, we are in good company. Hundreds of medieval benefactors died trusting that their heirs, well paid to do so, would pray for them in purgatory. I can't help wondering what proportion of Europe's medieval treasures of art and architecture started out as down payments on eternity, in trusts now betrayed."
[^waqf]: "The Provision of Public Goods under Islamic Law: Origins, Impact, and Limitations of the Waqf System", by Timur Kuran; _Law & Society Review_, Vol. 35, No. 4 (2001), pp. 841-898. The basic idea:
> A [waqf](!Wikipedia) is an unincorporated trust established under Islamic law by a living man or woman for the provision of a designated social service in perpetuity. Its activities are financed by revenue-bearing assets that have been rendered forever inalienable. Originally the assets had to be immovable, although in some places this requirement was eventually relaxed to legitimize what came to be known as a "cash waqf."
Waqfs were not an Islamic innovation, exactly; they may have had Persian antecedents, but certainly we can find earlier analogies:
> One inspiration for the waqf was perhaps the Roman legal concept of a sacred object, which provided the basis for the inalienability of religious temples. Another inspiration might have been the philanthropic foundations of Byzantium, and still another the Jewish institution of consecrated property (hekdesh). But there are important differences between the waqf and each of these forerunners. A Roman sacred object was authorized, if not initiated, by the state, which acted as the property's administrator (Köprülü 1942:7-9; Barnes 1987:5-8). By contrast, a waqf was typically established and managed by individuals without the sovereign's involvement. Under Islamic law, the state's role was limited to enforcement of the rules governing its creation and operation. A Byzantine philanthropic foundation was usually linked to a church or monastery, and it was subject to ecclesiastical control (Jones 1980:25). A waqf could be attached to a mosque, but often it was established and administered by people outside the religious establishment. Finally, whereas under Jewish law it was considered a sacrilege to consecrate property for one's own benefit (Elon 1971:280-88), there was nothing to keep the founder of a waqf from appointing himself as its first administrator and drawing a hefty salary for his services.
These perpetuities were huge; modern Iran's [bonyad](!Wikipedia)s are estimated at 20% of its GDP and the waqfs may have been bigger and correspondingly active:
> Available aggregate statistics on the assets controlled by waqfs come from recent centuries. At the founding of the Republic of Turkey in 1923, three-quarters of the country's arable land belonged to waqfs. Around the same time, one-eighth of all cultivated soil in Egypt and one-seventh of that in Iran stood immobilized as waqf property. In the middle of the 19th century, one-half of the agricultural land in Algeria, and in 1883 one-third of that in Tunisia, was owned by waqfs (Heffening 1936:1100; Gibb & Kramers 1961:627; Barkan 1939:237; Baer 1968b:79-80). In 1829, soon after Greece broke away from the Ottoman Empire, its new government expropriated waqf land that composed about a third of the country's total area (Fratcher 1973:114). Figures that stretch back the farthest pertain to the total annual income of the waqf system. At the end of the 18th century, it has been estimated, the combined income of the roughly 20,000 Ottoman waqfs in operation equaled one-third of the Ottoman state's total revenue, including the yield from tax farms in the Balkans, Turkey, and the Arab world (Yediyıldız 1984:26). Under the assumption that individuals cultivating waqf land were taxed equally with those working land belonging to state-owned tax farms, this last figure suggests that roughly one-third of all economically productive land in the Ottoman Empire was controlled by waqfs.
> There is abundant evidence that even a single waqf could carry great economic importance. Jerusalem's Haseki Sultan charitable complex, founded in 1552 by Haseki Hürrem, wife of Suleyman the Magnificent and better known in the West as Roxelana, possessed 26 entire villages, several shops, a covered bazaar, 2 soap plants, 11 flour mills, and 2 bathhouses, all in Palestine and Lebanon. For centuries the revenues produced by these assets were used to operate a huge soup kitchen, along with a mosque and two hostels for pilgrims and wayfarers (Peri 1992:170-71). In the 18th century, a waqf established in Aleppo by Hajj Musa Amiri, a member of the local elite, included 10 houses, 67 shops, 4 inns, 2 storerooms, several dyeing plants and baths, 3 bakeries, 8 orchards, and 3 gardens, among various other assets, including agricultural land (Meriwether 1999:182-83)...many of the architectural masterpieces that symbolize the region's great cities, were financed through the waqf system. So were practically all the soup kitchens in operation throughout the region. By the end of the 18th century, in Istanbul, whose estimated population of 700,000 made it the largest city in Europe, up to 30,000 people a day were being fed by charitable complexes (imarets) established under the waqf system (Huart 1927:475).
Such wealth made waqfs targets just as the Catholic Church's wealth made it a target for Henry VIII - but perhaps with different results (surprising, since waqfs seem predicated on ordinary property rights being insecure, especially compared with England):
> The consequent weakness of private property rights made the sacred institution of the waqf a convenient vehicle for defending wealth against official predation. Expropriations of waqf properties did occur, especially following conquests or the replacement of one dynasty by another. However, when they occurred, they usually generated serious resistance. During the two and a half centuries preceding Egypt's fall to the Turks in 1517, no fewer than six revenue-seeking Mameluke rulers attempted to confiscate major waqfs; primarily because of judicial resistance, their efforts were largely unsuccessful (Yediyıldız 1982a:161). In the 1470s the Ottoman sultan Mehmed II expropriated scores of waqfs to raise resources for his army and his unusually broad public works program. His conversion of hundreds of waqf-owned villages into state property generated a strong reaction, and it influenced the succession struggle that followed his death. Moreover, his son Bayezid II, upon acceding to the throne, restored the confiscated lands to their former status (Repp 1988:128-29; İnalcık 1955:533). Such episodes underscored the relative security of waqf property. ...Precisely because of the commonness of this motive, when a state attempted to take over a waqf it usually justified the act on the ground that it was illegitimate (Akgündüz 1996:523-61). Accordingly, its officials tried to convince the populace that the expropriated properties belonged to the state to begin with or simply that the waqf founder had never been their legitimate owner.23
The waqf structure did succeed, as economics might predict, in increasing the amount dedicated to charity, as we can see comparing religious groups' participation:
> Accordingly, up to the 19th century Jews and Christians were ordinarily permitted to establish only functionally similar institutions (Akgündüz 1996:238-41). Unlike waqfs, these would not be overseen by the Islamic courts or enjoy the protection of Islamic law. We know that actual practices varied. In certain periods and regions influential non-Muslims were permitted to establish waqfs.14 Yet, the requirement pertaining to the founder's religion was generally effective. Non-Muslims were less inclined than equally wealthy Muslims to establish and fund charitable foundations of any kind, even ones to serve mostly, if not exclusively, their own religious communities (Masters 1988:173-74; Jennings 1990:308-9; Marcus 1989:305).15 This pattern changed radically only in the 19th century, when the right to establish waqfs was extended to the members of other faiths (Çadırcı 1991:257-58). At this point it became common for wealthy Jews and Christians to establish waqfs under a permissive new variant of Islamic law (Shaham 1991:460-72; Afifi 1994:119-22).16
The chief flaw in waqfs was the 'dead hand' - perpetual meant perpetual:
> To start with the former type of rigidity, the designated mission of a waqf was irrevocable. Ordinarily not even the founder of a waqf could alter its goals. Wherever possible, the objectives specified in the waqf deed had to be pursued exactly. This requirement, if obeyed to the letter, could cause a waqf to become dysfunctional. Imagine a richly endowed waqf established to build and support a particular caravanserai. Two centuries later, let us also suppose, a shift in trade routes idles the structure. If the long-dead founder had neglected to permit future mutawallis to use their own judgment in the interest of supporting commerce through the most efficient means, his waqf's assets could not be transferred from the now dysfunctional caravanserai to, say, the administration of a commercial port. They could not be shifted even to another caravanserai. At least for a while, therefore, the resources of the waqf would be used inefficiently. Probably because this danger of serious efficiency loss gained recognition early on, the architects of the waqf system made the residuary mission of every waqf the benefit of the poor.36 This rule meant that the assets supporting a dysfunctional caravanserai would eventually be transferred to a public shelter or a soup kitchen, thus limiting the misallocation of resources. But in tempering one form of inefficiency this measure created another. The resources devoted to poor relief would grow over time, possibly dampening incentives to work. The earlier-reported evidence of Istanbul's soup kitchens feeding 30,000 people a day points, then, to more than the waqf system's success in providing social services in a decentralized manner. Perhaps it shows also that the system could generate a socially costly oversupply of certain services. 
> This is the basis on which some scholars have claimed that the waqf system contributed to the Islamic world's long economic descent by fostering a large class of indolent beneficiaries (Akdağ 1979:128-30; Cem 1970:98-99).37
> Not only were these recognized but steps were taken to mitigate them. The typical Ottoman waqf deed contained a standard formulary featuring a list of operational changes the mutawalli was authorized to make. However, unless explicitly stated otherwise, he could make only one set of changes; once the waqf's original rules had undergone one modification, there could not be another reform (Akgündüz 1996:257-70; Little 1984:317-18). This point qualifies, but also supports the observation that the waqf system suffered from operational rigidities. Sooner or later every waqf equipped with the standard flexibilities would exhaust its adaptive capacity...It is on this basis that in 1789, some 237 years after the establishment of the Haseki Sultan complex, its mutawalli decided against hiring a money changer, even though some employees wanted the appointment to cope with rising financial turnover (Peri 1992:184-85).
> Finally, if the founder had not explicitly allowed the waqf to pool its resources with those of other organizations, technically achievable economies of scale could remain unexploited. In particular, services that a single large waqf could deliver most efficiently - road maintenance, piped water - might be provided at high cost by multiple small waqfs. Founders were free, of course, to stipulate that part, even all, of the income of their waqfs be transferred to a large waqf. And scattered examples of such pooling of waqf resources have been found (Çizakça 2000:48).40 The point remains, however, that if a waqf had not been designed to participate in resource pooling it could not be converted into a "feeder waqf" of another, ordinarily larger waqf. Even if new technologies came to generate economies of scale unimaginable at the waqf's inception, the waqf would have to continue operating independently. Rifaah al-Tahtawi, a major Egyptian thinker of the 19th century, put his finger on this problem when he wrote, "Associations for joint philanthropy are few in our country, in contrast to individual charitable donations and family endowments, which are usually endowed by a single individual" (Cole 2000).
> On this basis one may suggest that the "static perpetuity" principle of the waqf system was more suitable to a slowly changing economy than to one in which technologies, tastes, and lifestyles undergo revolutionary changes within the span of a generation. Even if adherence to the principle was only partial (as discussed later, violations were hardly uncommon), in a changing economy the efficiency of the waqf system would have fallen as a result of delays in socially desirable adjustments.42 This interpretation is consistent with the fact that in various parts of the modern Islamic world the legal infrastructure of the waqf system has been, or is being, modified to endow mutawallis with broader operational powers. Like many forms of the Western trust, a modern waqf is a corporation - an internally autonomous organization that the courts treat as a legal person.43 As such, its mutawalli, which may now be a committee of individuals or even another corporation, enjoys broad rights to change its services, its mode and rules of operation, and even its goals, without outside interference. This is not to say that a mutawalli is now unconstrained by the founder's directives. Instead, there is no longer a presumption that the founder's directives were complete, and the mutawalli, or board of mutawallis, is expected and authorized to be much more than a superintendent following orders. A modern mutawalli is charged with maximizing the overall return on all assets, subject to intertemporal tradeoffs and the acceptability of risk. The permanence of any particular asset is no longer an objective in itself. It is taken for granted that the waqf's substantive goals may best be served by trimming the payroll to finance repairs or by replacing a farm received directly from the founder with equity in a manufacturing company. ...The ongoing reforms of the waqf system amount, then, to an acknowledgment that the rigidities of the traditional waqf system were indeed sources of inefficiency.
The obvious approach was to add new flexibility by two routes; first, explicit flexibility in the founding deed:
> It was not uncommon for founders to authorize their mutawallis to sell or exchange waqf assets (istibdāl). Miriam Hoexter (1998:ch. 5) has shown that between the 17th and 19th centuries the mutawallis of an Algerian waqf established for the benefit of Mecca and Medina managed, acting on the authority they enjoyed, to enlarge this waqf's endowment through shrewd purchases, sales, and exchanges of assets. In the same vein, Ronald Jennings (1990:279-80, 286) has observed that in 16th-century Trabzon some founders explicitly empowered their mutawallis to exercise their own judgment on business matters. He has also found that the courts with jurisdiction over Trabzon's waqfs tolerated a wide range of adaptations.45 The waqfs in question were able to undertake repairs, adjust payments to suit market conditions, and rent out unproductive properties at rates low enough and for sufficiently long periods to entice renters into making improvements (Jennings 1990:335). Other scholars, in addition to providing examples of founder-endorsed plasticity, have shown that there were limits to the founder's control over the waqf's management, especially beyond his or her own lifetime. Said Arjomand (1998:117, 126) and Stéphane Yerasimos (1994:43-45) independently note that the waqf deed could suffer damage or even disappear with the passage of time. It could also be tampered with, sowing doubts about the authenticity of all its directives. In such circumstances, the courts might use their supervisory authority to modify the waqf's organization, its mode of operation, and even its mission. Moreover, even when no disagreements existed over the deed itself judges had the right to order unstipulated changes in the interest of either the waqf's intended beneficiaries or the broader community. We have seen that such heavy-handedness sometimes sparked resistance. Harmed constituencies might claim that the principle of static perpetuity had been violated.
> However, judges were able to prevail if they commanded popular support and the opponents of change were poorly organized. Yerasimos furnishes examples of 16th-century Ottoman construction projects that involved the successful seizure of ostensibly immobilized waqf properties, sometimes without full compensation. ...There are ample indications that modification costs were generally substantial. As Murat Çizakça (2000:16-21) observes, only some of the Islamic schools of law allowed sales and exchanges of waqf properties, and even these schools imposed various restrictions.
The second approach was to avoid inalienable *assets* - not real estate, but perhaps money or other financial instruments:
> "Cash waqfs" thus emerged as early as the eighth century, earning income generally through interest-bearing loans (Qizakga 2000:ch. 3). Uncommon for many centuries, these waqfs provoked intense controversy as their numbers multiplied, because they violated both waqf law and the prohibition of interest (Mandaville 1979; Kurt 1996:10-21). According to their critics, not only was the cash waqf doubly un-Islamic but it consumed resources better devoted to charity and religion. Interestingly, the defenders invoked neither scripture nor the law. Conceding that the cash waqf violates classical Islamic principles, they pointed to its popularity and inferred that it had to be serving a valuable social function. In effect, they held that the cash waqf should be tolerated because it passes the utilitarian test of the market-the irreligious test now commonly used to justify popular, but perhaps ethically troubling, economic practices. The defenders of the cash waqf, who included prominent clerics, also lamented that their opponents, though perhaps knowledgeable of Islam, were ignorant of both history and the prevailing practical needs of their communities (Mandaville 1979:297-300, 306-8).
> Because they met important needs and encountered little opposition outside of legal and religious circles, cash waqfs became increasingly popular. By the 16th century, in fact, they accounted for more than half of all the new Ottoman waqfs. Most of them were on the small side, as measured by assets (Çağatay 1971; Yediyıldız 1990:118-22; Masters 1988:161-63). One factor that accounts for their enormous popularity is the ubiquitous quest for wealth protection. Another was that there existed no banks able to meet the demand for consumption loans, only moneylenders whose rates reflected the risks they took by operating outside the strict interpretation of the law. Where and when the cash waqf enjoyed legal approval, it allowed moneylenders to operate more or less within the prevailing interpretation of Islamic law. If nothing else, the sacredness that flowed from its inclusion in the waqf system insulated its interest-based operations from the charge of sinfulness.
Both brought their own problems:
> Yet, cash waqfs were by no means free of operational constraints. Like the founder of an ordinary waqf, that of a cash waqf could restrict its beneficiaries and limit its charges. Yediyıldız points to the deed of an 18th-century waqf whose founder required it to lend at exactly 10% and only to merchants based in the town of Amasya (Yediyıldız 1990:122). The restrictions imposed on a cash waqf typically reflected, in addition to the founder's personal tastes and biases, the prevailing interest rates at the time of its establishment. Over time, these could become increasingly serious barriers to the waqf's exploitation of profit opportunities. Precisely because the cash waqfs were required to keep their rates fixed, observes Çizakça (2000:52-53), only a fifth of them survived beyond a century...Revealingly, the borrowers of the 18th-century cash waqfs of Bursa included their own mutawallis. These mutawallis lent on their own account to the moneylenders of Ankara and Istanbul, where interest rates were higher (Çizakça 1995). Had the endowment deeds of these cash waqfs permitted greater flexibility, the gains reaped by mutawallis could have accrued to the waqfs themselves.
> Insofar as these methods enhanced the acceptability of corruption, they would also have facilitated the embezzlement of resources ostensibly immobilized for the provision of social services, including public goods and charitable causes. Embezzlement often occurred through sales and exchanges of waqf properties. While such transactions could serve a waqf's financial interests, and thus its capacity for meeting the founder's goals, they were subject to abuse. Mutawallis found ways to line their own pockets through transactions detrimental to the waqf, for instance, the exchange of an economically valuable farm for the inferior farm of an uncle. A bribe-hungry judge might approve such a transaction under the pretext of duress, knowing full well that it was motivated more by personal gain than by civic duty. In certain times and places this form of embezzlement became so common that high officials took to treating waqf properties as alienable. In the early 16th century, right before the Ottomans occupied Egypt, a Mameluke judge ruled that the land on which the famous al-Azhar complex stands could be sold to someone looking for a site to build a mansion (Behrens-Abouseif 1994:146-47)...Her waqf was to support, she stated, "the poor and the humble, the weak and the needy... the true believers and the righteous who live near the holy places . . . [and] hold onto the sharia and strictly observe the commandments of the sunna" (Peri 1992:172). Since practically any Muslim resident of greater Jerusalem could qualify as either weak or devout, within a few generations huge numbers of families, including some of the richest, were drawing income from the waqf. Even an Ottoman governor managed to get himself on the waqf's payroll, and he took to using the waqf as an instrument of patronage (Peri 1992:173-74). 
> As Hürrem's waqf turned into a politicized source of supplementary income for people whom she would hardly have characterized as needy, the government in Istanbul tried repeatedly to trim the list of beneficiaries. Evidently it sensed that continued corruption would cause the waqf, and therefore Ottoman rule itself, to lose legitimacy. Yet the government itself benefited from showering provincial notables with privileges, which limited the reach of its reforms. After every crackdown the waqf's managers returned to creating entitlements for the upper classes (Peri 1992:182-84). Ann Lambton (1997:305) gives examples of even more serious abuses from 14th-century Iran. Based on contemporary observations, she notes that practically all assets of the 500 waqfs in Shiraz had fallen into the hands of corrupt mutawallis bent on diverting revenues to themselves...One must not infer that managerial harm to the efficiency of waqfs stemmed only, or even primarily, from corruption. As Richard Posner (1992:511) observes in regard to charitable trusts in common law jurisdictions, the managers and supervisors of trusts established for the benefit of broad social causes generally lack adequate incentives to manage properties efficiently.
Contrast with European institutions:
> Just as the premodern Middle East had inflexible waqfs, one might observe, the preindustrial and industrial West featured restrictions that inhibited the efficient administration of trusts (Fratcher 1973:22, 55, 66-71). ...Do such facts invalidate the claim of this section, namely, that inflexibilities of the waqf system held the Middle East back as Europe took the lead in shaping the modern global economy? Two additional facts from European economic history may be advanced in defense of the presented argument. First, over the centuries the West developed an increasingly broad variety of trusts, including many that give a trustee - the counterpart of the mutawalli - greater operational flexibility. These came to include trusts to operate businesses, trusts to manage financial portfolios, and trusts to hold the majority of the voting shares in a corporation. Also, while it is doubtless true that certain Western trusts suffered from the sorts of rigidities that plagued the waqf system, other trusts mitigated these problems by equipping their trustees, or boards of trustees, with powers akin to those of a corporate board.
> Another important difference concerns the powers of founders. As early as the 14th century, judges in England were discouraging waqf-like "perpetuities" through which donors could micromanage properties indefinitely, well after their deaths. Trusts providing benefits for unborn persons were declared invalid, or valid only if subject to destruction by prior beneficiaries. And in France, a law was instituted in 1560 to keep the founders of [_fideicommissa_](!Wikipedia "Fideicommissum"), trust-like devices grounded in Roman law, from tying the hands of more than two generations of beneficiaries (Fratcher 1973:11-12, 86). These cases of resistance to static perpetuity show that the immobilization of property also presented dangers in Europe. But they also demonstrate that successful attempts to contain the dangers came much earlier in Europe than in the Middle East, where legal reforms designed to give mutawallis greater discretion had to await the 20th century.
[^African]: From [Ryszard Kapuściński](!Wikipedia), _The Shadow of the Sun_ 2002, pg 36-37 (quoted in [James L. Kugel](!Wikipedia)'s _In the Valley of the Shadow_ 2011, pg 33):
> "The spiritual world of the 'African' (if one may use the term despite its gross simplification) is rich and complex, and his inner life is permeated by a profound religiosity. He believes in the coexistence of three different yet related worlds.
> The first is the one that surrounds us, the palpable and visible reality composed of living people, animals, and plants, as well as inanimate objects: stones, water, air. The second is the world of the ancestors, those who died before us, but who died, as it were, not completely, not finally, not absolutely. Indeed, in a metaphysical sense, they continue to exist, and are even capable of participating in our life, of influencing it, shaping it. That is why maintaining good relations with one's ancestors is a precondition of a successful life, and sometimes even of life itself. The third world is the rich kingdom of the spirits - spirits that exist independently, yet at the same time are present in every being, in every object, in everything and everywhere. At the head of these three worlds stands the Supreme Being, God. Many of the bus inscriptions speak of omnipresence and his unknown omnipotence: 'God is everywhere', 'God knows what he does', 'God is mystery'."
Continuing Kugel:
> "It is not difficult to imagine our own ancestors some generations ago living in such a world. Indeed, many of the things that Kapuściński writes about Africans are easily paralleled by what we know of the ancient Near east, including the cult of the dead. Though largely forbidden by official, biblical law, consulting dead ancestors, contacting them through wizards or mediums - in fact, providing the deceased with water and sustenance on a regular basis via feeding tubes specially implanted at their burial sites (because, as Kapuściński writes, those relatives have 'died, as it were, not completely, not finally, not absolutely') - were practices that have been documented by archaeologists within biblical Israel and, more widely, all across the eastern Mediterranean, as well as in Mesopotamia and even in imperial Rome.^6^ More generally, those three overlapping worlds Kapuściński describes - one's physical surroundings, one's dead ancestors, and the whole world of God and the divine - have been described elsewhere by ethnographers working in such diverse locales as the Amazon rain forests, New Guinea, and Micronesia....For centuries and millennia, we *were* small, dwarfed by gods and ancestors and a throbbing world of animate and inanimate beings all around us, each with its personal claim to existence no less valid than our own.
> - 6: See on this M. Bayliss, "The Cult of Dead Kin in Assyria and Babylonia", _Iraq_ 35 (1973), 115-125; Brian B. Schmidt, _Israel's Beneficent Dead: Ancestor Cult and Necromancy in Ancient Israelite Religion and Tradition_ (Winona Lake, Ind.: Eisenbrauns, 1996), 201-215; Theodore Lewis, _The Cult of the Dead in Ancient Israel and Ugarit_ (Atlanta: Scholars Press, 1989), 97. [More reading: Elizabeth M. Bloch-Smith's 1992 paper, "The Cult of the Dead in Judah: Interpreting the Material Remains"]
If the past has been excluded from the circle, what of the future? One wonders. The [demographic transition](!Wikipedia) is a curious phenomenon, and one that is putting many developed nations below replacement fertility; when combined with national and private debt levels unprecedented in history, and depletion of non-renewable resources, that suggests a certain disregard for descendants. Yes, all that *may* have resulted in higher economic growth which the descendants can then use to purchase whatever bundle of goods they find most desirable, but as with banks lending money, it only takes one blow-up to render the net returns negative. (If a multilateral thermonuclear war bombs the world back to the Stone Age - what is the net global growth rate from the Neolithic to WWIII? Is it positive or negative? This is an important question since war casualties historically follow a [power law](!Wikipedia).) There are no explicit advocates for futurity, and no real place for them in contemporary ethics besides economics's idea of exponential discounting (which has been criticized for making any future consequence, no matter how large, almost irrelevant as long as it is delayed a century or two). Has the living's concern for their descendants, the inclusion of the future into the circle of moral concern, increased or decreased over time? Whatever one's opinion, I submit that the answer is shaky and not supported by strong evidence.
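That complaint about exponential discounting is easy to make concrete with a little arithmetic; the 5% annual discount rate below is purely an illustrative assumption, not a claim about the correct social discount rate:

```python
# Present value of a benefit delivered t years in the future,
# under exponential discounting at an assumed annual rate r.
def present_value(amount, r, t):
    return amount / (1 + r) ** t

# At 5%, a benefit 2 centuries away is discounted by a factor of ~17,000:
pv = present_value(1.0, 0.05, 200)
print(f"{pv:.2e}")  # → 5.78e-05
```

On this rule, averting a catastrophe 2 centuries hence is worth less than a ten-thousandth of averting the same catastrophe today - which is exactly the criticism.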
One of the most difficult aspects of any theory of moral progress is explaining *why* moral progress happens when it does, in such apparently random non-linear jumps. (Historical economics has a similar problem with the Industrial Revolution & [Great Divergence](!Wikipedia).) These jumps do not seem to correspond to simply how many philosophers are thinking about ethics. As we have already seen, the straightforward picture of ever more inclusive ethics relies on cherry-picking if it covers more than, say, the past 5 centuries; and if we are honest enough to say that moral progress isn't clear before then, we face the new question of explaining why things changed *then* and not at any point previous in the 2500 years of Western philosophy, which included many great figures who worked hard on moral philosophy such as Plato or Aristotle. It is also troubling how much morality & religion seem to be correlated with biological factors. Even if we do not go as far as Julian Jaynes[^Jaynes], there are still many curious correlations floating around.
[^Jaynes]: pg65-66, James L. Kugel, _In the Valley of the Shadow_:
> "One book I read during chemotherapy was the well-known study by the experimental psychologist [Julian Jaynes](!Wikipedia), _[The Origin of Consciousness in the Breakdown of the Bicameral Mind](!Wikipedia)_ (1976). Jaynes suggested that the human brain used to function somewhat differently in ancient times (that is, up to about 3,000 years ago). He noted that, while many aspects of language and related functions are located in the two parts of the brain's left hemisphere known as [Wernicke's area](!Wikipedia) and [Broca's area](!Wikipedia), the right-brain counterparts to these areas are nowadays largely dormant. According to Jaynes, however, those areas had been extremely important in earlier times, before humans began to perceive the world as we do now. Back then, he theorized, humans had an essentially "bicameral mind" that lacked the integrative capacities of the modern brain. Instead, its two halves functioned relatively independently: the [left brain](!Wikipedia) would obey what it perceived as "voices". which in fact emanated from those now-dormant areas of the right brain. (In Jaynes's formulation, the right hemisphere "organized admonitory experience and coded it into 'voices' which were then 'heard' by the left hemisphere.") Although internally generated, those voices were thus perceived by the left brain as coming from outside. It is this situation that led to the belief in communications from the gods in ancient times, as well as the belief in lesser sorts of supernatural communicators: talking spirits and genies, muses who dictated poetry to the "inspired" poet, sacred rocks, trees, and other objects that brought word "from the other side." When this bicameral mind faded out of existence and modern consciousness arose, prophecy likewise ceased and people suddenly no longer heard the gods telling them what to do.
> Jaynes's theory attracted much attention when first promulgated: it answered a lot of questions in one bold stroke. But it was not long before other scholars raised significant, and eventually devastating, objections to his idea. To begin with, 3,000 years is a tiny speck of time on the scale of human evolution. How could so basic a change in the way our brains work have come about so recently? What is more, 3,000 years ago humans lived in the most varied societies and environments. Some societies were already quite sophisticated and diversified, while others then (and some still now) existed in the most rudimentary state; some humans lived in tropical forests, others in temperate climes, still others in snowy wastelands close to earth's poles; and so forth. Could human brains in these most diverse circumstances all have changed so radically at - in evolutionary terms - the same instant? Certainly now our brains all seem to function in pretty much the same way, no matter where we come from; there are no apparent surviving exemplars of the bicameral mind that Jaynes postulated. What could have caused humanity to undergo this radical change *in lockstep* all over the earth's surface? A ray from outer space?
> But if Jaynes's idea has met with disapproval, the evidence he adduced is no less provocative. The problem of explaining such phenomena as the appearance and subsequent disappearance of prophecy in many societies (though certainly not all), along with the near-universal evidence of religion discussed earlier (with the widespread phenomenon of people communing with dead ancestors and/or gods - and hearing back from them), remains puzzling."
Given these 3 very large areas of shrinking circles, should we call it an expanding circle or a *shifting* circle?
# Existential risks and mathematical error
["Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes"]( discusses a basic issue with [existential threats](!Wikipedia): any useful discussion will be rigorous, hopefully with physics and math proofs; but proofs themselves are empirically unreliable. Given that proofs are the most reliable form of epistemology humans know, this sets a basic upper bound on how much confidence we can put on *any* belief. (There are other rare risks, from mental diseases[^mind] to how to deal with contradictions[^Wittgenstein], but we'll look at mathematical error.)
> "When you have eliminated the impossible, whatever remains is often more improbable than your having made a mistake in one of your impossibility proofs." --[Steven Kaas](
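The paper's core move can be sketched numerically. It decomposes the probability of disaster over whether the proof (or argument) is sound; all the numbers below are illustrative assumptions of mine, not figures from the paper:

```python
# Illustrative numbers only: how a proof's own error rate bounds the
# confidence it can deliver, via the decomposition
#   P(disaster) = P(disaster | proof sound) * P(sound)
#               + P(disaster | proof flawed) * P(flawed)
p_flawed = 1e-3        # hypothetical: ~1 in 1,000 published proofs is fatally flawed
p_given_sound = 1e-9   # what the proof claims the risk is
p_given_flawed = 1e-2  # hypothetical prior on the risk if the proof is worthless

p_disaster = p_given_sound * (1 - p_flawed) + p_given_flawed * p_flawed
print(p_disaster)      # ~1e-5: dominated by the chance the proof is wrong,
                       # not by the tiny risk the proof claims
```

However small the risk a proof asserts, the bottom line is pinned near `p_given_flawed * p_flawed`: the unreliability of proof itself sets the floor.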
This upper bound on our certainty may force us to disregard certain rare risks because the effect of error on our estimates of existential risks is *asymmetric*: an error will usually reduce the risk, not increase it. The errors are not distributed symmetrically around a mean: an existential risk is, by definition, bumping up against the upper bound on possible damage. If we were trying to estimate, say, average human height, and errors were distributed like a bell curve, then we could ignore them. But if we are calculating the risk of a super-asteroid impact which will kill all of humanity, an error which means the super-asteroid will actually kill humanity twice over is irrelevant because it's the same thing (we can't die twice); however, the mirror error - the super-asteroid actually killing half of humanity - matters a great deal!
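The asymmetry can be made concrete with a toy simulation (all numbers and the error model are illustrative assumptions, not from any cited source): start from a point estimate of "kills everyone", apply a *symmetric* multiplicative error, and watch the cap at total population turn it into a one-sided downward correction:

```python
import random

random.seed(0)
POPULATION = 8e9    # illustrative cap: damage cannot exceed killing everyone
estimate = 8e9      # point estimate: the super-asteroid kills all of humanity

# Symmetric multiplicative error: the true toll is equally likely to be
# half or double the estimate (illustrative error model).
samples = [estimate * random.choice([0.5, 2.0]) for _ in range(100_000)]

naive_mean = sum(samples) / len(samples)
capped_mean = sum(min(s, POPULATION) for s in samples) / len(samples)

print(naive_mean / estimate)    # ~1.25: uncapped, the errors average out high
print(capped_mean / estimate)   # ~0.75: capped, only the downward error matters
```

"Killing humanity twice over" is clipped back to once, while "killing half of humanity" survives intact, so symmetric uncertainty can only pull the expected damage down.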
[^mind]: There are various delusions (eg. [Cotard delusion](!Wikipedia)), [false memory syndrome](!Wikipedia)s, compulsive lying ([pseudologia fantastica](!Wikipedia)), disorders provoking [confabulation](!Wikipedia "Confabulation#Abnormal Psychopathology") such as the general symptom of [anosognosia](!Wikipedia); in a dramatic example of how the mind is what the brain does, some anosognosia can be temporarily cured by squirting cold water in an ear; from ["The Apologist and the Revolutionary"](
> "Take the example of the woman discussed in Lishman's _Organic Psychiatry_. After a right-hemisphere stroke, she lost movement in her left arm but continuously denied it. When the doctor asked her to move her arm, and she observed it not moving, she claimed that it wasn't actually her arm, it was her daughter's. Why was her daughter's arm attached to her shoulder? The patient claimed her daughter had been there in the bed with her all week. Why was her wedding ring on her daughter's hand? The patient said her daughter had borrowed it. Where was the patient's arm? The patient "turned her head and searched in a bemused way over her left shoulder"...In any case, a patient who has been denying paralysis for weeks or months will, upon having cold water placed in the ear, admit to paralysis, admit to having been paralyzed the past few weeks or months, and express bewilderment at having ever denied such an obvious fact. And then the effect wears off, and the patient not only denies the paralysis but denies ever having admitted to it."
[^Wittgenstein]: Most/all math results require their system to be consistent; but this is one particular philosophical view. [Ludwig Wittgenstein](!Wikipedia), in _[Remarks on the Foundations of Mathematics](!Wikipedia)_:
> "If a contradiction were now actually found in arithmetic – that would only prove that an arithmetic with *such* a contradiction in it could render very good service; and it would be better for us to modify our concept of the certainty required, than to say it would really not yet have been a proper arithmetic."
[Saul Kripke](!Wikipedia), [reconstructing a Wittgensteinian skeptical argument](!Wikipedia "Wittgenstein on Rules and Private Language"), points out one way to react to such issues:
> "A *skeptical* solution of a philosophical problem begins... by conceding that the skeptic's negative assertions are unanswerable. Nevertheless our ordinary practice or belief is justified because—contrary appearances notwithstanding—it need not require the justification the sceptic has shown to be untenable. And much of the value of the sceptical argument consists precisely in the fact that he has shown that an ordinary practice, if it is to be defended at all, cannot be defended in a certain way."
How big is this upper bound? Mathematicians have often made errors in proofs. We can divide errors into 2 basic cases:
1. Mistakes where the theorem is still true, but the proof was incorrect
2. Mistakes where the theorem was *false*, and the proof was also necessarily incorrect
Case 1 is perhaps the most common case, with innumerable examples; it is sometimes due to mistakes in the proof that anyone would accept as mistakes, but many of these cases are due to changing standards of proof. For example, when David Hilbert discovered errors in Euclid's proofs which no one had noticed before, the theorems were still true, and the gaps were due more to Hilbert being a modern mathematician thinking in terms of formal systems (which of course Euclid did not think in). Similarly, early calculus used 'infinitesimals' which were sometimes treated as being 0 and sometimes treated as an indefinitely small non-zero number; this was incoherent and, strictly speaking, practically *all* of the calculus results were wrong because they relied on an incoherent concept - but of course the results were some of the greatest mathematical work ever conducted[^Neumann], and when later mathematicians put calculus on a more rigorous footing, they immediately re-derived those results (sometimes with important qualifications). Other cases are more straightforward, with mathematicians publishing multiple proofs or covertly correcting old papers[^Nathanson].
[^Colton2]: Colton 2007: "For example, Heawood discovered a flaw in Kempe's 1879 proof of the [four colour theorem](!Wikipedia "Four colour theorem#Early proof attempts"), which had been accepted for 11 years." It would ultimately be proved with a computer in 1976 - maybe.
[^Colton1]: ["Computational Discovery in Pure Mathematics"](, Simon Colton 2007
> "A more recent example was the discovery that Andrew Wiles' original proof of Fermat's Last Theorem was flawed (but not, as it turned out, fatally flawed, as Wiles managed to fix the problem (Singh, 1997))...More recently, Larry Wos has been using Otter to find smaller proofs of theorems than the current ones. To this end, he uses Otter to find more succinct methods than those originally proposed. This often results in detecting double negations and removing unnecessary lemmas, some of which were thought to be indispensable. (Wos, 1996) presents a methodology using a strategy known as resonance to search for elegant proofs with Otter. He gives examples from mathematics and logic, and also argues that this work also has implications for other fields such as circuit design.
> (Fleuriot & Paulson, 1998) have studied the geometric proofs in Newton's _Principia_ and investigated ways to prove them automatically with the Isabelle interactive theorem prover (Paulson, 1994). To do this, they formalized the _Principia_ in both Euclidean geometry and non-standard analysis. While working through one of the key results (proposition 11 of book 1, the Kepler problem) they discovered an anomaly in the reasoning. Newton was appealing to a cross-multiplication result which wasn't true for infinitesimals or infinite numbers. Isabelle could therefore not prove the result, but Fleuriot managed to derive an alternative proof of the theorem that the system found acceptable."
[^Neumann]: John von Neumann, ["The Mathematician"]( 1947:
> "That Euclid's axiomatization does at some minor points not meet the modern requirements of absolute axiomatic rigour is of lesser importance in this respect...The first formulations of the calculus were not even mathematically rigorous. An inexact, semi-physical formulation was the only one available for over a hundred and fifty years after Newton! And yet, some of the most important advances of analysis took place during this period, against this inexact, mathematically inadequate background! Some of the leading mathematical spirits of the period were clearly not rigorous, like Euler; but others, in the main, were, like Gauss or Jacobi. The development was as confused and ambiguous as can be, and its relation to empiricism was certainly not according to our present (or Euclid's) ideas of abstraction and rigour. Yet no mathematician would want to exclude it from the fold — that period produced mathematics as first-class as ever existed! And even after the reign of rigour was essentially re-established with Cauchy, a very peculiar relapse into semi-physical methods took place with Riemann."
[^Nathanson]: ["Desperately seeking mathematical proof"]( ([arXiv](, Melvyn B. Nathanson 2009:
> The history of mathematics is full of philosophically and ethically troubling reports about bad proofs of theorems. For example, the [fundamental theorem of algebra](!Wikipedia) states that every polynomial of degree _n_ with complex coefficients has exactly _n_ complex roots. D'Alembert published a proof in 1746, and the theorem became known as "D'Alembert's theorem," but the proof was wrong. Gauss published his first proof of the fundamental theorem in 1799, but this, too, had gaps. Gauss's subsequent proofs, in 1816 and 1849, were OK. It seems to have been hard to determine if a proof of the fundamental theorem of algebra was correct. Why?
> [Poincaré](!Wikipedia "Henri Poincare") was awarded a prize from King Oscar II of Sweden and Norway for a paper on the [three-body problem](!Wikipedia), and his paper was published in _Acta Mathematica_ in 1890. But the published paper was not the prize-winning paper. The paper that won the prize contained serious mistakes, and Poincare and other mathematicians, most importantly, [Mittag-Leffler](!Wikipedia "Gösta Mittag-Leffler"), engaged in a conspiracy to suppress the truth and to replace the erroneous paper with an extensively altered and corrected one.
Case 2 is disturbing, since it is a case in which we wind up with false beliefs and also false beliefs about our beliefs (we no longer know that we don't know). Case 2 could lead to extinction.
The prevalence of case 1 might lead us to be very pessimistic; case 1, case 2, what's the difference? We have demonstrated a large error rate in mathematics (and physics is probably even worse off). Except, errors do not seem to be evenly & randomly distributed between case 1 and case 2. There seem to be far more case 1s than case 2s, as already mentioned in the early calculus example: far more than 50% of the early calculus results were correct when checked more rigorously. [Gian-Carlo Rota](!Wikipedia) gives us an example with Hilbert:
> Once more let me begin with Hilbert. When the Germans were planning to publish Hilbert's collected papers and to present him with a set on the occasion of one of his later birthdays, they realized that they could not publish the papers in their original versions because they were full of errors, some of them quite serious. Thereupon they hired a young unemployed mathematician, Olga Taussky-Todd, to go over Hilbert's papers and correct all mistakes. Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. There was one exception, a paper Hilbert wrote in his old age, which could not be fixed; it was a purported proof of the continuum hypothesis, you will find it in a volume of the _Mathematische Annalen_ of the early thirties. At last, on Hilbert's birthday, a freshly printed set of Hilbert's collected papers was presented to the Geheimrat. Hilbert leafed through them carefully and did not notice anything.^[["Ten Lessons I wish I had been Taught"](, Gian-Carlo Rota 1996]
Only one of those papers was irreparable; all the others were correct and fixable. Rota himself experienced this:
> Now let us shift to the other end of the spectrum, and allow me to relate another personal anecdote. In the summer of 1979, while attending a philosophy meeting in Pittsburgh, I was struck with a case of detached retinas. Thanks to Joni's prompt intervention, I managed to be operated on in the nick of time and my eyesight was saved. On the morning after the operation, while I was lying on a hospital bed with my eyes bandaged, Joni dropped in to visit. Since I was to remain in that Pittsburgh hospital for at least a week, we decided to write a paper. Joni fished a manuscript out of my suitcase, and I mentioned to her that the text had a few mistakes which she could help me fix. There followed twenty minutes of silence while she went through the draft. "*Why, it is all wrong!*" she finally remarked in her youthful voice. She was right. Every statement in the manuscript had something wrong. Nevertheless, after laboring for a while, she managed to correct every mistake, and the paper was eventually published.
> There are two kinds of mistakes. There are fatal mistakes that destroy a theory; but there are also contingent ones, which are useful in testing the stability of a theory.
Other times, the correct result is known and proven, but many are unaware of the answers[^enduring]. The famous [Millennium Problems](!Wikipedia) - those that have been solved, anyway - have a long history of failed proofs (Fermat surely did not prove [Fermat's Last Theorem](!Wikipedia) and neither did [Lindemann](!Wikipedia "Ferdinand von Lindemann")[^mathworld]). What explains this? The guiding factor that keeps popping up when mathematicians make leaps seems to go under the name of 'elegance' or [mathematical beauty](!Wikipedia), which is widely considered important[^Goldstein][^Erdos][^Sinclair1]. This imbalance suggests that mathematicians are quite correct when they say proofs are not the heart of mathematics and that they possess insight into math, a 6th sense for mathematical truth, a nose for aesthetic beauty which correlates with veracity: they disproportionately go after theorems rather than their negations. Why this is so I do not know. Outright Platonism like Gödel apparently believed in seems unlikely - mathematical expertise resembles a complex skill like chess-playing more than it does a sensory modality like vision. Possibly they have well-developed heuristics and short-cuts and they focus on the subsets of results on which those heuristics work well (the drunk searching under the spotlight), or perhaps they *do* run full rigorous proofs but are doing so subconsciously and merely express themselves ineptly consciously with omissions and erroneous formulations 'left as an exercise for the reader'[^Sinclair2].
[^Sinclair1]: From ["Aesthetics as a Liberating Force in Mathematics Education?"](/docs/2009-sinclair.pdf), by [Nathalie Sinclair]( (reprinted in _The Best Writing on Mathematics 2010_, ed. Mircea Pitici); pg208:
> "There is a long tradition in mathematics of describing proofs and theorems in aesthetic terms, often using words such as 'elegance' and 'depth'. Further, mathematicians have also argued that their subject is more akin to an art than it is to a science (see [Hardy, 1967](!Wikipedia "A Mathematician's Apology"); Littlewood, 1986; Sullivan 1925/1956), and, like the arts, ascribe to mathematics aesthetic goals. For example, the mathematician W. Krull ([1930/1987]( writes: "the primary goals of the mathematician are aesthetic, and not epistemological" (p. 49). This statement seems contradictory with the oft-cited concern of mathematics with finding or discovering truths, but it emphasises the fact that the mathematician's interest is in expressing truth, and in doing so in clever, simple, succinct ways.
> While Krull focuses on mathematical expression, the mathematician H. Poincare ([1908/1966]( concerns himself with the psychology of mathematical invention, but he too underlines the aesthetic dimension of mathematics, not the logical. In Poincare's theory, a large part of a mathematician's work is done at the subconscious level, where an aesthetic sensibility is responsible for alerting the mathematicians to the most fruitful and interesting of ideas. Other mathematicians have spoken of this special sensibility as well and also in terms of the way it guides mathematicians to choose certain problems. This choice is essential in mathematics given that there exists no external reality against which mathematicians can decide which problems or which branches of mathematics are important (see [von Neumann, 1947]( the choice involves human values and preference - and, indeed, these change over time, as exemplified by the dismissal of geometry by some prominent mathematicians in the early 20th century (see [Whiteley, 1999]("
> - Littlewood, 1986: "The mathematician's art of work"; in B. Bollobas (ed.), _Littlewood's miscellany_, Cambridge University press
> - Sullivan 1925/1956: "Mathematics as an art"; in J. Newman (ed.), _The world of mathematics_, vol 3 (p 2015-2021)
[^Sinclair2]: From pg 211-212, Sinclair 2009:
> "The survey of mathematicians conducted by [Wells (1990)](/docs/1990-wells.pdf) provides a more empirically-based challenge to the intrinsic view of the mathematical aesthetic. Wells obtained responses from over 80 mathematicians, who were asked to identify the most beautiful theorem from a given set of 24 theorems. (These theorems were chosen because they were 'famous', in the sense that Wells judged them to be well-known by most mathematicians, and of interest to the discipline in general, rather than to a particular subfield.) Wells finds that the mathematicians varied widely in their judgments. More interestingly, in explaining their choices, the mathematicians revealed a wide range of personal responses affecting their aesthetic responses to the theorems. Wells effectively puts to rest the belief that mathematicians have some kind of secret agreement on what counts as beautiful in mathematics....Burton's (2004) work focuses on the practices of mathematicians and their understanding of those practices. Based on extensive interviews with a wide range of mathematicians...She points out that mathematicians range on a continuum from unimportant to crucial in terms of their positionings on the role of the aesthetic, with only 3 of the 43 mathematicians dismissing its importance. For example, one said "Beauty doesn't matter. I have never seen a beautiful mathematical paper in my life" (p. 65). Another mathematician was initially dismissive about mathematical beauty but later, when speaking about the review process, said: "If it was a very elegant way of doing things, I would be inclined to forgive a lot of faults" (p. 65)."
> - Burton, Leone (2004): _Mathematicians as enquirers: Learning about learning mathematics_; Dordrecht: Kluwer Academic Publishers
[^Goldstein]: To take a random example (which could be multiplied indefinitely); from [Gödel and the Nature of Mathematical Truth: A Talk with Rebecca Goldstein]( (6.8.2005):
> "Einstein told the philosopher of science [Hans Reichenbach](!Wikipedia) that he'd known even before the solar eclipse of 1918 supported his general theory of relativity that the theory must be true because it was so beautiful. And [Hermann Weyl](!Wikipedia), who worked on both relativity theory and quantum mechanics, said "My work always tried to unite the true with the beautiful, but when I had to choose one or the other, I usually chose the beautiful."...Mathematics seems to be the one place where you don't have to choose, where truth and beauty are always united. One of my all-time favorite books is _[A Mathematician's Apology](!Wikipedia)_. [G.H. Hardy](!Wikipedia) tries to demonstrate to a general audience that mathematics is intimately about beauty. He gives as examples two proofs, one showing that the square root of 2 is irrational, the other showing that there's no largest prime number. Simple, easily graspable proofs, that stir the soul with wonder."
[^Erdos]: Nathanson 2009 claims the opposite:
> "Many mathematicians have the opposite opinion; they do not or cannot distinguish the beauty or importance of a theorem from its proof. A theorem that is first published with a long and difficult proof is highly regarded. Someone who, preferably many years later, finds a short proof is "brilliant." But if the short proof had been obtained in the beginning, the theorem might have been disparaged as an "easy result." Erdős was a genius at finding brilliantly simple proofs of deep results, but, until recently, much of his work was ignored by the mathematical establishment."
[^enduring]: An example of this would be ["An Enduring Error"](, Branko Grünbaum:
> "Mathematical truths are immutable, but mathematicians do make errors, especially when carrying out non-trivial enumerations. Some of the errors are "innocent" –– plain mistakes that get corrected as soon as an independent enumeration is carried out. For example, Daublebsky [14] in 1895 found that there are precisely 228 types of configurations (123), that is, collections of 12 lines and 12 points, each incident with three of the others. In fact, as found by Gropp [19] in 1990, the correct number is 229. Another example is provided by the enumeration of the uniform tilings of the 3-dimensional space by Andreini [1] in 1905; he claimed that there are precisely 25 types. However, as shown [20] in 1994, the correct number is 28. Andreini listed some tilings that should not have been included, and missed several others –– but again, these are simple errors easily corrected....It is surprising how errors of this type escape detection for a long time, even though there is frequent mention of the results. One example is provided by the enumeration of 4-dimensional simple polytopes with 8 facets, by Brückner [7] in 1909. He replaces this enumeration by that of 3-dimensional "diagrams" that he interpreted as Schlegel diagrams of convex 4-polytopes, and claimed that the enumeration of these objects is equivalent to that of the polytopes. However, aside from several "innocent" mistakes in his enumeration, there is a fundamental error: While to all 4-polytopes correspond 3-dimensional diagrams, there is no reason to assume that every diagram arises from a polytope. At the time of Brückner's paper, even the corresponding fact about 3-polyhedra and 2-dimensional diagrams has not yet been established –– this followed only from Steinitz's characterization of complexes that determine convex polyhedra [45], [46]. In fact, in the case considered by Brückner, the assumption is not only unjustified, but actually wrong: One of Brückner's polytopes does not exist, see [25].
> ...Polyhedra have been studied since antiquity. It is, therefore, rather surprising that even concerning some of the polyhedra known since that time there is a lot of confusion, regarding both terminology and essence. But even more unexpected is the fact that many expositions of this topic commit serious mathematical and logical errors. Moreover, this happened not once or twice, but many times over the centuries, and continues to this day in many printed and electronic publications; the most recent case is in the second issue for 2008 of this journal....With our understandings and exclusions, there are fourteen convex polyhedra that satisfy the local criterion and should be called "Archimedean", but only thirteen that satisfy the global criterion and are appropriately called "uniform" (or "semiregular"). Representatives of the thirteen uniform convex polyhedra are shown in the sources mentioned above, while the fourteenth polyhedron is illustrated in Figure 1. It satisfies the local criterion but not the global one, and therefore is – in our terminology – Archimedean but not uniform. The history of the realization that the local criterion leads to fourteen polyhedra will be discussed in the next section; it is remarkable that this development occurred only in the 20th century. This implies that prior to the twentieth century all enumerations of the polyhedra satisfying the local criterion were mistaken. Unfortunately, many later enumerations make the same error."
[^mathworld]: From _[MathWorld](!Wikipedia)_, ["Fermat's Last Theorem"](
> "Much additional progress was made over the next 150 years, but no completely general result had been obtained. Buoyed by false confidence after his proof that pi is transcendental, the mathematician Lindemann proceeded to publish several proofs of Fermat's Last Theorem, all of them invalid (Bell 1937, pp. 464-465). A prize of 100000 German marks, known as the Wolfskehl Prize, was also offered for the first valid proof (Ball and Coxeter 1987, p. 72; Barner 1997; Hoffman 1998, pp. 193-194 and 199).
> A recent false alarm for a general proof was raised by Y. Miyaoka (Cipra 1988) whose proof, however, turned out to be flawed. Other attempted proofs among both professional and amateur mathematicians are discussed by vos Savant (1993), although vos Savant erroneously claims that work on the problem by Wiles (discussed below) is invalid."
So we might forgive case 1 errors entirely: if a community of mathematicians takes an 'incorrect' proof about a particular existential risk and ratifies it (either by verifying the proof subconsciously or by seeing what their heuristics say), the full proof not being written out because that would be tedious[^Weiner], then we may put more confidence in it[^Groupthink] than a lumped-together error rate would suggest. Case 2 errors are the problem, and they can sometimes be systematic. Most dramatically, an entire group of papers with all their results can turn out to be wrong because they all relied on a since-disproved assumption:
[^Weiner]: The missing steps may be quite difficult to fully prove, though; Nathanson 2009:
> "There is a lovely but probably apocryphal anecdote about [Norbert Wiener](!Wikipedia). Teaching a class at MIT, he wrote something on the blackboard and said it was 'obvious.' One student had the temerity to ask for a proof. Wiener started pacing back and forth, staring at what he had written on the board and saying nothing. Finally, he left the room, walked to his office, closed the door, and worked. After a long absence he returned to the classroom. 'It *is* obvious', he told the class, and continued his lecture."
[^Groupthink]: What conditions count as full scrutiny by the math community may not be too clear; Nathanson 2009 trenchantly mocks math talks:
> "Social pressure often hides mistakes in proofs. In a seminar lecture, for example, when a mathematician is proving a theorem, it is technically possible to interrupt the speaker in order to ask for more explanation of the argument. Sometimes the details will be forthcoming. Other times the response will be that it's "obvious" or "clear" or "follows easily from previous results." Occasionally speakers respond to a question from the audience with a look that conveys the message that the questioner is an idiot. That's why most mathematicians sit quietly through seminars, understanding very little after the introductory remarks, and applauding politely at the end of a mostly wasted hour."
> "In the 1970s and 1980s, mathematicians discovered that framed manifolds with Arf-[Kervaire invariant](!Wikipedia) equal to 1 — oddball manifolds not surgically related to a sphere — do in fact exist in the first five dimensions on the list: 2, 6, 14, 30 and 62. A clear pattern seemed to be established, and many mathematicians felt confident that this pattern would continue in higher dimensions.
> ...Researchers developed what Ravenel calls an entire "cosmology" of conjectures based on the assumption that manifolds with Arf-Kervaire invariant equal to 1 exist in all dimensions of the form $2^n - 2$. Many called the notion that these manifolds might not exist the "Doomsday Hypothesis," as it would wipe out a large body of research. Earlier this year, Victor Snaith of the University of Sheffield in England published a book about this research, warning in the preface, "...this might turn out to be a book about things which do not exist."
> Just weeks after Snaith's book appeared, Hopkins announced on April 21 that Snaith's worst fears were justified: that Hopkins, Hill and Ravenel had proved that no manifolds of Arf-Kervaire invariant equal to 1 exist in dimensions 254 and higher. Dimension 126, the only one not covered by their analysis, remains a mystery. The new finding is convincing, even though it overturns many mathematicians' expectations, Hovey said."^[["Mathematicians solve 45-year-old Kervaire invariant puzzle"](, Erica Klarreich 2009]
The [parallel postulate](!Wikipedia "Parallel postulate#History") is another fascinating example of mathematical error of the second kind; its history is replete with false proofs even by greats like [Lagrange](!Wikipedia "Joseph Louis Lagrange") (on what strike the modern reader as bizarre grounds)[^lagrange], self-deception, and misunderstandings - [Giovanni Girolamo Saccheri](!Wikipedia) developed a non-Euclidean geometry flawlessly but concluded it was flawed:
> "The second possibility turned out to be harder to refute. In fact he was unable to derive a logical contradiction and instead derived many non-intuitive results; for example that triangles have a maximum finite area and that there is an absolute unit of length. He finally concluded that: "the hypothesis of the acute angle is absolutely false; because it is repugnant to the nature of straight lines". Today, his results are theorems of [hyperbolic geometry](!Wikipedia)."
[^lagrange]: ["Why Did Lagrange 'Prove' the Parallel Postulate?"](, Judith V. Grabiner 2009:
> "It is true that Lagrange never did publish it, so he must have realized there was something wrong. In another version of the story, told by [Jean-Baptiste Biot](!Wikipedia), who claims to have been there (though the minutes do not list his name), everybody there could see that something was wrong, so Lagrange's talk was followed by a moment of complete silence [2, p. 84]. Still, Lagrange kept the manuscript with his papers for posterity to read."
Why work on it at all?
> "The historical focus on the fifth postulate came because it felt more like the kind of thing that gets proved. It is not self-evident, it requires a diagram even to explain, so it might have seemed more as though it should be a theorem. In any case, there is a tradition of attempted proofs throughout the Greek and then Islamic and then eighteenth-century mathematical worlds. Lagrange follows many eighteenth-century mathematicians in seeing the lack of a proof of the fifth postulate as a serious defect in [Euclid's _Elements_](!Wikipedia "Euclid's Elements"). But Lagrange's criticism of the postulate in his manuscript is unusual. He said that the assumptions of geometry should be demonstrable "just by the [principle of contradiction](!Wikipedia)"—the same way, he said, that we know the axiom that the whole is greater than the part [32, p. 30R]. The theory of parallels rests on something that is not self-evident, he believed, and he wanted to do something about this."
What was this approach Lagrange used, so strange and alien to the modern mind?
> "...Recall that Lagrange said in this manuscript that axioms should follow from the principle of contradiction. But, he added, besides the principle of contradiction, "There is another principle equally self-evident," and that is Leibniz's [principle of sufficient reason](!Wikipedia). That is: nothing is true "unless there is a sufficient reason why it should be so *and not otherwise*" [42, p. 31; italics added]. This, said Lagrange, gives as solid a basis for mathematical proof as does the principle of contradiction [32, p. 30V]. But is it legitimate to use the principle of sufficient reason in mathematics? Lagrange said that we are justified in doing this, because it has already been done. For example, Archimedes [used it](!Wikipedia "/Mechanical advantage#Law of the lever") to establish that equal weights at equal distances from the fulcrum of a lever balance. Lagrange added that we also use it to show that three equal forces acting on the same point along lines separated by a third of the circumference of a circle are in equilibrium [32, pp. 31R–31V].
> ...The modern reader may object that Lagrange's symmetry arguments are, like the uniqueness of parallels, equivalent to Euclid's postulate. But the logical correctness, or lack thereof, of Lagrange's proof is not the point. (In this manuscript, by the way, Lagrange went on to give an analogous proof—also by the principle of sufficient reason—that between two points there is just one straight line, because if there were a second straight line on one side of the first, we could then draw a third straight line on the other side, and so on [32, pp. 34R–34V]. Lagrange, then, clearly liked this sort of argument.)
> ...Why did philosophers conclude that space had to be infinite, homogeneous, and the same in all directions? Effectively, because of the principle of sufficient reason. For instance, [Giordano Bruno](!Wikipedia) in 1600 argued that the universe must be infinite because there is no reason to stop at any point; the existence of an infinity of worlds is no less reasonable than the existence of a finite number of them. Descartes used similar reasoning in his _Principles of Philosophy_: "We recognize that this world. . . has no limits in its extension. . . . Wherever we imagine such limits, we . . . imagine beyond them some indefinitely extended space" [28, p. 104]. Similar arguments were used by other seventeenth-century authors, including Newton. Descartes identified space and the extension of matter, so geometry was, for him, about real physical space. But geometric space, for Descartes, had to be Euclidean...Descartes, some 50 years before Newton published his first law of motion, was a co-discoverer of what we call linear inertia: that in the absence of external influences a moving body goes in a straight line at a constant speed. Descartes called this the first law of nature, and for him, this law follows from what we now recognize as the principle of sufficient reason. Descartes said, "Nor is there any reason to think that, if [a part of matter] moves. . . and is not impeded by anything, it should ever by itself cease to move with the same force" [30, p. 75]....Leibniz, by contrast, did not believe in absolute space. He not only said that spatial relations were just the relations between bodies, he used the principle of sufficient reason to show this. If there were absolute space, there would have to be a reason to explain why two objects would be related in one way if East is in one direction and West in the opposite direction, and related in another way if East and West were reversed [24, p. 147]. 
> Surely, said Leibniz, the relation between two objects is just one thing! But Leibniz did use arguments about symmetry and sufficient reason—sufficient reason was his principle, after all. Thus, although Descartes and Leibniz did not believe in empty absolute space and Newton did, they all agreed that what I am calling the Euclidean properties of space are essential to physics.
> ...In his 1748 essay ["Reflections on Space and Time"](, Euler argued that space must be real; it cannot be just the relations between bodies as the Leibnizians claim [10]. This is because of the principles of mechanics—that is, Newton's first and second laws. These laws are beyond doubt, because of the "marvelous" agreement they have with the observed motions of bodies. The inertia of a single body, Euler said, cannot possibly depend on the behavior of other bodies. The conservation of uniform motion in the same direction makes sense, he said, only if measured with respect to immovable space, not to various other bodies. And space is not in our minds, said Euler; how can physics—real physics—depend on something in our minds? In his _[Critique of Pure Reason](!Wikipedia)_ of 1781, Kant placed space in the mind nonetheless. We order our perceptions in space, but space itself is in the mind, an intuition of the intellect. Nevertheless, Kant's space turned out to be Euclidean too. Kant argued that we need the intuition of space to prove theorems in geometry. This is because it is in space that we make the constructions necessary to prove theorems. And what theorem did Kant use as an example? The sum of the angles of a triangle is equal to two right angles, a result whose proof requires the truth of the parallel postulate [26, "Of space," p. 423]....Lagrange himself is supposed to have said that spherical trigonometry does not need Euclid's parallel postulate [4, pp. 52–53]. But the surface of a sphere, in the eighteenth-century view, is not non-Euclidean; it exists in 3-dimensional Euclidean space [20, p. 71]. The example of the sphere helps us see that the eighteenth-century discussion of the parallel postulate's relationship to the other postulates is not really about what is logically possible, but about what is true of real space.
The final step:
> "[Johann Heinrich Lambert](!Wikipedia) was one of the mathematicians who worked on the problem of Postulate 5. Lambert explicitly recognized that he had not been able to prove it, and considered that it might always have to remain a postulate. He even briefly suggested a possible geometry on a sphere with an imaginary radius. But Lambert also observed that the parallel postulate is related to the law of the lever [20, p. 75]. He said that a lever with weightless arms and with equal weights at equal distances is balanced by a force in the opposite direction at the center equal to the sum of the weights, and that all these forces are parallel. So either we are using the parallel postulate, or perhaps, Lambert thought, some day we could use this physical result to prove the parallel postulate....These men did not want to do mechanics, as, say, Newton had done. They wanted to show not only that the world was this way, but that it necessarily had to be. A modern philosophical critic, Helmut Pulte, has said that Lagrange's attempt to "reduce" mechanics to analysis strikes us today as "a misplaced endeavour to mathematize. . . an empirical science, and thus to endow it with infallibility" [39, p. 220]. Lagrange would have responded, "Right! That's just exactly what we are all doing."
Should [P=NP](!Wikipedia) or the continuum hypothesis be false, a far larger math holocaust would ensue.
# The morality of sperm donation
It is generally agreed that increasing IQ is a good thing: IQ correlates with general health, and is useful in itself. IQ [correlates with countless good outcomes]( in individuals' lives, and produces [positive]( [externalities]( Hence the appeal of charities like [iodizing salt](DNB FAQ#fn56).
I would like to suggest another charity: [sperm donation](!Wikipedia). The average IQ of sperm donors is likely lower than it could be; donations by high IQ donors would increase the average and likely would lead to, on the margin, smarter offspring. LessWrongers [claim]( to have significantly higher than average IQs, which is somewhat plausible given the abstruse material on LessWrong; absent similarly extreme filters on sperm donors, one would expect lower IQs.
How large is the margin? It's hard to be sure, but individual donors can be used in many pregnancies, so any deviation from the average is multiplied many times over. From the 5 September 2011 _New York Times_, ["One Sperm Donor, 150 Offspring"](
> "Over the years, she watched the number of children in her son’s group grow. And grow. Today there are 150 children, all conceived with sperm from one donor, in this group of half siblings, and more are on the way...While Ms. Daily’s group is among the largest, many others comprising 50 or more half siblings are cropping up on Web sites and in chat groups, where sperm donors are tagged with unique identifying numbers.
> ...No one knows how many children are born in this country each year using sperm donors. Some estimates put the number at 30,000 to 60,000, perhaps more.
> ...Sperm donors, too, are becoming concerned. “When I asked specifically how many children might result, I was told nobody knows for sure but that five would be a safe estimate,” said a sperm donor in Texas who asked that his name be withheld because of privacy concerns. “I was told that it would be very rare for a donor to have more than 10 children.” He later discovered in the Donor Sibling Registry that some donors had dozens of children listed...Ms. Kramer, the registry’s founder, said that one sperm donor on her site learned that he had 70 children. He now keeps track of them all on an Excel spreadsheet."
The average number of children per donor is likely greater than one, given that it seems unlikely there are 30,000-60,000+ active sperm donors[^donation-count]. This leads to a second benefit for the donor: they can rest assured that they have offspring, and offspring that are planned & wanted (as opposed to being the result of a one-night stand). Such a child is not a substitute for raising one's own child, of course, but consider the upside to not raising them: *not raising them*. We have all heard the estimates that to raise one child according to middle-class standards entails direct and indirect costs in the range of hundreds of thousands of dollars. Given that [the research](!Wikipedia "Positive psychology#Parenting") is not encouraging on whether raising children actually makes you happier, one has to ask - are kids really worth that? Why not let someone else raise the kid, someone who has demonstrated that they *really* want a child by not just agreeing to raise one but paying heavily upfront for a mere chance to do so? Division of labor and Pareto-improving transactions and all that.
[^donation-count]: _Salon_ 2001, ["The Rise of the Smart Sperm Shopper: How the Repository for Germinal Choice accidentally revolutionized sperm banking"](
> "This attention to consumer choice has boosted the sperm-bank industry. Banks now cater eagerly to the lesbians and single women who were rejected by old-school doctors (and by Graham). Rothman estimates that 40 percent of his clients are single women or lesbians. In 1987, the last year for which there is data (why no data? Keep reading), more than 30,000 babies were born to women who used anonymous donors. The number has almost certainly soared since then, as sperm banks have massively proliferated...What other branch of medicine could harbor a doctor like Cecil Jacobson, the fertility specialist who impregnated more than 70 women with his own semen while promising them anonymous donors?"
There is a third benefit. Surprisingly, sperm donor-assisted pregnancies result in [*1/5th* the number of birth defects](!Wikipedia "Sperm donation#Sperm_donation and reduced birth defects") as pregnancies in general. (The [CDC]( tells me that the defect rate is 1 in 33 or ~3%, and that birth defects in 2006 directly killed 5,819 infants.) Much of this significant benefit stems from the [paternal age effect](!Wikipedia): older fathers' sperm produces more birth defects and lowered IQ, and is linked to autism, among other problems. To the extent that donating displaces having future offspring at an older age, donation directly reduces birth defects and the other mentioned effects.
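To put rough numbers on the scale of that difference, here is a back-of-the-envelope sketch in the same vein as the other calculations in these notes, using only the ~3% CDC base rate and the 1/5th figure above (the names `baseRate`, `donorRate`, and `defectsAverted` are mine, introduced purely for illustration):

```haskell
-- Birth-defect rates: the CDC base rate (~1 in 33, ~3%) versus the
-- ~1/5th rate reported for donor-assisted pregnancies.
baseRate, donorRate :: Double
baseRate  = 1 / 33        -- ~3.0%
donorRate = baseRate / 5  -- ~0.6%

-- Expected defects averted if n births are donor-conceived rather than
-- occurring at the base rate.
defectsAverted :: Double -> Double
defectsAverted n = n * (baseRate - donorRate)
```

If the 30,000-60,000 annual donor-conceived births estimated above is right, that difference would come to very roughly 700-1,500 fewer birth defects a year - hypothetical arithmetic, not an empirical claim about any cohort.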
Technically, there is a fourth benefit: one may be paid. But the sums are nominal compared to the pay for an egg donation (>$5,000) and probably not worth considering.
A final speculative benefit is the storage of sperm itself. It is a rare but existing practice for women to purchase [egg freezing services](!Wikipedia "Oocyte cryopreservation") for their own eggs, to guard against loss of fertility (from age & delayed childbearing, cancer treatment, etc.). [Semen cryopreservation](!Wikipedia) is believed to work for long periods (at least 21 years so far) and presumably could also be used as a hedge.
What are the costs? As far as I can tell, if one does not seek out the children (a connection which could then be construed by a court as assuming the role of father, which has happened), there is no legal exposure to be worried about. The main cost is undergoing the testing sperm banks demand. From [Sperm Bank](
> You may approach a sperm bank directly to see if they are accepting new donors. You will be asked a number of questions over the phone. At that time, you will be asked to come in to the bank (or laboratory) for a meeting. During this first meeting, the laboratory will spend significant time with you, have you fill out a very thorough questionnaire about your own medical history and your family history. At that time they will go through their rules and procedures. Often labs will ask you, during this first visit, to produce an initial semen sample in the collection room. This initial sample is tested by the lab to see how much sperm is in the ejaculate, its quality, and how well it freezes. Most labs have private collection rooms with videos or magazine to help with production.
> Assuming the sample looks good and you meet the bank's basic criteria, you will be invited back for a full physical and to have blood drawn. At that time, you will probably be asked to produce another sample of semen and urine. These will be thoroughly tested for infectious disease, sexually transmitted diseases or genetic problems. Assuming all of these tests are completed and come back negative you will be able to start regular donations. Most often banks ask you to sign a contract agreeing to produce specimens 1-2/ week for at least 6 months. Again, each laboratory has its own requirements.
[Wikipedia](!Wikipedia "Sperm donation#Medical screening") reads us the riot act:
> "Screening includes:[5]
> - Taking a medical history of the donor, his children, siblings, parents, and grandparents etc. for three to four generations back. This is often done in conjunction with the patient's family doctor.
> - HIV risk assessment interview, asking about sexual activity and any past drug use.
> - Blood tests and urine tests for infectious diseases, such as:
>     - HIV-1/2 see sections below
>     - HTLV-1/2
>     - Hepatitis B
>     - Hepatitis C
>     - Syphilis
>     - Gonorrhea
>     - Chlamydia
>     - Cytomegalovirus (CMV) see sections below, although not all clinics test for this.
> - Blood and urine tests for blood typing and general health indicators: ABO/Rh typing, CBC, liver panel and urinalysis
> - Complete physical examination including careful examination of the penis, scrotum and testicles.
> - Genetic testing for carrier traits, for example:
>     - Cystic Fibrosis
>     - Sickle-cell disease
>     - Thalassemia
>     - Other hemoglobin-related blood disorders.
> - ...General health
> - Semen analysis for:
>     - Sperm count
>     - Morphology
>     - Motility
>     - Acrosome activity may also be tested"
All this is apparently at the sperm bank's expense (aside from one's time and patience). Hence, it may be simultaneously a cost and a benefit - the more they test, the more opportunity you have to learn about yourself.
It's not clear how many people will clear these hurdles. From _ScienceDaily_, ["Sperm Donors Valued Less Than Egg Donors"](
> "“Men donors are paid less for a much longer time commitment and a great deal of personal inconvenience,” she said. “They also are much less prepared for the emotional consequences of serving as a donor of reproductive material. Women, meanwhile, are not only paid more for a much shorter time commitment, they are repeatedly thanked for ‘giving the gift of life.’...The inequities persist despite the fact that profiles of hundreds of potential egg donors languish on agency Web sites, far outstripping recipient demand, while suitable sperm donors are quite rare, Almeling found. In fact, only a tiny fraction of the male population possesses a sperm count consistently high enough to be considered donation-worthy, and more than 90 percent of sperm bank applicants are rejected for this and other reasons. As a result, sperm banks routinely resort to finder’s fees to meet the need...In contrast, sperm banks do not pay as well or encourage such displays of gratitude. Male donors make between $50 and $75 per donation, and they are paid only when their samples meet the high fertility standards required for freezing. Over the length of their contracts — generally, an entire year — sperm donors may make as much as their female counterparts do over the course of a single six-week cycle, but only if they donate more than the required one sample per week. Invariably, however, earnings of sperm donors fell short, either because donors missed weekly sessions or their samples failed to meet fertility standards. Women also may donate as many as three times in a year, and their fees increase with each completed cycle."
Worrisomely, it sounds as if the ongoing costs are not simply blood draws (to test for STDs):
> "Moreover, men work much longer for their pay than women, and their activities are much more restricted as a result. In addition to requiring weekly donations for a year, sperm banks instruct men to refrain from sex for two days prior to donation or risk the possibility that their samples will fail to meet fertility standards. Being sick or stressed also has a negative effect on sperm count. “Even the doctors who were working with infertile couples were surprised when they learned just how demanding the process is for men,” Almeling said. “Sperm donors basically have to schedule their sex lives for a year.”"
In addition, some donors may be turned down entirely based on cosmetic features; eg. ["Sperm bank turns down redheads"](
> Ole Schou, Cryos's director, said that there had been a surge in donations in recent years, allowing the facility to become much more picky about its donors. "There are too many redheads in relation to demand," he told Danish newspaper Ekstra Bladet. "I do not think you chose a redhead, unless the partner - for example, the sterile male - has red hair, or because the lone woman has a preference for redheads. And that's perhaps not so many, especially in the latter case."
_Slate_ [in 2001](
> "Cryobanks became ever more sensitive to consumer anxiety about health and donor achievement. Today the California Cryobank—probably the world's premier sperm bank—tests for a dozen genetic disorders and for almost as many infectious diseases. Donors must complete a 38-page, three-generation medical history, and submit to months of blood testing. The cryobank accepts only college graduates or students enrolled in a four-year program. (The cryobank's offices are in Westwood, Palo Alto, and Cambridge, Mass., meaning that most of its donors hail from USC, UCLA, Stanford, Harvard, and MIT.) And donors must stand at least 5 feet 9 inches tall. By the time it weeds out the sickly, the short, and the dim, the California Cryobank accepts only 3-5% of applicants."
Sean Berkley in a 2011 _Cracked_ article, ["6 Terrifying Things Nobody Tells You About Donating Sperm"](, breaks down the harsh realities for us:
1. "You May Now Have Dozens (or Hundreds) of Children -- and They May Find You
> At my particular bank, it was $20 a pop for a closed donor and $125 for an open donor. You're allowed to donate a maximum of twice a week, so going the open route will pay upwards of $12,000 a year, certainly not a bad chunk of change. However, this comes at the expense of releasing all your personal information to parents should they (or their child) ever want to contact you...While no person who has donated sperm through a bank has ever been found liable for child support (at least not yet), you and your family are still going to have to deal with the fact that there's a child, biologically YOUR child, who wants a relationship with you...I've had more than one girl refuse to date me because I've donated sperm, and I can totally understand where they were coming from. Who wants to deal with that kind of drama?"
2. "Not Tonight, Honey, I Have to 'Work' Two Days From Now.
> As mentioned above, you have to have an above-average sperm count for the whole process to be viable, so as such, you're required to be abstinent two to three days before making a deposit. So if you're trying to maximize your profits by donating twice a week, that leaves one day per week that you can do with your genitals as you please...Even if you're only donating once a week, you will still have a set day and time each week to come in and make your deposit (sperm banks operate on 9-to-5 hours). So if a girlfriend's birthday or your anniversary happens to fall less than three days before your scheduled appointment, too bad....Your sperm count is still spot checked on each donation; if it's too low, you don't get paid for that deposit. If several donations in a row are rejected because of fledgling sperm counts, you may be asked to follow a special diet"
3. "Yes, You Can Be Legally Obligated to Masturbate
> So if your first two donations are good enough, they'll bring you on as a paid donor. However, that means you'll be required to sign a contract, usually for six months to a year, stating you'll come in at least once a week to spank the monkey. Just to make sure you follow through, your paychecks are kept in escrow by the sperm bank until the end of the contract. In the meantime, your sperm are cryogenically preserved to maximize shelf life, but not all sperm handle the freezing process well. So, your first two donations are put on ice, and at the six-month mark, they're unfrozen to check how they're doing. If your tadpoles are still kicking, congrats, here's your check. If your sperm has gone all Mr. Bigglesworth, however, sorry, hit the road. Also, there are certain delayed onset diseases that can take a few months to show up on blood screens (like HIV), so they need to test you every six months to make sure your sperm is cleared to give to parents. By withholding the money, that helps ensure donors to come back for their follow-up tests."
4. "The Staff Is Female, There Is Porn and You Will Be Interviewed
> The sperm count is where most people have trouble, since you're already required to have an above-average sperm count, and masturbation only produces about half as many sperm as having sex. Fifty to 90 percent of donors who make it this far are eliminated."
5. "They Will Need to Know Everything About You (and Your Family)
> ...In keeping with the practice of only taking the best of the best, there are 50 or so disqualifying conditions (again, depending on the bank), and something as minor as a food allergy can knock you out of the running. Also, if you've ever had an STD, you're automatically disqualified, even if it has since been cured...You must also be able to provide a detailed medical history for every parent, sibling, aunt, uncle, cousin and grandparent you have, as well as any children your siblings or cousins may have, going back *four generations*...I had an uncle who died at a relatively young age in a workplace accident and I was asked to produce a newspaper article or obituary verifying my claim. I also had one set of grandparents who both died in their late sixties from heart attacks, which naturally was a cause for concern. When I explained that they had both been lifelong smokers and drinkers, I then had to assure them that no other member of my family had a history of substance abuse, to assuage their suspicions that I might be genetically predisposed to addictions."
6. "Minorities, Runts and Gingers Need Not Apply
> ...You obviously must be male (or a very talented female), usually between 18 and 35, and live within an hour's drive of the sperm bank. Not too difficult, right? Oh, did I mention you have to be at least 6 feet tall? Yeah, turns out nobody likes shorties, least of all prospective parents....Also, you need to have a high school degree or better. The bank I went to required that you at least be enrolled in college, if not already a college graduate."
All this is suggestive and interesting, but not complete. To make a solid utilitarian case we would need to establish:
1. What *is* the average IQ or general genetic quality of donors? What is the marginal increase in each offspring?
2. What is the average number of offspring produced?
3. At what point do diminishing returns set in?
4. How costly is the testing/application process, and then how burdensome is the actual donating process?
# Reasons of State: Why Didn't Denmark Sell Greenland?
Reading one day on [Reddit](, I ran into a bit of historical trivia [on Wikipedia](!Wikipedia "Greenland#Post_World_War_II"):
> "Following World War II, the United States developed a geopolitical interest in Greenland, and in 1946 the United States offered to buy Greenland from [Denmark](!Wikipedia) for $100,000,000, but Denmark refused to sell."
The reason why the US would want to buy Greenland is clear: being able to install anti-Russian military installations such as early-warning radar and nuclear bomber bases (Greenland being fairly close, on a [great circle](!Wikipedia), to Russia)[^strategic]. The US has famously often bought territory (Louisiana Territory & Alaska being the biggest), so it was nothing new.
[^strategic]: ["National Affairs: Deepfreeze Defense"](, _Time_, 27 January 1947
> "This week, as U.S. strategists studied the azimuthal map of the Arctic (see cut), it looked as though Seward had been right about Greenland; and Lansing wrong. The U.S. frontier is now on the shore of the Arctic Ocean. Thanks to "Seward's Folly," the fortress of North America has a castellated outpost at the northwest angle in Alaska. But at the northeast angle it has only tenuous base rights, to expire with the peace.
> So long as U.S. servicemen—even radio beacon operators and weathermen—remain at Greenland outposts, the U.S. is exposed to verbal sniping from Moscow for "keeping troops on foreign soil." But with the Soviets trying to muscle in on Norway's Spitsbergen (TIME, Jan. 20), Washington military men thought this might be as good a time as any to buy Greenland, if they could.
> Greenland's 800,000 square miles make it the world's largest island and stationary aircraft carrier. It would be as valuable as Alaska during the next few years, before bombers with a 10,000-mile range are in general use. It would be invaluable, in either conventional or push-button war, as an advance radar outpost. It would be a forward position for future Rocket-launching sites. In peace or war it is the weather factory for northwest Europe, whose storms must be recorded as near the source as possible."
["Let's Buy Greenland! A complete missile-defense plan"](, _[National Review](!Wikipedia)_ May 2001:
> "It's a little-known fact that Seward also was interested in Greenland. In 1946 — long after Seward's time — the United States seems to have made a formal offer of $100 million for Greenland, according to declassified documents discovered about ten years ago in the National Archives. The purpose of the acquisition, wrote a state-department official, was to provide the United States with "valuable bases from which to launch an air counteroffensive over the Arctic area in the event of attack." Secretary of State James Byrnes suggested the idea to the Danish foreign minister, but the record does not reveal whether the Danes formally turned down the offer or just ignored it."
The reason why Denmark would not sell Greenland is... less clear. It makes little sense at first glance, or the second.
## Costs
By the economics, holding on to Greenland is a bad idea.
$100m in 1946 dollars is somewhere around $1b in 2011 dollars. Then one should consider the [opportunity cost](!Wikipedia); $100m invested in 1946 until 2011 at a 5% return is around 24x ($1.05^{2011-1946} = 23.8399$) or $2.4b. Denmark could have realized such a return just by leaving the money in the American stock market; the Dow Jones varied between 160 and 200 in 1946-1950, and over the past few years trades at 10-12,000, for a return of 60-63x.^[Stock market returns have been hurt badly by the late 2000s - if one took the old rule of thumb that stock markets return 8% over the long term, Denmark's hypothetical return would have been *149*x, not 63x ($1.08^{2011-1946} = 148.779$).]
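The compounding arithmetic can be checked directly; a minimal sketch, using only the figures assumed above (the $100m offer, the 1946-2011 window, and the 5% and 8% rates; `compound` and the two result names are mine):

```haskell
-- Growth factor of a lump sum compounded annually at rate r over n years.
compound :: Double -> Int -> Double
compound r n = (1 + r) ^ n

-- The $100m offer, compounded over 1946-2011:
conservative, optimistic :: Double
conservative = 100e6 * compound 0.05 (2011 - 1946)  -- ~$2.4b at a 5% return
optimistic   = 100e6 * compound 0.08 (2011 - 1946)  -- ~$14.9b at the old 8% rule of thumb
```

The two multipliers (~24x and ~149x) are the ones quoted in the text and footnote.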
We can deal with the [economy of Greenland](!Wikipedia) in one fell swoop: Greenland's entire GDP is around $2b. The overall trade deficit is a few hundred million dollars and has persisted for 21 years, since 1990. The official Danish subsidy was $512m in 2005. Greenland is not a going concern and would collapse within years. It's hard to estimate the total subsidy since 1946, but if we make the assumption that the subsidy is a constant percentage of Danish GDP, then the subsidy totals (ignoring opportunity cost) somewhere upwards of $3-16b[^subsidy].
[^subsidy]: This was a fairly difficult and ugly calculation; we'll have to make some gruesome assumptions and approximations here. Fortunately, the total value of Denmark's subsidy isn't *that* important to the overall argument because all the other numbers are so dismal. (Better calculations are welcome.)
[Economy of Denmark](!Wikipedia) told me that Denmark had a $188b GDP in 2005, and the Greenland article claims a $512m subsidy in 2005, so the subsidy = 0.27% of that year's GDP. A time-series of Danish GDP is hard to come by, so I ultimately went to the [Penn World Table Version 6.1]( and asked it for "Real Gross Domestic Income (RGDPL adjusted for Terms of Trade changes)" (see "5. Adjustment for Changes in the Terms of Trade: RGDPTT [20]" in their [appendix]( between 1950 and 2000. This is not the data I wanted, but it is better than nothing.
With a percentage in mind and the data available, the calculation is fairly easy:
sum $ map (0.0027 *) [8130.66, 7798.41, 8060.55, 8569.56, 8817.04, 8769.47, 8887.68,
9210.72, 9413.06, 10273.77, 10773.04, 11357.40, 11907.76, 12060.87, 13044.24,
13570.60, 13890.28, 14212.31, 14643.65, 15608.56, 15847.24, 16142.49, 16985.90,
17415.87, 16536.29, 16373.67, 17309.97, 17415.79, 17812.97, 18054.61, 17644.67,
17123.93, 17667.56, 18095.44, 18749.36, 19490.52, 20643.30, 20640.86, 21291.20,
21333.19, 21573.87, 21649.87, 21933.63, 21789.76, 22917.37, 23532.41, 24086.62,
24850.91, 25494.56, 25942.25, 26916.01]
I'm not entirely sure how to interpret this figure - is it only \$2.3b, or is it not even in millions?
So I took an alternate approach. [NationMaster]( gives a 1950 Denmark GDP per capita figure (apparently real) of \$6,683.00; the [Danish population in 1950](!Wikipedia "Danish_Demographics#Vital_statistics_since_1900_.5B1.5D") was 4,281,275, so multiplying gives \$28.6b, which seems reasonable compared with a 2005 GDP of \$188b. Assuming linear growth, we can average them and multiply by the interval ($\frac{28.6 + 188}{2} \times (2005 - 1950)$) to get a total GDP of \$5956.5b; we assumed a constant 0.27% of GDP, so $5956.5 \times 0.0027 = 16.083$ billion. That seems pretty reasonable. (Imagine working backwards: \$0.5b in 2005 + ~\$0.5b in 2004 + ~\$0.5b in 2003, and we're already closing in on a solid \$2b.)
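The footnote's second estimate reduces to two lines in the same style as the earlier snippet (all inputs are the assumptions stated above: \$28.6b GDP in 1950, \$188b in 2005, subsidy held at 0.27% of GDP; the names are mine):

```haskell
-- Trapezoidal estimate of cumulative Danish GDP 1950-2005, in $ billions,
-- assuming linear growth between the 1950 ($28.6b) and 2005 ($188b) figures.
totalGDP :: Double
totalGDP = (28.6 + 188) / 2 * (2005 - 1950)  -- = $5,956.5b

-- Cumulative Greenland subsidy, holding it at a constant 0.27% of GDP.
totalSubsidy :: Double
totalSubsidy = totalGDP * 0.0027             -- ~$16.1b
```

This reproduces the \$16b upper bound used in the text.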
Americans may find \$3 billion or even \$16 billion a pretty piddling sum given the colossal sums the US spends (>$3t on Iraq and Afghanistan, $2.5b for [B-2 Bombers](, etc.); but remember, the USA is the largest economy in the world, with 300 million people. Denmark is closer to the 30th-largest economy, and has just 5.6 million people. It might be fairer to multiply every figure by 60, to get a range from \$180b to \$960b - at which point it becomes clear that this is no laughing matter. This money could have funded major projects like the [Øresund Bridge](!Wikipedia).
## Benefits
So what can we put on the positive side of the balance-sheet? Before evaluating something we need to ask ourselves whether Danish sovereignty over Greenland *matters* - sovereignty is what Denmark was asked to sell, nothing else. If something would benefit Denmark regardless of whether it sold Greenland or not, then it *cannot* count as a benefit. We are also interested in the changes [on the margin](!Wikipedia "Margin (economics)") based on Denmark not selling.
Proceeding through possible benefits in descending order:
1. Geostrategic location
Obviously very valuable. But it is worthless to Denmark if it does not use it, which it does not. Denmark is a liberal Scandinavian state which has been neutral since the 1864 [Second Schleswig War](!Wikipedia); it is not going to launch its piratical navy out from hidden Greenland ports to raid Atlantic shipping. Denmark has never made, and will never make, significant military use of Greenland. Likewise, Denmark has failed to capture the *indirect* geostrategic value of Greenland: not only did it refuse the US offer, it then proceeded to let the US use Greenland as much as it pleased, expanding the US WWII installations into the gigantic [Thule Air Base](!Wikipedia) (linchpin of the [Strategic Air Command](!Wikipedia) nuclear bombers aimed at Russia) and agreeing to govern Thule under a [1951 treaty]( which specifically says (emphasis added):
> "Without prejudice to the sovereignty of the Kingdom of Denmark over such defense area and the natural right of the competent Danish authorities to free movement everywhere in Greenland, the Government of the United States of America, *without compensation* to the Government of the Kingdom of Denmark, shall be entitled within such defense area and the air spaces and waters adjacent thereto: ..."
When Denmark's military is active in the 20th/21st century, it is in roles where Greenland is of no possible value to it (eg. serving as [peacekeepers & advisors in Afghanistan]( What is the Danish Navy supposed to do, intervene in the [Cod Wars](!Wikipedia)? With the ending of the Cold War, Greenland's value is likewise diminished. Yes, the US would still like it for use with the anti-ballistic missile shield, but that's a preference and not an existential necessity.
2. oil/natural gas/methane
An old hypothetical. The Greenland state oil company, [NUNAOIL](!Wikipedia), is small; and if I read their [2010 report]( correctly, they lose money some years and the profitable years yield only a few million dollars. There may be a ton of oil there, but there will always be oil deposits *somewhere* which are uneconomical to extract, and Greenland seems to hold them. (When oil companies prefer working with [tar sands](!Wikipedia) to your oil, you either have very little or very hard oil.) Worse, Denmark [has agreed]( that future oil profits will go to Greenland, left-overs will only go to reduce the annual Danish subsidy, and anything past that goes back to Greenland. So under no scenario does Denmark turn an actual profit. And finally, Denmark could profit from any oil discovered regardless of whether it or the US owned Greenland - if its oil companies are competitive. Oil is completely useless as a defense of the decision not to sell.
3. the [North-West Passage](!Wikipedia)
[Territorial waters](!Wikipedia) extend out only 12 nautical miles. Even the Greenland [EEZ](!Wikipedia "File:Kingdom of Denmark EEZ.PNG") does not cut off the Passage. The only obvious way to monetize the Passage is military, which as already noted, is politically impossible for Denmark.
4. Fishing grounds
Fishing is one of the [few viable Greenland industries](!Wikipedia "Economy of Greenland"); Wikipedia says it's 72% of the 2008 exports of $485m, or $350m, from a fishery of around 5 million square miles. This is total export value, not profit. If we assume profits of 10% ($35m) or thereabouts, that's nowhere near enough to repay the subsidy or the cost of not selling, and so it does not help. (But it is definitely a brighter spot in the tally when compared with the oil/Passage/geostrategic values of 0.)
5. Zinc and lead
Those mines were closed in 1990. I didn't find any data on the profitability of the mines between 1946 and 1990; it is possible they were *so* lucrative as to justify holding onto Greenland. This seems a little unlikely, but I can't immediately dismiss it. (It wouldn't justify the subsidies past 1990, of course, but it might justify the original 1946 decision to hold onto Greenland.)
6. Political control
Greenland has been principally self-governing since 1979. This may not have been foreseeable in 1946; but even supposing Greenland had never gained home rule, how exactly would that political control be worth hundreds of millions or billions?
7. Capturing Greenland's import business
If Denmark's exports (and future benefits from postulated resource wealth) are driven by its sovereignty over Greenland, then their current moves in devolving even more power and control to the Greenland government (eg. the aforementioned oil deal) are shooting themselves in the economic foot. And if Denmark's export success isn't because of its sovereignty, all the less reason to have sovereignty! But given the fact that the direct Danish subsidy is almost the size of Greenland's entire annual imports and Greenland only imports 60% from Denmark (60% of $867m being $520m vs $500m subsidy), it is hard to see how this could ever result in a profit for Denmark.
8. Cultural value
Wikipedia does list 'handicrafts' as a major export. But just like oil and the other resources, benefiting from Greenland's craftsmen and artists does not require any kind of sovereignty or subsidization. The US doesn't own China or Canada, yet we still profit plenty and get lots of artists and craftsmen from them. Given free flow of people, the question becomes: can Denmark justify its subsidies and opportunity cost by the *marginal* increase of artists and craftsmen? Justifying by Greenland's art is dubious; by the marginal increase, even more dubious.
9. Colonization
Much of the Greenland population is Danish, unsurprisingly. One wonders if some successful colonization (the population is only 57,000 people) is a real benefit to Denmark; regardless, given that Denmark's 2010 birthrate of 1.88 children per woman is below replacement rate, any nationalists ought to focus their colonizing efforts at home.
10. Charity
Greenland is a poor country. Perhaps Denmark simply wants to help out. This is ethically reprehensible. Greenland is poor, but compared to many African countries it is fabulously wealthy, regardless of whether you take the \$20k per capita at face value or discount subsidies etc. to get something more like \$10k per capita. Investigations of African intervention estimate the cost of saving African lives at ~\$1000 a life, but let's be conservative and increase the cost by an order of magnitude to \$10,000 a life; is helping the 57,000 Greenlanders a bit ethically preferable to instead saving 50,000 African lives ($\frac{500,000,000}{10,000}$)? I do not think they are even close, and if that is Denmark's true reason, shame on them for letting distance or ethnicity warp their ethical judgement to such a freakish extreme. (It is not as if European countries sending foreign aid to Africa is some hugely novel and experimental concept. It is well-understood how to do good in Africa, and the Danish would be more competent than most at the job.)
Note that I did *not* include as a benefit 'makes Denmark bigger'. Land is not intrinsically valuable; rather, what is in it or on it is of value. One acre of Tokyo or Manhattan will buy you many acres indeed of the frozen wastes of Russia or the baked wastes of the Sahara. (The Sahara, incidentally, used to be a really nice place: the [Green Sahara](!Wikipedia); something similar happened to Mesopotamia as it went from bread basket of civilization to dusty Iraq.) If we think about deserts, we see land can be an outright liability if it leads to desertification and destructive sandstorms devastating the good bits of one's country. Russia is fairly wealthy (in aggregate, not per capita), but that is due to what its frozen wastes *contain*, not the wastes themselves; and it appears Greenland did not receive the dubious blessing of "the devil's excrement" (see [resource curse](!Wikipedia)).
## Why?
Being an outsider unfamiliar with Denmark, it is hard for me to speculate. One person tells me:
> If the national budget have to be cut, I think Greenland would rate as one of the last things Danes would like to see cut.
That's very strange. As an American, would I say Puerto Rico is one of the very last things that ought to ever be cut in the federal budget? Heck no! Puerto Rico has repeatedly decided it'd rather not be a state, but at least it's still genuinely ruled by the USA; if Puerto Rico decided to switch to full home rule, I think I'd care even less about them.
So why are the Danish so keen on Greenland? It spurns them politically, it costs them a fortune, and has no apparent advantage. Given the above quote, it may be time to abandon realpolitik, especially when we read in the 1947 _Time_:
> "There was always the objection that Denmark's national pride would stand in the way of a sale. But U.S. military men thought they had an answer: Denmark owes U.S. investors $70 million. That is less than the cost of an 850-ft. carrier for the Navy, but more dollar exchange than Copenhagen can easily raise."
One wonders why Denmark didn't, say, offer the US a 99-year lease for \$70 million, especially if they were struggling (like post-war England, one notes) to raise dollars for imports and debt-service.
What would explain the Danish placing Greenland above other sacred cows in their budget and permitting the US to establish huge military bases for free? "Sacred cow" seems to be as useful a phrase here as _Time_'s "national pride"; the last time the US bought land from Denmark was the [United States Virgin Islands](!Wikipedia) with the [Treaty of the Danish West Indies](!Wikipedia), under difficult Danish circumstances in WWI. It would not be too surprising if there were some bitter feelings about this among some Danes, regardless of what happened.
Perhaps this is the explanation: Greenland has become *symbolic*. Possession of Greenland has become a symbol of Denmark's status as a modern independent developed nation. The symbolism of sovereignty and the shared history is not threatened by US bases, nor home rule, nor economic failure, and apparently is sufficient repayment for all the money poured down that hole.
# Remote monitoring
Desire: some way to monitor freelancer's activity (if they are billing by time rather than results).
Why? This enables better compliance and turns freelancers into [less of a lemon market](!Wikipedia "The Market for Lemons") - allowing for higher salaries due to lower risk. Reportedly, such monitoring also helps with one's own akrasia - one could use it both while 'working' and 'not working', just with someone else (akin to [coffee shops]( perhaps). The idea comes from Cousin It and Richard Hollerith's <> (even if it wouldn't go as far as letting one's [life be managed](!).
Potential solutions:
1. remote desktops: screenshots or video. Requirements:
- cross-platform (Linux, Windows & Mac)
- secure (eg. using SSH for transport, especially since we already use SSH for fulltext access)
- easily toggleable on and off
Of the remote desktop protocols, only the VNC protocol is acceptable: it has many open source & proprietary cross-platform implementations for both client and server, and can be tunneled over SSH. ([Nick Tarleton]( says Macs are already accessible to VNC clients out of the box.) TightVNC seems like it would work well. (One difficulty: the natural tool to use once a VNC server is running on the remote desktop is [vncsnapshot]( which does what you think it does, but the Debian summary warns it does *not* work over SSH. [vnccapture]( may or may not work.)
2. browser URL logging (since much work takes place in browsers). Requirements:
- cross-browser (Firefox, Chrome, Safari; IE users can die in a fire)
- at a minimum, passworded
[RescueTime]( has a paid [group tracking]( set of features that seems designed for this sort of task. There are many [other Internet possibilities](!Wikipedia "Comparison of time tracking software"). (I used to use the Firefox extension PageAddict which worked well for this sort of thing but is unmaintained; the most popular maintained extension, [Leechblock](, doesn't export statistics. [about:me]( would probably work, but wouldn't be automated.)
3. Other
For example, my [sousveillance]( script; it would be trivial to set up a folder and then add a call to the script like `scp xwd-?????.png`. This should be easily implemented for Macs, but for Windows? I am sure it is doable to write some sort of batch script which integrates with Task Scheduler, but I left Windows before I wrote my first script, so I don't know how hard it would be.
The real turning point was the philosopher Peter Singer's 1975 book _Animal Liberation_, the so-called bible of the animal rights movement. 273 The sobriquet is doubly ironic because Singer is a secularist and a utilitarian, and utilitarians have been skeptical of natural rights ever since Bentham called the idea "nonsense on stilts." But following Bentham, Singer laid out a razor-sharp argument for a full consideration of the *interests* of animals, while not necessarily granting them "rights." The argument begins with the realization that it is consciousness rather than intelligence or species membership that makes a being worthy of moral consideration. It follows that we should not inflict avoidable pain on animals any more than we should inflict it on young children or the mentally handicapped. And a corollary is that we should all be vegetarians. Humans can thrive on a modern vegetarian diet, and animals' interests in a life free of pain and premature death surely outweigh the marginal increase in pleasure we get from eating their flesh. The fact that humans "naturally" eat meat, whether by cultural tradition, biological evolution, or both, is morally irrelevant.
Like Brophy, Singer made every effort to analogize the animal welfare movement to the other Rights Revolutions of the 1960s and 1970s. The analogy began with his title, an allusion to colonial liberation, women's liberation, and gay liberation, and it continued with his popularization of the term speciesism, a sibling of racism and sexism. Singer quoted an 18th-century critic of the feminist writer Mary Wollstonecraft who argued that if she was right about women, we would also have to grant rights to "brutes." The critic had intended it as a reductio ad absurdum, but Singer argued that it was a sound deduction. For Singer, these analogies are far more than rhetorical techniques. In another book, _The Expanding Circle_, he advanced a theory of moral progress in which human beings were endowed by natural selection with a kernel of empathy toward kin and allies, and have gradually extended it to wider and wider circles of living things, from family and village to clan, tribe, nation, species, and all of sentient life.274 The book you are reading owes much to this insight.
690 Pinker
And finally we get to meat. If someone were to count up every animal that has lived on earth in the past fifty years and tally the harmful acts done to them, he or she might argue that no progress has been made in the treatment of animals. The reason is that the Animal Rights Revolution has been partly canceled out by another development, the Broiler Chicken Revolution.285 The 1928 campaign slogan "A chicken in every pot" reminds us that chicken was once thought of as a luxury. The market responded by breeding meatier chickens and raising them more efficiently, if less humanely: factory-farmed chickens have spindly legs, live in cramped cages, breathe fetid air, and are handled roughly when transported and slaughtered. In the 1970s consumers became convinced that white meat was healthier than red (a trend exploited by the National Pork Board when it came up with the slogan "The Other White Meat"). And since poultry are small-brained creatures from a different biological class, many people have a vague sense that they are less fully conscious than mammals. The result was a massive increase in the demand for chicken, surpassing, by the early 1990s, the demand for beef.286 The unintended consequence was that billions more unhappy lives had to be brought into being and snuffed out to meet the demand, because it takes two hundred chickens to provide the same amount of meat as a single cow.287 Now, factory farming and cruel treatment of poultry and livestock go back centuries, so the baleful trend was not a backsliding of moral sensibilities or an increase in callousness. It was a stealthy creeping up of the numbers, driven by changes in economics and taste, which had gone undetected because a majority of people had always been incurious about the lives of chickens. The same is true, to a lesser extent, of the animals that provide us with the other white meat.
But is vegetarianism at least trending upward? As best we can tell, it is. In the U.K. the Vegetarian Society gathers up the results of every opinion poll it can find and presents the data on its information sheets. In figure 7–28 I've plotted the results of all the questions that ask a national sample of respondents whether they are vegetarians. The best-fitting straight line suggests that over the past two decades, vegetarianism has more than tripled, from about 2 percent of the population to about 7 percent. In the United States, the Vegetarian Resource Group has commissioned polling agencies to ask Americans the more stringent question of whether they eat meat, fish, or fowl, excluding the flexitarians and those with creative Linnaean taxonomies. The numbers are smaller, but the trend is similar, more than tripling in about fifteen years.
Loyalty to groups in competition, such as sports teams or political parties, encourages us to play out our instinct for dominance vicariously. Jerry Seinfeld once remarked that today's athletes churn through the rosters of sports teams so rapidly that a fan can no longer support a group of players. He is reduced to rooting for their team logo and uniforms: "You are standing and cheering and yelling for your clothes to beat the clothes from another city." But stand and cheer we do: the mood of a sports fan rises and falls with the fortunes of his team.129 The loss of boundaries can literally be assayed in the biochemistry lab. Men's testosterone level rises when their team defeats a rival in a game, just as it rises when they personally defeat a rival in a wrestling match or in singles tennis.130 It also rises or falls when a favored political candidate wins or loses an election.131
The dark side of our communal feelings is a desire for our own group to dominate another group, no matter how we feel about its members as individuals. In a set of famous experiments, the psychologist Henri Tajfel told participants that they belonged to one of two groups defined by some trivial difference, such as whether they preferred the paintings of Paul Klee or Wassily Kandinsky.132 He then gave them an opportunity to distribute money between a member of their group and a member of the other group; the members were identified only by number, and the participants themselves had nothing to gain or lose from their choice. Not only did they allocate more money to their instant group-mates, but they preferred to penalize a member of the other group (for example, seven cents for a fellow Klee fan, one cent for a Kandinsky fan) than to benefit both individuals at the expense of the experimenter (nineteen cents for a fellow Klee fan, twenty-five cents for a Kandinsky fan). A preference for one's group emerges early in life and seems to be something that must be unlearned, not learned. Developmental psychologists have shown that preschoolers profess racist attitudes that would appall their liberal parents, and that even babies prefer to interact with people of the same race and accent. 133
The only problem with Singer's metaphor is that the history of moral concern looks less like an escalator than an elevator that gets stuck on a floor for a seeming eternity, then lurches up to the next floor, gets stuck there for a while, and so on. Singer's history finds just four circle sizes in almost two and a half millennia, which works out to one ascent every 625 years. That feels a bit jerky for an escalator. Singer acknowledges the bumpiness of moral progress and attributes it to the rarity of great thinkers:
> Insofar as the timing and success of the emergence of a questioning spirit is concerned, history is a chronicle of accidents. Nevertheless, if reasoning flourishes within the confines of customary morality, progress in the long run is not accidental. From time to time, outstanding thinkers will emerge who are troubled by the boundaries that custom places on their reasoning, for it is in the nature of reasoning that it dislikes notices saying "off limits." Reasoning is inherently expansionist. It seeks universal application. Unless crushed by countervailing forces, each new application will become part of the territory of reasoning bequeathed to future generations.227
But it remains puzzling that these outstanding thinkers have appeared so rarely on the world's stage, and that the expansion of reason should have dawdled so. Why did human rationality need thousands of years to arrive at the conclusion that something might be a wee bit wrong with slavery? Or with beating children, raping unattached women, exterminating native peoples, imprisoning homosexuals, or waging wars to assuage the injured vanity of kings? It shouldn't take an Einstein to figure it out.
One possibility is that the theory of an escalator of reason is historically incorrect, and that humanity was led up the incline of moral progress by the heart rather than the head. A different possibility is that Singer is right, at least in part, but the escalator is powered not just by the sporadic appearance of outstanding thinkers but by a rise in the quality of everyone's thinking. Perhaps we're getting better because we're getting smarter.
pg 970
Luke: That paragraph and its footnote is a rough draft from Anna on the Laplacian approach to thinking about when AI might arrive.
9:42 PM There is also a link below that paragraph about the 'hope function'.
me: yes, that's familiar
9:43 PM Luke: One project, if you think it's within your particular abilities, would be to find a good reference or two where somebody else uses this Laplacian approach to try to predict something, and to write up a short summary of how one might use the hope function to do the same thing, along with a reference or two. The summary can be arbitrarily long — whatever you write will be mined and summarized to fit within the space constraints of the final chapter.
9:45 PM Another project would be to handle the sentence far below that which reads "add sentence about how this sort of improvement is not uncommon, with citations" - in the section on algorithm improvement.
(If you end up contributing enough to this chapter, we'll list you as a co-author.)
...On "statistical generalization about algorithm improvements", we aren't expecting the work required to write an original research paper on the subject. If there are no papers that achieve something that impressive, so be it, and we aren't going to write it. But perhaps a few more examples or something can be found, or at least somebody in the field who claims certain generalizations about algorithmic improvements, whom we can quote while giving certain qualifications.
...Luke: And yeah, starting tomorrow on Laplace + Hope function sounds good. Remember that what I want is an arbitrarily long write-up (doesn't have to be optimized for presentation or readability) on the subject, and then I'll worry about how to smash down your findings and my own analysis of it for the paper.
# Laplace's rule of succession, the Hope function, and Waiting for AI
["Chapter 18: the A_p_ Distribution and the Rule of Succession"](, from E.T. Jaynes's _Probability Theory: The Logic of Science_ starting pg 9
Poor old Laplace has been ridiculed for over a Century because he illustrated use of this rule by calculating the probability that the sun will rise tomorrow, given that it has risen every day for the past 5,000 years. One gets a rather large factor (odds of $5000 \times 365.2426 + 1 = 1826214 : 1$) in favor of the sun rising again tomorrow. With no exceptions at all as far as we are aware, modern writers on probability have considered this a pure absurdity. Even Keynes (1921) and Jeffreys (1939) find fault with the rule of succession. We have to confess our inability to see...
Here are some famous examples of the kind of objections to the rule of succession which you find in the literature:
1. Suppose the solidification of hydrogen to have been once accomplished. According to the rule of succession, the probability that it will solidify again if the experiment is repeated is 2/3. This does not in the least represent the state of belief of any scientist.
2. A boy is 10 years old today. According to the rule of succession, he has the probability 11/12 of living one more year. His grandfather is 70; and so according to this rule he has the probability 71/72 of living one more year. The rule violates qualitative common sense!
3. Consider the case N = n = 0. It then says that any conjecture without verification has the probability 1/2. Thus there is probability 1/2 that there are exactly 137 elephants on Mars. Also there is probability 1/2 that there are 138 elephants on Mars. Therefore, it is certain that there are at least 137 elephants on Mars. But the rule says also that there is probability 1/2 that there are no elephants on Mars. The rule is logically self-contradictory!
...The trouble with examples (1) and (2) is obvious in view of our earlier remarks; in each case, highly relevant prior information, known to all of us, was simply ignored, producing a flagrant misuse of the rule of succession. But let's look a little more closely at example (3). Wasn't the rule applied correctly here? We certainly can't claim that we had prior information about elephants on Mars which was ignored.
...In the case N = 0, we could solve the problem also by direct application of the principle of indifference, and this will of course give the same answer $P(A|X) = 1/2$ that we got from the rule of succession. But just by noting this, we see what is wrong. Merely by admitting the possibility of one of three different propositions being true, instead of only one of two, we have already specified prior information different from that used in deriving the rule of succession. If the robot is told to consider 137 different ways in which A could be false, and only one way in which it could be true, and is given no other information, then its prior probability for A is 1/138, not 1/2. So, we see that the example of elephants on Mars was, again, a gross misapplication of the rule of succession.
We give the derivation in full detail, to present a mathematical technique of Laplace that is useful in many other problems. There are $K$ different hypotheses, ${A_1, A_2,...,A_K}$, a belief that the 'causal mechanism' is constant, and no other prior information. We perform a random experiment $N$ times, and observe $A_1$ true $n_1$ times, $A_2$ true $n_2$ times, etc. Of course, $\sum_{i} n_i = N$. On the basis of this evidence, what is the probability that in the next $M = \sum_{i} m_i$ repetitions, $A_i$ will be true exactly $m_i$ times?
...In the case where we want just the probability that $A_1$ will be true on the next trial, we need this formula with $M = m_1 = 1$, all other $m_i = 0$. The result is the generalized rule of succession:
> (18-39): $p(A_1|n_1,N,K) = \frac{n_1 + 1}{N + K}$
You see that in the case $N = n_1 = 0$, this reduces to the answer provided by the principle of indifference, which it therefore contains as a special case.
...Now, use of the rule of succession in cases where $N$ is very small is rather foolish, of course. Not really wrong; just foolish. Because if we have no prior evidence about $A$, and we make such a small number of observations that we get practically no evidence; well, that's just not a very promising basis on which to do plausible reasoning. We can't expect to get anything useful out of it. We do, of course, get definite numerical values for the probabilities, but these values are very 'soft', i.e., very unstable, because the $A_p$ distribution is still very broad for small $N$. Our common sense tells us that the evidence $(n, N)$ for small $N$ provides no reliable basis for further predictions, and we'll see that this conclusion also follows as a consequence of the theory we're developing here.
The real reason for introducing the rule of succession lies in the cases where we do get a significant amount of information from the experiment; i.e., when $N$ is a large number. In this case, fortunately, we can pretty much forget about these fine points concerning prior evidence. The particular initial assignment $(A_p | X)$ will no longer have much influence on the results, for the same reason as in the particle-counter problem of Chapter 6. This remains true for the generalized case leading to (18-38). You see from (18-39) that as soon as the number of observations $N$ is large compared to the number of hypotheses $K$, then the probability assigned to any particular hypothesis depends for all practical purposes, only on what we have observed, and not on how many prior hypotheses there are. If you contemplate this for ten seconds, your common sense will tell you that the criterion $N \gg K$ is exactly the right one for this to be so.
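Jaynes's rule is easy to play with numerically. A minimal sketch using exact fractions; the sunrise trial count follows the text's $5000 \times 365.2426$ figure:

```python
from fractions import Fraction

def rule_of_succession(n, N, K=2):
    # Generalized rule (18-39): p(A_1 | n_1, N, K) = (n_1 + 1) / (N + K).
    # K = 2 recovers Laplace's original (n + 1) / (N + 2).
    return Fraction(n + 1, N + K)

# Laplace's sunrise: ~5000 years of daily sunrises, all successes.
N = round(5000 * 365.2426)   # 1,826,213 trials
p = rule_of_succession(N, N)
odds = p / (1 - p)           # odds in favor of another sunrise
print(odds)                  # 1826214, matching the text's 1826214 : 1

# N = n = 0 reduces to the principle of indifference:
print(rule_of_succession(0, 0))         # 1/2
print(rule_of_succession(0, 0, K=138))  # 1/138: the elephants-on-Mars fix
```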
# LW anchoring experiment
27 February 2012 in `#lesswrong`:
Grognor> I've been reading the highest-scoring articles, and I have noticed a pattern
Grognor> a MUCH HIGHER PROPORTION of top-scoring articles have "upvoted" in the first two words in the first comment
Grognor> (standard disclaimer: correlation is not causation blah blah blah blah blah)
Grognor> then I see this, one of the follow-ups to one of the top-scoring articles like this. the first comment says "not upvoted"
Grognor> and it has a much lower score
Grognor> while reading it, I was wondering "why is this at only 23? this is one of the best articles I've ever oh look at that comment"
Grognor> I'm definitely hitting on a real phenomenon here, probably some social thing that says "hey if he upvoted, I should too" and it seems to be incredibly strongly related to the first comment
Grognor> the proportion is really quite astounding
Grognor> compare this to the Tendencies in Reflective Equilibrium post. Compared to that, it's awful, but it has nearly the same score. Note the distinct lack of a first comment saying "not upvoted"
Grognor> (side note: I thought my article, "on saying the obvious", would have a much lower score than it did. note the first comment: "good points, all of them.")
Grognor> it seems like it's having more of an effect than I would naively predict
gwern> Grognor: hm. maybe I should register a sockpuppet and on every future article I write flip a coin and write either upvoted or downvoted
quanticle> gwern: Aren't you afraid you'll incur Goodhart's wrath?
gwern> quanticle: no, that would be if I only put in 'upvoted' comments
gwern> Grognor: do these comments tend to include any reasons?
Grognor> gwern: yes
Boxo> you suggesting that the comments cause the upvotes? I'd rather say that the post is just the kind of post that makes lots of people think as their first reaction "hell yeah imma upvote this", makes upvoting salient to them, and then some of that bubbles up to the comments
Grognor> Boxo: I'm not suggesting it's entirely that simple, no, but I do think it's obvious now that a first comment that says "upvoted for reasons x, y, and z" will cause more people to upvote than otherwise would have, and vice versa
Boxo> (ie. you saw X and Y and though X caused Y, but I think there's a Z that causes both X and Y)
ksotala> Every now and then, I catch myself wanting to upvote something because others have upvoted it already. It sounds reasonable that having an explicit comment declaring "I upvoted" might have an even stronger effect.
ksotala> On the other hand, I usually decide to up/downvote before reading the comments.
gwern> ksotala: you should turn on anti-kibitzing then
ksotala> gwern: Probably.
rmmh> gwern: maybe karma blinding like HN would help
Boxo> I guess any comment about voting could remind people to vote, in whatever direction. Could test this if you had the total number of votes per post.
Grognor> that too
Grognor> the effect here is multifarious and complicated and the intricate details could not possibly be worked out, which is exactly why this proportion of first comments with an 'upvoted' note surprises me
gwern> Boxo: I don't think that's exposed to us, no
Such an [anchoring](!Wikipedia) or [social proof](!Wikipedia) effect resulting in a [first-mover advantage](!Wikipedia) seems quite plausible to me.
So on the 27th, I registered the account ["Rhwawn"](^[If you were wondering about the account name: both 'Rhwawn' and 'Gwern' are character names from the Welsh collection _[Mabinogion](!Wikipedia)_. They share the distinctions of being short, nearly unique, and obviously pseudonymous to anyone who Googles them, which is why I also used that name as an alternate account [on Wikipedia](!Wikipedia "User:Rhwawn").]. I made some quality comments and upvotes to seed the account as a legitimate active account.
Thereafter, whenever I wrote an Article or Discussion, after making it public, I flipped a coin and if Heads, I posted a comment as Rhwawn saying only "Upvoted" or if Tails, a comment saying "Downvoted." (Grognor said that the comments came with reasons, but unfortunately if I came up with reasons for either comment, some criticisms or praise would be better than others and this would be *another* source of variability.) Needless to say, no actual vote was made. I then made a number of quality comments and votes on other Articles/Discussions to camouflage the experimental intervention. (In no case did I upvote or downvote someone I had already replied to or voted on with my [Gwern]( account.) Finally, I scheduled a reminder on my calendar for 30 days later to record the karma on that Article/Discussion. I don't post *that* often, so I decided to stop after 1 year, on 27 February 2013.
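The per-post randomization step is trivial enough to sketch in code (a hypothetical helper; the actual experiment used a physical coin flip, and the comment text is exactly the one-word comments described above):

```python
import random

def anchor_comment() -> str:
    """Fair coin flip deciding which sock-puppet comment to post on a new Article."""
    return "Upvoted" if random.random() < 0.5 else "Downvoted"
```

The point of the coin is that assignment is independent of post quality, so any systematic karma difference between the two groups can be attributed to the anchoring comment itself.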
To enlarge the sample, I passed <> through `xclip -o | grep '^by ' | cut -d ' ' -f 2 | sort | uniq -c | sort -g`, picked everyone with >=6 posts (8 people excluding me), and sent them a short message explaining my desire for a large sample and the burden of participation ("It would require perhaps half a minute to a minute of your time every time you post an Article or Discussion for the next year, which is for most of you no more than once a week or month.")
For those who replied, I sent a copy of this writeup and explained their procedure would be as follows: every time they posted they would flip a coin and post likewise (the Rhwawn account password having been shared with them); however, as a convenience to them, I would take care of recording the karma a month later. (I subscribed to participants' post RSS feeds; this would not guarantee that I would learn of their posts in time to add a randomized sock comment - hence the need for their active participation - but I could at least handle the scheduling & karma-checking for them.)
> I will have to do some contemplation of values before I accept or reject. I like getting honest feedback on my posts, I like accumulating karma, and I also like performing experiments.
Randomization suggests that your expected karma change would be 0, unless you expect an asymmetry between the positive and negative anchors.
> What do you anticipate doing with the data accumulated over the course of the experiment?
Oh, it'd be simple enough. Sort articles into one group of karma scores for the positive anchors, the other group for the negative anchors; feed into a two-sample t-test to see if the means differ and if the difference is significant. I can probably copy the R code straight from my [Zeo#vitamin-d-analysis]() sessions.
If I can hit p<0.10 or p<0.05 or so, post an Article triumphantly announcing the finding of bias and an object lesson of why one shouldn't take karma too seriously; if I don't, post a Discussion article discussing it and why I thought the results didn't reach significance. (Not enough articles? Too weak assumptions in my t-test?)
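The planned analysis can be sketched as follows. (The author intended to reuse R code; this pure-Python Welch's t-statistic is only a stand-in, and the karma scores below are invented placeholders, not experimental results.)

```python
from statistics import mean, variance

def welch_t(xs, ys):
    """Welch's t-statistic for two independent samples with unequal variances."""
    nx, ny = len(xs), len(ys)
    se2 = variance(xs) / nx + variance(ys) / ny  # squared standard error of the difference
    return (mean(xs) - mean(ys)) / se2 ** 0.5

positive_anchor = [12, 25, 8, 19]  # hypothetical 30-day karma after an "Upvoted" comment
negative_anchor = [5, 14, 3, 10]   # hypothetical 30-day karma after a "Downvoted" comment

t = welch_t(positive_anchor, negative_anchor)
# Compare |t| against the t-distribution (e.g. R's t.test or
# scipy.stats.ttest_ind) to obtain the p-value for the 0.05/0.10 threshold.
```

In R this would be a one-liner, `t.test(positive, negative)`, which defaults to the Welch unequal-variance version.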
The results:
1. negative; karma:
2. negative; karma:
3. positive; karma:
4. negative; karma:
5. positive; karma:
6. negative; karma:
7. positive; karma:
8. positive;
9. positive;
List of reasons:
negative; "Downvoted; ..."
- "too much weight on _n_ studies"
- "too many studies cited"
- "too reliant on personal anecdote"
- "too heavy on math"
- "not enough math"
- "rather obvious"
- "not very interesting"
positive; "Upvoted; ..."
- "good use of _n_ studies"
- "thorough citation of claims"
- "enjoyable anecdotes"
- "rigorous use of math"
- "just enough math"
- "not at all obvious"
- "very interesting"
# Ending Moore's law
This essay has been split out as [Slowing Moore's Law]().
# Conscientiousness and online education
[Online education](!Wikipedia) like [Khan Academy](!Wikipedia) has been hailed as a major innovation which will revolutionize higher & lower education, educate students better, and cut costs. But in general, it seems unlikely that online education will reduce all costs equally or improve all students' education equally. Hardly any change ever preserves all relative positions or ratios - someone benefits disproportionately, someone benefits only a little.
So what differentials can we expect from online education? Hoary articles from the '90s about the '[digital divide](!Wikipedia)' might make one predict that it will benefit middle and upper-class whites; but on the other hand, proponents love to talk about favored minorities (eg. a foreign black female - that is, a girl in an African village) who can now access online education through cheap cellphones, so one might predict that online education will instead level the playing field. No longer will there be a big gap between receiving essentially no education and receiving a real education, a gap that perpetuates cycles of poverty. As Internet access becomes *more* common than access to quality schools, quality school delivered through the Internet will lead to an equalizing effect (the elites will be no better off than before, and the non-elites now have the chance to obtain a prerequisite to becoming an elite).
It may help to ask what causes success in education and see how online education affects it. To a first approximation, ignoring environment, one earns educational success through:
1. [IQ](!Wikipedia)/_g_
IQ obviously predicts a huge chunk of educational success (leading to the ironic accusation that IQ tests are only academic questions) since the smarter one is, the easier learning anything is, much less one's schoolwork.
2. [Conscientiousness](About#fn25) (a personality trait in the [Big Five](!Wikipedia "Conscientiousness#Personality models"); think of hard work, grit, effort)
If one is not smart enough that one can simply inhale lessons and pass tests, one still has the option of *working hard*: doing extra practice problems, asking for help, etc. Success will not come easy, but it will still come. These 2 factors together will correlate somewhere like 0.7 with educational success: someone who is smart and hard-working will go to the top, and someone who is stupid and lazy will not.
3. Miscellaneous
The rest of the correlation is made up of socioeconomic status, culture (eg. East Asian?) and [random other things]( "'Nonshared Environment: A Theoretical, Methodological, and Quantitative Review', Turkheimer & Waldron 2000"): life events, available wealth, environmental factors like an extra-inspiring teacher, etc.
How does online education affect them - reducing the need for that factor to reach a certain level of attainment, leaving it alone, or increasing the need for that factor?
1. IQ seems like it could go any way:
- Any effects could roughly cancel out, perhaps in some sort of compensating mechanism where students only aim at particular levels of mastery or performance and better or worse methods only change how much time they need to invest before they go off to play video games.
- It could increase the need for IQ, because now all the extraneous time-wasting 'gunk' like sharpening pencils or doing roll-call can be cleared away by the technical solutions, leaving more time for pure learning. By eliminating all the environmental hindrances and variation, the only variation left will come from the student's innate intellectual abilities: IQ. Students will race through courses until they hit their natural limits; even Sal Khan's videos can't make a dim bulb calculate solutions to Schrödinger's equation.
It has been noted in the psychometric literature that successful attempts to eliminate socio-economic penalties and provide quality environments for all children would necessarily *increase* the apparent contribution of heredity: if every child is in an environment that lets them develop and flourish to their fullest extent, then any remaining differences in their development will be due to hereditary factors! If variations in IQ are the joint product of variations in heredity and environment, then eliminating all variation in environment, setting environment to 0, means the remaining variation will be just the variation in heredity.
- It could reduce the need for IQ, since online education will lead to a marketplace of lessons where only the clearest, most insightful, easily understood lessons survive. In ordinary classrooms staffed by ordinary teachers, extemporaneous lectures or explanations are necessarily more opaque and lower-quality compared to a lecture that the world-class presenter has spent months or years honing.
But it is a utopian thought that perhaps everyone will be successful at education; so the question becomes, what trait or environmental factor would then become the best predictor of attainment? If you reduce the need for brains, then perhaps you still need motivation and appetite for work, which in conjunction with the previous point about joint products leads us to the next observations...
2. Conscientiousness is the joker. There is one clear possible change: online education will *increase* demand for Conscientiousness compared to offline education.
This tallies with my personal experience with online courses and classes with online assignment components like computer science classes (where class attendance may be optional and programming projects or homework are submitted remotely). I had a good deal of trouble just sitting down to do the course or assignment, even though it was not necessarily that difficult. The distractions on my laptop beckoned: I would go use crufty old Solaris boxes in the computer labs just to avoid the distractions and get something done.
There is also some academic research supporting this suggestion; I currently know of 4 relevant studies[^Elvers-2003][^Kim-2004][^Schniederjans-2005][^Bassili-2006].
3. Miscellaneous is too varied and heterogeneous to be predictable, so we won't discuss it further.
[^Elvers-2003]: From ["Procrastination in Online Courses: Performance and Attitudinal Differences"](, Elvers et al 2003; result:
> "There were no reliable differences between the 2 sections of the class on the measures of procrastination, exam performance, or attitudes toward the class. Yet, procrastination was negatively related with exam scores and with attitudes toward the class for the online students, but not for the lecture students. This difference may partially explain why online courses designed to increase the educational efficacy of a course often show no difference in performance when compared to lecture classes."
> "If procrastination is a problem in online classes, it would be desirable to know which students are most at risk for procrastination. Instructors could then offer the at-risk students interventions designed to reduce dilatory behaviors. Watson (2001) and Schouwenburg and Lay (1995) correlated self-reported procrastination with five factors of personality. Both found a reliable relation between self-reported procrastination and low conscientiousness. Watson found a reliable relation between procrastination and neuroticism. Schouwenburg and Lay also found some, but not all, facets of neuroticism to be related to procrastination."
What did the students say and what difference was found in their scores?
> "One question asked in the end-of-semester questionnaire was whether the student disliked the class because it was easy to get behind in the class. In the online class, 19 of 21 students reported that they disliked the class because it was easy to get behind. Only 13 of 23 students in the lecture class reported that they disliked the class because it was easy to get behind.
> ...However, the magnitude of the relation between procrastination and class performance and attitudes seemed to be larger for the online class than for the traditional class. Procrastination was a good predictor of performance for each of the five tests in the class for the online students, but not a good predictor of performance for any of the five tests for the lecture students."
Finally, the quote that really sums it all up:
> "Pedagogy suggests that activities such as online discussions, group writing projects, and immediate feedback on performance should lead to better performance. Thus, students in online classes, which often contain these activities, should have better performance in the class compared to traditional lecture classes, which often lack these activities. However, this is rarely the case. Russell (1999) cited more than 300 studies that failed to find any reliable difference in performance between traditional classes and classes at a distance (including correspondence courses, online courses, and telecourses). The observation that the magnitude of the relation between procrastination and exam scores was larger in this online class than in the lecture class could be a possible explanation for these null results. The additional activities in online classes that should increase performance may do just that. However, the decrements associated with dilatory behaviors in online classes may attenuate the increments associated with the additional activities. By reducing dilatory behaviors, the benefits of online classes may become more apparent."
[^Kim-2004]: Kim & Schniederjans 2004, ["The role of personality characteristics in web-based distance education courses"](/docs/2004-kim.pdf): in its sample of 140 students, online education worked best for those high on the Wonderlic PCI Success Scales for 'Commitment to Work' ("The tendency to remain on a job for a long time, and not be undependable, irresponsible, impulsive, disorganized, or lack persistence.") and 'Learning Orientation' ("The tendency of an individual to be willing to engage in activities to acquire knowledge, skills, and behaviors and to learn new methods and procedures to improve job effectiveness, how interested they are in developing themselves, seek opportunities to learn new and different ways of doing things, and enrolled in training programs that they are likely to be active and fully engaged participants.")
[^Schniederjans-2005]: Schniederjans & Kim 2005, ["Relationship of Student Undergraduate Achievement and Personality Characteristics in a Total Web-Based Environment: An Empirical Study"](/docs/2005-schniederjans.pdf); similar to Kim & Schniederjans 2004, 260 students. It found Conscientiousness significant, but also 3 others (Openness, Neuroticism, and Agreeableness) and not Extraversion. (Neither paper seems to include any effect sizes or whether Conscientiousness out-predicts the other factors; this may be due to my inability to interpret some of the provided statistics.)
[^Bassili-2006]: Bassili 2006, ["Promotion and prevention orientations in the choice to attend lectures or watch them online"]( measured only Neuroticism and Openness, so cannot tell us anything about Conscientiousness.
Now, we discarded #3 as being impossible to generalize about, and #2 suggests that Conscientiousness will increase in its correlation with success, while to me the more plausible outcome for #1 is that it will reduce the need. But to be conservative, let's assume the need for IQ remains unchanged. This suggests the following argument:
1. Material presented in an online education format: requires the same amount of IQ to understand[^better-material-limits]
2. Material presented in an online education format: also requires more Conscientiousness than the same material presented in a classroom
3. and there are no other relevant factors; *then*
4. fewer members of the general population will be able to learn it.
[^better-material-limits]: One wonders how much hope can we place in the falsity of #1. Just how much *can* education's IQ requirements be brought down? Advocates seem optimistic, but is the current material all that bad? How dumb can you be before even the best highest-quality of calculus becomes unlearnable with feasible amounts of time and effort on your part? How close are existing online courses to this lower bound on IQ? At what point does #1 resume being true?
To belabor the obvious and dress it up in mathematical garb: for a particular static set/population Z, the number of Z members which satisfy the requirements $IQ \wedge C$ is no greater than the number which satisfy $IQ$ alone, because the fraction of the population with both the necessary IQ and the necessary Conscientiousness must be equal to or smaller than the fraction with just the necessary IQ: for any properties, $P(a \wedge b) \le P(a)$. See also the [conjunction fallacy](!Wikipedia).
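A toy simulation makes the conjunction point concrete (the thresholds and the simulated population are invented for illustration; real IQ and Conscientiousness are not independent standard normals):

```python
import random

random.seed(0)
# Simulated population: independent standard-normal IQ and Conscientiousness scores.
population = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]

iq_cut, c_cut = 0.5, 0.5  # arbitrary example thresholds
p_iq = sum(iq > iq_cut for iq, c in population) / len(population)
p_both = sum(iq > iq_cut and c > c_cut for iq, c in population) / len(population)

# The conjunction can never be more probable than either conjunct alone.
assert p_both <= p_iq
```

Whatever thresholds are chosen, adding a second requirement can only shrink (or at best preserve) the fraction of the population that qualifies.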
(A major caveat here is that the premises really do need absolute values of IQ and Conscientiousness. If you only have correlations, I believe it is possible for IQ's correlation for educational success remain the same and Conscientiousness's correlation go up while the fraction of the general population succeeding goes up *also*. For example, if online education reduced the need for Conscientiousness, but reduced the need for IQ even more, more people will pass by the opposite of our conjunctive reasoning, but any attempt to predict success will need less information about IQ and more about Conscientiousness.)
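The caveat's scenario can also be checked numerically (again with invented thresholds and an invented independent-normal population): if online education lowers the Conscientiousness bar a little while lowering the IQ bar much more, the overall pass rate rises even though passing now depends relatively more on Conscientiousness.

```python
import random

random.seed(1)
# Hypothetical population: independent standard-normal IQ and Conscientiousness.
pop = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]

def pass_rate(iq_cut, c_cut):
    """Fraction of the population clearing both thresholds."""
    return sum(iq > iq_cut and c > c_cut for iq, c in pop) / len(pop)

offline = pass_rate(1.0, 0.5)  # high IQ bar, moderate Conscientiousness bar
online = pass_rate(0.0, 0.3)   # both bars lowered, the IQ bar lowered much more
assert online > offline
```

This is only a sketch of the caveat's logic, not a claim about real effect sizes.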
Now, to discuss claim #2 in more detail. The first study cited previously on online education stressing Conscientiousness, Elvers 2003, is particularly interesting (see the quotes in the footnote). Given the evidence from this study that online education scores correlate with Conscientiousness, it seems very likely that #2 is true. However, the result that the online students had the same average as the offline students indicates that the conclusion #4 is not true; the obvious candidate to reject via modus tollens is assumption #1. As one would hope! And if #1 is not true, it could be false to a very large degree - as already mentioned, computerized education could make education a lot less correlated with your raw IQ because it's presented better or whatever (to listen to the most rapturous users of Khan Academy). However, the *equality* in scores between the online and offline classes indicates that whatever the drop in IQ requirements, it was offset by the increase in Conscientiousness requirements.
What does this tradeoff between loading on Conscientiousness and IQ suggest?
First, it suggests that blended learning will be intermediate in results: I'd expect partial online education to be 'weaker' than full online education. You have to force yourself to go to class, but then it's still easier to learn without burdening your willpower/Conscientiousness. (You can always, say, not bring your laptop to class - difficult or impossible with online education!) I'd expect the effect of non-mandatory attendance to be intermediate, much like I'd expect frequent mandatory deadlines in online education to help only a little.
Second, if one lone course shows such a hit from lack of Conscientiousness, what happens as ever more material goes online and students might be expected to do entire semesters just online? Will we see the correlation go up, as students expend all their willpower and run completely dry (see eg. Baumeister & Tierney 2010, _Willpower_)? (You may be able to lift one weight up to your head, but if given 10 weights simultaneously, you drop them all.) It seems that the tradeoff might extend well beyond a single course to all courses.
Is loading outcome more on Conscientiousness a bad thing? I think it is, for a few reasons, some of which follow directly from the tradeoff and some of which are speculation about future consequences:
1. there is no particular reason to favor Conscientiousness as an additional reward for 'good' people. Whether we should favor it over IQ depends on the *consequences* such as what mental traits we need more of in our elites.
Conscientiousness is not a 'virtue' in the sense that the (non-existent) homunculus in your brain is 'good' or 'bad' for choosing to be Conscientious or not, any more than it is morally laudable to be high IQ than low IQ. Despite [folk psychology & moralizing]( "Reliable but dumb, or smart but slapdash?"), the Big Five personality traits are stable over lifetimes like IQ, are turning out to be influenced by heredity like IQ, and progress is being made on tracing the traits to the underlying neurological factors like IQ. You can no more 'try hard to be able to try hard' (how circular) than you can try hard to be more intelligent.
2. As already observed, the school system already [rewards]( "Do elite US colleges choose personality over IQ?") Conscientious grinds, and [oppresses creativity]( "'Creativity: Asset or burden in the classroom?', Westby & Dawson 1995"). Do we need to make the former even more true? Think of how this will penalize [bright creative potential-future-great-scientists]( "Why are modern scientists so dull? How science selects for perseverance and sociability at the expense of intelligence and creativity") - but uninterested in forcing themselves to do mandated drudgework - nerds. ([Conscientiousness is necessary]( "The Personality of (great/creative) Scientists: Open and Conscientious") for scientific greatness, but not *that* much.)
3. The tradeoff resulting in online education favoring Conscientiousness was neither designed in nor realized by the designers; it is purely accidental and undesired. Wouldn't it be extraordinary if an accidental tradeoff turned out to be exactly optimal? How very convenient!
This is a good time to apply the [status-quo reversal test]( suppose online education did not result in any such tradeoff but a Khan Academy staffer unilaterally made some changes meant solely to make KA scores reflect Conscientiousness more (perhaps your progress would be deleted if you did not Conscientiously log in every week and do a few problems). Would you approve of this change? Suppose further online education actually reduced the need for Conscientiousness (maybe because the service pings your cellphone with a quick practice problem every so often); would you approve of the staffer's change then? If you would not approve in the latter scenario where the shift along the tradeoff curve is intentional, why would you approve of a shift caused accidentally?
4. The cheapness of online education may prove irresistible and a case of [worse is better](!Wikipedia): the cost of human teachers is nontrivial and may be increasing (whether this is due to backloaded pension compensation, growth of the education sector & diminishing returns, [Baumol's cost disease](!Wikipedia) etc.), and this has prompted reactions like the death of university tenure & wholesale use of adjuncts, attacks on unions, and interest in automated methods of teaching... like online education. Already cuts have begun. Even if online education is worse, there may be no choice about whether to use it or not - a sort of educational [enclosure](!Wikipedia) movement. This shift may or may not be economically efficient (if the public sector is able to force the losses onto the public which is not organized enough to avoid it, perhaps due to ideological divisions).
5. Economic growth is increasingly captured in the US by the most-educated, with income growth going mostly to graduate degree holders. So anything which may lessen the ranks of the most highly educated seems like it would exacerbate the inequality of returns to education. Are some general increases in the net wealth of the economy worth it? People do not eat absolute wealth increases, they eat relative increases - more egalitarian economies have happier populaces. (Note the same question can be asked of other 'cheaper' things like globalization and outsourcing, and the answer in those other cases is not trivial. [Pareto-efficient](!Wikipedia) does not mean everyone is better off, just that no one is worse off, and this assumes humans do not care about their rankings or place - a patently false approximation.)
What other consequences may there be?
> "This prodigious event is still on its way, still wandering; it has not yet reached the ears of men. Lightning and thunder require time, the light of the stars requires time, deeds, though done, still require time to be seen and heard. This deed is still more distant from them than the most distant stars - and yet they have done it themselves."
## External links
- [Original discussion on Google+](
- [LessWrong request for refs](