epic footnote &c.
zackmdavis committed Sep 8, 2019
1 parent fc4c173 commit 7200b26
Showing 1 changed file with 6 additions and 11 deletions.
blatant_cherry-picking_is_the_best_kind.md
@@ -12,34 +12,29 @@ On the other hand, if the reporter mentions only and exactly the flips that came

So far, so standard. (You _did_ [read the Sequences](https://www.readthesequences.com/), right??) What I'd like to _emphasize_ about this scenario today, however, is that while a Bayesian reasoner who _knows_ the non-lying reporter's algorithm of what flips to report will never be misled by the selective reporting of flips, a Bayesian with _mistaken_ beliefs about the reporter's decision algorithm can be misled _quite badly_: compare the 0.89 and 0.06 probabilities we just derived given the _same_ reported outcomes, but different assumptions about the reporting algorithm.
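
To make the contrast concrete, here's a minimal sketch of how two such posteriors can come apart. The parameters are my own illustrative assumptions (a coin that's either fair or 75%-Heads-biased with even prior odds, a report of five Heads, and, under the selective-reporting assumption, twelve total flips), chosen only to exhibit the same kind of gap as the numbers quoted above; the post's own 0.89 and 0.06 come from the setup earlier in the file, outside this excerpt.

```python
from math import comb

P_HEADS_IF_BIASED = 0.75  # assumed bias of the "biased" hypothesis
PRIOR_BIASED = 0.5        # assumed prior probability that the coin is biased

def posterior(lik_biased, lik_fair, prior=PRIOR_BIASED):
    """Bayes's theorem: P(biased | report) from the two likelihoods."""
    return lik_biased * prior / (lik_biased * prior + lik_fair * (1 - prior))

# Reporting algorithm 1: the reporter reports *every* flip,
# and all five flips came up Heads.
p_all = posterior(P_HEADS_IF_BIASED**5, 0.5**5)

# Reporting algorithm 2: the reporter watched twelve flips and reported
# *only* the Heads, so "five Heads reported" implies seven unreported Tails.
lik_biased = comb(12, 5) * P_HEADS_IF_BIASED**5 * (1 - P_HEADS_IF_BIASED)**7
lik_fair = comb(12, 5) * 0.5**12
p_sel = posterior(lik_biased, lik_fair)

print(round(p_all, 2), round(p_sel, 2))  # → 0.88 0.06
```

Same report, different assumed reporting algorithm, wildly different posteriors: everything in the calculation is held fixed except the likelihood of the report given each hypothesis.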

If the coin gets flipped a sufficiently large number of times, a reporter whom you _trust_ to be impartial (but isn't) can _make you believe anything she wants without ever telling a single lie_, just with appropriate selective reporting. Imagine a _very_ biased coin that comes up Heads 99% of the time. If it gets flipped ten thousand times, 100 of those flips will be Tails (in expectation), giving a selective reporter plenty of examples to point to if she wants to convince you that the coin is extremely Tails-biased.
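
Here's a quick simulation of that scenario (the 99% bias and the ten thousand flips are from the text above; the seed and the choice to quote the first five Tails are arbitrary):

```python
import random

random.seed(2019)  # arbitrary seed, for reproducibility

# A very biased coin: Heads 99% of the time, flipped ten thousand times.
flips = ["Heads" if random.random() < 0.99 else "Tails" for _ in range(10_000)]
tails_indices = [i for i, flip in enumerate(flips) if flip == "Tails"]

# The full record: overwhelmingly Heads.
print(f"{flips.count('Heads')} Heads, {len(tails_indices)} Tails")

# The selective reporter's record: a long list of individually true,
# individually verifiable Tails outcomes, and not one word of a lie.
print(f"Flips {tails_indices[:5]} (and {len(tails_indices) - 5} more) all came up Tails!")
```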

Toy models about biased coins are instructive for constructing examples with explicitly calculable probabilities, but the same _structure_ applies to any real-world situation where you're receiving evidence from other agents, and you have uncertainty about what algorithm is being used to determine what reports get to you. Reality is like the coin's bias; evidence and arguments are like the outcome of a particular flip. _Wrong_ theories will still have _some_ valid arguments and evidence supporting them (as even a very Heads-biased coin will come up Tails sometimes), but theories that are [_less_ wrong](https://tvtropes.org/pmwiki/pmwiki.php/Main/TitleDrop) will have _more_.

If selective reporting is mostly due to the idiosyncratic [bad intent](http://benjaminrosshoffman.com/bad-faith-behavior-not-feeling/) of rare malicious actors, then you might hope for safety in [(the law of large)](https://en.wikipedia.org/wiki/Law_of_large_numbers) numbers: if Helga in particular is systematically more likely to report Headses than Tailses that she sees, then her flip reports will diverge from everyone else's, and you can take that into account when reading Helga's reports. On the other hand, if selective reporting is mostly due to systemic _structural_ factors that result in _correlated_ selective reporting even among well-intentioned people who are being honest as best they know how,[^how] then you might have a more serious problem.

[^how]: And it turns out that knowing _how_ to be honest is _much more work_ than one might initially think. You _have_ [read the Sequences](https://www.readthesequences.com/), right?!

["A Fable of Science and Politics"](https://www.lesswrong.com/posts/6hfGNLf4Hg5DXqJCF/a-fable-of-science-and-politics) depicts a fictional underground Society polarized between two partisan factions, the Blues and the Greens. "[T]here is a 'Blue' and a 'Green' position on almost every contemporary issue of political or cultural importance." If human brains consistently understood the is/ought distinction, then political or cultural alignment with the Blue or Green agenda wouldn't distort people's beliefs about reality. Unfortunately ... humans. (I'm not even going to finish the sentence.)

Reality itself isn't on anyone's side, but any particular fact, argument, [sign, or portent](https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line) might just so happen to be more easily construed as "supporting" the Blues or the Greens. The Blues want stronger marriage laws; the Greens want no-fault divorce. An [evolutionary psychologist](https://www.lesswrong.com/posts/epZLSoNvjW53tqNj9/evolutionary-psychology) investigating [effects of kin-recognition mechanisms on child abuse by stepparents](https://en.wikipedia.org/wiki/Cinderella_effect) might aspire to scientific objectivity, but being objective and _staying_ objective is _difficult_ when you're embedded in an [intelligent social web](https://www.lesswrong.com/posts/AqbWna2S85pFTsHH4/the-intelligent-social-web) in which your work is going to be predictably championed by Blues and reviled by Greens.

Let's make another toy model to try to understand the resulting distortions on the Undergrounders' collective epistemology. Suppose Reality is a coin—no, not a coin, a three-sided die,[^triangle] with faces colored blue, green, and gray. One-third of the time it comes up blue (representing a fact that is more easily construed as supporting the Blue narrative), one-third of the time it comes up green (representing a fact that is more easily construed as supporting the Green narrative), and one-third of the time it comes up gray (representing a fact that not even the worst ideologues know how to spin as "supporting" their side).

[^triangle]: For lack of an appropriate [Platonic solid](https://en.wikipedia.org/wiki/Platonic_solid) in three-dimensional space, maybe imagine tossing a triangle in two-dimensional space??
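
The die itself is easy to simulate, and it's worth seeing the raw distribution before any faction gets its hands on the reporting channel. (The filtering rule in the second half is a hypothetical of my own, standing in for whatever consensus-enforcement mechanism the excerpt goes on to describe; the seed and roll count are arbitrary.)

```python
import random
from collections import Counter

random.seed(2019)  # arbitrary seed, for reproducibility

# Reality: the three-sided die, one-third probability per face.
rolls = random.choices(["blue", "green", "gray"], k=9_000)
print(Counter(rolls))  # roughly 3,000 of each

# A hypothetical partisan channel (my assumption, not from the text):
# pass along green and gray facts, quietly drop blue ones.
reported = [roll for roll in rolls if roll != "blue"]
print(Counter(reported))
# A reader who mistakenly assumes the channel is unfiltered sees a "reality"
# that is half green, half gray, and never blue, without a single false report.
```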

Suppose each faction enforces consensus internally. [Without loss of generality](https://en.wikipedia.org/wiki/Without_loss_of_generality), take the Greens.[^choice]

[^choice]: As an author, I'm facing some conflicting desiderata in my color choices here. I want to say "Blues and Greens" _in that order_ for compatibility with "A Fable of Science and Politics" (and other [classics from the Sequences](https://www.lesswrong.com/posts/uaPc4NHi5jGXGQKFS/blue-or-green-on-regulation)). Then, when making an arbitrary choice to talk in terms of one of the factions in order to avoid cluttering the exposition when it's understood that all the same considerations apply equally to the other faction, you might have expected me to say "Without loss of generality, take the Blues," because the _first_ item in a sequence ("Blues" in "Blues and Greens") is more of a [Schelling point](https://www.lesswrong.com/posts/yJfBzcDL9fBHJfZ6P/nash-equilibria-and-schelling-points) than the second or last item. But I don't _want_ to take the Blues, because that color choice [has other associations](http://slatestarcodex.com/2014/09/30/i-can-tolerate-anything-except-the-outgroup/) that I'm trying to avoid right now: if I said "take the Blues", I fear many readers would assume that I'm trying to directly push a partisan point about [soft censorship](https://slatestarcodex.com/2019/04/02/social-censorship-the-first-offender-model/) and [preference-falsification](https://en.wikipedia.org/wiki/Preference_falsification) social pressures in liberal/left-leaning subcultures in the contemporary United States. And, I mean, it's _true_ that soft censorship and preference-falsification social pressures in liberal/left-leaning subcultures in the contemporary United States are, historically, what _inspired_ me, personally, to write this post. It's okay for you to notice that! But I'm _trying_ to talk about the _general mechanisms_ that generate this _class_ of distortions on a Society's collective epistemology, independently of which faction or which ideology happens to be "on top" in a particular time and place.
If I'm _doing my job right_, then my analogue in a ["nearby" Everett branch](https://www.lesswrong.com/posts/WqGCaRhib42dhKWRL/if-many-worlds-had-come-first) whose local subculture was as "right-polarized" as my Berkeley environment is "left-polarized", would have written the exact same blog post, up to some of the details in this footnote. If you have any suggestions on how I can write differently in order to come closer to meeting that (very high) standard, feel free to PM me.

Anyone who's actually [paying attention](https://srconstantin.wordpress.com/2019/02/25/humans-who-are-not-concentrating-are-not-general-intelligences/) can easily distinguish Green partisans from truthseekers, but the [social-punishment machinery](http://benjaminrosshoffman.com/blame-games/) can't process more than [five words at a time](https://www.lesswrong.com/posts/4ZvJab25tDebB8FGE/you-have-about-five-words)

everyone knows

