check in
zackmdavis committed Oct 12, 2019
1 parent 343b808 commit 269caba
Showing 3 changed files with 34 additions and 20 deletions.
44 changes: 26 additions & 18 deletions algorithms_of_deception.md
@@ -1,4 +1,4 @@
### "Algorithms of Deception!"

I want you to imagine a world consisting of a sequence of independent and identically distributed random variables $X_i$, and two computer programs.

@@ -27,8 +27,7 @@ def x():
    return 4
```
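Only the final `return 4` branch of `x()` is visible in this collapsed diff view. A complete generator of the same shape might look like the following sketch (the branch probabilities here are illustrative guesses, not the post's actual numbers):

```python
import random

def x():
    # Hypothetical branch probabilities; only the final `return 4`
    # branch is actually visible in the diff above.
    spin = random.random()
    if spin < 0.4:
        return 1
    elif spin < 0.6:
        return 2
    elif spin < 0.75:
        return 3
    else:
        return 4
```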

For compatibility, we can imagine that Reporter and Audience are also written in Python. This is just for demonstration in the blog post that I'm writing—the _real_ Reporter and Audience (out there in the world I'm asking you to imagine) might be much more complicated programs written for some kind of _alien_ computer the likes of which we have not yet dreamt! But I like Python, and for the moment, we can pretend.

So pretend that Audience looks like this (where the dictionary represents a probability distribution, with the keys being random-variable outcomes and the values being probabilities):

@@ -79,7 +78,7 @@ def reporter_1(xs):
    return output
```
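The definition of `audience` is collapsed in this diff view. As an illustrative reconstruction (consistent with the dictionary-of-probabilities description above, but not necessarily the post's actual code), it might look something like this:

```python
def audience(reports):
    # Assumed behavior (the post's actual definition is collapsed above):
    # each report is a list of outcomes the Reporter claims were possible;
    # split one unit of credence equally among them, then normalize.
    beliefs = {1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}
    for report in reports:
        for outcome in report:
            beliefs[outcome] += 1 / len(report)
    total = sum(beliefs.values())
    return {outcome: credence / total for outcome, credence in beliefs.items()}
```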

It instead induces Audience to output a very different (and rather boring) distribution. It doesn't even matter how the $X_i$ turned up; the result will always be the same:
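The body of `reporter_1` is collapsed above; the behavior described (output that ignores the input entirely) could be realized by a sketch like this (the particular constant story it tells is a hypothetical choice):

```python
def reporter_1(xs):
    # A constant "story" regardless of what the data actually were.
    # (Hypothetical sketch; the post's actual body is collapsed above.)
    output = []
    for _ in xs:
        output.append([4])
    return output
```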

```
>>> audience(reporter_1([x() for _ in range(100000)]))
@@ -99,32 +98,41 @@ def reporter_2(xs):
    return output
```

While the distribution that `reporter_2` makes Audience output isn't as boring as the one we saw for `reporter_1`, it still doesn't have the Very Interesting Property of matching the distribution of the $X_i$. It comes _closer_ to having the Very Interesting Property than `reporter_1` did—notice how the _ratios_ of probabilities assigned to the first three outcomes are similar to those of the original distribution, but it's assigning way too much probability-mass to the outcome "4"—

```
>>> audience(reporter_2([x() for _ in range(100000)]))
{1: 0.3971289947471831, 2: 0.20309555314968522, 3: 0.14860259032038173, 4: 0.2516540358474678}
```
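The body of `reporter_2` is also collapsed, but the behavior described (every report true, yet "4" overrepresented) is what selective reporting produces, as in this sketch (the pass-through rate is a hypothetical number):

```python
import random

def reporter_2(xs):
    # Selective reporting: every outcome passed along really occurred,
    # but non-"4" outcomes are sometimes silently dropped, so "4" ends up
    # overrepresented. (Hypothetical sketch; the drop rate is illustrative.)
    output = []
    for x in xs:
        if x == 4 or random.random() < 0.6:
            output.append([x])
    return output
```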

So far, all of the Reporters we've imagined are still only putting one element in the inner lists of the list-of-lists that they return. But we could imagine `reporter_3`:

```
def reporter_3(xs):
    output = []
    for x in xs:
        if x in [1, 4]:
            output.append([1, 4])
        else:
            output.append([x])
    return output
```

Unlike `reporter_2` (which typically returned a list with _fewer_ elements than it received as input), the list returned by `reporter_3` has exactly as many elements as the list it took in. Yet this Reporter still prompts Audience to return a distribution with too many "4"s—and _unlike_ `reporter_2`, it doesn't even get the ratio of the other outcomes right, yielding disproportionately fewer "1"s _vs._ "2"s and "3"s compared to the original distribution—

```
>>> audience(reporter_3([x() for _ in range(100000)]))
{1: 0.2808949431909106, 2: 0.24795967354776766, 3: 0.19037045927348376, 4: 0.2808949431909106}
```

Again, I've presented Audience and the various Reporters as simple Python programs for illustration and simplicity, but the same _input-output relationships_ could be embodied by more complicated systems—perhaps an entire conscious mind which could talk.

After the inferred frequency of "4"s failed to appear, perhaps Audience would say, "I _trusted_ you, and you _lied_ to me!"

And `reporter_2` might defend itself: "How dare you accuse me of _lying_!? Sure, I'm not a perfect program free from all bias, but every outcome I reported corresponded to one of the $X_i$—I never told a mistruth."

And `reporter_3` might defend itself: "I can define a word any way I want! Unlike `reporter_2`, I'm not being selective—everything I saw, I reported."

[TODO: explain the problem where grouping the rare "4" and the common "1" into the same category artificially makes the former seem more common, if the listener cares about the difference and doesn't know the priors—probably give this line to "Audience"]
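The arithmetic behind that problem can be sketched with hypothetical numbers: grouping a rare outcome together with a common one makes a listener who splits credence equally within the category drastically overestimate the rare outcome.

```python
# Hypothetical true frequencies: "1" is common, "4" is rare.
p = {1: 0.40, 2: 0.39, 3: 0.20, 4: 0.01}

# A reporter that files both "1" and "4" under the category [1, 4] leads
# an equal-splitting listener to credit each member outcome with half the
# category's total frequency.
inferred_4 = (p[1] + p[4]) / 2  # 0.205: about twenty times the true 0.01
```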

[TODO: wrap up the moral: outright-lying, selective-reporting, and category-gerrymandering are all examples of _algorithms of deception_: ways of communicating that cause listeners to make bad predictions (compared to the listener running the same inference algorithm on honest reports). It's kind of dysfunctional to care too much about lying vs. not-lying, or conscious-lying vs. unconscious-rationalization, when the _outcomes_ are the same. (_Incentives_ matter—there's a reason vehicular manslaughter is punished differently from first-degree murder—but either way, the person is _still equally dead_.)]
8 changes: 6 additions & 2 deletions maybe_lying_doesnt_exist.md
@@ -9,9 +9,13 @@
> _I swear it_
> _I never meant to play those games_
>
> ["Deception"](https://www.youtube.com/watch?v=kQKs0eQHZRs) by Jem and the Holograms

In ["Against Lie Inflation"](https://slatestarcodex.com/2019/07/16/against-lie-inflation/), the immortal Scott Alexander argues that the word "lie" should be reserved for knowingly-made false statements, and not used in an expanded sense that includes unconscious motivated reasoning.

All of this is entirely correct. Indeed, [you can't define a word any way you want (without negative side-effects on your cognition)](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong).



2 changes: 2 additions & 0 deletions notes.txt
@@ -1,3 +1,5 @@
[TODO: elaborate and explain that `reporter_2` at least _comes closer_ than reporter_1; the output still varies with reality; it's just distorted; whereas the liar was constant]

"Maybe Lying ..."

Three Algorithms of Deception
