[{"chosen":"\n\nWhy Planning is Hard: A Multifaceted Model\n - LessWrong 2.0 viewerArchiveSequencesAboutSearchLog InQuestionsEventsShortformAlignment ForumAF CommentsHomeFeaturedAllTagsRecent CommentsWhy Planning is Hard: A Multifaceted Model\nRuby31 Mar 2019 2:33 UTC29 points9 commentsLW linkPlanning & Decision-Making\uf141Post permalinkLink without commentsLink without top nav barsLink without comments or top nav barsContentsContentsWhat is plan\u00adning? Plan\u00adning is a Pre\u00addic\u00adtion\/\u200bIn\u00adfor\u00adma\u00adtion ProblemPlan\u00adning is a Com\u00adpu\u00adta\u00adtion ProblemPlan\u00adning is a Self-Knowl\u00adedge & Self-Mastery ProblemPre\u00addict\u00ading yourselfKnow\u00ading what you wantSelf-mas\u00adtery of your hu\u00adman brainHeuris\u00adtics and biasesUs\u00ading Sys\u00adtem 1 (in\u00adtu\u00adition) and Sys\u00adtem 2 (ex\u00adplicit rea\u00adson) in harmonyEmo\u00adtional MasteryPlan\u00adning is a Re\u00adcur\u00adsive ProblemSum\u00admary: What It Takes to Be a Great PlannerEndnotesEpistemic con\u00adfi\u00addence: Highly con\u00adfi\u00addent. This post doesn\u2019t cover ev\u00adery\u00adthing rele\u00advant to the topic, but I am con\u00adfi\u00addent that ev\u00adery\u00adthing pre\u00adsented is solid.You may have no\u00adticed that plan\u00adning can be rather difficult. Granted, not all plans are difficult, plan\u00adning what you\u2019re go\u00ading to have for din\u00adner isn\u2019t too bad usu\u00adally; how\u00adever se\u00adri\u00adous plan\u00adning can range from merely daunt\u00ading to seem\u00adingly in\u00adtractable. Think of the challenge of plan\u00adning to\u00adwards a satis\u00adfy\u00ading ca\u00adreer, a fulfilling re\u00adla\u00adtion\u00adship, the suc\u00adcess of your startup, or the sim\u00adple preser\u00adva\u00adtion and flour\u00adish\u00ading of hu\u00adman civ\u00adi\u00adliza\u00adtion. 
I present here a gears-level, reductionist account of planning which makes it starkly clear why planning is hard. The point is not that we should give up because planning is futile; far from it. With a detailed model of the factors which make planning hard, we can derive a roadmap for how to get better at planning and ultimately arrive at a unified, powerful approach to making better plans.

Contents
- What is planning?
- Planning is a prediction/information problem
- Planning is a computation problem
- Planning is a self-knowledge and self-mastery problem
- Planning is a recursive problem
- Summary: what it takes to be a good planner
- Appendix: mathematical formalism of the computation problem

What is planning?

We can't talk about why planning is hard before clarifying what planning is. It's pretty simple:

Planning is the selection of actions in order to achieve desired outcomes.
[1]

The need for planning arises when the following conditions hold:
- There are multiple states that the world can be in.
- You have preferences over which states the world is in.
- There are multiple actions available to you, each of which might cause the world to more likely be in some states rather than others.
- You cannot take all of the actions, either because you lack the resources to do so or because the actions are inherently exclusive.

These conditions lead to a refinement of the definition above: planning is the selection of actions from among competing alternatives. One must prioritize among one's available options and allocate resources to whichever options rank most highly. Often one is allocating resources among multiple options in proportion to their priority, but I think we can still rightly call it prioritization even when one is only deciding between allocating 100% of one's resources to one option out of two. In other words, all of planning is prioritization. [2]

Planning is a Prediction/Information Problem

On what basis should we select or prioritize one action over another? Of course, we should select whichever actions we most expect to lead us to the worlds we most prefer. We should choose actions based on the Expected Value (EV) we assign to them.

And therein lies one of the core challenges of planning. Expecting is just another word for predicting.
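The expected-value rule just described can be sketched in a few lines of code. This is a toy illustration only; the actions, world states, probabilities, and utilities are all invented for the example, not taken from the post.

```python
# Minimal sketch of choosing actions by expected value (EV).
# Each action maps to a probability distribution over world states,
# and each state has a (hypothetical) utility for the planner.

utility = {"great_outcome": 10.0, "ok_outcome": 3.0, "bad_outcome": -5.0}

# P(state | action) -- illustrative numbers only.
outcome_dist = {
    "bold_plan":     {"great_outcome": 0.4, "ok_outcome": 0.2, "bad_outcome": 0.4},
    "cautious_plan": {"great_outcome": 0.1, "ok_outcome": 0.8, "bad_outcome": 0.1},
}

def expected_value(action):
    # EV(action) = sum over states of P(state | action) * utility(state)
    return sum(p * utility[s] for s, p in outcome_dist[action].items())

best = max(outcome_dist, key=expected_value)  # here the cautious plan edges out the bold one
```

Note that with these made-up numbers the "boring" option wins (EV 2.9 vs 2.6): the selection rule is only as good as the distributions fed into it, which is exactly the prediction problem.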
And predicting is just a word which means assigning each action a probability distribution over the world states it will result in. That's hard to do well.

Planning is hard because predicting is hard. Of course, predicting is a lot easier when you have more information, but usually we have far less than we'd like, so planning is hard because of limited information. Planning is a prediction problem and an information problem. People often think of planning as being about "doing", but in truth planning is just as much about "knowing". One thing this amounts to is that the instrumental rationality involved in planning is inseparable from the epistemic rationality of having true beliefs, good models, and making accurate predictions. Any planner's ability is going to be capped by their epistemic skill.

What we can do with this realization is that in any situation where we're planning, we can pay deliberate attention to the predictions we need to make, the information we have, and the ways in which we might be able to make better predictions. (I call this the Information Context of a plan.) Alternatively stated, one can begin to approach planning with an uncertainty reduction mindset [3].
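One toy way to put a number on the uncertainty-reduction mindset is to compare committing immediately against paying a small cost to resolve a key uncertainty first (this quantity is often called the value of information; the framing and all numbers below are my illustration, not from the post).

```python
# Toy value-of-information sketch (all numbers invented for illustration).
# A risky plan pays 100 if a key assumption holds (probability 0.5) and 0
# otherwise. A safe plan pays 40 for sure. A test costing 5 reveals the truth.

p_good = 0.5
payoff_if_good, payoff_if_bad = 100.0, 0.0
safe_payoff, test_cost = 40.0, 5.0

# Commit blindly to whichever option looks better in expectation:
ev_blind = max(p_good * payoff_if_good + (1 - p_good) * payoff_if_bad,
               safe_payoff)

# Buy information first, then pick the best option in each revealed state:
ev_with_test = (p_good * max(payoff_if_good, safe_payoff)
                + (1 - p_good) * max(payoff_if_bad, safe_payoff)
                - test_cost)

value_of_information = ev_with_test - ev_blind  # positive: the test is worth buying
```

Here spending 5 units on the test raises the expected payoff from 50 to 65, because the information changes which plan gets the remaining resources.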
Rather than allocating all of one's resources from the outset, one instead allocates a portion of the resources towards "purchasing" information which improves the subsequent allocation of the rest via improved prediction. We do this already in many cases (reading reviews, asking friends, running trials), but the occasions where we fail to do this can cost us big time: the student who didn't research before starting law school, the company that spent years developing a product before testing it with users, the couple who rushed into marriage, etc. (Not the only cause, but emotions often make us impatient to go all out with an option without adequate consideration.)

Unfortunately, while it's easy to say that getting more information and reducing uncertainty is a good thing, there's still a lot of complexity in managing uncertainty: knowing how much uncertainty is acceptable and how to reduce it efficiently. Still, always remembering that prediction is a core challenge of planning is a start.

Planning is a Computation Problem

Unfortunately, even having all the right information and perfect prediction would not be enough to make planning easy. Even when one can perfectly predict the outcome of all alternative options, planning often gives rise to intractably large computational problems which are NP-Complete.
(I provide a small mathematical treatment in this comment below.)

We are usually allocating a finite set of resources (our time, our money) among a set of options in order to accomplish a variety of goals. Example: I might care about my health, entertainment, career, friendships, romance, art, and education. Towards these values, I could in a given week: sleep, play tennis, go to the gym, watch Netflix, work late, call up my bestie, go on Tinder, draw a picture, read a textbook, etc. I get to allocate my time to some combination of activities, but the thing is, the number of possible combinations for allocating my time in a single week is mind-boggling. A super-simplified example: I have 30 discretionary hours in a week and 10 possible activities I could spend each of those hours on, but I only spend time on activities in two-hour blocks. This results in 10^(30/2) = 10^15 = one thousand trillion different combinations of how I could spend my time that week. Even if I could perfectly predict how much I'd like each combination of time usage, I could never consider them all explicitly.

This computational complexity was very salient to me in my last job as a Product Manager, where every six weeks I would choose which projects my team of Data Scientists would work on. Under ideal circumstances, I might get to choose ten projects out of a candidate twenty projects.
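Both counts above are easy to check directly (a quick sketch; `math.comb` requires Python 3.8+):

```python
import math

# 30 discretionary hours filled in 2-hour blocks, 10 candidate activities
# per block: one schedule per assignment of an activity to each block.
schedules = 10 ** (30 // 2)
print(schedules)   # 10**15 possible weekly schedules

# Choosing 10 projects out of a candidate 20 (order doesn't matter):
portfolios = math.comb(20, 10)
print(portfolios)  # 184756 distinct ten-project portfolios
```

Even the far smaller project-selection number is already too many subsets to evaluate one by one in a planning meeting.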
Choosing was hard, especially since in addition to the pure value of each project, I had to juggle the facts that:
a) projects were often not independent;
b) projects often have future costs, e.g. maintenance and tech debt;
c) there are costs to not doing or delaying some projects;
d) the benefits of different projects are spread over different time horizons;
e) projects aren't commensurate, e.g. protecting against downside risk vs building new functionality;
f) some projects are necessary to preserve future optionality;
g) budget constraints were soft: I could sometimes steal time from elsewhere if it meant I could select a better set of projects.

Even if I'd had perfect information and prediction, which I certainly didn't, the number of possibilities was large enough to be intractable by any analytical method of prioritization.

I am least certain about how to best tackle the computation problem of planning, but my current guess is that it very much relies on effectively using our instinctive, intuitive, System 1 thinking in conjunction with our System 2 thinking. System 1 is better at handling problems too large to be consciously considered, while System 2 can ensure that System 1 is paying attention to all the relevant considerations.
I like cost-benefit analyses and decision matrices not because I think they should make the final decision, but because I think the exercise of creating them loads the right information into System 1.

Planning is a Self-Knowledge & Self-Mastery Problem

In a three-fold manner, planning is a problem of self-knowledge and self-mastery.

Predicting yourself

The first problem of planning stated was that of prediction and information. With the exception of plans made for teams, organizations, countries, and the like, your plans will concern yourself and your actions. That means that invariably one of the most important things to be able to predict is yourself. Sometimes this is hard because we lack information; it might be hard to predict how you will behave or feel in novel circumstances. Other times it can be hard to predict ourselves because we are averse to making the best possible predictions we could. We refuse to admit that realistically we are not going to get up at 6am to go to the gym.

A particularly important aspect of yourself to be able to predict is your motivation.
Plans which rely on unrealistic predictions about how motivated you will feel while executing the plans are probably going to be unsuccessful plans. Advice I have for this aspect of planning: a) start paying attention to yourself and to predicting yourself; fortunately, you usually have a lot of data to work with; b) adopt a policy of radical self-honesty; what is true is already so, and good predictions are the basis of good plans.

Knowing what you want

This post started with the definition that planning is the selection of actions towards desired outcomes. It is rather important that you correctly identify your actual desired outcomes; otherwise, any actions you select aren't likely to be worth much. "I thought I wanted X, but actually I didn't" is too often the foil of a supposedly successful plan where the planner evidently lacked good self-prediction.

I'm of the belief that one's deep-down desired outcomes (personal and moral) are contained inside one's brain, regardless of whether one has achieved good conscious access to them or inferred all the consequences of them. Much of good planning is reducing uncertainty around what it is that you actually want and value.
Self-prediction is a special case of the general prediction/information problem, but it requires different techniques of uncertainty reduction than outside-world prediction does. Introspective methods such as meditation, Focusing, Internal Family Systems, etc. are helpful for having better knowledge of which outcomes you actually want.

Of course, complicating the discovery of what your desired outcomes are is that the notion of you may be a little complicated. I have been persuaded over the years that it can be very useful to think of yourself as being made up of parts or sub-agents, each with their own particular desires. Mastering yourself means coming to understand your parts and their desires, leading to an ability to make plans that satisfy all of yourself. This can be crucial, as making plans which parts of you are not on board with often causes those parts to undermine the plan.

Relatedly, "making yourself do something" should always be a warning sign that some part of you is being disregarded. Sometimes that's legitimate, but really only if you're accounting for it in your self-predictions and in how it's going to affect overall success.

Self-mastery of your human brain

Human brains are really, really good, but they're also really complicated, with a lot of different moving pieces and no user manual.
Most of us manage, but there are gains to be had from getting better at operating our own minds.

Heuristics and biases

To get better at predicting, it helps to understand when our brains natively do and don't make good predictions of their own accord. That leads directly into heuristics and biases and attempts to become a lens which sees its flaws.

Using System 1 (intuition) and System 2 (explicit reason) in harmony

Human brains run on both, and each system has its advantages and disadvantages. The human who is leveraging their brain to its fullest extent is using all of their mind in harmony, not privileging or disregarding one kind of thinking inappropriately. This is hard, but it's a key part of getting good at planning.

Emotional Mastery

Arguably emotions could be lumped under System 1, but it's worth calling them out separately. While emotions serve multiple purposes, one of the things they do is carry an important signal of information from your subconscious mind. If you feel anxious about something or have a niggling doubt or whatever, that's because a part of your mind has been processing raw data and finding it significant. The skilled planner and predictor will be someone who can extract that valuable information from their emotions.

However, emotional mastery is perhaps even more important to planning in another way.
Many, many plans are bad plans because people lapse into optimizing for their emotions rather than their actual long-term desired outcomes. A person who rushes into a plan with inadequate information because they dislike being in a state of indecision is likely choosing a worse plan because they were unable to handle their unpleasant emotions. Or consider a person who only executes extremely conservative plans because anything bold makes them feel too anxious. I believe schools of thought which help with emotional mastery, such as ACT, CBT, and DBT, have a place among the training materials for great planners.

Planning is a Recursive Problem

We've covered that planning requires identifying your options, predicting their outcomes, evaluating the goodness of said outcomes, and then somehow crunching through all the different combinations of possible actions to select the best overall set. Except planning is recursive. Any difficult plan is going to be composed of multiple sub-plans, and for each one of those sub-plans the whole process needs to be repeated: identifying options, making predictions, selecting combinations, etc.

First, this adds to the already extensive amount of computation involved in planning.
Second, it can often cause plans to span multiple domains, straining all but the most versatile generalists. Have pity for the aspiring baker whose plan was to make the best muffins in town, but who now has to figure out double-entry accounting and the US tax code. Or the physicist-cum-founder who wanted to create clean energy technologies but now needs to figure out a sales and marketing strategy too.

At a base level, the challenges of good planning remain constant between domains, e.g. making good predictions, as do the advisable steps, e.g. reducing uncertainty. Yet the concrete steps for doing these might look very different. The physicist-founder might have been very good at reducing uncertainty in the laboratory with experiments, but the feedback loop for iterating on sales strategy might be completely different, even if in both cases you're trying to reduce uncertainty.

Shifting between domains is probably a reason why people who are good at reducing uncertainty and planning in some domains are negligent in others. Arguably, the skilled planner is going to figure out the right specific techniques to achieve good plans across all the domains they touch.
In some domains, you might have lots of experts you can poll; in others it might be cheap to run experiments; in some you can build good explicit models; in others it's all about training intuition; and in yet further cases it might not be easy at all and you're left trying to draw tenuous inferences from historical examples.

Summary: What It Takes to Be a Great Planner

We can neatly summarize why planning can be so difficult with a list of all the traits one should have in order to overcome the difficulties.
- One must be excellent at making predictions across domains, helped by their mastery of epistemic skill and virtue.
- One must be skilled at identifying which information one needs yet lacks, and at executing sub-plans to efficiently obtain this information across different domains and the disparate techniques required to do so.
- One must be skilled at making choices within combinatorially explosive problems involving the selection of combinations of multiple options towards multiple goals, via the use of heuristics, excellent intuition/instinct, or some other reliable method.
- One must be able to make use of their System 1 and System 2 to think in harmony.
- One must have knowledge of their true values, goals, and desired states of the world.
- One must have knowledge of their sub-parts and alignment between them so that one does not undermine oneself.
- One must be able to predict one's own behavior, including the behavior of one's motivations and emotions, so that one can plan effectively around them.
- One must be able to reduce uncertainty about one's self-model via experiments or introspective tools.
- One must be able to listen to the information contained in their emotions.
- One must not let their emotions coerce them into plans which sate their emotions while sacrificing their overall goals.
- One must be able to plan effectively up and down the recursive tree of their plans: enumerating, evaluating, and reducing uncertainty wherever needed.

Endnotes

[1] This broad definition stands in opposition to a common definition which uses planning primarily in contexts of scheduling, e.g. plan your day, plan your week. The broader definition here is essentially synonymous with decision-making, perhaps differing only in connotation. Decisions somewhat imply a one-off choice between options, whereas planning implies selection of multiple actions to be taken over time. The term planning also, somewhat more than decision-making, highlights that there is a goal one wishes to achieve.

[2] I often hear people faced with clear prioritization tasks plead that they need more resources, that resource scarcity is the problem. Sometimes it is; sometimes the best action is to get more resources.
But often it is a fallacy to think that if you can't solve prioritization now, it will somehow be easier when you have more resources. However many resources you have, they will be finite, and you will still be able to think of things you'd like to do with even more resources, things which feel just as necessary. In short, you should get used to choosing sooner rather than later.

[3] To be technical, uncertainty reduction is about concentrating the probability mass of your prior distribution into narrower bands.

Ruby · 31 Mar 2019 2:23 UTC · 9 points

Appendix: Formalism of the Computation Problem

A simple formalism illustrates that planning quickly becomes computationally intractable, borrowing from Lee Merkhofer's Mathematical Theory for Prioritizing Projects and Optimally Allocating Capital. Assume there are m potential projects. For now, assume that the projects are independent; that is, it is reasonable to select any combination of projects, and the cost and value of any project do not depend on what other projects are selected. Define, for each project i = 1, 2, ..., m, the zero-one variable x_i.
The variable x_i is one if the project is accepted and zero if it is rejected. Let b_i be the incremental value (b for "benefit") of the i'th project and c_i be its cost. Let C be the total available budget. The goal is to select from the available projects the subset with a total cost less than or equal to C that produces the greatest possible total value.

The problem may be expressed mathematically as:

Maximize: sum_{i=1}^{m} b_i x_i
Subject to: sum_{i=1}^{m} c_i x_i <= C, and x_i = 0 or 1 for i = 1, 2, ..., m.

This is a zero-one integer optimization problem. It is NP-Complete, i.e. the time required to solve such a problem using any currently known algorithm increases rapidly as the size of the problem grows. Naturally so, because allocating resources (planning) involves combinations of actions, and combinations tend to explode. It can be okay if the number of possible actions/projects is relatively small, but remember that even 10! is already about 3.6 million. The formalism above isn't comprehensive enough to capture the full detail of real-world planning, but it should suffice to indicate that planning is often of the combinatorially explosive class.
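The problem above has the shape of the 0/1 knapsack problem, for which a standard pseudo-polynomial dynamic-programming solution exists when costs are integers. The sketch below solves a small invented instance (four candidate projects, a toy budget); it illustrates the structure of the problem, not a practical planning tool, and its table grows with the budget, so it does not escape the underlying hardness.

```python
# 0/1 knapsack: choose projects to maximize total benefit b_i within budget C.
# Classic dynamic programming over integer costs; the general problem is
# NP-complete, so no known algorithm scales well on all instances.

def select_projects(benefits, costs, budget):
    n = len(benefits)
    best = [0.0] * (budget + 1)           # best[c]: max benefit within cost c
    choice = [[False] * (budget + 1) for _ in range(n)]
    for i in range(n):
        # Iterate costs downward so each project is counted at most once.
        for c in range(budget, costs[i] - 1, -1):
            if best[c - costs[i]] + benefits[i] > best[c]:
                best[c] = best[c - costs[i]] + benefits[i]
                choice[i][c] = True
    # Recover which x_i = 1 by walking back through the choice table.
    selected, c = [], budget
    for i in range(n - 1, -1, -1):
        if choice[i][c]:
            selected.append(i)
            c -= costs[i]
    return best[budget], sorted(selected)

# Invented example: 4 candidate projects, budget of 8 units.
value, chosen = select_projects(benefits=[10, 7, 12, 4],
                                costs=[5, 3, 6, 2],
                                budget=8)
```

On this instance the optimum takes projects 0 and 1 (cost 5 + 3 = 8, benefit 17), beating the also-feasible pair of projects 2 and 3 (benefit 16).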
(If you want to see how more factors can be included, see the rest of Merkhofer's paper, where he models mutually exclusive/sequential projects, multi-period planning, and sensitivity to delay of projects.)

Note, however, that this treatment assumes that the benefits and costs are perfectly known when performing the optimization. In the real world, we only have distributions over the benefits and costs; a true formalism of real-world prioritization would be couched in statistical terms. Plus, the benefits and costs in the above formalism are scalars which can be added and compared, e.g. dollars. In the real world, the benefits and costs we weigh are of disparate types which at best have vague conversion rates between them. So you might imagine that a comprehensive formalism would deal in vectors and would include a complicated function for comparing those vectors.

The point here is not that we should attempt to create or use mathematical models in our planning, but to recognize that it is precisely this math which our brains must find some way of crunching. Understanding that this is the immense problem we are tasked with, we can start to look for ways to handle it better than our default.
And, you know, also give ourselves a bit of a break when we find planning hard.

Rohin Shah, 31 Mar 2019 22:08 UTC, 6 points (quoting "This is a zero-one integer optimization problem. It is NP-Complete"): Nitpick: Just because a problem can be formalized as a zero-one integer optimization problem doesn't mean it's NP-complete; you need to show that some NP-complete problem reduces to the planning problem. For example, the problem of finding the largest number in a set {n_i} is a zero-one optimization problem (it can be formalized as maximize ∑_i x_i n_i subject to ∑_i x_i = 1 with each x_i being zero or one) but it isn't NP-complete. That said, the problem you specified is identical to the knapsack problem, which is known to be NP-complete, so your point stands.

Ruby, 31 Mar 2019 22:11 UTC, 6 points: That's a fair nitpick, thanks. I was aware it was identical to the knapsack problem, though I do see that my phrasing implied that being a zero-one integer optimization problem automatically makes it NP-Complete. That was sloppy of me.

Swapna Rao, 23 Dec 2019 4:46 UTC, 5 points: This is so helpful. Thank you.

Rohin Shah, 31 Mar 2019 22:10 UTC, 3 points: One other aspect I would include is that even figuring out the set of actions available to you can be difficult. 
This post seems to be arguing that "thinking inside the box" is already hard; I think figuring out how to "think outside the box" is also both important and hard. Also, you'll probably enjoy Research as a Stochastic Decision Process.

Ruby, 31 Mar 2019 23:25 UTC, 1 point: That's very true. I need to think through that more and figure out how to incorporate it into my models. I think there's a lot there which is missing from here.

NaiveTortoise, 31 Mar 2019 18:15 UTC, 2 points: The discussion of planning across domains seems to ignore the fact that often the best solution to planning a project in a domain with which you're not familiar is to hire/get help from another person. Of course, once you turn planning into a multi-person activity (which I think any comprehensive model of planning should treat it as), you also need to factor in uncertainty about others' plans, which complicates the model quite a bit.

avturchin, 31 Mar 2019 10:58 UTC, 1 point: Each plan's realisation consists of several stages, which are similar across very different types of tasks: 1) Planning and data gathering about different tasks. 2) Preparation: buying instruments, collecting data for the task, registering on Tinder, etc. 3) Creating a "first draft": writing the draft, building a prototype, trying to go on a first date. 4) Perfecting the product. 
Editing, testing with users, many dates. 5) "Selling it" and getting the output. E.g. the article gets published and cited, the startup becomes a unicorn, or the relationship becomes stable. Selling means that the output of the project becomes useful for some other projects, not necessarily money or relations with other people. 6) Ending. It is the moment when you press the stop button (or you become internally "paperclippy" by producing more and more things which you do not need any more). For example, you need to stop dating once you get a wife. Stopping is not easy, as we tend to do the same things again and again. Stopping is especially difficult if the project fails and I have an option: try more, or stop trying. The most difficult here is the 5th step, selling – this is where plans tend to fail, because on the first 4 levels I just spend resources and measure the progress by my internal metrics; at the end I finally compare it with the outside world, which could be just my bigger project.

avturchin, 31 Mar 2019 10:35 UTC, 1 point: Plans (or tasks) could be presented as black boxes which consume X and output Y. X is: money; time; other material or immaterial resources (contacts, objects); information about the situation; a plan of action; opportunity cost. Y is the desired output, which consists of: the direct planned output, which is a resource for other tasks (money, capabilities, knowledge, effects on other people's opinion of my product). 
There is an important difference here between 90 per cent of a result and 100 per cent: 90 per cent is a situation where you wrote an email but didn't send it. The best outcome of a plan is a 120 per cent result. My emotions and stress in the process of implementing the plan. Collateral output: what I will learn while implementing the plan, and which other useful things I can do during it. A 120-per-cent result is partly based on getting many good secondary goals achieved simultaneously. For example, I will not only go to a conference, but will also visit my friends in the same city, practice my communication skills, and find the next conference on the topic. Risks: what very bad things could happen? Accidents, theft. Risks are not desired, but they are part of the output.","rejected":"A research workflow with Zotero and Org mode | mkbehr.com
Michael Behr, September 19, 2015

Any research project is going to involve a literature search: reading through a bunch of papers that might be relevant to your topic in order to get a sense of what the field already knows. Now, maybe there's some magic technique for picking out the information that matters, passing over the rest, and writing out a single, coherent story in one pass through all the papers you can find. 
If that\ntechnique exists, I have no idea what it is.\nSo when every paper brings up ten new questions and twenty papers to\nstart answering them, I need a system to keep my notes organized. I\nneed notes that let me jump back and forth between papers without\nlosing my place, draw links between papers, and store lists of\ncitations to come back to. Here's how I do it.\n\nStoring papers with Zotero\nThe first tool I use is Zotero, a reference\nmanager. Zotero's job is to store all the actual papers I come across,\nalong with information like data on how to cite the papers and any\ntags they might have been published with. It can grab that information\nfrom my web browser, whether from a journal's website or someplace\nlike Google Scholar or PubMed. It's also great for quickly putting\ntogether a bibliography, using bibtex or similar programs, when I want\nto write up some results.\n\n\n\nZotero stores the papers I want to\nread and reference. I scaled up the font size here to make it readable\nin a tiny blog image.\nZotero isn't the only choice for reference managers.\nMendeley is another popular choice, and\nthere are a\nwhole bunch more\nout there. I picked Zotero arbitrarily a few years ago, but it's\nworking out well because of its emacs integration.\nKeeping notes with Emacs and Org mode\nYou see, Zotero has some note-taking functions, and I used to keep my\nnotes there, but there were some problems. Notes are stored as\nseparate files for each paper, but I want to cross-reference notes\nfrom a lot of different papers at once. And while the editor has some\nrich-text capabilities (e.g. bold and italic text), it's missing\nimportant things I need in my notes, like the ability to typeset\nequations.\nThat's where Emacs and its\nextension Org mode come in. To borrow a term\nfrom Perl enthusiasts, Org mode is the swiss army chainsaw of text\ndocument formats. Org mode documents have a lot of features, and it's\nway beyond this post's scope to describe them all. 
For the purpose of\nresearch notes, the most useful things it lets me do are:\n\nI can store my notes in a hierarchical tree structure, and I can\n hide parts of the tree from view in order to focus on other parts.\nI can put hyperlinks into my notes, including links to papers,\n websites, or other parts of the file.\nI can put math in my notes using Latex, and view the typeset\n equations right in my Emacs buffer.\n\n\n\n\nA sample from my notes file. You\ncan see the tree structure of the file, some links to papers, and a\nlittle bit of inline math, using Latex.\nGluing it all together with zotxt\nNow, see those links to papers in my notes buffer? I didn't have to\ncopy and paste them from anywhere. I inserted them with just three\nkeystrokes each. So far, I've just described some useful pieces of\nsoftware, but the interesting part of my workflow is how they fit\ntogether.\nzotxt is an extension that lets\nother programs talk to Zotero, and Emacs has a package to talk to it.\nIt's even structured specifically to work with Org mode documents.\nWith zotxt, my workflow looks like this:\n\nI find a paper I want to look at somewhere on the internet.\nI use Zotero's browser plugin to save it to Zotero. Hopefully it\n grabs the paper itself and this happens in one click; if the site\n doesn't play along, I spend a minute grabbing a pdf and feeding it\n to Zotero.\nI insert a link to the Zotero entry into my notes file in Emacs. I\n can do this with the key chords C-c \" \". I don't need to further\n specify what paper I want to grab: the browser plugin leaves the\n paper selected in Zotero, and zotxt can grab the selected paper.\nWhen I want to read the paper, I go to the link and tell Emacs to\n open the paper in my system PDF viewer. 
The key chords for this are\n C-c \" a, and then selecting the PDF attachment from the Helm\n window that appears (usually I just type pdf RET).\nWhen I'm reading a paper and see a citation that might be useful, I\n look it up on the internet and repeat this process to store a note\n linking to it.\n\nIt took me a while to get it set up to my liking, so here's how I did\nit:\n\nFirst, install zotxt. If you're\n using Zotero as a firefox extension, you just need to install zotxt\n as another extension. If you're using the standalone Zotero client,\n you can still do it: download the extension file from that link,\n then go to the Add-Ons Manager under the Tools menu and find the\n option to install an add-on from a file.\n\n\n\n\nThe menu option looks like\nthis.\n\nNext, install the zotxt package in emacs. If your\n package manager is set up, you\n can just type M-x package-install RET zotxt RET.\nNow, when org-zotxt-mode is active, you can use its functions in\n your org-mode buffers. You can search for papers and insert them\n with C-c \" i, insert the currently-selected paper in Zotero with\n C-u C-c \" i, and open a paper's PDF or other related files by\n moving the cursor to a link and typing C-c \" a. However, you might\n want a little bit more setup to deal with some annoyances.\nYou probably want to have org-zotxt-mode automatically activated\n in all your org-mode documents. To make that happen, you can add\n some code to your .emacs file to start up this mode on all your\n org-mode buffers - see below this list for the .emacs\n configuration I use.\nIf you want to insert a link to the currently-selected item a lot,\n C-u C-c \" i is an awkward sequence to type. I rebound it to C-c \"\n \".\nYou might notice that when you insert a link to a paper, the text of\n that link is a full citation. That might be what you want, but I\n just want the authors, paper name, and year. 
It took me a bit of\n hacking to get around that: it's possible to tell the emacs zotxt\n interface to use a different citation format than the default, but I\n had to throw together a little XML file to give it a shorter format\n than a full citation. (This may not be the easiest or cleanest way\n to do it, but it works!)\n That XML file is here. To use it, go into\n your Zotero preferences and select Cite -> Styles, and add the file.\n It should appear in the menu as \"mkbehr's short reference format\".\n Then add the last two lines in the .emacs snippet below, and you\n should get shorter citations.\nYou probably want to install the\n Helm package, to make zotxt's\n search interface easier to navigate. That link should tell you\n everything you need to know.\n\nHere's that .emacs setup code:\n;; Activate org-zotxt-mode in org-mode buffers\n(add-hook 'org-mode-hook (lambda () (org-zotxt-mode 1)))\n;; Bind something to replace the awkward C-u C-c \" i\n(define-key org-mode-map\n (kbd \"C-c \\\" \\\"\") (lambda () (interactive)\n (org-zotxt-insert-reference-link '(4))))\n;; Change citation format to be less cumbersome in files.\n;; You'll need to install mkbehr-short into your style manager first.\n(eval-after-load \"zotxt\"\n'(setq zotxt-default-bibliography-style \"mkbehr-short\"))\n\nOf course, I'm not done tinkering to make my workflow better. I hear\ngood things about the org-ref\nand helm-bibtex\npackages - if only I can keep an up-to-date bibtex file as I add papers\nto my library, I can associate links with not only a paper's pdf, but\nalso that paper's section of my notes file. And I haven't found a\nsmooth way to take a paper and pull up the papers it cites in my\nbrowser. 
But until then, I'm pretty happy with this setup.\nHappy researching!\n\nemacs\nresearch\n\nContents © 2015 Michael Behr - Powered by Nikola\n"},{"chosen":"Synchronous and Asynchronous Data Transmission | Computer Science
Synchronous Data Transmission

In synchronous data transmission, data moves in a completely paired approach, in the form of chunks or frames. Synchronisation between the source and target is required so that the target knows where a new byte begins, since there are no spaces included between the data. Synchronous transmission is effective, dependable, and often utilised for transmitting large amounts of data. It offers real-time communication between linked devices. An example of synchronous transmission would be the transfer of a large text file. Before the file is transmitted, it is first dissected into blocks of sentences. The blocks are then transferred over the communication link to the target location. Because there are no start and stop bits, the data transfer rate is quicker, but there is an increased possibility of errors occurring. Over time the clocks will drift out of sync, so the target device would have the incorrect timing and some bytes could become damaged on account of lost bits. To resolve this issue, it is necessary to regularly re-synchronise the clocks, as well as to make use of check digits to ensure that the bytes are correctly received and translated.

Characteristics of Synchronous Transmission: There are no gaps between characters being sent. Timing is provided by modems or other devices at each end of the transmission. Special 'syn' characters go before the data being sent, and syn characters are included between chunks of data for timing functions.

Examples of Synchronous Transmission: Chatrooms, video conferencing, telephonic conversations, face-to-face interactions.

Asynchronous Transmission

In asynchronous transmission, data moves in a half-paired approach, 1 byte or 1 character at a time. It sends the data as a continuous stream of bytes. The size of a character transmitted is 8 bits, with a start bit added at the beginning and a stop bit at the end, making a total of 10 bits. It doesn't need a clock for synchronisation; rather, it uses the start and stop bits to tell the receiver how to interpret the data. It is straightforward, quick, cost-effective, and doesn't need 2-way communication to function.

Characteristics of Asynchronous Transmission: Each character is preceded by a start bit and concluded with one or more stop bits. There may be gaps or spaces between characters.

Examples of Asynchronous Transmission: Emails, forums, letters, radios, televisions.

Synchronous and Asynchronous Transmission (comparison):
Definition: synchronous transmits data in chunks or frames; asynchronous transmits 1 byte or character at a time.
Speed of transmission: synchronous is quick; asynchronous is slow.
Cost: synchronous is expensive; asynchronous is cost-effective.
Time interval: constant for synchronous; random for asynchronous.
Gaps between the data: none in synchronous; present in asynchronous.
Examples: chat rooms, telephonic conversations and video conferencing (synchronous); email, forums and letters (asynchronous).

Synchronous vs. Asynchronous Transmission

In synchronous transmission data is transmitted in the form of chunks, while in asynchronous transmission data is transmitted one byte at a time. Synchronous transmission needs a clock signal between the source and target so that the target knows when a new byte begins. In comparison, asynchronous transmission needs no clock signal because the start and stop bits attached to the data mark the beginning and end of each byte. The data transfer rate of synchronous transmission is faster since it transmits chunks of data at a time, compared to asynchronous transmission which transmits one byte at a time. Asynchronous transmission is straightforward and cost-effective, while synchronous transmission is complicated and relatively pricey. Synchronous transmission is systematic and carries lower overhead compared to asynchronous transmission. Both synchronous and asynchronous transmission have their benefits and limitations. Asynchronous transmission is used for sending small amounts of data, while synchronous transmission is used for sending bulk amounts of data. Thus, both are essential to the overall process of data transmission.

Further Reading: Data communication
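The 10-bit asynchronous frame described above (8 data bits plus one framing bit at each end) can be sketched in a few lines; the start-low/stop-high convention and LSB-first bit order here are common serial-line conventions assumed for illustration, not taken from the article:

```python
def frame_byte(byte):
    """Frame one data byte for asynchronous transmission.

    Assumed convention (illustrative): one start bit (0), 8 data bits
    sent least-significant-bit first, one stop bit (1) -- 10 bits total,
    matching the "8 bits plus one bit at each end" description above.
    """
    assert 0 <= byte <= 0xFF
    data_bits = [(byte >> i) & 1 for i in range(8)]  # LSB first
    return [0] + data_bits + [1]  # start bit + data + stop bit

def deframe(bits):
    """Recover the byte from a 10-bit asynchronous frame."""
    assert len(bits) == 10 and bits[0] == 0 and bits[9] == 1
    return sum(bit << i for i, bit in enumerate(bits[1:9]))

frame = frame_byte(ord("A"))  # 10 bits: [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
assert deframe(frame) == ord("A")
```

The 2-bit framing overhead per byte is why asynchronous transmission carries higher overhead than synchronous transmission, which pays for its clocking once per chunk rather than twice per byte.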
© 2023 Teach Computer Science","rejected":"Generalized Efficient Markets in Political Power
johnswentworth, 1 Aug 2020 4:49 UTC, 42 points, 6 comments
Tags: World Modeling, World Optimization
Contents: Schelling Points; Governance as Schelling Point; Political Power; Competition and Generalized Market Efficiency; Democracy's Seedy Underbelly

Schelling Points

In Thomas Schelling's classic experiment, we imagine trying to meet up with someone in New York City, but we haven't specified a time or place in advance and have no way to communicate. Where do we go, and when, to maximize the chance of meeting? There are some "natural" choices—places and times which stand out, like the top of the Empire State Building at noon. These are called Schelling points. More generally, Schelling points are relevant whenever two or more people need to make "matching" choices with limited ability to communicate in advance. 
For instance, certain markets, like Ebay or Uber, serve as "meeting points" for buyers and sellers. Schelling himself wrote a fair bit about negotiations, where people need to agree on how to divide some spoils, or where to draw a boundary, or … They can talk to each other, but actually communicating is hard because both parties have no incentive to be honest—and therefore no reason to trust each other when e.g. one person says "I just can't afford to sell it below $10". Schelling points become natural outcomes for the negotiations—e.g. split the spoils evenly, draw the boundary at the river, etc. In practice, it's often useful to create Schelling points. In the New York City experiment, one could put up a giant billboard that says "meeting point", and place signs all over the city pointing toward the meeting point, making that point a natural place for people to meet. Some airports actually do this. Ebay and Uber are of course also examples of purpose-built Schelling points. One interesting feature of creating a Schelling point is that we may have some degrees of freedom available, and we can use those degrees of freedom to extract value. In our meetup example, we could imagine putting the meetup point inside a building, and charging people to get in—much like Ebay or Uber charge fees for their services. 
Or, we could imagine local businesses wanting to put the meetup point nearby, in hopes of attracting business from meeters—one could imagine a gimmicky airport restaurant with a bunch of "MEET HERE!" signs hoping to sell people overpriced nachos and drinks while they wait to meet up with friends or family. Alternatively, we could imagine users of the meetup point wanting it in locations convenient to them—e.g. in the New York City example, people in a particular neighborhood might campaign to establish the meetup point there for their own convenience. However, the Schelling point creator/controller only has so many degrees of freedom. Charge too high a fee, and people will go to some other Schelling point. Move the meetup point to a neighborhood on the outskirts of town, and it will be too inconvenient for people from other neighborhoods. By default, people will usually stick to Schelling points which everyone is already using—if everybody has always met up under this particular sign, then that's the obvious place to keep meeting up—so the controller of the original Schelling point can extract more value than "new" points. But there are always limits. We can think of a Schelling-point-controller's "power" as their range of freedom in moving the point around, or as the amount of value they can extract without losing out to some other Schelling point. 
Just because someone nominally "controls" the Schelling point does not mean they can actually do anything without losing it! It may be that even a small fee will drive everyone to switch to a different Schelling point. It may be that the Schelling point is in the dead center of the city, and people will keep meeting in the dead center even if the signs move (e.g. maybe someone will just put up new signs for the "city center" and people will meet there). It may be that maintaining all the signs costs roughly as much as one can earn from the Schelling point (otherwise a competitor would come along and put up more signs of their own). There are many ways to extract value from a Schelling point, but if there's some mechanism for open competition over control of the point, then the net value one can extract may be driven to near-zero. That's roughly how I think politics works.

Governance as Schelling Point

When people are operating in a group, it's useful to have standardized Schelling points for a wide variety of interpersonal conflicts, so that we don't need a bunch of expensive negotiation/conflict to resolve each one. These Schelling points are things like "rules" and "leaders". An example: Alice likes to rock out to loud music after sundown, while her neighbor Bob likes to go to bed early. They have conflicting preferences for when quiet hours should be. 
But neither of them wants to get in a fight about it, or spend a bunch of time and effort negotiating. So, the building/neighborhood has a rule: "quiet hours run from 10pm to 6am". The main purpose of the rule is to act as a Schelling point: by default, those are the quiet hours which everyone respects and expects everyone else to respect. They might be enforced if needed, but usually that doesn't actually happen. Of course, Alice and Bob could still work out a separate deal—e.g. maybe Alice talks to all her neighbors and gets their ok to play loud music on Friday night—but that would require a bunch of extra negotiation. The official rule is the Schelling point everybody coordinates on, by default. More generally, laws and courts serve as Schelling points in negotiations. Where does my property end and yours begin? The government land records provide a Schelling point answer, so we don't need to fight/negotiate over it ourselves. In a disagreement over land, the police and military all want to coordinate and back the same person, so the government's land records tell them all who the "rightful" owner is. Since the police and military coordinate around that Schelling point, it becomes the natural Schelling point for others as well. In principle, the legally-recognized Schelling point could simply be ignored—e.g. 
if a chunk of land is legally rec\u00adog\u00adnized as your prop\u00aderty, and I build some\u00adthing on it with\u00adout per\u00admis\u00adsion, the two of us could just agree that this is fine, effec\u00adtively ne\u00adgo\u00adti\u00adat\u00ading a differ\u00adent Schel\u00adling point. Some non-gov\u00adern\u00adment group could even have mechanisms to en\u00adforce the al\u00adter\u00adna\u00adtive Schel\u00adling point. But the le\u00adgal Schel\u00adling point is the one which po\u00adlice and the mil\u00adi\u00adtary are will\u00ading to en\u00adforce.Poli\u00adti\u00adcal PowerThe Death Eaters don\u2019t always agree on when, where or how to launch an at\u00adtack, but they know that any at\u00adtack will go bet\u00adter if they\u2019re all in it to\u00adgether. So, they band to\u00adgether be\u00adhind a leader, and at\u00adtack when, where and how the leader di\u00adrects. The leader\u2019s or\u00adders be\u00adcome the Schel\u00adling point ac\u00adtion for the group.The Death Eaters ex\u00adam\u00adple illus\u00adtrates the no\u00adtion of \u201cpoli\u00adti\u00adcal power\u201d par\u00adtic\u00adu\u00adlarly well: the leader\u2019s \u201cpower\u201d is roughly the set of or\u00adders he could give with\u00adout his or\u00adders ceas\u00ading to be a Schel\u00adling point for group ac\u00adtivity. If the Death Eaters are mostly in agree\u00adment on some course of ac\u00adtion, and the leader di\u00adrects against it, then his or\u00adders be\u00adcome less of a Schel\u00adling point, and mul\u00adti\u00adple such or\u00adders will likely see him re\u00admoved from nom\u00adi\u00adnal power. 
He has some de\u00adgree of free\u00addom in which or\u00adders to give, but only to the ex\u00adtent that the Death eaters are, on av\u00ader\u00adage, mostly in agree\u00adment with his choices.There\u2019s a gen\u00aderal prin\u00adci\u00adple of \u201cpower\u201d here: a leader\u2019s power is the set of or\u00adders they could give with\u00adout their or\u00adders ceas\u00ading to be Schel\u00adling points for the group\u2019s ac\u00adtivi\u00adties. A leader\u2019s power is high when group mem\u00adbers all want to co\u00ador\u00addi\u00adnate their choices, but care much less about which choice is made, so long as ev\u00adery\u00adone \u201cmatches\u201d. Then the leader can just choose any\u00adthing they please, and ev\u00adery\u00adone will go along with it. (In\u00adter\u00adest\u00adingly, this sug\u00adgests that a leader can get high value from a group whose prefer\u00adences are or\u00adthog\u00ado\u00adnal to their own; pur\u00adsue power in groups which care about differ\u00adent things than you!) Con\u00adversely, a leader\u2019s power can be low in two ways:Group mem\u00adbers care a lot about which choice is made. In this case, the leader has lit\u00adtle free\u00addom to choose, and is mostly just a figure\u00adhead.Group mem\u00adbers only weakly care about co\u00ador\u00addi\u00adnat\u00ading. In this case, the group is in\u00adher\u00adently un\u00adsta\u00adble; lots of deals and con\u00adces\u00adsions are needed just to keep it to\u00adgether.Key thing to keep in mind: both of these con\u00addi\u00adtions are rel\u00ada\u00adtive-to-the-next-best-op\u00adtion. 
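A toy model (illustrative only, not from the post) can pin down this notion of power: treat an order as a surviving Schelling point unless some member would gain more from the group's best rival option than coordinating a switch would cost them. All option names, payoffs, and the switch cost below are made up.

```python
def leader_power(options, preferences, switch_cost):
    """Orders the leader can give that stay Schelling points for the group.

    preferences: dict member -> dict option -> personal payoff
    switch_cost: what it costs a member to coordinate a switch to a rival
                 point (a stand-in for how much the group values staying
                 coordinated, relative to the next-best option).
    """
    viable = []
    for order in options:
        # The order is displaced only if some member gains more from their
        # favorite rival option than the switch would cost them.
        displaced = any(
            max(p[alt] for alt in options) - p[order] > switch_cost
            for p in preferences.values()
        )
        if not displaced:
            viable.append(order)
    return viable

prefs = {
    "alice": {"A": 1.0, "B": 0.8, "C": 0.0},
    "bob":   {"A": 0.0, "B": 0.9, "C": 1.0},
}

# Members care much more about coordinating than about which option wins:
print(leader_power(["A", "B", "C"], prefs, switch_cost=1.5))   # ['A', 'B', 'C']
# Members care a lot about which option wins:
print(leader_power(["A", "B", "C"], prefs, switch_cost=0.25))  # ['B']
```

With a high switch cost the leader can pick anything; with a low one, only the option members already mostly agree on survives, matching the "figurehead" case above.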
Group members may care a lot about coordinating and only have weak preferences about which choice is made, but if there's a competitor who could coordinate just as well and satisfy the weak preferences better, then that competitor's orders may become the new Schelling point.

That's politics, in a nutshell: people try to turn their own orders/policies/suggestions into Schelling points for group activity. They do this mainly by offering concessions and favors to group members/subgroups, in exchange for those members' support for the new Schelling point.

Competition and Generalized Market Efficiency

In democratic countries/groups, we have a built-in mechanism for competition between would-be leaders. In other words, there's a Schelling point for when and how to switch Schelling points. That immediately suggests a generalized efficient-markets-style hypothesis: leaders' power in such groups is driven by competition to near-zero.

What does that look like?

Well, most people want to coordinate; regardless of what the rules are, we want to agree on what the rules are, otherwise we end up in expensive fights. But most people also have some preferences about the rules—i.e. political policies. Would-be leaders make promises: they precommit to certain policies, thereby cutting off certain options if they win (i.e. sacrificing potential power), but gaining more support for their Schelling point in the process. To maximize power, a would-be leader wants to just barely "outbid" all the other would-be leaders—i.e. promise just a bit more to just a few more parties, keeping as much power as possible while still winning the position.

Of course, the other competitors are trying to do the same thing. Solve for the equilibrium: the competitors either bid away any degree of freedom which any constituency cares about, or lose to someone who bids more. Generalized efficient markets kick in; the leader ends up with near-zero power. They're mostly just a figurehead implementing all the policies they had to precommit to in order to win the election.

Now, consider the reverse—a dictator or single-party state or the like. How do they maximize power?

To maximize power, they want to avoid generalized efficient markets—i.e. they want to minimize competition over the Schelling point. Elections encourage competition by providing a Schelling point for when and how to switch Schelling points; the power-hungry leader wants exactly the opposite of that. They want to make sure that there is no Schelling point for when and how to switch Schelling points.

If a new Schelling point does show up (e.g. an opposition group), there won't be any agreement on when and how to switch, so there will probably be some expensive conflict (i.e. civil war). That expensive conflict itself creates a big potential energy barrier for any potential competitor: for the people supporting a switch to the new Schelling point, the expected gains from the switch must exceed the costs of the conflict. So from the dictator's standpoint, the worse a civil war would be, the fewer concessions and handouts they need to make and the broader their power.

(Of course, a dictator can use other strategies to maximize power as well—e.g. threatening to kill people/destroy things if a new Schelling point comes along. But that's a symmetric strategy: the dictator's enemies can just as easily threaten to kill people/destroy things if no new Schelling point is adopted. The dictator may have an advantage in resources, but that gap can in principle be closed by other means. It's mainly the lack of a Schelling point for switching Schelling points which confers an asymmetric advantage to the incumbent.)

Democracy's Seedy Underbelly

Based on the previous section, someone accustomed to a "democracy=good, dictator=bad" worldview might think that leaders being forced to bargain away all their potential power is good news. "Leaders are just figureheads" and "leaders are just implementing the policies which won the election" both say the same thing. This is "good", yes?

The failure modes of democracy are baked in here too.

Consider a would-be leader figuring out the perfect mix of promises and concessions to make in order to win an election. From their point of view, different people wanting opposite things is a problem. Moving the meetup point closer to one neighborhood means moving it further from another. But dimensionality is a major boon for the leader: there are thousands of dimensions along which policy can change. If Alice cares strongly about one particular dimension—like, say, government support for her profession—which nobody else cares about very much, then that promise can be made to Alice without losing the support of somebody else. It's special-interest politics: look for policies with focused benefits and diffuse costs. Pile many such policies together, and you have a winning coalition.

That outcome may be "efficient" in the sense that no other bundle of policies can beat it in an election, but that's very different from "efficient" in the sense of "not wasting ridiculous amounts of resources on pork-barrel projects and regulatory barriers to entry".

Now, suppose we're unhappy with this outcome. We want to build a better world. What can we do?

Obviously "run for office" is not a workable answer here. Generalized efficient markets mean we can't win an election without trading away any ability to enact our preferred policies.

If we have some external resources—e.g. a giant pile of money—then we could potentially use that to "force" the political equilibrium in a different direction. This could look like old-fashioned bribery, where we just pay some stakeholders to back our preferred Schelling point. It could look like a payment to the political system as a whole, e.g. offering a private subsidy for road repair. It could involve resources other than money, as in a celebrity or media outlet offering an endorsement, or a nation offering some concession in exchange for lower tariffs. We could change the options available to the group via technology, e.g. bitcoin. We could simply try to convince people to support our preferred policies—though this means competing in memespace, which has generalized efficient markets of its own. The general pattern: we use our resources to change the set of options available or to directly influence the preferences of group members.

Point is: there are no hundred-dollar bills lying on the ground.
If we want to change the political equilibrium in a highly-politically-competitive environment, we need to change the underlying options available to the group and/or the preferences of individual group members.

johnswentworth, 1 Aug 2020 4:49 UTC, 42 points, 6 comments. Tags: World Modeling, World Optimization.

Comments:

Zack_M_Davis, 1 Aug 2020 19:07 UTC, 6 points:
It gets worse. We also face coordination problems on the concepts we use to think with. In order for language to work, we need shared word definitions, so that the probabilistic model in my head when I say a word matches up with the model in your head when you hear the word. A leader isn't just in a position to coordinate what the group does, but also which aspects of reality the group is able to think about.

Raemon, 4 Aug 2020 2:00 UTC, 4 points:
I feel obligated to link CGP Grey's The Rules for Rulers, which makes some similar points through a somewhat different lens with a bunch of cute graphics.

betulaster, 4 Aug 2020 1:21 UTC, 3 points:
I'm probably missing something obvious, but I don't trivially see how this:

"Interestingly, this suggests that a leader can get high value from a group whose preferences are orthogonal to their own; pursue power in groups which care about different things than you!"

follows from this:

"A leader's power is high when group members all want to coordinate their choices, but care much less about which choice is made, so long as everyone 'matches'. Then the leader can just choose anything they please, and everyone will go along with it."

Could you please elaborate?

Also, I have an outsider's view of American (or, indeed, Western in general) politics, so I can be wrong, but I think an argument from empirics could be made against this:

"It's special-interest politics: look for policies with focused benefits and diffuse costs. Pile many such policies together, and you have a winning coalition."

At least in the two most recent American elections (2016 and then the 2018 midterms) it seems like it was very much not the case of people racing for the most focused benefits and most diffuse costs, but rather for the most efficient way to galvanize their voters, cost be damned. Think of the wall on the Mexican border—it would probably be exorbitantly expensive, including to those who voted for it, but it was a very powerful symbol that people who felt strongly about the issue could rally behind. 538 here do a kind of literature review—and find, amongst other things, that "racial attitudes mattered more in 2016 than in any recent election — even 2008, when the presence of an African-American candidate shaped the political conversation". Unless I misunderstand the idea, I don't think issues of race have a narrowly focused scope of benefits and costs diffuse enough not to be noticed by other voters.

I also think this point:

"Would-be leaders make promises: they precommit to certain policies, thereby cutting off certain options if they win (i.e. sacrificing potential power), but gaining more support for their Schelling point in the process."

makes an assumption of voters being more-or-less perfectly informed about what the Schelling point (policies and laws) actually is. What if a leader could get elected by precommitting to certain policies, but then actually not act on them, while managing to convince the voters that they, in fact, are doing their best to implement these policies, but are failing to for a certain (probably not very falsifiable) reason? Or does the model already support this in a way that I don't notice?

johnswentworth, 4 Aug 2020 2:53 UTC, 3 points (reply):
On "group with preferences orthogonal to your own": the idea is you can give the members exactly what they want, and then independently get whatever you want as well. Since they're indifferent to the things you care about, you can choose those things however you please.

"At least in the two most recent American elections (2016 and then the 2018 midterms) it seems like it was very much not the case of people racing for the most focused benefits and most diffuse costs, but rather for the most efficient way to galvanize their voters, cost be damned."

I expect that politics in most places, and US Congressional politics especially, is usually much more heavily focused on special interests than the overall media narrative would suggest. For instance, voters in Kansas care a lot about farm subsidies, but the news will mostly not talk about that because most of us find the subject rather boring. The media wants to talk about the things everyone is interested in, which is exactly the opposite of special interests.

Also, I am extremely skeptical that racial issues played more than a minor role in the election, even assuming that they played a larger role in 2016 than in other elections. Every media outlet in the country (including 538) wanted to run stories about how race was super-important to the election, because those stories got tons of clicks, but that's very different from actually playing a role.

"Or does the model already support this in a way that I don't notice?"

Nope, you are completely right on that front; poor information/straight-up lying were issues I basically ignored for purposes of this post.
That said, most of the post still applies once we add in lying/bullshit; the main change is that, whenever they can get away with it, leaders will lie/bullshit in order to simultaneously satisfy two groups with conflicting goals. As long as at least some people in each constituency see through the lies/bullshit, there will still be pressure to actually do what those people want. On the other hand, people who can be fooled by lies/bullshit are essentially "neutral" for purposes of influencing the political equilibrium; there's no particular reason to worry about their preferences at all. So we just ignore the gullible people, and apply the discussion from the post to everybody else.

betulaster, 14 Aug 2020 21:13 UTC, 3 points (reply):
Thanks for the reply, and sorry I couldn't get to this for some time! Hope you're still interested in the discussion.

"I expect that politics in most places, and US Congressional politics especially, is usually much more heavily focused on special interests than the overall media narrative would suggest."

This is really interesting and you probably have a good point. Do you think there's a more reliable way (for an outsider like myself, who's not able to, I dunno, go and ask people in a dive bar what they think) to get the lay of the political land in a particular point in space? (And time?) Maybe some centralized kind of poll repository?

"Every media outlet in the country (including 538) wanted to run stories about how race was super-important to the election, because those stories got tons of clicks, but that's very different from actually playing a role."

On a side note, I can imagine this kind of perspective, when taken to an unmitigated extreme, leading to a very Cartesian-demon view of the world. Most people who publish their thoughts are incentivized to make you, the reader, like it or be interested in it. Mass media obviously so; bloggers or analysts or think tanks less obviously so, but still. If, when faced with a choice of writing about (a) things that are real but dull vs (b) things that are not real but get clicks, no one has an incentive to do (a), how do you form a view of the world? (Had I not known about publish-or-perish and read Gelman/Falkovich on p-hacking, I myself would give the traditionally Cartesian answer of "by reading scientific papers in peer-reviewed journals", but… yeah.)

"So we just ignore the gullible people, and apply the discussion from the post to everybody else."

I think we've made an important move in argumentation here—we've started to introduce the possibility of the voters differing by whether they believe the lies/bullshit or not. But if we do—that is, we introduce the possibility of the voter considering some of the politician's commitment to a future policy Schelling point not genuine—we also open the possibility for the voter to speculate on what the politician's true policies are. Say, Alice runs for president on a conservative/jobs-centric platform, commits to outlaw work visas and expunge all foreign workers, wins the race, and says that she's working hard to achieve that goal. Bob is a total supporter; he's sure that she does exactly that and thinks that deportations are only a couple of days away. Charlie may be skeptical about the promise, because it sounded very radical and campaign-y, but thinks she's probably going to cut work visas while not being able to expunge foreigners already in the country. Dave agrees with Charlie that the promise was radical and will not be followed through on fully, but not on what will actually happen—he thinks Alice will be able to expunge already-present foreigners, but never risk the political turmoil of removing work visas. Erin is a total skeptic and thinks Alice is merely exploiting the voters, and is actually not doing anything about the foreigners. Finally, Frank is a conspiracy theorist and thinks that Alice is secretly working with a cabal of globalists to bring even more foreigners in (maybe even illegally!), while bullshitting him.

All of these people have different Schelling points! If Zack, a foreigner, asks Bob to loan him money, Bob is going to refuse, because he thinks Zack will be kicked out of the country tomorrow and he's not getting his money back. If he asks Erin, she's likely to agree, because she doesn't believe Zack is going anywhere.

Now, sure, there are only so many ways to interpret a single campaign promise, and there are bound to be groups within the voter base that will agree on what Alice will actually do; the Schelling point will work for them. But since Alice is incentivized to make a lot of focused-benefit-diffuse-cost promises, voters who agree on what her actions on a certain policy are may disagree on what her actions regarding a different policy are. So… when nobody agrees on what the Schelling point is, does it, for all intents and purposes, exist?

johnswentworth, 14 Aug 2020 21:55 UTC, 2 points (reply):
Great points!

"Do you think there's a more reliable way (for an outsider like myself, who's not able to, I dunno, go and ask people in a dive bar what they think) to get the lay of the political land in a particular point in space?"

This is wayyyy outside my zone of expertise, but I would look for specialist-oriented publications—e.g. newsletters specifically targeted at lobbyists/policymakers, or political information in the industry publications of special-interest industries.

"If, when faced with a choice of writing about (a) things that are real but dull vs (b) things that are not real but get clicks, no one has an incentive to do (a), how do you form a view of the world?"

I'd say the key is to generate your own questions, then proactively look for the answers rather than waiting around for whatever information comes to you. There's plenty of good information out there; it just isn't super-viral, so you have to go looking for it.

"All of these people have different Schelling points!"

Important point here: these people don't actually have different Schelling points. They presumably all agree that if Alice wins the election, then whatever Alice signs into law will be the new Schelling point. What these people disagree on is their expectations for what the future Schelling point will be.

"},{"chosen":"

MAT337. Introduction to Real Analysis
Fall 2018

Web page: http://www.math.toronto.edu/ilia/MAT337.2018/.
Class Location & Time: Tue, 1:00 PM - 2:00 PM; Thu, 11:00 AM - 1:00 PM; NE2190
Instructor: Ilia Binder (ilia@math.toronto.edu), DH3026.
Office Hours: Tue 2:00 PM - 3:00 PM and Thu 10:00 AM - 11:00 AM
Teaching Assistant: Belal Abuelnasr (belal.abuelnasr@mail.utoronto.ca).
Office Hours: Fri, 10-11 AM, DH3050.

Textbook: Understanding Analysis, Second Edition, by Stephen Abbott. This book is provided as a free electronic resource to all UofT students through the library website. Click on the following link to access the textbook (you may be required to enter your UTORid and password): http://myaccess.library.utoronto.ca/login?url=http://books.scholarsportal.info/viewdoc.html?id=/ebooks/ebooks3/springer/2015-07-09/1/9781493927128

Prerequisites: MAT102H5, MAT224H5/MAT240H5, MAT212H5/MAT244H5, MAT232H5/MAT233H5/MAT257Y5
Exclusions: MAT337H1, MAT357H1, MATB43H3, MATC37H3
Prerequisites will be checked, and students not meeting them will be removed from the course by the end of the second week of classes. If a student believes that s/he does have the necessary background material, and is able to prove it (e.g., has a transfer credit from a different university), then s/he should submit a 'Prerequisite/Corequisite Waiver Request Form'.

Topics.
The course is a rigorous introduction to Real Analysis. We start with a careful discussion of the Axiom of Completeness and proceed to the study of the basic concepts of limits, continuity, Riemann integrability, and differentiability.

Topics covered in class.
September 6: An introduction. Real numbers and the Axiom of Completeness. Section 1.3.
September 11: The Axiom of Completeness. Nested Interval Property. Sections 1.3, 1.4.
September 13: Nested Interval Property. Archimedean Property. Definitions of the limit of a sequence (including an alternative definition). Limits and algebraic operations.
Sections 1.4, 2.2, 2.3.\nSeptember 18: Limits and algebraic operations. Limits and order. Squeezed sequence lemma.Section 2.3.\nSeptember 20: The Monotone Convergence Theorem. Iterated sequences. Positive series. Liminf and limsup. Section 2.4.\nSeptember 25: Liminf and limsup. Subsequences and their limits. Bolzano-Weierstrass Theorem. Section 2.5.\nSeptember 27: Bolzano-Weierstrass Theorem. Cauchy Criterion. Series. Sections 2.5, 2.6, 2.7.\nOctober 2: Open and closed sets. Interrior, exterior, and border points. Section 3.2.\nOctober 4: Interrior, exterior, and border points. Compact sets. Heine-Borel Theorem. Sections 3.2, 3.3.\nOctober 16: Heine-Borel Theorem. Baire's Theorem. Sections 3.3, 3.5.\nOctober 18: Functional limits. Sequential criterion. Continuity. Sections 4.2, 4.3.\nOctober 23: Continuity and compact sets. Uniform continuity. Section 4.4.\nOctober 25: Uniform continuity and compact sets. The Intermediate value Theorem. Differentiability (including an alternative definition). Darboux's Theorem. Sections 4.4, 4.5, 5.2.\nOctober 30: Rolle's theorem. The Mean Value Theorem. L'Hospital rule. Pointwise and Uniform convergence. Sections 5.3, 6.2.\nNovember 1: Uniform convergence. Continuity of uniform limit. Uniform convergence and differentiation. Sections 6.2, 6.3.\nNovember 6: Midterm review.\nNovember 8: Midterm.\nNovember 13: Uniform convergence and differentiation. Uniform convergence of series. Sections 6.3, 6.4.\nNovember 15: Power series. Section 6.5.\nNovember 20: Riemann Integration. Section 7.2.\nNovember 22: Riemann Integration: criterion of integrability, non-integrable functions integrability of continuous functions, additivity and algebraic properties of Riemann integral. Sections 7.2, 7.3, 7.4.\nNovember 27: Algebraic properties of Riemann Integral. Integrability of Uniform limit. Section 7.4.\nNovember 29: The Fundamental Theorem of Calculus. Integration by parts. Riemann integrability criterion. 
Sections 7.5, 8.1.\nDecember 4: Final review. \n\n\nHomework. The assignments should be submitted through Quercus.\u00a0To submit, you can scan or take a photo of your work (or write your work electronically).\u00a0Please make sure that the images are clear and easy to read\u00a0before you submit them.\n\nAssignment #1, due September 13: The assignment is based on the material you have learned in MAT102.\r\n\r\nPlease do the following exercises from the textbook: 1.2.3, 1.2.4, 1.2.5, 1.2.7, 1.2.8, 1.2.9, 1.2.10, 1.2.11, 1.2.12, 1.2.13.\r\n\n\nAssignment #2, due September 20.\n\n\nAssignment #3, due October 4.\n\n\nAssignment #4, due October 18.\n\n\nAssignment #5, due October 25.\n\n\nAssignment #6, due November 1.\n\n\nAssignment #7, due November 8.\n\n\nAssignment #8, due November 15.\n\n\nAssignment #9, due November 22.\n\n\nAssignment #10, due November 29.\n\nTutorials and presentations.\u00a0Each student must be registered in one of the tutorials (on ROSI).\u00a0Attendance at tutorials is mandatory. Based on the homework assignments, students will be selected to present some of the homework problems at the tutorials. An unexcused absence at the tutorial on the day you are selected for the presentation will result in zero credit for the presentation.\u00a0\nTutorials will begin on\u00a0Friday of the second week of classes.\u00a0\nQuiz. There will be a one-hour in-tutorial quiz on Friday, September 28, or Monday, October 1, depending on your tutorial section. No aids are allowed for this quiz. The quiz will cover the material of sections 1.3, 1.4, 2.2, 2.3, 2.4.\r\nRecommended preparation (do not hand in): problems 1.3.2, 1.3.3, 1.3.6, 1.3.8, 1.4.8, 2.2.2, 2.2.4, 2.3.2, 2.3.7, 2.4.1, 2.4.6, 2.4.8.\r\n\nMidterm Test. There will be a two-hour in-class midterm test on Thursday, November 8. No aids are allowed for this test. 
The test will cover the material of sections 1.3, 1.4, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 3.2, 3.3, 3.5, 4.2, 4.3, 4.4, 4.5, 5.2, 5.3.\r\n Recommended preparation: assignment #7, and (do not hand in): all the quiz review problems, 2.5.9, 2.6.4, 2.7.7, 3.2.8, 3.3.8, 3.5.9, 4.2.4, 4.3.6, 4.4.11, 4.5.6, 5.2.10, 5.3.4. \nFinal exam. The final exam will be held on Wednesday, December 12, 5-8 PM, at KN137. No aids are allowed for this test. \r\nThe exam will cover the material of sections 1.3, 1.4, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 3.2, 3.3, 3.5, 4.2, 4.3, 4.4, 4.5, 5.2, 5.3, 6.2, 6.3, 6.4, 6.5, 6.6 (up to Theorem 6.6.2), 7.2, 7.3, 7.4, 7.5, 8.1 (up to Theorem 8.1.2).\nYou will be required to state and prove in detail one of the following theorems from the textbook: 2.4.2, 2.5.5, 3.3.4, 4.2.3, 4.3.9, 4.4.1, 4.4.2, 4.4.7, 5.2.7, 5.3.2, 6.2.6, 6.4.4, 7.2.8, 7.5.1.\nRecommended preparation (do not hand in): all the quiz and midterm review problems, 6.2.3, 6.2.13, 6.2.14, 6.2.15, 6.3.1, 6.3.6, 6.4.2, 6.4.4, 6.4.10, 6.5.2, 6.5.8, 7.2.3, 7.3.2, 7.3.5, 7.4.3, 7.4.10, 7.5.2, 7.5.4.\nAdditional office hours: Tuesday, December 11, 12 - 1. Location: DH3000.\n\n Grading. Grades will be based on the best eight out of ten homework assignments (10%), an in-tutorial quiz (10%), an in-lecture midterm test (25%), tutorial presentations (15%), attendance of tutorials and active participation in the discussions (5%), and the final exam (35%). I will also occasionally assign bonus problems.\nLate work. No late work will be accepted. Requests for special consideration for late assignments or missed exams must be submitted via e-mail within a week of the original due date. There will be no make-up quiz, midterm test, or final. Justifiable absences must be declared on ROSI; undocumented absences will result in zero credit.\nE-mail policy.\r\nE-mails must originate from a utoronto.ca address and contain the course code MAT337 in the subject line. 
Please include your full name and student number in your e-mail.\n\nAcademic Integrity.\r\n Honesty and fairness are fundamental to the University of Toronto\u2019s mission. Plagiarism is a form of academic fraud and is treated\r\n very seriously. The work that you submit must be your own and cannot contain anyone else\u2019s work or ideas without proper\r\n attribution. You are expected to read the handout How not to plagiarize (http:\/\/www.writing.utoronto.ca\/advice\/using-sources\/how-not-to-plagiarize) and to be familiar with the Code of behaviour on academic matters, which is linked from the UTM calendar under\r\n the link Codes and policies.\n\nMaintained by: Ilia Binder (ilia@math.toronto.edu)\n\n\n","rejected":"\n\n\n\nStellar : Message of the Day\n\n\n\n\n\n\n\n\nSearch: \n\n\n\n\n\n\n\n\n\n\nMIT Course Management System\n\n\n\n\n\nHome\nCourse Guide\n@Stellar\nUpdates\n\n\n\n\n\nMessage of the day\nContinue\n\n\n\n\n\n\n\n\nStellar CMS\nInformation Services & Technology\n\n\nW92 . 304 Vassar Street\nCambridge . MA . 02139\n\n\n\nGet Help\n\n\nFAQ\n\n\nUser Guide\n\n\nContact the Help Desk\nRequest a Stellar site\n\n\n\nResources\n\nSupported Browsers\n\nCertificates\n\nLibrary E-Reserves\n\nWebSIS\n\n\n\nUpdates\n\n\nWhat's new?\n\nSubscribe\n\n\n\n\n\n\n"},{"chosen":"\n\n\n\n\n\n\nA research workflow with Zotero and Org mode | mkbehr.com\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSkip to main content\n\n\n\n\n\nToggle navigation\n\n\n\n\n\nmkbehr.com\n\n\n\n\n\n\nAbout me\n\n\nArchive\n\n\nTags\n\n\nRSS feed\n\n\nGithub\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSource\n\n\n\n\n\n\n\n\n\n\nA research workflow with Zotero and Org mode\n\n\n Michael Behr\n \nSeptember 19, 2015\n\nComments\n\nSource\n\n\n\n\n\n\nAny research project is going to involve a literature search: reading\nthrough a bunch of papers that might be relevant to your topic in\norder to get a sense of what the field already knows. 
Now, maybe\nthere's some magic technique for picking out the information that\nmatters, passing over the rest, and writing out a single, coherent\nstory in one pass through all the papers you can find. If that\ntechnique exists, I have no idea what it is.\nSo when every paper brings up ten new questions and twenty papers to\nstart answering them, I need a system to keep my notes organized. I\nneed notes that let me jump back and forth between papers without\nlosing my place, draw links between papers, and store lists of\ncitations to come back to. Here's how I do it.\n\nStoring papers with Zotero\nThe first tool I use is Zotero, a reference\nmanager. Zotero's job is to store all the actual papers I come across,\nalong with information like data on how to cite the papers and any\ntags they might have been published with. It can grab that information\nfrom my web browser, whether from a journal's website or someplace\nlike Google Scholar or PubMed. It's also great for quickly putting\ntogether a bibliography, using bibtex or similar programs, when I want\nto write up some results.\n\n\n\nZotero stores the papers I want to\nread and reference. I scaled up the font size here to make it readable\nin a tiny blog image.\nZotero isn't the only choice for reference managers.\nMendeley is another popular choice, and\nthere are a\nwhole bunch more\nout there. I picked Zotero arbitrarily a few years ago, but it's\nworking out well because of its emacs integration.\nKeeping notes with Emacs and Org mode\nYou see, Zotero has some note-taking functions, and I used to keep my\nnotes there, but there were some problems. Notes are stored as\nseparate files for each paper, but I want to cross-reference notes\nfrom a lot of different papers at once. And while the editor has some\nrich-text capabilities (e.g. bold and italic text), it's missing\nimportant things I need in my notes, like the ability to typeset\nequations.\nThat's where Emacs and its\nextension Org mode come in. 
To borrow a term\nfrom Perl enthusiasts, Org mode is the swiss army chainsaw of text\ndocument formats. Org mode documents have a lot of features, and it's\nway beyond this post's scope to describe them all. For the purpose of\nresearch notes, the most useful things it lets me do are:\n\nI can store my notes in a hierarchical tree structure, and I can\n hide parts of the tree from view in order to focus on other parts.\nI can put hyperlinks into my notes, including links to papers,\n websites, or other parts of the file.\nI can put math in my notes using Latex, and view the typeset\n equations right in my Emacs buffer.\n\n\n\n\nA sample from my notes file. You\ncan see the tree structure of the file, some links to papers, and a\nlittle bit of inline math, using Latex.\nGluing it all together with zotxt\nNow, see those links to papers in my notes buffer? I didn't have to\ncopy and paste them from anywhere. I inserted them with just three\nkeystrokes each. So far, I've just described some useful pieces of\nsoftware, but the interesting part of my workflow is how they fit\ntogether.\nzotxt is an extension that lets\nother programs talk to Zotero, and Emacs has a package to talk to it.\nIt's even structured specifically to work with Org mode documents.\nWith zotxt, my workflow looks like this:\n\nI find a paper I want to look at somewhere on the internet.\nI use Zotero's browser plugin to save it to Zotero. Hopefully it\n grabs the paper itself and this happens in one click; if the site\n doesn't play along, I spend a minute grabbing a pdf and feeding it\n to Zotero.\nI insert a link to the Zotero entry into my notes file in Emacs. I\n can do this with the key chords C-c \" \". I don't need to further\n specify what paper I want to grab: the browser plugin leaves the\n paper selected in Zotero, and zotxt can grab the selected paper.\nWhen I want to read the paper, I go to the link and tell Emacs to\n open the paper in my system PDF viewer. 
The key chords for this are\n C-c \" a, and then selecting the PDF attachment from the Helm\n window that appears (usually I just type pdf RET).\nWhen I'm reading a paper and see a citation that might be useful, I\n look it up on the internet and repeat this process to store a note\n linking to it.\n\nIt took me a while to get it set up to my liking, so here's how I did\nit:\n\nFirst, install zotxt. If you're\n using Zotero as a firefox extension, you just need to install zotxt\n as another extension. If you're using the standalone Zotero client,\n you can still do it: download the extension file from that link,\n then go to the Add-Ons Manager under the Tools menu and find the\n option to install an add-on from a file.\n\n\n\n\nThe menu option looks like\nthis.\n\nNext, install the zotxt package in emacs. If your\n package manager is set up, you\n can just type M-x package-install RET zotxt RET.\nNow, when org-zotxt-mode is active, you can use its functions in\n your org-mode buffers. You can search for papers and insert them\n with C-c \" i, insert the currently-selected paper in Zotero with\n C-u C-c \" i, and open a paper's PDF or other related files by\n moving the cursor to a link and typing C-c \" a. However, you might\n want a little bit more setup to deal with some annoyances.\nYou probably want to have org-zotxt-mode automatically activated\n in all your org-mode documents. To make that happen, you can add\n some code to your .emacs file to start up this mode on all your\n org-mode buffers - see below this list for the .emacs\n configuration I use.\nIf you want to insert a link to the currently-selected item a lot,\n C-u C-c \" i is an awkward sequence to type. I rebound it to C-c \"\n \".\nYou might notice that when you insert a link to a paper, the text of\n that link is a full citation. That might be what you want, but I\n just want the authors, paper name, and year. 
It took me a bit of\n hacking to get around that: it's possible to tell the emacs zotxt\n interface to use a different citation format than the default, but I\n had to throw together a little XML file to give it a shorter format\n than a full citation. (This may not be the easiest or cleanest way\n to do it, but it works!)\n That XML file is here. To use it, go into\n your Zotero preferences and select Cite -> Styles, and add the file.\n It should appear in the menu as \"mkbehr's short reference format\".\n Then add the last two lines in the .emacs snippet below, and you\n should get shorter citations.\nYou probably want to install the\n Helm package, to make zotxt's\n search interface easier to navigate. That link should tell you\n everything you need to know.\n\nHere's that .emacs setup code:\n;; Activate org-zotxt-mode in org-mode buffers\n(add-hook 'org-mode-hook (lambda () (org-zotxt-mode 1)))\n;; Bind something to replace the awkward C-u C-c \" i\n(define-key org-mode-map\n (kbd \"C-c \\\" \\\"\") (lambda () (interactive)\n (org-zotxt-insert-reference-link '(4))))\n;; Change citation format to be less cumbersome in files.\n;; You'll need to install mkbehr-short into your style manager first.\n(eval-after-load \"zotxt\"\n'(setq zotxt-default-bibliography-style \"mkbehr-short\"))\n\nOf course, I'm not done tinkering to make my workflow better. I hear\ngood things about the org-ref\nand helm-bibtex\npackages - if only I can keep an up-to-date bibtex file as I add papers\nto my library, I can associate links with not only a paper's pdf, but\nalso that paper's section of my notes file. And I haven't found a\nsmooth way to take a paper and pull up the papers it cites in my\nbrowser. 
But until then, I'm pretty happy with this setup.\nHappy researching!\n\n\n\nemacs\nresearch\n\n\n\nPrevious post\n\n\nNext post\n\nComments\n\nPlease enable JavaScript to view the comments powered by Disqus.\n\nComments powered by Disqus\n\n\n\n\n Contents \u00a9 2015 Michael Behr - Powered by Nikola\n\n\n\n\n\n\n","rejected":"\n\n\n\n\n\n\n\n\n\nEdge.org\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSkip to main content\n\n\nCopyright \u00a9 2023 By Edge Foundation, Inc. All Rights Reserved.\n\n\n\n \n\nEdge.org\n\n\n\n\n\n\n\n \n\n\n\nTo arrive at the edge of the world's knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves.\n\n\n\n\nhttps:\/\/www.edge.org\/response-detail\/26557Printed On Fri November 17th 2023 \n\n\n\nFri, Nov 17, 2023HOMECONVERSATIONSVIDEOAUDIOANNUAL QUESTIONEVENTSNEWSLIBRARYABOUTPEOPLE \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n2016 : WHAT DO YOU CONSIDER THE MOST INTERESTING RECENT [SCIENTIFIC] NEWS? WHAT MAKES IT IMPORTANT?\n\n\n\n In the News [ 22 ] \n \u00a0\u00a0|\u00a0\u00a0 \n Contributors [ 199 ] \n \u00a0\u00a0|\u00a0\u00a0 \n View All Responses [ 199 ] \n\n\n\n\n \n Pamela McCorduck \n Author, Machines Who Think, The Universal Machine, Bounded Rationality, This Could Be Important; Co-author (with Edward Feigenbaum), The Fifth Generation \n\n \n\n\n Identifying The Principles, Perhaps The Laws, Of Intelligence \n The most important news for me came in mid-2015, when three scientists, Samuel J. Gershman, Eric J. Horvitz, and Joshua Tenenbaum published \u201cComputational rationality: A converging paradigm for intelligence in brains, minds, and machines\u201d in Science, 17 July 2015. 
They announced that they and their colleagues had something new underway: an effort to identify the principles, perhaps the laws, of intelligence, just as Newton once discovered the laws of motion.\nFormerly, any commonalities among a stroll in the park, the turbulence of a river, the revolution of a carriage wheel, the trajectory of a cannon ball, or the paths of the planets, seemed preposterous. It was Newton who found the underlying generalities that explained each of them (and so much more) at a fundamental level.\nNow comes a similarly audacious pursuit to subsume under general principles, perhaps even laws, the essence of intelligence wherever it\u2019s found. \u201cTruth is ever to be found in simplicity, and not in the multiplicity and confusion of things,\u201d Newton said.\nSo far as intelligence goes, we are pre-Newtonian. Commonalities of intelligence shared by cells, dolphins, plants, birds, robots and humans seem, if not preposterous, at least far-fetched.\nYet rich exchanges among artificial intelligence, cognitive psychology, and the neurosciences, for a start, aim exactly toward Newton\u2019s \u201ctruth in simplicity,\u201d those underlying principles (maybe laws) that will connect these disparate entities together. The pursuit\u2019s formal name is computational rationality. What is it exactly, we ask? Who, or what, exhibits it?\n\u00a0The pursuit is inspired by the general agreement in the sciences of mind that intelligence arises not from the medium that embodies it\u2014whether biological or electronic\u2014but the way interactions among elements in the system are arranged. Intelligence begins when a system identifies a goal, learns (from a teacher, a training set, or an experience) and then moves on autonomously, adapting to a complex, changing environment. 
Another way of looking at this is that intelligent entities are networks, often hierarchies of intelligent systems, humans certainly among the most complex, but congeries of humans even more so.\nThe three scientists postulate that three core ideas characterize intelligence. First, intelligent agents have goals, and form beliefs and plan actions that will best reach those goals. Second, calculating ideal best choices may be intractable for real-world problems, but rational algorithms can come close enough (\u201csatisfice\u201d in Herbert Simon\u2019s term) and incorporate the costs of computation. Third, these algorithms can be rationally adapted to the entity\u2019s specific needs, either off-line through engineering or evolutionary design, or online through meta-reasoning mechanisms that select the best strategy on the spot for a given situation.\nThough barely begun, the inquiry into computational rationality is already large and embraces multitudes. For example, biologists now talk easily about cognition, from the cellular to the symbolic level. Neuroscientists can identify computational strategies shared by both humans and animals. Dendrologists can show that trees communicate with each other (slowly) to warn of nearby enemies, like wood beetles: activate the toxins, neighbor.\nThe humanities themselves are comfortably at home here too, though it\u2019s taken many years for most of us to see that. And of course here belongs artificial intelligence, a key illuminator, inspiration, and provocateur.\nIt\u2019s news now; it will stay news because it\u2019s so fundamental; its evolving revelations will help us see our world, our universe, in a completely new way. And for those atremble at the perils of super-intelligent entities, surely understanding intelligence at this fundamental level is one of our best defenses.\u00a0\n \n Return to Table of Contents \n\n \n\n\n\n \n\n\n\n\n\n\n\n\n\n\n \n 2018 : WHAT IS THE LAST QUESTION? 
\n\n \n 2017 : WHAT SCIENTIFIC TERM OR\u00a0CONCEPT OUGHT TO BE MORE WIDELY KNOWN? \n\n \n 2016 : WHAT DO YOU CONSIDER THE MOST INTERESTING RECENT [SCIENTIFIC] NEWS? WHAT MAKES IT IMPORTANT? \n\n \n 2015 : WHAT DO YOU THINK ABOUT MACHINES THAT THINK? \n\n \n 2014 : WHAT SCIENTIFIC IDEA IS READY FOR RETIREMENT? \n\n \n 2013 : WHAT *SHOULD* WE BE WORRIED ABOUT? \n\n \n 2012 : WHAT IS YOUR FAVORITE DEEP, ELEGANT, OR BEAUTIFUL EXPLANATION? \n\n \n 2011 : WHAT SCIENTIFIC CONCEPT WOULD IMPROVE EVERYBODY'S COGNITIVE TOOLKIT? \n\n \n 2010 : HOW IS THE INTERNET CHANGING THE WAY YOU THINK? \n\n \n 2009 : WHAT WILL CHANGE EVERYTHING? \n\n \n 2008 : WHAT HAVE YOU CHANGED YOUR MIND ABOUT? WHY? \n\n \n 2007 : WHAT ARE YOU OPTIMISTIC ABOUT? \n\n \n 2006 : WHAT IS YOUR DANGEROUS IDEA? \n\n \n 2005 : WHAT DO YOU BELIEVE IS TRUE EVEN THOUGH YOU CANNOT PROVE IT? \n\n 2004 : WHAT'S YOUR LAW? \n\n 2003 : WHAT ARE THE PRESSING SCIENTIFIC ISSUES FOR THE NATION AND THE WORLD, AND WHAT IS YOUR ADVICE ON HOW I CAN BEGIN TO DEAL WITH THEM? - GWB \n\n 2002 : WHAT IS YOUR QUESTION? ... WHY? \n\n 2001 : WHAT NOW? \n\n 2001 : WHAT QUESTIONS HAVE DISAPPEARED? \n\n 2000 : WHAT IS TODAY'S MOST IMPORTANT UNREPORTED STORY? \n\n \n 1999 : WHAT IS THE MOST IMPORTANT INVENTION IN THE PAST TWO THOUSAND YEARS? \n\n 1998 : WHAT QUESTIONS ARE YOU ASKING YOURSELF? 
\n\n \n\n\n\n\n\n\n\n\n\n\n\nJohn Brockman, Editor and Publisher\nContact Info:[email\u00a0protected]\nIn the News\nGet Edge.org by email\n\nEdge.org is a nonprofit private operating foundation under Section 501(c)(3) of the Internal Revenue Code.\nCopyright \u00a9 2023 By Edge Foundation, Inc All Rights Reserved.\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n"},{"chosen":"\n\n\n\n90% of all claims about the problems with medical studies are wrong | Slate Star Codex\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHome\n\nAbout \/ Top Posts\nArchives\nTop Posts\n\nComments Feed\nRSS Feed\n\n\n\n\n\nSlate Star Codex\n\n\n\n\n\n\n\n\n\nBlogroll \nEconomics\n\nArtir Kel\nBryan Caplan\nDavid Friedman\nPseudoerasmus\nScott Sumner\nTyler Cowen\n\n\nEffective Altruism\n\n80000 Hours Blog\nEffective Altruism Forum\nGiveWell Blog\n\n\nRationality\n\nAlyssa Vance\nBeeminder\nElizabeth Van Nostrand\nGwern Branwen\nJacob Falkovich\nJeff Kaufman\nKatja Grace\nKelsey Piper\nLess Wrong\nPaul Christiano\nRobin Hanson\nSarah Constantin\nZack Davis\nZvi Mowshowitz\n\n\nScience\n\nAndrew Gelman\nGreg Cochran\nMichael Caton\nRazib Khan\nScott Aaronson\nStephan Guyenet\nSteve Hsu\n\n\nSSC Elsewhere\n\nSSC Discord Server\nSSC Podcast\nSSC Subreddit\nUnsong\n\n\nArchives\n\nJanuary 2021\nSeptember 2020\nJune 2020\nMay 2020\nApril 2020\nMarch 2020\nFebruary 2020\nJanuary 2020\nDecember 2019\nNovember 2019\nOctober 2019\nSeptember 2019\nAugust 2019\nJuly 2019\nJune 2019\nMay 2019\nApril 2019\nMarch 2019\nFebruary 2019\nJanuary 2019\nDecember 2018\nNovember 2018\nOctober 2018\nSeptember 2018\nAugust 2018\nJuly 2018\nJune 2018\nMay 2018\nApril 2018\nMarch 2018\nFebruary 2018\nJanuary 2018\nDecember 2017\nNovember 2017\nOctober 2017\nSeptember 2017\nAugust 2017\nJuly 2017\nJune 2017\nMay 2017\nApril 2017\nMarch 2017\nFebruary 2017\nJanuary 2017\nDecember 2016\nNovember 2016\nOctober 2016\nSeptember 2016\nAugust 
2016\nJuly 2016\nJune 2016\nMay 2016\nApril 2016\nMarch 2016\nFebruary 2016\nJanuary 2016\nDecember 2015\nNovember 2015\nOctober 2015\nSeptember 2015\nAugust 2015\nJuly 2015\nJune 2015\nMay 2015\nApril 2015\nMarch 2015\nFebruary 2015\nJanuary 2015\nDecember 2014\nNovember 2014\nOctober 2014\nSeptember 2014\nAugust 2014\nJuly 2014\nJune 2014\nMay 2014\nApril 2014\nMarch 2014\nFebruary 2014\nJanuary 2014\nDecember 2013\nNovember 2013\nOctober 2013\nSeptember 2013\nAugust 2013\nJuly 2013\nJune 2013\nMay 2013\nApril 2013\nMarch 2013\nFebruary 2013\n\nFull Archives\n \n\n\n\n\n90% of all claims about the problems with medical studies are wrong\n\nPosted on February 17, 2013 by Scott Alexander \n\nI have frequently heard people cite John Ioannidis\u2019 apparent claim that \u201c90% of medical research is false\u201d.\nI think John Ioannidis is a brilliant person and I love his work and I think this statement points at a correct and important insight. But as phrased, I think this particular formulation when not paired with any caveats creates just a little more panic than is warranted.\nBefore I go further, Ioannidis\u2019 evidence:\nHe starts with simple statistics. Most studies are judged to have \u201cdiscovered\u201d a result if they reach p < 0.05, that is, if there is 5% probability or less the findings are due to mere chance (this is the best case scenario, where the study is totally free from bias or methodological flaws).\nSuppose you throw a dart at the Big Chart O\u2019 Human Metabolic Pathways and supplement your experimental group with the chemical you hit. Then ten years later you come back and see how many of them died of heart attacks.\nMost chemicals on the Big Chart probably don\u2019t prevent heart attacks. Let\u2019s say only one in a thousand do. Maybe your study will successfully find that 1\/1000. But the 999 inactive chemicals will also throw up about 50 (999 * 5%) false positives significant at the 5% level. 
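To make the dart-throwing arithmetic concrete, here is a quick back-of-envelope calculation. This is my own sketch, not from the post itself; the 1-in-1000 base rate and the perfect-detection (power = 1) best case are the post's illustrative assumptions.

```python
# Back-of-envelope for the dart-throwing example above.
# Assumptions (mine, for illustration): 1 of 1000 chemicals truly prevents
# heart attacks, the significance threshold is p < 0.05, and the one real
# effect is always detected (power = 1), i.e. the post's best-case scenario.
n_chemicals = 1000
true_effects = 1
alpha = 0.05

false_positives = (n_chemicals - true_effects) * alpha  # 999 * 5%, about 50 spurious hits
true_positives = true_effects                           # best case: the real one is found

ppv = true_positives / (true_positives + false_positives)
print(f"expected false positives: {false_positives:.0f}")  # about 50
print(f"P(true | significant result): {ppv:.1%}")          # about 2%, so ~98% chance of being false
```

The positive predictive value of roughly 2% is just the one true hit divided by the roughly 51 total "significant" results.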
Therefore, even if you conduct your study perfectly, and it shows a significant decrease in heart attacks, there\u2019s about a 98% chance it\u2019s false.\nOne would hope medical scientists plan their studies with a little more care than throwing a dart at a metabolic chart. Yet many don\u2019t; a lot of genetic research is conducted by checking every single gene against the characteristic of interest and seeing if any stick. And even when scientists have well-thought-out theories, the inherent difficulty of medicine means they probably have less than a 50-50 chance of being right the first time, which means a 5% significance level has a less than 5% predictive value.\nAnd this isn\u2019t even counting publication bias or poor methodology or conflicts of interest or anything like that.\nDisturbingly, this problem seems to be borne out in empirical tests. Amgen Pharmaceuticals says it repeated experiments in 53 important papers and was only able to confirm 6. And Ioannidis himself did a re-analysis which is quoted as finding that \u201c41% of the most influential studies in medicine have been convincingly shown to be wrong or significantly exaggerated.\u201d\nSo I don\u2019t at all disagree with the general consensus that this is a huge problem. But I do disagree with the following statements:\n1. 90% of all medical research is wrong.\n2. A given study you read, or your doctor reads, is 90% likely to be wrong.\n3. 90% of the things doctors believe, presumably based on these medical findings, are wrong.\n4. This proves the medical establishment is clueless and hopelessly irrational and that two smart people working in a basement for five minutes can discover a new medical science far better than what all doctors could have produced in seventy years.\nIs 90% of all medical research wrong?\nAs far as I can tell, there is no source at all for the 90% figure. I can\u2019t find it in any of Ioannidis\u2019 studies and indeed they contradict it. 
His table of predictive values of different studies doesn\u2019t have any entries that correspond to 90% (\u201cunderpowered exploratory epidemiological study\u201d is relatively close with 88%, but this is just for that one type of study, which is known to be especially bad). The Atlantic sums it up as:\nHis model predicted, in different fields of medical research, rates of wrongness roughly corresponding to the observed rates at which findings were later convincingly refuted: 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.\nNotice which number is conspicuously missing from that excerpt.\nNow another study of his did show that in 90% of studies with very large effect sizes, later research eventually found the effect size to be smaller, but this was out of a pool of studies specifically selected for being surprising and likely to be false. I don\u2019t think it\u2019s the source of the number and if it were that would be terrible.\nAs far as I can tell, this started from a quote in an Atlantic article on Ioannidis which included the line \u201che charges that as much as 90 percent of the published medical information that doctors rely on is flawed\u201d. This then got turned into the title of a Time article \u201cA Researcher\u2019s Claim: 90% of Medical Research Is Wrong\u201d, which itself got perverted to 90% of Medical Research Is Completely False.\nSo an unsourced quote that up to 90% of studies are flawed has somehow turned into a rallying cry that it has been proven that at least 90% of studies are false. To take this seriously we would have to believe that the numbers for all research are the same as the numbers for the poorly conducted epidemiological studies or the studies specifically selected for surprising results. 
I guess having a nice round number is good insofar as it makes the public pay attention to this field, but as far as actual numbers go, it\u2019s kind of made up.\nIs any given study you read, or your doctor reads, 90% likely to be wrong?\nBut let\u2019s take the above number at face value and say that 90% of medical studies are wrong. Fine. Does that mean the last medical study you read about in Scientific American, or that your doctor used to recommend you a new drug, is wrong?\nNo. Let\u2019s look at the Medical Evidence Pyramid.\nThe medical evidence pyramid is much like all pyramids, in that the bottom levels are infested with snakes and booby traps and vengeful medical evidence mummies. It\u2019s only after you reach the top few levels that you get the gold and jewels and precious, precious mummy powder.\nThis plays out in the same table of Ioannidis\u2019 speculations we saw before. While an in vitro study of the type used to identify possible drug targets might have a positive predictive value of 0.1%, a good meta-analysis or great RCT has a positive predictive value of 85%; that is, it\u2019s 85% likely to be true.\nThere are only two reasons someone might hear about the studies on the snake-infested bottom levels of the pyramid. Number one, that person is a specialist in the field who is valiantly trying to read through the entire niche medical journal the paper was published in. Or number two, the study found something incredible like DONUTS CURE CANCER IN A SAMPLE OF THREE LAB RATS!!! and the media decided to pick up on it. Hopefully everyone already ignores studies of the DONUTS CURE CANCER IN A SAMPLE OF THREE LAB RATS!!! type; if not, there\u2019s really not much I can say to you.\nBut most of the medical results that you hear about are the ones that get published in important journals and are trumpeted far and wide as important medical results. These are closer to the top of the pyramid than to the bottom. 
They\u2019re usually big expensive studies on thousands of people. Since the universities, hospitals, and corporations sponsoring them aren\u2019t idiots, they usually hire a decent statistician or two to make sure that they don\u2019t spend $300,000 testing something only to have a letter to the editor of the NEJM point out that they forgot to blind their subjects so it\u2019s totally worthless. And finally, in many cases you would only run a study that big and expensive if you had something plausible to test \u2013 you\u2019re not going to spend $300,000 just to throw a dart at the Big Chart O\u2019 Human Metabolic Pathways and see what happens.\nSo these studies that people actually hear about are bigger, they have more incentives to get their methodology right, and they\u2019re testing propositions with high plausibility. How do they do?\nI said above that one of Ioannidis\u2019 studies was frequently quoted as saying that \u201c41% of the most influential studies in medicine have been convincingly shown to be wrong or significantly exaggerated.\u201d\nThis is from a great study I totally endorse, but the 41% number was maximized for scariness. If I wanted to bias my reporting the other direction, I could equally well report the same results as \u201cOnly about 5% of influential medical experiments with adequate sample size have later been contradicted.\u201d\nHow? Ioannidis got his result by taking all medical studies with over 1000 citations in the \u201990s, of which there were 49. Of these, 4 were negative results (ie \u201cX doesn\u2019t work\u201d) so he threw them out. This is the first part I think is kind of unfair. 
Yes, negative results aren\u2019t as sexy as positive results, but they\u2019re still influential medical research, and if Ioannidis is quoted as saying that X% of medical findings are later contradicted when he means that X% of positive medical findings are, that\u2019s not quite fair.\nAnnnnyway, of the 45 famous studies with positive findings, 11 didn\u2019t really get tested and so we don\u2019t know if they\u2019re right or wrong. Eliminating these is also a potential bias, because we expect that studies which seem sketchy are more likely to be replicated so people can find out if they\u2019re actually right. Ioannidis quite rightly set himself a higher bar by not eliminating them, but the quote about 41% of studies being wrong does seem to have gone back and eliminated them \u2013 at least that\u2019s the only way I can make the study numbers add up to 41% (the numbers given in the study actually say 32% of these studies failed to replicate).\nSo our 41% number is based off of 34 studies, best described as \u201c34 famous medical studies that found positive findings ie the least believable kind of finding, plus were suspicious enough that someone wanted to replicate them\u201d.\nOf these 34 studies, 7 were outright contradicted. Bad? Definitely. But for example, one of them was a study with a sample size of nine patients. Another study may well have been correct, but the results were interpreted wrongly (it said that estrogen decreased lipoprotein levels which everyone assumed meant decreased heart disease, but in fact later studies found increased heart disease without necessarily disproving the lipoprotein levels). Five of the six others were epidemiological trials, firmly on the middle of the pyramid. Only two of these contradicted studies were true experiments with a sample size of >10.\n(even here, I am sort of skeptical. Three of these disproven studies, two epidemiologicals and an experimental, purported to show Vitamin E decreased heart disease. 
Then a single better trial showed that Vitamin E did not decrease heart disease. While recognizing the last trial was better, it does seem like something more complicated is going on here than \u201call three of the earlier trials were just wrong\u201d, and I\u2019ve recently been convinced antioxidant research is a huge minefield where tiny differences in protocol can cause big differences in results. But fine, let\u2019s grant this one and say there were two outright-contradicted experiments.)\nSo aside from the seven that were outright wrong, another seven were listed as \u201coverstating their results\u201d.\nThere are a couple of problems that bothered me here. One of them was that Ioannidis decided to count studies as contradicting each other if relative risk in one study was half or less than in the other study, \u201cregardless of whether confidence intervals might overlap or not\u201d. So even if a study effectively said \u201cHere is a wide range of possible results, we think it\u2019s about here in the middle but our research is consistent with it being anywhere in this range\u201d, if another study got somewhere else in that range, the first study was marked as \u201cexaggerated\u201d.\nThe second problem is, once again, poor studies versus poor interpretations. Ioannidis cites as an example of an exaggerated study one lasting a year and showing that the drug zidovudine helped slow the progression of HIV to AIDS. It concluded that giving HIV patients long-term zidovudine was probably a good idea. A later study lasted longer, and said that yes, zidovudine worked for a year, but then it stopped working. Because the earlier study had suggested longer-term zidovudine, it was marked as \u201cexaggerated results\u201d, even though the results of both studies were totally consistent with one another (both found that zidovudine worked for the first year). 
This is probably of little consolation to AIDS patients who were treated with a useless drug, but it seems pretty important if we\u2019re investigating study methodology.\nSo the way I got my 5% figure was to take the two experimental studies with decent sample sizes which were actually contradicted and compare them to the 38 large experimental studies total that started the experiment.\nSo this suggests that if you see a large experimental study being trumpeted in the medical literature, the chance that it will be found to be totally false (as opposed to true but exaggerated) within ten years or so is only about 5% \u2013 which if you understand p-values is about what you should have believed already.\n(I think. This requires quite a few assumptions, not the least of which is that my calculations above are correct!)\nAlso worth noting: Ioannidis\u2019 experiment did not investigate the absolute highest level of the medical pyramid, systematic reviews and meta-analyses. I expect the best of these to be better than any individual study.\n3. Are 90% of the things doctors believe, presumably based on medical findings, wrong?\nAfter going through the steps above, it should be pretty obvious that the answer is no, because doctors are mostly reading famous influential studies like the ones mentioned above, which are at worst 40% and at best 5% wrong.\nBut there\u2019s another factor to be taken into account, which is that why would you only read one study on something when lots of important findings have been investigated multiple times?\nSuppose that you\u2019re throwing darts at the Big Chart O\u2019 Human Metabolic Pathways, with your 1\/1000 base rate of true hypotheses. You run a very good methodologically sound study and find p = .05. But now there\u2019s still only a 1\/50 chance your hypothesis is correct.\nBut another team in China runs the same study, and they also find p = .05. 
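As a quick sanity check on the arithmetic so far, here is a minimal Python sketch. It assumes, as this post does loosely, that a methodologically sound study reaching p = .05 can be treated as a likelihood ratio (Bayes factor) of about 20 in favor of the hypothesis; that is the post's simplification, not a general statistical rule.

```python
# Toy odds arithmetic for the dart-throwing scenario above.
# Assumption (this post's simplification, not a general statistical rule):
# each sound study reaching p = .05 multiplies the odds in favor by ~20.

def update(odds, bayes_factor=20):
    """Multiply the odds-in-favor of the hypothesis by the study's Bayes factor."""
    return odds * bayes_factor

prior = 1 / 1000                              # base rate: 1 true hypothesis per 1000 darts
after_one_study = update(prior)               # 1:50 odds, i.e. still only ~2% chance of truth
after_two_studies = update(after_one_study)   # ~1:2.5 once the replication comes in

for label, odds in [("prior", prior),
                    ("one study", after_one_study),
                    ("two studies", after_two_studies)]:
    prob = odds / (1 + odds)                  # convert odds-in-favor to a probability
    print(f"{label}: odds {odds:g}, P(true) = {prob:.3f}")
```

Two more p = .05 replications would multiply the odds by 20 twice more, to 160:1 in favor (about 99.4% sure), the same ballpark as the 1:200 / 99.5% figure the post reaches with rounding.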
We expect the Chinese to get false to true results at a rate of one to two (because the 1 in the 1\/50 stays 1, but the 50 is divided by 20 to produce approximately 2. Wow, I\u2019m even worse at explaining math than I am at doing it.)\nNow a team in, oh, let\u2019s say Turkey runs the same study, and they also find p = .05. We expect the Turks to get false to true results at a rate of one to ten, for, uh, the same math reasons as the Chinese. When the, um, Icelanders repeat the study, our odds go to one to two hundred.\nSo we started with 1000:1 odds, the first study brought us up to 50:1 odds, the second study to 2:1 odds, the third study to 1:10 odds, and the fourth study to 1:200 odds, ie we are now 99.5% sure we\u2019re right.\nReal medicine is both better and worse than this. It\u2019s better in that we often have dozens of studies rather than just four. It\u2019s worse in that the studies are not all so methodologically sound that we can multiply our odds by 20 each time (to put it lightly).\nBut some of them are, and once we get enough of them, the base rate problems which plague individual medical findings go away very quickly. Even if only one of the studies is methodologically sound, if the reason they\u2019re studying their topic is because a bunch of other less believable studies all got positive results, that\u2019s a much better base rate than \u201cbecause I hit it with my dart\u201d.\nWhen doctors say that, for example, iron supplements help anaemia, it\u2019s not because they hit iron on their Big Chart O\u2019 Human Metabolic Pathways, then ran a single study, got p = .05, and rushed off to publish a medical textbook. 
It\u2019s because they knew hemoglobin had iron in it, there are at least 21 randomized controlled studies, probably some had p-values closer to .001 than to .05 even though I don\u2019t have any of them in front of me to check, and eventually some really really smart statisticians at the Cochrane Collaboration gave it their seal of approval. Most doctors\u2019 beliefs aren\u2019t on quite this high a level, but most doctors\u2019 beliefs aren\u2019t on the \u201cSomeone threw a dart, then did one study\u201d level either.\n4. Does this prove the medical establishment is clueless and hopelessly irrational and that two smart people working in a basement for five minutes can discover a new medical science far better than what all doctors could have produced in seventy years?\nA lot of people seem to go from Ioannidis\u2019 experiment to something like \u201cSo I guess everyone in medicine is just clueless about how science and statistics work. I\u2019ll go read a couple of medical studies and then be able to outperform everyone in this totally flawed field.\u201d\n(important note: I\u2019m not accusing MetaMed of this! They seem pretty sane. I am accusing some people I come across in the community who are much more enthusiastic than the relatively sober MetaMed people of doing something like this.)\nBut the problem isn\u2019t that no one in medicine is familiar with Ioannidis\u2019 research. It\u2019s that they\u2019re not really sure what to do about it and figuring out a plan and implementing it will take time and effort.\nIoannidis\u2019 work isn\u2019t exactly secret. I\u2019ve hung out with groups of residents (ie trainee doctors) who have discussed Ioannidis\u2019 findings over the dinner table. According to The Atlantic\nTo say that Ioannidis\u2019s work has been embraced would be an understatement. 
His PLoS Medicine paper is the most downloaded in the journal\u2019s history, and it\u2019s not even Ioannidis\u2019s most-cited work\u2014that would be a paper he published in Nature Genetics on the problems with gene-link studies. Other researchers are eager to work with him: he has published papers with 1,328 different co-authors at 538 institutions in 43 countries, he says. Last year he received, by his estimate, invitations to speak at 1,000 conferences and institutions around the world, and he was accepting an average of about five invitations a month until a case last year of excessive-travel-induced vertigo led him to cut back.\nSo if so many people are aware of this, why isn\u2019t the problem getting fixed more quickly?\nAn optimist could say the problem isn\u2019t getting fixed because there is no problem. A vast volume of embarrassingly wrong medical literature gets published, inflates the publishers\u2019 resumes, and everyone else ignores it and concentrates on the not-really-so-bad large randomized trials. To the post-cynic it is all a smooth, well-functioning machine.\nA pessimist might say that the problem isn\u2019t getting fixed because it\u2019s impossible. The average medical hypothesis is always going to have a low base rate of being true \u2013 in fact, if we force scientists to only study high base-rate hypotheses, by definition everything we discover will be boring. There will never be enough resources to apply huge rigorous trials to every one of the millions of things worth studying. So we\u2019re always going to have weak studies about low-base-rate hypotheses, which is what Ioannidis is attacking as the recipe for failure.\nA realist might point out there are some things we can do, but it involves coordinating a huge and complicated system with many moving parts. Journals can force trials to register before they conduct their experiments to avoid publication bias. 
The scientific community can give more status to people who perform important replications and especially important negative replications. Study authors and the media can come up with better ways to report their results to doctors and the public without blowing them out of proportion. Statisticians can\u2026actually, anything I say statisticians can do is just going to be a mysterious answer, along the lines of \u201cdo better statistics stuff\u201d, so I\u2019m not going to embarrass myself by completing this sentence except to postulate that I\u2019ll bet there\u2019s some recommendation that could complete it usefully.\nBut all these things involve vague entities who aren\u2019t really actors (\u201cthe scientific community\u201d, \u201cthe media\u201d) acting in ways that are kind of against their immediate incentives. This is hard to make people do and usually involves a lot of grassroots coordination effort. Which is going on. But it takes time.\nBut no matter what happens, I think a useful epistemic habit is to be very skeptical of individual studies, and skeptical but not too skeptical of large randomized trials, good meta-analyses, and general medical consensus when supported by an evidence base.\n\n26 Responses to 90% of all claims about the problems with medical studies are wrong\n\ngwern says: \n\nFebruary 17, 2013 at 7:17 pm \n> It\u2019s only after you reach the top few levels that you get the gold and jewels and precious, precious mummy powder. 
This plays out in the same table of Ioannides\u2019 speculations we saw before.\nThe hyperlink in both seems to be the same.\n> Also worth noting: Ioannides\u2019 experiment did not investigate the absolute highest level of the medical pyramid, systematic reviews and meta-analyses. I expect the best of these to be better than any individual study.\nIsn\u2019t that covered in http:\/\/www.plosmedicine.org\/article\/fetchObject.action?uri=info:doi\/10.1371\/journal.pmed.0020124.t004&representation=PNG_M ? Second row, \u2018Confirmatory meta-analysis of good-quality RCTs\u2019, PPV=0.85\n(Hm, I wonder why that meta-analysis has the same PPV as a \u2018Adequately powered RCT with little bias and 1:1 pre-study odds\u2019\u2026 Maybe adequately powered here means a study sample size equivalent to that of meta-analyses pooling many underpowered studies.)\nSpeaking of which, I can\u2019t believe I missed that chart while reading that paper originally. That changes everything: it is our priors for medical research!\n> After going through the steps above, it should be pretty obvious that the answer is no, because doctors are mostly reading famous influential studies like the ones mentioned above, which are at worst 40% and at best 5% wrong.\nI don\u2019t think this is obvious at all. If doctors did not take a step without consulting a Cochran Review, then yeah, any individual therapy or treatment will have that nice 5-40% chance of being wrong. But is that the case? I was under the impression that the evidence-based medicine folks had made lists and examined old standard treatments with actual RCTs and found that many treatments or medicines had never been genuinely tested, and when they were, often failed.\n\n\n\n\n\n\n\n\n Scott Alexander says: \n\n\t\t\tFebruary 17, 2013 at 8:22 pm \nTable link fixed.\nWhen I said he doesn\u2019t include meta-analyses, I meant in that particular study of the top 49 most cited medical studies. 
I agree he certainly considers them from a theoretical perspective.\n\u201cI was under the impression that the evidence-based medicine folks had made lists and examined old standard treatments with actual RCTs and found that many treatments or medicines had never been genuinely tested, and when they were, often failed.\u201d\nI am under the impression that most medicine now is evidence-based medicine.\n\n\n\n\n\n\n\n\n gwern says: \n\n\t\t\tFebruary 17, 2013 at 8:42 pm \n> I am under the impression that most medicine now is evidence-based medicine.\nOne would certainly like to think that, but given the long track record of medicine, it\u2019s not something I would believe until I saw a paper asserting that the majority of operations performed in some sample had evidence-based medicine backing\u2026\n\n\n\n\n\n\n\n\n Michael vassar says: \n\n\t\t\tFebruary 18, 2013 at 9:59 am \nIs that itself a medical claim? An EBM claim?\nHow did you conclude this?\nI really really think you should talk with some of the doctors MetaMed works with about this. I certainly don\u2019t think anu of them agree\n\n\n\n\n\n\n\n\n\n\n\n\n Andrew Hunter says: \n\n\t\t\tFebruary 17, 2013 at 7:25 pm \nSide note: How many times has MetaMed changed its name?\nI mean, they seem like smart people and the basic idea is good, but constant renames is one of those classic signs of a Silicon Valley startup that has no idea what they\u2019re doing and is stuck bikeshedding\u2013it doesn\u2019t give me a particularly good feeling about it.\n\n\n\n\n\n\n\n\n Scott Alexander says: \n\n\t\t\tFebruary 17, 2013 at 8:19 pm \nThey\u2019ve had legitimate reasons for most of the times they\u2019ve changed their name, and they\u2019re not really public yet so it doesn\u2019t really count.\n\n\n\n\n\n\n\n\n\n\n Sarah says: \n\n\t\t\tFebruary 17, 2013 at 7:56 pm \n1. 
I, for one, don\u2019t go around claiming that 90% of research studies are false; I believe Ioannides only found a little more than half of medical studies were disconfirmed by later experiments. 90% of all *pre-clinical cancer studies* are later disconfirmed, which does mean that if you see a study that says \u201cX cures cancer\u201d but X hasn\u2019t made it to clinical trials yet, X probably *doesn\u2019t* cure cancer.\n2. I don\u2019t expect most things doctors believe to be wrong. I don\u2019t even expect most things *people* believe to be wrong; after all, most things people believe are of the form \u201cwater is wet,\u201d so uncontroversial that we barely notice them as beliefs. What I *do* expect to be wrong are beliefs that aren\u2019t based on a model of the world. \n\u201cIron supplements cure anemia\u201d is a belief that depends on knowing how hemoglobin works, knowing how digestion works, having observed iron supplements cure anemia\u2026lots of different kinds of evidence, at different scales of complexity, confirm the prediction. \n\u201cAntioxidants reduce cancer risk\u201d is an example of the kind of belief we should be skeptical about. Free radical damage may lead to cancer; antioxidants stabilize free radicals; so one might think antioxidants prevent cancer. Some early clinical trials found that taking antioxidants reduced cancer risk; but further study found that they probably don\u2019t. And there\u2019s some evidence that \u201cfree radicals\u201d (or reactive oxygen species) are the mechanism by which the immune system and chemotherapy drugs attack cancers, so antioxidants, if anything, protect cancer cells. 
We don\u2019t have a clear model of what antioxidants do, in the same way that we have a clear model of what role iron plays in the blood, so we ought to be skeptical of any conclusions about \u201cantioxidants are good for you\u201d or \u201cantioxidants are bad for you.\u201d And we ought to be *especially* skeptical of anything that assumes a cause (eating antioxidants) will result in a health outcome (less cancer) merely because the cause affects an intermediate biochemical step (stabilizing free radicals).\nIf there\u2019s a prevailing theory in the biomedical sciences that a.) relies on complex chains of genetic and biochemical causation, and b.) hasn\u2019t shown measurable results in the form of lower death rates, then I\u2019m going to tag it as improbable.\n\n\n\n\n\n\n\n\n Scott Alexander says: \n\n\t\t\tFebruary 17, 2013 at 8:31 pm \n1. I didn\u2019t mean to \u201caccuse\u201d you of saying this. I meant other, less reputable sources like Time Magazine and most of the rest of the media.\n2. I\u2019m not sure to what degree I agree with your emphasis on understanding mechanism. There are a lot of things that work in biology without us having any idea how. Natural selection was a pretty good example until we discovered genes. Another good example would be digitalis, which was used to treat heart failure since the 1700s but whose full mechanism of action was only discovered in the last few decades. I prefer good experimental results to good theoretical explanation, but the caveat which I think you\u2019re trying to point out is that they had better be *good* experimental results, as opposed to marginal experimental results.\n3. The last part wasn\u2019t meant as an attack on MetaMed, just on some of the people who try to talk about this on Less Wrong. I\u2019ve edited the post to try to make this slightly clearer.\n\n\n\n\n\n\n\n\n\n\n Sarah says: \n\n\t\t\tFebruary 17, 2013 at 8:15 pm \nA defense of MetaMed:\n1. 
We are not a Silicon Valley startup, we aren\u2019t even based in Silicon Valley, we don\u2019t belong to that culture, we\u2019re not a consumer web app, there\u2019s really no sense in which that\u2019s the right reference class.\n2. We are only doing a major publicity launch at the end of this *month*, so name changes are not really a practical issue. The name changes were in response to market research \u2014 people are bad at intuiting what kinds of names appeal to customers, and we finally settled on MetaMed after our marketing team put a lot of effort into finding out what made the best impression. It\u2019s just window dressing; the internal structure of the company has stayed the same.\n3. It\u2019s useless to promise we can fix a hard problem until we succeed in fixing it. But we\u2019re not claiming to be able to waltz in and fix medicine with no effort. One of the main things we\u2019re doing is a bit more modest: just *assembling* a coherent model out of the research that already exists. \nWe know, for example, that statistical prediction rules work very well for medical diagnosis and risk prediction; these are just simple little combinations of a few known risk factors for, say, heart attacks, that give each patient a score rating their hart attack risk. But these rules only exist for a few special cases in medicine. For most diseases, nobody has gone around combining all the risk factors and all the signs and symptoms into a single statistical model that says \u201cif we know X, Y, and Z about you, here\u2019s how likely you are to have disease A.\u201d \nOne way of looking at MetaMed\u2019s job is that we combine, reorganize, and quantify the existing scientific literature. \nNow, in a sense, you could say that medical culture already does this; doctors pretty much know, from some combination of their clinical experience, their med school education, and whatever research they have time to read, how to diagnose diseases and choose treatments. 
But this is, one has to admit, an imperfect process. Human minds are very bad at intuitively putting disparate pieces of information together. That\u2019s *why* things like checklists and statistical prediction rules can outperform clinicians; using intuitive judgment, you\u2019ll forget things. Formally organizing the research literature into prediction models is a kind of safeguard on expert judgment, and I think it\u2019s quite likely to catch things doctors miss.\n\n\n\n\n\n\n\n\n Deiseach says: \n\n\t\t\tFebruary 18, 2013 at 1:53 am \n(1) Statistics are much more complicated than people (even smart, educated, knowledgeable in their field people) think.\n(2) Medicine is an art more than a science.\n(3) Confusion reigns. For family reasons, I\u2019ve been scouring online resources for information on diabetes diets, and I\u2019m getting confusing recommendations even on the very same website; e.g. carbohydrates increase blood sugar \u2013 well and good; starch as well as sugar needs to be watched \u2013 fine, tell me more; eat vegetables rather than fruit \u2013 okay, what vegetables; eat carrots, they\u2019re low-GI \u2013 no, don\u2019t eat carrots, they\u2019re loaded with sugar! Eat peas \u2013 no, don\u2019t eat peas, they\u2019re full of starch and starch is bad!\nThe conclusion I am left with is that the only safe diet (for anything) is rainwater and moss \ud83d\ude41\n\n\n\n\n\n\n\n\n Michael vassar says: \n\n\t\t\tFebruary 18, 2013 at 10:06 am \nThis is exactly what MetaMed is for. 
It\u2019s definitely the case that 1, 2, and 3 are true, but 1 is largely a consequence of accepting too low a standard when seeing people as smatt and knowledgeable\n\n\n\n\n\n\n\n\n\n\n Sniffnoy says: \n\n\t\t\tFebruary 18, 2013 at 2:22 am \nSo this suggests that if you see a large experimental study being trumpeted in the medical literature, the chance that it will be found to be totally false (as opposed to true but exaggerated) within ten years or so is only about 5% \u2013 which if you understand p-values is about what you should have believed already.\n(I think. This requires quite a few assumptions, not the least of which is that my calculations above are correct!)\nNot really. P-values are not how likely something is to be wrong or invalid. Rather, they\u2019re how likely this data was to show up if you were wrong, i.e., they\u2019re P(E | not H) rather than P(not H | E). (Except they\u2019re not really that, either \u2014 they\u2019re just how likely this data was to show up if you were wrong in a particular way, i.e. the null hypothesis.)\nAnd yes this is very counterintuitive and hence why everybody gets them wrong.\n\n\n\n\n\n\n\n\n Scott Alexander says: \n\n\t\t\tFebruary 18, 2013 at 1:03 pm \nThe 5% number comes not from p-values but from the empirical observation that of 40 studies analyzed, 2 were wrong. This matches the number of studies that would be wrong merely by chance if we only took the p-value into account rather than the base rate.\n\n\n\n\n\n\n\n\n Sniffnoy says: \n\n\t\t\tFebruary 19, 2013 at 5:58 am \nYes, but you say that the empirical 5% matches what you\u2019d expect from a p-value of 5%, and I don\u2019t think that\u2019s correct. Unless you just mean \u201chey look these numbers are the same!\u201d which doesn\u2019t really mean anything by itself.\nI mean, you talk about just taking the p-value into account rather than the base rate, but it\u2019s not at all clear to me that the way you do so is meaningful. 
Just considering the equation P(H|E)=P(E|H)*P(H)\/P(E), you\u2019re suggesting that we \u201cdon\u2019t take into account base rate\u201d by assuming P(H)\/P(E) is about 1? I really don\u2019t see what makes such an assumption reasonable.\nNow if you want to say, \u201cLet\u2019s not worry about what P(H) is, and so just assume P(H)\/P(E) is some constant\u201d, that might make more sense. But then you can\u2019t get any particular number out of it.\n\n\n\n\n\n\n\n\n\n\n\n\n Elissa says: \n\n\t\t\tFebruary 18, 2013 at 3:59 am \nThanks for looking into the 90% thing. Ioannidis misspelled as \u201cIoannides\u201d throughout.\n\n\n\n\n\n\n\n\n jason says: \n\n\t\t\tFebruary 18, 2013 at 8:08 am \nWhat percentage of studies that contradict famous studies with positive findings are false?\n\n\n\n\n\n\n\n\n gwern says: \n\n\t\t\tFebruary 18, 2013 at 10:26 am \nI believe, although I haven\u2019t checked, that the studies Ioannidis looked at were always larger or otherwise better than the original studies being tested (since this would be the only sensible approach); hence, if there\u2019s a disagreement, either way\u2026.\n\n\n\n\n\n\n\n\n\n\n Alyssa Vance says: \n\n\t\t\tFebruary 18, 2013 at 11:31 am \nFor what it\u2019s worth, I don\u2019t know of anyone at MetaMed who has ever claimed that 90% of studies are false, so I think your first sentence might be straw manning. Me and several others have claimed that 80% are false, but that\u2019s much more in line with his actual results, as you note.\n\n\n\n\n\n\n\n\n Scott Alexander says: \n\n\t\t\tFebruary 18, 2013 at 1:08 pm \nDo feel like I\u2019ve heard the 90% number somewhere, possibly somewhere nonofficial in private conversation with someone, but unless I can track down a source I\u2019ll edit it out with apologies. I still think \u201c80% of non-experimental studies\u201d is a pretty big caveat compared to \u201c80% of research\u201d but I have no idea how this was phrased and for all I know you said it that way. 
Sorry about that.\n\n\n\n\n\n\n\n\n SGr says: \n\n\t\t\tDecember 5, 2015 at 8:34 am \nYour link to the 90% claim:\nhttp:\/\/therefusers.com\/refusers-newsroom\/90-of-peer-reviewed-clinical-research-is-completely-false-greenmedinfo\/#.VmLbj_mrTNN\n\n\n\n\n\n\n\n\n\n\n\n\n Nancy Lebovitz says: \n\n\t\t\tFebruary 18, 2013 at 7:06 pm \nI thought a lot of the point of MetaMed was to find sound but neglected research\u2013 at least as much that as debunking bad research.\n\n\n\n\n\n\nPingback: Future tense | Slate Star Codex\n\n\nPingback: MetaMed launch day | Slate Star Codex\n\n\n\n\n DanielLC says: \n\n\t\t\tApril 22, 2014 at 12:49 am \n> Of these, 4 were negative results (ie \u201cX doesn\u2019t work\u201d) so he threw them out. This is the first part I think is kind of unfair.\nI disagree. Negative results don\u2019t show that an effect isn\u2019t there. They just show that it\u2019s too small to see with that sample size. A negative result being later disproven does not show a flaw in the original study.\nIf you show that the effect is there and is large enough that the first study shouldn\u2019t have missed it, that\u2019s a problem, but it makes this way more complicated so it\u2019s easier just to ignore those studies.\nThinking about this more, I guess they\u2019d have to do something like this either way, to show that the study didn\u2019t just fail to replicate because the second study had a false negative. 
Either they\u2019d have to look at ones where the study where it fails is much more powerful, or they\u2019d have to use a two-tailed T-test and show that the two studies shouldn\u2019t result from the same effect.\n\nPingback: The Problem With Connection Theory | The Rationalist Conspiracy\n\nQ says: \n\nApril 28, 2014 at 4:10 pm \nHere is a 90% number: \u201cHe (Ioannidis) charges that as much as 90 percent of the published medical information that doctors rely on is flawed.\u201d\nhttp:\/\/www.theatlantic.com\/magazine\/archive\/2010\/11\/lies-damned-lies-and-medical-science\/308269\/\n
But extra cash helps pay for contest prizes, meetup expenses, and me spending extra time blogging instead of working.\nJane Street is a quantitative trading firm with a focus on technology and collaborative problem solving. We're always hiring talented programmers, traders, and researchers and have internships and fulltime positions in New York, London, and Hong Kong. No background in finance required. \n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","rejected":"\n\n\n\n\nOff the Convex Path \u2013 Off the convex path\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAbout\n\nSubscribe\n\n\n\n\n\n\n\n\n\nOff the Convex Path\n\n\nContributors\n\nSanjeev Arora\nNisheeth Vishnoi\nNadav Cohen\n\nFormer contributors:\n\nMoritz Hardt\n\nMission statement\nThe notion of convexity underlies a lot of beautiful mathematics. When combined with computation, it gives rise to the area of convex optimization that has had a huge impact on understanding and improving the world we live in. However, convexity does not provide all the answers. Many procedures in statistics, machine learning and nature at large\u2014Bayesian inference, deep learning, protein folding\u2014successfully solve non-convex problems that are NP-hard, i.e., intractable on worst-case instances. Moreover, often nature or humans choose methods that are inefficient in the worst case to solve problems in P.\nCan we develop a theory to resolve this mismatch between reality and the predictions of worst-case analysis? Such a theory could identify structure in natural inputs that helps sidestep worst-case complexity.\nThis blog is dedicated to the idea that optimization methods\u2014whether created by humans or nature, whether convex or nonconvex\u2014are exciting objects of study and, often lead to useful algorithms and insights into nature. 
This study can be seen as an extension of classical mathematical fields such as dynamical systems and differential equations among others, but with the important addition of the notion of computational efficiency.\nWe will report on interesting research directions and open problems, and highlight progress that has been made. We will write articles ourselves as well as encourage others to contribute. In doing so, we hope to generate an active dialog between theorists, scientists and practitioners and to motivate a generation of young researchers to work on these important problems.\nContributing an article\nIf you\u2019re writing an article for this blog, please follow these guidelines.\n"},{"chosen":"\n\nAI Safety Fundamentals Course\n\n Your browser doesn't support iframes\n\n","rejected":"\n\nThe Control Group Is Out Of Control | Slate Star Codex\n\nThe Control Group Is Out Of Control\n\nPosted on April 28, 2014 by Scott Alexander\n\nI.\nAllan Crossman calls parapsychology the control group for science.\nThat is, in let\u2019s say a drug testing experiment, you give some people the drug and they recover. That doesn\u2019t tell you much until you give some other people a placebo drug you know doesn\u2019t work \u2013 but which they themselves believe in \u2013 and see how many of them recover. That number tells you how many people will recover whether the drug works or not. Unless people on your real drug do significantly better than people on the placebo drug, you haven\u2019t found anything.\nOn the meta-level, you\u2019re studying some phenomenon and you get some positive findings. 
That doesn\u2019t tell you much until you take some other researchers who are studying a phenomenon you know doesn\u2019t exist \u2013 but which they themselves believe in \u2013 and see how many of them get positive findings. That number tells you how many studies will discover positive results whether the phenomenon is real or not. Unless studies of the real phenomenon do significantly better than studies of the placebo phenomenon, you haven\u2019t found anything.\nTrying to set up placebo science would be a logistical nightmare. You\u2019d have to find a phenomenon that definitely doesn\u2019t exist, somehow convince a whole community of scientists across the world that it does, and fund them to study it for a couple of decades without them figuring it out.\nLuckily we have a natural experiment in terms of parapsychology \u2013 the study of psychic phenomena \u2013 which most reasonable people believe don\u2019t exist, but which a community of practicing scientists believes in and publishes papers on all the time.\nThe results are pretty dismal. Parapsychologists are able to produce experimental evidence for psychic phenomena about as easily as normal scientists are able to produce such evidence for normal, non-psychic phenomena. This suggests the existence of a very large \u201cplacebo effect\u201d in science \u2013 ie with enough energy focused on a subject, you can always produce \u201cexperimental evidence\u201d for it that meets the usual scientific standards. As Eliezer Yudkowsky puts it:\nParapsychologists are constantly protesting that they are playing by all the standard scientific rules, and yet their results are being ignored \u2013 that they are unfairly being held to higher standards than everyone else. I\u2019m willing to believe that. 
It just means that the standard statistical methods of science are so weak and flawed as to permit a field of study to sustain itself in the complete absence of any subject matter.\nThese sorts of thoughts have become more common lately in different fields. Psychologists admit to a crisis of replication as some of their most interesting findings turn out to be spurious. And in medicine, John Ioannidis and others have been criticizing the research for a decade now and telling everyone they need to up their standards.\n\u201cUp your standards\u201d has been a complicated demand that cashes out in a lot of technical ways. But there is broad agreement among the most intelligent voices I read (1, 2, 3, 4, 5) about a couple of promising directions we could go:\n1. Demand very large sample size.\n2. Demand replication, preferably exact replication, most preferably multiple exact replications.\n3. Trust systematic reviews and meta-analyses rather than individual studies. Meta-analyses must prove homogeneity of the studies they analyze.\n4. Use Bayesian rather than frequentist analysis, or even combine both techniques.\n5. Stricter p-value criteria. It is far too easy to massage p-values to get less than 0.05. Also, make meta-analyses look for \u201cp-hacking\u201d by examining the distribution of p-values in the included studies.\n6. Require pre-registration of trials.\n7. Address publication bias by searching for unpublished trials, displaying funnel plots, and using statistics like \u201cfail-safe N\u201d to investigate the possibility of suppressed research.\n8. Do heterogeneity analyses or at least observe and account for differences in the studies you analyze.\n9. Demand randomized controlled trials. None of this \u201ccorrelated even after we adjust for confounders\u201d BS.\n10. Stricter effect size criteria. 
It\u2019s easy to get small effect sizes in anything.\nIf we follow these ten commandments, then we avoid the problems that allowed parapsychology and probably a whole host of other problems we don\u2019t know about to sneak past the scientific gatekeepers.\nWell, what now, motherfuckers?\nII.\nBem, Tressoldi, Rabeyron, and Duggan (2014), full text available for download at the top bar of the link above, is parapsychology\u2019s way of saying \u201cthanks but no thanks\u201d to the idea of a more rigorous scientific paradigm making them quietly wither away.\nYou might remember Bem as the prestigious establishment psychologist who decided to try his hand at parapsychology and to his and everyone else\u2019s surprise got positive results. Everyone had a lot of criticisms, some of which were very very good, and the study failed replication several times. Case closed, right?\nEarlier this month Bem came back with a meta-analysis of ninety replications from tens of thousands of participants in thirty three laboratories in fourteen countries confirming his original finding, p < 1.2 * 10^-10, Bayes factor 7.4 * 10^9, funnel plot beautifully symmetrical, p-hacking curve nice and right-skewed, Orwin fail-safe n of 559, et cetera, et cetera, et cetera.\nBy my count, Bem follows all of the commandments except [6] and [10]. He apologizes for not using pre-registration, but says it\u2019s okay because the studies were exact replications of a previous study that makes it impossible for an unsavory researcher to change the parameters halfway through and does pretty much the same thing. And he apologizes for the small effect size but points out that some effect sizes are legitimately very small, this is no smaller than a lot of other commonly-accepted results, and that a low enough p-value ought to make up for a small effect size.\nThis is far better than the average meta-analysis. Bem has always been pretty careful and this is no exception. 
Yet its conclusion is that psychic powers exist.\nSo \u2013 once again \u2013 what now, motherfuckers?\nIII.\nIn retrospect, that list of ways to fix science above was a little optimistic.\nThe first eight items (large sample sizes, replications, low p-values, Bayesian statistics, meta-analysis, pre-registration, publication bias, heterogeneity) all try to solve the same problem: accidentally mistaking noise in the data for a signal.\nWe\u2019ve placed so much emphasis on not mistaking noise for signal that when someone like Bem hands us a beautiful, perfectly clear signal on a silver platter, it briefly stuns us. \u201cWow, of the three hundred different terrible ways to mistake noise for signal, Bem has proven beyond a shadow of a doubt he hasn\u2019t done any of them.\u201d And we get so stunned we\u2019re likely to forget that this is only part of the battle.\nBem definitely picked up a signal. The only question is whether it\u2019s a signal of psi, or a signal of poor experimental technique.\nNone of these commandments even touch poor experimental technique \u2013 or confounding, or whatever you want to call it. If an experiment is confounded, if it produces a strong signal even when its experimental hypothesis is false, then using a larger sample size will just make that signal even stronger.\nReplicating it will just reproduce the confounded results again.\nLow p-values will be easy to get if you perform the confounded experiment on a large enough scale.\nMeta-analyses of confounded studies will obey the immortal law of \u201cgarbage in, garbage out\u201d.\nPre-registration only assures that your study will not get any worse than it was the first time you thought of it, which may be very bad indeed.\nSearching for publication bias only means you will get all of the confounded studies, instead of just some of them.\nHeterogeneity just tells you whether all of the studies were confounded about the same amount. 
\nBayesian statistics, alone among these first eight, ought to be able to help with this problem. After all, a good Bayesian should be able to say \u201cWell, I got some impressive results, but my prior for psi is very low, so this raises my belief in psi slightly, but raises my belief that the experiments were confounded a lot.\u201d\nUnfortunately, good Bayesians are hard to come by, and the researchers here seem to be making some serious mistakes. Here\u2019s Bem:\nAn opportunity to calculate an approximate answer to this question emerges from a Bayesian critique of Bem\u2019s (2011) experiments by Wagenmakers, Wetzels, Borsboom, & van der Maas (2011). Although Wagenmakers et al. did not explicitly claim psi to be impossible, they came very close by setting their prior odds at 10^20 against the psi hypothesis. The Bayes Factor for our full database is approximately 10^9 in favor of the psi hypothesis (Table 1), which implies that our meta-analysis should lower their posterior odds against the psi hypothesis to 10^11\nLet me shame both participants in this debate.\nBem, you are abusing Bayes factor. If Wagenmakers uses your 10^9 Bayes factor to adjust from his prior of 10^-20 to 10^-11, then what happens the next time you come up with another database of studies supporting your hypothesis? We all know you will, because you\u2019ve amply proven these results weren\u2019t due to chance, so whatever factor produced these results \u2013 whether real psi or poor experimental technique \u2013 will no doubt keep producing them for the next hundred replication attempts. When those come in, does Wagenmakers have to adjust his probability from 10^-11 to 10^-2? When you get another hundred studies, does he have to go from 10^-2 to 10^7? If so, then by conservation of expected evidence he should just update to 10^+7 right now \u2013 or really to infinity, since you can keep coming up with more studies till the cows come home. 
But in fact he shouldn\u2019t do that, because at some point his thought process becomes \u201cOkay, I already know that studies of this quality can consistently produce positive findings, so either psi is real or studies of this quality aren\u2019t good enough to disprove it\u201d. This point should probably happen well before he increases his probability by a factor of 10^9. See Confidence Levels Inside And Outside An Argument for this argument made in greater detail.\nWagenmakers, you are overconfident. Suppose God came down from Heaven and said in a booming voice \u201cEVERY SINGLE STUDY IN THIS META-ANALYSIS WAS CONDUCTED PERFECTLY WITHOUT FLAWS OR BIAS, AS WAS THE META-ANALYSIS ITSELF.\u201d You would see a p-value of less than 1.2 * 10^-10 and think \u201cI bet that was just coincidence\u201d? And then they could do another study of the same size, also God-certified, returning exactly the same results, and you would say \u201cI bet that was just coincidence too\u201d? YOU ARE NOT THAT CERTAIN OF ANYTHING. Seriously, read the @#!$ing Sequences.\nBayesian statistics, at least the way they are done here, aren\u2019t going to be of much use to anybody.\nThat leaves randomized controlled trials and effect sizes.\nRandomized controlled trials are great. They eliminate most possible confounders in one fell swoop, and are excellent at keeping experimenters honest. Unfortunately, most of the studies in the Bem meta-analysis were already randomized controlled trials.\nHigh effect sizes are really the only thing the Bem study lacks. And it is very hard to have experimental technique so bad that it consistently produces a result with a high effect size.\nBut as Bem points out, demanding high effect size limits our ability to detect real but low-effect phenomena. 
Just to give an example, many physics experiments \u2013 like the ones that detected the Higgs boson or neutrinos \u2013 rely on detecting extremely small perturbations in the natural order, over millions of different trials. Less esoterically, Bem mentions the example of aspirin decreasing heart attack risk, which it definitely does and which is very important, but which has an effect size lower than that of his psi results. If humans have some kind of very weak psionic faculty that under regular conditions operates poorly and inconsistently, but does indeed exist, then excluding it by definition from the realm of things science can discover would be a bad idea.\nAll of these techniques are about reducing the chance of confusing noise for signal. But when we think of them as the be-all and end-all of scientific legitimacy, we end up in awkward situations where they come out super-confident in a study\u2019s accuracy simply because the issue was one they weren\u2019t geared up to detect. Because a lot of the time the problem is something more than just noise.\nIV.\nWiseman & Schlitz\u2019s Experimenter Effects And The Remote Detection Of Staring is my favorite parapsychology paper ever and sends me into fits of nervous laughter every time I read it.\nThe backstory: there is a classic parapsychological experiment where a subject is placed in a room alone, hooked up to a video link. At random times, an experimenter stares at them menacingly through the video link. The hypothesis is that this causes their galvanic skin response (a physiological measure of subconscious anxiety) to increase, even though there is no non-psychic way the subject could know whether the experimenter was staring or not.\nSchlitz is a psi believer whose staring experiments had consistently supported the presence of a psychic phenomenon. Wiseman, in accordance with nominative determinism, is a psi skeptic whose staring experiments keep showing nothing and disproving psi. 
Since they were apparently the only two people in all of parapsychology with a smidgen of curiosity or rationalist virtue, they decided to team up and figure out why they kept getting such different results.\nThe idea was to plan an experiment together, with both of them agreeing on every single tiny detail. They would then go to a laboratory and set it up, again both keeping close eyes on one another. Finally, they would conduct the experiment in a series of different batches. Half the batches (randomly assigned) would be conducted by Dr. Schlitz, the other half by Dr. Wiseman. Because the two authors had very carefully standardized the setting, apparatus and procedure beforehand, \u201cconducted by\u201d pretty much just meant greeting the participants, giving the experimental instructions, and doing the staring.\nThe results? Schlitz\u2019s trials found strong evidence of psychic powers, Wiseman\u2019s trials found no evidence whatsoever.\nTake a second to reflect on how this makes no sense. Two experimenters in the same laboratory, using the same apparatus, having no contact with the subjects except to introduce themselves and flip a few switches \u2013 and whether one or the other was there that day completely altered the result. For a good time, watch the gymnastics they have to do in the paper to make this sound sufficiently sensical to even get published. This is the only journal article I\u2019ve ever read where, in the part of the Discussion section where you\u2019re supposed to propose possible reasons for your findings, both authors suggest maybe their co-author hacked into the computer and altered the results.\nWhile it\u2019s nice to see people exploring Bem\u2019s findings further, this is the experiment people should be replicating ninety times. I expect something would turn up. \nAs it is, Kennedy and Taddonio list ten similar studies with similar results. 
One cannot help wondering about publication bias (if the skeptic and the believer got similar results, who cares?). But the phenomenon is sufficiently well known in parapsychology that it has led to its own host of theories about how skeptics emit negative auras, or the enthusiasm of a proponent is a necessary kindling for psychic powers.\nOther fields don\u2019t have this excuse. In psychotherapy, for example, practically the only consistent finding is that whatever kind of psychotherapy the person running the study likes is most effective. Thirty different meta-analyses on the subject have confirmed this with strong effect size (d = 0.54) and good significance (p = .001).\nThen there\u2019s Munder (2013), which is a meta-meta-analysis on whether meta-analyses of confounding by researcher allegiance effect were themselves meta-confounded by meta-researcher allegiance effect. He found that indeed, meta-researchers who believed in researcher allegiance effect were more likely to turn up positive results in their studies of researcher allegiance effect (p < .002). \n\nIt gets worse. There's a famous story about an experiment where a scientist told teachers that his advanced psychometric methods had predicted a couple of kids in their class were about to become geniuses (the students were actually chosen at random). He followed the students for the year and found that their intelligence actually increased. This was supposed to be a Cautionary Tale About How Teachers\u2019 Preconceptions Can Affect Children.\nLess famous is that the same guy did the same thing with rats. He sent one laboratory a box of rats saying they were specially bred to be ultra-intelligent, and another lab a box of (identical) rats saying they were specially bred to be slow and dumb. 
Then he had them do standard rat learning tasks, and sure enough the first lab found very impressive results, the second lab very disappointing ones.\nThis scientist \u2013 let\u2019s give his name, Robert Rosenthal \u2013 then investigated three hundred forty five different studies for evidence of the same phenomenon. He found effect sizes of anywhere from 0.15 to 1.7, depending on the type of experiment involved. Note that this could also be phrased as \u201cbetween twice as strong and twenty times as strong as Bem\u2019s psi effect\u201d. Mysteriously, animal learning experiments displayed the highest effect size, supporting the folk belief that animals are hypersensitive to subtle emotional cues.\nOkay, fine. Subtle emotional cues. That\u2019s way more scientific than saying \u201cnegative auras\u201d. But the question remains \u2013 what went wrong for Schlitz and Wiseman? Even if Schlitz had done everything short of saying \u201cThe hypothesis of this experiment is for your skin response to increase when you are being stared at, please increase your skin response at that time,\u201d and subjects had tried to comply, the whole point was that they didn\u2019t know when they were being stared at, because to find that out you\u2019d have to be psychic. And how are these rats figuring out what the experimenters\u2019 subtle emotional cues mean anyway? I can\u2019t figure out people\u2019s subtle emotional cues half the time!\nI know that standard practice here is to tell the story of Clever Hans and then say That Is Why We Do Double-Blind Studies. But first of all, I\u2019m pretty sure no one does double-blind studies with rats. Second of all, I think most social psych studies aren\u2019t double blind \u2013 I just checked the first one I thought of, Aronson and Steele on stereotype threat, and it certainly wasn\u2019t. 
Third of all, this effect seems to be just as common in cases where it\u2019s hard to imagine how the researchers\u2019 subtle emotional cues could make a difference. Like Schlitz and Wiseman. Or like the psychotherapy experiments, where most of the subjects were doing therapy with individual psychologists and never even saw whatever prestigious professor was running the study behind the scenes.\nI think it\u2019s a combination of subconscious emotional cues, subconscious statistical trickery, perfectly conscious fraud which for all we know happens much more often than detected, and things we haven\u2019t discovered yet which are at least as weird as subconscious emotional cues. But rather than speculate, I prefer to take it as a brute fact. Studies are going to be confounded by the allegiance of the researcher. When researchers who don\u2019t believe something discover it, that\u2019s when it\u2019s worth looking into.\nV.\nSo what exactly happened to Bem?\nAlthough Bem looked hard to find unpublished material, I don\u2019t know if he succeeded. Unpublished material, in this context, has to mean \u201cmaterial published enough for Bem to find it\u201d, which in this case was mostly things presented at conferences. What about results so boring that they were never even mentioned?\nAnd I predict people who believe in parapsychology are more likely to conduct parapsychology experiments than skeptics. Suppose this is true. And further suppose that for some reason, experimenter effect is real and powerful. That means most of the experiments conducted will support Bem\u2019s result. But this is still a weird form of \u201cpublication bias\u201d insofar as it ignores the contrary results of hypothetical experiments that were never conducted.\nAnd worst of all, maybe Bem really did do an excellent job of finding every little two-bit experiment that no journal would take. 
How much can we trust these non-peer-reviewed procedures?\nI looked through his list of ninety studies for all the ones that were both exact replications and had been peer-reviewed (with one caveat to be mentioned later). I found only seven:\nBatthyany, Kranz, and Erber: .268\nRitchie 1: 0.015\nRitchie 2: -0.219\nRitchie 3: -0.040\nSubbotsky 1: 0.279\nSubbotsky 2: 0.292\nSubbotsky 3: -.399\nThree find large positive effects, two find approximate zero effects, and two find large negative effects. Without doing any calculatin\u2019, this seems pretty darned close to chance for me.\nOkay, back to that caveat about replications. One of Bem\u2019s strongest points was how many of the studies included were exact replications of his work. This is important because if you do your own novel experiment, it leaves a lot of wiggle room to keep changing the parameters and statistics a bunch of times until you get the effect you want. This is why lots of people want experiments to be preregistered with specific commitments about what you\u2019re going to test and how you\u2019re going to do it. These experiments weren\u2019t preregistered, but conforming to a previously done experiment is a pretty good alternative.\nExcept that I think the criteria for \u201creplication\u201d here were exceptionally loose. For example, Savva et al. was listed as an \u201cexact replication\u201d of Bem, but it was performed in 2004 \u2013 seven years before Bem\u2019s original study took place. I know Bem believes in precognition, but that\u2019s going too far. As far as I can tell \u201cexact replication\u201d here means \u201ckinda similar psionic-y thing\u201d. Also, Bem classily lists his own experiments as exact replications of themselves, which gives a big boost to the \u201cexact replications return the same results as Bem\u2019s original studies\u201d line. 
I would want to see much stricter criteria for replication before I relax the \u201cpreregister your trials\u201d requirement.\n(Richard Wiseman \u2013 the same guy who provided the negative aura for the Wiseman and Schlitz experiment \u2013 has started a pre-register site for Bem replications. He says he has received five of them. This is very promising. There is also a separate pre-register for parapsychology trials in general. I am both extremely pleased at this victory for good science, and ashamed that my own field is apparently behind parapsychology in the \u201cscientific rigor\u201d department)\nThat is my best guess at what happened here \u2013 a bunch of poor-quality, peer-unreviewed studies that weren\u2019t as exact replications as we would like to believe, all subject to mysterious experimenter effects.\nThis is not a criticism of Bem or a criticism of parapsychology. It\u2019s something that is inherent to the practice of meta-analysis, and even more, inherent to the practice of science. Other than a few very exceptional large medical trials, there is not a study in the world that would survive the level of criticism I am throwing at Bem right now.\nI think Bem is wrong. The level of criticism it would take to prove a wrong study wrong is higher than almost any existing study can withstand. That is not encouraging for existing studies.\nVI.\nThe motto of the Royal Society \u2013 Hooke, Boyle, Newton, some of the people who arguably invented modern science \u2013 was nullius in verba, \u201ctake no one\u2019s word\u201d.\nThis was a proper battle cry for seventeenth century scientists. Think about the (admittedly kind of mythologized) history of Science. The scholastics saying that matter was this, or that, and justifying themselves by long treatises about how based on A, B, C, the word of the Bible, Aristotle, self-evident first principles, and the Great Chain of Being all clearly proved their point. 
Then other scholastics would write different long treatises on how D, E, and F, Plato, St. Augustine, and the proper ordering of angels all indicated that clearly matter was something different. Both groups were pretty sure that the other had made a subtle error of reasoning somewhere, and both groups were perfectly happy to spend centuries debating exactly which one of them it was.\nAnd then Galileo said \u201cWait a second, instead of debating exactly how objects fall, let\u2019s just drop objects off of something really tall and see what happens\u201d, and after that, Science.\nYes, it\u2019s kind of mythologized. But like all myths, it contains a core of truth. People are terrible. If you let people debate things, they will do it forever, come up with horrible ideas, get them entrenched, play politics with them, and finally reach the point where they\u2019re coming up with theories why people who disagree with them are probably secretly in the pay of the Devil. \nImagine having to conduct the global warming debate, except that you couldn\u2019t appeal to scientific consensus and statistics because scientific consensus and statistics hadn\u2019t been invented yet. In a world without science, everything would be like that.\nHeck, just look at philosophy.\nThis is the principle behind the Pyramid of Scientific Evidence. The lowest level is your personal opinions, no matter how ironclad you think the logic behind them is. Just above that is expert opinion, because no matter how expert someone is they\u2019re still only human. Above that is anecdotal evidence and case studies, because even though you\u2019re finally getting out of people\u2019s heads, it\u2019s still possible for the content of people\u2019s heads to influence which cases they pay attention to. 
At each level, we distill away more and more of the human element, until presumably at the top the dross of humanity has been purged away entirely and we end up with pure unadulterated reality.\n\nThe Pyramid of Scientific Evidence\nAnd for a while this went well. People would drop things off towers, or see how quickly gases expanded, or observe chimpanzees, or whatever.\nThen things started getting more complicated. People started investigating more subtle effects, or effects that shifted with the observer. The scientific community became bigger, everyone didn\u2019t know everyone anymore, you needed more journals to find out what other people had done. Statistics became more complicated, allowing the study of noisier data but also bringing more peril. And a lot of science done by smart and honest people ended up being wrong, and we needed to figure out exactly which science that was.\nAnd the result is a lot of essays like this one, where people who think they\u2019re smart take one side of a scientific \u201ccontroversy\u201d and say which studies you should believe. And then other people take the other side and tell you why you should believe different studies than the first person thought you should believe. And there is much argument and many insults and citing of authorities and interminable debate for, if not centuries, at least a pretty long time.\nThe highest level of the Pyramid of Scientific Evidence is meta-analysis. But a lot of meta-analyses are crap. This meta-analysis got p < 1.2 * 10^-10 for a conclusion I'm pretty sure is false, and it isn\u2019t even one of the crap ones. Crap meta-analyses look more like this, or even worse. \nHow do I know it\u2019s crap? Well, I use my personal judgment. How do I know my personal judgment is right? Well, a smart well-credentialed person like James Coyne agrees with me. How do I know James Coyne is smart? I can think of lots of cases where he\u2019s been right before. How do I know those count? 
Well, John Ioannidis has published a lot of studies analyzing the problems with science, and confirmed that cases like the ones Coyne talks about are pretty common. Why can I believe Ioannidis\u2019 studies? Well, there have been good meta-analyses of them. But how do I know if those meta-analyses are crap or not? Well\u2026\n\nThe Ouroboros of Scientific Evidence\nScience! YOU WERE THE CHOSEN ONE! It was said that you would destroy reliance on biased experts, not join them! Bring balance to epistemology, not leave it in darkness! \n\nI LOVED YOU!!!!\nEdit: Conspiracy theory by Andrew Gelman\n\n\n197 Responses to The Control Group Is Out Of Control\n\n\n\n\n suntzuanime says: \n\n\t\t\tApril 28, 2014 at 9:20 pm \nImagine the global warming debate, but you couldn\u2019t appeal to scientific consensus or statistics because you didn\u2019t really understand the science or the statistics, and you just had to take some people who claimed to know what was going on at their verba.\n\u201cTake no one\u2019s word\u201d sounds like a good rallying cry when it comes to dropping a bowling ball and a feather and seeing which hits the ground first, but I don\u2019t have my own global temperature monitoring stations that I\u2019ve been running for the past fifty years, and even if I did I probably wouldn\u2019t be smart enough to know if a climate model based on them was bullshit or not.\nI guess this is sort of the point you were making? But it\u2019s weird to cite climate change as a counterexample.\nNow I\u2019m wondering if you were just doing the thing where you subtly undercut your own points as you make them out of sheer uncontrollable perversity. 
This edit button is a curse, not a blessing.\n\n\n\n\n\n\n\n\n Oligopsony says: \n\n\t\t\tApril 28, 2014 at 11:07 pm \nIf some of the weirder psi suppression theories are right, psi should actually be easier to study by conducting personal experiments than by trying to study or do public science, especially if you precommit yourself to not telling anyone about the results.\n\n\n\n\n\n\n\n\n McGuire says: \n\n\t\t\tApril 29, 2014 at 12:10 pm \nI myself have taken this a step farther: I precommitted to not performing the experiments at all.\nSo far, preliminary results are promising.\n\n\n\n\n\n\n\n\n Will Newsome says: \n\n\t\t\tApril 30, 2014 at 1:11 am \nYeah not testing the Lord seems like a good idea.\n\n\n\n\n\n\n\n\n Multiheaded says: \n\n\t\t\tMay 4, 2014 at 11:11 am \nWill: damn straight! Shudder.\n\n\n\n\n\n\n\n\n\n\n\n\n Douglas Knight says: \n\n\t\t\tApril 29, 2014 at 2:03 am \nA 20 year project for that purpose? Why doesn\u2019t he seek publicity more often? Did you try the conspiracy theory link at the end?\n\n\n\n\n\n\n\n\n Deiseach says: \n\n\t\t\tApril 29, 2014 at 11:22 am \nExcept you have to wait until you get to the moon to drop your bowling ball and feather and have the theory proved right about \u201cthey both fall at the same rate because they are both acted upon by the same force\u201d.\nOr the counter-factual \u201cOkay, you see those two lights in the sky? The big one in the day that goes from east to west and looks like it\u2019s moving around us while we\u2019re staying still? And the small one in the night that moves from east to west and looks like it\u2019s moving around us while we\u2019re staying still? Yeah, well, you can believe the evidence of your senses about the small one but the big one is actually staying still and we\u2019re moving. What do you mean, \u2018evidence\u2019? 
This is SCIENCE!!!\u201d\nScott\u2019s little slap at the Scholastics is all well and good, but even science has to rely on *spit* philosophy when it\u2019s debating things for which it has not yet got the physical evidence, especially when it won\u2019t be able to back it up with physical evidence for a couple of centuries.\n\n\n\n\n\n\n\n\n suntzuanime says: \n\n\t\t\tApril 29, 2014 at 12:50 pm \nExcept you have to wait until you get to the moon to drop your bowling ball and feather and have the theory proved right about \u201cthey both fall at the same rate because they are both acted upon by the same force\u201d.\nI was just doing the thing where I subtly undercut my own points as I make them out of sheer uncontrollable perversity. \ud83d\ude00\n\n\n\n\n\n\n\n\n\n\n Gates VP says: \n\n\t\t\tApril 29, 2014 at 2:20 pm \nPersonally, I think the whole global warming thing is an even bigger mess than you\u2019ve described \ud83d\ude42\nUnderpinning \u201cglobal warming\u201d is \u201cman-made climate change\u201d, which seems like a silly smoke-screen debate. In fact, even the premise of \u201cglobal warming\u201d is a really silly debate.\nThere\u2019s a 100% that humans having a dramatic influence on the earth\u2019s climate. And I really mean that 100%. We are clearly performing dramatic modifications to our climate and we\u2019re clearly doing so at a faster rate than previous species.\nSo back to \u201cglobal warming\u201d\u2026 what if it\u2019s really \u201cglobal cooling\u201d? Does it actually matter? I mean, what if it was \u201cglobal cooling\u201d, would we start burning down forests en masse just to \u201cfix\u201d the problem?\nAll of this debate back and forth is just ignoring the real problem: the planet is changing, how do we want to adapt, how do we want to mold it?\nI mean, we want to slow CO2 emissions because they also contain lots of things that are unhealthy for us to breathe. 
Does it really matter if that cools or warms the planet?\n\n\n\n\n\n\n\n\n anon says: \n\n\t\t\tApril 30, 2014 at 9:39 am \nYour comment is one of the worst I\u2019ve ever seen on this website. It matters whether or not CO2 warms or cools the planet.\nFirst, we would need to pursue different policies in response to each possibility because they involve different scenarios for danger. Global warming might be stoppable by throwing giant ice cubes into the ocean, but if there\u2019s global cooling that\u2019s the last thing we should do. Second, they would have different consequences. Maybe warming would kill one billion people while cooling would only kill one million.\nIf CO2 is causing cooling and cooling is bad, we should want more trees and not fewer. Cooling is the opposite of warming but that doesn\u2019t imply that the opposite of one problem\u2019s solution will solve the other.\nPolicies aren\u2019t determined in a vacuum. If the only harmful consequence of CO2 is its effects on our lungs, it might not be good to switch to other types of energy because the downsides might outweigh the upsides.\n\n\n\n\n\n\n\n\n Gates VP says: \n\n\t\t\tApril 30, 2014 at 4:07 pm \nIf CO2 is causing cooling and cooling is bad\u2026\nThis is the core issue. We have no coherent strategy for calling the change \u201cgood\u201d or \u201cbad\u201d.\nGlobal warming might be stoppable by throwing giant ice cubes into the ocean,\u2026\nThere\u2019s also the assumption here that we could come up with a plan to fix something that we\u2019re not really sure is broken.\nIs the goal here really to keep the earth at a static temperature for the remainder of human existence?\n\n\n\n\n\n\n\n\n\n\n MugaSofer says: \n\n\t\t\tApril 30, 2014 at 1:23 pm \n\u201cThere\u2019s a 100% that humans having a dramatic influence on the earth\u2019s climate. 
And I really mean that 100%.\u201d\nNo, you really don\u2019t.\nhttp:\/\/www.lesswrong.com\/lw\/mp\/0_and_1_are_not_probabilities\/\n\n\n\n\n\n\n\n\n Gates VP says: \n\n\t\t\tApril 30, 2014 at 3:27 pm \nFine, this is a heavy statistics blog.\nThe only way we can argue this would require a lot of semantics about the definition of the words \u201cclimate change\u201d.\nWe dropped a pair of nuclear bombs in Japan several decades ago. If the effects of those bombs do not count as \u201cclimate change\u201d for that region, then we need to be very specific about how we\u2019re going to define \u201cclimate change\u201d and attempt to measure the \u201cman-made\u201d effects thereof.\n\n\n\n\n\n\n\n\n\n\n Ellie Kesselman says: \n\n\t\t\tMay 4, 2014 at 12:29 am \nGates, you said,\nWe are clearly performing dramatic modifications to our climate and\u2026doing so at a faster rate than previous species. So back to \u201cglobal warming\u201d\u2026what if it\u2019s really \u201cglobal cooling\u201d? Does it actually matter?\nNo, it doesn\u2019t matter. Climate is a complex dynamic system. At the very least, it makes sense to conserve non-renewable resources because we\u2019ve nearly depleted them. Agreement and action on some basics like that would help. So would some honesty about carbon credit schemes (neoliberal economics is too easy to game) and boondoggle solar tax credits\/government funds that corrupt Green types have used for personal enrichment, repeatedly.\nYour comment didn\u2019t deserve to be called \u201cone of the worst ever seen on this website\u201d! This is NOT a heavy statistics website. Arguing over your rhetorical use of Prob(x) = 1.0 is petty. Ignore your detractors here. 
You are correct; they are more wrong.\n\n\n\n\n\n\n\n\n\n\n\n\n jaimeastorga2000 says: \n\n\t\t\tApril 28, 2014 at 9:37 pm \nThe \u201cExperimenter Effects And The Remote Detection Of Staring\u201d link is broken.\n\n\n\n\n\n\n\n\n Vertebrat says: \n\n\t\t\tApril 29, 2014 at 10:54 pm \nToo many people were staring at it.\n\n\n\n\n\n\n\n\n\n\n Ken Arromdee says: \n\n\t\t\tApril 28, 2014 at 9:52 pm \nHow do you distinguish\n1) Psychic researchers get as many good results as normal researchers because both sets of researchers are equally sloppy, and\n2) Psychic researchers get as many good results as normal researchers because psychic researchers are worse at research than normal researchers (raising the level of positive results) but this is compensated for by the fact that psychic powers are not real (reducing the number of positive results)?\nIn other words, you\u2019re basing this on the premise that psychic researchers are exactly the same as regular researchers except that they\u2019re researching something that\u2019s not going to pan out. I see little reason to believe this premise. 
For instance, I would not be very surprised if gullibility or carelessness is correlated with belief in psychic powers, and willingness to do psychic experiments is also correlated with belief in psychic powers.\n\n\n\n\n\n\n\n\n Scott Alexander says: \n\n\t\t\tApril 28, 2014 at 9:57 pm \nI doubt psychic researchers are just as good as normal researchers (though a few are) and I agree that if I meant \u201ccontrol group\u201d literally, in terms of trying to find the quantitative effect size of science, this methodology wouldn\u2019t be good enough.\nI\u2019m using control group more as a metaphor, where the mistakes of the best parapsychologists can be used as a pointer to figure out what other scientists have to improve.\n\n\n\n\n\n\n\n\n Ken Arromdee says: \n\n\t\t\tApril 30, 2014 at 4:38 pm \nBut once you concede that psychic researchers aren\u2019t really like ordinary researchers, you have little reason to believe that psychic researchers will make the same sorts of mistakes that ordinary researchers do. Even if you find single examples of both types making the same mistake, you have no reason to believe that the distribution is the same among both groups. It could be that a mistake that is common among psychic researchers is rare among normal ones, and focusing on it is misplaced.\nI\u2019m also generally skeptical about using X as a metaphor when X is not actually true.\n\n\n\n\n\n\n\n\n Johann says: \n\n\t\t\tMay 5, 2014 at 4:59 am \nWith respect, I provided in my atrociously long reply a series of arguments, with evidence, that parapsychologists are at least as good as mainstream researchers in most respects, and significantly better in others. 
Skeptics like Chris French concur with me; see his recent talk (https:\/\/www.youtube.com\/watch?v=ObXWLF6acuw) for evidence of this.\nMousseau (2003), for example, took an empirical approach to this, and compared research in top parapsychology journals like the Journal of Scientific Exploration and the Journal of Parapsychology with mainstream journals such as the Journal of Molecular and Optical Physics, the British Journal of Psychology, etc, finding that, most of the time, fringe research displayed a higher level of conformance to several basic criteria of good science. This includes the reporting of negative results, usage of statistics, rejection of anecdotal evidence, self-correction, overlap with other fields of research, and abstinence from jargon. While I\u2019m aware most of these don\u2019t directly impact quality of experimentation, they do provide respectable evidence that parapsychologists are actually about as careful, in their scientific thinking, as most anyone else.\nMoreover, research trying to establish a link between belief in psi phenomena and measures like IQ and credulity has been for the most part unsuccessful, finding that belief in psi does not vary according to level of education (although belief in superstitions like the power of 13 does).\nFinally, there is the fact that skeptics have directly involved themselves in the critique and even design of parapsychological studies. 
Ganzfeld studies after 1986, for example, owe much of their sophistication and rigor to Ray Hyman, who coauthored a report called the Joint Communique with Honorton, a parapsychologist, where a series of recommendations were specified whose implementation would be convincing, and have been widely adopted today.\nI discuss many more examples of sophistication in parapsychological research; see, again, my absurdly large post for these details.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Sniffnoy says: \n\n\t\t\tApril 28, 2014 at 10:11 pm \nSome nitpicking:\nWhen you talk about probabilities of 10^7 and such, obviously, this should be odds ratios. I mean, these are pretty similar when you\u2019re talking about an odds ratio of 10^-11, not so much when it\u2019s 10^7.\nAlso, some writing nitpicking:\nBy my count, Bem follows all of the commandments except [2] and [8].\nYou seem to mean [6] and [10]? (Rest of paragraph similarly.)\nOther fields don\u2019t have this excuse. In psychotherapy, for example, practically the only consistent finding is that whatever kind of psychotherapy the person running the study likes.\nYou seem to have left out the verb in this sentence?\n\n\n\n\n\n\n\n\n Val says: \n\n\t\t\tFebruary 4, 2016 at 6:30 am \n\u201cIn psychotherapy, for example, practically the only consistent finding is that whatever kind of psychotherapy the person running the study likes.\u201d\nWhat they found was which kind of psychotherapy the experimenter likes. Not the best-worded the sentence could be, but the point is in there.\n\n\n\n\n\n\n\n\n\n\n Sniffnoy says: \n\n\t\t\tApril 28, 2014 at 10:18 pm \nAlso: Male Scent May Compromise Biomedical Research. 
Not actually related other than being another instance of \u201cscience is hard\u201d, but I thought that you\u2019d find it amusing and that it was worth pointing out.\n\n\n\n\n\n\n\n\n Eliezer Yudkowsky says: \n\n\t\t\tApril 28, 2014 at 10:21 pm \nIt\u2019s possible that you and I and some of the most experienced scientists and statisticians on the planet could get together and design a procedure for \u201cmeta-analysis\u201d which would require actual malice to get wrong. I\u2019ll be happy to start the discussion by suggesting that step 1 is to convert all the studies into likelihood functions on the same hypothesis space, and step 2 is to realize that the combined likelihood functions rule out all of the hypothesis space, and step 3 is to suggest which clusters of the hypothesis space are well-supported by many studies and to mark which other studies must then have been completely wrong.\nUntil that time, meta-analyses will go on being bullshit. They are not the highest level of the scientific pyramid. They can deliver whatever answer the analyst likes. When I read about a new meta-analysis I mostly roll my eyes. Maybe it was honest, sure, but how would I know? And why would it give the right answer even if the researchers were in fact honest? You can\u2019t multiply a bunch of likelihood functions and get what a real Bayesian would consider zero everywhere, and from this extract a verdict by the dark magic of frequentist statistics.\nI can envision what a real, epistemologically lawful, real-world respecting, probability-theory-obeying meta-analysis would look like. I mean I couldn\u2019t tell you how to actually set down a method that everyone could follow, I don\u2019t have enough experience with how motivated reasoning plays out in these things and what pragmatic safeguards would be needed to stop it. But I have some idea what the purely statistical part would look like. 
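Eliezer's three steps can be sketched numerically. A minimal toy version, assuming each study collapses to a Gaussian likelihood (an estimate plus a standard error) over a shared effect-size grid; the study numbers below are invented for illustration, not taken from any real meta-analysis:

```python
import numpy as np

def combine_studies(studies, grid, z_cut=4.0):
    """Steps 1-2: express each (estimate, SE) study as a Gaussian
    log-likelihood over the shared hypothesis grid and add them up.
    Step 3: flag studies whose estimate sits more than z_cut standard
    errors from the pooled optimum -- sampling error alone cannot
    explain those, so they (or the rest) must be wrong somewhere."""
    loglik = sum(-0.5 * ((grid - m) / s) ** 2 for m, s in studies)
    best = grid[np.argmax(loglik)]
    flagged = [m for m, s in studies if abs(m - best) / s > z_cut]
    return best, flagged

grid = np.linspace(-1.0, 1.0, 2001)  # shared hypothesis space: true effect size
studies = [(0.30, 0.05), (0.28, 0.06), (0.32, 0.05), (-0.40, 0.10)]
best, flagged = combine_studies(studies, grid)
print(best, flagged)  # pooled optimum ~0.24; only the -0.40 study is flagged
```

A real version would need the studies' actual likelihood functions rather than Gaussian summaries, and, as the comment notes, pragmatic safeguards against motivated choices of which studies to feed in.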
I\u2019ve never seen it done.\n\n\n\n\n\n\n\n\n Andrew Hickey says: \n\n\t\t\tApril 29, 2014 at 8:44 am \nThis is the first time I\u2019ve ever wished for an upvote button on a WordPress blog. Everything Eliezer says here.\n\n\n\n\n\n\n\n\n Josh H says: \n\n\t\t\tApril 29, 2014 at 10:46 pm \nRather than try to come up with an infallible procedure for doing valid science, it might be simpler and more productive to tweak the incentives. In other words, separate the people who perform the experiments from the people who generate the hypotheses. I just wrote up some quick thoughts on what that might look like.\n\n\n\n\n\n\n\n\n suntzuanime says: \n\n\t\t\tApril 29, 2014 at 11:13 pm \nThe problem with this is that in non-pathological science there is a lot of interplay between experimentation and hypothesis-generation. It used to be that science was \u201cdo an experiment to figure out how the world works\u201d rather than \u201cdecide everything in the world is fire then do an experiment to see if that\u2019s true\u201d. The latter is still better than deciding everything in the world is fire and not bothering with experiment, but it injects a lot of friction, especially into exploratory work.\nA slightly modified version of your proposal might separate reaching conclusions from proving them. You wouldn\u2019t outsource your experimentation, you\u2019d still do your own experiments. But their results would be considered preliminary, and you\u2019d need to have your results replicated by a replication lab in order to be stamped as official science and published in the serious portions of serious journals.\n\n\n\n\n\n\n\n\n Josh H says: \n\n\t\t\tApril 30, 2014 at 12:46 pm \nYeah, I think that\u2019s a much better way of putting it. 
Discovery could still be experimental, but things like \u201cputting it in a peer reviewed journal\u201d could be outsourced.\n\n\n\n\n\n\n\n\n\n\n Nancy Lebovitz says: \n\n\t\t\tApril 30, 2014 at 12:43 pm \nIncentives can only select from what people can figure out how to do.\nhttp:\/\/www.youtube.com\/watch?v=O4f4rX0XEBA\nIf you don\u2019t want to watch the whole thing, you could start at about 3:18.\n\n\n\n\n\n\n\n\n Josh H says: \n\n\t\t\tApril 30, 2014 at 12:53 pm \nI agree that changing incentives can\u2019t make people start doing something they don\u2019t know how to do. \nPeople do know how to do experiments to disprove a hypothesis, though. What they don\u2019t know how to do is systemically prevent experimenter bias from systemically warping the design of and statistical interpretation of such experiments, leading to continual production of false positives. \nIf we can set up incentives such that the experimenter\u2019s bias is orthogonal to the hypothesis being true or false, we would still expect some false negatives and false positives, since science is hard, but we\u2019d expect them to statistically average out over time instead of accumulating as entire disciplines worth of non-results.\n\n\n\n\n\n\n\n\n\n\n\n\n Kevin C says: \n\n\t\t\tApril 30, 2014 at 12:45 am \nstep 1 is to convert all the studies into likelihood functions on the same hypothesis space\nWhile I agree that if the procedure you propose were possible it would be helpful, I\u2019m skeptical that step 1 is possible* outside the hardest of sciences. Sure, in physics, math, computer science, and maybe chemistry you can define the hypothesis space clearly. However, once you go even as far as molecular biology ex vivo, the hypothesis space becomes too difficult to measure, much less convert the original English & jargon description of the hypothesis into a proper representation of the hypothesis space. 
(Some in vitro biology may still be measurable, but as soon as you\u2019re dealing with even the simplest living cells, you\u2019ve got hundreds of proteins that have to be part of the hypothesis space, even if the hypothesis on that dimension is merely \u201cprotein X\u201d may be present but has neutral effect on the measured outcomes.)\n* That is, not possible for modern day humans. I\u2019m agnostic on whether even a super-human AI could correctly represent the hypothesis space of a molecular biology paper. I think you get into encoding issues before you get to hypotheses that complex.\n\n\n\n\n\n\n\n\n\n\n Douglas Knight says: \n\n\t\t\tApril 28, 2014 at 10:43 pm \nThere is another way to do placebo science \u2013 subtly sabotage the researcher\u2019s experiment. This is routinely done to all real science students (ie, physics majors at maybe 20 schools).\nThree find large positive effects, two find approximate zero effects, and two find large negative effects. Without doing any calculatin\u2019, this seems pretty darned close to chance for me.\nThe effects of chance assuming the null hypothesis are much more specific than the average effect being zero. If your sample sizes are adequate, there should be no large effects, by definition of \u201cadequate.\u201d\n\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\nAn analogy occurred to me, comparing the pyramid of evidence to Lewis Thomas\u2019s take on medicine. His \u201cTechnology of Medicine\u201d said that real medicine requires understanding and allows cheap immediate cures. In contrast, most real-world medicine is expensive use of techniques that barely work and do so for no apparent reason. The pyramid of evidence is purely a product of medicine, attempting to evaluate treatments that have tiny effect sizes. With no understanding, the only evaluation method is large samples. 
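Douglas Knight's point, that adequate sample sizes rule out large observed effects under the null rather than merely making them average out to zero, shows up in a toy simulation (the group size and study count here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_studies = 400, 7  # seven null studies, two groups of n=400 each
# Under the null (true effect 0, unit variance) the observed mean
# difference between the groups has standard error sqrt(2/n) ~ 0.07,
# so chance predicts a tight cluster near zero, not large effects
# scattered in both directions.
effects = rng.normal(0.0, np.sqrt(2.0 / n), size=n_studies)
print(np.round(effects, 3), "max |d| =", round(float(np.max(np.abs(effects))), 3))
```

So a mix of large positive and large negative results is itself evidence against "just chance", unless the samples were too small for the effect sizes being claimed.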
But it is not merely a tool for fake medicine, it is an example of fake science.\n\n\n\n\n\n\n\n\n Douglas Knight says: \n\n\t\t\tApril 28, 2014 at 11:14 pm \nI corrected my citation from Lewis Thomas\u2019s Youngest Science to an essay in his Lives of a Cell. But the specific work is unimportant because you should read everything he wrote. Not just Scott, but also you. Sadly, that is only a few hundred pages.\n\n\n\n\n\n\n\n\n gwern says: \n\n\t\t\tMay 6, 2014 at 5:57 pm \nThere is another way to do placebo science \u2013 subtly sabotage the researcher\u2019s experiment. This is routinely done to all real science students (ie, physics majors at maybe 20 schools).\nI don\u2019t follow. How are physics majors\u2019 experiments being sabotaged?\n\n\n\n\n\n\n\n\n Douglas Knight says: \n\n\t\t\tMay 6, 2014 at 7:58 pm \nThe TA comes in at night and miscalibrates equipment. I don\u2019t know the details. It is probably hard to cause qualitative changes, such as to move them into full placebo condition. Instead they get the wrong numeric answer or unexpectedly large error bars.\n\n\n\n\n\n\n\n\n gwern says: \n\n\t\t\tMay 6, 2014 at 8:28 pm \nI see. So this is a systematic practice endorsed by the school and not just some TAs screwing with the little undergrads for lulz?\n\n\n\n\n\n\n\n\n Douglas Knight says: \n\n\t\t\tMay 6, 2014 at 8:48 pm \nsystematically endorsed.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Kevin says: \n\n\t\t\tApril 28, 2014 at 10:54 pm \nBem definitely picked up a signal. The only question is whether it\u2019s a signal of psi, or a signal of poor experimental technique.\nBut as Bem points out, demanding high effect size limits our ability to detect real but low-effect phenomena. 
Just to give an example, many physics experiments \u2013 like the ones that detected the Higgs boson or neutrinos \u2013 rely on detecting extremely small perturbations in the natural order, over millions of different trials.\nThe point I\u2019m about to mention, suggested by these two excerpts, is mostly covered in the Experimenter Effect section, but in a way that seems somewhat indirect to me. That point is systematic uncertainty. Particle physics experiments can confidently capture small effects because \u2013 in addition to commandments 1, 2, 4, 5, among others \u2013 we spend a great deal of time measuring biases in our detectors. Time spent assessing systematic uncertainty can easily make up the majority of a data analysis project. The failure to find (and correct or mitigate, if possible) systematic biases can give us results like faster-than-light neutrinos.\nOf course, it is much easier to give this advice than to take it and apply it to messy things like medicine or psychology. I freely admit that I would barely know where to start when it comes to such fields. Systematic uncertainty is an important topic in this type of discussion, though.\n\n\n\n\n\n\n\n\n Chris Hallquist says: \n\n\t\t\tApril 28, 2014 at 11:50 pm \nCan anyone think of a remotely sensible explanation for the Wiseman and Schlitz result? Right now, \u201cskeptics emit negative auras, or the enthusiasm of a proponent is a necessary kindling for psychic powers\u201d is looking pretty good.\nIf someone were raising money to fund a replication of this experiment, I would totally consider donating.\n\n\n\n\n\n\n\n\n nydwracu says: \n\n\t\t\tApril 29, 2014 at 2:09 am \nOr psi ability is distributed unequally, and people with more of it observe\/notice it firsthand and so are more likely to believe in it. 
Or psi doesn\u2019t exist and the RNG has a sense of humor.\nOr:\nMost participants were run by whichever experimenters was free to carry out the session, however, on a few ocassions (e.g., when a participant was a friend or collegue of one of the experimeters) the experimenter would be designated in advance of the trial. Thus most participants were assigned to experimenters in an opportunistic, rather than properly \u2018randomised\u2019 (e.g., via random number tables or the output of an RNG), way.\nSomething really weird could have happened there, but I have no idea what bias could have been added by that that would produce those results.\nIt\u2019s probably the RNG\u2019s sense of humor. But it would be interesting to see someone steelman psi.\n\n\n\n\n\n\n\n\n Anonymous says: \n\n\t\t\tApril 29, 2014 at 2:20 am \nI think the hypothesis was that some of the watched had psi powers and could detect the watchers. But if the experiment accidentally picked up on Schlitz\u2019s psi ability to make people uncomfortable over a video link, that would be amusing.\n\n\n\n\n\n\n\n\n Avantika says: \n\n\t\t\tApril 29, 2014 at 3:45 am \nThis seems like a simple and actually reasonable-sounding explanation.\n\n\n\n\n\n\n\n\n\n\n ckp says: \n\n\t\t\tApril 29, 2014 at 7:38 am \nBut if psi is real, then their psi powers might be affecting the RNG that assigns them to experimenters.\n~spooky~\n\n\n\n\n\n\n\n\n Oligopsony says: \n\n\t\t\tApril 29, 2014 at 1:40 pm \nOr psi doesn\u2019t exist and the RNG has a sense of humor.\nThat psi is semi-agentic and mostly seeks to fuck with people is something that parapsychologists have seriously considered.\n\n\n\n\n\n\n\n\n\n\n Scott Alexander says: \n\n\t\t\tApril 29, 2014 at 6:25 am \nYeah, I don\u2019t know. This is one place where, contrary to the spirit of this post, I\u2019m pretty willing to accept \u201cthey got a significant result by coincidence\u201d. 
I\u2019d also donate to a replication.\n\n\n\n\n\n\n\n\n Shmi Nux says: \n\n\t\t\tApril 29, 2014 at 2:13 pm \nReplicating this experiment is a wrong way to go. It is designed to detect psi, but instead uncovers a more interesting effect of participant-dependence of psi-detection, which is worth studying, by constructing a separate experiment explicitly for that purpose. Once the dependence part is figured out it makes sense to review the original protocol.\n\n\n\n\n\n\n\n\n nydwracu says: \n\n\t\t\tApril 29, 2014 at 11:17 pm \nIf they got a significant result by coincidence, what of their earlier results? It\u2019s possible to explain it by saying that Schlitz had consistently bad methods and they got a significant result by coincidence\u2026\nIt could also be a really weird chemical thing somehow? Like, if intending to creep someone out results in subconscious emission of chemicals that can produce the effect of feeling creeped out in someone sitting in a room #{distance} away? (I think of that because of that rat study.)\n\n\n\n\n\n\n\n\n\n\n Deiseach says: \n\n\t\t\tApril 29, 2014 at 11:30 am \nWell, if the placebo effect works positively, in that you can think yourself better if given a sugar pill and told it\u2019s a powerful new medicine, maybe there\u2019s a negative effect as well?\nPerhaps \u201cskeptics interfere with the vibrations\u201d isn\u2019t just an excuse by fraudulent mediums as to why they can\u2019t produce effects (translation: they don\u2019t dare try their conjuring tricks) in the presence of investigators.\nIf a skeptic is running an experiment with the conscious attitude \u201cI am doing impartial science here\u201d but all the time in the back of his mind he\u2019s thinking \u201cThis is hooey, I know there\u2019s no such thing as telepathy\/precognition\/what have you, this is not going to work\u201d, maybe that really does trigger some kind of observer effect?\n(I\u2019m not even going to try and untangle Schrodinger\u2019s cat where 
if you go in with a strong expectation that the cat is dead, would this skew the likelihood of the cat being dead when you open the box beyond what you\u2019d expect from chance?)\nTo be fair, I\u2019m sceptical myself about measuring galvanic skin changes; I wouldn\u2019t hang a rabid dog on the evidence of a \u201clie detector\u201d, and I\u2019m as unconvinced as Chesterton\u2019s Fr. Brown in the 1914 story \u201cThe Mistake of the Machine\u201d:\n\u201cI\u2019ve been reading,\u201d said Flambeau, \u201cof this new psychometric method they talk about so much, especially in America. You know what I mean; they put a pulsometer on a man\u2019s wrist and judge by how his heart goes at the pronunciation of certain words. What do you think of it?\u201d\n\u201cI think it very interesting,\u201d replied Father Brown; \u201cit reminds me of that interesting idea in the Dark Ages that blood would flow from a corpse if the murderer touched it.\u201d\n\u201cDo you really mean,\u201d demanded his friend, \u201cthat you think the two methods equally valuable?\u201d\n\u201cI think them equally valueless,\u201d replied Brown. \u201cBlood flows, fast or slow, in dead folk or living, for so many more million reasons than we can ever know. Blood will have to flow very funnily; blood will have to flow up the Matterhorn, before I will take it as a sign that I am to shed it.\u201d\n\u201cThe method,\u201d remarked the other, \u201chas been guaranteed by some of the greatest American men of science.\u201d\n\u201cWhat sentimentalists men of science are!\u201d exclaimed Father Brown, \u201cand how much more sentimental must American men of science be! Who but a Yankee would think of proving anything from heart-throbs? Why, they must be as sentimental as a man who thinks a woman is in love with him if she blushes. 
That\u2019s a test from the circulation of the blood, discovered by the immortal Harvey; and a jolly rotten test, too.\u201d \n\n\n\n\n\n\n\n\n nydwracu says: \n\n\t\t\tApril 29, 2014 at 11:19 pm \nDoesn\u2019t matter if galvanic skin changes aren\u2019t related to anything \u2014 if they aren\u2019t a sign of any change in mental state or similar, but can still be affected by psi, then there\u2019s still something that is affected by psi.\n\n\n\n\n\n\n\n\n\n\n moridinamael says: \n\n\t\t\tApril 29, 2014 at 1:33 pm \nLet\u2019s say for the sake of argument that Schlitz gave off weird, nervousness-inducing vibes. Interacting with him made their galvanic response fluctuate more *in general,* as they sit in the testing room thinking about what a creep he is. So when Schlitz was staring at them, the meters are recording samples which are going to be sampling from a different distribution of agitation valence. Maybe they picked a bad statistical lumping function on top of this.\nThis was the first thing I thought of.\n\n\n\n\n\n\n\n\n Anonymous says: \n\n\t\t\tMay 22, 2014 at 2:08 am \nIt\u2019s possible the 20m distance and however many walls wasn\u2019t enough for sensory isolation, and one of the starers made detectable sound when moving to look at\/away from the screen. \nBlinding the sender to the experimental condition would avoid both accidental and malicious back channels like this. 
One possible design would be to have the video be sometimes delayed by 30 seconds, which would let you separate the effect of \u201cthe receiver is being watched\u201d and \u201cthe sender thinks they\u2019re watching\u201d.\n\n\n\n\n\n\n\n\n Jonas says: \n\n\t\t\tFebruary 6, 2016 at 4:47 pm \nOne possible design would be to have the video be sometimes delayed by 30 seconds\nIf the average time of travel from greeting the researcher to entering the view of the camera is small enough that a 30 second difference is noticeable, there is still a signal (\u201chow long until I see the person on the screen?\u201d) which the \u201csender\u201d could pick up on. Idea for getting around this: have a stooge delay people 30 seconds in the hall if their video isn\u2019t delayed, or have them sit in one of two rooms with unequal distance, the video delay = the average travel time differential. Or just instruct the sender to delay turning on his screen for X seconds, where X >= the slowest plausible travel time plus the video delay.\n\n\n\n\n\n\n\n\n\n\n\n\n John says: \n\n\t\t\tApril 29, 2014 at 12:30 am \n\u201cIn psychotherapy, for example, practically the only consistent finding is that whatever kind of psychotherapy the person running the study likes. \u201d \u2013 I think you accidentally a word.\n\n\n\n\n\n\n\n\n Sarah says: \n\n\t\t\tApril 29, 2014 at 1:04 am \nI think this is an argument for including cruder, common-sense heuristics for thinking about scientific studies.\n*Effect size. If the effect is not *very* large and the results *very* unequivocal, you probably either have an illusory result or an artifact of misunderstood structure (A appears to *sometimes* cause B if A is really two things, A1 and A2, and only A2 causes B.)\n*Physical plausibility. You don\u2019t believe in psi because it violates physics. You also shouldn\u2019t believe that drugs that don\u2019t cross the blood-brain barrier can have effects on the brain. \n*Analogy. 
If people have been finding MS \u201ccures\u201d for decades, the next MS cure isn\u2019t so credible.\n*Multiple independent lines of reasoning. Evolutionary, biochemical, and physiological arguments pointing in the same direction. Especially simple arguments that are hard to screw up.\n*Motivation. Cui Bono. Yes, we care who funded it.\nI think what we\u2019re finding is that *blind* science, automatable science, \u201cstudies say X and this quickly-checkable rubric confirms the studies are good\u201d, isn\u2019t a good filter. \nTo be fair, rubrics could stand a lot of improvement. (Cochrane, btw, *does* pay a lot of attention to experimental design.) I do think an ideal meta-analysis could do a lot better than the average meta-analysis.\nBut the purpose of methodology is to abstract away personal opinion. We don\u2019t do this *just* to better approach truth. We also do it to avoid getting in fights. We want to be able to claim to be impersonal, to be following a documented set of rules. In an era where the volume of science is such that all metrics will be gamed and cargo-culted, methodology may just not be enough. Old-fashioned, pre-modern heuristics like \u201cis this an honest man?\u201d and \u201cdoes this make sense?\u201d are unreliable, to be sure, but they\u2019re unreliable in a different way than statistics and procedures, and it may be time to consider their value.\n\n\n\n\n\n\n\n\n Michael Edward Vassar says: \n\n\t\t\tApril 29, 2014 at 1:50 pm \nThe hardcore take on funding bias is to just consider any study unworthy of consideration if it was funded *at all*. That\u2019s actually what I\u2019d like to enable, and what everything I\u2019m working on is an attempt to build towards.\n\n\n\n\n\n\n\n\n Jonas says: \n\n\t\t\tFebruary 6, 2016 at 5:04 pm \nconsider any study unworthy of consideration if it was funded at all\nReading it literally, that is only possible if the study doesn\u2019t incur any expenses. 
Here are some potential expenses: researcher salary, subject compensation, equipment.\nI can see how you do science with no researcher salary, if the researchers themselves are independently wealthy* or have the necessary free time, and I can see uncompensated subjects participating for the fun or to promote knowledge. But building all your equipment (e.g. particle accelerators or microscopes) from scratch, no buying of any components, not harnessing specialization and trade? \u2018You nuts?\n* That would make science a hobby of the aristocracy, just like in the Good Old Reactionary Days (I\u2019m not a (neo-)reactionary).\nSomething I would expect from a Normal Person\u2014you know, the kind who mostly doesn\u2019t comment on blogs like this one\u2014would be to at least allow self-funding, i.e. allowing the aristocrats and hobbyists to buy their own equipment. Maybe what you meant was to taboo funding other than through The Right Channel, which is what? Government subsidies to basic science? But why is government agenda less distorting than other agendas? Okay, suppose it isn\u2019t and gives the individual researchers free rein; why are their agendas less distorting?\nIf your goal is for agenda influences to all wash out and simply fund people who (in aggregate) do Pure Truth-Seeking Science, it\u2019s not clear that self-funding or un-funding or government-funding or government-plus-private-funding is the right (or wrong) way to achieve that effect.\nComments? Have I horribly distorted what you said?\n\n\n\n\n\n\n\n\n\n\n Anonymous says: \n\n\t\t\tApril 29, 2014 at 3:09 pm \nThe problem with the physics argument is that there could be a yet-unknown mechanism causing the effect. Like drugs that don\u2019t cross the blood-brain barrier but happen to be radioactive. 
If we didn\u2019t know about radioactivity and couldn\u2019t separate radioactive drugs from others, it would seem like a violation of physics that happened in just some labs.\n\n\n\n\n\n\n\n\n\n\n Jacob Steinhardt says: \n\n\t\t\tApril 29, 2014 at 1:11 am \nIn regards to the Wiseman & Schlitz paper, the sample size is quite small and the p-value is only 0.04. Shouldn\u2019t one major possible explanation be: this happened by chance?\n\n\n\n\n\n\n\n\n Daniel says: \n\n\t\t\tMay 2, 2014 at 5:09 pm \n(This is simplifying issues and ignoring fundamental problems with null hypothesis testing.)\nLet\u2019s imagine two studies. Study A has a sample size of 100 and the p-value to reject H_0 is 0.04. Study B has a sample size of 1000 and the p-value to reject H_0 is 0.04.\nQuestion: What\u2019s the difference in the probability of falsely rejecting H_0 between the two studies?\n\n\n\n\n\n\n\n\n Johann says: \n\n\t\t\tMay 4, 2014 at 9:46 pm \nThere is no difference. P(i \u2264 \u03b1| H0), where \u201c\u03b1\u201d is any threshold (e.g. 0.05) and i is the p-value of a study, does not depend upon sample size, n. There\u2019s nothing particularly surprising about this, though, IMO; people often think there is because they mix it up with P(i \u2264 \u03b1| Ha), or power, which increases as n increases.\nThe second study had only a larger sample size; given the same p-value, then, this logically implies it had a much smaller effect size. The converse is true: the first study had a much larger effect size, which made up for its small sample size. The result was the same p-value.\nIF there\u2019s a real effect, the p-value of a study should asymptotically converge to zero, but if there\u2019s no effect, the p-value stays uniformly distributed regardless of sample size, while the measured ES hovers around zero. 
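A minimal numeric sketch of Johann's point, assuming a one-sample, two-tailed z-test with known unit variance (my choice of illustration, not anything specified in the thread): the same p-value at a larger n implies a smaller effect size, and the false-positive rate under H0 never depends on n.

```python
# Sketch, assuming a one-sample two-tailed z-test with sigma = 1.
import math
import random
from statistics import NormalDist

nd = NormalDist()

# Effect size implied by a fixed two-tailed p-value of 0.04:
z = nd.inv_cdf(1 - 0.04 / 2)  # critical z for p = 0.04, roughly 2.05
for n in (100, 1000):
    print(f"n={n}: implied effect size d = {z / math.sqrt(n):.3f}")
# The n=1000 study needs only about 1/sqrt(10) the effect size of the n=100 one.

def false_positive_rate(n, alpha=0.05, trials=20_000):
    """Fraction of simulated null-effect studies with p <= alpha."""
    hits = 0
    for _ in range(trials):
        xbar = random.gauss(0.0, 1.0 / math.sqrt(n))    # H0 is true: mu = 0
        p = 2 * (1 - nd.cdf(abs(xbar) * math.sqrt(n)))  # two-tailed p-value
        hits += p <= alpha
    return hits / trials

for n in (100, 10_000):
    print(f"n={n}: false positive rate ~ {false_positive_rate(n):.3f}")
# Both hover near 0.05: sample size does not change P(reject | H0).
```

The simulation makes the asymmetry concrete: power grows with n, but the type-I error rate is pinned at alpha no matter how large the sample gets.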
This is pretty intuitive to me.\nBut the commenter you replied to is correct with respect to *practical* considerations such as: large studies being of generally better quality, less likely to be non-representative of the true ES, etc, etc.\n\n\n\n\n\n\n\n\n Daniel says: \n\n\t\t\tMay 9, 2014 at 5:41 am \nIndeed, larger samples are better (see e.g. Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., and Munafo, M. R. (2013). Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience 14, 365ff. for more details; representativity may not be achieved by just more data, though, as the failures of \u201cbig data\u201d analysts to actually consider these statistical issues show).\nAnyway, I wanted to reject the idea that the *combination* of \u201chigh\u201d p-values and small sample size is the problem. Indeed it\u2019s quite reasonable to use less strict significance thresholds for smaller sample sizes; on the other hand, a high p-value does not become somehow better or more reliable if the sample size increases. Indeed, as you imply, p-values of 0.04 will be quite meaningless for samples of more than, let\u2019s say, 10k respondents. Why? Because the p-value indicates only how likely the result would have been if the H_0 that there is no effect at all were true, a statistical fiction which is usually not a reasonable possibility at all.\n\n\n\n\n\n\n\n\n Anonymous says: \n\n\t\t\tMay 9, 2014 at 11:54 am \nActually, a p-value of .04 is meaningless for small sample sizes, but becomes increasingly meaningful as the sample size increases\u2014but not in the direction one might think. For generally reasonable prior distributions, as N increases, a p-value of .04 indicates increasing evidence for the null hypothesis. 
\nFor example, using a common default prior, and assuming a \u201cmedium\u201d effect size under H1, a 2-tailed p-value of .04 implies Bayes factors (for H1 vs H0) of 1.7, .64, .21, and .07 for N=20, 200, 2000, and 20000, respectively. Only for the largest two sample sizes does the Bayes factor show much support for either hypothesis over the other, and then the favored hypothesis (since BF<1) is the null. \nYou can verify the above trend by using the Bayes factor calculator here, although your specific numeric results will depend on the prior you choose.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n James A. Donald says: \n\n\t\t\tApril 29, 2014 at 2:30 am \nAll of these are extremely bad solutions, since they worsen the bureaucratization of science.\nBack in the eighteenth and early nineteenth century, science was high status. Smart, wealthy, important people would each compete to be more scientific than the other. The result was that the scientific method itself was high status, and was, therefore, actually followed, rather than people going through bureaucratic rituals that supposedly correspond to following the scientific method. By and large, this successfully produced truth and exposed nonsense.\n\n\n\n\n\n\n\n\n ozymandias says: \n\n\t\t\tApril 29, 2014 at 2:57 am \nI am unclear about why the high-status scientists wouldn\u2019t preregister their trials, use heterogeneity analyses, look for high effect sizes, try to avoid experimenter effects, make their meta-analyses stronger, and the rest of it. It seems like all that advice could as easily be implemented by smart, wealthy, important people each competing to be more scientific than the other. Indeed, it seems like that\u2019s exactly what they *would* be competing on. \u201cWhat should we do?\u201d and \u201chow should we do it?\u201d are importantly different questions.\n\n\n\n\n\n\n\n\n Anonymous says: \n\n\t\t\tApril 29, 2014 at 3:08 am \nFor countersignalling reasons! Preregistering your trials etc. 
signals that you think your scientificity is in doubt, which means you aren\u2019t really a top class scientist. Remember when Einstein said he\u2019d defy evidence that conflicted with his theory because he was so sure it was right? There was no way Einstein was going to be confused for a crank.\n\n\n\n\n\n\n\n\n ozymandias says: \n\n\t\t\tApril 29, 2014 at 3:29 am \n\u201cEveryone adopts extremely rigorous standards except Einstein\u201d seems like a great improvement on the current situation.\n\n\n\n\n\n\n\n\n Anonymous2 says: \n\n\t\t\tApril 30, 2014 at 5:46 pm \nYou\u2019ve missed what Anonymous was saying.\n\n\n\n\n\n\n\n\n\n\n\n\n ckp says: \n\n\t\t\tApril 29, 2014 at 7:48 am \nWe can\u2019t turn back the clock on the bureaucratization of science now, because science is just so BIG nowadays. The amount of science is increasing exponentially (or even superexponentially) with time, and we\u2019ve exhausted all of the \u201ceasier\u201d low-hanging fruit results where status might have been enough to make sure you did it right.\n\n\n\n\n\n\n\n\n Michael Edward Vassar says: \n\n\t\t\tApril 29, 2014 at 1:52 pm \nThat\u2019s one hypothesis. I don\u2019t consider it to be a credible hypothesis though.\n\n\n\n\n\n\n\n\n\n\n Leonard says: \n\n\t\t\tApril 29, 2014 at 10:17 am \nThe problem is not that science is not high status. Perhaps it used to be; I am not sure. But now, it most certainly is. Indeed, that is part of the problem. Science has such high status that we allow its crazier emanations to override common sense.\nThe problem is the intersection of leaderless bureaucracy and \u201cfunding\u201d. Bureaucratically \u201cfunded\u201d science gradually loses its connection to reality, and sinks back into the intellectual morass from whence it came.\n\n\n\n\n\n\n\n\n Anthony says: \n\n\t\t\tApril 29, 2014 at 12:24 pm \nJames, you\u2019re wrong. 
Back in the 18th and early (to mid) 19th century, scientists were working mostly on non-living systems, where it\u2019s a *lot* easier to be repeatable and to eliminate measurement biases. Until Darwin, almost any study of living, non-human systems was either stamp-collecting or wrong. There were plenty of people in those days who considered themselves \u201cscientists\u201d, studying psychology, sociology, and the like, but outside their respective schools, we mostly consider them cranks, unlike the pioneers of chemistry and physics.\n\n\n\n\n\n\n\n\n Piano says: \n\n\t\t\tApril 29, 2014 at 10:11 pm \nThe problem is that Science is high status, but science is not. We\u2019re trying to slowly make the two similar enough that science gets some high status by proxy and by virtue of Scientists accepting scientists into the fold. But, as long as Science is funded by democracies, politics will trump science. \nTo an extent, that\u2019s okay. Most people cannot afford for certain things to be destroyed by the truth, so democracy is an inadvertent and effective defence mechanism. Given the existence of people below IQ 125 (the \u201cstupids\u201d) who are members of different groups, we need to A) stick with democracy-controlled Science, B) obfuscate science the right amount so that it\u2019s still allowed yet the smarter scientists can still keep their jobs, or C) mechanize the whole thing so that the only thing that can still be accused of heresy is systems of mathematics that are necessary for the rest of the economy to function. \nUntil someone shows that mathematics, and not the mathematicians, has been accused of heresy, I\u2019m going to be partial towards C.\n\n\n\n\n\n\n\n\n\n\n Doug S. says: \n\n\t\t\tApril 29, 2014 at 3:15 am \nBem and other parapsychologists should be required to attempt to publish their papers in physics journals. 
(Let\u2019s see them massage a p-value all the way down to 0.0000003!)\n\n\n\n\n\n\n\n\n Erik says: \n\n\t\t\tApril 29, 2014 at 8:43 am \nHe already massaged the p-value down to 0.00000000012 in the meta-analysis.\n\n\n\n\n\n\n\n\n\n\n Doug S. says: \n\n\t\t\tApril 29, 2014 at 3:18 am \nIncidentally, high school and undergraduate college students disprove basic physics and chemistry in their lab courses all the time\u2026\n\n\n\n\n\n\n\n\n Zach says: \n\n\t\t\tApril 29, 2014 at 8:40 pm \nSure, but those students are generally shown to be wrong when their experimental methods are critiqued, things that are supposed to be random are randomized, math errors are corrected, or others replicate their experiments. The issue here is that the p-values actually decrease and we become even more certain of the \u201cwrong\u201d results after replication and meta-analysis.\n\n\n\n\n\n\n\n\n\n\n Mai La Dreapta says: \n\n\t\t\tApril 29, 2014 at 9:39 am \nMy prior belief in psi was awfully low (though maybe not as low as 10^-20), but a major effect of reading this article and the linked studies has been to greatly increase my belief in its possible existence. This is particularly the case since all of the various arguments and hypotheses about experimenter effects causing these results to appear bear the stench of motivated cognition. And noticing the amount of motivated cognition required to explain away the result makes me place the estimated probability even higher.\nThanks, Scott. \/sarcasm\n\n\n\n\n\n\n\n\n Mai La Dreapta says: \n\n\t\t\tApril 29, 2014 at 10:37 am \nI should add that the other effect of reading this article was to make me much more skeptical about published science, especially in non-STEM sciences, and I think this was the intended result. 
But of course my default setting was to pretty much disbelieve all published psychology, sociology, and economics anyway.\n\n\n\n\n\n\n\n\n Anon256 says: \n\n\t\t\tApril 29, 2014 at 7:47 pm \nNon-Science\/Technology\/Engineering\/Maths sciences?\n\n\n\n\n\n\n\n\n nydwracu says: \n\n\t\t\tApril 29, 2014 at 11:20 pm \nMedicine isn\u2019t generally included in STEM. Maybe he meant medical science?\nedit: Or social science, given the context.\n\n\n\n\n\n\n\n\n Mai La Dreapta says: \n\n\t\t\tApril 30, 2014 at 9:18 am \nThis is what I get for not thinking about acronyms. So let me be explicit: I have high confidence in physics, chemistry, mathematics, computer science, engineering, and biology. I have low confidence in psychology, sociology, anthropology, and many forms of medicine.\n(My actual degree is in linguistics, and I have a middling view of that field. Most theoretical linguistics is trash, but most linguistics is not theoretical linguistics.)\n\n\n\n\n\n\n\n\n Creutzer says: \n\n\t\t\tApril 30, 2014 at 12:52 pm \nWhat linguistics are you thinking about that is not theoretical linguistics, but also not stamp collecting? Or are you thinking of the stamp collecting?\n\n\n\n\n\n\n\n\n Mai La Dreapta says: \n\n\t\t\tApril 30, 2014 at 1:18 pm \n@Creutzer, I\u2019m not familiar with your use of the term \u201cstamp collecting\u201d here. Is this a disparaging term for basic research, i.e. going somewhere, learning an undocumented language, and writing up a grammar for it? If so, then I admit that a lot of the non-garbage linguistics is \u201cstamp collecting\u201d, but I strongly reject the implicit value judgement in that term.\nIn any case, I suggest historical linguistics as a branch of linguistics which is neither stamp collecting nor unempirical gas-bagging. Phonetics likewise. Phonology has some very good points, but the theoretical spats over generative\/OT models are pretty much useless. 
Syntax is a wasteland.\nI adopt the maxim \u201cChomsky is wrong about everything\u201d as a good rule of thumb for both linguistics and politics.\n\n\n\n\n\n\n\n\n Creutzer says: \n\n\t\t\tApril 30, 2014 at 2:55 pm \nWell, stamp collection, whether in biology or linguistics, doesn\u2019t develop explanations. That makes me kind of go \u201cmeh\u201d.\nI agree about all your other points. I was just puzzled by the \u201cmajority\u201d statement.\nThere is one thing to be said for the Chomskyans, though: Ironically, they are the better stamp collectors. Just think of all the details about familiar languages like English, German, Dutch and Italian that have been discovered by Chomskyans. And those Chomskyans who do turn to the description of new languages often look at things systematically in a way others wouldn\u2019t, and they less often use the word \u201ctopic\u201d in a way that makes me want to strangle them.\n\n\n\n\n\n\n\n\n\n\n\n\n name says: \n\n\t\t\tApril 29, 2014 at 11:32 pm \nI once had a college professor who believed in all sorts of parapsychology. One day, in the middle of a lecture, she announced that the class was going to do a\u2026 I forget the term she used and it certainly wasn\u2019t \u201cmind-reading exercise\u201d, but a mind-reading exercise is what it was.\nThis went as follows: Everyone paired off. One person would picture a location important to them for a few minutes, focusing solely on that, and then the other person would close their eyes, notice whatever thoughts came into their head, and try to pick up what the first person was thinking of. Then the two people would switch roles. Class discussion began after the thing was complete.\nVarying degrees of success were reported. \nI noticed myself interpreting the somewhat ambiguous statements of the other person in my pair as being in the right general area. 
So it could be that success was only due to suggestibility after the fact.\nBut I gave no ambiguous statements; I said the other guy was thinking of a beach in Hawaii. He said that\u2019s exactly what he had been thinking of. But I found a year later that he didn\u2019t like the professor very much: he was a Catholic and she did not like Christianity at all, and so on, and she\u2019d given him a low grade in one class for disagreeing with her. I can\u2019t remember the chronology here, but the class he got a low grade in was the one least unrelated to that particular subject, and I know it was taught in that same classroom. He didn\u2019t say anything in the class discussion either. Nevertheless, it could be that he was bullshitting for fear of getting a bad grade, and thought she\u2019d overhear him.\nIt could be that similar explanations apply to the whole class. It could also be that the results this professor had reported herself seeing were the result of poor planning, or bullshitting for effect, or one of a thousand other possibilities. It could also be that every other report of psi, or of any other strange effect that doesn\u2019t quite fit into current theories of physics, had a similar explanation: myth, prescientific explanations pushed forward through cultural inertia, charlatanry, bad experimental design\u2026\nIt could be.\n\n\n\n\n\n\n\n\n nydwracu says: \n\n\t\t\tApril 30, 2014 at 6:55 am \n(Dammit. Gravatar. The response that that was supposed to obviously bring about was that sometimes things that can be easily explained by postulating an otherwise surprising entity really are results of pure coincidence. 
Now I have to figure out another method to elicit the critique of neoreaction I was trying to lead to.)\n\n\n\n\n\n\n\n\n Anonymous says: \n\n\t\t\tApril 30, 2014 at 9:47 am \nHere\u2019s Derren Brown doing that one: https:\/\/www.youtube.com\/watch?v=k_oUDev1rME\n\n\n\n\n\n\n\n\n nydwracu says: \n\n\t\t\tApril 30, 2014 at 3:52 pm \nCan\u2019t watch videos on this thing, but it\u2019s interesting that you\u2019d link a pickup artist. The thing in the anecdote actually happened, but what I left out was the professor\u2019s habit of pacing around the classroom. I\u2019m almost certain that the force at work was just the students knowing to avoid contradicting the professor. A clever status-building exercise, but I think she actually believed all of it.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n James Babcock says: \n\n\t\t\tApril 29, 2014 at 10:23 am \nI once met someone who believed she had psychic powers. She described having done a personal experiment, with a mutual friend as experimenter\/witness, and gotten a surprisingly large effect size. The experiment involved predicting whether the top of a deck of Dominion cards was gold or copper.\nAs it happened, I had played Dominion at this friend\u2019s apartment before, and so I had an unusually good answer to this experiment: I had seen that particular deck of cards before and it was marked. 
Not deliberately of course, but the rules of Dominion lead to some cards getting used and shuffled much more than others, so if cards start getting worn, they get easy to distinguish.\nThat sort of observation would never, ever show up in a study writeup.\n\n\n\n\n\n\n\n\n Vanzetti says: \n\n\t\t\tApril 29, 2014 at 11:03 am \nHmmm\u2026\nScience gave us a giant pile of utility.\nParapsychology gave us nothing.\nI feel like this argument is good enough for me to ignore even a million papers in Nature.\n\n\n\n\n\n\n\n\n Randy M says: \n\n\t\t\tApril 29, 2014 at 11:29 am \nCan you draw a dividing line between what science-like things are and what parapsychology-like things are that cleaves the useful from the useless without begging the question?\n\n\n\n\n\n\n\n\n Vanzetti says: \n\n\t\t\tApril 29, 2014 at 11:47 am \nNo. But that\u2019s parapsychology we are talking about. It\u2019s pretty far away from the line I can\u2019t draw, safely on the side of the crazy bullshit.\nNow, psychology on the other hand\u2026 \ud83d\ude42\n\n\n\n\n\n\n\n\n ozymandias says: \n\n\t\t\tApril 29, 2014 at 12:31 pm \nCan you explain your system of ranking things from \u201cmore sciencey\u201d to \u201cless sciencey\u201d? Uncharitably I would assume the answer is how high-status it is among skeptics.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Robin Hanson says: \n\n\t\t\tApril 29, 2014 at 11:41 am \nAs a theorist at heart, I\u2019m tempted to adopt an attitude of just not believing in effects where the empirically estimated effect size is weak, no matter what the p-values. Yeah I won\u2019t believe that aspirin reduces heart problems, but that seems a mild price to pay. I could of course believe in a theory that predicts large observed effects in some cases, and weaker, harder-to-see effects in other cases. 
But then I\u2019d be believing in the law, not believing in the weak effect mainly because of empirical data showing the weak effect.\n\n\n\n\n\n\n\n\n Desertopa says: \n\n\t\t\tApril 29, 2014 at 8:29 pm \nIt may *seem* like a mild price to pay, but in practice it leads to, what, more than a thousand avoidable deaths per year? In medicine, failure to acknowledge small effect sizes, when applied over large populations, can result in some pretty major utility losses.\n\n\n\n\n\n\n\n\n Scott Alexander says: \n\n\t\t\tApril 29, 2014 at 9:20 pm \nThis was the argument I was going to make too (although I bet it\u2019s way more than a thousand).\nAlso, I\u2019m pretty sure you\u2019d have to disbelieve in the Higgs boson and a lot of other physics.\n\n\n\n\n\n\n\n\n Sniffnoy says: \n\n\t\t\tApril 29, 2014 at 9:22 pm \nAlso, I\u2019m pretty sure you\u2019d have to disbelieve in the Higgs boson and a lot of other physics.\nNo, he already addressed this; he\u2019s talking about purely empirically detected effects with small effect size, not effects with small effect size backed up by theory.\n\n\n\n\n\n\n\n\n Scott Alexander says: \n\n\t\t\tApril 29, 2014 at 9:41 pm \nSo would or would not Robin believe in the Higgs boson more once it was detected than he did before when it was merely predicted?\nIf the latter, does he think it was a colossal waste of money and time to (successfully) try to detect it?\n\n\n\n\n\n\n\n\n\n\n\n\n Julio Siqueira says: \n\n\t\t\tMay 4, 2014 at 8:34 pm \n\u201cYeah I won\u2019t believe that aspirin reduces heart problems, but that seems a mild price to pay.\u201d\nActually, there are theories about the way aspirin works:\n\u201cAntiplatelet agents, including aspirin, clopidogrel, dipyridamole and ticlopidine, work by inhibiting the production of thromboxane.\u201d\nhttp:\/\/www.strokeassociation.org\/STROKEORG\/LifeAfterStroke\/HealthyLivingAfterStroke\/ManagingMedicines\/Anti-Clotting-Agents-Explained_UCM_310452_Article.jsp\nThe 
\u201cinhibition\u201d of headaches by aspirin is based pretty much on the very same mechanism, with some minor variations in the biochemical pathways, and differences in the body\u2019s target-areas. But because its effect size is much much bigger than its effect as an antiplatelet, theorists at heart rarely dismissed it (especially when in need\u2026). Aspirin was used to fight headaches for about a century before its anti-headache mechanism was discovered.\nJulio Siqueira\nhttp:\/\/www.criticandokardec.com.br\/criticizingskepticism.htm\n\n\n\n\n\n\n\n\n\n\n Jonathan Graehl says: \n\n\t\t\tApril 29, 2014 at 12:12 pm \nhttp:\/\/news.sciencemag.org\/brain-behavior\/2014\/04\/male-scent-may-compromise-biomedical-research says that lab mice display pain less in the presence of the scent of a human male or his smelly t-shirt. The mice showed 2\/3 the pain when near the human male scent (in person or through shirt).\n\n\n\n\n\n\n\n\n Eric Rall says: \n\n\t\t\tApril 29, 2014 at 4:01 pm \nAs I understand it, the actual story of Galileo vs the Scholasticists involved a key role played by the emergence of gunpowder artillery as a major battlefield weapon. Specifically, artillery officers who aimed their weapons using Aristotelian mechanics (cannonball follows a straight line from the muzzle until it runs out of impetus, then falls straight down at a rate dependent on its mass) missed their targets, while those who used the theories developed by Galileo et al (curved path whose curve depends only on the angle and muzzle velocity of the cannonball, not its mass) hit their targets. And because effective use of artillery was becoming a life-and-death issue for various high-status people, those people paid serious attention to what made their artillery officers better at hitting their targets.\nIn that light, I\u2019d suggest considering putting applied engineering at the top of the pyramid of science. 
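Eric Rall's artillery example above rests on one concrete fact of Galilean mechanics: over flat ground in a vacuum, range depends only on launch angle and muzzle velocity, never on the projectile's mass. A toy sketch (function name and numbers are illustrative only):

```python
# Toy check: Galilean projectile range over flat ground, no air resistance.
import math

def cannonball_range(muzzle_velocity, angle_deg, g=9.81):
    """Range in meters: R = v^2 * sin(2*theta) / g. Mass appears nowhere."""
    theta = math.radians(angle_deg)
    return muzzle_velocity**2 * math.sin(2 * theta) / g

# A 4 kg and a 12 kg ball fired identically land at the same spot,
# which is exactly what the Aristotelian impetus model gets wrong.
print(cannonball_range(100, 45))  # maximum range, at 45 degrees
print(cannonball_range(100, 30))  # equals the range at 60 degrees
print(cannonball_range(100, 60))
```

The complementary-angle symmetry (30 and 60 degrees giving equal range) was one of the practically useful predictions an artillery officer could check on the field.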
The ultimate confirmation of a theory as substantially correct (*) has to be the ability to use the assumption of that theory\u2019s correctness in actually making and doing things and to have those things actually work in ways that they wouldn\u2019t if the theory were fundamentally flawed. Of course, as I write this, I\u2019m realizing that while this works great for things like using physics theories to build airplanes and rockets, the \u201capplied science\u201d standard can have really shitty results in fields where the appearance of success is easier to come by accidentally. I\u2019ll leave telling the difference between the two cases as a massive unsolved problem.\n(*) \u201cSubstantially correct\u201d has to be qualified because of issues like Newtonian Mechanics being demonstrably wrong in very subtle ways, but it still having practical usefulness because it\u2019s correct enough to correctly predict all sorts of well-understood cases.\n\n\n\n\n\n\n\n\n suntzuanime says: \n\n\t\t\tApril 29, 2014 at 4:43 pm \nIf your psychological theory is so accurate, why aren\u2019t you a cult leader?\n\n\n\n\n\n\n\n\n Anthony says: \n\n\t\t\tApril 30, 2014 at 4:41 pm \nI commented below that science with bad epistemology is called \u201cengineering\u201d, but it\u2019s worse than that. Engineering results don\u2019t need a theory, and\/or can live quite happily with two mutually-incompatible theories about what\u2019s happening. \nI\u2019m a soil engineer, and some of what we do is downright embarrassing when you look into its theoretical basis. And on how much we extrapolate based on seriously limited data. (Extrapolate, not just interpolate.)\n\n\n\n\n\n\n\n\n Nancy Lebovitz says: \n\n\t\t\tMay 2, 2014 at 12:35 pm \nIt sounds like engineering that\u2019s based on incoherent theory would be fertile ground for finding hypotheses to test to develop better theories. 
Are people working on that?\n\n\n\n\n\n\n\n\n Anthony says: \n\n\t\t\tMay 4, 2014 at 5:27 pm \nIn my field, somewhat. What we\u2019ve got is theory with mathematically intractable application, and\/or real-life circumstances with inadequate characterization of the properties so that the theory is impossible to apply. Imagine finding the actual amount of friction in every pair of surfaces in a car.\nSoftware engineering seems to get along quite well with very little contact with the formal theory of software, and from the outside, it seems that their terrible results stem from overmuch complexity rather than poor theoretical foundations.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n somnicule says: \n\n\t\t\tApril 29, 2014 at 9:55 pm \nIt seems that naive experiments on sufficiently complicated systems may as well be correlational studies. Along the lines of Hanson\u2019s comment, without a powerful causal model behind the results of an experiment, it\u2019s very hard to draw any meaningful or useful conclusions, particularly in cases of small effect size. Things like throwing medications at the wall and seeing what sticks, just seeing what happens in specific circumstances for psychology experiments, etc. all seem a bit cargo culty. A stop-gap measure until we get solid causal models.\n\n\n\n\n\n\n\n\n The Nabataean says: \n\n\t\t\tApril 30, 2014 at 9:14 am \nSuppose this isn\u2019t just lousy protocol, and psi really does exist. All of these purported psychic phenomena are events that seem just a little too farfetched to be coincidence, and happen on the scale of human observers.\nIf this is the case (and just so you don\u2019t get the wrong idea, I think that\u2019s a very big \u2018if\u2019), that could be evidence in favor of the Simulation Hypothesis. Maybe the beings running the simulation occasionally bias their random number generators for the benefit (or confusion) of the simulated humans. 
Maybe they want to see how much we can deduce about our world when faced with seemingly inviolable laws of physics that nevertheless seem to be violated. Why? Perhaps they want to find out if they could have missed something in their own model of physics. They think they understand their world fully by now, but then again, there are always some anomalies and fishy results. They might want to run a simulation to see whether, if there really were some monkeywrenches being thrown into otherwise tidy patterns of cause and effect, intelligent beings would be able to infer their existence.\nIs parapsychology the control group, or are we the experimental group?\n\n\n\n\n\n\n\n\n Mai La Dreapta says: \n\n\t\t\tApril 30, 2014 at 9:26 am \n\u201cPsi exists\u201d strikes me as more likely than \u201cWe are in a simulation\u201d, and is favored by my internal implementation of Occam\u2019s Razor.\n\n\n\n\n\n\n\n\n Slippery Jim says: \n\n\t\t\tApril 30, 2014 at 10:15 am \nSimulation Hypothesis and psi?\nEnter Johnstone\u2019s Paradox\n1. The universe is finite.\n2. 
All phenomena in the universe are subject to, and can be explained in terms of, a finite set of knowable laws which operate entirely within the universe.\n1) If reality is ultimately materialistic and rational, then it could be described in a finite set of instructions and modelled as information.\n2) If it could be modelled in this way, then it will be \u2014 at the very least because, given limitless time, all possible permutations of a finite universe should occur.\n3) For every one original reality there will be many such sub-models, and they too will generate many sub-sub-models.\n4) The nature of complex systems means that it is almost impossible for any reality to reproduce itself exactly, indeed there is greater likelihood that the submodels will be mutations of the original, subject to different structures and laws.\n5) Because the models severely outnumber the original reality, or realities, it is therefore more likely that we are living in a universe modelled as information, and it is most likely that it is not identical to the original reality.\n6) Thus Johnstone\u2019s Paradox: if reality is ultimately materialistic and rational, then it is highly unlikely we are living in a materialistic, rational universe.\n[This was advanced in 1987 by Lionel Snell and is roughly isomorphic to Bostrom\u2019s argument.]\n\n\n\n\n\n\n\n\n anon says: \n\n\t\t\tApril 30, 2014 at 5:44 pm \nThe conclusion seems flawed. Maybe we\u2019re living in a universe that\u2019s an inaccurate simulation of a different universe. But that has no bearing on whether or not the universe is materialistic or rational. We might not match the original reality, but that doesn\u2019t imply causality doesn\u2019t exist.\n\n\n\n\n\n\n\n\n\n\n Doug S. 
says: \n\n\t\t\tMay 1, 2014 at 5:13 pm \nSo, in other words, \u201cNot only does God play dice, sometimes he ignores the result and just says it worked.\u201d\n\n\n\n\n\n\n\n\nPingback: The Ouroboros of Science | CURATIO Magazine\n\n\n\n\n Alexander Stanislaw says: \n\n\t\t\tApril 30, 2014 at 11:30 am \nEveryone seems to be assuming that bad epistemology makes for bad science. But does it? One advantage to bad epistemology (ie. normal science in which scientists have an incentive to prove their hypotheses rather than objectively test them) is that correct results are recognized more quickly. You get incorrect results too, but that is the price you pay. I anticipate that a super rigorous approach to science would slow down progress in most fields.\n\n\n\n\n\n\n\n\n Douglas Knight says: \n\n\t\t\tApril 30, 2014 at 11:38 am \nIf you bias towards false positives, then maybe true positives get published quicker, but there\u2019s a difference between \u201cpublished\u201d and \u201crecognized.\u201d In psychology, everyone is in their own bubble, completely ignoring everyone else\u2019s work. Good work is never recognized.\n\n\n\n\n\n\n\n\n Anthony says: \n\n\t\t\tApril 30, 2014 at 4:19 pm \nScience with bad epistemology is called \u201cengineering\u201d.\n\n\n\n\n\n\n\n\n Alexander Stanislaw says: \n\n\t\t\tApril 30, 2014 at 4:49 pm \nAnd I\u2019d say that engineering is a massive success from an instrumental rationality perspective.\n\n\n\n\n\n\n\n\n\n\n\n\n Chris says: \n\n\t\t\tApril 30, 2014 at 12:19 pm \nJust wanted to leave a note saying that this is outstanding \u2014 the best blog post I\u2019ve read this year, I think \u2014 and I expect I\u2019ll be referring people to it for a long time. Thanks for writing it! Have you thought about setting up a tip jar of some sort? (Paypal, Gittip, Patreon, etc)\n\n\n\n\n\n\n\n\n Scott Alexander says: \n\n\t\t\tMay 1, 2014 at 1:09 am \nThank you! 
I don\u2019t need the money that much right now, but if you want to donate to a charity I like, you can go for http:\/\/intelligence.org\/ or anything on http:\/\/www.givewell.org\/charities\/top-charities\n\n\n\n\n\n\n\n\nPingback: A modest proposal to fix science \u00ab Josh Haas's Web Log\n\n\n\n\n MugaSofer says: \n\n\t\t\tApril 30, 2014 at 2:30 pm \nI feel suddenly less critical of Mage: The Ascension, a game where reality was entirely subjective\/shaped by expectations and the \u201claws of physics\u201d, aka scientific consensus, were standardized by an ancient conspiracy in order to give humanity a stable footing for civilization.\n\n\n\n\n\n\n\n\n Johann says: \n\n\t\t\tApril 30, 2014 at 5:22 pm \nLet\u2019s see if we can agree on one thing here, Alexander; I think you\u2019ve written a very intellectually engaging piece, with a great deal of thought behind it\u2014certainly one of the more interesting I have read\u2014but I still have some basic concerns I would like to flesh out. I\u2019ll start off with the caveat that I\u2019m favorably disposed towards psi and parapsychology, and that I\u2019m fairly well invested in researching the field, but I hope you\u2019ll agree with me that we can have a productive exchange despite this most unsupportable conviction :-). If I am correct, all participants including myself will leave with an enhanced understanding, and perhaps respect, for the positions of both sides of this debate.\nI\u2019ll start by noting my most significant argument in relation to your piece: that if all these experiments, as you graciously concede, are conducted to a standard of science that is generally considered rigorous, are we not well-justified in concluding at least this: \u201cThe possibility that psi phenomena exist must now be seriously considered\u201d? If not; what, I ask, can we say in defense of scientific practice? 
For if we allow it of ourselves to conduct numerous experiments of high-quality, designed by definition to eliminate (or at least strongly mitigate) explanations alternate to the one we have decided to test for, and then do not even bequeath to our conclusions\u2014upon finding a positive result\u2014the concession that the original explanation is a viable one, how do we justify our first impetus to scientifically investigate that explanation in the first place?\nTo illustrate my difference to your position, consider the following quote from your essay:\n\u201cAfter all, a good Bayesian should be able to say \u201cWell, I got some impressive results, but my prior for psi is very low, so this raises my belief in psi slightly, but raises my belief that the experiments were confounded a lot.\u201d\nI\u2019m led to question whether you really did not mean to say something slightly different. After all, if we take these words at face value, can we not\u2014satirically\u2014call them a decent formula for confirmation bias? A prior belief is examined with a strenuous test; that test lends evidence against the belief; we therefore conclude the test is more likely to have been flawed (i.e. evidence against our position causes us to reaffirm our belief). How do you counter this? IMO, statistical inference, whether bayesian or frequentist, only allows us to rule out the hypothesis of chance\u2014it says nothing about the methodology behind an experiment. Thus, people only ever accept the p-value or Bayes factor of a study literally if they already believe the experiments were well-done.\nNow let me address some of your specific points, to see if I cannot make the psi hypothesis a slightly more plausible one to you:\nYou mention Wiseman & Schlitz (1997), an oft-cited study in parapsychology circles, as strong evidence that the experimenter effect is operating here. I certainly agree. 
At the end of their collaboration, both had conducted four separate experiments, where three of Schlitz\u2019s were positive and significant, and zero of Wiseman\u2019s were. Their results can only be explained in two ways: (1) psi does not exist, and the positive results are due to experimenter bias, and (2) psi does exist, and the negative-positive schism is still related to experimenter effects. Let\u2019s ignore issues of power, fraud, and data selectivity for now (if you find them convincing, we can discuss them in another post).\nThe rub, for me, is that this is an example of a paper that is designed to offer evidence against hypothesis (1)\u2014Wiseman certainly wasn\u2019t happy about it. The reason is that both experimenters ran protocols that were precisely the same but for their prior level of interaction with subjects (and their role as starers), ostensibly eliminating methodology as a confounding problem. Smell or other sensory cues, for example (as was mentioned in the above comments), could not have been the issue; staring periods were conducted over closed circuit television channels, and the randomization of the stare\/no-stare periods was undertaken by a pseudo-random algorithm, where no feedback was given during the session that would allow subjects to detect, consciously or subconsciously, any of the impossibly subtle micro-regularities that might have occurred in the protocol.\nNow, you\u2014understandably, from your position\u2014criticize hypothesis (2), but consider the following remarks from Schlitz and Wiseman, after their experiment had been completed:\n\u201cIn the October 2002 issue of The Paranormal Review, Caroline Watt asked each of them [Wiseman and Schlitz] what kind of preparations they make before starting an experiment. 
Their answers were: Schlitz: [\u2026] \u201cI tell people that there is background research that\u2019s been done already that suggests this works [\u2026] I give them a very positive expectation of outcome.\u201d Wiseman: \u201cIn terms of preparing myself for the session, absolutely nothing\u201d\nThe social affect of both experimenters seems to have been qualitatively different; we can say this almost with complete certainty (and it\u2019s not unexpected). If we acknowledge, then, such confounding factors as \u201cPygmalion effects\u201d (Rosenthal, 1969), it would be only rational to conclude that\u2014should psi exist\u2014attempts to exhibit it would be influenced by them. Even more clearly, IMO (and why parapsychologists tend to see this experiment as consistent with their ideas), it was Wiseman who did the staring in the null experiments, and Schlitz who did the staring in the positive ones. Would it not make sense that a believer in psi would be more \u201cpsychic\u201d than a skeptic, if psi exists? (or that a person with confidence in their abilities could make a better theatrical performance, or more likely deduce the solution to a complex mathematical problem, if they are not insecure about their skill level?)\nParapsychologists are only following the data, to the best of their ability. You\u2019ll find that, under the psi hypothesis, the discrepancy in success is relatively simple to explain, whereas under the skeptical explanation we must conclude such a thing as that the most minuscule variation in experimental conditions\u2014so minuscule that it must be postulated apart from the description of the protocol and will likely never be directly identified\u2014can cause a study to be either significant or a failure. 
We must, in other words, logically determine that our science is still utterly incapable of dealing with simple experimenter bias; not just on the level of producing spurious conclusions more often than not (as Ioannidis et al show), but to the degree of failing to reliably assess literally any moderately small effect. This is itself a powerful claim.\nBut I will return to the nature of the psi hypothesis later. For now, I will cover parapsychological experimenter effects more broadly. Consider the following: as we probably both agree, Robert Rosenthal is one of those scientists who has done a great deal of work to bring expectancy influences to our attention; his landmark (1966) book, \u201cExperimenter Effects in Behavioral Research\u201d, for example, has not inconsiderably advanced our understanding of self-fulfilling prophecies in science. Would it then surprise you to learn that Rosenthal has spoken favorably on the resistance of a category of psi studies (called ganzfeld experiments) to just the sort of idea expounded by hypothesis (2)? See the following quote from Carter (2010):\n\u201cGanzfeld research would do very well in head-to-head comparisons with mainstream research. The experimenter-derived artifacts described in my 1966 (enlarged edition 1976) book Experimenter Effects in Behavioral Research were better dealt with by ganzfeld researchers than by many researchers in more traditional domains.\u201d\nWhat if I told you that Rosenthal & Harris (1998) co-wrote a paper evaluating psi, in response to a government request, with overall favorable conclusions towards its existence; would you be inclined to read a little more of the literature on parapsychology? 
(The reference here is \u201cEnhancing Human Performance: Background Papers, Issues of Theory and Methodology\u201d) \nWhatever you believe about psi, I agree with you that examination of parapsychological results can do much to bolster our understanding of setbacks in experimentation; however, I also believe that thinking and examining our many attempts (and there are quite literally thousands of experiments, and dozens of categories, with their own literature) to grapple with potentially psychic effects, have the merit of helping to engender a truly reflective spirit of inquiry, for the reason that they represent precisely that ideal of science that we dream of meeting\u2014using data and argument to resolve deeply controversial, and potentially game-changing, issues. \nOn a superficial level, we already have evidence that parapsychology employs much more rigorous safeguards against experimenter effects than most any other scientific discipline. Watt & Nagtegaal (2004), for example, conducted a survey and found that parapsychology had run 79.1% of its research using a double-blind methodology, compared to 0.5% in the physical sciences, 2.4% in the biological sciences, 36.8% in the medical sciences, and 14.5% in the psychological sciences. 
These findings are consistent with those of an earlier survey on experimenter effects by Sheldrake (1998), which found an even greater disparity favorable to parapsychology.\nSince the field originated out of vigorous debates between proponents and skeptics, however, I find it intuitively plausible that this should be the case (the same amount of vehement scrutiny used to contest telepathy is not used to criticize studies of the effect of alcohol on memory, for example), so these findings\u2014while a bit surprising to me\u2014don\u2019t seem, on reflection, to be very much out of place.\nI think, however\u2014and you will probably agree with me\u2014that I could ramble on about safeguards and variables all day, without any effect on your opinion, if I do not discuss the most crucial, foundational issues pertaining to psi. It would be like trying to convince you that studies of astrology have rigorously eliminated alternate explanations; after all, if the hypothesis we would have to entertain is that the stars themselves, billions of miles away, determine our likelihood to get laid on any given day, it doesn\u2019t matter how strong the data is\u2014we will always suspect a flaw.\nI therefore suggest we take a wide-angle view, for a moment, on the psi question. We cannot hope to be properly disposed towards its investigation if we do not take such a view\u2014certainly it would be unacceptable to simply absorb the popular bias against it, without critical thought, since that\u2019s exactly the religious mindset we eschew; neither would it be acceptable to enjoin its possibility because we want it to be true, or because it\u2019s widespread in the media.\nLet me first address the physical argument. I\u2019m well-versed in the literature of physics and psi myself, but my friend, Max, is a theoretical physics graduate studying condensed matter physics, with a long-standing interest in parapsychology. He and I both agree that you are overestimating the degree to which psi and physics clash. 
Before I state why, consider that our opinion is not so unusual, for those who have thought about the question at length; Henry Margenau, David Bohm, Brian Josephson, Evan Harris Walker, Henry Stapp, Olivier Costa de Beauregard, York Dobyns, and others are examples of physicists who either believe that psi is already compatible with modern physics, or else think (more plausibly, IMO) that the current physical framework is suggestive of psi. De Beauregard actually thinks psi is demanded by modern physics, and has written so.\nIn light of these positions, you will see that our perspective is not an unreasonable one to maintain. Basically, we agree that if we take physical theory in its most conventional form (hoping thereby to reflect the \u201ccurrent consensus\u201d), psi and physics are just barely incompatible. I say \u201cjust barely\u201d because we have such suggestive phenomena as Bell\u2019s EPR correlations, which Einstein himself derided as telepathy, (but which we now have incontrovertibly proved through experiment) that show how two particles may remain instantaneously connected at indefinite distances from each other, if once they interacted. It is true that this phenomenon of entanglement is exceptionally fragile; however, experimental evidence in physics and biophysics journals these days purports to show its presence in everything from the photosynthesis of algae to the magnetic compass of migrating birds, and more such claims accrue all the time. Entanglement is entering warm biological systems.\nThe incompatibility arises if we conceive of psi as an information signal; if we think something is \u201ctransmitted\u201d; because the no-signaling theorem in quantum mechanics says EPR phenomena are just spooky correlations, not classical communication. You cannot use an entangled particle, as Alice, to get a message to Bob, for example, in physics parlance. 
However, some parapsychologists and physicists don\u2019t think of psi as a transfer, and lend to it the same spooky status as EPR\u2014unexplained non-local influence. If this is correct, and you are willing to accept that non-local principles can scale up to large biological organisms (as the trend of the evidence is indicating), but to a larger degree than has ever been experimentally verified (outside parapsychology, of course), then certain forms of psi are already compatible with physics (e.g. telepathy).\nIt may also surprise you to know that the AAAS convened a panel discussion on the compatibility of physics and psi, with numerous physicists in attendance, where the general consensus was that physics cannot yet rule out even phenomena like precognition. The main reason given was that the equations of physics are time-symmetric; they work forwards and backwards equally well. There are, in fact, interpretations of quantum mechanics like TSQM that play explicitly on this principle, with optics experiments unrelated to parapsychology designed to provide evidence for retro-causality (e.g. Tollaksen, Aharonov, Zeilinger). Some of them exhibit results that are rather intuitively and elegantly explained under a retro-causal model, and have more convoluted mathematical interpretations in other frameworks (all QM experiments at this time can be explained by all the interpretations, to various degrees).\nI would talk more about other ways that psi and physics can be reconciled, such as by introducing slight non-linearities in QM, but I sense that this may bury my point rather than clarify it. 
Where psi and physics are concerned, therefore, I say just this: that if physicists can unblinkingly confront the possibility of inflating space, multiple universes, retro-causality, observer-dependent time, universal constants, black holes, worm holes, extra dimensions, and vibrating cosmic strings\u2014much for which there exists fleetingly little experimental evidence, a good deal of theoretical modeling, and a lot of funding\u2014we cannot, with a straight face, dismiss a considerable body of experimental evidence for something as mundane as telepathy\u2014or of slightly more significance: precognition (the future doesn\u2019t have to be \u201cfixed\u201d to explain these experiments, BTW).\nNow, if you\u2019re looking for evidence that Bem\u2019s experiments themselves, and their replications, were well-conducted, and well-guarded against spurious expectancy effects, thus providing parapsychological evidence for retro-causality, I can only say that I personally think they were, having read the relevant papers and thought about their methodologies. However, my area of expertise relates more to ganzfeld experiments (telepathy), which in my opinion convincingly show that critics have been unable to account for the results. I have been led to this conclusion by personal examination of the data, as well as debate, and in this capacity I have seen every methodological and statistical criticism in the book, as well as every parapsychological rejoinder to them. IMO, no one has yet been able to identify a single flaw in either the ganzfeld experiments or the treatment of those experiments that can successfully account for their results\u2014and none of the major skeptics purport to. I am happy to debate anyone on this issue. 
A paper authored by myself and Max, in fact, will be coming out in the Journal of Parapsychology in June, on that very subject, if you care to read it; it tackles a number of general criticisms of psi research, with a focus on the ganzfeld, using empirical and theoretical approaches. Look for \u201cBeyond the Coin Toss: Examining Wiseman\u2019s Criticisms of Parapsychology\u201d. At the parapsychology convention in San Francisco this year, as well, we will likely do a presentation on it. \nThe rub on the ganzfeld, by the way, is this: where the baseline expected hit rate is 25%, the observed hit rate is 32%, across thousands of trials and more than a hundred experiments; and if we partition trials into those where subjects have been specially selected in accord with characteristics predicted to significantly enhance performance, we instead find hit rates of upwards of 40% (with 27% as the average proportion of hits across unselected subjects). A main focus of our paper is the proposal that we can better the ganzfeld experiments by focusing on these special subjects.\nAs I wrap up, I will admit to finding this extremely fascinating. Confronted with the kind of findings I discussed, I find myself saying\u2014as we are often impelled to do, in science, by strange data\u2014that it didn\u2019t have to be this way. And it didn\u2019t. For every example of a spuriously successful scientific hypothesis, I would wager that there are a dozen that simply didn\u2019t make it. We could have obtained a packet of null studies in parapsychology, but instead we wound up with a collection of successively improved, robust, and (many) methodologically formidable experiments (many are also poorly conducted) that collectively\u2014in almost every paradigm examined to date\u2014exhibit astronomically significant departures from chance. 
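The headline ganzfeld numbers in this comment (a 25% chance baseline against a 32% observed hit rate over thousands of trials) can be sanity-checked with a quick normal-approximation binomial test. A minimal Python sketch, assuming a hypothetical 3,000 trials since the comment gives no exact count:

```python
import math

def binomial_z(hits: int, trials: int, p0: float) -> float:
    """Normal-approximation z-score for observed hits against chance rate p0."""
    expected = trials * p0
    sd = math.sqrt(trials * p0 * (1 - p0))
    return (hits - expected) / sd

trials = 3000               # hypothetical stand-in for "thousands of trials"
hits = int(trials * 0.32)   # 32% observed hit rate
z = binomial_z(hits, trials, p0=0.25)  # 25% chance baseline

# One-sided p-value from the normal tail.
p_value = 0.5 * math.erfc(z / math.sqrt(2))
print(f"z = {z:.2f}, one-sided p = {p_value:.2e}")
```

At that (assumed) trial count the z-score is near 9 and the p-value is far below any conventional threshold, which is consistent with the comment's description of the departure from chance as astronomically significant; the question the thread debates is not the arithmetic but what explains it.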
Intelligent people have performed and assailed these experiments, but no satisfactory explanation exists today for them.\nIs it possible that this seeming signal in the noise is psi? I think so. Physics doesn\u2019t rule it out; some aspects are even suggestive. People, also, have anecdotally claimed its existence for millennia, so psi is not without an observational precedent. Nor is it without an experimental precedent, as many experiments conducted from the turn of the 19th century to today have sought to evidence it, and found results consistent with the hypothesis. \nShould we be surprised at psi? Of course\u2014the phenomenon defies our basic intuitions; I\u2019m not claiming we shouldn\u2019t be skeptical. But we should also be open-minded, and not hostile. Psi touches our scientific imaginations; it has accompanied our scientific journey from its inception (one of the first proposals for the scientific method, made by Francis Bacon was to study psi). It is directly in connection with investigating it, in fact, that several of our favorite procedures came into being, such as blinding and meta-analysis.\nI conclude this commentary with the following point: if, after having obtained results like those I just described and alluded to, as well as those you eloquently summarized for us, all that we can bring ourselves is to say is \u201cthere must have been an error someplace\u201d, I respectfully contend that what has failed is not our science, but our imagination. It is not time for scientists to throw out their method; it is not time for us to conclude that an evidence-based worldview cannot survive in the face of human bias; rather, it is time for scientists to become genuinely excited about the possibility of psi. We need more mainstream researchers, more funding, and more support to decide the question. Surely you will concede it is an interesting one. 
In pursuit of its answer, much is to be gained in understanding either Nature or our own practices in connection with investigating her mysteries.\n* If any of what I have said interests you, I highly recommend reading the following exchange between my colleague, Max, and Massimo Pigliucci (a skeptic), on the topic of parapsychology\u2014especially the comments. The debate is illuminating. \nhttp:\/\/rationallyspeaking.blogspot.com\/2011\/12\/alternative-take-on-esp.html\nhttp:\/\/rationallyspeaking.blogspot.com\/2012\/01\/on-parapsychology.html\nThank you for an interesting read.\nBest, \u2013 Johann\n\n\n\n\n\n\n\n\n anon says: \n\n\t\t\tApril 30, 2014 at 5:59 pm \nTL;DR Version\n\u2013 When Scott used Bayes\u2019 Theorem to update towards the result that the study was flawed, that was more or less confirmation bias.\n\u2013 Statistics can only tell us about probability not other things (this argument made no sense to me, I might be misunderstanding it).\n\u2013 Skeptics block psi effects which is why only believers discover their effects.\n\u2013 People are more skeptical of psi than other phenomena, unfairly.\n\u2013 Lots of smart people believe in psi and have published studies on it.\n\u2013 Physics doesn\u2019t rule out psi. Quantum entanglement provides a mechanism psi might be operating through.\n\u2013 Commenter is publishing a study on psi soon with a physicist friend.\n\n\n\n\n\n\n\n\n Johann says: \n\n\t\t\tApril 30, 2014 at 7:02 pm \nHere\u2019s to hoping the above is, more-or-less, facetious?\nAn aversion to long comments is understandable, but since it is more liable to occur in connection with positions we disapprove of, the danger of missing interesting challenges to our ideas is there.\n\n\n\n\n\n\n\n\n Randy M says: \n\n\t\t\tApril 30, 2014 at 7:05 pm \nwell, it was a very long comment, and a not unfair summary. 
This could use more elaboration:\n\u201cIf this is correct, and you are willing to accept that non-local principles can scale up to large biological organisms (as the trend of the evidence is indicating)\u201d\nWhat evidence is that referring to?\n\n\n\n\n\n\n\n\n Kibber says: \n\n\t\t\tMay 8, 2014 at 3:04 pm \nWith regards to the above comment, and in my personal experience, it\u2019s more about credibility and writing style than disagreement. Scott can write long posts that I would read because I know from experience that his posts are good, plus his writing style is usually engaging all the way through to the end. In the above comment, you lost me around \u201ccertainly one of the more interesting\u201d \u2013 i.e. before actually revealing any positions. Not necessarily a huge loss, of course, but disagreement had nothing to do with it.\n\n\n\n\n\n\n\n\n Johann says: \n\n\t\t\tMay 9, 2014 at 1:23 am \n@ Kibber\nNo matter how I write, some will perceive me as either genuine, evasive, stupid, or intelligent. And the smaller their sample is of my writing, the less accurate will be their judgements.\nMy aim was simply to put forth a counter-perspective to Scott\u2019s in the most genial, open manner possible; and to mitigate misunderstanding it had to be thorough. It is of course my prejudice that I have made arguments which should be considered on a level similar to his; this includes pointing out two technical mistakes on Scott\u2019s part that I hope will eventually be fixed, and a number of observations in parapsychology that seem to me compelling of further study.\nMy perspective is that of someone who had directly analyzed portions of parapsychology data\u2014specifically ganzfeld data\u2014and has two modest co-written papers accepted for publication on the subject, claiming that the findings are interesting in specific ways that warrant replication. 
It is your choice whether to include this perspective in your assessment.\nI am also more than happy to answer any questions you may have about my arguments. A good, well-intentioned debate is hard to pass up.\nBest, \u2013 J\n\n\n\n\n\n\n\n\n\n\n he who hates deceitful error messages, like that i'm posting \"too quickly\" says: \n\n\t\t\tApril 30, 2014 at 8:02 pm \nthanks for the precis\n\n\n\n\n\n\n\n\n\n\n Scott Alexander says: \n\n\t\t\tMay 1, 2014 at 1:07 am \nI think on a Bayesian framework, the probability that psi exists after an experiment like this one would depend on your prior for psi existing and your prior for an experiment being flawed.\nThis experiment produced results that could only possibly happen if either psi existed or the experiment was flawed, so we should increase both probabilities. However, how *much* we increase them depends on the ratio of our priors.\nSuppose that before hearing this, we thought there was a 1\/10 chance of any given meta-analysis being flawed (even one as rigorous as this one), and a 1\/1000 chance of psi existing.\nNow we get a meta-analysis saying psi exists. For the sake of simplicity let\u2019s ignore its p-value for now and just say it 100% proves its point.\nIn 1000 worlds where someone does a meta-analysis on psi, 100 will have the meta-analysis be flawed and 1 will have psi exist. \nThe results of this study show we\u2019re in either the 100 or the 1. So our probabilities should now be:\n1\/101 = ~0.99% chance psi exists\n100\/101 = ~99.0% chance the meta-analysis is flawed\nI think it\u2019s a little more complicated than this, because we know there are other parapsychological experiments whose success or failure is correlated with this one. It\u2019s probably not true that if a similar meta-analysis came out, I\u2019d have to update to 90\/10. 
And the fact that there have been a lot of studies looking for psi that found none also has to figure in somewhere.\nBut this is the basic dynamic at work making me think of this as \u201cmostly casts doubt on analysis\u201d rather than \u201cmostly proves psi\u201d.\nRegarding my low prior, you make a physics argument, and I\u2019m not really qualified to judge. But I don\u2019t find psi physically too implausible, in the sense that I wouldn\u2019t be surprised if, a hundred years from now, scientists can create a machine that does a lot of the things psi is supposed to be able to do (manipulate objects remotely, receive messages from the future, etc).\nMy worries are mostly biological. Psi would require that we somehow evolved the ability to make use of very exotic physics, integrated it with the rest of our brain, and did so in a way that leaves no physical trace in the sense of a visible dedicated organ for doing so. The amount of clearly visible wiring and genes and brain areas necessary to produce and process visual input is obvious to everyone \u2013 from anatomists dissecting the head to molecular biologists looking at gene function to just anybody who looks at a person and sees they have eyes. And it\u2019s not just the eyes, it\u2019s the occipital cortex and all of the machinery involved in processing visual input into a mechanism the rest of the brain can understand. That\u2019s taken a few hundred million years to evolve and it\u2019s easy to trace every single step in the evolutionary chain. If there were psi, I would expect it to have similarly obvious correlates.\nThere\u2019s also the question of how we could spend so much evolutionary effort exploiting weird physics to evolve a faculty that doesn\u2019t even really work. I don\u2019t think any parapsychologist has found that psi increases our ability to guess things over chance more than five to ten percent. And even that\u2019s only in very very unnatural conditions like the ganzfeld. 
The average person doesn\u2019t seem to derive any advantage from psi in their ordinary lives. The only possible counterexample might be if some fighting reflexes were able to respond to enemy moves tiny fractions of a second before they were made \u2013 but it would be very strange for a psi that evolved for split second decisions to also be able to predict far future (also, this would run into paradoxes). Another possibility might be that psi is broken in modern humans for some reason (increased brain size? lifestyle?). But I don\u2019t know of any animal psi results that are well-replicated.\nThese difficulties magnify when you remember that psi seems to be a whole collection of abilities that would each require different exotic physics. As weird as it would be to have invisible organ-less non-adaptive-fitness-providing mechanisms for one, that multiplies when you need precognition AND telepathy AND telekinesis.\n\n\n\n\n\n\n\n\n Johann says: \n\n\t\t\tMay 1, 2014 at 3:41 pm \nWell, Scott, you bring up some good points. Thank you for the swift and gracious reply. I will address your argument about Bayesian priors first.\nThe way I see it, there is something strange about your approach, and that is that if you use a two prior system, the experimental evidence becomes essentially superfluous\u2014and however logical the grounding for it may be, if this is the case we must be led to seriously question it. Consider: no matter how many meta-analyses we conduct, or how many experiments, if your priors are at 1\/10 for flaws and 1\/1000 for psi (a reasonable psi prior, BTW), you will always accept the flaw hypothesis. 
Not to put too fine a point on it, but if we were randomly selecting from infinite possible worlds where we conduct a meta-analysis, we would always have a 100 times greater chance of selecting a world with flaws than with psi, and thus also a 99.1% chance of being in the world with flaws (given a positive MA) and a .99% chance of being in the world with psi. What\u2019s remarkable to me is that it wouldn\u2019t matter how many positive MAs we got, sequentially; these probabilities would hold steady!\nMy concern is that the two-prior system is a self-fulfilling prophecy, and not how Bayesian ideas should actually play out. But that\u2019s not to say it\u2019s unreasonable, in certain contexts, to think this way. We all know human beings have only a limited time to pursue what interests them; from the perspective of a person trying to make rational decisions about what to pursue and what to avoid\u2014for whom parapsychology is just one of a mass of unreliable claims\u2014it makes sense to default to the flaw hypothesis. However, for someone who has decided that the field is worth a more intense form of scrutiny, this analysis cannot be acceptable. I have the burden of proof in this discussion; it is my job to persuade you that parapsychology is worth the latter, and not the former, treatment.\nEven a flaws prior based on empirical estimates of problematic meta-analyses is still only the crudest approximation to the \u201ctrue\u201d error prior of any MA; what it does is it uses the roughshod general quality of a scientific discipline (in this case all meta-analyses!) to predetermine the amount of evidence in one example\u2014again, a perfectly reasonable approach for estimating the likelihood of success, generally, or of any particular meta-analysis one doesn\u2019t want to invest time in, but rather superficial for a question one wants to have an accurate answer to. 
I don\u2019t know about you, but I want to have a superbly accurate answer to whether psi exists\u2014it\u2019s of great importance to me, and I will be satisfied with nothing less.\nTo the genuinely moved investigator, I think, the first step is to undertake a considerable analysis of the experimental and statistical methodology. The flaws prior must disappear, to be replaced with the condition that if any flaw is found which the investigator deems influential, the resulting p-value of the evidence (or Bayes factor) must be left in serious question until such time as it can be shown that (1) the flaw did not exist or (2) it could not have significantly impacted the data.\n\n\n\n\n\n\n\n\n Scott Alexander says: \n\n\t\t\tMay 1, 2014 at 8:27 pm \nI don\u2019t think it\u2019s as pessimistic as you make it sound.\nIn theory, if this meta-analysis dectupled my probability to 1%, the next one that comes out positive should dectuple my probability to (almost) 10%, and so on.\nIn practice this doesn\u2019t work because I assume that, if this meta-analysis is systematically flawed, the next one shares those same flaws. But you could raise my probability by showing it doesn\u2019t. For example, if the next meta-analysis fixes the problem I mention above about lack of conclusive evidence in peer-reviewed exact replications, that should raise my probability further, and so on.\nYou ask below what evidence I would find conclusive. I don\u2019t think I would find any single study conclusive. 
But if some protocol was invented that consistently showed a large positive effect size for psi, and it was replicated dozens of times with few or no negative replication attempts, and it was even replicated when investigated by skeptics like Wiseman, I think that would be enough to make me believe.\n\n\n\n\n\n\n\n\n Johann says: \n\n\t\t\tMay 3, 2014 at 7:19 pm \nThe problem with the two-prior system is that even if the flaws prior varies between meta-analyses\u2014which I agree is better than keeping it fixed\u2014it still has the effect of turning the Bayes factor of any particular meta-analysis into little more than a large constant by which to multiply your predetermined priors, leaving the same relative odds for your hypotheses at the end of the analysis as at the beginning. \nEssentially, the approach seems to me to be slapping numbers onto what people already do: disregard the Bayes Factor or the p-value if they think flaws are more likely than the result of the MA.\nAll I\u2019m saying is that determining that last part\u2014the likelihood that flaws are really a more reasonable explanation\u2014requires an exhaustive, self-driven inspection, if one genuinely wants to avoid a Type II error as well as a Type I error.\nI also think you may have misread the results of published exact replications because you left out\nBem (2012)\nN=42\nES=0.290\nSavva et al. (2004)\nN=25\nES=0.34\nSavva et al. Study 1 (2005)\nN=50\nES=0.288\nSavva et al. Study 2 (2005)\nN=92\nES=-0.058\nSavva\u2019s studies can be listed as exact replications of Bem, without precognition, because they stated explicitly in their papers that they were replicating Bem (2003), the first series of habituation experiments in Bem (2011) which Bem presented at a PA convention 11 years ago. 
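The sample-size-weighted combination Johann goes on to eyeball can be sketched for the four studies just listed (a sketch using only the values quoted in this comment; the ~0.08 aggregate reported later in the thread also folds in the replications from the original essay, which are not listed in this excerpt):

```python
# Sample-size-weighted mean effect size for the four studies Johann
# lists above (values exactly as quoted in the comment).
studies = [
    ("Bem (2012)",                  42, 0.290),
    ("Savva et al. (2004)",         25, 0.34),
    ("Savva et al. Study 1 (2005)", 50, 0.288),
    ("Savva et al. Study 2 (2005)", 92, -0.058),
]

total_n = sum(n for _, n, _ in studies)
weighted_mean_es = sum(n * es for _, n, es in studies) / total_n

# ~0.14 for these four alone; the 0.0802 figure reported downthread
# also averages in studies not listed in this excerpt.
print(f"N = {total_n}, weighted mean ES = {weighted_mean_es:.3f}")
```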
Bem (2012), also, counts as a legitimate prospective replication of Bem (2011), IMO.\nIf you factor in these missing studies, you now have 6\/11 positively directional results, five of which display effect sizes from .27 to .34, and only two of which display negative effect sizes of comparable magnitude. Just eyeballing it, I\u2019m willing to bet that a combination of these ES values would result in an aggregate ES of around 0.09, the reported value for exact replications of Bem in general.\n\n\n\n\n\n\n\n\n Johann says: \n\n\t\t\tMay 3, 2014 at 10:30 pm \nI\u2019m also interested in hearing your perspective on the fact that not only was there a general overall result in Bem et al. (2014), but several observations, as well, that run directly counter to what we might predict in the absence of an effect. I\u2019d like to see what you think about the strength of such observations as evidence against the experimenter error\/manipulation hypothesis.\n For example:\n(1) Consider that the much talked about \u201cexperimenter degrees of freedom\u201d would be expected to more strongly impact conceptual rather than exact replications\u2014hardly a controversial point\u2014yet the ES of the conceptual experiments is lower than that of the exact. Indeed, exact replications have the highest ES of the batch.\nAlso, if you read Table 1 carefully, as well as the paragraph before it, you will see that all the experiments listed under \u201cIndependent Replications of Bem\u2019s experiments\u201d exclude Bem\u2019s original findings, including the \u201cexact replications\u201d, so the point you make above in your essay that he counted those in the analysis is, I believe, mistaken.\n(2) There were very noticeable differences between fast-thinking and slow-thinking experiments, as well as no obvious reason why this should be according to the experimenter manipulation hypothesis. 
In particular, every single fast-thinking category yielded a p-value of less than .003, but both types of slow-thinking experiments yielded p-values above .10. Bem gives a good explanation of this in terms of the psi hypothesis, pointing out that online experiments seemed to have very strongly hampered the ES of slow-thinking protocols. It seems to me the experimenter error\/manipulation hypothesis would be at a loss to account for this; why should it be that slow-thinking protocols or online administration of these protocols lower incidences of bias? \nI think the safeguards of these experiments pretty much rule out sensory cues; therefore the skeptical explanation must lie in p-hacking, multiple analysis, experimenter degrees of freedom, selective outcome reporting, etc. However (1) seems inconsistent with most of this, the check for p-hacking failed to find a result, and (2) decisively refutes the prediction that such biases evenly affected fast and slow-thinking protocols\u2014the most straightforward prediction we would have made prior to seeing the results.\nI\u2019m not claiming any of this is conclusive, but I am saying that when you think carefully about some of the results in this MA, they take you by surprise. This is the kind of ancillary data that contributes to the veracity of an effect as much or more than the basic overall effect measure, IMO; if these types of suggestive trends weren\u2019t present throughout parapsychology databases, I would find them less convincing.\n\n\n\n\n\n\n\n\n Johann says: \n\n\t\t\tMay 4, 2014 at 12:53 am \nBTW, Max just told me that he did the sample-size weighted mean calculation for the ES values I report plus the ones you report: the result is 0.0802. This exactly confirms what I said in my above post, and refutes the contention that published exact replications of Bem\u2019s studies fail to replicate the results of the 31 reported exact replications, published or not, in Bem et al. 
(2014).\nMax also bets there is heterogeneity across these ES values, given their extreme variance; an I^2 test on them might be appropriate. If moderate to great heterogeneity was found, it would count as further evidence against the null.\n\n\n\n\n\n\n\n\n\n\n Johann says: \n\n\t\t\tMay 1, 2014 at 7:15 pm \nI don\u2019t have time to address the physics\/biology argument in detail now, but consider just these few points: \n(1) psi is taken by many parapsychologists to be nothing more than an artifact of retrocausality; under the DATS model of psi, which makes testable predictions, psi is just the collateral of a conscious process that slips backwards and forwards in time (we don\u2019t need to factor in long-term precognition right now, since the evidence for that is mostly anecdotal). See: http:\/\/www.lfr.org\/lfr\/csl\/library\/DATjp.pdf \nEDIT: Retrocausality can explain, BTW, the results of probably most, if not all, psi experiments. It is also one of the ways entanglement has been theorized to work, and it is entailed by the TSQM model of quantum mechanics. Bell\u2019s theorem, if I recall, establishes a non-local, anti-realist, OR retrocausal universe. 
Most physicists opt for non-locality, but I think the experimental evidence from the physicists I mentioned, as well as from parapsychology, should prompt us to examine the retrocausality option more carefully.\nFor a fascinating series of physiological psi experiments which complement Bem\u2019s and offer evidence for presentiment, where another comprehensive meta-analysis has also been published, see: http:\/\/journal.frontiersin.org\/Journal\/10.3389\/fpsyg.2012.00390\/pdf\nThe effect size of the above experiments is considerably larger than Bem\u2019s, on the order of the ganzfeld findings.\n(2) It is true that we haven\u2019t found an obvious organ associated with the \u201cpsi sense\u201d, but it is also true that the human body has a number of senses beyond the five\u2014precisely how many is constantly in debate\u2014that don\u2019t have organs as clear as eyes, for example, \n(3) The brain has been rightly termed the most complex object in the known universe; so complex it contains the mystery of conscious experience\u2014the \u201cHard Problem\u201d\u2014which has bewildered neuroscientists and philosophers for centuries.\nWhen it comes to spooky physics, consider that we have predicted entanglement to occur in the eyes of birds, with empirical and theoretical evidence (still in debate), as well as in the photosynthesis of algae, and also quantum tunneling in proteins across the biological spectrum; if this is the case, imagine the level of spooky physics that might take place in a human brain.\n\n\n\n\n\n\n\n\n suntzuanime says: \n\n\t\t\tMay 1, 2014 at 7:29 pm \nTo be fair, it doesn\u2019t take much to bewilder philosophers.\n\n\n\n\n\n\n\n\n\n\n Johann says: \n\n\t\t\tMay 1, 2014 at 7:18 pm \nA final question for now: what level of evidence would convince you that some form of psi exists? 
What sort of experiment, under what conditions?\n\n\n\n\n\n\n\n\n Troy says: \n\n\t\t\tMay 2, 2014 at 12:15 pm \nThere\u2019s also the question of how we could spend so much evolutionary effort exploiting weird physics to evolve a faculty that doesn\u2019t even really work. I don\u2019t think any parapsychologist has found that psi increases our ability to guess things over chance more than five to ten percent. And even that\u2019s only in very very unnatural conditions like the ganzfeld. The average person doesn\u2019t seem to derive any advantage from psi in their ordinary lives.\nOne of the most common \u201cpsychic\u201d anecdotes that I hear is some variation of the following story: I had a sense that something was wrong with my Aunt Bea, so I picked up the phone and called, and my brother answered and said that she just had a heart attack. I think such anecdotes are especially common among twins and other close relatives.\nLet\u2019s assume that there\u2019s actually some kind of telepathy going on here and that it\u2019s either explained by genetic similarities or close emotional connections. Either way, it seems plausible that being able to (even highly fallibly) tell when your close genetic relative or emotional companion is in trouble could be of significant evolutionary advantage.\n\n\n\n\n\n\n\n\n gwern says: \n\n\t\t\tMay 6, 2014 at 6:52 pm \nLet\u2019s assume that there\u2019s actually some kind of telepathy going on here and that it\u2019s either explained by genetic similarities or close emotional connections. 
Either way, it seems plausible that being able to (even highly fallibly) tell when your close genetic relative or emotional companion is in trouble could be of significant evolutionary advantage.\nNice thing about the evolutionary theory is that it suggests quite a few testable predictions: since it seems clear that psi, if it exists, varies between people, if it\u2019s genetically based we should see the usual factor results from sibling\/fraternal-twin\/identical-twin studies; we should see dramatic increases in psi strength when we pair related or unrelated people in ganzfeld or staring studies (presumably identical-twin and then parent-child bonds are strongest, but we might expect subtler effects like stronger dad-son & mom-daughter communication, weaker communication among people related by adoption etc), we should be able to show decreases in communication between couples who were linked in 1 session but had broken up by the next session, and so on. You can probably think of more implications. (Hm\u2026 would we expect people who grew up in dangerous places or countries, and so whose relatives\/close-ones would be more likely to be at risk, to have greater receptiveness?)\nOn the other hand, when we try to put it in evolutionary terms as kin selection, it casts a lot of doubt on the hypothesis, since the benefit doesn\u2019t seem to be big and so selection would be weak. I\u2019m not a population genetics expert, but I\u2019ll try to do some estimating\u2026\nThe benefit: remember the quip \u2013 you would sacrifice yourself for 2 siblings, 4 nephews, 8 cousins\u2026 How often does one feel worried about Aunt Bea? And how often is Aunt Bea actually in trouble? (It\u2019s no good for the psi sense if it spits out false negatives or positives, it only helps when it generates a true positive and alerts you when the relative is in danger.) Speaking for myself, I\u2019ve never had that experience. 
Even the people who generate such anecdotes don\u2019t seem to experience such events more than a few times in a lifetime. Imagine that you\u2019re alerted, say, 3 or 4 times in a lifetime, a quarter are correct, and you have even odds of saving their lives single-handedly, and also that they\u2019re still young enough to reproduce, and you costlessly wind up saving Aunt Bea\u2019s life. You\u2019re related by a quarter to Aunt Bea, I think (half comes from mother, mother will be sibling-related to Aunt Bea, so 0.5 * 0.5 = 0.25, right?), then your inclusive fitness gain here is 0.25 * (1\/4) * (1\/2) = +0.03125. I think these are generous figures and the true s is a lot lower, but let\u2019s go with it.\nSo let\u2019s say that if psi were a single mutation rather than a whole bunch, it has a selective advantage of 0.03125. An advantage doesn\u2019t guarantee that the mutation will become widespread through the population, since the original bearers can just get unlucky before reproducing much. One good approximation to the fixation probability is apparently simply doubling the selective advantage (http:\/\/rsif.royalsocietypublishing.org\/content\/5\/28\/1279.long \u03c0\u22482s), so then the probability of fixation is 6%.\nSo if such a mutation were to ever happen, it\u2019s highly unlikely that it would then spread.\n\n\n\n\n\n\n\n\n endoself says: \n\n\t\t\tMay 6, 2014 at 10:15 pm \nIf it appears once it is unlikely to spread. 
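gwern's back-of-the-envelope estimate, and this point about repeated origins, can be sketched together (a rough sketch assuming the figures quoted above: relatedness 0.25, a 1/4 hit rate, even odds of a save, and the π ≈ 2s approximation; the k-origins extension is an illustration of the reply here, not gwern's own calculation):

```python
# gwern's inclusive-fitness estimate, using the figures quoted above.
relatedness = 0.25       # you <-> Aunt Bea (0.5 * 0.5)
p_alert_correct = 1 / 4  # fraction of lifetime "alerts" that are true
p_save = 1 / 2           # even odds of single-handedly saving her

s = relatedness * p_alert_correct * p_save  # selective advantage = 0.03125

# A new mutation with advantage s fixes with probability ~2s
# (the approximation gwern links).
p_fix_single = 2 * s  # 0.0625, i.e. ~6%

# The reply's point, as an illustration: if the same mutation arises
# independently k times, the chance that at least one copy eventually
# fixes is much larger than for a single origin.
def p_fix_k_origins(k: int) -> float:
    return 1 - (1 - p_fix_single) ** k

print(f"s = {s}, single-origin fixation ~ {p_fix_single:.1%}")
print(f"fixation with 10 independent origins ~ {p_fix_k_origins(10):.1%}")
```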
Many mutations that increase fitness by 3% have reached fixation, since the same mutation, or different mutations with the same effects, can occur many times given a large enough population and enough time.\n\n\n\n\n\n\n\n\n gwern says: \n\n\t\t\tMay 6, 2014 at 10:48 pm \nYes, but the question of how often a psi single mutation would arise leads us to the question of whether that\u2019s remotely plausible (no), hence it must be a whole assemblage of related pieces, and that leads us to the question of how psi could possibly work, much less evolve incrementally; and I\u2019d rather not get into that just to make my point that the fitness & probability fixation can\u2019t be very big.\n\n\n\n\n\n\n\n\n\n\n\n\n Anonymous says: \n\n\t\t\tMay 4, 2014 at 3:29 pm \nThis experiment produced results that could only possibly happen if either psi existed or the experiment was flawed, so we should increase both probabilities. However, how *much* we increase them depends on the ratio of our priors.\nThat\u2019s patently wrong. It is fundamental to Bayesian inference that the amount by which our prior odds change in response to new evidence is independent of those prior odds:\nPosterior odds = Bayes factor \u00d7 Prior odds , \nwhere the Bayes factor, the ratio of the marginal likelihoods of the two hypotheses we are considering, quantifies the relative weight of the evidence for those two hypotheses. \nSuppose that before hearing this, we thought there was a 1\/10 chance of any given meta-analysis being flawed (even one as rigorous as this one), and a 1\/1000 chance of psi existing.\nNow we get a meta-analysis saying psi exists. For the sake of simplicity let\u2019s ignore its p-value for now and just say it 100% proves its point.\nIn 1000 worlds where someone does a meta-analysis on psi, 100 will have the meta-analysis be flawed and 1 will have psi exist.\nThe results of this study show we\u2019re in either the 100 or the 1. 
So our probabilities should now be:\n1\/101 = ~0.99% chance psi exists 100\/101 = ~99.1% chance the meta-analysis is flawed.\nThe new evidence increases your odds of the psi vs the non-psi hypothesis; thus it is evidence in favor of the psi hypothesis relative to the non-psi hypothesis. It is fundamental to Bayesian inference that if enough such evidence accumulates, the probability of the psi hypothesis must approach 1 in the limit. However, if you continue to update your odds in the manner you describe, your probability of psi can never exceed 1\/2. Thus, no amount of such evidence could ever convince you of the psi hypothesis, in contradiction of a fundamental Bayesian law.\n\n\n\n\n\n\n\n\n\n\n Jesper \u00d6stman says: \n\n\t\t\tMay 4, 2014 at 3:09 am \nInteresting points. I looked for 4 studies by Wiseman and Schlitz but could only find 3. What is the study after Wiseman and Schlitz 1997?\n(She only mentions 3 as of February 2013 http:\/\/marilynschlitz.com\/experimenter-effects-and-replication-in-psi-research\/ , and I have failed to find other Wiseman and Schlitz papers when googling a bit)\n\n\n\n\n\n\n\n\n Johann says: \n\n\t\t\tMay 4, 2014 at 2:37 pm \nPrior to their collaboration, Wiseman and Schlitz had both carried out staring studies with null results, which prompted their joint experiment. These were Wiseman & Smith (1994), Wiseman et al. (1995), and Schlitz & LaBerge (1994). I count these as semi-evidential, kind of like preliminary studies, which offered results suggestive of an experimenter effect, but which were then reproduced under more rigorous conditions.\nI missed one of those studies though, so the revised count should be Schlitz: 3\/4 and Wiseman: 0\/5, three of which for Schlitz (2 successes\/3) were part of the collaboration, and three of which for Wiseman (3 failures\/3) were as well. 
\nAltogether, given that every success mentioned is an\nindependently significant experiment, the statistical fluke hypothesis is unlikely.\n\n\n\n\n\n\n\n\n Johann says: \n\n\t\t\tMay 5, 2014 at 5:18 am \n* I say \u201ccarried out studies with null results\u201d above, but what I really meant was Wiseman got null results and Schlitz got positive results.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Johann says: \n\n\t\t\tApril 30, 2014 at 7:22 pm \nThe following popular article in Nature mentions a few examples:\nhttp:\/\/www.nature.com\/news\/2011\/110615\/pdf\/474272a.pdf\nThere is a decent talk on the subject by physicist Jim Al-Khalili, at the Royal Academy, unconnected with parapsychology (he\u2019s got a great bow-tie, though):\nhttps:\/\/www.youtube.com\/watch?v=wwgQVZju1ZM\n^ If the above link doesn\u2019t show, type \u201cJim Al-Khalili \u2013 Quantum Life: How Physics Can Revolutionise Biology\u201d into YouTube instead.\nThere are also a number of references:\nSarovar, Mohan; Ishizaki, Akihito; Fleming, Graham R.; Whaley, K. Birgitta (2010). \u201cQuantum entanglement in photosynthetic light-harvesting complexes\u201d. Nature Physics\nEngel GS, Calhoun TR, Read EL, Ahn TK, Mancal T, Cheng YC et al. (2007). \u201cEvidence for wavelike energy transfer through quantum coherence in photosynthetic systems\u201d. Nature 446 (7137): 782\u20136.\n \u201cDiscovery of quantum vibrations in \u2018microtubules\u2019 inside brain neurons supports controversial theory of consciousness\u201d. ScienceDaily. Retrieved 2014-02-22.\nErik M. Gauger, Elisabeth Rieper, John J. L. Morton, Simon C. Benjamin, Vlatko Vedral: Sustained quantum coherence and entanglement in the avian compass, Physical Review Letters, vol. 106, no. 
4, 040503 (2011) \nIannis Kominis: \u201cQuantum Zeno effect explains magnetic-sensitive radical-ion-pair reactions\u201d, Physical Review E 80, 056115 (2009)\nYou can check Wikipedia, if you like, as well; it has those references and a little bit of information.\n\n\n\n\n\n\n\n\n Christian says: \n\n\t\t\tApril 30, 2014 at 7:54 pm \nThis is why scientists should be humble and embrace constructivism and second-order cybernetics when they write papers.\n\n\n\n\n\n\n\n\n Troy says: \n\n\t\t\tApril 30, 2014 at 8:33 pm \nAre there any surveys of what percentage of professional psychologists (or other relevant scientists) believe in psi (or think the evidence for it is strong enough to take it seriously)? Presumably said survey would have to be anonymous to get reliable results, since believers might be embarrassed to say so publicly.\n\n\n\n\n\n\n\n\n Johann says: \n\n\t\t\tApril 30, 2014 at 9:01 pm \nThat\u2019s an excellent question, actually, and the answer is yes.\nWagener & Monnet (1979) and Evans (1973) both privately polled populations of scientists, technologists, and academics, and found that between 60% and 70% of them agreed with the statement that psi is either a \u201cproven fact or a likely possibility\u201d (response bias and other confounding variables exist, though). Consistently low results have been found for belief among psychologists; in Wagener & Monnet (1979), psychologists who thought psi was either \u201can established fact or a likely possibility\u201d were just 36% of the total, compared to 55% of natural scientists, and 65% of social scientists.\nWhen it comes to the scientific elite, however, it is another matter. Here, evidence from McClenon (1982) seems to point to unambiguous skeptical dominance, with less than 30% of AAAS leaders holding to the likelihood of psi. Still, 30% is a lot of scientists\u2014especially given these are the board members of the AAAS, the largest scientific academy in the world. 
Add to that the fact that the Parapsychological Association is an affiliate member of the AAAS\u2014despite a vigorous campaign to remove them in 1979\u2014and you have an interesting situation.\nWe must keep in mind that while both of these pieces of information are interesting, they shouldn\u2019t do much to sway our judgement. In both surveys, it is difficult to gauge to what extent the opinions of those polled were formed in response to the empirical evidence.\nYou may find the Wagener & Monnet (1979) results here: http:\/\/www.tricksterbook.com\/truzzi\/\u2026ScholarNo5.pdf\nA larger body of results is reviewed here: http:\/\/en.wikademia.org\/Surveys_of_academic_opinion_regarding_parapsychology\n\n\n\n\n\n\n\n\nPingback: Links For May 2014 | Slate Star Codex\n\n\nPingback: The motto of th\u2026 | On stuff.\n\n\n\n\n Ilya Shpitser says: \n\n\t\t\tMay 1, 2014 at 6:18 am \nPeople like to mock Less Wrong, saying we\u2019re amateurs getting all starry-eyed about Bayesian statistics even while real hard-headed researchers who have been experts in them for years understand both their uses and their limitations. Well, maybe that\u2019s true of some researchers. But the particular ones I see talking about Bayes here could do with reading the Sequences.\n\nSo what would be your recommendation to an endless list of people from LW from EY on down who say things about B\/F that are (a) wrong, or (b) not even wrong? Could they do with reading a textbook?\nIf I had to choose between the LW cohort and the stats (or even data analyst) cohort as to who had generally better calibrated beliefs about stats issues, I know who I would go with.\n\n\n\n\n\n\n\n\n Ellie Kesselman says: \n\n\t\t\tMay 27, 2014 at 10:49 am \nYes, Ilya Shpitser! I am a mere statistician and data analyst, doubter of Jonah Lehrer\u2019s veracity, ignorantly idolatrous in my continued use of Neyman, Pearson and Fisher. I love validation.\nI recognize your name. 
You had a lively, cordial conversation with jsteinhardt on LW, following his Fervent Defense of Frequentist Statistics. I smiled with delight as I read of your commitment there.\n\n\n\n\n\n\n\n\nPingback: Utopian Science | Slate Star Codex\n\n\n\n\n Allan Crossman says: \n\n\t\t\tMay 2, 2014 at 7:27 am \nJust for the record, I didn\u2019t invent the term \u201ccontrol group for science\u201d, I think that was probably Michael Vassar.\n\n\n\n\n\n\n\n\n Ellie Kesselman says: \n\n\t\t\tMay 4, 2014 at 9:12 pm \nAllan Crossman,\nTrue or not, the term \u201ccontrol group for science\u201d is attributed to you, near and far, all over the internet. The origin seems to be consistent with your (commendably modest, honest) denial, per Douglas Knight\u2019s comments on She Blinded Me With Science, 4 Aug 2009:\n\u201cI think I\u2019ve heard the line about parapsychology as a joke in a number of places, but I heard it seriously from Vassar.\u201d\nEY replies, thread ends with yet others, e.g. \u201cParapsychology: The control group for science. Excellent quote. May I steal it?\u201d and \u201cIt\u2019s too good to ask permission for. I\u2019ll wait to get forgiveness ;).\u201d\nOn 05 December 2009, you wrote, Parapsychology the control group for science. I could find no other, better sources online, attributing it to you or Vassar. Actually, none directly to him, only you.\nEek! I need to put my time to better use. This is embarrassing!\n\n\n\n\n\n\n\n\n\n\n Ellie Kesselman says: \n\n\t\t\tMay 4, 2014 at 2:55 am \nFor the author, Mr. Scott Alexander,\nPlacing meta-analyses at the pinnacle of your Pyramid of Scientific Evidence is incorrect. As a practicing frequentist statistician, I am certain. Also, this is one of the few times that I actually agree with Eliezer Yudkowsky! He commented on your post. Substitute \u201cfrequentist\u201d for Bayes, and vice-versa, in his comment. 
The conclusion is the same, in my informed opinion: meta-analyses are less, rather than more, ah, robust, compared to some of the other pyramid levels.\nI mention this with good intent (it isn\u2019t like a tiny missing word). You said,\nThere is broad agreement among the most intelligent voices I read (1, 2, 3, 4, 5) about a couple of promising directions we could go.\nNo, noooo! Number 3 is notorious science fraud, Jonah Lehrer. Lehrer acknowledged that he fabricated or plagiarized everything. He even gave a lecture about it at a prominent journalism school, maybe Columbia or Knight or NYU, last year, after being found out. You should probably re-think whether you want to cite him as one of the most intelligent voices you read.\nUnlike most critiques of statistical analysis, yours does contain a core of truth!\nPeople are terrible. If you let people debate things, they will do it forever, come up with horrible ideas, get them entrenched, play politics with them, and finally reach the point where they\u2019re coming up with theories why people who disagree with them are probably secretly in the pay of the Devil.\nI enjoyed that, very much.\n\n\n\n\n\n\n\n\n Scott Alexander says: \n\n\t\t\tMay 4, 2014 at 9:04 pm \nI think you\u2019re misunderstanding. I am posting the standard, internationally accepted \u201cpyramid of scientific evidence\u201d, and then criticizing it. I didn\u2019t invent that pyramid and I don\u2019t endorse it.\nJonah Lehrer is indeed a plagiarist. He\u2019s also smart and right about a lot of things. Or maybe the people whom he plagiarizes are smart and right about a lot of things. I don\u2019t know. In either case, the source doesn\u2019t spoil the insight, nor does that article say much different from any of the others.\n\n\n\n\n\n\n\n\n Ellie Kesselman says: \n\n\t\t\tMay 4, 2014 at 9:48 pm \nI regret being unclear. 
I meant that I agreed with this, and only this, in EYudkowsky\u2019s comment earlier:\n\u2026meta-analyses will go on being [bullshXt]. They are not the highest level of the scientific pyramid\u2026When I read about a new meta-analysis I mostly roll my eyes.\nMe too!\nI don\u2019t know what this is about,\n\u201cYou can\u2019t multiply a bunch of likelihood functions and get what a real Bayesian would consider zero everywhere, and from this extract a verdict by the dark magic of frequentist statistics.\u201d\nWhen I make my magical midnight invocations to the dark deities of frequentist statistics, open my heart and mind to the spirits of Neyman, Pearson and Fisher, I work with maximum likelihood estimates (MLE\u2019s), not \u201clikelihood functions\u201d. There are naive Bayes models and MLE for expectation maximization algos [PDF!], but I don\u2019t know if EY had that in mind.\nYou\u2019ll lose credibility if you continue to claim that Jonah Lehrer is among the most intelligent voices you read. That is, of course, entirely your prerogative. I only wanted to be friendly, helpful.\n\n\n\n\n\n\n\n\n\n\n gwern says: \n\n\t\t\tMay 6, 2014 at 6:20 pm \nNo, noooo! Number 3 is notorious science fraud, Jonah Lehrer. Lehrer acknowledged that he fabricated or plagiarized everything. He even gave a lecture about it at a prominent journalism school, maybe Columbia or Knight or NYU, last year, after being found out. You should probably re-think whether you want to cite him as one of the most intelligent voices you read.\nHe acknowledged plagiarizing some things (mostly things I\u2019d regard as fairly trivial and common journalistic sins of simplifying & overstating), but if he plagiarized \u2018everything\u2019 I will be extremely impressed. 
I don\u2019t recall anyone raising doubts about his \u2018Decline\u2019 article, involved people commented favorably on the factual aspects of it when it came out, the NYer still has it up, and my own reading on the topics has not led me to the conclusion that Lehrer packed his decline article with lies, to say the least. If you want to criticize use of that article, you\u2019ll need to do better.\n\n\n\n\n\n\n\n\n Ellie Kesselman says: \n\n\t\t\tMay 27, 2014 at 4:11 pm \nAnother online acquaintance: Gwern of Disqus comments, who has found (sometimes-amusing) fault with my comments on inane The Atlantic posts.\nSo. You like writing about Haskell, the Volokh Conspiracy, bitcoin and the effectiveness of terrorism. Goldman Sachs has not been extant for 300 years. I was saddened by your blithe dismissal of Cantor-Fitzgerald, post-9\/11. I worked, briefly, for Yamaichi Securities, on floor 98 of Tower 2, but several years after the 1993 WTC explosion.\nPlease consider dropping by for a visit on any of my Wikipedia talk pages. You have 7 years\u2019 seniority to me there. David Gerard is a decent person. He wrote your theme song, the mp3, so you must have some redemptive character traits :o) I am FeralOink, a commoner.\n\n\n\n\n\n\n\n\n\n\n\n\n Anonymous says: \n\n\t\t\tMay 4, 2014 at 7:06 pm \n@johann:\nit still has the effect of turning the Bayes factor of any particular meta-analysis into little more than a large constant by which to multiply your predetermined priors, leaving the same relative odds for your hypotheses at the end of the analysis as at the beginning.\nLast time I checked, multiplying a positive number by a large constant resulted in a larger number. You need a review of Bayes factors, multiplication, or both.\n\n\n\n\n\n\n\n\n Johann says: \n\n\t\t\tMay 5, 2014 at 4:31 am \nI don\u2019t think you understand me clearly: the main utility of Bayesian statistics is that we can update a prior to a posterior, by multiplication with a Bayes factor. 
On a calculational level, this is all that happens, and indeed Scott\u2019s approach doesn\u2019t break from this. However, when it comes to the actual inference, what should happen is more than this; ideally, we should allow our beliefs to be guided by what those numbers actually represent. Because Scott uses two priors, however, the relative odds of his two competing hypotheses (i.e. there is a true effect and there is not) remains the same before and after any particular statistical test of the evidence. Something about this is just not right, IMO.\nIn practice, I know that it is nonsense to believe what the numbers literally say, without skepticism. People should only take statistics at face-value, for areas that really intrigue them, after having satisfied themselves thoroughly that the experiments under analysis are not explainable on the basis of flaws. But after this has occurred, those numbers have real meaning!\n\n\n\n\n\n\n\n\n Anonymous says: \n\n\t\t\tMay 5, 2014 at 2:40 pm \n@Johann:\nI don\u2019t think you understand me clearly\nActually your new post confirms that I did understand you, and that you don\u2019t understand how Bayesian updating works.\n[W]e can update a prior to a posterior, by multiplication with a Bayes factor. On a calculational level, this is all that happens, and indeed Scott\u2019s approach doesn\u2019t break from this.\nIn fact the approach Scott proposed for updating his probabilities was dead wrong, because he made the contribution from the new evidence depend on the prior, which violates one tenet of Bayesian inference; and using his method of updating, his posterior probability for psi can never exceed 1\/2, which violates another tenet of Bayesian inference.\nBecause Scott uses two priors, however, the relative odds of his two competing hypotheses (i.e. there is a true effect and there is not) remains the same before and after any particular statistical test of the evidence. 
Something about this is just not right, IMO.\nBayesian inference always considers (at least) two hypotheses. Often, the second hypothesis is the complement of the first, but this need not be the case. It is perfectly fine to consider the prior odds of an observed effect being due to psi (H1) vs being due to experimental bias (H2). The prior odds is a ratio of two probabilities, P(H1)\/P(H2), and is hence a single non-negative number. This number is multiplied by the Bayes factor, which is the ratio of the probability of the data under the psi hypothesis to the probability of the data under the bias hypothesis, and is hence also a non-negative number. Unless this number is 1, or the prior odds 0 or infinity, then multiplying the prior odds by the Bayes factor will result in posterior odds that are different than the prior odds. Clearly, the odds will not remain the same, as you claim.\n\n\n\n\n\n\n\n\n Johann says: \n\n\t\t\tMay 6, 2014 at 12:50 am \nIt may well be that I am technically mistaken in my analysis\u2014I have not deeply studied Bayesian hypothesis testing\u2014but my impression is still that we\u2019re not actually disagreeing on much, although now I have a concern or two about your approach as well. I would be glad to be corrected on any mistake, BTW.\nFirstly, I will be as clear as possible about what I mean. I see Scott\u2019s approach as one that conducts two separate hypothesis tests, both correctly performed. My contention, though, is that it is fundamentally wrong to do *both*. It is clear to me that Scott is juggling four subtle hypotheses, when really he\u2019s only interested in two, to start with: psi exists vs. it does not, and invalidating flaws exist vs. they do not. 
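The odds-form update being debated here can be sketched in a few lines of Python (a minimal illustration using the priors of 1/1000 for psi and 1/10 for invalidating flaws and the Bayes factor of 300 discussed in this thread; it is not anyone's actual analysis code):

```python
# Odds form of Bayes' theorem: posterior odds = prior odds * Bayes factor.
# The numbers below are the ones under discussion in this thread; the
# script itself is only an illustration.
p_psi = 0.001        # P(H1): the observed results are due to psi
p_bias = 0.1         # P(H2): the observed results are due to experimental bias
bayes_factor = 300   # P(D | H1) / P(D | H2)

prior_odds = p_psi / p_bias                 # odds of H1 vs H2 before the data
posterior_odds = prior_odds * bayes_factor  # odds of H1 vs H2 after the data

print(f"prior odds {prior_odds:.3g}, posterior odds {posterior_odds:.3g}")
```

Note that it is the odds ratio, not either probability alone, that gets multiplied by the Bayes factor; the factor itself does not depend on the prior, which is the crux of the disagreement above.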
He sets his prior for flaws at 1\/10 (implying a prior of 9\/10 for no invalidating flaws) and his prior for psi at 1\/1000 (implying a prior of 999\/1000 for no psi), multiplies both of them by the Bayes factor of a study, let\u2019s say 300, and obtains two posterior distributions, let\u2019s say 30 to 1 for flaws vs. no flaws and 3 to 10 for psi vs. no psi.\nNow, since the existence of invalidating flaws effectively begets the same conclusion as the non-existence of psi, we can make the following comparison: At the start of the test, the ratio of Scott\u2019s two priors was (1\/10)\/(1\/1000) = 100\/1, implying that he favored the flaws hypothesis a hundred times more than the psi hypothesis. Now, the ratio for his posteriors is (30\/1)\/(3\/10) = 100\/1, so it is clear that nothing has changed and the very performance of the test was meaningless. If you believe there are likely to be flaws in a study, why update the numbers?\nIf you saw something different in Scott\u2019s methodology, feel free to explain.\n\n\n\n\n\n\n\n\n Anonymous says: \n\n\t\t\tMay 6, 2014 at 1:20 pm \n@Johann:\nIt is clear to me that Scott is juggling four subtle hypotheses, when really he\u2019s only interested in two, to start with: psi exists vs. it does not, and invalidating flaws exist vs. they do not.\nNo. There are only two hypotheses under consideration: H1: Results of experiments purporting to show psi are actually due to psi; H2: Such results are due to bias in the experiments.\nHe sets his prior for flaws at 1\/10 (implying a prior of 9\/10 for no invalidating flaws) and his prior for psi at 1\/1000 (implying a prior of 999\/1000 for no psi)\u2026\nThe two hypotheses quoted above that are complementary to H1 and H2 (ie, the material you have parenthesized) do not enter into the analysis.\n[He] multiplies both of them by the Bayes factor of a study, let\u2019s say 300, and obtains two posterior distributions, let\u2019s say 30 to 1 for flaws vs. no flaws and 3 to 10 for psi vs. 
no psi.\nNo. It works like this: We start with the prior odds of H1 vs H2: \nPrior odds = P(H1)\/P(H2) = .001\/.1 = .01 .\nWe multiply the prior odds by the Bayes factor for H1 vs H2. If D stands for our observed data (in this case, the results of the experiments in the meta-analysis), then\nBayes\u2019 factor = P(D|H1)\/P(D|H2) = 300.\nAnd, by the odds form of Bayes\u2019 theorem, we multiply the prior odds by the Bayes\u2019 factor to obtain the posterior odds of H1 vs H2:\nPosterior odds = P(H1|D)\/P(H2|D) = 300 \u00d7 .01 = 3 .\nSo, prior to observing the data, we believed that results purporting to show psi were 100 times as likely to be due to bias as to psi. After observing the data, we believe that such results are 3 times as likely to be due to psi as to bias. Our new observations have increased our belief that the results are due to psi relative to our belief that they are due to bias by a factor of 300.\nHopefully that makes sense to you.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Julio Siqueira says: \n\n\t\t\tMay 4, 2014 at 9:17 pm \nHi Scott,\nI must say that I am sort of a \u201cbeliever in psi.\u201d Also, I have read pretty much about it over the last ten years, and I have been following to a certain extent the psi-believers vs psi-skeptics debate (on the web, in papers, in books, like \u201cPsi Wars: Getting to Grips with the Paranormal,\u201d etc). Further, I have, on some occasions, taken sides rather fiercely on this issue. Yet, I must acknowledge that many critiques of psi works are of high value. And I did find your evaluation (article) above very interesting and worthy of respect. I would like to comment on a few points:\n\u201cNone of these five techniques even touch poor experimental technique \u2013 or confounding, or whatever you want to call it. 
If an experiment is confounded, if it produces a strong signal even when its experimental hypothesis is false, then using a larger sample size will just make that signal even stronger.\u201d \u2026 \u2026 \u201cReplicating it will just reproduce the confounded results again.\u201d\nI believe that, if only confounding were involved in the issue, there actually would be a drifting in the results, and not confirmation plus confirmation plus confirmation. It would take more than poor standards to veer the results in one direction: it would take some sort of fraud, whether conscious or unconscious.\nAnd the results in Wiseman and Schlitz\u2019s work were interesting. It is curious that the work was not heavily replicated (it might be interesting to find out if that was mostly because of the believers or because of the skeptics\u2026).\nI just would like to add, as gentlemanly as possible (and thus honour the high level of the debate on this page), that I do not share your view regarding Wiseman. But that is not the issue, anyway.\nI also think that Johann made very good contributions to the debate on this page. It is nice that he provided a fair amount of information about quantum-mechanics-based biological phenomena, which is an area of knowledge that has been increasing considerably (and robustly) over the last ten years.\nJulio Siqueira\nhttp:\/\/www.criticandokardec.com.br\/criticizingskepticism.htm\n\n\n\n\n\n\nPingback: The Bem Precognition Meta-Analysis Vs. Those Wacky Skeptics | The Weiler Psi\n\n\n\n\n Adam Safron says: \n\n\t\t\tMay 5, 2014 at 8:20 pm \nFor the Wiseman & Schlitz staring study (or \u201cthe stink-eye \u2018effect\u2019\u201d as I like to call it), although I haven\u2019t looked closely, I think I might have an explanation for how different results could be obtained with \u201cidentical\u201d methods. It wasn\u2019t a double-blind study. 
Although the person receiving the stink-eye was unaware of when they were being stared at, the person generating the stink was able to monitor the micro-expressions of the participants, and so be influenced in when they \u2018chose\u2019 to commence with stink-generation. Under this account, there is a causal relationship, but it goes in the reverse direction, and isn\u2019t mediated by anything spooky, except for the spookiness of the exquisite pattern-detecting abilities of brains.\n\n\n\n\n\n\n\n\n Kibber says: \n\n\t\t\tMay 8, 2014 at 3:21 pm \nAt least in the 1997 paper that I looked at, the experimenters used randomly generated sequences of stare and non-stare periods \u2013 i.e. the decision to stare or not was truly random and not at-will.\n\n\n\n\n\n\n\n\n\n\n hughw says: \n\n\t\t\tMay 5, 2014 at 8:24 pm \nThe analogy of the meta experiment to a control using a placebo is slightly wrong. In giving a subject the placebo, you are causing him to believe it might work. He did not enter the experiment believing it would work. Whereas, the parapsychologists all enter the experiment believing parapsychology is real.\n\n\n\n\n\n\n\n\n he who posts slowly says: \n\n\t\t\tMay 5, 2014 at 8:34 pm \nParapsychologists are not distinguished by the property of believing their hypothesis is correct.\n\n\n\n\n\n\n\n\n hughw says: \n\n\t\t\tMay 6, 2014 at 9:53 am \nIt\u2019s a premise of this essay. \u201c\u2026the study of psychic phenomena \u2013 which most reasonable people don\u2019t believe exists but which a community of practicing scientists does and publishes papers on all the time\u2026. I predict people who believe in parapsychology are more likely to conduct parapsychology experiments than skeptics\u201d\n\n\n\n\n\n\n\n\n Julio Siqueira says: \n\n\t\t\tMay 6, 2014 at 10:05 am \nHi Hughw,\nI think you are oversimplifying the issue. 
And, as to one of the \u201cpremises\u201d of this essay, bear in mind that Scott said *most* reasonable people do not believe in psi. He did not say that *all* reasonable people do not believe in psi. Further, it is said above (in your quote) that the *community* believes in psi. But it is not said that *all* the parapsychologists believe in psi.\nEven though we all do not believe in God, Angels, and Demons, the Devil is still in the details\u2026\nBest,\nJulio Siqueira\nhttp:\/\/www.criticandokardec.com.br\/criticizingskepticism.htm\n\n\n\n\n\n\n\n\n Anonymous says: \n\n\t\t\tMay 6, 2014 at 10:50 am \nPsychologists believe in their hypothesis, just like parapsychologists believe in theirs.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n nyan sandwich says: \n\n\t\t\tMay 6, 2014 at 4:53 pm \n>Imagine the global warming debate, but you couldn\u2019t appeal to scientific consensus or statistics because you didn\u2019t really understand the science or the statistics, and you just had to take some people who claimed to know what was going on at their verba.\nYou say this immediately after spending 3 sections proving that even in our world, statistics and consensus don\u2019t actually work, but then don\u2019t mention it in this context even to lampshade it.\nThere is no way this is accidental, because I know you read Jim\u2019s blog, and his influence on this post is quite apparent, and he makes that argument all the time.\nI\u2019ve noticed this habit you have before where you bust out some extremely interesting argument and then fail to even lampshade the obvious implication. It\u2019s not plausible that it\u2019s an accident, but it\u2019s also too weird for it to be deliberate. 
I\u2019m confused.\n\n\n\n\n\n\n\n\n Douglas Knight says: \n\n\t\t\tMay 6, 2014 at 6:54 pm \nI very much doubt Scott reads Jim\u2019s blog, outside of Jim\u2019s responses to Scott.\nWhat influence do you see of Jim on this post?\n\n\n\n\n\n\n\n\n\n\n gwern says: \n\n\t\t\tMay 6, 2014 at 6:22 pm \nSome random comments:\nusing statistics like \u201cfail-safe N\u201d to investigate the possibility of suppressed research.\nNitpick: I think \u2018fail-safe N\u2019 should be avoided whenever possible. It assumes that publication bias does not exist, and so simply doesn\u2019t do what one wants it to do. (See http:\/\/arxiv.org\/abs\/1010.2326 \u201cA brief history of the Fail Safe Number in Applied Research\u201d.)\nThis scientist \u2013 let\u2019s give his name, Robert Rosenthal \u2013 then investigated three hundred forty five different studies for evidence of the same phenomenon. He found effect sizes of anywhere from 0.15 to 1.7, depending on the type of experiment involved. Note that this could also be phrased as \u201cbetween twice as strong and twenty times as strong as Bem\u2019s psi effect\u201d. Mysteriously, animal learning experiments displayed the highest effect size, supporting the folk belief that animals are hypersensitive to subtle emotional cues.\nI agree the Rosenthal results are interesting, but I think the Pygmalion effect is more likely to be an example of violating the commandments & statistical malpractice (Rosenthal also gave us the \u2018fail-safe N\u2019\u2026) than subtle experimenter effects influencing the actual results; see Jussim & Harber 2005, \u201cTeacher Expectations and Self-Fulfilling Prophecies: Knowns and Unknowns, Resolved and Unresolved Controversies\u201d http:\/\/www.rci.rutgers.edu\/~jussim\/Teacher%20Expectations%20PSPR%202005.pdf\nBut first of all, I\u2019m pretty sure no one does double-blind studies with rats.\nNot really. They barely do randomized studies. 
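For reference, Rosenthal's fail-safe N criticized in this exchange can be computed with the standard textbook formula (a sketch; the z-scores below are invented for illustration and are not taken from any of the meta-analyses under discussion):

```python
# Rosenthal's fail-safe N: how many unpublished studies averaging z = 0
# would be needed to drag a Stouffer combined z below the one-tailed
# .05 significance threshold. Standard formula; example inputs invented.
def failsafe_n(z_scores, z_crit=1.645):
    k = len(z_scores)
    return (sum(z_scores) ** 2) / (z_crit ** 2) - k

# Five hypothetical studies, each with z = 2.0:
n = failsafe_n([2.0] * 5)  # (10^2)/2.706 - 5, a bit under 32
```

The criticism in the thread is visible in the formula itself: it implicitly assumes the file-drawer studies average z = 0, rather than the negative values one would expect if the true effect were zero.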
That\u2019s part of why animal studies suck so hard; see the list of studies in http:\/\/www.gwern.net\/DNB%20FAQ#fn97\n\n\n\n\n\n\n\n\n Anonymous says: \n\n\t\t\tMay 6, 2014 at 7:28 pm \n I think \u2018fail-safe N\u2019 should be avoided whenever possible. It assumes that publication bias does not exist, and so simply doesn\u2019t do what one wants it to do.\nRosenthal\u2019s fail-safe N should never be used, but not because it assumes that publication bias does not exist, but because it is based on the unrealistic assumption that the mean effect size in the unpublished studies is 0. On the contrary, if the true effect size is 0, then the mean effect size in the unpublished studies would be expected to be negative.\nIn the Bem et al meta-analysis, the authors calculated, in addition to Rosenthal\u2019s fail-safe N, Orwin\u2019s fail-safe N, which in principle can provide a more realistic estimate of the number of unpublished studies because it allows the investigator to set the assumed mean unpublished effect size to a more realistic, negative, value. But, bizarrely, Bem et al, set the value to .001, actually assuming that the unpublished studies support the psi hypothesis!\n\n\n\n\n\n\n\n\n gwern says: \n\n\t\t\tMay 6, 2014 at 8:26 pm \nRosenthal\u2019s fail-safe N should never be used, but not because it assumes that publication bias does not exist, but because it is based on the unrealistic assumption that the mean effect size in the unpublished studies is 0. 
On the contrary, if the true effect size is 0, then the mean effect size in the unpublished studies would be expected to be negative.\nYes, that\u2019s what I mean: publication bias is a concern because it\u2019s a bias, studies which are published are systematically different from the ones which are not, and the fail-safe N ignores this and instead is sort of like sampling-error.\n\n\n\n\n\n\n\n\n\n\n\n\n Julio Siqueira says: \n\n\t\t\tMay 6, 2014 at 7:42 pm \nHi Scott,\nRegarding your biological concerns (from someone who is highly concerned with biology\u2026):\nFirst, you say \u201cexotic physics.\u201d We, naturally, have to be careful when using the word \u201cexotic\u201d in this context. For example, the \u201cphysics\u201d that almost everybody would point out as being \u201cexotic\u201d, namely Quantum Mechanics, is actually far more ubiquitous even than the Almighty Omnipresent Holy Lord Himself! (if He exists). \nThen you add that \u201cthe amount of clearly visible wiring and genes and brain areas\u201d\u2026 \u2026\u201cis obvious to *everyone* \u201d\u2026 \u2026\u201cfrom anatomists (\u2026) to molecular biologists (\u2026) to JUST ANYBODY WHO LOOKS AT (\u2026) eyes.\u201d (emphasis added). And as a consequence, you expect similar *obvious* correlates. I think we have to remember that even almighty stuff isn\u2019t always (surprisingly enough) obviously \u201cperceptible.\u201d Especially when some sort of canceling out is at play. For example, we all know that electromagnetic force is pretty much almighty (far mightier than gravity; and gravity is not exactly a light weight\u2026 \u2013 pun intended). Yet, were it not for lightning bolts, even fairly advanced human societies might have passed it by completely, without an inkling of perception of it. 
See, we have *obvious* physical apparatus for dealing with air (lungs; the inhaling process; exhaling; etc), with water or somewhat solid matter (mouth, teeth, stomach, etc), with visible light (eyes), with sound (ears), etc. But unlike electric fish, we do not have *obvious* biological machinery to deal with \u201clightning stuff.\u201d Yet, not only is \u201celectricity\u201d itself immensely present in our biological machinery (i.e. even we, humans, take huge advantage of it), electromagnetism actually knits reality tight; and without it, matter would wander astray (even atoms would fall apart). What recent biology is telling us about biological uses of quantum phenomena is that it seems that we don\u2019t have any *obvious* correlate to it in terms of biological machinery. Yet, the correlates that we do have are not only ubiquitous but essential for life. Enzymes work taking advantage of quantum tunneling. Photosynthesis, if my mind serves me well, takes advantage of quantum entanglement. And when I say \u201ctake advantage\u201d, what I mean is: cannot do without! So, yes, it is speculative and even unlikely, but\u2026 it might just be that the correlates are there. It is just us that cannot see them yet.\nAnd next you add: \u201cThere\u2019s also the question of how we could spend so much evolutionary effort exploiting weird physics to evolve a faculty that doesn\u2019t even really work.\u201d, and you remind us that psi won\u2019t give us a head start greater than five percent over chance (as it seems; Ganzfeld), and then you mention its (apparent) absence in our ordinary lives, and the possible paradoxes of precognition (short and\/or long term). Well, what might be happening (if psi exists\u2026) is that we are not really looking at *how* psi works and at *what for* psi works. For example, electromagnetism does not exist for making lightning\u2026 Lightning bolts are just minor manifestations of the greater phenomenon of electromagnetism. 
They are almost \u201cspin offs.\u201d The basic and \u201ctrue\u201d function of electromagnetism in Nature is to knit reality together (Especially protons and electrons \u201cdirectly\u201d, as happens in atoms. And, oddly enough, as a consequence of this knitting, electromagnetism becomes pretty much\u2026 hidden!). So maybe the function of psi (i.e. the *main* function of it in Nature) is not to get humans synched in telepathy or forewarned in precognition or \u201cphysically\u201d unencumbered in telekinesis. These might actually be lesser deities, when instead we should rather look for the Big Guy. \ud83d\ude42 \nFinally you say something like: \u201cWeird Physics + Invisible Organ-less Non-adaptive-fitness-providing Mechanisms x Precognition x Telepathy x Telekinesis.\u201d Parapsychology does have problems\u2026 Admittedly, even according to psi researchers who believe in psi, two chief problems are, 1: possible theories for the paranormal and, 2: the role of the paranormal in Nature. We have been very clumsy in tackling these issues, IMHO. (Note that I am not a psi researcher. I am just considering these issues to be a concern of us all, no matter where we stand regarding it). So we are pretty much in the dark trying to make sense out of something that almost everybody agrees that\u2026 may not even exist! But since we (i.e. some of us) *are* trying to make sense out of it, one possibility (aside from the alternatives offered by Scott: poor studies, biased researchers, weird combinations of the two previous alternatives, etc) that comes to my mind when I look at the apparent anomaly in the Ganzfeld database (or when I look at Bem\u2019s results now) is that this anomaly is a very \u201cintentional\u201d phenomenon. 
What I mean is that: even if you can control electronic devices with the electricity from your neurons (something that we now can routinely accomplish), it takes a lot of practice and \u201cinformative feedback\u201d for you to learn how to master it. Yet, when it comes to shifting the odds in the desired direction in Ganzfeld sessions, we seem to be naturals in that\u2026 I think the only biological conclusion that can be drawn from it is that, if psi exists, we all use it routinely. But\u2026 maybe we do not use it for the things most people (including almost all parapsychology researchers) believe it is used for.\nAnyway, just thoughts\u2026\nBest Wishes,\nJulio Siqueira\n\n\n\n\n\n\n\n\n Nancy Lebovitz says: \n\n\t\t\tMay 6, 2014 at 10:59 pm \nSo far as the Invisible Organ is concerned, if we don\u2019t know how psi works, how likely would we be to recognize an organ for it?\n\n\n\n\n\n\n\n\n Julio Siqueira says: \n\n\t\t\tMay 7, 2014 at 9:37 am \nHi Nancy,\nThat is really an impediment. And just to give an example that makes things far far worse: we knew pretty well how enzymes work. We knew immensely well (some might say: astronomically well) how quantum mechanics works. Yet, no one, for decades, was able to unveil the fact that enzymes make use of quantum mechanics. Now, imagine this scenario under the conditions that you reminded us of (we not knowing how psi works). The expected result might as well be: decades or even centuries of searching in the dark.\n\n\n\n\n\n\n\n\n\n\nPingback: Lightning Round \u2013 2014\/05\/07 | Free Northerner\n\n\n\n\n MoodyDoc says: \n\n\t\t\tMay 8, 2014 at 5:54 pm \nThe first thing that came to my mind when I read about the weird controversy of the results of Wiseman & Schlitz is that one scientist has psi-powers and the other not, while the subjects are on average equally susceptible. Or it is really a kind of placebo that works telepathically. 
Like, when the one scientist stares at the subject while thinking \u201cyou can feel that I\u2019m staring at you now!\u201d this is subconsciously sensed. But if the other looks at the screen he acts more like an observer than an influencer. Still, observing the experiment from the outside by scientific means one would not see the difference in the \u201cinput\u201d but only the output. However, if so, then a live brain scan of the experimenters would surely be interesting to look at. And again another dataset to analyse more or less objectively\u2026\n\n\n\n\n\n\nPingback: Nothing About Potatoes | Things I found on the internet. Cannot guarantee 100% potato-free.\n\n\nPingback: What we\u2019re reading: Dealing with missing sequence data, SNP2GO, and the challenge of replication in bad results | The Molecular Ecologist\n\n\n\n\n Norm DeLisle says: \n\n\t\t\tMay 9, 2014 at 1:18 pm \nVery Nice! A couple of other observations. My father was a process development engineer at Dow Chemical. His design work in this area was to take a research result and test its commercial viability in a sizable plant process, a kind of enhanced replication. Researchers thought this was grunt work (the persistent attitude toward replication), and that all that was required was to make what they did in the laboratory bigger. But when you increase the size of a process by 5-7 orders of magnitude, a great many things become different. The lesson is that it isn\u2019t always clear which changes in experimental conditions are important to the outcome, and the opinion of the researcher is a poor guide.\nThe second observation is from an article I read from the late 70-80s. It was a test of the hypothesis that niacin in large doses reduced the symptoms of schizophrenia. The design was double blind and the people who evaluated the improvement didn\u2019t know who was receiving the niacin. However, the waiting room for the people being evaluated would hold 4-5 people at a time. 
Naturally they talked, and because of the niacin flush, they quickly figured out who was on placebo. It was, incidentally, easy to figure out what the drug was, too. The conclusion was that one-third of the people on placebo broke the blind and went out and bought niacin. No one told the researchers because there were incentives for not telling the blind had been broken. Experimental conditions include everything, not just what the researcher thinks is critical to successful publication.\nAlso, there is some evidence that placebos work even when people know they are placebos.\n\n\n\n\n\n\n\n\n Solo Atkinson says: \n\n\t\t\tJanuary 13, 2016 at 10:57 am \nA well-known example of this phenomenon is the development of the Haber-Bosch nitrogen fixation process. Haber established the concept on a tabletop and Bosch overcame obstacles to do it large scale. They rightly share the credit, but it\u2019s easy to slip focus back to the \u201coriginal genius\u201d perspective. Fascinating stories.\n\n\n\n\n\n\n\n\nPingback: Das Versagen der Religionen - Seite 7\n\n\n\n\n Put Down Artist says: \n\n\t\t\tMay 22, 2014 at 5:32 am \n\u201cThat doesn\u2019t tell you much until you take some other researchers who are studying a phenomenon you know doesn\u2019t exist \u2013 but which they themselves believe in \u2013 and see how many of them get positive findings.\u201d\nThis statement made my jaw drop. The illogic is stunning. You have made an assumption, assumed this assumption to be true, and are then deriding the people who are researching the question with an open mind.\nI\u2019m pretty sure this doesn\u2019t need to be explained to you, but you don\u2019t, in fact, know that these phenomena don\u2019t exist until you have studied them scientifically. No ifs, ands or buts. Possible bias by researchers has to be taken into account when considering their results, and when they themselves formulate their experiments \u2013 in any scientific research. 
\nHowever, it is ridiculous to assert that because you personally don\u2019t believe in something anyone trying to determine whether there is a way to scientifically measure and validate alleged phenomena is automatically wrong.\nThis is a very worrying thought pattern I see from \u2018skeptics\u2019 all the time. It is not skepticism at all, it is a kneejerk response to things that threaten their personal world view and belief system. By this definition 99% of the people on the planet are \u2018skeptics\u2019, and the only genuine skeptics are those prepared to challenge their own world views.\nI\u2019m concerned that someone could actually take that statement seriously, so, allow me to assert that to actually prove that a group of researchers\u2019 science is wrong, you actually have to go into their methodology and conclusions and find errors. To take issue with their conclusions and then assume that because you don\u2019t like them their methodology must be flawed is the least scientific approach imaginable.\nSheesh.\n\n\n\n\n\n\n\n\n Caroline Watt says: \n\n\t\t\tMay 23, 2014 at 4:34 pm \nLove your post, and v pleased that you approve of our KPU trial registry \ud83d\ude42\nJust FYI, the other parapsych pre-registry that you refer to (the one Richard Wiseman and I set up for Bem replications) dates back to November 2010 and is no longer active.\n\n\n\n\n\n\nPingback: Science smorgasbord 2 | Deadline island\n\n\n\n\n Phil Goetz says: \n\n\t\t\tJuly 3, 2014 at 12:14 pm \nThe results? Schlitz\u2019s trials found strong evidence of psychic powers, Wiseman\u2019s trials found no evidence whatsoever.\nTake a second to reflect on how this makes no sense.\nIt makes perfect sense. Schlitz has psychic powers. Wiseman doesn\u2019t. 
They need to redo the experiment, keeping Schlitz as the starer in both groups.\n\n\n\n\n\n\nPingback: other mind meditation | Meditation Stuff (@meditationstuff)\n\n\n\n\n Stephanie says: \n\n\t\t\tNovember 8, 2015 at 3:42 pm \nWow, what an amazing post. I love your blog, it\u2019s awesome. I just had my heart broken a little though. This is why I\u2019m a physicist. Still hard as hell and not as clear as many people think, but easier than fields involving biological organisms to get some level of confidence in your results. As long as you actually care about reality. Some physicists are just mathematicians, ie string theorists.\nDid you ever read The Golem? We read it in a philo of science course I took in the education department. (http:\/\/www.amazon.com\/The-Golem-Should-Science-Classics\/dp\/1107604656). It focusses a bit too much on the uncertainty side, but I think too many people have gone from faith in an invisible sky god to faith in \u201cScience\u201d. I have a post on my blog with an essay I wrote for fun in grad school comparing incentives in science vs incentives in free market capitalism. I always found faith in the invisible hand of free market capitalism to cure all human ills to be a bit too much like faith in the invisible sky god, and faith in \u201cscience\u201d is right up there. I have faith that, in the long run, science will probably get closer to reflecting how the universe actually works, but not in any particular current paradigm. Skepticism is a virtue in science. Except in climate change, which I consider more like a religion, which I also have a post on.\n\n\n\n\n\n\n\n\n Kevin Keough says: \n\n\t\t\tJanuary 2, 2016 at 10:48 am \nCheck out Rupert Sheldrake re much of this\n\n\n\n\n\n\n\n\n Solo Atkinson says: \n\n\t\t\tJanuary 13, 2016 at 10:45 am \n\u201c\u2026both authors suggest maybe their co-author hacked into the computer and altered the results.\u201d\nActually, it was more collegial than that. 
Together, they suggest that one of them may have hacked the results.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n"},{"chosen":"\n\n\n\nMAT337. Introduction to Real Analysis\n\n\n\n\n\n\n\n\n\nMAT337. 
Introduction to Real Analysis\nFall 2018\n\nWeb page: http:\/\/www.math.toronto.edu\/ilia\/MAT337.2018\/. \nClass Location & Time: Tue, 1:00PM - 2:00 PM; Thu, 11:00 AM - 1:00 PM; NE2190\nInstructor: Ilia Binder (ilia@math.toronto.edu), DH3026.\r\n \nOffice Hours:\u00a0Tue 2:00 PM - 3:00 PM and Thu 10:00 AM-11:00 AM\nTeaching Assistant: Belal Abuelnasr, (belal.abuelnasr@mail.utoronto.ca ).\nOffice Hours:\u00a0 Fri, 10-11 AM, DH3050.\n\nTextbooks: Understanding Analysis, Second Edition, by Stephen Abbott.\r\n This book is provided as a free electronic resource to all UofT students through the library website.\r\n Click on the following link to access the textbook (you may be required to enter your UTORid and password): http:\/\/myaccess.library.utoronto.ca\/login?url=http:\/\/books.scholarsportal.info\/viewdoc.html?id=\/ebooks\/ebooks3\/springer\/2015-07-09\/1\/9781493927128 \n\nPrerequisites:\u00a0 MAT102H5, MAT224H5\/MAT240H5, MAT212H5\/MAT244H5, MAT232H5\/MAT233H5\/MAT257Y5\nExclusions:\u00a0 MAT337H1, MAT357H1,MATB43H3, MATC37H3\r\n \nPrerequisites will be checked, and students not meeting them will be removed from the course by the end of the second week of classes. If a student believes that s\/he does have the necessary background material, and is able to prove it (e.g., has a transfer credit from a different university), then s\/he should submit a 'Prerequisite\/Corequisite Waiver Request Form'.\n\nTopics. \nThe course is the rigorous introduction to Real Analysis. We start with the careful discussion of The Axiom of Completeness and proceed to the study of the basic concepts of limits, continuity, Riemann integrability, and differentiability. \n\nTopics covered in class.\nSeptember 6: An introduction. Real numbers and the Axiom of Completeness. Section 1.3.\nSeptember 11: The Axiom of Completeness. Nested Interval property. Sections 1.3, 1.4.\nSeptember 13: Nested Interval property. Archimedean property. 
Definitions of the limit of a sequence (including an alternative definition). Limits and algebraic operations. Sections 1.4, 2.2, 2.3.\nSeptember 18: Limits and algebraic operations. Limits and order. Squeezed sequence lemma. Section 2.3.\nSeptember 20: The Monotone Convergence Theorem. Iterated sequences. Positive series. Liminf and limsup. Section 2.4.\nSeptember 25: Liminf and limsup. Subsequences and their limits. Bolzano-Weierstrass Theorem. Section 2.5.\nSeptember 27: Bolzano-Weierstrass Theorem. Cauchy Criterion. Series. Sections 2.5, 2.6, 2.7.\nOctober 2: Open and closed sets. Interior, exterior, and border points. Section 3.2.\nOctober 4: Interior, exterior, and border points. Compact sets. Heine-Borel Theorem. Sections 3.2, 3.3.\nOctober 16: Heine-Borel Theorem. Baire's Theorem. Sections 3.3, 3.5.\nOctober 18: Functional limits. Sequential criterion. Continuity. Sections 4.2, 4.3.\nOctober 23: Continuity and compact sets. Uniform continuity. Section 4.4.\nOctober 25: Uniform continuity and compact sets. The Intermediate Value Theorem. Differentiability (including an alternative definition). Darboux's Theorem. Sections 4.4, 4.5, 5.2.\nOctober 30: Rolle's Theorem. The Mean Value Theorem. L'Hospital's rule. Pointwise and Uniform convergence. Sections 5.3, 6.2.\nNovember 1: Uniform convergence. Continuity of uniform limit. Uniform convergence and differentiation. Sections 6.2, 6.3.\nNovember 6: Midterm review.\nNovember 8: Midterm.\nNovember 13: Uniform convergence and differentiation. Uniform convergence of series. Sections 6.3, 6.4.\nNovember 15: Power series. Section 6.5.\nNovember 20: Riemann Integration. Section 7.2.\nNovember 22: Riemann Integration: criterion of integrability, non-integrable functions, integrability of continuous functions, additivity and algebraic properties of the Riemann integral. Sections 7.2, 7.3, 7.4.\nNovember 27: Algebraic properties of the Riemann Integral. Integrability of Uniform limit. 
Section 7.4.\nNovember 29: The Fundamental Theorem of Calculus. Integration by parts. Riemann integrability criterion. Sections 7.5, 8.1.\nDecember 4: Final review. \n\n\nHomework. The assignments should be submitted through Quercus.\u00a0To submit, you can scan or take a photo of your work (or write your work electronically).\u00a0Please make sure that the images are clear and easy to read\u00a0before you submit them.\n\nAssignment #1, due September 13: The assignment is based on the material you have learned in MAT102.\r\n\r\nPlease do the following exercises from the textbook: 1.2.3, 1.2.4, 1.2.5, 1.2.7, 1.2.8, 1.2.9, 1.2.10, 1.2.11, 1.2.12, 1.2.13.\r\n\n\nAssignment #2, due September 20.\n\n\nAssignment #3, due October 4.\n\n\nAssignment #4, due October 18.\n\n\nAssignment #5, due October 25.\n\n\nAssignment #6, due November 1.\n\n\nAssignment #7, due November 8.\n\n\nAssignment #8, due November 15.\n\n\nAssignment #9, due November 22.\n\n\nAssignment #10, due November 29.\n\nTutorials and presentations.\u00a0Each student must be registered in one of the tutorials (on ROSI).\u00a0Attendance at tutorials is mandatory. Based on the homework assignments, students will be selected to present some of the homework problems at the tutorials. An unexcused absence at the tutorial on the day you are selected for the presentation will result in zero credit for the presentation.\u00a0\nTutorials will begin on\u00a0Friday of the second week of classes.\u00a0\nQuiz. There will be a one-hour in-tutorial quiz on Friday, September 28, or Monday, October 1, depending on your tutorial section. No aids are allowed for this quiz. The quiz will cover the material of sections 1.3, 1.4, 2.2, 2.3, 2.4.\r\nRecommended preparation (do not hand in): problems 1.3.2, 1.3.3, 1.3.6, 1.3.8, 1.4.8, 2.2.2, 2.2.4, 2.3.2, 2.3.7, 2.4.1, 2.4.6, 2.4.8.\r\n\nMidterm Test. There will be a two-hour in-class midterm test on Thursday, November 8. No aids are allowed for this test. 
The test will cover the material of sections 1.3, 1.4, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 3.2, 3.3, 3.5, 4.2, 4.3, 4.4, 4.5, 5.2, 5.3.\r\n Recommended preparation: assignment #7, and (do not hand in): all the quiz review problems, 2.5.9, 2.6.4, 2.7.7, 3.2.8, 3.3.8, 3.5.9, 4.2.4, 4.3.6, 4.4.11, 4.5.6, 5.2.10, 5.3.4. \nFinal exam. The final exam will be held on Wednesday, December 12, 5-8pm, at KN137. No aids are allowed for this exam. \r\nThe exam will cover the material of sections 1.3, 1.4, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 3.2, 3.3, 3.5, 4.2, 4.3, 4.4, 4.5, 5.2, 5.3, 6.2, 6.3, 6.4, 6.5, 6.6 (up to Theorem 6.6.2), 7.2, 7.3, 7.4, 7.5, 8.1 (up to Theorem 8.1.2).\nYou will be required to state and prove in detail one of the following Theorems from the textbook: 2.4.2, 2.5.5, 3.3.4, 4.2.3, 4.3.9, 4.4.1, 4.4.2, 4.4.7, 5.2.7, 5.3.2, 6.2.6, 6.4.4, 7.2.8, 7.5.1.\nRecommended preparation (do not hand in): all the quiz and midterm review problems, 6.2.3, 6.2.13, 6.2.14, 6.2.15, 6.3.1, 6.3.6, 6.4.2, 6.4.4, 6.4.10, 6.5.2, 6.5.8, 7.2.3, 7.3.2, 7.3.5, 7.4.3, 7.4.10, 7.5.2, 7.5.4.\nAdditional office hours: Tuesday, December 11, 12 - 1. Location: DH3000.\n\n Grading. Grades will be based on the best eight out of ten homework assignments (10%), an in-tutorial quiz (10%), an in-lecture midterm test (25%), tutorial presentations (15%), attendance of tutorials and active participation in the discussions (5%), and the final exam (35%). I will also occasionally assign bonus problems.\nLate work. No late work will be accepted. Special consideration for late assignments or missed exams must be submitted via e-mail within a week of the original due date. There will be no make-up quiz, midterm test, or final. Justifiable absences must be declared on ROSI; undocumented absences will result in zero credit.\nE-mail policy.\r\nE-mails must originate from a utoronto.ca address and contain the course code MAT337 in the subject line. 
Please include your full name and student number in your e-mail.\n\nAcademic Integrity.\r\n Honesty and fairness are fundamental to the University of Toronto\u2019s mission. Plagiarism is a form of academic fraud and is treated\r\n very seriously. The work that you submit must be your own and cannot contain anyone else\u2019s work or ideas without proper\r\n attribution. You are expected to read the handout How not to plagiarize (http:\/\/www.writing.utoronto.ca\/advice\/using-sources\/how-not-to-plagiarize) and to be familiar with the Code of behaviour on academic matters, which is linked from the UTM calendar under\r\n the link Codes and policies.\n\nMaintained by: Ilia Binder (ilia@math.toronto.edu)\n\n\n","rejected":"\n\n\n\n\n\n02-251: Great Ideas in Computational Biology\n\n\n\n\n\n\n\n02-251 Great Ideas in Computational Biology\nSpring 2019\n\n\n\n\n\n\nHome\nSchedule\nPiazza\nGradescope\n\n\n\n\n\n\n\n\n\nDate\nLecture Topic\nInstructor\n\n\n1\/15\nIntroduction\nGenome assembly (Part 1)\nKingsford, Compeau\n\n1\/17\nGenome assembly (Complete)\nCompeau\n\n\n1\/18\nRecitation on Genome Assembly\nTAs\n\n\n1\/22\nSequence alignment (Part 1)\nAlignment Demo\nKingsford\n\n1\/24\nSequence alignment (Part 2)\nKingsford\n\n\n1\/25\nRecitation on Dynamic Programming\nTAs\n\n\n1\/29\nRead mapping (Part 1)\nSuffix Tree Math\nCompeau\n\n1\/31\nNo Class (cancelled due to cold)\nPolar Vortex\n\n\n2\/1\nRecitation on Read Mapping (Part 1)\nTAs\n\n\n2\/5\nRead mapping (Part 2)\nCompeau\n\n\n2\/7\nMultiple sequence alignment\nKingsford\n\n\n2\/8\nRecitation on Read Mapping (Part 2)\nTAs\n\n\n2\/12\nMini-lecture day 1\n\n2\/14\nMidterm 1\n\n\n\n2\/19\nHidden Markov models\nKingsford\n\n\n2\/21\nMetagenomics\nCompeau\n\n\n2\/22\nRecitation on HMMs\nTAs\n\n\n2\/26\nPhylogenetics (in progress)\nCompeau\n\n\n2\/28\nPhylogenetics (complete) and some necessary mathematics\nCompeau\n\n\n3\/1\nMCMC for Phylogenetics\nTAs\n\n\n3\/5\nMotif Finding, Gibbs Sampling, EM\nKingsford\n\n\n3\/7\nRNA 
sequencing, gene expression\nKingsford\n\n\nS P R I N G \u00a0 B R E A K\n\n\n3\/19\nNetwork biology: intro & function prediction\nKingsford\n\n3\/21\nNetwork biology: evolution of modularity\nKingsford\n\n\n3\/26\nPopulation Genetics\nCompeau\n\n\n3\/28\nMini-lecture day 2\n\n\n\n3\/29\nRecitation on Soft Clustering\nTAs\n\n\n4\/2\nMidterm 2\n\n\n\nNote: All topics after Midterm 2 are tentative and subject to change.\n\n\n4\/4\nEvolutionary Game TheoryAnimated GIF\nKingsford\n\n\n4\/9\nNeural Modeling\nKingsford\n\n\nS P R I N G \u00a0 C A R N I V A L\n\n\n4\/16\nUniversality of Neural Networks and Deep Learning (incomplete)\nCompeau\n\n\n4\/18\nProtein Structure Prediction\nKingsford\n\n\n4\/19\nRecitation on Neural Nets\nTAs\n\n\n4\/23\nFinishing up Neural Nets\nCompeau\n\n4\/25\nThree Mini-\"Great Ideas\": Turing Patterns, Fragile Genomes, and DNA Computing\nCompeau\n\n\n4\/26\nDrop-in Project Office Hours During Recitation\nTAs\n\n\n4\/30\nProject Presentations\nStudents\n\n5\/2\nProject Presentations\nStudents\n\n\n\n\n\n\n\n\n\n"},{"chosen":"\n\n\n\nStellar : Message of the Day\n\n\n\n\n\n\n\n\nSearch: \n\n\n\n\n\n\n\n\n\n\nMIT Course Management System\n\n\n\n\n\nHome\nCourse Guide\n@Stellar\nUpdates\n\n\n\n\n\nMessage of the day\nContinue\n\n\n\n\n\n\n\n\nStellar CMS\nInformation Services & Technology\n\n\nW92 . 304 Vassar Street\nCambridge . MA . 02139\n\n\n\nGet Help\n\n\nFAQ\n\n\nUser Guide\n\n\nContact the Help Desk\nRequest a Stellar site\n\n\n\nResources\n\nSupported Browsers\n\nCertificates\n\nLibrary E-Reserves\n\nWebSIS\n\n\n\nUpdates\n\n\nWhat's new?\n\nSubscribe\n\n\n\n\n\n\n","rejected":"\n\n\n\n\n15-251 Fall 2018\n\n\n\n\n\n\n15-251 Great Ideas in Theoretical Computer Science\n\nHome\nDiderot\nCourse Info\nSchedule\nCalendar\nWeekly Planner\nNotes\nStaff\n\n\n\n Course Information\n\nPrerequisites\n\n The formal prerequisites for the course are (15-122 or 15-150) and (21-127 or 21-128 or 15-151). 
In particular, we expect the students to have taken an introductory computer science course that goes beyond basic computer programming and covers algorithmic thinking. On the mathematics side, we expect the students to have experience reasoning abstractly and know how to write formal proofs.\n \nLearning Objectives\n\n Broadly speaking, the course has several goals. First, it provides a rigorous\/formal introduction to the foundations of computer science, which is the science that studies computation in all its generality. An important component of this is improving your analytic and abstract thinking skills since nature's language is mathematics. Second, the course intends to prepare you to be innovators in computer science by presenting some of the great ideas that people in the past have contributed to science and humanity. We hope that you will learn from their examples. Third, the course gives you opportunities to improve your social skills by emphasizing cooperation, clarity of thought, and clarity in the expression of thought. 
\n \n\n More specifically, some of the main learning objectives are the following.\n \n Define mathematically the notions of computation, computational problem, and algorithm.\n Express, analyze and compare the computability and computational complexity of problems.\n Use mathematical tools from set theory, combinatorics, graph theory, probability theory, and number theory in the study of computability, computational complexity, and some of the real-world applications of computational concepts.\n State and explain the important and well-known open problems in the theory of computation.\n Write clearly presented proofs that meet rigorous standards of correctness and conventional guidelines on style.\n Identify and critique proofs that are logically flawed and\/or do not meet the expected standards of clarity.\n Cooperate with other people in order to solve challenging and rigorous problems related to the study of computer science.\n \n\n\n Note that even though all of the topics we discuss in the course have real-world applications, often we will not be explicitly discussing the applications. This is because initially it is better to separate concerns regarding real-world applications from the exploration of fundamental truths and knowledge that shape how we view and understand the world. The quest for truth and understanding, wherever it takes us, eventually does produce applications, some that we hoped to achieve, and some that were beyond our wildest dreams. The focus of the course is on that quest for truth and understanding, which is arguably more important than specific applications. \n \nExternal Resources\n\n There is no required textbook for the course. The material is fairly diverse, and no standard text contains it. Lecture notes will be provided. 
Furthermore, the lectures will be recorded and the links to the video recordings as well as the slide handouts will be provided on the course website.\n \n\n If you want to look at books which contain parts of the course material, we recommend the following:\n \n\n\n Introduction to the Theory of Computation by Michael Sipser,\n The Nature of Computation by Cristopher Moore and Stephan Mertens,\n Introduction to Theoretical Computer Science by Boaz Barak,\n Quantum Computing Since Democritus by Scott Aaronson.\n \n\nMentoring System\n\n You will all be assigned a mentor TA at the beginning of the semester. Your mentor will keep track of your progress, grade your homeworks, and help you do well in the course. Don't hesitate to contact your mentor TA about anything related to the course. For instance, you can set up meetings with your mentor TA to review course material, go over homeworks or exams, or chat about studying strategies.\n \n\n Throughout the semester, feel free to reach out to anyone on the course staff about anything. We are here to help you any way we can!\n \nGrading\n\n Your grade will depend on the following factors.\n \n\nHomeworks. There are 10 homework assignments.\n \n\nMidterm Exams. There are 3 Midterm exams (Sep 18 from 9:00am to 10:20am, Oct 10 from 6:30pm to 9:30pm, and Nov 7 from 6:30pm to 9:30pm). Please mark your calendars. The first midterm is an early midterm and has half the weight of the other exams. The purpose of this exam is to give you early feedback and give you a chance to adjust your approach to the course if needed.\n \n\nFinal Exam. There is a Final exam at the end of the semester during the finals week.\n \n\nClass Participation. 
This is based on attendance in lectures and recitations, as well as completion of weekly online quizzes.\n \n\n Your numerical grade will be calculated according to the following table.\n \n\n\n\nCourse Component\nWeight\n\n\nHomework\n25%\n\n\nMidterm 1\n10%\n\n\nMidterm 2\n20%\n\n\nMidterm 3\n20%\n\n\nFinal\n20%\n\n\nParticipation\n5%\n\n\n\n\n At mid-semester, letter grade cut-offs will be announced.\n \n\nHomework System\n\n Homework is an extremely important component of the course and is the main tool we use to teach you valuable skills, reinforce key concepts, and help you learn the material.\n \n\n There are some general rules that apply to all the questions in the homework:\n \n You cannot share written material with anyone.\n You cannot discuss solutions to the problems on any discussion forum.\n You cannot solicit answers to the homework questions, i.e., you cannot ask anyone to provide you the solution to a problem, before the homework writing session.\n Searching the internet for general concepts is allowed. Googling for specific keywords that happen to appear in one of the homework questions is prohibited.\n You must always cite your sources including the people you have worked with.\n For the collaborative portions of the homework, you must think about a problem for 15 minutes before you start discussing it with someone else.\n If you work on a publicly visible whiteboard\/blackboard, you must erase all contents when you are done.\n \n If you have any doubts about whether something is within the rules or not, do not hesitate to contact the course staff.\n \n\nTypes of questions: There will be 4 types of questions in the homework and each question will be clearly labeled with its type.\n \n\n SOLO - You must work on these questions by yourself. In addition to the rules mentioned above, you are not allowed to discuss these questions with anyone except for the course staff.\n \n\n GROUP - These questions must be solved in groups of 3 or 4. 
Working on these questions just by yourself is not allowed! You must clearly indicate your group members. You can change your group from week to week, but you can have at most one group per week. Other than your group members, you may discuss these questions with the course staff.\n \n\n OPEN COLLABORATION - You can discuss these questions with anyone you like from class (i.e., other students currently taking the course and the course staff). Other than the general rules stated above, there are no additional rules for this type of question.\n \n\n PROGRAMMING - Not all homework assignments will contain a programming question, but some might. The SOLO rules apply to these types of questions. You must submit your programs to Autolab by 6:30pm the day the homework is due.\n \n\n A homework assignment for a particular week will contain SOLO and\/or PROGRAMMING type questions covering the current week's material, plus, GROUP and\/or OPEN type questions covering the previous week's material. This has a couple of benefits. First, you'll solve problems on a topic for two weeks rather than one, which helps with retention. Second, after solving the easier solo questions, you will be much better prepared to solve the harder collaborative questions.\n \n\nHomework writing sessions: You will not hand in written up solutions to every question of the homework. Every Wednesday from 6:30pm to 7:50pm at DH 2315, we will have a homework writing session. We will randomly pick a subset of the homework questions (usually 3 questions are picked), and you will be required to write the solutions to those problems individually during this proctored setting. We expect that you will have already practiced writing down the solution to every question in the homework prior to Wednesday night. 
Therefore these homework writing sessions should be relatively straightforward and stress-free.\n \n\nHomework grading: After the homework writing session, you will get back your graded homework the following recitation. The rubrics that we used to grade each question will be posted on Diderot. You will know who graded which question. Whenever there is a point deduction on your homework, an explanation should be given, but if you do not understand why you lost points, please don't hesitate to contact us so we can clarify things for you.\n \n\n Grading proofs is a complicated process. We try our best to be as fair and as consistent as possible. However, mistakes will happen from time to time. Therefore we have a system in place that makes grading a two-step process. The first step is that we read your solutions and assign an initial grade based on the rubric. The second step is that you carefully review the rubric and your solution, and if you have any disagreement with the number of points you got, you email the TA who graded that question. If there was a mistake, we'll correct it. If you cannot resolve the situation with the TA who graded the question, email one of the head TAs to get a second opinion. If you are still not satisfied, email one of the instructors.\n \n\n Note that your grade can never go down as a result of a regrade request; it can only go up.\n \n\n The deadline for homework regrade request is Wednesday 6:30pm (one week after the corresponding homework writing session). Email your request to the TA who graded the question. \n \n\nHomework resubmission: It is very important that you learn from your mistakes and correct them. For this reason, after you get your graded homework back, you will be allowed to resubmit solutions that you have gotten wrong. You may (and should) go to the homework solution sessions (see below for details) or ask about the solution during office hours. 
If you turn in a completely correct and well-written solution, you will receive back 50% of the lost credit for that question. If, on the other hand, your solution is not near-perfect, then unfortunately you will not receive any points back.\n \n\n The deadline for homework solution resubmission is Sunday 6:30pm (11 days after the corresponding homework writing session). Email your resubmission, along with your original solution, to the TA who graded that question.\n \n\nProof-writing guidelines: The quality of your write-up and presentation matters a lot, so you should make sure your solutions are very clearly explained. If you are not sure of something, or you think there is a gap in your argument, clearly indicate these in your write-up (you will earn more points doing so rather than writing a wrong argument!!). Do not try to sell a wrong or incomplete proof! If you leave a question completely blank, you will earn 20% of the credit for that question.\n \n\n To help you write correct and clear proofs, we have prepared a document with a list of guideline points. The guideline points will appear as a checklist in each homework. For each proof you write, you will have to tick the checklist items to acknowledge that you are following the guidelines. \n \n\nClick here to access the document.\n\n\nHomework solution sessions: Unfortunately, we will not be publishing written solutions to the homework problems. The main reason is that any homework solution we post kills the question for future semesters of the course (and any other course that might be using a similar question). Most questions we ask are pedagogically very valuable, and coming up with such questions is very hard. So we don't want to kill those questions by publishing solutions. That being said, we don't keep the solutions a secret either. We hold homework solution sessions twice a week and go over the solutions (on the blackboard) to the problems that appeared in the writing session. 
We are also always happy to go through the solutions to any problem with you during office hours. \n \n\n Note that during the solution sessions, we will not write the full proof on the blackboard. We expect you to fill in the details yourself.\n \n\n The times and locations of the homework solution sessions will be announced at the beginning of the semester.\n \nRecitation System\n\n The recitation sections that you have signed up for on SIO will only be used for the first week of the course. Starting week 2, we will transition to a different system.\n \n\n One of the main advantages of recitations over lectures is that the sections are much smaller in size. In order to improve the student-TA ratio and give you more flexibility, we will be asking you for the times you are available on Fridays and Saturdays. Based on that information, we will assign you to a recitation slot, and a typical recitation section will have about 12 students. \n \n\n In addition to the above change, we will offer 3 different spiciness levels for recitations. You will select for yourself which level is appropriate for you. During the semester, if you feel like you would like to switch to another level, let us know, and we'll arrange the switch.\n \n\n\n\n\n\n\nBell pepper\n Not spicy. We will go over the definitions to make sure everyone understands them fully. Then we will solve the problems together (as many as the time allows).\n \n\n\n\n\n\n\nJalape\u00f1o pepper\n Normal spicy. After a quick review of definitions, we'll solve the problems together. These sections will have a faster pace.\n \n\n\n\n\n\n\nHabanero pepper\n Hot! We'll assume you are comfortable with everything covered in lecture and notes, so we'll directly dive into the problems. These sections will have the fastest pace.\n \n\n\n\n We will take attendance in recitation. \n \nDiderot\n\n We will use Diderot for several purposes, as listed below. 
Every student is required to sign up for the course's Diderot page!\n \n\n Making announcements. Important announcements related to the course will be made on Diderot. You must check Diderot and\/or your email on a daily basis to receive the announcements in a timely manner.\n Asking and answering questions. If you have a question about the course that can be easily answered electronically, please use Diderot.\n Finding group members. You can use Diderot's \"Social\" posts to search and find group members to collaborate with on the homework.\n Conducting in-class polls. We will be asking poll questions during lectures related to the topics being discussed. We will give a link to the Google form containing the poll question through a Diderot post. We expect everyone to answer all the poll questions. We will not keep track of whether your answers are correct; however, we will use the polls to keep track of attendance. If a lecture contains multiple polls, a random one will be chosen to take attendance. If you experience a technical difficulty that prevents you from participating in a poll, then see the instructor right after lecture so your presence can be noted.\n Publishing parts of course content. We plan to publish course notes, homeworks and recitations on Diderot.\n \nAsking Questions\n\n Even though we are always ready to help and provide support any way we can, there is a fine balance that we have to respect. Ultimately, we would like you to develop the necessary skills to be self-sufficient problem solvers. You will have many questions throughout the semester. Reflecting on your questions to try to figure out the answers on your own is extremely valuable, and we want to make sure that you are not robbed of this experience. Here are some general guidelines for asking questions.\n \n\n\n The general rule of thumb is the following. Before you ask a question to us, ask yourself whether you can figure out the answer yourself. 
If the answer is \"yes\" or \"maybe\", then you should give a solid effort in trying to find the answer. This is an extremely valuable learning experience.\n Whenever you ask a question, first tell us what your own thoughts about the question are and what you have tried. If you don't, then we will usually respond to your question with another question asking you what your thoughts are. When you explain your thoughts to us, this allows us to see and fix any misunderstanding and help you more effectively.\n If a homework problem is ambiguous to you, try to figure out all the possible interpretations and evaluate them one by one. Often, you'll find that there is really one interpretation that makes sense.\n Try not to turn a conversation with a course staff member into the Twenty Questions game. This does not maximize your learning outcomes. Remember that when a question formulates in your mind, the first person who should try to answer it is you. Our role is to help you when you are stuck.\n Certain discussions are best suited for your group. For example, if you want to bounce off ideas and get some feedback on your thought process for a GROUP or OPEN problem on the homework, you should have that conversation with your group members.\n Please do not ask us to read your solution write-up and give you feedback on how many points you would get. Solutions can have subtle bugs, and we cannot always spot such bugs after a quick glance. Properly reading and evaluating a solution can take a lot of time. That being said, even though we cannot read your solution in detail before the homework writing session, we are happy to listen to your overall proof strategy and help you try to figure out if there are any logical flaws or gaps. \n Diderot is a good resource for short-answer questions, but can be extremely inefficient for long-answer questions or questions that may require a back and forth conversation. 
When you want to ask a question on Diderot, consider whether the question is suitable for that platform, and if it is not, ask your question during office hours for a more useful and efficient conversation. \n \n\nUse of Electronic Devices\n\n The use of electronic devices like phones, tablets, and laptops during lectures and recitations is prohibited. These devices cause distractions to both you and the people around you. If you would like to use an electronic device to take notes and using paper and pencil is not a good option for you, please contact one of the instructors.\n \n\n There is an exception to the above rule. When we open up a poll during a lecture, you are allowed to use your phone to cast your vote. Once the poll is completed, you should put away your phone. If you do not have a smartphone, please contact one of the instructors.\n \nHow to Succeed in 251\n\nDownload the PDF.\n \nAcademic Integrity\n\n We understand that most of you would never consider cheating in any form. There is, however, a small minority of students for whom this is not the case. In the past, when we have caught students cheating, they have often insisted that they did not understand the rules and penalties. As a part of the first homework, you will be required to acknowledge that you have read and understood the cheating policies. Please read the Carnegie Mellon University Policy on Academic Integrity. 
The following are some clear examples of cheating:\n \n\n Copying from another student during an exam or homework writing session.\n Discussing a SOLO problem before the homework writing session with someone who is not a part of the course staff.\n Googling for specific keywords that happen to appear in one of the homework questions.\n Showing a draft of a written solution to another student.\n Getting help from someone whom you do not acknowledge on your solution.\n Receiving exam-related information from a student who has already taken the exam.\n Attempting to hack any part of the 15-251 infrastructure.\n Looking at someone else\u2019s work on AFS, even if the file permissions allow it.\n Lying to the course staff.\n \n\nConsequences: The penalty for cheating can range from a 10% deduction on your overall course average (i.e. a letter grade drop) to directly failing the course. Furthermore, in most cases, a letter to the Dean of Student Affairs is sent and further consequences are determined by them.\n \nExtended-Time and Make-Up Policy\n\n We are happy to provide appropriate accommodations to students who have approval from the Disability Resources Center. Please contact one of the instructors if you are in this situation.\n \n\n No make-up quizzes, exams, or homework writing sessions will be administered, except in the case of documented medical or family emergencies, or other university-approved absences. The common cold or your computer crashing, unfortunately, does not qualify as an excused absence.\n \nWell-Being and Happiness\n\n We very much care about your well-being and happiness! Be aware that everyone on the course staff is always available to provide counsel or chat, and you should attend office hours as often as you want for academic and non-academic conversation.\n \n\n However, also know that the university provides services that you may want to take advantage of at some point during the semester. 
If you are ever unsure about them, run into a problem, or want more information, feel free to reach out to the instructors.\n \n\n For a comprehensive list of CMU\u2019s resources, please click here.\n \n\nCMU Police Department\n\n\n Do not hesitate to call the CMU police in an emergency or if you are interested in taking advantage of their services.\n \n\n Website: http:\/\/www.cmu.edu\/police\/welcome.html\n Emergency phone number: 412-268-2323\n Non-Emergency phone number: 412-268-6232\n \n\nCounseling and Psychological Services (CAPS)\n\n\n CAPS offers therapy, crisis support, and more; you should reach out to CAPS for counseling if you are struggling, no matter how small you may think your problems are. If CAPS can\u2019t help you appropriately, they also do referrals and basic consultations to help you find what you need.\n \n\nWebsite: http:\/\/www.cmu.edu\/counseling\/\nHours: Monday through Friday 8:30am-5:00pm\n Phone number: 412-268-2922\n Location: 2nd floor, Morewood Gardens, E-Tower\n \n\nUniversity Health Services (UHS)\n\n\n Health services can help you in the same way a doctor does, but they also offer comprehensive care management and health promotion services.\n \n\n Website: http:\/\/www.cmu.edu\/health-services\/\n Hours: M, Tu, W: 8:30am-7:00pm, Th: 10:00am-7:00pm, F: 8:30am-5:00pm, Sat: 11:00am-3:00pm\n Note: When UHS is closed, call 1(844)881-7176.\n To set up an appointment on HealthConnect, click here.\n Comprehensive Care Manager: Diane Dawson, 412-268-9171\n \n\n15-251 Wellness Help\n\n\n If you find yourself struggling in any way or simply would like to discuss how you are feeling about 251 or just chat, reach out to one of the following people or your mentor TA to set up a casual meeting.\n \n\n Anil Ada (Instructor): aada@cs.cmu.edu\n Bernhard Haeupler (Instructor): haeupler@cs.cmu.edu\n Corwin de Boor (Head TA): cdeboor@andrew.cmu.edu\n Patrick Lin (Head TA): patrick1@andrew.cmu.edu\n"},{"chosen":"\n\nThe ground of 
optimization - LessWrong 2.0 viewer\nThe ground of optimization\nAlex Flint, 20 Jun 2020 0:38 UTC\nTags: Optimization, AI, World Modeling, General Intelligence, Selection vs Control, Dynamical systems, Best of LessWrong\n\nThis work was supported by OAK, a monastic community in the Berkeley hills. This document could not have been written without the daily love of living in this beautiful community. The work involved in writing this cannot be separated from the sitting, chanting, cooking, cleaning, crying, correcting, fundraising, listening, laughing, and teaching of the whole community.\n\nWhat is optimization? 
What is the relationship between a computational optimization process \u2014 say, a computer program solving an optimization problem \u2014 and a physical optimization process \u2014 say, a team of humans building a house?\nWe propose the concept of an optimizing system as a physically closed system containing both that which is being optimized and that which is doing the optimizing, and defined by a tendency to evolve from a broad basin of attraction towards a small set of target configurations despite perturbations to the system. We compare our definition to that proposed by Yudkowsky, and place our work in the context of work by Demski and Garrabrant\u2019s Embedded Agency, and Drexler\u2019s Comprehensive AI Services. We show that our definition resolves difficult cases proposed by Daniel Filan. We work through numerous examples of biological, computational, and simple physical systems showing how our definition relates to each.\nIntroduction\nIn the field of computer science, an optimization algorithm is a computer program that outputs the solution, or an approximation thereof, to an optimization problem. 
An optimization problem consists of an objective function to be maximized or minimized, and a feasible region within which to search for a solution. For example we might take the objective function (x^2\u22122)^2 as a minimization problem and the whole real number line as the feasible region. The solution then would be x=\u221a2 and a working optimization algorithm for this problem is one that outputs a close approximation to this value.\nIn the field of operations research and engineering more broadly, optimization involves improving some process or physical artifact so that it is fit for a certain purpose or fulfills some set of requirements. For example, we might choose to measure a nail factory by the rate at which it outputs nails, relative to the cost of production inputs. We can view this as a kind of objective function, with the factory as the object of optimization just as the variable x was the object of optimization in the previous example.\nThere is clearly a connection between optimizing the factory and optimizing for x, but what exactly is this connection? What is it that identifies an algorithm as an optimization algorithm? 
What is it that identifies a process as an optimization process?\nThe answer proposed in this essay is: an optimizing system is a physical process in which the configuration of some part of the universe moves predictably towards a small set of target configurations from any point in a broad basin of optimization, despite perturbations during the optimization process.\nWe do not imagine that there is some engine or agent or mind performing optimization, separately from that which is being optimized. We consider the whole system jointly \u2014 engine and object of optimization \u2014 and ask whether it exhibits a tendency to evolve towards a predictable target configuration. If so, then we call it an optimizing system. If the basin of attraction is deep and wide then we say that this is a robust optimizing system.\nAn optimizing system as defined in this essay is known in dynamical systems theory as a dynamical system with one or more attractors. In this essay we show how this framework can help to understand optimization as manifested in physically closed systems containing both engine and object of optimization.\nIn this way we find that optimizing systems are not something that is designed but something that is discovered. 
The configuration space of the world contains countless pockets shaped like small and large basins, such that if the world should crest the rim of one of these pockets then it will naturally evolve towards the bottom of the basin. We care about them because we can use our own agency to tip the world into such a basin and then let go, knowing that from here on things will evolve towards the target region.\nAll optimization basins have a finite extent. A ball may roll to the center of a valley if initially placed anywhere within the valley, but if it is placed outside the valley then it will roll somewhere else entirely, or perhaps will not roll at all. Similarly, even a very robust optimizing system has an outer rim to its basin of attraction, such that if the configuration of the system is perturbed beyond that rim then the system no longer evolves towards the target that it once did. When an optimizing system deviates beyond its own rim, we say that it dies. An existential catastrophe is when the optimizing system of life on Earth moves beyond its own outer rim.\nExample: computing the square root of two\nSay I ask my computer to compute the square root of two, for example by opening a python interpreter and typing:\n>>> import math\n>>> print(math.sqrt(2))\n1.41421356237\n\nThe value printed here is actually calculated by solving an optimization problem. It works roughly as follows. 
First we set up an objective function that attains its minimum at the square root of two. One function we could use is y=(x^2\u22122)^2\n\nNext we pick an initial estimate for the square root of two, which can be any number whatsoever. Let\u2019s take 1.0 as our initial guess. Then we take a gradient step in the direction indicated by computing the slope of the objective function at our initial estimate:\n\nThen we repeat this process of computing the slope and updating our estimate over and over, and our optimization algorithm quickly converges to the square root of two:\n\nThis is gradient descent, and it can be implemented in a few lines of python code:\n\tcurrent_estimate = 1.0\n\tstep_size = 1e-3\n\twhile True:\n\t\t# objective value at the current estimate (shown for clarity; not used below)\n\t\tobjective = (current_estimate**2 - 2) ** 2\n\t\t# slope of the objective at the current estimate\n\t\tgradient = 4 * current_estimate * (current_estimate**2 - 2)\n\t\tif abs(gradient) < 1e-8:\n\t\t\tbreak\n\t\tcurrent_estimate -= gradient * step_size\n\nBut this program has the following unusual property: we can modify the variable that holds the current estimate of the square root of two at any point while the program is running, and the algorithm will still converge to the square root of two. 
That is, while the code above is running, if I drop in with a debugger and overwrite the current estimate while the loop is still executing, what will happen is that the next gradient step will start correcting for this perturbation, pushing the estimate back towards the square root of two:\n\nIf we give the algorithm time to converge to within machine precision of the actual square root of two then the final output will be bit-for-bit identical to the result we would have gotten without the perturbation.\nConsider this for a moment. For most kinds of computer code, overwriting a variable while the code is running will either have no effect because the variable isn\u2019t used, or it will have a catastrophic effect and the code will crash, or it will simply cause the code to output the wrong answer. If I use a debugger to drop in on a webserver servicing an http request and I overwrite some variable with an arbitrary value just as the code is performing a loop in which this variable is used in a central way, bad things are likely to happen! Most computer code is not robust to arbitrary in-flight data modifications.\nBut this code that computes the square root of two is robust to in-flight data modifications, or at least the \u201ccurrent estimate\u201d variable is. 
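This robustness can be demonstrated end to end. The sketch below wraps the loop in a function; the `perturb_at` and `perturb_value` parameters are illustrative assumptions standing in for the debugger, overwriting the estimate partway through the run:

```python
import math

def sqrt2_gradient_descent(perturb_at=None, perturb_value=None):
    """Gradient descent on (x**2 - 2)**2, optionally overwriting the
    estimate mid-run to simulate an in-flight perturbation."""
    current_estimate = 1.0
    step_size = 1e-3
    iteration = 0
    while True:
        # Slope of the objective at the current estimate.
        gradient = 4 * current_estimate * (current_estimate**2 - 2)
        if abs(gradient) < 1e-8:
            break
        current_estimate -= gradient * step_size
        iteration += 1
        if iteration == perturb_at:
            # The "debugger overwrite" from the text.
            current_estimate = perturb_value
    return current_estimate

unperturbed = sqrt2_gradient_descent()
perturbed = sqrt2_gradient_descent(perturb_at=100, perturb_value=3.0)
print(unperturbed, perturbed)
```

The perturbed run spends a different number of iterations, but both runs end up agreeing with math.sqrt(2) to many decimal places.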
It\u2019s not that our perturbation has no effect: if we change the value, the next iteration of the algorithm will compute the objective function and its slope at a completely different point, and each iteration after that will be different from how it would have been if we hadn\u2019t intervened. The perturbation may change the total number of iterations before convergence is reached. But ultimately the algorithm will still output an estimate of the square root of two, and, given time to fully converge, it will output the exact same answer it would have output without the perturbation. This is an unusual breed of computer program indeed!\nWhat is happening here is that we have constructed a physical system consisting of a computer and a python program that computes the square root of two, such that:\n\nfor a set of starting configurations (in this case the set of configurations in which the \u201ccurrent estimate\u201d variable is set to each representable floating point number),\n\nthe system exhibits a tendency to evolve towards a small set of target configurations (in this case just the single configuration in which the \u201ccurrent estimate\u201d variable is set to the square root of two),\n\nand this tendency is robust to in-flight perturbations to the system\u2019s configuration (in this case robustness is limited to just the dimensions 
corresponding to changes in the \u201ccurrent estimate\u201d variable).\n\nIn this essay I argue that systems that converge to some target configuration, and will do so despite perturbations to the system, are the systems we should rightly call \u201coptimizing systems\u201d.\nExample: building a house\nConsider a group of humans building a house. Let us consider the humans together with the building materials and construction site as a single physical system. Let us imagine that we assemble this system inside a completely closed chamber, including food and sleeping quarters for the humans, lighting, a power source, construction materials, a construction blueprint, as well as the physical humans with appropriate instructions and incentives to build the house. If we just put these physical elements together we get a system that has a tendency to evolve under the natural laws of physics towards a configuration in which there is a house matching the blueprint.\n\nWe could perturb the system while the house is being built \u2014 say by dropping in at night and removing some walls or moving some construction materials about \u2014 and this physical system will recover. 
The team of humans will come in the next day and find the construction materials that were moved, put in new walls to replace the ones that were removed, and so on.\n\nJust like the square root of two example, here is a physical system with:\n\nA basin of attraction (all the possible arrangements of viable humans and building materials)\n\nA target configuration set that is small relative to the basin of attraction (those in which the building materials have been arranged into a house matching the design)\n\nA tendency to evolve towards the target configurations when starting from any point within the basin of attraction, despite in-flight perturbations to the system\n\nNow this system is not infinitely robust. If we really scramble the arrangement of atoms within this system then we\u2019ll quickly wind up with a configuration that does not contain any humans, or in which the building materials are irrevocably destroyed, and then we will have a system without the tendency to evolve towards any small set of final configurations.\n\nIn the physical world we are not surprised to find systems that have this tendency to evolve towards a small set of target configurations. If I pick up my dog while he is sleeping and move him by a few inches, he still finds his way to his water bowl when he wakes up. 
If I pull a piece of bark off a tree, the tree continues to grow in the same upward direction. If I make a noise that surprises a friend working on some math homework, the math homework still gets done. Systems that contain living beings regularly exhibit this tendency to evolve towards target configurations, and tend to do so in a way that is robust to in-flight perturbations. As a result we are familiar with physical systems that have this property, and we are not surprised when they arise in our lives.\nBut physical systems in general do not have the tendency to evolve towards target configurations. If I move a billiard ball a few inches to the left while a bunch of billiard balls are energetically bouncing around a billiard table, the balls are likely to come to rest in a very different position than if I had not moved the ball. 
If I change the trajectory of a satellite a little bit, the satellite does not have any tendency to move back into its old orbit.\nThe computer systems that we have built are still, by and large, more primitive than the living systems that we inhabit, and most computer systems do not have the tendency to evolve robustly towards some set of target configurations, so optimization algorithms as discussed in the previous section, which do have this property, are somewhat unusual.\nDefining optimization\nAn optimizing system is a system that has a tendency to evolve towards one of a set of configurations that we will call the target configuration set, when started from any configuration within a larger set of configurations, which we call the basin of attraction, and continues to exhibit this tendency with respect to the same target configuration set despite perturbations.\nSome systems may have a single target configuration towards which they inevitably evolve. Examples are a ball in a steep valley with a single local minimum, and a computer computing the square root of two. Other systems may have a set of target configurations and perturbing the system may cause it to evolve towards a different member of this set. 
Examples are a ball in a valley with multiple local minima, or a tree growing upwards (perturbing the tree by, for example, cutting off some branches while it is growing will probably change its final shape, but will not change its tendency to grow towards one of the configurations in which it has reached its maximum size).\nWe can quantify optimizing systems in the following ways.\nRobustness. Along how many dimensions can we perturb the system without altering its tendency to evolve towards the target configuration set? What magnitude perturbation can the system absorb along these dimensions? A self-driving car navigating through a city may be robust to perturbations that involve physically moving the car to a different position on the road in the city, but not to perturbations that involve changing the state of physical memory registers that contain critical bits of computer code in the car\u2019s internal computer.\nDuality. To what extent can we identify subsets of the system corresponding to \u201cthat which is being optimized\u201d and \u201cthat which is doing the optimization\u201d? Between engine and object of optimization; between agent and world. 
Highly dualistic systems may be robust to perturbations of the object of optimization, but brittle with respect to perturbations of the engine of optimization. For example, a system containing a 2020s-era robot moving a vase around is a dualistic optimizing system: there is a clear subset of the system that is the engine of optimization (the robot), and object of optimization (the vase). Furthermore, the robot may be able to deal with a wide variety of perturbations to the environment and to the vase, but there are likely to be numerous small perturbations to the robot itself that will render it inert. In contrast, a tree is a non-dualistic optimizing system: the tree does grow towards a set of target configurations, but it makes no sense to ask which part of the tree is \u201cdoing\u201d the optimization and which part is \u201cbeing\u201d optimized. This latter example is discussed further below.\nRetargetability. Is it possible, using only a microscopic perturbation to the system, to change the system such that it is still an optimizing system but with a different target configuration set? 
A system containing a robot with the goal of moving a vase to a certain location can be modified by making just a small number of microscopic perturbations to key memory registers such that the robot holds the goal of moving the vase to a different location and the whole vase\/robot system now exhibits a tendency to evolve towards a different target configuration. In contrast, a system containing a ball rolling towards the bottom of a valley cannot generally be modified by any microscopic perturbation such that the ball will roll to a different target location. A tree is an intermediate example: to cause the tree to evolve towards a different target configuration set \u2014 say, one in which its leaves were of a different shape \u2014 one would have to modify the genetic code simultaneously in all of the tree\u2019s cells.\nRelationship to Yudkowsky\u2019s definition of optimization\nIn Measuring Optimization Power, Eliezer Yudkowsky defines optimization as a process in which some part of the world ends up in a configuration that is high in an agent\u2019s preference ordering, yet has low probability of arising spontaneously. 
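That notion of "high in the preference ordering yet improbable" can be given a rough numerical sketch: estimate how improbable it is, under a baseline distribution standing in for configurations arising spontaneously, to land in a state at least as preferred as the one actually reached. Everything concrete below (the uniform baseline on [0, 10], the toy preference for being near the square root of two, the sample size) is an illustrative assumption, not taken from either essay:

```python
import math
import random

def optimization_power_bits(achieved_utility, utility, baseline_states):
    # Fraction of baseline configurations at least as preferred as the
    # achieved one; the rarer such states are, the more bits of optimization.
    hits = sum(1 for s in baseline_states if utility(s) >= achieved_utility)
    return -math.log2(hits / len(baseline_states))

def utility(x):
    # Toy preference ordering: states closer to sqrt(2) are preferred.
    return -abs(x - math.sqrt(2))

rng = random.Random(0)
baseline = [rng.uniform(0, 10) for _ in range(100_000)]

# A weak optimizer lands within 1.0 of sqrt(2); a stronger one within 0.01.
weak = optimization_power_bits(utility(math.sqrt(2) + 1.0), utility, baseline)
strong = optimization_power_bits(utility(math.sqrt(2) + 0.01), utility, baseline)
print(weak, strong)
```

Hitting a narrower band of preferred states registers as more bits, which is the sense in which the measure grades the power of the optimizer by the improbability of its result.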
Yudkowsky’s definition asks us to look at a patch of the world that has already undergone optimization by an agent or mind, and draw conclusions about the power or intelligence of that mind by asking how unlikely it would be for a configuration of equal or greater utility (to the agent) to arise spontaneously.
Our definition differs from this in the following ways:

We look at whole systems that evolve naturally under physical laws. We do not assume that we can decompose these systems into some engine and object of optimization, or into mind and environment. We do not look at systems that are “being optimized” by some external entity but rather at “optimizing systems” that exhibit a natural tendency to evolve towards a target configuration set. These optimizing systems may contain subsystems that have the properties of agents, but as we will see there are many instances of optimizing systems that do not contain dualistic agentic subsystems.

When discerning the boundary between optimization and non-optimization, we look principally at robustness — whether the system will continue to evolve towards its target configuration set in the face of perturbations — whereas Yudkowsky looks at the improbability of the final configuration.

Relationship to Drexler’s Comprehensive AI Services
Eric Drexler has written about the need to consider AI systems that are not goal-directed agents. He points out that the most economically important AI systems today are not constructed within the agent paradigm, and that in fact agents represent just a tiny fraction of the design space of intelligent systems. For example, a system that identifies faces in images would be an intelligent system but not an agent according to Drexler’s taxonomy.
This perspective is highly relevant to our discussion here since we seek to go beyond the narrow agent model in which intelligent systems are conceived of as unitary entities that receive observations from the environment, send actions back into the environment, but are otherwise separate from the environment.
Our perspective is that there is a specific class of intelligent systems — which we call optimizing systems — that are worthy of special attention and study due to their potential to reshape the world. The set of optimizing systems is smaller than the set of all AI services, but larger than the set of goal-directed agentic systems.

Figure: relationship between our optimizing system concept and Drexler’s taxonomy of AI systems
Examples of systems that lie in each of these three tiers are as follows:

A system that identifies faces in images by evaluating a feed-forward neural network is an AI system but not an optimizing system.

A tree is an optimizing system but not a goal-directed agent system (see section below analyzing a tree as an optimizing system).

A robot with the goal of moving a ball to a specific destination is a goal-directed agent system.

Relationship to Garrabrant and Demski’s Embedded Agency
Scott Garrabrant and Abram Demski have written about the many ways that a dualistic view of agency, in which one conceives of a hard separation between agent and environment, fails to capture the reality of agents that are reducible to the same basic building-blocks as the environments in which they are embedded. They show that if one starts from a dualistic view of agency then it is difficult to design agents capable of reflecting on and making improvements to their own cognitive processes, since the dualistic view of agency rests on a unitary agent whose cognition does not affect the world except via explicit actions. They also show that reasoning about counterfactuals becomes nonsensical if starting from a dualistic view of agency, since the agent’s cognitive processes are governed by the same physical laws as those that govern the environment, and the agent can come to notice this fact, leading to confusion when considering the consequences of actions that are different from the actions that the agent will, in fact, output.
One could view the Embedded Agency work as enumerating the many logical pitfalls one falls into if one takes the “optimizer” concept as the starting point for designing intelligent systems, rather than “optimizing system” as we propose here. The present work is strongly inspired by Garrabrant and Demski’s work.
Our hope is to point the way to a view of optimization and agency that captures reality sufficiently well to avoid the logical pitfalls identified in the Embedded Agency work.
Example: ball in a valley
Consider a physical ball rolling around in a small valley. According to our definition of optimization, this is an optimizing system:
Configuration space. The system we are studying consists of the physical valley plus the ball.
Basin of attraction. The ball could initially be placed anywhere in the valley (these are the configurations comprising the basin of attraction).
Target configuration set. The ball will roll until it ends up at the bottom of the valley (the set of local minima are the target configurations).
We can perturb the ball while it is “in flight”, say by changing its position or velocity, and the ball will still ultimately end up at one of the target configurations. This system is robust to perturbations along dimensions corresponding to the spatial position and velocity of the ball, but there are many more dimensions along which this system is not robust. If we change the shape of the ball to a cube, for example, then the ball will not continue rolling to the bottom of the valley.

Example: ball in valley with robot
Consider now a ball in a valley as above, but this time with the addition of an intelligent robot holding the goal of ensuring that the ball reaches the bottom of the valley.
Configuration space. The system we are studying now consists of the physical valley, the ball, and the robot. We consider the evolution of and perturbations to this whole joint system.
Target configuration set. As before, the target configuration is the ball being at the bottom of the valley.
Basin of attraction. As before, the basin of attraction consists of all the possible spatial locations that the ball could be placed in the valley.
We can now perturb the system along many more dimensions than in the case where there was no robot. For example, we could introduce a barrier that prevents the ball from rolling downhill past a certain point, and we can then expect a sufficiently intelligent robot to move the ball over the barrier. We can expect a sufficiently well-designed robot to be able to overcome a wide variety of hurdles that gravity would not overcome on its own.
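The barrier thought experiment can be caricatured numerically (a toy sketch; the terrain, the time step, and the robot-as-corrective-force controller are all invented for illustration): under gravity alone the ball settles in a local dip behind the barrier, while the robot-assisted system reaches the true bottom.

```python
# Toy sketch (all terrain and controller details are hypothetical, not from
# the post): gravity alone leaves the ball stuck in a local dip, while a
# goal-directed "robot" push carries it over the barrier to the bottom.

def slope(x):
    """Downhill slope of a terrain whose global bottom is at x=0, with a
    barrier at x=3 that creates a local dip centred at x=5."""
    if x > 3:
        return x - 5   # inside the dip: gravity pulls towards x=5
    return x           # main valley: gravity pulls towards x=0

def gravity_only(x, steps=20000, dt=0.01):
    """The ball-in-a-valley system: robust only to perturbations that leave
    the ball inside the main valley."""
    for _ in range(steps):
        x -= dt * slope(x)
    return x

def with_robot(x, steps=20000, dt=0.01, gain=3.0):
    """The ball-plus-robot system: the robot adds a corrective push towards
    the bottom of the valley, on top of the unchanged physics."""
    for _ in range(steps):
        x -= dt * (slope(x) + gain * x)
    return x

# Placed beyond the barrier, the plain ball settles in the local dip (x=5);
# the robot-assisted system overcomes the same hurdle and reaches x=0.
assert abs(gravity_only(6.0) - 5.0) < 1e-6
assert abs(with_robot(6.0)) < 1e-6
```

In the language of the essay: adding the robot widens the set of perturbations (here, displacements past the barrier) from which the joint system still reaches the target configuration set.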
Therefore we say that this system is more robust than the system without the robot.
There is a sequence of systems spanning the gap between a ball rolling in a valley, which is robust to only a narrow set of perturbations and so, we say, exhibits a weak degree of optimization, up to a robot with a goal of moving a ball around in a valley, which is robust to a much wider set of perturbations and so exhibits a stronger degree of optimization. The difference between systems that do and do not undergo optimization is therefore not a binary distinction but a continuous gradient of increasing robustness to perturbations.
By introducing the robot to the system we have also introduced new dimensions along which the system is fragile: the dimensions corresponding to modifications to the robot itself, and in particular the dimensions corresponding to modifications to the code running on the robot (i.e. physical perturbations to the configuration of the memory cells in which the code is stored). There are two types of perturbation we might consider:

Perturbations that destroy the robot. There are numerous ways we could cut wires or scramble computer code that would leave the robot completely non-operational. Many of these would be physically microscopic, such as flipping a single bit in a memory cell containing some critical computer code. In fact there are now more ways to break the system via microscopic perturbations compared to when we were considering a ball in a valley without a robot, since there are few ways to cause a ball not to reach the bottom of a valley by making only a microscopic perturbation to the system, but there are many ways to break modern computer systems via a microscopic perturbation.

Perturbations that change the target configurations. We could also make physically microscopic perturbations to this system that change the robot’s goal. For example we might flip the sign on some critical computations in the robot’s code such that the robot works to place the ball at the highest point rather than the lowest. This is still a physical perturbation to the valley/ball/robot system: it is one that affects the configuration of the memory cells containing the robot’s computer code. These kinds of perturbations may point to a concept with some similarity to that of an agent.
If we have a system that can be perturbed in a way that preserves the robustness of the basin of convergence but changes the target configuration towards which the system tends to evolve, and if we can find perturbations that cause the target configurations to match our own goals, then we have a way to navigate between convergence basins.

Example: computer performing gradient descent
Consider now a computer running an iterative gradient descent algorithm in order to solve an optimization problem. For concreteness let us imagine that the objective function being optimized is globally convex, in which case the algorithm will certainly reach the global optimum given sufficient time. Let us further imagine that the computer stores its current best estimate of the location of the global optimum (which we will henceforth call the “optimizand”) at some known memory location, and updates this after every iteration of gradient descent.
Since this is a purely computational process, it may be tempting to define the configuration space at the computational level — for example by taking the configuration space to be the domain of the objective function. However, it is of utmost importance when analyzing any optimizing system to ground our analysis in a physical system evolving according to the physical laws of nature, just as we have for all previous examples. The reason this is important is to ensure that we always study complete systems, not just some inert part of the system that is “being optimized” by something external to the system. Therefore we analyze this system as follows.
Configuration space. The system consists of a physical computer running some code that performs gradient descent. The configurations of the system are the physical configurations of the atoms comprising the computer.
Target configuration set. The target configuration set consists of the set of physical configurations of the computer in which the memory cells that store the current optimizand contain the true location of the global optimum (or the closest floating point representation of it).
Basin of attraction. The basin of attraction consists of the set of physical configurations in which there is a viable computer and it is running the gradient descent algorithm.
Example: billiard balls
Let us now examine a system that is not an optimizing system according to our definition. Consider a billiard table with some billiard balls that are currently bouncing around in motion. Left alone, the balls will eventually come to rest in some configuration. Is this an optimizing system?
In order to qualify as an optimizing system, a system must (1) have a tendency to evolve towards a set of target configurations that are small relative to the basin of attraction, and (2) continue to evolve towards the same set of target configurations if perturbed.
If we reach in while the billiard balls are bouncing around and move one of the balls that is in motion, the system will now come to rest in a different configuration. Therefore this is not an optimizing system, because there is no set of target configurations towards which the system evolves despite perturbations.
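The contrast between these last two examples can be sketched in a few lines (a toy model with invented dynamics; a frictional ball coasting on a flat table stands in for the billiard balls): perturbing the iterate of a convex gradient descent does not change where it ends up, whereas perturbing a system that merely comes to rest shifts its final configuration one-for-one.

```python
# Toy contrast (hypothetical dynamics): an optimizing system vs. a system
# that merely comes to rest somewhere.

def gradient_descent(x, perturb_at=None, delta=0.0, steps=5000, lr=0.1):
    """Minimize the convex objective f(x) = (x - 4)^2 by gradient steps."""
    for i in range(steps):
        if i == perturb_at:
            x += delta            # mid-run perturbation of the optimizand
        x -= lr * 2 * (x - 4)     # gradient step
    return x

# With or without a large mid-run perturbation, the endpoint is the optimum:
assert abs(gradient_descent(0.0) - 4.0) < 1e-9
assert abs(gradient_descent(0.0, perturb_at=100, delta=50.0) - 4.0) < 1e-9

def coasting_ball(x, v, perturb_at=None, delta=0.0, steps=5000, dt=0.01):
    """Ball on a flat table: friction removes energy, but nothing pulls the
    ball towards any particular place."""
    for i in range(steps):
        if i == perturb_at:
            x += delta            # the same kind of perturbation
        v -= v * dt               # friction
        x += v * dt
    return x

# The final resting place tracks the perturbation one-for-one: no target set.
rest = coasting_ball(0.0, v=1.0)
moved = coasting_ball(0.0, v=1.0, perturb_at=100, delta=50.0)
assert abs((moved - rest) - 50.0) < 1e-9
```

The first system has a target configuration set with a wide basin of attraction; the second merely dissipates energy, so every perturbation relocates its endpoint and there is no set towards which it evolves despite perturbations.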
A system does not need to be robust along all dimensions in order to be an optimizing system, but a billiard table exhibits no such robust dimensions at all, so it is not an optimizing system.
Example: satellite in orbit
Consider a second example of a system that is not an optimizing system: a satellite in orbit around Earth. Unlike the billiard balls, there is no chaotic tendency for small perturbations to lead to large deviations in the system’s evolution, but neither is there any tendency for the system to come back to some target configuration when perturbed. If we perturb the satellite’s velocity or position, then from that point on it is in a different orbit and has no tendency to return to its previous orbit. There is no set of target configurations towards which the system evolves despite perturbations, so this is not an optimizing system.
Example: a tree
Consider a patch of fertile ground with a tree growing in it. Is this an optimizing system?
Configuration space. For the sake of concreteness let us take a region of space that is sealed off from the outside world — say 100m x 100m x 100m. This region is filled at the bottom with fertile soil and at the top with an atmosphere conducive to the tree’s growth. Let us say that the region contains a single tree.
We will analyze this system in terms of the arrangement of atoms inside this region of space. Out of all the possible configurations of these atoms, the vast majority consist of a uniform hazy gas. An astronomically tiny fraction of configurations contain a non-trivial mass of complex biological nutrients making up soil. An even tinier fraction of configurations contain a viable tree.
Target configuration set. A tree has a tendency to grow taller over time, to sprout more branches and leaves, and so on. Furthermore, trees can only grow so tall due to the physics of transporting sugars up and down the trunk. So we can identify a set of target configurations in which the atoms in our region of space are arranged into a tree that has grown to its maximum size (has sprouted as many branches and leaves as it can support given the atmosphere, the soil that it is growing in, and the constraints of its own biology). There are many topologies in which the tree’s branches could divide, many positions that leaves could sprout in, and so on, so there are many configurations within the target configuration set. But this set is still tiny compared to all the ways that the same atoms could be arranged without the constraint of forming a viable tree.
Basin of convergence. This system will evolve towards the target configuration set starting from any configuration in which there is a viable tree. This includes configurations in which there is just a seed in the ground, as well as configurations in which there is a tree of small, medium, or large size. Starting from any of these configurations, if we leave the system to evolve under the natural laws of physics then the tree will grow towards its maximum size, at which point the system will be in one of the target configurations.
Robustness to perturbations. This system is highly robust to perturbations. Consider perturbing the system in any of the following ways:

Moving soil from one place to another

Removing some leaves from the tree

Cutting a branch off the tree

These perturbations might change which particular target configuration is eventually reached — the particular arrangement of branches and leaves in the tree once it reaches its maximum size — but they will not stop the tree from growing taller and evolving towards a target configuration. In fact we could cut the tree right at the base of the trunk and it would continue to evolve towards a target configuration by sprouting a new trunk and growing a whole new tree.
Duality. A tree is a non-dualistic optimizing system.
There is no subsystem that is responsible for “doing” the optimization, separately from that which is “being” optimized. Yet the tree does exhibit a tendency to evolve towards a set of target configurations, and can overcome a wide variety of perturbations in order to do so. There are no man-made systems in existence today that are capable of gathering and utilizing resources as flexibly as a tree, from so broad a variety of environments, and there are certainly no man-made systems that can recover from being as thoroughly dismembered as a tree that has been cut at the trunk.
At this point it may be tempting to say that the engine of optimization is natural selection. But recall that we are studying just a single tree growing from seed to maximum size. Can you identify a physical subset of our 100m x 100m x 100m region of space that is this engine of optimization, analogous to how we identified a physical subset of the robot-and-ball system as the engine of optimization (i.e. the physical robot)? Natural selection might be the process by which the initial system came into existence, but it is not the process that drives the growth of the tree towards a target configuration.
It may then be tempting to say that it is the tree’s DNA that is the engine of optimization. It is true that the tree’s DNA exhibits some characteristics of an engine of optimization: it remains unchanged throughout the life of the tree, and physically microscopic perturbations to it can disable the tree. But a tree replicates its DNA in each of its cells, and perturbing just one or a small number of these is not likely to affect the tree’s overall growth trajectory. More importantly, a single strand of DNA does not really have agency on its own: it requires the molecular machinery of the whole cell to synthesize proteins based on the genetic code in the DNA, and the physical machinery of the whole tree to collect and deploy energy, water, and nutrients. Just as it would be incorrect to identify the memory registers containing computer code within a robot as the “true” engine of optimization, separate from the rest of the computing and physical machinery that brings this code to life, it is not quite accurate to identify DNA as an engine of optimization. A tree simply does not decompose into engine and object of optimization.
It may also be tempting to ask whether the tree can “really” be said to be undergoing optimization in the absence of any “intention” to reach one of the target configurations. But this expectation of a centralized mind with centralized intentions is really an artifact of us projecting our view of our self onto the world: we believe that we have a centralized mind with centralized intentions, so we focus our attention on optimizing systems with a similar structure. But this turns out to be misguided on two counts: first, the vast majority of optimizing systems do not contain centralized minds, and second, our own minds are actually far less centralized than we think!
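Returning to the tree itself, its robust convergence can be caricatured in a few lines (the logistic growth law below is a stand-in chosen for illustration, not a claim about botany): the whole system tends towards a maximum size, and pruning perturbations do not change that tendency, even though no single piece of the update rule is "the engine".

```python
# Caricature of the tree as an optimizing system (hypothetical growth law).
# The tendency towards the target configuration (maximum size) is a property
# of the whole dynamics, not of any separable subsystem.

def grow(size, prune_at=None, prune_to=0.5, years=400, rate=0.1, max_size=30.0):
    """Each year the tree grows in proportion to its current size and to how
    far it is from the maximum size its biology supports (logistic growth)."""
    for year in range(years):
        if year == prune_at:
            size = prune_to                           # cut the tree to a stump
        size += rate * size * (1.0 - size / max_size)  # logistic growth step
    return size

# Seed, sapling, or a tree cut near the base: all configurations in the basin
# evolve to the same target configuration set (a tree of maximum size).
assert abs(grow(0.01) - 30.0) < 1e-3
assert abs(grow(15.0) - 30.0) < 1e-3
assert abs(grow(15.0, prune_at=200) - 30.0) < 1e-3
```

Cutting the modelled tree down mid-run changes which path is taken but not where the system ends up, mirroring the claim that perturbations change the particular target configuration reached, not the tendency to reach one.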
For now we put aside this question of whether optimization requires intentions, and instead just work within our definition of optimizing systems, which a tree definitely satisfies.
Example: bottle cap
Daniel Filan has pointed out that some definitions of optimization would nonsensically classify a bottle cap as an optimizer, since a bottle cap causes water molecules in a bottle to stay inside the bottle, and the set of configurations in which the molecules are inside a bottle is much smaller than the set of configurations in which the molecules are each allowed to take a position either inside or outside the bottle.
In our framework we have the following:

The system consists of a bottle, a bottle cap, and water molecules. The configuration space consists of all the possible spatial arrangements of water molecules, either inside or outside the bottle.

The basin of attraction is the set of configurations in which the water molecules are inside the bottle.

The target configuration set is the same as the basin of attraction.

This is not an optimizing system for two reasons.
First, the target configuration set is no smaller than the basin of attraction. To be an optimizing system there must be a tendency to evolve from any configuration within a basin of attraction towards a smaller target configuration set, but in this case the system merely remains within the set of configurations in which the water molecules are inside the bottle. This is no different from a rock sitting on a beach: due to basic chemistry there is a tendency to remain within the set of configurations in which the molecules comprising the rock are physically bound to one another, but it has no tendency to evolve from a wide basin of attraction towards a small set of target configurations.
Second, the bottle cap system is not robust to perturbations, since if we perturb the position of a single water molecule so that it is outside the bottle, there is no tendency for it to move back inside the bottle. This is really just the first point above restated, since if there were a tendency for water molecules moved outside the bottle to evolve back towards a configuration in which all the water molecules were inside the bottle, then we would have a basin of attraction larger than the target configuration set.
Example: the human liver
Filan also asks whether one’s liver should be considered an optimizer. Suppose we observe a human working to make money.
If this person were deprived of a liver, or if their liver stopped functioning, they would presumably be unable to make money. So are we then to view the liver as an optimizer working towards the goal of making money? Filan asks this question as a challenge to Yudkowsky's definition of optimization, since it seems absurd to view one's liver as an optimizer working towards the goal of making money, yet Yudkowsky's definition of optimization might classify it as such.

In our framework we have the following:

- The system consists of a human working to make money, together with the whole human economy and world.
- The basin of attraction consists of the configurations in which there is a healthy human (with a healthy liver) having the goal of making money.
- The target configurations are those in which this person's bank balance is high. (Interestingly, there is no upper bound here, so there is no fixed point but rather a continuous gradient.)

We can expect that this person is capable of overcoming a reasonably broad variety of obstacles in pursuit of making money, so we recognize that this overall system (the human together with the whole economy) is an optimizing system.
But Filan would surely agree on this point, and his question is more specific: he is asking whether the liver is an optimizer.

In general we cannot expect to decompose optimizing systems into an engine of optimization and an object of optimization. We can see that the system has the characteristics of an optimizing system, and we may identify parts, including in this case the person's liver, that are necessary for these characteristics to exist, but we cannot in general identify any crisp subset of the system as that which is doing the optimization. And picking various subcomponents of the system (such as the person's liver) and asking "is this the part that is doing the optimization?" does not in general have an answer.

By analogy, suppose we looked at a planet orbiting a star and asked: "which part here is doing the orbiting?" Is it the planet or the star that is the "engine of orbiting"? Or suppose we looked at a car and noticed that the fuel pump is a complex piece of machinery without which the car's locomotion would cease. We might ask: is this fuel pump the true "engine of locomotion"?
These questions don't have answers, because they mistakenly presuppose that we can identify a subsystem that is uniquely responsible for the orbiting of the planet or the locomotion of the car. Asking whether a human liver is an "optimizer" is similarly mistaken: we can see that the liver is a complex piece of machinery that is necessary in order for the overall system to exhibit the characteristics of an optimizing system (robust evolution towards a target configuration set), but beyond this it makes no more sense to ask whether the liver is a true "locus of optimization".

So rather than answering Filan's question in either the positive or the negative, the appropriate move is to dissolve the concept of an optimizer, and instead ask whether the overall system is an optimizing system.

Example: the universe as a whole

Consider the whole physical universe as a single closed system. Is this an optimizing system?

The second law of thermodynamics tells us that the universe is evolving towards a maximally disordered thermodynamic equilibrium, in which it cycles through various max-entropy configurations.
We might then imagine that the universe is an optimizing system in which the basin of attraction is all possible configurations of matter and energy, and the target configuration set consists of the max-entropy configurations.

However, this is not quite accurate. Out of all possible configurations of the universe, the vast majority of configurations are at or close to maximum entropy. That is, if we sample a configuration of the universe at random, we have only an astronomically tiny chance of finding anything other than a close-to-uniform gas of basic particles. If we define the basin of attraction as all possible configurations of matter in the universe, and the target configuration set as the set of max-entropy configurations, then the target configuration set actually contains almost the entirety of the basin of attraction, with the only configurations that are in the basin of attraction but not the target configuration set being the highly unusual configurations of matter containing stars, galaxies, and so on.

For this reason the universe as a whole does not qualify as an optimizing system under our definition.
(Or perhaps it would be more accurate to say that it qualifies as an extremely weak optimizing system.)

Power sources and entropy

The second law of thermodynamics tells us that any closed system will eventually tend towards a maximally disordered state in which matter and energy are spread approximately uniformly through space. So if we were to isolate one of the systems explored above inside a sealed chamber and leave it for a very long period, then eventually whatever power source we put inside the sealed chamber would become depleted; eventually after that, every complex material or compound in the system would degrade into its base products; and finally we would be left with a chamber filled with a uniform gaseous mixture of whatever base elements we originally put in.

So in this sense there are no optimizing systems at all, since any of the systems above evolve towards their target configuration sets only for a finite period of time, after which they degrade and evolve towards a max-entropy configuration.

This is not a very serious challenge to our definition of optimization, since it is common throughout physics and computer science to study various "steady-state" or "fixed point" systems even though the same objection could be made about any of them.
We say that a thermometer can be used to build a heat regulator that will keep the temperature of a house within a desired range, and we do not usually need to add the caveat that eventually the house and regulator will degrade into a uniform gaseous mixture due to the heat death of the universe.

Nevertheless, two possible ways to refine our definition are:

- We could stipulate that some power source is provided externally to each system we analyze, and then perform our analysis conditional on the existence of that power source.
- We could specify a finite time horizon and say that "a system is an optimizing system if it tends towards a target configuration set up to time T".

Connection to dynamical systems theory

The concept of "optimizing system" in this essay is very close to that of a dynamical system with one or more attractors. We offer the following remarks on this connection.

- A general dynamical system is any system with a state that evolves over time as a function of the state itself. This encompasses a very broad range of systems indeed!
- In dynamical systems theory, an attractor is the term used for what we have called the target configuration set. A fixed point attractor is, in our language, a target configuration set with just one element, such as when computing the square root of two.
- A limit cycle is, in our language, a system that eventually stably loops through a sequence of states, all of which are in the target configuration set, such as a satellite in orbit.
- We have discussed systems that evolve towards target configurations along some dimensions but not others (e.g. the ball in a valley). We have not yet discovered whether dynamical systems theory explicitly studies attractors that operate along a subset of the system's dimensions.
- There is a concept of "well-posedness" in dynamical systems theory that justifies the identification of a mathematical model with a physical system. The conditions for a model to be well-posed are (1) that a solution exists (i.e. the model is not self-contradictory), (2) that there is a unique solution (i.e. the model contains enough information to pick out a single system trajectory), and (3) that the solution changes continuously with the initial conditions (i.e. the behavior of the system is not too chaotic).
This third condition seems related to, but not quite equivalent to, our notion of robustness, since robustness as we define it additionally requires that the system continue to evolve towards the same attractor state despite perturbations. Exploring this connection may present an interesting avenue for future investigation.

Conclusion

We have proposed a concept that we call "optimizing systems" to describe systems that have a tendency to evolve towards a narrow target configuration set when started from any point within a broader basin of attraction, and that continue to do so despite perturbations.

We have analyzed optimizing systems along three dimensions:

- Robustness, which measures the number of dimensions along which the system is robust to perturbations, and the magnitude of perturbation along these dimensions that the system can withstand.
- Duality, which measures the extent to which an approximate "engine of optimization" subsystem can be identified.
- Retargetability, which measures the extent to which the system can be transformed via microscopic perturbations into an equally robust optimizing system but with a
different target configuration set.

We have argued that the "optimizer" concept rests on an assumption that optimizing systems can be decomposed into an engine and an object of optimization (or agent and environment, or mind and world). We have described systems that do exhibit optimization yet cannot be decomposed this way, such as the tree example. We have also pointed out that, even among those systems that can be decomposed approximately into engine and object of optimization (for example, a robot moving a ball around), we will not in general be able to meaningfully answer the question of whether arbitrary subcomponents of the agent are an optimizer or not (cf. the human liver example).

Therefore, while the "optimizer" concept clearly still has much utility in designing intelligent systems, we should be cautious about taking it as a primitive in our understanding of the world. In particular, we should not expect questions of the form "is X an optimizer?" to always have answers.
Alex Flint, 20 Jun 2020

Part of the sequence: Alignment & Agency
Previous: An overview of 11 proposals for building safe advanced AI
Next: Search versus design

Review by Vanessa Kosoy, 16 Dec 2021:

In this post, the author proposes a semiformal definition of the concept of "optimization". This is potentially valuable, since "optimization" is a word often used in discussions about AI risk, and much confusion can follow from sloppy use of the term or from different people understanding it differently.
While the definition given here is a useful perspective, I have some reservations about the claims made about its relevance and applications.

The key paragraph, which summarizes the definition itself, is the following:

An optimizing system is a system that has a tendency to evolve towards one of a set of configurations that we will call the target configuration set, when started from any configuration within a larger set of configurations, which we call the basin of attraction, and continues to exhibit this tendency with respect to the same target configuration set despite perturbations.

In fact, "continues to exhibit this tendency with respect to the same target configuration set despite perturbations" is redundant: clearly, as long as the perturbation doesn't push the system out of the basin, the tendency must continue.

This is what is known as an "attractor" in dynamical systems theory. For comparison, here is the definition of "attractor" from Wikipedia:

In the mathematical field of dynamical systems, an attractor is a set of states toward which a system tends to evolve, for a wide variety of starting conditions of the system.
System values that get close enough to the attractor values remain close even if slightly disturbed.

The author acknowledges this connection, although he also makes the following remark:

We have discussed systems that evolve towards target configurations along some dimensions but not others (e.g. ball in a valley). We have not yet discovered whether dynamical systems theory explicitly studies attractors that operate along a subset of the system's dimensions.

I find this remark confusing. An attractor that operates along a subset of the dimensions is just an attractor submanifold. This is completely standard in dynamical systems theory.

Given that the definition itself is not especially novel, the post's main claim to value is via its applications. Unfortunately, some of the proposed applications seem to me poorly justified. Specifically, I want to talk about two major examples: the claimed relationship to embedded agency and the claimed relationship to comprehensive AI services.

In both cases, the main shortcoming of the definition is that there is an essential property of AI that this definition doesn't capture at all.
The author does acknowledge that "goal-directed agent system" is a distinct concept from "optimizing system". However, he doesn't explain how they are distinct.

One way to formulate the difference is as follows: agency = optimization + learning. An agent is not just capable of steering a particular universe towards a certain outcome; it is capable of steering an entire class of universes, without knowing in advance in which universe it was placed. This underlies all of RL theory; it is implicit in Shane Legg's definition of intelligence and my own[1]; it is what Yudkowsky calls "cross domain".

The issue of learning is not just nitpicking: it is crucial for delineating the boundary around "AI risk", and delineating the boundary is crucial for thinking constructively about solutions. If we ignore learning and just talk about "optimization risks" then we will have to include the risk of pandemics (because bacteria are optimizing for infection), the risk of false vacuum collapse in particle accelerators (because vacuum bubbles are optimizing for expanding), the risk of runaway global warming (because it is optimizing for increasing temperature), et cetera.
But these are very different risks that require very different solutions.

There is another, less central, difference: the author requires a particular set of "target states", whereas in the context of agency it is more natural to consider utility functions, which means there is a continuous gradation of states rather than just "good states" and "bad states". This is related to the difference the author points out between his definition and Yudkowsky's:

When discerning the boundary between optimization and non-optimization, we look principally at robustness — whether the system will continue to evolve towards its target configuration set in the face of perturbations — whereas Yudkowsky looks at the improbability of the final configuration.

The improbability of the final configuration is a continuous metric, whereas just arriving or not arriving at a particular set is discrete.
About embedded agency, the author writes:

"One could view the Embedded Agency work as enumerating the many logical pitfalls one falls into if one takes the 'optimizer' concept as the starting point for designing intelligent systems, rather than 'optimizing system' as we propose here."

The correct starting point is "agent", defined in the way I gestured at above. If instead we start with "optimizing system", then we throw away the baby with the bathwater, since the crucial aspect of learning is ignored. This is an essential property of the embedded agency problem: arguably the entire difficulty is about how we can define learning without introducing unphysical dualism (indeed, I have recently addressed this problem, and "optimizing system" doesn't seem very helpful there).

About comprehensive AI services:

"Our perspective is that there is a specific class of intelligent systems (which we call optimizing systems) that are worthy of special attention and study due to their potential to reshape the world. The set of optimizing systems is smaller than the set of all AI services, but larger than the set of goal-directed agentic systems."

What is an example of an optimizing AI system that is not agentic? The author doesn't give such an example and instead talks about trees, which are not AIs. I agree that the class of dangerous systems is substantially wider than the class of systems which were explicitly designed with agency in mind. However, this is precisely because agency can arise from such systems even when not explicitly designed, and moreover this is hard to avoid if the system is to be powerful enough for pivotal acts. It is not because there is some class of "optimizing AI systems" intermediate between "agentic" and "non-agentic".

To summarize, I agree with and encourage the use of tools from dynamical systems theory to study AI. However, one must acknowledge the correct scope of these tools and what they don't do. Moreover, more work is needed before truly novel conclusions can be obtained by these means.

[1] Modulo issues with traps, which I will not go into atm.
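The agency = optimization + learning slogan can be illustrated with a minimal sketch (everything here, including the names `run`, `make_blind`, and `agent`, is invented for illustration, not taken from the post): a hard-wired optimizer steers exactly one universe to its target, while an agent uses feedback to steer every universe in a class.

```python
def run(policy, goal, steps=50):
    # Simulate one "universe": a walker on the integer line whose policy only
    # ever observes the sign of (goal - position), i.e. feedback from this universe.
    pos = 0
    for _ in range(steps):
        obs = (goal > pos) - (goal < pos)
        pos += policy(obs)
    return pos

def make_blind(assumed_goal):
    # Optimization without learning: a plan hard-wired for the one universe
    # where the goal sits at `assumed_goal`; the feedback signal is ignored.
    state = {"pos": 0}
    def policy(obs):
        step = (assumed_goal > state["pos"]) - (assumed_goal < state["pos"])
        state["pos"] += step
        return step
    return policy

# An agent: it steps in the direction the feedback indicates, so it works
# without knowing in advance which universe (which goal) it was placed in.
agent = lambda obs: obs

for goal in (5, -7, 20):
    print(goal, run(make_blind(5), goal), run(agent, goal))
```

The hard-wired plan succeeds only in the universe matching its assumption and ends at 5 regardless of the true goal; the agent converges in every case, which is the sense in which learning is load-bearing in this notion of agency.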
Rohin Shah · 21 Jun 2020 20:03 UTC · LW: 59 · AF: 28

This is excellent, it feels way better as a definition of optimization than past attempts :) Thanks in particular for the academic style, specifically relating it to previous work; it made it much more accessible for me.

Let's try to build up some core AI alignment arguments with this definition.

Task: A task is simply an "environment" along with a target configuration set. Whenever I talk about a "task" below, assume that I mean an "interesting" task, i.e. something like "build a chair", as opposed to "have the air molecules be in one of these particular configurations".

Solving a task: An object O solves a task T if adding O to T's environment transforms it into an optimizing system for T's target configuration set.

Performance on the task: If O solves task T, its performance is quantified by how quickly it reaches the target configuration set, and how robust it is to perturbations.

Generality of intelligence: The generality of O's intelligence is a function of the number and diversity of tasks T that it can solve, as well as its performance on those tasks.

Optimizing AI: A computer program for which there exists an interesting task such that the computer program solves that task. This isn't exactly right, as it includes e.g. accounting programs or video games, which when paired with a human form an optimizing system for correct financials and winning the game, respectively. You might be able to fix this by saying that the optimizing system has to be robust to perturbations in any human behavior in the environment.

AGI: An optimizing AI whose generality of intelligence is at least as great as that of humans.

Argument for AI risk: As optimizing AIs become more and more general, we will apply them to more economically useful tasks T. However, they also become more and more robust to perturbations, possibly including perturbations such as "we try to turn off the AI". As a result, we might eventually have AIs that form strong optimizing systems for some task T that isn't the one we actually wanted, which tends to be bad due to fragility of value.

Deep learning AGI implies mesa optimization: Since deep learning is so sample inefficient, it cannot reach human levels of performance if we apply deep learning directly to each possible task T. (For example, it has to relearn how the world works separately for each task T.)
As a result, if we do get AGI primarily via deep learning, it must be that we used deep learning to create a new optimizing AI system, and that system was the AGI.

Argument for mesa optimization: Due to the complexity and noise in the real world, most economically useful tasks require setting up a robust optimizing system, rather than directly creating the target configuration state. (See also the importance of feedback for more on this intuition.) It seems likely that humans will find it easier to create algorithms that then find AGIs that can create these robust optimizing systems, rather than creating an algorithm that is directly an AGI. (The previous argument also applies: this is basically just a generalization of the previous point to arbitrary AI systems, instead of only deep learning.)

I want to note that under this approach the notions of "search" and "mesa objective" are less natural, which I see as a pro of this approach (see also here): the argument is that we'll get a general inner optimizing AI, but it doesn't say much about what task that AI will be optimizing for (and it could be an optimizing AI that is retargetable by human instructions).

Outer alignment: ??? Seems hard to formalize in this framework. This makes me feel like outer alignment is less important as a concept.
(I also don't particularly like formalizations outside of this framework.)

Inner alignment: Ensuring that (conditional on mesa optimization occurring) the inner AGI is aligned with the operator / user, that is, combined with the user it forms an optimizing system for "doing what the user wants". (Note that this is explicitly not intent alignment, as it is hard to formalize intent alignment in this framework.)

Intent alignment: ??? As mentioned above, it's hard to formalize in this framework, as intent alignment really does require some notion of "motivation", "goals", or "trying", which this framework explicitly leaves out. I see this as a con of this framework.

Expected utility maximization: One particular architecture that could qualify as an AGI (if the utility function is treated as part of the environment, and not part of the AGI). I see the fact that EU maximization is no longer highlighted as a pro of this approach.

Wireheading: Special case of the argument for AI risk with a weird task of "maximize the number in this register". Unnatural in this framing of the AI risk problem.
I see this as a pro of this framing of the problem, though I expect people disagree with me on this point.

Alex Flint · 29 Jun 2020 19:26 UTC · LW: 8 · AF: 6

Thanks for the very thoughtful comment Rohin. I was on retreat last week after I published the article, and upon returning to computer usage I was delighted by the engagement from you and others.

"Generality of intelligence: The generality of O's intelligence is a function of the number and diversity of tasks T that it can solve, as well as its performance on those tasks."

I like this.

We'll presumably need to give O some information about the goal / target configuration set for each task. We could say that a robot capable of moving a vase around is a little bit general, since we can have it solve the tasks of placing the vase at many different locations by inputting some latitude/longitude into some appropriate memory location. But this means we're actually pasting in a different object O for each task T: each of the objects differs in those memory locations into which we're pasting the latitude/longitude. It might be helpful to think of an "agent schema" function that maps goals to objects, so we take the goal part of the task, compute the object O for that goal, then paste this object into the environment.

It's also important that O be able to solve the task for a reasonably broad range of environments.

"Inner alignment"

Perhaps we could look at it this way: take a system containing a human that is trying to get something done. This is presumably an optimizing system, as humans often robustly move their environment towards some desired target configuration set. Then an inner-aligned AI is an object O such that adding it to this environment does not change the target configuration set, but does change the speed and/or robustness of convergence to that target configuration set.

"Intent alignment"

Yup, very difficult to say much about intentions using the pure outside-view approach of this framework. Perhaps we could say that an intent-aligned AI is an inner-aligned AI modulo less robustness.
Or perhaps we could say that an intent-aligned AI is an AI that would achieve the goal in a large set of benign environments, but might not achieve it in the presence of unlikely mistakes, unlikely environmental conditions, or the presence of other powerful basins of attraction. But this doesn't really get at the spirit of Paul's idea, which I think is about really looking inside the AI and understanding its goals.

Rohin Shah · 29 Jun 2020 21:23 UTC · LW: 4 · AF: 3

+1 to all of this.

"We'll presumably need to give O some information about the goal / target configuration set for each task."

I was imagining that the tasks can come equipped with some specification, but some sort of counterfactual also makes sense. This also gets around issues of the AI system not being appropriately "motivated": e.g. I might be capable of performing the task "lock up puppies in cages", but I wouldn't do it, and so if you only looked at my behavior you couldn't say that I was capable of doing that task.

"But this doesn't really get at the spirit of Paul's idea, which I think is about really looking inside the AI and understanding its goals."

+1 especially to this.

TurnTrout · 22 Jun 2020 23:43 UTC · LW: 5 · AF: 3

Mild optimization: the easiest way to solve hard tasks may be to specify a proxy, which an AI maximizes. The AI steers into configurations which maximize the proxy function. Simple proxies don't usually have target sets which we like, because human value is complex. However, maybe we just want the AI to randomly select a configuration which satisfies the proxy, instead of finding the maximally-proxy-ness configuration, which may be bad due to extremal Goodhart. Quantilization tries to solve this by randomly selecting a target configuration from some top quantile, but this is sensitive to how world states are individuated.

Rohin Shah · 23 Jun 2020 17:54 UTC · LW: 8 · AF: 5

This makes sense, but I think you'd need a different notion of optimizing systems than the one used in this post. (In particular, instead of a target configuration set, you want a continuous notion of goodness, like a utility function / reward function.)

TurnTrout · 23 Jun 2020 18:15 UTC · LW: 2 · AF: 1

I'm saying the target set for non-mild optimization is the set of configurations which maximize proxy-ness. Just take the argmax. By contrast, we might want to sample uniformly at random from the set of satisficing configurations, which is much larger. (This is assuming a fixed initial state.)

Rohin Shah · 23 Jun 2020 18:49 UTC · LW: 4 · AF: 3

It sounds like you're assuming that the target configuration set is built into the AI system.
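The argmax-versus-quantilizer contrast in this subthread can be sketched in a few lines (a toy model, assuming a uniform base distribution over configurations; the function names and the proxy/true-value split are invented for illustration, and this is not the full quantilizer formalism):

```python
import random

def argmax_policy(configs, proxy):
    # Hard optimization: deterministically pick the proxy-maximizing configuration.
    return max(configs, key=proxy)

def quantilize(configs, proxy, q=0.1, rng=random):
    # Mild optimization: sample uniformly from the top q fraction of
    # configurations as ranked by the proxy.
    ranked = sorted(configs, key=proxy, reverse=True)
    top = ranked[: max(1, int(len(ranked) * q))]
    return rng.choice(top)

# Toy world: the proxy overrates a few extremal configurations that true value
# hates, while most high-proxy configurations are also fine by true value.
configs = list(range(1000))
proxy = lambda c: c
true_value = lambda c: float("-inf") if c >= 995 else c  # top 5 proxy states are disasters

random.seed(0)
print(true_value(argmax_policy(configs, proxy)))  # argmax lands exactly on a disaster
print(true_value(quantilize(configs, proxy)))     # a draw from the top decile, usually safe
```

With q = 0.1 the quantilizer hits a disaster only with probability 5/100, while argmax hits one with certainty; shrinking q trades safety against proxy performance, and, as TurnTrout notes, the whole construction is sensitive to how the configurations are individuated.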
According to me, a major point of this post / framework is to avoid that assumption altogether, and only describe problems in terms of the actual observed system behavior. (This is why within this framework I couldn't formalize outer alignment, and why wireheading and the search / mesa-objective split are unnatural.)

TurnTrout · 23 Jun 2020 19:48 UTC · LW: 4 · AF: 3

I see the tension you're pointing at. I think I had in mind something like "an AI is reliably optimizing utility function u over the configuration space (but not necessarily over universe-histories!) if it reliably moves into high-rated configurations", and you could draw different epsilon-neighborhoods of optimality in configuration space. It seems like you should be able to talk about dog-maximizers without requiring that the agent robustly end up in the maximum-dog configurations (and not in max-minus-one-dog configs). I'm still confused about parts of this.

ESRogs · 30 Jun 2020 23:59 UTC · LW: 4 · AF: 3

"Deep learning AGI implies mesa optimization: Since deep learning is so sample inefficient, it cannot reach human levels of performance if we apply deep learning directly to each possible task T. (For example, it has to relearn how the world works separately for each task T.) As a result, if we do get AGI primarily via deep learning, it must be that we used deep learning to create a new optimizing AI system, and that system was the AGI."

I don't quite understand what this is saying.

Suppose we train a giant deep learning model via self-supervised learning on a ton of real-world data (like GPT-N, but with other sensory modalities besides text), and then we build a second system designed to provide a nice interface to the giant model. We'd give task specifications to the interface, and it would have some smarts about how to consult the model to figure out what to do. (The interface might also be learned, via reinforcement or supervised learning, or it might be hand-coded.)

It seems plausible to me that a system comprising these two pieces, the model and the interface, could be an AGI according to the definition here, in that when combined with a very wide variety of environments (including the task specification in the environment), it could perform at least as well as a human. And since most of the smarts seem like they'd be in the model rather than the interface, I'd count it as getting AGI "primarily via deep learning", even if the interface was hand-coded. But it's not clear to me whether that would count as using deep learning to "create a new optimizing AI system", which is itself the AGI.
The whole system is an Optimizing AI, according to the definition given above, but neither of the two parts is by itself, and it doesn't seem to have the flavor of mesa-optimization, as I understand it. So it seems like a contradiction to the quoted claim.

Have I misunderstood what you're saying here, or do you disagree with the characterization I gave of the hypothetical model + interface system? (Or have I perhaps misunderstood mesa-optimization?)

Rohin Shah · 1 Jul 2020 17:34 UTC · LW: 5 · AF: 4

"The whole system is an Optimizing AI, according to the definition given above, but neither of the two parts is by itself"

Yeah, I'm talking about the whole system.

"it doesn't seem to have the flavor of mesa-optimization"

Yeah, I agree it doesn't fit the explanation / definition in Risks from Learned Optimization. I don't like that definition, and usually mean something like "running the model parameters instantiates a computation that does 'reasoning'", which I think does fit this example. I mentioned this a bit later in the comment:

"I want to note that under this approach the notions of 'search' and 'mesa objective' are less natural, which I see as a pro of this approach [...]: the argument is that we'll get a general inner optimizing AI, but it doesn't say much about what task that AI will be optimizing for (and it could be an optimizing AI that is retargetable by human instructions)."

matthewp · 31 Mar 2022 13:09 UTC · 1 point

Thanks for the additions here. I'm also unsure how to gel this definition (which I quite like) with the inner/outer/mesa terminology. Here is my knuckle-dragging model of the post's implication:

target_set = f(env, agent)

So if we plug in a bunch of values for agent and hope for the best, the target_set we get might not be what we desired. This would be misalignment. Whereas the alignment task is more like: fix target_set and env, and solve for agent.

The stuff about mesa optimisers mainly sounds like inadequate (narrow) modelling of what env, agent and target_set are. Usually fixating on some fraction of the problem (the win-the-battle, lose-the-war problem).

Aryeh Englander · 1 Jul 2020 15:46 UTC · LW: 32 · AF: 12

I shared this essay with a colleague where I work (Johns Hopkins University Applied Physics Lab).
Here are her comments, which she asked me to share:

This essay proposes a very interesting definition of optimization as the manifestation of a particular behavior of a closed, physical system. I haven't finished thinking this over, but I suspect it will be (as is suggested in the essay) a useful construct. The reasoning leading to the definition is clearly laid out (thank you!), with examples that are very useful in understanding the concept. The downside of being clearly laid out, however, is that it makes critique easier. I have a few thoughts about the reasoning in the essay.

The first thing I will note is that the essay gives three definitions for an optimizing system. These definitions are close, but not exactly equivalent. The nuances can be important. For example, that the target configuration set and the basin of attraction cannot be equal is obvious; that is made explicit in definition 3, but only implied in definitions 1 and 2. A bigger issue is that there are no criteria or rationale for their extent and relative size. For example, the essay offers two reasons why the poster child of non-optimizers, the bottle with a cap, is not an optimizing system; they both arise from the rather arbitrary definition of the basin of attraction as equal to the target configuration set. I see no necessary reason why the basin of attraction couldn't be defined as the set of all configurations of water molecules both inside and outside the bottle. That way, the definitional requirement of a target configuration set smaller than the basin of attraction is met. The important point is: will water molecules in this new, larger basin of attraction tend to the target configuration set? Let's suppose that the capped bottle is in a sealed room (not necessary, but easier to think about), and that the cap is made of a special material that allows water molecules to pass through it in only one direction: from outside the bottle to inside. The water molecules inside the bottle stay inside the bottle, as for any cap. The water molecules inside the room, but outside the bottle, are zooming about (thermodynamic energy), bouncing off the walls, each other, and the bottle. Although it will take some time, sooner or later all the molecules outside the bottle will hit the bottle cap, go through, and be trapped in the bottle. Voila! Originally, the bottle-with-a-cap system was a non-optimizing system by definition; the bottle cap type was irrelevant and could have been the rather special one I described. Simply by changing the definition of the basin of attraction, we could turn it into an optimizing system.
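The one-way-cap thought experiment is easy to check numerically. The sketch below (invented for illustration; the sizes, positions, and the name `simulate` are arbitrary assumptions) models molecules as random walkers and the bottle as an absorbing region, and it even applies a mid-run perturbation:

```python
import random

def simulate(n_molecules=20, room=30, bottle=range(0, 5), steps=20_000, seed=0):
    # Random walk on positions 0..room-1. The one-way cap makes `bottle` an
    # absorbing region: a molecule that enters never leaves.
    rng = random.Random(seed)
    pos = [room - 1] * n_molecules          # start every molecule far from the bottle
    for t in range(steps):
        for i, p in enumerate(pos):
            if p in bottle:                 # trapped; the cap only admits inward
                continue
            pos[i] = min(room - 1, max(0, p + rng.choice((-1, 1))))
        if t == steps // 2:                 # perturbation: fling one molecule back out
            pos[0] = room - 1
    return sum(p in bottle for p in pos)

print(simulate())  # all 20 molecules end up inside, despite the perturbation
```

Given enough steps, every walker falls into the absorbing region and stays there, so whether we call this an optimizing system depends only on how the basin of attraction was drawn, not on any change in the physics, which is the commenter's point.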
Further, the original, "non-optimizing" system (with the original definitions of the basin of attraction and target set) would have behaved exactly the same as my optimizing system. On the other hand, changing the bottle cap from our special one to a regular cap will change the system into a non-optimizing system, regardless of the definitions of the basin of attraction and the target configuration set. Perhaps we should insist that a properly formed system description has a basin of attraction that is larger than the target set, and count on the system behavior to make the optimizing/non-optimizing distinction.

Definitions 1 and 2 both contain the phrase "a small set of target configurations", which implies that the target set is much smaller than the basin of attraction. This is a problem for the notion of the universe as a system with maximum entropy as the target configuration set, because the target set is most of the possible configurations. For this reason, the essay's author concludes that the universe-with-entropy system is not an optimizing system, or at best, a weak one. Stars, galaxies, black holes: there are strong forces that pull matter into these structures. I would say that any system that has succeeded in getting nearly everything within the basin of attraction into the target configuration is a strong optimizer! Regardless of the way we choose to think about strong or weak, the universe is a system that tends to a set of configurations smaller than the set of possible configurations despite perturbations (the occasional house-building project, for example!). Personally, I see no value in a definitional limitation. The behavior of the system (tending toward a smaller set of configurations out of a larger set) should govern the definition of an optimizing system, regardless of the relative sizes of the sets.

Between the universe-with-entropy and bottle-with-a-cap systems, I question the utility of the "all configurations >= basin of attraction >> target configuration set" structure in the definition of optimizing systems. I believe it is worth thinking about what the necessary relationships among these configurations are, and how they are chosen.

The example of the billiards system raised another (to me) interesting question. The essay did not offer a system description, but says: "Consider a billiard table with some billiard balls that are currently bouncing around in motion.
Left alone, the balls will eventually come to rest in some configuration…. If we reach in while the billiard balls are bouncing around and move one of the balls that is in motion, the system will now come to rest in a different configuration. Therefore this is not an optimizing system, because there is no set of target configurations towards which the system evolves despite perturbations."

This example has some odd features. Friction between the balls and the table surface, along with the loss of energy during non-elastic collisions, causes the balls to slow down and stop. The minutiae of their travels determine where they stop. The final arrangement is unpredictable (OK, it could be modeled given complete information, but let's skip that as beside the point), and any arrangement is as likely as another. This suggests that the billiards system is a non-optimizing system even without the proposed perturbation of moving the balls around while they are in motion.

Looked at another way, the billiards system does tend to a certain target configuration set, while friction and the non-elasticity of the collisions are perturbations. If we make the surface frictionless and the collisions perfectly elastic, the balls will bounce around the table without stopping. Much like the water molecules in the bottle-with-a-cap example, each will eventually fall into one pocket or another during its travels. Once in a pocket, a ball cannot get out, and thus eventually all will end up in the pockets. So this system tends to a target configuration set of all balls in pockets. Adding back in the perturbing friction and energy loss does not mean that this system is not tending to the target configuration set. Reaching in and moving a ball to a different point, or even redirecting a ball heading for a pocket, will not keep this system from tending towards the target configuration. It seems as though the billiards system was an optimizing system all along!

The larger point is that, by definition, an optimizing system is an optimizing system even if there is a set of perturbations that prevents it from ever reaching the target configuration! "Tending toward," not "reaching," a target configuration set is in all three definitions. It is worth thinking about an optimizing system that never actually optimizes. This may have some bearing on the AGI question.

[And for you readers who, like me, would say, whoa—it is possible that the balls will enter some repeating pattern of motion where some do not enter pockets.
Maybe we need a robot to move the balls around randomly if they seem stuck, just like the ball-in-valley-plus-robot system where the robot moves the ball over barriers. I maintain that the point is the same.]

The satellite system illustrates (perhaps an obvious point) that the definition of the target configuration set can change a single system from optimizing to non-optimizing. What is a little more subtle is that the definition of the system boundaries is essential to the characterization of the system as optimizing or non-optimizing, even if the behavior of the system is the same under both definitions. In particular, what we consider to be part of the system and what is considered to be a perturbation can flip a system between characterizations. [This latter point is illustrated by the billiards system as well, as I will explain below.]

The essay says that a satellite in orbit is a non-optimizing system because, if its position or velocity is perturbed, it has no tendency to return to its original orbit; that is, the author defines the target configuration as a particular orbit. With respect to another target configuration that might be described as "a scorched pile of junk on the surface of the Earth," a satellite in orbit is an optimizing system exactly like a ball in a valley. As soon as the launch rocket stops firing, a satellite starts falling toward the center of the Earth, because atmospheric drag and solar radiation pressure continuously decrease the component of the satellite's velocity perpendicular to the force of gravity. So, unless a perturbation is big enough to send it out of orbit altogether, a satellite tends towards a target configuration of junk located on Earth's surface.

Since a particular orbit is usually the desired target configuration (!), many satellites incorporate a rocket system to force them to stay in a chosen orbit. If a rocket system is included in the system definition, then the satellite is an optimizing system relative to the desired orbit. What is a little more interesting: with respect to the junk-on-the-Earth target set, drag and solar pressure are part of the optimizing system, and an orbit-correction system is a perturbation. If the target set is the particular orbit the satellite started in, these definitions swap.

This observation has bearing on the billiards system example. If we include drag and non-elastic collisions as part of the billiards system, then the system is non-optimizing.
If we see them as perturbations outside the system, then the billiards system is optimizing. I find this flexibility a little curious, although I haven't completely thought through the implications.

A completely different sort of question is suggested by the section on Drexler. There the essay sets out a hierarchy of all AI systems, optimizing systems, and goal-directed agent systems. This makes sense with respect to AI systems, but I do not see how optimizing systems, as defined, can be wholly contained within the category of AI systems, unless you define AI systems pretty broadly. For example, I think that pretty much any control system is an optimizing system by the definition in the essay. If we accept this definition of optimizing system, and hold that all optimizing systems are a subset of AI systems, do we have to accept our thermostats as AI systems? What about the program that determined the square root of 2? Is that AI? Is this an issue for this definition, or does its broadness matter in an AI context?

And a nitpick: the first example of an optimizing system offered in the essay is a program calculating the square root of 2. It meets the definition of an optimizing system, but it seems to contradict the earlier assertion that "… optimizing systems are not something that are designed but are discovered." The algorithm and the program were both designed. I'm not sure why this point is necessary. Either I do not understand something fundamental, or the only purpose of the statement of discovery is to give people like me something to argue about!

In summary, the definition in the essay suggests a few questions that could have a bearing on its application:

How do we choose the basin of attraction relative to the target configuration set, if our choice can change the status of the system from optimizing to non-optimizing and vice versa?

Is it an issue that an optimizing system may never actually optimize?

How do we choose what is part of the system versus a perturbation outside the system, when our choice changes the status of the system as optimizing or non-optimizing?

All control systems are optimizing systems by the definition, but are all control systems AI systems? Does it matter?
If it does matter, how do we tell the difference?

For any of these, how do they affect our thinking about AI?

Finally, it might be better to have one consistent definition that covers all the possibilities, including (in my opinion) that perturbations may be confined to certain dimensions.

Aryeh Englander 2 Jul 2020 17:44 UTC LW: 4 AF: 2

This was actually part of a conversation I was having with this colleague regarding whether or not evolution can be viewed as an optimization process. Here are some follow-up comments to what she wrote above related to the evolution angle:

We could define the natural selection system as:

All configurations = all arrangements of matter on a planet (both arrangements that are living and those that are non-living)

Basin of attraction = all arrangements of matter on a planet that meet the definition of a living thing

Target configuration set = all arrangements of living things where the type and number of living things remains approximately stable.

I think that this system meets the definition of an optimizing system given in the Ground of Optimization essay. For example, predator and prey co-evolve to be about "equal" in survival ability. If a predator becomes so much better than its prey that it eats them all, the predator will die out along with its prey; the remaining animals will be in balance.
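The predator-prey balance described above can be illustrated with a toy (non-evolutionary) population model; the parameters and the equilibrium below are invented purely for illustration, not taken from the comment:

```python
def simulate(steps, dt=0.1, perturb_at=None):
    # Logistic prey growth plus linear predation; the fixed point of these
    # made-up parameters is prey = 40, predators = 6.
    h, p = 50.0, 4.0                      # start away from the balance point
    for i in range(steps):
        if i == perturb_at:
            h *= 0.5                      # perturbation: cull half the prey
        dh = 0.1 * h * (1 - h / 100) - 0.01 * h * p
        dp = 0.005 * h * p - 0.2 * p
        h, p = h + dt * dh, p + dt * dp
    return h, p

h1, p1 = simulate(10_000)
h2, p2 = simulate(10_000, perturb_at=5_000)
# Both runs settle near the same balanced state (about 40 prey, 6 predators),
# with or without the mid-run perturbation.
print(round(h1), round(p1), round(h2), round(p2))
```

In the essay's terms, the balanced state plays the role of the target configuration set, and the culling plays the role of a perturbation the system recovers from.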
I think this works for climate perturbations, etc., too.

HOWEVER, it should be clear that there are numerous ways in which this can happen – like the ball on a bumpy surface with a lot of convex "valleys" (local minima), there is not just one way that living things can be in balance. So to say that "natural selection optimized for intelligence" is not quite right – it just fell into a "valley" where intelligence happened. FURTHER, it's not clear that we have reached the local minimum! Humans may be that predator that is going to fall "prey" to its own success. If that happened (and any intelligent animals remained at all), I guess we could say that natural selection optimized for less-than-human intelligence!

Further, this definition of optimization has no connotation of "best" or even "better" – just "equal to a defined set." The word "optimize" is loaded, and its use in connection with natural selection has led to a lot of trouble in terms of human races, and humans versus animal rights.

Finally, in the essay's definition, there is no imperative that the target set be reached. As long as the set of living things is "tending" toward intelligence, then the system is optimizing. So even if natural selection was optimizing for intelligence, there is no guarantee that it will be achieved (in its highest manifestation). Like a billiards system where the table is slick (but not frictionless) and the collisions are close to elastic, the balls may come to rest with some of the balls outside the pockets. The reason I think this is important for AI research, especially AGI and ASI, is that perhaps we should be looking for those perturbations that will prevent us from ever reaching what we may think of as the target configuration, despite our best efforts.

johnswentworth 22 Jun 2020 22:05 UTC LW: 24 AF: 14

My biggest objection to this definition is that it inherently requires time. At a bare minimum, there needs to be an "initial state" and a "final state" within the same state space, so we can talk about the system going from outside the target set to inside the target set.

One class of cases which definitely seem like optimization but do not satisfy this property at all: one-shot non-iterative optimization.
For instance, I could write a convex function optimizer which works by symbolically differentiating the objective function and then algebraically solving for a point at which the gradient is zero. Is there an argument that I should not consider this to be an optimizer?

Alex Flint 29 Jun 2020 3:18 UTC LW: 24 AF: 9

"My biggest objection to this definition is that it inherently requires time."

Fascinating—but why is this an objection? Is it just the inelegance of not being able to look at a single time slice and answer the question of whether optimization is happening?

"One class of cases which definitely seem like optimization but do not satisfy this property at all: one-shot non-iterative optimization."

Yes, this is a fascinating case! I'd like to write a whole post about it. Here are my thoughts:

First, just as a fun fact, note that it's actually extremely rare to see any non-iterative optimization in practical usage. When we solve linear equations, we could use Gaussian elimination, but it's so unstable that in practice we use, most likely, the SVD, which is iterative.
When we solve a system of polynomial equations, we could use something like a Gröbner basis or the resultant, but it's so unstable that in practice we use something like a companion matrix method, which comes down to an eigenvalue decomposition, which is again iterative.

Consider finding the roots of a simple quadratic equation (i.e., solving a cubic optimization problem). We can use the quadratic formula to do this. But ultimately this comes down to computing a square root, which is typically (though not necessarily) computed with an iterative method.

That these methods (for solving linear systems, polynomial systems, and quadratic equations) have at their heart an iterative optimization algorithm is not accidental. The iterative methods involved are not some small or sideline part of what's going on. In fact, when you solve a system of polynomial equations using a companion matrix, you spend a lot of energy rearranging the system into a form where it can be solved via an eigenvalue decomposition, and then the eigenvalue decomposition itself is very much operating on the full problem. It's not some unimportant side operation.
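The square-root computation mentioned above can be sketched with Newton's (Heron's) iteration, and the sketch also shows the error-correcting behavior of iterative methods: corrupting the state mid-run does not prevent convergence. This is my own minimal illustration, not code from the essay or the comment.

```python
def sqrt2_newton(steps, perturb_at=None):
    # Newton's iteration for sqrt(2): x <- (x + 2/x) / 2.
    x = 1.0
    for i in range(steps):
        if i == perturb_at:
            x += 0.7   # "reach in and flip a bit": corrupt the state mid-run
        x = (x + 2.0 / x) / 2.0
    return x

clean = sqrt2_newton(20)
perturbed = sqrt2_newton(20, perturb_at=10)
# The iteration drives the error back toward zero even after the perturbation,
# so both runs land on sqrt(2) to machine precision.
print(clean, perturbed)
```

A straight-line (non-iterative) computation has no analogous way to recover from such a mid-run corruption, which is the contrast drawn in the surrounding comments.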
I find this fascinating.

Nevertheless, it is possible to solve linear systems, polynomial systems, etc. with non-iterative methods. These methods are definitely considered "optimization" by any normal use of that term. So in this way my definition doesn't quite line up with the common-language use of the word "optimization."

But these non-iterative methods actually do not have the core property that I described in the square-root-of-two example. If I reach in and flip a bit while a Gaussian elimination is running, the algorithm does not in any sense recover. Since the algorithm is just performing a linear sequence of steps, the error just grows and grows as the computation unfolds. This is the opposite of what happens if I reach in and flip a bit while an SVD is being computed: in that case the error will be driven back to zero by the iterative optimization algorithm.

You might say that my focus on error correction simply doesn't capture the common-language use of the term "optimization," as demonstrated by the fact that non-iterative optimization algorithms do not have this error-correcting property.
You would be correct! But perhaps my real response is that fundamentally I'm interested in these processes that somewhat mysteriously drive the state of the world towards a target configuration, and keep doing so despite perturbations. I think these are central to what AI and agency are. The term "optimizing system" might not be quite right, but it seems close enough to be compelling.

Thanks for the question—I clarified my own thinking while writing up this response.

johnswentworth 29 Jun 2020 4:50 UTC LW: 4 AF: 2

Another big thing to note in examples like, e.g., iteratively computing a square root for the quadratic formula or iteratively computing eigenvalues to solve a matrix: the optimization problems we're solving are subproblems, not the original full problem. These crucially differ from most of the examples in the OP in that the system's objective function (in your sense) does not match the objective function (in the usual intuitive sense). They're iteratively optimizing a subproblem's objective, not the "full" problem's objective.

That's potentially an issue for thinking about, e.g., AI as an optimizer: if it's using iterative optimization on subproblems, but using those results to perform some higher-level optimization in a non-iterative manner, then aligning the subproblem optimizers may not be synonymous with aligning the full AI. Indeed, I think a lot of reasoning works very much like this: we decompose a high-dimensional problem into coupled low-dimensional subproblems (i.e. "gears"), then apply iterative optimizers to the subproblems. That's exactly how eigenvalue algorithms work, for instance: we decompose the full problem into a series of optimization subproblems in narrower and narrower subspaces, while the "high-level" part of the algorithm (i.e. outside the subproblems) doesn't look like iterative optimization.

johnswentworth 30 Jun 2020 17:04 UTC LW: 3 AF: 2

"Fascinating—but why is this an objection? Is it just the inelegance of not being able to look at a single time slice and answer the question of whether optimization is happening?"

No, the issue is that the usual definition of an optimization problem (e.g. max_x f(x)) has no built-in notion of time, and the intuitive notion of optimization (e.g. "the system makes Y big") has no built-in notion of time (or at least linear time).
It's this really fundamental thing that isn't present in the "original problem," so to speak; it would be very surprising and interesting if time had to be involved when it's not present from the start. If I specifically try to brainstorm things-which-look-like-optimization-but-don't-involve-objective-improvement-over-time, then it's not hard to come up with examples:

Rather than a function value "improving" along linear time, I could think about a function value improving along some tree or DAG: e.g., in a heap data structure, we have a tree where the "function value" always "improves" as we move from any leaf toward the root. There, any path from a leaf to the root could be considered "time" (but the whole set of nodes at the "same level" can't be considered a time slice, because we don't have a meaningful way to compare whole sets of values; we could invent one, but it wouldn't actually reflect the tree structure).

The example from the earlier comment: a one-shot non-iterative optimizer.

A distributed optimizer: the system fans out, tests a whole bunch of possible choices in parallel, then selects the best of those.

Various flavors of constraint propagation, e.g. the simplex algorithm (and markets more generally).

Davidmanheim 24 Jun 2020 17:15 UTC LW: 4 AF: 2

I think this is covered in my view of optimization via selection, where "direct solution" is the third option. Any one-shot optimizer is implicitly relying completely on an internal model for decision making, rather than iterating, as I explain there. I think that is compatible with the model here, but it needs to be extended slightly to cover what I was trying to say there.

newstorkcity@gmail.com 23 Jun 2020 17:41 UTC 1 point

This model explicitly requires that you deal only with physical processes, so your convex function solver would require time to get from the starting state to the end state. If it is happening non-iteratively, then it would cease to be an optimizing system after it has completed the function, since there is no longer a target configuration.

johnswentworth 23 Jun 2020 18:12 UTC 2 points

I'm not sure what you're trying to say here. What's the state space (in which both the start and end state of the optimizer live), what's the basin of attraction (i.e. the set of allowed initial conditions), and what's the target region within the state space? And remember, the target region needs to be a subset of the allowed initial conditions.

newstorkcity 24 Jun 2020 14:27 UTC 1 point

The end state is the solution to the convex function being stored in some physical registers.
The initial state is those registers containing arbitrary data to be overwritten. It's not particularly interesting as optimization problems go (not a very large basin of attraction), but it fulfills the basic criteria. The unique thing about your example is that it solves once and then it is done (relative to the examples in the post), so it ceases to be an optimizing system once it finishes computing the solution to your convex function.

With a slight modification, you could be repeating this algorithm in a loop so it constantly recalculates a new function. Now the initial state can be some value in the result and input registers, and the target region is the set of input equations and appropriate solutions in the output registers. This widens the basin of attraction to both the input and output registers rather than just the output.

johnswentworth 24 Jun 2020 15:34 UTC 2 points

OK, two problems with this:

There's no reason why that target set would be smaller than the basin of attraction. Given one such optimization problem, there are no obvious perturbations we could make which would leave the result in the target region.

The target region is not a subset of the basin of attraction. The system doesn't evolve from a larger region to a smaller subset (as in the Venn-diagram visuals in the OP); it just evolves from one set to another.

The first problem explicitly violates the OP's definition of an optimizer, and the second problem violates one of the unspoken assumptions present in all of the OP's examples.

newstorkcity 24 Jun 2020 17:42 UTC 1 point

I don't believe that either of these points is true. In your original example, there is one correct solution for any convex function. I will assume there is a single hard-coded function for the following, but it can be extended to work for an arbitrary function.

The output register having the correct solution is the target set.

The output register having any state is the basin of attraction.

Clearly any specific number (or rather, the singleton of that number) is a subset of all numbers, so the target is a subset of the basin. And further, because "all numbers" has more than one element, the target set is smaller than the basin.

johnswentworth 24 Jun 2020 17:44 UTC 2 points

This argument applies to literally any deterministic program with nonempty output. Are you saying that every program is an optimizer?

newstorkcity 24 Jun 2020 18:03 UTC 1 point

Pretty much, yes, according to the definition given.
Like I said, not a particularly interesting optimization, but an optimization nonetheless. To extend on this, the basin of optimization is not any smaller than an iterative process acting on a single register (and if you loop the program, then the time horizon is the same). In both cases your basin is anything in that register and the target state is one particular number in that register. As far as I can tell the definition doesn't have any way of saying that one is "more of an optimizer" than the other. If anything, the fixed output is more optimized because it arrives more quickly.

johnswentworth 24 Jun 2020 19:42 UTC · 2 points · Parent
Ok, well, it seems like the one-shot non-iterative optimizer is an optimizer in a MUCH stronger sense than a random program, and I'd still expect a definition of optimization to say something about the sense in which that holds.

Richard_Ngo 22 Jun 2020 8:52 UTC · LW: 19 · AF: 7
This seems great, I'll read and comment more thoroughly later. Two quick comments:

It didn't seem like you defined what it meant to evolve towards the target configuration set. So it seems like either you need to commit to the system actually reaching one of the target configurations to call it an optimiser, or you need some sort of metric over the configuration space to tell whether it's getting closer to or further away from the target configuration set. But if you're ranking all configurations anyway, then I'm not sure it adds anything to draw a binary distinction between target configurations and all the others. In other words, can't you keep the definition in terms of a utility function, but just add perturbations?

Also, you don't cite Dennett here, but his definition has some important similarities. In particular, he defines several different types of perturbation (such as random perturbations, adversarial perturbations, etc.) and says that a system is more agentic when it can withstand more types of perturbations. Can't remember exactly where this is from—perhaps The Intentional Stance?

What links here? Bridging Expected Utility Maximization and Optimization by Whispermute (5 Aug 2022 8:18 UTC; 25 points)

Rohin Shah 23 Jun 2020 18:55 UTC · LW: 8 · AF: 5 · Parent

"It didn't seem like you defined what it meant to evolve towards the target configuration set."

+1 for swapping out the target configuration set with a utility function, and looking for a robust tendency for the utility function to increase. This would also let you express mild optimization (see this thread).

TurnTrout 23 Jun 2020 19:50 UTC · LW: 2 · AF: 1 · Parent
Would this work for highly non-monotonic utility functions?
Richard_Ngo 23 Jun 2020 20:28 UTC · LW: 2 · AF: 1 · Parent
It would work at least as well as the original proposal, because your utility function could just be whatever metric of "getting closer to the target states" would be used in the original proposal.

Ben Pace 22 Jun 2020 22:28 UTC · LW: 17 · AF: 10
Curated. Come on dude, stop writing so many awesome posts so quickly, it's too much. This is a central question in the science of agency and optimization. The proposal is simple, you connected it to other ideas from Drexler and Demski+Garrabrant, and you gave a ton of examples of how to apply the idea. I generally get scared by the academic style, worried that the authors will fill out the text and make it really hard to read, but this was all highly readable, and set its own context (re-explaining the basic ideas at the start). I'm looking forward to you discussing it in the comments with Ricraz, Rohin and John. Please keep writing these posts!

Alex Flint 29 Jun 2020 2:31 UTC · LW: 7 · AF: 3 · Parent
Thank you Ben. Reading this really filled me with joy and gives me energy to write more. Thank you for your curation work—it's a huge part of why there is this place for such high quality discussion of topics like this, for which I'm very grateful.

Ben Pace 29 Jun 2020 3:09 UTC · LW: 3 · AF: 2 · Parent
You're welcome :-)

SoerenMind 22 Jun 2020 23:42 UTC · 2 points · Parent
Seconded that the academic style really helped, particularly discussing the problem and prior work early on. One classic introduction paragraph that I was missing is "what have prior works left unaddressed?".

AdamGleave 31 Jul 2020 1:03 UTC · LW: 16 · AF: 8
Thanks for the post, this is my favourite formalisation of optimisation so far!

One concern I haven't seen raised so far is that the definition seems very sensitive to the choice of configuration space. As an extreme example, for any given system, I can always augment the configuration space with an arbitrary number of dummy dimensions, and choose the dynamics such that these dummy dimensions always get set to all zero after each time step. Now I can make the basin of attraction arbitrarily large, while the target configuration set remains a fixed size. This can then make any such dynamical system seem to be an arbitrarily powerful optimiser.

This could perhaps be solved by demanding the configuration space be selected according to Occam's razor, but I think the outcome still ends up being prior dependent. It'd be nice for two observers who model optimising systems in a systematically different way to always agree within some constant factor, akin to Kolmogorov complexity's invariance theorem, although this may well be impossible.

As a less facetious example, consider a computer program that repeatedly sets a variable to 0. It seems again we can make the optimising power arbitrarily large by making the variable's size arbitrarily large. But this doesn't quite map onto the intuitive notion of the "difficulty" of an optimisation problem. Perhaps including some notion of how many other optimising systems would have the same target set would resolve this.

Richard_Ngo 27 Jun 2020 12:11 UTC · LW: 16 · AF: 5
Two examples which I'd be interested in your comments on:
1. Consider adding a big black hole in the middle of a galaxy. Does this turn the galaxy into a system optimising for a really big black hole in the middle of the galaxy? (Credit for the example goes to Ramana Kumar.)
2. Imagine that I have the goal of travelling as fast as possible. However, there is no set of states which you can point to as the "target states", since whatever state I'm in, I'll try to go even faster.
This is another argument for, as I argue below, defining an optimising system in terms of increasing some utility function (rather than moving towards target states).

What links here? Ramana Kumar's comment on Optimization Concepts in the Game of Life by Vika (26 Oct 2021 9:31 UTC; 6 points)

Ben Pace 29 Jun 2020 3:27 UTC · LW: 3 · AF: 1 · Parent
On the topic of the black hole... There's a way of viewing the world as a series of "forces", each trying to control the future. Eukaryotic life is one. Black holes are another. We humans build many things, from chairs to planes to AIs. Of those three, turning on the AI feels the most like "a new force has entered the game". All these forces are fighting over the future, and while it's odd to think of a black hole as an agent, sometimes when I look at it, it does feel natural to think of physics as another optimisation force that's playing the game with us.

Alex Flint 27 Jun 2020 19:22 UTC · LW: 2 · AF: 1 · Parent
Great examples! Thank you.

"Consider adding a big black hole in the middle of a galaxy. Does this turn the galaxy into a system optimising for a really big black hole in the middle of the galaxy?"

Yes, this would qualify as an optimizing system by my definition. In fact just placing a large planet close to a bunch of smaller planets would qualify as an optimizing system if the eventual result is to collapse the mass of the smaller planets into the larger planet. This seems to me to be a lot like a ball rolling down a hill: a black hole doesn't seem alive or agentic, and it doesn't really respond in any meaningful way to hurdles put in its way, but yes, it does qualify as an optimizing system. For this reason my definition isn't yet a very good definition of what agency is, or what post-agency concept we should adopt. I like Rohin's comment on how we might view agency in this framework.

"Imagine that I have the goal of travelling as fast as possible. However, there is no set of states which you can point to as the 'target states', since whatever state I'm in, I'll try to go even faster. This is another argument for, as I argue below, defining an optimising system in terms of increasing some utility function (rather than moving towards target states)."

Yes, it's true that using a set of target states rather than an ordering over states means that we can't handle cases where there is a direction of optimization but not a "destination". But if we use an ordering over states then we run into the following problem: how can we say whether a system is robust to perturbations? Is it just that the system continues to climb the preference gradient despite perturbations? But now every system is an optimizing system, because we can always come up with some preference ordering that explains a system as an optimizing system. So then we can say "well, it should be an ordering over states with a compact representation" or "it should be more compact than competing explanations". This may be okay but it seems quite dicey to me.

It actually seems quite important to me that the definition point to systems that "get back on track" even when you push them around. It may be possible to do this with an ordering over states and I'd love to discuss this more.

Richard_Ngo 28 Jun 2020 17:27 UTC · LW: 8 · AF: 4 · Parent

"But now every system is an optimizing system, because we can always come up with some preference ordering that explains a system as an optimizing system."

Hmmm, I'm a little uncertain about whether this is the case. E.g. suppose you have a box with a rock in it, in an otherwise empty universe. Nothing happens. You perturb the system by moving the rock outside the box. Nothing else happens in response. How would you describe this as an optimising system?
(I'm assuming that we're ruling out the trivial case of a constant utility function; if not, we should analogously include the trivial case of all states being target states.)

As a more general comment: I suspect that what starts to happen after you start digging into what "perturbation" means, and what counts as a small or big perturbation, is that you run into the problem that a *tiny* perturbation can transform a highly optimising system into a non-optimising system (e.g. flicking the switch to turn off the AGI). In order to quantify the size of perturbations in an interesting way, you need the pre-existing concept of which subsystems are doing the optimisation.

My preferred solution to this is just to stop trying to define optimisation in terms of *outcomes*, and start defining it in terms of *computation* done by systems. E.g. a first attempt might be: an agent is an optimiser if it does planning via abstraction towards some goal. Then we can zoom in on what all these words mean, or what else we might need to include/exclude (in this case, we've ruled out evolution, so we probably need to broaden it). The broad philosophy here is that it's better to be vaguely right than precisely wrong. Unfortunately I haven't written much about this approach publicly—I briefly defend it in a comment thread on this post though.

ESRogs 1 Jul 2020 7:43 UTC · 2 points · Parent

"I briefly defend it in a comment thread on this post though (https://www.lesswrong.com/posts/9pxcekdNjE7oNwvcC/goal-directedness-is-behavioral-not-structural)"

FYI: I think something got messed up with this link. The text of the link is a valid url, but it links to a mangled one (s.t. if you click it you get a 404 error).

Richard_Ngo 1 Jul 2020 8:34 UTC · 4 points · Parent
That's weird; thanks for the catch. Fixed.

Alex Flint 28 Jun 2020 23:27 UTC · LW: 2 · AF: 1 · Parent

"suppose you have a box with a rock in it, in an otherwise empty universe [...]"

Yes, you're right: this system would be described by a constant utility function, and yes, this is analogous to the case where the target configuration set contains all configurations, and yes, this should not be considered optimization. In the target set formulation, we can measure the degree of optimization by the size of the target set relative to the size of the basin of attraction.
In your rock example, the sets have the same size, so it would make sense to say that the degree of optimization is zero.

This discussion is updating me in the direction that a preference ordering formulation is possible, but that we need some analogue of "degree of optimization" that captures how "tight" or "constrained" the system's evolution is relative to the size of the basin of attraction. We need a way to say that a constant utility function corresponds to a degree of optimization equal to zero. We also need a way to handle the case where our utility function assigns utility proportional to entropy, so that again we can describe all physical systems as optimizing systems and thermodynamics ensures that we are correct. This utility function would be extremely flat and wide, with most configurations receiving near-identical utility (since the high-entropy configurations constitute the vast majority of all possible configurations). I'm sure there is some way to quantify this—do you know of any appropriate measure?

The challenge here is that in order to actually deal with the case you mentioned originally—the goal of moving as fast as possible—we need a measure that is not based on the size or curvature of some local maximum of the utility function. If we are working with local maxima then we are really still working with systems that evolve towards a specific destination (although there still may be advantages to thinking this way rather than in terms of a binary set).

"My preferred solution to this is just to stop trying to define optimisation in terms of outcomes, and start defining it in terms of computation done by systems"

Nice—I'd love to hear more about this.

ESRogs 1 Jul 2020 7:37 UTC · 2 points · Parent

"But if we use an ordering over states then we run into the following problem: how can we say whether a system is robust to perturbations? Is it just that the system continues to climb the preference gradient despite perturbations? But now every system is an optimizing system, because we can always come up with some preference ordering that explains a system as an optimizing system. So then we can say 'well it should be an ordering over states with a compact representation' or 'it should be more compact than competing explanations'. This may be okay but it seems quite dicey to me."

Doesn't the set-of-target-states version have just the same issue (or an analogous one)? For whatever behavior the system exhibits, I can always say that the states it ends up in were part of its set of target states.
So you have to count on compactness (or naturalness of description, which is basically the same thing) of the set of target states for this concept of an optimizing system to be meaningful. No?

Alex Flint 1 Jul 2020 23:53 UTC · 2 points · Parent
Well, most systems don't have a tendency to evolve towards any small set of target states despite perturbations. Most systems, if you perturb them, just go off in some different direction. For example, if you perturb most running computer programs by modifying some variable with a debugger, they do not self-correct. Same with the satellite and billiard balls example. Most systems just don't have this "attractor" dynamic.

ESRogs 2 Jul 2020 1:42 UTC · 2 points · Parent
Hmm, I see what you're saying, but there still seems to be an analogy to me here with arbitrary utility functions, where you need the set of target states to be small (as you do say). Otherwise I could just say that the set of target states is all the directions the system might fly off in if you perturb it. So you might say that, for this version of optimization to be meaningful, the set of target states has to be small (however that's quantified), and for the utility maximization version to be meaningful, you need the utility function to be simple (however that's quantified).

EDIT: And actually, maybe the two concepts are sort of dual to each other. If you have an agent with a simple utility function, then you could consider all its local optima to be a (small) set of target states for an optimizing system. And if you have an optimizing system with a small set of target states, then you could easily convert that into a simple utility function with a gradient towards those states. And if your utility function isn't simple, maybe you wouldn't get a small set of target states when you do the conversion, and vice versa?

Alex Flint 2 Jul 2020 1:54 UTC · 4 points · Parent
I'd say the utility function needs to contain one or more local optima with large basins of attraction that contain the initial state, not that the utility function needs to be simple. The simplest possible utility function is a constant function, which allows the system to wander aimlessly and certainly does not "correct" in any way for perturbations.

ESRogs 2 Jul 2020 5:01 UTC · 2 points · Parent
Ah, good points!

clwainwright 23 Jun 2020 0:22 UTC · LW: 11 · AF: 3
This seems like a good definition of optimization for algorithmic systems, but I don't see how it works for physical systems.
Going by the primary definition:

"An optimizing system is a system that has a tendency to evolve towards one of a set of configurations that we will call the target configuration set, when started from any configuration within a larger set of configurations, which we call the basin of attraction."

But in the physical world, there are literally zero closed systems with this property. Entropy always increases*, and the target configuration set will never be smaller than the basin of attraction. The dirt-plus-seed-plus-sunlight system has a vastly smaller configuration space than the dirt-plus-tree-plus-heat system. Perhaps one could object that one should discount the incoming sunlight and outgoing heat since the system isn't really closed, but then consider a very similar system consisting of only dirt, air, and fungal spores. Surely if a growing tree is an optimizing system, then a growing mushroom in a closed system is an optimizer too. But the entropy increase in the latter case is unambiguous: the number of ways to arrange atoms into a fully grown mushroom is again vastly larger than the number of ways to configure atoms into dirt without mushrooms but with the nutrients to grow them.

It may be possible to get around this by redefining configuration spaces to better match our intuition (it does seem like a mushroom is more special than dirt), but I don't see any way to do this rigorously.

*Or, at least, entropy always tends to increase.

Thomas Kwa 26 Mar 2021 21:25 UTC · 3 points · Parent
I agree that closed physical systems aren't optimizing systems. It seems like the first patch given by the author works when worded more carefully: "We could stipulate that some [low-entropy] power source [and some entropy sink] is provided externally to each system we analyze, and then perform our analysis conditional on the existence of that power source." Then an optimizing system with X bits of "optimization power" (which is log(target states / basin of attraction size) or something) has to sink at least X bits, and this seems like it works. Maybe it gets hard to rigorously define the exact form of the power source and entropy sink though? Disclaimer: I don't know statistical mechanics.

Davidmanheim 24 Jun 2020 17:12 UTC · LW: 10 · AF: 5
I think this is great. I would want to relate it to a few key points which I tried to address in a few earlier posts.
Principally, I discussed selection versus control, which is about the difference between what optimization does externally, and how it uses models and testing. This related strongly to your conception of an optimizing system, but focused on how much of the optimization process occurs in the system versus in the agent itself. This is principally important because of how it relates to misalignment and Goodharting of various types.

I had hoped to further apply that conceptual model to mesa-optimization, but I was a bit unsure how to think about it, and have been working on other projects. At this point, I think your discussion is probably a better conceptual model than the one I was trying to build there—it just needs to be slightly extended to cover the points I was trying to work out in those posts. I'd like to think about how it relates to mesa-optimization as well, but I'm unlikely to actually work on that.

Stuart_Armstrong 31 Jul 2020 15:48 UTC · LW: 8 · AF: 4
Very good. A lot of potential there, I feel.

johnswentworth 22 Jun 2020 21:39 UTC · LW: 7 · AF: 4
This is excellent! Very well done, I would love to see more work like this. I have a whole bunch of things to say along separate directions so I'll break them into separate comments. This first one is just a couple of minor notes:
- For the universe section, the universe doesn't push "toward" maxent, it just wanders around and usually ends up in maxent states because that's most of the states. The basin of attraction includes all states.
- Regarding "whether dynamical systems theory explicitly studies attractors that operate along a subset of the system's dimensions", I believe there's an old theorem that the long-term behavior of dynamical systems on a compact space is always ergodic on some manifold within the space. That manifold has a name which I don't remember, which is probably what you want to look for.

ryan_b 23 Jun 2020 21:09 UTC · 2 points · Parent
Does "ergodic on some manifold" here mean it approaches every point within the manifold, as in the ergodicity assumption, or does it mean described by an ergodic function? I realize the latter implies the former, but what I am driving at is the behavior vs. the formalism.

johnswentworth 23 Jun 2020 22:30 UTC · 2 points · Parent
Not sure.

Rohin Shah 21 Jun 2020 20:06 UTC · LW: 7 · AF: 4
Planned summary for the Alignment Newsletter:

Many arguments about AI risk depend on the notion of "optimizing", but so far it has eluded a good definition.
One natural approach is to say that an optimizer causes the world to have higher values according to some reasonable utility function, but this seems insufficient, as then a <@bottle cap would be an optimizer@>(@Bottle Caps Aren't Optimisers@) for keeping water in the bottle.

This post provides a new definition of optimization, by taking a page from <@Embedded Agents@> and analyzing a system as a whole instead of separating the agent and environment. An **optimizing system** is then one which tends to evolve toward some special configurations (called the **target configuration set**), when starting anywhere in some larger set of configurations (called the **basin of attraction**), _even if_ the system is perturbed.

For example, in gradient descent, we start with some initial guess at the parameters θ, and then continually compute loss gradients and move θ in the appropriate direction. The target configuration set is all the local minima of the loss landscape. Such a program has a very special property: while it is running, you can change the value of θ (e.g. via a debugger), and the program will probably _still work_. This is quite impressive: certainly most programs would not work if you arbitrarily changed the value of one of the variables in the middle of execution.
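The perturbation-robustness just described can be sketched in a few lines of Python. This is my own toy illustration (not code from the post or the newsletter): gradient descent on a one-dimensional convex loss, L(θ) = (θ - 3)², still reaches the minimum even when θ is overwritten with a random value partway through the run.

```python
import random

def grad(theta):
    # Gradient of the loss L(theta) = (theta - 3)**2, whose minimum is at theta = 3.
    return 2 * (theta - 3)

theta = 10.0  # initial guess
for step in range(1000):
    if step == 500:
        # Perturb theta mid-run, as if overwritten via a debugger.
        theta = random.uniform(-100, 100)
    theta -= 0.1 * grad(theta)  # gradient step keeps pulling theta back toward 3

# Despite the perturbation, theta has converged back to the minimum.
assert abs(theta - 3) < 1e-6
```

Each step shrinks the distance to the minimum by a factor of 0.8, so 500 steps after the perturbation the error is astronomically small; by contrast, overwriting, say, the learning rate or the loop counter would not be corrected for.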
Thus, this is an optimizing system that is robust to perturbations in \u03b8. Of course, it isn\u2019t robust to arbitrary perturbations: if you change any other variable in the program, it will probably stop working. In general, we can quantify how powerful an optimizing system is by how robust it is to perturbations, and how small the target configuration set is.The bottle cap example is _not_ an optimizing system because there is no broad basin of configurations from which we get to the bottle being full of water. The bottle cap doesn\u2019t cause the bottle to be full of water when it didn\u2019t start out full of water.Optimizing systems are a superset of goal-directed agentic systems, which require a separation between the optimizer and the thing being optimized. For example, a tree is certainly an optimizing system (the target is to be a fully grown tree, and it is robust to perturbations of soil quality, or if you cut off a branch, etc). However, it does not seem to be a goal-directed agentic system, as it would be hard to separate into an \u201coptimizer\u201d and a \u201cthing being optimized\u201d.This does mean that we can no longer ask \u201cwhat is doing the optimization\u201d in an optimizing system. 
This is a feature, not a bug: if you expect to always be able to answer this question, you typically get confusing results. For example, you might say that your liver is optimizing for making money, since without it you would die and fail to make money.The full post has several other examples that help make the concept clearer.Planned opinion:I\u2019ve <@previously argued@>(@Intuitions about goal-directed behavior@) that we need to take generalization into account in a definition of optimization or goal-directed behavior. This definition achieves that by primarily analyzing the robustness of the optimizing system to perturbations. While this does rely on a notion of counterfactuals, it still seems significantly better than any previous attempt to ground optimization.I particularly like that the concept doesn\u2019t force us to have a separate agent and environment, as that distinction does seem quite leaky upon close inspection. I gave a shot at explaining several other concepts from AI alignment within this framework in this comment, and it worked quite well. 
In particular, a computer program is a goal-directed AI system if there is an environment such that adding the computer program to the environment transforms it into an optimizing system for some \u201cinteresting\u201d target configuration states (with one caveat explained in the comment).Andrew_Critch 15 Dec 2020 1:06 UTC LW: 6 AF: 4AFThis post reminds me of thinking from the 1950s, when people taking inspiration from Wiener\u2019s work on cybernetics tried to operationalize \u201cpurposeful behavior\u201d in terms of robust convergence to a goal state: https:\/\/heinonline.org\/HOL\/Page?collection=journals&handle=hein.journals\/josf29&id=48&men_tab=srchresults> When an optimizing system deviates beyond its own rim, we say that it dies. An existential catastrophe is when the optimizing system of life on Earth moves beyond its own outer rim.I appreciate the direct attention to this process as an important instance of optimization. The first talk I ever gave in the EECS department at UC Berkeley (to the full EECS faculty) included a diagram of Earth drifting out of the region of phase space where humans would exist. 
Needless to say, I\u2019d like to see more explicit consideration of this type of scenario.johnswentworth 22 Jun 2020 21:54 UTC LW: 6 AF: 3AFThe set of optimizing systems is smaller than the set of all AI services, but larger than the set of goal-directed agentic systems....A tree is an optimizing system but not a goal-directed agent system.I\u2019m not sure this is true, at least not in the sense that we usually think about \u201cgoal-directed agent systems\u201d.You make a case that there\u2019s no distinct subsystem of the tree which is \u201cdoing the optimizing\u201d, but this isn\u2019t obviously relevant to whether the tree is agenty. For instance, the tree presumably still needs to model its environment to some extent, and \u201cmake decisions\u201d to optimize its growth within the environment\u2014e.g. 
new branches\/leaves growing toward sunlight and roots growing toward water, or the tree \u201cpredicting\u201d when the seasons are turning and growing\/dropping leaves accordingly.One way to think about whether \u201cthe set of optimizing systems is smaller than the set of all AI services, but larger than the set of goal-directed agentic systems\u201d is that it\u2019s equivalent to Scott\u2019s (open) question: does agent-like behavior imply agent-like architecture?johnswentworth 22 Jun 2020 21:46 UTC LW: 6 AF: 3AFAt first I particularly liked the idea of identifying systems with \u201can optimizer\u201d as those which are robust to changes in the object of optimization, but brittle with respect to changes in the engine of optimization.On reflection, it seems like a useful heuristic but not a reliable definition. A counterexample: suppose we do manage to build a robust AI which maximizes some utility function. One desirable property of such an AI is that it\u2019s robust to e.g. one of its servers going down or corrupted data on a hard drive; the AI itself should be robust to as many interventions as possible. Ideally it would even be robust to minor bugs in its own source code. 
Yet it still seems like the AI is the \u201cengine\u201d, and it optimizes the rest of the world.Alex Flint 27 Jun 2020 19:44 UTC LW: 4 AF: 2AFParentYeah I agree that duality is not a good measure of whether a system contains something like an AI. There is one kind of AI that we can build that is highly dualistic. Most present-day AI systems are quite dualistic, because they are predicated on having some robust compute infrastructure that is separate from and mostly unperturbed by the world around it. But there is every reason to go beyond these dualistic designs, for precisely the reason you point to: such systems do tend to be somewhat brittle.
I think it\u2019s quite feasible to build highly robust AI systems, although doing so will likely require more than just hardening (making it really unlikely for the system to be perturbed). What we really want is an AI system where the core AI itself tends to evolve back to a stable configuration despite perturbations to its core infrastructure. 
My sense is that this will actually require a significant shift in how we think about AI\u2014specifically moving from the agent model to something that captures what is good and helpful in the agent model but discards the dualistic view of things.
Chantiel 17 Aug 2021 20:31 UTC 3 points
An optimizing system is a system that has a tendency to evolve towards one of a set of configurations that we will call the target configuration set, when started from any configuration within a larger set of configurations, which we call the basin of attraction, and continues to exhibit this tendency with respect to the same target configuration set despite perturbations.

First, I want to say that I think your definition says something important.
That said, I\u2019m concerned that the above definition would have some potentially problematic false negatives. I\u2019m a little unclear on what counts as a perturbation, and I haven\u2019t been able to find a way to clarify it that doesn\u2019t result in false negatives.
Specifically, consider a computer program that performs hill-climbing. This would normally be considered an optimizing system. When doing hill-climbing, it doesn\u2019t seem like there is anything that would count as a perturbation unless some external system modified the program\u2019s state. 
I mean, during a normal, undisturbed execution, the hill-climbing algorithm would just go right to the top of its nearest hill and then stop; that doesn\u2019t seem to include any perturbations.
But suppose the program checked for any external perturbations, that is, modifications of its code or program memory, and would immediately halt execution if it found any. For example, suppose the program would simultaneously run thousands of identical instances of a hill-climbing algorithm and would immediately halt execution if any of the instances of the optimization algorithm failed to exactly match the others. That way, if some external force modified one of the instances, for example by modifying one of the candidate solutions, it would fail to match all the other instances and the entire program would halt.
Now, there are some external perturbations of the system that would make it still reach its target state, for example by making the exact same external modification to every instance of the optimization procedure. But still, almost all perturbations would result in the program failing to reach its target configuration of having found the local maximum or minimum. So it doesn\u2019t really seem to tend to reach its target configuration despite perturbations. 
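A hypothetical toy sketch of this construction (the objective, the unit-step climbing rule, and the instance count are all invented for illustration; a real version would compare full program states rather than just candidate solutions):

```python
# Hypothetical toy version of the "fragile" hill-climber: many identical
# instances climb in lockstep, and the whole program halts the moment any
# instance disagrees with the rest, i.e. the moment a perturbation occurs.
def f(x):
    return -(x - 7) ** 2  # toy objective with a single peak at x = 7

def fragile_hill_climb(start, n_instances=1000):
    instances = [start] * n_instances
    while True:
        if len(set(instances)) != 1:  # any mismatch => perturbation detected
            return None               # halt without reaching the target
        x = instances[0]
        if f(x + 1) > f(x):
            step = 1
        elif f(x - 1) > f(x):
            step = -1
        else:
            return x                  # local maximum reached undisturbed
        instances = [i + step for i in instances]

print(fragile_hill_climb(0))  # undisturbed run reaches the peak at 7
```

An undisturbed run reaches the peak, but overwriting a single instance's candidate solution from outside would trip the mismatch check and halt the search, so the system is not robust to perturbation in the sense of the definition.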
So it doesn\u2019t seem it would be classified as an optimizer according to the given definition.
This could be problematic if the definition is used to prevent mesaoptimization. If the above would indeed not be considered an optimizer by your definition, then it could potentially allow for powerful mesaoptimizers to be created without matching the given definition of \u201coptimizer\u201d.What links here?Chantiel's comment on Chantiel\u2019s Shortform by Chantiel (21 Aug 2021 17:25 UTC; 3 points)David Cato 23 Jun 2020 12:10 UTC LW: 3 AF: 1AFTruly a joy to read! Thank you.To what extent can we identify subsets of the system corresponding to \u201cthat which is being optimized\u201d and \u201cthat which is doing the optimization\u201d?The information theoretic measure of individuality attempts to answer exactly this type of question.From this view, a set of components (the system) is decomposed into two subsets (subsystem + environment). The proposed subsystem is assigned a degree of individuality by measuring the amount of information it shares with its future state, optionally conditioned on its environment. This leads to 2 types of individuality. 
The first type says that a proposed subsystem is individualistic to the degree that the subsystem is predictive of its future state after accounting for the information in the environment. The second type captures the notion of inseparability by assigning a high degree of individuality to subsystems that are strongly coupled with their environment in such a way that neither the subsystem nor environment alone are predictive of the next state of the subsystem.For example, considering the set of atoms making up the space containing the robot-optimizer and vase, the set of robot-atoms retains the desired properties of an optimizer, and is also highly individualistic in the first sense since knowing the state of the robot atoms tells you a lot about their next state, but knowing about the set of non-robot atoms tells you very little about the state of the robot. On the other hand, considering the set of atoms making up the tree, the system as a whole is an optimizing system, but no individual subset of atoms accomplishes the target of the larger optimizing system.Alex Flint 27 Jun 2020 18:50 UTC LW: 2 AF: 1AFParentThank you for the pointer to this terminology. 
It seems relevant and I wasn\u2019t aware of the terminology before.
Vivek Hebbar 8 Sep 2021 0:23 UTC LW: 2 AF: 2AFIs a metal bar an optimizer? Looking at the temperature distribution, there is a clear set of target states (states of uniform temperature) with a much larger basin of attraction (all temperature distributions that don\u2019t vaporize the bar).I suppose we could consider the second law of thermodynamics to be the true optimizer in this case. The consequence is that any* closed physical system is trivially an optimizing system towards higher entropy.In general, it seems like this optimization criterion is very easy to satisfy if we don\u2019t specify what exactly we care about as a meaningful aspect of the system. Even the bottle cap \u2018optimizes\u2019 for trivial things like maintaining its shape (against the perturbation of elastic deformation). Do you think this will become a problem when using this definition for AI? For example, we might find that a particular program incidentally tends to \u2018optimize\u2019 certain simple measures such as the average magnitude of network weights, or some other functions of weights, loss, policy, etc. to a set point\/range. We may then find slightly more complex things being optimized that look like sub-goals (which could in a certain context be unwanted or dangerous). 
How would we know where to draw the line? It seems like the definition would classify lots of things as optimization, and it would be up to us to decide which kinds are interesting or concerning and which ones are as trivial as the bottle cap maintaining its shape.That being said, I really like this definition. I just think it should be extended to classify the interestingness of a given optimization. An AI agent which competently pursues complex goals is a much more interesting optimizer than a metal bar, even though the bar seems more robust (deleting a tiny piece of metal won\u2019t stop it from conducting; deleting a tiny piece in the AI\u2019s computer could totally disable it).Also a nitpick on the section about whether the universe is an optimizing system:I don\u2019t think it is correct to say that the target space is almost as big as the basin of attraction. Either:We use area to represent the number of macroscopic states\u2014in this case, the target space is extremely small (one state only(?) -- an ultra-low-density bath of particles with uniform temperature). The universe is an extremely powerful optimizer from this perspective, with the caveat that it takes almost forever to achieve its target.We use area to represent the number of microscopic states (as I think you intended). In this case, I think the target space is exactly identical to the basin of attraction. 
Low entropy microstates are not any less likely than high entropy microstates\u2014there just happen to be astronomically fewer of them. There is no \u2018optimizing force\u2019 pushing the universe out of these states. From the microstate perspective, there is no reason to exclude them from the target zone, since any small and unremarkable subset of the target space will display the property that the system tends to stumble out of it at random.I would say that the first lens is almost always better than the second, since macro-states are what we actually care about and how we naturally divide the configuration space of a system.Finally, just want to say this is an amazing post! I love the style as well as the content. The diagrams make it really easy to get an intuitive picture.*Unsure about the existence of exceptions (can an isolated system be contrived that fails to reach the global max for entropy?)DanielFilan 18 Aug 2020 18:21 UTC LW: 2 AF: 1AF
But Filan would surely agree on this point and his question is more specific: he is asking whether the liver is an optimizer.

FYI, it seems pretty clear to me that a liver should be considered an optimiser: as an organ in the human body, it performs various tasks mostly reliably, achieves homeostasis, etc. 
The question I was rhetorically asking was whether it is an optimiser of one\u2019s income, and the answer (I claim) is \u2018no\u2019.Pattern 20 Jun 2020 18:22 UTC LW: 2 AF: 1AF the exact same answer it would have output without the perturbation. It always gives the same answer for the last digit?Alex Flint 27 Jun 2020 18:53 UTC LW: 2 AF: 1AFParentWell we could always just set the last digit to 0 as a post-processing step to ensure perfect repeatability. But point taken, you\u2019re right that most numerical algorithms are not quite as perfectly stable as I claimed.
mattmacdermott 9 May 2023 9:59 UTC 1 pointAn interesting point about the agency-as-retargetable-optimisation idea is that it seems like you can make the perturbation in various places upstream of the agent\u2019s decision-making, but not downstream, i.e. you can retarget an agent by perturbing its sensors more easily than its actuators.
For example, to change a thermostat-controlled heating system to optimise for a higher temperature, the most natural perturbation might be to turn the temperature dial up, but you could also tamper with its thermistor so that it reports lower temperatures. 
On the other hand, making its heating element more powerful wouldn\u2019t affect the final temperature.
I wonder if this suggests that an agent\u2019s goal lives in the last place in a causal chain of things you can perturb to change the set of target states of the system.Chantiel 15 Aug 2021 19:55 UTC 1 pointAFYou said your definition would not classify a bottle cap with water in it as an optimizer. This might be really nit-picky, but I\u2019m not sure it\u2019s generally true.
I say this because the water in the bottle cap could evaporate. Thus, supposing there is no rain, from a wide range of possible states of the bottle cap, it would tend towards no longer having water in it.
I know you said you make an exception for tendencies towards increased entropy being considered optimizers. However, this does not increase the entropy of the bottle cap. It could potentially increase the entropy of the water that was in the bottle cap, but this is not necessarily the case. For example, if the bottle cap is kept in a sealed container, the water vapor could potentially condense into a small puddle with the same entropy as it had in the bottle cap.
If my memory of physics is correct, water evaporating would still increase the total entropy of the total system in which the bottle cap is located, by virtue of releasing some heat into the environment. 
However, note that humans and robots also, merely by doing mechanical work and thus forming heat which is then dispersed into the environment, result in increased entropy of the system they\u2019re in. So you can\u2019t rule out any system that makes its environment tend towards increased entropy from being an optimizer, because that\u2019s what humans and robots do, too.
That said, if you clarify that the bottle cap is not in any such contained system, I think the water would result in a higher-entropy state.Alex Flint 16 Aug 2021 17:17 UTC LW: 2 AF: 1AFParentThank you for this comment Chantiel. Yes, a container engineered to evaporate water poured anywhere into it and condense it into a central area would be an optimizing system by my definition. That is a bit like a ball rolling down a hill, which is also an optimizing system and also has nothing resembling agency. I am
The bottle cap example was actually about putting a bottle cap onto a bottle and asking whether, since the water now stays inside the bottle, it should be considered an optimizer. 
I pointed out that this would not qualify as an optimizing system because if you moved a water molecule from the bottle and placed it outside the bottle, the bottle cap would not act to put it back inside.Chantiel 14 May 2021 23:17 UTC 1 pointAn optimizing system is a system that has a tendency to evolve towards one of a set of configurations that we will call the target configuration set, when started from any configuration within a larger set of configurations, which we call the basin of attraction, and continues to exhibit this tendency with respect to the same target configuration set despite perturbations.If I\u2019m reasoning correctly, I think this definition could classify just about anything as an optimizer.Consider inanimate biological substances, like a leaf. From a wide range of initial configurations of a leaf, effectively all make the leaf evolve towards being dirt, because leaves decompose eventually. Are leaves optimizers?People tend to get older and wrinklier when aging. From a wide range of states, people would tend to \u201cevolve\u201d towards being aged. Are people optimizers for aging?If the rock is hotter than the surrounding air, virtually any initial configuration of the rock would tend towards the rock being somewhere around the temperature of the surrounding air. 
Are rocks optimizers?Suppose you have a program that shows the user a welcome and information blurb the first time they run the program, and then won\u2019t show it again. Consider the target configuration to be \u201cprogram does not show the welcome blurb\u201d. The program would evolve into such a configuration from any other configuration. Are welcome blurbs optimizers?Let us now examine a system that is not an optimizing system according to our definition. Consider a billiard table with some billiard balls that are currently bouncing around in motion. Left alone, the balls will eventually come to rest in some configuration. Is this an optimizing system?In order to qualify as an optimizing system, a system must (1) have a tendency to evolve towards a set of target configurations that are small relative to the basin of attraction, and (2) continue to evolve towards the same set of target configurations if perturbed.If we reach in while the billiard balls are bouncing around and move one of the balls that is in motion, the system will now come to rest in a different configuration. Therefore this is not an optimizing system, because there is no set of target configurations towards which the system evolves despite perturbations. 
A system does not need to be robust along all dimensions in order to be an optimizing system, but a billiard table exhibits no such robust dimensions at all, so it is not an optimizing system.What about taking the target configuration to be any state in which all the billiard balls are stationary? A wide range of states of billiards bouncing around on a table would result in all the balls ending up stationary, so I don\u2019t see how it wouldn\u2019t be classified as an optimization process.Also, I\u2019ve made my own attempt at defining \u201coptimizer\u201d here, in case you\u2019re interested.Alex Flint 15 May 2021 16:07 UTC 2 pointsParentThank you for this comment Chantiel.

Consider inanimate biological substances, like a leaf. From a wide range of initial configurations of a leaf, effectively all make the leaf evolve towards being dirt, because leaves decompose eventually. Are leaves optimizers?
People tend to get older and wrinklier when aging. From a wide range of states, people would tend to \u201cevolve\u201d towards being aged. Are people optimizers for aging?

It\u2019s a good question. 
However, the decomposition of a leaf and of the body are both examples of increases in entropy over time, and if you look at the size of the \u201ctarget configuration set\u201d you find that it\u2019s almost as big as the whole configuration space, because most of the configurations of a system are high entropy configurations. So I don\u2019t think a leaf or an aging body qualify as optimizing systems according to the definition in this post. See also this section.\n\nIf the rock is hotter than the surrounding air, virtually any initial configuration of the rock would tend towards the rock being somewhere around the temperature of the surrounding air. Are rocks optimizers?\n\nWell you really have to look at the whole system. It\u2019s true that if you have a system that consists of a hot part and a cold part, the system overall will evolve towards configurations in which the parts are the same temperature. But this is again an example of entropy increasing. 
Most of the configurations of the joint rock+environment system have the rock and the environment at approximately the same temperature, since if you randomly sample a temperature for each particle, the large number of particles in the rock and the environment mean that the average temperature of all the particles in the rock will be very similar to the average temperature of all the particles in the environment, with high probability.\n\nWhat about taking the target configuration to be any state in which all the billiard balls are stationary? A wide range of states of billiards bouncing around on a table would result in all the balls ending up stationary, so I don\u2019t see how it wouldn\u2019t be classified as an optimization process.\n\nYes, just like a ball rolling down a hill qualifies as an optimizing system, a table with billiard balls qualifies as an optimizing system in the sense that you point out.\nBut the whole point of this post is to get past the notion of \u201coptimizer\u201d and \u201coptimization\u201d to the extent that these concepts imply that there is some \u201cagent\u201d performing optimization, and some thing \u201cbeing optimized\u201d, which sneaks the agent model into all our thinking and leads to a very confused picture of things.\n\nAlso, I\u2019ve made my own attempt at defining \u201coptimizer\u201d here, in case you\u2019re 
interested.\n\nThank you for the pointer!\npaulfchristiano 15 May 2021 16:20 UTC 2 points\nParent\nYes, just like a ball rolling down a hill qualifies as an optimizing system, a table with billiard balls qualifies as an optimizing system in the sense that you point out.\nBoth of these examples also seem like increases in entropy if you consider the full system.\nWith a fixed amount of energy, there are a tiny number of ways to use it to make the ball move (or to spend energy putting it somewhere other than the bottom of the hill) but an exponentially vast number of ways to use it to increase the temperature of the billiard ball and table (since there are billions of billions of microscopic degrees of freedom that could be vibrating or whatever).\nChantiel 15 May 2021 20:46 UTC 1 point\nParent\nThanks for the response.\nA lot of the examples I pointed out can end up tending towards increasing entropy, but I think there are a lot of things that would be considered optimizers that don\u2019t increase entropy.\nFor example, consider a leaf out in the sun, drying out and going from a greenish color to a yellow one. Pretty much all configurations of the leaf would result in the leaf getting more yellow over time. Is the leaf optimizing for yellow-ness?\nWhat about a knife that is being used and never sharpened? From a wide range of configurations the knife would tend towards getting duller. Is it optimizing dullness?\nWhat about a spaceship leaving Earth? 
Is it optimizing for the distance from Earth?\nI suppose we could consider these things optimizers if you really want to. But I\u2019m concerned that a definition that includes leaves, knives, billiard balls, and rocket ships is overly broad.\nMore generally, it seems like this definition classifies a lot of things that change in some way over time as an optimizer. In general, if something tends to be different in some ways when it\u2019s young than when it\u2019s old, then I think you can say the system is an optimizer optimizing for whatever characteristics correlate with oldness.\nJoe_Collman 30 Jun 2020 18:04 UTC 1 point\nGreat post.\nI\u2019m not keen on the requirement that the basin of attraction be strictly larger than the target configuration set. I don\u2019t think this buys you much, and it seems to needlessly rule out goals based on narrow maintenance of some status quo. Switching to a utility function as suggested by others improves things, I think.\nFor example: a highly capable AI whose only goal is to maintain a chess set in a particular position for as long as possible, but not to care about it after it\u2019s disturbed.\nHere the target set is identical to the basin of attraction: states containing the chess set in the particular position (or histories where it\u2019s remained undisturbed).\nThis doesn\u2019t tell us anything about what the AI will do in pursuing this goal. 
It may not do much until something approaches the board; it may re-arrange the galaxy to minimise the chances that a piece will be moved (but arbitrarily small environmental changes might have it take very different actions, so in general we can\u2019t say it\u2019s optimising for some particular configuration of the galaxy).\nI want to say that this system is optimising to keep the chess set undisturbed.\nWith utility you can easily represent this goal, and all you need to do is compare unperturbed utility with the utility under various perturbations.\nSomething like: The system S optimises U \ud835\udeff-robustly to perturbation x if E[U(S)] - E[U(x(S))] < \ud835\udeff\nBack to top","rejected":"\n\n\n\n\n\n\nA research workflow with Zotero and Org mode | mkbehr.com\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSkip to main content\n\n\n\n\n\nToggle navigation\n\n\n\n\n\nmkbehr.com\n\n\n\n\n\n\nAbout me\n\n\nArchive\n\n\nTags\n\n\nRSS feed\n\n\nGithub\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSource\n\n\n\n\n\n\n\n\n\n\nA research workflow with Zotero and Org mode\n\n\n Michael Behr\n \nSeptember 19, 2015\n\nComments\n\nSource\n\n\n\n\n\n\nAny research project is going to involve a literature search: reading\nthrough a bunch of papers that might be relevant to your topic in\norder to get a sense of what the field already knows. Now, maybe\nthere's some magic technique for picking out the information that\nmatters, passing over the rest, and writing out a single, coherent\nstory in one pass through all the papers you can find. 
If that\ntechnique exists, I have no idea what it is.\nSo when every paper brings up ten new questions and twenty papers to\nstart answering them, I need a system to keep my notes organized. I\nneed notes that let me jump back and forth between papers without\nlosing my place, draw links between papers, and store lists of\ncitations to come back to. Here's how I do it.\n\nStoring papers with Zotero\nThe first tool I use is Zotero, a reference\nmanager. Zotero's job is to store all the actual papers I come across,\nalong with information like data on how to cite the papers and any\ntags they might have been published with. It can grab that information\nfrom my web browser, whether from a journal's website or someplace\nlike Google Scholar or PubMed. It's also great for quickly putting\ntogether a bibliography, using bibtex or similar programs, when I want\nto write up some results.\n\n\n\nZotero stores the papers I want to\nread and reference. I scaled up the font size here to make it readable\nin a tiny blog image.\nZotero isn't the only choice for reference managers.\nMendeley is another popular choice, and\nthere are a\nwhole bunch more\nout there. I picked Zotero arbitrarily a few years ago, but it's\nworking out well because of its emacs integration.\nKeeping notes with Emacs and Org mode\nYou see, Zotero has some note-taking functions, and I used to keep my\nnotes there, but there were some problems. Notes are stored as\nseparate files for each paper, but I want to cross-reference notes\nfrom a lot of different papers at once. And while the editor has some\nrich-text capabilities (e.g. bold and italic text), it's missing\nimportant things I need in my notes, like the ability to typeset\nequations.\nThat's where Emacs and its\nextension Org mode come in. To borrow a term\nfrom Perl enthusiasts, Org mode is the swiss army chainsaw of text\ndocument formats. Org mode documents have a lot of features, and it's\nway beyond this post's scope to describe them all. 
For the purpose of\nresearch notes, the most useful things it lets me do are:\n\nI can store my notes in a hierarchical tree structure, and I can\n hide parts of the tree from view in order to focus on other parts.\nI can put hyperlinks into my notes, including links to papers,\n websites, or other parts of the file.\nI can put math in my notes using Latex, and view the typeset\n equations right in my Emacs buffer.\n\n\n\n\nA sample from my notes file. You\ncan see the tree structure of the file, some links to papers, and a\nlittle bit of inline math, using Latex.\nGluing it all together with zotxt\nNow, see those links to papers in my notes buffer? I didn't have to\ncopy and paste them from anywhere. I inserted them with just three\nkeystrokes each. So far, I've just described some useful pieces of\nsoftware, but the interesting part of my workflow is how they fit\ntogether.\nzotxt is an extension that lets\nother programs talk to Zotero, and Emacs has a package to talk to it.\nIt's even structured specifically to work with Org mode documents.\nWith zotxt, my workflow looks like this:\n\nI find a paper I want to look at somewhere on the internet.\nI use Zotero's browser plugin to save it to Zotero. Hopefully it\n grabs the paper itself and this happens in one click; if the site\n doesn't play along, I spend a minute grabbing a pdf and feeding it\n to Zotero.\nI insert a link to the Zotero entry into my notes file in Emacs. I\n can do this with the key chords C-c \" \". I don't need to further\n specify what paper I want to grab: the browser plugin leaves the\n paper selected in Zotero, and zotxt can grab the selected paper.\nWhen I want to read the paper, I go to the link and tell Emacs to\n open the paper in my system PDF viewer. 
The key chords for this are\n C-c \" a, and then selecting the PDF attachment from the Helm\n window that appears (usually I just type pdf RET).\nWhen I'm reading a paper and see a citation that might be useful, I\n look it up on the internet and repeat this process to store a note\n linking to it.\n\nIt took me a while to get it set up to my liking, so here's how I did\nit:\n\nFirst, install zotxt. If you're\n using Zotero as a firefox extension, you just need to install zotxt\n as another extension. If you're using the standalone Zotero client,\n you can still do it: download the extension file from that link,\n then go to the Add-Ons Manager under the Tools menu and find the\n option to install an add-on from a file.\n\n\n\n\nThe menu option looks like\nthis.\n\nNext, install the zotxt package in emacs. If your\n package manager is set up, you\n can just type M-x package-install RET zotxt RET.\nNow, when org-zotxt-mode is active, you can use its functions in\n your org-mode buffers. You can search for papers and insert them\n with C-c \" i, insert the currently-selected paper in Zotero with\n C-u C-c \" i, and open a paper's PDF or other related files by\n moving the cursor to a link and typing C-c \" a. However, you might\n want a little bit more setup to deal with some annoyances.\nYou probably want to have org-zotxt-mode automatically activated\n in all your org-mode documents. To make that happen, you can add\n some code to your .emacs file to start up this mode on all your\n org-mode buffers - see below this list for the .emacs\n configuration I use.\nIf you want to insert a link to the currently-selected item a lot,\n C-u C-c \" i is an awkward sequence to type. I rebound it to C-c \"\n \".\nYou might notice that when you insert a link to a paper, the text of\n that link is a full citation. That might be what you want, but I\n just want the authors, paper name, and year. 
It took me a bit of\n hacking to get around that: it's possible to tell the emacs zotxt\n interface to use a different citation format than the default, but I\n had to throw together a little XML file to give it a shorter format\n than a full citation. (This may not be the easiest or cleanest way\n to do it, but it works!)\n That XML file is here. To use it, go into\n your Zotero preferences and select Cite -> Styles, and add the file.\n It should appear in the menu as \"mkbehr's short reference format\".\n Then add the last two lines in the .emacs snippet below, and you\n should get shorter citations.\nYou probably want to install the\n Helm package, to make zotxt's\n search interface easier to navigate. That link should tell you\n everything you need to know.\n\nHere's that .emacs setup code:\n;; Activate org-zotxt-mode in org-mode buffers\n(add-hook 'org-mode-hook (lambda () (org-zotxt-mode 1)))\n;; Bind something to replace the awkward C-u C-c \" i\n(define-key org-mode-map\n (kbd \"C-c \\\" \\\"\") (lambda () (interactive)\n (org-zotxt-insert-reference-link '(4))))\n;; Change citation format to be less cumbersome in files.\n;; You'll need to install mkbehr-short into your style manager first.\n(eval-after-load \"zotxt\"\n'(setq zotxt-default-bibliography-style \"mkbehr-short\"))\n\nOf course, I'm not done tinkering to make my workflow better. I hear\ngood things about the org-ref\nand helm-bibtex\npackages - if only I can keep an up-to-date bibtex file as I add papers\nto my library, I can associate links with not only a paper's pdf, but\nalso that paper's section of my notes file. And I haven't found a\nsmooth way to take a paper and pull up the papers it cites in my\nbrowser. 
But until then, I'm pretty happy with this setup.\nHappy researching!\n\n\n\nemacs\nresearch\n\n\n\nPrevious post\n\n\nNext post\n\nComments\n\nPlease enable JavaScript to view the comments powered by Disqus.\n\nComments powered by Disqus\n\n\n\n\n Contents \u00a9 2015 Michael Behr - Powered by Nikola\n\n\n\n\n\n\n"},{"chosen":"\n\n\n\n\n\n\n\n\n\nRent Seeking - Econlib\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nLiberty Fund Network\n\nEconlib\nLiberty Fund\nOLL\nAdam Smith Works\nLaw & Liberty\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nEconLog\n\nBlog\nBrowse by Author\nBrowse by Topic\nSearch EconLog\nRSS\nSubscribe\n\n\nEconTalk\n\nLatest Episodes\nBrowse by Date\nBrowse by Guest\nBrowse by Category\nBrowse Extras\nSearch EconTalk\nRSS Feeds\n\n\nArticles\n\nLatest Articles\nLiberty Classics\nBrowse by Author\nBrowse by Date\nSearch Articles\n\n\nBooks\n\nBooks\nBios\nBooks by Date\nBooks by Author\nSearch Books\n\n\nEncyclopedia\n\nIndex\nBrowse by Author\nBrowse by Title\nBiographies\nSearch Encyclopedia\n\n\nGuides\n\nIndex\n#ECONLIBREADS\nCollege Topics\nHigh School Topics\nSubscribe to QuickPicks\nSearch Guides\n\n\nVideos\n\nIndex\nSearch Videos\n\n\nLiberty Fund Network\n\nEconlib\nOLL\nAdam Smith Works\nLibrary of Law & Liberty\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHome\u00a0 \/ \u00a0\n\n\n\n\n\n\n\n\nECONLIB CEE\n\n\n Government Policy \n\n\n\n\n\n\n\n\n\n\n\nRent Seeking\n\n\t\t\t\t\t\t\tBy David R. Henderson\t\t\t\t\t\t\n\n\n\n\n\t\t\t\t\t\t\t\tCategories:\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t Government Policy\n\n\n\t\n\t\t\t\t\t\t\t\tBy David R. Henderson, \n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\n\n\n SHARE\n \n POST:\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\u201c\nRent seeking\u201d is one of the most important insights in the last fifty years of economics and, unfortunately, one of the most inappropriately labeled. 
Gordon Tullock originated the idea in 1967, and Anne Krueger introduced the label in 1974. The idea is simple but powerful. People are said to seek rents when they try to obtain benefits for themselves through the political arena. They typically do so by getting a subsidy for a good they produce or for being in a particular class of people, by getting a tariff on a good they produce, or by getting a special regulation that hampers their competitors. Elderly people, for example, often seek higher Social Security payments; steel producers often seek restrictions on imports of steel; and licensed electricians and doctors often lobby to keep regulations in place that restrict competition from unlicensed electricians or doctors.\n\n\n\nBut why do economists use the term \u201crent\u201d? Unfortunately, there is no good reason. David Ricardo introduced the term \u201crent\u201d in economics. It means the payment to a factor of production in excess of what is required to keep that factor in its present use. So, for example, if I am paid $150,000 in my current job but I would stay in that job for any salary over $130,000, I am making $20,000 in rent. What is wrong with rent seeking? Absolutely nothing. I would be rent seeking if I asked for a raise. My employer would then be free to decide if my services are worth it. Even though I am seeking rents by asking for a raise, this is not what economists mean by \u201crent seeking.\u201d They use the term to describe people\u2019s lobbying of government to give them special privileges. A much better term is \u201cprivilege seeking.\u201d\n\n\nIt has been known for centuries that people lobby the government for privileges. Tullock\u2019s insight was that expenditures on lobbying for privileges are costly and that these expenditures, therefore, dissipate some of the gains to the beneficiaries and cause inefficiency. 
If, for example, a steel firm spends one million dollars lobbying and advertising for restrictions on steel imports, whatever money it gains by succeeding, presumably more than one million, is not a net gain. From this gain must be subtracted the one-million-dollar cost of seeking the restrictions. Although such an expenditure is rational from the narrow viewpoint of the firm that spends it, it represents a use of real resources to get a transfer from others and is therefore a pure loss to the economy as a whole.\n\n\nKrueger (1974) independently discovered the idea in her study of poor economies whose governments heavily regulated their people\u2019s economic lives. She pointed out that the regulation was so extensive that the government had the power to create \u201crents\u201d equal to a large percentage of national income. For India in 1964, for example, Krueger estimated that government regulation created rents equal to 7.3 percent of national income; for Turkey in 1968, she estimated that rents from import licenses alone were about 15 percent of Turkey\u2019s gross national product. Krueger did not attempt to estimate what percentage of these rents were dissipated in the attempt to get them. Tullock (1993) tentatively maintained that expenditures on rent-seeking in democracies are not very large.\n\n\n\nAbout the Author\n\nDavid R. Henderson is the editor of this encyclopedia. He is a research fellow with Stanford University\u2019s Hoover Institution and an associate professor of economics at the Naval Postgraduate School in Monterey, California. He was formerly a senior economist with President Ronald Reagan\u2019s Council of Economic Advisers.\n\n\n\n\nFurther Reading\n\u00a0\nKrueger, Anne O. \u201cThe Political Economy of the Rent-Seeking Society.\u201d American Economic Review 64 (1974): 291\u2013303.\nTullock, Gordon. Rent Seeking. Brookfield, Vt.: Edward Elgar, 1993.\nTullock, Gordon. 
\u201cThe Welfare Costs of Tariffs, Monopolies and Theft.\u201d Western Economic Journal 5 (1967): 224\u2013232.\n\u00a0\n\n\n\n\n\n\nRELATED\n CONTENT \nDon Boudreaux on Public Choice\n\nDon Boudreaux of George Mason University talks with EconTalk host Russ Roberts about public choice: the application of economics to the political process. Boudreaux argues that political competition is a blunt instrument that works less effectively than economic competition. One reason for this bluntness is the voting process itself--where intensity does not matter, only whether a voter prefers one candidate to the other. A second reason is that political outcomes tend to be one-size-fits-all, w...\n\nRead This\n Article\n\n\n\n\n\n\n\n\n SHARE\n \n POST:\n \n\n\n\n\n\n\n\n\n\nEnter your email address to subscribe to our monthly newsletter:\n\n\n\n\nCOLLECTION: GOVERNMENT POLICY\n\n\n\n\n The article you\u2019re reading is part of Econlib\u2019s Government Policy collection. Explore other\n Government Policy articles:\n \n\n\n\nFeb 5 2018\nHoover's Economic Policies\n\nSteven Horwitz \n\n\n\nFeb 5 2018\nUnemployment Insurance\n\nDavid Francis \n\n\n\nFeb 5 2018\nThird World Debt\n\nKenneth Rogoff \n\n\n\nFeb 5 2018\nTrucking Deregulation\n\nThomas Gale Moore \n\n\n\n\n\n\n\n\n\n\n\n\nEconlib\n\n\nThe Library of Economics and Liberty\nLiberty Fund, Inc.\n11301 N. 
Meridian Street\nCarmel, IN 46032-4564, USA\neconlib@libertyfund.org\n\n\nAbout\nAbout Us\nContact Us\nPrivacy Policy\n \n\nPublications\nBooks\nArticles\nEconTalk\nEconLog\nVideos\n \n\nResources\nQuickpicks\nCEE Encyclopedia\nCollege Guides\nHigh School Guides\n \n\n\nSign up for our newsletter\nEnter your email address to subscribe to the Econlib monthly newsletter.\n\n\n\n\n\n\u00bb\n\n\n\n\nLiberty Fund, Inc.\n11301 N. Meridian Street\nCarmel, IN 46032-4564, USA\ninfo@libertyfund.org\n\n\n\n\n\n\n \n\n\n\n\n\n\u00a9 2023 Econlib, Inc. All Rights Reserved. Part of the Liberty Fund Network.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","rejected":"\n\n\n\n\nOff the Convex Path \u2013 Off the convex path\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAbout\n\nSubscribe\n\n\n\n\n\n\n\n\n\nOff the Convex Path\n\n\nContributors\n\nSanjeev Arora\nNisheeth Vishnoi\nNadav Cohen\n\nFormer contributors:\n\nMoritz Hardt\n\nMission statement\nThe notion of convexity underlies a lot of beautiful mathematics. When combined with computation, it gives rise to the area of convex optimization that has had a huge impact on understanding and improving the world we live in. However, convexity does not provide all the answers. Many procedures in statistics, machine learning and nature at large\u2014Bayesian inference, deep learning, protein folding\u2014successfully solve non-convex problems that are NP-hard, i.e., intractable on worst-case instances. Moreover, often nature or humans choose methods that are inefficient in the worst case to solve problems in P.\nCan we develop a theory to resolve this mismatch between reality and the predictions of worst-case analysis? 
Such a theory could identify structure in natural inputs that helps sidestep worst-case complexity.\nThis blog is dedicated to the idea that optimization methods\u2014whether created by humans or nature, whether convex or nonconvex\u2014are exciting objects of study and often lead to useful algorithms and insights into nature. This study can be seen as an extension of classical mathematical fields such as dynamical systems and differential equations, among others, but with the important addition of the notion of computational efficiency.\nWe will report on interesting research directions and open problems, and highlight progress that has been made. We will write articles ourselves as well as encourage others to contribute. In doing so, we hope to generate an active dialog between theorists, scientists and practitioners and to motivate a generation of young researchers to work on these important problems.\nContributing an article\nIf you\u2019re writing an article for this blog, please follow these guidelines.\n\n\n\n\n\n\n\n\n Theme available on Github.\n \n\n\n\n\n\n\n\n\n"}]