From f1c12409279f4fcb2191aaacd2cbd424503e1237 Mon Sep 17 00:00:00 2001
From: issa
Date: Wed, 7 Mar 2018 17:09:42 -0800
Subject: [PATCH] (No edit information)

---
 "Wei_Dai\342\200\231s_views_on_AI_safety.page" | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git "a/Wei_Dai\342\200\231s_views_on_AI_safety.page" "b/Wei_Dai\342\200\231s_views_on_AI_safety.page"
index c0dc2fc..fb16a98 100644
--- "a/Wei_Dai\342\200\231s_views_on_AI_safety.page"
+++ "b/Wei_Dai\342\200\231s_views_on_AI_safety.page"
@@ -15,7 +15,7 @@ categories: AI_safety
 |How "prosaic" AI will be||
 |Kind of AGI we will have first (de novo, neuromorphic, WBE, etc.)||
 |Difficulty of philosophy|Philosophy is hard. Wei has discussed this in many places. See [here](http://lesswrong.com/lw/ig9/outside_views_and_miris_fai_endgame/9o19) for one discussion. See [here](http://lesswrong.com/lw/jgz/aalwa_ask_any_lesswronger_anything/dx46) for a recent comment. I'm not aware of a single comprehensive overview of his views on the difficulty of philosophy.|
-|How well we need to understand philosophy before building AGI|We need to understand philosophy well. See some of the discussions with Paul Christiano. See also threads like [this one](http://lesswrong.com/lw/ua/the_level_above_mine/8b8u).|
+|How well we need to understand philosophy before building AGI|We need to understand philosophy well. See some of the discussions with Paul Christiano. See also threads like [this one](http://lesswrong.com/lw/ua/the_level_above_mine/8b8u). "I think we need to solve metaethics and metaphilosophy first, otherwise how do we know that any proposed solution to normative ethics is actually correct?"[^metaethics_first]|
 |How much alignment work is possible early on|"My model of FAI development says that you have to get most of the way to being able to build an AGI just to be able to *start* working on many Friendliness-specific problems, and solving those problems would take a long time relative to finishing rest of the AGI capability work."[^fai_dev]|
 
 # See also
@@ -26,4 +26,6 @@ categories: AI_safety
 
 - [Timeline of Wei Dai publications](https://timelines.issarice.com/wiki/Timeline_of_Wei_Dai_publications)
 
-[^fai_dev]: [“Wei\_Dai comments on How does MIRI Know it Has a Medium Probability of Success?”](http://lesswrong.com/lw/i7p/how_does_miri_know_it_has_a_medium_probability_of/9j6s) LessWrong. Retrieved March 8, 2018.
\ No newline at end of file
+[^fai_dev]: [“Wei\_Dai comments on How does MIRI Know it Has a Medium Probability of Success?”](http://lesswrong.com/lw/i7p/how_does_miri_know_it_has_a_medium_probability_of/9j6s) LessWrong. Retrieved March 8, 2018.
+
+[^metaethics_first]: [“Wei\_Dai comments on AALWA: Ask any LessWronger anything”](http://lesswrong.com/lw/jgz/aalwa_ask_any_lesswronger_anything/ap84). LessWrong. Retrieved March 8, 2018.