EA anchors too hard on existing orgs/ideas/strategies #2
Example: Lack of productive competition between orgs
Summary: "To encourage this, I'd love to see more support for individuals doing great projects who are better suited to the flexibility of doing work independently of any organization, or who otherwise don't fit a hole in an organization."
Date: 8 Feb 2017
Status:
Lag to response:
Current canonical instance:
Prior status of critic:
Fundamental criticism:
Public responses:

Example: Only 'whitelisted' activities/goals are really EA
Summary: If it isn't on the shortlist of approved effective activities, it's a waste of time. Examples of whitelisted things: working at an EA-branded organization, or working directly on AI safety.
Date: 7 Feb 2017
Status:
Lag to response:
Current canonical instance:
Prior status of critic:
Fundamental criticism:
Public responses:

Example: EAs might not actually change their minds much about values and goals, or form new opinions
Summary: As listed
Date: 8 Feb 2017
Status:
Lag to response:
Current canonical instance:
Prior status of critic:
Fundamental criticism:
Public responses: This one doesn't seem to hold up, especially since the mass shift of focus to AI/longtermism. Examples of people who updated their values:

Example: Over-focused, over-confident, over-reliant
Summary:
Date: 1 May 2014
Status:
Lag to response:
Current canonical instance:
Prior status of critic:
Fundamental criticism:
Public responses:
Over-confident:
Over-reliant:

Example: Inconsistent Rigor / Standard of Evidence
Summary: "Effective altruists insist on extraordinary rigor in their charity recommendations—cf. for instance GiveWell's work. Yet for many ancillary problems—donating now vs. later, choosing a career, and deciding how “meta” to go (between direct work, earning to give, doing advocacy, and donating to advocacy), to name a few—they seem happy to choose between the not-obviously-wrong alternatives based on intuition and gut feelings."
Date: 12 Feb 2013
Status:
Lag to response:
Current canonical instance:
Prior status of critic:
Fundamental criticism:
Public responses:

Example: EA has a motivated reasoning problem
Summary: EA has a problem with motivated reasoning and emotional biases which impairs its truth-seeking powers.
Date: 14 Sep 2021
Status:
Lag to response:
Current canonical instance:
Prior status of critic:
Fundamental criticism:
Public responses:

Example: EA makes implicit and mute assumptions
Summary: Looking at the underlying assumptions that create EA culture, and in turn create "intellectual blind spots", specifically relating to homogeneity, hierarchy, and intelligence.
Date: 15 May 2020
Status:
Lag to response:
Current canonical instance:
Prior status of critic:
Fundamental criticism:
Public responses:

Example: EA is overly hierarchical and top-down
Summary: "Cultural norms around intelligence keep diversification at bay. A leader’s position is assumed justified by his intelligence and an apprehension to appear dim, heightens the barrier to voicing fundamental criticism." EA is driven by the notion of solving all the world's problems through the sheer power of intellect. This leads to a pecking order of smarts, which in turn leads to fear of criticising those on top, lest ye be considered dumb. Doubt = lack of understanding. Guru worship.
Date: 15 May 2020
Status:
Lag to response:
Current canonical instance:
Prior status of critic:
Fundamental criticism:
Public responses:

Example: Longtermism and feedback loops
Summary: No way to tell how things are going, since the results won't be known for another 1000 years. Thus feedback tends to come from peers, increasing the risk of groupthink.
Date: 24 Mar 2022
Status:
Lag to response:
Current canonical instance:
Prior status of critic:
Fundamental criticism:
Public responses:

Example: Needs qualitative research
Summary: Too much of a focus on numbers, which can allow mistakes to happen. Such as.
Date: -
Status:
Lag to response:
Current canonical instance:
Prior status of critic:
Fundamental criticism:
Public responses:

Example: Lack of mentorship and guidance
Summary: Too many people going it alone. Nothing designed to increase group effectiveness.
Date: 2 Jul 2017
Status:
Lag to response:
Current canonical instance:
Prior status of critic:
Fundamental criticism:
Public responses:

Example: Neglectedness may be a poor predictor of marginal impact
Summary: The assumption that more good can be done in areas not receiving a lot of attention could be misguided.
Date: 9 Nov 2018
Status:
Lag to response:
Current canonical instance:
Prior status of critic:
Fundamental criticism:
Public responses:

Example: EA is being slow to recognise its own limitations
Summary: "So EA is discovering the limits of the philosophy that underpins it (Rational Choice Theory). It's just slow. It could move much faster by rejecting it and adopting Effectual logic wholesale."
Date: 28 Apr 2022
Status:
Lag to response:
Current canonical instance:
Prior status of critic:
Fundamental criticism:
Public responses:

Example: OpenPhil made inflation worse
Summary: As listed
Date: 24 Mar 2022
Status:
Lag to response:
Current canonical instance:
Prior status of critic:
Fundamental criticism:
Public responses:

Example: Earning to give should have focused more on “entrepreneurship to give”
Summary: Entrepreneurship can offer a potentially higher reward than the tried-and-true path of earning to give as an employee.
Date: 9 Aug 2022
Status:
Lag to response:
Current canonical instance:
Prior status of critic:
Fundamental criticism:
Public responses: