  • HN is being ... surprisingly on point with this one. Choice quotes:

    “However, reading this article about all these people at their "Galt's Gultch", I thought — "oh, I guess he's a rhinoceros now" ” — https://news.ycombinator.com/item?id=44317651

    “It's very telling that some of them went full "false modesty" by naming sites like "LessWrong", when you just know they actually mean "MoreRight" ” — https://news.ycombinator.com/item?id=44319387

    “I feel like I'm witnessing something that Adam Curtis would cover in the last part of The Century of Self, in real time.” — https://news.ycombinator.com/item?id=44317313

    etc etc.

    • jhbadger:

      As Adam Becker shows in his book, EAs started out being reasonable "give to charity as much as you can, and research which charities do the most good" but have gotten into absurdities like "it is more important to fund rockets than help starving people or prevent malaria because maybe an asteroid will hit the Earth, killing everyone, starving or not".

      I haven't read Becker's book and probably won't spend the time to do so. But if this is an accurate summary, it's a bad sign for that book, because plenty of EAs were bonkers all along.

      As journalists and scholars scramble to account for this ‘new’ version of EA—what happened to the bednets, and why are Effective Altruists (EAs) so obsessed with AI?—they inadvertently repeat an oversimplified and revisionist history of the EA movement. It goes something like this: EA was once lauded as a movement of frugal do-gooders donating all their extra money to buy anti-malarial bednets for the poor in sub-Saharan Africa; but now, a few EAs have taken their utilitarian logic to an extreme level, and focus on ‘longtermism’, the idea that if we wish to do the most good, our efforts ought to focus on making sure the long-term future goes well; this occurred in tandem with a dramatic influx of funding from tech scions of Silicon Valley, redirecting EA into new cause areas like the development of safe artificial intelligence (‘AI-safety’ and ‘AI-alignment’) and biosecurity/pandemic preparedness, couched as part of a broader mission to reduce existential risks (‘x-risks’) and ‘global catastrophic risks’ that threaten humanity’s future. This view characterizes ‘longtermism’ as a ‘recent outgrowth’ (Ongweso Jr., 2022) or even breakaway ‘sect’ (Aleem, 2022) that does not represent authentic EA (see, e.g., Hossenfelder, 2022; Lenman, 2022; Pinker, 2022; Singer & Wong, 2019). EA’s shift from anti-malarial bednets and deworming pills to AI-safety/x-risk is portrayed as mission-drift, given wings by funding and endorsements from Silicon Valley billionaires like Elon Musk and Sam Bankman-Fried (see, e.g., Bajekal, 2022; Fisher, 2022; Lewis-Kraus, 2022; Matthews, 2022; Visram, 2022). A crucial turning point in this evolution, the story goes, includes EAs encountering the ideas of transhumanist philosopher Nick Bostrom of Oxford University’s Future of Humanity Institute (FHI), whose arguments for reducing x-risks from AI and biotechnology (Bostrom, 2002, 2003, 2013) have come to dominate EA thinking (see, e.g., Naughton, 2022; Ziatchik, 2022).

      This version of events gives the impression that EA’s concerns about x-risk, AI, and ‘longtermism’ emerged out of EA’s rigorous approach to evaluating how to do good, and has only recently been embraced by the movement’s leaders. MacAskill’s publicity campaign for WWOTF certainly reinforces this perception. Yet, from the formal inception of EA in 2012 (and earlier) the key figures and intellectual architects of the EA movement were intensely focused on promoting the suite of causes that now fly under the banner of ‘longtermism’, particularly AI-safety, x-risk/global catastrophic risk reduction, and other components of the transhumanist agenda such as human enhancement, mind uploading, space colonization, prediction and forecasting markets, and life extension biotechnologies.

      To give just a few examples: Toby Ord, the co-founder of GWWC and CEA, was actively collaborating with Bostrom by 2004 (Bostrom & Ord, 2004), and was a researcher at Bostrom’s Future of Humanity Institute (FHI) in 2007 (Future of Humanity Institute, 2007) when he came up with the idea for GWWC; in fact, Bostrom helped create GWWC’s first logo (EffectiveAltruism.org, 2016). Jason Matheny, whom Ord credits with introducing him to global public health metrics as a means for comparing charity effectiveness (Matthews, 2022), was also working to promote Bostrom’s x-risk agenda (Matheny, 2006, 2009), already framing it as the most cost-effective way to save lives through donations in 2006 (User: Gaverick [Jason Gaverick Matheny], 2006). MacAskill approvingly included x-risk as a cause area when discussing his organizations on Felificia and LessWrong (Crouch [MacAskill], 2010, 2012a, 2012b, 2012c, 2012e), and x-risk and transhumanism were part of 80K’s mission from the start (User: LadyMorgana, 2011). Pablo Stafforini, one of the key intellectual architects of EA ‘behind-the-scenes’, initially on Felificia (Stafforini, 2012a, 2012b, 2012c) and later as MacAskill’s research assistant at CEA for Doing Good Better and other projects (see organizational chart in Centre for Effective Altruism, 2017a; see the section entitled “ghostwriting” in Knutsson, 2019), was deeply involved in Bostrom’s transhumanist project in the early 2000s, and founded the Argentine chapter of Bostrom’s World Transhumanist Association in 2003 (Transhumanismo.org, 2003, 2004). Rob Wiblin, who was CEA’s executive director from 2013-2015 prior to moving to his current role at 80K, blogged about Bostrom and Yudkowsky’s x-risk/AI-safety project and other transhumanist themes starting in 2009 (Wiblin, 2009a, 2009b, 2010a, 2010b, 2010c, 2010d, 2012). In 2007, Carl Shulman (one of the most influential thought-leaders of EA, who oversees a $5,000,000 discretionary fund at CEA) articulated an agenda that is virtually identical to EA’s ‘longtermist’ agenda today in a Felificia post (Shulman, 2007). Nick Beckstead, who co-founded and led the first US chapter of GWWC in 2010, was also simultaneously engaging with Bostrom’s x-risk concept (Beckstead, 2010). By 2011, Beckstead’s PhD work was centered on Bostrom’s x-risk project: he entered an extract from the work-in-progress, entitled “Global Priority Setting and Existential Risk: Crucial Ethical Considerations” (Beckstead, 2011b), in FHI’s “Crucial Considerations” writing contest (Future of Humanity Institute, 2011), where it was the winning submission (Future of Humanity Institute, 2012). His final dissertation, entitled On the Overwhelming Importance of Shaping the Far Future (Beckstead, 2013), is now treated as a foundational ‘longtermist’ text by EAs.

      Throughout this period, however, EA was presented to the general public as an effort to end global poverty through effective giving, inspired by Peter Singer. Even as Beckstead was busy writing about x-risk and the long-term future in his own work, in the media he presented himself as focused on ending global poverty by donating to charities serving the distant poor (Beckstead & Lee, 2011; Chapman, 2011; MSNBC, 2010). MacAskill, too, presented himself as doggedly committed to ending global poverty....

      (Becker's previous book, about the interpretation of quantum mechanics, irritated me. It recapitulated earlier pop-science books while introducing historical and technical errors, like getting the basic description of the EPR thought-experiment wrong, and butchering the biography of Grete Hermann while acting self-righteous about sexist men overlooking her accomplishments. See previous rant.)

      • That Carl Shulman post from 2007 is hilarious.

        After years spent studying existential risks, I concluded that the risk of an artificial intelligence with inadequately specified goals dominates. Attempts to create artificial intelligence can be expected to continue, and to become more likely to succeed in light of increased computing power, neuroscience, and intelligence-enhancements. Unless the programmers solve extremely difficult problems in both philosophy and computer science, such an intelligence might eliminate all utility within our future light-cone in the process of pursuing a poorly defined objective.

        Accordingly, I invest my efforts into learning more about the relevant technologies and considerations, increasing my earnings capability (so as to deliver most of a large income to relevant expenditures), and developing logistical strategies to more effectively gather and expend resources on the problem of creating AI that promotes (astronomically) and preserves global welfare rather than extinguishing it.

        Because the potential stakes are many orders of magnitude greater than relatively good conventional expenditures (vaccine and Green Revolution research), and the probability of disaster much more likely than for, e.g. asteroid impacts, utilitarians with even a very low initial estimate of the practicality of AI in coming decades should still invest significant energy in learning more about the risks and opportunities associated with it. (Having done so, I offer my assurance that this is worthwhile.) Note that for materialists the possibility of AI follows from the existence proof of the human brain, and that an AI able to redesign itself for greater intelligence and copy itself would have the power to determine the future of Earth-derived life.

        I suggest beginning with the two articles below on existential risk, the first on relevant cognitive biases, and the second discussing the relation of AI to existential risk. Processing these arguments should provide sufficient reason for further study.

        The "two articles below" are by Yudkowsky.

        User "gaverick" replies,

        Carl, I'm inclined to agree with you, but can you recommend a rigorous discussion of the existential risks posed by Unfriendly AI? I had read Yudkowsky's chapter on AI risks for Bostrom's bk (and some of his other SIAI essays & SL4 posts) but when I forward them to others, their informality fails to impress.

        Shulman's response begins,

        Have you read through Bostrom's work on the subject? Kurzweil has relevant info for computing power and brain imaging.

        Ray mothersodding Kurzweil!

      • nah it's good, you should read it

    • astrange:

      They're members of a religion which says that if you do math in your head the right way you'll be correct about everything, and so they think they're correct about everything.

      They also secondarily believe everyone has an IQ which is their DBZ power level; they believe anything they see that has math in it, and IQ is math, so they believe anything they see about IQ. So if you avoid trying to find out your own IQ you can just believe it's really high and then you're good.

      Unfortunately this led them to the conclusion that computers have more IQ than them and so would automatically win any intellectual DBZ laser beam fight against them / enslave them / take over the world.

    • s/o to one of the comments basically saying that Scott is the Simone de Beauvoir of rationalism

  • It's probably been discussed a shit-ton already, but boy does this guy not get why he's a chud. Consider these two quotes, heavily edited for brevity:

    I’m [...] a liberal Zionist, [...] etc. ([an identity] well-enough represented at LessOnline [...]).

    and:

    The closest to right-wing politics that I witnessed at LessOnline was [moderate politics].

    Of course, one shouldn't expect s11n.blog to understand the fascist, let alone right-wing, nature of liberalism or Zionism. That is simply how fascists are.

  • “You’re Scott Aaronson?! The quantum physicist who’s always getting into arguments on the Internet, and who’s essentially always right, but who sustains an unreasonable amount of psychic damage in the process?”

    And then everybody clapped.

    (This is extra funny because he lost friends over the Gaza genocide debate: when his left-wing (and Jewish) friend told him 'well, we do have power over them' re the protesting students, he, like a good Rationalist, replied with 'FUCK YOU'. He himself describes the situation slightly differently, but he has shown he doesn't always have the best ability to understand others in these kinds of emotional moments, and he cannot fathom that he might be wrong (and that is how you end up stealing from the tip jar)).

    E: And so many references to 'the sneerers' and our arguments again; he promised he would stop reading our shit because it is unhealthy for him. But looking at the arguments, I'm happy to see that he has indeed not read our stuff. The I in TESCREAL/TREACLES(*) stands for Incel.

    Also, while the Rationalists are not incels, what does the name of your blog stand for, Scott? (E: wanted to edit in a link but can't find the explainer page; wonder if he read it again, went 'wow yeah I get why people think that isn't great', and deleted it (I did find this, congrats on being consistently wrong)).

    *: still think these are dumb abbreviations, but not letting that get in the way of a dumb joke.
