The high water mark of any movement is also the beginning of its decline. In October 2022, I attended the Effective Altruism Global conference — the first major conference the Effective Altruists held since their explosion onto the mainstream cultural scene thanks to the PR blitz accompanying Will MacAskill’s What We Owe The Future, and the final conference the group would hold before the implosion of EA wunderkind Sam Bankman-Fried’s FTX crypto exchange in early November.
The scene at the keynote speech should have been a gigantic red flag. This was the triumphant Washington DC debut of a hot young political movement from Oxford and the Bay, flush with cash and talent, with strong moral commitments to boot. And it concluded with an audience of hundreds of brilliant, altruistic young people, in the auditorium of the Ronald Reagan Building and International Trade Center, getting told (correctly) by Matt fucking Yglesias that their vibes were off.
There have been waves of self-reckoning, tell-alls and takedowns within the EA movement — how, exactly, did they get from “let’s figure out which charities are most efficient at helping people” to deciding that there are only three (in fact, one) ethical principles:
A. Grow the EA movement
B. Make as much money as possible
In service of
1. Save humanity from imminent destruction by runaway AI
The downsides of A and B have become clear over the past four months. But, ironically, this same period seems to have vindicated at least the spirit of 1: ChatGPT and related LLMs have thrust AI into public awareness and kickstarted large-scale financial interest in this technology. If AI turns out to be a major technology in the near future, EA’s epistemic investment seems justified. Indeed, a paper arguing for the use of LLMs as “silicon samples” that could replace human subjects has just been published in a prominent Political Science journal; I think this is bad.
So how do we make sense of this? I don’t think we can, yet. The story of EA is far from finished. But I believe that this group and its descendants will play a major role in shaping the coming decade. The energies they have unleashed will continue to develop, and the direction of this development will be a defining political question. The long era of Boomer Ballast is coming to a close, and the future will be built by whoever is able to step into the vacuum this too-powerful generation leaves behind.
It’s still too early to tell whether the French Revolution was a success, I’m told, but this is precisely the question at hand: what is the legacy of Liberal Reason in the 21st century? The spatial metaphor of “Left” and “Right” originated in the physical layout of the French National Assembly, with the royalists on the King’s right hand and the revolutionaries on his left. These categories are losing their meaning; the emerging dimensions of conflict are pro- versus anti-establishment, local versus global, based versus cringe.
From a media theory standpoint, the latter is the most important, but it is still downstream of the crucial question of how we should use the internet. Effective Altruists have an answer to this question—raised on the simplified ontology of video games and cutthroat meritocracy, they minmaxed their way to one pole of the dialectic set in motion by the Enlightenment. These are, in a technical sense, the least cool people in the world because they are the most rational: they are maximally interested in discovering the right rules and then living by them alone. Their ideal is the total realization of the rational liberal subject, the triumph of codes over the sloppy excess of human existence.
I wrote recently that "If you and your community have not invested serious energy taking advantage of the internet revolution -- if you do not have a concrete set of norms, practices and institutions designed to allow you to use the internet without the internet using you -- you are destined to lose. In fact, you’re not even trying to win."
The Effective Altruists are trying to win. So, too, are the Dimes Square / Urbit crowd, the antithesis of EA: abject aesthetes who care only about vibes, whose ideal is the total annihilation of the liberal subject, the dissolution of the individual into a purely rhizomatic, relational node in a new networked spirituality. This crowd is far messier, but they are broadly illiberal. This includes people like monarchist Curtis Yarvin, aspiring Grima Wormtongue to whichever tech billionaire manages to restore order to our decadent democracy and become God-CEO. Also present is the accelerationist strand, the amphetamine-fueled ravings of anti-humanist philosopher Nick Land. In the same intellectual tradition as self-styled “art extremists” like Charlotte Fang and the rest of the Remilia Collective, this current strikes me as more dangerous because far more plausible: they aspire to fully aestheticized neoliberalism without any of the messy humans slowing down technocapital's runaway feedback loop.
As a liberal conservative, I hope they all fail. As an older American Millennial, I mostly just want to go back to the golden age of my youth, bookended by the fall of the Berlin Wall and Lehman Brothers. But we’re welllll past that, and things are only going to get weirder and weirder. The best weird that I can see involves some kind of synthesis of these two young “movements,” the creation of something genuinely novel rather than gigantism of one latent trend or the other.
I have only the vaguest idea of what this looks like. But I’m hoping that this “scene report” is a step in the right direction. There have been hundreds of scene reports about Dimes Square / Urbit; the medium is the message, so these parties are all vibey, druggy and lacking substance. Indeed, the inability to neatly summarize Dimes Square / Urbit in a single term, the fact that this “movement” consists of little more than runaway media feedback loops, stands in stark contrast to the institutionalized, hierarchical and self-appointed Effective Altruism Movement.
Scott Alexander notes that EA is addicted to big-picture criticism of itself. That’s true, but only if that criticism is phrased in EA’s preferred medium: immaculately logical posts on the EA forum. The criticism that EA needs cannot be delivered in that medium; the reform that EA needs is media-theoretic, and more fundamentally, aesthetic. Hence, this.
The vibes of EAGlobal were off from the start. I hate DC. I biked down from my hotel on Embassy Row, past the recently-militarized White House to the hideous Ronald Reagan Building and International Trade Center. There was a short line to go through the metal detectors. This struck me as unnecessary, but with that many high-agency young millenarians, you can't be too careful.
Most conferences (and all academic conferences not held in Palo Alto or a castle in, like, Tuscany) are tightly budget-constrained but desperate to project professionalism and gravitas. This was the opposite, with a twist of ethical fanaticism.
I’d guess they spent over $1k per person, for the venue, catering, and staff. Registration was $200 (£200, in fact, reinforcing the Oxford connection) but included both a sliding scale option to pay less and an option to pay more to subsidize other attendees. You could apply for travel funding, too; after the event, they emailed everyone telling them to submit receipts—they even emailed me, though I hadn't applied.
There was unlimited free food (catered meals, coffee, fancy snacks, a rainbow spread of La Croix) for over 12 hours a day and unlimited free beer and wine for four hours every night. This open bar would’ve been absolutely slammed at every academic conference I’ve seen, but the half-dozen bartenders were bored all night.
There was also unlimited free swag: totes, t-shirts and hardcover editions of Will MacAskill's new book, What We Owe The Future. But none of it was fancy, and no one was trying to project gravitas: the strength of their shared moral and epistemological self-importance made projection unnecessary, even redundant. It’s not hard to imagine the famously schlubby SBF fitting in nicely.
The food was all vegan! And not good vegan, just nasty, catered vegan. Faux chicken nuggets. Soysauges. Nut cheese. I drank bitter black coffee all weekend; the only creamer options were oat and soy. The only animal-product replacement I was tempted to try was the breakfast "quiches"—crispy on the outside but eerily gooey on the inside, even when lukewarm.
My first encounter was weird in a way I hadn't realized was possible. As I was chatting with an acquaintance (one of the maybe 100 people, out of 1,300, who were over thirty years old), a non-gender-conforming person strolled near enough to us to listen to our conversation, but still far enough away to not obviously be trying to participate. I wanted to be welcoming, and shifted my body and focus to include them; no response. So I said "hey! We're talking about social science reform and the impossibility of generalizable knowledge, wanna join?"
My hunch with this crowd is that it's best to just dive into the deep end. They might say "oh nah I don't care about that" or "can you give me some context" or some other blunter-than-normal response; this tolerance for information-dense socializing is a real strength of the community norms. While I enjoy social niceties, I'm also...neurocognitively aligned enough for maximum information upload/download.
Or so I thought. Our young interlocutor said something to the effect that it was impossible, but that we still had a moral obligation to try.
I said, "You're saying social science is impossible?"
"No. Communication."
We tried to get on the requisite level but apparently couldn't hold their interest; after a few minutes of impossibility, they abruptly turned, picked up an unflavored La Croix, and walked away. Communication may be impossible, after all.
The first night, I got tipsy on two glasses of pinot grigio after eating just olives for dinner. To be clear, I have a lot of respect for EA’s philosophical commitment to animal welfare, but it was stupid of me to go to this conference while doing keto. For what I'm assuming were covid reasons, the food was all individually packaged, so I had to carry four of these massive clamshells of 9 olives each to a seminar on forecasting and prediction markets by an energetic young co-founder of Manifold Markets. I’ve long been a fan of prediction markets, and they’re a major part of the epistemic technology of EA.
The seminar turned out to be on the 8th floor of a tower at the opposite end of the building, and I walked over with two adults (people over 30) and a passel of bubbly teenagers. I teach undergrads, so I'm used to the nervous energy of a mixed-gender group in a social state of exception, but there was something else at play here. They barely seemed able to register our existence. And of course they weren't cool, not at all; they seemed animated by the confidence of their knowledge, the confidence that no one over 30 could possibly know anything of interest.
The hippie Boomers accomplished this youthism in the 60s through the confluence of sex, drugs (psychedelic), benign economic conditions, and the existential threat of nuclear war. The EAs have the internet, drugs (nootropic), malign economic conditions and the existential threat of AI. And a big part of how they use the internet is through online prediction markets like Manifold.
This technology solves a genuine problem. The standard of predictions in punditry is abysmal. The metascientist Phil Tetlock argues that the status quo prevents us from ever learning from our mistakes. Thanks to him and to the next generation of epistemic reformers, people are open to a wide range of new possible institutions.
But the EAs are once again a few steps ahead. They've already considered the options and are all the way in on prediction markets. First championed by iconoclastic economist Robin Hanson, prediction markets work by commodifying the central tendencies of individual thought, aggregating the heterogeneous judgments of multitudes of otherwise disconnected humans into auguries about binary states of the world.
"Legacy" prediction markets, already multiple years old, suffer from the authoritarianism of the agenda setter: the people who run the thing are the ones who choose the questions, and perhaps worse, are the ones who choose the specifics of the prediction and how to decide whether it came true. Manifold’s value proposition is the standard neoliberal reform: introduce competition to protect the consumer and brand recognition as the regulatory mechanism.
Our assignment in the workshop was to collaborate with our tablemates to create a prediction market on the Manifold app. I proposed that our prediction should be that an article about this event would be published online in the next week. I bought the "yes" and watched the price of "yes" plummet as the smart money said that nobody gave a shit. Some critics worry that prediction markets about specific, short-term events encourage insider trading—like bookies paying off boxers to throw the bout. Defenders of prediction markets think that insider trading is great: it brings information into the noosphere faster than the events themselves unfold.
In either event, insider trading doesn't always work. Sometimes the insider overestimates their ability to sit down and write.
But what if I had phrased the question slightly differently? "Will the NYT mention the words 'effective altruism' more than 100 times this year?" This prediction seems to be getting at the same underlying concept but operationalizes it in a very different way. My prediction was wrong; the alternatively phrased prediction would have been right, very right, but for an unexpected reason.
Prediction markets require a disembodied, transcendent conception of knowledge. For problems where human institutions have already reduced the complexity of the world – things like “Who will be elected President in 2024?” – this strategy is just about the best we can do. But it does not generalize, and shouldn’t be forced to. This is not how embodied knowledge works, which is to say, it is not how humans work. Confronted by the internet, the quintessentially print-based ideology of liberal reason can either fold or double the fuck down. Effective Altruism did the latter.
I spent the rest of the evening chastely flirting with an overdressed mother of three with a PhD who worked at the CDC. She was amused by the youthful energy of the event and bemused by the ideology; she was mostly there to recruit high-end talent, of which there was plenty. But she confessed a genuine sympathy for the movement. She had been raised a fundamentalist Christian, and this was the best secular replacement she had found. I could certainly see the parallels.
The next morning, I biked down to a Whole Foods near GWU and demolished a pound of scrambled eggs and sausage. I can concede the long-term nutritional viability and ethical superiority of a vegan diet, but I could not face debating the AI apocalypse for 12 hours on a diet of crudites, beet hummus and vegan gruel (is gruel always vegan?).
My first semi-official “one-on-one” was weird in exactly the way I expected for the conference: a PhD student on indefinite leave to work full time on the AI Alignment problem. They were intellectually interested in my pitch for science reform; they thought I had diagnosed the problem well and that my action plan sounded plausible. But none of it could possibly matter, they said, unless it led to progress on the AI alignment problem in the next ten years, at which point the future of humanity would be irrevocably decided: Skynet or Star Trek.
I am not exaggerating. To be slightly more charitable to this strand of EA thinking, my interlocutor was convinced that spending as much of their professional and intellectual effort as possible on the AI Alignment problem would maximize the positive long-term utility they could generate for humanity.
To their credit, they had put their money where their mouth was, fully abandoning the institutional warren of the PhD student for the brave new world of an AI alignment nonprofit. Unfortunately for them, their mouth (and now their money) turned out to be full of hot air from Sam Bankman-Fried's ass. Perhaps this was the concept of asymmetric risk in action—if you succeed, you've saved the human race; if you get bamboozled by a charismatic (?) grifter, the worst consequence is that you've got to slink back to grad school.
I spent most of the day in these 25-minute one-on-one meetings that we had been strongly encouraged to set up in advance. The conference app was dramatically superior to anything I'd used before. Everyone had a profile and we were algorithmically recommended to each other, like eHarmony for saving the world. At basically any other conference I've been to, everyone would have been too socially unsure to play along. Here, I got plenty of meeting requests and sent several myself, all of which were accepted. It all just worked — a genuinely impressive accomplishment of technology and community norms.
Most of the people I talked to were academics of various stripes, and we had normal-ish, productive conversations. I met an ex-Amish guy who started college at 24, not knowing arithmetic, who had benefited dramatically from Khan Academy and was just starting a PhD in applied Number Theory; I was impressed, but wondered if he told his life story in each of these meetings, and why. I met a private high school teacher who had developed course materials for a seminar on existential risk; a normal, earnest guy just trying to Do Good, Better (™). I met some anarchist programmers developing epistemic software, and a woman who had just started an elite MD/PhD just trying to Do Good, The Best.
There were also a few sessions with EA VIPs. A highlight was a virtuosic Q&A session with Tyler Cowen that covered huge intellectual territory in his high-information-density style; I asked him if the US should ban TikTok.
For dinner, I rode an e-scooter to the Whole Foods down by the Navy Yard, had chicken wings and a few hard seltzers at the in-house bar, then watched the Eagles game. I can't imagine what would have happened if I had tried to talk about football back at EA. It was time for a meeting about designing better epistemic institutions. This was my wheelhouse—I know this subsubsubfield of social science as well as anyone. I'll spare you the details, but I had a blast and three glasses of pinot grigio poured almost to the brim by the army of amiable but entirely unoccupied bartenders.
It's 9:15pm and I realize I'm hammered, sitting in the overlit atrium of the Ronald Reagan Building and International Trade Center. The center of the room is covered with beanbags (what is it with these people and beanbags?), which are covered in Effective Altruists. Some of them are sprawled on the floor, deep in conversations that I can barely imagine. I’m sitting on a couch next to an aspiring reactionary; his takes are tepid (wokeism is slowing down AI progress; a bit more virtue ethics and a bit less utilitarianism) but the conservative intellectual flank of EA is so wide open that he’s made a bit of a splash.
Eliezer Yudkowsky—perhaps the most important intellectual in the Rationalism movement, related to and now co-evolving with Effective Altruism—stalks the room, just waiting for an acolyte to pluck up the courage to approach him. They do, in constant succession. Again, this is a big improvement over standard academic conferences, where senior professors hold forth and grad students need to be introduced by someone of higher status, like a Victorian period drama complete with rank and pedigree.
Rather than the aristocratic approach, EAs rely on illegibility through mountains of text and elaborate jargon. If you try to argue with someone on the Rationalist LessWrong forum and you haven’t read Yudkowsky’s “The Sequences,” you’ve got no chance. It’s like today’s Marxists, for whom reading the Grundrisse is a prerequisite for deciding whether to vote for Bernie or work on mutual aid projects.
The Urbit Yin to The Sequences' Yang is Yarvin's "A Gentle Introduction to Unqualified Reservations." These are the hardcore weedout texts, the Organic Chemistry of internet political theory. Each of them realized the need for an on-ramp to their philosophy.
Yarvin's is "An Open Letter to Open-Minded Progressives." Self-aware, honeyed propaganda: the Red Pill, not for the losers who can't handle today's gender relations, but for the losers who can't handle liberal democracy. 120,000 words—nearly twice as long as my book, and about 20% longer than The Hobbit.
Yudkowsky's version is infinitely cringier and over 5x as long. "Harry Potter and the Methods of Rationality" (HPMOR) has 100,000 more words than the Lord of the Rings trilogy and the same number of fans within the EA community (all of them). The difference is that Yarvin's prolix transgression is merely Proof-of-Work for the Urbit "I Like Art"-type kids, while HPMOR is literally supposed to replace religion, or culture, as a system of thought for adolescents.
From one perspective, Yudkowsky’s peripatetic interactions with admirers echoed the classical form of the old master sparring with young disciples, the very picture of Socratic dialogue. But this ideal only works if new information is generated through dialogue, if the neophytes come with a wealth of individual knowledge to be refined in the master’s system. The failure mode of EA is that of the cult: if the system is too successful, too totalizing, it creates epistemic closure and the dominance of discourse over dialogue. I’m still a few Sequences short, so I didn’t dare intrude, opting to go walk around the Washington Monument with the kids smoking weed and listening to Bad Bunny on shitty bluetooth speakers.
Sunday is more of the same: failing to palliate my hangover with a tureen of black coffee and seven coconut LaCroix, alternating between normal and insane conversations, roaming the Ronald Reagan Building and International Trade Center. I spot a professor I know from my normal research; unsure how to proceed, I default to awkwardly using the app to schedule the last time slot of the day with him.
I assumed that he was also a secret EA sympathizer; turns out that he was just there to see a talk by a recent Econ Nobel Prize winner that was relevant to his research. The first thing he asked me was “What the hell is going on here?” Not wanting to reveal my power level, I just said I was there to talk about metascience and that this was weird for me too. We watched the teens mill about the atrium, their energy undiluted, before the closing keynote: a dialogue between Yglesias and Kelsey Piper, a journalist from Vox. If you recognize Piper’s name, that may very well be because she writes for Vox’s SBF-funded Future Perfect vertical and parlayed her cozy relationship with the FTX founder into an astonishingly frank interview over Twitter DM just as Bankman-Fried was settling into his role as the company’s “now-disgraced former CEO.”
By this point, I had fully adopted the EAs’ infectious sense of their own moral rectitude and boundless capacity to change the world. This was a poor mental state in which to listen to a self-satisfied DC insider who did not seem to appreciate the gravity of what The Movement was about to accomplish, and the room was underwhelmed by Yglesias.
The vibes were off, he said: we all need to learn to wear a suit, shake someone’s hand and look them in the eye. And what a cynic! To paraphrase:
So you’re the new kids on the block, trying to make a splash, throw some money around and become serious players in Washington. You’re talking all this stuff about saving the world, pandemic prevention and artificial intelligence. The average lobbyist/political flack isn’t going to blink, they’re just going to ask “So what do these guys really want? This moral philosophy stuff is just a cover for looser crypto regulations, right?”
As I said, the story of EA is far from over. I hope they can adapt, to preserve the beautiful impulse and channel their youthful energies into still-radical but more human-scale activities. More Hume and less Mill, of course, but also more vibes and less brute-force cognition. You’re all very smart, much smarter than the people with power, but as long as human freedom persists, the wicked problems will always remain out of reach.
Because the ultimate tragedy for this group of maximally unorthodox, ambitious and well-intentioned young people would be to remain nothing more and nothing less than what Matt Yglesias expected.
If you liked or hated this, check out this recent conversation with my friends at New Models, the internet community that is cool and rational.