Earlier this year, the release of ChatGPT and subsequent mainstreaming of AI laid bare what, for some people, is the defining crisis of our time: Accelerating progress in artificial intelligence has the potential to literally destroy the world.
I have serious reservations about AI doomerism, but as I said, AI progress is shocking and important and so it seems dangerous to dismiss out of hand this community of concerned rationalists and Effective Altruists, people who have been talking about imminent AI for years.
One of the only “establishment” figures widely respected by this community is economist and applied media theorist Tyler Cowen. I admire Cowen, who has been an influential poster for essentially my entire adult life. To say that someone “understands the internet” is among the highest compliments, in my book, and he does.
I was somewhat surprised, then, that Cowen vehemently (for him) criticized AI alarmism, as did Scott Alexander. (Read all that if you want; it’s not necessary for the point of this essay.)
Cowen was dismissive, not even deigning to articulate but merely to gesture at counterarguments. Others have pushed back against some of these, but today I focus on my area of competency:
“[Scott] does not engage with the notion of historical reasoning (there is only a narrow conception of rationalism in his post), does not consider Hayek and the category of Knightian uncertainty”
This is a genuine weak point for EA and young people in general; it would be useful to read more books. Whatever my personal failings, however, no one can accuse me of not having read Hayek. This background, combined with Flusserian media theory, gives me lots to say about the end of history (in the technical sense) and the concomitant failure of historical reason — in the second half of this essay.
The best defense is a good offense, and Cowen’s habit of cryptic pronouncements has rendered him largely unassailable. Refuting innuendo with exhaustive argumentation is a sucker’s game. So I will first advance the hypothesis that
Tyler Cowen is an information monster.
His capacity to intake and process information is so atypical — and information processing is so central to contemporary life — that his ideal information environment threatens other humans’ flourishing, our dignity.
But that’s a bit bloodless. Cowen concludes:
We should take the plunge. We already have taken the plunge. We designed/tolerated our decentralized society so we could take the plunge.
This is a dangerous technological accelerationist, gloating over what he sees as the impossibility of democratic oversight of technology, or indeed of human choice in any large-scale matter. Who exactly is the “we” who designed/tolerated this, I ask the methodological individualist. I personally know many who have tried (thus far unsuccessfully) to destroy “our decentralized society.”
The superficially amiable food-loving humanist has reached precisely the same conclusion as the cruel amphetamine-popping anti-humanist Nick Land — that it is inevitable that techno-capitalist progress will eliminate humanity as we know it…and that this is good.
I said Cowen was an economist and applied media theorist. The first label is enthusiastically embraced; the second is mine, and reflects Cowen’s remarkably effective adoption of the logic of the internet and the attention economy.
The man has been posting daily links and occasional commentary to his blog for twenty years. (The anniversary, in fact, is next month — mazel.) This is astonishing. And according to Matt Hindman’s cumulative audience theory, it is precisely what he should have been doing. People learn that when they navigate to *his website* every day, they are rewarded with a few minutes’ worth of fresh, surprising (that is, non-redundant) information.
As a scholar of the co-evolution of traditional media and social media in the early and mid 2010s, I can say that Cowen has avoided making the crucial mistake that doomed everyone from Buzzfeed to the Cleveland Plain Dealer: granting control over the distribution of their content to Facebook etc. Thinking they could cut costs and maintain or even grow ad revenue, these companies ended up with zero bargaining power over the platforms, when they weren’t getting actively defrauded.
Cowen’s expansion into podcasting carried the same excellent intuition. Conversations With Tyler involves an interesting guest talking as much as possible, with a minimum of filler and often jarring transitions between questions. Where other podcasts are premised on parasociality and an audience with time to kill, Cowen’s seeks to be as information-dense as possible.
The best CWT guests are unaware of the format. Tyler often plays the game of “overrated or underrated” with his guests; tragicomic hipster Chuck Klosterman advanced this game (including the explicit category of “rated”) in his best book, Sex, Drugs and Cocoa Puffs.
As the Conversation is ending, Klosterman asks the question that’s been on all our minds. “So are you like, friends, with Zizek? What was it like talking to him?” Tyler reveals that they had first met the day of the interview itself, just before hopping into a wide-ranging 90-minute Conversation.
But to be clear, these are not novel insights. Cowen is a self-professed “infovore” who thinks that we live in a golden age of access to information. That’s clearly correct, and I confess that I share some of the cognitive style that makes me see information density as a cardinal value of media.
Cowen is in the habit of throwing out a journalistic account of, say, the market for hired actors to attend funerals in Japan, and enjoining his audience to “model this!” I am in the habit of applying the logic of reflexivity.
So my model of Tyler Cowen is that he is an information monster.
The “utility monster” is a thought experiment proposed by Robert Nozick as a churlish but effective response to vulgar utilitarianism. It posits the existence of an entity with constant or even increasing marginal utility from consumption. According to the logic of utilitarianism, the morally correct allocation of resources is to starve the rest of the world in order to feed the beast, whose monumental utility gains would outweigh the utility losses for the rest.
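The allocation logic is mechanical enough to sketch in a few lines of Python. (The utility functions and every number here are invented for illustration; nothing models real welfare.)

```python
# Toy sketch of Nozick's utility monster. All functions and numbers are
# made up for illustration.

def human_utility(x):
    # a typical consumer: diminishing marginal utility
    return x ** 0.5

def monster_utility(x):
    # the monster: constant (and large) marginal utility
    return 10 * x

# Divide 100 units of resources between one human and the monster, and
# find the split that maximizes total utility, as vulgar utilitarianism asks.
best_human_share = max(
    range(101),
    key=lambda h: human_utility(h) + monster_utility(100 - h),
)
print(best_human_share)  # 0 -- the "optimal" allocation starves the human
```

Because the monster’s marginal utility never dips below the human’s, the total-utility-maximizing split gives the human nothing at all.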
As a young man, I was entertained by ideas like this. Now, this seems like a damning indictment of both utilitarianism and thought experiments. The bits about the utility monster above *seem like* English language sentences. They *resemble* an attempt at communication. But they are in fact meaningless: in no way do they refer to the world.
Communication devoid of meaning is inhuman communication, but it can still be an effective form of control. Indeed, this insight is at the heart of Norbert Wiener’s Cybernetics, the foundational text for post-historical reason. Cybernetics marks the end of history in a technical sense, and it reveals the weakness of Hayekian (and Popperian!) historical reason.
Hayek was brilliant, and prescient, having anticipated some of the most important intellectual developments of postwar science. Mainstream economics was a pompous and elitist endeavor, and Hayek was effective in combining the firsthand sociological insights of Carl Menger with the “praxeology” of von Mises to produce a much more accurate description of how individual economic actors experience the price system.
But that was some time ago. Hayek has been superseded, specifically by cybernetics. The insight in “The Use of Knowledge in Society” is proto-cybernetic, but Hayek never made the connection despite having more than twenty productive years in which to do so. Instead of recognizing the price system as one of many possible cybernetic information-processing systems, he retreated into a mental fortress of frankly second-rate constitutional legal theory, satisfied to treat the price system with mystical reverence.
There are other weaknesses. What Hayek didn’t understand about Hegel could fill a book—specifically, the final third of The Counter-Revolution in Science, a dogmatic and downright whiny tract which provides no insight except into Hayek’s own psyche. (For a more recent example of failing to understand Hegel, see Cowen’s Conversation with Zizek.)
This is not incidental. It is precisely the Anglophile Hayek’s overreliance on that tradition which prevents him from appreciating the intellectual current of Franco-German dialectical historicism which led directly to cybernetics. See Wiener’s invocation of Newtonian time being supplanted by Bergsonian time in Cybernetics.
Cowen argues that “virtually all of us have been living in a bubble ‘outside of history.’” I firmly agree with the first half of this; I wrote a book about “Boomer Ballast” and precisely this dominance of Boomer Realism over our collective imagination. His discussion of “history” is not a merely academic point, at least insofar as this article has any point at all. The logic of historical reason implies a certain continuity in humankind; Flusser and other media theorists, however, point to media technology as causing mutations: the creation of new humans and new human relations.
Historical reason is contingent; it did not always apply, and it no longer applies. It is contingent on the medium of the linear, logical alphabet, of written text, which all media theorists agree is on the way out. This is Hot Flusser Summer, so I’m not going to reiterate the background for Flusser’s arguments.
“History” (like “facts,” in Jon Askonas’ excellent recent essay) was created by linear, written text. The memeplex of “objectivity,” “science,” “history,” “liberalism,” “United States of America” is only possible with this media technology. Cowen says that postwar America has been non-historical; I agree. Where we disagree is whether we will return to history.
Forget the childish meme about Frank Fukuyama being shocked to hear that war still exists; what’s at issue here is whether “stuff happening” is equivalent to the concept “history.” Stuff will still happen, but we will no longer recognize it as history due to the mutations in what it means to be human. History is a situation of linear causal chains, of the past becoming the present and the present becoming the future.
Before linear history came the eternal recurrence of magical time: the sun caused the moon and the moon caused the sun, the seasons cycled through, and human experience was terrifying and strange. Nothing was explained in the sense of a historical causal chain, the familiar scientific “tides are caused by the moon.” History is this attitude applied to human societies, and is co-extensive with the concept of progress.
The current temporality, what I call cybernetic time, is once again circular. But this circularity, of the feedback loop, combines the circle with the line in what Flusser calls “circular progress.” Each time through the feedback loop intensifies the action, so that we return to the same place but with everything amplified. This is what Flusser calls “the circularity not of the wheel, but of the whirlpool.”
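My own gloss on the whirlpool, as a toy sketch (the gain per cycle is an invented parameter, not anything from Flusser):

```python
# Toy sketch of "circular progress": the loop revisits the same phases
# (a wheel) while the signal is amplified each pass (a whirlpool).

GAIN = 1.5      # assumed amplification per pass through the feedback loop
PHASES = 4      # the same four "stations" recur each cycle

signal = 1.0
trajectory = []
for step in range(2 * PHASES):          # two full trips around the circle
    phase = step % PHASES               # where we are on the wheel
    trajectory.append((phase, round(signal, 2)))
    signal *= GAIN                      # each pass intensifies the action

# A wheel would return to (phase 0, amplitude 1.0) forever; the whirlpool
# returns to phase 0 with roughly five times the amplitude.
print(trajectory[0], trajectory[PHASES])
```

Same place, everything amplified: the second visit to phase 0 carries an amplitude of 1.5⁴ ≈ 5.06 rather than 1.0.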
Hayek’s optimism was premised on humans existing within history, within time, within the world. If that ceases to be the case, if we produce a runaway feedback loop which overwhelms any dialectical response, then we are post-historical. We continue to go, and indeed we go faster than ever before, but we lose both the past and the future; we don’t know where we are going.
In many ways, Flusser (and I) agree with Cowen’s diagnosis of the situation, especially in recognizing the magnitude of the stakes. I believe that his emphasis on historical reason is mistaken…perhaps intentionally so. Moral philosophers, normative theorists, care where we are going. Tyler Cowen only cares about how fast we are going, because he is an information monster.
So when he says that:
It will panic many of us, disorient the rest of us, and cause great upheavals in our fortunes, both good and bad. In my view the good will considerably outweigh the bad…but I do understand that the absolute quantity of the bad disruptions will be high.
We should take seriously the “in my view” part—he means, “for me.” He wants a world for information monsters; I want a world for humans (as they currently exist).
Flusser: the best proof for the paranoic lunacy of current scientific explanations is the meaningless technical progress they lead to…technical progress is a striking proof of the fact that the relation between humanity and texts has turned around and that progress no longer serves humanity but humanity serves progress.
Cowen’s stated ideology as outlined in Stubborn Attachments is one of progress, defined as maximizing economic growth subject to avoiding collapse.
This is mealy-mouthed; many people want that. For contrast, see Cowen’s Conversations with evil and tragic (respectively) Effective Altruism brain-geniuses Sam Bankman-Fried and Will MacAskill.
In these, Cowen is unwilling to allow them the same mealy-mouthed rhetorical move. EA stakes a claim as morally radical, but when put to the question, their position is simply “Maximize utility subject to ethical constraints.” So how, Cowen pushes, is that different from standard Utilitarianism? We’ve already established that you’re not radical, now we’re just haggling over the price.
But Cowen’s “maximizing economic growth subject to avoiding collapse” is also merely haggling over the price. There is a strong positive commitment in this nominally normative statement. By outlining this principle, Cowen is arguing that at the margin, we are too worried about avoiding collapse and not enough about maximizing economic growth.
With this model, we can see how Effective Altruism has been useful to Cowen’s goals in having de-centered ecological collapse, the primary concern for most people today. Environmentalism, and especially its “degrowth” variant, is anathema to Cowen’s agenda. EA ideology, by contrast, argues that climate change is extremely unlikely to kill more than a few hundred million people and is thus not an existential threat.
But EA poses a problem with their newly prominent obsession with AI apocalypse. An extremely smart, driven and most importantly young group of potential Cowenist allies or acolytes have decided that there is a very real threat of collapse — something that we should in fact be willing to sacrifice significant growth for!
And this is simply unacceptable. Just as Effective Altruists saying “maximize utility subject to ethical constraints” means that they think everyone else is too worried about ethical constraints, Cowen saying “maximize economic growth subject to avoiding collapse” means that he thinks everyone else is too worried about collapse.
My model of Cowen’s information-monsterism implies that he wants us to go as fast as possible because it means more information for him. All of the handwaving, obscurantism and fatalism—“this is all happening anyway (talk more to people in DC)”—are merely a tactical smokescreen designed to make it impossible to focus on Cowen himself: a human agent, with his own ideological program.
Mannheim’s “unmasking” of ideology is the critical social theorist’s response to Straussian opacity. The goal is to reveal the function that a statement is playing rather than straightforwardly interpreting it.
With this in mind, here is what I see as the weak point of Cowen’s initial argument, the crucial sleight of hand which makes possible the rest of the performance.
Given this radical uncertainty, you still might ask whether we should halt or slow down AI advances. “Would you step into a plane if you had radical uncertainty as to whether it could land safely?” I hear some of you saying.
I would put it this way. Our previous stasis…is going to end anyway. We are going to face that radical uncertainty anyway. And probably pretty soon. So there is no “ongoing stasis” option on the table.
He’s absolutely correct that there is no “ongoing stasis” option—but we can still try to slow things down! In fact, this is the center of my political program! There is no possibility of democratic action if the speed of change eclipses the speed at which democracy functions. I am still a liberal — and the only way to be a liberal today is to be a conservative, to try to slow down technological change.
The only evidence Cowen brings to bear on the central question at issue is:
probably pretty soon
to which I say
how about a bit less soon?
I say this for a number of reasons. Generally, I agree with Karl Deutsch that
When we defend a man’s dignity, we defend his ability to use his personality; we defend him against an intolerably high speed of learning, an intolerable speed of changing his behavior—intolerable, that is, because incompatible with the continuous functioning of his self-determination, his autonomous learning.
And more specifically, to Cowen’s stated goal of progress, I believe that we must slow down technological change in order to re-assert our collective ability to define a destination, a goal, something to progress towards. Cowen only wants to go faster and is thus uninterested in arguments that prioritize control over speed.
The model of Tyler Cowen as information monster explains a number of otherwise puzzling data points.
He is happy to adopt a vulgar “positivist nihilism” on the question of adolescent social media use, that we should assume that this massive technological shock is harmless until proven otherwise.
He is against the US banning TikTok, again with positivist nihilism: “What is the actual evidence that it is serving up slanted, pro-Chinese content, or otherwise swaying public opinion in a negative manner?” As I argued, why leave this gaping national security hole open — they would only have to de-legitimize one presidential election!
He tolerates and even encourages contrarian public intellectuals, no matter how mid. I cannot fathom that he learns anything from them, but the move makes sense if the goal is to increase the diversity (and not the accuracy!) of the political information environment.
He wants more cars in Manhattan.
I agree with some, perhaps most, of Cowen’s nominal ideological commitments. Perhaps you do too, to a greater or lesser extent. My argument in this post is that these commitments are epiphenomenal, downstream of the fact that he is an information monster.
The most important political question of the day is whether we should go faster or slower, whether you look out at society and say, “Things are going pretty [well/poorly/terribly]…but I really wish I had more information to consume.”
This is the opposite of my position, and I suspect that I am far from alone in this.