Discover more from Never Met a Science
The Tech Policy of Nate Dogg and Warren G
Regulation as Action
At one point, we believed in the law. Some appointed officials wrote words on paper, they performed some democratic ritual, and the words and the paper were thereby transformed. This regulation was intended to make the world more regular, to project the linear code of the text into the messy, high-dimensional world of the social.
The rule of law now seems like a myth. We might as well try to will ourselves to believe in the divine right of kings. What we need is governance, and we need to use a more modern media technology than the written word.
It was only in the early 1600s that “regulate” came to mean “govern by restriction.” This is a fundamentally textual conception, of carving up a static world of discrete entities and actions into the legal and the illegal.
Lawyers and regulators, politicians and lobbyists---the conflict between these parties is how the rule of law is supposed to operate. Even when they're not straightforwardly corrupt, though, their conflict within this symbolic arena serves to legitimate the entire affair, regardless of whether it still functions.
We’ve had seven years of Trump-centered legal intrigue splashed across the headlines of major newspapers, yet we’re one 82-year-old’s stroke away from Trump’s re-election. Because of Boomer Ballast, many of the most powerful people in society continue to believe in the Court tout court. But younger generations are disillusioned. The cultural idol of Gen Z is the scammer.
Downtown NYC artists spin metatextual autofiction across anonymous Twitter accounts; a major sports TikTok account just did a deep dive into “what was going on at Burning Man” that amplified intentionally created misinformation (the claim, seeded by the dirtbag leftist who volunteered to fight with the Kurds in Syria, that there was an Ebola outbreak on the Playa). The whirlpool rages while lawmakers consider how best to contain it with wooden planks.
Those in the orbit of elite institutions remain, as ever, the exception. The simplified ontology of the meritocracy is accomplished by radically reducing complexity. Their “education” or perhaps better “formation” involves decades spent learning to pick the right option of four, or, at worst, arranging some words. The correct combination, in this symbolic world, literally unlocks their future.
This younger generation of meritocratic wordcels has seen through the law; they no longer believe in liberal democracy. But because of their formation, their only option is to replace it with another linear, textual code—that is, code—a digital update that is far less radical than it seems.
The crypto mantra “code is law” begins with the realization that law is no longer law. Take the mentality of someone like SBF. Everything is a video game. Meritocratic ascendence, morality, finance, romance—the crucial premise is that the world is simple enough to decode. Everything else is optimization.
But the left hand of this tragic dialectic is the even more pathetic (because more impotent) belief that tweet is law. There is a fundamental symmetry between the Twitter wordcel and the crypto bro, with the Twitter “ratio” serving as the far-more-easily manipulated analogue to the consensus protocol of the blockchain.
The emergent morality of Twitter is deontological: the format demands that we pronounce ever-stricter textual laws. Because Twitter is a video game, there are no rewards for positive action: the overwhelming force points towards moral rectitude in inaction. This is regulation in the “modern” (and thus outdated) sense: prohibition.
Nate Dogg and Warren G, in the relatively less mediated reality of 1990s Compton, remind us of the etymology of regulate. This is regulation through action: “Regulators. We regulate any stealin’ of his property. We’re damn good too. But you can’t be any geek off the street. You gotta be handy with the steel, if you know what I mean. Earn your keep. Regulators, mount up!”
Nate Dogg’s regulation is encoded directly into the world: the offenders are killed and the network of associations is changed. This “knowledge,” encoded as it is in the memories (bodies) of the people involved, as well as their relations, involves a natural process of forgetting. The “knowledge” is also stored in the bodies and minds of the regulators themselves.
Contemporary technocratic regulation involves the production of knowledge by experts, the transmission of that knowledge to lawmakers and the encoding of that knowledge in the form of written laws. Post-modern regulation must involve action, including the capacity to require rather than merely prohibit action.
This is the confusion that faces the authors of the recent Facebook experiments. Their goal is to produce an academic paper, one which tests some written hypotheses, as part of the larger fantasy that if we can perform enough “scientific” rituals, those written hypotheses will be transmuted into written scientific laws.
This is the aspiration and downfall of scientific knowledge: that it is immortal, that it does not decay. Like legal precedent, it accumulates. In both cases, this accumulation is assumed to take place outside of the world, in the special textual realm of the law.
The actions described in these papers are a success in spite of the absurdity of their authors’ nominal goals. These authors are not scientists but regulators.
The experimental interventions and the measurement of their effects are valuable not because they contribute to the vast hoard of disembodied ~~knowledge~~ stored beneath the Smaug-like corpulence of for-profit academic journals, but because this kind of knowledge already *is* where it needs *to be* in order to be used.
As I wrote in the Ontological Case for RCTs, the true advantage of this method is that it creates a counterfactual world. The material capacity to do this is both proof that it is possible and a step in the direction of that counterfactual world. But experiments are a special kind of action, one which tells us whether the step was in fact a good one.
“Good” is a normative political question. Tech companies already agree with everything I’ve said here. They don’t bother with textual “theory,” they simply run millions of A/B test experiments and implement the version that performs best according to whatever metric they’re using as a proxy for profit.
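The logic of those millions of A/B tests is simple enough to sketch. Here is a toy illustration in Python; the numbers are made up, `simulate_session` is a stand-in for real telemetry, and “time on platform” is the assumed proxy metric:

```python
import random
import statistics

random.seed(42)  # deterministic toy example

def simulate_session(variant: str) -> float:
    """Hypothetical stand-in for telemetry: minutes a user spends on the platform."""
    base = 30.0 if variant == "control" else 33.0  # assumed effect size
    return max(0.0, random.gauss(base, 10.0))

def run_ab_test(n_users: int = 10_000) -> str:
    """Randomly assign users to arms, then ship whichever arm scores
    higher on the chosen metric. 'Best' is defined entirely by that
    metric -- which is exactly the normative choice at issue."""
    arms: dict[str, list[float]] = {"control": [], "variant": []}
    for _ in range(n_users):
        arm = random.choice(list(arms))  # random assignment
        arms[arm].append(simulate_session(arm))
    means = {arm: statistics.mean(vals) for arm, vals in arms.items()}
    return max(means, key=means.get)

winner = run_ab_test()
```

Nothing in this procedure asks whether the winning variant is good for users; swap in a different metric and a different “winner” ships. That substitution point is where democratic regulation could intervene.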
Effective democratic regulation of tech, then, requires changing which experiments they run and how they’re evaluated. We need scientists embedded at every step of the way in the development of these experimental regulations, and we need independent, bottom-up information channels for the feedback from users. Drew Dimmery provides more details in his excellent first Substack post — I strongly recommend that you subscribe for more.
Consider Instagram’s shift, last summer, to a much more heavily algorithmic feed. Adam Mosseri seemed almost apologetic with the announcement, since he knew people would protest — and people like Kylie Jenner and Kim Kardashian did protest — but he claimed that their internal experiments demonstrated that users “preferred” the switch.
Here we see that 1) tech companies already “regulate” themselves in exactly the way that I argue we should regulate them, and 2) the slippery equivalence of the “preferences” of users with the metric that maximizes profits, “time on platform.” (Don’t get me started on the idea of “revealed preferences” — Paul Samuelson pulled that out of his ass to save his pet theory, and it’s bad economics and worse psychology.)
Effective (cybernetic) and desirable (democratic) regulation means running experiments to try things and then seeing if they work, for us.