This is an unusual post from me, in that I’m far from unique in making these kinds of arguments. Despite what I see as a growing chorus of thoughtful critics advocating for a TikTok ban, no one seriously seems to think it could happen. If TikTok is already un-ban-able, our capacity for democratic control is already lost. I am optimistic that this is not the case, and that this remains a stand worth taking.
In anticipation of the 2020 US Presidential Election, President Trump threatened to ban TikTok -- and went so far as to sign an Executive Order to that effect.
This was a hastily conceived response to what is a genuine but complicated problem. The immediate polarization of the issue and the liberal framing that Trump's motive was xenophobic have prevented the development of a more reasoned debate. TikTok is the first major social media platform developed by a geopolitical rival to gain widespread adoption in the United States.
Although President Biden rescinded Trump's EO, his administration has continued to investigate the platform and is considering new regulations that reflect this novel challenge. There is no reason to give TikTok the benefit of the doubt. The major US platforms have consistently failed to be responsible stewards of the awesome power they have appropriated over our media and politics, and TikTok has demonstrated the same irresponsibility --- except that it is far more vulnerable to pressure from the Chinese regime.
In a Congressional hearing in late October 2021, a TikTok executive said that TikTok “does not give information to the Chinese government and has sought to safeguard U.S. data.” Like so many other tech company executives, they were lying. Buzzfeed News released a major investigation in June 2022 that found that:
“engineers in China had access to US data between September 2021 and January 2022, at the very least...nine statements by eight different employees describe situations where US employees had to turn to their colleagues in China to determine how US user data was flowing.”
The reality is that TikTok -- like Facebook and other massive social media platforms -- is so large and so haphazardly constructed in the pursuit of explosive growth that no single employee can know everything about what data are collected, where they are stored and how they are used. So while I am sympathetic from an engineering perspective, we should have zero confidence in the ultimate security of any of these platforms.
They made this problem for themselves, and it is up to them to fix it. In the short run, the US government should follow the example set by the FTC's 2019 $5 billion Facebook fine and continue to impose harsh penalties for data breaches and irresponsible data collection. The hypergrowth mindset of these companies has polluted our information commons, but with sufficiently harsh penalties, we can change their optimal economic strategy from “grow as fast as possible, period” to “grow as fast as possible while still following the law and maintaining secure data practices.”
On this line of criticism, it is fair to say that TikTok is not significantly worse than other social media platforms — the data they collect is not categorically more intrusive, say. So it is legitimate to argue that banning TikTok and leaving the other platforms be is arbitrary and unfair. I’m willing to bite that bullet, and say that between banning none of them and all of them, given their track record so far, we should just ban them all.
Sure, this would be a huge blow to a major US industry and a serious disincentive to innovation in this area. But as I argued in my post about LLMs, is anyone really looking at the world today and saying:
“Things are going pretty well! My biggest worry is that things would be much worse if we were to dramatically reduce the rate of change of digital media technology.”
Buzzfeed identifies another concern: that “the soft power of the Chinese government could impact how ByteDance executives direct their American counterparts to adjust the levers of TikTok’s powerful ‘For You’ algorithm.” I believe that this is the primary concern, from the perspective of the fragile state of the legitimacy of our democratic elections.
The 2016 election caused widespread alarm in liberal circles that Russian misinformation and other forms of “Fake News” had swung the election in favor of Trump. TikTok represents a far more serious vulnerability to Chinese interference in 2024. Imagine that TikTok subtly stops some forms of content moderation and then the “For You” Page algorithm is made to promote false content at a far higher rate than ever before.
The actual effect of this move --- measured in the number of different people exposed to misinformation multiplied by the amount of aggregate time they spent consuming it --- could be orders of magnitude larger than anything Russia did in 2016. And it wouldn't even have to be misinformation. What if they juiced the weights in favor of pro-DeSantis (or pro-Biden) content, shifting the partisan balance of the platform?
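To make the “orders of magnitude” claim concrete, here is a rough back-of-envelope sketch of that exposure metric. Every number in it is an illustrative assumption, not a measurement:

```python
# Back-of-envelope exposure metric: people reached x aggregate time consuming.
# All numbers below are illustrative assumptions, not measurements.

# 2016-style scenario: content reaches many feeds but gets little attention.
reached_2016 = 100_000_000     # assumed: users whose feeds contained the content
minutes_each_2016 = 0.5        # assumed: average minutes actually spent on it

# Hypothetical algorithmic promotion on a short-video platform.
reached_feed = 100_000_000     # assumed: active US users of the platform
minutes_per_day = 10           # assumed: daily minutes of promoted content per user
days = 90                      # assumed: length of the campaign

exposure_2016 = reached_2016 * minutes_each_2016
exposure_feed = reached_feed * minutes_per_day * days

print(f"2016-style exposure: {exposure_2016:,.0f} person-minutes")
print(f"Feed-promotion exposure: {exposure_feed:,.0f} person-minutes")
print(f"Ratio: {exposure_feed / exposure_2016:,.0f}x")
```

Even with fairly conservative assumptions, sustained promotion inside a feed people watch every day dwarfs a one-off wave of shared posts.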
More troubling still is that the persuasive effect on the viewers might be small (as my read of the evidence suggests was the case in 2016) but that the mere fact of the informational attack would further delegitimize the election. Caesar’s wife is supposed to be beyond suspicion --- and given the precarious state of election legitimacy, is it even conceivable that political activity on TikTok does not at least appear suspicious?
One concern might be that China will retaliate and ban US-based social media platforms from operating within their borders. This concern is somewhat lessened if we observe that China has been doing so for over a decade already. Banning TikTok thus merely puts us at parity with our primary geopolitical rival. Insofar as massive datasets and human-algorithm feedback are important resources in the current race to develop more powerful AI for both economic and military advantage, this strategic asymmetry is unnecessary and unwise.
More broadly, this move might accelerate the rise of digital protectionism and more regional and even national internet platforms. I argue that this trend is both inevitable and for the better. The conjunction of internet speed and global scale has made the potential profit from software and digital services so preposterous that it has warped the global economy.
“Software is eating the world,” Marc Andreessen wrote in a famous 2011 Op-Ed in the Wall Street Journal. (I guess the WSJ readers somehow thought this would be a good thing?) He concludes by saying that “instead of constantly questioning their valuations, let’s seek to understand how the new generation of technology companies are doing what they do.”
Twelve years on, I think we have a good idea of how these tech companies are doing what they do: they expand recklessly quickly, setting themselves impossible tasks like content moderation at a global scale; they break local laws or share data with autocrats, as best suits them; they lie about user and viewership numbers to prop up a digital advertising house of cards; they prevent independent oversight of basic descriptive facts, let alone the possibility of legitimate democratic control.
Between software and the world, give me the world!
Several months ago, I was at a conference where some regulators and platform people were discussing campaign finance on social media. The regulators were all super concerned about targeted advertising. It’s unaccountable in that it can’t be viewed by everyone, potentially allowing unsavory niche arguments to be delivered without public scrutiny. One regulator said that his goal was to require Facebook to provide a list of the demographics at whom each ad was targeted — to see if a given ad were targeted at women, or Hispanics, or retirees, etc.
The Facebook engineer scoffed. “Do you have any idea how the ad auction at the heart of Facebook works? No one knows who is ‘targeted,’ it’s all in real time and a super complex interaction between supply and demand.” The regulator, with a law background, was clearly out of his technical depth.
What the engineer was saying is true: most people don’t understand these ad auctions and his description was exactly correct. But as I told the lawyer: “Tell them to give you what you want or go to jail! Who cares what bullshit systems they use to maximize the efficiency of serving ads. The law creates reality.”
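The point generalizes: however opaque the real-time auction is, the platform still serves each impression to a specific user, so it can log and aggregate who actually saw each ad after the fact. A minimal sketch of that kind of disclosure, with hypothetical field names and toy data:

```python
from collections import Counter

# A minimal sketch of the regulator's ask, assuming the platform keeps one
# record per ad impression at serve time (field names are hypothetical).
impressions = [
    {"ad_id": "ad_1", "age_band": "18-24", "gender": "F", "region": "TX"},
    {"ad_id": "ad_1", "age_band": "65+",   "gender": "M", "region": "FL"},
    {"ad_id": "ad_2", "age_band": "25-34", "gender": "F", "region": "CA"},
    {"ad_id": "ad_1", "age_band": "18-24", "gender": "F", "region": "TX"},
]

def audience_report(logs, ad_id):
    """Aggregate who actually saw a given ad, whatever the auction did."""
    rows = [r for r in logs if r["ad_id"] == ad_id]
    return {
        "impressions": len(rows),
        "by_age": Counter(r["age_band"] for r in rows),
        "by_gender": Counter(r["gender"] for r in rows),
    }

print(audience_report(impressions, "ad_1"))
# The auction's internals can stay as complex as they like; the disclosure
# only needs the delivery logs the platform already has.
```

However the auction allocates impressions, the delivery record exists by the time the ad is shown; the law can simply require that it be reported.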
Someone is going to build the future. The current technological revolutions and the closing of the long postwar era of Boomer Ballast mean that reality is up for grabs; the present battle is ontological. The tech companies are creating a reality of code, of inhuman scale and complexity, one in which the human subject is reduced to a monkey in a Skinner box while the technocapitalists dream of annihilation.
I prefer liberal ontology, defined by institutions, the rule of law, and the reasoning liberal subject. Clearly, faith in this reality is fading: is it really possible to prosecute Trump? Can we really tolerate being governed by the collective will of our fellow citizens? Do we really accept the outcomes produced by our institutions?
Can we really ban TikTok?
You had me until the "The law creates reality” part. Many of your points are excellent, and even John's comment below about regulating device standards is critical. But telling a lawyer that it's the law's job to legislate reality for the rest of us is just as ineffective (to wit, Roe v Wade). Universal, natural, and human laws will always impact our experience of reality in different ways. Since reality is constructed by the brain, improving society's "soft skills" of discernment and caution, sourcing unbiased references, and not relying on a single source for information all go a long way towards protecting ourselves from misinformation, external and internal. Just banning one application out of the hundreds being developed now based on algorithms isn't going to make much difference. Banning TikTok now would be as successful as banning ChatGPT. The cows have already left the barn; now is the time to mend the fences.
If one leaves aside the politics, there are two issues: device security, and the algorithm.
The first, device security, is actually the most important, and it is amazing how much commentary leaves it out. No one asks Apple what TikTok(*) can get away with, and what standards they have for keeping us safe.
The second, the algorithm, runs up against a problem that is, I think, also hidden. That is, to what degree should people, including teens, have freedom of access?
We speak much more often about freedom of speech, but denial of access can cripple a society just as much.
* - or Facebook