Mark Zuckerberg Wants You To Think That His Algorithms are All-Powerful
But not, like, in a bad way!
We have just seen the release of part of the 2020 US Election research project, in which teams of prominent social scientists collaborated with Facebook to conduct massive on-platform experiments. I haven’t had time to pore over the details, of course, but the obvious investment of time and energy by first-rate researchers (many of whom are my friends and colleagues) and the publication in Science and Nature give me confidence in their technical soundness. Furthermore, I trust these academics’ integrity, and endorse the results-blind model as a step forward in academic-corporate collaboration.
I’m more interested in the interpretation. It pains me to report that the framing of these experiments only reinforces themes I’ve been developing for critiquing social media and research on same:
Temporal Validity: it’s been thirty-two months since these experiments were conducted. After almost a decade of what we might as well call Web 2.0 or the Social Network era, social media has been completely upended by TikTok and the algorithm-first approach to content distribution. How can social science work at this time scale?
The Algorithm: as I argued in Susan Wojcicki Wants You To Think That YouTube's Algorithm is All-Powerful, the Meta press release and even the framing of the papers make it clear that social media companies want us to believe in the power of their algorithms. “Senator, we sell ads” — their business model hinges on advertisers paying top dollar for algorithmically served ads. However, they are walking a fine line: they don’t want to be accused of swinging elections. Thankfully for them, it’s extremely difficult to change presidential vote choice!
Social Feedback: my model of Supply and Demand on social media suggests that audience feedback on producers is by far the largest effect. Producers are either literally addicted to social feedback or are simply profit-maximizing media companies attuned to the quantitative metrics provided to them by the platforms. All of these experiments look for effects on individual social media users — which is still, I argue, a holdover from the broadcast era of media effects and thus not where we should expect to see significant effects.
Although all of these experiments seem interesting, I’ll briefly explore these themes through the experiment which seems likely to attract the most attention: the chronological versus algorithmic feed, or “How Do Social Media Feed Algorithms Affect Attitudes and Behavior in an Election Campaign?”
Temporal validity is always with us; if you’re interested, Drew Dimmery and I have a new working paper demonstrating just how serious this problem is under different assumptions. Tl;dr — this is probably not a great context from a temporal validity standpoint. On the other hand, the power of the experiment is impressive, and none of the results are “marginally significant.” Time will tell, hopefully, how well these results hold up.
(It’s worth noting that, despite the unprecedented scope of these experimental interventions, the publication and peer review process was still the slowest part of the whole endeavor.)
The algorithm is of course the central point of this experiment. This is as good an experimental intervention as anyone could possibly hope for; in addition to the large sample size, the scope of the experiment is unprecedented, and essentially impossible absent collaboration with a platform.
So I hope that this can be the nail in the coffin of what I see as the wrong criticisms of “the algorithm.” These criticisms are based on mechanisms which seem straightforwardly not to exist.
First, echoing previous experimental results reported internally by Instagram, the paper finds that switching users to a chronological feed makes them spend way less time on the platform; that is, it provides an inferior experience.
More importantly, the algorithmic feed decreased political content and dramatically decreased content from untrustworthy sources. “The Algorithm” actively results in less political content and less zero-credibility clickbait. It just doesn’t make any sense that Facebook/Instagram/literally any platform would be pushing political content; the closest thing that we have to a scientific law in the field of political communication is that
most people prefer anything to political media
Furthermore, as TikTok has demonstrated, these platforms would happily ban contentious political content if they could; the headaches and congressional hearings are absolutely not worth the small fraction of forgone ad revenue. The ideal posts, for these platforms, are about what products the consumer wants to consume; everything else is a loss leader.
Social feedback and the effects on producers, however, are the most important missing piece of this experiment.
The algorithmic feed matches producers with consumers. To expect an effect on only one of these two parties is absurd; just like in any marketplace, both parties are constantly adapting to the information about what is available to consume and how popular it is. As I wrote in The Audience Does Not Exist, the always-encoded audience metrics are an inexorable component of the hybrid media object which is a social media post.
But just like neither producer nor consumer is an “unmoved mover” in this transaction, neither is the algorithm itself static, instead updating its weights with each pass through the cycle of content production and consumption.
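This three-party dynamic can be sketched as a toy simulation. To be clear, everything here — the variables, the update rules, the numbers — is my own illustrative assumption, not anything estimated in the papers; the point is only to show how producer behavior, algorithmic weights, and audience engagement co-adapt in a loop where no party is an unmoved mover:

```python
# Toy feedback loop: producer, ranking algorithm, and audience co-adapt.
# All quantities and dynamics are illustrative assumptions, not fitted to data.

def simulate(rounds: int = 50,
             audience_pref: float = 0.2,   # audience's (low) taste for political content
             lr: float = 0.1):             # how fast producer and algorithm chase metrics
    producer_political = 0.5   # share of political posts the producer makes
    algo_weight = 0.5          # how much the feed boosts political posts
    history = []
    for _ in range(rounds):
        # Exposure: what the feed serves, given supply and the ranking weight.
        exposure = producer_political * algo_weight
        # Engagement: the audience engages in proportion to its preference.
        engagement = exposure * audience_pref
        # The algorithm updates its weight toward observed engagement.
        algo_weight += lr * (engagement - algo_weight)
        # The producer chases the metrics the platform shows it.
        producer_political += lr * (engagement - producer_political)
        history.append((producer_political, algo_weight, engagement))
    return history

history = simulate()
```

Even this crude sketch reproduces the qualitative claim above: with a low audience preference for political content, producer supply, algorithmic weight, and engagement all drift downward together, with no single party "causing" the outcome.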
And look. I’m a content producer, I respond to audience demand too! So when I keep getting emails that say “hey I love the blog, but I really wish you’d talk about Flusser more” — buddy, I read you loud and clear. This topic is discussed at length in my third and final lecture on Communicology:
This feedback loop represents the circularity of the whirlpool, with the apparatus becoming stronger (gaining knowledge) while the preferences of both producers and consumers are blown in arbitrary directions.
In Flusser’s terms:
Those who participate actively in the production of information (the specialists) are themselves being programmed by the mass-media meat choppers for information production.
So: even this massive experiment is unable to manipulate either the history of the platform or the effects on content creators.
This is a fundamental limitation. Many of the most important questions are simply beyond the scope of “science,” especially with contemporary standards of rigor. I’m extremely frustrated with headlines about the current study like So Maybe Facebook Didn't Ruin Politics. Everyone involved agrees that these experiments couldn’t possibly have demonstrated that!
So again, I’m against “positivist nihilism” applied to massive phenomena like a decade of billions of Facebook and Instagram users. The idea that our null hypothesis should be that these platforms had no effect until proven otherwise is absurd — a bad-faith technocratic depoliticization of some of the most important issues confronting democratic politics today.
"hey I love the blog, but I really wish you’d talk about Flusser more"
This, but unironically.