A recent Artnet article by the critic Ben Davis captured an important contemporary sentiment: the equation of value with quantitative measures of popularity. Even in the rarefied art world, the role of criticism has shifted; it's no longer tenable to evaluate art through the lens of either intrinsic quality or individual subjective appreciation. There is only *mass* consumption. For specialized media, this mass consumption refers to a community of potential consumers rather than hoi polloi, but this is still a significant departure from the 19th- and 20th-century models of art criticism.
This blog is sometimes about technological determinism, and I'll adopt that lens today. The study of communication technology allows us to put Davis's insight into historical context. As usual, I'm trolling by invoking technological determinism; media technology *literally makes the world*, but so do the humans who decide how to use that technology.
I draw heavily on the work of the recently retired Communications professor James Webster and his "theory of audiences." The crucial insight is that audiences do not exist. They come into being only when they are measured. New technologies allow for better but also different kinds of audience measurement; the actors doing the measuring decide which to use depending on their goals, usually making money but sometimes influencing politics or simply understanding the cultural mood.
Or, as Davis describes in a podcast with the excellent New Models crew, to settle debates between warring fandoms about which cultural product is better. Box office sales of the recent Mario movie provide ironclad evidence of the inferiority of "woke" pop culture, say.
Box office sales are a great example because they are perhaps the most salient long-running "public measure." Much audience measurement has traditionally consisted of "private measures," either because the data was of interest only to the media creator, who used it to adapt their offerings, or because it was valuable proprietary data that only businesses could afford.
Social media, however, represents an explosion of public measures. Twitter keeps adding more -- the "Views" metric is now attached to every tweet. Phenomenologically, the ubiquity of public audience measures is a major source of anxiety: everyone can see how well you're doing, in excruciating detail. Consider that Instagram long ago removed the public "Likes" count on posts. It was simply too stressful.
Here the technological determinism angle has real bite. It would take superhuman will to ignore the public audience measures that make up at least half of social media. As I told Willy Staley, “You can’t actually conceive of a tweet except as a synthetic object, which contains both the original message and the audience feedback.”
Social media is thus literally other people.
On the other hand, news organizations and social media creators are not helpless. They can't *ignore* these measures, but they can decide how to use them, as the qualitative evidence produced by media scholars Angèle Christin and Rebecca Lewis demonstrates below.
I apply audience theory to YouTube as part of my forthcoming book (out later this year with Cambridge University Press). The thesis is that the influence running from consumers to producers on YouTube is at least as large as the influence running the other way. In other words, content on YouTube is demand-driven to a greater extent than on any earlier video medium, because of increased competition and ubiquitous audience measurement.
The audience is a deceptively slippery category of actor, created not simply by the interaction between a consumer and a media object but rather by how the creator of that media object (or some other actor) understands the mass of consumers’ interaction with a media object. The isolated media effects experiment conducted in a controlled lab does not fully capture “the audience” for that media object as either an ontological or conceptual category.
Webster adapts Giddens' classic theory of structuration to understand the audience as downstream of the fundamental “Duality of Media”: “duality is a process through which agents and structures mutually reproduce the social world...the structures that agents use to accomplish their purpose are not static, but reflect the work of institutionally embedded actors who constantly monitor and adapt to user behavior.” Audience structuration requires seeing the act of media consumption through a different lens than that of the liberal ideal of the individual citizen becoming informed in order to fulfill their democratic duties. We’re not talking about deliberating reasoners but about socially, technologically constructed aggregates.
The crucial questions posed by this approach are thus not the standard “what happens, and how do rational actors respond?” but rather the cybernetic “what kinds of information do actors have? What concepts are relevant to their perception and interpretation of this information?” Most important for my understanding of YouTube audiences are the concepts of public and private measures. These quantified measures are the mechanism by which audiences are constructed in the minds of those with access to those measures.
“Public measures that distill and report user information,” Webster argues, are a “pivotal mechanism that coordinates and directs the behaviors of both media providers and media users.” Public measures circulate among all sorts of actors, and include things like university rankings and academic journal impact factors. These measures act as currencies that facilitate exchange and comparison, and while they are most effective when generated by some “objective” process or trusted third party, they cannot in fact be fully “objective” in the sense of apolitical. The very existence of these measures changes the behaviors of the agents and thus of the structures they mutually co-create.
There is thus a powerful irony in his categorization: the public measures designed to serve the interests of media producers in fact also operate as a vehicle for audience agency. The user information regimes, in contrast, guide and inform the actions of the consumer while simultaneously creating digital trace records of great interest to advertisers and thus to producers. In summary, Webster argues that audience measurement is the “mechanism through which audiences cause the media to change.”
The “user information regime” primarily consists of the publicly available measures about different creators and pieces of content, along with the qualitative signals that creators send in order to attract audience attention. The “market information regime,” in turn, is composed of viewcounts, likes, user comments on videos and other forms of qualitative commentary on given creators, videos or topics.
In the case of YouTube and many other social media, these two regimes contain some of the same public measures. All involved actors are aware of the number of views, subscribers and likes that different videos and channels have, and they use this information to make decisions about what to consume or what to produce.
Personalized information—“private measures"—also plays an important role. Algorithmic recommendation of content, derived from users’ previous consumption habits and from the multitude of data-tracking systems embedded at various levels of the technological stack, gives the user information about what the platform “thinks” they might like to consume, while also providing a shortcut that makes these selections cognitively and temporally “cheaper.”
The dominant concern about recommendation algorithms is that they simply reinforce the rich-get-richer dynamic. Across various iterations of the internet — from the earlier era of the blogosphere and Google’s PageRank algorithm, based on incoming links, to the more recent social media built on an explicitly constructed social graph — the primary difference between the internet and either newspapers or broadcast media is that algorithms make audience inequality more extreme, as media economist Matt Hindman demonstrates. This is reflected back in the experience of users. “Virtually all user information regimes privilege popularity,” Webster concludes — making it ironic that the primary theory of the YouTube algorithm is that it draws users away from the mainstream into niche rabbit holes of unpopular content.
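The rich-get-richer dynamic is easy to see in a toy simulation. The sketch below is illustrative only — the channel count, view count, and uniform prior are all assumptions, not a model of any platform's actual algorithm. Each new view lands on a channel with probability proportional to the views that channel already has:

```python
import random

def simulate_views(n_channels=200, n_views=20_000, prior=1.0, seed=42):
    """Toy preferential-attachment model: every channel starts equal,
    but each new view is drawn with probability proportional to a
    channel's current views plus a small prior, so early leads compound."""
    rng = random.Random(seed)
    views = [0] * n_channels
    for _ in range(n_views):
        weights = [v + prior for v in views]
        winner = rng.choices(range(n_channels), weights=weights)[0]
        views[winner] += 1
    return sorted(views, reverse=True)

views = simulate_views()
top_share = sum(views[:20]) / sum(views)  # share held by the top 10% of channels
print(f"top 10% of channels capture {top_share:.0%} of all views")
```

Even with no quality differences at all, a handful of channels end up with a large share of total views purely through the feedback loop — which is the inequality dynamic Hindman describes.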
Earlier shifts in the technological environment, and the corresponding responses among both producers and consumers, prove illustrative. Webster describes the effect of Nielsen’s new “local peoplemeters” (LPMs) as a technology for “objective” audience measurement. Audiences are intensely aware of the relationship between measurement and the actions of media producers; a public interest group called “Don’t Count Us Out” argued that the new LPMs would undercount media consumption by minority viewers and therefore result in less television targeting those viewers. Ironically, this campaign was funded by Rupert Murdoch’s NewsCorp, which believed it would lose advertisers’ dollars under the new technological approach.
The evolution of these audience measures is thus best understood as a kind of contestation. No measure is perfect because there is no “real” audience. It is true that new technology tends to produce more detailed and extensive information, but it is generally better to conceive of different measures as political in the sense of serving different purposes.
For example, the standard measure of television audiences is a binary measure of viewership: the LPMs record whether households have the television on and tuned to a given channel while a given program is being broadcast. This makes sense as part of a media-economic system financed by advertisers deciding how much to pay to broadcast an ad during a given time slot, for the purpose of raising general awareness and approval of their product.
But for other goals, under other financing models, different measures can be more effective. Webster describes these competing audience measures as “currencies,” each more valuable in certain kinds of exchange. Using web tracking data from comScore, he and a co-author find that audience-size and engagement (time-spent) measures are uncorrelated. The ease of direct-conversion advertising sales on digital media — that is, advertising that entices consumers to click through and immediately make a purchase — makes engagement measures more valuable as currency for advertisers using this business model.
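The point that different measures rank the same outlets differently can be made concrete with invented numbers. The outlets and figures below are hypothetical (not comScore data); the sketch just computes the correlation between reach and time spent and shows that each "currency" crowns a different leader:

```python
# Hypothetical outlets: (monthly unique visitors, average minutes per
# visitor). All figures are invented for illustration.
outlets = {
    "wire_service": (9_000_000, 1.2),
    "metro_daily":  (4_500_000, 12.0),
    "niche_blog":   (   80_000, 9.5),
    "hobby_forum":  (  150_000, 1.0),
    "viral_portal": (7_000_000, 6.0),
    "longform_mag": (  600_000, 11.0),
}

def pearson(xs, ys):
    """Plain Pearson correlation, no external dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

reach = [v for v, _ in outlets.values()]
minutes = [m for _, m in outlets.values()]
r = pearson(reach, minutes)

by_reach = max(outlets, key=lambda k: outlets[k][0])
by_time = max(outlets, key=lambda k: outlets[k][1])
print(f"correlation(reach, minutes) = {r:+.2f}")
print(f"leader by reach: {by_reach}; leader by engagement: {by_time}")
```

An advertiser buying awareness would pay the reach leader; a direct-conversion advertiser would pay the engagement leader — the same audience data, exchanged in two different currencies.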
This economics-inflected analysis largely ignores the sociological processes by which media organizations acquire, analyze and act upon knowledge of their audiences. These processes are of course essential micro-foundations for any incentive-based model of media organization behavior, and crucially for the application of my framework, these internal processes are themselves affected by changing technology.
This is not to say that different social structures cannot change how these analytics are used. Christin demonstrates stark differences in the reliance on audience analytics by US and French media companies. The former have relied on professional prestige networks to shield journalistic practice, while the latter have more fully incorporated these audience numbers into their workflow and evaluations.
Many of the following insights and references are taken from Phil Napoli's excellent book on audience evolution. From the beginning of the medium, video-based media organizations have experienced conflict over how audience data is generated and used to guide creative decision-making. In the early days of the film industry, executives relied on “fan mail” to supplement their then-spotty data on ticket sales when evaluating the reception of their movies, sometimes measuring the response in pounds and ounces of mail rather than reading it all.
One major concern about using viewer letters as a source of information was that letter-writers are a decidedly non-representative sample of the viewership. Napoli cites British audience researcher Robert Silvey’s account of this process at the BBC: there were “seeds of doubt...when it became quite apparent that the overwhelming majority of letters came from middle-class writers...that while many letters began ‘I have never written to the BBC before,’ others came from people who wrote so often that they might be called BBC pen friends” (p. 34).
YouTube creators’ internal analytics data come prepackaged; they’re impossible for creators to miss and difficult for them to ignore. But this automated “audience rationalization” was not pre-determined, and even the powerful data collection that underlies it is necessarily incomplete.
The immediate, highly specific, and quantified feedback that creators of online content receive enables (and almost forces) them to figure out what the audience wants. The YouTube analytics baked into the platform — the public measures in the form of subscriber counts, likes and views — are both reified and challenged by the creators themselves. Christin and Lewis present an in-depth analysis of a network of YouTube creators who actively discuss these metrics, taking them as credible signals of popularity within their community while also developing strategies for resisting the metrics’ power. The key insight, for my argument, is that everyone is aware of these metrics—and further, that this is public knowledge: everyone knows that everyone is aware of these metrics.
In the era when simple exposure numbers were the primary currency for media organizations bargaining with advertisers over prices, there was little reason to care more about the opinions of rich people or devoted fans; neither tendency was monetizable. For YouTubers, however, the increased capacity to price discriminate and the importance of cultivating an active community give them reason not to weight all audience feedback equally.
A small number of highly engaged audience members can have a big impact on a channel’s overall community, and a small number of wealthy audience members play an outsized role in determining a channel’s revenue. It thus matters immensely who exactly these people are. A recurring theme, both conceptually and empirically, is that the YouTube case supports different interpretations of existing theories depending on how we operationalize the categories on which those theories are based. The classic two-step flow model of media influence, for example, claims that the direct effect of “media” on “the public” is limited, but that a significant “indirect effect” travels from the media to “local” “opinion leaders” who consume and interpret information for their “communities.”
Conceptually, this model is either trivially correct or completely undermined by the YouTube case depending on how we interpret these terms in scare quotes. Are politics influencers on YouTube who make reaction videos in which they give their take on emerging news “the media” or “opinion leaders”?
I find it instructive to ask whether these are useful categories at all. A social scientist primarily focused on determining whether an inherited theory is supported or not is forced to spend considerable effort policing conceptual boundaries devised to explain a bygone sociotechnical context. But I am not that sort of social scientist.
The long-term trend in media duality, Webster concludes, is one of increasing efficiency. Better and more prominent public measures — enabled by structures that permit a higher density of decisions by agents, and by technology that captures, synthesizes and applies those decisions — lead us toward the “triumph of convergence”: a convergence of media supply with public demand. “Over time,” he argues, “as digital media become more pervasive and the systems that power them more ‘user friendly,’ the distance between supply and demand will shrink.”
Analytically, we need to move beyond asking whether audiences affect creators or creators affect audiences: it's so obviously *both*. The fact that the primary goal of much contemporary social science is unidirectional causal inference is therefore a huge problem; human behavior is not obligated to be comprehensible with our preferred methodological tools. Instead, as Webster says, we should begin with the premise that “actions, freely taken, are the input for user information regimes that continuously structure and direct subsequent action. It is a process of reciprocal causation that evolves in real time.”
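This reciprocal causation can be sketched as a toy feedback loop. Everything below is an illustrative assumption — the three genres, the starting shares, and both adjustment rates are invented, not estimates from any data: producers shift their content mix toward measured demand each round, while demand simultaneously drifts toward whatever is on offer, and the gap between the two shrinks.

```python
def converge(supply, demand, s_rate=0.3, d_rate=0.1, rounds=10):
    """Toy reciprocal-causation model: producers move toward measured
    demand (s_rate) while demand drifts toward available supply (d_rate).
    Returns the supply-demand gap at the start of each round."""
    gaps = []
    for _ in range(rounds):
        gaps.append(round(sum(abs(s - d) for s, d in zip(supply, demand)), 4))
        new_supply = [s + s_rate * (d - s) for s, d in zip(supply, demand)]
        demand = [d + d_rate * (s - d) for s, d in zip(supply, demand)]
        supply = new_supply
    return gaps

# shares of output/attention across three hypothetical genres
gaps = converge(supply=[0.6, 0.3, 0.1], demand=[0.2, 0.3, 0.5])
print(gaps)  # the supply-demand gap shrinks every round
```

Neither side "causes" the outcome alone: both series move every round, and the only stable point is Webster's convergence of supply with demand.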
"Quantitative aesthetics" is what the "triumph of convergence" looks like: the politics and culture of ubiquitous public audience measures.
Audience numbers are a bit of a mystery.
I post reviews and photos of the places we visit on Google Maps, which reports the number of views each photo receives. For some reason a photo of my local suburban swimming pool has over 2.5m views, and the street view of a cafe we visited on the Brisbane bayside has over 1.5m views (Brisbane's population is around 2.25m).
Not only being popular, but being seen to be popular is important.