Poetry is Political Methodology
I told the cop my license was "mostly valid"; you won't believe what happened next
My initial definition of metascience was that
Metascience is the study of the allocation of scarce social science resources.
I stand by this. And I have connected it with my substantive research on social media and the attention/information economy, including my forthcoming book. But ironically, I failed to be meta enough to appreciate where our resource constraints are most binding.
Herbert Simon is one of those geniuses who was forced to become an economist because economists have so much power. Thankfully, he wriggled out of the stultifying abstract mathematical economics coming out of the Cowles Commission and the broader neoclassical movement to make actual contributions to our understanding of social science. (By the way, Philip Mirowski’s book Machine Dreams: Economics Becomes a Cyborg Science is the best book I’ve yet read on the creation of social science in the postwar US, a central period in my overall project.)
“What information consumes is rather obvious: it consumes the attention of its recipients.”
As media, including social science, has been eaten by the attention economy, it follows that the scarcest resource is
attention
The way in which we “set the academic agenda” is thus the central metascientific question. And the technology that I have for setting the meta-metascientific agenda is choosing words that will cause you to change the words you use. AKA poetry. I’ve grasped towards this argument in previous posts, but I’ll make the case more formally now.
“Methods” are taken to be techniques related to data. These can be qualitative, but my experience with methods has generally been quantitative. How are data coded? What statistical techniques have been applied to them? How are the results visualized?
“Methodology” is a field of research into these methods, aiming to improve them. The Society for Political Methodology and its flagship journal Political Analysis have come to define what “quantitative methods research” means within the discipline of Political Science: a combination of applied statistics, programming code and technical validation of data sources. Generally, “methodology” is a process by which these data operations are deemed to be “valid.” The outputs of valid methods are combined with natural language to produce peer-reviewed manuscripts.
“Formal methods” are the exception that proves the rule: they do not make reference to any data. Instead, they combine natural language with another kind of language, generally some kind of applied mathematics, to produce their peer-reviewed manuscripts. “Formal methodology” involves validating novel combinations or neologisms within this formal language: it is thus a form of formal poetry.
In both cases, natural language serves as a bridge between the methods and the world. It must explain how the formal statistical, computational or mathematical operations help us understand politics. Only if both the methods and the natural language are “valid” does the manuscript succeed at communicating knowledge to the reader; the chain of knowledge production and transmission is only as strong as its weakest link.
The words and phrases that we use are therefore also “methods.” However, at present, we have paid very little attention to these methods; we have not developed a “poetic methodology.”
Consider my favorite example, “echo chambers.” In the course of peer review for my book, an anonymous reviewer contested my discussion of the topic. My summary of the literature was that it finds that “echo chambers” don’t exist, except among users in specialized (partisan or professional) networks. The reviewer wrote:
Claims about the supposed lack of echo chambers online are also overstated here. Particularly with the release (finally!) of the Facebook 2020 studies, it’s clear that most Facebook users, at least, are exposed overwhelmingly to like-minded content, even if the impact of that skewed exposure is extremely weak in terms of persuasion or polarization. I’d read much of the preexisting literature as pointing to a similar conclusion: highly skewed exposure is well-established among a substantial fraction of the public.
A poetic methodology would investigate the validity of the bridge between word and world; here, the question of whether “echo chamber” means “highly skewed exposure.” The reviewer is advocating the mainstream position: that the “operationalization” of (the statistical operation corresponding to) the words “echo chamber” involves categorizing the ideological slant of media sources and measuring the aggregate media diet of citizens.
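For concreteness, the mainstream operationalization looks something like the following sketch; the outlet names, slant scores, and user diet are all invented for illustration.

```python
# A minimal sketch of the mainstream operationalization (invented numbers):
# score each source's ideological slant, then summarize a user's aggregate diet.
import numpy as np

slant = {"outlet_a": -0.8, "outlet_b": -0.2, "outlet_c": 0.1, "outlet_d": 0.9}
diet = ["outlet_a", "outlet_a", "outlet_b", "outlet_c"]  # one user's exposures

scores = np.array([slant[s] for s in diet])
like_minded = (scores < 0).mean()  # "like-minded" for a left-leaning user
print(f"mean slant = {scores.mean():+.2f}, like-minded share = {like_minded:.0%}")
```

On this measure, a user with a sufficiently skewed diet gets labeled an “echo chamber” inhabitant.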
This operationalization strikes me as incorrect. Scholars of political communication have been analyzing the composition and ideological slant of media diets for as long as they have had the data and quantitative methods to do so, but without referring to even the most skewed media consumer as inhabiting an “echo chamber.”
Axel Bruns traces the history of the term, reminding us that the current usage of the phrase dates to legal scholar Cass Sunstein's 2001 book Republic.com (the title is a considerably less successful metaphorical intervention). Bruns notes that Sunstein never provides a formal definition, let alone a quantitative operationalization, of his “echo chambers.” But even if he had, it wouldn't be the final word; not even lawyers have the power to permanently fix semantic relationships.
Bruns further establishes the slipperiness of the term, the way its definition has changed over time, as well as the fact that empirical evidence seems to have little impact on its popularity. The term “echo chamber,” we might say, is low in “poetic validity” when it is operationalized by measuring the diversity of media diets.
But what, exactly, is “validity”? Methods are valid or invalid, we said, according to the methodological high priests. But “validity” is a word, too.
In some of my more nihilistic writing, I’ve claimed that temporal validity is impossible—that the necessary conditions for external validity cannot obtain when the target context is in the future. No one has disputed this, exactly — but many seminar participants and peer reviewers have been quick to say that my position is overstated, that of course we know that validity is continuous and not binary. One such example:
“I don’t think it is accurate to conclude that external validity is impossible, because time can’t be randomly assigned or as-if randomly assigned. First, external validity involves a continuum and is not binary, and the same is true for internal validity.”
Ok. That’s definitely not how the causal inference movement in social science thinks about internal validity. Research designs are either “identified” or “not identified” aka confounded. If your design has “some internal validity,” then it is confounded.
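To make the binariness concrete, here is a minimal simulation (my sketch, not any published design): with an unobserved confounder, the difference-in-means estimate stays biased no matter how large the sample gets, while the randomized, identified version of the same design recovers the true effect.

```python
# A minimal sketch (invented numbers): the same estimator is biased under
# confounding at any sample size, and unbiased under randomization.
import numpy as np

rng = np.random.default_rng(0)
n, tau = 1_000_000, 2.0  # sample size and true treatment effect

u = rng.normal(size=n)  # unobserved confounder
t_conf = (u + rng.normal(size=n) > 0).astype(float)  # treatment depends on u
t_rand = rng.integers(0, 2, size=n).astype(float)    # randomized treatment

y_conf = tau * t_conf + u + rng.normal(size=n)
y_rand = tau * t_rand + u + rng.normal(size=n)

def diff_in_means(y, t):
    return y[t == 1].mean() - y[t == 0].mean()

print(f"confounded design: {diff_in_means(y_conf, t_conf):.2f}")  # ~3.13, biased
print(f"randomized design: {diff_in_means(y_rand, t_rand):.2f}")  # ~2.00, unbiased
```

No amount of data moves the confounded estimate toward the truth; the design either identifies the effect or it doesn’t.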
This makes sense, because that is what the word valid means. The validity of a driver’s license, or a password, or a logical argument, is obviously binary.
I agree with the thrust of the reviewer’s argument. Conceptualizing “external validity” as binary, we run headfirst into a brick wall of impossibility. As a poetic methodologist, my critique is of the method used to bridge the gap between the statistical operations and the world — the word valid. This method, of talking about validity, is itself invalid.
Why did we start using this method? I’ve been going down rabbit holes here. A glimpse deep in the warrens:
Philip Mirowski argues that the way that cybernetics, information theory and game theory percolated through the military and competing schools of neoclassical economics in the postwar era led to a reification of mathematical rather than statistical or computational economics. (Herbert Simon was the high-profile exception who took information seriously.) So my gloss is that “validity,” which is a perfectly valid method in formal mathematics, became enshrined in the lexicon and has simply overstayed its welcome.
Constructivist political scientist Francis Beer makes an argument in that tradition: “Validity is a central legitimating word in the lexicon of political science, suggesting the connection of scientific theory and research with the political world.” We could have gone with verity or truthfulness instead, except that the goal is not to produce truth but rather to convince other actors in society that we are Scientists doing Science things.
If anyone can pinpoint the exact moment “validity” became enshrined in social science, and the arguments for why, I’d be interested to hear more!
Still, I’d argue that the structure of social science as media reveals that validity is in fact the correct word, and that it is binary. This is because our central media technologies of peer-reviewed publication and subsequent in-line citation are ineluctably binary.
A paper is peer-reviewed or it isn’t! Exactly how continuous-valid does a paper have to be before it passes peer review? Whatever that threshold is, there’s the binary imposed by our media technology and the emergent sociology of how we think about what we’re doing. And then future papers summarize the findings of Smith et al. (2022) with a verbal and thus binary claim.
Formal meta-analysis digs back into the data of previous studies and accesses those directly — eliminating the throttling of the passage of evidence through these tiny binary channels. That’s where the real knowledge aggregation happens — but then why write these papers at all?
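Concretely, even the summary-statistics flavor of this aggregation is continuous; the sketch below (invented numbers) shows standard inverse-variance pooling, and individual-level meta-analysis goes further by reanalyzing the raw data itself.

```python
# A sketch of fixed-effect meta-analysis (invented numbers): inverse-variance
# weights preserve each study's continuous precision, which a binary
# "Smith et al. (2022) found X" citation discards.
import numpy as np

estimates = np.array([0.8, 1.2, 0.3])  # hypothetical per-study effect estimates
ses = np.array([0.5, 0.4, 0.9])        # their standard errors

w = 1 / ses**2                          # precision weights
pooled = (w * estimates).sum() / w.sum()
pooled_se = np.sqrt(1 / w.sum())
print(f"pooled effect = {pooled:.2f} +/- {pooled_se:.2f}")  # ~0.96 +/- 0.30
```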
That question is the central theme of this blog (when I’m not complaining about Twitter), so I won’t rehash it all here. I’ll instead conclude with some poetic methodology.
As Drew Dimmery and I argue in our new paper on generalization, instead of talking about “validity” we should talk about “power.” Statistical power is a well-understood and intrinsically continuous concept that relates to the amount of information that is generated from a research operation. This information then naturally decays when transported across time and space (or other dimensions identified by the researcher).
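To illustrate the contrast (my sketch, not the paper’s actual estimator), the power of even a simple two-sample test varies smoothly with sample size; there is no natural point at which a design flips from “invalid” to “valid.”

```python
# A minimal sketch: power of a two-sided two-sample z-test, computed from the
# normal approximation. Power is intrinsically continuous in n.
from scipy.stats import norm

def power(effect, sd, n_per_arm, alpha=0.05):
    se = sd * (2 / n_per_arm) ** 0.5  # standard error of the difference
    z_crit = norm.ppf(1 - alpha / 2)  # two-sided critical value
    return norm.sf(z_crit - effect / se) + norm.cdf(-z_crit - effect / se)

for n in (20, 50, 100, 200):
    print(f"n = {n:3d} per arm -> power = {power(0.5, 1.0, n):.2f}")
# n =  20 per arm -> power = 0.35
# n =  50 per arm -> power = 0.71
# n = 100 per arm -> power = 0.94
# n = 200 per arm -> power = 1.00
```

Decay across time or space can then be modeled as a discount on this information, though the specifics belong to the paper itself.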