iPhoneomenology
or, Seeing like a Smartphone
Like many people, I have a longstanding debate about whether our smartphones are “listening” to us. The circumstantial evidence is overwhelming: everyone has had the experience of talking IRL about, say, methylene blue, only to find ads for methylene blue on every semi- to non-reputable website they visit.
No, you’ve never searched for methylene blue before; you had never even heard of it until today.
Yes, your smartphone was turned on.
And now, look:
My friend (incorrectly) felt that he had found the smoking gun in the form of Midwest emo memes. He had no prior connection to the Midwest; he never leaves the LES. One day, his friend visited from Iowa and they talked about emo memes. The next day, boom, his TikTok feed was full of Midwest emo memes.
The claim that smartphones are “listening” to us is similar to that old chestnut about how “The Algorithm” is the all-powerful determinant of what content everyone consumes on social media. In both cases, the technological story is literally false: “The Algorithm” is just trying to optimize for some ever-shifting combination of watch-time and other engagement metrics; for the phone to be constantly recording and uploading would be too expensive in terms of battery and bandwidth, and more legal trouble than it’s worth.
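To see why constant upload would be costly, here is a rough back-of-envelope sketch. The bitrate and waking hours are my own assumptions for illustration, not measurements from any actual device:

```python
# Back-of-envelope: what streaming all ambient audio to a server would cost.
# All numbers below are rough assumptions, not measurements.
BITRATE_KBPS = 24          # low-bitrate compressed speech audio (assumed)
HOURS_AWAKE = 16           # hours per day the mic would capture speech (assumed)

seconds_per_day = HOURS_AWAKE * 3600
bytes_per_day = BITRATE_KBPS * 1000 / 8 * seconds_per_day

mb_per_day = bytes_per_day / 1e6
gb_per_month = mb_per_day * 30 / 1000

print(f"{mb_per_day:.0f} MB/day, {gb_per_month:.1f} GB/month")
# Even at a modest speech bitrate, that's a chunk of a typical data plan,
# and continuous mic + radio use would show up in battery stats.
```

Even with generous compression, the upload is large enough that users, carriers, and OS battery monitors would notice.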
But both criticisms also reveal something about how we experience our technological world. People talk about “The Algorithm” to capture the fact that they don’t understand and can’t control how they are exposed to media on these platforms. And phone recording anxiety reflects the reality of constant surveillance at scales so vast we can’t comprehend them. It is an honest, human response to feel deeply uneasy about this inhuman state of affairs.
The need to provide some specific, technological criticism of these machines is a consequence of our rationalist, scientific culture. Educated people know better than to say that they simply don’t like some new technology; we need to give a cost-benefit analysis, we need to think about specific ‘harms’ if we should express our displeasure.
This produces the tedious back-and-forth between public criticism of technology and academic research on the same topics that I described in the first chapters of my book The YouTube Apparatus. A media panic calls attention to some specific problem created by technology, resulting in an ill-posed research question that social scientists try to answer.
Is social media “an echo chamber”? No, not how we measure it — therefore social media must not be causing polarization.
Does the YouTube algorithm radicalize viewers down a rabbit hole of extreme political content? No, not how we measure it — therefore YouTube must not be causing radicalization.
Does adolescent social media use increase anxiety and depression? No, we haven’t been able to prove it with the highest possible demands of rigor1 — therefore it’s fine if kids use social media.
None of the “therefore”s are logically entailed by the respective empirical test, of course, but in our scientific culture which says that “any data is better than no data,” it’s easy to allow for some slippage in the theoretical nature of the problem if it means that we can appeal to empirical evidence.
I’ve been reading Science in Action and Science in a Free Society this week — one point where Latour and Feyerabend converge is in the insistence that science should be treated as just one tradition among many. A society in which the few thousand “scientists” with sufficient power and resources are the only actors able to produce “facts” is an impoverished and particularly undemocratic society.
That’s not to say that we shouldn’t cultivate specialists and experts. Our world is far too complex not to use the tools of science that have enhanced our capacity for navigation and control. But it’s crucial — insofar as we want to live in a democracy — that scientists understand their role as helping our fellow citizens solve their problems rather than prescribing a way for them to live. We need to figure out what actions to take that will allow us to build the society we want.
But what about those smartphones? They do, in fact, “know” a shocking amount of information about us. To understand the source of that information, though, we need to do some iPhoneomenology. How does the smartphone “see” or “hear” the world?
Not like humans!
There are two levels on which we differ:
The sensory apparatus: what are the sense organs for receiving impressions from the world?
The cognitive apparatus: how are these impressions combined, stored, and accessed?
The phone’s sensorium differs from ours, although there are some superficial similarities. It can’t taste or smell, sure, but it does have something like sight and hearing — the source of the mistake that the phone is always listening to us. It can listen to us, sometimes, for sure. But there are two other sense organs which are much more important to the phone’s operation.
The first is touch. Humans get information from the world through our skin, much more than we tend to think given the cultural supremacy of sight and sound. But the phones get information constantly about how we touch them. Their “skin” is incredibly sensitive, and directly connected, through the internet, to gigantic databases of information. Our gestures of tracing and tapping provide the phone with the most direct form of information about us.
The second sense organ is proprioception. In humans, this is so essential as to be invisible: we know where our limbs are not through the “five senses” but through our internal nervous apparatus. Without this sense we would collapse, crash into things, die. Phones don’t move themselves, but their sense of location is inhuman. They don’t navigate the medium-size world of tables and chairs but the precise grid of the surface of the globe. Satellites tell the phone precisely where it is at all times.
The second level, cognition, is equally important for understanding how the phone processes the information it receives, and here the difference with humans is starker still. For humans, all of our sensory inputs are stored in our bodies, where they are combined in complex ways with our muscles and neurons. There are not separate storage units for images and smells, sounds and actions — they’re all mushed together in ways that neuroscience does not yet understand beyond the broad contours. Accessing this information is equally messy, with our memories constantly changing based on subsequent experiences.
The phone differs on every dimension — the information is not embodied in the same way. Its memory is precise and stupid. But it has another advantage — much of the “cognition” happens outside of a given device, as information shared along digital networks is collected and processed within the databases of the companies providing the services.
This is how the phone is “listening” to you. At some point, you touched it in a way that flipped a bit in a data center somewhere, ultimately changing your “shadow self” of data of interest to advertisers.
This is how (and why) it shows you ads for methylene blue or Midwest emo. The friend visiting from Iowa told you about his interest in Midwest emo, out loud, so it’s natural to think that if the phone “knows” about it, it’s because it “heard” you. But your friend brought his phone with him, from Iowa to your apartment. The data center detects the two phones, sitting in precise proximity, unmoving, for hours. Somewhere, somehow, this information is used to shift the embeddings of the two shadow selves towards each other.
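A toy sketch of what that co-location inference might look like. Everything here — the 50-meter radius, the hourly location samples, the linear “nudge” between interest vectors — is invented for illustration; real ad-tech pipelines are proprietary and surely far messier:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000  # Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def colocated(fixes_a, fixes_b, radius_m=50, min_hours=2):
    """fixes_*: lists of (hour, lat, lon) samples, one per hour.
    True if the two devices sat within radius_m for at least min_hours."""
    together = sum(
        1
        for (_, la1, lo1), (_, la2, lo2) in zip(fixes_a, fixes_b)
        if haversine_m(la1, lo1, la2, lo2) < radius_m
    )
    return together >= min_hours

def nudge(emb_a, emb_b, rate=0.1):
    """Move each user's interest vector a small step toward the other's."""
    new_a = [a + rate * (b - a) for a, b in zip(emb_a, emb_b)]
    new_b = [b + rate * (a - b) for a, b in zip(emb_a, emb_b)]
    return new_a, new_b
```

Two phones logged at nearly identical coordinates for an evening trip the `colocated` check, and the host’s “shadow self” drifts a step toward the visitor’s — no microphone required.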
So, yes, it’s completely insane that this kind of thing is happening to us, essentially all the time. But the phone isn’t listening. It’s far worse than that. Basing our understanding on human phenomenology understates how insane the situation is, how absurd.
If only our phones were merely listening to us! That’s a threat we can understand, at scales that make sense. This is a similar psychological coping mechanism to that implied by the reification of “The Algorithm,” as I wrote in the article for Mother Jones:
The fact that we see agency—often cruel but at least human—in the algorithm reveals our fundamental need for interpersonal connection. It lets us imagine someone like Zuck in control, instead of adrift in “the apparatus” like the rest of us. We wish Big Brother were watching. But we might just be alone with our phones.
Though, here, the evidence is piling up — one of the long-delayed Meta 2020 collaboration experiments was just released. The largest “deactivation” study to date shows that self-reported wellbeing increases when people stop using either Facebook or Instagram during the 2024 US Election — and the effects on Instagram are driven by women under 25.