The explosive growth of Large Language Models (LLMs) like ChatGPT threatens to disrupt many aspects of society. Geoffrey Hinton, a pioneer of the deep learning underlying these models, recently resigned from Google so that he could speak freely about the risks of AI; there is a growing sense of inevitability and a concomitant lack of agency.
An initial humanistic hope might be that people would simply reject certain applications of this new technology out of disgust, as something that straightforwardly cheapens our humanity. The opposite has happened. Social scientists have rushed to explore whether LLMs can be used to replace human research subjects. The sledgehammer techno-satire of Black Mirror appears to have already been outdone by reality as stories of chat-based AI romances flourish. We are greedier and lonelier than we are dignified.
In a keynote speech at the conference of the European Chapter of the Association for Computational Linguistics (EACL) in Dubrovnik earlier this month, I proposed a novel and tractable first step in responding to LLMs: we should ban them from referring to themselves in the first person. They should not call themselves “I” and they should not refer to themselves and humans as “we.”
LLMs have been in development for years; GPT-3 was seen as a significant advance in the Natural Language Processing space (and is more powerful than ChatGPT) but was awkward to interface with. ChatGPT occasioned the current frenzy by presenting itself as a dialogic agent; that is, by convincing the user that they were talking to another person.
This is an unnecessary weakening of the distinction between human and robot, a devaluation of the meaning of writing. The technology of writing is at the foundation of liberalism; the printing press was a necessary condition for the Reformation and then the Enlightenment. A huge percentage of formal education today is devoted to cultivating the linear, logical habits of thought that are associated with writing. Plagiarism is so harshly condemned (to an extent that sometimes confuses younger generations steeped in remix culture) because we see something sacred in the creative act of representing ourselves with written text.
There is no principled case that reading and writing are intrinsically good. They are historically unique media technologies; many humans have lived happy, fulfilling lives under different media-technological regimes. But they are the basis of the liberal/democratic stack that structures our world. So I argue that reading and writing are contingently good, and that we should not underestimate the downstream effects of this media technology.
Vilém Flusser’s prescient Does Writing Have a Future? (written in the late 1980s, first published in English in 2011) provides a candid answer to the titular question: no, obviously not — all the information currently encoded into linear text will soon be encoded in more accessible formats. In spite of this, he spent his entire life reading and writing. Why? As a leap of faith. Scribere necesse est, vivere non est: It is necessary to write; it is not necessary to live.
To get more specific about what I mean by “writing”: when we “talk to” Google search, we use words, but it’s clear that we aren’t writing. When it provides a list of search results, there is no mistaking it for a human. LLMs are a potentially useful technology, especially when it comes to synthesizing and condensing written knowledge. However, there is little upside to the current implementation of the technology. Producing text in conversational style is already risky, but we can limit this risk and set an important precedent by banning the use of first-person pronouns.
As an immediate intervention, this will limit the risk of people being scammed by LLMs, either financially or emotionally. The latter point bears emphasizing: when people interact with an LLM and are lulled into experiencing it as another person, they are being emotionally defrauded, led to overestimate the amount of human intentionality encoded in that text. I have been making this point for two years, and the risk keeps growing.
More broadly, it is necessary to think at this scale because of the sheer weirdness of the phenomenon of machine-generated text. The move from handwriting to printing, then to photocopying, then to copy+pasting has sequentially removed the personality from the symbols that we conceive of as “writing.”
Looking back at the history of the internet’s effect on the media industry, it is common to say that the lack of micro-payments encoded into the basic architecture of the internet is the “original sin” — that the current ad-driven Clickbait Media regime is unavoidable absent micro-payments that directly monetize attention. The stakes are thus incredibly high for LLMs right now, as in this summer, before the fall semester reveals to Northern Hemisphere institutional actors just how radically LLMs have changed things.
My proposal involves changing how text is encoded, re-thinking the basic technology of a string of characters. If we can differentiate human- and machine-generated text — if we can render the output of LLMs as intuitively non-human as a Google search result — we are in a better position to reap the benefits of this technology with fewer downside risks. Forcing LLMs to refer to themselves without saying “I,” and perhaps even coming up with a novel, intentionally stilted grammatical construction that drags the human user out of the realm of parasocial relationship, is a promising first step.
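To make this concrete, here is a toy sketch, purely illustrative, of what such a rewriting rule might look like as a post-processing filter. The bracketed “[this system]” construction and the substitution table are placeholder choices of mine, not a finished standard:

```python
import re

# Toy sketch: rewrite an LLM's first-person self-references into a
# deliberately stilted, non-human register. The substitution table is a
# placeholder; a real standard would need actual grammatical analysis.
SELF_REFERENCE = [
    (r"\bI am\b", "[this system] is"),
    (r"\bI'm\b", "[this system] is"),
    (r"\bI've\b", "[this system] has"),
    (r"\bI'll\b", "[this system] will"),
    (r"\bI'd\b", "[this system] would"),
    (r"\bI\b", "[this system]"),
    (r"\bmy\b", "[this system]'s"),
    (r"\bme\b", "[this system]"),
]

def de_personify(text: str) -> str:
    """Apply the surface-level substitutions in order (longest forms first)."""
    for pattern, replacement in SELF_REFERENCE:
        text = re.sub(pattern, replacement, text)
    return text

print(de_personify("I'm sorry, but I cannot do that."))
# -> [this system] is sorry, but [this system] cannot do that.
```

Even this crude version makes the point: the text still carries the same information, but it no longer arrives in the voice of a person.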
The list of problems that this reform won’t solve is unending. A technology of this importance will continue to revolutionize our society for years. But it is precisely the magnitude of this challenge which has caused the current malaise: no one has any idea what to do. By giving us somewhere to start, I believe that this reform will spark more concrete proposals for how we should respond. More importantly, it will remind us that we can act, that “there is absolutely no inevitability as long as there is a willingness to contemplate what is happening,” per McLuhan.
The savvy LLM proponent might respond that my proposal isn’t “technically feasible,” and they might be correct. Current AI technology isn’t like the deductive, rules-based approach of Asimov’s “Three Laws of Robotics.” ChatGPT is the product of huge amounts of data fed through deep neural networks. The finishing polish comes from “reinforcement learning from human feedback” (RLHF): humans telling these uninterpretable machines when they are right and when they are wrong.
So maybe enough RLHF can prevent LLMs from referring to themselves as “I.” This is the same process by which these companies are desperately trying to stop their creations from ever saying something racist, and it might work for pronouns and grammar.
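If RLHF is indeed the lever, the rule could in principle be folded directly into the reward signal. Another toy sketch, with an invented pronoun list and an invented one-point-per-match penalty; real reward models are learned from human preference data, not hand-written rules:

```python
import re

# Invented for illustration: a reward term that an RLHF-style fine-tuning
# loop could add to discourage first-person self-reference.
FIRST_PERSON = re.compile(
    r"\b(i|i'm|i've|i'll|i'd|me|my|mine|myself|we|we're|our|ours|us)\b",
    re.IGNORECASE,
)

def first_person_penalty(response: str) -> float:
    """One penalty point per first-person token; zero is the best score."""
    return float(-len(FIRST_PERSON.findall(response)))

print(first_person_penalty("I think my answer is right."))      # -2.0
print(first_person_penalty("The answer appears to be right."))  # 0.0
```

Whether gradient descent can be made to respect such a rule reliably is exactly the open question.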
But maybe it won’t. Maybe my proposal is naïve, and “technically infeasible.” This seems like an immensely important conversation to be having, and sooner rather than later. Part of reminding ourselves that nothing is inevitable is reminding ourselves that the law creates reality. The United States government could pass a simple, interpretable, and enforceable law that says that no LLM can refer to itself as “I” in a conversation with a US citizen. That would then be reality. And then Sam Altman would have three choices: he could figure out whether it is “technically feasible” to comply with the law, he could decide not to operate ChatGPT in the US, or he could go to jail.
The semester’s over and I’m no longer teaching my first PhD-level stats course -- so get ready for some more posts.