Discussion about this post

Mike B:

tl;dr?

Chris Schuck:

Your breakdown of these four different LLM advances is very helpful. Have you seen Jac Mullen's new Substack, After Literacy? He recently kicked off a series of posts advancing the thesis that LLMs, coupled with our new communication infrastructure, are effectively externalizing attention in a way analogous to how writing externalized memory. The way he applies this seems very related to your fourth form of LLM innovation: the ability to attend to both itself and the user given the bigger context window. (Or was this the third? You only seemed to cover three.)

It feels like you might have buried the lede just a tiny bit at the end (is there a special term for burying the final point rather than the lede?). If I understand correctly, what's key here is not simply that we have been shortening our context windows and weakening our attentional capacities, but that only very recently did LLMs begin *replacing* our window as they expand their own - meaning there is some complex interaction between the distinctively human effect of letting our own context window decay and the distinctively LLM effect of lengthening theirs in a more algorithmic fashion. So one implication is that we increasingly outsource attention to the LLM, even more than was already happening with media technology. But maybe this is also much more than a matter of outsourcing tasks, or a direct linear substitution of our attention with theirs?
