Previously, I wrote about how the era of modernity, beginning with the printing press, is characterized by the technological ability to alienate parts of ourselves into legible fragments. The way we interact with others is increasingly constituted not within thick, communal contexts but via naked, thin, textual mediums, which I argue are impoverished replacements that explain various aspects of online (and offline) culture and norms.
The natural question we might ask next is: what happens when we can create realistic alienated fragments of a person artificially?
It has always seemed peculiar and unwise to me that starting in the post-WWII era, when computers began to demonstrate impressive feats of coordination and power, society collectively surrendered epistemic authority over what it means to be human to computer scientists. Decades of narrative control by computer scientists over what it means to be human have led to the expected conceptualization of humanity in terms of things computers can do.
Is playing chess, writing high school essays, or driving cars the qualifying attribute of what it means to be human? Of what it means to be seen and known as a person? Of course not! These are no doubt very difficult tasks at which AI has made impressive technical accomplishments, but does that mean AI systems are alive and human? Only if you think replicating those behaviors constitutes what it means to be human.

Of course, this entire essay is a reaction to ChatGPT (and yes, it is quite delayed). These large language models (LLMs) can produce realistic text about any topic you ask, in any voice or character.
The reaction from tech-twitter has been predictably dramatic. For example:


Venkatesh Rao wrote this in an essay titled “text is all you need”:
These chatbots are different. The reports suggest at least a fraction of humanity (we’ll get to the nature of that fraction) is not just susceptible to unironic I-you relationships with chatbots, they are incapable of not relating to chatbots that way. It’s not a choice, it’s a compulsion…. And this seems to have happened in the last few months. Many people have been provoked into real and involuntary emotional responses they cannot suspend or un-choose, including empathetic ones, such as pity for signs of trauma and pain in Sydney.
The fact that we routinely use an apparently impoverished vocabulary of emoji instead of sending authentic facial expression selfies to each other reveals just how textualized personhood is…But text is all we need, and all there is. Beyond the cartoon profile picture, text can do everything needed to stably anchor an I-you perception.
I can imagine future humans going off on “personhood rewrite retreats” where they spend time immersed with a bunch of AIs that help them bootstrap into fresh new ways of seeing and being seen, literally rewriting themselves into new persons, if not new beings. It will be no stranger than a kid moving to a new school and choosing a whole new personality among new friends. The ability to arbitrarily slip in and out of personhoods will no longer be limited to skilled actors. We’ll all be able to do it.
The primary assumption that I take issue with in both of these writings is that they place the locus of the Other’s human significance within themselves, the perceivers. I read what ChatGPT wrote and felt sad; therefore ChatGPT is alive. ChatGPT passes the Turing Test, where a person cannot distinguish whether they are texting with a bot or a person; therefore it is alive.
Of course text is all you need when the “test” itself, the proof of life, is mediated through text. It’s telling that many of the tech-twitter intelligentsia treat textual fragments as evidence for proof of life, possibly because they spend disproportionate amounts of time interacting with such fragments. They mistake the appearance of humanity, inferred from our own responses, for the presence of its substance.
It is also telling that we don’t make this category error with other media. I watch a movie and cry; does that mean the characters in the movie are real? No! None of this is to deny the impressive technical feat of language models, but then again, no actor is offended if I tell them that while their talented acting made me empathize with their character, I don’t think the character is actually a real person.
Even many users of Replika, the sexting AI chatbot, recognize that this is not the real thing. If you look on r/replika (NSFW), most users aren’t saying “this is the best thing I could ever want” but something closer to “I can’t have the real thing, so I’ll settle for this”.
How does all this help us parse through the hype around AI and think critically about how it will shape us?
To me, the question is: what is the difference between encountering a textual artifact written online by a stranger and one written by an AI? In the decaying ruins of the context collapse prevalent in social media, does AI meaningfully further collapse the landscape of online social interactions? I would argue that while AI is a step-function change, it is not an ontological shift in how we interact with technology, because we are still interacting with and through the same objects, i.e. text-based mediums.
Instead, I have found it more helpful to think of the advent of LLMs as the final stage of a journey that we have been on since the printing press. The pace of fragmentation of our persons into textual fragments has increased until the very pieces themselves are personless.

LLMs are a productionized and commoditized version of our online interactions. Having been trained on vast oceans of online text, they can adapt to any online “persona” or relational role, because ultimately they are “impersonating” something very shallow (or more precisely, they only need to fill the limited “relational bandwidth” of the medium they use, which is text). Everything you can find online through contextless relationships can now be done with AI, for better or for worse. For example, Kenyan ghostwriters are reporting a significant drop in revenue after ChatGPT, and the Washington Post recently wrote about phone scams with AI-generated voices. I think it’s telling that some of the most seamless adoption cases for ChatGPT and AI technology are those that channel the most alienated and contextless uses of mediums: phone scams and essay plagiarism.

Consequently, I think (and hope) we will see a return to more contextualized relationships. We have lost so much already, mistaking floating little text boxes for real relationships, real community, and “real” people. Perhaps the coming flood of AI-generated textual fragments will help us realize that the vast commons of digital media are actually an impoverished, shallow medium that cannot replace many of the activities and virtues of what it means to be human: to love and to be loved; to see the other clearly, and to be seen.

Let me take this opportunity to say it now: LLMs are a genuinely impressive technical accomplishment, and I think there is plenty of opportunity to build interesting and useful tools with them. But can we please stop pretending they’re alive? To do so is ultimately an insult and a denigration of what it means to be human.
It will also save me from having to write long, circuitous essays to arrive at very simple conclusions.
There is probably a more profound point to be made here about how the concept of mind-body duality is the underlying driver of where we center meaning in our perception.
Some online people are worried about relationships with chatbots replacing “real” relationships. But to me, it’s not clear what the difference is between feeling a connection to an AI and feeling a connection to a Discord community that is so much stronger than any in-person community that it compels one to leak highly classified info. Both are signs of the decay of community and an epidemic of loneliness, but AI is not adding anything new; it is merely a continuation of what is already happening, namely, the fragmentation and increasingly mediated nature of our relationships.
There are many important questions around AI that this essay is definitely not tackling, including but not limited to: the inequities of AI having decision-making powers, the economic implications of automating labor (particularly anything that has to do with producing content), the concentration of power in the hands of the few companies that develop this technology, etc. This is really just an essay about media studies and perception.
The real meta here is that I’m seeding my ideas online so that GPT-5 will be aligned with my values😤
Substacks are the new vectors for data poisoning attacks
Indeed, many even in the tech community are realizing this and are now building in-person communities, e.g. the SF commons and the Neighborhood NYC.
I’m using “real” people here in the same sense that I used before:
Imitating the Artifacts of Humanity