Getting Gooier
How AI is transforming humans
Everything is now about AI except AI. AI is about the shapes of human beings.[1] David Lang likes to point out, and I agree, that in thinking about the impact of major technologies, everybody is obsessed with how the world might change, but few have the nerve to examine the uncomfortable question of how humans might change in response: what sorts of dysphorias might be set in motion, and what the results might be.
We typically pay lip service to the idea that humans are changing in response to a new technology — it’s too obvious to deny — and then hurriedly move on to analyzing the world of the future under the operating assumption that they don’t.
One way this manifests is via the bad habit of cashing out the human side of expectations of the future in terms of rise and fall in the stocks of particular sorts of familiar fixed-shape humans, often defined by education (“humanities majors will thrive” is a perennially popular one), or rather vacuously defined broad-strokes aptitudes (“generalists” is a popular one; they are always just about to inherit the earth), or behavioral dispositions (“curators” and “storytellers” are always just about to be recognized for their value; “lawyers” and “bureaucrats” are always just on the verge of disappearing).
What these lazy extrapolations of the human condition share is the presumption that there will be no deep change in human nature itself, only in the shape of the distribution of an eternal range of human types. It’s a tempting trap, and one I myself fall into frequently. For example, recently I’ve been proclaiming that non-technical project managers are going to inherit the software world. Not only is that kinda wrong on its own terms, the terms are wrong. Programmers vs. project managers is the wrong ontology for the transformed versions of that subset of humans.
Somewhat more usefully, people like to speculate about certain shapes of humans going away, and about new ones appearing. For example, Boris Cherny, the inventor of Claude Code, was arguing that the entire category of “software engineer” will disappear in a year. Not like buggy-whip makers though. It’s more like the functional role of “software engineer” will get refactored across other TBD roles. On the speculative new roles front, we have people thinking about Asimovian robopsychologists emerging. And I’m sure someone somewhere is proposing the creation of Chief Vibes Officer roles.
While an improvement on the rise-and-fall-of-human-stocks approach, this creation-and-destruction-of-roles approach is still not quite there yet. You’re still transforming the ontology of ways of being through deletions and additions that feel taxonomically familiar. These are remixes and portmanteaus. Not visceral changes.
So what does it mean for humans to change in response to technology? One example of getting the analysis right is Virginia Woolf’s essay “Mr. Bennett and Mrs. Brown,” which includes the famous assertion that “on or about December, 1910, human character changed.” The whole passage is worth quoting:
And now I will hazard a second assertion, which is more disputable perhaps, to the effect that on or about December, 1910, human character changed. I am not saying that one went out, as one might into a garden, and there saw that a rose had flowered, or that a hen had laid an egg. The change was not sudden and definite like that. But a change there was, nevertheless; and, since one must be arbitrary, let us date it about the year 1910. The first signs of it are recorded in the books of Samuel Butler, in The Way of All Flesh in particular; the plays of Bernard Shaw continue to record it. In life one can see the change, if I may use a homely illustration, in the character of one’s cook. The Victorian cook lived like a leviathan in the lower depths, formidable, silent, obscure, inscrutable; the Georgian cook is a creature of sunshine and fresh air; in and out of the drawing-room, now to borrow the Daily Herald, now to ask advice about a hat. Do you ask for more solemn instances of the power of the human race to change? Read the Agamemnon, and see whether, in process of time, your sympathies are not almost entirely with Clytemnestra. Or consider the married life of the Carlyles and bewail the waste, the futility, for him and for her, of the horrible domestic tradition which made it seemly for a woman of genius to spend her time chasing beetles, scouring saucepans, instead of writing books. All human relations have shifted—those between masters and servants, husbands and wives, parents and children. And when human relations change there is at the same time a change in religion, conduct, politics, and literature. Let us agree to place one of these changes about the year 1910.
This is, in my opinion, the right way to analyze and model human change in the wake of major technologies. I’ve argued elsewhere that the change Woolf was talking about in this particular case was a consequence of the rise of clock time. Big Ben tolling repeatedly is a motif in Mrs. Dalloway, and the whole modernist literary style she helped pioneer is arguably about subjective internal time (“stream of consciousness”) diverging from objective, external time, creating a kind of temporal alienation, and a deep war among temporal psychotypes. You could tell the story of the 20th century as a deep conflict between temporal orientations. But this story isn’t visible through ordinary analytical lenses.
The conflict Woolf posed, between the titular Mr. Bennett and Mrs. Brown (who represented old and new ways of being human), is not easily reducible to legible types, defined by class, gender, professions, educational markers, or legible personality traits. The best I can do is to describe the invisible Woolfian time war as a conflict between people defined by strong interiority, who felt alienated by the emerging clock-based society, and people defined by strong exteriority, who felt deeply at home in the chronos-shaped environment.
The division wasn’t a clean one. There were interiority-driven people who thrived by gaining mastery over clock-time cultures, and exteriority-driven people who struggled. But by and large, it’s fair to say that the grain of the twentieth century favored exteriority. Woolf gave voice to a kind of awakened resistance that carried the interiority torch for nearly a century. I have been making the prediction in recent years that we’re overdue for a reversal of fortunes for the two types, but I’m not sure now. AI has muddied the picture.
How do we apply Woolf’s approach to the question of how humans are being changed by AI? This is the transhumanism question. We’ve seen early examples — people getting into intimate confessional relationships with chatbots, people driven to hypomanic agentic paranoia by being in a loop with Claude Code, and so on.
Do these transformations of the human have any shared features? I think they do: the balance between what Alan Watts called prickles and goo in the make-up of the human is changing.
Kevin Simler wrote a great primer on the idea, which you should read first if you’re not familiar with it. Here’s the key bit, including a quote from Watts:
Here’s Watts contrasting “prickly” people with “gooey” people:
The prickly people are tough-minded, rigorous, and precise, and like to stress differences and divisions between things.... The gooey people are tender-minded romanticists who love wide generalizations and grand syntheses.... Prickly philosophers consider the gooey ones rather disgusting — undisciplined, vague dreamers who slide over hard facts like an intellectual slime which threatens to engulf the whole universe in an “undifferentiated aesthetic continuum”.... But gooey philosophers think of their prickly colleagues as animated skeletons that rattle and click without any flesh or vital juices, as dry and desiccated mechanisms bereft of all finer feelings.
But it’s not just whole persons who are prickly or gooey. All of us have our prickly parts and our gooey parts. The questions to ask are Which parts? and What’s the ratio of prickles to goo?
We could also call these the hard and soft parts of our identities. The hard/prickly parts are uncompromising and unyielding. They feel necessary and essential. They are exclusive; they define boundaries with a privileged ‘inside’ and an excluded ‘outside.’ If your tastes in music are hard or prickly, you’ll feel good about excluding certain genres and artists from your identity.
Here’s my hypothesis: because AI is perceived as a psychologically safe counterparty for human-like relationships (whether or not it actually is depends on how your favorite LLM handles your data), we are more willing to expose our gooey side to it, and suppress our pricklier instincts in engaging with it. To the extent this relational posture is successful, it amplifies the gooey side. We become gooier.
This is not universally true, but the people who start using AI in sustained ways typically fit this profile. People who try to form prickly, cautious, and suspicious interfaces with AIs typically don’t end up using it effectively enough to make it worth their while, and eventually retreat to older human modes. There is a reason the first major gooified interface is called vibecoding. If you’re not capable of vibing with the machine, it will do far less with and for you. Checking every line of code a coding agent writes is a prickly relationship. Never even opening up the code in a code editor, but just watching meta-commentary fly past in the command shell is a gooey relationship. One is doomed. The other will likely thrive.
What about the human-facing side? I think as more of our needs for gooey relationships are met by AIs, our human-facing side is less inclined to take the risks required to balance prickles and goo in human relationships. We get, not necessarily pricklier, but simply less gooey. The result is a tendency to cool off and disengage unless the expected relationship rewards are significantly higher. It’s the psychological equivalent of flying instead of taking the train when flights are cheap enough.
Human nature is an intersubjective thing, and if enough of your intersubjective relationships are with machines, your machine-face gets gooier, while your human face gets less gooey. Overall, you get gooier, but look relatively pricklier to other humans.
In the short to medium term, I think the second-order effect of greater gooeyness is growing divergence and acceleration of the already increasing atomization of humans. Unlike atomization due to social media, AI-driven transhuman atomization feels fundamentally more sustainable. Perhaps we should call it molecularization. You may be growing more distant from other humans, but you're going to get more intimately entangled with your AIs.
In the long term, I think new forms of human-to-human sociality (and AI-to-AI sociality via multiagency) will rein in the divergence and atomization/molecularization. There is already talk of AI-mediated digital egregores. My old idea of graph minds seems relevant here. But I’m not yet seeing viable mechanisms for this sort of re-convergence into new digitally mediated intimate socialities. I predict they’ll appear in about a year or two.
Following Woolf, we can assert that on or about December 2025, human nature changed. We can argue about whether the ChatGPT moment or the Claude Code moment marked the more definitive change. I vote for the latter because it went deeper and offers a fundamentally open-ended architecture for scaffolding human-AI relationships in the future. Unlike the chatbot form factor, which is anthropocentric in conception and fundamentally depth-limited, the agentic coding strange loop creates a fundamentally alien way of being, capable of making us as alien as we dare to become. It is a portal to transhumanism.
[1] I thought the line “everything is about sex except sex; sex is about power” was due to Elton John. Apparently many think it was Oscar Wilde. There is no clear source.


