Welcome back.
August was exactly as unpleasant as I expected it to be, but I did get some good thinking done on several angsty questions that have been troubling my sensitive soul, and filled about half a Moleskine notebook. I awarded myself one gold snowflake for making my break about as productive as is possible in August.
One of the questions I was thinking about is: what’s the role of words in the future?
The conceit of this newsletter is that it is a studio for words, so it’s the sort of self-absorbed question I think I should be asking here from time to time, but it’s also a question that’s of broader interest. Words are an indicator species for the health of the cultural ecosystem, even for those who harbor thinly veiled contempt for words and those attached to producing and consuming them like they matter.
I came up with an answer that was at least amusing to me, possibly insightful, and perhaps even correct: Low-Level Vitalist Memeing. LLVM for short.
The programmers among you can probably guess where I’m going with this strained joke.
I was thinking about the question in the spirit of this passage by Henry James in a 1915 essay in the New York Times:
“The war has used up words; they have weakened, they have deteriorated like motor-car tyres; they have, like millions of other things, been more overstrained and knocked about and voided of the happy semblance during the last six months than in all the long ages before, and we are now confronted with a depreciation of all our terms, or, otherwise speaking, with a loss of expression through increase of limpness, that may well make us wonder what ghosts will be left to walk.”
Something very like this is going on today, and has been since about 2016. Beneath the glut of text being produced (I myself am subscribed to a couple of dozen newsletters), there is a crisis in the world of words.
Words don’t work the way they used to.
They’ve gone limp. Writing in old modes feels like trying to drag around a bunch of kids who’ve gone boneless. You can kinda arrange them in position by brute force, but it’s perhaps better to ask why they’ve gone boneless.
To the extent you don’t grok where and how they still do work, you might even be tempted to speculate, as James did in 1915, that words have undergone a sort of irreversible devaluation, and are destined for a ghostly future.
This is not new. Big global crises tend to cause dramatic shifts in literary culture. And for a while words seem so “limp” you begin to wonder if language itself is about to go obsolete as a critical load-bearing element of culture. But the crisis contains the seed of a renewal.
Henry James’ 1915 comment foreshadowed the post-WWI/Spanish Flu rise of literary modernism and realism, and the decline of Victorian and Edwardian modes of writing. This is what Virginia Woolf was going on about in Mr. Bennett and Mrs. Brown.
Writers came up with many solutions to the problem of “limpness” flagged by James.
Hemingway, for instance, famously adopted a laconic “iceberg” style of writing, with 9/10 of the important stuff left unsaid. He fought the “limpness” of words by infusing the few he did use with intense phenomenological vigor.
Others came up with other solutions. H. P. Lovecraft invented cosmic horror to talk about unconscious demons. Virginia Woolf developed what became stream of consciousness fiction. James himself developed what I consider an unreadable style. All these innovations, arguably, were a response to the threat to the vitality of words. It is not surprising that the vitalism of Henri Bergson appealed to many writers of this generation.
You find similar shifts if you go further back. The Black Death catalyzed a shift from ancient forms of epic and folk fiction — think Arthurian legends, which had turned into tired and played-out governing cultural texts by the mid 1300s — to a more realistic, historically situated kind of fiction exemplified by the Decameron (which I’m reading now), and The Canterbury Tales, which was inspired by it.
Why does this sort of thing happen? Why does the role of words in culture shift in the wake of big, dislocating shifts?
Why do we go mute on one front, and start talking funny on another?
I mean, of course words don’t exist in a vacuum, and are linked to the realities of life, but still, how does the coupling actually work?
***
Here’s a different way of thinking about the phenomenon that might help answer the question.
Natural human languages are sometimes analogized to computer languages, and they do share many features, but there is one key difference. A single natural human language like English maps not to a specific programming language like Python, but to an entire stack of computing languages, from the lowest to the highest level.
Sometimes we use English in ways that look like machine code (think step-by-step instructions or UI text strings), sometimes in ways that look like shell scripts, sometimes like C, sometimes like JavaScript. In each case, the “compile target” is the human brain. As with computer languages, sometimes you compile one language into a lower-level language — esoteric scientific journal jargon into pop-science language, for example.
Given a piece of natural language text, it is always interesting to ask — what level of abstraction in a computing stack is it analogous to?
A PowerPoint-based speech is perhaps like JavaScript and CSS. It belongs in the public presentation layer of the stack.
A legal contract is perhaps at the C level. A to-do list is perhaps at the bytecode level.
The precise mappings do not matter. The point is, human natural languages fluidly operate at all levels of abstraction, and there are no sharp boundaries. They can be used to program other minds at any level from machine code to CSS.
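If the stack analogy feels too abstract, here is a minimal, purely illustrative sketch (a toy example of mine, not anything canonical): the same small instruction expressed at three levels of an actual computing stack. The function name and the assembly fragment are made up for illustration; real compiler output would differ in the details.

```c
/* A hypothetical illustration: the "same" instruction at three stack levels.
 *
 * Presentation layer (slide-deck English):
 *   "To convert Celsius to Fahrenheit, multiply by 9/5 and add 32."
 */

/* The "C level": precise, unambiguous, still human-readable. */
int celsius_to_fahrenheit(int celsius) {
    return celsius * 9 / 5 + 32;
}

/* The machine-code level, roughly what a compiler lowers this to
 * (x86-64, approximate and compiler-dependent):
 *
 *   imul  eax, edi, 9
 *   ...              ; integer division by 5, via compiler arithmetic tricks
 *   add   eax, 32
 *   ret
 *
 * Same intent, progressively less readable, progressively closer to silicon.
 */
```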
Now take a macro view of this. Given all the text being produced and consumed in the world at any given time, what is the distribution of weight and value? How, to make the question more concrete, is the GDP of words distributed among the words flowing right now?
A friend of mine once remarked that though programming language mavens hate it, PHP remains a critically important language because so much of the internet’s GDP runs on it. Could you say something similar about certain levels of usage of English? Like we all hate PR boilerplate and corporate-speak, but as with PHP, a lot of GDP rides on that level of language.
At what level of the language stack is the real action unfolding? What is the economic, political, and cultural capital latent in the words flowing through our world at different levels?
Big global crises, arguably, cause sudden disruptive shifts not just in what is said, and by whom, but in the kinds of words that have value, and the levels at which that value flows.
When we say, for example, that culture is shifting from “talking” to “doing,” we don’t really mean words are becoming worthless. We mean words at the Substack newsletter or PowerPoint talk-track level are becoming less valuable, while words at, say, the level of contracts or Slack messages among coworkers are becoming more valuable. Or that words spoken charismatically from a public stage are being devalued, while words spoken in private among intimates are increasing in value.
Which brings me to my answer, Low-Level Vitalist Memeing, which is only partly a joke.
***
The programmers among you will recognize that my LLVM is a play on a better-known LLVM, an initialism that originally stood for Low-Level Virtual Machine but is now just an open brand for a bunch of industry-standard compiler-y things. The computing-stack LLVM includes what is known as an intermediate representation, or IR, which is the lowest level of abstract language a compiler generates before you hit hardware-specific instruction sets (the “languages” that run on specific bits of silicon).
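For the non-programmers, here is roughly what an IR looks like. The sketch below (my illustration, nothing official) takes a trivial C function and shows, in a comment, the approximate LLVM IR that a front-end such as clang produces for it when invoked with flags like -S -emit-llvm. The function itself is hypothetical, and exact output varies by compiler version and optimization level.

```c
/* A trivial function, plus (approximately) the LLVM IR a compiler
 * front-end like clang emits for it, before any CPU-specific code
 * is generated. Real output carries extra attributes and metadata,
 * and varies by compiler version; this is a simplified sketch.
 *
 *   define i32 @add(i32 %a, i32 %b) {
 *     %sum = add nsw i32 %a, %b
 *     ret i32 %sum
 *   }
 *
 * The IR is still hardware-independent (no named CPU registers,
 * no particular piece of silicon), but far lower-level than the C
 * it came from. A backend then lowers it further into x86 or ARM
 * machine instructions.
 */
int add(int a, int b) {
    return a + b;
}
```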
By analogy, for natural languages, the next level below LLVM is whatever private pre-linguistic thoughtforms you use inside your head to think. Low-level vitalist memeing is the lowest-abstraction way to communicate with another human being short of an actual brain-to-brain connection.
Twitter is obviously the heart and soul of LLVM in English today.
A particularly good example came up recently: the meme-phrase “few understand this.”
As far as I’ve been able to determine, it started as an unironic and self-important tag phrase attached to messianic proclamations in MRA (Men’s Rights Activist) forums. It then migrated to the crypto world, and people like the Winklevoss twins started delivering ironic, tongue-in-cheek aphorisms with the tag attached.
Then it turned into a full-on ironic meme, and people started using it both for their own aphorisms, and as an in-joke reply to tweets.
Pretty soon, it had been reduced to a single word: few.
Within the subcultures literate in this short-lived usage, it operated as a shorthand that communicated volumes, inside fast-moving discourses that looked like parallel processing by thousands of tightly coupled human brains.
A great deal of communication on Twitter is in the form of this sort of short-lived, cryptic-seeming shorthand. It’s not quite slang, nor is it a set of in-group shibboleths. And it is definitely not enduring enough to be called a dialect. It is something like an efficient representation of an idea-fragment that works to qualify and modify other ideas in a live, real-time shared headspace — for a while.
The phrase “few understand this” is something like a bit of intermediate representation code we were using to program each other’s brains for a while. It’s low-level code. To even talk about it here, I have to explain it, because it comes from a level of language that rarely penetrates up to the newsletter level.
LLVM is this sort of thing. Words used as low-level but still brain-independent carriers of highly fragmentary meaning between densely interacting brains. It is language operating at a sub-conversational level really, barely above the level of synapses firing. Language that rarely bubbles up to higher levels.
This is where the action is, and has been, for nearly five years. It’s the only level at which words haven’t succumbed to Henry James’ “limpness.”
People don’t really talk on Twitter. It’s not a conversational medium but a sub-conversational medium, where you can have a dozen parallel 1:1 exchanges in the replies to a single tweet. A tree of collective processing unfolds. And the order of things said does not quite matter. Sometimes tweets form a sequence, other times they form a set of parallel texts, and still other times, they constitute a grab-bag that can be consumed or produced in arbitrary order.
We don’t talk like this in normal conversational life. This is closer to group-mind telepathic connection than it is to traditional oral or textual culture.
And it’s not just Twitter. It’s Discord. It’s Slack. It’s every messenger. And it’s not really text by itself, but text in a multimedia stream of referents that can include images, videos, and GIFs. It’s all part of the LLVM level of human language.
In some ways this is not even a metaphor. A Discord I’m part of has a bunch of bots you interact with via commands that are mixed in with the people-chatter stream. Human LLVM and bot LLVM are starting to interoperate.
Increasingly, the stack level at which you primarily use a language now matters more than the specific natural language you use. The fact that I do an increasing amount of my thinking on Twitter rather than my blog is more salient than the fact that I do so in English rather than Hindi.
In some ways, such horizontal distinctions have always been more important, just as in computing: if you’re a systems programmer working at C and lower levels, the differences between, say, Python and Ruby don’t really matter. It’s just that in computing these distinctions are unusually stable, and things rarely shift around.
But when they do, it’s probably a sign that the world is changing.
***
So, to explain my answer: the future of words, at least in the medium term, lies at a very low level in the abstraction hierarchy spanned by any natural language. You can still use words at any level, but the most powerful level is the lowest one.
And then there’s the content. Vitalist memeing. By which I mean words used at a low level, to trace the contours of flows of collective consciousness energy. The way iron filings line up along magnetic field lines.
But why?
There are some practical reasons of course. The medium is the message, and there are simply more words flowing through messaging and Twitter-like channels. Then there is the intense competition for attention, which means we switch attention more often, and are more likely to take in just a few words rather than walls of text. Then there is the increasingly sophisticated science of hacking attention with just a few words — clickbait. We’ve talked about all this for decades at this point.
I think, however, that these are all peripheral matters. The big reason the LLVM level is increasing in importance is that higher levels have kinda collapsed from progressive rot, abuse, and disconnection from reality.
The production and consumption of clever words at a high level of abstraction was a big part of cultural life in the run-up to the Great Weirding. As with everything else that was a big part of The Way Things Were, this kind of word-culture was already in a slow, worrying decline by 2015. The decline accelerated through the Great Weirding, and then fell off a cliff with the pandemic.
High-level words used to matter a great deal. And most of them were at the PowerPoint level of abstraction. The most important words in the world were the ones spoken with premium-mediocre whiggish gravitas at TED talks, accompanied by striking (and carefully crafted for impact) images. The “package” included affect (authenticity! power poses!) and context ($6000 TED tickets! Delivering insights unto the movers and shakers at Davos! Priestly black turtlenecks!).
Words at this level operate in chunks of several thousand words — typical speeches, New Yorker feature articles, New York Times op-eds. Until 2015, they helped create and preserve the normalcy fields of our world. They programmed people by the millions, even billions.
Big, difficult problems were Taken Care Of so long as people were delivering jazzy TED talks about them, complete with the dopamine hit of an Essential Insight that reassured you that the right people were on top of things. All you had to do was spread the Ideas Worth Spreading.
So long as David Brooks or Paul Krugman wrote a thousand words on the impossible quagmires of the day, you and I didn’t need to worry about them.
Then at some point they stopped working. They went limp. Only words at the LLVM level retained the power to do anything to reality. But it took us a while to recognize it.
Then of course, all words at those levels of abstraction suffered a rude and rapid devaluation through the Great Weirding. And their value fell off a cliff through the pandemic. I doubt anyone is paying $6000 to listen to a TED talk on Zoom. If they are, I have a bridge NFT to sell them. Nobody takes their cues about how the world works from the New York Times op-ed section anymore.
As with any tall stack of abstraction layers, when the thing began toppling, only the lowest levels were left functioning and intact.
We have been reduced to Low-Level Vitalist Memeing largely because high-level word layers have crashed.
It is a response allied in spirit to Hemingway’s iceberg writing.
When faced with the limpness of words, use as few as possible, in as alive a way as possible, and make them count.
***
There is a whole pragmatic question attached here, and I could deliver unto you a TED talk on the intricate intertwined fates of old and new media, big platforms, advertising and so on. But that’s boring. You probably know some version of the story that’s not too wrong. It’s the sausage factory stuff that’s of interest to writers but nobody else.
And it’s also not particularly interesting to dwell on the rearguard paradoxes of Substack, the rats leaving sinking old media ships with whatever klout they can carry, the old-school bloggers looking on with a mix of smugness and resentment, and the young kids going off to build their own NFT-funded static site empires.
These things are important, but there’s also a level of analysis at which such things become boring. Yes, the medium is the message, and yes, a lot rides on jumping on the right medium at the right time in its cycle of growth and decay and so forth. I did that myself when WordPress was as new as NFTs are today.
But the most interesting thing about words remains what you choose to say with them, when, and why.
What interests me more is where we go from here, in terms of the fundamentals of worlds built with words.
What needs to be said? How should it be said? Who should say it?
What’s worth saying, in the larger new scheme of cultural things worth doing at all?
After clearing away the rubble of TED talks and NYT op-eds, and having spent a few years learning to work with LLVM on Twitter and elsewhere, how do we rebuild more ambitious levels of language use? How do we recover a “full stack” use of natural language?
We can’t hide out on Twitter forever, after all. Not everything can be done at the LLVM level.
Are new literary forms going to be invented? What about new styles of essay? How’s network realism going to evolve? How is surveillance fiction going to evolve?
Where’s our Decameron? Where’s our Canterbury Tales? Where are Boccaccio and Chaucer lurking? Where is a Petrarch digging around for ancient Greek and Roman texts?
What’s our equivalent of Arthurian legends that we must turn away from?
I’m not vain enough to imagine that I necessarily have a role to play here. Perhaps I’m too much part of the old, dying world of words to have a role in the brave new one. I am sure (hell, I know) that some view me and my generation of bloggers as the dinosaurs who must be pushed aside so that a better world, built from better words, by better, bolder people, might emerge. And perhaps they’re not wrong.
Or perhaps there’s still something for me to do. I don’t know.
Either way, I’m genuinely curious to see how it all turns out.