Opt-in section of the Contraptions Newsletter devoted to backgrounders, notes, and experiments cooked up with AI assistance. Serving up elevated slop 1-2 times a week. Cooking time of this one: 1 hour. Recipe at end. If you only want my hand-crafted writing, you can unsubscribe from this section.
In the late 20th century, something curious began to happen with baby names in the United States. For decades, mass broadcast media—television, film, radio—had helped homogenize naming conventions. A handful of names dominated birth certificates across the country. “Jennifer,” “Michael,” “Lisa,” and “Jessica” ruled entire generations, boosted by soap operas, sitcoms, and celebrity culture. Names traveled through society the same way pop songs or slogans did: from a few powerful sources outward, radiating influence and narrowing choice.
But with the rise of the internet, that trend reversed. As culture fragmented into niches and search engines made obscure knowledge instantly accessible, naming became a site of individuation. Parents began seeking uniqueness. By 2020, only 7% of babies in the U.S. were given a top-10 name—down from nearly a third a century earlier. Identity, once homogenized by mass media, had become increasingly customized, self-aware, and aestheticized.
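That statistic is checkable against public data: the Social Security Administration publishes yearly name counts at https://www.ssa.gov/oact/babynames/. Here is a minimal sketch of the computation, assuming the SSA's standard one-file-per-year format (rows of name,sex,count, in files named like yob1920.txt); the file paths are illustrative, and note the SSA omits names with fewer than five occurrences, so the shares skew slightly high.

```python
# Minimal sketch: per-sex share of babies given a top-10 name,
# from the SSA's national baby-name files (rows of name,sex,count).
# File names like yob1920.txt follow the SSA's own convention;
# adjust paths to wherever the downloaded zip is unpacked.
import csv
from collections import defaultdict

def top10_share(path: str) -> dict[str, float]:
    """Fraction of recorded births, per sex, whose name ranks in the top 10."""
    counts = defaultdict(list)
    with open(path, newline="") as f:
        for _name, sex, count in csv.reader(f):
            counts[sex].append(int(count))
    return {
        sex: sum(sorted(c, reverse=True)[:10]) / sum(c)
        for sex, c in counts.items()
    }

for year in (1920, 2020):
    print(year, top10_share(f"yob{year}.txt"))
```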
This arc, from homogenization to diversification, is now repeating at the level of language itself, not just naming. And this time, the shaping force is not broadcast media, but large language models.
Foundation models like GPT and Claude now serve as the index funds of language. Trained on enormous corpora of human text, they do not try to innovate. Instead, they track the center of linguistic gravity: fluent, plausible, average-case language. They provide efficient, scalable access to verbal coherence, just as index funds offer broad exposure to market returns. For most users, most of the time, this is enough. LLMs automate fluency the way passive investing automates exposure. They flatten out risk and elevate reliability.
But they also suppress surprise. Like index funds, LLMs are excellent at covering known territory but incapable of charting new ground. The result is a linguistic landscape dominated by synthetic norms: smooth, predictable, uncontroversial. Writing with an LLM is increasingly like buying the market—safe, standardized, and inherently unoriginal.
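The flattening has a concrete mechanism at the level of decoding. A toy sketch (invented numbers, not any particular model's internals) of standard softmax sampling with temperature shows how the pressure toward safe output works: as temperature drops, the sampler collapses onto the single most probable token, the index-fund choice, and surprise vanishes.

```python
# Toy illustration of why generated text trends toward the average case.
# next_token_logits stands in for a real model's output; the mechanism
# (softmax over logits, scaled by temperature) is the standard one.
import math
import random

def sample(logits: dict[str, float], temperature: float) -> str:
    """Sample one token; low temperature collapses onto the modal choice."""
    scaled = {tok: v / temperature for tok, v in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / z for tok, v in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

next_token_logits = {"reliable": 2.0, "plausible": 1.5, "feral": -1.0}

for t in (1.0, 0.2):
    draws = [sample(next_token_logits, t) for _ in range(1000)]
    print(f"T={t}: 'feral' sampled {draws.count('feral')}/1000 times")
```

At T=1.0 the surprising token still appears a few percent of the time; at T=0.2 it effectively never does. Safe decoding settings are, quite literally, a bet on the benchmark.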
In this new environment, the act of writing raw, unassisted text begins to resemble picking penny stocks: risky, inefficient, and easily dismissed as naïve. Yet it remains the only place where genuine linguistic alpha, the surplus value of originality, can be found. Alpha lives in human voice, conceptual invention, emotional charge, and expressive risk. It emerges from the irreducible tensions of context, personality, and thought. And like financial alpha, it is quickly absorbed and neutralized by the systems it disrupts. What begins as a surprise becomes a template; what once felt radical becomes the new benchmark.
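For readers who want the borrowed term pinned down: in finance, alpha has a standard definition via the capital asset pricing model. It is the return left over after market exposure (beta) has explained what it can.

```latex
% CAPM: alpha is the excess return unexplained by market exposure.
% R_i = asset return, R_f = risk-free rate, R_m = market return,
% \beta_i = the asset's sensitivity to market movements.
\alpha_i = R_i - \bigl[\, R_f + \beta_i \, (R_m - R_f) \,\bigr]
```

The mapping is direct: the model supplies the bracketed term, cheap exposure to the linguistic market. Alpha is whatever a writer earns above that line, and it decays toward zero as the models absorb the style.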
As a result, the most original language is retreating into private markets. In Substacks, Signal threads, Discord servers, and private memos, new forms are being tested in semi-anonymous, high-context settings. These are the linguistic equivalents of venture capital and private equity—spaces of risk, scarcity, and concentrated attention. Just as companies now avoid going public too soon, writers may delay or even refuse public release, fearing dilution or misappropriation. Only once an idea matures might it “IPO” into the public sphere—perhaps as a viral tweet, a manifesto, or a cultural phrase. But even then, its time is limited: LLMs will soon flatten it into beta.
This recursive cycle—alpha to beta, originality to norm—is shaping a two-tiered linguistic economy. In the public sphere, language is increasingly frictionless but interchangeable. In the private sphere, language remains risky, inventive, and alive. The future of writing will depend not on mastering the average, but on learning how to stand out against it.
The baby name curve gives us a cultural preview. What once felt efficient and unified eventually came to feel bland and overexposed. As tools for discovery and self-expression became available, people began to opt out of the norm. Now, as LLMs flood the public sphere with plausible language, the same dynamic is underway. The search for linguistic alpha has already begun. It lives in the dark forests and cozywebs of the internet—not yet indexed, not yet flattened. And it belongs to those who understand the value of staying hidden—until the right moment to speak.
Recipe
1. Start with a Structural Analogy
Identify a complex system in one domain (LLMs in language) and map it to a well-understood system in another (index funds in finance). Ensure the analogy is not superficial—seek systemic parallels in behavior, incentives, and emergent dynamics.
2. Establish Historical Precedent
Introduce a real-world, data-backed precedent (baby name diversification) to show how similar pressures (homogenization vs. individuation) have played out under earlier technological regimes (broadcast media → internet).
3. Define Conceptual Currency
Coin or adopt precise terms to ground the analogy:
- Linguistic alpha = surplus expressive or conceptual value
- Beta language = average, LLM-produced fluency
- IPO of language = public release of original work
4. Frame a Two-Tier System
Articulate how the new environment bifurcates into:
- A public, LLM-saturated layer (standardized, low-variance)
- A private, experimental layer (risky, creative, context-rich)
5. Loop the Analogy Recursively
Describe how alpha becomes beta: how original language gets commodified by models, mirroring market cycles of innovation and assimilation.
6. Subtly Anchor in Existing Theory
Sparingly reference concepts like the cozyweb and the dark forest as locations of hidden, non-indexed value, without diluting the main analogy.
7. Maintain Clean, High-Energy Prose
Keep the tone neutral but alive; avoid shifts into manifesto or personal voice. Let the ideas carry the charge.
8. Terminate on a Conceptual Open Loop
Conclude not with resolution, but with a hint at future discovery. Signal that this pattern (alpha → beta → renewed alpha) is ongoing, and that the edge belongs to those who can stay ahead of the flattening.
Ha, this feels apropos as an LLM sloptraption.
LLM as post-operational language (it's trained on reports, after all).