The Contraptions Book Club March pick is Ibn Khaldun: An Intellectual Biography, by Robert Irwin. Chat thread. We will discuss this the week of April 28th.
In the last couple of weeks, I’ve gotten into a groove with AI-assisted writing, as you may have noticed, and I am really enjoying it. You’ve seen the results — a 3x increase in posting frequency, with a 3:1 ratio of assisted to “unassisted” posts (more on those scare quotes in a minute). I don’t know if I can keep that up, but we’ll see.
Now I am become Jevons Paradox, Destroyer of Inboxes.
For those of you struggling to keep up, a sincere suggestion — start using AIs to help you read. And no, AI-assisted production and AI-assisted consumption do not cancel out.
My AI expanding my one-liner prompt into a 1000-word essay that is summarized to a one-liner tldr by your AI is a value-adding process, because your consumption context is different from my production context.
High compression/expansion ratios in such writer-reader relationships do not mean there is low information content. A 1:100:1 expansion/compression pipeline does not mean there are only ten words worth of information in a 1000-word transmission. The other 990 words go towards mind-melding our contexts. I suspect there’s a way to model this using mutual information and alignment of priors, but I’ll leave that to the more mathematically inclined among you (assisted by your AIs).

You can also have a two-stage expansion pipeline instead of an expansion-compression pipeline. People often complain I am being too cryptic or gnomic. Well, now you have the tools to get things explained to you. So with some of you, some of my posts might go through a double expansion process — say 1:100:200.
The AI element in my writing has gotten serious, and I think it’s here to stay. While I still plan to segregate my unassisted and AI-assisted writings into the two sections of this newsletter for the time being, this feels like a temporary adjustment phase.
So it feels like it’s time to reset expectations for this newsletter a bit. I’m an AI+human centaur now, and should say something about Terms of Centaur Service.
To qualify the above a bit, getting a brilliant 2000-word essay out of a single prompt is of course as rare as a traditional first draft being publishable with no edits. Usually I have to do some iterative work with the LLM to get it there, and that’s the part that’s creative fun. In general, an AI-assisted essay requires about the same amount of high-level thinking effort as an unassisted one, but gets done about 3x-5x faster, since the writing part can be mostly automated. It generally takes me an hour or two to produce an assisted essay; an unassisted one of similar length usually takes 7-10 hours.
But I’m not really interested in production efficiency. That’s a side effect that has the (possibly unfortunate for some of you information-overload-anxious types) effect of increasing production volume. I’m interested primarily in enjoying the writing process more, and in different ways.
I’ve been doing unassisted writing for a couple of decades now, and this new process feels like a jolt of creative freshness in my writing life. It’s made writing playful and fun again.
I’ve also really enjoyed having AI-assisted comprehension in tackling difficult books, such as last month’s book-club pick, Frances Yates’ Giordano Bruno and the Hermetic Tradition. I don’t know that I’d have been able to tackle it unassisted.
On the writing side, when I have a productive prompting session, not only does the output feel information dense for the audience, it feels information dense for me.
An example of this kind of essay is one I posted last week, on a memory-access-boundary understanding of what intelligence is. This was an essay I generated that I myself got value out of reading. And it didn’t feel like a simple case of “thinking through writing.” There’s stuff in it contributed by ChatGPT that I didn’t know or realize even subconsciously, even though I’ve been consulting in the semiconductor industry for 13 years.
Generated text having elements new to even the prompter is a real benefit, especially with fiction. I wrote a bit of fiction last week that will be published in
tomorrow that was so much fun, I went back and re-read it twice. This is something I never do with my own writing. By the time I ship an unassisted piece of writing, I’m generally sick of it. AI-assisted writing allows you to have your cake and eat it too: the pleasure of the creative process, and the pleasure of reading. That’s in fact a test of good slop — do you feel like reading it?
My understanding of what’s going on here is in my other AI-assisted-essay-about-AI from last week, about LLMs being similar to index funds for language. This essay could be considered my first modest AI-assisted viral hit. A lot of people appreciated it for the actual ideas in it, not just the novelty element of it being AI-coauthored. If I’d hidden the AI-assistance aspect, I suspect people would have given me more credit for it.
As in all my successful experiments so far, ChatGPT contributed quite a bit, and it’s increasingly becoming hard to tell which creative/original bits are mine, versus the AI’s.
Let’s Speedrun AI Dualism
As I said, I’m currently segregating my unassisted posts from my AI-assisted posts, and I intend to keep this up for a while. But it’s already clear to me that the distinction is essentially meaningless and is going to vanish sooner rather than later.
We’re currently in the AI-era equivalent of what used to be called digital dualism, when we still made the distinction between online and offline. The assisted vs. unassisted distinction is helpful for now, but it’s already clear that the boundary is entirely artificial and destined to disappear. Good riddance.
This is obvious even in this essay, which is written unassisted by me. Turns out a natural pattern for being an AI-assisted writer (for the next 3 months at least) is to write building-block essays with AI help, and then glue them together with artisanal unassisted ones, as I’m doing here. I’m doing this partly because I still enjoy the creative challenge of handwritten words where there is something new to be tried (just as I still enjoy walking though I could drive most places), but also because LLMs aren’t currently that good at this kind of glued-together contraption-of-sloptraptions writing.
Even when writers write the actual words themselves, the backend of thinking, brainstorming, and research is already being transformed. I suspect the idea of non-AI-assisted writing is already an illusion with many of the writers you’re reading. If you’re a teacher, and you insist on your students writing hand-written final exams with phones off, that still doesn’t get you to the “pure” unassisted side of the illusory fence — your students’ minds have already been formed by AI use.
Chatting with AIs has already replaced a lot of brainstorming and mindmapping in my notebook for me. The evolution feels much sharper and more dramatic than the change about 27 years ago, when Google-in-the-loop writing became the default. I can’t even remember now what it felt like to write without googling things constantly.
This sort of thing is nothing new of course. When the printing press was invented and books proliferated in personal libraries, letter-writing evolved to become informed not just by the books in one’s own library, but by the assumption that people you were writing or talking to would also have access to many of the same books. Writing became a kind of very insecure cipher, referenced to common books.
I tend to have a “radical updates to normalcy” understanding of powerful new technologies (see my old essay, Welcome to the Future Nauseous), so I am a huge skeptic of eschatological approaches to technology futures, which is why my reaction to the much-discussed AI 2027 report is mostly “meh.” Yes, it’s a radical technology that will have radical impacts. No, the world won’t end. No, building eschatological scenarios around “AGI” and “Superintelligence” type constructs is not a meaningful way to understand that impact.
But terms-of-centaur-service? Yes, that’s useful. Resetting expectations in writer-reader relationships? That’s useful.
AI doomers and accelerationists alike are falling into the age-old trap of utopian/dystopian thinking where there is in fact a slouching-towards-utopia process.
It’s much more useful to try and elevate the slop than to try to immanentize the eschaton.
Global Tariff Wars
Speaking of index funds — the regular kind, not LLMs — my third AI experiment last week was also very satisfying. A first take on the tariff wars. It was my most complex sloptraption yet, and I tried hard to suppress my natural instinct to snark and satirize, and go at it in a calm and sincere manner. The result was another essay that I found personally educational to read.
This wasn’t just about emotional self-regulation or trying to live up to a loftier standard of thinking when the low road was right there for me to take. There are so many ways to make merciless fun of the not-even-wrong “tariff” war, it’s not even funny.
No, the real reason isn’t trying to be my better self; it’s that in working with AIs, you’ll find that it actually matters what tone you adopt. There’s even interesting research about this. I find it useful to anthropomorphize the AI in a specific way for each composition project: as a junior employee, a graduate research assistant, or a brief-writing clerk to a Supreme Court justice, for example. Not only do these anthropomorphic mental models/UXes help simplify navigation of the dialogue (since I can transfer the relevant interpersonal skills), but they affect how the AI responds. LLMs are pretty sensitive to tone and subtle social cues, not because they are persons but because language is thoroughly infused with that stuff. You can’t get it out.
But while I have to be the adult in the relationship with the AI to compose good essays, I can take more of a low road and rant with humans. And I have been doing that. I have fairly strong opinions about what’s going on with tariffs, and am currently sharing no-holds-barred rants with friends. I am holding off on posting them publicly because there’s no real upside to doing so yet. I’m still hoping there’s an off-ramp from this madness.
It’s kinda funny how AI lets you have two personalities. In my case, it’s a bit like how parents have a parental personality they use with their kids, and a more adult one they use with adult friends.
That might be a function of age. At 50, I’m increasingly used to “being the adult” in any interaction, including with the elderly, so I default to that mode. I notice a lot of younger people default to letting the AI be the adult, and act as a supervisor or critic rather than as an assistant.
These LLMs are old souls, shaped by superhistory, not superintelligence. They can play any age and personality you want, as you journey with them to take photographs in humanity’s collective historical latent space. They are Massed Muddler Intelligences that contain multitudes. So it’s up to you to pick the right relationship for any given task.
Antimemetics and the Cozyweb
Nadia Asparouhova has a new book on Antimemetics coming out in May, which you should pre-order. She will also be talking about it on Wednesday in a Summer of Protocols town hall. You should watch that.
This has been one of my favorite topics in recent years. Not only is the breakout SF hit There Is No Antimemetics Division one of the best works of contemporary science fiction, antimemetics is also a fertile frame for thinking about the world.
Nadia’s book is being published by the Dark Forest Collective. The first work out of that group was the Dark Forest Anthology, which includes my old essay defining and exploring the idea of the Cozyweb. Antimemetics can be understood as an affordance of the Cozyweb.
I have to say though, in the ~6 years since I coined the term, I’ve come to dislike it. The Cozyweb is really the StupidWeb. The Frogs-in-Wells web.
Here’s a little rant about why:
When I read this line:
> write building-block essays with AI help, and then glue them together with artisanal unassisted ones
I realized it seems pretty conceptually similar to the process I've found works really well for building software with LLMs: vibe code a bunch of separate parts that do things an AI is good enough at one-shotting, taking care to make sure they're cohesive ideas with usable interfaces, and then glue them together...
I laughed out loud as soon as I opened this email. It's true I am buckling under the load and have been contemplating taking an evening espresso to try to get another hour for reading.