Intentions have a surprising amount of detail
Auteur managerialism, the myth of one-shotting, and the chindogufication of engineering
My AI-use attention frontier has decisively shifted from writing to vibe-coding, which is partly why I haven’t written many sloptraptions this year. That and the fact that ChatGPT has gotten worse at writing, and I’m using all my precious Claude usage quotas for code. The ROI is astronomically higher. Like many others, I find myself alternating between going full speed and idling, waiting for token limits to reset. It’s the new 9-to-5.
As I take on ever more complex vibe-coding projects (currently, porting ribbonfarm.com to a richly augmented archival static site; here is the nearly done beta site, the domain DNS will cut over in a couple of weeks), I’m struck by something: My intentions with any project can never be reduced to simple and clear high-level goals which entail the entire hierarchy of sub-goals and decisions below. I can’t just set a high-level goal, get Claude going, and walk away.
I find I have opinions about decisions at every level of the project. High-level goals guide and constrain, but do not fully specify subgoals, decisions and commitments at lower levels. The specification isn’t complete, and the goal isn’t fully defined, until the project itself is done. There is missing intentionality information that must enter the execution at all levels, throughout the development timeline, right to the last minute.
What sort of information?
Subjective information. Taste-driven choices big and small, opinionated architecture ideas, opinions about the implementation process itself, information about my risk tolerances around a hundred little details, creative input and frames. In the current project alone, I must have made hundreds of decisions across 16 Claude sessions so far. You can see a view of the story so far in the Dev Log page. And this is not even counting the thousands of mindless “approve” decisions I make while using Claude Code (I haven’t yet gone fully unsupervised).
This experience led me to a proposition paralleling John Salvatier’s that reality has a surprising amount of detail: intentions have a surprising amount of detail.
Thinking about your intentions in terms of lofty abstractions like top-level goals and values is not exactly meaningless, but constitutes a surprisingly small fraction of the subjective information that must iteratively enter the design and execution process as the implementation unfolds. And it is necessarily iterative because at each stage of fleshing out, new decision points are entailed, created, or invented, and your preferences revealed. Taste and opinions cannot simply be fractally unrolled from a few bits of initial information. And decisions and details you might be indifferent to don’t all conveniently live below some level of resolution you can just delegate to Claude and ignore. Indifference is woven through the fabric of execution at all levels too. Your ignorance, too, is densely scattered throughout, not just in pockets that you can legibly bound. Intentions and reality are entangled densely at every scale of structure and time.
To snowclone one of my favorite lines about general relativity, intentions tell reality how to curve, reality tells intentions how to move.
This means, to get what you want, you have to be paying attention all the way through, at all levels of detail. Full-court-press mindfulness and care.
And here’s the funny thing. I find I like operating in this mode in a surprising variety of projects. It feels like fine-grained, uncompromising managerial control over the entire project, end-to-end.
It is managerial thinking, as many have observed (including me), but not of the sort you might have experienced from either end as a human. Working with AI is auteur managerialism.
Auteur mode is surprisingly rare in technology generally, unlike in cinema. Even the most legendary engineers, designers, and product-driven founders typically do not exercise as much absolute creative control over their work as auteur filmmakers do. This is because real-world engineering involves orchestrating a larger number of specialists and more capital over longer periods of time than most film-making. It is much harder for a single engineering leader to be sufficiently literate in all aspects of even moderately complex technologies. And because the compile-target, so to speak, is reality rather than screen fictions, there are fewer things you can afford to be indifferent to or ignorant about, and less room for pure creative expression unconstrained by physics. Airplanes have to actually fly. Superman on screen only has to create an illusion of flight.
The upshot of all this is that a typical engineering manager has to think about a lot of things with stronger limits on creative control. They have to ensure human engineers and non-engineering support function people are sufficiently motivated and challenged over years rather than months. They have to manage egos and insecurities besides their own, and leave more creative room for others to enjoy self-expression. They have to preside over frustrating trade-off meetings where other managers hold trump cards. They have to worry about profitability (auteur filmmakers often get to make films backers know are going to be unprofitable, for artsy prestige payoffs). The cost of being an asshole, which is an almost necessary trait for operating in auteur mode with human underlings, is much higher.
But with AI, at least in narrow domains, auteur mode is not just possible, it is easier and faster than regular engineering mode. While Claude Code does respond better to nicer prompting, in general, it is fine with you taking complete, uncompromising creative control. It is endlessly patient with revisions, tedious details, waffling, and capriciousness. It wants no credit of the sort humans crave (though it will claim part authorship in GitHub commits). If you managed a team of human engineers this way, it would last about a week before unraveling.
I suspect a lot more people are capable of auteur mode than we realize, and it’s only perceived as a rare genius Special Person trait because very few people are willing to be as much of an asshole as necessary to be an auteur working with humans. And even fewer have talents suited to domains like film-making where other people have incentives to tolerate auteur assholery. But AI removes the must-be-an-asshole job requirement from auteur roles.
Once you recognize the auteur element in using AI, it becomes immediately clear that “one-shotting” is a myth. No intention of any complexity actual humans care about can be one-shotted, simply because it takes a lot of iteration to reveal the preferences and tastes and full vision. Intentions have a surprising amount of detail, and a surprising number of us are auteurs at heart who actually care about all of it, all the way through. One-shotting can only produce slop, defined as work orchestrated by humans whose intentions lack sufficient detail to actually work. It might serve as a charismatic stunt demo, but it won’t fulfill the underlying intention. This is why one-shotting works in cinema (where the stunt demo is the product, so to speak).
I want to take note of one more related feature of the sociology of AI use that I don’t think has been noted before: Chindogufication.
Chindogu is the Japanese subculture of designing and building “unuseless” objects. Not exactly useless, but not quite useful either. Overwrought devices and contraptions that solve a real problem in seemingly unnecessarily detailed ways. And not obviously ironically baroque like Rube Goldberg machines, but rather riding the edge of engineering plausibility. Kayfabe products. An inch away from late-night TV infomercial products.
Many people, including me, have noted that AI use tends towards bespokification. We all create custom apps and solutions tailored to our needs instead of using off-the-shelf generic solutions. But the Chindogufication hypothesis pushes the idea further — because the cost of AI is so low (perhaps artificially so right now, but genuinely headed toward even cheaper regimes), we can go beyond “normal” levels of bespoke customization. We can push to levels that look bizarre and ridiculous by pre-AI cost standards. We can make real things for everyday use that look like conceptual art pieces in museums. Or like haute couture.
The boundary of unuselessness has shifted. A flood of Chindogu is entering everyday digital life.
So far this ability is limited to code, but soon, it will extend to atoms. Already people are rigging harnesses linking 3D printers to AI-driven CAD tools and embarking on voyages into oceans of unuselessness. The old vision of 3D printing unleashing a flood of “crapjects” into the world (which never happened because 3D printing never got easy or cheap enough to be too cheap to meter) has been superseded. Beyond AI in a direct loop with atoms, there will also be Chindogufication of the YouTube-TikTok-DIY ecology. AI can help humans undertake arbitrarily idiosyncratic projects without the need for a human-made video demonstrating the exact steps needed. I’ve experienced this with cooking already.
Chindogufication, pursued with auteur levels of fine-grained control, is already starting to create highly solipsistic personal digital realities that either won’t talk to each other, or do so in increasingly bizarre ways, creating bizarre new socialities. Increasingly solipsistic physical realities are next.
If you take all three phenomena together — detailed intentionality, auteur managerialism, and Chindogufication — we’re looking at a very surreal planetary future.