In a way we are discovering what software architects have known for quite some time (aptly summarised by Simon Wardley[1]): architecture - the function and structure of a system - is not (only) diagrams and specs. It is embedded in the code we write, aligned with the things we value, and ultimately expresses intentions that have been "shaped" by reality.
I also think there might be an interesting extension of the ETTO (efficiency-thoroughness trade-off) frontier at play here for software engineering: on one side, the pressure to engineer software efficiently at scale (industrialised), because AI makes it so easy to produce lines of code; on the other, new levels of thoroughness unlocked by being able to craft software artistically, so that it fully expresses our intentions.
I appreciate the phenomenon you're observing, but I think you're making a category error when you say "intentions" have a surprising amount of detail. Phrasing it that way makes it sound like the entire project is governed by a single intention, all of which came into being simultaneously, existing in some Platonic realm, and the manager then unrolled that Form piece-by-piece.
This is not so; a "vision" exists at the start, but that vision doesn't include the details. I assert this confidently because I've read literally hundreds of explanations by authors of their vision for a story, and of how they gradually refined that vision - not just by working their way down the tree and filling in the gaps, but by writing something, bumping into problems, then backing up and rewriting to solve them. And this is also how every successful engineering project proceeds.
There's a large body of philosophical and symbolic AI research which has tried to figure out what the ontological categories of "intends" and "Y" are in the proposition "intends(Agent, Y)". Saying "intentions are detailed" matches the practice of the 1960s and 1970s, when people built AIs with sense-plan-act pipelines, in which the AI senses the environment, then constructs a complete plan (which is the "intention"), then executes the plan. That didn't work outside of toy domains. An agent must begin working before the plan is fully developed. No actual intention that an agent works on should be detailed when it's first added to the goal stack.
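The contrast is easier to see in code. Here is a minimal sketch of the two control styles - a sense-plan-act pipeline that fully details the plan before acting, versus a goal stack that starts from one vague intention and refines it only when work reaches it. All names here ("build shed", the refinement table) are illustrative, not drawn from any real planner.

```python
def sense_plan_act(world):
    """1960s-style pipeline: construct the complete, detailed plan
    up front, then execute it step by step."""
    state = dict(world)                  # sense the environment
    plan = ["dig", "pour", "frame"]      # plan: every detail fixed before acting
    for step in plan:                    # act
        state[step] = "done"
    return state

def goal_stack(world):
    """Interleaved style: the agent begins with one abstract goal and
    expands it into subgoals only when it actually reaches them."""
    state = dict(world)
    stack = ["build shed"]               # one vague intention, no details yet
    refinements = {"build shed": ["dig", "pour", "frame"]}
    while stack:
        goal = stack.pop()
        if goal in refinements:
            # refine lazily: details appear only as work proceeds
            stack.extend(reversed(refinements[goal]))
        else:
            state[goal] = "done"         # primitive action: execute immediately
    return state
```

Both arrive at the same finished state, but only the first requires the "intention" to be detailed before any work starts - which is exactly what fails outside toy domains, where executing "dig" can invalidate the rest of a pre-built plan.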
It may seem like I'm being fussy about semantics, but you really do need to be fussy about semantics here. Saying "visions have a large amount of detail" instead of "realizing a vision will require many subsidiary decisions and new subgoals along the way" can mislead, and has misled, urban planners into planning out cities in detail before starting construction (see Le Corbusier and Brasilia), and managers into constructing detailed battle plans before making contact with the enemy. I know. I've worked for some managers like that.
The claim that auteurs must be assholes is a premise of your subsequent claim that many people can work in auteur mode, so it needs support. It isn't obvious except to people for whom "auteur" already connotes being an asshole, and in that case the argument is circular. Many people like working for Spielberg and Miyazaki.
Am I chindoguing correctly?
https://planetofthepaul.github.io/StoryTrainer/substackcrush/
Indeed. That’s an unuseless app.
Genuinely funny. Good chindogu.
[1] Wardley, S. (2025, October 23). Specifications and architectural diagrams were rarely any more than wishes / prompts in the software world [LinkedIn post]. LinkedIn. https://www.linkedin.com/posts/simonwardley_return-of-the-spec-activity-7387057018718662656-HY4M