We’re back with a whole new name and attitude. Billing has been unpaused for paid subscribers. Future newsletters will be under the Contraptions name.
Anybody can slip on a banana peel, but what makes it funny is when the person slipping happens to be a pompous tycoon strutting about with a self-important air. I learned this in a comedy writing class.
What’s funny is not the mechanical fact of a person slipping on a banana peel, but the undignified laying-low of someone with a strong sense of their own dignity. The funny part is the violation of an implicit egotistical narrative.
I couldn’t get ChatGPT to draw me a whole coherent comic strip, but imagine the image above as panel 3 in a 4-panel strip:
Panel 1: Pompous tycoon in top hat giving some sort of self-congratulatory and preachy “how to succeed in life” speech to a bunch of bored kids.
Panel 2: Tycoon strutting away, pleased with himself, nose in air, maybe eyes closed, superior smirk.
Panel 3: Tycoon slips on banana peel and falls!
Panel 4: Tycoon is sitting disheveled, undignified and chagrined on the sidewalk, with a banana peel on his head and the battered hat on the sidewalk.
Now here is a weird question. In the sense of this little story, can a machine slip and fall on a banana peel?
If so, what sort of machine would it be?
Here is a boring, on-the-nose answer that misses the point. You can imagine a humanoid robot, call it TycoonBot, governed by an AI that has been multi-modally trained and fine-tuned on the speeches, writings, tweets, and general comportment of actual human tycoons. Our TycoonLLM, its mind fine-tuned on the smarmy pieties and hypocrisies of your favorite billionaires, and its bodily-kinesthetic intelligence fine-tuned on their strutting physical movements and upturned-nose mannerisms, is primed for a fall from dignity. One can imagine this TycoonBot slipping on a banana peel for the right reasons, in the right sort of little story. One can also imagine it being kinda-sorta funny, in a Futurama-robot sort of way, and serving some sort of political commentary/satire purpose.
But this is a bad answer, not least because there’s no good reason to build such a robot. It’s more a robotic model of a human slipping on a banana peel than of a machine qua machine doing so. It’s a story about humans being projected onto machines, not a story about machines. At least not a meaningful one.
So let me restate the question. What does it mean for a machine qua machine to slip on a banana peel?
To figure it out, we have to probe the original joke for deeper structure. What exactly makes it funny? Yes, self-important dignity laid low is the surface level, but what’s going on if you dig a bit deeper? Is there a deeper narrative violation beneath the obvious one that can be transposed to stories about machines?
One way to get at the answer is to first try to generalize the joke to other human archetypes.
For example, the Karen archetype getting her comeuppance is a feminine version of the tycoon joke. In this case, the Karen suffers from an inflated sense of her own privileged access to authority (BIRA perhaps: Basking in Reflected Authority?). When she’s laid low by some mix of social-media censure and the authorities not actually backing her up, you get a banana-peel moment.
The joke gets harder to translate when you try to port it to archetypes that have no obvious source of inflated self-importance or are even obviously weak.
For example, a young child who has only recently learned to walk slipping on a banana peel lacks the raw material. Young children are generally awkward and contraptiony beings, and fall a lot anyway, mostly suffering no damage, and mostly just looking cute rather than funny. But there is one subtle way the joke could potentially work. Young children tend to discover the power of fake-crying at about the same time they learn to walk. Often they will visibly check to see if an adult is watching and fake-cry to get attention and manipulate the adult. You can imagine a comic strip like so:
Panel 1: Child is denied cookie by adult
Panel 2: Child fake cries but adult ignores
Panel 3: While fake-crying, he slips on banana peel and now cries for real
Panel 4: Adult picks up and consoles actually crying child, while laughing
That is, the child is laid low not by his fragile walking skills, but by the sensorimotor distraction of manipulative fake-crying. I’ve seen this sort of scene actually unfold. The comic element is admittedly a bit dark, but the sight of a child trying to manipulate adults with the limited power and capacity for self-importance they possess, and being laid low by the actual difficulties of the world, is kinda funny.
Or take an elderly person. Elderly people falling is not a joke. It is a leading cause of injury and death. Normally, there’d be no way to do a banana-peel joke with an ordinary elderly person. BUT, a certain type of stereotypical elderly person is also famously curmudgeonly and likes to preach about the good old days and how young people are no-good weaklings ruining everything. You can imagine the joke working with an elderly person who does a “darn kids!” stick-shaking speech and then, while walking away in a pouting huff, slips on a banana peel (but doesn’t really hurt themselves). And again, the slip has more to do with the curmudgeonly huff than frailty. The Mandelbaums joke in Seinfeld is an elaborate example of this.
What these exercises in transposing the joke reveal is that status and dignity dynamics are cosmetic elements related to the social relations of human society. They don’t translate in any obviously useful way to machines. But they also reveal something that does translate.
What’s really going on is that the unconscious behavior embodies some sort of egocentric view of the world (which in humans manifests as self-importance, pompousness, preaching, selfish manipulation, fake crying, etc.) in a way that creates exceptional vulnerability to banana peels. This worldview then comes into conflict with a reality where the individual is less important than they think they are, and they experience a mishap that could have been avoided if they’d had a more realistic view of the world and their place in it, a view leading to better attunement to circumstances.
In the prototypical case, the tycoon may have their nose in the air and eyes semi-closed as an expression of social superiority, but that affect also contributes to them being physically less present in the environment than they should be. Perhaps they stride jauntily, a gait less suitable for a hazardous pavement littered with banana peels.
This misregistration between abstract worldview (with the body serving as an instrument of status performance) and concrete situation (with the body serving as an instrument to navigate it effectively) is, we might say, the marginal cause of the mishap. The problem is neither unavoidable misfortune, nor any sort of physical incapacity, but the distortionary effects of self-inflicted egocentric mental models. This is why in the case of the child or the elderly person, we have to work harder to script a clearly distorted situational awareness attributable to self-inflicted egocentric delusions.
Now, this gives us something to go on. This is like a space mission going wrong because you’re working with a Ptolemaic geocentric view of the solar system. That would be a space-mission banana-peel-slip moment. But we need more realistic examples.
Where else do we find such misregistration between abstract, egocentrically distorted worldviews creating vulnerabilities relative to situational realities?
Here are some examples I’ve been thinking of:
“Systems” for investing or gambling. For example, the failure of LTCM, based on the derivative-trading theories of Nobel winners, feels like an investment machine slipping on a banana peel. Yes, the human authors of the debacle were also embarrassed, but it was the egocentric conceit of mathematical models assuming a world of low correlations that actually “slipped.”
Macroeconomic or sociology structuralist models of all sorts, which often come with smug narratives like “trickle-down” or “broken window” or “creative capital” or “equitable distribution.”
In the world of literal machines, many after-market modifications of cars strike me as having banana-peel potential. For example, low-profile tires on oversized wheels might look good according to certain aesthetics, but make for a worse ride, more tire failures, and lowered traction. So accidents and blowouts attributable to these mods are banana-peel moments.
How about Juicero? Or the Humane AI pin? Or the original Segway? There’s something banana-peel about all these cases.
Arguably, many of the political contraptions James Scott discusses in Seeing Like a State are set up for banana-peel-slip failures. Authoritarian High Modernism is a systematic ethos of egocentric world-modeling/building that has predictable misregistration problems relative to its environment. Banana peels can serve as a motif for systemic illegibilities.
Where authoritarian high modernists are arguably blinded by their unconscious aesthetic commitments, similar effects can emerge from consciously cynical thinking. Populist political proposals that are simple enough for non-experts to understand without any significant study or analysis are typically “political-economy machine” models that are set up cynically to slip on the banana peels of political realities after the election.
While there is definitely some correlation/conflation with egocentricities of designers or architects in the picture, it seems to me that these are genuine examples of machines qua machines slipping on banana peels. The mishaps can be traced to features of the designs rather than the designers.
Can we systematically model what’s going on in all these examples of literal or conceptual machines, in terms of properties of the machines themselves, rather than the foibles of their designers or architects?
Enter the contraption factor. The thing about machines that slip on banana peels is that they try to appear less contraptiony than they actually are, increasing the risk of particular kinds of failures. The equivalent of an egocentric mental model is an appearance of more, and better integrated functional capacity than is actually present.
In Contraption Theory, I defined the Contraption Factor (CF) as the ratio of system complexity to design integrity.
You could say machines that slip on banana peels have a lower apparent CF than actual CF. This is achieved by understating the complexity or overstating the design integrity.
The misregistration, CF_actual - CF_apparent, is the banana-peel-slip potential, BPSP. You could define an honest contraption as having a WYSIWYG quality to it: it looks exactly as contraptiony as it is.
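The CF and BPSP arithmetic can be sketched in a few lines of code. This is just a toy illustration of the definitions; the complexity and design-integrity numbers are invented for the example, not measurements of anything real.

```python
# Toy sketch of Contraption Factor (CF) and Banana Peel Slip Potential (BPSP).
# All numbers below are illustrative inventions.

def contraption_factor(complexity: float, design_integrity: float) -> float:
    """CF = system complexity / design integrity."""
    return complexity / design_integrity

def bpsp(cf_actual: float, cf_apparent: float) -> float:
    """BPSP = CF_actual - CF_apparent: how much more contraptiony
    a machine is than it lets on."""
    return cf_actual - cf_apparent

# An honest (WYSIWYG) contraption looks exactly as contraptiony as it is:
honest = bpsp(contraption_factor(10, 2), contraption_factor(10, 2))  # 0.0

# A slick facade understates complexity (or overstates design integrity),
# lowering CF_apparent and raising slip potential:
slick = bpsp(contraption_factor(10, 2), contraption_factor(4, 2))    # 3.0
```

A dishonest contraption, on this account, is simply one where the second number comes out positive: the apparent CF has been pushed below the actual CF.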
In many ways, the great divide between engineers and product managers involves contraption honesty. Product managers often want to increase BPSP and lower contraption honesty by focusing on lowering CF_apparent.
This gives us a way to analyze things that on the surface look very non-contraptiony, like Apple products. Apple hardware is famously well-built; the build quality, in engineering-speak, is often distractingly spectacular. The apparent CF of the hardware is close to zero, unless you open it up and start analyzing the tradeoffs under the hood.
But if you poke around in the real insides, Apple software has a much higher contraption factor. The CF faced by Apple developers, for example, is often forbiddingly high. I’m told that the Apple developer ecosystem is horrendous.
You can also try and fail to signal lower CF than you have, creating a sort of campy element in a design. The early iterations of Elon Musk-branded things often have this issue. Tesla models typically have build-quality issues early on, but signal Apple-like aspirations from Day 1 (the Cybertruck currently strikes me as campy because of this, but older Tesla sedans have mostly evolved past that stage). SpaceX technologies, by contrast, tend to have more of an honest-contraption appeal, perhaps because, as industrial technologies with very harsh and tight constraints, rockets (and the Mechazillas that capture them) offer less room to fudge your contraption factor. Twitter’s fate over the last couple of years strikes me as a gigantic banana-peel slip of the technology itself, not just of Musk personally.
What about emerging technologies? These are harder to analyze since maturing technologies often have rapidly evolving contraption factors as problems get solved. But you can still tease apart some banana-peel-slip behaviors.
Take AI, for example. You don’t need to go to the trouble of designing a TycoonBot. A regular, off-the-shelf chat AI is constantly slipping on notional banana peels. The voice of these bots is typically very self-assured and confident, but the output does not inspire corresponding believability. If you point out a wildly wrong response to a prompt, ChatGPT will typically pivot with equally confident self-assurance and smooth apologies, but the first time it happens, it will feel like a banana-peel slip. (Once you get used to it, of course, you slowly learn to ignore the illusion of confidence and self-assurance.)
NFTs were a massive banana-peel-slip moment for the entire blockchain sector. Lofty theories of art economies wearing top hats slipped and fell. In this case, it is hard to tease apart the individual humans slipping and falling (failing to realize the artistic and economic value the NFTs were supposed to unlock) from the technology itself slipping and falling (the limits of what you can do simply with securely ownable pointers to things that must still live in the real world). A clearer example is algorithmic stablecoins getting depegged. Here, the things slipping and falling are weird software machines that encode and embody untenable theories of how currencies work.
Can you avoid having machines slip on banana peels? Can honest contraptioneering be consciously practiced? I don’t know, but I suspect yes and yes. I suspect it is easier to achieve contraption honesty with physical machines than with the conceptual ones underlying political or economic activities. It’s much easier to make things look better put together than they are if nobody has access to the insides to do teardowns or code reviews.
This is the first issue of the newsletter under the new Contraptions name. I’m going to be experimenting for a while, including with day-of-week timing, while I work things out. Contraption Factor will be high while this contraptioneering is in progress.
The rebrand to Contraptions has had me excited ever since you announced it, and I really enjoyed this first post!
It definitely puts into better words the things I’ve been feeling about working with generative AI and AI agents. I have started mentally translating “AI agent” to “AI contraption/gizmo/doodad”, not to dismiss or downplay what people are building, but to reset expectations of reliability.
With any new foundational technology you probably need to go through this contraption phase, as people try to put a veneer of low CF_apparent on it before the reliable implementation patterns have been discovered and formalized. You saw this with Smalltalk > C++/Java in the early desktop days, then again with Ruby/JavaScript/Python > Rust/Go/TypeScript in Web 2.0-style web apps.
Most recently I’ve actually been using Postel’s Law, which seems like a contraptiony principle from the early days of networking, as a talking point for how to think about working and building with Gen AI and LLMs.
It’s also fun to think about how a lot of the jokes around contraption makers are that their things have a high CF but others don’t realize it, as in Honey, I Shrunk the Kids or a lot of the contraptions that Rick Sanchez builds…
Other candidate examples:
- Waymos honking at each other in a parking lot (at 4am) https://www.usatoday.com/story/tech/2024/08/15/waymo-driverless-cars-honking-parking-lot-video/74810195007/
- Bots' bidding war on an Amazon book: https://www.wired.com/2011/04/amazon-flies-24-million/
- Amazon Alexa ordering dollhouses after overhearing a news segment https://www.theverge.com/2017/1/7/14200210/amazon-alexa-tech-news-anchor-order-dollhouse