The Dawn of Mediocre Computing
AI and crypto have achieved mediocre-human parity. Where to now?
This is the opening essay of the Mediocre Computing series.
Well, we all knew it was coming. Computers already easily overwhelm the best humans at chess and Go. Now they have done something far harder: achieved parity with David Brooks at writing.
OpenAI’s ChatGPT, released as a research beta two days ago, has done to the standard high-school essay what cameras did to photorealistic painting and pocket calculators did to basic arithmetic. It is open sign-up and free for now, but I suspect not for much longer, so go try it; and make sure to trawl social media for interesting and revealing examples being posted by people.
As an open-world, real-ish (I’ll define real-ish in a minute) domain, the correct standard for judging an AI on writing is not beating the “best” humans¹ in a stylized closed-world competition (the existence of such competitions is a mark of a certain kind of simplicity), but achieving indistinguishability from mediocre humans. And when it comes to writing, nobody does mediocre more mediocrely than David Brooks. I’m in the parity band too, but he epitomizes my thesis in Survival of the Mediocre Mediocre in a way I can only aspire to. In the grim darkness of the far future where there are only extreme weather reports, civilization will be dominated by Brooks-like humans and Brooks-equivalent computers living together in an awkward symbiosis. And that future starts today. We are witnessing the dawn of mediocre computing.
Don’t let my succumbing to the temptation to dunk on David Brooks distract you from the significance of what just happened though. We’ve just seen AI cross a very significant milestone, and for once, I don’t think the specific goalposts in question can or should be moved.
Declarations of AIs passing the Turing test are a bit like declarations that Voyager 1 has exited the solar system, but I think this event is genuinely significant.
The original form of the Turing test² is worth recalling:
His high-pitched voice already stood out above the general murmur of well-behaved junior executives grooming themselves for promotion within the Bell corporation. Then he was suddenly heard to say: "No, I'm not interested in developing a powerful brain. All I'm after is just a mediocre brain, something like the President of the American Telephone and Telegraph Company."
Though Turing’s formulation in this famous anecdotal account was arguably intended to troll AT&T management, I’m going to argue — seriously — that consistently mediocre performance is in fact the correct standard for evaluating AIs in realish domains like writing. I have a whole series about mediocrity if you’re interested, but this essay kicks off an independent series that is specifically going to be about mediocrity as an aspirational design principle for computing systems. If it’s good enough for me, it’s good enough for my computers.
Let me define realish now:
A realish domain is one that is sufficiently open, structured, gamified and resourced that mediocre performance against artificially constructed but illegible criteria is sufficient for real-world survival, with better-than-wilderness odds.
Most of us spend most of our time in realish domains. The urban built environment, workplaces, shopping, and modern systems of roads are all examples of realish domains. But I want to focus on two big and important ones in particular: language and money. Vast numbers of mediocre humans make good livings producing words and/or moving money around. These activities are also the home domains of the two frontiers of computing today, AI and crypto. The Second and First Foundations of the mediocre future of computing.
Via seemingly unrelated computational pathways, these two realish domains have succumbed to computerized automation. Incompletely, imperfectly, and unreliably, to be sure, but they definitely have succumbed. And in ways that seem conceptually roughly right rather than not even wrong. Large language models (LLMs) are the right way for software to eat language. Blockchains are the right way for software to eat money. And the two together are the right way to eat everything from contracts to code.
The automation works well enough that we can now — as of December 2022 — delegate non-trivial work to computers in these domains. Work that was previously done by “President of AT&T” level mediocre human intelligences, such as David Brooks (writing bad takes on issues du jour) and average central bankers (mismanaging economies), can now be at least partially done by algorithms.
Though the immediate provocation for today’s newsletter comes from the AI corner, there is a reason I want to talk about AI and crypto in the same breath. And it’s not just that both are algorithmically mediated computing domains that have crossed interesting technological thresholds in 2022.
Nor am I particularly interested in socio-political theses like Thiel’s “AI is communist, crypto is libertarian,” or application-level convergence, as in using both technologies in the same product (which is definitely a good thing to try if you’re an entrepreneur).
I strongly suspect a much deeper challenge has just presented itself to humanity. It increasingly feels like there is a deep conceptual and technical connection between the two domains that calls for careful research. It feels like AI and crypto are mathematical evil twins of sorts; that each is somehow deeply incomplete without the other. The mild culture-warring between the two tribes is in fact a symptom of deeper kinships.
The hints are subtle and all over the place. I’ll take an inventory in a future post, but here’s one as a sample: AIs can be used to generate “deep fakes” while cryptographic techniques can be used to reliably authenticate things against such fakery. Flipping it around, crypto is a target-rich environment for scammers and hackers, and machine learning can be used to audit crypto code for vulnerabilities. I am convinced there is something deeper going on here. This reeks of real yin-yangery that extends to the roots of computing somehow. It’s not just me hallucinating patterns where there are none.
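To make the authentication half of that pairing concrete, here is a minimal sketch in Python, using only the standard library’s hashlib and hmac modules. The key, the function names, and the “frame bytes” are all mine, invented purely for illustration; real provenance schemes would use public-key signatures rather than a shared secret, but the tamper-evidence idea is the same.

```python
import hashlib
import hmac

# Hypothetical shared secret, for illustration only; real systems would use
# an asymmetric keypair so verifiers never hold the signing secret.
SECRET_KEY = b"publisher-signing-key"

def sign_content(content: bytes) -> str:
    """Produce a keyed digest the publisher posts alongside the content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, claimed_digest: str) -> bool:
    """Check that the content matches the digest, i.e. has not been altered."""
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claimed_digest)

original = b"authentic video frame bytes"
tag = sign_content(original)

print(verify_content(original, tag))                  # True: untouched
print(verify_content(b"deepfaked frame bytes", tag))  # False: tampered
```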
Unifying AI and crypto at a foundational level smells like a problem on par with unifying relativity and quantum mechanics in physics.
Besides being conceptually very interesting — such a unification would get us past anthropocentric anchor notions like “intelligence” and “economics” — I think there would be a huge practical consequence: we would get mediocre computing. Language and money are sufficiently pervasive eigen-modes in the realish world that they probably span all of mediocre computing in some way.
I’m convinced getting to mediocre computing would be vastly more significant than what we have now: merely excellent computing.
Let’s define both quickly and roughly. I may refine these definitions later in this series:
Mediocre computing is computing that aims for parity with mediocre human performance in realish domains where notions of excellence are ill-posed.
Excellent computing is computing that aims to surpass the best-performing humans in stylized, closed-world domains where notions of excellence are well-posed.
Remember, in the weird but realish world we live in, mediocre is actually harder than excellent, and way more useful.
I’m not trying to be cute here. I sincerely believe mediocre computing in realish domains is not just harder than excellent computing in stylized domains, but constitutes a whole higher category of hardness.
There is an element of Moravec’s paradox in my reasoning here. Roughly, the paradox states that tasks that look simple, and which all humans can do, are harder for AIs than tasks that look hard, and which seem like exceptional achievements among humans.
But I’ll treat Moravec’s paradox (which dates to the 80s and is not informed by recent developments) as a point of departure rather than a lighthouse concept. I think there’s a lot more going on here than even the prescient Moravec realized.
I made a lot of notes about all this in the last couple of days, way too many to cover in a single essay, so this is the kickoff essay for a new series exploring mediocre computing, a TBD unified model of computing that includes deep learning based AI and blockchain-based computing as conceptually entangled subsets. I obviously don’t expect to do the unification myself, since I am not even mediocre at computer science stuff. But I plan to scan the landscape of ongoing developments and try to coerce whatever happens into this mediocre computing frame, and perhaps annoy better minds enough with my wrong answers to get them to go look for the right ones.
In this first part, I want to develop the idea of realish domains a little more, and say a bit about why I think there’s a genuine physics-grade grand unification challenge here.
Realish Domains
Let’s add some color to my notion of realish domains.
Unlike the many unrealistic game domains that AIs have conquered, like chess or Go, realish domains are not entirely closed off from reality, merely somewhat insulated from it by evolved human design. But they are not fully open either. Realish is reality rendered a bit user-friendly. Natural reality with some improvements.
Realish domains are not defined by stylized rules with a purely metaphoric connection to reality. Chess and Go as metaphors for war are stylized domains dominated by excellent computing and exceptional humans. But actual war is a realish domain where we are only just starting to get mediocre computing (e.g. realistic video games, drones that are piloted in video-gamish ways) to work, and human performance is grimdark tragedy at best, with no real winners. There are no Elo ratings for generals, only competing biographies in a narrative marketplace, and arguments about whether they were geniuses or merely lucky. There are no dispositive resolutions. More to the point, modern wars are not sporting competitions among generals to establish ranking orders (they used to be that way during some historical periods).
In stylized domains like chess and Go, humans have to deliver exceptional performance (i.e. good enough to make money at them by winning championships) to ensure mediocre survival in the real world (enough prize money to live on).
Realish domains are far more demanding. They are typically defined by what I have called human-complete problems like “earning a living.” But the good news is, only mediocre performance is expected for the only prize on offer: survival. Living to fight another day; the option to continue the game rather than win it.
But though they are more demanding than stylized excellence domains, realish domains are not as demanding as true wildernesses, which AIs and modern humans alike are equally bad at. A self-driving car could no more survive in the wilderness than a typical modern human. The typical modern human would likely fail to find food and water, construct adequate shelter, or make fire. Even a typical prehistoric human would not do well enough to survive as long as we do in self-domesticated captivity within civilization. And this is true for other species as well. Domestic cats and dogs live longer than their feral counterparts.
A “full self-driving” car would not be able to cobble together a charging station, let alone repair or maintain itself in the wild. But on the streets of a modern country with well-maintained roads, somewhat reliable rules, signaling, and signage, and a somewhat smart environment (charging outlets, maintenance shops), it could probably “survive” as a robotaxi earning enough of a living to power and maintain itself.
Let’s unpack my definition of realish domains pointwise:
Sufficiently open: There are boundaries, but they are not entirely leakproof. A child can run across a street and will die if you hit them with a car; random new pieces with unexpected properties cannot similarly appear on a chess board.
Sufficiently structured: There are laws, but they don’t cover all the possible phenomenology, and aren’t perfectly enforced, unlike in nature. Grammar in natural language is neither as strong as the laws of physics, nor as determinative as the rules of chess.
Sufficiently gamified: There are performance standards with incentives and rewards, and there are patterns of competition, but the gamified aspect is not the whole thing.
Sufficiently resourced: This aspect is often missed. Realish domains are fundamentally friendlier and easier to survive in, by design. They offer some resources required for survival cheaply or for free.
Mediocre performance against artificially constructed but illegible criteria: Things like “career success,” “getting rich,” and “landing an op-ed gig at the New York Times” drive behavior in realish domains. There is such a thing as “success,” but it is not as crisply defined as in chess or Go, and you don’t have to perform exceptionally to live off your performance in the real world.
Sufficient for real-world survival with better-than-wilderness odds: If you can navigate realish domains mediocrely well, such as by writing NYT op-eds or Substack newsletters, you can live twice as long as your prehistoric ancestors.
I want to wrap up this series kick-off with ChatGPT’s responses to a couple of prompts inquiring about the similarities and differences between blockchains and machine learning models. I did two tries each on two prompts. The four responses ChatGPT generated are appended below, in a teletype font (I am not going to try clever Turing tests on you).
There are a couple of howlers in there (see if you can spot them), but overall, this is a surprisingly coherent and reasonable first brain-dump on our topic, close to what I would probably produce myself (and so now I don’t have to). It is not particularly insightful, and is almost the text equivalent of stock photography, but it is not useless, and there are some genuine thought-provocations in there, though the density is admittedly low.
Comparison of AI and Crypto
(guest sub-newsletter by ChatGPT)
Prompt 1: Similarities and differences between blockchains and machine learning models
Response 1
Both blockchains and machine learning models are used for the purpose of storing and processing data in a secure and efficient manner.
One similarity between blockchains and machine learning models is that both rely on complex algorithms to process data and make decisions. In the case of blockchains, this involves using cryptographic algorithms to secure transactions and prevent tampering, while in the case of machine learning models, this involves using algorithms to identify patterns and make predictions based on data inputs.
Another similarity is that both blockchains and machine learning models are designed to be decentralized and distributed, allowing multiple parties to access and share data without the need for a central authority. This helps to ensure that data is more secure and can be accessed more easily by multiple parties.
One key difference between blockchains and machine learning models is that blockchains are designed to store and process transactions, while machine learning models are designed to make predictions and identify patterns based on data inputs. This means that blockchains are typically used for financial transactions and other types of data that require a high level of security and transparency, while machine learning models are used for tasks such as image recognition, natural language processing, and predictive analytics.
Another difference is that blockchains are typically used in a public context, allowing anyone to access and participate in the network, while machine learning models are typically used in a more private context, with data access and usage being controlled by the organization that owns the model.
Overall, both blockchains and machine learning models are powerful tools for storing and processing data, but they serve different purposes and are used in different contexts.
Response 2
Blockchains and machine learning models are both technologies that are used for different purposes. A blockchain is a distributed database that is used to maintain a continuously growing list of records, called blocks. Each block contains a timestamp and a link to the previous block, making it difficult to tamper with the data stored in the blockchain. In contrast, a machine learning model is a mathematical model that is trained on a dataset to make predictions or take actions based on the data it has learned from.
One of the main differences between these two technologies is that a blockchain is a decentralized system, while a machine learning model is typically a centralized system. This means that in a blockchain, data is stored and maintained by multiple parties, each of whom has a copy of the entire blockchain. In contrast, a machine learning model is typically trained and used by a single entity, such as a company or an individual.
Another key difference between blockchains and machine learning models is the way they are used. Blockchains are often used to record transactions or other types of data in a secure and transparent way, while machine learning models are used to make predictions or take actions based on the data they have learned from. For example, a blockchain might be used to record financial transactions, while a machine learning model might be used to predict the likelihood of a customer making a purchase based on their past behavior.
In summary, while both technologies have their own unique characteristics and uses, the main differences between blockchains and machine learning models are their decentralized vs centralized nature, and the way they are used.
Prompt 2: What are some of the similarities in the mathematics used in blockchains and machine learning?
Response 1
Some of the similarities in the mathematics used in blockchains and machine learning include:
Both fields rely heavily on probability and statistics to make predictions and identify patterns in data.
Both fields use algorithms to process data and make decisions, with blockchains using cryptographic algorithms to secure transactions and prevent tampering, and machine learning models using algorithms to identify patterns and make predictions based on data inputs.
Both fields rely on complex mathematical concepts such as linear algebra, calculus, and matrix operations to process data and make decisions.
Both fields use optimization techniques to find the best possible solution to a given problem, with blockchains using optimization algorithms to find the optimal distribution of rewards and incentives in a network, and machine learning models using optimization algorithms to find the optimal model parameters for a given task.
Overall, there are many similarities in the mathematics used in blockchains and machine learning, as both fields rely on complex algorithms and mathematical concepts to process data and make decisions.
Response 2
Some similarities in the mathematics used in blockchains and machine learning include the use of algorithms and data structures, probability theory, and linear algebra.
Algorithms and data structures are used in both blockchains and machine learning to process and organize data efficiently. In the case of blockchains, this involves using algorithms such as hashing and consensus mechanisms to secure transactions and prevent tampering, while in the case of machine learning, this involves using algorithms such as gradient descent and decision trees to identify patterns and make predictions based on data inputs.
Probability theory is used in both blockchains and machine learning to model uncertainty and make decisions based on data. In the case of blockchains, this involves using probability to determine the likelihood of certain events occurring, such as the likelihood of a transaction being accepted by the network or the likelihood of a network attack. In the case of machine learning, this involves using probability to model the uncertainty inherent in data and make predictions based on that uncertainty.
Linear algebra is also used in both blockchains and machine learning to manipulate and transform data. In the case of blockchains, this involves using linear algebra to encrypt and decrypt data, while in the case of machine learning, this involves using linear algebra to represent and manipulate data in a mathematical form that can be used by algorithms to make predictions.
Overall, both blockchains and machine learning rely on complex mathematical algorithms and concepts to process and make decisions based on data.
We’ll pick up from here in the next part, but you have to admit: this is seriously good and actually useful first-dump brainstorming input.
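One recurring observation in those responses is easy to make concrete: the blockchain side leans on hashing while the machine learning side leans on gradient descent. Here is a toy sketch of both, mine rather than ChatGPT’s, in plain standard-library Python with made-up records and data points, just to show how little machinery each idea needs at its core.

```python
import hashlib

# Blockchain side: records secured by chaining hashes, so altering any earlier
# record invalidates every hash that comes after it.
def build_chain(records):
    chain, prev_hash = [], "0" * 64
    for record in records:
        block_hash = hashlib.sha256((prev_hash + record).encode()).hexdigest()
        chain.append({"record": record, "prev": prev_hash, "hash": block_hash})
        prev_hash = block_hash
    return chain

# Machine learning side: a single parameter nudged downhill by gradient descent,
# here fitting y = w * x to three made-up data points.
def fit_slope(points, steps=200, lr=0.01):
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * x * (w * x - y) for x, y in points) / len(points)
        w -= lr * grad
    return w

chain = build_chain(["alice pays bob 5", "bob pays carol 2"])
print(chain[-1]["hash"][:16])                          # changes if any record is edited
print(round(fit_slope([(1, 2), (2, 4), (3, 6)]), 3))   # converges to roughly 2.0
```

Tamper with the first record and every downstream hash changes; nudge the data points and the fitted slope shifts only slightly. Exact on one side, approximate on the other.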
I think I could borderline get away with just posting this kind of text as my own writing. You’d sense something was off, but you’d probably accept it. If I did just a little bit of pruning, correction, and touch-up work, you probably wouldn’t be able to tell at all. Especially if I went to the extra length of training GPT specifically on my style. It’s not yet very good at individual style impressions/transfers within modern English, but it’s obviously going to get there.
My writing is already in the model by the way: it knows who I am, and about some of my major ideas, like the Gervais Principle, though it mysteriously attributes all my good ideas to other people.
1. To state the obvious, we don’t even know how to define and run “best writing” contests. The best we’ve been able to do is Nobel literature prize committees making decidedly subjective decisions. I don’t think this is an accident. Writing is too “real” to admit the kind of satisfying stylized competitive structure that chess and Go do. The best “language games,” like Scrabble and cryptic crosswords, are somewhere on the boundary between stylized closed-world competition and realish domain illegible performance.
2. This episode seems to have occurred in 1943, at the Bell Labs Cafeteria in New York. Quoted in A. Hodges, Alan Turing: The Enigma of Intelligence (London, 1983), p. 251.