Life After AI
A couple of weeks ago, I had a chance to visit the lovely Northumberland region of the UK. Among the sights I took in was Holy Island of Lindisfarne, the site of one of the earliest Christian outposts in Britain, from where a series of doughty saints spread Christianity in the North of Britain, starting in the 7th century. The story of Holy Island is a story of slow and steady Christianization of the region, against a backdrop of Viking raids, wars between England and Scotland, and changing relationships with local royalty. It also showcases the complexity of the process by which Christianity altered the societal fabric of Europe, modifying and being modified by existing local political and civic institutions.
Christianity neither "took over" pagan Europe, nor was it taken over by pagan Europe. For that kind of simple supervenience relationship to emerge, you need relatively monolithic and cohesive entities coming into a relationship. Instead, the Christianization of Europe was a process by which the two got intertwingled gradually, creating a new political-economic-cultural matrix for society. But though the process was syncretic, the result was undoubtedly a radical break from the past. Though you cannot cleanly tease apart the "Christian" and "pre-Christian" components of Europe today, there is a clear distinction between life before/after Christianity. If Christianity had not arrived in Europe, history would have evolved along a very different path.
This story is important to appreciate today, because the world is being transformed by AI in a very similar way.
_But before I lay it out for you, a quick announcement. With support from the enterprising Zach Faddis, I just started a Mastodon instance at refactorcamp.org, and we're trying to grow it in a controlled, manageable way. This invite link is good for 100 signups and will be active for 1 week, though it'll probably run out before then. If you sign up, note that the confirmation email will come from Zach, not me, and may end up in spam.
Mastodon is a federated open-source Twitter-like messaging platform. If you're interested in hanging out with the ribbonfarm/refactorcamp/breaking smart crowd, you may want to sign up. If you already have a Mastodon profile on some other instance, you can also follow me and others there. I've been looking for a good, low-effort way to catalyze community that doesn't require too much work from me, and isn't too much of a closed-off walled garden, and this seemed like the most interesting thing to try._
Anyhow, back to AI and religion.
Lindisfarne Holy Island Priory
AI as Religion
Analogies between AI and religion -- and I make them often -- are usually made to satirize the beliefs of some of the more cultish corners of the field, such as Singularitarianism or Simulationism. These belief systems are wrong for the same reason a statement like "Christianity took over Europe" is wrong. They are based on postulating simplistic relationships between two complicated technological tangles coming together in a larger tangle.
They are also stances that derive from absolute theological distinctions. A devout Christian would tell the story of Christianization of Europe in a very different way than an atheist like me does. To a devout Christian, it is not a story of foreign institutions intertwingling with local ones through a socio-political process, but a story of God's Word spreading through godless deserts of unsaved souls. If you are a believer, you must tell the story in a way that absolutely distinguishes the Christian and non-Christian components, between "One True God" stuff and "non-God stuff". No matter how messy the story of the spread, the essential difference between the sacred and profane remains intact in the mind of a believer. But crucially, that's the only place where the difference exists.
If you poke a bit, you'll find something very similar going on in cultish accounts of the future of AI: there is some sort of sharp dualist distinction between two types of "stuff". There is "AI stuff" (analogous to God stuff) and then there is "non-AI stuff" (humans, and non-AI tools). The specifics of the divide don't matter. To some it is the point where AIs become "recursively self-improving." To others, it is the assumption that subjective consciousness is an illusion, allowing for the possibility that we could be computer simulations. To still others, it is that magical moment when AIs go from "closed" to "open" or "specific" to "general".
Maybe it's just me, but these arguments sound like the arguments about transubstantiation that marked medieval Christianity. The underlying beliefs sound like literalist beliefs about afterlife effects of acts like baptism.
But if you assume there is no absolutely meaningful ontological distinction between "AI stuff" and "non-AI stuff", such dividing lines become mostly arbitrary.
In some cases (such as a declaration that a particular AI has passed the Turing test), they have no material consequences. In other cases, they are merely empirical phenomena to be modeled and characterized. For instance, once you take away the idea that "recursive self-improvement" is some sort of mystically significant boundary between profane and sacred, or that "sentience" begins there, it becomes just another thing to tackle with our usual profane, messy, incomplete models. At best, it is some sort of complex system criticality boundary, of the sort that separates stable nuclear reactions from explosions, or stars that go supernova from ones that don't, or network-effect businesses from non-network-effect businesses.
AI as a System-of-Systems Intertwingling
Abandoning AI/non-AI dualism has a profound consequence: you are free to tell the story of life before/after AI as an intertwingling rather than a destined conversion of the heathen by a higher spiritual force.
There is no reason to bring sharp theological distinctions into the story. It's just another big engineering story. A very interesting, challenging, and radical one, but still, not different in any fundamental way from stories like electrification or steam power.
So what is the story of life after AI, non-religious version? Specific religion stories like that of Lindisfarne help us see the pattern.
In Northumbria, Christianity was a new software technology diffusing through an existing pagan technological stack of governance. The analogy is very direct in some ways. Like software, Christianity existed primarily in the form of code: new rules for running institutions, new doctrines concerning transgressions and forgiveness. New patterns of rights. New expectations of rulers and ruled. There were differences small and big between Christianity and the various things it was grafted onto, but both were in the same category of things.
The story of Europe between the fall of the Roman Empire and the Renaissance becomes a double-helix of a cultural Dark Age, marked by radical simplification of the institutional structure of Europe (helped along by the Great Hard Drive Reformatting that was the Black Death), and the integration of Christian codes via very specific and local programming efforts like the one that took place at Lindisfarne.
The parallel is remarkably close. We are going through a similar radical simplification of the physical, technological structure of the world (this is ordinary, non-AI "software eating the world" which is really the decline and fall of the industrial empire) accompanied by the integration of AI codes.
In the atheist telling of the story of Christianization, there is no need to make any sort of mystical distinction between (say) codes of law governing royal succession or individual rights, or women's inheritance rights under pagan versus Christian regimes. They were just different in an ordinary sense: different sorts of software for the same kind of hardware. And the pagan societal landscapes were not static reprogramming targets. They were already evolving in response to the forces of the collapse of the Roman empire. "Christianization" just entered as an additional path-shaping force alongside many others.
In the atheist telling of the story of AI, there is no need to make any sort of mystical distinction between (say) software systems governing factory automation, automobile control, or healthcare rights under non-AI versus AI regimes. They are just different in an ordinary sense. And the industrial technological stack is not a static reprogramming target. It is already evolving in response to the forces of software eating the world. "AI-ization" is just entering as an additional path-shaping force alongside many others.
Shorn of mysticism, the problem of "AI risk" becomes just another sort of engineering risk, like nuclear-war risk, climate-change risk, or plague risk. When you consider problems that supposedly make AI risk a special category, you find the equivalent of "going to hell" risks like Roko's basilisk that require adopting dualist-AI beliefs to take seriously, just as you have to be religious to take the risk of going to hell seriously.
When you look at the diffusion of Christianity in Europe without a believer perspective, you can't draw any sort of facile, simplistic conclusion like "Christianity took over Europe". Neither can you argue that existing political power structures entirely bowed to the power of Christianity. All you can say is that there was a transformation, and a relatively sharp distinction between before and after.
So what might the AI-ization of the technological stack look like? Let's work through an example, but first we need to talk about the importance of examples.
Interior AI Problems
In getting specific about AI, the type of problem you pick is significant. I believe if you want to really push AI technologies ahead today, you should work on what I call Interior AI problems rather than what I call Boundary AI problems (which could also be called John Henry problems).
When people discuss specific AI problems, they usually pick ones designed to find an "essential" qualitative boundary between humans and AIs, or failing that, ones that set up a straightforward species-superiority competition (which sets up a human/super-human performance boundary).
These are Boundary AI problems, designed primarily to stake out boundaries rather than solve problems. Chess, Go, logic, autonomous driving, machine design, reading X-rays, speech recognition, storytelling, humor, are all examples of this kind, though to varying degrees.
The intention of making a comparison between humans and AIs leads to the selection of a particular kind of problem: those with clear objectives (since the quality of achieving them is the basis for comparative judgments) and a relatively harmonious, even stylized, skill base required to tackle them (since such a base lends itself to rapid improvements, leading to an interesting contest).
The most dramatic Boundary AI problem, but revealingly one that never gets directly tackled, is the so-called "Artificial General Intelligence" or AGI problem: the ultimate contest, over the most sacred human trait, supposedly "general" intelligence. That trait is never quite defined beyond vague gestures at IQ (which itself, interestingly enough, was originally a bureaucratic test designed to get at machine-like human performance in industrial settings), and the adjective "general" takes on connotations of omniscience, omnipotence, and universality. This notional problem is designed to draw a boundary not between humans and real AIs that exist or are being built, but between gods-in-the-image-of-humans and speculative sci-fi AIs. I have a longer rant about that, which I'll write up someday.
The selection of prototypical AI problems is a crucial point. We are so obsessed with how we might be different from AIs that we've historically prioritized problems that illuminate the differences and establish superiority/inferiority competitive rankings relative to both our real and aspirational selves. There is great dramatic value to both religious AI types and human-essentialists in showing that an AI is better than us at chess, since that sets up a clear boundary point of value to both sides. There is not as much dramatic value in comparing how I eat an apple to how a robot plugs itself into a power outlet, even though that's actually a much more illuminating comparison to make (energy-seeking patterns).
This is not a new phenomenon. In the history of both technology and religion, you find a tension between two competing priorities that lead to two different patterns of problem selection: establishing the technology versus establishing a narrative about the technology. In proselytizing, you have to manage the tension between converting people and helping them with their daily problems. In establishing a religion in places of power, you have to manage a tension between helping the rulers govern, versus getting them to declare your religion as the state religion.
You could say Boundary AI problems are church-building problems: signaling-and-prayer-offering institutions around which the political power of a narrative can accrete. Even after accounting for Moravec's paradox (what is easy for humans tends to be hard for machines, and vice versa), we still tend to pick Boundary AI problems that focus on the theatrical comparison, such as skill at car-driving.
In technology, the conflict between AC and DC witnessed many such PR battles. More recently, VHS versus Betamax, Mac versus PC, and Android versus iOS are recognized as essentially religious conflicts, in part because they are about competing narratives about technologies rather than about the technologies themselves. To claim the "soul" of a technological narrative is to win the market for it. Souls have great brand equity.
Boundary problems are healthy features of early technological development chapters, as are John Henry human-machine arms races that create martyrs for humanism. Such essentially arbitrary religious divides help create the tribal variety that fuels competition. That doesn't mean the distinctions involved are meaningful or even based on anything real. This is especially true in AI, where the distinctions are metaphysical rather than merely superficial-but-material.
In AI, boundary problems can usually be cast as Turing tests, allowing head-to-head comparisons with humans. Often, they are not just boundary problems, but charismatic human problems -- problems that showcase humans in ways that highlight their finite identities.
Human essentialists keep trying to find a "no true human" charismatic criterion for humanness, while Strong AI evangelists complain about moving goalposts but nevertheless go after the new goalposts anyway, crossing one boundary after another. It's a game that has been valuable in pushing technological frontiers for 70 years, but it has degenerated into a sort of useless pattern of ritual combat now, like jousting at a Renaissance fair.
I like a different class of problems, which I call Interior AI problems. These are problems that get at how humans are currently embedded in their environments (which include both technological and human elements), and how they might be embedded in them in the future. The goal is to look at humans not with a view to establishing how they are special, or how they might be improved upon, but to figure out how they actually exist and survive today, and how that might change tomorrow.
Think of it this way: an Interior AI problem is a problem designed to "patch" the hole created by ripping out a human from a solution to a problem. A software+hardware patch. What matters in designing such a patch is not the essential nature of the human who was pulled out, but how the human was embedded there before being ripped out.
Take, for instance, the problem of installing a ceiling fan, a problem I solved last week, and that nobody gave me any prizes for. A banal, uncharismatic problem that only exists and makes sense in the heart of existing technology.
The Ceiling Fan Problem
Installing a ceiling fan is a typical Interior AI problem.
There is nothing essentially, charismatically human about the problem of installing a ceiling fan. It is not like being world chess champion, a famous musician, or a Formula 1 race car driver. It takes no real creativity, morality, or other supposedly ineffable human traits that are the currency of the moving-goalposts game. There is nothing particularly humans-at-their-peak or most-essential about it. I didn't even need to use my soul to solve it. Questions about it will never be part of the Voight-Kampff test. Much of what you learn by solving a ceiling fan problem is about ceilings and fans, not about humans. It is a problem about an embedding, not the agent embedded.
Most importantly, there is nothing about the ceiling fan problem that cannot already be tackled by existing AI and robotics technologies, at least in principle. It is not a challenge boundary or a goalpost-moving problem. At first pass, it would appear there are no profundities or trolley problems lurking there at all.
There is no meaningfully superhuman way to install a ceiling fan.
It's not exactly easy, but it probably wouldn't even make the news if it were solved, unlike humans being bested at chess or Go or driving. It is similar in spirit to the search-and-rescue robotics problem that is the core of the current DARPA grand challenge, but it isn't mysterious or out-of-reach based on supposedly fundamental arguments the way Go appeared to be 10 years ago. It would take a lot of engineering grunt-work to build a ceiling-fan-installation robot, but likely not a ton of breakthrough insight or creativity. It would be fuel for a forgettable PhD or two at best.
In other words, ceiling-fan installation is a very typical problem that is well within the capabilities of both mediocre humans like me, and relatively mediocre robots that could be built to tackle it.
It is special in a different way though. It is a structural evolution problem. Solving it alters the pattern of human embedding in the environment. It changes the system's structural state and how I exist within it. For me in particular, as opposed to an experienced installation technician, it is also a bottleneck problem: something I kinda had to figure out for the first time, and am not likely to do often enough in the future to turn into a systematized skill.
I installed a ceiling fan last week. It was a messy muddle-through effort, but I got it done with much cursing, and some help from my wife. The thing works and does not wobble or make a noise (the two signs of a failed fan installation).
The sheer messiness of the process is interesting to reflect on:
There was looking up of YouTube videos for tricks ("how on earth do you get this dome light off? Oh, you whack around the rim with a rubber tool handle").
There was brute-force jamming/jiggling of stiff tangles of wires into tight spaces.
There was applying of half-remembered metis from my teenage tinkering years.
There was applying basic safety rules ("turn off the mains!") drilled into all of us by civilization.
There was improvising (knife+teeth instead of wire strippers).
There was guesswork about wire colors.
There was calculated risk-taking ("hmm, I don't want to move the bed to make room for the stepladder, so I'll just stand on the bed and do a wobbly installation").
There were mistakes (loosening the wrong screw because I misread the bad instruction diagram), fumbles, and luck (I dropped the fan once, but luckily there was a bed under it!). Some were live-with-it irreversible, others required rework.
There was deductive reasoning ("oh, there's a nubby thing in the way here and I have to rotate this a bit to get it through").
There was residual mystery ("I have no idea how I got this aligned, but I'm not going to try and figure out how to do it again; I'd better finish it right this time since I might not get lucky again").
And this is just the installation. I'm not even getting into the process of shopping around and figuring out which fan to buy, and the decision-making involved in whether to buy a fan at all. What I did not use was any explicit knowledge from my electrical engineering courses long ago. This was a bit exceptional. In general, when I do handyman stuff, I find at least 1-2 minor ways my engineering education is helpful, even if there is no need for creative design in the problem solving.
Notice something here: this is a general intelligence problem. It requires all kinds of notions of intelligence, skill, learning, training, and knowledge to be deployed. It is not a very hard problem, but it is what I call a human-complete problem: if you can install a fan, chances are you can solve the general problem of living a life in the modern world as a functioning adult.
But it is not an AGI problem in the mystically omnipotent/omniscient sense used in religious AI arguments. A robot that solves the ceiling fan problem will embody general intelligence only in a situated sense. Building an AI or robot to install a ceiling fan today is a problem akin to building a monastery on Holy Island to spread Christianity locally. There are certainly vast global consequences when you consider all the ceiling-fan installation going on in the world, but it remains a specific, local challenge.
I have an argument for why it is highly unlikely that a ceiling-fan robot could evolve into Skynet, let alone into a Dyson sphere around the sun powering a paperclip-manufacturing solar system in which humans have been apathetically exterminated by accident, but this margin is too small to present it.
The ceiling-fan example captures how inhabiting the technological stack works today. Unlike a closed-domain, rule-bound game like chess or Go, or even driving a car, there are essentially no clean boundaries or rules to the problem, and more importantly, no Olympic-sport-like natural competitive/branding characteristic to it. It is not a John Henry problem. It does not mark a useful boundary between humans and AIs. In fact it muddies what boundaries exist. You have to keep expanding the scope of intermediate goals and actions until you land on a state you're willing to call "done." Or you give up. Or you hand the problem off to somebody on Craigslist. You don't make it a point of human honor to install your own ceiling fan.
The thing is, most problems of being human are ceiling fan problems. Interior AI problems rather than Boundary AI problems.
Replacing humans in Interior AI problems is a vast program of patching the holes in the technological world created by us trying to get the hell away from having to deal with "dirty, dull, or dangerous problems."
Now reimagine this particular problem in a life-after-AI world.
Two Ceiling-Fan Installation AIs
How would you AI-ize the ceiling-fan installation problem? There are two extreme approaches with a range in between.
At one extreme, you have an anthropomorphic solution. The technological stack is complex enough that it might perhaps be easier to design an anthropomorphic AI to inhabit it than to rebuild it to suit a non-anthropomorphic AI. This is essentially the Asimovian robotics idea. You'd build a robot adapted to the human UX of technology that can faithfully replicate my problem-solving process, but with less frustration and cursing, more patience, and Terminator-like dedication. And perhaps Three Laws of Robotics.
If an anthropomorphic, or even loosely biomorphic (spider-form-factor say) robot solved this problem, it wouldn't make for a particularly meaningful Turing test. It likely would not solve it the same way I did, but it would require a similar "muddling through" mix of general intelligence behaviors and knowledge to get it done. Chances are, any biomorphic robot that could solve the ceiling fan problem could also do a bunch of other domestic things like wash dishes, install other equipment, plunge a clogged toilet, change a flat tire, and so on. But it likely wouldn't be able to pass an abstract Turing test. It's a patch for a particular human-shaped hole in a particular place, not a replacement human.
It wouldn't be human, but it would inhabit the technological stack of the modern world in a roughly human way. Not in a roughly nuclear-reactorish way, or a roughly Boeing-747 way.
It wouldn't use a single paradigm like deep learning or GOFAI ("good old-fashioned AI"), or a single kind of knowledge. It would not use a single static definition of a goal state or utility function. Like a human, it would likely jiggle around the scope and goal, make compromises, and sort of land somewhere it could call done, or give up. It wouldn't pursue a single life goal with singular dedication (such laser-focused technologies tend to be very fragile) but sort of improvise as it goes along.
In other words, it would muddle through its life in a mediocre way, much as we muddle through ours.
That is a characteristic of at least human-scale general-but-situated intelligences. We survive through mediocre muddling through across many non-repeating, specialized situations, none of which we navigate particularly well. Occasionally, we encounter stretches of problems where we can achieve excellence and artistry through training on a large set of examples, but that is not essential to our survival. As I argued in a recent blog post, humans embody survival of the mediocre mediocre.
At the other extreme, you have a first-principles-rethink solution, and a lot more outcomes are possible there.
Perhaps we land on a smart apartment with a ceiling fixture docking port that can hold various fixtures. The apartment orders a "ceiling fan," which arrives via autonomous drone, unpacks itself, and clicks into place. Maybe the apartment pays the fan company (both self-owned) in a cryptocurrency earned through renting itself out to me. Done. The solution can't do anything else that to us looks similar to ceiling-fan installation, but it can handle problems that are adjacent in different, AI-ish ways. Maybe the fan company specializes in all sorts of rotor-like devices and can also handle automated helicopter repairs (which I definitely couldn't do, and don't see as a ceiling-fan-adjacent problem). Maybe the smart apartment is a general modular facilities management program designed to expect certain patterns of modular construction. Maybe it can manage anything from a human apartment to a nuclear reactor. Again, I can manage an apartment, but I couldn't manage a nuclear reactor.
Adjacencies are a function of how you scope your problems, how you, as an intelligent general agent, choose to be embedded in your environment for survival, and what battles you pick. That's what gives you your essential character as a general intelligence.
_In fact, I'd define a measure of situated general intelligence in terms of the characteristic problem adjacencies that define how the intelligence is embedded in its environment._
Think about that. It's a fairly radical definition of intelligence. It is an agency-decentering definition that locates its character primarily in the context rather than the agent.
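To make the adjacency idea a bit more concrete, here is a minimal toy sketch in Python. This is my own illustration, not something from the essay: the problem names and the two adjacency sets are invented, and the "signature" function is just one crude way to formalize "the characteristic problem adjacencies that define how an intelligence is embedded."

```python
# Toy formalization (illustrative only): an environment is a graph of problems,
# where an edge means "an agent embedded here tends to meet these two problems
# together." An agent's situated general intelligence is characterized not by a
# score, but by the adjacency subgraph it actually spans -- its embedding.

# Hypothetical adjacency sets; these are made-up examples, not data.
HUMAN_ADJACENT = {
    ("install ceiling fan", "plunge clogged toilet"),
    ("install ceiling fan", "change flat tire"),
    ("plunge clogged toilet", "wash dishes"),
}

ROTOR_COMPANY_ADJACENT = {
    ("install ceiling fan", "repair helicopter rotor"),
    ("repair helicopter rotor", "balance drone propeller"),
}

def adjacency_signature(handled, adjacency):
    """Return the subgraph of problem adjacencies an agent actually spans."""
    handled = set(handled)
    return {(a, b) for (a, b) in adjacency if a in handled and b in handled}

# A human handyman and a rotor-specialist AI both "solve" ceiling fans,
# but their signatures barely overlap -- each is general in its own way.
me = adjacency_signature(
    ["install ceiling fan", "change flat tire",
     "wash dishes", "plunge clogged toilet"],
    HUMAN_ADJACENT)

rotor_ai = adjacency_signature(
    ["install ceiling fan", "repair helicopter rotor",
     "balance drone propeller"],
    ROTOR_COMPANY_ADJACENT)

print(me)
print(rotor_ai)
```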
Such an AI would be a ceiling-fan-installation AI in only the narrowest sense possible, where all the entailments of the existence of the problem, throughout the global technological stack, are bounded as tightly as possible, and solved as narrowly as possible in the most efficient location. Such a world would be highly efficient at meeting its own needs and human needs, but within a limited and, to humans, odd range of variation. It would be an artificial general intelligence, but it would be "general" with respect to its own natural adjacencies.
This shouldn't be that surprising. Problems that are adjacent for cats are not adjacent for humans. That's a function of having differently shaped and sized bodies and modes of survival in the environment. A "cat-shaped" hole in an environment created by removing a cat is not like a human-shaped hole created by removing a human. An "AI shaped" hole in a future environment, created by removing a very mature AI (like say "driverless car infrastructure") will also have its own characteristic shape. One that makes John Henry arms-race comparisons not even wrong.
You could say: specialized technologies are all alike, but every general intelligence is general in its own way.
This wouldn't really be a solution to the ceiling fan problem, but a higher-level optimization decision to move the bottleneck -- and associated general intelligence resources -- elsewhere. DIY ceiling-fan installation happens to be a structural evolution bottleneck problem calling for general intelligence given the current ways humans inhabit technology, but it need not be. Maybe, for an AI, it would be better to trade it for a different problem.
Ceiling-fan installation may be a battle that an AI (or rather, cooperating assemblage of AIs) would never pick, even if it decides to serve human needs that include ceiling fans.
Carry this kind of logic all the way through the technological stack. Every problem gets reimagined in an AI-first way, first at a local resolution, then recursively through the stack, until some sort of macro-scale homeostatic balance is achieved between, say, the fragility and efficiency of every problem. Structural evolution bottlenecks migrate to the parts of the intertwingled stack where they make the most sense, given the characteristics of the AIs and robots running around the world.
AI Alienization
This sort of thing is, I suspect, the most likely trajectory of AI in the next few decades. Thousands of teams solving "ceiling-fan" class Interior AI problems, initially in anthropomorphic ways to take advantage of the human-optimized UX of the current technology stack. Then problems get refactored, and structural evolution bottlenecks start migrating to better places in the stack. Some ceiling-fan problems remain. Others get designed away through modularity, decomposition, boundary redrawings, scope changes, various sorts of broader architectural patterns, and so on. Eventually, we'll see large platform corporations that solve large swathes of AI problems that are adjacent in certain globally efficient ways, but are not necessarily related in any human sense.
Something should strike you about this story: this is what is already happening. This is how non-AI-software eating the world has unfolded for decades, and it is how AI-eating-the-world is already unfolding in its early years. Google search is an Interior AI. And for some reason, it is adjacent to a whole bunch of other problems, like email, that are not prima facie related in the older technological stack (your old paper book library was not also your paper mail post office).
Amazon recommendations are AI. Factory automation is AI. Even the ceiling fan I installed already has a ton of AI in the supply chain that produced and delivered it to me. The way the ineffable ceiling-fan AI that already exists in the world solved the problem of getting my ceiling fan installed was to have me do the last bit at the end of a very long chain.
This is not a sexy story. Locally, it feels like a very dull, incremental trend within software eating the world. It feels like Wheel and Fire, Chapter 3250, or Computing, Chapter 32, rather than Artificial Intelligence, Chapter 3.
But zoom out a bit, to time scales of decades, and you start to see the long arc of technological history bending towards greater artificial intelligence. As this process proceeds, you will get what I call an alienization of society, as it transforms to accommodate the presence of AIs that have migrated from their anthropomorphic roots to their most natural locations.
Alienization is what the AI equivalent of "Christianization" will be like. Our world will start to seem increasingly non-anthropocentric.
Why is it so hard to see this future? Why is it so tempting to get distracted by religious AI cultism? I think the answer is that AI has a human-scale initial condition. The annoying Terminator stock photos in AI thinkpieces get one thing right: the form factor with which AI is entering our lives is human-scale, even if it is not anthropomorphic, so it is extra threatening. An example is the sensational headline that if AIs replace humans as truck drivers, the largest employment sector, about 6% of jobs, will vanish. What people don't mention alongside that factoid is that that's about the month-to-month churn level of jobs in the economy anyway. But the fact that it's a human-scale structural transformation makes us turn to religion over it.
This is not the case with all technologies.
Microchips are tiny. Airbus A380s are huge. Birth-control pills are couple-scale. Genome-hacking happens at electron-microscope scales. Nuclear power generation is a grid-scale thing. Artificial chemical fertilizers operate at farmland-soil-scale.
The reason, of course, is that all these technologies are based on some sort of natural physics (or chemistry, or biology) phenomenon that is easier to exploit at some non-human scale.
But the starting point for the AI alienization of the world is not an exploitable principle in physics or chemistry, but the myriad existing human ways of inhabiting the world. Interior AIs, in particular, are built to solve particular problems, requiring situated general intelligence, that arise from the particular ways humans are embedded in their environments today.
It is a starting point that has a very particular history, and can lead to a lot of very interesting futures, but I doubt any of them will look anything like the product of impoverished religious imaginations.
Life after AI is getting set up in far too interesting a way to end up in such uninteresting places.