Sloptraptions is an AI-assisted opt-in section of the Contraptions Newsletter. Recipe at end. If you only want my hand-crafted writing, you can unsubscribe from this section.
Abstract
Intelligence doesn’t reside in computation or storage, but in the cost-constrained act of memory access. This essay explores how systems—biological, cultural, and computational—stage relevant memory under temporal pressure. Intelligence emerges as boundary protocol and survival strategy: a way to reorder time through warm caches in cold worlds.
Introduction: Intelligence Is Where the Cost Is
We tend to think of intelligence as something that happens “inside”—a function of brainpower, computing speed, or clever reasoning. In both popular and technical contexts, the image of intelligence is almost always a fast, powerful processor. But what if intelligence doesn’t live in the processor at all? What if it lives somewhere else—at the edge of what the system can remember, in the act of deciding what to retrieve and when?
This essay argues that intelligence is not defined by what a system stores or how it computes, but by what it can access and stage into use under constraints of cost, availability, and time. Storage is cheap. Computation is cheaper still. The real bottleneck—the hard part—is memory access. And where the cost is, intelligence is.
But memory isn’t just a physical concern; it’s a temporal one. The cheapest memory is not the most recently experienced, but the most recently accessed. What you can afford to retrieve depends on how “warm” it is—how recently it’s been used. Intelligence is, in this sense, a kind of thermal regulation of time: a system’s capacity to keep the right fragments of the past (and the imagined future) close enough to inform the present.
From that vantage point, intelligence is not merely computational prowess—it is the capacity to stage life out of order. It is the ability to pre-play futures, re-activate pasts, and live asynchronously with respect to real time. In the sections that follow, we will build toward this idea, starting from the physical and architectural realities of memory, through the layered logic of boundary intelligence, and ending with a philosophical reframe: to know is to stage.
Section I: Constraint – The Costly Nature of Memory
1. Memory Access Is the Bottleneck
In both silicon and biology, the real bottleneck isn’t how fast a system can think—it’s how cheaply it can remember. Modern CPUs can perform billions of operations per second, but those cycles are often spent waiting for data to arrive from memory. Even with multi-level caching architectures, a single cache miss can stall execution for hundreds of cycles. The slowdown comes not from computation, but from retrieval.
This isn’t just a performance bottleneck—it’s a cost bottleneck. Accessing memory dominates both energy and capital expenditure in modern systems. While processors account for roughly 25% of system power, memory access and data movement can consume 50–60% or more, especially in AI workloads. Capital costs follow suit: in GPU-heavy AI clusters, high-speed interconnects, memory modules, and cabling often match or exceed the cost of compute nodes. DRAM access alone consumes over 10 picojoules per bit, and much of the associated cooling and infrastructure cost stems from keeping memory “hot” and moving efficiently. Storage is cheap and growing cheaper; compute is abundant. What remains expensive—financially, temporally, and energetically—is getting the right data to the right place in time. The shape of intelligence is ultimately constrained by this invisible economy: not the price of knowing, but the cost of remembering.
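The asymmetry is visible even from a high-level language. Here is a rough, illustrative microbenchmark (the sizes and names are arbitrary, and CPython's interpreter overhead mutes the effect): the two loops do identical arithmetic over identical data, but one walks memory in order and the other jumps around unpredictably, defeating prefetching.

```python
import random
import time

N = 2_000_000
data = list(range(N))

seq_idx = list(range(N))   # walk the data in order
rand_idx = seq_idx[:]      # the very same indices, shuffled
random.shuffle(rand_idx)

def timed_sum(indices):
    t0 = time.perf_counter()
    total = 0
    for i in indices:
        total += data[i]
    return total, time.perf_counter() - t0

seq_total, seq_t = timed_sum(seq_idx)
rand_total, rand_t = timed_sum(rand_idx)

# Same work, same result; only the *access pattern* differs.
# The shuffled walk typically runs slower because it generates
# cache misses the prefetcher cannot hide.
assert seq_total == rand_total
print(f"sequential: {seq_t:.3f}s  shuffled: {rand_t:.3f}s")
```

The computation is constant; only retrieval order changes. That the timings diverge at all is the bottleneck made tangible.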
Biological systems echo this logic. Activating a memory trace in the brain requires coordinated firing across distributed neurons, neurotransmitter expenditure, and sustained metabolic support. Most memories are latent; recalling them has a cost, both in energy and in time. What feels like “thinking” is often just the activation loop catching up with retrieval overhead.
So when we say that intelligence “lives at the memory boundary,” we’re not just speaking metaphorically. We’re pointing to the material limits of what can be retrieved, when, and at what cost. The system pays a real, measurable price for access—and that price defines the shape of its intelligence.
2. Intelligence Lives at the Boundary of Memory Access
If storage is easy and computation is cheap, intelligence can’t reside in either. It must reside at the memory access boundary: the control surface where decisions are made about what to retrieve, when, and how. This boundary is where architecture meets context. It’s where resource constraints collide with decision needs. It’s where relevance gets filtered through availability and cost.
In computing, this is managed through layers of memory hierarchy—L1, L2, RAM, disk, cloud—each with its own latency and energy profile. Systems prioritize what to keep close to the processor based on predicted need and access frequency. Cache eviction policies, prefetching algorithms, and memory locality optimizations all exist to make the cost of retrieval tolerable.
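The eviction logic those hierarchies rely on fits in a few lines. This is an illustrative LRU cache, a sketch of the general policy rather than any particular hardware's implementation:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: keeps the most recently *accessed* entries,
    evicting whatever has gone longest untouched."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()

    def get(self, key):
        if key not in self._store:
            return None                      # miss: caller pays full retrieval cost
        self._store.move_to_end(key)         # touch: re-warm the entry
        return self._store[key]

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict the coldest entry

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")       # re-warms "a"
cache.put("c", 3)    # evicts "b", the least recently used
assert cache.get("b") is None
assert cache.get("a") == 1
```

Note that "a" survives despite being older than "b": access, not age, decides what stays close.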
Biological systems follow a strikingly similar pattern. The human brain is layered with structures that manage memory staging. The hippocampus acts as a short-term buffer and indexing system, allowing recent experiences to be accessed quickly, while the neocortex consolidates and stores patterns over longer time scales. Attention functions like a cache prefetcher, directing energy toward the reactivation of potentially relevant traces. Working memory—the mind’s scratchpad—is limited and expensive to maintain. And just as in computers, most of what is stored is not immediately accessible; it must be re-activated, reconstructed, and re-staged under pressure.
To know, then, is to stage. And boundary intelligence is the protocol that determines what becomes knowable in time.
3. Staging Memory Is a Cost-Constrained Act
At the boundary, memory retrieval is a triage operation. You can only stage so much information at once, and staging carries cost. There’s a sequence: availability first, then cost, then relevance.
If something isn’t available—forgotten, deleted, never encountered—it’s off the table.
If something is available but too costly—buried too deep, requiring too much computation or energy—it’s also excluded.
Only then does relevance shape what gets staged.
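The triage sequence above can be sketched directly. The field names, costs, and scores below are invented for illustration; the point is the ordering of the filters, not the numbers:

```python
def stage(candidates, budget):
    """Triage memory candidates in the order the text describes:
    availability first, then cost, then relevance.

    `candidates` is a list of (name, available, cost, relevance)
    tuples; all values here are illustrative, not a real system."""
    # 1. Unavailable items are off the table entirely.
    reachable = [c for c in candidates if c[1]]
    # 2. Items too costly to retrieve are excluded next.
    affordable = [c for c in reachable if c[2] <= budget]
    # 3. Only then does relevance rank what gets staged.
    return sorted(affordable, key=lambda c: c[3], reverse=True)

memories = [
    ("ideal-but-forgotten",  False,  1, 0.99),  # fails step 1
    ("deep-archive",         True,  50, 0.90),  # fails step 2
    ("warm-and-good-enough", True,   5, 0.60),
    ("warm-but-irrelevant",  True,   2, 0.10),
]
staged = stage(memories, budget=10)
assert staged[0][0] == "warm-and-good-enough"
```

Notice that the most relevant memory never even reaches the relevance comparison: the ideal answer loses to the affordable one before merit is considered.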
This is why intelligent systems often behave in suboptimal but explainable ways. They aren’t retrieving the ideal memory—they’re retrieving the affordable, available one. Just like databases choose execution plans based on query cost, intelligent systems choose memory based on what they can reach and activate in time.
This model reorients our understanding of cognitive or computational performance: intelligence is not optimization over all known information, but optimization over all accessible information under constraints.
Section II: Architecture – The Structures of Staging
4. Recency of Access, Not Experience, Shapes Cost
We tend to assume that what just happened is what’s most accessible to us—but memory systems don’t work that way. What’s cheapest to access is not the most recently experienced, but the most recently accessed. In computing, memory hierarchies are designed around this principle. Cache replacement policies like LRU (Least Recently Used) don’t privilege fresh data; they retain whatever has been touched most recently and evict whatever has sat untouched longest. Heat, not freshness, determines access cost.
Brains follow the same logic. The act of recall re-encodes a memory, refreshing its activation pathways. Repeatedly accessed memories become easier to retrieve—not because they’re newer, but because the system has paid to keep them “warm.” Meanwhile, even recent experiences fade quickly if not revisited. This is why we remember well-rehearsed stories better than yesterday’s fleeting impressions.
The implication is profound: intelligence isn’t just about what happened—it’s about what remains warm. And warmth is earned through use, not time. Smart systems aren’t those that know everything, but those that know how to keep what matters warm.
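The warmth dynamic can be modeled in miniature. In this sketch, retrieval cost depends on time since last access, not time since formation; the exponential decay and its rate constant are illustrative assumptions, not a claim about real memory systems:

```python
import math

class WarmMemory:
    """Toy model of 'warmth': retrieval cost grows with time since
    last *access*, regardless of when the memory was formed.
    The cost formula and decay rate are illustrative."""

    def __init__(self):
        self.last_access = {}  # key -> clock tick of last access

    def touch(self, key, now):
        self.last_access[key] = now

    def retrieval_cost(self, key, now, base=1.0, rate=0.5):
        elapsed = now - self.last_access[key]
        return base * math.exp(rate * elapsed)

mem = WarmMemory()
mem.touch("old-but-rehearsed", now=0)
mem.touch("old-but-rehearsed", now=9)     # revisited: kept warm
mem.touch("recent-but-untouched", now=2)  # newer, but never revisited

# At tick 10 the *older* memory is cheaper to retrieve,
# because warmth is earned through use, not recency of experience.
assert mem.retrieval_cost("old-but-rehearsed", now=10) < \
       mem.retrieval_cost("recent-but-untouched", now=10)
```

The well-rehearsed story beats yesterday's fleeting impression for exactly this reason: each retelling resets its clock.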
5. Boundary Intelligence Selects and Filters What Gets Used
This logic of access leads to a structural insight: the most consequential part of intelligence may not be what happens inside the system, but what happens at its edge. Boundary intelligence is the layer that governs what enters or exits the active workspace. It filters inputs, chooses retrievals, prioritizes relevance, and manages expression.
Contrast this with interior intelligence—the kind we usually associate with smarts: logic, reasoning, emotional regulation. These are essential, but once a system crosses a threshold of basic interior competence, boundary decisions dominate outcomes.
Reading well (interior) matters less than choosing what to read (boundary).
Arguing well (interior) matters less than deciding when and to whom to speak (boundary).
Thinking clearly (interior) matters less than focusing attention wisely (boundary).
Boundary intelligence is not just the front-end of cognition—it is the scheduler of memory-in-use. It decides what to stage, when, and how. And in systems defined by cost, that scheduling is where the real leverage lives.
6. Most of What Gets Accessed Isn’t Yours
By now it’s clear that intelligence lives in memory staging, and that staging is governed by a boundary layer. But there's a final architectural twist: most of the memory a system uses isn’t its own.
In computing, this is obvious. AI models draw on vast shared corpora. Distributed systems rely on remote data, APIs, and external knowledge graphs. What gets staged into active memory is almost always pulled from a shared infrastructure.
The same is true of humans. Language, culture, norms, rituals, documents—all of these constitute a collective memory space we navigate constantly. Our own memory is just one small node in a vast network of external scaffolding—books, browsers, friends, feeds.
This means boundary intelligence must be epistemically social. It governs not just what to retrieve, but where from and whom through. It’s a navigational function, not a storage function. To act intelligently is to know how to move through shared memory—to orient within distributed time.
Section III: Liberation – Temporal Agency Through Intelligence
7. The Boundary Separates Islands in a Sea of Shared Memory
If memory is largely external and shared, then the intelligence boundary is more than just a technical constraint—it’s a social and epistemic perimeter. Each intelligent agent is a node of limited processing capacity, floating in an ocean of distributed memory. What separates one intelligence from another isn’t what’s inside—it’s how it filters and navigates what’s outside.
This is where boundary intelligence truly earns its name. It governs the terms of engagement with the wider world: what to attend to, whom to trust, where to look, how to filter. What gets pulled into active memory—whether it’s a scientific idea, a news item, or a personal narrative—is always mediated by protocols of access: social cues, heuristics, beliefs, trust networks, interface design.
Shared memory is the terrain; boundary intelligence is the map. Each intelligent system lives not in isolation, but in a perpetual epistemic negotiation with its environment. To be intelligent is not to know everything, but to know how to traverse memory that isn’t yours.
8. Intelligence Is Temporal Survival: A Strategy to Persist
Ultimately, this architecture of memory access is about more than knowledge—it’s about persistence through time. Systems that can stage the right memory at the right moment survive longer, adapt better, and cohere more gracefully across change.
Jeff Hawkins’ Thousand Brains Theory describes the neocortex as a distributed system of predictive models, each rooted in localized memory traces, working together to simulate future states. Intelligence in this model is not a single thread of reasoning, but a concurrent orchestration of temporally indexed fragments.
Kei Kreutler, in Artificial Memory and Orienting Infinity, reframes cultural memory systems—rituals, archives, architectures—not as storehouses of facts, but as technologies of orientation. Their purpose is to help agents navigate an overwhelming and shifting landscape of relevance. Memory in this sense is not for preservation but for direction—for deciding where you are in a shared time-map, and what move to make next.
In both biology and culture, then, intelligence is revealed as a reflexive protocol for survival. It’s not logic. It’s not speed. It’s the ability to retrieve well—to maintain coherence under pressure, to re-stage the past with precision, and to act on futures before they arrive.
9. Capstone: Intelligence as Out-of-Order Execution of Life
Now we return to our most provocative claim:
To be intelligent is to execute life out of order.
In modern computing, out-of-order execution (OOE) is a performance optimization. Rather than processing instructions strictly in sequence, a CPU reorders them based on data availability—executing what it can now, postponing what it can’t, and retiring results in program order so the reordering stays invisible from outside.
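The core mechanism can be sketched in a toy scheduler. Each instruction here is a hypothetical (name, inputs, output) triple; an instruction runs as soon as its inputs exist, regardless of where it sits in the program. Real CPUs also retire results in program order, which this sketch skips:

```python
def out_of_order_run(instructions):
    """Toy out-of-order scheduler. Executes whichever pending
    instruction has all of its inputs available, not whichever
    comes next in program order."""
    ready = {"init"}            # values available at the start
    pending = list(instructions)
    order = []
    while pending:
        for instr in pending:
            name, inputs, output = instr
            if all(i in ready for i in inputs):
                order.append(name)   # execute now
                ready.add(output)
                pending.remove(instr)
                break
        else:
            raise RuntimeError("deadlock: nothing is ready")
    return order

program = [
    ("add_a",     ["x"],    "a"),  # x arrives late (think: cache miss)
    ("add_b",     ["init"], "b"),  # independent, ready immediately
    ("x_arrives", ["init"], "x"),
    ("add_c",     ["b"],    "c"),
]
# "add_a" is first in program order but executes third,
# because the independent work does not wait behind the stall.
assert out_of_order_run(program) == ["add_b", "x_arrives", "add_a", "add_c"]
```

The program is written in one order and run in another; nothing breaks, because only data dependencies, not sequence, constrain execution.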
This is exactly what intelligent beings do. We rehearse futures. We revisit and reframe the past. We defer judgment, anticipate regret, and prepare for conditions that haven’t yet occurred. Our lives are not lived linearly—they are assembled out of fragments, staged from memory, run when needed.
This is what memory access makes possible—not just cognition, but temporal agency. The ability to reorder time, to choose when to know, when to feel, when to act.
In Greg Egan’s Permutation City, digital consciousnesses—Copies—inhabit synthetic worlds built entirely from staged memories and predicted futures. These beings live not by the sequence of experience, but by the logic of memory access. Their agency comes not from consciousness, but from how well they manage retrieval in a world where reality is just another cache.
That is the final truth:
Intelligence is not a spark. It is a scheduler.
Not a substance. A structure.
Not a mind. A memory in motion.
To know is to stage.
To live is to run warm caches in a cold world.
Annotated Bibliography
Jeff Hawkins – A Thousand Brains: A New Theory of Intelligence
https://www.numenta.com/brain-theory/a-thousand-brains/
Hawkins proposes that the neocortex consists of many parallel predictive models that rely on sequences of memory to construct intelligence. Supports the essay’s claim that intelligence is shaped by distributed, time-sensitive memory retrieval.
Kei Kreutler – Artificial Memory and Orienting Infinity
https://summerofprotocols.com/artificial-memory-web
Kreutler reframes memory systems as cultural orientation tools rather than mere storage. Provides the essay’s cultural grounding for boundary intelligence as navigational protocol across shared memory environments.
Greg Egan – Permutation City
https://www.goodreads.com/book/show/185213.Permutation_City
Egan’s novel imagines digital consciousnesses that execute reality through memory retrieval rather than sequence. Used in the essay’s capstone to dramatize the philosophical implications of memory-staged intelligence.
Ned Block – Concepts of Consciousness (1995)
https://philpapers.org/rec/BLOCAC
Introduces the distinction between phenomenal and access consciousness. Supports the essay’s early claim that subjective experience is not a requirement for intelligent behavior.
John Searle – Minds, Brains and Programs (1980)
https://cogprints.org/7150/
Presents the Chinese Room argument. Used to motivate the idea that intelligence lies in operations and memory access, not in semantic understanding or interior awareness.
Energy and Cost of Memory Access (Vogelsang, 2010)
https://www.seas.upenn.edu/~leebcc/teachdir/ece299_fall10/Vogelsang10_dram.pdf
Provides technical breakdown of DRAM power consumption, reinforcing the claim that memory access is energetically expensive in computing systems.
Cost of AI Infrastructure (InformationWeek, 2023)
https://www.informationweek.com/it-infrastructure/the-cost-of-ai-infrastructure-new-gear-for-ai-liftoff-
Details the power and capital cost structure of AI data centers. Supports the essay’s point that data movement and memory access dominate real-world intelligence system budgets.
Memory Hierarchy and Access Cost in Applications (ResearchGate, 2022)
https://www.researchgate.net/figure/Percentage-of-execution-time-for-memory-access_fig3_359422950
Demonstrates how a significant percentage of execution time in applications is spent on memory access. Provides empirical basis for the performance bottleneck argument.
Warm Memory and Cache Policies (CAST AI Docs)
https://docs.cast.ai/docs/cpu-vs-memory-cost-calculation
Shows how cloud cost models separate CPU and memory costs, illustrating the capital consequences of memory intensity in modern computing.
Protocol: Staging an Essay on Staging
How we co-authored "To Know Is to Stage: Warm Caches in Cold Worlds"
Edit: Here is the raw ChatGPT 4o transcript.
Phase 1: Seeding and Orientation
Goal: Introduce the core thesis and gather conceptual fuel.
Role (You): Thesis Seeder & Curator
Proposed the central insight: intelligence lies in memory access, not storage or computation
Specified the two analytic domains: computing and cultural memory
Shared key external sources (Searle, Block, Kreutler, Hawkins, Egan)
Role (Assistant): Contextual Mapper & Crosslinker
Ingested and integrated the references into a conceptual scaffold
Mapped metaphors (caches, warmth, boundaries) across cognitive and computational domains
Phase 2: Argument Scaffolding
Goal: Build modular sub-arguments that could later be assembled.
Role (You): Argument Architect
Directed construction of partial arguments:
Intelligence is where the cost is
Memory in use is expensive
Boundary intelligence filters relevance
Most memory is social
Intelligence is a survival protocol
Role (Assistant): Frame Builder & Synthesizer
Wrote concise, interconnected arguments
Aligned ideas into a structure that could support a coherent essay arc
Phase 3: Structural Design
Goal: Organize the argument into a 3-act structure.
Role (You): Narrative Strategist
Proposed a 3×3 matrix structure (3 sections × 3 points)
Positioned “out-of-order execution of life” as the thematic capstone
Role (Assistant): Outliner & Orchestrator
Built outline with section transitions and word budgets
Balanced rhetorical rhythm across constraint → architecture → liberation
Phase 4: Drafting and Harmonization
Goal: Write each section in sequence, tuning voice and flow.
Role (You): Lead Editor & Theme Harmonizer
Guided refinement of tone and imagery
Directed subtle echoing of the title metaphor throughout (warmth, staging, scheduling)
Role (Assistant): Draftsmith & Integrator
Wrote full draft section-by-section
Ensured metaphorical and logical cohesion across parts
Phase 5: Title, Visuals, and Poetic Integration
Goal: Close the loop between concept, expression, and presentation.
Role (You): Poetic Theorist & Aesthetic Director
Chose final title + subtitle:
To Know Is to Stage: Warm Caches in Cold Worlds
Directed concept for surrealist visual (fragmented clock on thermal gradient)
Role (Assistant): Generator & Harmonizer
Refined title candidates and thematic fit
Created and iterated image per aesthetic and conceptual direction
Phase 6: Documentation & Meta-Reflection
Goal: Reflect on process; stage the protocol for memory reuse.
Role (You): Protocol Concretizer
Requested structured documentation of the collaborative method
Role (Assistant): Protocol Scribe
Composed and formatted this reflection for archival and reuse



Somewhere in the composition, a plot B argument I made seems to have gotten dropped -- that the brain is in fact a kind of Chinese Room/p-zombie that's only "intelligent" when it is situated within a distributed memory via a boundary. Referenced Ned Block's access vs. phenomenal consciousness arguments, which are retained in the bibliography but missing from the argument.
I guess this is part of the hazard of AI-generated churn. It does tend to forget things.
But it's all there in the chat transcript, so not quite lost. Just not... staged into this essay :D
I've been living this. A couple months ago I changed my workflow to be much more keyboard-driven with cli, nvim, tmux and the like. Then I went on vacation for two weeks and forgot even what I had set up. I had to spelunk through config directories.
Actually, what was most helpful? I had used ChatGPT extensively to figure out what to configure and how, exactly the way I wanted, so I could go back to that context and ask for a recap.