I just finished Jimena Canales’ 2010 book, One Tenth of a Second. It is a weirdly delightful work of disorganized, scatter-brained genius. I’ve been researching and writing about time for over a decade now, so I consider myself widely read and hard to surprise on the topic, but this book managed to go beyond filling a few known-unknown gaps in my understanding, and got me looking at the whole subject in a genuinely new light. It has gotten me reconsidering some of the ideas I wrote about in Tempo in 2011, and reworking some draft ideas I am thinking through for The Clockless Clock.
I found the book by backtracking from Canales’ second, more conventionally pop-sci 2015 book, The Physicist and the Philosopher (which I haven’t yet finished), about the Einstein-Bergson debate of 1922 and its consequences. It’s clear that the second book grew out of the first, since the earlier book concludes with a longish chapter on the Einstein-Bergson debate. But the first book is more interesting to me personally, since it contains the core of the imaginative refactoring that informs both books, and its relative roughness (parts almost read more like private research notes) is actually a plus, since you get a better sense of the tricky big idea Canales is trying to pin down.
While I wouldn’t necessarily recommend either book to the casual non-time-nerd reader (I suspect they would be tough reads if you don’t have sufficient context), the central argument of One Tenth of a Second is of much broader interest. It is worth appreciating at least in a skeleton form, even if you don’t have a good reason to do a deep dive.
The basic thesis is that between the late eighteenth and early twentieth centuries, a period of time — 0.1s — acquired a weirdly powerful role at the heart of science, and indirectly, in the shaping of all modernity.
The tldr of the book might even be: the failure to come to terms with the 0.1s limit in human cognition gave birth to modernity, with all its inherent tensions, via a set of parallel crises.
You’ve probably heard the claim that human persistence of vision is around 1/10th of a second, and that this is what allows for the experience of movies. This is the gloriously messy backstory of that idea.
The rough story (which the book sort of conveys in passing, but in a very elliptical way) is as follows:
In the late 18th century, astronomers were investigating questions that required timing observations precise to 0.1s, running up against the limits of human reaction time (the origin story involves the Astronomer Royal, Nevil Maskelyne, dismissing his assistant, David Kinnebrook, in 1796, because their recorded observation times didn’t match).
This created a simultaneous crisis across science (stressing and undermining trust in experimental methods) and the philosophy of science (stressing notions of objectivity).
For both practical and philosophical reasons, the “human factor” had to be tackled, and this gave rise to interest in something called the personal equation — a model of a particular observer’s reaction times, on the 0.1s scale, in recording stimuli. It was the “cognitive biases” idea of its time. Observatories even used to measure and record the “personal equations” of astronomers alongside observations attributed to them, to allow suitable (and dubious) corrections to be applied to calculations.
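To make the correction scheme concrete (a schematic reconstruction on my part, not an example from the book): if calibration trials suggested that a given observer habitually recorded events some interval Δ late, the observatory would subtract that interval from his timings:

$$ t_{\text{corrected}} = t_{\text{recorded}} - \Delta_{\text{observer}} $$

The dubious part is the assumption that Δ is a stable personal constant, when in practice it drifts with fatigue, attention, and the nature of the stimulus.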
Personal equation studies gave birth to the field of psychology in its modern form. To a first approximation, 19th century psychology in the Wilhelm Wundt era was reaction-time studies around 0.1s. The philosophical presuppositions and imperatives behind this could roughly be described as “rescuing science from the 0.1s objectivity crisis.” This meant modeling and empirically studying the human being as an instrument capable of characterization and calibration (leading to infinite regress questions that were never really resolved).
This gave rise to really thorny questions. Some were well-posed, like the question of the speed of nerve transmission, which eventually got answered in straightforward ways. Others were less well-posed, like breaking down a stimulus-response cycle into a sensation lag, a processing lag, and a motor activation lag, a decomposition we are not much better at thinking about today, despite our vastly superior understanding of neuroscience.
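One way to see why the decomposition is slippery (my notation, not the book’s): any reaction-time experiment measures only the total,

$$ T_{\text{observed}} = t_{\text{sensation}} + t_{\text{processing}} + t_{\text{motor}} $$

which is one equation in three unknowns. Isolating the individual lags requires auxiliary assumptions, such as that changing the task changes only one term at a time, which cannot themselves be checked from the timing data alone.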
Psychology quickly found itself in an impossibly messy 0.1s yak-shave, and at the same time, the experimentalists who originally created the practical motivation increasingly began looking for ways around the human factor. This led to, among other things, the birth of high-speed photography and cinematography.
So through the 19th century, we find three main strands of research inspired by the 0.1s limit: experimental methods to mitigate “personal equations”, early “reaction time” psychology research, and the development of photography and other automated techniques to short-circuit human factors altogether, thereby hopefully restoring classical objectivity to science. The book jumps around among these three strands.
In parallel, at a meta-level, we see the evolution of statistics. Gauss and least squares make a surprise appearance in the context of the 0.1s problem. But against this real empirical messiness, the mathematical developments seem naive, despite their clear power. Unlike in the standard telling, Gauss didn’t simply solve the problem of varying observer reactions for a grateful experimentalist community with least-squares methods. There were fierce debates about whether human reactions were in fact normally distributed, and Karl Pearson’s chi-square test was born of this debate.
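For the curious, here is a minimal sketch of the kind of question Pearson’s test was built to settle: do an observer’s reaction times actually follow a normal distribution? This is a modern reconstruction with simulated data (none of it from the book):

```python
# Chi-square goodness-of-fit check: are these reaction times normal?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical observer: reaction times centered near 0.1s, but skewed,
# as real reaction times tend to be (long right tail, not a bell curve).
reaction_times = 0.06 + rng.gamma(shape=4.0, scale=0.012, size=500)

# Bin the observations and compute expected counts under a fitted normal.
counts, edges = np.histogram(reaction_times, bins=12)
mu, sigma = reaction_times.mean(), reaction_times.std(ddof=1)
expected = len(reaction_times) * np.diff(stats.norm(mu, sigma).cdf(edges))

# Pearson's statistic: sum of (observed - expected)^2 / expected.
# (A real analysis would merge sparse outer bins first.)
chi2_stat = ((counts - expected) ** 2 / expected).sum()
dof = len(counts) - 1 - 2  # bins minus 1, minus 2 fitted parameters
p_value = stats.chi2.sf(chi2_stat, dof)
print(f"chi2 = {chi2_stat:.1f}, dof = {dof}, p = {p_value:.4f}")
```

A small p-value here says the bell-curve assumption fails, which is exactly the kind of result that fueled the 19th-century debates.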
There was also a steadily developing philosophical crisis, as it became clear through the 19th century that the attempt to rescue objectivity in the classic 17th and 18th century senses (think 1600-1780) would in fact fail.
By the late 19th century, methodological and psychological efforts to mitigate human factors, even with the best discipline, had basically failed. On the other hand, physics-based approaches (in particular interferometry) to cutting humans out of the loop, at least to a first approximation, had basically succeeded. This led to physics taking over from astronomy as the most prestigious science.
In parallel, psychology had run aground on impossible problems in trying to ground an understanding of humans in reaction-time-based empiricism. And philosophy was in crisis too, unable to formulate a coherent philosophy of science now that “objectivity” was in such trouble. It was a dual epistemic crisis: the reliability of “objective” human observation had been undermined, and the rise of technology had created an “alternative” epistemology of seemingly “observer-free” information with murky qualities.
On the technology front, photography (leading a pack of auto-sensing technologies) was in a race against the human eye, in a story eerily reminiscent of today’s conversations around AI. Though in the early decades skilled human observers easily did better at astronomical observation tasks, the writing was clearly on the wall. Slowly but inexorably, photography (and related automated methods) took over from humans.
Photography itself turned into a practical-philosophical quagmire. A divide emerged between experimental scientists, who wanted to evolve it as an objective instrument, and commercial adopters, who were turning it into a medium of entertainment, while philosophers were unsure what to make of knowledge created without observers, by inanimate devices. Was “sampled data” reality really the same as continuous reality as subjectively experienced by humans? If not, what was the difference? What essence leaked out between frames?
There were huge political stakes as well, particularly around the question of standards for length in post-revolutionary France. The century-long rise of physics at the expense of astronomy is evident here. The increasingly troubled efforts to define the meter in terms of the earth’s circumference gave way to elegant definitions in terms of the wavelength of light.
Things came to a head with the high-stakes international competition to measure the solar parallax during the transits of Venus in 1874 and 1882, a measurement important for determining distances in the solar system. Roughly speaking, both human and photographic methods failed due to the 0.1s problem, and the physics approach of getting at the same questions, starting with interferometry-based speed-of-light measurements, won out. Human observers retreated from the center stage of empirical science along with astronomy, and the synthetic, rather than analytic, function of photography began shaping its future.
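For a sense of the stakes, a back-of-envelope calculation in modern values (mine, not the book’s): the solar parallax is the angle the Earth’s radius subtends as seen from the sun, so pinning it down fixes the Earth-sun distance:

$$ d_{\odot} \approx \frac{R_{\oplus}}{\pi_{\odot}} \approx \frac{6378\ \text{km}}{8.794'' \times 4.848 \times 10^{-6}\ \text{rad}/''} \approx 1.5 \times 10^{8}\ \text{km} $$

At this sensitivity, small errors in timing the transit contacts propagate directly into the inferred parallax, which is why the 0.1s human factor loomed so large.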
In this milieu, the philosophy of time became the most important topic in the philosophy of science itself, and the philosopher Henri Bergson rose to prominence on the strength of his study of the matter, establishing a subjectivist tradition that insisted the human experience of time was inseparable from observations and measurements. Effectively, 0.1s became a barrier humans could not see past without the aid of instruments whose epistemic status was deeply suspect. As a result, it turned into something of a religious question. A sort of frequency-domain afterlife zone.
On the other side, a naive-empiricist anti-philosophical tradition arose in science, insisting on the meaningfulness of objective “freeze-frame” notions of time. Roughly, this tradition took reality to be synonymous with whatever was measured by instruments (a modern equivalent is the conflation of computation and subjective consciousness). This tradition increasingly short-circuited the human factor without actually addressing either the psychological or philosophical conundrums raised by running into its limits. A similar naive-empiricist tradition grew in psychology, effectively based on the false security of a shaky empiricism (think phrenology).
The philosophical conflict came to a head in the 1922 Einstein-Bergson debate, creating a temporary victory for the objectivists (in the science sense, not the Ayn Rand sense) and a deep schism between two views of time: philosophical time on the one hand (which Einstein grandly declared didn’t exist), and, on the other, physics-and-psychology time, both rooted in a rather shaky empiricism.
So the story ends around 1922 with: the 0.1s practical problem kinda went away, photography blew blithely past epistemic concerns to ever greater heights (and so here we are at deep fakes and false-colored astrophotography now), psychology abandoned the human-as-instrument reaction time approach and ended up disrupted by unabashedly subjectivist and introspective Freudian-Jungian approaches, and philosophy and science ended up in a weird schism, each claiming the other had no locus standi on certain questions.
Overall, the collision with the 0.1s barrier led to the displacement of human-centric empiricism by automation, the rise of photography as a powerful but epistemologically suspect modality (it is not an accident that film is primarily a medium of fiction rather than non-fiction), and the rebooting of psychology in a subjectivist mode. The philosophy of science evolved from classical through positivist, anti-positivist, and post-positivist phases to its modern indeterminate (imo) condition, marked by no clear consensus on the nature of scientific “knowing.”
The book tells this whole story in a different way, as a scholarly stream-of-consciousness tour through the empirical underbelly of the relevant historical era. A vast cast of characters marches through the pages, and you get a visceral, impressionistic, all-at-once sense of the evolving scientific ferment. Famous names appear in different lights: Gauss and Pearson appear as peripheral figures, and Einstein comes across unsympathetically, as a sort of reactionary objectivist (ironic, given that he discovered relativity). Figures like Le Verrier, the head of the Paris observatory who predicted the existence of Neptune, loom large, along with observational astronomy. You get a strange view of the history of photography and cinema.
There is a certain useful arbitrariness to how the history is chopped off at both ends by the 0.1s problem, which is essentially forgotten today. A more natural way to tell the story would be to start with Spinoza vs. Leibniz in the 17th century, and end with the role of observers in quantum mechanics and the unresolved question of unifying quantum mechanics and general relativity. But in an interesting way, the “underbelly” perspective enforced by the reign of a critical empirical and phenomenological concern is enlightening in a way a theory-first narrative would not be. We are much more at the mercy of the instrumental capacities of an era than we like to admit, shaped by the message of the medium and its limits.
It’s a disorienting but effective way to tell the story. This is the history not just of 0.1s, but of an entire weird century, as Virginia Woolf and Douglas Adams might have written it. A chronos crisis viewed from the perspective of the kairos tumult it caused.
The details are fascinating, but the central argument — that the birth of modernity can be traced to a meta-crisis spawned by the 0.1s problem — is worth understanding and appreciating whether or not you’re a time nerd like me.
The argument is particularly salient given that history is rhyming furiously right now. Consider:
Instead of human eye vs. camera, we have human brain vs. AI.
Instead of an epistemic crisis triggered by moving pictures, we have one triggered by deep fakes.
Instead of an elaborate psychology grounded in shaky “reaction time” research, we have one grounded in equally shaky “cognitive bias” research (I am in a minority, I think, in believing the Kahneman-Tversky industrial complex to be a house of cards that’s due to collapse soon).
As in the era of Gauss and Pearson, statistics is in ferment, trying to offer interesting new tools to mitigate human limits (think long tails, antifragility, ergodicity), but as was the case then, these tools don’t seem to be making as much of a dent in core philosophical and psychological problems as their evangelists think.
Instead of the 0.1s limit as a limit on observation, we worry about how software interfaces “hack” attention by presenting clickstreams that move faster than that limit, operating in our temporal “stupid zone” (I think this is a bad but interesting argument and critique of social media).
Instead of worrying about what really happens between the frames of a 12 fps film reel (each frame lasting about 83 ms, just under the 0.1s limit) that is not captured by freeze-frame models of reality, we worry about what really happens during human cognition that does not happen when an AI replicates its input-output behavior.
There is no convenient leitmotif, comparable to the 0.1s problem, for our contemporary version of the rhyming conditions, but something very similar to the “tenth of a second crisis” is going on today. I suspect our Great Weirding too involves some sort of limiting factor on human cognition that we haven’t yet properly wrapped our minds around. It isn’t reaction time, but something analogous.
To wrap up this précis, I was very entertained by the parallel between a line in the book and a Douglas Adams line. Somewhere towards the end, discussing how 0.1s shaped the ideas of Lévi-Strauss, Canales writes:
“Yet there is also an aspect of the tenth of a second that can help us reevaluate the longer period of time known as modernity and problematize the progression described by Lévi-Strauss… Longue durée narratives are often based on common (mis)understandings about the short periods of time that constitute them.”
This reminded me of a bit in Hitchhiker’s Guide:
“So the hours are pretty good then?” he resumed.
The Vogon stared down at him as sluggish thoughts moiled around in the murky depths.
“Yeah,” he said, “but now you come to mention it, most of the actual minutes are pretty lousy.”
I might do a newsletter issue on The Physicist and the Philosopher too, when I’m done with it.
Canales has a recent third book out, Bedeviled (2020), a history of demons in science (as in Maxwell’s demon, Laplace’s demon, and so on) that I’ve added to my queue (she seems to have a knack for zooming in on intriguing original angles on well-studied topics in the history and philosophy of science).