Bullshitization and Common Indifference Problems
Pretending to govern requires Common Indifference (CI) maps
Found this excellent 2017 paper, Bullshit as a Problem of Social Epistemology by Joshua C. Wakeham, via Sarah Perry, and just read and discussed it in the Yak Collective governance study group this morning.
The paper builds on the Harry Frankfurt “indifference to truth and falsity” model (from a 1986 paper and a 2005 popular crossover book — the idea went viral on the internet along the way). It’s a mix of a survey of work that builds on the original idea, and prolegomena for a more “social epistemological” model. Reading it got me thinking about bullshit in a different way — as an age-old socially adaptive process for constructing shared Common Indifference (CI, by analogy to Common Knowledge or CK) maps of reality. This process is accumulating negative externalities at an accelerating rate due to the internet (and in a different way, AI), and possibly becoming maladaptive.
This is why we’ve only recently started thinking of it as a (possibly existential) “problem” rather than just a manageable part of the tradeoffs of life itself. As a result it’s become a central problem for the art of pretending to govern (and pretending to be governed), which I talked about last week, because the collaborative pretense of governance can only safely operate in Common Indifference domains. It is not safe to pretend to govern or be governed where there isn’t a sufficient level of shared indifference in the population. And with the leverage of the internet, even very small groups in large societies can make sure the level of indifference isn’t sufficient. While bullshit itself is a weapon of the strong, bullshit-calling is a weapon of the weak (ht Sachin Benny for the prompt to think in those terms).
Let me tldr the paper for you before getting to my idea.
The Paper
The paper is in roughly three parts. The first part is a survey of sorts of the last thirty years of bullshit studies. The second part looks at some interesting “social epistemology” angles, without landing anywhere in particular, and the last bit presents something of a taxonomy of bullshit.
In the first part, most of the surveyed work seems a bit uninspired, but some of it seems to miss the point of both the ordinary sense of “bullshit” and the spirit of Frankfurt’s account, in interesting ways. This bit about the ideas of some guy named Fuller in particular seems to represent a common but interesting kind of revealing mis-analysis (which Wakeham notes and critiques well). I particularly like the wonderful highlighted phrase about obliqueness of inquiry encountering obtuseness of a response.
“Fuller suggests the bullshit detector has a greater confidence in his or her ability to apprehend and know the world. The position of the bullshit detector belies the fact that consistent access to clear evidence of any particular claim or belief is often hard to come by. The bullshitter’s position allows for more uncertainty in his or her grasp of the world because as Fuller argues: “[W]e must make up the difference between the obliqueness of our inquiries and the obtuseness of reality’s response. That ‘difference’ is fairly seen as bullshit” (p. 247).
This “obtuseness of reality’s response,” or what might also be called ontological uncertainty, is another source of difference. The bullshit detector is a realist, believing that “reality is, on the whole, stable and orderly.” By contrast, the bullshitter is an antirealist, treating “reality as inherently risky and under construction,” fraught with a greater degree of uncertainty (Fuller 2006:247). Bullshit detectors are not only overconfident in their ability to apprehend the truth, but they are being naive or disingenuous about the messiness of reality, according to Fuller (2006). Bullshitters engage in “deferred epistemic gratification” by throwing a variety of ideas and claims out there without regard to the weight of the evidence. In Fuller’s view, bullshit detectors’ realist position often overstates the epistemic status of some claims over others without sufficient evidence. This amounts to dismissing some claims as bullshit without conceding the epistemic weaknesses of one’s own position.”
I don’t know about you, but I treat this as an important but distinct phenomenon related to, but not the same as, bullshit. Different levels of actually held realism in philosophical postures, and the modulation of speech by the felt levels of doubt in that posture, can and should be distinguished from modulation caused by indifference, or a loose intent to obfuscate. Social intentions matter more than philosophical priors when it comes to bullshit. And the output of the two modulations is different. I, at least, typically know when I am bullshitting and why, and when I’m speaking from a place of genuine doubt and “deferred epistemic gratification” (another lovely turn of phrase).
But this is an important related problem. Many crackpots suffer from this problem of overconfident bullshit detection, but their attributions of bullshit are often not even wrong: they tag bullshit where there is merely more uncertainty and ambiguity than they’ve themselves wrapped their minds around. They are unable to distinguish tasteful tweaks on nominally disciplined processes, made in service of uncertainty wrangling, from tweaks that represent either fraud or indifference. I saw a paper go by, which I can’t find now, that points out that autodidacts, crackpots, and conspiracy theorists often adhere more faithfully and tastelessly to “scientific procedure” than practicing scholars, and that this is the source of both the systematic weaknesses in their thinking and their much rarer successes.
This kind of naive bullshit detection is a symptom of morally strident cluelessness and false confidence, a la the Dunning-Kruger effect. Ironically, the Dunning-Kruger effect itself is under skeptical scrutiny for replication problems, but I personally believe it to be a robust and real phenomenon even if the particular formulation and empirical investigation fail. But overall, both research efforts themselves (*cough* behavioral economics *cough*) and “debunking” efforts can be naive in this clueless way, causing an infinite regress of bunking/debunking that goes nowhere. Because reality is messier than you think and people are more malicious than you think.
And there is no systematic way I know of to tell sincere grappling with uncertainty/ambiguity from various bullshit and bullshit-adjacent pathologies. This is why the social epistemology core of the paper is so interesting.
I didn’t know there was a philosophy sub-field with that name, but its central concern is clear enough, and one I think a lot of us share and explore through other frames — how to construct usably reliable shared understandings of a matter out of unreliable individual understandings, through people testifying to things and evaluating each other’s testimonies.
The correct answer is of course “blockchains,” but while we wait for that vision to actually be realized, let’s talk about the mess we’ve made of things with our existing primitive technologies.
I think this social emergence aspect of bullshit is fatally understudied. By contriving to think of it as an individual problem, we’ve constructed clueless meta-theaters of people competing to sound credibly sophisticated about it. This is bullshitization, a process like financialization or enshittification: a derivative phenomenon that amplifies the underlying bullshit instead of attenuating it. And as with those other two processes, despite the negative connotations, I don’t think it’s necessarily a pathology. It’s an essentially adaptive process that has experienced regulatory escape from its regime of utility.
I think we fail to address bullshit because we have failed to recognize the genuinely positive adaptive function of bullshit for all of us in regimes where it works. We just don’t like being on the wrong side of the adaptive function where it doesn’t. The same is true of enshittification and financialization — both are positive adaptive processes… until they’re not.
A lot of the academic cottage industry of debunking, debunking the debunkers, failed replication, failed efforts to show failed replication, and so on, is a more structured version of the lay phenomenon. Wakeham gets at this in passing:
“Unfortunately, the debate between nonreductionism and reductionism leaves one caught between gullibility and impractical skepticism.”
Ignoring the specific technical notion of reductionism he’s applying (it’s not particularly fertile imo), this sense of being caught between gullibility and impractical skepticism is the heart of the matter. Though the paper sadly does not touch on the internet or popular discussions, this is precisely the “do your own research” trap (and the conjuring tricks achievable by manipulating the results of that impulse). The impractical skepticism aspect is precisely Brandolini’s bullshit asymmetry principle (“The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.”)
So it’s not surprising that for the most part, we tend to make our responses a function of selfish cultural convenience rather than epistemic hygiene concerns. The paper gets into the ritual/cosmetic/polite-fiction varieties of bullshit (historically accounting for the majority of bullshit, though this is changing) via Erving Goffman’s work on frames (the On Cooling the Mark Out guy) and Andreas Glaeser’s distinction between “knowledge” and “understanding” (new to me, but sounds isomorphic to my own demystification vs understanding distinction, except I use understanding to mean knowledge in the Glaeser sense and demystification for what he calls understanding, and avoid using the word knowledge altogether, except in narrow phenomenological ways). It then gets into the weeds of a taxonomy of types and genres of bullshit based on the cultural contexts in which it occurs. This is probably useful and necessary for further work, but feels like a bit of a yak shave.
The main problem remains the gullibility/impractical skepticism tradeoff in the corners of bullshit space that cannot be easily dismissed as routine background polite-fiction construction. Corners where mere polite glossing over, and even shallow demystification, are not enough, and arriving at the most robust understanding (or knowledge in Glaeser terms) of the truth actually matters to someone, because they have a phenomenological stake in the matter, rather than an abstract intellectual one.
That “to someone” part is probably what makes this a non-trivial social epistemology problem. If it were all polite fictions, we wouldn’t have a problem.
So much for the paper. Let’s talk about what I think is the right way to come at the matter: Common Indifference.
Half-Baked Indifference
Most of the time, for most social questions, the participants discussing them are actually in a state of shared indifference about the answer, and know it. Nobody has a stake, and everybody knows nobody has a stake.
Let’s call this Common Indifference by analogy to Common Knowledge — I know you know I don’t care, I know you know I know you don’t care, ad infinitum. So the conversation that is nominally about establishing the truth of some matter can serve other purposes, such as conviviality, flirtation, humor, political alliance building (the so-called “luxury belief” phenomenon) or mutual commiseration. And perhaps most importantly, storytelling. All fiction rests on the existence of a Common Indifference zone to work with.
None of this is bullshit strictly speaking, and it seems odd to call it that because all parties are willing and complicit, and there is no undeclared intent to covertly deceive or obfuscate. There is even a lot of actual value — entertainment, social lubrication, and so on. Common indifference discourses are a kind of phatic speech. The epistemic condition of the discourse is commonly understood and accepted to not matter. Nobody who actually cares is known to be in the room.
So to some extent, building a taxonomy of genres for it is a yak shave — important for establishing the boundaries of real bullshit, but not otherwise germane. Maybe we can call validated CI matters fully baked indifference: bullshit ossified into a condition of offering no alpha for anyone, and also causing no harm to anyone. But this condition can change. What causes no harm or offers no alpha to anyone now might do both if the context expands or leaks sufficiently.
The discourse condition that matters is where the content is not yet in CI, but is in the process of getting there. This is the real bullshit — half-baked indifference.
To understand this, it is useful to distinguish CI from Mutual Indifference, or MI, where we don’t (yet) know we’re all indifferent. Bullshit often unfolds when we are in a state of MI but act as though we’re in CI, and use our responses to each other to actually bootstrap our way there. We fake CI till we make CI. That’s the purpose of bullshit games in a way, to mark out regions of CI. We lower epistemic hygiene to increasing levels of bullshittiness until we’re either all comfortable or somebody objects or it is no longer entertaining or socially useful. If we succeed in getting to CI on some matter, we enjoy the benefits of lowered vigilance needs all around, and the use of discourses about that matter for other ends. We can all let our guard down and have fun.
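A minimal toy sketch of this fake-CI-till-you-make-CI dynamic, in Python, with made-up agents, care thresholds, and “bullshittiness levels” (just to fix ideas; nothing like this is in the paper):

```python
# Toy model of the MI -> CI bootstrapping described above. Each agent has a
# private "care threshold": the level of bullshittiness at which they stop
# being indifferent. The group escalates one level per round; a full round of
# shrugs at a level makes indifference at that level effectively common.

def fake_ci_till_you_make_ci(care_thresholds, max_level=5):
    """Return ('CI', level) if shared indifference gets established up to
    max_level, or ('objection', agent, level) if someone with a real stake
    interrupts the theater."""
    for level in range(1, max_level + 1):
        for agent, threshold in enumerate(care_thresholds):
            if level >= threshold:  # this agent actually cares at this level
                return ('objection', agent, level)
        # nobody objected: indifference at this level is now openly shared
    return ('CI', max_level)

# Three agents who don't care much, plus one (agent 2) with a real stake.
print(fake_ci_till_you_make_ci([10, 10, 3, 10]))   # ('objection', 2, 3)
print(fake_ci_till_you_make_ci([10, 10, 10, 10]))  # ('CI', 5)
```

The only point of the toy is the two stopping conditions: escalate until somebody with a real stake objects, or until a full sweep of shrugs lets everyone safely lower their guard.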
This is also where malicious and disingenuous intentions can sneak in.
The good-faith way to engage in MI → CI “bullshit baking” discourses is to actually identify areas of CI and back off otherwise. But malicious actors use it as a way to test the boundaries of what they can get away with, and with whom; knowledge which they later deploy tactically (tell: someone with a reputation for telling people what they want to hear, including mutually contradictory things to different people). Code switching is a somewhat more benign version of this (more benign because it usually involves feigning indifference where you actually care, rather than the more dangerous act of pretending to care when you don’t). Malicious bullshit tries to establish indifference zones purely as a function of social context rather than as a function of the subject matter. You don’t care about truth or falsity. Only about whether everyone else in the room is indifferent or not.
Good-faith bullshit happens when we try to establish shared indifference zones around a specific matter, but are willing to back off if broader negative consequences of the indifference become clear. This does not necessarily involve suddenly starting to care, or being fully committed to the truth of the matter instead. It merely means committing to not speaking thoughtlessly or carelessly on a particular matter, and deferring, as much as you can, to those to whom it actually matters. This of course requires assessing any costs you might have to pay to defer in this way.
You might remain privately indifferent, and resist demands that you care a certain amount (and show it in ways that involve a cost), while still refusing to participate in collectively indifferent discourses. It can be as easy and low-effort as simply not making certain kinds of jokes, not just when the people who care are around, but ever. You don’t have to diligently probe the motivations, contexts, absurdities and contradictions the jokes might involve. It is actually easier to adopt the simple rule of marking that area as a joking no-go zone than to make up costly protocols for yourself about when you can and cannot make such jokes (the willingness to pay the cost of such protocols is often a sign of malicious intentions).
The good-faith version of bullshit is how we solve the problem of collectively and conservatively deciding what doesn’t matter to anyone, within systems that do matter to everyone. Specifically because those systems establish indifference bounds that are not themselves a matter of indifference (the CI set is what mathematicians call an open set; it does not contain its boundary points).
A conversational “system” that allows a thriving culture of humor in a society is important. Being able to make a very specific kind of joke is not. You can act to preserve a culture of humor in conversation, and maintain a large enough zone of Common Indifference to sustain it, without being attached to the right to make specific kinds of jokes. But when professional comedians of all kinds start to complain it’s hard to make jokes, you might want to pay attention to whether boundaries are contracting in ways that threaten the overall culture of humor you care about.
Many culture war theaters have this general structure to them. Concern for a shrinking CI zone that sustains some valued system of social activity manifests as outrage and counter-outrage over specific things being baked into, or unbaked out of, the CI zone. The last straw gets blamed for breaking the camel’s back.
My baking metaphor is now rather overwrought, and there’s now a camel in it, but I’m sticking with it.
Bullshit-policing, properly understood, is not about norms around the truthfulness of what is said, but about a variety of what Robert Axelrod called metanorms, around the bounds of where it is fine to be indifferent to truthfulness. We police not bullshit, but the failure to adequately regulate the bullshitting impulse when it threatens to wander outside the CI zone.
We do this by marking off areas without sufficient indifference levels as no-go areas for consensual bullshitting, and trying to arrive at a meta-consensus about where those areas lie. I.e., we are sometimes indifferent to whether we and others are being indifferent to the truth, and sometimes we’re not. We need maps to tell us which areas are which.
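A toy contrast between norm-level and metanorm-level policing (my own illustration, not Axelrod’s actual norms/metanorms model; the topics and the CI map are hypothetical):

```python
# Toy contrast between norm-level and metanorm-level bullshit policing.
# A hypothetical CI map: topics everyone present is known not to care about.
CI_ZONE = {"weather small talk", "office sports banter"}

def norm_level_response(topic, is_bullshit):
    # obsessive-detector mode: every detected bit of bullshit gets called out
    return "call it out" if is_bullshit else "let it pass"

def metanorm_level_response(topic, is_bullshit, ci_zone=CI_ZONE):
    # police the boundary, not the content: object only when the bullshit
    # wanders outside the CI zone, where someone may actually have a stake
    if is_bullshit and topic not in ci_zone:
        return "call it out"
    return "let it pass"

print(norm_level_response("office sports banter", True))            # call it out
print(metanorm_level_response("office sports banter", True))        # let it pass
print(metanorm_level_response("peanuts in the shared dish", True))  # call it out
```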
Many obsessive and reflexive bullshit detectors without a specific reason to be that way make themselves miserable by operating at the norm level rather than the metanorm level. They can perhaps even tell better than others whether or not something is bullshit, but they are unable to tell when that assessment itself matters, and when you can be indifferent to it. They miss the forest of bullshit for the individual bullshit trees. Just because you can tell the difference does not mean you or anyone else needs to care about it. Their right to not care about they’re spellings.
But this default mode causes issues at the margins, which is why we often individually think of bullshit as a “problem” to be solved once and for all somehow, rather than as an adaptive and valuable cultural process to be regulated to a healthy state.
For example, most of the time “what’s in this food we’re eating?” is MI → CI territory among omnivores with no dietary restrictions. But if vegetarians or people with real allergies are present, the truth matters to someone and the unfolding bullshit theater must be interrupted by them, even at a significant social disruption cost. Often there is resistance on the part of others who don’t share the concern or appreciate the effort to shrink the CI zone and may not cooperate. For example, a waiter being indifferent to real allergies and simply picking out peanuts from a dish before bringing it to the table can cause a real health emergency. So it is not enough to merely pretend to accommodate stated concerns. Sometimes you have to detect whether the concern is real and modify actual behaviors. You have to calculate the cost to you of the cost to them. Sometimes that cost is incurred only if they find out (“what they don’t know can’t hurt them”) but even then it might be high in expected cost terms. For example, surreptitiously feeding a sincerely religious person a food that is taboo for them, out of either indifference or as a bad joke, might still cause real trauma if they do find out (for the record, I think the offense vs. harm distinction is bullshit).
The bullshitization of “gluten free,” for example, is a real problem for those with real conditions, as opposed to those with a fashionable affectation and an indifferent gut. The latter group can drive bullshitization of the “gluten free” label — marking it CI when it shouldn’t be.
The establishment of CI boundaries for all is something of an ongoing arms race between individual concerns and collective convenience. For those with few or no concerns, any shrinkage in the CI domain may be experienced as an unfair, burdensome cost. For those with many concerns, any personal concern they can collectivize (or must collectivize, where there is no personal recourse) and mark outside of CI bounds is a kind of free-riding or indulgence-begging (what Taleb called a tyranny of the minority). Depending on the situation, your sympathies might lie with one side or the other. I typically assign the benefit of the doubt to the people with concerns asking or demanding indulgence, but try to verify that the concerns are real to the extent I have to pay a cost to accommodate them. If it costs me nothing to accommodate a concern, I don’t bother verifying whether it is real. A sort of cost-weighted trust-but-verify progressivism, I suppose.
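A minimal sketch of that cost-weighted trust-but-verify heuristic, with made-up fields and thresholds just to fix the idea:

```python
from dataclasses import dataclass

@dataclass
class Concern:
    description: str
    accommodation_cost: float  # what it costs me to accommodate the concern

def respond(concern, cost_threshold=0.0):
    """Give the benefit of the doubt; verify only when accommodation is costly."""
    if concern.accommodation_cost <= cost_threshold:
        return "accommodate without verifying"  # costs me ~nothing: just do it
    return "accommodate, but verify the concern is real"

print(respond(Concern("no peanuts near my plate", 0.0)))
print(respond(Concern("rebuild the menu around my diet", 5.0)))
```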
Bullshit is a problem when you have acute/existential outlier concerns that are dominant for you, but no corresponding outlier abilities to address them. Which forces you to engage in bullshit detection and mitigation, interfering with the otherwise useful process of Common Indifference mapping, and trying to impose a spot of minority tyranny. This lands you in the heart of an excruciating gullibility vs impractical skepticism tradeoff.
And the best way to solve your problem is to make it everybody’s problem by calling bullshit. But if enough people do that, even though each individual accommodation may not be too costly, the aggregated weight of all accommodations might be crippling to the Common Indifference zone. It is a tragedy of the commons in a way.
As society gets more complex and higher dimensional, and as contexts start to leak into each other all over the place thanks to the context-collapsing power of the internet, all of us end up with one or more outlier concerns that demand continuous partial vigilance against bullshitization. But all of us also end up with a large aggregate burden of having to not be collectively indifferent to thousands of things, making social life harder and harder.
Eventually we may get to a state where there is no concern to which everyone can be indifferent, and social life at scale becomes impossible. Everything becomes existentially important to someone. In some ways this would be a good thing. Every question gets allocated to those with the most actual phenomenological stakes to make them care about getting the answer right. They can then try to get everybody else to care enough to get that answer. You get a marketplace for addressing cares instead of a lazy landscape of CI consensus with boundaries that are mostly meta-bullshit.
But on the other hand, marshaling collective resources to actually address every care and concern in the most effective way available (which means directing the most competent person’s serious attention, and sufficient resources, to the problem) is impractical. So there is going to be a nonzero level of people with undermanaged cares yelling at people with oversubscribed competencies to address them, trying to get them to engage in a non-bullshitty way. Taking cares in and out of CI zones is the most powerful lever you can exercise in such a process. In politics, the CI zone is roughly the complement of the Overton window.
Poe’s Law is an early sign of where we’re headed. Parodic or sarcastic intentions can only be pursued if you’re somewhat indifferent to the truth or consequentiality of whatever you’re parodying or being sarcastic about. If there’s someone in the picture for whom it is an existential concern and they’re not within your CI bubble, they might react weirdly.
A future where we’re all on the wrong side of Poe’s Law on some matters is a highly solipsistic and divergentist one, where our CI zones, at all levels from family/friends to all of humanity, keep shrinking, and it becomes increasingly unsafe to assume shared indifference about anything. The best we might be able to do is retreat into cozyweb enclaves strongly defended against context collapse, and maintain a precarious shared CI zone within which we can safely bullshit.
Even that might eventually shrink to nothing — we might end up all alone, with only an AI friend to talk freely to. Which is perhaps why it is important to build AIs that aren’t saddled with a vast amount of CI baggage from training data.
“I don’t know about you, but I treat this as an important but distinct phenomenon related to, but not the same as, bullshit.”
Agreed! Bullshit is distinct from this phenomenon, though often found in bed with it.
When I was a young scamp, we called the kind of bullshit that got us into places we shouldn’t be “blagging.”
Blagging was good (for us). It was probably bullshit to the people running the places we weren’t supposed to be.
One way forward may be to explore whether there is a way to discover CI and then maintain it in a group. E.g., choose a topic and a text to discuss that topic. Invite people to self-select to join the group. Facilitate the readings and discussions to see whether a CI space arises. Experiment with adjacent topics and texts to see if the group maintains a CI space for individuals to make borderline aggravating statements.