Seems implicit in your analysis, but friend-of-a-friend connections also seem likely to add value above 150. How many friends of friends might there be with levels of Dunbar overlap that are (mostly?) below direct trust levels but still add more expressivity than what I might give a stranger? I think a lot. Thanks for this!
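A back-of-the-envelope sketch of how big that friend-of-friend pool could be (the overlap fraction is a pure assumption, just to make the point):

```python
# Rough size of the friend-of-friend pool beyond Dunbar's 150.
# OVERLAP is an illustrative assumption, not an empirical value.
DUNBAR = 150       # direct ties
OVERLAP = 0.4      # assumed fraction of a friend's circle I already know

# Each friend contributes ~150 ties, minus the assumed overlap with my
# own circle, minus themselves. Ignores overlap among the friends-of-
# friends themselves, so treat it as an upper-ish bound.
fof_pool = DUNBAR * (DUNBAR * (1 - OVERLAP) - 1)
print(f"~{fof_pool:,.0f} friend-of-friend ties at {OVERLAP:.0%} overlap")
# -> ~13,350: two orders of magnitude above 150, even before discounting
```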
Awesome essay! You're in a rich vein here.
Common themes below (tl;dr): reductionist math/physics models from the 20th century giving way to nonlinear dynamical models, dimensionality reduction, and multi-scale techniques (renormalization group). See systems neuroscience and complexity economics for models of scaling better matched to social systems than most traditional computer-systems models or overly simplified (linearized) economics models.
I'd recommend looking for lessons in recent systems neuroscience work as well. We now have the large-scale neural recording data needed to apply nonlinear dynamical systems models to neural populations (not individual neurons) and uncover design principles (protocols) that allow self-organizing coordination across (neural) scales despite significant unreliability of the subcomponents (not full trust minimization, but close in spirit). A recent podcast with a few luminaries in that field might trigger some Venkat grad school flashbacks:
https://braininspired.co/podcast/220/
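For anyone who wants the gist of that move without the podcast, here's a toy sketch (synthetic data, made-up sizes): hundreds of noisy recorded units turn out to share just a handful of latent dimensions, which is where the population-level "protocol" lives.

```python
# Toy version of the systems-neuroscience move: record many noisy units,
# then find that population activity lives on a much lower-dimensional
# manifold. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
T, N_LATENT, N_NEURONS = 1000, 3, 200

# 3 latent "dynamics" (slow sinusoids standing in for real dynamics)
t = np.linspace(0, 20, T)
latents = np.stack([np.sin(t), np.sin(0.5 * t + 1), np.cos(0.3 * t)], axis=1)

# Each of 200 "neurons" is a noisy random mixture of the latents
mixing = rng.normal(size=(N_LATENT, N_NEURONS))
activity = latents @ mixing + 0.5 * rng.normal(size=(T, N_NEURONS))

# PCA via SVD: variance explained per component
centered = activity - activity.mean(axis=0)
sing_vals = np.linalg.svd(centered, compute_uv=False)
var_explained = sing_vals**2 / np.sum(sing_vals**2)
print("Top 5 PCs:", np.round(var_explained[:5], 3))
# The first ~3 components capture most of the variance: 200 unreliable
# units, ~3 reliable collective dimensions.
```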
Since the dismal science also made an appearance in the post, it's worth noting that the field mostly stopped importing new math and physics models sometime in the last century, with the exception of the Complexity Economics folks like Doyne Farmer at Oxford (noteworthy for building models with nonlinear feedback mechanisms, from which low-dimensional 'rules' emerge that would otherwise have to be engineered in directly, as in simpler command-economy models):
https://www.preposterousuniverse.com/podcast/2024/10/21/293-doyne-farmer-on-chaos-crashes-and-economic-complexity/
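A minimal sketch of that kind of model, loosely in the Brock-Hommes spirit (every parameter here is invented for illustration): two forecasting strategies, agents drifting toward whichever worked better lately, and the aggregate price behavior emerging from the loop rather than being specified anywhere.

```python
# Toy nonlinear-feedback market in the complexity-economics spirit.
# Trend-followers and fundamentalists coexist; agents drift toward
# whichever strategy forecast better recently. No one writes down
# "the" price rule -- it emerges from the switching feedback.
import numpy as np

rng = np.random.default_rng(1)
fundamental = 100.0
price, prev_price = 101.0, 100.0
trend_share = 0.5                     # fraction using trend-following
trend_prev = fund_prev = price        # last period's forecasts

for step in range(200):
    # Fitness: how well did each forecast do against the realized price?
    trend_err = abs(trend_prev - price)
    fund_err = abs(fund_prev - price)
    # Softmax switching: the nonlinear feedback at the heart of the model
    w = np.exp(-5.0 * trend_err)
    trend_share = w / (w + np.exp(-5.0 * fund_err))

    # New forecasts, then the realized price (plus small exogenous shocks)
    trend_fore = price + 0.8 * (price - prev_price)      # extrapolate
    fund_fore = price + 0.3 * (fundamental - price)      # mean-revert
    new_price = (trend_share * trend_fore
                 + (1 - trend_share) * fund_fore
                 + rng.normal(0, 0.5))

    trend_prev, fund_prev = trend_fore, fund_fore
    prev_price, price = price, new_price
    if step % 50 == 0:
        print(f"step {step:3d}  price {price:7.2f}  trend share {trend_share:.2f}")
```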
Worth noting that the nonlinear dynamics in both of the above are also key to why scaling LLM training (which most experts did not predict would work) works at all, yielding models we then have to understand with tools much like the ones systems neuroscientists use: mechanistic interpretability, applied after training to figure out why the thing works. So the AI artifacts described in this post as new to social scaling are themselves the product of some understudied and under-applied (until recently) models from physics/math, which should be incorporated into any protocolized social scaling as well. Much like in the first podcast: if these principles/protocols of multi-scale [neural] coordination persist and are preserved across wildly divergent species, then they must be adaptive for a wide variety of environments where learning at many different timescales is critical to survival. Perhaps some of those principles persist at societal scale too, even if we understand them no better than we understood neural recordings before we moved from reductionist neuroscience to systems neuroscience.
If the 'crises' of the current moment are partly a change in the 'fitness landscape' of our social systems, then the protocols we choose can't pick only a single preferred timescale (re: Hysteresis of Mark-Makers) for adaptation, but must develop flexible mechanisms capable of responding (simultaneously) at multiple timescales. A potential analogy from the first podcast is the way neural structures create constraints on adaptive behavior and learning, a kind of 'bounding of the space of potential protocols' without precisely specifying a single brittle protocol that (overly) depends on its subcomponents. To the point of the post, these multi-layered (multi-scale) protocols don't take distrust all the way to the immune-system level (in a potential analogy, the brain's immune system is relatively weak compared to the rest of the body's).
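A tiny illustration of the single-timescale brittleness (all parameters invented): track a signal whose landscape shifts halfway through with one fast and one slow learner, and neither wins in both regimes.

```python
# Why a single adaptation timescale is brittle: track a "fitness
# landscape" that shifts halfway through, using one fast and one slow
# learner. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
T = 300
truth = np.where(np.arange(T) < 150, 0.0, 5.0)   # landscape shifts at t=150
signal = truth + rng.normal(0, 1.0, T)

def ema(x, alpha):
    """Exponential moving average: one learning timescale (~1/alpha steps)."""
    out, est = [], 0.0
    for v in x:
        est += alpha * (v - est)
        out.append(est)
    return np.array(out)

fast, slow = ema(signal, 0.3), ema(signal, 0.01)

pre = slice(50, 150)     # stable stretch (after burn-in)
post = slice(150, 180)   # right after the landscape shifts
for name, est in [("fast", fast), ("slow", slow)]:
    err = est - truth
    print(f"{name}: stable RMSE {np.sqrt(np.mean(err[pre]**2)):.2f}, "
          f"post-shift RMSE {np.sqrt(np.mean(err[post]**2)):.2f}")
# slow wins while the landscape is stable; fast wins right after it
# shifts. A protocol locked to one timescale loses one way or the other.
```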
'Societal expressivity' here evokes for me Josef Pieper's argument that leisure is the basis of culture: culture comes out of surplus societal expressivity bandwidth. Stuff like leisure (literally surplus time), boredom, and waste materials remixable as art supplies. Culture grew out of bored cavemen with stuff that could be repurposed as 'paint', bored slaves singing in fields, and long indoor winters with leftover wood and carving implements. Surplus not as waste, but as soil.
Also, if we expand the notion of trust beyond just societal (vs. other people) to also informational (vs. entropy, distance, and time), the history of media is basically also a history of trust-scaling. Writing lets you trust that readers will see the same symbols you put down. Postal services let you trust that a letter can reach someone else fairly reliably. Internet protocols let us trust that what I'm typing now will be mostly what everyone reading this sees. Shannon-era communications theory seems to have helped us largely solve informational trust.
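The whole Shannon move fits in a few lines: buy informational trust from an unreliable channel by spending redundancy. Here's the crudest possible version, a 3x repetition code with majority vote (flip probability chosen just for illustration):

```python
# Informational trust from an untrusted channel, in miniature: a 3x
# repetition code with majority-vote decoding, the simplest possible
# error-correcting code.
import random

random.seed(0)

def noisy_channel(bits, flip_prob=0.1):
    """Each bit independently flips with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in bits]

def encode(bits):
    return [b for b in bits for _ in range(3)]           # repeat each bit 3x

def decode(bits):
    return [int(sum(bits[i:i+3]) >= 2) for i in range(0, len(bits), 3)]

msg = [random.randint(0, 1) for _ in range(10_000)]
raw_errs = sum(a != b for a, b in zip(msg, noisy_channel(msg)))
coded_errs = sum(a != b for a, b in zip(msg, decode(noisy_channel(encode(msg)))))
print(f"raw error rate:   {raw_errs / len(msg):.3f}")    # ~0.100
print(f"coded error rate: {coded_errs / len(msg):.3f}")  # ~0.028 (3p^2 - 2p^3)
```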
What's the equivalent for societal trust? Maybe we've already scaled trust in one sense; we just now have to find ways to civilize it. Sure, we haven't quite cracked the societal part, but surely a species that can invent generative AI and split the atom can also imagine its way beyond Dunbar numbers, corporations, and markets. Over to you, Symposians.
generatively interesting. makes me rethink what trust is. i am thinking blockchain treated trust like a transistor: a small, reliable component. which it is, in the sense of messages passed between byzantine generals rather than little red riding hood taking cookies to grandma. but, of course, the value left on the table is critical, and perhaps worth more than creating tools that one can build upon.
two questions:
1. how do you apply this analysis to a trust-powered company like airbnb?
2. if a bubble pops, we have infrastructure lying around. how does that apply to protocols?
This is great. Another form that comes to mind here is the "algorithm" and people's recent relationship to it. They treat it as a trainable agent, priming it to give them more of what they want to see, while also wanting to be surprised, as in slang like "legendary FYP pull".
Other thoughts: stigmergy (leaving cues in the information environment, like likes or bookmarks) or mycology (which reliably one-shots a lot of people).
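Stigmergy in particular is easy to demo (toy numbers throughout): agents never talk to each other, only read and reinforce marks in a shared environment, and a consensus still condenses out.

```python
# Stigmergic coordination in miniature: agents interact only through
# marks left in a shared environment (think likes/bookmarks standing in
# for pheromone). No agent-to-agent messages; a consensus trail still
# emerges. All parameters are toy values.
import random

random.seed(3)
N_OPTIONS, STEPS, EVAPORATION = 5, 500, 0.02
marks = [1.0] * N_OPTIONS          # cue strength per option, uniform start

for _ in range(STEPS):
    # Pick an option with probability proportional to existing marks,
    # then leave a fresh cue behind (rich-get-richer feedback)
    choice = random.choices(range(N_OPTIONS), weights=marks)[0]
    marks[choice] += 1.0
    marks = [m * (1 - EVAPORATION) for m in marks]   # old cues fade

print([round(m, 1) for m in marks])
# One option ends up holding most of the signal, coordinated purely
# through environmental traces.
```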
But yeah. Ultimately, very good food for thought, because you're right: there are examples of different forms of post-Dunbar scaling that don't fit Szabo-style scaling. Like, what is Wikipedia?
Is an AI that is pro societal expressivity inherently doomer and anti-alignment? As in: it may only take a single expression to apocalypsize, and alignment with infinite expressions is absurd.
Is it fair to call cosmopolitan scaling a generative multiculture technology (opposing the Monoculture drive Robin Hanson writes about)?
Will cosmopolitan scale be directly related to the cost of sufficiently sophisticated AI simulation/generation? Is the protocols part downstream or upstream of the AI part? (Because I'm guessing it's only significant if it's upstream.)
Do these questions make any sense or am I misreading this essay entirely?