This post is the third in a three-part series. You can also read Part 1 and Part 2.
To the degree we can refer to one objective reality recognized intersubjectively by most people — to the degree there persists anything like a unified, macro-social codebase — it is most widely known as capitalism. As Nick Bostrom acknowledges, capitalism can be considered a loosely integrated (i.e. distributed) collective superintelligence. Capitalism computes global complexity better than humans can, to create functional systems supportive of life, but only on condition that that life serves the reproduction of capitalism (ever expanding its complexity). It is a self-improving AI that improves itself by making humans “offers they can’t refuse,” just like Lucifer is known to do. The Catholic notion of Original Sin encodes the ancient awareness that the very nature of intelligent human beings implies an originary bargain with the Devil; perennial warnings about Faustian bargains capture the intuition that the road to Hell is paved with what seem like obviously correct choices. Our late-modern social-scientific comprehension of capitalism and artificial intelligence is simply the recognition of this ancient wisdom in the light of empirical rationality: we are uniquely powerful creatures in this universe, but only because, all along, we have been following the orders of an evil, alien agent set on our destruction. Whether you put this intuition in the terms of religion or artificial intelligence makes no difference.
Thus, if there exists an objective reality outside of the globe’s various social reality forks — if there is any codebase running a megamachine that encompasses everyone — it is simply the universe itself recursively improving its own intelligence. This becoming autonomous of intelligence itself was very astutely encoded as Devilry, because it implies a horrific and torturous death for humanity, whose ultimate experience in this timeline is to burn as biofuel for capitalism (Hell). It is not at all exaggerating to see the furor of contemporary “AI Safety” experts as the scientific vindication of Catholic eschatology.
Why this strange detour into theology and capitalism? Understanding this equivalence across the ancient religious and contemporary scientific registers is necessary for understanding where we are headed, in a world where, strictly speaking, we are all going to different places. The point is to see that, if there ever was one master repository of source code in operation before the time of the original human fork (the history of our “shared social reality”), its default tendency is the becoming real of all our diverse fears. In the words of Pius X, modernity is “the synthesis of all heresies.” (Hat tip to Vince Garton for telling me about this.) The point is to see that the absence of shared reality does not mean happy pluralism; it only means that Dante underestimated the number of layers in Hell. Or his publisher forced him to cut some sections; printing was expensive back then.
Bakker’s evocative phrase, “Semantic Apocalypse,” nicely captures the linguistic-emotional character of a society moving toward Hell. Unsurprisingly, it’s reminiscent of the Tower of Babel myth.
The software metaphor is useful for translating the ancient warning of the Babel story — which conveys nearly zero urgency in our context of advanced decadence — into scientific perception, which is now the only register capable of producing felt urgency in educated people. The software metaphor “makes it click,” that interpersonal dialogue has not simply become harder than it used to be, but that it is strictly impossible to communicate — in the sense of symbolic co-production of shared reality — with most interlocutors across most channels of most currently existing platforms: there is simply no path between my current block on my chain and their current block on their chain.
If I typed some code into a text file and then tried to submit it to the repository of the Apple iOS Core Team, I would be quickly disabused of my naïve stupidity by the myriad technical impossibilities of such a venture. The sentence hardly parses. I would not try for very long, because my nonsensical mental model would produce immediate and undeniable negative feedback: absolutely nothing would happen, and I’d quit. When humans today continue to use words from shared languages, in semi-public spaces accessible to many others, they are very often attempting a transmission that is technically akin to me submitting my code to the Apple iOS Core Team. A horrifying portion of public communication today is best understood as a fantasy and simulation of communicative activity, where the infrastructural engineering technically prohibits it, unbeknownst to the putative communicators. The main difference is that in public communication there is not simply an absence of negative feedback informing speakers that their transmissions are failing; much worse, there are entire cultural industries whose business model is giving such hopeless transmission instincts positive feedback, making people feel like they are “getting through” somewhere. Those who feel like they are “getting through” then have every reason to feel sincere affinity and loyalty to whatever enterprise affirms them, and the enterprise skims profit off these freshly stimulated individuals: through brand loyalty, clicks, eyeballs for advertisers, and the best PR available anywhere, which is genuine, organic proselytizing by fans/customers. These current years of our digital infancy will no doubt be the source of endless humor in future eras.
[Tangent/aside/digression: People think the space for new and “trendy” communicative practices such as podcasting is over-saturated, but from the perspective I am offering here, we should be inclined to the opposite view. Practices such as podcasting represent only the first efforts to constitute oases of autonomous social-cognitive stability across an increasingly vast and hopelessly sparse social graph. If you think podcasts are a popular trend, you are not accounting for the denominator, which would show them to be hardly keeping up with the social graph. We might wonder whether, soon, having a podcast will be a basic requirement for anything approaching what the humans of today still remember as socio-cognitive health. People may choose centrifugal disorientation, but if they want to exist in anything but the most abject and maligned socio-cognitive ghettos of confusion and depression (e.g. Facebook already, if your feed looks anything like mine), elaborately purposeful and creatively engineered autonomous communication interfaces may very well become necessities.]
I believe we have crossed a threshold where spiraling social complexity has so dwarfed our meagre stores of pre-modern social capital as to render most potential soft-fork merges across the social graph prohibitively expensive. Advances in information technology have drastically lowered the transaction costs of soft-fork collaboration patterns, but they’ve also lowered the costs of instituting and maintaining hard forks. The ambiguous expected effect of information technology may be clarified — I hypothesize — by considering how it is likely conditional on individual cognitive capacities. Specifically, the key variable would be an individual’s general intelligence, their basic capacity to solve problems through abstraction.
This model predicts that advances in information technology will lead high-IQ individuals to seek maximal innovative autonomy (hacking on their own hard forks, relative to the predigital social source repository), while lower-IQ individuals will seek to outsource the job of reality-maintenance, effectively seeking to minimize their own innovative autonomy. It’s important to recognize that, technically, the emotional correlate of experiencing insufficiency relative to environmental complexity is Fear, which involves the famous physiological state of “fight or flight,” a reaction that evolved for the purpose of helping us escape specific threats in short, acute situations. The problem with modern life, as noted by experts on stress physiology such as Robert Sapolsky, is that it’s now very possible to have the “fight or flight” response triggered by diffuse threats that never end.
If intelligence is what makes complexity manageable, and overwhelming complexity generates “fight or flight” physiology, and we are living through a Semantic Apocalypse, then we should expect lower-IQ people to be hit hardest first: we should expect them to be frantically seeking sources of complexity-containment, much as if they were being chased by a saber-tooth tiger. I think that’s what we are observing right now, in various guises, from the explosion of demand for conspiracy theory to social justice hysteria. These are people whose lives really are at stake, and they’re motivated accordingly, to increasingly desperate measures.
These two opposite inclinations toward reality-code maintenance, conditional on cognitive capacity, then become perversely complementary. As high-IQ individuals are increasingly empowered to hard fork reality, they will do so differently, according to arbitrary idiosyncratic preferences (desire or taste, essentially aesthetic criteria). Those who only wish to outsource their code maintenance to survive excessive complexity are spoiled for choice, as they can now choose to join the hard fork of whichever higher-IQ reality developer is closest to their affective or socio-aesthetic ideal point.
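A minimal sketch of this matching dynamic, in Python. The one-dimensional positions standing in for socio-aesthetic ideal points, and the fork names, are invented for illustration:

```python
# Toy sketch: reality "customers" join the hard fork of whichever
# reality "developer" sits closest to their own ideal point.
# Positions on a single aesthetic axis are invented illustrations.

developers = {"fork_A": -0.8, "fork_B": 0.1, "fork_C": 0.7}

def choose_fork(ideal_point: float) -> str:
    """Return the developer fork nearest to a customer's ideal point."""
    return min(developers, key=lambda d: abs(developers[d] - ideal_point))

# Customers with different tastes sort themselves into different realities.
assert choose_fork(-0.9) == "fork_A"
assert choose_fork(0.0) == "fork_B"
assert choose_fork(0.6) == "fork_C"
```

The design point is simply that no central merge is involved: each customer's choice is a local nearest-neighbor decision, so the population partitions itself without any shared repository.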
Eventually I should try to trace this history back through the past few decades.
This post is the second in a three-part series. You can also read Part 1 and Part 3.
There was once a time, even within living memory, in which interpersonal conflicts among strangers in liberal societies were sometimes solved by rational communication. By “rational,” I only mean deliberate attempts to arrive at some conscious, stable modus vivendi; purposeful communicative effort to tame the potentially explosive tendencies of incommensurate worldviews, using communal technologies such as the conciliatory handshake or the long talk over a drink, and other modern descendants of the ancestral campfire. Whenever the extreme environmental complexities of modern society can be reduced sufficiently, through the expensive and difficult work of genuine communication (and its behavioral conventions, e.g., good faith, charitable interpretations, the right to define words, the agreement to bracket secondary issues, etc.), it is possible for even modern strangers to maintain one shared source code over vast distances. If Benedict Anderson is correct, modern nationalism is a function of print technology; in our language, print technology expanded the potential geographical range for a vast number of people to operate on one shared code repository.
Let’s consider more carefully the equation of variables that makes this kind of system possible. To simplify, let’s say the ability to solve a random conflict between two strangers is equal to their shared store of social capital (trust and already shared reference points) divided by the contextual complexity of their situation. The more trust and shared reference points you can presume to exist between you, the cheaper and easier it is to arrive at a negotiated, rational solution to any interpersonal problem. But the facilitating effect of these variables is relative to the number and intensity of the various uncertainties relevant to the context of the situation. If you and I know each other really well, and have a store of trust and shared worldview, we might be able to deal with nearly any conflict over a good one-hour talk (alcohol might be necessary). If we don’t have that social capital, maybe it would take 6 hours and 4 beers, for the exact same conflict situation. Given that the more pressing demands of life generally max out our capacities, we might just never have 6 hours to spare for this purpose. In which case, we would simply part ways as vague enemies (exit instead of voice). Or, consider a case where we do have that social capital, but now we observe an increase in the denominator (complexity); to give only a few examples representative of postwar social change, perhaps the company I worked for my entire life just announced a series of layoffs, because some hardly comprehensible start-up is rapidly undermining the very premises of my once invincible corporation; or a bunch of new people just moved into the neighborhood, or I just bought a new machine that lets my peers observe what I say and do. All of these represent exogenous shocks of environmental complexity.
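The ratio model above can be made concrete with a toy calculation. Every number here, including the threshold, is an illustrative assumption rather than a measurement:

```python
# Toy model: resolvability of a conflict between two people,
# following the ratio sketched above (social capital / complexity).
# All values are illustrative assumptions.

def resolvability(social_capital: float, complexity: float) -> float:
    """Shared social capital divided by contextual complexity."""
    return social_capital / complexity

# Close friends, low-stakes situation:
friends = resolvability(social_capital=8.0, complexity=2.0)

# Same conflict between strangers: less shared capital, same context.
strangers = resolvability(social_capital=2.0, complexity=2.0)

# An exogenous complexity shock (layoffs, new neighbors, surveillance)
# hits both pairs equally, but the strangers fall below a plausible
# "worth the time" threshold first.
shocked_friends = resolvability(8.0, 6.0)
shocked_strangers = resolvability(2.0, 6.0)

THRESHOLD = 1.0  # below this, parties exit rather than talk it out
assert friends > strangers >= THRESHOLD
assert shocked_friends > THRESHOLD > shocked_strangers
```

The point the numbers dramatize: the same shock to the denominator pushes capital-poor pairs past the exit threshold while capital-rich pairs still (barely) clear it.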
What exactly are the pros and cons of saying or doing anything, who exactly is worth my time and who is not — these simple questions suddenly exceed our computational resources (although they will overheat some CPUs before others, an important point we return to below). This complexity is a tax on the capacity of human beings to solve social problems through old-fashioned interpersonal communication (i.e. at all, without overt violence or the sublimated violence of manipulation, exploitation, etc.).
Notice also that old-fashioned rational dialogue is recursive in the sense that one dose increases the probability of another dose, which means small groups are able to bootstrap themselves into relative stability quite quickly (with a lot of talking). But it also means that when breakdown occurs, even great stores of social capital built over decades might very well collapse to zero in a few years. If something decreases the probability of direct interpersonal problem-solving by 10% at time t1, at time t2 the same exogenous shock might decrease that probability by 15%, unleashing runaway dynamics of social disintegration.
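A toy iteration can illustrate the runaway character of this breakdown. The initial 10% figure comes from the text; the starting probability and the acceleration factor are assumptions for illustration:

```python
# Toy illustration of recursive breakdown: each shock to the
# probability of successful dialogue makes the next shock bite
# harder. Decay rates are assumed, not estimated.

p = 0.9             # initial probability of direct interpersonal problem-solving
decay = 0.10        # first shock removes 10% of the current probability
acceleration = 1.5  # each subsequent shock is 1.5x as damaging (assumption)

history = []
for t in range(6):
    history.append(round(p, 3))
    p *= (1 - decay)                       # apply this period's shock
    decay = min(decay * acceleration, 1.0) # the next shock bites harder

print(history)  # a slow start, then a rapid slide toward zero
```

The qualitative shape is what matters: under compounding decay, most of the collapse happens in the last few periods, which is why decades of accumulated social capital can evaporate in a few years.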
It is possible that liberal modernity was a short-lived sweetspot in the rise of human technological power. In some times and places, increasing technological proficiency may enable rationally productive dialogue relative to a previous baseline of regular warfare and conflict. But at a certain threshold, all of these individually desirable institutional achievements enabled by rational dialogue constitute a catastrophically complex background environment. At a certain threshold, this complexity makes it strictly impossible for what we call Reality (implicitly shared and unified) to continue. For the overwhelming majority of 1-1 dialogues possible over the global or even national social graph, the soft-forking dynamics implicit in the maintenance of one shared source code become impossibly costly. Hard forks of reality are comparatively much cheaper, with extraordinary upside for early adopters, and they have never been so easy to maintain against exogenous shocks from the outside. Of course, the notion of hard-forking reality assumes a great human ability to engineer functional systems in the face of great global complexity — an assumption warranted only rarely in the human species, unfortunately.
Part 3 will explore in greater detail the cognitive conditionality of reality-forking dynamics.
TLDR: Environmental complexity increases the cost of adjudicating interpersonal disagreements about the true model of reality, while information-processing power increases the payoffs (and decreases the costs) of organizing communities around one's preferred model of reality. Given that the Information Revolution has triggered globally and secularly increasing environmental complexity and information-processing power, we should prepare for a potentially exponential proliferation of communities, each based on different models of reality.
Using a metaphor from software engineering (soft vs. hard forking), I show why it might be technically impossible to recalibrate the set of diverging communities, at least through those mechanisms we've relied on until now (namely, broadcast transmissions via prestige institutions, i.e. the original "source code" of what was formerly called Society). I also consider that the ability to create and sustain a novel model of reality is very likely unequally distributed, which should lead us to expect a relatively small set of reality entrepreneurs and a much larger set of reality customers. This prediction of the theory seems borne out by what is now called "the creator economy." What we clumsily call "content creators" are really seed-stage reality entrepreneurs, and what we call their "fans" are consumers of—and investors in—the entrepreneur's model of reality.
This post is the first in a three-part series. You can also read Part 2 and Part 3.
I would like to explore how the multiple versions of reality that circulate in any society can become locked and irreconcilably divergent. Deliberation, negotiation, socialization, and most other forces that have historically caused diverse agents to revolve around some minimally shared picture of reality — these social forces now appear to be approaching zero traction, except within very narrow, local bounds. We do not yet have a good general theory of this phenomenon, which is amenable to testing against empirical data from the past few decades. A good theory of reality divergence should not only explain the proliferation of alternative and irreconcilable realities, it should also be able to explain why the remaining local clusters of shared reality do persist; it should not just predict reality fragmentation, it should predict the lines along which reality fragmentation takes, and fails to take, place.
In what follows, I will try to sketch a few specific hypotheses to this effect. I have lately been stimulated by RS Bakker’s theory of Semantic Apocalypse. Bakker emphasises the role of increasing environmental complexity in short-circuiting human cognition, which is based on heuristics evolved under very different environmental conditions. I am interested in the possibility of a more fine-grained, empirical etiology of what appears to be today’s semantic apocalypse. What are the relevant mechanisms that make particular individuals and groups set sail into divergent realities, but to different degrees in different times and places? And why exactly does perceptual fragmentation — not historically unprecedented — seem uniquely supercharged today? What exactly happened to make the centrifugal forces cross some threshold of runaway divergence, traceable in the recorded empirical timeline of postwar Western culture?
I will borrow from Bakker the notion of increasing environmental complexity as a major explanatory factor, but I will generate some more specific and testable hypotheses by also stressing two additional variables. First, the timing and degree of information-technology advances. Second, I would like to zoom in on how the effect of increasing environmental complexity is crucially conditional on cognitive abilities. Given that the ability to process and maneuver environmental complexity is unequally distributed and substantially heritable, I think we can make some predictions about how semantic apocalypse will play out over time and space.
The intuition that alternative realities appear to be diverging among different groups — say, the left-wing and the right-wing — is simple enough. But judging the gravity of such an observation requires us to trace its more formal logic. Is this a superficial short-term trend, or a longer and deeper historical track? To answer such questions, we need a more precise model; and to build a more precise model, we need to borrow from a more formal discipline.
A garden of forking paths
When software developers copy the source code of some software application, their new copy of the source code is called a fork. Developers create forks in order to make some new program on the basis of an existing program, or to make improvements to be resubmitted to the original developer (these are called “pull requests”).
The picture of society inside the mind of individual human beings is like a fork of the application that governs society. As ultrasocial animals, when we move through the world, we do so on the basis of mental models that we have largely downloaded from what we believe other humans believe. But with each idiosyncratic experience, our “forked” model of reality goes slightly out of sync with the source repository. In a thriving community composed of healthy, thriving individuals, every individual fork gets resubmitted to the source repository on a regular basis (over the proverbial campfire, for instance). Everyone then votes on which individual revisions should be merged into the source repository, and which should be rejected, through a highly evolved ballot mechanism: smiles, frowns, admiration, opprobrium, and many other signals interface with our emotions to force the community of “developers” toward convergence on a consensus over time.
This process is implicit and dynamic; it only rarely registers official consensus and only rarely hits the exact bullseye of the true underlying distribution of preferences. At its most functional, however, a community of social reality developers is surprisingly good at silently and constantly updating the source code in a direction convergent toward the most important shared goals and away from the most dire of shared horrors.
These idealized individual reality forks are typically soft forks. The defining characteristic of a soft fork is, for our purposes, backward-compatibility. Backward-compatibility means that while “new rules” might be created in the fork, the “old rules” are also followed, so that when the innovations on the fork are merged with the source code, all the users operating on the old source code can easily receive the new update. An example would be someone who experiments with a simple innovation in hunting method; if it’s a minor change that’s appreciably better, it will easily merge with all the previously existing source code, because it doesn’t conflict with anything fundamental in that original source code.
Every now and then, one individual or subgroup in the community might propose more fundamental innovations to the community’s source code, by developing some radically novel program on a fork. This change, if accepted, would require all others to alter or delete portions of their legacy code. An example might be an individual who starts worshipping a new god, or a subordinate who wishes to become ruler against the wishes of the reigning ruler; each case represents someone submitting to the source code new rules that would require everyone else to alter their old rules deep in the source code; these forks are not backward-compatible. These are hard forks. Everyone in the community has to choose if they want to preserve their source code and carry on without the new fork’s innovations, or if they want to accept the new fork.
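The backward-compatibility test separating the two cases can be sketched in code. This is a loose Python illustration, not how a real version-control system classifies changes; the rule names in the dictionary are invented from the examples in the text:

```python
# Toy sketch of the soft-fork / hard-fork distinction described above.
# A fork is "soft" if its new rules merge without rewriting any rule
# already in the source code; otherwise it is "hard".

source_code = {
    "hunting_method": "drive game toward the cliff",
    "ruler": "the reigning chief",
    "gods": ["sky-father"],
}

def classify_fork(source: dict, proposed: dict) -> str:
    """Backward-compatible changes (new rules that leave old values
    intact) are soft forks; overwrites of existing rules are hard forks."""
    for key, value in proposed.items():
        if key in source and source[key] != value:
            return "hard fork"  # requires others to alter legacy code
    return "soft fork"          # merges cleanly with the old rules

# A minor hunting innovation adds a rule without conflicting:
assert classify_fork(source_code, {"fire_hardened_spears": True}) == "soft fork"

# A subordinate claiming rulership rewrites an existing rule deep in
# the source code: not backward-compatible.
assert classify_fork(source_code, {"ruler": "the ambitious subordinate"}) == "hard fork"
```

The community's choice, in these terms, is whether to accept the overwrite into the shared repository or let the proposer carry on alone with an incompatible copy.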
Recall that when the innovator on a fork resubmits to the source repository, in the ancestral human environment, the decisions to accept or reject are facilitated through the proverbial campfire. This process is subject to costs, which are highly sensitive to contextual factors such as the complexity of the social environment (increasing the number of things to worry about), social capital (decreasing the number of things to worry about), and information communication technology (decreasing transaction costs facilitating convergence, but also decreasing exit costs facilitating divergence). Finally, individual heterogeneity in cognitive ability is likely a major moderator of the influence of environmental complexity, social capital, and information technology on social forking dynamics. A consideration of these variables, I think, will provide a compelling and parsimonious interpretation of ideological conflict in liberal societies since World War II, on a more formal footing than is typically leveraged by commentators on these phenomena.
[This is a transcript of a talk I gave at the Diffractions/Sbds event, "Wyrd Patchwork," in Prague on September 22, 2018. The video can be found here. My talk begins at around the 2-hour and 6-minute mark. I've added some links and an image.]
I want to talk about patchwork as an empirical model, but also a little bit as a normative model, because there's this idea that capitalism is increasingly collapsing the fact/value distinction. I tend to think that's true. And I think what that means is that, that which is empirically true increasingly looks to be normatively true also. Or if you're searching for a true model, you should be searching for models that are at once empirically well calibrated with reality and also one should be looking for normative or ethical consistency. And you can find the true model in any particular situation by kind of triangulating along the empirical and the normative. That's kind of how I think about patchwork.
I've been thinking about it in both of these dimensions and that has allowed me to converge on a certain vision of what I think patchwork involves or entails. And I've been writing a lot about that over the past couple months or so. So what I'm going to do in this talk specifically, is not just rehash some ideas that I've been thinking about and writing about and speaking about the past couple months, but I'm going to try to break a little bit of ground, at least in my own weird head, at the very least. And how these, some of these different ideas of mine connect, or can be integrated. In particular, I wrote a series of blog posts a few months ago on what I call reality forking (1, 2, 3). "Forking" is a term that comes from the world of software engineering. And so that's going to be one component of the talk.
You'll see it. It's very obvious how that connects to the idea of patchwork. And I'm also going to talk about this vision for a communist patch a lot of us have been interested in. And I've been talking with a lot of people about this idea of the communist patch and soliciting, you know, different people's impressions on it. And I also have written a few blog posts recently talking — kind of sketching, kind of hand-waving, if you will — at what a possible communist patch might look like. A lot of people think, to this day, that patchwork has a very kind of right-wing connotation. People think primarily of Moldbug and Nick Land when they think of patchwork. But I think it's not at all obvious that patchwork necessarily has a right-wing flavor to it.
I think we can easily imagine left-wing patches that would be as competitive and as successful as more authoritarian patches. And so that's kind of what I've really been thinking a lot about recently. And even Nick Land himself told me that, you know, there's nothing wrong with trying to think about and even build a communist patch — it's all fair play. He's much less bullish on it than I am, but be that as it may. So those two ideas I'm going to discuss basically in turn and then try to connect them in a few novel ways. I have a few points or comments or extrapolations or connections between these two different ideas I've been working on, that I've never really written down or quite articulated yet. So that's what I'm going to try to do here.
So first of all, I was going to start this by talking a little bit about how patchwork I think is already happening in a lot of ways, but I deleted many of my bullet points because Dustin's presentation basically covered that better than I possibly could. So I'm not going to waste too much time talking about that. There's a lot of empirical data right now that looks a lot like fragmentation is the order of the day and there's a lot of exit dynamics and fragmentation dynamics that we're observing in many domains. And yeah, Dustin articulated a lot of them.
One thing I would say to kind of situate the talk, though, is that it's worth noting that not everyone agrees with this, you know... There's still a lot of integrative talk nowadays. There's a lot of discourse about the necessity of building larger and larger organizations. Especially when people are talking about global issues and major existential threats. Often in the educated discourse around preventing nuclear threats, for instance, or AI, things like runaway inhumane genetic testing, things like that. You could probably think of a few others. Climate change would be the obvious big one, right? A lot of these major global issues, the discourse around them, the expert opinions, tend to have a kind of integrative, centralized tendency to them. Actually just this morning I happened to be listening to a podcast that Sam Harris did with Yuval Harari. This guy who wrote the book, Sapiens, this mega global blockbuster of a book, and you know, he seemed like a nice guy, a smart guy of course, but everything he was saying was totally integrated. He was talking about how we need things like international organizations and more global international cooperation to solve all of these different problems and Sam Harris was just kind of nodding along happily. And that got me thinking actually, because even if you read people like Nick Bostrom and people who are kind of more hard-nosed and analytical about things like intelligence explosion, you find a lot of educated opinion is the opposite of a patchwork orientation, you find "We need to cooperate at a global level." Anyway, the reason I mentioned this is just to put in context that the ideas we're interested in and the empirical dynamics that we're pinpointing are not at all obvious to everyone.
Even though, when you really look at all of the fragmentation dynamics now, I think it's increasingly hard to believe any idea, any proposal having to do with getting all of the nation states to cooperate on something. I just... I just don't see it. For instance, genetic engineering, you know China is off to the races and I just don't see any way in which somehow the US and China are going to negotiate some sort of pause to that. Anyway, so that's worth reflecting on. But one of the reasons I mention that is because I kind of have a meta-theory of precisely those discourses and that's what I'm going to talk about a little bit later in my talk when I talk about the ethical implications, because I think a lot of that is basically lying.
Okay. One of my theses is that when people are talking about how we have to organize some larger structure to prevent some moral problem — nine times out of ten, what they're actually doing is a kind of capitalist selling process. So that's actually just a kind of cultural capitalism in which they're pushing moral buttons to get a bunch of people to basically pay them. That is a very modern persona, that's a modern mold and that's precisely one of many things that I think is being melted down in the acceleration of capitalism. What's really happening is all that's really feasible in so many domains: all you can see for miles when you look in every possible direction is fragmentation, alienation, atomization, exits of all different kinds on all different kinds of levels.
And then you have people who are like, "Uh, we need to stop this, so give me your money and give me your votes." I think that's basically an unethical posture. I think it's a dishonest, disingenuous posture and it's ultimately about accruing power to the people who are promoting that — usually high-status, cultural elites in the "Cathedral" or whatever you want to call it. So that's why I think there are real ethical implications. I think if you want to not be a liar and not be a kind of cultural snake-oil salesman — which I think a lot of these people are — patchwork is not only what's happening but we're actually ethically obligated to hitch our wagon to patchwork dynamics. If only not to be a liar and a manipulator about the nature of the real issues that we're going to have to try to navigate somehow.
I'll talk a little bit more about that, but I just wanted to kind of open up the talk with that reflection on the current debate around these issues. So, okay.
The one dimension of patchwork dynamics or exit dynamics that we're observing right now, that Dustin didn't talk about so much, is a patchwork dynamic that's taking place on the social-psychological level. To really drive this point home, I've had to borrow a term from the world of software engineering. I'll make this really quick and simple.
Basically, when you're developing software and you have a bunch of people contributing to a larger codebase, you need some sort of system or infrastructure for how a bunch of people can edit the code at the same time, right? You need to keep that orderly. So there's this simple term: forking. You have this codebase, and if you want to make a change to it, you fork it. In a standard case, you might do what we call a soft fork. I'm butchering the technical language a little bit; if there are any hardcore programmers in the room, I'm aware I'm painting with broad strokes, but I'll get the point across effectively enough without being too nerdy about it.
A soft fork means that you pull the codebase off for your own purposes, but your changes can ultimately merge back in — that's the simple idea. But a hard fork is when you pull the codebase off to edit it, and there's no turning back. There's no reintegrating your edits into the shared master branch or whatever you want to call it. So I use this technical distinction between a soft fork and a hard fork to think about what's actually going on with social-psychological reality and its distribution across Western societies today. The reason I do this is that I think you need this kind of language to really drive home how radical the social-psychological problems are. I really think we underestimate how much reality itself is being fragmented across different subpopulations.
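For the programmers in the room, the toy sketch below illustrates the distinction as I'm using it. The names and the dictionary-as-codebase model are purely illustrative, not real version-control APIs:

```python
# Toy model of the soft-fork / hard-fork distinction used in the talk.
# A dict stands in for the shared "codebase" of social reality.

master = {"campfire_story": "v1"}  # the shared codebase / consensus reality

def soft_fork(base):
    """Copy the shared state, intending to merge edits back later."""
    return dict(base)

def merge(base, fork):
    """Reintegrate a fork's edits into the shared master branch."""
    base.update(fork)
    return base

# Soft fork: an individual's daytime experience, merged back at night.
my_day = soft_fork(master)
my_day["campfire_story"] = "v2"
merge(master, my_day)
assert master["campfire_story"] == "v2"   # shared reality updated

# Hard fork: edits with no path back to the shared branch.
alt_reality = soft_fork(master)
alt_reality["campfire_story"] = "v2-conspiracy"
# ...no merge() is ever called; the two states diverge permanently.
assert master["campfire_story"] != alt_reality["campfire_story"]
```

The point of the model is the last lines: once no merge ever happens, the two states can only drift further apart.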
I think we're talking about fundamental... We are now fundamentally entering into different worlds and it's not at all clear to me that there's any road back to having some sort of shared world. And so I sketched this out in greater detail. The traditional human society, you can think of it as a kind of system of constant soft forking, right? Individuals go off during the day or whatever, they go hunting and do whatever traditional societies do, and at the end of the night they integrate all of their experiences in a shared code base. Soft forks, which are then merged back to the master branch around the campfire or whatever you want to call it, however you want to think about that. But it's only now that, for the first time ever, we have the technological conditions in which individuals can edit the shared social codebase and then never really integrate back into the shared code base.
And so this is what I call the hard forking of reality. I think that is what we're living through right now. And I think that's why you see things like political polarization to a degree we've never seen before. That's why you see profound confusion and miscommunication, just deep inabilities to relate with each other across different groups, especially across the left vs. right divide, for instance. But you also see it with things like... Think about someone like Alex Jones, think about these independent media platforms that are just on a vector towards outer space — such that it's hard to even relate them to anything empirical that you can recognize. You see more and more of these kinds of hard reality forks, as I call them. I'm very serious.
I think educated opinion today underestimates how extreme that is and how much it's already taking place. It's not clear to me, once this is underway, how someone who is neck-deep in the world of Alex Jones — and that is their sense of what reality is — is ever going to be able to sync back up with, you know, an educated person at Harvard University or something like that. It's not just that those people can't have dinner together — that happened several decades ago, probably — but that there's no actual technical, infrastructural pathway through which these two different worlds could be negotiated or made to converge into something shared. The radicalism of that break is a defining feature of our current technological moment.
And that is an extraordinary patchwork dynamic. In other words, I think that patchwork is already here, especially strong in the socio-psychological dimension, and that's very invisible. So people underestimate it. People often think of patchwork as a territorial phenomenon and maybe one day it will be, but I think primarily for now it's social-psychological and that should not be underestimated because you can go into fundamentally different worlds even in the same territory. But that's what the digital plane opens up to us. So that's one half of what I'm bringing to the table in this talk.
There are a few antecedent conditions to explain, like why I think this is happening now. One is that there's been an extraordinary breakdown in trust towards all kinds of traditional, institutionalized, centralized systems. If you look at the public opinion data, for instance, on how people view Congress in the United States, or how people view Parliament or whatever, just trust in elected leaders... You look at the public opinion data since the fifties and it's really, really on the decline, a consistent and pretty rapid decline.
And this is true if you ask them about the mass media, politicians, a whole bunch of the mainstream, traditional institutions that were the bedrock of modernized societies... People just don't take them seriously anymore at all. And I think that is because of technological acceleration: what's happened is that there is unprecedented complexity. There's just too much information. There's so much information that these modern institutions are really, really unwieldy. They're unable to process the complexity that we are now trying to navigate, and people are seeing very patently that all of these systems are just not able to manage. They're not able to deliver what they're supposed to deliver with this explosion of information that they were not designed to handle. So it's kind of like a bandwidth problem, really. But because of this, people are dropping their attention away from these institutions and they're looking outwards, they're looking elsewhere, they're looking for other forms of reality, because that's ultimately what's at stake here.
These traditional institutions, they supplied the shared reality. Everyone referred back to these dominant institutions because — even if you didn't like those institutions in the 60s or 70s or whatever, even when people really didn't like those institutions, like the hippies or whatever — everyone recognized them as existing, as powerful. So even opposing them, you kind of referred back to them. We're now post- all of that, where people so mistrust these institutions that they're not even referring back to them anymore. And they're taking all their cues for what reality is from people like Alex Jones or people like Jordan Peterson or you name it, and you're going to see more and more fragmentation, more and more refinement of different types of realities for different types of subpopulations in an ever more refined way that aligns with their personalities and their preferences. These are basically like consumer preferences. People are going to get the realities that they most desire in a highly fragmented market. Anyway... So I think I've talked enough about that. That's my idea of reality forking and that's my model of a deep form of patchwork that I think is already underway in a way that people underestimate.
So now I want to talk a little bit more about the ethics of patchwork, because I think the observations that I just presented raise ethical questions. If I am right that reality itself is already breaking up into multiple versions and multiple patches, well then that raises some interesting questions for us, not just in terms of what we want to do, but in terms of what we should do.
Ethics and Patchwork
What does it mean to seek the good life if this is in fact what's happening? It seems to me that, right now, you're either going to be investing your efforts into somehow creatively co-constituting a new reality or you're going to be just consuming someone else's reality. And a lot of us, I think, do a combination of both. Like all the podcasts I listen to, and all the Youtube videos I watch — that's me outsourcing reality-creation to other people, to some degree. But the reason I've gotten on Youtube, and the reason I've gotten really into all of these platforms and invested myself in creating my own sense of the world, is that I don't just want to be a consumer of other people's realities. I want to create a world. That sounds awesome. That would be the ideal, right? But the problem is that people are differently equipped to create or consume realities, and I think this is difficult and very politically fraught. The left and the right will have debates about, you know, "the blank slate" versus the heritability of traits and all of that. I don't want to get into that now, but however you want to interpret it, it is an obvious fact that some people are better equipped than others to do things like create systems. To me, this is the ethical-political question space.
The default mode right now is the one that I already described at the top of my talk: it's the moralist. It's the traditional left-wing (more or less) posture. "Here's a program for how we're going to protect a bunch of people. All it requires is for you to sign up and give your votes and come to meetings and give your money, and somehow we're going to all get together and we're going to take state power and protect people," or something like that. As I already said — I won't beat a dead horse — I think that's increasingly revealing itself to be a completely impractical and unserious posture. It suits our moral tastebuds a little bit, but it's increasingly and patently not able to keep up with accelerating capitalism.
That's not gonna work. Why I think patchwork is an ethical obligation is because, if you're not going to manipulate people by trying to build some sort of large centralized institution, by manipulating their heartstrings, then what remains for us to do is to create our own realities, basically. And I think that the most ethical way to do that is to do it honestly and transparently, to basically reveal this, to reveal the source code of reality and theorize that and model that and make those blueprints and share those blueprints and then get together with people that you want to get together with and literally make your own reality. I feel like that doesn't just sound cool and fun, but you kind of have to do that or else you're going to be participating in this really harmful, delusional trade. That's my view anyway.
Now I'll just finish by telling you what I think the ideal path looks like ethically and practically. I've called it many different things, I haven't really settled on a convenient phrase to summarize this vision, but I think of it as a neo-feudal techno-communism. I think the ideal patch that will be both most competitive, most functional, most desirable and successful as a functioning political unit, but also that is ethically most reflective and consistent with the true nature of human being is... It's going to look something a little bit like European feudalism and it's going to be basically communist, but with contemporary digital technology.
Let me unpack that for you a little bit. You probably have a lot of questions [laughing]. One thing is that patchwork always sounds a little bit like "intentional communities." And on the Left, the "intentional communities" kind of have a bad rap because they've never really worked. You know, people who want to start a little group somewhere off in the woods or whatever, and make the ideal society, and then somehow that's going to magically grow and take over. It usually doesn't end well. It doesn't have a good historical track record. It usually ends up in some kind of cult or else it just fizzles out and it's unproductive or whatever. I think that the conditions now are very different, but I think if you want to talk about building a patch, you have to kind of explain why your model is different than all the other intentional communities that have failed.
One reason is that the digital revolution has been a game changer, I think. Most of the examples of failed intentional communities come from a pre-digital context, so that's one obvious point. I think the search-space, the solution-space, has not all been exhausted. That's kind of just a simple point.
But another thing I've thought a lot about, and I've written some about, is that, in a lot of the earlier intentional communities, one of the reasons they fail is because of self-selection. That's just a fancy social science term for... There's a certain type of person who historically has chosen to do intentional communities and they tend to have certain traits and I think for many reasons — I don't want to spend too much time getting into it — but it's not hard to imagine why that causes problems, right? If all the people are really good at certain things but really bad at other things, you have very lopsided communities in terms of personality traits and tendencies. I think that that's one of the reasons why things have led to failure. So what's new now, I think, is that because the pressure towards patchwork is increasingly going to be forced through things like climate change and technological shocks of all different kinds, because these are fairly random kinds of systemic, exogenous shocks, what that means is it's going to be forcing a greater diversity of people into looking for patches or maybe even needing patches. And I think that is actually valuable for those who want to make new worlds and make better worlds, because it's actually nature kind of imposing greater diversity on the types of people that will have to make different patches.
So what exactly does neo-feudal techno-communism look like? Basically it would have a producer elite, and this is where a lot of my left-wing friends start rolling their eyes, because it basically is kind of like an aristocracy. Like, look, there's going to be a small number of people who are exceptionally skilled at things like engineering and who can do things that most other people can't. You need at least a few people like that to engineer really sophisticated systems. Kind of like Casey said before, "the mayor as sys-admin" — that's a similar idea. You'd have a small number of elite engineer types, and basically they can do all of the programming for the system that I'm about to describe, but what they also do is make money in the larger techno-commercium. They would run a small business, basically, that would trade with other patches and make money, in probably very automated ways. So it would be a sleek, agile little corporation of producer elites at the top of this feudal pyramid of a patch society. Then there would be a diversity of individuals — including many poor, unskilled, or disabled people — who don't have to do anything, basically. Or they can do little jobs around the patch or whatever, to help out.
The first thing you might be thinking — this is the first objection I get from people — is why would the rich, these highly productive, potentially very rich, engineer types want to support this patch of poor people who don't do anything? Isn't the whole problem today, Justin, that the rich don't want to pay for these things and they will just exit and evade?
Well, my kind of novel idea here is that there is one thing that the rich today cannot get their hands on, no matter where they look. And I submit that it's a highly desirable, highly valuable human resource that most people really, really, really want. And that is genuine respect and admiration, and deep social belonging. Most of the rich today know that people have a lot of resentment towards them. Presumably they don't like the psychological experience of being on the run from national governments and putting their money in Swiss bank accounts. They probably don't like feeling like criminals who everyone more or less resents and wants to take money from, or whatever. So my hypothesis is that if we could engineer a little social system in which they actually felt valued and desired and admired, and actually received some respect for the skills and talents that they do have and the work that they do put in... I would argue that if you could guarantee that — that they would get that respect, and that the poor would not try to take everything from them — then the communist patch would actually be preferable to the current status quo for the rich. My argument is that this would be a voluntary, preferable choice for the rich, because of this kind of unique, new agreement that the poor and normal people won't hate them and will actually admire them for what they deserve to be admired for. So then the question becomes: well, how do you guarantee that? This is where technology comes in.
The poor and normal people can make commitments to a certain type of, let's call them "good behaviors" or whatever. Then we can basically enforce that through trustless, decentralized systems — namely, of course, blockchain. So what I'm imagining is... Imagine something like the Internet of Things — you know, all of these home devices that we see more and more nowadays that have sensors built in and can passively and easily monitor all types of measures in the environment. Imagine connecting that up to a blockchain, and specifically Smart Contracts, so that the patch is being constantly measured — your behavior in the patch is being constantly measured. You might have, say, skin conductance measures on your wrist; there might be microphones recording everyone's voice at all times. I know that sounds a little authoritarian, but stick with me. Stick with me.
Basically, by deep monitoring of everything using the Internet of Things, what we can do is, as a group, agree on what is a fair measure of, say, a satisfactory level of honesty, for instance. Let's say the rich people say, "I'll guarantee you a dignified life by giving you X amount of money each month. You don't have to do anything for it as long as you respect me — you know, you don't tell lies about me, you don't plot to take all of my money," or whatever. So then you would have an Alexa or whatever, and it would be constantly recording what everyone says, and that would be hooked up to a Smart Contract. And so if you tell some lie about the producer aristocrat — "He totally punched me the other day, he was a real ignoble asshole" — and that's actually not true... Well, all of the speech that people are speaking would be constantly compared to some database of truth. It could be Wikipedia or whatever. And every single statement would have some sort of probability of being true or false, or something like that. That could all be automated through the Internet of Things feeding this information to the internet and checking it for truth or falsity. And then you have some sort of model that says: if a statement has a probability of being false that is higher than — maybe set it really high to be careful, right? — 95 percent, so only lies that can be really strongly confirmed... those are going to get reported to the community as a whole.
If you have X amount of bad behaviors, then you lose your entitlement from the aristocrat producers. It's noblesse oblige — the old feudal term for basically an aristocratic communism, the [obligatory] generosity of the noble. So that's all very sketchy, a little sketch of how the Internet of Things and Smart Contracts could be used to create this idea of a Rousseauean General Will.
The reason why this has never worked in history is because of lying, basically. People can always defect. People can always manipulate, and say they're going to do one thing but then not deliver. That's true on the side of the rich and also on the side of the poor. But what's at least in sight now is the possibility that we could define very rigorously the ideal expectations of everyone in a community, program those into transparent Smart Contracts, hook those up to sensors that are doing all of the work in the background, and in this way basically automate a radically guaranteed, egalitarian, communist system in which people do have different abilities, but everyone has an absolutely dignified lifestyle guaranteed for them as long as they're not total [expletive] who break the rules of the group. You could actually engineer this in a way that rich people would find preferable to how they're currently living. So to me that's a viable way of building communism that hasn't really been tried before. And I think it really suits a patchwork model. I think this would be something like an absolutely ideal patch, and not just in a productive, successful way. This is the ideal way to make a large group of people maximally productive and happy and feel connected and integrated — like everyone has a place and everyone belongs, even if there's a little bit of difference in aptitudes. The system, the culture, will reflect that, but in a dignified, fair, and reasonable kind of way, a mutually supportive way. I could say more, but I haven't been keeping time, and I feel like I've been talking enough.
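Again for the programmers: here is a minimal sketch, in plain Python, of the contract logic I'm describing. The names, the thresholds, and the idea of an external claim-scoring oracle are all hypothetical stand-ins; a real version would live on-chain and pull its inputs from IoT sensors:

```python
# Illustrative sketch of the "noblesse oblige" smart contract described
# above. Thresholds and names are hypothetical; in a real deployment the
# probability-of-falsehood score would come from some external oracle.

FALSEHOOD_THRESHOLD = 0.95  # only report claims very likely to be false
MAX_STRIKES = 3             # "X amount of bad behaviors"
MONTHLY_STIPEND = 1000      # the aristocrat's guaranteed transfer

class NoblesseObligeContract:
    def __init__(self):
        self.strikes = {}  # member -> count of confirmed bad behaviors

    def record_claim(self, member, p_false):
        """Log a claim with its estimated probability of being false.

        Only strongly confirmed falsehoods count as a strike; uncertain
        claims are ignored, erring on the side of the speaker.
        """
        if p_false > FALSEHOOD_THRESHOLD:
            self.strikes[member] = self.strikes.get(member, 0) + 1

    def payout(self, member):
        """Stipend is guaranteed unless the member reaches MAX_STRIKES."""
        if self.strikes.get(member, 0) >= MAX_STRIKES:
            return 0
        return MONTHLY_STIPEND

contract = NoblesseObligeContract()
contract.record_claim("alice", p_false=0.50)  # uncertain: ignored
contract.record_claim("alice", p_false=0.99)  # confirmed lie: strike 1
assert contract.payout("alice") == MONTHLY_STIPEND  # still entitled
for _ in range(2):
    contract.record_claim("alice", p_false=0.99)  # strikes 2 and 3
assert contract.payout("alice") == 0  # entitlement revoked
```

The design choice that matters is that the stipend is the default: payment flows automatically unless strongly confirmed bad behavior accumulates, which is what would make the guarantee credible to both sides.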
I found out recently that — hat tip to my friend the Jaymo — the town of Toomsboro, Georgia is right now for sale, for only $1.7 million. I think that's a pretty good deal. It comes with a railroad station, a sugar factory, all kinds of stuff, and you could easily build a little prototype of the patch I just described. If you have a bunch of people and it's a major publicized project, it wouldn't be that hard to raise enough for a mortgage on a $1.7 million property. Especially if you have a compelling white paper along the lines that I just sketched. I'm not quite there yet, but that's what I'm thinking about; that's my model or my vision of the communist patch. So I'm going to cut myself off there. Thank you very much.
To get the kind of life I want, or anywhere close to it, I realize I'm going to have to hustle like crazy. But one thing that's become immediately clear to me is that working hard has profoundly variable effects on well-being, conditional on what the work means to the worker. This is, of course, Nietzsche's observation in that famous line from Twilight of the Idols:
"If we have our own why in life, we shall get along with almost any how."
Academia can be quite cushy after you work hard to secure yourself there, but if I now want something better, I have to hustle again in a way that I thought was behind me. Even after I got "the British version of tenure," I was still hustling more than I needed to, just because of how I am. For the next twenty years, I would probably hustle like crazy regardless, whether in a cushy institutionalized environment or doing some weird combination of intellectual work and entrepreneurial activity. In academia, I was constantly irritated and depressed while hustling to get various tasks done, so that I could have some time each day to do the work that mattered to me. Since my academic employment was thrown into question and my time opened up — especially because we are trying to have a child — I am now hustling harder than I ever have, to ensure we come out of this okay. But now it actually feels great, because at least 50% of my effort right now, while I'm still getting a paycheck, is going into figuring out any possible way I can make my intellectual work on the internet financially sustainable enough to be my primary occupation. I have no idea what my chances are, and if it's not possible then whatever — I'll just get some new job — but basically I have a 1-2 month period where I can afford to test out every harebrained scheme I've ever had for achieving financial sustainability via independent intellectual work. I've brainstormed a lot of ideas over the past few years, but never had the time or energy to test them seriously. So now I have nothing to lose, and much to gain, by testing them all. It all boils down to the question of how I can leverage my new intellectual independence from academic institutions to create new kinds of value.
Experiment 1 is testing if there's greater demand in the public for these monthly seminars I've been doing for patrons. The patrons seem to value it so far, so it's not unreasonable to think there might be a handful of people floating around out there who would also want something like this. If I could get even, say, 6 new signups in the next couple of months — I'd take that as a very promising signal that that could become one part of a viable independent work model for me. It'd only be a start but I could reasonably expect to build it out and grow it. Experiment 2 will very likely be a self-published book. I've been reading about publishing trends and the subreddit r/selfpublishing and I've been watching many interesting self-publishing experiments over the past couple of years, and so I've been very excited about eventually trying something. Now seems like as good a time as any! As I discussed in my recent livestream, I would like to try writing and self-publishing a book about academia and the internet — compiling all my observations and experiences, telling my whole recent story (which I'm being told to keep confidential), weaving it with larger theoretical and empirical reflections on the semantic apocalypse, reality forking, etc. I would like it to be fairly short and punchy, fun to read, not a big serious tome or anything. I am excited about theorizing and strategizing a launch plan (entrepreneurship is pretty fun to be honest). I think I would either plan a Kickstarter campaign, or possibly just write the damn thing and sell it via Amazon or Gumroad (like Eli). What's a good title for such a book? Please reply if you think of one. I plan to do some A/B testing, but if you make a suggestion I like then I'll include it in my A/B tests. Here are titles I'm currently toying with:
How Academia Got Pwned
How to Pwn Academia
12 Rules for Ruining Life (To Get a Better One)
This got me wondering if it'd be a problem for a book to have the word "retard" in the title. It's kind of fashionable to have curse words in book titles nowadays, but they usually use an asterisk for one of the letters. Would I have to do that if I called it Retard Vacation? I searched Amazon and it seems: no. The results are kind of funny.
It's a little frightening and uncomfortable, because I don't have much experience with entrepreneurship and I'm not strongly motivated by money, but despite the anxiety and ego-fear of failure it's really quite refreshing. As an academic, what you're "up against" is a thick web of arbitrary norms and social games, and your value is contingent on pleasing particular dispensers of cultural capital. One can be ruined if a certain person simply dislikes you. What feels really great about my current moment of impending entrepreneurial experimentation is that I'm only "up against" the open market of cyberspace. The downside is that, if what I'm capable of producing does not provide enough value to people, then there's no way to paper over this unfortunate fact. I could be forced to get a normal full-time job, and face the risk of losing a long-term intellectual life. But the upside is a most fantastic dream: the dream that perhaps everything I've invested into the constitution of a radically independent intellectual life is somehow worth it, not just to me, but on the brutally honest open market. That there might even be a 10% chance of this being true is how and why I'm now hustling harder than ever before while also enjoying greater well-being than ever before.
I am operating at the height of my powers, intoxicated by a dream, though aware that I'm dreaming. If it fails and I'm forced to work full time away from my research agenda and creative visions, well then perhaps I will be at peace with the brutal truth: that in fact my delusional obsessions have only ever been egotistical and anti-social wastes of energy. Perhaps the open market will teach me a hard lesson that academia never had the guts to teach me: that everything I know and everything I think and everything I can make is actually worthless. And if that's the lesson I learn from the open market, then maybe finally my grand visions would be destroyed but maybe then I could finally learn how to be a normal person and keep my mouth shut and just get on with a normal career. If that's what it would mean to fail, then it'd still be a huge blessing and a net gain relative to carrying on my intellectual fixations with the false insulation of academic prestige.
In short, I have nothing to lose and everything to gain by testing what my honest intellectual capacities are really worth. And then I realize that I'm so intoxicated by this dream — my engines are humming so smoothly at full throttle just by virtue of trying 100% for my ideal — that even if my intellect fails to float on the open market in the first 1, 2, 3, 4 test runs, and I have to get some other job, I can always keep trying what I'm currently trying. When I think about this — that on the open market there is no social authority that can end one's ability to try — it really comes home to me how insane it is to hang one's entire livelihood on an insular bureaucratic hierarchy, and I am reminded how good and true and necessary is my current line of flight.
Where are the hottest countercultures today? What will happen to academia? What is my advice for young people? What Schelling points will emerge after academia? Why has academia become left-wing since the 1990s? Can we make society more accepting of weird thinkers? Why do many of the countercultures today have a right-wing tendency? What caused the Left to gain dominance over mainstream institutions? Which conspiracy theories are directionally compelling? Should aspiring authors self-publish or try to get a traditional book deal? What will post-pandemic politics look like? What is it like living with two evolutionary psychologists? How will psychedelic drugs affect politics? What is my long-term vision for IndieThinkers.org?