Eric Weinstein's Error

Eric Weinstein has released his highly anticipated, heavily self-aggrandized research paper on Geometric Unity.

I read the paper. I gave it a solid hour or two. I did read the whole thing.

The paper is not really a research paper; it's a collection of briefly formalized mathematical intuitions, combined with some comments about how these intuitions might be turned into a significant finding, plus a number of paranoid intuitions about why and how this significant finding is being thwarted by various political forces.

I'm not sure we've seen this kind of megalomania since Nietzsche. To be clear, I would say that's a compliment, given that Nietzsche was the absolute chad of late-19th century Europe. What happens to this kind of intellectual temperament in the 21st century is, of course, a different question.

I was mostly interested in this paper as an example of what a sophisticated outsider intellectual could do, after having gained a large social-media audience. For a couple years now, I've been listening to Eric's story about his suppressed theory, which, he has claimed, overturns all of modern economic theory, transcends Satoshi Nakamoto's conception of the blockchain, and more.

If I have any horse in this race, my bias is in favor of Eric dropping a world-historical research paper and totally dunking on the institutions from his outsider social-media perch. If anyone is capable of doing it, at this very moment, it would be him—and it would vindicate and flatter a lot of my recent theorizing. I would love to see it.

This paper and its whole self-flattering build-up unfortunately reveal the author to be tremendously out of touch with both institutional legitimacy dynamics and indie legitimacy dynamics.

The paper certainly demonstrates that Eric knows advanced math, and has some creative ideas. But this simple fact is already priced into his stock as a public intellectual, and it means virtually nothing inside of institutions... given the basic nature of institutions.

What you have to understand about mathematics and mathematical physics is that lots of people in these fields can generate highly impressive numerical edifices making all kinds of claims about possible applications to other fields. Search around and you can find a ton of physics-based theories of the economy, most of which do not win Nobel prizes and do not overthrow the economics profession. But those authors don't feel like persecuted victims, perhaps because they are not megalomaniacs or perhaps because they lack large internet audiences. They understand that, ultimately, super-advanced math is a language game (not unlike woke theory, actually, except that woke theory is the econo-physics of verbal-IQ elites).

Advanced mathematics can occasionally find vindication by experiment, or inspire creative application in the world, or gain some public awareness for highly stochastic reasons, but to imagine that advanced mathematical sophistication combined with some creativity entitles one to any amount of institutional recognition is to woefully misunderstand the epistemology of scientific method (rooted in experiment), the nature of institutions, and the sociology of intellectual history. What gets power and credit in academic philosophy, for instance, is largely a function of who your teacher was (see Randall Collins' The Sociology of Philosophies). Most super-mathy academics understand that, while worth doing in the pursuit of truth, especially if it can get you an academic job, super-advanced math is very often strictly useless. Most artists will agree about their own work, find it worthwhile anyway, and never complain. Eric seems genuinely not to understand this.

Science is deeply rooted in experiment, whether one likes it or not. In the paper, Eric mentions that not all mathematics need be demonstrated by experiment. That is correct, but note that the academic variants of woke theory also make the same point in favor of themselves. The problem is that, as a matter of historical reality, scientific method is the one and only high-status validation and legitimation method recognized in modern culture. You can cite any number of theorists who have shown or theorized something like this—from Weber to Heidegger, Adorno, Foucault, and many others; most of the twentieth-century continental tradition revolves around this massive fact.

Scientific experiment is the only way an intellectual can claim to be more correct than others, in the public sphere, with inter-subjectively recognizable criteria to calibrate judgment. If your field doesn't lend itself to experiment, that's fine; some work can be described as better or worse than other work, but you can never claim an objective or inter-subjective superiority that could command universal recognition—in the way you can with experimentally validated, engineering-generative insights.

The supreme irony here is that what's called postmodernism is the academic field where this basic problem has been most fully specified and ethically questioned. They are the ones who have been saying for years that modern culture over-weights that which can be validated by experiment, denying legitimacy to important truths that happen to not lend themselves to that particular mode of veridiction. This critique of science indeed plays some role in the unleashing of increasingly unhinged woke epistemology, but apparently it also unleashes the paranoid victim complexes of extremely intelligent rationalists as well. Rather than claiming to be persecuted by the economics profession, Eric should read post-structuralism and understand that his unique ideas are de-legitimated in the same way all "non-scientific" ideas are de-legitimated, but that, in this very exclusion, he may also find unique forms of power and freedom. Even more ironically, Eric can thank the Deleuzian nature of the internet for already granting him historically novel and extraordinary sources of power and freedom. But it seems that Eric does not recognize these powers for what they are, or he doesn't understand how they work, which explains why he tried to publish a research paper in the way that he did.

The paper is almost explicitly pseudo-scientific: it is self-consciously typeset in LaTeX (the cool kids' typesetting system in STEM research disciplines) and it contains advanced math, but it does not deliver the basic promise of a scientific research article, namely a competent review of up-to-date research followed by a specific, novel insight which is then integrated back into the literature. He punts on those basic requirements of a scientific research paper, bizarrely mentioning in the opening footnote that he is merely an Entertainer. And yet, as an entertainer, he constantly claims he has a revolutionary theory that has been suppressed by academia.

Basically he does not have the grand, unifying, discipline-shattering research paper or discovery that I honestly hoped he did.

So the overall effect of this paper is quite a letdown. There are only a few thousand people capable of judging the formal math in the paper, but even if the formalisms are correct and interesting—which I'm happy to grant!—his naivety about the sociology of science and the sociology of the internet significantly deflates my estimation of Eric as a social thinker.

I think Eric is a genius and a courageous, fascinating, impressive individual who could have extraordinary impact in the long-run of intellectual history. But sadly, he is becoming a genuine crank, insofar as the distinction between an independent intellectual and a crank is that the independent intellectual supersedes institutions and gains long-term influence, whereas the crank becomes possessed by resentment toward institutions and fails to gain long-term influence.

He makes good points about the selection effects of institutional science. It is true that certain findings are likely to be rejected, even when correct. But this is a reason for doing extra-institutional science. The error Weinstein insists on making is trying to force extra-institutional knowledge into institutional acceptance. The result can be nothing other than failure, crankhood, and the paranoid bitterness which, frankly, Weinstein exudes in his recent appearances. Fortunately he has plenty of time to change course. I hope that he does, and I wish him nothing but success.

What is an "image of thought" for Deleuze?

From Lecture #3 in my video course for Based Deleuze:

What's really at stake here, I think, is the attack on representational thought... That's one of the core components of the Deleuzian project. Deleuze argued that any philosophy presents an image of thought and that this image of thought, it's not really explicit. It's never really demonstrated or proven. It's sort of a presupposition. Whenever a philosopher or any type of thinker or theologian or whatever presents a philosophy, there is in the background a certain image of what thought is and what thought should be, and what thought can be, and that's never really fully spelled out. It's never really justified.


It's essentially a kind of aesthetic. And there are different images of thought. This is something that Deleuze really wants to show to us… That we have a choice: an essential, irreducible kind of freedom or aesthetic decision to make about what type of thought we want to engage in.

In retrospect, "choice" is not the best word, because Deleuze wants to steer us away from any naive conception of free will. One is almost tempted to use an ugly deconstructionist term here, such as undecidability. The key point is that an 'image of thought' is extra-rational. It's never justified or formalized rationally, although it's implied in modes of justification or formalization. We might not "choose" our image of thought, exactly, although there is a kind of pre-rational selection process that sorts creators and their creations. Perhaps we could say that our 'image of thought' chooses us...

We Are All Conspiracy Theorists Now

The collapse of trust in mainstream authorities is discussed as if it is only one of many troubling data points. It's not. People are still underestimating the gravity of the interlocking trends that get summarized in this way.

For instance, when trust in mainstream authorities is sufficiently low, one implication is that conspiracy theories become true, even if you personally trust the mainstream authorities, even if you're a rational Bayesian, even if you're the type of person who is resolved to be above conspiracy theories.

Let's say you're an optimally rational person, with the utmost respect for science and logic and empirical reality. An optimally rational person has certain beliefs, and they are committed to updating their beliefs upon receiving new information, according to Bayes' Rule. In layman's terms, Bayes' Rule explains how one should integrate new information with one's past beliefs to update one's beliefs in the way that is best calibrated to reality. You don't need to understand the math to follow along.
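For readers who do want the math, the update rule I'm describing can be sketched in a few lines of Python. The numbers are hypothetical, chosen only to illustrate the mechanics of Bayes' Rule, not to model any real documentary:

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | E) via Bayes' Rule:
    P(H|E) = P(E|H) * P(H) / P(E), where
    P(E) = P(E|H) * P(H) + P(E|~H) * (1 - P(H))."""
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# Hypothetical numbers: a near-zero prior that the Earth is flat,
# and evidence (a persuasive documentary) that is only somewhat
# more likely in a world where the theory is true than otherwise.
prior = 1e-6
posterior = bayes_update(prior, p_evidence_given_h=0.9,
                         p_evidence_given_not_h=0.6)
print(posterior)  # slightly above the prior, but still tiny
```

The point of the sketch is just that whenever the evidence is even marginally more likely under the hypothesis than under its negation, the posterior must move above the prior, however slightly.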

How does a Bayesian update their beliefs after hearing a new conspiracy theory? Perhaps you wish to answer this question in your head right now.

For my part, I just watched the Netflix documentary about Flat Earth theorists the other night. I spent the next day puzzling over what exactly is the rational response to a film like that. The film certainly didn't convince me that the Earth is flat, but can I really say in all honesty that the documentary conveyed to me absolutely no new information corroborating a Flat Earth model of the world?

One could say that. Perhaps you want to say that the rational response to conspiracy theory documentaries is to not update your beliefs whatsoever. The whole documentary is clearly bunk, so I should assign zero credence to the thesis that the Earth is flat. This would be a little strange, in my view, because how many people understand astronomy deeply enough with first-hand familiarity to possess this kind of prior confidence? Ultimately most of us, even highly smart and educated non-astronomers, have to admit that our beliefs about the celestial zones are generally borrowed from other people and textbooks we've never quite adversarially validated. If I'm confronted with a few hundred new people insisting otherwise, I surely don't have to trust them, but giving them a credence of absolute zero seems strange given that my belief in the round Earth pretty much comes from a bunch of other people telling me Earth is round.

Personally I become even more suspicious of assigning zero credence because, introspectively, I sense that the part of me that wants to declare zero credence for Flat Earth theory is the part of me that wants to signal my education, to signal my scientific bona fides, to be liked by prestigious social scientists, etc. But I digress. Let's grant that you can assign Flat Earth zero credence if you want.

If you assign Flat Earth a zero likelihood of being correct, then how do you explain the emergence of a large and thriving Flat Earth community? Whether you say they're innocent, mistaken people who happen to have converged on a false theory, or you say they are evil liars trying to manipulate the public for dishonorable motives — whatever you say — your position will ultimately reduce to seeing at least the leaders as an organized cabal of individuals consciously peddling false narratives for some benefit to themselves. Even if you think they all started out innocently mistaken, once they fail to quit their propaganda campaigns after hearing all the rational refutations, then the persistence of Flat Earth theory cannot avoid taking the shape of a conspiracy to undermine the truth. So even if you assign zero credence to the Flat Earth conspiracy theory, the very persistence of Flat Earth theory (and other conspiracy theories) will force you to adopt conspiracy theories about all these sinister groups. Indeed, you see this already toward entities such as Alex Jones, Cambridge Analytica, Putin/Russia, etc.: Intelligent and educated people who loathe the proliferation of conspiracy theories irresistibly agree, in their panic, to blame any readily available scapegoat actor(s), via the same socio-psychological processes that generate all the classic conspiracy theories.

If I'm being honest, my sense is that after watching a feature-length documentary about a fairly large number of not-stupid people arguing strongly in favor of an idea I am only just hearing about — I feel like I have to update my beliefs at least slightly in favor of the new model. I mean, all the information presented in that 2-hour long experience? All these new people I learned about? All the new arguments from Flat Earthers I never even heard of before then? At least until I review and evaluate those new arguments, they must marginally move my needle — even if it's only 1 out of a million notches on my belief scale.

In part, this is a paradoxical result of Flat Earth possessing about zero credence in my mind to begin with. When a theory starts with such low probability, almost any new corroborating information should bump up its credence somewhat.

So that was my subjective intuition, to update my belief one tiny notch in favor of the Flat Earth model — I would have an impressively unpopular opinion to signal my eccentric independence at some cocktail party, but I could relax in my continued trust of NASA…

Then it occurred to me that if this documentary forces me to update my belief even slightly in favor of Flat Earth, then a sequel documentary would force me to increase my credence further, and then… What if the Flat Earthers start generating Deep Fakes, such that there are soon hundreds of perfectly life-like scientists on Youtube reporting results from new astronomical studies corroborating Flat Earth theory? What if the Flat Earthers get their hands on the next iteration of GPT-2 and every day brings new scientific publications corroborating Flat Earth theory? I've never read a scientific publication in Astronomy; am I suddenly going to start, in order to separate the fake ones from the reliable ones? Impossible, especially if one generalizes this to all the other trendy conspiracy theories as well.

If you watch a conspiracy documentary and update your beliefs even one iota in favor of the conspiracy theory, then it seems that before the 21st century is over your belief in at least one conspiracy theory will have to reach full confidence. The only way you can forestall that fate is to draw an arbitrary line at some point in this process, but this line will be extra-rational by definition.
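A quick way to see why iterated updating is so corrosive: if each new documentary, deep fake, or generated paper counts as even weakly independent corroborating evidence, the compounding is exponential. A sketch with made-up likelihoods, assuming every piece of evidence carries the same small evidential weight:

```python
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """One Bayesian update: P(H|E) = P(E|H)P(H) / P(E)."""
    evidence = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / evidence

# Start from a near-zero credence and apply the same weak
# corroboration repeatedly (hypothetical numbers: each item of
# evidence is 1.5x as likely if the theory is true).
credence = 1e-6
for item in range(60):
    credence = bayes_update(credence, 0.9, 0.6)
print(credence)  # near certainty
```

Each update multiplies the odds by the same factor (here 0.9/0.6 = 1.5), so even a one-in-a-million credence is driven toward certainty after enough repetitions. That is the "line-drawing" problem in numerical form.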

Leading conspiracy theorists today could very well represent individuals who subjectively locate themselves in this historical experience — they see that this developing problem is already locked in, so they say let's get to the front of this train now! One could even say that Flat Earth theorists are in the avant-garde of hyper-rationalist culture entrepreneurs. Respectable scientists who go on stages insisting, with moral fervor, that NASA is credible — are these not the pious purveyors of received authority, who choose to wring their hands morally instead of updating their cultural activity in a way that's optimized to play and survive the horrifying empirical process unfolding before them? Perhaps Flat Earth theorists are the truly hard-nosed rationalists, the ones who see which way the wind is really blowing, and who update not only their beliefs but their entire menu of strategic options accordingly.

It's no use to say that you will draw your line now, in order to avoid capture by some hyper-evolved conspiracy theory in the future. If you do this, you are instituting an extra-rational prohibition of new information — effectively plugging your ears, surely a crime to rationalism. Even worse, you would be joining a cabal of elites consciously peddling false narratives to control the minds of the masses.

Algorithms and prayers

The mild-mannered socialist humanist says it's evil to use algorithms to exploit humans for profit, but the articulation of this objection is an algorithm to exploit humans for profit. Self-awareness of this algorithm may vary, but cultivated ignorance of one's own optimizing functions does not make them any less algorithmic or exploitative. The opposite of algorithmic exploitation is not moralistic objection, but probably prayer, which is only — despite popular impressions — attention, evacuated of instrumental intentions. One point of worshipping God is that, by investing one's desire into an abstraction of perfection, against which all existing things pale in comparison, one may live toward the good and still live as intensely as possible. Secular "good people" often make themselves good by eviscerating their desire, de-intensifying their vitality to ensure their mundane algorithmic optimizing never goes too far. But a life of weak sin is not the same as a good life. Prayer, the practice of de-instrumentalizing attention, does not feign superiority to the sinful, exploitative tendencies of man (like socialist humanism). Prayer is code. Prayers have never hidden their nature as exploitative algorithms — "say these words and it will be Good" — but they exploit our drive to exploit, routing it into a pure and abstract circle, around a pure and abstract center. Secular solutions to the problem of evil typically involve lying about human behavior, whereas a holy life is the application of one's wicked intelligence to the production of the good and the true.

Semantic Apocalypse and Life After Humanism with R. Scott Bakker

I talked to fantasy author, philosopher, and blogger R. Scott Bakker about his views on the nature of cognition, meaning, intentionality, academia, and fiction/fantasy writing. See Bakker's blog, Three Pound Brain.

Listeners who enjoy this podcast might check out Bakker's What is the Semantic Apocalypse? and Enlightenment How? Omens of the Semantic Apocalypse.

This conversation was first recorded as a livestream on Youtube. Subscribe to my channel with one click, then click the bell to receive notifications when future livestreams begin.

Big thanks to all the patrons who keep this running.

Download this episode.

Against the Epistemic Status

I've been considering the idea of assigning an "epistemic status" to each of my blog posts, in the fashion of Scott Alexander. Basically: adding an addendum at the top of each blog post indicating the degree to which I really believe what is said in the blog post. Perhaps I no longer believe what I wrote a year ago — in that case, I might add an epistemic status warning readers that I no longer believe it. That's the idea.

I've decided I'm against epistemic statuses. TLDR: I think at best they are useless, merely restating the trust problem they seek to address; and at worst, I think they could very well decrease the total, long-run truth-value obtained within a writing/reading community.

The epistemic status gives a false sense of rigor and humility. One reason is that there's no epistemic status for the epistemic status. An ES is not a confidence interval, derived by some transparent calculation procedure. It is probably more subjective and error-prone than the full blog post. One reason I never post an ES — when I've sometimes had the urge to, especially after weaker posts — is that I always feel so radically unsure of my post-writing impressions that for an ES to actually increase the transparency/reliability of the post, I feel like I'd have to say I'm also utterly unsure of the ES, and so on to infinite regress. Thus, tacking on an ES at the top of the article feels to me primarily like rational self-skepticism/humility-signaling, which doesn't in any way solve the problem. Also, from the reader's perspective, the epistemic status begs the question of how reliable any blog post is, because they still have to decide whether they trust the epistemic status. For new visitors, the epistemic status therefore solves no problem, and merely adds text while bumping the trust/credibility problem up a level.

The practice of adding post-hoc epistemic statuses lends to the entire blog an impression of always being epistemically up to date, but I don't feel I will ever have the time or conscientiousness to really keep all the posts' epistemic statuses up to date with my current judgment. Therefore if I simply overlook some old posts I don't really care about anymore, and readers see there is no epistemic status downgrading them, they might reasonably infer I still fully own those beliefs.

For return visitors and regular readers of a blog, the ES is essentially an appeal to one's own authority, a cashing-in on past trust and cultural capital earned by the author's substantive content.

Ultimately, every claim I make, or inference I imply, nested in every article I write, nested in every collection of articles, has to be given some level of credence by each individual reader. Whether one line is a joke or not, whether one claim is likely to be true or mistaken — these are questions every reader must answer for themselves based on whatever information they have about my claims, and the project I'm embarked on, and my reliability as a source. Assigning an ES to each unit I publish would be to lull the reader's vigilance into an unjustifiably comfortable slumber. It might make them feel like I can take care of their meta-rationality for them, when in fact it's an irreducible existential burden for all thinking adults. I don't want my readers to feel like they are cast adrift in the wilderness, but alas they are. So I don't really want to make them feel otherwise.

I think the normal presumptions about the nature of blogging are meta-rationally superior to epistemic statuses. It's just a blog: take everything with a huge grain of salt, but if something is really well demonstrated and supported then believe it, as you see fit. If you see a post from three years ago, of course the author has probably changed their views to some degree. The best response to this is to read more contemporary posts, to judge for yourself what this author really thinks on the whole. If a reader doesn't care to do this, no epistemic status is going to ensure their initial exposure is lodged into their long-term memory correctly. Such a person will either never remember the blog post or, if they are so unwise as to memorize and repeat to their friends something I reported in one blog post three years ago, I suspect they would bulldoze right over even the most cautious epistemic status warnings.

Better is to just put super-wide confidence intervals on everything one writes. Some things I say will be dumb, biased, and/or mistaken. But some things I write will — hopefully — get closer to way bigger truths than I can even appreciate! If you assign epistemic statuses to your blog posts, you really should also say when and where you think you are super correct. Most sane people will not want to place at the top of a blog post "Epistemic status: I feel a 5% chance that the claims below could change the course of world history." But any serious and passionate intellectual gets some taste of this genuine feeling every now and then! Thus, if this epistemic status business does not include such self-aggrandizing caveats, that too might be systematically biasing. I'd rather just give one big caveat about my whole body of writing, that it is merely the inspired guesswork of one person trying their best to be correct. Implicitly, some stuff will be more wrong than it might seem, and some stuff will be even more right than it seems. The only commitment one needs to make is to do one's best, in a way that updates moving forward, rather than attempting to move backward with post-hoc re-evaluations.

I admit that some of my intuition on this question is due to my temperament: I like to work fast, always move forward, never look back. I can do the disciplined work of editing but I'm not exceptionally high in Orderliness; I run mostly on the dopaminergic movements of exploration, inspiration and creation, adding just enough conscientiousness to complete things responsibly. As far as bloggers and "content creators" go, I'm high-variance: I put out a lot of high-quality stuff that I take very seriously, but I also put out a lot of random stuff sometimes bordering on bad comedy. So part of what I wrote above is just rationalizing all of this. But this is also my personal alternative to the epistemic status: self-conscious reflections weaved immanently into any given unit of production.

