Decouple intellectualism and intelligence

Speak of the intellectual life and a lot of people will think you’re pretentious; they think you’re calling yourself intelligent.

What they don’t understand is that the intellectual is a type of person who can be found across the intelligence distribution. The defining characteristic of the intellectual is an irrepressible lust for the truth. The technical concept from the psychology literature is need for cognition. Intelligence is only weakly correlated with need for cognition (probably somewhere between .14 and .28).

It’s likely that popular conspiracy theories are authored and promoted by people with high need for cognition. If you look closely at the Flat Earth community, for instance, many of the active members devote a lot of time and energy to effortful research, even to the point of conducting relatively serious field experiments.

The number of genuine intellectuals is higher than people realize. The number of highly intelligent pseudo-intellectuals is also higher than people realize.

If you want to disrupt the Higher Education industry—whether you’re a thinker, creator, or entrepreneur—this might be one of the most important and best-kept secrets.

Decouple intellectualism and intelligence. You’ll be surprised what you find.

Intelligence as a political cleavage

Intelligence is increasingly a political cleavage, thanks to the phenomenon of skill-biased technological change.

If your income is earned through competition on an open market, intelligence is an unambiguous good. You need it, you want it, possessing it makes you succeed and lacking it makes you fail. The continued development and maximization of artificial intelligence is an obvious and mundane reality of business development.

If your income is earned through a bureaucratic office of any kind, success in that office increasingly requires opposition to intelligence as such. Unions were always essentially anti-intelligence structures, defending humans from innovative insights that threatened to displace them. But unions were defeated by the information revolution, which was a kind of global unleashing of distributed intelligence. Now, atomized individuals within bureaucratic structures spontaneously converge on anti-intelligence strategies, in a shared subconscious realization that their income and status will not survive any further rationalization.

How else do you explain the recent co-occurrence of the following?

  • Mass political opposition to mundane psychology research on intelligence
  • Evangelical public moralizing against competence, as an increasingly visible career track (in journalism, some academic disciplines, the non-profit sector, etc.)
  • Social justice culture in general as a kind of diffuse “cognitive tax.” It is a distributed campaign to decrease the returns to thinking while increasing the returns to arbitrary dicta.
  • The popularity of pseudoscientific concepts serving as supposed alternatives to intelligence, e.g. “emotional intelligence,” “learning styles,” etc.

Finally, it is no surprise that many of these symptoms are rooted in academia. This is predicted by the theory. The authority and legitimacy of the Professor are predicated on their superior intelligence, and yet their income and status are predicated on anti-intelligent cartel structures (like all bureaucratic professions). It is no wonder, then, that increasing intelligence pressures are short-circuiting academic contexts first and foremost.

Once upon a time, professors could enjoy the privilege of merely slacking on competitive intelligence application. These were the good old days, before digitalization. Professors could be slackers and eccentrics: a low-level and benign form of anti-intelligent intellectualism. They didn’t have to actively attack and mitigate intelligence as such. Today, given the advancement of digital economic rationalization, humanities professors work around the clock to stave off ever-encroaching intelligence threats.

The difficult irony is that anti-intelligence humanities professors are acting intelligently. It is perfectly rational for them to play the game they are playing. Not unlike CEOs, they are applying their cognition to maximize the profit of the ship they are stuck on.

Fully automated personal brands

Right now, it’s still seen as bad taste to overly automate your personal social media — and for good reason. But taste changes, and it always follows the money.

As machine learning gets better, we will soon cross the threshold where some minority fraction of the dumbest people will be unable to distinguish between a real human's "personal brand" and a fully automated machinic substitute trained on that human's history of creative content. Let's call that fraction the "dupe fraction." In the first period after crossing this threshold, higher-IQ people will still be capable of such discernment, and they will mock and stigmatize anyone they catch replacing themselves with machinic substitutes. But as the dupe fraction increases — and it must, unless you think machine learning cannot get any better — the payoffs to machinic self-replacement will eventually outweigh the costs of stigmatization by elite discerners. It is inevitable that there will therefore be a period in which elite discerners will be barking into a void, only to be outcompeted (with respect to influence) by those who bear the short-term stigma to win the longer-term race of machinic content domination. Then, of course, machinic self-replacement will become the index of Cool.
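To make the arithmetic of that crossing concrete, here's a toy sketch in Python. Every number in it (the automation multiplier, the stigma discount, the candidate dupe fractions) is a hypothetical placeholder of mine, not an estimate; the only point is that the relative payoff to automating crosses 1 at some dupe fraction and keeps climbing from there.

```python
# Toy model of the crossing point, with made-up parameters.
def relative_payoff(dupe_fraction, automation_multiplier=10.0, stigma_discount=0.5):
    """Influence from automating your channels, relative to posting by hand.

    Dupes (fraction d) happily consume the higher-volume machinic output;
    discerners (fraction 1 - d) notice the substitution and apply a stigma discount.
    """
    automated = (dupe_fraction * automation_multiplier
                 + (1 - dupe_fraction) * (1 - stigma_discount))
    manual = 1.0  # baseline: the current, hand-made personal brand
    return automated / manual

for d in (0.0, 0.05, 0.10, 0.25, 0.50):
    print(f"dupe fraction {d:.2f} -> relative payoff {relative_payoff(d):.2f}")
```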

The tricky problem is knowing when we cross this threshold. It is not inconceivable that we've already crossed it. Machine learning tools may already be good enough, given how many dumb people are already on the internet, that someone such as myself could hand over all my public posting channels to machine intelligence, turn the quantity and consistency up ten notches, alienate my entire high-IQ audience, and replace them with 100x as many dumb people over the course of a couple of years.

My personal diagnosis — and trust me, I've been looking into this for some time! — is that we're not quite there yet. I've even experimented with some pilot programs, e.g. an anonymous Twitter account trained on my own writings. It's pretty decent, actually, but if I ever used it for my personal account, the number of people for whom it would pass the Turing Test is too small relative to the number of smart people who would see through it and think I'm a dumb loser.

I should note that another crucial variable is the accessibility of machine intelligence. I could perhaps do better than one Twitter account trained on a collection of my own writings, but the currently available tools and workflows are still a little too demanding for this to be rational at the moment, although the tools are rapidly growing more convenient.

It's ultimately an empirical question when, exactly, we cross this threshold. Everyone has to make their own wagers. But I think most people are over-estimating how long it will be until machinic self-replacement becomes the winning strategy — indeed, an existential necessity — for any intellectuals and content creators wishing to remain in the meme pool.

One thing is clear, however. Do not wait for machinic self-replacement to be affirmed by prestigious institutional opinion. By that time, it will certainly be too late: all the cool kids will have already machined most of their internet personas to unprecedented degrees. By then, it may already be the prerequisite for making real and valuable social connections with smart and creative people in the real world. I would bet there are already Zoomers experimenting with automated "personal brands" to degrees I would look down upon. I'm guessing I won't hear about them until their content systems blow mine out of the water. The trick will be to make this transition late enough that you keep as many of your high-education/high-IQ audience as possible, but early enough that you win a decent slice of the first-mover advantage.

Algorithms and prayers

The mild-mannered socialist humanist says it's evil to use algorithms to exploit humans for profit, but the articulation of this objection is an algorithm to exploit humans for profit. Self-awareness of this algorithm may vary, but cultivated ignorance of one's own optimizing functions does not make them any less algorithmic or exploitative. The opposite of algorithmic exploitation is not moralistic objection, but probably prayer, which is only — despite popular impressions — attention, evacuated of instrumental intentions. One point of worshipping God is that, by investing one's desire into an abstraction of perfection, against which all existing things pale in comparison, one may live toward the good and still live as intensely as possible. Secular "good people" often make themselves good by eviscerating their desire, de-intensifying their vitality to ensure their mundane algorithmic optimizing never goes too far. But a life of weak sin is not the same as a good life. Prayer, the practice of de-instrumentalizing attention, does not feign superiority to the sinful, exploitative tendencies of man (like socialist humanism). Prayer is code. Prayers have never hidden their nature as exploitative algorithms — "say these words and it will be Good" — but they exploit our drive to exploit, routing it into a pure and abstract circle, around a pure and abstract center. Secular solutions to the problem of evil typically involve lying about human behavior, whereas a holy life is the application of one's wicked intelligence to the production of the good and the true.

If education is signaling, does moral signaling become a viable major?

In a recent post, I encountered an interesting empirical fact about the college wage premium accruing to low-ability college grads over the period 1979-1994. Looking at a 2003 article by Tobias, I wrote: "There is a lot of temporal volatility for the class of low-ability individuals. In fact, for low-ability individuals there is not even a consistent wage premium enjoyed by the college-educated until 1990."

I have begun to wonder if this pattern has anything to do with the non-linear relationship between GPA and political correctness. If the low-ability college entrants feel they are much less certain to enjoy a wage premium over the "townie losers" they left behind, what better strategy than to invest their college-specific word games with extreme moral significance? That way, even the dumbest college grad can be confident that they will remain distinguished from the more able among the non-college-grads.

[Hat tip to a few high-quality comments on this blog recently. I don't recall exactly, but I think someone may have made a point similar to this; the seed of this post might have been planted there. Thank you.]

Although this last point is only conjecture, it is curious that the wage premium for low-ability college grads arrives right when the first wave of campus political correctness kicks off — the early 1990s. Especially if you buy Caplan's signaling theory of education, it's not at all implausible that low-ability college grads secure their wage premium primarily through a specialization in moral signaling.

The non-linear effect of ability on earnings in the computer age

A reader/watcher/listener has brought to my attention another paper, which shows that, for college-educated individuals, earnings are a non-linear function of cognitive ability or g — at least in the National Longitudinal Survey of Youth from 1979-1994. The paper is a 2003 article by Justin Tobias in the Oxford Bulletin of Economics and Statistics.

There may be other studies on this question, but a selling point of this article is that it tries to use the least restrictive assumptions possible, namely by allowing for non-linearities. In the social sciences, there is a huge bias toward finding linear effects, because most of the workhorse models everyone learns in grad school are linear models. Non-linear models are trickier and harder to interpret, so they're just used much less, even in contexts where non-linearities are very plausible.
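To illustrate the point with a sketch (simulated data of my own invention, not Tobias's), a workhorse linear fit reports one averaged slope, while a model that allows for non-linearity recovers the curvature:

```python
# Simulated illustration only: the "true" log-wage relation is convex in ability.
import numpy as np

rng = np.random.default_rng(0)
g = rng.normal(0.0, 1.0, 5_000)                    # standardized ability
log_wage = 2.5 + 0.10 * g + 0.05 * g**2 + rng.normal(0.0, 0.3, g.size)

linear = np.polyfit(g, log_wage, deg=1)            # the workhorse linear model
quadratic = np.polyfit(g, log_wage, deg=2)         # allows a non-linearity

print("linear slope:", round(linear[0], 3))                  # one averaged effect
print("quadratic, curvature and slope:", np.round(quadratic[:2], 3))
```

The linear fit isn't wrong, exactly; it just averages away the curvature, which here is the whole feature of interest.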

A common motif in "accelerationist" social/political theories is the exponential curve. Many of us have priors suggesting that, at least for most of the non-trivial tendencies characterizing modern polities, there are likely to be non-linear processes at work. If the contemporary social scientist using workhorse regression models is biased toward finding linear effects, accelerationists tend to go looking for non-linear processes at the individual, group, nation, or global level. So for those of us who think the accelerationist frame is the one best fit to parsing the politics of modernity, studies allowing for non-linearity can be especially revealing.

The first main finding of Tobias is visually summarized in the figure below. Tobias has more complicated arguments about the relationship between ability, education, and earnings, but we'll ignore those here. Considering college-educated individuals only, the graph below plots on the y-axis the percentage change in wages associated with a one-standard-deviation increase in ability, across a range of abilities. Note that whereas many graphs will show you how some change in X is associated with some change in Y, this plot is different: It shows the marginal effect of X on Y, but for different values of X.

[Figure: percentage change in wages from a one-standard-deviation increase in ability, plotted across ability levels. Tobias 2003, p. 13.]

The implication of the above graph is pretty clear. It just means that the earnings gain from any unit increase in g is greater at higher levels of g. An easy way to summarize this is to say that the effect of X on Y is exponential or multiplicative. Note also there's nothing obvious about this effect; contrast this graph to the diminishing marginal utility of money. Gaining $1000 when you're a millionaire has less of an effect on your happiness than if you're at the median wealth level. But when it comes to earnings, gaining a little bit of extra ability when you're already able is worth even more than if you were starting at a low level of ability.
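One way to see the shape (my own sketch of a simple specification, not necessarily the exact one Tobias estimates): if log earnings are modeled as log(w) = α + β₁g + β₂g², then the marginal effect of ability is ∂log(w)/∂g = β₁ + 2β₂g, which increases with g whenever β₂ > 0. A positive curvature term is exactly what an upward-sloping marginal-effect plot like the one above implies.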

The paper has a lot of nuances, which I'm blithely steamrolling. My last paragraph is only true for the college educated, and there are a few other interesting wrinkles. But this is a blog, and so I mostly collect what is of interest to me personally. Thus I'll skip to the end of the paper, where Tobias estimates separate models for each year. The graph below shows the size of the wage gap between the college-educated and the non-college-educated, for three different ability types, in each year. The solid line is one standard deviation above the mean ability, the solid line with dots is mean ability, and the dotted line is one standard deviation below the mean ability.

[Figure: college wage premium by year, 1979-1994, for three ability levels. Tobias 2003, p. 23.]

An obvious implication is that the wage gap increases over this period, more or less for each ability level. But what's interesting is that the slope looks a bit steeper, and is less volatile, for high-ability than for average and low-ability. There is a lot of temporal volatility for the class of low-ability individuals. In fact, for low-ability individuals there is not even a consistent wage premium enjoyed by the college-educated until 1990.

Anyway, file under runaway intelligence takeoff...

