Restricted Range and the RPG Fallacy

[A note: this post may be more controversial than is usual for me? I’m not sure; I lost my ability to perceive controversy years ago in a tragic sarcasm-detector explosion]

[Another note: this post is long]


Consider the following sentence:

“Some people are funnier than others.”

I don’t think many people would take issue with this statement – it’s a fairly innocuous thing to say. You might quibble about humour being subjective or something, but by and large you’d still likely agree with the sentiment.

Now imagine you said this to someone and they indignantly responded with the following:

“You can’t say that for sure – there are different types of humour! Everyone has different talents: some people are good at observational comedy, and some people are good at puns or slapstick. Also, most so-called “comedians” are only “stand-up funny” – they can’t make you laugh in real life. Plus, just because you’re funny doesn’t mean you’re fun to be around. I have a friend who’s not funny at all but he’s really nice, and I’d hang out with him over a comedian who’s a jerk any day. Besides, no one’s been able to define funniness anyway, or precisely measure it. Who’s to say it even exists?”

I don’t know about you, but I would probably be pretty confused by such a response. It seems to consist of false dichotomies, unjustified assumptions, and plain non-sequiturs. It just doesn’t sound like anything anyone would ever say about funniness.

On the other hand, it sounds exactly like something someone might say in response to a very similar statement.


“Some people are more intelligent than others.”

“You can’t say that for sure – there are different types of intelligence! Everyone has different talents: some people have visual-spatial intelligence, and some people have musical-rhythmic intelligence. Also, most so-called “intellectuals” only have “book-smarts” – they can’t solve problems in the real world. Plus, just because you’re smart doesn’t mean you’re a hard worker. I have a friend who’s not very bright but he works really hard, and I’d choose him over a lazy brainiac any day. Besides, no one’s been able to define intelligence anyway, or precisely measure it. Who’s to say it even exists?”

Sound more familiar?


The interesting thing is, you don’t always get a response like that when talking about intelligence.

Quick – think about someone smarter than yourself.

Pretty easy, right? I’m sure you came up with someone. Okay, now think of someone less smart than you. Also not too hard, I bet. When you forget about all the philosophical considerations, when there’s no grand moral principle at stake – when you just think about your ordinary, everyday life, in other words – it becomes a lot harder to deny that intelligence exists. Maybe Ray from accounting is a little slow on the uptake, maybe your friend Tina always struck you as sharp – whatever. The point is, we all know people who are either quicker or thicker than ourselves. In that sense almost everyone acknowledges, at least tacitly, the existence of intelligence differences.

It’s only when one gets into a debate about intelligence that things change. In a debate concrete observations and personal experiences seem to give way to abstract considerations. Suddenly intelligence becomes so multifaceted and intangible that it couldn’t possibly be quantified. Suddenly anyone who’s intelligent has to have some commensurate failing, like laziness or naivete. Suddenly it’s impossible to reason about intelligence without an exact and exception-free definition for it.

Now, to be clear, I’m not saying a reasonable person couldn’t hold these positions. While I think they’re the wrong positions, they’re not prima facie absurd or anything. And who knows, maybe intelligence really is too ill-defined to discuss meaningfully. But if you’re going to hold a position like that then you should hold it consistently – and the debaters who say you can’t meaningfully talk about intelligence often seem perfectly willing to go home after the debate and call Ray from accounting an idiot.

My point, stated plainly, is this: people act as if intelligence exists and is meaningful in their everyday life, but are reluctant to admit that when arguing or debating about intelligence. Moreover, there’s a specific tendency in such debates to downplay intelligence differences between people, usually by presuming the existence of some trade-off – if you’re smart then you can’t be diligent and hardworking, or you have to be a bad person, or you have to have only a very circumscribed intelligence (e.g. book smarts) while being clueless in other areas.

This is a weird enough pattern to warrant commenting on. After all, as I pointed out in the opening, most people don’t tie themselves up in knots trying to argue that funniness doesn’t exist, or that everyone is funny in their own way. Why the particular reticence to admit that someone can be smart? And why is this reticence almost exclusively limited to debates and arguments? And – most importantly – why is there this odd tendency to deny differences by assuming trade-offs? I can think of at least three answers to these questions, each of which are worth going into.


First, let’s address this abstract-concrete distinction I’ve been alluding to. People seem to hold different beliefs about intelligence when they’re dealing with a concrete, real-life situation than they do when talking about it abstractly in a debate. This is actually a very common pattern, and it doesn’t just apply to intelligence. It goes by a few names: construal level theory in academic circles, and near/far thinking as popularized by Robin Hanson of Overcoming Bias. Hanson in particular has written a great deal about the near/far distinction over the years – here’s a summary of his views on the topic:

The human mind seems to have different “near” and “far” mental systems, apparently implemented in distinct brain regions, for detail versus abstract reasoning.  Activating one of these systems on a topic for any reason makes other activations of that system on that topic more likely; all near thinking tends to evoke other near thinking, while all far thinking tends to evoke other far thinking.

These different human mental systems tend to be inconsistent in giving systematically different estimates to the same questions, and these inconsistencies seem too strong and patterned to be all accidental.  Our concrete day-to-day decisions rely more on near thinking, while our professed basic values and social opinions, especially regarding fiction, rely more on far thinking.  Near thinking better helps us work out complex details of how to actually get things done, while far thinking better presents our identity and values to others.  Of course we aren’t very aware of this hypocrisy, as that would undermine its purpose; so we habitually assume near and far thoughts are more consistent than they are.

The basic idea is, people tend to have two modes of thought: near and far. In near mode we focus on the close at hand, the concrete, and the realistic. Details and practical constraints are heavily emphasized while ideals are de-emphasized. Near mode is the hired farmhand of our mental retinue: pragmatic, interested in getting the job done, and largely unconcerned with the wider world or any grand moral principles.

In contrast, far mode focuses more on events distant in space or time, the abstract over the concrete, and the idealistic over the realistic. Thinking in far mode is done in broad strokes, skipping over boring logistics to arrive at the big picture. Far mode is the brain’s wide-eyed dreamer: romantic, visionary, and concerned with the way the world could be or should be rather than the way it is.

The near/far distinction is an incredibly useful concept to have crystallized in your brain – once you’ve been introduced to it you see it popping up all over the place. I’ve found that it helps to make sense of a wide variety of otherwise inexplicable human behaviours – most obviously, why I find myself (and other people) so often taking actions that aren’t in accordance with our professed ideals (actions are near, ideals are far). And if you’re ever arguing with someone and they seem to be saying obviously wrongheaded things, it’s often useful to take a step back and consider whether they might simply be thinking near while you’re thinking far, or vice versa.

The applicability of the near/far idea to the above discussion should be obvious, I think. In a debate you’re dealing with a subject very abstractly, you face relatively low real-world stakes, and you have an excellent opportunity to display your fair-minded and virtuous nature to anyone watching. In other words, debates: super far. So naturally people tend to take on a far-mode view of intelligence when debating about it. And a far mode view of intelligence will of course accord with our ideals, foremost among them being egalitarianism. We should therefore expect debaters to push a picture of intelligence that emphasizes equality: no one’s more intelligent than anyone else, everyone’s intelligent in their own way, if you’re less intelligent you must have other talents or a better work ethic, etc. In other words, people wind up saying things that are optimized for values rather than truth, and the result is a debate that – in both the Hansonesque and Larsonesque senses – is way off to the far side.

Of course, just because a thought comes from far mode thinking doesn’t mean it’s wrong – both modes have their advantages and disadvantages. But all else being equal we should expect near mode thinking to be more accurate than far, simply because it’s more grounded in reality. A screwup in near mode is likely to have much worse consequences than a screwup in far mode, and this keeps near mode relatively honest. For example, a bad performance review at your job could be the difference between you getting a promotion and not getting a promotion. (Or something? I’ve never actually had a real job before) In that case it might be very relevant for you to know if Ray from accounting is going to make some mistake that will affect you and your work. And when he does make a mistake, you’re not going to be thinking about how he probably has some other special talent that makes up for it – you’re going to get Barry to double-check the rest of his damn work.

Near mode isn’t infallible, of course – for example, near mode emphasizes things we can picture concretely, and thus can lead us to overestimate the importance of a risk that is low probability but has high salience to us, like a plane crash. In a case like that far thinking could be more rational. Importantly, though, this near mode failure comes from a lack of real-world experiential feedback, not an excess. When near mode has a proper grounding in reality, as it usually does, it’s wise to heed its word.


Let’s go back to my (admittedly made up) response in Section I for a second. I want to focus on one notion in particular that was brought up:

“… [J]ust because you’re smart doesn’t mean you’re a hard worker. I have a friend who’s not very bright but he works really hard, and I’d choose him over a lazy brainiac any day.”

On the face of it, this is a very strange thing to say. It’s a complete non-sequitur, at least formally: totally unrelated to the proposition at hand, which was that some people are more intelligent than others. In fact, if you’ll notice, our interlocutor has gone beyond a non-sequitur and actually conceded the point in this case – they’ve acknowledged that people can be more or less intelligent. So what’s going on here?

First I should emphasize that, although I made this response up, I’ve seen people make exactly this argument many times before. I’ve seen it in relation to intelligence and hard work, and I’ve seen it in a plethora of other contexts:

“Do you prefer more attractive girls?”/”Well I’d rather have a plain girl with a great personality than someone who’s really hot but totally clueless.”

“Movie special effects have gotten so much better!”/”Yeah but I’d prefer a good story over good CGI any day.”

“I hate winter!”/”Sure, but better it be cold and clear than mild and messy.”

What do all these responses have in common? They all concede the main point at hand but imply that it doesn’t matter, and they all do so by bringing up a new variable and then assuming a trade-off. The trade-off hasn’t been argued for or shown to follow, mind you – it’s just been assumed. And in all cases one is left to wonder: why not both? Why not someone who’s intelligent and hardworking? Why not a day that’s warm and sunny?

(Although to be fair arguments become a lot more fun when you can just postulate any trade-off you want: “You just stole my purse!”/”Ah yes, but wouldn’t you rather I steal your purse, than I not steal your purse and GIANT ALIEN ZOMBIES TORTURE US ALL TO DEATH?”)

I’ve always found this tactic annoying, and I see it frequently enough that I couldn’t help but give it a name. I’ve taken to calling it the RPG fallacy.

Roughly speaking, I would define the RPG fallacy as “assuming without justification that because something has a given positive characteristic, it’s less likely to have another unrelated positive characteristic.” It gets its name from a common mechanic in role-playing games (RPGs) where a character at the start of the game has a number of different possible skills (e.g. strength, speed, charisma, intelligence, etc.) and a fixed number of points to be allotted between them. It’s then up to you as the player to decide where you want to put those points. Note that this system guarantees that skills will be negatively correlated with one another: you can only give points to a certain skill at the expense of others. So, for example, you can have a character who’s very strong, but only at the expense of making her unintelligent and slow. Or you could make your character very agile and dexterous, but only if you’re fine with him being an awkward weakling. The fixed number of points means that for all intents and purposes, no character will be better than any other: they can be amazing in one area and completely useless in the others, or they can be pretty good in a few areas and not-so-great in the others, or they can be mediocre across the board, but they can’t be uniformly good or uniformly bad. They will always have a strength to compensate for any particular weakness.
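The point-allocation mechanic makes the negative correlation easy to demonstrate with a quick simulation (a hypothetical sketch of my own, not tied to any particular game): give every character the same fixed budget of points, spread randomly across a few skills, and any two skills come out negatively correlated across characters, purely as a consequence of the shared budget.

```python
import random

def pearson(xs, ys):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
N_CHARS, N_SKILLS, BUDGET = 10_000, 4, 100

strength, intellect = [], []
for _ in range(N_CHARS):
    # Roll raw aptitudes, then rescale so every character's points sum
    # to the same fixed budget: the RPG point-allocation constraint.
    raw = [random.expovariate(1.0) for _ in range(N_SKILLS)]
    scale = BUDGET / sum(raw)
    strength.append(raw[0] * scale)
    intellect.append(raw[1] * scale)

# The fixed budget forces a negative correlation between any two skills
# (roughly -1/(N_SKILLS - 1) under this allocation scheme).
print(round(pearson(strength, intellect), 2))
```

Nothing about the individual aptitude rolls pushes the skills apart; the anticorrelation comes entirely from the normalization step. That constraint is exactly what doesn’t exist in the real world.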

The RPG fallacy, then, is importing this thinking into the real world (as in the above examples). It’s a fallacy because we don’t have any particular reason to believe that reality is like a video game: we’re not given a fixed number of “points” when we’re born. No, we may wish it weren’t so, but the truth is that some people get far more points than others. These are the people who speak four languages, and have written five books, and know how to play the banjo and the flute, and have run a three-hour marathon, and are charming and witty and friendly and generous to boot. People like this really do exist, and unfortunately so do their counterparts at the opposite end of the spectrum. And once one begins to comprehend the magnitude of the injustice this entails – how incredibly unfair reality can be – one starts to see why people might be inclined to commit the RPG fallacy.

(We should try to resist this inclination, of course. Burying our heads in the sand and ignoring the issue might make us feel better, but it won’t help us actually solve the problem. This is one of the many reasons I consider myself a transhumanist)

It’s important to emphasize again that the RPG fallacy is only a fallacy absent some reason to think that the trade-off holds. If you do have such a reason, then there’s of course nothing wrong with talking about trade-offs. After all, plenty of trade-offs really do exist in the real world. For example, if someone devotes their entire life to learning about, say, Ancient Greece, then they probably won’t be able to also become a world-class expert on computer microarchitectures. This is a real trade-off, and we certainly wouldn’t want to shut down all discussion of it as fallacious. The reason this case would get a pass is because we can identify why the trade-off might exist – in this case because there’s a quantity analogous to RPG skill points, namely the fixed number of hours in a day (if you spend an hour reading about Plato’s theory of forms, that’s an hour you can’t spend learning about instruction pipelining and caching). Thus expecting trade-offs in expertise to hold in real life wouldn’t be an example of the RPG fallacy.

But even here we have to be careful. Expertise trade-offs will only hold to the extent that there really is a fixed amount of learning being divvied up. And while it’s certainly true that one has to decide between studying A and studying B on a given day, it’s also true that some people decide to study neither A nor B – nor much of anything, really. Moreover, some people are simply more efficient learners: they can absorb more in an hour than you or I could in a day. These give us some reasons to think that expertise trade-offs need not hold in all cases – and indeed, in the real world some people do seem to be more knowledgeable than others across a broad range of domains. This illustrates why it’s so important to be aware of the RPG fallacy – unless you’re taking a long hard look at exactly what constraints give rise to your proposed trade-off, it’s all too easy to just wave your hands and say “oh, everyone will have their own talents and skills.”

So that’s the second answer to our earlier questions. Why do people keep bringing up seemingly irrelevant trade-offs when talking about intelligence? Because they occupy an implicit mindset in which aptitude in one area comes at the expense of aptitude in another, and want to emphasize that being intelligent would therefore come at a cost. This in turn is related to the earlier discussion of far mode and its focus on egalitarianism and equality – if everyone has the same number of skill points, then no one’s any better than anyone else. We could probably classify both of these under the heading of “just-world” thinking – believing the world is a certain way because it would be fairer that way. Unfortunately, so far as I’ve been able to tell, if there is a game designer for this world he doesn’t seem much concerned with fairness.


Now we come to the most speculative (and not coincidentally, the most interesting) part of this piece. So far I’ve been essentially taking it as given that the trade-offs people bring up when discussing intelligence (e.g. one type of intelligence versus another, or being smart versus being a hard worker) are fictitious. But perhaps I’m being unfair – might there not be a grain of truth in these kinds of assertions?

Well, maybe. A small grain, anyway. There might be reason to think that even if these trade-offs don’t actually exist in the real world, people will still perceive them in their day-to-day lives – phantom correlations, if you like.

Let’s consider the same example we looked at last section, of hard work and intelligence. We want to examine why people might wind up perceiving a negative correlation between these qualities. For the sake of discussion we won’t assume that they’re actually negatively correlated – in fact, to make it interesting we’ll assume that they’re positively correlated. And we’ll further assume for simplicity that intelligence and hardworking-ness are the only two qualities people care about, and that we can quantify them perfectly with a single number, say from 0-100 (man, we really need a better word for the quality of being hardworking – I’m going to go with diligence for now even though it doesn’t quite fit). If we were to then take a large group of people and draw a scatterplot of their intelligence and diligence values, it might look something like this:

[scatterplot: intelligence vs. diligence for the whole population, showing a positive correlation]
With me so far? Okay, now the interesting part. In many respects people tend to travel in a fairly constrained social circle. This is most obvious in regards to one’s job: a significant fraction of one’s time is typically spent around coworkers. Now let’s say you work at a fairly ordinary company – respectable, but not outrageously successful either. The hiring manager at your company is then going to be faced with a dilemma – of course they’d like to hire people who are both hardworking and intelligent, but such people always get scooped up by better companies that are willing to pay more. So they compromise: they’re willing to accept a fairly lazy person so long as she’s intelligent, and they’ll accept someone who’s not so bright if he’ll work hard enough to make up for it. Certainly though they don’t want to hire someone who’s both lazy and unintelligent. In practice, since in this world intelligence and diligence can be exactly quantified, they’d likely end up setting a threshold: anyone with a combined diligence+intelligence score of over, say, 100 will be hired. And since the better companies will also be setting up similar but higher thresholds, it might be that your company will be unable to hire anyone with a combined score over, say, 120.

So: for the people working around you, intelligence+diligence scores will always be somewhere between 100 and 120. Now what happens if we draw our scatterplot again, but restrict our attention to people with combined scores between 100 and 120? We get the following:

[scatterplot: the same data restricted to combined scores between 100 and 120, showing a negative correlation]
And lo and behold, we see a negative correlation! This despite the fact that intelligence and diligence were assumed to be positively correlated at the outset. It’s obvious what’s going on here: by restricting the range from 100 to 120, your company ensures that people can only be hired by either being highly intelligent and not very diligent, or highly diligent and not very intelligent (or perhaps average for both). The rest of the population will be invisible to you: the people who are both intelligent and diligent will end up at some star-studded Google-esque company, and the people who are neither will end up at a lower-paying, less prestigious job. And so you’ll walk around the office at work and see a bunch of people of approximately the same skill level, some smart but lazy and some dull but highly motivated, and you’ll think, “Man, it sure looks like there’s a trade-off between being intelligent and being hardworking.”

As I said, phantom correlations.
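The sign flip is easy to reproduce in a toy simulation (all the specific numbers here are made up for illustration, following the setup above): draw intelligence and diligence with a mild positive correlation, then restrict attention to the 100–120 hiring band.

```python
import random

def pearson(xs, ys):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)

# A population where intelligence and diligence share a common factor,
# so they are mildly *positively* correlated (theoretical r = 0.2 here).
people = []
for _ in range(100_000):
    shared = random.gauss(0, 1)
    intel = 50 + 15 * (0.5 * shared + random.gauss(0, 1))
    dilig = 50 + 15 * (0.5 * shared + random.gauss(0, 1))
    people.append((intel, dilig))

everyone = list(zip(*people))
print(round(pearson(*everyone), 2))   # positive in the full population

# Your company's hiring window: combined score between 100 and 120.
hired = [(i, d) for i, d in people if 100 <= i + d <= 120]
band = list(zip(*hired))
print(round(pearson(*band), 2))       # negative among your coworkers
```

Because the band pins the sum of the two scores into a narrow range, the only remaining variation is the difference between them – so among the hired, more of one quality almost mechanically means less of the other, even though the full population shows the opposite relationship.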

We can apply this idea to the real world. Things are of course much messier in reality: companies care about far more than two qualities, few of the qualities they do care about can be quantified precisely, and for various reasons they probably end up accepting people with a wider range of skill levels than the narrow slice I considered above. All of these things will weaken the perceived negative correlation. But the basic intuition still goes through I think: you shouldn’t expect to see anyone completely amazing at your company – if they were that amazing, they’d probably have a better job. And similarly you shouldn’t expect to see someone totally useless across the board – if they were that useless, they wouldn’t have been hired. The range ends up restricted to people of broadly similar skill levels, and so the only time you’ll see a coworker who’s outstanding in one area is when they’re less-than-stellar in another.

Or, to take what might be a better example, consider an average university. A university is actually more like our hypothetical company above, in that they tend to care about easily quantifiable things like grades or standardized test scores. And they would probably have a similarly restricted range: the really good students would wind up going to a better university, and the really bad ones probably wouldn’t apply in the first place. So maybe in practice it would turn out that they could only reliably get students with averages between 80 and 90. In that case you would see exactly the same trade-off that we saw above: the people with grades in that range get them by being either hardworking or smart – but not both, and not neither. If they had both qualities they’d have higher grades, and if they had neither they’d have lower. So again: trade-offs abound.

Now, how much can this effect explain in real life? Maybe not a whole lot. For one, we have the messy complications I mentioned above, which will tend to widen the overall skill range of people at a given institution and therefore weaken the effect. More important, though, is the fact that people simply don’t spend 100% of their time with coworkers. They also have family and friends, not to mention (for most people) a decade plus in the public school system. That’s probably the biggest weakness of this theory: school. The people you went to school with were probably a fairly representative sample of the population, and not pre-selected to have a certain overall skill level. So in theory one should wind up with fairly accurate views about correlations and trade-offs between characteristics simply by going through high school and looking around.

Still, I think this is an interesting idea, and I think it could explain at least a piece of the puzzle at hand. Moreover, it seems to me to be an idea gesturing towards a broader truth: that trade-offs might be a kind of “natural” state that many systems tend towards – that in some sense, “only trade-offs survive”. Scott Alexander of Slate Star Codex had much the same idea (he even used the university example), and I want to explore it further in a future post.

But I guess the future can wait until later, and right now it’s probably time I got around to wrapping this post up.

So: if you’ll recall, we started out by pondering why debates about intelligence always turn so contentious, and why people have a tendency to assume that intelligence has to be traded off against other qualities. We considered some possible explanations: one was far mode thinking, and its tendency to view intelligence as a threat to equality. Another was the RPG fallacy, an implicit belief that everyone has a set number of “points” to be allocated between skills, necessitating trade-offs. And now we have our third explanation: that people tend to not interact much with those who have very high or very low overall skill levels, resulting in an apparent trade-off between skills among the people they do see.

These three together go a long way towards explaining what was – to me, anyway – a very confusing phenomenon. They may not give the whole story of what’s going on with our culture’s strange, love/hate relationship with intelligence – I think you’d need a book to do that justice – but they’re at least a start.


I want to close with a sort of pseudo-apology.

By nature I tend to be much less interested in object-level questions like “What is intelligence?”, and much more interested in meta-level questions like “Why do people think what they do about intelligence?”. I just usually find that examining the thought processes behind a belief is much more interesting (and often more productive) than debating the belief itself.

But this can be dangerous. The rationalist community, which I consider myself loosely a part of, is often perceived as being arrogant. And I think a lot of that perception comes from this interest we tend to have in thought processes and meta-level questions. After all, if someone makes a claim in a debate and you respond by starting to dissect why they believe it – rather than engaging with the belief directly – then you’re going to be seen as dismissive and condescending. You’re not treating their position as something to be considered and responded to, you’re treating it as something to be explained away. Thus, the arrogant rationalist stereotype:

“Oh, obviously you only think that because of cognitive bias X, or because you’re committing fallacy Y, or because you’re thinking in Z mode. I’ve learned about all these biases and therefore transcended them and therefore I’m perfectly rational and therefore you should listen to me.”

It dismays me that the rationalist community gets perceived this way, because I really don’t think that’s how most people in the community think. Scott Alexander put it wonderfully:

A Christian proverb says: “The Church is not a country club for saints, but a hospital for sinners”. Likewise, the rationalist community is not an ivory tower for people with no biases or strong emotional reactions, it’s a dojo for people learning to resist them.

We’re all sinners here. You don’t “transcend” biases by learning about them, any more than you “transcended” your neurons when you found out that was how your brain thought. I’ve known about far mode for years and I’m just as susceptible to slipping into it as anyone else – maybe even more so, since I tend to be quite idealistic. The point of studying rationality isn’t to stop your brain from committing fallacies and engaging in cognitive biases – you could sooner get the brain to stop thinking with neurons. No, the point of rationality is noticing when your brain makes these errors, and then hopefully doing something to correct for them.

That’s what I was trying to do with this piece. I was trying to draw out some common patterns of thought that I’ve seen people use, in the hope that they would then be critically self-examined. And I was trying to gently nudge people towards greater consistency by pointing out some contradictions that seemed to be entailed by their beliefs. But I was not trying to say “Haha, you’re so dumb for thinking in far mode.”

It’s very easy to always take the meta-road in a debate. You get to judge everyone else for their flawed thinking, while never putting forward any concrete positions of your own to be criticized. And you get to position yourself as the enlightened thinker, who has moved beyond the petty squabbles and arguments of mere mortals. This may not be the intention – I think most rationalists go meta because they really are just interested in how people think – but that’s how it comes across. And I think this can be a real problem in the rationalist community.

So in the spirit of trying to be a little less meta, I thought I’d end by giving my beliefs about intelligence – my ordinary, run-of-the-mill object-level beliefs. That way I’m putting an actual position up for criticism, and you can all feel free to analyze why it is I hold those beliefs, and figure out which cognitive biases I must be suffering from, and in general just go as meta on me as you want. It seems only fair.

Thus, without further ado…

  • I think intelligence exists, is meaningful, and is important
  • I think one can talk about intelligence existing in approximately the same way one talks about funniness existing, or athleticism
  • I think IQ tests, while obviously imperfect, capture a decently large fraction of what we mean when we talk about intelligence
  • Accordingly, I think most people are far too dismissive of IQ tests
  • I think the notion of “book-smarts” is largely bunk – in my experience intelligent people are just intelligent, and the academically gifted usually also seem pretty smart in “real life”
  • With that being said, some people of course do conform to the “book-smart” stereotype
  • I think that, all else being equal, having more intelligence is a good thing
  • I suppose the above means I sort of think that “intelligent people are better than other people” – but only in the sense that I also think friendly, compassionate, funny, generous, witty and empathetic people are better than other people. Mostly I just think that good qualities are good.
  • I think it’s a great tragedy that some people end up with low intelligence, and I think in an ideal world we would all be able to raise our intelligence to whatever level we wanted – but no higher

So there: that’s what I believe.

Scrutinize away.


28 thoughts on “Restricted Range and the RPG Fallacy”

  1. “Oh, obviously you only think that because of cognitive bias X, or because you’re committing fallacy Y, or because you’re thinking in Z mode. I’ve learned about all these biases and therefore transcended them and therefore I’m perfectly rational and therefore you should listen to me.”

    Well, if the person that the rationalist is talking with actually commits fallacy Y because of their bias X that activates in the Z mode they’ve obviously used to attain their conclusion, then it’s a perfectly valid argument. Yes, it can come across as condescending, but it can very well be true. Possible solutions include: formulating it in such a way that the other person does not feel insulted (and there are most likely several ways to do that in any given discussion), making the other person a rationalist, avoiding discussions with non-rationalists, and others I’m not thinking of right now.


    • Yes, of course. The last thing I want is for rationalists to not be able to talk about biases! That’s like the best part. I was mainly endorsing your first proposed solution in this post, but in practice (near/far again!) I usually end up adopting the “avoid non-rationalists” approach.


  2. Obviously I am a little late to this post, but I thought I would/could provide you a little impetus for further blogging (which seems to be your goal).

    I think (if I understand the concepts correctly) that your emphasis on near-thinking as “more accurate” thinking is too glib. Near thinking is going to activate a bunch of built-in, hardwired, “lizard-brain” biases. You touch on this in talking about air travel, but don’t actually settle on the “why”, which I believe [citation needed] has to do with things like weighted cost, opportunity cost and return on investment. “Is the waving grass, in fact, a hidden lion?” is a classic example: the small opportunity cost and absolute cost of taking a slightly different path to the watering hole are easily paid for by avoiding the 100% absolute loss of being eaten by a lion. Of course, our hard-wired, near-thinking biases don’t reason this out; they simply paid for themselves, evolutionarily speaking.

    So when we talk about Ray from Accounting and the “fact” that “he is an idiot”, he might actually be an idiot. But our assessment of him may be affected by various near-thinking biases. Especially biases that come out of our evolutionary history as social apes. Ray might be an actual idiot, or Ray might have merely done something that threatened our position in the social dominance hierarchy of the company.

    None of this means the idea of “there is something, that we call intelligence, which individual *Homo sapiens* possess in greater or lesser amounts” is wrong. But it does suggest that the particular example you choose isn’t necessarily elucidating.

    And here is the other thing, Ray might be brilliant, but he also might be very bad at accounting, because he was actually a math major who would much rather be studying Fourier Transforms. Ray might think double-entry bookkeeping is dreadfully dull and only stumbled into this job in accounting.

    You don’t seem to be meaningfully engaging with the fact that IQ, while real, can really only be measured by proxy using acquired knowledge. As a thought example, think of someone raised in a setting (say the deep Amazon jungle or the Kalahari) who happens to be brilliant, but has never been exposed to writing or symbolic logic of any kind. My sense is you are going to have a very hard time measuring their innate IQ.

    Or, let’s take a more U.S.-centric example. Take 3 overhead pictures of a football field at T-2, T-1, T-0 (seconds before the snap of the football). A very smart coach or football player can make good predictions of where all 22 players will be based on those pictures. There are definitely smarter and less smart football players. For a variety of reasons, the most brilliant physicist is unlikely to ever be able to do that. But it is probably less the innate intelligence that is at play, and more the acquired knowledge.

    As a further thought, you don’t seem to have engaged with the idea that these questions about intelligence come up within a conversational context. That context is frequently around the fitness of an identified group. When the statement “some people are smarter than others” is made, the receiver of that statement has already been primed to respond to the likely following conclusion: “therefore, group X is underrepresented in setting Y because they deserve to be.” They are responding not to the statement, but to the conclusion they believe you will draw, and are attempting to prevent you from reaching it.

    Just my $0.02, as I found the post interesting and thought it deserved a reply.


    • A few thoughts:

      Regarding near/far, I’m by no means an expert, but I would caution against equating near mode with the hardwired “lizard brain.” I don’t know if you’re familiar with Daniel Kahneman’s idea of System 1 and System 2, but it’s another useful (yet distinct, I think!) brain dichotomy that explains a lot about human behaviour. Very roughly, System 1 is sort of the hardwired portion of the brain (fast, instinctive, and unconscious) while System 2 is more like the brain’s software (slow, effortful, and conscious). System 1 is used to make snap judgments, while the more expensive System 2 is trotted out only when deliberative reasoning is required. Kahneman’s basic thesis is that we underestimate how much of our decision-making is done by System 1.

      Anyway, System 1 can probably be associated with the “lizard brain” you describe, but near mode I think is something different. At least in my picture of things (and again I’m most definitely not an expert) both near mode and far mode are things that go on *within* System 2. They’re essentially flavours of thought that our conscious reasoning can take on. I think the quintessential example of near vs. far thinking is New Year’s Resolutions. Compare “I’m going to go to the gym every week this year” with “I’m not going to the gym today, it’s raining”. Both are examples of conscious (i.e. System 2) decision-making, but one is far and one is near. This might have been a better example to focus on than the plane crash, which I agree probably has other things going on than just near/far. As for which mode is more accurate, well…how many people do you know who complete their New Year’s Resolutions?

      (A case could perhaps be made that near mode is in some sense “closer” to System 1 than far mode is, and therefore more prone to bias. That would be a very interesting idea to pursue, and I haven’t really thought about it a lot. Even in that case though – especially in that case – it would be very important to keep near mode and System 1 conceptually distinct)

      The “Ray from accounting” example: my main point there was to remind people that no matter how much they might proclaim that intelligence doesn’t exist, in everyday life they certainly *act* as if intelligence existed (i.e. by calling Ray an idiot). This argument goes through whether or not people’s perceptions of idiocy are actually accurate – the important thing is that they think they are. With that being said, I definitely agree that various factors can bias our perception of one another’s intelligence – off the top of my head, I know I’ve heard many times that more attractive people tend to be rated as more intelligent than average (a kind of “halo effect”). I’m sure there are lots of other examples.

      Regarding your comment on IQ:

      “You don’t seem to be meaningfully engaging with the fact that IQ, while real, can really only be measured by proxy using acquired knowledge. As a thought example, think of someone raised in a setting (say the deep Amazon jungle or the Kalahari) who happens to be brilliant, but has never been exposed to writing or symbolic logic of any kind. My sense is you are going to have a very hard time measuring their innate IQ.”

      I’m perfectly willing to concede this. While I think the extent to which IQ tests are culturally biased is typically overblown, two *big* caveats to that notion are illiteracy and unfamiliarity with reasoning-based tests. So yes, an IQ test is probably going to miss the Kalahari equivalent of Ramanujan or whatever. But with that being said, you have to admit that it’s probably not a *coincidence* that the average eminent physicist has an IQ of 150-160. There’s definitely something real that IQ tests are picking up on.

      Finally, regarding your last point, I agree that the examples I gave in the post have a lot of subtext to them that change their meaning. I probably should have addressed this in the post. I talked a little bit about subtext in a comment below.

      Thanks for your thoughts!


      • Actually, the more I think about it, the more I think near/far can just be summarized as “Far mode is what New Year’s Resolutions feel like!” It gets across the concept very well.


      • Given that a huge percentage (92% is the number that pops up on Google; not sure if that is accurate) of New Year’s Resolutions end up not being carried through, and given that following through on the resolutions is undoubtedly better for us, doesn’t that diminish your claims about near-thinking?

        I mean, you could reduce the near-thinking to the merely predictive “(I predict that) I won’t go to the gym today”, and in that sense it would be more accurate. It seems to me, though, that in the way Hanson talks about it (“Our concrete day-to-day decisions rely more on near thinking”), it is actually the decision not to go to the gym that relies on near-thinking, and that is undeniably a worse choice than the one made by the far-thinking that led you to resolve to go to the gym.

        Far-thinking: I have to stop gambling because it is going to cost me my house and my marriage.
        Near Thinking: I know I’m due for some luck. Come on lucky 7!

        Far thinking: I need to not procrastinate on this paper. My grant depends on getting an A in this class.
        Near thinking: Just one more game of 2048. Then I’ll do my paper.

        And so on. Near thinking is great at rationalizing suboptimal behavior.

        Basically I don’t think that the fact that near-thinking leads one to conclude that people are idiots can do very much lifting for you in this argument. You want people to conclude, absent evidence about the correctness of near-thinking specifically about intelligence, that near-thinking is less biased – and not just less biased, but so supremely less biased that we can simply ignore the (assumed) far-thinking deduction about intelligence. But I admit that I am unfamiliar with the concept, so it’s possible I am leaving something out.

        To go the other way, wouldn’t near-thinking lead one to conclude that there is a range of “mental talents”? I don’t want to use the word “intelligences”, as I think it clouds the conversation. There are easily distinguishable outliers (the autistic savant at the far end of the range) who can do amazing things, say musically, and yet can’t learn to tie their shoes (metaphorically).

        None of which is to say that I think the basic thrust of your argument is wrong. There are definitely people who have higher IQs than others, and IQ does seem to roughly map onto a thing we call intelligence. But the idea that intelligence can simply be mapped by a single number – that it is a one-dimensional characteristic – well, both my near and far thinking would like to challenge that. 🙂


  3. “and the academically gifted usually also seem pretty smart in “real life””

    I came across a somewhat surprising (to me) example of this some years back.

    Richard Epstein is a prominent legal scholar. I was told that he was also very effective in intra-university politics—not in the sense of political disputes but of interactions among the faculty who run the university. That surprised me, because I know Richard and he is a very bright guy who talks (very rapidly) to everyone he meets as if they are equally bright, which I would expect to result in most of the people he speaks with understanding very little of what he says. The problem should be a little less serious in a university—but only a little. I am told that Robert Nozick described Richard as the only person he knew who spoke in paragraphs.

    Apparently, when he turns his mind to getting things done within the university, the same intelligence that has made him an important influence on U.S. legal thinking—I once heard a senator questioning a Supreme Court nominee ask him if he agreed with Epstein, with the obvious implication that if he did he shouldn’t be on the court—works for that purpose as well.


    • What about the example of Richard Feynman? His genius in physics is widely recognized – indeed, he might be described as “no ordinary genius” – yet his IQ was measured at 121, only somewhat more than one standard deviation above average. This contrasts rather sharply with Scott Alexander’s recent statement that, “The average eminent theoretical physicist has an IQ of 150-160”. Also, as I recall, Gleick’s biography of Feynman (titled *Genius*, btw) relates at least one case of Feynman’s ability to solve a complex physics problem very much faster than another physicist.

      At least in Feynman’s case, it seems that IQ is failing to capture something important in his ability to function in the world of his choice.


      • I’m not sure what Scott’s source was on that number, but I believe it. Here’s a relevant study:

        Key quotes:

        “I recently came across a 1950s study of eminent scientists by Harvard psychologist Anne Roe: The Making of a Scientist (1952). Her study is by far the most systematic and sophisticated that I am aware of. She selected 64 eminent scientists — well known, but not quite at the Nobel level — in a more or less random fashion, using, e.g., membership lists of scholarly organizations and expert evaluators in the particular subfields. Roughly speaking, there were three groups: physicists (divided into experimental and theoretical subgroups), biologists (including biochemists and geneticists) and social scientists (psychologists, anthropologists).”

        “Roe devised her own high-end intelligence tests as follows: she obtained difficult problems in verbal, spatial and mathematical reasoning from the Educational Testing Service, which administers the SAT, but also performs bespoke testing research for, e.g., the US military. Using these problems, she created three tests (V, S and M), which were administered to the 64 scientists, and also to a cohort of PhD students at Columbia Teacher’s College. The PhD students also took standard IQ tests and the results were used to norm the high-end VSM tests using an SD = 15. Most IQ tests are not good indicators of true high level ability (e.g., beyond +3 SD or so).”

        “The lowest score in each category among the 12 theoretical physicists would have been roughly V 160 (!) S 130 M >> 150. (Ranges for all groups are given, but I’m too lazy to reproduce them here.) It is hard to estimate the M scores of the physicists since when Roe tried the test on a few of them they more or less solved every problem modulo some careless mistakes.”

        So basically the physicists were so smart they broke the test, but if we charitably assume that M >> 150 means 175 and then take a straight average of the three tests, we get 155. [Edit: whoops, that’s the lowest score for each subtest! The average would be even higher]

        The Feynman thing gets brought up a lot in discussions like this, but I don’t give it much weight, especially since he got the best score in the country on the Putnam (read: one of the hardest math tests in the world) with little preparation. So clearly his talents were not beyond the ability of standardized tests to measure.

        Why did he score so “low” for his IQ test then? Shrug. Obviously he would have scored ceiling for the mathematical portion. With a properly normed test his math score would have come out to…god only knows. 190? 200? Anyway, maybe he’s an outlier to the trend that math and verbal skills are highly correlated, ended up just scoring decent on verbal, was penalized by the low ceiling on math, and got 125 overall (that’s the number on wikipedia, anyway). I’ve also heard theories that the IQ test he took was outdated by modern standards, but I don’t know if that’s true.

        Whatever the case may be with Feynman, I think the study linked above is pretty definitive: eminent physicists have extremely high IQs.


      • I’m pretty skeptical of this claim, and even if true, no single counterexample disproves statistical statements. Feynman got top 5 (maybe top 1) on the Putnam exam, which I would bet is very strongly g loaded. I believe he may once have said, or even taken a test and gotten, that score. It fits his folksy image, and it is a memetically well designed statement, it will spread without evidence whether true or not because people want to believe it and display belief in it. But I would bet that he also scored much higher on other tests. I bet his math SATs and quant/quality GREs would be at a much higher IQ level. Do you have a good cite?


      • Googled this a bit. He reported getting 125 on a childhood IQ test. Given his other test-taking accomplishments, it seems extremely likely that this one score was an aberration.


  4. The examples you give are not non-sequiturs. They follow directly from the subtext of actual human communication:
    Ray says to me, “Didn’t you think the special effects in Transformers 27 were great?”
    I reply “Yes, quite impressive”.
    “Great, then we should go see Transformers 28 on opening night”.
    Of course, I hate “Transformers” and now I’ve got to explain to Ray that although I agree that the special effects are good, “I’d prefer a good story over good CGI any day.” Much easier if, rather than give a literally accurate answer, I reply both to Ray’s actual question (about special effects) and to Ray’s implied statement, that Transformers 27 was a good movie.

    Generally, if I say “Quality X about thing Y is good”, the implication is that not only is that true, but in general I find Y good. Alas, words often don’t mean the same thing between people as they do in mathematical proofs.

    As a side note, this implication is so strong, that you can use it to deprecate-via-compliment (the “left-handed compliment”):
    Ray: “What did you think of Restaurant Q?”
    Barry: “The location sure is convenient!”.
    Most listeners will understand that Barry was unimpressed with Q. The pure logician will simply note that Barry must live near Q.
    I think this generally applies to the intelligence question. When you say “Some people are more intelligent than others.”, many people hear an implied statement that intelligent people are better than non-intelligent people. They fear that (as when talking about Transformers) agreement without caveat means agreeing to some larger statement about intelligent people being better than non-intelligent people.

    Language being what it is, not all descriptors carry the same baggage. With “Funny” there’s very little implication, presumably because being funny isn’t so well correlated with life outcomes, nor is it associated with how we spend the first 20-odd years of our lives being ranked and evaluated. That said, the statement “Boy, Bill Cosby sure is Funny!” will probably get you more than a simple “Yes” these days.


    • I agree. And not only do the examples follow normal patterns, they are also pulled out of whatever context the conversation is occurring in. So, if you are in the middle of discussing, say, socio-economic outcomes of various sub-groups of the American population and the statement about intelligence is made, the listener is primed to attempt to refute the implied argument (as you say), rather than simply address the simple statement.


    • Well, first of all I did include the caveat that the examples were non-sequiturs “at least formally” – which is true. Formally speaking, they’re non-sequiturs.

      But anyway, I’m not a naive logician; I of course agree that conversations have subtexts. And I think what you describe is part of what’s going on in these situations. But even there there’s some odd thinking going on that needs further explanation. You write:

      “Generally, if I say “Quality X about thing Y is good”, the implication is that not only is that true, but in general I find Y good.”

      I agree with this, but I think it supports my point. To take the Transformers example, you worry that agreeing that it has good CGI would lead someone to believe that you think it’s a good movie overall.

      To which I respond: what’s wrong with that?

      Seriously, forgetting for a second that Transformers probably actually *is* a crappy movie: if all you know about a movie is that it has good CGI, why would you be worried that someone would think that you like it? All else being equal, CGI is good! (Or if you don’t like modern CGI, then just pretend I said “good special effects”). Unless you *already* have a mindset that good CGI automatically means bad story, there would be no reason to worry about someone trying to sneak in connotations. You would just be neutral to that possibility – after all, as far as you know the movie *does* have a good story.

      In other words, I think the RPG fallacy is what *creates* a lot of the subtext you describe.


  5. Great post! Especially liked the thing about near/far-thinking. I’m a psychology student, so I’ve read about factor analysis and g and all that. I’ve been surprised how weird discussions about IQ can get, even among people who really have access to all the facts. It just gets super abstract, and after a while people say things like “since we don’t know *exactly* how thinking/reasoning works, we can’t talk about differences in intelligence.”

    I think there may be some reason for the perception that “book smarts” is a thing. I’m thinking that occasionally you meet a highly specialized academic who doesn’t know a lot about your everyday life/work that you consider common sense. But I don’t believe that this is because this person “can’t learn everyday practical stuff” as well as other people; he/she just hasn’t learned it yet.

    (English not first language, sorry if I’m hard to understand.)


  6. While people in general aren’t issued a fixed number of RPG points to divide up, each individual person does have a (relatively) fixed number of RPG points. They have only so many neurons, which function at whatever level. Suppose that “intelligence” and “diligence” are two programs that compete for those resources. Then regardless of how many points a particular individual had, there would always be a trade-off going on.


    • Well, a naive application of this idea would suggest that different mental abilities (verbal, spatial, memory, etc.) should be negatively correlated with one another. After all, there’s only so many neurons in the brain, and the abilities should be competing for those resources. In reality, of course, we see exactly the opposite: almost all mental abilities are positively correlated with one another to a high degree, even ones you wouldn’t expect, like reaction time (the reason for this is interesting, and I want to talk about it in another post). This is Spearman’s g factor, and it’s the reason we can give a single number for an IQ test even though many different abilities are tested. So while I can’t comment on what the actual correlation is between intelligence and diligence (I haven’t really looked at the literature), I can say that a “fixed brain resources” argument isn’t going to cut it.
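      To make the positive manifold concrete, here’s a toy simulation (the loadings are invented for illustration, not estimates from any real dataset): generate several “ability” scores that share nothing except a common g factor, and every pairwise correlation comes out positive – which is exactly the pattern Spearman observed, and the opposite of what the “fixed resources” picture predicts.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      n = 10_000  # simulated test-takers

      # Hypothetical model: each ability = (g loading) * shared g factor
      # + independent ability-specific noise, scaled to unit variance.
      g = rng.normal(size=n)
      loadings = {"verbal": 0.8, "spatial": 0.7, "memory": 0.6, "reaction": 0.4}
      scores = {name: w * g + np.sqrt(1 - w**2) * rng.normal(size=n)
                for name, w in loadings.items()}

      # Every off-diagonal correlation is positive -- the "positive manifold" --
      # even though the abilities are conditionally independent given g.
      corr = np.corrcoef(np.vstack(list(scores.values())))
      print(np.round(corr, 2))
      ```

      Under this model the expected correlation between two abilities is just the product of their g loadings (e.g. verbal–spatial ≈ 0.8 × 0.7 = 0.56), so weakly g-loaded measures like reaction time still correlate positively with everything else, just less strongly.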


  7. I really enjoyed reading this. I especially like the name “RPG fallacy.”

    My comment is that I strongly suspect it’s a cultural artifact. So, for example, your run-of-the-mill Shinto or Confucianist would probably classify RPG-thinking as typical Western stupidity.

    On the other hand they would probably appreciate the analogy as an explanation for their racism (I don’t like to use loaded words but that one’s not avoidable). So the Japanese, being so awesome, get to roll stats with 5d6 minus the lowest, the Chinese and Koreans roll 4d6, and the Vietnamese are rolling 4d6 minus the lowest.

    For all the non-D&D nerds: an alternate point-allocation system is to roll dice and have their total be the stat. So if you roll 4 dice and subtract the lowest, and they land 5, 5, 3 and 2, then the stat is a 13. You do that for each stat. The dice don’t have memory; there’s no trading off. So Rosencrantz and Guildenstern can have an 18 in everything.
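    (For the very non-D&D nerds, the “4d6 drop lowest” mechanic above can be sketched in a few lines – the point being that each stat is an independent roll from a shared distribution, with no point pool linking them:)

    ```python
    import random

    def roll_stat(dice=4, sides=6, drop=1, rng=random):
        """Roll `dice` d`sides`, drop the `drop` lowest, sum the rest."""
        rolls = sorted(rng.randint(1, sides) for _ in range(dice))
        return sum(rolls[drop:])

    # The worked example from the comment: 5, 5, 3, 2 -> drop the 2 -> 13.
    assert sum(sorted([5, 5, 3, 2])[1:]) == 13

    # Each stat is rolled independently -- the dice have no memory, so
    # nothing stops a character from rolling high (or low) on everything.
    stats = {s: roll_stat() for s in ["STR", "DEX", "CON", "INT", "WIS", "CHA"]}
    print(stats)
    ```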


  8. “man, we really need a better word for the quality of being hardworking – I’m going to go with diligence for now even though it doesn’t quite fit”

    I’ve seen many authors use the word ‘conscientiousness’ to describe this trait. One definition runs as follows: “Conscientiousness is not just about getting to the church on time, in a freshly-ironed outfit. It is a fundamental personality trait that influences whether people set and keep long-range goals, deliberate over choices or behave impulsively, and take seriously obligations to others.”


  9. “The people you went to school with were probably a fairly representative sample of the population, and not pre-selected to have a certain overall skill level. ”

    Gotta disagree with this, at least for the US. Almost every public school draws on a small, bounded geographical area for its pupils, and most small, bounded geographical areas have a high correlation in terms of cost of housing. If your parents can’t afford to live in the neighborhood, you don’t go to that school. And the abilities that lead to one being able to afford living in a neighborhood (things like IQ, conscientiousness, energy levels, etc.) are highly heritable.

    The correlation is lower than for the job market or the university, since there is reversion to the mean, etc., as well as non-assortative mating due to things like physical beauty, but I imagine the correlation is still strong enough to give the same sort of RPG effect you describe.

