Didacticism and the Ratchet of Progress

So my last two posts have focused on an idiosyncratic (and possibly nonsensical) view of creativity that I’ve developed over the past month or so. Under this picture, the intellect is divided into two categories: P-intelligence, which is the brain’s ability to perform simple procedural tasks, and NP-intelligence, which is the brain’s ability to creatively solve difficult problems. So far I’ve talked about how P- and NP-intelligence might vary independently in the brain, possibly giving rise to various personality differences, and how NP-intelligence is immune to introspection, which lies at the heart of what makes us label something as “creative”. In this post I’d like to zoom out a bit and talk about how the notion of P- and NP-intelligence can be applied on a larger, societal scale.

Before we can do that, though, we have to zoom in – zoom in to the brain, and look at how P- and NP-intelligence work together in it. I think something I didn’t emphasize quite enough in the previous two posts is the degree to which the two kinds of intelligence complement one another. P- and NP-intelligence are very different, but they form an intellectual team of sorts – they each handle different jobs in the brain, and together they allow us to bootstrap ourselves up to higher and higher levels of understanding.

What do I mean by bootstrapping? Well, I talked before about how the inner workings of NP-intelligence will always be opaque to your conscious self. Whenever you have a new thought or idea, it just seems to “bubble up” into consciousness, as if from nowhere. So there’s a sense in which NP-intelligence is “cut off” from consciousness – it’s behind a kind of introspective barrier.

By the same token, though, I think you could say that the reverse is also true – that it’s consciousness that is cut off from NP-intelligence. I picture NP-intelligence as essentially operating in the dark, metaphorically speaking. It goes through life stumbling about in what is, in effect, uncharted territory, trying to make sense of things that are at the limit of – or beyond – its ability to make sense of. And as a result, when your NP-intelligence generates some new idea or thought, it does so – well, not randomly, but…blindly.

I think that’s a good way of putting it: NP-intelligence is blind. When trying to solve some problem, your NP-intelligence has no idea in advance whether the solution it comes up with will be a good one. How could it? We’re stipulating that the problems it deals with are too hard to immediately see the answer to. So your NP-intelligence is essentially reduced to educated guessing: How about this? Does this work? Okay, what about this? It offers up potential solutions, not knowing whether they are correct.

And what exactly is it offering up these solutions to? Why, P-intelligence of course! P-intelligence may not be very bright – it could never actually solve the kind of problems NP-intelligence deals with – but it can certainly handle solution-checking. After all, solution-checking is easy. So when your NP-intelligence tosses up some half-baked idea that may be complete genius or may be utter stupidity (it doesn’t know), it’s your P-intelligence that evaluates the idea: No, that’s no good. Nope, total garbage. Yes, that works! Of course, it’s probably not just one-way communication – there’s probably also some interplay, some back and forth between the two: Yes, you’re getting better. No, not quite, but that’s close. Almost there, here’s what’s still wrong. By and large, though, P- and NP-intelligence form a cooperative duo in the brain in which they each stick to their own specialized niche: NP-intelligence is the suggester of ideas, and P-intelligence is the arbiter of suggestions.

That, in a nutshell, is my view of how we manage to make ourselves smarter over time. Your NP-intelligence is essentially an undiscriminating brainstormer, throwing everything it can think of at the wall to see what sticks, and your P-intelligence is an overseer that looks over what’s been thrown and ensures that only good things stick. Together they act as a kind of ratchet that lets good ideas accumulate in the brain and bad ones be forgotten.

Of course, saying that NP-intelligence acts “indiscriminately” is probably overstating things – that would amount to saying that NP-intelligence acts randomly, which is almost certainly not true. After all, while the above “ratchet” scheme would technically work with a random idea generator, in practice it would be far too slow to account for the rate at which humans manage to accumulate knowledge – it would probably take eons for a brain to “randomly” suggest something like General Relativity. No, NP-intelligence does not operate randomly, even if it does operate blindly – it suggests possible solutions using incredibly advanced heuristics that have evolved over millions of years, heuristics that are very much beyond our current understanding. And from the many ideas that these advanced heuristics generate, P-intelligence (faced only with the relatively simple task of identifying good ideas) is able to select the very best of them. The result? Whenever our NP-intelligence manages to cough up some brilliant new thought, our P-intelligence latches onto it, and uses it to haul ourselves up to a new rung on the ladder of understanding – which gives our NP-intelligence a new baseline from which to operate, allowing the process to begin all over again. Thus the ratchet turns, but only ever forward.
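As an aside, the ratchet scheme above is simple enough to sketch in code. Here’s a toy version in Python (my own illustration, nothing rigorous; it’s essentially Richard Dawkins’ old “weasel program”): a mutate function stands in for NP-intelligence’s blind suggestions, a score comparison stands in for P-intelligence’s cheap checking, and improvements accumulate while everything else is forgotten.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"  # stand-in for "a good idea"

def score(s):
    # The cheap P-intelligence check: how good is this suggestion?
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    # The blind NP-intelligence suggestion: tweak something, anything
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

def ratchet(seed, steps=20000):
    # Keep a candidate only if the checker calls it an improvement,
    # so good changes accumulate and bad ones are forgotten
    best, best_score = seed, score(seed)
    for _ in range(steps):
        candidate = mutate(best)
        candidate_score = score(candidate)
        if candidate_score > best_score:
            best, best_score = candidate, candidate_score
    return best

seed = "".join(random.choice(ALPHABET) for _ in TARGET)
print(ratchet(seed))  # almost always prints the target string
```

Even with a mutator this blind, the target reliably falls out after a few thousand steps, whereas waiting for all 28 characters to come up by pure random guessing would take on the order of 27^28 attempts. That’s the ratchet at work, and the gap only widens once the generator uses real heuristics instead of coin flips.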

Now, there’s a sense in which what I’m saying is nothing new. After all, people have long recognized that “guess and check” is an important method of solving problems. But what this picture is saying is that at a fundamental level, guessing and checking is all there is. The only way problems ever get solved is by one heuristic or another in the brain suggesting a solution that it doesn’t know is correct, and then by another part of the brain taking advantage of the inherent (relative) easiness of checking to see if that solution is any good. There’s no other way it could work, really – unless you already know the answer, the creative leaps you make have to be blind. And what this suggests in turn is that the only reason people are able to learn and grow at all, indeed the only reason that humanity has been able to make the progress it has, is because of one tiny little fact: that it’s easier to verify a solution than it is to generate it. That’s all. That one little fact lies at the heart of everything – it’s what allows you to recognize good ideas outside your own current sphere of understanding, which is what allows that ratchet of progress to turn, which is what eventually gives you…the entire modern world, and all of its wonders. The Large Hadron Collider. Citizen Kane. Dollar Drink Days at McDonald’s. All of them stemming from that one little quirk of math.

(Incidentally, when discussing artificial intelligence I often see people say things like, “Superintelligent AI is impossible. How can we make something that’s smarter than ourselves?” The above is my answer to those people. Making something smarter than yourself isn’t paradoxical – we make ourselves smarter every day when we learn. All it takes is a ratchet)

Now, I started off this post by saying I wanted to zoom out and look at P- and NP-intelligence on a societal level, and then proceeded to spend like 10 paragraphs doing the exact opposite of that. But we seem to have arrived back on topic in a somewhat natural way, and I’m going to pretend that was intentional. So: let’s talk about civilizational progress.

Civilization, clearly, has made a great deal of progress over the past few thousand years or so. This is most obvious in the case of science, but it also applies to things like art and morality and drink deals. And as I alluded to above, I think the process driving this progress is exactly the same as the one that drives individual progress: the ratchet of P- and NP-intelligence. Society progresses because people are continually tossing up new ideas (both good and bad) for evaluation, and because for any given idea, people can easily check it using their P-intelligence, and accept it only if it turns out to be a good idea. It’s the same guess-and-check procedure that I described above for individuals, except operating on a much wider scale.

(Of course, that the two processes would turn out similar is probably not all that surprising, given that society is made up of individuals)

But the interesting thing about civilizational progress, at least to my mind anyway, is the extent to which it doesn’t just consist of individuals making individual progress. One can imagine, in principle at least, a world in which all civilizational progress was due to people independently having insights and getting smarter on their own. In such a world, everyone would still be learning and growing (albeit likely at very different rates), and so humanity’s overall average understanding level would still be going up. But it would be a very different world from the one we live in now. In such a world, ideas and concepts would have to be independently reinvented by every member of the population before they could be made use of. If you wanted to use a phone you would have to be Alexander Graham Bell; if you wanted to do calculus, you would have to be Newton (or at the very least Leibniz).

Thankfully, our world does not work that way – in our world, ideas only have to be generated once. And the reason for this is the same tiny little fact I highlighted above, that beautiful asymmetry between guessing and checking. The fact that checking solutions is easy means that the second an idea has even been considered in someone’s head, the hard part is already over. Once that happens, once the idea has been lifted out of the vast, vast space of possible ideas we could be considering and promoted to our attention, then it just becomes a matter of evaluating it using P-intelligence – which other people can potentially do just as easily as the idea-generator. In other words, ideas are portable – when you come up with some new thought and you want to share it with someone else, they can understand it even if they couldn’t have thought of it themselves. So not only does every person have a ratchet of understanding, but that ratchet carries with it the potential to lift up all of humanity, and not just themselves.

Of course, while this means that humanity is able to generate a truly prodigious number of good ideas, and expand its sphere of knowledge at an almost terrifying rate, the flip side is that it’s pretty much impossible for any one person to keep up with all that new knowledge. Literally impossible, in fact, if you want to keep up with everything – scientific papers alone come out at a rate far faster than you could ever read them, and there are over 300 hours of video uploaded to YouTube every minute. But even if you just want to learn the basics of a few key subjects, and keep abreast of only the most important new theories and ideas, you’re still going to have a very tough time of it.

Luckily, there are two things working in your favour. The first is just what I’ve been talking about for this whole post – P vs NP-intelligence, and the fact that it’s much easier to understand someone else’s idea than it is to come up with it yourself. Of course, easier doesn’t necessarily mean easy – you still have to learn calculus, even if you don’t have to invent it – but this is what gives you a fighting chance. Our whole school system is essentially an attempt to take people starting from zero knowledge and bring them up to the frontiers of our understanding, and it’s no coincidence that it operates mostly based on P-intelligence. Oh sure, there are a few exceptions in the form of “investigative learning” activities – which attempt to “guide” students toward making relatively small creative leaps – but for the most part, school consists of teachers explaining things. And it pretty much has to be that way – unless you get really good at guiding, it’s going to take far too long for students to generate on their own everything that they need to learn. After all, it took our best minds centuries to work all of this “science” stuff out. How’s the average kid supposed to do it in 12 years?

So that’s the first thing working in your favour – for the most part when you’re trying to learn, you “merely” have to make use of your P-intelligence to understand the subject matter, which in principle allows you to make progress much faster than the people who originally came up with it all.

The second thing working in your favour, and the reason I actually started writing this post in the first place (which at 2100 words before a mention is probably a record for me), is didacticism.

So, thus far I’ve portrayed the whole process of civilizational progress in a somewhat…overly simplistic manner. I’ve painted a picture in which the second an individual comes up with a brilliant new idea (or theory/insight/solution/whatever), they and everyone else in the world are instantly able to see that it’s a good one. I’ve sort of implied (if not stated) that checking solutions is not just easier than generating them, it’s actually fundamentally easy, and that everyone can do it. And to the extent that I’ve implied these things, I’ve probably overstated my case (but hey, in my defense it’s really easy to get swept up in an idea when you’re writing about it). I think I actually put things better in my first post about this whole mess – there I described P-intelligence as something that develops as you get older, and that can vary from person to person. And from that perspective it’s easy to see how, depending on your level of P-intelligence and the complexity of the idea in question, “checking” it could be anything from trivially easy to practically impossible. It all depends on whether the idea falls within the scope of your P-intelligence, or just at its limits, or well beyond it.

(Mind you, the important stuff I wrote about above still goes through regardless. As long as checking is easier than guessing – even if it’s not easy in an absolute sense – then the ratchet can still turn)

Anyway, so what does this all have to do with didacticism? Well, I view didacticism as a heroic, noble, and thus far bizarrely successful attempt to take humanity’s best ideas and bring them within the scope of more and more people’s P-intelligence. The longer an idea has been around, the better we get at explaining it, and the more people there are who can understand it.

My view of that process is something like the following: when someone first comes up with a genuinely new idea (let’s say we’re talking about a physicist and they’ve come up with a new theory, since that’s what I’m most familiar with), initially there are going to be very few people who can understand it. Maybe a few others working in the same sub-sub-field of physics can figure it out, but probably not anyone else. So those few physicists get to work understanding the theory, and eventually after some work they’re able to better explain it to a wider audience – so now maybe everyone in their sub-field understands it. Then all those physicists get to work further clarifying the theory, and further working out the best way to explain it, and eventually everyone in that entire field of physics is able to understand it. And it’s around that point, assuming the theory is important enough and enough results have accumulated around it, that textbooks start getting written and graduate courses start being taught.

That’s an important turning point. When you’ve reached the course and textbook stage, that means you’ve gotten the theory to the point where it can be reliably learned by students – you’ve managed to mass-produce the teaching of the theory, at least to some extent. And from there it just keeps going – people come up with new teaching tricks, or better ways of looking at the theory, and it gets pushed down to upper-year undergraduate courses, and then possibly to lower-year undergraduate courses, and eventually (depending on how fundamentally complicated the theory is, and how well it fits into the curriculum) maybe even to high school. At every step along the way of this process, the wheel of didacticism turns, our explanations get better and better, and the science trickles down.

This isn’t just all hypothetical, mind you – you can actually see this process happening. Take my own research, which is on photonic crystals. Photonic crystals were invented in 1987, the first textbook on them was published in 1995, and just two years ago I sat in on a special photonic crystal graduate course, probably one of the first. So the didactic process is well on its way for photonic crystals – in fact, the only thing holding the subject back right now is that it’s of relatively narrow interest to physicists. If photonic crystals start being used in more applications, and gain importance to a wider range of physicists, then I would be shocked if they weren’t pushed down to upper-year undergraduate courses. They’re certainly no more complicated than anything else that’s taught at that level.

Or, if you’d like a more well-known example, take Special Relativity. Special Relativity is a notoriously counterintuitive and confusing subject in physics, and students always have trouble learning it. However, for such a bewildering, mind-bending theory it’s actually quite simple at heart – and so it stands to gain the most from good teaching and the didactic process in general. This is reflected in the courses it gets taught in. I myself am a TA in a course that teaches Special Relativity, and it’s not an upper-year course like you might expect – it’s actually a first-year physics course. And not only that, it’s actually a first-year physics course for life-science majors. The students in this course are bright kids, to be sure, but physics is not their specialty and most of them are only taking the course because they need it to get into med school. And yet every year we teach them Special Relativity, and it’s actually done at a far less superficial level than you might expect. Granted, I’m not sure how much they get out of it – but the fact that it keeps getting taught in the course year after year puts a lower bound on how effective it must be.

Think about what that means – it means that didacticism, in a little over a hundred years, has managed to take Special Relativity from “literally only the smartest person in the world understands this” to “eh, let’s teach it to some 18-year-olds who don’t even really like physics”. It’s a really powerful force, in other words.

And not only that, but it’s actually even more powerful than it seems. The process I described above, of a theory gradually working its way down the scholastic totem pole, is only the most obvious kind of didacticism. There’s also a much subtler process – call it implicit didacticism – whereby theories manage to somehow seep into the broader cultural awareness of a society, even among those who aren’t explicitly taught the theory. A classic example of this is how, after Newton formulated his famous laws of motion, the idea of a clockwork universe suddenly gained in popularity. Of course, no doubt some people who proposed the clockwork universe idea knew of Newton’s laws and were explicitly drawing inspiration from them – but I think it’s also very likely that many proponents of the clockwork universe were ignorant of the laws themselves. Instead, the laws caused a shift in the way people thought and talked that made a mechanistic universe seem more obvious. In fact, I know this sort of thing happened, because I myself “came up” with the clockwork universe idea when I was only 14 or so, before I had taken any physics courses or knew what Newton’s laws were. And I take no credit for “independently” inventing the idea, of course, because in some sense I had already been exposed to it and had absorbed it by osmosis – it was already out there, had already altered our language in imperceptible ways that made it easier to “invent”. Science permeates our culture and affects it in very nonobvious ways, and it’s hard to overestimate how much of an effect this has on our thinking. Steven Pinker talks about much the same idea in The Better Angels of Our Nature while describing a possible cause of the Flynn effect (the secular rise in IQ scores in most developed nations over the past century or so):

And, Flynn suggests, the mindset of science trickled down to everyday discourse in the form of shorthand abstractions. A shorthand abstraction is a hard-won tool of technical analysis that, once grasped, allows people to effortlessly manipulate abstract relationships. Anyone capable of reading this book, even without training in science or philosophy, has probably assimilated hundreds of these abstractions from casual reading, conversation, and exposure to the media, including proportional, percentage, correlation, causation, [...] and cost-benefit analysis. Yet each of them—even a concept as second-nature to us as percentage—at one time trickled down from the academy and other highbrow sources and increased in popularity in printed usage over the course of the 20th century.

– Steven Pinker, The Better Angels of Our Nature, p. 889

I have no idea if he’s right about the Flynn effect, but what’s undoubtedly true is that right now we live in the most scientifically literate society to have ever existed. The average person knows far more about science (and more importantly, knows far more about good thinking techniques) than at any point in history. And if that notion seems wrong to you, if you’re more prone to associating modern-day society with reality TV and dumbed-down pop music and people using #txtspeak, then…well, maybe you should raise your opinion of modern-day society a little. A hundred years ago you wouldn’t have been able to just say “correlation does not imply causation”, or “you’ll hit diminishing returns” and assume that everyone would know what you were talking about. Heck, you wouldn’t have been able to read a blog post like this one, written by a complete layperson, even if blogging had existed back then.

All of which is to say: didacticism is a pretty marvelous thing, and we all owe the teachers and explainers of the world a debt of gratitude for what they do. So I say to them all now: thank you! This blog couldn’t exist without you.

When I put up this site I actually wrote at the outset that I didn’t want it to be a didactic blog. And while I do still hold that opinion, I’m much less certain of it than I used to be – I see now the value of didacticism in a way that I didn’t before. So I could see myself writing popular articles in the future, if not here then perhaps somewhere else. In some ways it’s hard, thankless work, but it really does bring great value to the world.

And hey, I know just the subject to write about…

[Up next: AI risk and how it pertains to didacticism. I was just going to include it here, but this piece was already getting pretty long, and it seems more self-contained this way anyway. So you’ll have to wait at least a few more days to find out why I think we’re all doomed]

Science: More of an Art than a Science

[In “which” I use “scare” quotes a “lot”]

My last post turned out to be a fruitful one, at least in terms of giving me new things to think about. I find that the mark of a good thought is whether or not it begets other thoughts, and this one was a veritable begattling gun.

Recall the thesis from last time: people have two kinds of intelligence, P-intelligence and NP-intelligence (I’m not super crazy about the names by the way, but they’ll do for now). P-intelligence is the ability to recognize a good solution to a problem when it’s presented to you (where the “problem” could be anything from coming up with a math proof to writing an emotionally moving poem), and NP-intelligence is the ability to actually come up with the solution in the first place. Since it’s obvious that it’s much easier to verify solutions as correct than it is to generate them (where “obvious” in this case means “pretty much impossible to prove, but hey, we’ll just assume it anyway”), it’s clear that one’s P-intelligence will always be “ahead” of one’s NP-intelligence: the level of quality you can reliably recognize will always exceed the level you can reliably generate. In that post I then went on to speculate that the gap between P-intelligence and NP-intelligence might vary from person to person, and even within a person from one age to another, and that this gap might explain some behavioural patterns that show up in humans.

(By the way, I should probably point out here that I totally bungled the mathematical definition of NP problems in the last post. NP problems are not those that are hard to solve and easy to verify – they are simply those that are easy to verify, full stop. Thus a problem in P also has to be in NP, since being easy to solve guarantees being easy to verify. The hardest problems in the NP class (called NP-complete problems) do – probably – have the character I described, of being difficult to solve and easy to verify, and so I’m going to retroactively claim that those were what I was talking about when I described NP problems last time. Still, many thanks to the Facebook commenter who pointed this out.)
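Incidentally, if the “P is inside NP” point feels slippery, the whole argument fits in a few lines of toy Python (my own sketch, and loose about the formal details of certificates): any procedure that solves a problem outright can be repurposed as a verifier, just by recomputing the answer and comparing.

```python
# A problem in P: "is this list sorted?" We can solve it directly...
def solve_sorted(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

# ...which means we can also trivially verify any claimed answer,
# by recomputing and comparing. Easy-to-solve implies easy-to-verify,
# which is why P sits inside NP.
def verify_sorted(xs, claimed):
    return solve_sorted(xs) == claimed

print(verify_sorted([1, 2, 3], True))   # True: the claim checks out
print(verify_sorted([3, 1, 2], True))   # False: the claim is wrong
```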

Now, before I go any further, I should probably try to clarify what I mean when I talk about P-intelligence being “ahead” of NP-intelligence, because I managed to confuse myself several times while writing these posts – and if even the author is confused about what they’re writing, what chance does the reader have? So here’s my view of things: P-intelligence is actually the “dumber” of the two intelligences. It’s limited to simple, algorithmic tasks – things like checking solutions, yes, but also things like applying a formula you learned in calculus, or running through some procedure at work that you know off by heart. Plug’n’chug, in other words, for my physicist friends. So in retrospect I probably shouldn’t have portrayed P-intelligence as merely a “verifier” – P-intelligence essentially handles anything in the brain that’s been “taskified”, or reduced to a relatively simple algorithm, and one example of this is solution verification. NP-intelligence, on the other hand, is the smartest part of yourself, and handles the creative side of things. The strokes of genius and flashes of insight you sometimes get on oh-so-rare occasions? That’s NP-intelligence. In a sense, NP-intelligence is whatever you can’t taskify in the brain.

All of which is well and good. But if NP-intelligence is so smart, why then do I talk about P-intelligence being “ahead” of it? That’s what was causing the confusion for me – half the time I seemed to be thinking of P-intelligence as smarter than NP-intelligence, and half the time it was the other way around (which is never a good sign when you’re trying to flesh out a new concept in your mind). Eventually I managed to clarify things though, at least to my own satisfaction. Here’s what I would say on the matter: NP-intelligence is definitely smarter than P-intelligence, in that it can solve much more difficult problems. However, usually we’re not interested in a direct comparison of the two intelligences and their ability to solve problems. Usually what we’re comparing is the ability of NP-intelligence to generate a solution to a given problem, and the ability of P-intelligence to recognize a solution to that same problem. And for a given problem, solution recognition is of course much easier than solution generation. That’s why we can talk about P-intelligence being ahead of NP-intelligence – it faces a much easier task than NP-intelligence for any set level of problem difficulty, and so it can handle more difficult problems despite being “dumber”.

Now, hopefully that brings you up to at least the level of clarity I have in my own head (which, realistically, is probably not all that high). Moving forward, though, I’d like to de-emphasize the definition of P-intelligence as a solution-checker – it was useful in the last post, but I’ll be going in a somewhat different direction from here on out. Better now to think of P-intelligence as the part of your brain that handles simple, algorithmic tasks (one of which is solution checking) – in fact, if you like you can think of P-intelligence as standing for “Procedural Intelligence”, and NP-intelligence as standing for “Non-Procedural Intelligence”. That captures the idea pretty well.

Okay, so recaps and retcons aside, I’m pretty sure I was trying to write a blog post or something. As I was saying at the outset, this whole idea of P- and NP-intelligence seeded many new thoughts for me, some more profound than others. And first among them, in the “not-very-profound-but-still-edifying” category, was a clarified notion of creativity in art and science.

We’ve all heard the phrase, “It’s more of an art than a science”. It’s usually used to distinguish “intuitive” fields like literature and the arts from “logical” fields like math and science. The idea seems to be that creating a great work of art requires an ineffable, creative “spark” that is unique to humans and is (even in principle) beyond our understanding, whereas doing science requires merely the logical, “mechanical” operation of thought. There are countless fictional tropes that further the idea: the “rational” Spock being outwitted by the “emotional” Kirk, the “logical” robot losing some game to a “creative” human who can think outside the box, and so on and so forth.

Anyway, needless to say I’ve never liked the phrase, but until now it’s always been a vague sort of dislike that I lacked the vocabulary to really expand upon. Now, with P- and NP-intelligence added to my concept-set, I can finally explain myself, and it turns out that I have not just one but two problems with the phrase as it stands.

First: “ineffable” doesn’t mean “magic”. When people say that some skill is an art rather than a science, they usually mean two things: one, it’s a creative skill (you can use it to generate new, original works), and two, it’s immune to introspection (you can’t just write down exactly how the skill works and thereby pass it on to someone else, because you yourself don’t know how it works). It’s this immunity to introspection that gets right down to the heart of the matter, I think: a skill is an art if you can’t verbalize, explicitly, how it works. And in that sense, saying that some skill is an art essentially amounts to saying that it requires NP-intelligence. In fact, the grouping of skills as “arts” and “sciences” actually corresponds very neatly to NP- and P-intelligence as I conceive of them. People frequently say “We’ve got it down to a science”, and what they mean is that they’ve figured out the skill to such an extent that they can say explicitly how it works – in effect, they’ve developed a procedure for implementing the skill, and so it falls under the purview of P-intelligence.

Here’s the problem, though. Yes, a skill that requires NP-intelligence (an “art”) will always be ineffable – that is, will always seem like a black box to the person who possesses the skill. If that weren’t the case – if the skill didn’t seem opaque to the person in question – then they would understand it at a conscious level and could presumably explicate exactly how it works, and then it would be an example of P-intelligence rather than NP-intelligence. So it seems as though creativity is doomed to always carry an element of mysteriousness with it, essentially by definition. But just because something seems mysterious doesn’t mean it is mysterious, and something being beyond your understanding is not the same thing as it being magic. Creativity, whatever it is, is implemented in the brain; it does not rely on some dualistic supernatural “spark” that transcends the physical. Just like everything else in the mind, creativity is an algorithm – it may be an algorithm that we lack introspective access to, but it’s an algorithm nonetheless. So there is zero reason to suspect that we couldn’t program a robot to be creative to the same extent that humans are creative. The leap from “I don’t understand how something works” to “It’s impossible to understand how this thing works” is a huge one, and it’s one that people are far too quick to make.

So that’s my first problem with the phrase “It’s more of an art than a science” – it elevates art (and by extension, creativity) to a category that’s fundamentally distinct from the “ordinary” workings of the brain, and I reject that distinction. Although now that I think about it, what I just described isn’t really a problem with the phrase per se – the phrase is actually pretty innocuous in that regard. It’s more of a problem with the set of connotations that have agglomerated around the word “creativity” in our culture, of which the phrase is kind of just a symptom.

Anyway, my second problem with the phrase actually pertains more to the phrase itself, so let’s move on to that one.

My second problem with the phrase is this: in choosing a skill to represent procedural, non-creative knowledge in contrast to art, you chose science? Seriously, of all things, SCIENCE? REALLY?

Doing science is the most creative, least procedural skill I can think of. Seriously, I’d be hard-pressed to come up with a way of getting it more backwards than describing science as non-creative. Scientists operate at the boundaries of humanity’s knowledge. They spend their days and nights trying to come up with thoughts that no one else has ever had before. They are most definitely not operating based on some set procedure – if they were, science would have been solved centuries ago. So if you want to say that they are not doing creative work, then I literally have no idea what the word creative means.

I suspect what’s going on here is that people have a warped, superficial view of what creativity is. They’ve internalized the idea that creativity belongs only to the domain of things that are “emotional” or “artistic” or “colourful”, whereas doing science only involves the manipulation of mere numbers and the use of cold, unfeeling “logic” – there’s no splattering of paint involved, so how can it be creative?

I hope that by now I’ve written enough that you can see why this idea is nonsense. If not, though, I’ll spell it out: creativity doesn’t care what your subject matter is. It doesn’t care if you’re working with reds and blues or with ones and zeros. It doesn’t care if you’re wearing a painter’s frock or a chemist’s lab coat. It doesn’t care if your tools are a pen and inkwell or an atomic force microscope. All that creativity cares about is originality: the generation of new ideas or thoughts or ways of doing something. Creativity is what happens when we employ the full extent of our intellect in solving a problem that is at the very limits of our ability to solve. Creativity is about NP-intelligence and non-proceduralizability and immunity to introspection; it’s not about some otherworldly spark of magical pixie dust. No, the demystified view of creativity is one that unifies the Sistine Chapel and Quantum Electrodynamics under one great heading of genius – the thing that separates humanity from everything else in the universe, the thing that makes Homo sapiens unique.

Now, I suppose you could try to rescue the phrase. When people say “It’s more of an art than a science”, or perhaps more to the point, “We’ve got it down to a science”, what they really mean by “science” is “settled science”. They’re not talking about the actual process of doing science; they’re talking about things that scientists have already figured out, and that everyone else is trying to learn. There’s a big difference between learning something and discovering something, after all. And in that sense, what most people learn in high school science class is pretty algorithmic – certainly they’re not learning how to discover something new. Usually they’re just learning how to apply things that someone else has already discovered.

Still, though. Still I don’t like the phrase. I don’t like the idea of people associating science with a body of knowledge rather than a process of discovery. I don’t like the idea of people viewing science as non-creative when it’s possibly the most creative thing that humanity does. And most of all I don’t like the idea of logic being the opposite of creativity.

No, I’d rather just say that science is science, and art is art, and that they’re both creative – and that they’re both genius, and that when you get right down to it, they’re both pretty special.

[Up next: more on P- and NP-intelligence, and how they pertain to AI-risk]

Commonplace

the be to of and a in that
have i it for not on with he
as you do at this but his by
from they we say she or an will

my one all would there their what so,
up out if about who get which go,
me when make can like time no
just him know

take people into year your good
some could
them see other than then now
look only come its over think

also back, after use
two how our work first well way
even new, want because,
any these give day most us

(Some “poetry” I “wrote” a few years back. Can you guess what the gimmick is? I won’t give it away, but I thought it was a fun exercise in constrained writing)

To P, or to NP? That is the question

I was having a discussion about high school a few days ago, and accidentally stumbled upon half an insight.

I was trying to articulate why I always found English class to be so unenjoyable, and one of the explanations I came up with was that we, as high schoolers, simply weren’t developed enough (as students, or as readers, or as people) to do interesting literary analysis yet. Think of the typical high school essay or book report: “The theme of this book is Death. Because the main character is dying. And even though he was rich, death is something we all must face. By the end of the book he was able to accept death.” One could be forgiven for not being enthralled. At that age our analyses were almost always either superficial, or overly simplistic, or…well, kind of boring. If the theme wasn’t Death (bad, but we should accept it), it was Racism (bad), or Hardship (sort of bad, and apparently endemic to Cape Breton and the Prairies). There was little subtlety and even less nuance. And of course it doesn’t really matter if you’re the one coming up with the boring analysis – boring is still boring. I don’t know if this was the reason I didn’t like English class, but it seems like at least part of the reason (especially since I have a strong feeling that I’d enjoy it much more at my current age).

Anyway, my instinct for self-deprecation served me well here, because I ended up offhandedly summing the whole thing up as, “Basically I wasn’t smart enough to interest myself in high school.”

And that got me thinking. I meant it as a joke, but could something like that actually be true? Could there be times in our lives that we’re more or less interesting to our own selves? Granted, it seems kind of absurd at first glance – you’re the one doing the judging here, after all, and how can you be smart or dumb relative to yourself? But it’s actually not as paradoxical as it sounds.

Consider: much has been made over the years of the analogy between creative problem solving and the class of mathematical problems that complexity theorists call NP. NP problems, recall, are those for which a solution is extremely difficult to find, but very simple to verify. The classic example is factoring large numbers. Given a large number, finding its prime factors is in general a difficult problem; however, if someone gives you a potential solution (i.e. a set of possible factors) you can easily check if the solution is correct – just multiply the factors and see if the right number comes out. Many things people do that we call “creative” (writing a novel, making a work of art, coming up with inventive solutions to problems) have a similar character – it’s much much easier to recognize good work once it has been created than it is to actually create it. Few are the people who can make a show like Breaking Bad, in other words, but many are those who can enjoy it. In that case the “problem” one faces is to create a TV show that people will enjoy, and Breaking Bad represents one of many possible “solutions”. Needless to say, generating solutions to this problem is quite difficult – I’ve never made a TV show, but I’ve heard it’s non-trivial. Verifying solutions, though? Piece of cake! Just sit down and watch the show. If you like it, you’re done. This is why many mathematicians have said that if P (roughly the class of problems that are “easy” to solve) were to turn out to be equal to NP, it would amount to an algorithm for “automating creativity”.

(Of course, you don’t really need P=NP to be able to automate creativity; you can always just do it by creating an AI, assuming you count that as “automating”. P=NP might allow you to dumbly automate creativity in full generality, though)
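To make the factoring asymmetry concrete, here’s a quick Python sketch (again just toy code of mine, not anything from the literature). Verifying a proposed factorization is a handful of multiplications; finding one by trial division takes time that grows exponentially with the number of digits, which is exactly the gap the whole analogy is built on.

```python
def verify_factors(n, factors):
    # Verifying is easy: multiply the proposed factors and compare
    product = 1
    for f in factors:
        if f <= 1:
            return False  # rule out trivial "factors"
        product *= f
    return product == n

def find_factors(n):
    # Finding is hard: trial division takes time that grows
    # exponentially with the number of digits of n
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(verify_factors(91, [7, 13]))  # True, instantly
print(find_factors(91))             # [7, 13] - fine for small n,
                                    # hopeless for a 600-digit one
```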

So let’s go back to my offhand remark. What would it mean to be smart enough (or not smart enough) to interest yourself? Well, in light of the above discussion, I propose that we can think of people as having two different kinds of intelligence (metaphorically speaking, anyway). The first we might call P-intelligence (after the class of “easy” problems, P – you could also call it recognition intelligence, or verification intelligence). It’s the kind of intelligence that allows us to recognize good solutions to problems when we see them, or enjoy good creative works. The second we might call NP-intelligence (or creative intelligence, or generative intelligence). This would be the kind of intelligence that allows us to come up with innovative solutions to problems, or generate great works of art. Of course, the names are merely meant to be suggestive; humans almost certainly can’t actually solve NP problems efficiently. But to whatever extent we can creatively solve difficult problems, I’ll label that ability NP-intelligence.

Viewed from this perspective, it makes perfect sense to talk about people not being smart enough to interest themselves. It simply means that their P-intelligence has developed to a point that they crave novel, interesting stimuli, but their NP-intelligence hasn’t developed enough to provide it. And I think this is what was going on in English class for me – I knew roughly what actually engaging work should look like, and I knew that I wasn’t producing it. Always I wrote at the outermost limits of my ability, but that was no guarantee the results would be deemed compelling by my inner critic, or even acceptable – boredom doesn’t grade on a curve after all, even if the teacher does.

Now, it may seem like overkill to invent a whole new intelligence dichotomy (especially when there’s already like 17 of them) just to explain me not liking one measly class in high school – I mean, maybe I just had bad teachers or something. But the idea rings very true to me. I feel that I really should have liked English class (or could have, anyway), and I feel that I would like it far more today. And even putting English aside, I know that now, at 28, I find my own thoughts interesting in a way that I definitely didn’t when I was younger. I find myself saying “huh” a lot more often when I have some new idea, and then doing that staring-off-into-space thing as I consider the idea in my head, and work out the implications. That didn’t really used to happen, as far as I can remember. And I’m certainly never bored while writing a blog post.

(This all sounds horribly horribly braggy, I know – “I’m so smart I can amaze even myself!” – but I really don’t mean it that way, I swear. All I’m saying is that there’s been a relative shift between my ability to determine whether a thought is interesting, and my ability to come up with such thoughts. The latter is much closer to the former than it used to be. And that’s interesting, I think – but it’s completely independent of whether or not either ability is actually high on some absolute scale.)

Okay, so let’s grant for the interim that these two types of intelligence make sense as a concept, and actually correspond to something real. What can we say about them? Well, obviously both kinds of intelligence develop gradually over many years as we mature; it’s not simply an on-off switch. As kids we’re entertained by simple stimuli and stumped by straightforward puzzles, but as we get older the degree of complexity we’re comfortable with balloons dramatically, and we start liking novels with intricate plots, and being able to solve more and more difficult problems. Of course, the two types of intelligence need not develop in concert – in fact, it’s pretty much trivially true that your P-intelligence will always be ahead of your NP-intelligence. If you’re capable of generating a solution to a difficult problem, then you’re certainly capable of recognizing it as a solution. But the reverse does not also hold – the ability to verify that a solution works does not imply the ability to come up with the solution in the first place (see the Breaking Bad example above). So regardless of what level of problem you are capable of solving, the level of problem you can recognize a solution to will be higher still. In other words: no matter how smart you are, and no matter how hard you try, your capabilities are always going to fall at least a little short of your standards. This should maybe give you some pause if you tend to be the self-critical type.

So what else can we say about the two kinds of intelligence? Well, here we pretty much reach the limit of how far I’ve taken the idea so far (hey, I said it was half an insight). A number of interesting questions suggest themselves immediately though:

1. How real are the two types of intelligence? Can we come up with a test that can reliably distinguish between them?

2. Further to (1), how independent are the two types of intelligence? What’s the correlation between them? Could there be individuals who have an unusually large or small gap between the two intelligences? Does the size of the gap correlate with overall intelligence level? More speculatively, could this be related to the Dunning-Kruger effect?

3. Further to (2), if the size of the gap does vary between individuals, are there any interesting correlations between gap size and personality traits? For example, does gap size correlate with extraversion or novelty-seeking behaviour?

4. What are the exact shapes of the development curves for the two types of intelligence? Obviously P-intelligence will always outstrip NP-intelligence as mentioned above, but how does the size of the gap change with age? Are there particular times in our life when the gap is unusually large or small? I alluded to high school as one such possible time – might there be a “plateau” above that age where your P-intelligence stops growing, allowing your NP-intelligence to “catch up”? And perhaps more interestingly, what happens with the development curves at young ages? I suspect that there might be particularly rapid shifts in the intelligence gap during childhood, and that this could potentially explain a number of child behaviours. For instance, it could explain why kids love to play make-believe at young ages but quickly grow out of it (NP-intelligence is initially close enough to P-intelligence to provide captivating scenarios, but eventually falls too far behind as kids get older). Or it could explain why children are so incredibly prone to being bored (the gap is particularly large at that age, so they can’t really interest themselves with their thoughts).

These are all interesting questions, and they’re the kind of questions I think you’d need to answer if you wanted to take the idea further. I have scattered thoughts on some of them, but nothing worth writing down yet.

Anyway, at this point I’m mostly just blogging out loud, so it’s probably time I wrapped this up. I don’t have any grand conclusions or anything – sorry if you were hoping for that. I’d be very interested to hear other people’s thoughts on the whole notion, though – does this dichotomy ring true to you?

If not, hopefully I’ve at least managed to interest you – I know I’ve managed to interest myself.

Restricted Range and the RPG Fallacy

[A note: this post may be more controversial than is usual for me? I’m not sure; I lost my ability to perceive controversy years ago in a tragic sarcasm-detector explosion]

[Another note: this post is long]

I.

Consider the following sentence:

“Some people are funnier than others.”

I don’t think many people would take issue with this statement – it’s a fairly innocuous thing to say. You might quibble about humour being subjective or something, but by and large you’d still likely agree with the sentiment.

Now imagine you said this to someone and they indignantly responded with the following:

“You can’t say that for sure – there are different types of humour! Everyone has different talents: some people are good at observational comedy, and some people are good at puns or slapstick. Also, most so-called “comedians” are only “stand-up funny” – they can’t make you laugh in real life. Plus, just because you’re funny doesn’t mean you’re fun to be around. I have a friend who’s not funny at all but he’s really nice, and I’d hang out with him over a comedian who’s a jerk any day. Besides, no one’s been able to define funniness anyway, or precisely measure it. Who’s to say it even exists?”

I don’t know about you, but I would probably be pretty confused by such a response. It seems to consist of false dichotomies, unjustified assumptions, and plain non-sequiturs. It just doesn’t sound like anything anyone would ever say about funniness.

On the other hand, it sounds exactly like something someone might say in response to a very similar statement.

Compare:

“Some people are more intelligent than others.”

“You can’t say that for sure – there are different types of intelligence! Everyone has different talents: some people have visual-spatial intelligence, and some people have musical-rhythmic intelligence. Also, most so-called “intellectuals” only have “book-smarts” – they can’t solve problems in the real world. Plus, just because you’re smart doesn’t mean you’re a hard worker. I have a friend who’s not very bright but he works really hard, and I’d choose him over a lazy brainiac any day. Besides, no one’s been able to define intelligence anyway, or precisely measure it. Who’s to say it even exists?”

Sound more familiar?

II.

The interesting thing is, you don’t always get a response like that when talking about intelligence.

Quick – think about someone smarter than yourself.

Pretty easy, right? I’m sure you came up with someone. Okay, now think of someone less smart than you. Also not too hard, I bet. When you forget about all the philosophical considerations, when there’s no grand moral principle at stake – when you just think about your ordinary, everyday life, in other words – it becomes a lot harder to deny that intelligence exists. Maybe Ray from accounting is a little slow on the uptake, maybe your friend Tina always struck you as sharp – whatever. The point is, we all know people who are either quicker or thicker than ourselves. In that sense almost everyone acknowledges, at least tacitly, the existence of intelligence differences.

It’s only when one gets into a debate about intelligence that things change. In a debate concrete observations and personal experiences seem to give way to abstract considerations. Suddenly intelligence becomes so multifaceted and intangible that it couldn’t possibly be quantified. Suddenly anyone who’s intelligent has to have some commensurate failing, like laziness or naivete. Suddenly it’s impossible to reason about intelligence without an exact and exception-free definition for it.

Now, to be clear, I’m not saying a reasonable person couldn’t hold these positions. While I think they’re the wrong positions, they’re not prima facie absurd or anything. And who knows, maybe intelligence really is too ill-defined to discuss meaningfully. But if you’re going to hold a position like that then you should hold it consistently – and the debaters who say you can’t meaningfully talk about intelligence often seem perfectly willing to go home after the debate and call Ray from accounting an idiot.

My point, stated plainly, is this: people act as if intelligence exists and is meaningful in their everyday life, but are reluctant to admit that when arguing or debating about intelligence. Moreover, there’s a specific tendency in such debates to downplay intelligence differences between people, usually by presuming the existence of some trade-off – if you’re smart then you can’t be diligent and hardworking, or you have to be a bad person, or you have to have only a very circumscribed intelligence (e.g. book smarts) while being clueless in other areas.

This is a weird enough pattern to warrant commenting on. After all, as I pointed out in the opening, most people don’t tie themselves up in knots trying to argue that funniness doesn’t exist, or that everyone is funny in their own way. Why the particular reticence to admit that someone can be smart? And why is this reticence almost exclusively limited to debates and arguments? And – most importantly – why is there this odd tendency to deny differences by assuming trade-offs? I can think of at least three answers to these questions, each of which is worth going into.

III.

First, let’s address this abstract-concrete distinction I’ve been alluding to. People seem to hold different beliefs about intelligence when they’re dealing with a concrete, real-life situation than they do when talking about it abstractly in a debate. This is a very common pattern actually, and it doesn’t just apply to intelligence. It goes by a few names: construal level theory in academic circles, and near/far thinking as popularized by Robin Hanson of Overcoming Bias. Hanson in particular has written a great deal about the near/far distinction over the years – here’s a summary of his thinking on the topic:

The human mind seems to have different “near” and “far” mental systems, apparently implemented in distinct brain regions, for detail versus abstract reasoning.  Activating one of these systems on a topic for any reason makes other activations of that system on that topic more likely; all near thinking tends to evoke other near thinking, while all far thinking tends to evoke other far thinking.

These different human mental systems tend to be inconsistent in giving systematically different estimates to the same questions, and these inconsistencies seem too strong and patterned to be all accidental.  Our concrete day-to-day decisions rely more on near thinking, while our professed basic values and social opinions, especially regarding fiction, rely more on far thinking.  Near thinking better helps us work out complex details of how to actually get things done, while far thinking better presents our identity and values to others.  Of course we aren’t very aware of this hypocrisy, as that would undermine its purpose; so we habitually assume near and far thoughts are more consistent than they are.

– Robin Hanson, Overcoming Bias: http://www.overcomingbias.com/2009/01/a-tale-of-two-tradeoffs.html

The basic idea is, people tend to have two modes of thought: near and far. In near mode we focus on the close at hand, the concrete, and the realistic. Details and practical constraints are heavily emphasized while ideals are de-emphasized. Near mode is the hired farmhand of our mental retinue: pragmatic, interested in getting the job done, and largely unconcerned with the wider world or any grand moral principles.

In contrast, far mode focuses more on events distant in space or time, the abstract over the concrete, and the idealistic over the realistic. Thinking in far mode is done in broad strokes, skipping over boring logistics to arrive at the big picture. Far mode is the brain’s wide-eyed dreamer: romantic, visionary, and concerned with the way the world could be or should be rather than the way it is.

The near/far distinction is an incredibly useful concept to have crystallized in your brain – once you’ve been introduced to it you see it popping up all over the place. I’ve found that it helps to make sense of a wide variety of otherwise inexplicable human behaviours – most obviously, why I find myself (and other people) so often taking actions that aren’t in accordance with our professed ideals (actions are near, ideals are far). And if you’re ever arguing with someone and they seem to be saying obviously wrongheaded things, it’s often useful to take a step back and consider whether they might simply be thinking near while you’re thinking far, or vice versa.

The applicability of the near/far idea to the above discussion should be obvious, I think. In a debate you’re dealing with a subject very abstractly, you face relatively low real-world stakes, and you have an excellent opportunity to display your fair-minded and virtuous nature to anyone watching. In other words, debates: super far. So naturally people tend to take on a far-mode view of intelligence when debating about it. And a far mode view of intelligence will of course accord with our ideals, foremost among them being egalitarianism. We should therefore expect debaters to push a picture of intelligence that emphasizes equality: no one’s more intelligent than anyone else, everyone’s intelligent in their own way, if you’re less intelligent you must have other talents or a better work ethic, etc. In other words, people wind up saying things that are optimized for values rather than truth, and the result is a debate that – in both the Hansonesque and Larsonesque senses – is way off to the far side.

Of course, just because a thought comes from far mode thinking doesn’t mean it’s wrong – both modes have their advantages and disadvantages. But all else being equal we should expect near mode thinking to be more accurate than far, simply because it’s more grounded in reality. A screwup in near mode is likely to have much worse consequences than a screwup in far mode, and this keeps near mode relatively honest. For example, a bad performance review at your job could be the difference between you getting a promotion and not getting a promotion. (Or something? I’ve never actually had a real job before) In that case it might be very relevant for you to know if Ray from accounting is going to make some mistake that will affect you and your work. And when he does make a mistake, you’re not going to be thinking about how he probably has some other special talent that makes up for it – you’re going to get Barry to double-check the rest of his damn work.

Near mode isn’t infallible, of course – for example, near mode emphasizes things we can picture concretely, and thus can lead us to overestimate the importance of a risk that is low probability but has high salience to us, like a plane crash. In a case like that far thinking could be more rational. Importantly, though, this near mode failure comes from a lack of real-world experiential feedback, not an excess. When near mode has a proper grounding in reality, as it usually does, it’s wise to heed its word.

IV.

Let’s go back to my (admittedly made-up) response in Section I for a second. I want to focus on one notion in particular that was brought up:

“… [J]ust because you’re smart doesn’t mean you’re a hard worker. I have a friend who’s not very bright but he works really hard, and I’d choose him over a lazy brainiac any day.”

On the face of it, this is a very strange thing to say. It’s a complete non sequitur, at least formally: totally unrelated to the proposition at hand, which was that some people are more intelligent than others. In fact, if you’ll notice, our interlocutor has gone beyond a non sequitur and actually conceded the point in this case – they’ve acknowledged that people can be more or less intelligent. So what’s going on here?

First I should emphasize that, although I made this response up, I’ve seen people make exactly this argument many times before. I’ve seen it in relation to intelligence and hard work, and I’ve seen it in a plethora of other contexts:

“Do you prefer more attractive girls?”/”Well, I’d rather have a plain girl with a great personality than someone who’s really hot but totally clueless.”

“Movie special effects have gotten so much better!”/”Yeah but I’d prefer a good story over good CGI any day.”

“I hate winter!”/”Sure, but better that it be cold and clear than mild and messy.”

What do all these responses have in common? They all concede the main point at hand but imply that it doesn’t matter, and they all do so by bringing up a new variable and then assuming a trade-off. The trade-off hasn’t been argued for or shown to follow, mind you – it’s just been assumed. And in all cases one is left to wonder: why not both? Why not someone who’s intelligent and hardworking? Why not a day that’s mild and clear?

(Although to be fair arguments become a lot more fun when you can just postulate any trade-off you want: “You just stole my purse!”/”Ah yes, but wouldn’t you rather I steal your purse, than I not steal your purse and GIANT ALIEN ZOMBIES TORTURE US ALL TO DEATH?”)

I’ve always found this tactic annoying, and I see it frequently enough that I couldn’t help but give it a name. I’ve taken to calling it the RPG fallacy.

Roughly speaking, I would define the RPG fallacy as “assuming without justification that because something has a given positive characteristic, it’s less likely to have another unrelated positive characteristic.” It gets its name from a common mechanic in role-playing games (RPGs) where a character at the start of the game has a number of different possible skills (e.g. strength, speed, charisma, intelligence, etc.) and a fixed number of points to be allotted between them. It’s then up to you as the player to decide where you want to put those points. Note that this system guarantees that skills will be negatively correlated with one another: you can only give points to a certain skill at the expense of the others. So, for example, you can have a character who’s very strong, but only at the expense of making her unintelligent and slow. Or you could make your character very agile and dexterous, but only if you’re fine with him being an awkward weakling. The fixed number of points means that for all intents and purposes, no character will be better than any other: they can be amazing in one area and completely useless in the others, or they can be pretty good in a few areas and not-so-great in the others, or they can be mediocre across the board, but they can’t be uniformly good or uniformly bad. They will always have a strength to compensate for any particular weakness.
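(For the programmers in the audience, this forced negative correlation is easy to demonstrate with a quick simulation. Here’s a minimal Python sketch – the character count, number of skills, and point budget are all made-up values for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical RPG setup: 10,000 characters, four skills, and a fixed
# budget of 20 points split randomly between the skills (a Dirichlet
# split is one simple way to model "allot the points however you like").
n_characters, n_skills, budget = 10_000, 4, 20
skills = rng.dirichlet(np.ones(n_skills), size=n_characters) * budget

# Every row sums to exactly 20, so points in one skill necessarily come
# at the expense of the others: the pairwise correlations between skills
# all come out negative (about -1/3 for this symmetric setup).
print(np.corrcoef(skills, rowvar=False).round(2))
```

If you delete the fixed budget – say, by drawing each skill independently instead of splitting a shared pool – the off-diagonal correlations collapse to roughly zero. The trade-off lives entirely in the fixed sum.)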

The RPG fallacy, then, is importing this thinking into the real world (as in the above examples). It’s a fallacy because we don’t have any particular reason to believe that reality is like a video game: we’re not given a fixed number of “points” when we’re born. No, we may wish it weren’t so, but the truth is that some people get far more points than others. These are the people who speak four languages, and have written five books, and know how to play the banjo and the flute, and have run a three-hour marathon, and are charming and witty and friendly and generous to boot. People like this really do exist, and unfortunately so do their counterparts at the opposite end of the spectrum. And once one begins to comprehend the magnitude of the injustice this entails – how incredibly unfair reality can be – one starts to see why people might be inclined to commit the RPG fallacy.

(We should try to resist this inclination, of course. Burying our heads in the sand and ignoring the issue might make us feel better, but it won’t help us actually solve the problem. This is one of the many reasons I consider myself a transhumanist)

It’s important to emphasize again that the RPG fallacy is only a fallacy absent some reason to think that the trade-off holds. If you do have such a reason, then there’s of course nothing wrong with talking about trade-offs. After all, plenty of trade-offs really do exist in the real world. For example, if someone devotes their entire life to learning about, say, Ancient Greece, then they probably won’t be able to also become a world-class expert on computer microarchitectures. This is a real trade-off, and we certainly wouldn’t want to shut down all discussion of it as fallacious. The reason this case would get a pass is because we can identify why the trade-off might exist – in this case because there’s a quantity analogous to RPG skill points, namely the fixed number of hours in a day (if you spend an hour reading about Plato’s theory of forms, that’s an hour you can’t spend learning about instruction pipelining and caching). Thus expecting trade-offs in expertise to hold in real life wouldn’t be an example of the RPG fallacy.

But even here we have to be careful. Expertise trade-offs will only hold to the extent that there really is a fixed amount of learning being divvied up. And while it’s certainly true that one has to decide between studying A and studying B on a given day, it’s also true that some people decide to study neither A nor B – nor much of anything, really. Moreover, some people are simply more efficient learners: they can absorb more in an hour than you or I could in a day. These give us some reasons to think that expertise trade-offs need not hold in all cases – and indeed, in the real world some people do seem to be more knowledgeable than others across a broad range of domains. This illustrates why it’s so important to be aware of the RPG fallacy – unless you’re taking a long hard look at exactly what constraints give rise to your proposed trade-off, it’s all too easy to just wave your hands and say “oh, everyone will have their own talents and skills.”

So that’s the second answer to our earlier questions. Why do people keep bringing up seemingly irrelevant trade-offs when talking about intelligence? Because they occupy an implicit mindset in which aptitude in one area comes at the expense of aptitude in another, and they want to emphasize that being intelligent would therefore come at a cost. This in turn is related to the earlier discussion of far mode and its focus on egalitarianism and equality – if everyone has the same number of skill points, then no one’s any better than anyone else. We could probably classify both of these under the heading of “just-world” thinking – believing the world is a certain way because it would be fairer that way. Unfortunately, so far as I’ve been able to tell, if there is a game designer for this world he doesn’t seem much concerned with fairness.

V.

Now we come to the most speculative (and not coincidentally, the most interesting) part of this piece. So far I’ve been essentially taking it as given that the trade-offs people bring up when discussing intelligence (e.g. one type of intelligence versus another, or being smart versus being a hard worker) are fictitious. But perhaps I’m being unfair – might there not be a grain of truth in these kinds of assertions?

Well, maybe. A small grain, anyway. There might be reason to think that even if these trade-offs don’t actually exist in the real world, people will still perceive them in their day-to-day lives – phantom correlations, if you like.

Let’s consider the same example we looked at in the last section, of hard work and intelligence. We want to examine why people might wind up perceiving a negative correlation between these qualities. For the sake of discussion we won’t assume that they’re actually negatively correlated – in fact, to make it interesting we’ll assume that they’re positively correlated. And we’ll further assume for simplicity that intelligence and hardworking-ness are the only two qualities people care about, and that we can quantify them perfectly with a single number, say from 0 to 100 (man, we really need a better word for the quality of being hardworking – I’m going to go with diligence for now even though it doesn’t quite fit). If we were to then take a large group of people and draw a scatterplot of their intelligence and diligence values, it might look something like this:

[Figure: scatterplot of intelligence vs. diligence for the whole population, showing the assumed positive correlation]

With me so far? Okay, now the interesting part. In many respects people tend to travel in a fairly constrained social circle. This is most obvious in regards to one’s job: a significant fraction of one’s time is typically spent around coworkers. Now let’s say you work at a fairly ordinary company – respectable, but not outrageously successful either. The hiring manager at your company is then going to be faced with a dilemma – of course they’d like to hire people who are both hardworking and intelligent, but such people always get scooped up by better companies that are willing to pay more. So they compromise: they’re willing to accept a fairly lazy person so long as she’s intelligent, and they’ll accept someone who’s not so bright if he’ll work hard enough to make up for it. Certainly, though, they don’t want to hire someone who’s both lazy and unintelligent. In practice, since in this world intelligence and diligence can be exactly quantified, they’d likely end up setting a threshold: anyone with a combined diligence+intelligence score of over, say, 100 will be hired. And since the better companies will also be setting similar but higher thresholds, it might be that your company will be unable to hire anyone with a combined score over, say, 120.

So: for the people working around you, intelligence+diligence scores will always be somewhere between 100 and 120. Now what happens if we draw our scatterplot again, but restrict our attention to people with combined scores between 100 and 120? We get the following:

[Figure: the same scatterplot, restricted to combined scores between 100 and 120, now showing a negative correlation]

And lo and behold, we see a negative correlation! This despite the fact that intelligence and diligence were assumed to be positively correlated at the outset. It’s obvious what’s going on here: by restricting the range from 100 to 120, your company ensures that people can only be hired by either being highly intelligent and not very diligent, or highly diligent and not very intelligent (or perhaps average for both). The rest of the population will be invisible to you: the people who are both intelligent and diligent will end up at some star-studded Google-esque company, and the people who are neither will end up at a lower-paying, less prestigious job. And so you’ll walk around the office at work and see a bunch of people of approximately the same skill level, some smart but lazy and some dull but highly motivated, and you’ll think, “Man, it sure looks like there’s a trade-off between being intelligent and being hardworking.”

As I said, phantom correlations.
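If you’d like to convince yourself that this restriction effect is real, here’s a minimal simulation sketch in Python. All of the numbers – the +0.4 population correlation, the rough 0–100 scale, the 100–120 hiring band – are invented to match the toy setup above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Population: intelligence and diligence on a rough 0-100 scale,
# POSITIVELY correlated, as stipulated at the outset.
n = 100_000
mean = [50, 50]
cov = [[225, 90],   # each trait has a standard deviation of 15;
       [90, 225]]   # a covariance of 90 gives a correlation of +0.4
intelligence, diligence = rng.multivariate_normal(mean, cov, size=n).T
print(np.corrcoef(intelligence, diligence)[0, 1])   # ~ +0.4

# Your company only ever sees people with combined scores between 100
# and 120: anyone better gets scooped up elsewhere, anyone worse isn't hired.
combined = intelligence + diligence
hired = (combined > 100) & (combined < 120)
print(np.corrcoef(intelligence[hired], diligence[hired])[0, 1])   # strongly negative
```

The second correlation comes out strongly negative despite the positive correlation in the full population: within a narrow band of combined scores, learning that someone is smart tells you they must be on the lazier side, more or less by arithmetic. (This is closely related to what statisticians call Berkson’s paradox, or restriction of range.)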

We can apply this idea to the real world. Things are of course much messier in reality: companies care about far more than two qualities, few of the qualities they do care about can be quantified precisely, and for various reasons they probably end up accepting people with a wider range of skill levels than the narrow slice I considered above. All of these things will weaken the perceived negative correlation. But the basic intuition still goes through, I think: you shouldn’t expect to see anyone completely amazing at your company – if they were that amazing, they’d probably have a better job. And similarly you shouldn’t expect to see someone totally useless across the board – if they were that useless, they wouldn’t have been hired. The range ends up restricted to people of broadly similar skill levels, and so the only time you’ll see a coworker who’s outstanding in one area is when they’re less-than-stellar in another.

Or, to take what might be a better example, consider an average university. A university is actually more like our hypothetical company above, in that it tends to care about easily quantifiable things like grades and standardized test scores. And it would probably have a similarly restricted range: the really good students would wind up going to a better university, and the really bad ones probably wouldn’t apply in the first place. So maybe in practice it would turn out that the university could only reliably get students with averages between 80 and 90. In that case you would see exactly the same trade-off that we saw above: the people with grades in that range get them by being either hardworking or smart – but not both, and not neither. If they had both qualities they’d have higher grades, and if they had neither they’d have lower. So again: trade-offs abound.

Now, how much can this effect explain in real life? Maybe not a whole lot. For one, we have the messy complications I mentioned above, which will tend to widen the overall skill range of people at a given institution and therefore weaken the effect. More important, though, is the fact that people simply don’t spend 100% of their time with coworkers. They also have family and friends, not to mention (for most people) a decade plus in the public school system. That’s probably the biggest weakness of this theory: school. The people you went to school with were probably a fairly representative sample of the population, and not pre-selected to have a certain overall skill level. So in theory one should wind up with fairly accurate views about correlations and trade-offs between characteristics simply by going through high school and looking around.

Still, I think this is an interesting idea, and I think it could explain at least a piece of the puzzle at hand. Moreover, it seems to me to be an idea gesturing towards a broader truth: that trade-offs might be a kind of “natural” state that many systems tend towards – that in some sense, “only trade-offs survive”. Scott Alexander of Slate Star Codex had much the same idea (he even used the university example), and I want to explore it further in a future post.

But I guess the future can wait until later, and right now it’s probably time I got around to wrapping this post up.

So: if you’ll recall, we started out by pondering why debates about intelligence always turn so contentious, and why people have a tendency to assume that intelligence has to be traded off against other qualities. We considered some possible explanations: one was far mode thinking, and its tendency to view intelligence as a threat to equality. Another was the RPG fallacy, an implicit belief that everyone has a set number of “points” to be allocated between skills, necessitating trade-offs. And now we have our third explanation: that people tend to not interact much with those who have very high or very low overall skill levels, resulting in an apparent trade-off between skills among the people they do see.

These three together go a long way towards explaining what was – to me, anyway – a very confusing phenomenon. They may not give the whole story of what’s going on with our culture’s strange, love/hate relationship with intelligence – I think you’d need a book to do that justice – but they’re at least a start.

VI.

I want to close with a sort of pseudo-apology.

By nature I tend to be much less interested in object-level questions like “What is intelligence?”, and much more interested in meta-level questions like “Why do people think what they do about intelligence?”. I just usually find that examining the thought processes behind a belief is much more interesting (and often more productive) than debating the belief itself.

But this can be dangerous. The rationalist community, which I consider myself loosely a part of, is often perceived as being arrogant. And I think a lot of that perception comes from this interest we tend to have in thought processes and meta-level questions. After all, if someone makes a claim in a debate and you respond by starting to dissect why they believe it – rather than engaging with the belief directly – then you’re going to be seen as dismissive and condescending. You’re not treating their position as something to be considered and responded to, you’re treating it as something to be explained away. Thus, the arrogant rationalist stereotype:

“Oh, obviously you only think that because of cognitive bias X, or because you’re committing fallacy Y, or because you’re thinking in Z mode. I’ve learned about all these biases and therefore transcended them and therefore I’m perfectly rational and therefore you should listen to me.”

It dismays me that the rationalist community gets perceived this way, because I really don’t think that’s how most people in the community think. Scott Alexander put it wonderfully:

A Christian proverb says: “The Church is not a country club for saints, but a hospital for sinners”. Likewise, the rationalist community is not an ivory tower for people with no biases or strong emotional reactions, it’s a dojo for people learning to resist them.

We’re all sinners here. You don’t “transcend” biases by learning about them, any more than you “transcended” your neurons when you found out that was how your brain thought. I’ve known about far mode for years and I’m just as susceptible to slipping into it as anyone else – maybe even more so, since I tend to be quite idealistic. The point of studying rationality isn’t to stop your brain from committing fallacies and engaging in cognitive biases – you could sooner get the brain to stop thinking with neurons. No, the point of rationality is noticing when your brain makes these errors, and then hopefully doing something to correct for them.

That’s what I was trying to do with this piece. I was trying to draw out some common patterns of thought that I’ve seen people use, in the hope that they would then be critically self-examined. And I was trying to gently nudge people towards greater consistency by pointing out some contradictions that seemed to be entailed by their beliefs. But I was not trying to say “Haha, you’re so dumb for thinking in far mode.”

It’s very easy to always take the meta-road in a debate. You get to judge everyone else for their flawed thinking, while never putting forward any concrete positions of your own to be criticized. And you get to position yourself as the enlightened thinker, who has moved beyond the petty squabbles and arguments of mere mortals. This may not be the intention – I think most rationalists go meta because they really are just interested in how people think – but that’s how it comes across. And I think this can be a real problem in the rationalist community.

So in the spirit of trying to be a little less meta, I thought I’d end by giving my beliefs about intelligence – my ordinary, run-of-the-mill object-level beliefs. That way I’m putting an actual position up for criticism, and you can all feel free to analyze why it is I hold those beliefs, and figure out which cognitive biases I must be suffering from, and in general just go as meta on me as you want. It seems only fair.

Thus, without further ado…

  • I think intelligence exists, is meaningful, and is important
  • I think one can talk about intelligence existing in approximately the same way one talks about funniness or athleticism existing
  • I think IQ tests, while obviously imperfect, capture a decently large fraction of what we mean when we talk about intelligence
  • Accordingly, I think most people are far too dismissive of IQ tests
  • I think the notion of “book-smarts” is largely bunk – in my experience intelligent people are just intelligent, and the academically gifted usually also seem pretty smart in “real life”
  • With that being said, some people of course do conform to the “book-smart” stereotype
  • I think that, all else being equal, having more intelligence is a good thing
  • I suppose the above means I sort of think that “intelligent people are better than other people” – but only in the sense that I also think friendly, compassionate, funny, generous, witty and empathetic people are better than other people. Mostly I just think that good qualities are good.
  • I think it’s a great tragedy that some people end up with low intelligence, and I think in an ideal world we would all be able to raise our intelligence to whatever level we wanted – but no higher

So there: that’s what I believe.

Scrutinize away.

Rejected Blog Titles

Other anagrams considered (but ultimately rejected) for use as a blog title:

-The Softer Pens

Decent, but the plural on “pens” ruins it. Plus, it lacks the appealing abstractness of what I ended up with.

-Three Soft Pens

Also not bad, but sadly inapplicable as there’s only one of me, not three. For now.

-Tent of Spheres

Pleasingly geometric if nothing else.

-She Oft Repents

The sentiment is there; the gender is not.

-Pent For Theses

A little too on the nose I think. Look, I’ll graduate when I graduate! Stop bugging me!

-Soft Serpent, Eh?

Wait, what are you implying here exactly?

-Not Herpes Fest

Well okay but I don’t see why you would feel the need to specify –

-Oft Nets Herpes

Hey now that’s not very nice

-Tents of Herpes

Oh come on it’s not tents

-Fresh Tot Peens

Okay now you’re just trying to get me on a watchlist, aren’t you?

New Year, New Blog

Among my New Year’s resolutions for 2015 is to start blogging more. Like many resolutions this one is admirable in its intent, dubious in its attainability, and ultimately narcissistic in its motivation. But hey, that’s never stopped anyone before. Rather than just continuing on with business as usual at my old blog, I decided to start up a new one with a dedicated domain name. I did this because I thought a fresh start might do me good, and more importantly because I thought paying for a domain might motivate me to post more.

(Plus, have you ever owned a domain name? It’s really cool)

The name of this blog is The Pen Forests. It doesn’t mean anything in particular, but it’s an anagram of my own name that I discovered years ago, and I’ve always been fond of the imagery it evokes. Incidentally, I realize I might have just given up enough information to prevent me from keeping this blog fully anonymous, but I don’t think that was ever in the cards. I do wonder, though, how many other names are anagrams of my own. For what it’s worth my name is not Seth F. Peterson.

The theme of this blog will continue to be whatever happens to interest me. If history is any indication, topics will likely include a mix of philosophy, rationality, and science. This will not be a didactic blog: I don’t intend to write pop science articles and explain quantum mechanics to the layman. Those are admirable goals, but they’re not for me. I want to write at the boundaries of my knowledge; at the outermost limits of my confusion. If I write a post it’ll be because I just figured something out, or I’m trying to figure something out by writing about it.

For better or for worse, blogging thrives on contrarianism. If you’re not going against some kind of existing grain then no one pays attention to you. At its best, this dynamic encourages bloggers to push back when public opinion has swung too far in one direction. At its worst, it incentivizes writers to use convoluted, too-clever logic to come up with reasons why some common sense truth is actually false. I will try to walk the fine line between being contrarian enough to be interesting and not so contrarian as to be stupid. I hope you can all appreciate that it’s a harder line to walk than you might think.

The motto of this blog will be “Impassioned Reason”. If that sounds like a contradiction in terms to you, then I haven’t written enough yet. Do stay tuned.

With some reluctance I will link you now to my old blog. My reluctance is not so much because I disagree with anything written there (although in many cases I’m much less sure of what I wrote than I was when I wrote it). It’s more due to a general embarrassment I feel in regards to any piece of mine that was written more than…oh, let’s say a week ago. Trust me, if you think something I wrote sounds pompous or naïve or stupid, then I am probably rereading it and cringing a thousand times worse than you are. I can only assume this pattern will continue in the future.

(This might be why I don’t blog that often)

My masochism aside, I’m currently about a quarter of the way through my first real post for this blog. I hope you will enjoy it, and at least a few of my future posts. If nothing else I hope I can introduce you to a few new concepts along the way and give you some different ways of thinking about things. And if you don’t particularly like a post, then please: say so and why in the comments! I welcome dissenting opinions and criticisms – that’s how I learn. I should probably emphasize, though, that I think debates are only useful if both parties enter into them with a sense of mutual curiosity and truth-seeking. If you don’t give me any indication that you understand that, then I probably won’t waste my time engaging with you.

Like any blog, this one will either be supported enough to blossom or neglected enough to wither. Hopefully it will be the former; more likely, the latter. In either case, though, this is me embarking on a bit of a journey. I hope to bare my intellectual soul over the coming months (and fate willing, years). If I do it right it’ll be a bit of an adventure.

I hope you’ll join me.