Is there an echo in here?

I’m starting to wonder if I might have been too hard on echo chambers.

The standard position these days is that echo chambers are uniformly terrible; that surrounding yourself with people who agree with you on every issue can only lead to closed-mindedness, toxic ingroup/outgroup dynamics, and increased polarization. Many people have commented on how the rise of partisan news networks and isolated internet communities has led to a society where people never have to have their beliefs challenged, or interact with those who disagree with them. And this is obviously a very bad thing – there’s almost nothing that runs more counter to the spirit of rationality and truth-seeking than the kind of self-congratulatory patting on the back you commonly see in intellectually closed-off communities. But despite all this, I still feel an impulse to speak up in favour of echo chambers, at least a little bit – I now think they might also serve a useful psychological function. Just as there are people who can benefit from reading Ayn Rand, I suspect there are people out there who could use a little bit more agreement in their life.

I’ve been going through a pretty rough patch in my life lately. I’m still trying to figure out why exactly this is, but I think part of it may be due to a feeling of intellectual isolation. Right now I feel like I’m living in an anti-echo chamber. It seems like almost everything I hear or read – either from friends, or on facebook, or on the general internet – is someone disagreeing with an opinion I hold. And it seems like any agreement people might have with my beliefs is either whispered or not voiced at all. Obviously this isn’t literally the case – it’s probably mostly just selective memory and a very human tendency to notice criticism more easily than agreement. But I do have a lot of weird and semi-controversial opinions that very few people in the world share, and people are generally not shy about disagreeing with those opinions.

Now, normally this wouldn’t really bother me – and from a purely intellectual point of view, it doesn’t. After all, why should I care if other people think I’m wrong about something? I’m pretty confident in my weird opinions (otherwise I wouldn’t hold them), but in the end I’m not afraid of any challenges to my beliefs. If someone convinces me that something I believe is wrong, I’ll just change my mind. *Shrug*. The goal is not to never be wrong, the goal is just to find the truth.

But saying these words doesn’t erase the reality that humans are social animals. We evolved to care a lot about other people’s opinions – in the ancestral environment it was probably extremely relevant to know whether or not the majority of people around you agreed with you. Having popular or unpopular opinions could literally mean the difference between life and death (or, even more relevantly from evolution’s point of view, between mating and not mating).

I worry that ever since [Bad Thing] happened last year, and I lost a major source of intellectual solidarity in my life, I’ve been feeling more and more like no one agrees with me, and that I’m all alone in believing what I do. And I worry that this has been slowly wearing me down, psychologically, and tripping some ancient mammalian brain circuits – circuits that say things like “YOU HAVE NO ALLIES” and “YOU ARE ABOUT TO BE SHUNNED AND EXILED BY THE TRIBE”.

So now I wonder if maybe people need a certain amount of agreement in their lives. If maybe perceiving everyone around you as constantly disagreeing with you is just as bad, psychologically speaking, as perceiving yourself as useless or unwanted or unattractive. If maybe – just maybe, to some tiny, infinitesimal extent – having a self-congratulatory echo chamber among friends is necessary to be emotionally healthy.

And on top of that: in addition to individual mental well-being, I wonder if agreement is more necessary for friendship than I realized. I’ve always sort of implicitly believed that it didn’t really matter if you disagreed with your friends on philosophical or political matters. All that was required for two people to be friends, thought I, was that they enjoy each other’s company, and that they have each other’s back in times of need. And I still think this is at least normatively true, in the sense that this is probably how friendships should work in an ideal world. But I’m less confident that this is how friendships really do work, in the world as it is right now. I mean, who knows? Maybe the more you disagree with friends, the more you sow subtle, barely noticeable seeds of dissent. Maybe you end up gradually weakening ties to your friends with every contrary opinion, because you subconsciously signal to them that you wouldn’t be a reliable ally if they were to ever really need you. Friendship is all about trust, after all, and maybe trust is really difficult in the face of persistent disagreement.

Or, you know, maybe not. I have no idea if any of this is true. I came up with all of this last night when I couldn’t sleep. I was lying in bed, mind racing and feeling generally frustrated about some article I had read, when I realized I was getting way more bothered by other people disagreeing with me than I used to. And so I set about trying to figure out why that was, and the result is this post (which I’m not all that confident in). One natural question to ask is: why now? Why do I all of a sudden feel so isolated when my opinions haven’t really changed that much recently? I mean, yes, I did lose that source of intellectual solidarity I mentioned (and before that I had far fewer weird and semi-controversial opinions). But it could also easily just be that I’ve been depressed lately for whatever other reason, and that in such a state I’m more likely to notice negative things like criticism and disagreement.

Either way, I definitely do feel kind of isolated right now, and all of this is why I’m so glad that [Friend Who Agrees With Me About Basically Everything] is moving to Toronto soon. I think being able to talk with him more frequently could be helpful. Although, come to think of it, despite the fact that we agree on almost everything, our discussions almost invariably end up homing in on the few topics we disagree about. Granted, I enjoy that because our 99%-shared worldview tends to allow for unusually productive disagreements. But still, since I know he’s reading this: we should probably skype sometime and vent about how obvious atheism is, or how much reality is definitely objective, or something.

You know, just so I can hear an echo.

Compensatiated?

I always used to hate it when I would overcompensate for some error I made – overcompensation just seemed like something that unintelligent, under-reflective people did. So over time I developed a habit of undercompensating for errors.

Then I realized that my undercompensation was just meta-overcompensation.

Now I don’t know what to do.

On criticism

[Note: this post is unendorsed, for multiple reasons that I won’t go into right now. I’d take the post down, but that seems like cheating – I’d rather catalog and learn from my mistakes than hide them. In any case, I do still strongly believe in the last bit I wrote at the bottom: internet mob justice is a huge and growing problem, and we need to find some way of dealing with it.]

So, controversial opinion time I guess? In regards to the whole Tim Hunt/”girls in the lab” thing:

I don’t agree with Tim Hunt (obviously), but I believe in being charitable towards people even when I disagree with them. So what do I think he was trying to say?

Well, let’s look at it from his perspective. He was in charge of a lab for many decades. He probably saw an increase in the number of girls in his lab over that time. He probably also saw an increase in the number of people crying in response to criticisms he offered over that time. From his point of view, you can imagine how that might be a bad thing – he might feel as though he needs to be able to be brutally honest to budding scientists in order for them to properly develop as scientists. After all, science is all about putting your theories out there for criticism, and if you can’t handle that kind of criticism then you probably won’t be a good scientist.

So in general, he would be against people crying in response to criticism. Now, I want to emphasize here that I’ve seen both men and women cry in response to criticism – heck, I’ve cried in response to criticism many times before. But if we’re being honest, it doesn’t seem that unlikely that women cry more in response to criticism than men do, statistically speaking. Does anyone really disagree with that?

Now, does that mean women should be barred from the lab? Of course not! That’s completely ridiculous. But if crying due to criticism is bad, and more women cry than men, you could see how that would be relevant to Tim Hunt’s thought processes.

Personally, I think the lesson here should be that we need to do a much better job, societally speaking, of instilling a growth mindset in people whereby criticisms are not viewed as attacks on the self but rather as descriptions of one’s current (not permanent) set of abilities.

But I do find it more productive to view Tim Hunt’s comments from an “I would like to be able to criticize people” perspective than an “I hate women” perspective.

And as for the whole “girls distract boys in the lab” thing? Total bullshit. Tar and feather him all you want for that.

Edited to add: the above was originally posted as a facebook comment, which drew some critical comments (though fewer than I expected, actually). I responded to one, and I thought I’d tack that response on here as well:

Partly I wrote the above just to be contrarian, I admit. Plus I was bored on a Friday. But also: I’m super super against internet mob justice in general. There’s a common pattern today where a) someone will say something offensive or awful, b) the internet will collectively respond by descending on them with the fury of a thousand suns, and then c) said person’s life ends up being ruined. And yes, usually whatever they said *was* really offensive or awful – but not awful enough that they deserve to have their life ruined. I really don’t like the idea of handing over the decision on the kinds of opinions that are acceptable to express in our society to “whoever on the internet can band together enough people to ruin the opinion-expresser’s life”. They may have gotten it right in this case, but they won’t always get it right, and that’s kind of a frightening prospect. This seems like a big problem to me, which will only get worse in the future. And while I have no idea how to stop it, pushing charitable readings of awful statements seems like the only thing I can do right now.

Philosophical differences

[Followup to my last post on didacticism]

[Also, I’m not sure who the audience for this post is. For now let’s just say I’m writing it for myself?]


You know what’s scarier than having enemy soldiers at your border?

Having sleeper agents within your borders.

Enemy soldiers are malevolent, but they are at least visibly malevolent. You can see what they’re doing; you can fight back against them or set up defenses to stop them. Sleeper agents on the other hand are malevolent and invisible. They are a threat and you don’t know that they’re a threat. So when a sleeper agent decides that it’s time to wake up and smell the gunpowder, not only will you be unable to stop them, but they’ll be in a position to do far more damage than a lone soldier ever could. A single well-placed sleeper agent can take down an entire power grid, or bring a key supply route to a grinding halt, or – in the worst case – kill thousands with an act of terrorism, all without the slightest warning.

Okay, so imagine that your country is in wartime, and that a small group of vigilant citizens has uncovered an enemy sleeper cell in your city. They’ve shown you convincing evidence for the existence of the cell, and demonstrated that the cell is actively planning to commit some large-scale act of violence – perhaps not imminently, but certainly in the near-to-mid-future. Worse, the cell seems to have even more nefarious plots in the offing, possibly involving nuclear or biological weapons.

Now imagine that when you go to investigate further, you find to your surprise and frustration that no one seems to be particularly concerned about any of this. Oh sure, they acknowledge that in theory a sleeper cell could do some damage, and that the whole matter is probably worthy of further study. But by and large they just hear you out and then shrug and go about their day. And when you, alarmed, point out that this is not just a theory – that you have proof that a real sleeper cell is actually operating and making plans right now – they still remain remarkably blasé. You show them the evidence, but they either don’t find it convincing, or simply misunderstand it at a very basic level (“A wiretap? But sleeper agents use cellphones, and cellphones are wireless!”). Some people listen but dismiss the idea out of hand, claiming that sleeper cell attacks are “something that only happens in the movies”. Strangest of all, at least to your mind, are the people who acknowledge that the evidence is convincing, but say they still aren’t concerned because the cell isn’t planning to commit any acts of violence imminently, and therefore won’t be a threat for a while. In the end, all of your attempts to raise the alarm are to no avail, and you’re left feeling kind of doubly scared – scared first because you know the sleeper cell is out there, plotting some heinous act, and scared second because you know you won’t be able to convince anyone of that fact before it’s too late to do anything about it.

This is roughly how I feel about AI risk.

You see, I think artificial intelligence is probably the most significant existential threat facing humanity right now. This, to put it mildly, is something of a fringe position in most intellectual circles (although that’s becoming less and less true as time goes on), and I’ll grant that it sounds kind of absurd. But regardless of whether or not you think I’m right to be scared of AI, you can imagine how the fact that AI risk is really hard to explain would make me even more scared about it. Threats like nuclear war or an asteroid impact, while terrifying, at least have the virtue of being simple to understand – it’s not exactly hard to sell people on the notion that a 2km hunk of rock colliding with the planet might be a bad thing. As a result people are aware of these threats and take them (sort of) seriously, and various organizations are (sort of) taking steps to stop them.

AI is different, though. AI is more like the sleeper agents I described above – frighteningly invisible. The idea that AI could be a significant risk is not really on many people’s radar at the moment, and worse, it’s an idea that resists attempts to put it on more people’s radar, because it’s so bloody confusing a topic even at the best of times. Our civilization is effectively blind to this threat, and meanwhile AI research is making progress all the time. We’re on the Titanic steaming through the North Atlantic, unaware that there’s an iceberg out there with our name on it – and the captain is ordering full-speed ahead.

(That’s right, not one but two ominous metaphors. Can you see that I’m serious?)

But I’m getting ahead of myself. I should probably back up a bit and explain where I’m coming from.

Artificial intelligence has been in the news lately. In particular, various big names like Elon Musk, Bill Gates, and Stephen Hawking have all been sounding the alarm in regards to AI, describing it as the greatest threat that our species faces in the 21st century. They (and others) think it could spell the end of humanity – Musk said, “If I had to guess what our biggest existential threat is, it’s probably [AI]”, and Gates said, “I…don’t understand why some people are not concerned [about AI]”.

Of course, others are not so convinced – machine learning expert Andrew Ng said that “I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars”.

In this case I happen to agree with the Musks and Gates of the world – I think AI is a tremendous threat that we need to focus much of our attention on in the future. In fact I’ve thought this for several years, and I’m kind of glad that the big-name intellectuals are finally catching up.

Why do I think this? Well, that’s a complicated subject. It’s a topic I could probably spend a dozen blog posts on and still not get to the bottom of. And maybe I should spend those dozen-or-so blog posts on it at some point – it could be worth it. But for now I’m kind of left with this big inferential gap that I can’t easily cross. It would take a lot of background to explain my position in detail. So instead of talking about AI risk per se in this post, I thought I’d go off in a more meta direction – as I so often do – and talk about philosophical differences in general. I figured if I couldn’t make the case for AI being a threat, I could at least make the case for making the case for AI being a threat.

(If you’re still confused, and still wondering what the whole deal is with this AI risk thing, you can read a not-too-terrible popular introduction to the subject here, or check out Nick Bostrom’s TED Talk on the topic. Bostrom also has a bestselling book out called Superintelligence. The one sentence summary of the problem would be: how do we get a superintelligent entity to want what we want it to want?)

(Trust me, this is much much harder than it sounds)

So: why then am I so meta-concerned about AI risk? After all, based on the previous couple paragraphs it seems like the topic actually has pretty decent awareness: there are popular internet articles and TED talks and celebrity intellectual endorsements and even bestselling books! And it’s true, there’s no doubt that a ton of progress has been made lately. But we still have a very long way to go. If you had seen the same number of online discussions about AI that I’ve seen, you might share my despair. Such discussions are filled with replies that betray a fundamental misunderstanding of the problem at a very basic level. I constantly see people saying things like “Won’t the AI just figure out what we want?”, or “If the AI gets dangerous why can’t we just unplug it?”, or “The AI can’t have free will like humans, it just follows its programming”, or “lol so you’re scared of Skynet?”, or “Why not just program it to maximize happiness?”.

Having read a lot about AI, I find these misunderstandings frustrating. This is not that unusual, of course – pretty much any complex topic is going to have people misunderstanding it, and misunderstandings often frustrate me. But there is something unique about the confusions that surround AI, and that’s the extent to which the confusions are philosophical in nature.

Why philosophical? Well, artificial intelligence and philosophy might seem very distinct at first glance, but look closer and you’ll see that they’re connected to one another at a very deep level. Take almost any topic of interest to philosophers – free will, consciousness, epistemology, decision theory, metaethics – and you’ll find an AI researcher looking into the same questions. In fact I would go further and say that those AI researchers are usually doing a better job of approaching the questions. Daniel Dennett said that “AI makes philosophy honest”, and I think there’s a lot of truth to that idea. You can’t write fuzzy, ill-defined concepts into computer code. Thinking in terms of having to program something that actually works takes your head out of the philosophical clouds, and puts you in a mindset of actually answering questions.

All of which is well and good. But the problem with looking at philosophy through the lens of AI is that it’s a two-way street – it means that when you try to introduce someone to the concepts of AI and AI risk, they’re going to be hauling all of their philosophical baggage along with them.

And make no mistake, there’s a lot of baggage. Philosophy is a discipline that’s notorious for many things, but probably first among them is a lack of consensus (I wouldn’t be surprised if there’s not even a consensus among philosophers about how much consensus there is among philosophers). And the result of this lack of consensus has been a kind of grab-bag approach to philosophy among the general public – people see that even the experts are divided, and think that that means they can just choose whatever philosophical position they want.

Want. That’s the key word here. People treat philosophical beliefs not as things that are either true or false, but as choices – things to be selected based on their personal preferences, like picking out a new set of curtains. They say “I prefer to believe in a soul”, or “I don’t like the idea that we’re all just atoms moving around”. And why shouldn’t they say things like that? There’s no one to contradict them, no philosopher out there who can say “actually, we settled this question a while ago and here’s the answer”, because philosophy doesn’t settle things. It’s just not set up to do that. Of course, to be fair people seem to treat a lot of their non-philosophical beliefs as choices as well (which frustrates me to no end) but the problem is particularly pronounced in philosophy. And the result is that people wind up running around with a lot of bad philosophy in their heads.

(Oh, and if that last sentence bothered you, if you’d rather I said something less judgmental like “philosophy I disagree with” or “philosophy I don’t personally happen to hold”, well – the notion that there’s no such thing as bad philosophy is exactly the kind of bad philosophy I’m talking about)

(he said, only 80% seriously)

Anyway, I find this whole situation pretty concerning. Because if you had said to me that in order to convince people of the significance of the AI threat, all we had to do was explain to them some science, I would say: no problem. We can do that. Our society has gotten pretty good at explaining science; so far the Great Didactic Project has been far more successful than it had any right to be. We may not have gotten explaining science down to a science, but we’re at least making progress. I myself have been known to explain scientific concepts to people every now and again, and fancy myself not half-bad at it.

Philosophy, though? Different story. Explaining philosophy is really, really hard. It’s hard enough that when I encounter someone who has philosophical views I consider to be utterly wrong or deeply confused, I usually don’t even bother trying to explain myself – even if it’s someone I otherwise have a great deal of respect for! Instead I just disengage from the conversation. The times I’ve done otherwise, with a few notable exceptions, have only ended in frustration – there’s just too much of a gap to cross in one conversation. And up until now that hasn’t really bothered me. After all, if we’re being honest, most philosophical views that people hold aren’t that important in the grand scheme of things. People don’t really use their philosophical views to inform their actions – in fact, probably the main thing that people use philosophy for is to sound impressive at parties.

AI risk, though, has impressed upon me an urgency in regards to philosophy that I’ve never felt before. All of a sudden it’s important that everyone have sensible notions of free will or consciousness; all of a sudden I can’t let people get away with being utterly confused about metaethics.

All of a sudden, in other words, philosophy matters.

I’m not sure what to do about this. I mean, I guess I could just quit complaining, buckle down, and do the hard work of getting better at explaining philosophy. It’s difficult, sure, but it’s not infinitely difficult. I could write blog posts and talk to people at parties, and see what works and what doesn’t, and maybe gradually start changing a few people’s minds. But this would be a long and difficult process, and in the end I’d probably only be able to affect – what, a few dozen people? A hundred?

And it would be frustrating. Arguments about philosophy are so hard precisely because the questions being debated are foundational. Philosophical beliefs form the bedrock upon which all other beliefs are built; they are the premises from which all arguments start. As such it’s hard enough to even notice that they’re there, let alone begin to question them. And when you do notice them, they often seem too self-evident to be worth stating.

Take math, for example – do you think the number 5 exists, as a number?

Yes? Okay, how about 700? 3 billion? Do you think it’s obvious that numbers just keep existing, even when they get really big?

Well, guess what – some philosophers debate this!

It’s actually surprisingly hard to find an uncontroversial position in philosophy. Pretty much everything is debated. And of course this usually doesn’t matter – you don’t need philosophy to fill out a tax return or drive the kids to school, after all. But when you hold some foundational beliefs that seem self-evident, and you’re in a discussion with someone else who holds different foundational beliefs, which they also think are self-evident, problems start to arise. Philosophical debates usually consist of little more than two people talking past one another, with each wondering how the other could be so stupid as to not understand the sheer obviousness of what they’re saying. And the annoying thing is, both participants are correct – in their own framework, their positions probably are obvious. The problem is, we don’t all share the same framework, and in a setting like that frustration is the default, not the exception.

This is not to say that all efforts to discuss philosophy are doomed, of course. People do sometimes have productive philosophical discussions, and the odd person even manages to change their mind, occasionally. But to do this takes a lot of effort. And when I say a lot of effort, I mean a lot of effort. To make progress philosophically you have to be willing to adopt a kind of extreme epistemic humility, where your intuitions count for very little. In fact, far from treating your intuitions as unquestionable givens, as most people do, you need to be treating them as things to be carefully examined and scrutinized with acute skepticism and even wariness. Your reaction to someone having a differing intuition from you should not be “I’m right and they’re wrong”, but rather “Huh, where does my intuition come from? Is it just a featureless feeling or can I break it down further and explain it to other people? Does it accord with my other intuitions? Why does person X have a different intuition, anyway?” And most importantly, you should be asking “Do I endorse or reject this intuition?”. In fact, you could probably say that the whole history of philosophy has been little more than an attempt by people to attain reflective equilibrium among their different intuitions – which of course can’t happen without the willingness to discard certain intuitions along the way when they conflict with others.

I guess what I’m trying to say is: when you’re discussing philosophy with someone and you have a disagreement, your foremost goal should be to try to find out exactly where your intuitions differ. And once you identify that, from there the immediate next step should be to zoom in on your intuitions – to figure out the source and content of the intuition as much as possible. Intuitions aren’t blank structureless feelings, as much as it might seem like they are. With enough introspection intuitions can be explicated and elucidated upon, and described in some detail. They can even be passed on to other people, assuming at least some kind of basic common epistemological framework, which I do think all humans share (yes, even objective-reality-denying postmodernists).

Anyway, this whole concept of zooming in on intuitions seems like an important one to me, and one that hasn’t been emphasized enough in the intellectual circles I travel in. When someone doesn’t agree with some basic foundational belief that you have, you can’t just throw up your hands in despair – you have to persevere and figure out why they don’t agree. And this takes effort, which most people aren’t willing to expend when they already see their debate opponent as someone who’s being willfully stupid anyway. But – needless to say – no one thinks of their positions as being a result of willful stupidity. Pretty much everyone holds beliefs that seem obvious within the framework of their own worldview. So if you want to change someone’s mind with respect to some philosophical question or another, you’re going to have to dig deep and engage with their worldview. And this is a difficult thing to do.

Hence, the philosophical quagmire that we find our society to be in.

It strikes me that improving our ability to explain and discuss philosophy amongst one another should be of paramount importance to most intellectually serious people. This applies to AI risk, of course, but also to many everyday topics that we all discuss: feminism, geopolitics, environmentalism, what have you – pretty much everything we talk about grounds out to philosophy eventually, if you go deep enough or meta enough. And to the extent that we can’t discuss philosophy productively right now, we can’t make progress on many of these important issues.

I think philosophers should – to some extent – be ashamed of the state of their field right now. When you compare philosophy to science it’s clear that science has made great strides in explaining the contents of its findings to the general public, whereas philosophy has not. Philosophers seem to treat their field as being almost inconsequential, as if, at some level, whatever they conclude won’t matter. But this clearly isn’t true – we need vastly improved discussion norms when it comes to philosophy, and we need far greater effort on the part of philosophers when it comes to explaining philosophy, and we need these things right now. Regardless of what you think about AI, the 21st century will clearly be fraught with difficult philosophical problems – from genetic engineering to the ethical treatment of animals to the problem of what to do about global poverty, it’s obvious that we will soon need philosophical answers, not just philosophical questions. Improvements in technology mean improvements in capability, and that means that things which were once merely thought experiments will be lifted into the realm of real experiments.

I think the problem that humanity faces in the 21st century is an unprecedented one. We’re faced with the task of actually solving philosophy, not just doing philosophy. And if I’m right about AI, then we have exactly one try to get it right. If we don’t, well…

Well, then the fate of humanity may literally hang in the balance.

Didacticism and the Ratchet of Progress

So my last two posts have focused on an idiosyncratic (and possibly nonsensical) view of creativity that I’ve developed over the past month or so. Under this picture, the intellect is divided into two categories: P-intelligence, which is the brain’s ability to perform simple procedural tasks, and NP-intelligence, which is the brain’s ability to creatively solve difficult problems. So far I’ve talked about how P- and NP-intelligence might vary independently in the brain, possibly giving rise to various personality differences, and how NP-intelligence is immune to introspection, which lies at the heart of what makes us label something as “creative”. In this post I’d like to zoom out a bit and talk about how the notion of P- and NP-intelligence can be applied on a larger, societal scale.

Before we can do that, though, we have to zoom in – zoom in to the brain, and look at how P- and NP-intelligence work together in it. I think something I didn’t emphasize quite enough in the previous two posts is the degree to which the two kinds of intelligence complement one another. P- and NP-intelligence are very different, but they form an intellectual team of sorts – they each handle different jobs in the brain, and together they allow us to bootstrap ourselves up to higher and higher levels of understanding.

What do I mean by bootstrapping? Well, I talked before about how the inner workings of NP-intelligence will always be opaque to your conscious self. Whenever you have a new thought or idea, it just seems to “bubble up” into consciousness, as if from nowhere. So there’s a sense in which NP-intelligence is “cut off” from consciousness – it’s behind a kind of introspective barrier.

By the same token, though, I think you could say that the reverse is also true – that it’s consciousness that is cut off from NP-intelligence. I picture NP-intelligence as essentially operating in the dark, metaphorically speaking. It goes through life stumbling about in what is, in effect, uncharted territory, trying to make sense of things that are at the limit of – or beyond – its ability to make sense of. And as a result, when your NP-intelligence generates some new idea or thought, it does so – well, not randomly, but…blindly.

I think that’s a good way of putting it: NP-intelligence is blind. When trying to solve some problem, your NP-intelligence has no idea in advance if the solution it will come up with will be a good one. How could it? We’re stipulating that the problems it deals with are too hard to immediately see the answer to. So your NP-intelligence is essentially reduced to educated guessing: How about this? Does this work? Okay, what about this? It offers up potential solutions, not knowing if they are correct or not.

And what exactly is it offering up these solutions to? Why, P-intelligence of course! P-intelligence may not be very bright – it could never actually solve the kind of problems NP-intelligence deals with – but it can certainly handle solution-checking. After all, solution-checking is easy. So when your NP-intelligence tosses up some half-baked idea that may be complete genius or may be utter stupidity (it doesn’t know), it’s your P-intelligence that evaluates the idea: No, that’s no good. Nope, total garbage. Yes, that works! Of course, it’s probably not just one-way communication – there’s probably also some interplay, some back and forth between the two: Yes, you’re getting better. No, not quite, but that’s close. Almost there, here’s what’s still wrong. By and large, though, P- and NP-intelligence form a cooperative duo in the brain in which they each stick to their own specialized niche: NP-intelligence is the suggester of ideas, and P-intelligence is the arbiter of suggestions.

That, in a nutshell, is my view of how we manage to make ourselves smarter over time. Your NP-intelligence is essentially an undiscriminating brainstormer, throwing everything it can think of at the wall to see what sticks, and your P-intelligence is an overseer that looks over what’s been thrown and ensures that only good things stick. Together they act as a kind of ratchet that lets good ideas accumulate in the brain and bad ones be forgotten.

Of course, saying that NP-intelligence acts “indiscriminately” is probably overstating things – that would amount to saying that NP-intelligence acts randomly, which is almost certainly not true. After all, while the above “ratchet” scheme would technically work with a random idea generator, in practice it would be far too slow to account for the rate at which humans manage to accumulate knowledge – it would probably take eons for a brain to “randomly” suggest something like General Relativity. No, NP-intelligence does not operate randomly, even if it does operate blindly – it suggests possible solutions using incredibly advanced heuristics that have evolved over millions of years, heuristics that are very much beyond our current understanding. And from the many ideas that these advanced heuristics generate, P-intelligence (faced only with the relatively simple task of identifying good ideas) is able to select the very best of them. The result? Whenever our NP-intelligence manages to cough up some brilliant new thought, our P-intelligence latches onto it, and uses it to haul ourselves up to a new rung on the ladder of understanding – which gives our NP-intelligence a new baseline from which to operate, allowing the process to begin all over again. Thus the ratchet turns, but only ever forward.
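(If it helps to see the ratchet spelled out, here is a toy sketch in code. Nothing about it is meant as a model of the brain; the names are just made-up stand-ins: a propose function that blindly tweaks the current answer, and a check function that can cheaply score any candidate, with only improvements allowed to stick.)

```python
import random

def propose(current, rng):
    # Blind "NP" stand-in: tweak the current answer without knowing whether the tweak helps.
    candidate = current[:]
    i = rng.randrange(len(candidate))
    candidate[i] += rng.choice([-1, 1])
    return candidate

def check(candidate, target):
    # Cheap "P" stand-in: scoring any given candidate is easy (higher is better).
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def ratchet(target, steps=20000, seed=0):
    rng = random.Random(seed)
    best = [0] * len(target)
    best_score = check(best, target)
    for _ in range(steps):
        candidate = propose(best, rng)
        score = check(candidate, target)
        if score > best_score:   # only the checker decides what sticks, so quality never moves backward
            best, best_score = candidate, score
    return best

print(ratchet([3, -7, 12, 5]))   # blind tweaks plus cheap checks converge on [3, -7, 12, 5]
```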

Now, there’s a sense in which what I’m saying is nothing new. After all, people have long recognized that “guess and check” is an important method of solving problems. But what this picture is saying is that at a fundamental level, guessing and checking is all there is. The only way problems ever get solved is by one heuristic or another in the brain suggesting a solution that it doesn’t know is correct, and then by another part of the brain taking advantage of the inherent (relative) easiness of checking to see if that solution is any good. There’s no other way it could work, really – unless you already know the answer, the creative leaps you make have to be blind. And what this suggests in turn is that the only reason people are able to learn and grow at all, indeed the only reason that humanity has been able to make the progress it has, is because of one tiny little fact: that it’s easier to verify a solution than it is to generate it. That’s all. That one little fact lies at the heart of everything – it’s what allows you to recognize good ideas outside your own current sphere of understanding, which is what allows that ratchet of progress to turn, which is what eventually gives you…the entire modern world, and all of its wonders. The Large Hadron Collider. Citizen Kane. Dollar Drink Days at McDonald’s. All of them stemming from that one little quirk of math.
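To see that quirk of math in its starkest form, consider a toy example of my own choosing (not something from the earlier posts): subset sum, a classic hard-to-solve, easy-to-check problem. Verifying a claimed solution takes one quick pass, while generating a solution from scratch can mean grinding through all 2^n possibilities.

```python
from itertools import combinations

def verify(numbers, target, subset_indices):
    # Checking a claimed solution is easy: one linear pass over the certificate.
    return (all(0 <= i < len(numbers) for i in subset_indices)
            and sum(numbers[i] for i in subset_indices) == target)

def solve(numbers, target):
    # Generating a solution is the hard part: brute force over all 2^n subsets in the worst case.
    for k in range(len(numbers) + 1):
        for subset_indices in combinations(range(len(numbers)), k):
            if verify(numbers, target, subset_indices):
                return subset_indices
    return None

nums = [3, 34, 4, 12, 5, 2]
answer = solve(nums, 9)                  # exponential search in general
print(answer, verify(nums, 9, answer))   # the check itself stays cheap
```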

(Incidentally, when discussing artificial intelligence I often see people say things like, “Superintelligent AI is impossible. How can we make something that’s smarter than ourselves?” The above is my answer to those people. Making something smarter than yourself isn’t paradoxical – we make ourselves smarter every day when we learn. All it takes is a ratchet)

Now, I started off this post by saying I wanted to zoom out and look at P- and NP-intelligence on a societal level, and then proceeded to spend like 10 paragraphs doing the exact opposite of that. But we seem to have arrived back on topic in a somewhat natural way, and I’m going to pretend that was intentional. So: let’s talk about civilizational progress.

Civilization, clearly, has made a great deal of progress over the past few thousand years or so. This is most obvious in the case of science, but it also applies to things like art and morality and drink deals. And as I alluded to above, I think the process driving this progress is exactly the same as the one that drives individual progress: the ratchet of P- and NP-intelligence. Society progresses because people are continually tossing up new ideas (both good and bad) for evaluation, and because for any given idea, people can easily check it using their P-intelligence, and accept it only if it turns out to be a good idea. It’s the same guess-and-check procedure that I described above for individuals, except operating on a much wider scale.

(Of course, that the two processes would turn out similar is probably not all that surprising, given that society is made up of individuals)

But the interesting thing about civilizational progress, at least to my mind anyway, is the extent to which it doesn’t just consist of individuals making individual progress. One can imagine, in principle at least, a world in which all civilizational progress was due to people independently having insights and getting smarter on their own. In such a world, everyone would still be learning and growing (albeit likely at very different rates), and so humanity’s overall average understanding level would still be going up. But it would be a very different world from the one we live in now. In such a world, ideas and concepts would have to be independently reinvented by every member of the population before they could be made use of. If you wanted to use a phone you would have to be Alexander Graham Bell; if you wanted to do calculus, you would have to be Newton (or at the very least Leibniz).

Thankfully, our world does not work that way – in our world, ideas only have to be generated once. And the reason for this is the same tiny little fact I highlighted above, that beautiful asymmetry between guessing and checking. The fact that checking solutions is easy means that the second an idea has even been considered in someone’s head, the hard part is already over. Once that happens, once the idea has been lifted out of the vast, vast space of possible ideas we could be considering and promoted to our attention, then it just becomes a matter of evaluating it using P-intelligence – which other people can potentially do just as easily as the idea-generator. In other words, ideas are portable – when you come up with some new thought and you want to share it with someone else, they can understand it even if they couldn’t have thought of it themselves. So not only does every person have a ratchet of understanding, but that ratchet carries with it the potential to lift up all of humanity, and not just themselves.

Of course, while this means that humanity is able to generate a truly prodigious number of good ideas, and expand its sphere of knowledge at an almost terrifying rate, the flip side is that it’s pretty much impossible for any one person to keep up with all that new knowledge. Literally impossible, in fact, if you want to keep up with everything – scientific papers alone come out at a rate far faster than you could ever read them, and there are over 300 hours of video uploaded to youtube every minute. But even if you just want to learn the basics of a few key subjects, and keep abreast of only the most important new theories and ideas, you’re still going to have a very tough time of it.

Luckily, there are two things working in your favour. The first is just what I’ve been talking about for this whole post – P vs NP-intelligence, and the fact that it’s much easier to understand someone else’s idea than it is to come up with it yourself. Of course, easier doesn’t necessarily mean easy – you still have to learn calculus, even if you don’t have to invent it – but this is what gives you a fighting chance. Our whole school system is essentially an attempt to take people starting from zero knowledge and bring them up to the frontiers of our understanding, and it’s no coincidence that it operates mostly based on P-intelligence. Oh sure, there are a few exceptions in the form of “investigative learning” activities – which attempt to “guide” students toward making relatively small creative leaps – but for the most part, school consists of teachers explaining things. And it pretty much has to be that way – unless you get really good at guiding, it’s going to take far too long for students to generate on their own everything that they need to learn. After all, it took our best minds centuries to work all of this “science” stuff out. How’s the average kid supposed to do it in 12 years?

So that’s the first thing working in your favour – for the most part when you’re trying to learn, you “merely” have to make use of your P-intelligence to understand the subject matter, which in principle allows you to make progress much faster than the people who originally came up with it all.

The second thing working in your favour, and the reason I actually started writing this post in the first place (which at 2100 words before a mention is probably a record for me) is didacticism.

So, thus far I’ve portrayed the whole process of civilizational progress in a somewhat…overly simplistic manner. I’ve painted a picture in which the second an individual comes up with a brilliant new idea (or theory/insight/solution/whatever), they and everyone else in the world are instantly able to see that it’s a good one. I’ve sort of implied (if not stated) that checking solutions is not just easier than generating them, it’s actually fundamentally easy, and that everyone can do it. And to the extent that I’ve implied these things, I’ve probably overstated my case (but hey, in my defense it’s really easy to get swept up in an idea when you’re writing about it). I think I actually put things better in my first post about this whole mess – there I described P-intelligence as something that develops as you get older, and that can vary from person to person. And from that perspective it’s easy to see how, depending on your level of P-intelligence and the complexity of the idea in question, “checking” it could be anything from trivially easy to practically impossible. It all depends on whether the idea falls within the scope of your P-intelligence, or just at its limits, or well beyond it.

(Mind you, the important stuff I wrote about above still goes through regardless. As long as checking is easier than guessing – even if it’s not easy in an absolute sense – then the ratchet can still turn)

Anyway, so what does this all have to do with didacticism? Well, I view didacticism as a heroic, noble, and thus far bizarrely successful attempt to take humanity’s best ideas and bring them within the scope of more and more people’s P-intelligence. The longer an idea has been around the better we get at explaining it, and the more people there are who can understand it.

My view of that process is something like the following: when someone first comes up with a genuinely new idea (let’s say we’re talking about a physicist and they’ve come up with a new theory, since that’s what I’m most familiar with), initially there are going to be very few people who can understand it. Maybe a few others working in the same sub-sub-field of physics can figure it out, but probably not anyone else. So those few physicists get to work understanding the theory, and eventually after some work they’re able to better explain it to a wider audience – so now maybe everyone in their sub-field understands it. Then all those physicists get to work further clarifying the theory, and further working out the best way to explain it, and eventually everyone in that entire field of physics is able to understand it. And it’s around that point, assuming the theory is important enough and enough results have accumulated around it, that textbooks start getting written and graduate courses start being taught.

That’s an important turning point. When you’ve reached the course and textbook stage, that means you’ve gotten the theory to the point where it can be reliably learned by students – you’ve managed to mass-produce the teaching of the theory, at least to some extent. And from there it just keeps going – people come up with new teaching tricks, or better ways of looking at the theory, and it gets pushed down to upper-year undergraduate courses, and then possibly to lower-year undergraduate courses, and eventually (depending on how fundamentally complicated the theory is, and how well it fits into the curriculum) maybe even to high school. At every step along the way of this process, the wheel of didacticism turns, our explanations get better and better, and the science trickles down.

This isn’t just all hypothetical, mind you – you can actually see this process happening. Take my own research, which is on photonic crystals. Photonic crystals were invented in 1987, the first textbook on them was published in 1995, and just two years ago I sat in on a special photonic crystal graduate course, probably one of the first. So the didactic process is well on its way for photonic crystals – in fact, the only thing holding the subject back right now is that it’s of relatively narrow interest to physicists. If photonic crystals start being used in more applications, and gain importance to a wider range of physicists, then I would be shocked if they weren’t pushed down to upper-year undergraduate courses. They’re certainly no more complicated than anything else that’s taught at that level.

Or, if you’d like a more well-known example, take Special Relativity. Special Relativity is a notoriously counterintuitive and confusing subject in physics, and students always have trouble learning it. However, for such a bewildering, mind-bending theory it’s actually quite simple at heart – and so it stands the most to gain from good teaching and the didactic process in general. This is reflected in the courses it gets taught in. I myself am a TA in a course that teaches Special Relativity, and it’s not an upper year course like you might expect – it’s actually a first year physics course. And not only that, it’s actually a first year physics course for life-science majors. The students in this course are bright kids, to be sure, but physics is not their specialty and most of them are only taking the course because they need it to get into med school. And yet every year we teach them Special Relativity, and it’s actually done at a far less superficial level than you might expect. Granted, I’m not sure how much they get out of it – but the fact that it keeps getting taught in the course year after year puts a lower bound on how effective the teaching must be.

Think about what that means – it means that didacticism, in a little over a hundred years, has managed to take Special Relativity from “literally only the smartest person in the world understands this” to “eh, let’s teach it to some 18-year-olds who don’t even really like physics”. It’s a really powerful force, in other words.

And not only that, but it’s actually even more powerful than it seems. The process I described above, of a theory gradually working its way down the scholastic totem pole, is only the most obvious kind of didacticism. There’s also a much subtler process – call it implicit didacticism – whereby theories manage to somehow seep into the broader cultural awareness of a society, even among those who aren’t explicitly taught the theory. A classic example of this is how, after Newton formulated his famous laws of motion, the idea of a clockwork universe suddenly gained in popularity. Of course, no doubt some people who proposed the clockwork universe idea knew of Newton’s laws and were explicitly drawing inspiration from them – but I think it’s also very likely that many proponents of the clockwork universe were ignorant of the laws themselves. Instead, the laws caused a shift in the way people thought and talked that made a mechanistic universe seem more obvious. In fact, I know this sort of thing happened, because I myself “came up” with the clockwork universe idea when I was only 14 or so, before I had taken any physics courses or knew what Newton’s laws were. And I take no credit for “independently” inventing the idea, of course, because in some sense I had already been exposed to it and had absorbed it by osmosis – it was already out there, had already altered our language in imperceptible ways that made it easier to “invent”. Science permeates our culture and affects it in very nonobvious ways, and it’s hard to overestimate how much of an effect this has on our thinking. Steven Pinker talks about much the same idea in The Better Angels of Our Nature while describing a possible cause of the Flynn effect (the secular rise in IQ scores in most developed nations over the past century or so):

And, Flynn suggests, the mindset of science trickled down to everyday discourse in the form of shorthand abstractions. A shorthand abstraction is a hard-won tool of technical analysis that, once grasped, allows people to effortlessly manipulate abstract relationships. Anyone capable of reading this book, even without training in science or philosophy, has probably assimilated hundreds of these abstractions from casual reading, conversation, and exposure to the media, including proportional, percentage, correlation, causation, […] and cost-benefit analysis. Yet each of them—even a concept as second-nature to us as percentage—at one time trickled down from the academy and other highbrow sources and increased in popularity in printed usage over the course of the 20th century.

-Steven Pinker, The Better Angels of Our Nature, p. 889

I have no idea if he’s right about the Flynn Effect, but what’s undoubtedly true is that right now we live in the most scientifically literate society to have ever existed. The average person knows far more about science (and more importantly, knows far more about good thinking techniques) than at any point in history. And if that notion seems wrong to you, if you’re more prone to associating modern day society with reality TV and dumbed-down pop music and people using #txtspeak, then…well, maybe you should raise your opinion of modern day society a little. A hundred years ago you wouldn’t have been able to just say “correlation does not imply causation”, or “you’ll hit diminishing returns” and assume that everyone would know what you were talking about. Heck, you wouldn’t have been able to read a blog post like this one, written by a complete layperson, even if blogging had existed back then.

All of which is to say: didacticism is a pretty marvelous thing, and we all owe the teachers and explainers of the world a debt of gratitude for what they do. So I say to them all now: thank you! This blog couldn’t exist without you.

When I put up this site I actually wrote at the outset that I didn’t want it to be a didactic blog. And while I do still hold that opinion, I’m much less certain of it than I used to be – I see now the value of didacticism in a way that I didn’t before. So I could see myself writing popular articles in the future, if not here then perhaps somewhere else. In some ways it’s hard, thankless work, but it really does bring great value to the world.

And hey, I know just the subject to write about…

[Up next: AI risk and how it pertains to didacticism. I was just going to include it here, but this piece was already getting pretty long, and it seems more self-contained this way anyway. So you’ll have to wait at least a few more days to find out why I think we’re all doomed]

Science: More of an Art than a Science

[In “which” I use “scare” quotes a “lot”]

My last post turned out to be a fruitful one, at least in terms of giving me new things to think about. I find that the mark of a good thought is whether or not it begets other thoughts, and this one was a veritable begattling gun.

Recall the thesis from last time: people have two kinds of intelligence, P-intelligence and NP-intelligence (I’m not super crazy about the names by the way, but they’ll do for now). P-intelligence is the ability to recognize a good solution to a problem when it’s presented to you (where the “problem” could be anything from coming up with a math proof to writing an emotionally moving poem), and NP-intelligence is the ability to actually come up with the solution in the first place. Since it’s obvious that it’s much easier to verify solutions as correct than it is to generate them (where “obvious” in this case means “pretty much impossible to prove, but hey, we’ll just assume it anyway”), it’s clear that one’s P-intelligence will always be “ahead” of one’s NP-intelligence: the level of quality you can reliably recognize will always exceed the level you can reliably generate. In that post I then went on to speculate that the gap between P-intelligence and NP-intelligence might vary from person to person, and even within a person from one age to another, and that this gap might explain some behavioural patterns that show up in humans.

(By the way, I should probably point out here that I totally bungled the mathematical definition of NP problems in the last post. NP problems are not those that are hard to solve and easy to verify – they are simply those that are easy to verify, full stop. Thus a problem in P also has to be in NP, since being easy to solve guarantees being easy to verify. The hardest problems in the NP class (called NP-complete problems) do – probably – have the character I described, of being difficult to solve and easy to verify, and so I’m going to retroactively claim that those were what I was talking about when I described NP problems last time. Still, many thanks to the facebook commenter who pointed this out.)
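(And for anyone who wants that correction stated precisely, here is the standard textbook formulation, in notation that is mine rather than the original post’s: membership in NP is defined purely by the existence of an efficient verifier, and P sits inside NP because an efficient solver can serve as its own verifier.)

```latex
% A language L is in NP iff there exist a polynomial-time verifier V and a polynomial p such that:
x \in L \iff \exists\, c \ \text{with}\ |c| \le p(|x|) \ \text{and}\ V(x, c) = 1

% P \subseteq NP: if M decides L in polynomial time, take V(x, c) := M(x), ignoring the certificate c.
```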

Now, before I go any further, I should probably try to clarify what I mean when I talk about P-intelligence being “ahead” of NP-intelligence, because I managed to confuse myself several times while writing these posts – and if even the author is confused about what they’re writing, what chance does the reader have? So here’s my view of things: P-intelligence is actually the “dumber” of the two intelligences. It’s limited to simple, algorithmic tasks – things like checking solutions, yes, but also things like applying a formula you learned in calculus, or running through some procedure at work that you know off by heart. Plug’n’chug, in other words, for my physicist friends. So in retrospect I probably shouldn’t have portrayed P-intelligence as merely a “verifier” – P-intelligence essentially handles anything in the brain that’s been “taskified”, or reduced to a relatively simple algorithm, and one example of this is solution verification. NP-intelligence, on the other hand, is the smartest part of yourself, and handles the creative side of things. The strokes of genius and flashes of insight you sometimes get on oh-so-rare occasions? That’s NP-intelligence. In a sense, NP-intelligence is whatever you can’t taskify in the brain.

All of which is well and good. But if NP-intelligence is so smart, why then do I talk about P-intelligence being “ahead” of it? That’s what was causing the confusion for me – half the time I seemed to be thinking of P-intelligence as smarter than NP-intelligence, and half the time it was the other way around (which is never a good sign when you’re trying to flesh out a new concept in your mind). Eventually I managed to clarify things though, at least to my own satisfaction. Here’s what I would say on the matter: NP-intelligence is definitely smarter than P-intelligence, in that it can solve much more difficult problems. However, usually we’re not interested in a direct comparison of the two intelligences and their ability to solve problems. Usually what we’re comparing is the ability of NP-intelligence to generate a solution to a given problem, and the ability of P-intelligence to recognize a solution to that same problem. And for a given problem, solution recognition is of course much easier than solution generation. That’s why we can talk about P-intelligence being ahead of NP-intelligence – it faces a much easier task than NP-intelligence for any set level of problem difficulty, and so it can handle more difficult problems despite being “dumber”.

Now, hopefully that brings you up to at least the level of clarity I have in my own head (which, realistically, is probably not all that high). Moving forward, though, I’d like to de-emphasize the definition of P-intelligence as a solution-checker – it was useful in the last post, but I’ll be going in a somewhat different direction from here on out. Better now to think of P-intelligence as the part of your brain that handles simple, algorithmic tasks (one of which is solution checking) – in fact, if you like you can think of P-intelligence as standing for “Procedural Intelligence”, and NP-intelligence as standing for “Non-Procedural Intelligence”. That captures the idea pretty well.

Okay, so recaps and retcons aside, I’m pretty sure I was trying to write a blog post or something. As I was saying at the outset, this whole idea of P- and NP-intelligence seeded many new thoughts for me, some more profound than others. And first among them, in the “not-very-profound-but-still-edifying” category, was a clarified notion of creativity in art and science.

We’ve all heard the phrase, “It’s more of an art than a science”. It’s usually used to distinguish “intuitive” fields like literature and the arts from “logical” fields like math and science. The idea seems to be that creating a great work of art requires an ineffable, creative “spark” that is unique to humans and is (even in principle) beyond our understanding, whereas doing science requires merely the logical, “mechanical” operation of thought. There are countless fictional tropes that further the idea: the “rational” Spock being outwitted by the “emotional” Kirk, the “logical” robot losing some game to a “creative” human who can think outside the box, and so on and so forth.

Anyway, needless to say I’ve never liked the phrase, but until now it’s always been a vague sort of dislike that I lacked the vocabulary to really expand upon. Now, with P- and NP-intelligence added to my concept-set, I can finally explain myself, and it turns out that I have not just one but two problems with the phrase as it stands.

First: “ineffable” doesn’t mean “magic”. When people say that some skill is an art rather than a science, they usually mean two things: one, it’s a creative skill (you can use it to generate new, original works), and two, it’s immune to introspection (you can’t just write down exactly how the skill works and thereby pass it on to someone else, because you yourself don’t know how it works). It’s this immunity to introspection that gets right down to the heart of the matter, I think: a skill is an art if you can’t verbalize, explicitly, how it works. And in that sense, saying that some skill is an art essentially amounts to saying that it requires NP-intelligence. In fact, the grouping of skills as “arts” and “sciences” actually corresponds very neatly to NP- and P-intelligence as I conceive of them. People frequently say “We’ve got it down to a science”, and what they mean is that they’ve figured out the skill to such an extent that they can say explicitly how it works – in effect, they’ve developed a procedure for implementing the skill, and so it falls under the purview of P-intelligence.

Here’s the problem, though. Yes, a skill that requires NP-intelligence (an “art”) will always be ineffable – that is, will always seem like a black box to the person who possesses the skill. If that weren’t the case – if the skill didn’t seem opaque to the person in question – then they would understand it at a conscious level and could presumably explicate exactly how it works, and then it would be an example of P-intelligence rather than NP-intelligence. So it seems as though creativity is doomed to always carry an element of mysteriousness with it, essentially by definition. But just because something seems mysterious doesn’t mean it is mysterious, and something being beyond your understanding is not the same thing as it being magic. Creativity, whatever it is, is implemented in the brain; it does not rely on some dualistic supernatural “spark” that transcends the physical. Just like everything else in the mind, creativity is an algorithm – it may be an algorithm that we lack introspective access to, but it’s an algorithm nonetheless. So there is zero reason to suspect that we couldn’t program a robot to be creative to the same extent that humans are creative. The leap from “I don’t understand how something works” to “It’s impossible to understand how this thing works” is a huge one, and it’s one that people are far too quick to make.

So that’s my first problem with the phrase “It’s more of an art than a science” – it elevates art (and by extension, creativity) to a category that’s fundamentally distinct from the “ordinary” workings of the brain, and I reject that distinction. Although now that I think about it, what I just described isn’t really a problem with the phrase per se – the phrase is actually pretty innocuous in that regard. It’s more of a problem with the set of connotations that have agglomerated around the word “creativity” in our culture, of which the phrase is kind of just a symptom.

Anyway, my second problem with the phrase actually pertains more to the phrase itself, so let’s move on to that one.

My second problem with the phrase is this: in choosing a skill to represent procedural, non-creative knowledge in contrast to art, you chose science? Seriously, of all things, SCIENCE? REALLY?

Doing science is the most creative, least procedural skill I can think of. Seriously, I’d be hard-pressed to come up with a way of getting it more backwards than describing science as non-creative. Scientists operate at the boundaries of humanity’s knowledge. They spend their days and nights trying to come up with thoughts that no one else has ever had before. They are most definitely not operating based on some set procedure – if they were, science would have been solved centuries ago. So if you want to say that they are not doing creative work, then I literally have no idea what the word creative means.

I suspect what’s going on here is that people have a warped, superficial view of what creativity is. They’ve internalized the idea that creativity belongs only to the domain of things that are “emotional” or “artistic” or “colourful”, whereas doing science only involves the manipulation of mere numbers and the use of cold, unfeeling “logic” – there’s no splattering of paint involved, so how can it be creative?

I hope that by now I’ve written enough that you can see why this idea is nonsense. If not, though, I’ll spell it out: creativity doesn’t care what your subject matter is. It doesn’t care if you’re working with reds and blues or with ones and zeros. It doesn’t care if you’re wearing a painter’s smock or a chemist’s lab coat. It doesn’t care if your tools are a pen and inkwell or an atomic force microscope. All that creativity cares about is originality: the generation of new ideas or thoughts or ways of doing something. Creativity is what happens when we employ the full extent of our intellect in solving a problem that is at the very limits of our ability to solve. Creativity is about NP-intelligence and non-proceduralizability and immunity to introspection; it’s not about some otherworldly spark of magical pixie dust. No, the demystified view of creativity is one that unifies the Sistine Chapel and Quantum Electrodynamics under one great heading of genius – the thing that separates humanity from everything else in the universe, the thing that makes Homo sapiens unique.

Now, I suppose you could try to rescue the phrase. When people say “It’s more of an art than a science”, or perhaps more to the point, “We’ve got it down to a science”, what they really mean by “science” is “settled science”. They’re not talking about the actual process of doing science; they’re talking about things that scientists have already figured out, and that everyone else is trying to learn. There’s a big difference between learning something and discovering something, after all. And in that sense, what most people learn in high school science class is pretty algorithmic – certainly they’re not learning how to discover something new. Usually they’re just learning how to apply things that someone else has already discovered.

Still, though. Still I don’t like the phrase. I don’t like the idea of people associating science with a body of knowledge rather than a process of discovery. I don’t like the idea of people viewing science as non-creative when it’s possibly the most creative thing that humanity does. And most of all I don’t like the idea of logic being the opposite of creativity.

No, I’d rather just say that science is science, and art is art, and that they’re both creative – and that they’re both genius, and that when you get right down to it, they’re both pretty special.

[Up next: more on P- and NP-intelligence, and how they pertain to AI-risk]