The Cost of Humility

“Blessed are the meek: for they shall inherit the earth.”

– Matthew 5:5

“What do you care what other people think?”

– Richard Feynman

I.

I’ve been thinking a lot about validation lately – how we get it, why we crave it. My thoughts on the matter are still in flux, but the one thing I keep coming back to is:

Man, we sure do need a lot of it.

Seriously, the average person spends a truly prodigious amount of time and effort seeking out reassurance that they’re well-liked and popular. The rise of social media has certainly made this more obvious – look at the constant search for Likes, Retweets, and Shares – but by no means did it originate the phenomenon. We’ve always been obsessed with our image, our reputation, and our standing among our peers. And we don’t just need validation – we need a constant stream of validation. Praise almost seems to come with a built-in half-life: a compliment from five seconds ago is worth more than a compliment from five days ago, and a compliment from five months ago may as well have not happened.

What I find interesting about this is not so much that we care what other people think – that seems unsurprising and even expected, given that humans are social creatures. No, what I find interesting is how often we seek out validation for things that, at some level, we already know about ourselves.

Take the amateur artist, for example. We’re all familiar with the general archetype: they draw in a sketchbook while on the subway. They’re sort of superficially protective of said sketchbook – they put on a show of not wanting to let anyone see it, but will hand it over given some light prodding. And they’re preemptively self-deprecating about their work – they try to talk it down when they show it to anyone, saying “oh, they’re just some silly drawings I did” or “I was just messing around”.

Of course, the self-deprecation is almost always unwarranted. I don’t think I’ve ever looked in someone’s sketchbook and not been deeply impressed with what I saw. Amateur artists are usually really good. And why wouldn’t they be? If you’re going to go to the trouble of buying a sketchbook and then spend dozens or hundreds of hours drawing in it, odds are you have some talent. People don’t tend to go out of their way to embrace a hobby that they’re terrible at. I would wager that anyone who draws for pleasure can fully expect to impress any random person they show their work to, unless the person happens to be a professional artist.

And the strange thing is, I think they already know this. I think amateur artists know, on some level, that what they’re producing is impressive. And I think they also know – again, on some level – that if they were to show their work to people, they would receive only praise and positive feedback.

But if this is true, then we have a puzzle on our hands. Because amateur artists do seem to care deeply about receiving compliments and praise. And they still care about receiving compliments and praise, even if they already expect to receive them.

Take a moment to think about how strange that is. If you already expect something to happen, then having it actually happen should not cause any change in your thinking. It was…what you expected to happen, after all. You certainly don’t gain any new information when your expectations are confirmed. And yet somehow when it comes to praise, expectations aren’t enough – anticipating that you’ll get a compliment is a very different thing from actually hearing the compliment said aloud. It’s as though praise doesn’t “count” until you’ve actually received it from another person – even if you fully expect to receive it, and in fact would be very surprised not to receive it.

It’s not just amateur artists, of course – we all do this. And it’s hard not to find that a little frustrating. Because it’s one thing for us to have a deep need to seek out praise when we’re feeling insecure – that at least makes a certain amount of sense. But to find ourselves in situations where we’re not feeling particularly insecure – where we in fact have pretty confident knowledge of our own praiseworthiness – and yet still have a deep need to seek out praise anyway? That seems impressively pointless, even by “somehow-evolved-to-have-intrusive-thoughts” brain standards. It would be nice if we could just cut out the middleman, so to speak, and use our expectation of being validated as a source of actual validation. I mean, surely if you knew deep down that someone would compliment you if they saw your work – surely that should be just as good as actually getting the compliment, right?

Unfortunately, validation doesn’t appear to work like that. We seem to be wired to accept praise only from outside sources, and discount anything we might say or think about ourselves.

And again, this is kind of frustrating. But it’s also interesting – I’m always intrigued when I notice my brain doing something seemingly pointless, because there’s usually some underlying logic to what it’s doing that I haven’t seen yet. This case is no different: I think there’s a reason we can’t self-validate at will, as much as we might wish we could at times. I want to go into what that reason is, why it ends up being a less than good reason in certain cases, and whether or not there’s anything we can – or should – do to work around it.

II.

So let’s talk about humility.

Humility is one of those universally admired virtues. We all like humble people, and aspire to be humble to some degree. We want our heroes to be humble – think of Gandalf, or Frodo (or Sam, for that matter…or Aragorn…or Faramir – man, Tolkien really liked humility). There are exceptions, of course: Sherlock Holmes comes to mind as a character who’s well-liked despite being arrogant. But such cases are rare, and rely on the character or person in question having a very specific set of compensating characteristics. I think it’s safe to say that, all else being equal, we like arrogant people much less than we like humble people.

Now, why do we value humility so much? That’s a surprisingly tricky question to answer. I mean, yes, we all have an intuitive sense that arrogant people are jerks. But where does that intuition come from?

My own guess is that humility functions as a kind of sanity check on society. Without humility, people would have an incentive to talk themselves up – to brag about themselves as much as possible; to exaggerate their own worth without limit. Worse, in such a world people wouldn’t just talk about how great they were – they would actually have an incentive to believe they were that great as well. If you’re trying to convince someone that you’re amazing and deserve a promotion or a raise or whatever, you’re going to be much more convincing if you actually believe what you’re saying. And people are absolutely capable of this – it’s been shown again and again that we can change our own beliefs when it’s favourable to do so, without ever noticing that we used to believe differently. You can even see a variant of this in our own society, in the form of overconfidence bias, and that’s with a pretty strong norm of valuing humility. So I think the end result in a society without humility would be a bunch of people who had maximally high opinions of themselves going around bragging all the time. In other words, it’s probably a good thing that humility exists, all things considered – it keeps us tethered to reality.

But it’s important to note that it does so by taking the drastic step of essentially cutting us off from having opinions about ourselves. The idea behind humility seems to be that it would be really dangerous to allow people to self-validate at will. And so to avoid the problem altogether, we just say: okay, everything you think about yourself doesn’t count. Sorry. Doesn’t matter if you’re a very nice person who’s always been extremely humble in the past. Doesn’t matter if all you want is to feel good about one tiny little drawing you did, and you’re really really sure that the drawing is actually good anyway. Still: no self-validation for you. There’s just too much of a conflict of interest at play, and we can’t allow for any exceptions because humans are notoriously good at convincing themselves that the exact situation they find themselves in just happens to warrant an exception.

In practice, what we do allow is a strange sort of quasi-belief. You can think positive things about yourself, and you can know on an intellectual level that they’re probably true. But the beliefs have no force, no ability to provide validation. They don’t really count. The only thing that does count is validation that comes from another person. And the reason we allow that is because other people presumably don’t have the same perverse incentives that we do, incentives that would lead us to a runaway ego explosion if left unchecked.

So that’s what I think is going on in the puzzle I outlined above. Remember that the puzzle was not why we seek out validation from others – that makes perfect sense. The puzzle was why we still seek out validation even when we already know we’re worthy or meritorious or talented or whatever.

And the above picture of humility provides the answer. The reason we crave validation in such cases is because we simply can’t get it from ourselves. You can think you’re as worthy or meritorious or talented as you want, but it doesn’t really matter – when it comes to yourself, your own thoughts are always going to be discarded. In essence, humility is all about decoupling self-worth from self-assessment. And that means no matter how highly you assess yourself, you’re still going to have to look to others for praise.

Or, to put it another way: no matter how well you draw, you’re still going to have to show off your damn sketchbook.

III.

If that were all that was going on here, I would say: good for humility. Clearly it serves a very important purpose, and I’m not sure our society would even be able to function without it. Forcing people to seek out a little external validation seems like a small price to pay for that.

But I don’t think that is all that’s going on here.

People internalize norms in very different ways and to very different degrees. There are people out there who don’t seem to internalize the norms of humility at all. We usually call these people “arrogant jerks”. And there are people – probably the vast majority of people – who internalize them in reasonable, healthy ways. We usually call these people “normal”.

But then there are also people who internalize the norms of humility in highly unhealthy ways. Humility taken to its most extreme limit is not a pretty thing – you don’t end up with wise, virtuous, Gandalf-style modesty. You end up with self-loathing, pathological guilt, and scrupulosity. There are people out there – and they are usually exceptionally good, kind, and selfless people, although that shouldn’t matter – who are convinced that they are utterly worthless as human beings. For such people, showing even a modicum of kindness or charity towards themselves would be unthinkable. Anti-charity is much more common – whatever interpretation of a situation puts themselves in the worst light, that’s the one they’ll settle on. And why? Because it’s been drilled into their heads, over and over again, that to think highly of yourself – even to the tiniest, most minute degree – is wrong. It’s something that bad, awful, arrogant people do, and if they do it then they’ll be bad, awful, arrogant people too. So they take refuge in the opposite extreme: they refuse to think even the mildest of nice thoughts about themselves, and they never show themselves even the slightest bit of kindness.

Or take insecurity (please). All of us experience insecurity to one degree or another, of course. But again, there’s a pathological, unhealthy form it can take on that’s rooted in how we internalize the norms of humility. When you tell people that external validation is the only means by which they can feel good about themselves…well, surprisingly enough, some people take a liking to external validation. But in the worst cases it goes beyond a mere desire for validation, and becomes a need – an addiction, even. You wind up with extreme people-pleasers, people who center every aspect of their lives around seeking out praise and avoiding criticism.

Both of these descriptions resonate a great deal with me. I mean, thankfully I rather emphatically do not think of myself as utterly worthless. But if I were to be honest, I would have to place myself somewhere in the “unhealthy” camp when it comes to humility. I find it extremely difficult to think thoughts that are charitable towards myself, or to ever give myself the benefit of the doubt. It just feels viscerally, cringe-inducingly wrong to take my own side like that. Heck, even describing myself in that manner – showing myself the charity of saying I deserve more self-charity, essentially – is hard for me to do. And the less said about my need for validation, the better.

This isn’t really about me, though. There’s a spectrum of unhealthiness when it comes to humility, and yes, I’m probably on it somewhere. But I got off relatively easy compared to what some people are saddled with.

I think some people have a picture of humility as this unalloyed good; something with no downsides whatsoever. And because of this they see no reason not to extol the virtues of humility as often and as widely as possible. After all, it could only lead to people being more humble, and what could be wrong with that? So we’ve ended up with a culture which is absolutely saturated with pro-humility messages, where every single hero you see is humble and every single villain you see is arrogant, where being humble is seen as almost synonymous with being good (Tolkien, anyone?). And this isn’t viewed as any cause for concern, because hey – it’s just humility, right?

But what I’m trying to say here is that this isn’t true. There’s a cost to humility. When you canonize the humble and hold them up as paragons of virtue…well, yes, maybe you manage to make society a little bit less arrogant, on average. But you also push some people who were already too humble for their own good into genuinely unhealthy places. The unhealthiness might not always be obviously related to humility – I’d bet that a good number of people who praise humility don’t make the connection, and complain in the next breath about how today’s Facebook-using teens are far too obsessed with what other people think of them. But the connection is there nonetheless.

IV.

All of this does strange things to the concept of self-esteem.

Take me, for example. Whenever people have asked me about my own self-esteem in the past, I’ve never known quite what to say. Usually I’ve just ended up mumbling some vague and half-contradictory response that didn’t really answer the question.

Because there are two different sides to me. On the one hand, you have my insecure side. This is the side that’s obsessed with what people think of me; the side that is desperate for validation and praise. It’s because of this side that I write blog posts and Dinosaur Comics, that I always try to get a laugh out of people, and that I try to be as clever and insightful as I can be in conversations. Insecurity is this side’s middle name: think “cross between a teacher’s pet and a class clown”, except turned up to 1000.

(For what it’s worth, I don’t really disapprove of this part of myself – the instinct to do praiseworthy things can be a good one, as long as it’s channeled in the right direction. I have a problem with the insecure side of myself not when it spends all its time looking to earn praise, but when it spends all its time looking to avoid embarrassment. That, I think, has done more harm than good for me over the course of my life)

On the other hand, though, I also have a confident side. This side is made up of the quasi-beliefs that I talked about above – the beliefs that I suspect, deep down, are true, but that I don’t really allow myself to fully accept because they come from my own brain. If you were to ask this side of myself what I’m like, it would say that I’m an exceedingly smart, funny, kind and thoughtful person. In fact, it would probably be fair to call this side of myself not just confident but arrogant. This is why I’ve always felt vaguely guilty when people I know call me modest – because they don’t know about the arrogant side that I have. Granted, the arrogant side may not have any real access to how I feel about myself, but it’s there all the same.

(I should also note that most of the time the two sides roughly cancel out, and I manage to approximate a normal, functioning human being. Not always, though)

Okay, so then the question is: do I have high self-esteem, or low?

And my own answer would be that I have no idea – it depends entirely on what you mean by self-esteem.

If self-esteem refers to that deep-down set of quasi-beliefs that I have, then I guess I’d have to concede that I have high self-esteem. Certainly that side of myself doesn’t lack for confidence. But if so, it’s a very strange and almost hollow sort of self-esteem: it doesn’t help me feel particularly good about myself, or stop me from seeking out validation, or really do any of the things that you might expect having high self-esteem to do. So I’m not sure that this definition really fits.

On the other hand, if self-esteem refers to the insecure, validation-seeking side of myself – well, that makes a little more sense, since at least this side of me actually has access to how I feel about myself. And in that case, I suppose you could say that I have low self-esteem. But I’m not sure that this definition really fits either. Yes, my insecure side constantly seeks out praise, and worries about whether or not people like me, and does other things you might typically associate with low self-esteem. But it is also fundamentally outward-focused – the means by which my insecure side is able to affect my feelings is through other people, not through anything I think about myself. So yes, getting praise from another person can make me feel very very good – but it’s a good feeling that’s coming entirely from someone else’s opinion of me, and it seems strange to call that self-esteem.

No, I may be generalizing too much from my own experiences here, but the more I think about it, the harder it is for me to see “self-esteem” as anything other than a contradiction in terms. As far as I can tell, the only way I ever truly get to feel good about myself – indeed, almost the way “feeling good about myself” is defined – is through external validation. Sure, I can have positive thoughts about myself, and some of those thoughts might even make me feel a little better about myself. But to the extent that they do make me feel better about myself, they do so by…well, by making it easier to imagine myself receiving praise and validation from others. Self-worth always seems to ground out in external validation eventually, if you dig deep enough. So talking about self-esteem, at least in terms of “feeling good about yourself as a result of your own thoughts and opinions”, doesn’t make any sense to me. It’s like talking about getting water from something other than H2O – you can’t just separate out self-worth from validation, because they’re basically the same thing.

I bring all of this up because there’s a particular strain of thought I’ve seen floating around – exemplified by the Richard Feynman quote included at the outset – that says you shouldn’t care what other people think of you. You’ve probably heard the platitudes: “Be comfortable in your own skin. Do what you want, and don’t worry if everyone else is doing it. Just be yourself.” The idea is that you should just try to feel good about yourself on your own terms, and not define your worth based on other people’s opinions.

But my problem with this line of thinking is that for most people, this simply isn’t possible – the only way they can feel good about themselves is through other people. The choice isn’t between external validation and self-validation – it’s between external validation and nothing. So when you tell someone “stop caring what other people think of you”, what that amounts to in practice is saying “don’t ever feel good about yourself again”. And needless to say, I don’t think this is a realistic – or even desirable – ideal for people to strive for.

No, I think we might just have to accept that we’ll always reside in a world where external validation is the fundamental currency of self-worth. And yes, that might mean we’ll always be saddled with a desire for praise – but it doesn’t mean there aren’t more and less healthy ways of seeking out that praise. I mentioned this above, but I think the best way to handle a need for validation is not to fight it but to channel it – to use it to shape our own behaviour in ways that we endorse. Even if praise is your ultimate motivator in some or even most situations, there’s still a big difference between praise motivating you to do something you approve of, and praise motivating you to do something you disapprove of. The key is to try as much as possible to move yourself away from the latter and towards the former.

Mind you, I have no idea how to actually do that. But it seems like a good thing to try for.

V.

Whenever I think about all of this, my thoughts always seem to be drawn, with puzzling regularity, to one subject in particular: the internet. That may sound like a strange connection to make, but I think there’s something important going on here – so bear with me.

Time and time again, I’ve seen someone put in the unfortunate situation of having to prove to the internet that they’re a good person. Take the infamous “nice guy” debates that periodically sweep the internet, for example. They always start off when some blissfully oblivious young man decides to ask The Question:

“I’m a nice guy, so why can’t I get a girlfriend?”

No doubt this seems, to him, like an innocuous question.

(ah, to be so innocent, so naive)

Anyway, so this promptly sets off a fight that makes World War II look like a minor schoolyard scuffle, accusations of sexism and misogyny and entitlement are hurled in every direction, and after the dust has settled everyone on the internet hates each other just a little bit more. Pretty much a normal day online.

I don’t really want to get into the specifics of the nice guy debate here – that’s been done to death, and it does horrible things to my psyche anyway. But I would like to highlight one aspect of the situation that really bothers me. During these arguments, there’s always an attitude of…let’s say mild skepticism that the guy in question really is all that nice. The prevailing thought seems to be that anyone who would say that they were nice couldn’t actually be nice.

And hey, fair enough. Probably this skepticism is often warranted – it’s very easy to claim to be nice online, after all. But let’s say for the sake of argument that, in this one particular case, our guy really is that nice. Like, super nice in fact – he wins niceness awards and has a PhD in Niceness from Nice University. My problem is that in a situation like that, where he actually is a nice guy, it’s not clear to me that there’s anything at all he could do to convince the internet of this.

Seriously, how would you do it? Anything you say about yourself is suspect right from the start – repeating “No, really, I swear I’m nice!” isn’t going to cut it. At best, claims like that are just going to be unconvincing, but at worst they’ll be anti-convincing – nice people don’t usually go around saying they’re nice. And if you try to back up your claim to being nice with specific examples – “But I volunteer at eight different soup kitchens!” – well, that’s probably just going to come across as more defensive than anything else. Not to mention that people will take it as further evidence that you’re conceited, because now you’re the kind of person who goes around talking publicly about all the nice things you’ve done.

So I have a great deal of sympathy for our hypothetical nice guy here, because I really don’t know if there’s anything he could do.

And it goes way beyond just the nice guy thing. I actually dread the thought of ever having to convince the internet that I have any positive quality – that I’m smart, or funny, or likeable, or anything like that. The notion of being put in that situation instills a feeling in me that is equal parts frustration and hopelessness. Because it’s basically a no-win scenario – practically anything that I could say would just sound like boasting, so it would either be dismissed, or taken as evidence against whatever I was trying to prove. It’s essentially being asked to brag while subject to the constraint of not being allowed to brag.

(hey, I think I just figured out why I hate writing online dating profiles!)

What it comes down to is that the internet is the ultimate context-free environment. Most forums are more-or-less anonymous, which means that anything you post pretty much has to stand on its own – you don’t really get to build up a reputation over time, or earn people’s trust. In a setting like that, faced with a skeptical audience who doesn’t know you, it’s practically impossible to credibly say something positive about yourself – you’re just going to come across as someone who’s lying or full of themselves. In the real world, you can always show people that you’re nice by doing things that are hard to fake – if you buy someone you know a thoughtful gift, or help them out when they’re in need, those are things you’re only likely to do if you’re actually a decent person. But that option isn’t available to you online – on the internet, there’s no such thing as hard to fake, because anyone can claim anything they want at no cost to themselves. There’s nothing to back up any boastful-sounding claims that get made, and so they’re inevitably met with either skepticism or hostility.

Okay, so that’s maybe kind of interesting, but you might be wondering what the big deal is. So it’s hard to convince people of things online – is that really worth getting so worked up about? After all, online dating aside, how often is it that you’re faced with the task of proving to the internet that you’re a worthy person?

And this is true – on its own it is kind of a niche problem to focus on. But I bring it up because it actually gets right down to the heart of what humility is all about to me, and how I experience it.

When I say that I dread the thought of having to prove to the internet that I’m smart, it’s not at all that I expect to ever encounter that situation. That does indeed seem unlikely, and not worth worrying about. No, the thing that bothers me is just knowing that if I ever did have to prove that, I wouldn’t be able to.

See, I have a very strongly-felt sense that everything I believe or think should ultimately be defensible. To me, it feels as though I’m not allowed to hold any opinion unless I can justify it to anyone imaginable, even the most skeptical of critics. This goes double for thoughts that I have about myself. And my brain doesn’t go halfway with this – no, in the interest of being “fair” (read: anti-self-charitable) it has to construct and defeat the worst skeptics it can imagine. But of course the worst skeptics it can imagine are exactly those context-lacking internet commenters I described above. And so they’re exactly who I have to convince if I want to have an opinion about myself.

That’s why I find the nice guy scenario described above so frustrating. I may not have literally experienced something like that, and don’t really expect to – but I run through it in my head about a billion times a day when I’m trying to justify things to myself.

It may sound silly, but every time I’m tempted to think something charitable about myself, an anonymous internet commenter pops up in the back of my head and demands that I justify it to them. And unless I can, I don’t get to think the thought.

(I usually can’t – I may have mentioned that I don’t think many charitable thoughts about myself?)

What it comes down to is that I have a desire – a need, even – for defensibility in my opinions about myself. And this is very closely related to humility – in fact, it might even be the same thing. I think the way that humility manifests itself in me is as a kind of fear of being called out – there’s a sense that at any second, I could be held to account for any positive thoughts I might have had about myself, and I need to have justifications ready for each of them. What counts as a justification, though? Well, definitely not my own thoughts and feelings – those might be enough to satisfy my friends and family, but there’s no way that they would sway a stranger on the internet. Remember, we need to convince everyone. No, pretty much the only thing that might do it would be something neutral, like…well, like someone else’s opinion of me.

And hey, look at that – we’ve arrived back at external validation.

I think the reason we “count” external validation but not self-validation is because external validation can be used in self-defense. You can hold up someone else’s opinion of you and say “No look, it’s okay! Someone else thinks I’m smart too, see? It’s not just me!” It’s something you can use as justification, something that offers proof that you’re not just being arrogant. And it’s one of the few things that has half a chance of satisfying even skeptics on the internet – which I think is why I crave it so much.

Without it, though – absent a set of external opinions for you to fall back on – it really isn’t clear to me that there’s anything you could do to prove to the internet that you’re smart, or funny, or (heaven help you) a nice guy. I think people are just too good at pushing back against what they see as unjustified examples of arrogance. Without context, pretty much all self-advocacy is just rounded off to bragging, and that has a way of blocking off any route you might want to take.

If I had to describe the feeling of humility, it would be that – the feeling of having no way, even in principle, of convincing someone else that you’re a good person. And as a result, being unable to believe it yourself.

VI.

In the end, though, there are always trade-offs.

I talked about the harm that pro-humility messages do, but of course some people need to hear messages like that. Just as there are those who could do with a little less humility in their life, there are also those who could do with a little more. Any societal norm you want to set has to walk a balancing act – if you push humility too much you’ll end up with overly scrupulous and insecure people, and if you push it too little you’ll end up with people who are much too arrogant and full of themselves.

And to its credit, I think society actually recognizes this – sort of. The way we deal with this in practice is by trying to push both pro- and anti-humility messages at the same time, and hoping like hell that they find the right kind of people. Messages promoting humility are of course ubiquitous: from a young age we have it drilled into our heads that it’s wrong to brag, that we shouldn’t think too much of ourselves, that “pride comes before a fall”, et cetera et cetera – there’s no shortage of examples. But it’s easy to forget that there are also messages that go in the other direction – things like “don’t be so hard on yourself” and “you’re your own worst critic” and “be kind to yourself”. The idea – or the hope – would be that people who are already too humble would hear the latter set of messages, people who aren’t humble enough would hear the former, and the world would get a little bit healthier on the whole.

Unfortunately, I have a sick suspicion that this isn’t happening – that in fact, the messages are reaching exactly the sets of people who least need to hear them.

Consider who is likely to take the message “don’t be so hard on yourself” to heart. Would it be the humble people that you’re trying to reach?

I doubt it. To think “I am too hard on myself” is not a humble thought. It is a thought that asserts one’s own adequacy, a thought that says yes, I have gone far enough in policing myself – too far, even. And humble people are not noted for the ease with which they think self-charitable thoughts.

On the other hand, I could totally see a somewhat clueless and self-congratulatory jerk hearing that message and thinking “Hey yeah, I am too hard on myself” and then going off to be even more of a self-congratulatory jerk, because there are people out there who do not have a single self-reflective bone in their body.

The problem is that humility is self-reinforcing. If you’re not already in the habit of being charitable to yourself, then it’s tough to start. To do so, you’d have to decide that you’re currently not charitable enough to yourself…but of course that itself is a self-charitable thought, which you’re not likely to think unless you’re already sufficiently charitable…

(man, meta-humility is just the worst)

I guess my hope in writing this essay was that it might break a few people free from that trap. That by laying out the whole messed-up system of thought that produces humility, it might allow some people to step outside that system for a moment, and bootstrap themselves up to self-charity.

It’s tough, of course. Even if you manage to convince yourself that you need to be more self-charitable, old habits die hard – thinking nice thoughts about yourself can feel really really awful, like you’re being a bad person. If that describes you, though, then I’d urge you to keep trying. Erring on the side of humility is always going to feel safer – when you do that you’re only harming yourself, after all. But remember that you count as a person too, and harming that person isn’t virtuous, even if no one is going to blame you for it.

All that’s putting the cart before the horse, though. Before you could even get to that step, you’d first have to convince yourself that you really are too uncharitable towards yourself. And that can be a hard thing to do. Maybe you have a suspicion that it’s true, a suspicion that you’re too hard on yourself. But that probably doesn’t feel good enough.

The million-dollar question is: how do you know for sure if you’re too humble?

And the answer is you don’t. You can’t. You can look for hints – like say if you identified with this blog post, or if you’re thinking thoughts like “oh god, maybe I’m not really humble enough for this to apply to me”. But you can’t know, not for sure.

Ultimately, you have to take the first step towards self-charity on your own. There’s always a temptation to look for permission to take that step, to find someone to reassure you that it’s okay. But you can’t do that – to do so would be to defeat the whole purpose.

No, in the end you’ll just have to make the judgement for yourself. If you really think that you should take the step, then take the step. I can’t say for certain that you’ll be justified in doing so. But I can guarantee you that there are people reading this who need to be more self-charitable.

And deep down, I think you know who you are.

VII.


By the way, I’m aware of the irony of writing a validation-seeking blog post in order to decry validation seeking. So don’t bother pointing that out.


Seriously, don’t trust science reporting!

I came across an especially annoying example of bad science reporting today, and I want to make an example of it.

The article in question comes from the always-trustworthy site I F*cking Love Science, and the headline reads: If We Don’t Cut Our Carbon Emissions, This Is What The World Will Look Like By 2100.

The piece goes on to show pictures that supposedly depict what various major cities will look like in 2100 if the projections for sea level rise due to global warming are accurate. I’ve included a few of the juicy photos below:

Paints a pretty scary picture, right? I mean, heck, look at London – it’s practically half-underwater. And this is only for 85 years in the future! Clearly dire action is needed to save the world’s cities.

The only problem is, it’s not true.

In fact, the article makes a very basic science error, and the pictures do not depict what the world will look like in 2100. At all.

I’m going to explain what the error was, but as an exercise in critical reading I encourage you all to try to figure it out for yourself first. The scientific papers that the pictures are based on are all linked in the article, and the error in question is not a particularly subtle one. Give it a try!

Spoilers below:

Okay, did you get it?

The error is: the pictures depict the amount of sea level rise expected due to a certain amount of warming by 2100, but they do not depict the amount of sea level rise by 2100.

The ocean is pretty big [citation needed]. It has an enormous heat capacity, and it takes a long time for it to respond to changes in the climate. If you increase the surface temperature of the earth then the sea level will eventually rise to reach a new equilibrium value. But the key word there is eventually. The time scale for the ocean to reach equilibrium with a new surface temperature is on the order of ~2000 years. The study that inspired the above photos looked at short-term increases in temperature – in particular, it considered two cases: 2°C or 4°C of warming by 2100. But the sea level increases that are quoted are not for 2100 – they’re the long-term equilibrium values, which will not be reached for several millennia.

In other words: yes, the pictures do accurately depict what the effects of 4°C of warming will be…in the year 4100. In actual fact, the projected sea level increases for the year 2100 are more like 1 meter – which is not even close to what the pictures show.
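
To make the error concrete, here’s a quick back-of-the-envelope sketch in Python. The numbers in it are illustrative assumptions on my part – an equilibrium rise of roughly 9 meters for the 4°C case, and a single exponential relaxation with a ~2000-year timescale, which is a cartoon of the real ocean response – but the qualitative point doesn’t depend on them: 85 years into a ~2000-year process, only a small fraction of the equilibrium rise has been realized.

import math

# Toy model: sea level relaxes exponentially toward its equilibrium value.
# All numbers below are illustrative assumptions, not the paper's figures.
S_eq = 9.0    # assumed equilibrium sea level rise (meters) for ~4 deg C warming
tau = 2000.0  # assumed ocean equilibration timescale (years)
t = 85.0      # years between now and 2100

# S(t) = S_eq * (1 - exp(-t / tau))
S_2100 = S_eq * (1 - math.exp(-t / tau))
print(f"Sea level rise realized by 2100: {S_2100:.2f} m of {S_eq:.1f} m")
# -> about 0.37 m: a small fraction of the equilibrium value. (The real
#    response isn't a single exponential, so don't read the exact number
#    too literally -- the point is the size of the gap.)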

This means the article was wrong. It was not a little bit right, it did not stretch the truth or make an ambiguous error. It was just straightforwardly wrong.

(I find this especially frustrating because the case for global warming being a significant threat is already really good – you really don’t need to make shit up to sell it)

(Moreover, I hate the seemingly prevalent notion that if you’re on the “side” of global warming, it’s right to support all arguments that show that global warming is a threat. No. All I care about is supporting arguments that are true, and I’m going to work just as hard to expose bad pro-global warming arguments as I am to expose bad anti-global warming arguments.)

Okay, so what’s the takeaway?

Well, I feel like it’s safe to say that most of the people reading this will have already heard the advice, “Don’t trust science reporting.” Heck, it’s become almost a cliché at this point – there’s been an xkcd and multiple SMBCs on the topic, which is a good indication that something has permeated the collective consciousness of the nerdy internet.

[Pictured: the face of Internet Nerd Consensus]

But I also think it’s safe to say that people are not nearly paranoid enough in this regard. People pay lip service to the idea that science reporting is bad, but they don’t take it to heart. I originally saw this article linked to on a forum I frequent, and in the responses not one person questioned the validity of what the article was saying. Everyone just accepted it without question. It’s worth asking yourself whether you would have done the same, if I hadn’t primed you to be skeptical of it.

Seriously, I don’t know how to emphasize this enough: don’t trust scientific reporting. Period. Don’t trust it in general, but especially don’t trust it when the topic that’s being reported on is political. And super-duper-especially don’t trust it when the topic is political, and the conclusion of the article supports the ideological bent of the source in question.

(I mean, unless you think it’s a coincidence that the site with an overwhelmingly liberal audience happened to make an error that exaggerated the potential harms of global warming)

And needless to say, super-duper-extra-especially don’t trust scientific reporting when the topic is political, the conclusions support the ideological bent of the source in question, and the conclusions support your own ideological bent.

That’s just a recipe for disaster.

Is there an echo in here?

I’m starting to wonder if I might have been too hard on echo chambers.

The standard position these days is that echo chambers are uniformly terrible; that surrounding yourself with people who agree with you on every issue can only lead to closed-mindedness, toxic ingroup/outgroup dynamics, and increased polarization. Many people have commented on how the rise of partisan news networks and isolated internet communities has led to a society where people never have to have their beliefs challenged, or interact with those who disagree with them. And this is obviously a very bad thing – there’s almost nothing that runs more counter to the spirit of rationality and truth seeking than the kind of self-congratulatory patting on the back you commonly see in intellectually closed-off communities. But despite all this, I still feel an impulse to speak up in favour of echo chambers, at least a little bit – I now think they might also serve a useful psychological function. Just as there are people who can benefit from reading Ayn Rand, I suspect there are people out there who could use a little bit more agreement in their life.

I’ve been going through a pretty rough patch in my life lately. I’m still trying to figure out why exactly this is, but I think part of it may be due to a feeling of intellectual isolation. Right now I feel like I’m living in an anti-echo chamber. It seems like almost everything I hear or read – either from friends, or on facebook, or on the general internet – is someone disagreeing with an opinion I hold. And it seems like any agreement people might have with my beliefs is either whispered or not voiced at all. Obviously this isn’t literally the case – it’s probably mostly just selective memory and a very human tendency to notice criticism more easily than agreement. But I do have a lot of weird and semi-controversial opinions that very few people in the world share, and people are generally not shy about disagreeing with those opinions.

Now, normally this wouldn’t really bother me – and from a purely intellectual point of view, it doesn’t. After all, why should I care if other people think I’m wrong about something? I’m pretty confident in my weird opinions (otherwise I wouldn’t hold them), but in the end I’m not afraid of any challenges to my beliefs. If someone convinces me that something I believe is wrong, I’ll just change my mind. *Shrug*. The goal is not to never be wrong, the goal is just to find the truth.

But saying these words doesn’t erase the reality that humans are social animals. We evolved to care a lot about other people’s opinions – in the ancestral environment it was probably extremely relevant to know whether or not the majority of people around you agreed with you. Having popular or unpopular opinions could literally mean the difference between life and death (or, even more relevantly from evolution’s point of view, between mating and not mating).

I worry that ever since [Bad Thing] happened last year, and I lost a major source of intellectual solidarity in my life, I’ve been feeling more and more like no one agrees with me, and that I’m all alone in believing what I do. And I worry that this has been slowly wearing me down, psychologically, and tripping some ancient mammalian brain circuits – circuits that say things like “YOU HAVE NO ALLIES” and “YOU ARE ABOUT TO BE SHUNNED AND EXILED BY THE TRIBE”.

So now I wonder if maybe people need a certain amount of agreement in their lives. If maybe perceiving everyone around you as constantly disagreeing with you is just as bad, psychologically speaking, as perceiving yourself as useless or unwanted or unattractive. If maybe – just maybe, to some tiny, infinitesimal extent – having a self-congratulatory echo chamber among friends is necessary to be emotionally healthy.

And on top of that: in addition to individual mental well-being, I wonder if agreement is more necessary for friendship than I realized. I’ve always sort of implicitly believed that it didn’t really matter if you disagreed with your friends on philosophical or political matters. All that was required for two people to be friends, thought I, was that they enjoy each other’s company, and that they have each other’s back in times of need. And I still think this is at least normatively true, in the sense that this is probably how friendships should work in an ideal world. But I’m less confident that this is how friendships really do work, in the world as it is right now. I mean, who knows? Maybe the more you disagree with friends, the more you sow subtle, barely noticeable seeds of dissent. Maybe you end up gradually weakening ties to your friends with every contrary opinion, because you subconsciously signal to them that you wouldn’t be a reliable ally if they were to ever really need you. Friendship is all about trust, after all, and maybe trust is really difficult in the face of persistent disagreement.

Or, you know, maybe not. I have no idea if any of this is true. I came up with all of this last night when I couldn’t sleep. I was lying in bed, mind racing and feeling generally frustrated about some article I had read, when I realized I was getting way more bothered by other people disagreeing with me than I used to. And so I set about trying to figure out why that was, and the result is this post (which I’m not all that confident in). One natural question one could ask is: why now? Why do I all of a sudden feel so isolated when my opinions haven’t really changed that much recently? I mean, yes, I did lose that source of intellectual solidarity I mentioned (and before that I had far fewer weird and semi-controversial opinions). But it could also easily just be that I’ve been depressed lately for whatever other reason, and that in such a state I’m more likely to notice negative things like criticism and disagreement.

Either way, I definitely do feel kind of isolated right now, and all of this is why I’m so glad that [Friend Who Agrees With Me About Basically Everything] is moving to Toronto soon. I think being able to talk with him more frequently could be helpful. Although, come to think of it, despite the fact that we agree on almost everything, our discussions almost invariably end up homing in on the few topics we disagree about. Granted, I enjoy that because our 99%-shared worldview tends to allow for unusually productive disagreements. But still, since I know he’s reading this: we should probably skype sometime and vent about how obvious atheism is, or how much reality is definitely objective, or something.

You know, just so I can hear an echo.

Compensatiated?

I always used to hate it when I would overcompensate for some error I made – overcompensation just seemed like something that unintelligent, under-reflective people did. So over time I developed a habit of undercompensating for errors.

Then I realized that my undercompensation was just meta-overcompensation.

Now I don’t know what to do.

Philosophical differences

[Followup to my last post on didacticism]

[Also, I’m not sure who the audience for this post is. For now let’s just say I’m writing it for myself?]


You know what’s scarier than having enemy soldiers at your border?

Having sleeper agents within your borders.

Enemy soldiers are malevolent, but they are at least visibly malevolent. You can see what they’re doing; you can fight back against them or set up defenses to stop them. Sleeper agents on the other hand are malevolent and invisible. They are a threat and you don’t know that they’re a threat. So when a sleeper agent decides that it’s time to wake up and smell the gunpowder, not only will you be unable to stop them, but they’ll be in a position to do far more damage than a lone soldier ever could. A single well-placed sleeper agent can take down an entire power grid, or bring a key supply route to a grinding halt, or – in the worst case – kill thousands with an act of terrorism, all without the slightest warning.

Okay, so imagine that your country is in wartime, and that a small group of vigilant citizens has uncovered an enemy sleeper cell in your city. They’ve shown you convincing evidence for the existence of the cell, and demonstrated that the cell is actively planning to commit some large-scale act of violence – perhaps not imminently, but certainly in the near-to-mid-future. Worse, the cell seems to have even more nefarious plots in the offing, possibly involving nuclear or biological weapons.

Now imagine that when you go to investigate further, you find to your surprise and frustration that no one seems to be particularly concerned about any of this. Oh sure, they acknowledge that in theory a sleeper cell could do some damage, and that the whole matter is probably worthy of further study. But by and large they just hear you out and then shrug and go about their day. And when you, alarmed, point out that this is not just a theory – that you have proof that a real sleeper cell is actually operating and making plans right now – they still remain remarkably blasé. You show them the evidence, but they either don’t find it convincing, or simply misunderstand it at a very basic level (“A wiretap? But sleeper agents use cellphones, and cellphones are wireless!”). Some people listen but dismiss the idea out of hand, claiming that sleeper cell attacks are “something that only happen in the movies”. Strangest of all, at least to your mind, are the people who acknowledge that the evidence is convincing, but say they still aren’t concerned because the cell isn’t planning to commit any acts of violence imminently, and therefore won’t be a threat for a while. In the end, all of your attempts to raise the alarm are to no avail, and you’re left feeling kind of doubly scared – scared first because you know the sleeper cell is out there, plotting some heinous act, and scared second because you know you won’t be able to convince anyone of that fact before it’s too late to do anything about it.

This is roughly how I feel about AI risk.

You see, I think artificial intelligence is probably the most significant existential threat facing humanity right now. This, to put it mildly, is something of a fringe position in most intellectual circles (although that’s becoming less and less true as time goes on), and I’ll grant that it sounds kind of absurd. But regardless of whether or not you think I’m right to be scared of AI, you can imagine how the fact that AI risk is really hard to explain would make me even more scared about it. Threats like nuclear war or an asteroid impact, while terrifying, at least have the virtue of being simple to understand – it’s not exactly hard to sell people on the notion that a 2km hunk of rock colliding with the planet might be a bad thing. As a result people are aware of these threats and take them (sort of) seriously, and various organizations are (sort of) taking steps to stop them.

AI is different, though. AI is more like the sleeper agents I described above – frighteningly invisible. The idea that AI could be a significant risk is not really on many people’s radar at the moment, and worse, it’s an idea that resists attempts to put it on more people’s radar, because it’s so bloody confusing a topic even at the best of times. Our civilization is effectively blind to this threat, and meanwhile AI research is making progress all the time. We’re on the Titanic steaming through the North Atlantic, unaware that there’s an iceberg out there with our name on it – and the captain is ordering full-speed ahead.

(That’s right, not one but two ominous metaphors. Can you see that I’m serious?)

But I’m getting ahead of myself. I should probably back up a bit and explain where I’m coming from.

Artificial intelligence has been in the news lately. In particular, various big names like Elon Musk, Bill Gates, and Stephen Hawking have all been sounding the alarm in regards to AI, describing it as the greatest threat that our species faces in the 21st century. They (and others) think it could spell the end of humanity – Musk said, “If I had to guess what our biggest existential threat is, it’s probably [AI]”, and Gates said, “I…don’t understand why some people are not concerned [about AI]”.

Of course, others are not so convinced – machine learning expert Andrew Ng said, “I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars”.

In this case I happen to agree with the Musks and Gates of the world – I think AI is a tremendous threat, and that we need to focus much of our attention on it in the future. In fact I’ve thought this for several years, and I’m kind of glad that the big-name intellectuals are finally catching up.

Why do I think this? Well, that’s a complicated subject. It’s a topic I could probably spend a dozen blog posts on and still not get to the bottom of. And maybe I should spend those dozen-or-so blog posts on it at some point – it could be worth it. But for now I’m kind of left with this big inferential gap that I can’t easily cross. It would take a lot of groundwork to explain my position in detail. So instead of talking about AI risk per se in this post, I thought I’d go off in a more meta-direction – as I so often do – and talk about philosophical differences in general. I figured if I couldn’t make the case for AI being a threat, I could at least make the case for making the case for AI being a threat.

(If you’re still confused, and still wondering what the whole deal is with this AI risk thing, you can read a not-too-terrible popular introduction to the subject here, or check out Nick Bostrom’s TED Talk on the topic. Bostrom also has a bestselling book out called Superintelligence. The one sentence summary of the problem would be: how do we get a superintelligent entity to want what we want it to want?)

(Trust me, this is much much harder than it sounds)

So: why then am I so meta-concerned about AI risk? After all, based on the previous couple paragraphs it seems like the topic actually has pretty decent awareness: there are popular internet articles and TED talks and celebrity intellectual endorsements and even bestselling books! And it’s true, there’s no doubt that a ton of progress has been made lately. But we still have a very long way to go. If you had seen the same number of online discussions about AI that I’ve seen, you might share my despair. Such discussions are filled with replies that betray a fundamental misunderstanding of the problem at a very basic level. I constantly see people saying things like “Won’t the AI just figure out what we want?”, or “If the AI gets dangerous why can’t we just unplug it?”, or “The AI can’t have free will like humans, it just follows its programming”, or “lol so you’re scared of Skynet?”, or “Why not just program it to maximize happiness?”.

Having read a lot about AI, I find these misunderstandings frustrating. This is not that unusual, of course – pretty much any complex topic is going to have people misunderstanding it, and misunderstandings often frustrate me. But there is something unique about the confusions that surround AI, and that’s the extent to which the confusions are philosophical in nature.

Why philosophical? Well, artificial intelligence and philosophy might seem very distinct at first glance, but look closer and you’ll see that they’re connected to one another at a very deep level. Take almost any topic of interest to philosophers – free will, consciousness, epistemology, decision theory, metaethics – and you’ll find an AI researcher looking into the same questions. In fact I would go further and say that those AI researchers are usually doing a better job of approaching the questions. Daniel Dennett said that “AI makes philosophy honest”, and I think there’s a lot of truth to that idea. You can’t write fuzzy, ill-defined concepts into computer code. Thinking in terms of having to program something that actually works takes your head out of the philosophical clouds, and puts you in a mindset of actually answering questions.
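
To see what I mean, try actually writing “maximize happiness” into a program. Here’s a toy sketch in Python – every name in it is hypothetical, and it’s a sketch of the general mis-specification problem, not anyone’s real system. The moment you’re forced to pick a concrete, measurable proxy for “happiness”, the proxy and the concept come apart:

# Toy illustration: a coded objective has to be something measurable.
# (All names here are hypothetical; this is a sketch, not a real AI system.)

def smile_count(outcome):
    """One possible 'happiness' proxy: number of smiling faces observed."""
    return outcome.get("smiling_faces", 0)

def best_action(actions, proxy):
    """Pick the action whose predicted outcome maximizes the proxy."""
    return max(actions, key=proxy)

actions = [
    {"name": "make people genuinely happy", "smiling_faces": 1_000},
    {"name": "paralyze every face into a smile", "smiling_faces": 7_000_000_000},
]
print(best_action(actions, smile_count)["name"])
# -> "paralyze every face into a smile"

The code runs, and it does exactly what it was told to do – which is precisely the problem. Writing the objective down made it precise, and precisely wrong; that’s the step that “Why not just program it to maximize happiness?” glosses over.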

All of which is well and good. But the problem with looking at philosophy through the lens of AI is that it’s a two-way street – it means that when you try to introduce someone to the concepts of AI and AI risk, they’re going to be hauling all of their philosophical baggage along with them.

And make no mistake, there’s a lot of baggage. Philosophy is a discipline that’s notorious for many things, but probably first among them is a lack of consensus (I wouldn’t be surprised if there’s not even a consensus among philosophers about how much consensus there is among philosophers). And the result of this lack of consensus has been a kind of grab-bag approach to philosophy among the general public – people see that even the experts are divided, and think that that means they can just choose whatever philosophical position they want.

Want. That’s the key word here. People treat philosophical beliefs not as things that are either true or false, but as choices – things to be selected based on their personal preferences, like picking out a new set of curtains. They say “I prefer to believe in a soul”, or “I don’t like the idea that we’re all just atoms moving around”. And why shouldn’t they say things like that? There’s no one to contradict them, no philosopher out there who can say “actually, we settled this question a while ago and here’s the answer”, because philosophy doesn’t settle things. It’s just not set up to do that. Of course, to be fair, people seem to treat a lot of their non-philosophical beliefs as choices as well (which frustrates me to no end), but the problem is particularly pronounced in philosophy. And the result is that people wind up running around with a lot of bad philosophy in their heads.

(Oh, and if that last sentence bothered you, if you’d rather I said something less judgmental like “philosophy I disagree with” or “philosophy I don’t personally happen to hold”, well – the notion that there’s no such thing as bad philosophy is exactly the kind of bad philosophy I’m talking about)

(he said, only 80% seriously)

Anyway, I find this whole situation pretty concerning. Because if you had said to me that in order to convince people of the significance of the AI threat, all we had to do was explain to them some science, I would say: no problem. We can do that. Our society has gotten pretty good at explaining science; so far the Great Didactic Project has been far more successful than it had any right to be. We may not have gotten explaining science down to a science, but we’re at least making progress. I myself have been known to explain scientific concepts to people every now and again, and fancy myself not half-bad at it.

Philosophy, though? Different story. Explaining philosophy is really, really hard. It’s hard enough that when I encounter someone who has philosophical views I consider to be utterly wrong or deeply confused, I usually don’t even bother trying to explain myself – even if it’s someone I otherwise have a great deal of respect for! Instead I just disengage from the conversation. The times I’ve done otherwise, with a few notable exceptions, have only ended in frustration – there’s just too much of a gap to cross in one conversation. And up until now that hasn’t really bothered me. After all, if we’re being honest, most philosophical views that people hold aren’t that important in the grand scheme of things. People don’t really use their philosophical views to inform their actions – in fact, probably the main thing that people use philosophy for is to sound impressive at parties.

AI risk, though, has impressed upon me an urgency about philosophy that I’ve never felt before. All of a sudden it’s important that everyone have sensible notions of free will or consciousness; all of a sudden I can’t let people get away with being utterly confused about metaethics.

All of a sudden, in other words, philosophy matters.

I’m not sure what to do about this. I mean, I guess I could just quit complaining, buckle down, and do the hard work of getting better at explaining philosophy. It’s difficult, sure, but it’s not infinitely difficult. I could write blog posts and talk to people at parties, and see what works and what doesn’t, and maybe gradually start changing a few people’s minds. But this would be a long and difficult process, and in the end I’d probably only be able to affect – what, a few dozen people? A hundred?

And it would be frustrating. Arguments about philosophy are so hard precisely because the questions being debated are foundational. Philosophical beliefs form the bedrock upon which all other beliefs are built; they are the premises from which all arguments start. As such it’s hard enough to even notice that they’re there, let alone begin to question them. And when you do notice them, they often seem too self-evident to be worth stating.

Take math, for example – do you think the number 5 exists, as a number?

Yes? Okay, how about 700? 3 billion? Do you think it’s obvious that numbers just keep existing, even when they get really big?

Well, guess what – some philosophers debate this!

It’s actually surprisingly hard to find an uncontroversial position in philosophy. Pretty much everything is debated. And of course this usually doesn’t matter – you don’t need philosophy to fill out a tax return or drive the kids to school, after all. But when you hold some foundational beliefs that seem self-evident, and you’re in a discussion with someone else who holds different foundational beliefs, which they also think are self-evident, problems start to arise. Philosophical debates usually consist of little more than two people talking past one another, with each wondering how the other could be so stupid as to not understand the sheer obviousness of what they’re saying. And the annoying thing is, both participants are correct – in their own framework, their positions probably are obvious. The problem is, we don’t all share the same framework, and in a setting like that, frustration is the default, not the exception.

This is not to say that all efforts to discuss philosophy are doomed, of course. People do sometimes have productive philosophical discussions, and the odd person even manages to change their mind, occasionally. But to do this takes a lot of effort. And when I say a lot of effort, I mean a lot of effort. To make progress philosophically you have to be willing to adopt a kind of extreme epistemic humility, where your intuitions count for very little. In fact, far from treating your intuitions as unquestionable givens, as most people do, you need to be treating them as things to be carefully examined and scrutinized with acute skepticism and even wariness. Your reaction to someone having a differing intuition from you should not be “I’m right and they’re wrong”, but rather “Huh, where does my intuition come from? Is it just a featureless feeling or can I break it down further and explain it to other people? Does it accord with my other intuitions? Why does person X have a different intuition, anyway?” And most importantly, you should be asking “Do I endorse or reject this intuition?”. In fact, you could probably say that the whole history of philosophy has been little more than an attempt by people to attain reflective equilibrium among their different intuitions – which of course can’t happen without the willingness to discard certain intuitions along the way when they conflict with others.

I guess what I’m trying to say is: when you’re discussing philosophy with someone and you have a disagreement, your foremost goal should be to try to find out exactly where your intuitions differ. And once you identify that, from there the immediate next step should be to zoom in on your intuitions – to figure out the source and content of the intuition as much as possible. Intuitions aren’t blank structureless feelings, as much as it might seem like they are. With enough introspection, intuitions can be explicated and elaborated upon, and described in some detail. They can even be passed on to other people, assuming at least some kind of basic common epistemological framework, which I do think all humans share (yes, even objective-reality-denying postmodernists).

Anyway, this whole concept of zooming in on intuitions seems like an important one to me, and one that hasn’t been emphasized enough in the intellectual circles I travel in. When someone doesn’t agree with some basic foundational belief that you have, you can’t just throw up your hands in despair – you have to persevere and figure out why they don’t agree. And this takes effort, which most people aren’t willing to expend when they already see their debate opponent as someone who’s being willfully stupid anyway. But – needless to say – no one thinks of their positions as being a result of willful stupidity. Pretty much everyone holds beliefs that seem obvious within the framework of their own worldview. So if you want to change someone’s mind with respect to some philosophical question or another, you’re going to have to dig deep and engage with their worldview. And this is a difficult thing to do.

Hence, the philosophical quagmire that we find our society to be in.

It strikes me that improving our ability to explain and discuss philosophy amongst one another should be of paramount importance to most intellectually serious people. This applies to AI risk, of course, but also to many everyday topics that we all discuss: feminism, geopolitics, environmentalism, what have you – pretty much everything we talk about grounds out to philosophy eventually, if you go deep enough or meta enough. And to the extent that we can’t discuss philosophy productively right now, we can’t make progress on many of these important issues.

I think philosophers should – to some extent – be ashamed of the state of their field right now. When you compare philosophy to science it’s clear that science has made great strides in explaining the contents of its findings to the general public, whereas philosophy has not. Philosophers seem to treat their field as being almost inconsequential, as if whatever they conclude won’t actually matter. But this clearly isn’t true – we need vastly improved discussion norms when it comes to philosophy, and we need far greater effort on the part of philosophers when it comes to explaining philosophy, and we need these things right now. Regardless of what you think about AI, the 21st century will clearly be fraught with difficult philosophical problems – from genetic engineering to the ethical treatment of animals to the problem of global poverty, it’s obvious that we will soon need philosophical answers, not just philosophical questions. Improvements in technology mean improvements in capability, and that means that things which were once merely thought experiments will be lifted into the realm of real experiments.

I think the problem that humanity faces in the 21st century is an unprecedented one. We’re faced with the task of actually solving philosophy, not just doing philosophy. And if I’m right about AI, then we have exactly one try to get it right. If we don’t, well…

Well, then the fate of humanity may literally hang in the balance.

Didacticism and the Ratchet of Progress

So my last two posts have focused on an idiosyncratic (and possibly nonsensical) view of creativity that I’ve developed over the past month or so. Under this picture, the intellect is divided into two categories: P-intelligence, which is the brain’s ability to perform simple procedural tasks, and NP-intelligence, which is the brain’s ability to creatively solve difficult problems. So far I’ve talked about how P- and NP-intelligence might vary independently in the brain, possibly giving rise to various personality differences, and how NP-intelligence is immune to introspection, which lies at the heart of what makes us label something as “creative”. In this post I’d like to zoom out a bit and talk about how the notion of P- and NP-intelligence can be applied on a larger, societal scale.

Before we can do that, though, we have to zoom in – zoom in to the brain, and look at how P- and NP-intelligence work together in it. I think something I didn’t emphasize quite enough in the previous two posts is the degree to which the two kinds of intelligence complement one another. P- and NP-intelligence are very different, but they form an intellectual team of sorts – they each handle different jobs in the brain, and together they allow us to bootstrap ourselves up to higher and higher levels of understanding.

What do I mean by bootstrapping? Well, I talked before about how the inner workings of NP-intelligence will always be opaque to your conscious self. Whenever you have a new thought or idea, it just seems to “bubble up” into consciousness, as if from nowhere. So there’s a sense in which NP-intelligence is “cut off” from consciousness – it’s behind a kind of introspective barrier.

By the same token, though, I think you could say that the reverse is also true – that it’s consciousness that is cut off from NP-intelligence. I picture NP-intelligence as essentially operating in the dark, metaphorically speaking. It goes through life stumbling about in what is, in effect, uncharted territory, trying to make sense of things that are at the limit of – or beyond – its grasp. And as a result, when your NP-intelligence generates some new idea or thought, it does so – well, not randomly, but…blindly.

I think that’s a good way of putting it: NP-intelligence is blind. When trying to solve some problem, your NP-intelligence has no idea in advance if the solution it will come up with will be a good one. How could it? We’re stipulating that the problems it deals with are too hard to immediately see the answer to. So your NP-intelligence is essentially reduced to educated guessing: How about this? Does this work? Okay, what about this? It offers up potential solutions, not knowing if they are correct or not.

And what exactly is it offering up these solutions to? Why, P-intelligence of course! P-intelligence may not be very bright – it could never actually solve the kind of problems NP-intelligence deals with – but it can certainly handle solution-checking. After all, solution-checking is easy. So when your NP-intelligence tosses up some half-baked idea that may be complete genius or may be utter stupidity (it doesn’t know), it’s your P-intelligence that evaluates the idea: No, that’s no good. Nope, total garbage. Yes, that works! Of course, it’s probably not just one-way communication – there’s probably also some interplay, some back and forth between the two: Yes, you’re getting better. No, not quite, but that’s close. Almost there, here’s what’s still wrong. By and large, though, P- and NP-intelligence form a cooperative duo in the brain in which they each stick to their own specialized niche: NP-intelligence is the suggester of ideas, and P-intelligence is the arbiter of suggestions.

That, in a nutshell, is my view of how we manage to make ourselves smarter over time. Your NP-intelligence is essentially an undiscriminating brainstormer, throwing everything it can think of at the wall to see what sticks, and your P-intelligence is an overseer that looks over what’s been thrown and ensures that only good things stick. Together they act as a kind of ratchet that lets good ideas accumulate in the brain and bad ones be forgotten.

Of course, saying that NP-intelligence acts “indiscriminately” is probably overstating things – that would amount to saying that NP-intelligence acts randomly, which is almost certainly not true. After all, while the above “ratchet” scheme would technically work with a random idea generator, in practice it would be far too slow to account for the rate at which humans manage to accumulate knowledge – it would probably take eons for a brain to “randomly” suggest something like General Relativity. No, NP-intelligence does not operate randomly, even if it does operate blindly – it suggests possible solutions using incredibly advanced heuristics that have evolved over millions of years, heuristics that are very much beyond our current understanding. And from the many ideas that these advanced heuristics generate, P-intelligence (faced only with the relatively simple task of identifying good ideas) is able to select the very best of them. The result? Whenever our NP-intelligence manages to cough up some brilliant new thought, our P-intelligence latches onto it, and uses it to haul ourselves up to a new rung on the ladder of understanding – which gives our NP-intelligence a new baseline from which to operate, allowing the process to begin all over again. Thus the ratchet turns, but only ever forward.
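If you’ll forgive a toy model, here’s roughly the loop I have in mind, sketched in Python. (The hidden-bitstring problem and all of the names here are purely my own illustration – a cartoon stand-in for whatever hard problem the brain is actually chewing on, not a claim about neuroscience.)

```python
import random

HIDDEN = [1, 0, 1, 1, 0, 0, 1, 0]  # the "right answer" -- opaque to the proposer

def check(candidate):
    """P-intelligence: cheap, reliable evaluation of a proposed solution."""
    return sum(c == h for c, h in zip(candidate, HIDDEN))

def propose(current):
    """NP-intelligence: blind variation -- flip a random bit and hope."""
    candidate = current[:]
    i = random.randrange(len(candidate))
    candidate[i] ^= 1
    return candidate

best = [0] * len(HIDDEN)
while check(best) < len(HIDDEN):
    guess = propose(best)           # the blind suggestion
    if check(guess) > check(best):  # the easy verification
        best = guess                # the ratchet clicks forward, never back
```

The proposer never knows whether a given flip helped – only the checker does. But because bad guesses get discarded and good ones get kept, the pair as a whole only ever moves forward.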

Now, there’s a sense in which what I’m saying is nothing new. After all, people have long recognized that “guess and check” is an important method of solving problems. But what this picture is saying is that at a fundamental level, guessing and checking is all there is. The only way problems ever get solved is by one heuristic or another in the brain suggesting a solution that it doesn’t know is correct, and then by another part of the brain taking advantage of the inherent (relative) easiness of checking to see if that solution is any good. There’s no other way it could work, really – unless you already know the answer, the creative leaps you make have to be blind. And what this suggests in turn is that the only reason people are able to learn and grow at all, indeed the only reason that humanity has been able to make the progress it has, is because of one tiny little fact: that it’s easier to verify a solution than it is to generate it. That’s all. That one little fact lies at the heart of everything – it’s what allows you to recognize good ideas outside your own current sphere of understanding, which is what allows that ratchet of progress to turn, which is what eventually gives you…the entire modern world, and all of its wonders. The Large Hadron Collider. Citizen Kane. Dollar Drink Days at McDonald’s. All of them stemming from that one little quirk of math.
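To see that quirk in its starkest form, consider a classic puzzle like subset sum – again, my example, chosen purely for illustration: given some numbers, find a subset that adds up to a target. Checking a claimed answer takes one pass; finding one from scratch can take exponentially many tries.

```python
from itertools import combinations

def verify(subset, target):
    """Checking a claimed solution: a single addition -- linear time."""
    return sum(subset) == target

def generate(nums, target):
    """Finding a solution from scratch: in the worst case, all 2^n
    subsets must be tried before one passes the check."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if verify(subset, target):
                return subset
    return None

print(verify((4, 5), 9))                  # True -- instant, no search needed
print(generate((3, 34, 4, 12, 5, 2), 9))  # (4, 5) -- found only by brute search
```

That asymmetry – verification cheap, generation expensive – is exactly what lets you recognize a good idea you could never have produced yourself, which is the whole trick the ratchet runs on.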

(Incidentally, when discussing artificial intelligence I often see people say things like, “Superintelligent AI is impossible. How can we make something that’s smarter than ourselves?” The above is my answer to those people. Making something smarter than yourself isn’t paradoxical – we make ourselves smarter every day when we learn. All it takes is a ratchet)

Now, I started off this post by saying I wanted to zoom out and look at P- and NP-intelligence on a societal level, and then proceeded to spend like 10 paragraphs doing the exact opposite of that. But we seem to have arrived back on topic in a somewhat natural way, and I’m going to pretend that was intentional. So: let’s talk about civilizational progress.

Civilization, clearly, has made a great deal of progress over the past few thousand years or so. This is most obvious in the case of science, but it also applies to things like art and morality and drink deals. And as I alluded to above, I think the process driving this progress is exactly the same as the one that drives individual progress: the ratchet of P- and NP-intelligence. Society progresses because people are continually tossing up new ideas (both good and bad) for evaluation, and because for any given idea, people can easily check it using their P-intelligence, and accept it only if it turns out to be a good idea. It’s the same guess-and-check procedure that I described above for individuals, except operating on a much wider scale.

(Of course, that the two processes would turn out similar is probably not all that surprising, given that society is made up of individuals)

But the interesting thing about civilizational progress, at least to my mind anyway, is the extent to which it doesn’t just consist of individuals making individual progress. One can imagine, in principle at least, a world in which all civilizational progress was due to people independently having insights and getting smarter on their own. In such a world, everyone would still be learning and growing (albeit likely at very different rates), and so humanity’s overall average understanding level would still be going up. But it would be a very different world from the one we live in now. In such a world, ideas and concepts would have to be independently reinvented by every member of the population before they could be made use of. If you wanted to use a phone you would have to be Alexander Graham Bell; if you wanted to do calculus, you would have to be Newton (or at the very least Leibniz).

Thankfully, our world does not work that way – in our world, ideas only have to be generated once. And the reason for this is the same tiny little fact I highlighted above, that beautiful asymmetry between guessing and checking. The fact that checking solutions is easy means that the second an idea has even been considered in someone’s head, the hard part is already over. Once that happens, once the idea has been lifted out of the vast, vast space of possible ideas we could be considering and promoted to our attention, then it just becomes a matter of evaluating it using P-intelligence – which other people can potentially do just as easily as the idea-generator. In other words, ideas are portable – when you come up with some new thought and you want to share it with someone else, they can understand it even if they couldn’t have thought of it themselves. So not only does every person have a ratchet of understanding, but that ratchet carries with it the potential to lift up all of humanity, and not just themselves.

Of course, while this means that humanity is able to generate a truly prodigious number of good ideas, and expand its sphere of knowledge at an almost terrifying rate, the flip side is that it’s pretty much impossible for any one person to keep up with all that new knowledge. Literally impossible, in fact, if you want to keep up with everything – scientific papers alone come out at a rate far faster than you could ever read them, and there are over 300 hours of video uploaded to YouTube every minute. But even if you just want to learn the basics of a few key subjects, and keep abreast of only the most important new theories and ideas, you’re still going to have a very tough time of it.

Luckily, there are two things working in your favour. The first is just what I’ve been talking about for this whole post – P vs NP-intelligence, and the fact that it’s much easier to understand someone else’s idea than it is to come up with it yourself. Of course, easier doesn’t necessarily mean easy – you still have to learn calculus, even if you don’t have to invent it – but this is what gives you a fighting chance. Our whole school system is essentially an attempt to take people starting from zero knowledge and bring them up to the frontiers of our understanding, and it’s no coincidence that it operates mostly based on P-intelligence. Oh sure, there are a few exceptions in the form of “investigative learning” activities – which attempt to “guide” students toward making relatively small creative leaps – but for the most part, school consists of teachers explaining things. And it pretty much has to be that way – unless you get really good at guiding, it’s going to take far too long for students to generate on their own everything that they need to learn. After all, it took our best minds centuries to work all of this “science” stuff out. How’s the average kid supposed to do it in 12 years?

So that’s the first thing working in your favour – for the most part when you’re trying to learn, you “merely” have to make use of your P-intelligence to understand the subject matter, which in principle allows you to make progress much faster than the people who originally came up with it all.

The second thing working in your favour, and the reason I actually started writing this post in the first place (which at 2100 words before a mention is probably a record for me) is didacticism.

So, thus far I’ve portrayed the whole process of civilizational progress in a somewhat…overly simplistic manner. I’ve painted a picture in which the second an individual comes up with a brilliant new idea (or theory/insight/solution/whatever), they and everyone else in the world are instantly able to see that it’s a good one. I’ve sort of implied (if not stated) that checking solutions is not just easier than generating them, it’s actually fundamentally easy, and that everyone can do it. And to the extent that I’ve implied these things, I’ve probably overstated my case (but hey, in my defense it’s really easy to get swept up in an idea when you’re writing about it). I think I actually put things better in my first post about this whole mess – there I described P-intelligence as something that develops as you get older, and that can vary from person to person. And from that perspective it’s easy to see how, depending on your level of P-intelligence and the complexity of the idea in question, “checking” it could be anything from trivially easy to practically impossible. It all depends on whether the idea falls within the scope of your P-intelligence, or just at its limits, or well beyond it.

(Mind you, the important stuff I wrote about above still goes through regardless. As long as checking is easier than guessing – even if it’s not easy in an absolute sense – then the ratchet can still turn)

Anyway, so what does this all have to do with didacticism? Well, I view didacticism as a heroic, noble, and thus far bizarrely successful attempt to take humanity’s best ideas and bring them within the scope of more and more people’s P-intelligence. The longer an idea has been around the better we get at explaining it, and the more people there are who can understand it.

My view of that process is something like the following: when someone first comes up with a genuinely new idea (let’s say we’re talking about a physicist and they’ve come up with a new theory, since that’s what I’m most familiar with), initially there are going to be very few people who can understand it. Maybe a few others working in the same sub-sub-field of physics can figure it out, but probably not anyone else. So those few physicists get to work understanding the theory, and eventually after some work they’re able to better explain it to a wider audience – so now maybe everyone in their sub-field understands it. Then all those physicists get to work further clarifying the theory, and further working out the best way to explain it, and eventually everyone in that entire field of physics is able to understand it. And it’s around that point, assuming the theory is important enough and enough results have accumulated around it, that textbooks start getting written and graduate courses start being taught.

That’s an important turning point. When you’ve reached the course and textbook stage, that means you’ve gotten the theory to the point where it can be reliably learned by students – you’ve managed to mass-produce the teaching of the theory, at least to some extent. And from there it just keeps going – people come up with new teaching tricks, or better ways of looking at the theory, and it gets pushed down to upper-year undergraduate courses, and then possibly to lower-year undergraduate courses, and eventually (depending on how fundamentally complicated the theory is, and how well it fits into the curriculum) maybe even to high school. At every step along the way of this process, the wheel of didacticism turns, our explanations get better and better, and the science trickles down.

This isn’t just all hypothetical, mind you – you can actually see this process happening. Take my own research, which is on photonic crystals. Photonic crystals were first proposed in 1987, the first textbook on them was published in 1995, and just two years ago I sat in on a special photonic crystal graduate course, probably one of the first. So the didactic process is well on its way for photonic crystals – in fact, the only thing holding the subject back right now is that it’s of relatively narrow interest to physicists. If photonic crystals start being used in more applications, and gain importance to a wider range of physicists, then I would be shocked if they weren’t pushed down to upper-year undergraduate courses. They’re certainly no more complicated than anything else that’s taught at that level.

Or, if you’d like a more well-known example, take Special Relativity. Special Relativity is a notoriously counterintuitive and confusing subject in physics, and students always have trouble learning it. However, for such a bewildering, mind-bending theory it’s actually quite simple at heart – and so it stands the most to gain from good teaching and the didactic process in general. This is reflected in the courses it gets taught in. I myself am a TA in a course that teaches Special Relativity, and it’s not an upper-year course like you might expect – it’s actually a first-year physics course. And not only that, it’s actually a first-year physics course for life-science majors. The students in this course are bright kids, to be sure, but physics is not their specialty and most of them are only taking the course because they need it to get into med school. And yet every year we teach them Special Relativity, and it’s actually done at a far less superficial level than you might expect. Granted, I’m not sure how much they get out of it – but the fact that it keeps getting taught in the course year after year puts an upper bound on how ineffective it could be.

Think about what that means – it means that didacticism, in a little over a hundred years, has managed to take Special Relativity from “literally only the smartest person in the world understands this” to “eh, let’s teach it to some 18-year-olds who don’t even really like physics”. It’s a really powerful force, in other words.

And not only that, but it’s actually even more powerful than it seems. The process I described above, of a theory gradually working its way down the scholastic totem pole, is only the most obvious kind of didacticism. There’s also a much subtler process – call it implicit didacticism – whereby theories manage to somehow seep into the broader cultural awareness of a society, even among those who aren’t explicitly taught the theory. A classic example of this is how, after Newton formulated his famous laws of motion, the idea of a clockwork universe suddenly gained in popularity. Of course, no doubt some people who proposed the clockwork universe idea knew of Newton’s laws and were explicitly drawing inspiration from them – but I think it’s also very likely that many proponents of the clockwork universe were ignorant of the laws themselves. Instead, the laws caused a shift in the way people thought and talked that made a mechanistic universe seem more obvious. In fact, I know this sort of thing happened, because I myself “came up” with the clockwork universe idea when I was only 14 or so, before I had taken any physics courses or knew what Newton’s laws were. And I take no credit for “independently” inventing the idea, of course, because in some sense I had already been exposed to it and had absorbed it by osmosis – it was already out there, had already altered our language in imperceptible ways that made it easier to “invent”. Science permeates our culture and affects it in very nonobvious ways, and it’s hard to overestimate how much of an effect this has on our thinking. Steven Pinker talks about much the same idea in The Better Angels of Our Nature while describing a possible cause of the Flynn effect (the secular rise in IQ scores in most developed nations over the past century or so):

And, Flynn suggests, the mindset of science trickled down to everyday discourse in the form of shorthand abstractions. A shorthand abstraction is a hard-won tool of technical analysis that, once grasped, allows people to effortlessly manipulate abstract relationships. Anyone capable of reading this book, even without training in science or philosophy, has probably assimilated hundreds of these abstractions from casual reading, conversation, and exposure to the media, including proportional, percentage, correlation, causation, [ . . . ] and cost-benefit analysis. Yet each of them—even a concept as second-nature to us as percentage—at one time trickled down from the academy and other highbrow sources and increased in popularity in printed usage over the course of the 20th century.

-Steven Pinker, The Better Angels of Our Nature, p. 889

I have no idea if he’s right about the Flynn Effect, but what’s undoubtedly true is that right now we live in the most scientifically literate society to have ever existed. The average person knows far more about science (and more importantly, knows far more about good thinking techniques) than at any point in history. And if that notion seems wrong to you, if you’re more prone to associating modern day society with reality TV and dumbed-down pop music and people using #txtspeak, then…well, maybe you should raise your opinion of modern day society a little. A hundred years ago you wouldn’t have been able to just say “correlation does not imply causation”, or “you’ll hit diminishing returns” and assume that everyone would know what you were talking about. Heck, you wouldn’t have been able to read a blog post like this one, written by a complete layperson, even if blogging had existed back then.

All of which is to say: didacticism is a pretty marvelous thing, and we all owe the teachers and explainers of the world a debt of gratitude for what they do. So I say to them all now: thank you! This blog couldn’t exist without you.

When I put up this site I actually wrote at the outset that I didn’t want it to be a didactic blog. And while I do still hold that opinion, I’m much less certain of it than I used to be – I see now the value of didacticism in a way that I didn’t before. So I could see myself writing popular articles in the future, if not here then perhaps somewhere else. In some ways it’s hard, thankless work, but it really does bring great value to the world.

And hey, I know just the subject to write about…

[Up next: AI risk and how it pertains to didacticism. I was just going to include it here, but this piece was already getting pretty long, and it seems more self-contained this way anyway. So you’ll have to wait at least a few more days to find out why I think we’re all doomed]