[A note: this post may be more controversial than is usual for me? I’m not sure; I lost my ability to perceive controversy years ago in a tragic sarcasm-detector explosion]
[Another note: this post is long]
Consider the following sentence:
“Some people are funnier than others.”
I don’t think many people would take issue with this statement – it’s a fairly innocuous thing to say. You might quibble about humour being subjective or something, but by and large you’d still likely agree with the sentiment.
Now imagine you said this to someone and they indignantly responded with the following:
“You can’t say that for sure – there are different types of humour! Everyone has different talents: some people are good at observational comedy, and some people are good at puns or slapstick. Also, most so-called “comedians” are only “stand-up funny” – they can’t make you laugh in real life. Plus, just because you’re funny doesn’t mean you’re fun to be around. I have a friend who’s not funny at all but he’s really nice, and I’d hang out with him over a comedian who’s a jerk any day. Besides, no one’s been able to define funniness anyway, or precisely measure it. Who’s to say it even exists?”
I don’t know about you, but I would probably be pretty confused by such a response. It seems to consist of false dichotomies, unjustified assumptions, and plain non-sequiturs. It just doesn’t sound like anything anyone would ever say about funniness.
On the other hand, it sounds exactly like something someone might say in response to a very similar statement.
“Some people are more intelligent than others.”
“You can’t say that for sure – there are different types of intelligence! Everyone has different talents: some people have visual-spatial intelligence, and some people have musical-rhythmic intelligence. Also, most so-called “intellectuals” only have “book-smarts” – they can’t solve problems in the real world. Plus, just because you’re smart doesn’t mean you’re a hard worker. I have a friend who’s not very bright but he works really hard, and I’d choose him over a lazy brainiac any day. Besides, no one’s been able to define intelligence anyway, or precisely measure it. Who’s to say it even exists?”
Sound more familiar?
The interesting thing is, you don’t always get a response like that when talking about intelligence.
Quick – think about someone smarter than yourself.
Pretty easy, right? I’m sure you came up with someone. Okay, now think of someone less smart than you. Also not too hard, I bet. When you forget about all the philosophical considerations, when there’s no grand moral principle at stake – when you just think about your ordinary, everyday life, in other words – it becomes a lot harder to deny that intelligence exists. Maybe Ray from accounting is a little slow on the uptake, maybe your friend Tina always struck you as sharp – whatever. The point is, we all know people who are either quicker or thicker than ourselves. In that sense almost everyone acknowledges, at least tacitly, the existence of intelligence differences.
It’s only when one gets into a debate about intelligence that things change. In a debate concrete observations and personal experiences seem to give way to abstract considerations. Suddenly intelligence becomes so multifaceted and intangible that it couldn’t possibly be quantified. Suddenly anyone who’s intelligent has to have some commensurate failing, like laziness or naivete. Suddenly it’s impossible to reason about intelligence without an exact and exception-free definition for it.
Now, to be clear, I’m not saying a reasonable person couldn’t hold these positions. While I think they’re the wrong positions, they’re not prima facie absurd or anything. And who knows, maybe intelligence really is too ill-defined to discuss meaningfully. But if you’re going to hold a position like that then you should hold it consistently – and the debaters who say you can’t meaningfully talk about intelligence often seem perfectly willing to go home after the debate and call Ray from accounting an idiot.
My point, stated plainly, is this: people act as if intelligence exists and is meaningful in their everyday life, but are reluctant to admit that when arguing or debating about intelligence. Moreover, there’s a specific tendency in such debates to downplay intelligence differences between people, usually by presuming the existence of some trade-off – if you’re smart then you can’t be diligent and hardworking, or you have to be a bad person, or you have to have only a very circumscribed intelligence (e.g. book smarts) while being clueless in other areas.
This is a weird enough pattern to warrant commenting on. After all, as I pointed out in the opening, most people don’t tie themselves up in knots trying to argue that funniness doesn’t exist, or that everyone is funny in their own way. Why the particular reticence to admit that someone can be smart? And why is this reticence almost exclusively limited to debates and arguments? And – most importantly – why is there this odd tendency to deny differences by assuming trade-offs? I can think of at least three answers to these questions, each of which is worth going into.
First, let’s address this abstract-concrete distinction I’ve been alluding to. People seem to hold different beliefs about intelligence when they’re dealing with a concrete, real-life situation than they do when talking about it abstractly in a debate. This is a very common pattern actually, and it doesn’t just apply to intelligence. It goes by a few names: construal level theory in academic circles, and near/far thinking as popularized by Robin Hanson of Overcoming Bias. Hanson in particular has written a great deal about the near/far distinction over the years – here’s a summary of his thinking on the topic:
The human mind seems to have different “near” and “far” mental systems, apparently implemented in distinct brain regions, for detail versus abstract reasoning. Activating one of these systems on a topic for any reason makes other activations of that system on that topic more likely; all near thinking tends to evoke other near thinking, while all far thinking tends to evoke other far thinking.
These different human mental systems tend to be inconsistent in giving systematically different estimates to the same questions, and these inconsistencies seem too strong and patterned to be all accidental. Our concrete day-to-day decisions rely more on near thinking, while our professed basic values and social opinions, especially regarding fiction, rely more on far thinking. Near thinking better helps us work out complex details of how to actually get things done, while far thinking better presents our identity and values to others. Of course we aren’t very aware of this hypocrisy, as that would undermine its purpose; so we habitually assume near and far thoughts are more consistent than they are.
The basic idea is, people tend to have two modes of thought: near and far. In near mode we focus on the close at hand, the concrete, and the realistic. Details and practical constraints are heavily emphasized while ideals are de-emphasized. Near mode is the hired farmhand of our mental retinue: pragmatic, interested in getting the job done, and largely unconcerned with the wider world or any grand moral principles.
In contrast, far mode focuses more on events distant in space or time, the abstract over the concrete, and the idealistic over the realistic. Thinking in far mode is done in broad strokes, skipping over boring logistics to arrive at the big picture. Far mode is the brain’s wide-eyed dreamer: romantic, visionary, and concerned with the way the world could be or should be rather than the way it is.
The near/far distinction is an incredibly useful concept to have crystallized in your brain – once you’ve been introduced to it you see it popping up all over the place. I’ve found that it helps to make sense of a wide variety of otherwise inexplicable human behaviours – most obviously, why I find myself (and other people) so often taking actions that aren’t in accordance with our professed ideals (actions are near, ideals are far). And if you’re ever arguing with someone and they seem to be saying obviously wrongheaded things, it’s often useful to take a step back and consider whether they might simply be thinking near while you’re thinking far, or vice versa.
The applicability of the near/far idea to the above discussion should be obvious, I think. In a debate you’re dealing with a subject very abstractly, you face relatively low real-world stakes, and you have an excellent opportunity to display your fair-minded and virtuous nature to anyone watching. In other words, debates: super far. So naturally people tend to take on a far-mode view of intelligence when debating about it. And a far mode view of intelligence will of course accord with our ideals, foremost among them being egalitarianism. We should therefore expect debaters to push a picture of intelligence that emphasizes equality: no one’s more intelligent than anyone else, everyone’s intelligent in their own way, if you’re less intelligent you must have other talents or a better work ethic, etc. In other words, people wind up saying things that are optimized for values rather than truth, and the result is a debate that – in both the Hansonesque and Larsonesque senses – is way off to the far side.
Of course, just because a thought comes from far mode thinking doesn’t mean it’s wrong – both modes have their advantages and disadvantages. But all else being equal we should expect near mode thinking to be more accurate than far, simply because it’s more grounded in reality. A screwup in near mode is likely to have much worse consequences than a screwup in far mode, and this keeps near mode relatively honest. For example, a bad performance review at your job could be the difference between you getting a promotion and not getting a promotion. (Or something? I’ve never actually had a real job before) In that case it might be very relevant for you to know if Ray from accounting is going to make some mistake that will affect you and your work. And when he does make a mistake, you’re not going to be thinking about how he probably has some other special talent that makes up for it – you’re going to get Barry to double-check the rest of his damn work.
Near mode isn’t infallible, of course – for example, near mode emphasizes things we can picture concretely, and thus can lead us to overestimate the importance of a risk that is low probability but has high salience to us, like a plane crash. In a case like that far thinking could be more rational. Importantly, though, this near mode failure comes from a lack of real-world experiential feedback, not an excess. When near mode has a proper grounding in reality, as it usually does, it’s wise to heed its word.
Let’s go back to my (admittedly made up) response in Section I for a second. I want to focus on one notion in particular that was brought up:
“… [J]ust because you’re smart doesn’t mean you’re a hard worker. I have a friend who’s not very bright but he works really hard, and I’d choose him over a lazy brainiac any day.”
On the face of it, this is a very strange thing to say. It’s a complete non-sequitur, at least formally: totally unrelated to the proposition at hand, which was that some people are more intelligent than others. In fact, if you’ll notice, our interlocutor has gone beyond a non-sequitur and actually conceded the point in this case – they’ve acknowledged that people can be more or less intelligent. So what’s going on here?
First I should emphasize that, although I made this response up, I’ve seen people make exactly this argument many times before. I’ve seen it in relation to intelligence and hard work, and I’ve seen it in a plethora of other contexts:
“Do you prefer more attractive girls?”/”Well I’d rather a plain girl with a great personality than someone who’s really hot but totally clueless.”
“Movie special effects have gotten so much better!”/”Yeah but I’d prefer a good story over good CGI any day.”
“I hate winter!”/”Sure, but better it be cold and clear than mild and messy.”
What do all these responses have in common? They all concede the main point at hand but imply that it doesn’t matter, and they all do so by bringing up a new variable and then assuming a trade-off. The trade-off hasn’t been argued for or shown to follow, mind you – it’s just been assumed. And in all cases one is left to wonder: why not both? Why not someone who’s intelligent and hardworking? Why not a day that’s warm and sunny?
(Although to be fair arguments become a lot more fun when you can just postulate any trade-off you want: “You just stole my purse!”/”Ah yes, but wouldn’t you rather I steal your purse, than I not steal your purse and GIANT ALIEN ZOMBIES TORTURE US ALL TO DEATH?”)
I’ve always found this tactic annoying, and I see it frequently enough that I couldn’t help but give it a name. I’ve taken to calling it the RPG fallacy.
Roughly speaking, I would define the RPG fallacy as “assuming without justification that because something has a given positive characteristic, it’s less likely to have another unrelated positive characteristic.” It gets its name from a common mechanic in role-playing games (RPGs) where a character at the start of the game has a number of different possible skills (e.g. strength, speed, charisma, intelligence, etc.) and a fixed number of points to be allotted between them. It’s then up to you as the player to decide where you want to put those points. Note that this system guarantees that skills will be negatively correlated with one another: you can only give points to a certain skill at the expense of others. So, for example, you can have a character who’s very strong, but only at the expense of making her unintelligent and slow. Or you could make your character very agile and dexterous, but only if you’re fine with him being an awkward weakling. The fixed number of points means that for all intents and purposes, no character will be better than any other: they can be amazing in one area and completely useless in the others, or they can be pretty good in a few areas and not-so-great in the others, or they can be mediocre across the board, but they can’t be uniformly good or uniformly bad. They will always have a strength to compensate for any particular weakness.
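The point-buy mechanic, and the negative correlation it guarantees, can be sketched in a few lines of code. (This is a toy illustration, not taken from any particular game – the skill names and the 20-point budget are made up for the example.)

```python
import random

random.seed(1)

SKILLS = ["strength", "speed", "charisma", "intelligence"]
BUDGET = 20  # every character gets exactly this many points, no more

def roll_character():
    # Distribute the fixed budget one point at a time, uniformly at random.
    stats = {s: 0 for s in SKILLS}
    for _ in range(BUDGET):
        stats[random.choice(SKILLS)] += 1
    return stats

def corr(xs, ys):
    # Pearson correlation coefficient, computed by hand to stay dependency-free.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

chars = [roll_character() for _ in range(50_000)]
strength = [c["strength"] for c in chars]
intellect = [c["intelligence"] for c in chars]

# Negative, purely as an artifact of the fixed budget: every point of
# strength is a point that couldn't go to intelligence.
print(corr(strength, intellect))
```

With four skills sharing one budget, the correlation between any two of them works out to about -1/3 – not because the skills have anything to do with each other, but because the budget is zero-sum.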
The RPG fallacy, then, is importing this thinking into the real world (as in the above examples). It’s a fallacy because we don’t have any particular reason to believe that reality is like a video game: we’re not given a fixed number of “points” when we’re born. No, we may wish it weren’t so, but the truth is that some people get far more points than others. These are the people who speak four languages, and have written five books, and know how to play the banjo and the flute, and have run a three-hour marathon, and are charming and witty and friendly and generous to boot. People like this really do exist, and unfortunately so do their counterparts at the opposite end of the spectrum. And once one begins to comprehend the magnitude of the injustice this entails – how incredibly unfair reality can be – one starts to see why people might be inclined to commit the RPG fallacy.
(We should try to resist this inclination, of course. Burying our heads in the sand and ignoring the issue might make us feel better, but it won’t help us actually solve the problem. This is one of the many reasons I consider myself a transhumanist)
It’s important to emphasize again that the RPG fallacy is only a fallacy absent some reason to think that the trade-off holds. If you do have such a reason, then there’s of course nothing wrong with talking about trade-offs. After all, plenty of trade-offs really do exist in the real world. For example, if someone devotes their entire life to learning about, say, Ancient Greece, then they probably won’t be able to also become a world-class expert on computer microarchitectures. This is a real trade-off, and we certainly wouldn’t want to shut down all discussion of it as fallacious. The reason this case would get a pass is because we can identify why the trade-off might exist – in this case because there’s a quantity analogous to RPG skill points, namely the fixed number of hours in a day (if you spend an hour reading about Plato’s theory of forms, that’s an hour you can’t spend learning about instruction pipelining and caching). Thus expecting trade-offs in expertise to hold in real life wouldn’t be an example of the RPG fallacy.
But even here we have to be careful. Expertise trade-offs will only hold to the extent that there really is a fixed amount of learning being divvied up. And while it’s certainly true that one has to decide between studying A and studying B on a given day, it’s also true that some people decide to study neither A nor B – nor much of anything, really. Moreover, some people are simply more efficient learners: they can absorb more in an hour than you or I could in a day. These give us some reasons to think that expertise trade-offs need not hold in all cases – and indeed, in the real world some people do seem to be more knowledgeable than others across a broad range of domains. This illustrates why it’s so important to be aware of the RPG fallacy – unless you’re taking a long hard look at exactly what constraints give rise to your proposed trade-off, it’s all too easy to just wave your hands and say “oh, everyone will have their own talents and skills.”
So that’s the second answer to our earlier questions. Why do people keep bringing up seemingly irrelevant trade-offs when talking about intelligence? Because they occupy an implicit mindset in which aptitude in one area comes at the expense of aptitude in another, and want to emphasize that being intelligent would therefore come at a cost. This in turn is related to the earlier discussion of far mode and its focus on egalitarianism and equality – if everyone has the same number of skill points, then no one’s any better than anyone else. We could probably classify both of these under the heading of “just-world” thinking – believing the world is a certain way because it would be fairer that way. Unfortunately, so far as I’ve been able to tell, if there is a game designer for this world he doesn’t seem much concerned with fairness.
Now we come to the most speculative (and not coincidentally, the most interesting) part of this piece. So far I’ve been essentially taking it as given that the trade-offs people bring up when discussing intelligence (e.g. one type of intelligence versus another, or being smart versus being a hard worker) are fictitious. But perhaps I’m being unfair – might there not be a grain of truth in these kinds of assertions?
Well, maybe. A small grain, anyway. There might be reason to think that even if these trade-offs don’t actually exist in the real world, people will still perceive them in their day-to-day lives – phantom correlations, if you like.
Let’s consider the same example we looked at last section, of hard work and intelligence. We want to examine why people might wind up perceiving a negative correlation between these qualities. For the sake of discussion we won’t assume that they’re actually negatively correlated – in fact, to make it interesting we’ll assume that they’re positively correlated. And we’ll further assume for simplicity that intelligence and hardworking-ness are the only two qualities people care about, and that we can quantify them perfectly with a single number, say from 0-100 (man, we really need a better word for the quality of being hardworking – I’m going to go with diligence for now even though it doesn’t quite fit). If we were to then take a large group of people and draw a scatterplot of their intelligence and diligence values, it might look something like this:
With me so far? Okay, now the interesting part. In many respects people tend to travel in a fairly constrained social circle. This is most obvious in regards to one’s job: a significant fraction of one’s time is typically spent around coworkers. Now let’s say you work at a fairly ordinary company – respectable, but not outrageously successful either. The hiring manager at your company is then going to be faced with a dilemma – of course they’d like to hire people who are both hardworking and intelligent, but such people always get scooped up by better companies that are willing to pay more. So they compromise: they’re willing to accept a fairly lazy person so long as she’s intelligent, and they’ll accept someone who’s not so bright if he’ll work hard enough to make up for it. Certainly though they don’t want to hire someone who’s both lazy and unintelligent. In practice, since in this world intelligence and diligence can be exactly quantified, they’d likely end up setting a threshold: anyone with a combined diligence+intelligence score of over, say, 100 will be hired. And since the better companies will also be setting up similar but higher thresholds, it might be that your company will be unable to hire anyone with a combined score over, say, 120.
So: for the people working around you, intelligence+diligence scores will always be somewhere between 100 and 120. Now what happens if we draw our scatterplot again, but restrict our attention to people with combined scores between 100 and 120? We get the following:
And lo and behold, we see a negative correlation! This despite the fact that intelligence and diligence were assumed to be positively correlated at the outset. It’s obvious what’s going on here: by restricting the range from 100 to 120, your company ensures that people can only be hired by either being highly intelligent and not very diligent, or highly diligent and not very intelligent (or perhaps average for both). The rest of the population will be invisible to you: the people who are both intelligent and diligent will end up at some star-studded Google-esque company, and the people who are neither will end up at a lower-paying, less prestigious job. And so you’ll walk around the office at work and see a bunch of people of approximately the same skill level, some smart but lazy and some dull but highly motivated, and you’ll think, “Man, it sure looks like there’s a trade-off between being intelligent and being hardworking.”
As I said, phantom correlations.
We can apply this idea to the real world. Things are of course much messier in reality: companies care about far more than two qualities, few of the qualities they do care about can be quantified precisely, and for various reasons they probably end up accepting people with a wider range of skill levels than the narrow slice I considered above. All of these things will weaken the perceived negative correlation. But the basic intuition still goes through I think: you shouldn’t expect to see anyone completely amazing at your company – if they were that amazing, they’d probably have a better job. And similarly you shouldn’t expect to see someone totally useless across the board – if they were that useless, they wouldn’t have been hired. The range ends up restricted to people of broadly similar skill levels, and so the only time you’ll see a coworker who’s outstanding in one area is when they’re less-than-stellar in another.
Or, to take what might be a better example, consider an average university. A university is actually more like our hypothetical company above, in that they tend to care about easily quantifiable things like grades or standardized test scores. And they would probably have a similarly restricted range: the really good students would wind up going to a better university, and the really bad ones probably wouldn’t apply in the first place. So maybe in practice it would turn out that they could only reliably get students with averages between 80 and 90. In that case you would see exactly the same trade-off that we saw above: the people with grades in that range get them by being either hardworking or smart – but not both, and not neither. If they had both qualities they’d have higher grades, and if they had neither they’d have lower. So again: trade-offs abound.
Now, how much can this effect explain in real life? Maybe not a whole lot. For one, we have the messy complications I mentioned above, which will tend to widen the overall skill range of people at a given institution and therefore weaken the effect. More important, though, is the fact that people simply don’t spend 100% of their time with coworkers. They also have family and friends, not to mention (for most people) a decade plus in the public school system. That’s probably the biggest weakness of this theory: school. The people you went to school with were probably a fairly representative sample of the population, and not pre-selected to have a certain overall skill level. So in theory one should wind up with fairly accurate views about correlations and trade-offs between characteristics simply by going through high school and looking around.
Still, I think this is an interesting idea, and I think it could explain at least a piece of the puzzle at hand. Moreover, it seems to me to be an idea gesturing towards a broader truth: that trade-offs might be a kind of “natural” state that many systems tend towards – that in some sense, “only trade-offs survive”. Scott Alexander of Slate Star Codex had much the same idea (he even used the university example), and I want to explore it further in a future post.
But I guess the future can wait until later, and right now it’s probably time I got around to wrapping this post up.
So: if you’ll recall, we started out by pondering why debates about intelligence always turn so contentious, and why people have a tendency to assume that intelligence has to be traded off against other qualities. We considered some possible explanations: one was far mode thinking, and its tendency to view intelligence as a threat to equality. Another was the RPG fallacy, an implicit belief that everyone has a set number of “points” to be allocated between skills, necessitating trade-offs. And now we have our third explanation: that people tend to not interact much with those who have very high or very low overall skill levels, resulting in an apparent trade-off between skills among the people they do see.
These three together go a long way towards explaining what was – to me, anyway – a very confusing phenomenon. They may not give the whole story of what’s going on with our culture’s strange, love/hate relationship with intelligence – I think you’d need a book to do that justice – but they’re at least a start.
I want to close with a sort of pseudo-apology.
By nature I tend to be much less interested in object-level questions like “What is intelligence?”, and much more interested in meta-level questions like “Why do people think what they do about intelligence?”. I just usually find that examining the thought processes behind a belief is much more interesting (and often more productive) than debating the belief itself.
But this can be dangerous. The rationalist community, which I consider myself loosely a part of, is often perceived as being arrogant. And I think a lot of that perception comes from this interest we tend to have in thought processes and meta-level questions. After all, if someone makes a claim in a debate and you respond by starting to dissect why they believe it – rather than engaging with the belief directly – then you’re going to be seen as dismissive and condescending. You’re not treating their position as something to be considered and responded to, you’re treating it as something to be explained away. Thus, the arrogant rationalist stereotype:
“Oh, obviously you only think that because of cognitive bias X, or because you’re committing fallacy Y, or because you’re thinking in Z mode. I’ve learned about all these biases and therefore transcended them and therefore I’m perfectly rational and therefore you should listen to me.”
It dismays me that the rationalist community gets perceived this way, because I really don’t think that’s how most people in the community think. Scott Alexander put it wonderfully:
A Christian proverb says: “The Church is not a country club for saints, but a hospital for sinners”. Likewise, the rationalist community is not an ivory tower for people with no biases or strong emotional reactions, it’s a dojo for people learning to resist them.
We’re all sinners here. You don’t “transcend” biases by learning about them, any more than you “transcended” your neurons when you found out that was how your brain thought. I’ve known about far mode for years and I’m just as susceptible to slipping into it as anyone else – maybe even more so, since I tend to be quite idealistic. The point of studying rationality isn’t to stop your brain from committing fallacies and engaging in cognitive biases – you could sooner get the brain to stop thinking with neurons. No, the point of rationality is noticing when your brain makes these errors, and then hopefully doing something to correct for them.
That’s what I was trying to do with this piece. I was trying to draw out some common patterns of thought that I’ve seen people use, in the hope that they would then be critically self-examined. And I was trying to gently nudge people towards greater consistency by pointing out some contradictions that seemed to be entailed by their beliefs. But I was not trying to say “Haha, you’re so dumb for thinking in far mode.”
It’s very easy to always take the meta-road in a debate. You get to judge everyone else for their flawed thinking, while never putting forward any concrete positions of your own to be criticized. And you get to position yourself as the enlightened thinker, who has moved beyond the petty squabbles and arguments of mere mortals. This may not be the intention – I think most rationalists go meta because they really are just interested in how people think – but that’s how it comes across. And I think this can be a real problem in the rationalist community.
So in the spirit of trying to be a little less meta, I thought I’d end by giving my beliefs about intelligence – my ordinary, run-of-the-mill object-level beliefs. That way I’m putting an actual position up for criticism, and you can all feel free to analyze why it is I hold those beliefs, and figure out which cognitive biases I must be suffering from, and in general just go as meta on me as you want. It seems only fair.
Thus, without further ado…
- I think intelligence exists, is meaningful, and is important
- I think one can talk about intelligence existing in approximately the same way one talks about funniness existing, or athleticism
- I think IQ tests, while obviously imperfect, capture a decently large fraction of what we mean when we talk about intelligence
- Accordingly, I think most people are far too dismissive of IQ tests
- I think the notion of “book-smarts” is largely bunk – in my experience intelligent people are just intelligent, and the academically gifted usually also seem pretty smart in “real life”
- With that being said, some people of course do conform to the “book-smart” stereotype
- I think that, all else being equal, having more intelligence is a good thing
- I suppose the above means I sort of think that “intelligent people are better than other people” – but only in the sense that I also think friendly, compassionate, funny, generous, witty and empathetic people are better than other people. Mostly I just think that good qualities are good.
- I think it’s a great tragedy that some people end up with low intelligence, and I think in an ideal world we would all be able to raise our intelligence to whatever level we wanted – but no higher
So there: that’s what I believe.
Other anagrams considered (but ultimately rejected) for use as a blog title:
-The Softer Pens
Decent, but the plural on “pens” ruins it. Plus, it lacks the appealing abstractness of what I ended up with.
-Three Soft Pens
Also not bad, but sadly inapplicable as there’s only one of me, not three. For now.
-Tent of Spheres
Pleasingly geometric if nothing else.
-She Oft Repents
The sentiment is there; the gender is not.
-Pent For Theses
A little too on the nose I think. Look, I’ll graduate when I graduate! Stop bugging me!
-Soft Serpent, Eh?
Wait, what are you implying here exactly?
-Not Herpes Fest
Well okay but I don’t see why you would feel the need to specify –
-Oft Nets Herpes
Hey now that’s not very nice
-Tents of Herpes
Oh come on it’s not tents
-Fresh Tot Peens
Okay now you’re just trying to get me on a watchlist, aren’t you?
Among my New Year’s resolutions for 2015 is to start blogging more. Like many resolutions this one is admirable in its intent, dubious in its attainability, and ultimately narcissistic in its motivation. But hey, that’s never stopped anyone before. Rather than just continuing on with business as usual at my old blog, I decided to start up a new one with a dedicated domain name. I did this because I thought a fresh start might do me good, and more importantly because I thought paying for a domain might motivate me to post more.
(Plus, have you ever owned a domain name? It’s really cool)
The name of this blog is The Pen Forests. It doesn’t mean anything in particular, but it’s an anagram of my own name that I discovered years ago, and I’ve always been fond of the imagery it evokes. Incidentally, I realize I might have just given up enough information to prevent me from keeping this blog fully anonymous, but I don’t think that was ever in the cards. I do wonder, though, how many other names are anagrams of my own. For what it’s worth my name is not Seth F. Peterson.
The theme of this blog will continue to be whatever happens to interest me. If history is any indication, topics will likely include a mix of philosophy, rationality, and science. This will not be a didactic blog: I don’t intend to write pop science articles and explain quantum mechanics to the layman. Those are admirable goals, but they’re not for me. I want to write at the boundaries of my knowledge; at the outermost limits of my confusion. If I write a post it’ll be because I just figured something out, or I’m trying to figure something out by writing about it.
For better or for worse, blogging thrives on contrarianism. If you’re not going against some kind of existing grain then no one pays attention to you. At its best, this dynamic encourages bloggers to push back when public opinion has swung too far in one direction. At its worst, it incentivizes writers to use convoluted, too-clever logic to come up with reasons why some common sense truth is actually false. I will try to walk the fine line between being contrarian enough to be interesting and not so contrarian as to be stupid. I hope you can all appreciate that it’s a harder line to walk than you might think.
The motto of this blog will be “Impassioned Reason”. If that sounds like a contradiction in terms to you, then I haven’t written enough yet. Do stay tuned.
With some reluctance I will link you now to my old blog. My reluctance is not so much because I disagree with anything written there (although in many cases I’m much less sure of what I wrote than I was when I wrote it). It’s more due to a general embarrassment I feel with regard to any piece of mine that was written more than…oh, let’s say a week ago. Trust me, if you think something I wrote sounds pompous or naïve or stupid, then I am probably rereading it and cringing a thousand times worse than you are. I can only assume this pattern will continue in the future.
(This might be why I don’t blog that often)
My masochism aside, I’m currently about a quarter of the way through my first real post for this blog. I hope you will enjoy it, and at least a few of my future posts. If nothing else I hope I can introduce you to a few new concepts along the way and give you some different ways of thinking about things. And if you don’t particularly like a post, then please: say so in the comments, and say why! I welcome dissenting opinions and criticisms – that’s how I learn. I should probably emphasize, though, that I think debates are only useful if both parties enter into them with a sense of mutual curiosity and truth-seeking. If you don’t give me any indication that you understand that, then I probably won’t waste my time engaging with you.
Like any blog, this one will either be supported enough to blossom or neglected enough to wither. Hopefully it will be the former; more likely, the latter. In either case, though, this is me embarking on a bit of a journey. I hope to bare my intellectual soul over the coming months (and fate willing, years). If I do it right it’ll be a bit of an adventure.
I hope you’ll join me.