I love Brad DeLong’s academic work. He’s way smarter than me and, more importantly, clearly works much much harder than I do. And he tackles interesting questions. But every time I check his blog, I get an awful “Everyone in the world is evil or stupid or both except Brad and a few of his friends” vibe.
I’d not been there for a while, but switching to The Old Reader (for now) from Google Reader messed up my RSS habits. And so, there I was, looking at Brad DeLong saying that Steven Landsburg is the stupidest man alive and that “the University of Rochester has a big problem” - presumably Landsburg’s continued employment there.
What was the cause? A piece at Gawker taking great offence at a thought experiment Landsburg proposed. I’ll link Gawker at the end so you’re not tempted to read it before reading the original Landsburg piece.
In grad school, we had a lot of fun with thought experiments of this sort. The classic one is Nozick’s experience machine. Step into the machine and you’ll experience a simulated life much better than the one you’d otherwise live; moreover, you’ll never remember that you’re actually in the machine. If you stay out of the machine, there has to be something that matters more to you than experienced utility.
Tyler Cowen liked to ask a variant on it in our Economics & Philosophy class: World B is identical to World A, as far as you are ever able to observe, but in World B, your wife has been cheating on you for years and you never ever knew it, nor will you ever know it. The worlds are identical except for the unknown-to-you fact of your wife’s infidelity. Are you worse off in World B? If so, clearly state exactly how, and make that consistent with other things you believe about utility.
So, what was Landsburg’s offensive thought experiment? Recall that the rules of thought experiment club are that you don’t add in auxiliary assumptions but stick to what’s stated in the thought experiment. Landsburg asked a series of three questions, then wanted to know why our answers to 1 and 2 might differ from our answer to 3. Here they are.
Farnsworth McCrankypants just hates the idea that someone, somewhere might be looking at pornography. It’s not that he thinks porn causes bad behavior; it’s just the idea of other people’s viewing habits that causes him deep psychic distress. Ought Farnsworth’s preferences be weighed in the balance when we make public policy? In other words, is the psychic harm to Farnsworth an argument for discouraging pornography through, say, taxation or regulation?
That’s scenario 1. Most economists just ignore that psychic harm – welfare analysis becomes essentially impossible when we add this kind of thing in. Further, Farnsworth could pay other people not to watch pornography if he really cared about it that much. But we could assume that away to stick within the proper confines of the thought experiment: say transactions costs prevent it. Is psychic harm of this sort admissible in the utilitarian calculus? Here’s scenario 2:
Granola McMustardseed just hates the idea that someone, somewhere might be altering the natural state of a wilderness area. It’s not that Granola ever plans to visit that area or to derive any other direct benefits from it; it’s just the idea of wilderness desecration that causes her deep psychic distress. Ought Granola’s preferences be weighed in the balance when we make public policy? In other words, is the psychic harm to Granola an argument for discouraging, say, oil drilling in Alaska, either through taxes or regulation?
Actually, policy does weigh Granola’s concerns. It’s counted as existence value, over and above option value or use value. Sound analyses don’t put much weight on it, but it does sometimes count. If I get existence value from thinking heroic Randian thoughts about oil derricks and man’s mastery of nature, that gets ignored in the cost-benefit analysis for some reason. But, again, all these kinds of psychic distress are treated pretty dismissively in economics. And now the third and controversial question:
Let’s suppose that you, or I, or someone we love, or someone we care about from afar, is raped while unconscious in a way that causes no direct physical harm — no injury, no pregnancy, no disease transmission. (Note: The Steubenville rape victim, according to all the accounts I’ve read, was not even aware that she’d been sexually assaulted until she learned about it from the Internet some days later.) Despite the lack of physical damage, we are shocked, appalled and horrified at the thought of being treated in this way, and suffer deep trauma as a result. Ought the law discourage such acts of rape? Should they be illegal?
If we take this as a parallel thought experiment, the only harm allowable here is the psychic distress – the very kind we otherwise typically ignore.
It is a hard question and so a good one. All our intuitions tell us to condemn the third scenario while dismissing the psychic harms in the first two. But if we stick within the confines of the thought experiment, it’s hard to distinguish the cases. We can say the psychic harm is worse in the third case, and it would be in the real world, but it’s not hard to have a thought experiment Granola McMustardseed who gets more psychic harm from oil drilling than from being in Scenario 3. Or a Scenario 3 victim who never learns that it happened – a case pretty close to Cowen’s World A vs World B.
I don’t have any great answer other than that when we step away from the thought experiment and into the real world, a rule allowing Scenario 3 that imposes psychic harm no greater than that imposed in Scenarios 1 & 2 would also necessarily allow much much greater harm because we cannot set rules only allowing Scenario 3. But that’s a cop-out, because even if we could do it in the real world, I’d still want it banned – in the same way that I think I’m worse off in Cowen’s World B and that I wouldn’t want to step into Nozick’s machine. My maximand isn’t just experienced utility.
Meanwhile, Gawker turns it into a story about how Landsburg thinks rape is OK, and DeLong signs on to their interpretation. The contrast between the quality of comments at Landsburg’s post and at DeLong’s is interesting too: Landsburg’s commenters wrestle with a difficult thought experiment; DeLong’s want Landsburg fired.
[cross-posted to Offsetting Behaviour]