Suddenly, world events have made Sam Harris’ unwise attempt to rescue his torture argument and my criticism of it (see prior post) look prescient. Various torture promoters and defenders from the Bush administration have already started coming out of the woodwork to claim that information obtained by torture led to finding Osama bin Laden. On the available evidence, it looks like this claim is as completely bogus as all their prior claims to have obtained valuable information from torture: The New York Times reports on the subject with its usual subdued dispassion, and Andrew Sullivan rips apart the lies.
On my admittedly still shallow first analysis, it looks like the best-case scenario for the torture promoters is that the torture of two highly-placed al Qaeda figures may have led to negative corroboration: That is, they lied about the name/importance of one of bin Laden’s trusted couriers, a name acquired through interrogation of a more cooperative prisoner who was not tortured. Of course, there’s no reason to doubt that these highly-placed al Qaeda leaders would also have lied about the courier had they not been tortured: It was the cross-checking information from multiple interrogations that led to intelligence with real potential value, not the torture-extracted misinformation.
Update: A commenter here and numerous Facebook friends have also directed my attention to this interview with a professional military interrogator who supports my claims that torture is ineffective. He also argues that the use of illegal and immoral torture methods by the Bush administration was not only a great recruiting tool for al Qaeda (and like-minded terrorists), but that it actually slowed down the hunt for Osama bin Laden.
Another update: Somehow, I missed this Forbes interview with a current top U.S. military interrogator in Afghanistan, who says that…
torture played no role in locating Osama bin Laden, and that recent claims to the contrary by former Bush administration officials amount to “propaganda [that] degrades our intelligence operations more than any other factor I can think of.”
In this blog post, Sam Harris once more defends an argument he made comparing torture and collateral damage in The End of Faith. This detailed defense is worth reading, and I think Harris is basically right to compare torture and collateral damage in war, both being willfully done evils which people defend on morally and factually implausible “greater good” grounds. But he’s still wrong about torture, even if he’s basically right about collateral damage — and exactly why he’s wrong is interesting and subtle. Here is the tricky bit that exposes exactly where Harris’ reasoning goes off the rails:
It is widely claimed that torture “does not work”—that it produces unreliable information, implicates innocent people, etc. As I argue in The End of Faith, this line of defense does not resolve the underlying ethical dilemma. Clearly, the claim that torture never works, or that it always produces bad information, is false. There are cases in which the mere threat of torture has worked. As I argue in The End of Faith, one can easily imagine situations in which even a very low probability of getting useful information through torture would seem to justify it—the looming threat of nuclear terrorism being the most obvious case. It is decidedly unhelpful that those who claim to know that torture is “always wrong” never seem to envision the circumstances in which good people would be tempted to use it. Critics of my collateral damage argument always ignore the hard case: where the person in custody is known to be involved in terrible acts of violence and where the threat of further atrocities is imminent.
Harris is simply wrong in thinking that “torture does not work” necessarily means that torture never works or always produces bad information, nor is a critic of his argument logically compelled to take any such extreme and probably unjustifiable position. Rather, the perfectly plausible and evidence-supported claim that torture is massively unreliable is sufficient, because there is no way for an interrogator to know at the time of interrogation whether the information extracted by torture is even remotely true, or in contrast is deliberately misleading. The problem here isn’t that torture always produces false information, but that it often does, so that (1) there is every reason to think that torture is at least as likely to produce bad information or no information as good information, and (2) there is no way of knowing in any particular circumstances whether this is an occasion where torture will produce good information that will help alleviate a threat, rather than bad information that will exacerbate a threat by wasting time and resources investigating phony leads or mobilizing forces in the wrong fashion.
That torture has on some past occasions produced good intelligence is something that was determined after the fact, just as the fact that torture has resulted in false confessions and other sorts of harmful misinformation on many, many occasions was determined after the fact. The problem, however, is that we don’t have access to that after-the-fact knowledge until, well, after the fact: Interrogators are unable — and almost certainly always will be unable — to determine whether any particular occasion where torture might be considered, including the most radical ticking time bomb “hard case” scenario conjured by Harris or the fevered imaginations of the writers of 24, will after the fact prove to be one of those occasions where torture provides good intelligence rather than harmful misinformation. We cannot know the future in this way except based on probabilities, and the probability of getting misleading and even positively harmful misinformation from torture is high, quite possibly higher than the probability of getting useful intelligence.
Harris not only fails to compare the chance of getting useful information to the chance of getting harmful misinformation, he also fails to compare torture to the obvious alternative: When Harris talks about a scenario in which “even a very low probability of getting useful information through torture would seem to justify it,” the appearance of justification is only preserved by failing to compare torture to non-torture interrogation. It simply isn’t sufficient that torture has some probability, however low, of getting information that could save many innocent lives: To justify its use, torture must have a demonstrably higher probability of getting useful information than non-torture interrogation methods — and it does not.
We know for certain that torture is unreliable. We also know for certain that non-torture interrogation techniques are unreliable. But there is no good reason to think non-torture interrogation is less reliable than torture, and many reasons to think that torture is in fact the more unreliable technique by far. What we do not and probably cannot know is whether any particular interrogation situation which confronts us right now, in a given moment, is one where torture would more reliably provide useful intelligence than non-torture interrogation.
If the justification for torture is to commit a moral evil to prevent a greater evil, we must have a reasonable basis for believing that the evil we commit is genuinely worth it. Given our imperfect knowledge of the future, we can only make such a calculation based on probabilities, and Harris utterly fails to consider the appropriate comparisons between probable outcomes. We cannot consider in isolation the possibility that torture will net us information that can be used to prevent great harm: We must compare that possibility to the possibility that torture will net us information that exacerbates the harm or makes it more difficult to prevent, and we must also compare it to the possibility that we could get equally reliable or even more reliable harm-averting information by using well-established interrogation techniques that do not involve torture. Harris’ argument fails to make either comparison, and fails to recognize the need to do so.
In summary, we cannot possibly be justified in committing one moral atrocity to prevent another without having good reasons to think that the moral atrocity we commit has both (1) a better chance of preventing than exacerbating the atrocity we seek to prevent, and (2) a better chance of preventing the atrocity than any other action we could take that does not involve committing an atrocity of our own. I think it is highly dubious that we have good reason to think (1), that torture is more likely to generate useful information than misleading misinformation: Certainly, Harris has not made this argument convincingly, nor even perceived the need to make it. I think that the preponderance of the evidence already shows that (2) is false, although I am open to evidence to the contrary if anyone can produce it: Again, Harris has not provided evidence for (2), nor even perceived the need to make a case for it. With good prima facie reasons to think that (1) and (2) are false, and in the absence of any convincing arguments I am aware of that both (1) and (2) are true, universal and unqualified moral opposition to torture is wholly justified. It is justified without relying on any absolutist assertion that torture never produces reliable intelligence, but only on the theoretically coherent and empirically supportable (and I think already quite adequately supported) claim that torture produces less reliable intelligence than well-established interrogation techniques which don’t involve torture.
The error Sam Harris has made here is very much akin to recommending a drug by citing only data comparing the efficacy of the drug to a placebo, and ignoring data comparing the efficacy of the drug to other drugs which are already established as being effective against the same condition and have fewer side effects. This is an error so basic and obvious that someone who is scientifically trained should not miss it — and I think Harris would not miss it, if it weren’t his own flawed reasoning he is defending.
about the connection between epistemic values and moral values in New Atheism, over at Eric McDonald’s blog Choice In Dying.
Eric and I have been having very interesting and mutually enriching conversations (well I can’t speak for him, but I know *I* get a lot out of them) ever since he started his own blog. In fact, some of our great conversations go back a few years to comment threads at Ophelia Benson’s Notes & Comments blog on Butterflies & Wheels, long before either of us started blogs of our own. Eric is very smart and writes very interesting things — but more than that, he’s an all around terrific human being, passionately fighting the good fight. You should be reading his blog regularly, if you’re not already. And Butterflies & Wheels. (Plus, they both post a lot more than I do.)
An example of how I refuse to be miserable today in search of some hoped-for future reward: I will not be applying for even a temporary job at the school which prominently features the following paragraph in its statement of purpose.
The Christian tradition to which [school name redacted] remains committed recognizes God as the source of all truth, and believes that Jesus Christ is the revelation of that God, a God bound by no church or creed. The loyalty of the college thus extends beyond the Christian community to the whole of humanity and necessarily includes openness to and respect for the world’s various religious traditions. [redacted] dedicates itself to the quest for truth and encourages teachers and students to explore the whole of reality, whether physical or spiritual, with unlimited employment of their intellectual powers. At [redacted], faith and reason work together in mutual respect and benefit toward growth in learning, understanding, and wisdom.
It is my considered (and rigorously argued) opinion that faith is by its very nature the enemy of reason, and therefore the enemy of genuine, intellectually honest scholarship. While well-meaning ecumenical faith makes for better scholarship (and better neighbors) than fundamentalist dogmatism, that’s an awfully low standard to rise above: For example, see the implicit logical self-contradiction in the first sentence of the quoted paragraph.
Even if the hiring panel never thought to Google-stalk me and thereby discover my outspoken atheism — which I imagine would impede my chances, to say the least — I cannot imagine what would induce me to sacrifice a year of my life to such an institution. My intellectual integrity is worth more to me than whatever fiscal or professional benefits any such job could possibly offer. I’d rather commit myself to adjunct wage-slavery. (Fortunately, there are other options on the table; hopefully, one of those will pan out.)
I refuse to be miserable today on the hope of some future reward. The world demands this of us constantly in many ways – the peculiar sub-set of the world we call “academia” especially. But it’s a sucker’s bet. The time for joy is always now. Fulfillment must be a goal sought every day, not a goal to achieve some day; anything less is death by inches.
Yes, we sometimes have to compromise in the short term for the benefit of the long term. Yes, we must make some sacrifices now to get where we want to go. But the future is unwritten: It offers no guarantees; it signs no binding contracts. Compromise too much today for the promise of tomorrow, and what will you have if that promise is not fulfilled? A life filled with compromises and empty promises. When today is too often sacrificed on the altar of tomorrow, tomorrow never comes.
Maybe I have sometimes erred on the side of present happiness over future gain, and in doing so set back my plans and undermined my own goals. Hell, no maybes about it – I surely have, probably more often than I think I have. But I am content that in doing so, I at least erred on the right side of things. Why? Because there’s no assurance that I’ll reach my long-term goals anyway: I could die tomorrow. But if I die tomorrow, I will die having enjoyed my life much more often than not. I will die having done many things I judge worthwhile: having shared life and all its joys with the many people that I like and the few that I truly love; having shared my passion for the pursuit of truths with friends and students and fellow seekers; and having done much more of both – more in quantity, and more in quality – than I would have if I erred in the other direction.
I thought it might be a nice summary of these thoughts to conclude, “I’d rather be an underachiever than unhappy,” but that’s not quite true. Stated that way, baldly and unqualified, it’s just part of the trap that the world lays for us, defining “achievement” in terms that have little to do with fulfillment. Status, title, money – we’re all presumed to value those things, but I truly don’t. Yes, I value stability and the day-to-day satisfaction of my wants and needs, and that’s a lot easier with a long-term employment contract and a decent salary. But loving what I do for a living on a day-to-day basis is much more important to me than the living I make at it – or I wouldn’t have a PhD in Philosophy, of all things! So with the caveat that I haven’t achieved all that I could by the standards of my chosen profession, and moreover that I haven’t achieved all that I could even by the standards of what I actually want from my profession rather than what others might suppose I want from it…
I’d still rather be an underachiever than unhappy.
Yes, I am career-driven. Yes, I am pursuing that nebulous dream, the ideal tenure-track job at a small liberal arts college in congenial surroundings. But I’m not making myself miserable on a day-to-day basis in pursuit of that goal: I often work only 40-50 hours a week instead of 60-70, and I don’t spend as much time keeping up with the literature and working on my pubs as I should. There are other things I should be doing right now instead of composing this meditative essay to cast out upon the ether.
Yet here I am, thinking out loud and sharing my thoughts instead of analyzing an argument or refining a publication or prepping for class or grading some papers. I am content with that choice, because I live here and now, not in that nebulous, hoped-for future. Of course, I don’t just hope for that future; I work for it, often and hard. I wouldn’t have a PhD if I didn’t work for the future. But I try not to work for it so much that I forget to live here and now, nor to work for it so little that I undermine the chances for its realization. It helps that I genuinely enjoy most of that work most of the time, but even so – it’s a balancing act, and I know I don’t always find the right balance. But if I must tilt one way or another, I know which way I’d rather tilt…
Because I refuse to be miserable today on the hope of some future reward. The time for joy is always now. Anything less is death by inches.
As Eric McDonald rightly points out, the notion of “human dignity” as formulated by religious authorities (and those who endorse and support the authority of religion) is in fact the very opposite. Religious conceptions of human dignity rob real humans of their dignity in the most profound way, by denying them the basic right at the heart of all other rights, self-determination — the right to decide for oneself what one feels and thinks about one’s own life and what one wants from it.
Don’t try to tell *me* what the worth or dignity of *my* life consists in.
You haven’t the right. No one has that right but me.
Those who lay claim to that right — those who would limit my choices and options on the basis of *their* view of what makes *my* life worthwhile, and why (almost always on religious grounds, of course) — are not only profoundly mistaken, they are presumptuous beyond all tolerance. There is no quicker way to inspire — and to deserve — my rage and contempt.
There are, of course, rigorous philosophical arguments to be made on these matters: Eric cites Ronald Dworkin, whose philosophical and legal arguments about euthanasia and assisted suicide are superb. However, I don’t feel particularly philosophical about the subject today. Instead, I feel angry — enraged on behalf of all those who have suffered needlessly, all those whose dignity has been stripped from them in the name of God.
Of the many, many evils perpetrated in the name of God and “justified” by faith, denying people the right to live and die as they see fit is the one I take most personally. Today is just a few weeks shy of the twenty-seventh anniversary of my father’s death. His welcome end came only after a long, slow, incredibly painful, dignity-shredding dissolution of body and mind. I know exactly how bad it was, because for the last few months of his life I was his primary caretaker — when I wasn’t in school. I was 16 years old.
My father need not have died that way, but for the political stranglehold of religious authoritarians claiming to know the will of God who self-righteously force their conception of God’s will on the rest of us whenever they can. Their ignorance is complete, and their arrogance is boundless. They are the enemies of human freedom, the only basis for any sound conception of human dignity. They condemned my father to a slow, torturous death.
I will not forget that, nor forgive it.
In yesterday’s New York Times Opinionator blog, Simon Critchley wrote about a Kierkegaardian conception of ‘faith,’ one which he purports is available even to atheists. I am… unconvinced, to put it mildly. To be perfectly honest, I would have gotten more out of that essay with a light vinaigrette and perhaps a glass of chardonnay. That is to say, Critchley composes a lovely word salad, as did Kierkegaard before him.
The details of Critchley’s essay aren’t interesting enough in and of themselves to address. I’ve seen it all before in many forms, and frankly a point-by-point analysis is wasted effort when each “point” is so thoroughly nebulous and insubstantial: When one cannot or will not define a single key term — faith, god, love — in any sort of clear, consistent, and/or coherent fashion, when every central concept one addresses can only be couched in metaphors and gestured towards rather than analyzed, what one is engaging in does not in any way resemble genuine, rigorous, truth-seeking argument. Without any fixed conceptual anchors — never mind facts; at this point I’d settle for one precisely defined term — the tools we use to justify claims through reasoned argumentation simply cannot be used: no deduction, no inference, no evidence, no examples, no counter-examples, etc. Such musings give an appearance of profundity, but they start from nothing, add nothing, and go nowhere. I can’t even call them intellectual masturbation; at least masturbation has a payoff.
I’ve read many variations on this theme over the years: discussions which purport to redefine ‘faith’ and ‘God,’ but in reality only obscure the meanings of such words as they are commonly used, and in the end utterly fail to offer any definitions at all, new or old. Whatever the intended purpose of the authors, such writings have no effect in the world but to provide intellectual cover for ‘faith’ as more ordinarily defined and manifested, wherein people believe claims about the world to be true — primarily religious claims — in the complete absence of legitimate evidence, or even in the face of clear counter-evidence. Defenders of traditional religious thought and institutions, even those whose views are most explicitly rejected by thinkers like Critchley and Kierkegaard, feel free to co-opt their musings nevertheless: The very Christians Kierkegaard criticizes borrow his prestige, and that of other respected academic theologians, to claim that their sort of faith and religion are intellectually respectable; they toss around Kierkegaard’s “leap of faith” language as if it were coined in support of their religious views, even though it springs from a critique that rejects so much of what they embrace. So not only do such writers fail to justify their own claims — because those “claims” are not claims at all, but rather evocative poesy without substance or definable meaning — they advance the cause of those whom they theoretically oppose.
If it weren’t for the broader social context in which this process of willful gibberish-production and disingenuous co-option occurs, I suppose I would dismiss it as harmless. But unlike many academics, I pay attention to the role religion actually plays in the world around me. Academic theologians like Critchley seem willfully blind to the pernicious real-world consequences of faith beliefs: the widespread oppression of women and persecution of non-heterosexuals, the perpetuation of all sorts of real-world economic and political injustices because the attention of so many people is cleverly misdirected from their lack of adequate health care and employment security and educational access to faith-based distractions like “defending marriage” and prioritizing fetuses over women and already-born children. To call religion the opiate of the masses is to praise it with faint damns; religion’s human consequences are far more widespread and devastating than heroin’s. But, instead of turning their intellects to honest assessment and analysis of faith and religion, academic theologians — from their positions of vast social privilege — muse about faith and god and religion in ways that ultimately empower and support the traditional religious beliefs and institutions they purport to oppose, their efforts building rather than chipping away at the massive bulwarks that protect religious claims and institutions from legitimate and well-deserved criticism.
While I don’t find it particularly surprising when privileged academics slather intellectual whitewash over systematic oppression to which they are not subject, I must not be completely overwhelmed by cynicism just yet: I still find it disappointing.