January 27, 2011
-
Is Luke Muehlhauser right that Sam Harris is unclear about morality?
Intro:
Luke Muehlhauser, over at Common Sense Atheism, influences a lot of atheists in the popular blogosphere. Most of the time that's a good thing, imo. However, on the question of moral discussion, even though his preferred theory (desirism) is one I accept as probably correct, I think many of his observations on Sam Harris' latest book, "The Moral Landscape: How Science Can Determine Human Values," are unhelpful and counterproductive.
Where is Sam Harris' positive argument anyway?
Luke says:
But here, it’s again difficult to locate a positive argument that morality is concerned with well-being. And perhaps “well-being” is a question-begging term. Is well-being defined in terms of moral goodness? Then Harris’ claim is empty and circular.

To which Harris has already replied:
Many people believe that the problem with talking about moral truth, or with asserting that there is a necessary connection between morality and well-being, is that concepts like "morality" and "well-being" must be defined with reference to specific goals and other criteria -- and nothing prevents people from disagreeing about these definitions. I might claim that morality is really about maximizing well-being and that well-being entails a wide range of cognitive/emotional virtues and wholesome pleasures [...]

Of course, goals and conceptual definitions matter. But this holds for all phenomena and for every method we use to study them. My father, for instance, has been dead for 25 years. What do I mean by "dead"? Do I mean "dead" with reference to specific goals? Well, if you must, yes -- goals like respiration, energy metabolism, responsiveness to stimuli, etc. The definition of "life" remains, to this day, difficult to pin down. Does this mean we can't study life scientifically? No. The science of biology thrives despite such ambiguities. The concept of "health" is looser still: it, too, must be defined with reference to specific goals -- not suffering chronic pain, not always vomiting, etc. -- and these goals are continually changing. Our notion of "health" may one day be defined by goals that we cannot currently entertain with a straight face (like the goal of spontaneously regenerating a lost limb). Does this mean we can't study health scientifically?
I wonder if there is anyone on earth who would be tempted to attack the philosophical underpinnings of medicine with questions like: "What about all the people who don't share your goal of avoiding disease and early death? Who is to say that living a long life free of pain and debilitating illness is 'healthy'? What makes you think that you could convince a person suffering from fatal gangrene that he is not as healthy as you are?" And yet, these are precisely the kinds of objections I face when I speak about morality in terms of human and animal well-being. Is it possible to voice such doubts in human speech? Yes. But that doesn't mean we should take them seriously.
Apparently Luke expects to be taken seriously for some reason. He had already grabbed a quote from that very Sam Harris article as though his objection had not been addressed. Hence the conversation is going backwards. Going forward (or rather simply catching up to where Harris already is) entails making the "huge" leap from ordinary general notions of well-being to something a bit more specific (or, as Harris said, "a wide range of cognitive/emotional virtues and wholesome pleasures") like:
joy, love, self-respect, good friendships, security/trust, contentment, peace/tranquility, self-improving behavior, sense of purpose/self-worth, positive social climate

And not:

misery, loneliness, self-loathing, bad friendships, fear/paranoia, discontentment, anxiety/stress, neurosis, depression, self-destructive behavior, purposelessness/worthlessness, negative social climate.

You are human, right, Luke? I know you know what these things are, given our mutual pattern-recognition abilities as well as our general mental disposition for seeking out the former states over the latter. And I know you are smart enough to formulate a simple mental picture of "well-being" vs. "misery" (aka, the moral landscape). I didn't need Harris to spell it out for me, because I share a great deal of psychological dispositions with members of our species like Harris. I allow his words to refer to my background knowledge without unnecessarily disrupting the conversation.
So... what other excuse does Luke have? He says:
Or perhaps well-being just means happiness? Then his claim is not circular, but is probably false. We humans value other things than happiness, which is why many modern utilitarians speak of maximizing “preference satisfaction” or “desire satisfaction” rather than happiness.
Somehow happiness doesn't roughly equal preference and desire satisfaction in Luke's world? Very strange.
The intellectual implausibility of Luke's position is obvious. How could he have ever known, even in gist, what he was looking for in a moral theory unless he was already operating under the basic assumptions Harris lays out? Luke, on his blog, has related that he was deeply concerned with finding the right moral theory, because otherwise he wouldn't know whether he was doing a great deal more harm than good. But such a sentiment is only explicable given something like Harris' presentation. If desirism had somehow pointed to the "bad" category and not the "good" category, would Luke have embraced it? Um... probably not. Let's be honest. Or we'd rightly think he was crazy for doing so. How could we know what "dead" means unless we already kinda-sorta know before we get to the point of spelling out that a "dead" body is one that fails to live up to (as Harris says):
...goals like respiration, energy metabolism, responsiveness to stimuli, etc.

Right?
Will someone somewhere in the world challenge our categories to some degree as we get more and more specific? I'm sure. And the debate will progress rather than regress through territory already covered. Desirism may well be on the horizon of moral science as the next level of articulation, but offering up these kinds of nonsense objections in the meantime, while we are still establishing ground zero, is entirely inappropriate for someone as knowledgeable and skilled as Luke Muehlhauser, imo.
Americans don't know a great deal about "honor," for example, even though the sentiment "It is better to die than to live without honor" is a moral truism of many cultures. I certainly don't live my life guided especially by the metric of honor, though I could characterize much of what I do as honorable. I would like to think that an honest, systematic, and comprehensive cross-cultural evaluation could be brought to my doorstep, giving me the full list of pros and cons of living in a collectivistic honor/shame society (especially as compared to other types of societies). And since I'm not so closed-minded, I might well find myself at some moral disadvantage for having grown up in a society that emphasizes other things. I'm human. They are human. Everything has something going for it. Obviously, no one person or culture is born with a perfect set of cultural moral expectations. None of them are going to be completely wrong either.
When person x dogmatically asserts that their preferred overblown singular mental state is the only way to go even though obviously that's not true or representative of most of the population, we don't have to listen. People can be wrong and solipsistic. Obviously. Am I alone in these kinds of sentiments? I sure hope not.
Outro:
Will we be dogmatic and closed-minded when others offer reasonable challenges to our moral goals, or will we be the kind of person who is on a journey, exploring the possibilities and sorting through them? Will we empathize with honest alternatives, or will we give up because thinking things through is just so darn hard? Those questions are a great deal more relevant to the conversation and our moral growth, imo, than the cliché (conversation-stifling) philosophical hairsplitting that Muehlhauser appears to have presented.
Ben
Comments (4)
I have to say, I think Luke made a rather cogent argument. I'm glad you posted that article by Sam. It coincides perfectly with my discussion on functionality toward the beginning of my response. Nevertheless, Harris' response blunders in trying to say that because we share certain perspectives on something we can have a science of it. He uses medicine, as I did on my blog (along with things like, say, engineering). Medicine certainly is an ends-oriented enterprise, but science plays a role. However, Harris' central thesis is that science can determine those ends. That is not at all how medicine operates. Science informs us about those functions on which we've fixed our perspective. We can say what is good for the human body and analyze the human body scientifically to best determine how to achieve those ends that we've determined to be important. That is the important distinction: the science behind medicine did not determine those outcomes; though, certainly, scientific descriptions and clarity help us refine and perceive more clearly the ends that we want to achieve.
Returning to Luke's argument, though, it is not entirely unlike my central confusion. Harris is defining morality in one way (brain states) while defending it from another (optimizing well-being). Luke's argument adequately demonstrates how Harris claims many things that are either vacuous or tautological/circular. As he says, "A great many moral philosophers would agree that if we define morality in terms of the “well-being of conscious creatures,” then obviously most of moral theory becomes a science." The position is contentious precisely because Harris leaves "well-being" on a sliding scale. For instance, he effectively equates it to the improvement of certain brain states. Harris claims that "morality ... really relates to the intentions and behaviors that affect the well-being of conscious creatures." HOW does this relation operate? If well-being is defined by the advancement of certain values that are in turn characterized by certain brain states, how does someone dying to promote an end count as well-being? Certainly the person dying has lost well-being. But it is an odd sleight of hand to relate well-being to brain states and then say Jon's action improved Jan's brain state, therefore well-being has been improved. Well, not quite. This is a blunder of classical utility theory. Harris' conception is nothing more than a vague notion of redefining pleasure/utility in terms of neurobiology. It does not eliminate the problems of utility qua pleasure fulfillment. This is where Luke's initial statements are most apt. This is also why my response, relying on Amartya Sen's work challenging the utility framework that has been systemic in social theories, plays a role.
Since I wrote my blog last night, I decided to re-read the end of Sen's The Standard of Living on my way to work today (and consequently have the book with me). An example from his closing summary has import here. One of the other writers in this lecture series (Ravi Kanbur) talks about the living standard (a part of well-being) by contrasting how we divide "the cake" up between two hypothetical people, Sen and Williams. Kanbur argues that someone proposing "taking some cake away from Sen and giving it to Williams ... would not be much persuaded by arguments that such an arrangement would in fact lead to catastrophic reduction in the standard of living of both." To which Sen replies, "That is certainly right, but there is no question that taking the cake away from 'Sen' would reduce his standard of living" (106). The context of this hypothetical situation is certainly different, but it deals with a fundamentally similar problem: the concept of the living standard cannot be argued for in one way by its aggregate component when what is at stake is also the impact on the constituents of that aggregation. It is a fallacy of composition to think there is a direct (one-to-one) correspondence between the two, such that we can simply ignore, in this case, the welfare decrease to 'Sen' when there is no such decrease to the joint 'Sen & Williams.'

If you are unacquainted with this topic, you might think, "Of course the aggregate can be just what the term implies: a summation of the ups and downs of the people in the population." But that is only one way to characterize our aggregate. That is to say, Welfare[population] = Person1 + Person2 + ... + PersonN, for some N people in the population. Notice, though, the implicit coefficients on each 'Person': they each have an unstated coefficient equal to unity. In reality, as long as we assume welfare is aggregated linearly, the form is really Welfare[population] = A*Person1 + B*Person2 + ... + Z*PersonN, where A = B = ... = Z = 1 in this case. Why should it be this way?
Why should it not be some other way, such that some of the sub-population ought to be weighted differently in aggregation? There are very good reasons to alter this, just as we use a progressive tax system to accommodate the fact that some people are made comparatively worse off by taxation than more affluent parts of the population. In terms of welfare, some people are physically unable to benefit from the same commodity baskets other people enjoy, whether due to a nutritional necessity or a physical handicap, and this may obviously change over the course of each person's life. It may seem odd to throw nutritional needs into a welfare or utility assessment, but this idea dates back centuries (e.g., to Lagrange, I believe Sen mentions in Ethics and Economics).
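The aggregation point above can be put into a toy Python sketch. The welfare levels and weights below are entirely invented for illustration (they are not from Sen or anyone else); the point is only that the familiar sum carries hidden unit coefficients, and changing them changes the aggregate:

```python
# Toy linear welfare aggregation (all numbers invented for illustration).

def aggregate_welfare(levels, weights=None):
    """Linearly aggregate individual welfare levels; unit weights by default."""
    if weights is None:
        weights = [1.0] * len(levels)  # the unstated coefficients equal to unity
    return sum(w * p for w, p in zip(weights, levels))

population = [10.0, 2.0]  # say, 'Sen' and 'Williams'

# Unit weights hide distribution; unequal weights change the aggregate.
print(aggregate_welfare(population))              # 1*10 + 1*2 = 12.0
print(aggregate_welfare(population, [0.5, 1.5]))  # 0.5*10 + 1.5*2 = 8.0
```

Nothing forces the weights to be equal; that choice is itself a normative decision the aggregation formula quietly makes for us.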
How does this apply to Harris' broad concept of well-being? I think it is very analogous to what I just explained. Consider these two cases.
(A) Jon sacrifices his health (maybe his life) to achieve his dream. In doing so, many people in the future will be made better off.
(B) Jon dies so that Jan can live out the rest of her life.
Barring issues about uncertainty of outcomes, there is something significantly important in how we handle these two situations. (A) is mentioned in my response. Notice that Jon's welfare is decreased, just as hypothetical 'Sen' had a welfare decrease. The same applies to (B). In this instance, on Harris' account, we're concerned with the brain states. In (B) we can most certainly say Jon has lost everything. How do we aggregate Welfare[Jon & Jan]? Certainly Welfare[Jon] = 0 now, and Welfare[Jan] at this time is greater than it otherwise would have been, but how do we compare Welfare[Jon & Jan] under the conditions of (B) to some other condition? This goes right into the issues of dimensionality I discuss toward the end of my blog. It is like trying to compare the parameter-space values (5, 7) with (7, 5). I used the example from your list where we're looking at (self-respect, personal liberty). In this case, we want to know the parameters of welfare by person: (Jon, Jan). How do we weigh (0, 1) against (1, 0)? The same issue is implicit in the living standard example and in the two cases I've presented.
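The (0, 1) versus (1, 0) problem can be made concrete with a small hypothetical Python sketch. The outcomes and weights are invented; the point is that the two vectors tie, or either one "wins," depending entirely on which weights we happen to pick:

```python
# Hypothetical outcomes written as (Welfare[Jon], Welfare[Jan]).
# Any ranking of (0, 1) against (1, 0) smuggles in a choice of weights.

def score(outcome, weights):
    """Weighted sum of a welfare vector (one possible aggregation rule)."""
    return sum(w * v for w, v in zip(weights, outcome))

case_b1 = (0.0, 1.0)  # Jon dies so that Jan can live
case_b2 = (1.0, 0.0)  # the reverse

print(score(case_b1, (1.0, 1.0)) == score(case_b2, (1.0, 1.0)))  # tie under unit weights
print(score(case_b1, (2.0, 1.0)) < score(case_b2, (2.0, 1.0)))   # weight Jon more: b2 ranks higher
print(score(case_b1, (1.0, 2.0)) > score(case_b2, (1.0, 2.0)))   # weight Jan more: b1 ranks higher
```

No comparison of the two cases is available until the weighting question is answered, and the weighting question is exactly what Harris' account leaves open.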
The fact is that Harris wants us to believe we can draw the kind of moral landscape he shows in his TED talk. I find that entirely question-begging. I had thought I read it in the book this morning, so that I could use the quote; it turns out that I must have read it in chapter two of Ethics and Economics before bed last night. The point was made in my response already; I'll reiterate it here, and I will also point out two other issues, one formal and the other practical. Going back to the "landscape" picture Harris presents, we can think of it as our regular 3-dimensional coordinate system. The ground level is the xy-plane, and anything in the +z direction (toward the peaks) is a moral improvement. How do we characterize this system (x, y, z)? That is fundamentally the issue. The (x, y) space is unimportant; maybe it's location and person. What is of consequence is that z is a composite indicator that unifies all moral value into one metric. One might say that we can really break z apart and talk about higher-dimensional spaces, but this still begs the questions: (1) how do we weight the values in the z-space, given the discussion above? (2) How do we know there is a "peak"? The idea of a peak is an optimization, but in the case presented there is only one direction we can go to reach it. Imagine for the moment that one could go in either the +z or the +y direction for moral advancement instead. Now where do we identify peaks? One state might be more z-ward and less y-ward than another. Is that an improvement? If we cannot distinguish those sorts of changes precisely, then the moral metric Harris insists "must" be well-being cannot be substantiated.
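The z-as-composite worry can be sketched in Python. Everything here is invented (the two sub-values, the states, the weights; none of it is from Harris): three candidate "moral states" are scored on two sub-values, and each of three weightings crowns a different "peak":

```python
# Invented sketch: z as a composite of two moral sub-values, e.g.
# (self-respect, personal liberty). Which state is the "peak" depends
# entirely on how the z-space is weighted.

states = {
    "s1": (9, 2),
    "s2": (2, 9),
    "s3": (6, 6),
}

def z(state, weights):
    """Composite moral indicator: a weighted sum of the sub-values."""
    return sum(w * x for w, x in zip(weights, state))

def peak(weights):
    """The state with the highest composite z under the given weights."""
    return max(states, key=lambda k: z(states[k], weights))

print(peak((1.0, 0.2)))  # favor the first sub-value: 's1'
print(peak((0.2, 1.0)))  # favor the second sub-value: 's2'
print(peak((1.0, 1.0)))  # equal weights: 's3'
```

The landscape's topography is an artifact of the weighting, not something the brain-state data alone can hand us.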
Another thing I did not discuss on my blog steps away from the availability of the optimization framework. A lot of Sen's earlier work focused on utility preference theory and its possible alternative formulations (see, for instance, his Choice, Welfare, and Measurement). Utility theory works in modern economics by making many gross assumptions about preferences that allow us to handle some rather complex utility or production functions in terms of optimization (differential) calculus. These assumptions characterize the axioms of consumer choice theory. They include things like transitivity: A is preferred to B and B is preferred to C implies that A is preferred to C. They also include things like independence of the choice set: if A is preferred to B in one choice set (say, a set consisting of just A and B), then it is preferred in all choice sets (say, a set consisting of A, B, and C). Assumptions like these are false in reality. Behavioral economics has demonstrated that the choice bundle does impact consumer preference. Dan Ariely has a couple of TED talks that demonstrate this: for instance, choosing between magazine subscription options (electronic, print) produces different choices than choosing under (electronic, print, both). The reason we need assumptions like these is so that we can order things monotonically (in a straight line, like the z-axis). If we cannot do this, we cannot talk about optimization; we can have kinks and bends and branches in our ordering that never meet up again. In order theory, these are called partial orders. If Harris wants to talk seriously about morality in the terms he is using and the picture he wants to paint, it is more realistic that morality is dominated by partial orders not amenable to the type of optimization theory he would have us believe exists. Thus, not only does he face issues under an optimization calculus; he may not even get that much.
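One standard way to make the partial-order point concrete is Pareto dominance; this is my own illustration (the two-dimensional "moral states" are invented), not Sen's or Harris'. A state improves on another only if it is at least as good on every dimension and strictly better on at least one, so many pairs end up incomparable and there is no single maximum:

```python
# Pareto dominance as a concrete partial order over invented
# two-dimensional "moral states."

def dominates(x, y):
    """True if x Pareto-dominates y: >= everywhere, > somewhere."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

states = {"p": (5, 7), "q": (7, 5), "r": (6, 8)}

print(dominates(states["r"], states["p"]))  # (6, 8) beats (5, 7) on both: True
print(dominates(states["p"], states["q"]))  # (5, 7) vs (7, 5): incomparable, False
print(dominates(states["q"], states["p"]))  # likewise False

# Undominated ("maximal") states -- note there is no single peak:
maximal = sorted(k for k, v in states.items()
                 if not any(dominates(w, v) for w in states.values() if w != v))
print(maximal)  # ['q', 'r']
```

Under a partial order like this, "climb toward the peak" is not a well-defined instruction, which is exactly the trouble for an optimization picture of morality.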
A practical example associated with my first criticism above is expounded in Mismeasuring Our Lives, a book comprising a report that Sen co-authored. In it the authors argue that an indicator of welfare (or environmental fitness, or anything for that matter) cannot be summarized into one measure. For a long time, organizations like the World Bank or the IMF characterized any measure of success -- whether economic, social, whatever -- entirely in terms of GDP. This has posed a lot of problems. Instead, the authors insist that we need multiple measures that accommodate the various factors involved. They use an "instrument panel" analogy: you would not be served well by having one gauge that tells you about both your fuel and your speed. Such an indicator, while meaningful and useful in some capacity, would prove entirely problematic. Instead, we have one indicator that informs us about our fuel level and a separate one for our speed. By looking at the two we can better manage our driving behavior, getting gas when we need to or slowing down when it is appropriate. The unified indicator might still get us to the right conclusion, but probably not, and certainly not effectively or consistently. This is an extension of research into environmental sustainability indicators and welfare indicators, as the work on the living standard presents.
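The instrument-panel analogy can be sketched in a few lines of Python. The composite gauge here (a plain average) and the readings are invented; the point is that one fused number cannot distinguish situations that separate indicators tell apart immediately:

```python
# Hypothetical "instrument panel" sketch: a single composite gauge that
# averages fuel and speed collapses very different situations into one
# reading. The choice of average as the composite is arbitrary.

def composite_gauge(fuel, speed):
    """One-number dashboard reading: a plain average of two indicators."""
    return (fuel + speed) / 2

ok = (0.9, 0.1)       # full tank, crawling along
trouble = (0.1, 0.9)  # nearly empty, speeding

print(composite_gauge(*ok) == composite_gauge(*trouble))  # same reading: True
print(ok != trouble)  # the separate gauges clearly differ: True
```

A driver steering by the composite alone would never know whether to get gas or slow down, which is the authors' argument for a panel of indicators over any single summary measure.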
I think this practical example demonstrates the situation we're in quite nicely, since I think these sorts of social-scientific situations are the best manifestation of moral theory we can ever face. Harris wants us to believe we can have a science of morality. I do not disagree that science can play a role, but it cannot determine the ends of our moral inquiry. It certainly cannot do so in the unified capacity for which Harris argues as the basis of moral values and, consequently, of well-being. He is ultimately trying to advance a shoddy utility theory that doesn't overcome any of the problems that utility theory faces. Instead, science can help us determine what indicators we can have available, how to obtain them, and how to assemble them. The "instrument panel" view asserted above is exactly how science plays its role in public policy, which is exactly one of those situations where science does aid our moral quests in life. But science will never determine how we read that instrument panel or ultimately decide which instruments we place on it. No amount of understanding the brain will change that. It will only improve those indicators by improving our information base and, consequently, our ability to choose among possible indicators (e.g., we used GDP for so long because it was particularly easy to obtain, but with advancements in methodology and available information there is no respectable reason not to expand our use of indicators). I have focused in this comment on elaborating the positive side of Harris' argument while also demonstrating how his conception of morality does not apply. That is why I say he got morality wrong, and why I think Luke is right in his assessment of whether Harris has a positive argument for this conception. There are far more ethical and meta-ethical arguments I could make, but they are not as important. They show up in my response, but I try not to emphasize them too much.
The point is that there are even graver problems lurking behind the scene.
In closing, I just want to say that I think Luke is wrong. The cover of Harris' book is nice. It has a flashy minimalist theme, using two colors and a gradient background. I like minimalism! Furthermore, I think his idea of desirism fails, which extends directly from one simple fact (among others): desire-independent reasons for action exist. For more details on that I suggest reading John Searle's work in Intentionality or his latest book, Making the Social World. I make use of his decades of research in my latest paper.
On Sam's article you linked, he says, "However, it also seems quite rational for us to collectively act as though all human lives were equally valuable. Hence, most of our laws and social institutions generally ignore differences between people."
As I point out in my comment, this is wrong. The institution of a progressive tax does treat people differently, and for good reason. This is from Sam's response to a criticism not unlike mine (mine is far more in-depth, obviously). Sam is basically trying to resolve the issue by appealing to an idealization. Returning to the economic example, as I argue in my paper linked at the end of my comment, economic science makes use of its provably false behavioral assumptions by smoothing away the problem areas caused by human differences in intentionality. Economists do this by appealing to "economic man," an idealized consumer or firm. Once we make that "all else equal" assumption, we remove the dimensionality problem: we don't have to worry about interpersonal utility comparisons because we're only talking about one person! The "average" person of the population, whichever population is at issue, even if that person doesn't exist. We could have people in a population behaving entirely irrationally on either "side" of the rationality scale, yet on average we expect that population to behave as if it were that one rational consumer. That has distributional problems, which welfare economics emphasizes (distribution of allocations due to distribution of behaviors deviating from the rational axioms discussed earlier). In the quote above, Harris is making a similar utilitarian sleight of hand. It may work for us to "have a science," but that doesn't make its object real; no good economist would make the philosophical (metaphysical) argument that homo economicus actually exists. Sam, however, is saying that this approach to well-being studies something real, not an idealization that merely proves instrumental! That is the difference. I saw that and thought it would make for a good further elaboration. I would like to see how he deals with this criticism in his book, as I think it is the most substantial problem he'll face.
Again, I don't disagree that science should play a role in our moral assessments, but this is not a novel idea. Such a science of morality has been proposed for centuries, going back to Bentham, and it was implicit in the "science" of the ancients, for back then science and philosophy (and ethics) were one. Aristotle approached religion, politics, and physics from the same mold, for example, and talk of morality was talk about reasoning about the world, just as he reasoned about the physics of the world. I think Sam is attacking a divide that doesn't actually exist, even if it is more emphasized by some than by others. The fact that we use science in our normative affairs every day (medicine, art, engineering, policy, etc.) is testament to that. Thus, the substantive issue is both metaphysical and practical. I think Harris has gotten morality utterly wrong in the first instance. My critiques here ultimately center on the latter, since without substantiating his conception of human flourishing in a precise way that is amenable to the type of evaluation he talks about, it can never be a science in that capacity. Thus far, what he has said does not substantiate his conception of morality, and his conception of morality cannot fit the types of analysis to which he refers, for the reasons outlined in my response and comment.
Wow. Again, your blog puts mine to shame. And I was already out of my comfort zone posting what little I did today about morality, but I can't even wrap my head around this. OK, enough philosophy for awhile. Time for me to crack open a science book to make myself feel better about all of this.
I'm totally with Sam on this one. He's not saying we can logically prove through philosophical argument that the greatest good for the greatest number is the correct moral theory. He's saying that, regardless, we all more or less live our lives that way and act as though it were fact, and that once we assume it to be true, science can comment on which actions best carry out said goal. The philosophical debate on such topics has raged for centuries and has never been settled, so it seems eventually we're going to need to skip that step, take some basic assumptions that most people share about morality, and move on from there, or we're going to get stuck at step one and never do anyone any good.
I saw his TED talk on this topic and I especially loved it because he mentioned psychology as a science which could answer such questions =) It's always nice to see someone important acknowledge that psychology isn't all about Freudian nonsense.
Comments are closed.