March 29, 2011

  • Is bryangoodrich right that Sam Harris doesn't know morality? (part 2)

    Intro:

    I do not wish to belabor this conversation, since I intend to hit up a number of similar ones in this series.  To clarify what I see that Bryan doesn't seem to be aware of, or isn't considering: 

    Harris isn't attempting to invent new ideas; rather, he's waging a framing war to reclaim morality both from religious conceptions that want to remove moral truth from the realm of the well-being of conscious creatures and from secular conceptions that suppose morality is too relative for any universal appeal. That's a very "Hulk smash" level of argument in principle, as Harris is merely trying to establish that there are correct answers to moral questions (even if it turns out to be impractical to actually know what any given answer is), if only we grant the most obvious and defensible starting assumptions: that everyone being really miserable is bad, and that the direction away from that is "good."  I think Bryan takes that too literally at times and doesn't allow Harrisian morality to claim the category of "all of the above" when it comes to evaluating upward moves on Harris' moral landscape, moves that Harris and I would readily factor into our final assessment of any given question.  Bryan also seems to want to have a more advanced conversation than Harris' current aim calls for.  That's fine in and of itself, and I would hope that kind of thing is around the corner, but it's not the focus right now, as Harris himself explains:

    My goal, both in speaking at conferences like TED and in writing my book, is to start a conversation that a wider audience can engage with and find helpful. Few things would make this goal harder to achieve than for me to speak and write like an academic philosopher. Of course, some discussion of philosophy is unavoidable, but my approach is to generally make an end run around many of the views and conceptual distinctions that make academic discussions of human values so inaccessible. While this is guaranteed to annoy a few people, the prominent philosophers I've consulted seem to understand and support what I am doing.

    So, hopefully with some adjusted expectations we can clear up some of the other difficulties.


    Bryan brings up a good point about the "weighting" of the many important parameters that would go into a "best" picture of human flourishing, but these seem like details only.  I imagine Alonzo Fyfe's "desirism" would likely be the next iteration of complexity (or level of clarification), one which would resolve the weighting issue.  Important desires ("good desires," or "desires that deserve more weight") are the ones that, in principle, tend to satisfy other desires rather than thwart them.

    Most of the things I brought up in terms of the pros and cons of voluntary burqa wearing all seemed about equally important.  And surely, if some things were weighted in particular ways, that would impact the ability of the other desires (or goals) to be properly fulfilled.  So not every weighting scheme will be created equal, but the final test will have to do with its overall impact on the well-being of a conscious agent.  Whatever acceptable "give" exists in the weighting exists only to the extent that it doesn't really matter (in other words, the maximally fulfilled life will entertain all of those considerations reasonably well).  If there are a few different ways to weight the goals without seriously compromising another important aspect of the human capacity to be fulfilled, then those are just different peaks on the moral landscape and do not concern us in principle here. 

    The only real issue remaining in terms of defending that basic starting frame of reference for scientific inquiry is the supposed disconnect between morally relevant issues and the well-being of conscious creatures.  

    In the comments of my previous post to Bryan, he said:

    ...who would say non-brain state enhancing outcomes was not welfare improving?

    Harris would.  I would.  What can you mean by "welfare improving" unless that has some positive impact on brain states?

    Bryan attempts to re-illustrate this divide here:

    Black Americans face many prejudices that the civil rights movement has done a great job diminishing. Nevertheless, black people still live fundamentally different lives due to these cultural norms. They may not experience any adverse events in their life (e.g., being arrested, profiled, or beaten) for being black, but the ultimate quality of their life is different. Sen reveals in Development as Freedom (p. 22) that for all age groups American black male survival rates are substantially lower than whites, and lower than male Chinese or Indian (Kerala), too (data available from the World Health Organization). Do these sort of facts lend themselves to Harris' thesis? I would disagree precisely because these sort of health outcomes are not part and parcel with the sort of directly related cognitive states Harris has in mind.

    Bryan already offered such examples, and I already responded to them.  It appears from his comments on that post that he got thrown off by my one "Ferrari" comment at the expense of the rest of what I said (since he doesn't appear to quote anything else from it). I'll try to repeat myself as little as possible here, but either there is a reason to be morally concerned that relates to the actual mental states of these Black Americans, or there isn't.  If we don't think the lives of these Black Americans can actually be improved in some way (even though Bryan himself says they DIE sooner, as though that doesn't relate to mental states), then why are we obsessed with apparently meaningless differences in their inherited cultural baselines?  Clearly Bryan thinks there is some relevant moral difference: perhaps that their self-reporting doesn't square with the maximum capacities we know are possible, or something along those lines.  "You can get the same mental deal and work less, which will mean you'll live longer."  Harris would add, "So that you can have even more of that mental deal you like."  Right?  Why else would we collectively work to even the playing field and count ourselves morally righteous for the effort?  Why waste our time otherwise? 


    Outro:

    I don't understand how Bryan can hope to successfully argue that the considerations he's talking about fall outside the confines of Harris' conception of the moral landscape.  Any example that attempts to disconnect morality from the well-being of conscious creatures is probably just a trivial conceptual problem that can be resolved with a bit more thought.  I invite more counter-examples, though!

    Ben

Comments (1)

  • There is a fundamental disconnect between saying morality is measured in the improvement of brain states and that welfare is improvements of those states. For instance, a welfare policy can introduce improvements to many people's lives, such as access to food, basic health care, and education. Yet, this does not come freely. This is an issue of distributive justice. Only in the idealized case where one person can be made better off without diminishing another (Pareto optimality) can we hope to say such a welfare policy is an improvement in brain states. However, at most all one can say is that the measures we have of, say, the improvement one gains in life from access to food, basic health care, and education are proxies for improved cognitive states. The linkage is by no means necessary, and without that, it makes no sense to say welfare enhancement is a change in brain states.

    Moreover, how do we measure that with regard to those who have to give up something so others can live better? Maybe the cognitive cost to the rich being taxed more outweighs the cognitive benefits to the many poor who can then live marginally better. You most certainly cannot say it goes one way or the other, yet this is a moral issue we face. This isn't just an issue of weighting the measures; it is an issue of whether the measure is even open to interpersonal comparisons. What can we say? That we can hope there is a quantitative measure we can develop? That's like saying Jane lowering her blood pressure 5 points isn't as good as Joe lowering his 10 points. That assumes the measure is on an absolute scale, normalized between the two people, their situations, and the aim of that change. Harris' moral landscape assumes we can even point in the right direction, that we can make interpersonal comparisons, and that all these proxies are necessarily tied to changes in brain states. Yet, one would be hard pressed to qualify any of that. 
    His notion of a moral landscape makes intuitive sense, no doubt, but it lacks the rigorous foundation needed to contribute anything meaningful to the moral discourse. If, as you claim, Harris is only trying to pull morality away from the hands of theology, then he has gone too far. That argument is quite simple. But that does not seem to be his aim at all. He seems to be trying to contribute to the moral discourse, but frankly there is much literature on the topics he discusses already. From what I have seen Harris argue, his theory of morality is just a reformation of classical utilitarianism that suffers all the same ills it possessed, and most of the problems it faces can only be swept away in the manner you provided: of course all those things are ultimately accounted for by welfare enhancements of conscious human beings. Yet, you claim non-brain-state-enhancing improvements cannot be welfare improvements. You're trying to tie, necessarily, brain states and welfare. These are two fundamentally different things, and improved welfare (human flourishing) is not the same thing as the utilitarian edict Harris advocates. (In fact, as I argued in previous blogs/comments, they are two fundamentally opposed perspectives on morality by the informational constraints they impose, and Sen's Rights and Agency synthesizes that very well.) You say we should be able to take an "all of the above" view of the landscape, but that assumes everything in the "all of the above" points in the same direction, without conflict or misdirection. That is the fundamental problem. You're trying to view morality in terms of a linear metric without qualification. Morality is not so simplistic; even if we think of it in terms of composites (which itself needs qualifying and can be fundamentally in error), there is no reason to think it is linked to welfare in the manner you've argued.
