Moral intuitions and biases

When making moral decisions, many people rely on intuitions and social conventions. Often, our gut feeling helps us decide whether to act in one way or another. This can be problematic.

We live in societies where speciesist attitudes and indifference toward nonhuman sentient beings are still prevalent. As a result, many people find this disregard intuitive and are resistant to changing their minds, even though there are very compelling reasons to respect all sentient beings. Something similar happens with other ideas that are relatively new in society, such as the recognition of equal rights for all humans regardless of sex or skin color.

Many people have the intuition that it is fine to exploit nonhuman animals, even if the harm they suffer is horrific. Many people also think that we should leave animals in nature alone, even though there is a great deal of evidence showing that they suffer a lot and that we could make a big difference by helping them. Furthermore, the idea that we should care not only about sentient beings who live in the present but also about those who will live in the future clashes with the intuition many people have that we should care mainly about what happens in the present or the near future.

Are our intuitions concerning these and similar matters compelling? When we reflect more deeply on these decisions, we may conclude that they are not grounded in anything other than an immediate emotional reaction or unquestioned assumptions that are commonplace in the society we happen to be part of. In fact, many of our gut feelings about the concrete moral decisions we make are the result of an evolutionary process: they evolved to help our ancestors pass their genes on to future generations. Because of this, it is not clear that they can serve as a reliable moral guide to action. We will see more about this below.

The role of “moral intuitions”

Appeals are often made to moral intuitions. For example, if an ethical theory yields counterintuitive results in particular cases, this might be used as an objection against it or as a reason to reject it altogether. According to some intuitionist views, the wrongness or rightness of an action can be decided on the basis of our intuitions towards it. This would mean that something like “moral knowledge” is possible thanks to our intuitions.

Intuitions can favor objectionable behaviors

However, there are many reasons to be cautious about relying on our initial intuitions. One is that many moral intuitions that were widely held in the past are now recognized as clearly harmful and objectionable. For example, many people used to intuitively believe that slavery was morally right. However, it is highly implausible that slavery was in fact morally right. Moreover, many people today hold discriminatory views, such as racism or sexism, and we would not want to say that their intuitions are leading them to a view that is right.

In addition, our intuitions lead us to significant contradictions. A clear example relevant to speciesism is addressed by the argument from species overlap. Many people today believe that all humans should be given similar moral consideration and that no difference should be made on the basis of each person’s intelligence. However, anthropocentric speciesist attitudes are often defended by claiming that we should give special moral consideration only to those who have complex cognitive capacities. These two views contradict each other: they cannot both be held, so we must give up at least one of them or be inconsistent.1

In light of this, we can reconsider our views about moral intuitions. We will now look at two positions that have been held about them.

Reflective equilibrium

The approach of reflective equilibrium consists in reaching a balance between our intuitions about general principles and our intuitions about what to do in specific situations. We can reach this balance by modifying the principles we regard as valid in order to accommodate our intuitions in particular cases. The goal is to achieve coherence between the principles we believe are reasonable and at least some of our intuitions concerning particular cases. Suppose, for instance, that we think we should reduce suffering as much as possible, but that in a certain case this means we cannot aid someone who needs our help, because doing so would prevent us from aiding many others elsewhere. This may be very counterintuitive to us, because the person we fail to help is right in front of us, while the others are not. In this case, the method of reflective equilibrium suggests that we should be ready to change either our views about the general principles or our attitudes towards the particular intuitions we have.2 We could conclude that there are things more important than reducing suffering in general, or that we should not follow the particular intuition we have in this case.

Consider a possible clash between principles and intuitions. Suppose I take “we should minimize suffering” to be a valid principle on which to base my ethics, and that I also have the intuition that eating animals is morally permissible. There is a clash between the two, given that the suffering inflicted on animals in farms and slaughterhouses is in fact pointless (humans do not need animal products in order to have nutritious, balanced diets that give them pleasure), and by eating animals I am contributing to it. If we accept this, then according to the reflective equilibrium view I have two options. I can modify the original principle, for example by restricting it to “we should minimize pointless suffering in humans only.” Of course, this caveat cannot be added gratuitously; arguments must be found in its defense. If those arguments can be rebutted, however, we can only achieve coherence by giving up our initial intuition that eating animals is morally permissible.

Appealing to more relevant moral views

According to a second view, our intuitions about particular situations should not be considered reliable, even to the limited extent that the method of reflective equilibrium concedes. Instead, we should adopt a broader view in ethics that can be used to determine which particular actions are right or wrong. Examples of such broader views would be “we should give priority to the worse off,” or “we should behave towards others as we would like others to behave towards us.”3

Evolution and moral intuitions

There is a strong argument against the idea that we should accept our intuitions about particular cases when they clash with other, more important views we hold. Evolutionary theory provides a powerful explanation of our moral intuitions. These intuitions are not exclusive to human animals; other animals have them as well.

Experiments conducted by Joshua Greene and colleagues at Princeton University using functional magnetic resonance imaging (fMRI) support this theory.4 The participants in the experiment were presented with two different versions of what is known among philosophers as “the trolley problem.” In the first scenario, a runaway trolley will kill five people tied to the track unless a switch is activated that diverts the trolley onto a different track, where only one person is tied. This one person would be killed, but five lives would be saved. Most people in the experiment said they would activate the switch. In the second scenario, a person standing on a bridge sees that the trolley will kill five people tied to the track. They might consider jumping onto the track to stop the trolley and save the five people, but they are much too light to stop it. However, there is a very large man next to them who could stop the trolley if pushed off the bridge. Doing so would save five people while sacrificing one. Most people in the experiment said they would not push the stranger off the bridge.

When participants were presented with the second scenario, the fMRI scans showed increased activity in areas of the brain associated with emotion. This happened every time participants were presented with “personal” violations, such as pushing a stranger off a bridge or directly harming someone. There was no such reaction when they were presented with “impersonal” violations, such as activating a switch that causes a stranger’s death.

The different reactions to the two scenarios can be explained in terms of evolutionary advantage. The possibility of killing or harming someone simply by activating a switch has existed only for a very short time. For most of human history, we have lived in small groups, and the only way to harm someone was direct and personal, for example by hitting, pushing, or killing them. So while direct, “personal” harm has shaped our inherited pattern of emotional response, killing or harming someone impersonally (e.g., by throwing a switch) has not existed long enough to have such an impact. This explains why we have strong emotional reactions to instances of “personal” harm while we remain unaffected in cases of impersonal harm that have exactly the same consequences.

Ethics can’t be well justified by evolutionary advantage

The fact that something is conducive to the survival of our genes, or more generally that it helps preserve our group, is not relevant to the wrongness or rightness of an action. For example, killing members of other groups, or competing with them for resources, has clearly been beneficial in the past for the continued existence of one’s own group, but this does not make it morally correct.

If moral intuitions are just the result of an evolutionary process, shaped by what is useful to evolutionary fitness rather than by what is morally consistent, then we have reasons to reject the claim that moral decisions must be based on our intuitions. Rather, we should reflect on which course of action is the most consistent and sound one to follow.

We shouldn’t trust our intuitions favoring speciesist attitudes

This is the case with speciesism just as it is with other moral issues.5 Many people surely have a strong intuition that there is something special about being human that draws a moral distinction between those classified as Homo sapiens and all other sentient beings. Many people also have the intuition that it is justified to use animals as we see fit, and others have the intuition that we have no reason to aid animals in need of help in the wild. There are, however, good arguments against what these intuitions suggest. The arguments presented above show that we should consider what is morally consistent instead of relying on biased intuitions that support speciesist attitudes.


Further readings

Andow, J. (2016) “Reliable but not home free? What framing effects mean for moral intuitions”, Philosophical Psychology, 29, pp. 904-911.

Bedke, M. S. (2008) “Ethical intuitions: What they are, what they are not, and how they justify”, American Philosophical Quarterly, 45, pp. 253-270.

Bedke, M. S. (2010) “Intuitional epistemology in ethics”, Philosophy Compass, 5, pp. 1069-1083.

Bengson, J. (2013) “Experimental attacks on intuitions and answers”, Philosophy and Phenomenological Research, 86, pp. 495-532.

Braddock, M. (2016) “Evolutionary debunking: Can moral realists explain the reliability of our moral judgments?”, Philosophical Psychology, 29, pp. 844-857.

Cappelen, H. (2012) Philosophy without intuitions, Oxford: Oxford University Press.

Caviola, L.; Everett, J. A. & Faber, N. S. (2019) “The moral standing of animals: Towards a psychology of speciesism”, Journal of Personality and Social Psychology, 116, pp. 1011-1029.

Greene, J. D. (2013) Moral tribes: Emotion, reason, and the gap between us and them, New York: Penguin.

Lillehammer, H. (2011) “The epistemology of ethical intuitions”, Philosophy, 86, pp. 175-200.

McMahan, J. (2005) “Our fellow creatures”, Journal of Ethics, 9, pp. 353-380.

McMahan, J. (2010) “Moral intuition”, in LaFollette, H. (ed.) The Blackwell guide to ethical theory, Malden: Blackwell, pp. 92-110.

Nagel, J. (2012) “Intuitions and experiments: A defense of the case method in epistemology”, Philosophy and Phenomenological Research, 85, pp. 495-527.

Ross, W. D. (2002 [1930]) The right and the good, Oxford: Clarendon.

Sencerz, S. (1986) “Moral intuitions and justification in ethics”, Philosophical Studies, 50, pp. 77-95.

Singer, P. (2004) “Ethics beyond species and beyond instincts: A response to Richard Posner”, in Sunstein, C. & Nussbaum, M. (eds.) Animal rights: Current debates and new directions, New York: Oxford University Press, pp. 78-92.

Singer, P. (2005) “Ethics and intuitions”, The Journal of Ethics, 9, pp. 331-352.

Sinnott-Armstrong, W.; Young, L. & Cushman, F. (2010) “Moral intuitions”, in Doris, J. M. (ed.) The moral psychology handbook, Oxford: Oxford University Press, pp. 246-272.

Sosa, E. (2007) “Experimental philosophy and philosophical intuition”, Philosophical Studies, 132, pp. 99-107.

Stratton-Lake, P. (ed.) (2002) Ethical intuitionism: Re-evaluations, Oxford: Oxford University Press.

Street, S. (2006) “A Darwinian dilemma for realist theories of value”, Philosophical Studies, 127, pp. 109-166.

Tersman, F. (2008) “The reliability of moral intuitions: A challenge from neuroscience”, Australasian Journal of Philosophy, 86, pp. 389-405.

Woodward, J. & Allman, J. (2007) “Moral intuition: Its neural substrates and normative significance”, Journal of Physiology (Paris), 101, pp. 179-202.


Notes

1 Pluhar, E. B. (1995) Beyond prejudice: The moral significance of human and nonhuman animals, Durham: Duke University Press. Ehnert, J. (2002) The argument from species overlap, master’s thesis, Blacksburg: Virginia Polytechnic Institute and State University [accessed on 23 August 2018]. Horta, O. (2014) “The scope of the argument from species overlap”, Journal of Applied Philosophy, 31, pp. 142-154 [accessed on 25 October 2014].

2 Rawls, J. (1951) “Outline of a decision procedure for ethics”, Philosophical Review, 60, pp. 177-197; (1999 [1971]) A theory of justice, rev. ed., Cambridge: Harvard University Press. See also Daniels, N. (1996) Justice and justification: Reflective equilibrium in theory and practice, Cambridge: Cambridge University Press.

3 Singer, P. (1974) “Sidgwick and reflective equilibrium”, The Monist, 58, pp. 490-517.

4 Greene, J. D.; Sommerville, R. B.; Nystrom, L. E.; Darley, J. M. & Cohen, J. D. (2001) “An fMRI investigation of emotional engagement in moral judgment”, Science, 293, pp. 2105-2108. See on this also Foot, P. (1967) “The problem of abortion and the doctrine of double effect”, Oxford Review, 5, pp. 5-15 [accessed on 25 March 2017]. Thomson, J. J. (1976) “Killing, letting die, and the trolley problem”, The Monist, 59, pp. 204-217; (1985) “The trolley problem”, Yale Law Journal, 94, pp. 1395-1415. Unger, P. (1996) Living high and letting die, Oxford: Oxford University Press.

5 We must also note that in the case of speciesism we are biased due to our interest in justifying the use of nonhuman animals, see Bastian, B.; Loughnan, S.; Haslam, N. & Radke, H. R. (2012) “Don’t mind meat? The denial of mind to animals used for human consumption”, Personality and Social Psychology Bulletin, 38, pp. 247-256. Jaquet, F. (2021) “A debunking argument against speciesism”, Synthese, 198, pp. 1011-1027; (2022) “Speciesism and tribalism: Embarrassing origins”, Philosophical Studies, 179, pp. 933-954.