When making moral decisions, many people rely on intuitions and social conventions. Often, our gut feeling helps us decide whether to act in one way or another. This can be very problematic.
We live in societies where speciesist attitudes and unconcern for sentient beings other than humans are still prevalent. As a result, many people find it intuitive that this should be so, and are often resistant to changing their minds even though there are very powerful reasons to respect all sentient beings.
Something similar happens with other ideas that are relatively new in society, such as the recognition of equal rights for all humans regardless of sex or skin color. In the case of concern for animals exploited by humans, or for animals living in the wild, many people intuitively believe that it is fine to exploit nonhuman animals, even if the harm they suffer is horrific. It is also commonly thought that we should leave animals in nature alone, even though there is overwhelming evidence that many of them suffer very significantly and that we could make a big difference by helping them. Likewise, the idea that we should care not only for the sentient beings who live in the present, but also for those who will live in the future, although perfectly sound, clashes with the intuition many people have that we should care mainly about what happens in the present or the near future.
Are our intuitions concerning these and similar matters compelling? When we reflect more deeply on these judgments, we may conclude that they are not grounded in any sound reasons, but only in an immediate emotional reaction or in unquestioned assumptions that are commonplace in the society we happen to be part of. In fact, many of our gut feelings about the concrete moral decisions we must make are the result of an evolutionary process: their function was to help our ancestors pass their genes on to future generations. Because of this, it is not clear that they can serve as a moral guide to action. We will see more about this below.
The role of “moral intuitions”
Moral intuitions are often appealed to in ethical argument. For example, if a particular ethical theory yields very counterintuitive results in particular cases, this might be held to constitute an objection against it, or even a reason to reject it altogether. On some intuitionist views, the wrongness or rightness of an action can be grasped on the basis of our intuitions about it. This means that something like “moral knowledge” would be possible thanks to our intuitions.
Intuitions can favor objectionable behaviors
Despite this, there are very strong objections to the view that we should simply accept whatever seems initially intuitive to us. One reason for caution is that many moral intuitions held in the past are ones we now recognize as clearly harmful and objectionable, and therefore as intuitions that should not shape the ethical views we hold. For example, many people used to intuitively believe that slavery was morally right. Yet it seems highly implausible that we have reasons to regard slavery as in fact morally right. Moreover, many people today hold discriminatory views, such as racism or sexism, and we would not want to say that their intuitions are leading them to a correct view.
In addition, our intuitions can lead us into significant contradictions. A clear example, very relevant to the case of speciesism, is the one addressed by the argument from species overlap. Many people today believe that all humans should be given similar moral consideration, and that no difference in that respect should be made on the basis of each person’s intelligence. On the other hand, anthropocentric speciesist attitudes are often defended by claiming that we should give special moral consideration only to those who have complex cognitive capacities. But this view contradicts the previous one. The two views cannot be held together: we either have to give up at least one of them or be inconsistent.1
In light of this, we can take a different attitude towards moral intuitions. We will now examine two positions that have been held about them.
Reflective equilibrium
The approach of reflective equilibrium allows for the moral relevance of general views or principles, and therefore avoids the objections to the claim that intuitions must always be accepted. What happens if these general principles (such as the ones mentioned above) clash with our intuitions at a more concrete level? We may reach a point of balance between the two, for instance by modifying the principles we regard as valid in order to accommodate our intuitions in particular cases. The goal would be to achieve coherence between the principles we believe are reasonable and at least some of our gut feelings. Suppose, for instance, that we think we should reduce suffering as much as possible. But suppose that this means that in a certain case we can’t aid someone who needs our help, because if we do so we won’t be able to aid many others elsewhere. This may be very counterintuitive to us, because we are right now in front of someone who needs help. In such cases, the method of reflective equilibrium suggests that we should be ready to change either our views about the general principles or our attitudes towards the particular intuitions we have.2 We may conclude that there are things more important than reducing suffering in general, or that we should not follow the particular intuition we have in this case.
Consider another possible clash between principles and intuitions. Say I take “we should minimize suffering” to be a valid principle on which to base my ethics, and say I also have the intuition that eating animals is morally permissible. There is a clash between the two, given that the suffering inflicted on animals in farms and slaughterhouses is pointless (humans do not need animal products in order to have nutritious, balanced diets that give them pleasure), and that by eating animals I am contributing to it. If we accept this, then according to the reflective equilibrium view I have two options. I can modify the original principle, for example into “we should minimize suffering in humans only.” Of course, this caveat cannot be added gratuitously; arguments must be found in its defense. If those arguments can be rebutted, however, we can only achieve coherence by giving up our initial intuition that eating animals is morally permissible.
Appealing to more significant moral views
According to a second view we may also hold, particular intuitions about concrete cases should not be considered reliable even to the extent that the method of reflective equilibrium concedes. Instead, we should adopt a broader view in ethics that can be used to determine which particular actions are right or wrong. Examples of such broader views would be “we should give priority to the worse off” or “we should behave towards others as we would like others to behave towards us.” This challenges the reflective equilibrium approach, which grants that intuitions are morally relevant in certain cases.3
Evolution and moral intuitions
There is a strong argument against the idea that we should accept our intuitions about particular cases when they clash with other, more important views we hold. Evolutionary theory provides a powerful explanation of our moral intuitions. These are not exclusive to human animals, but can also be found in other species. Many monkeys, for instance, present their backs to other monkeys so that the others can pick out their lice. If a monkey who has been groomed fails to return the favor, it will likely be attacked. It seems, then, that the best way for these animals to thrive is by recognizing the monkeys who cheat and refusing to help them in the future. This kind of reciprocal behavior has been found in birds and mammals, and evolutionary theorists believe human morality stemmed from reciprocal practices too, such as the grooming in the example. Given our use of language, we became able to refer to reciprocity with the concept of “right” and to non-reciprocal behavior with the concept of “wrong.”
Experiments conducted by Joshua Greene at Princeton University using functional magnetic resonance imaging (fMRI) support this theory.4 The participants in the experiments were presented with two different versions of what is known among philosophers as “the trolley problem.” In the first scenario, a trolley will kill five people who are tied to the track unless you throw a switch that diverts it onto a track where one person is tied. That one person will be killed if you divert the trolley, which means you will save five lives and sacrifice one. Most people said they would throw the switch. In the second scenario, you are on a bridge and see that the trolley will kill five people tied to the track. You consider jumping in order to stop the trolley and save the five, but you are too light to stop it. However, there is a very large man beside you who would stop the trolley if you pushed him off the bridge. If you did that, you would also be saving five people and sacrificing one. In this case, however, most people said they would not push the stranger off the bridge. When people were presented with the latter scenario, the fMRI scan showed increased activity in areas of the brain associated with emotion. This happened whenever participants were presented with “personal” violations, such as pushing a stranger off a bridge or directly harming someone. There was no such reaction when they were presented with “impersonal” violations, such as throwing a switch that causes a stranger’s death.
The different reactions to the two scenarios can be explained in terms of evolutionary advantage. The possibility of killing or harming someone simply by throwing a switch has only existed for a very short time. For most of human history we lived in small groups, and the only way to harm someone was directly: by hitting them, killing them, failing to reciprocate or, crucially, pushing them off a bridge. Those who learned to identify harmful individuals more quickly were at an advantage over others. Just like the monkeys in the example, if we identify someone who often harms others or fails to reciprocate, we will refuse to help or befriend them, because doing so would not benefit us and could even endanger us. Immediate negative emotional reactions towards instances of personal harm helped us identify wrongdoers and stay away from them. But while direct, “personal” harm has shaped our inherited pattern of emotional response, killing or harming someone impersonally (e.g., by throwing a switch) has not existed for long enough to have such an impact. This explains why we have strong emotional reactions to instances of “personal” harm, while we remain unaffected in cases of impersonal harm with the exact same consequences.
Ethics can’t be justified by evolutionary advantage
The fact that something is conducive to the survival of our genes, or more generally that it helps preserve our group, is not relevant to the wrongness or rightness of an action. For example, violence towards other groups and selfishness have been useful in the past for protecting our own genes, but this does not mean they are right or justified. Killing members of other groups, or competing in order to have more resources for one’s own group or family, has clearly been beneficial in the past; once again, this does not make it morally good. Consider slavery, which we have already discussed above. Slavery obviously benefited slave owners and their families. This explains slavery from an evolutionary perspective, but it does not morally justify it. The same happens with speciesism: it benefits our group, but this does not make it right.
If moral intuitions are just the result of an evolutionary process, shaped by what is useful for evolutionary fitness rather than by what is morally consistent, then we have reasons to reject the claim that what we should do in every situation requiring a moral decision must be determined by what our intuitions drive us to think. Rather, we should reflect in order to see what the most consistent and sound course of action is.
We shouldn’t trust our intuitions favoring speciesist attitudes
This is so in the case of speciesism just as in other moral issues.5 Many people surely have a strong intuition that there is something special about being human that draws a moral distinction between those classified within the species Homo sapiens and all other sentient beings. Many people also have the intuition that it is perfectly justified to use animals as we see fit. Others have the intuition that we have no reason to aid animals in need of help in the wild. There are, however, good arguments against what these intuitions suggest. The reasons presented above show that we should consider what those arguments tell us, and not our biased intuitions supporting speciesist attitudes.
Andow, J. (2016) “Reliable but not home free? What framing effects mean for moral intuitions”, Philosophical Psychology, 29, pp. 904-911 [accessed on 25 March 2017].
Bedke, M. S. (2008) “Ethical intuitions: What they are, what they are not, and how they justify”, American Philosophical Quarterly, 45, pp. 253-270 [accessed on 25 March 2017].
Bedke, M. S. (2010) “Intuitional epistemology in ethics”, Philosophy Compass, 5, pp. 1069-1083.
Bengson, J. (2013) “Experimental attacks on intuitions and answers”, Philosophy and Phenomenological Research, 86, pp. 495-532.
Braddock, M. (2016) “Evolutionary debunking: Can moral realists explain the reliability of our moral judgments?”, Philosophical Psychology, 29, pp. 844-857.
Cappelen, H. (2012) Philosophy without intuitions, Oxford: Oxford University Press.
Greene, J. D. (2013) Moral tribes: Emotion, reason, and the gap between us and them, New York: Penguin.
Lillehammer, H. (2011) “The epistemology of ethical intuitions”, Philosophy, 86, pp. 175-200.
McMahan, J. (2005) “Our fellow creatures”, Journal of Ethics, 9, pp. 353-380.
McMahan, J. (2010) “Moral intuition”, in LaFollette, H. (ed.) The Blackwell guide to ethical theory, Malden: Blackwell, pp. 92-110.
Nagel, J. (2012) “Intuitions and experiments: A defense of the case method in epistemology”, Philosophy and Phenomenological Research, 85, pp. 495-527.
Ross, W. D. (2002) The right and the good, Oxford: Clarendon.
Sencerz, S. (1986) “Moral intuitions and justification in ethics”, Philosophical Studies: An International Journal for Philosophy in the Analytic Traditions, 50, pp. 77-95.
Singer, P. (2004) “Ethics beyond species and beyond instincts: A response to Richard Posner”, in Sunstein, C. & Nussbaum, M. (eds.) Animal rights: Current debates and new directions, New York: Oxford University Press, pp. 78-92.
Singer, P. (2005) “Ethics and intuitions”, The Journal of Ethics, 9, pp. 331-352.
Sinnott-Armstrong, W.; Young, L. & Cushman, F. (2010) “Moral intuitions”, in Doris, J. M. (ed.) The moral psychology handbook, Oxford: Oxford University Press, pp. 246-272.
Sosa, E. (2007) “Experimental philosophy and philosophical intuition”, Philosophical Studies, 132, pp. 99-107.
Stratton-Lake, P. (ed.) (2002) Ethical intuitionism: Re-evaluations, Oxford: Oxford University Press.
Street, S. (2006) “A Darwinian dilemma for realist theories of value”, Philosophical Studies, 127, pp. 109-166.
Tersman, F. (2008) “The reliability of moral intuitions: A challenge from neuroscience”, Australasian Journal of Philosophy, 86, pp. 389-405.
Woodward, J. & Allman, J. (2007) “Moral intuition: Its neural substrates and normative significance”, Journal of Physiology (Paris), 101, pp. 179-202.
1 Pluhar, E. B. (1995) Beyond prejudice: The moral significance of human and nonhuman animals, Durham: Duke University Press. Ehnert, J. (2002) The argument from species overlap, master’s thesis, Blacksburg: Virginia Polytechnic Institute and State University [accessed on 13 December 2013]. Horta, O. (2014) “The scope of the argument from species overlap”, Journal of Applied Philosophy, 31, pp. 142-154 [accessed on 25 October 2014].
2 Rawls, J. (1951) “Outline of a decision procedure for ethics”, Philosophical Review, 60, pp. 177-197; (1999) A theory of justice, rev. ed., Cambridge: Harvard University Press. See also Daniels, N. (1996) Justice and justification: Reflective equilibrium in theory and practice, Cambridge: Cambridge University Press.
3 Singer, P. (1974) “Sidgwick and reflective equilibrium”, The Monist, 58, pp. 490-517.
4 Greene, J. D.; Sommerville, R. B.; Nystrom, L. E.; Darley, J. M. & Cohen, J. D. (2001) “An fMRI investigation of emotional engagement in moral judgment”, Science, 293, pp. 2105-2108. See on this also Foot, P. (1967) “The problem of abortion and the doctrine of double effect”, Oxford Review, 5, pp. 5-15 [accessed on 25 March 2017]. Thomson, J. J. (1976) “Killing, letting die, and the trolley problem”, The Monist, 59, pp. 204-217; (1985) “The trolley problem”, Yale Law Journal, 94, pp. 1395-1415. Unger, P. (1996) Living high and letting die, Oxford: Oxford University Press.
5 We must also note that in the case of speciesism we are biased due to our interest in justifying the use of nonhuman animals. See Bastian, B.; Loughnan, S.; Haslam, N. & Radke, H. R. (2012) “Don’t mind meat? The denial of mind to animals used for human consumption”, Personality and Social Psychology Bulletin, 38, pp. 247-256.