The importance of the future

Suppose we could compare two whole histories of the world lasting from now until the end of time. Which one occurs would depend on which course of action we decided to follow. If our aim is to make the world the best possible place for all sentient beings, the question is: which of the two courses of action would bring about the better outcome for sentient beings from now until the end of time?

Temporal biases

It often happens, however, that animal advocates prefer a certain strategy over the alternatives primarily on the basis of its expected impact on the animals who are living in the present or will live in the immediate future. That is, they are not estimating which course of action would lead to the best whole history.

This occurs because we have a tendency to consider what will happen immediately to be more important than what will happen further in the future. As a result, the interests of those who will live later end up being considered less important, or are not considered at all.

Is this view correct? The fact is that sentient animals don’t suffer more or less because of the year or century they live in. The harms they suffer are as real for those who died in 2018 as they were for those who died in 1978. And they will be as real as the harms suffered by those who will die in 2058.1

This differential attitude towards someone’s interests depending on time is an instance of a cognitive bias. It’s a type of temporal bias. Temporal biases affect our appraisal of the importance that something had, has, or will have because of the time of its occurrence.

It may be argued, of course, that the time at which something good or bad happens may be relevant if it is something that causes other good or bad things to take place afterwards. If a bad event makes things worse from the moment it happens, then it’s better if it takes place as late as possible. Events like that would be part of the evidence considered in determining what would be the best whole history. However, apart from such considerations, the fact that something happens on Tuesday or Thursday, or in one century or another, is irrelevant when assessing how good or bad it is.

Another objection to considering the interests of future animals equally is that we are certain that animals are in need of help today, while we don’t know what will happen in the future. This objection, however, is not convincing. We have many reasons to think that in the future there will be sentient beings in need of help too. The odds of this are extremely high, approaching 100%.

It could also be objected that we can usually make better guesses about what will happen in the near future than about what will happen further in the future. This is correct, but it doesn’t make any difference concerning the importance of what will happen at each moment. It’s only a difference in how easy or hard it is to predict what will happen. Our decisions regarding what to do need to be made on the basis of the impact they’re likely to have. It’s a mistake to make them on the basis of how easy or hard it is to evaluate that impact.

One reason temporal bias can be so powerful is that a smaller, certain impact may be easier to assess than a larger, uncertain one. Suppose I have to choose between the certainty of saving 3 animals and a high probability of saving 10,000 animals. Suppose, however, that it would be difficult to estimate that high probability precisely. It still seems clear that the second alternative is better.2
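As a rough illustration (the 90% figure below is an assumption made purely for the sake of the example, not an estimate from this text), the comparison can be put in expected-value terms:

\[
\mathbb{E}[\text{certain option}] = 1.0 \times 3 = 3, \qquad \mathbb{E}[\text{uncertain option}] = 0.9 \times 10{,}000 = 9{,}000
\]

Even if the probability could only be pinned down very roughly, say anywhere between 50% and 99%, the expected number of animals saved by the second option would still be more than a thousand times greater than that of the first.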

Why the future matters a lot

There’s a further crucial point to take into account. The way we act now can affect, for better or for worse, how the lives of future sentient beings go. The future will last for a very long time. This seems to be a trivial statement, but it has an extremely important consequence that many people appear to overlook. The odds are that there will exist sentient beings for a very long time.3 That is, not just in the near future, but also in the far future. This means that in the future there will be many more sentient beings than in the present. What “many more” means here is many orders of magnitude more (that is, a huge difference on a scale that is difficult to conceive of).

In light of this, the attitude of caring only or preferentially for beings existing in the present or the near future seems clearly unjustified. Our strategy in defense of sentient beings should be equally concerned with all sentient beings and with all the harms or benefits they could experience. This means that considering the future should be extremely important in determining the courses of action we could follow.

Risks of future suffering (“s-risks”)

There are significant risks that in the future there will be situations in which many sentient beings will suffer. In fact, there are risks that their suffering may increase at a high rate from now on and even reach astronomical levels. These are known in the literature as “suffering risks” or, for short, “s-risks.”4 There are risks that significant amounts of suffering will be created whenever three conditions are jointly met:

(i) new technologies are developed whose use can very negatively affect a large number of sentient beings;

(ii) those who have control over these technologies have an interest in using them; and

(iii) those controlling these technologies don’t care about what happens to the beings who will suffer due to their use.

One historical example is the development of factory farming in the case of nonhuman animals; another is the development of new weaponry technologies in the case of both humans and nonhuman animals. It would be naïve to think that episodes of this kind will soon be things of the past, and that no other scenario causing huge amounts of suffering will take place in the future.

This is especially important given that many people today still discriminate against nonhuman animals. Many of them think only human interests matter significantly. As long as this speciesist attitude remains, and humans don’t give much thought to what happens to most other sentient animals, there will be an extremely high risk that animals end up suffering massively in the future. This is very worrying, but it shouldn’t surprise us. It’s perfectly possible that in the future human beings will develop new technologies that are harmful to nonhuman sentient beings but beneficial to humans. Because of speciesist attitudes, there is a significant risk that the development of those technologies will bring about scenarios filled with suffering, even to a greater extent than today. The importance of changing these attitudes goes well beyond the interests of the animals who exist now or will exist in the near future.

There is also the possibility that the situation may not be as bad as s-risks seem to indicate, and that in some respects at least, the future might even be better than the present. For instance, it has been argued that a very large number of animals may cease to be brought into existence just to be exploited and killed, thanks to the development of synthetic alternatives to animal exploitation, such as in vitro meat. Still, suppose that in vitro meat leads to a significant reduction in the number of mammals and birds who are exploited (though it won’t mean the end of their exploitation). Other forms of animal exploitation are likely to expand, so that the overall number of animals we can expect to be made to suffer increases rather than decreases in the future. One such form of exploitation is fish farming. It’s possible that the total number of fishes exploited by this practice may also be reduced by the eventual development of in vitro fish flesh. However, other forms of farming may be developed that outweigh those numbers, and it’s less likely that they will be replaced by synthetic alternatives. They include aquatic farms where other animals are raised in captivity (especially small crustaceans), as well as insect farming driven by the development of different types of foods made with insects.

There’s also a significant risk of increasing the total amount of wild animal suffering. This can happen in two ways. One is by increasing the amount of suffering that is present in existing wild areas. Another way is by spreading wild animal suffering to other areas.

Finally, the development of new forms of sentience that may suffer significantly is a very real, if often overlooked, risk. While the level of uncertainty regarding how this may happen is high, the chances that it could happen in the future are significant.5 People have a tendency to dismiss such considerations on the grounds that they are too speculative. However, for the reasons explained above, that goes against what basic principles of rational decision theory indicate. It’s a case of evaluability bias, in which we make our decisions not on the basis of what’s important but on the basis of what is easy to assess. When it comes to what will happen in the future, these two things (what’s important and what’s easy to assess) are very different, and it’s a big mistake to make our decisions on the basis of the latter rather than the former.

Shifting the future

Even if it’s hard to guess accurately how the far future will be if we act in one way or another, we can still make some reasonable estimations based on current evidence of how enduring societal changes come about. For instance, it seems likely that challenging speciesism and promoting the relevance of sentience for moral consideration will have a positive impact on the ways sentient beings of all kinds are treated in the future. The same can be said of campaigns aimed at raising awareness about the risk of future suffering.

Measures aimed at achieving small changes for animals right now might not have a similar impact (and having that impact is not their aim). Some may lead to further incremental changes that end up having a major positive impact on the future. Some may have no impact on the future. Some may have little impact even on the near future, such as hard-won legislation that can be easily overturned or is nearly impossible to enforce. Others, however, may bring about a change in the attitudes of many people, in a way that could have a significant future impact. The spread of antispeciesism could lead to greater concern for sentient beings different from ourselves, making it easier in the future to stop the development of a technology that could cause sentient beings to suffer. The key here is that different measures can have radically different impacts, and it’s vital that we try to assess what those impacts will be.

Even if we’re unable to determine in any specific or certain way what the future will be like, we can still estimate whether a certain course of action would be more likely than others to bring about better rather than worse situations. And this is what matters when it comes to choosing one strategy over another.

We can’t know for sure what will work best, but as we have seen, the way to make rational decisions is not on the basis of what we know for sure. In fact, we know for sure very few things, if any. Rational decisions are made on the basis of what we can reasonably expect given the available evidence and the correct inferences we can make.

Another point to consider is that there are different ways we might want to affect the future. Some courses of action could influence the future in broad ways, while others would have a narrower but more concrete future impact. For instance, changing the attitudes people have towards discrimination in general can have a broader impact than producing a research method that will make animal experiments in a certain field unnecessary. Typically, the former will have a higher probability of success but a potentially less concrete impact than the latter. Whether to choose a broader or a more targeted approach will depend on the opportunities we have to impact the future. In order to learn about such opportunities, we first need to be aware of the importance of considering outcomes that we can’t and will never see.

It is therefore very important to raise awareness right now of the need to consider the future impact of our actions in defense of sentient beings.


Further readings

Althaus, D. & Gloor, L. (2019 [2016]) “Reducing risks of astronomical suffering: A neglected priority”, Center on Long-Term Risk, August [accessed on 14 September 2019].

Bailey, J. M. (2014) An argument against the person-affecting view of wrongness, Master’s thesis, Boulder: University of Colorado [accessed on 26 August 2018].

Boonin, D. (2014) The non-identity problem and the ethics of future people, Oxford: Oxford University Press.

Gloor, L. (2019 [2016]) “The case for suffering-focused ethics”, Center on Long-Term Risk, August [accessed on 25 April 2020].

Mayerfeld, J. (2002) Suffering and moral responsibility, Oxford: Oxford University Press.

Roberts, M. & Wasserman, D. (eds.) (2009) Harming future persons: Ethics, genetics and the nonidentity problem, Dordrecht: Springer.

Sotala, K. & Gloor, L. (2017) “Superintelligence as a cause or cure for risks of astronomical suffering”, Informatica: An International Journal of Computing and Informatics, 41, pp. 389 [accessed on 15 May 2018].

Tomasik, B. (2019 [2011]) “Risks of astronomical future suffering”, Center on Long-Term Risk, 02 Jul [accessed on 20 June 2019].


Notes

1 See Parfit, D. (1984) Reasons and persons, Oxford: Oxford University Press.

2 In addition, we are often too pessimistic when it comes to considering our capacity to estimate speculative odds and amounts. On this see Hubbard, D. W. (2010) How to measure anything, Hoboken: Wiley.

3 Although it doesn’t address the possibility of future suffering by nonhuman beings and doesn’t see this as an important issue, this work presents the case for the importance of considering the future: Beckstead, N. (2013) On the overwhelming importance of shaping the far future, PhD dissertation, New Brunswick: Rutgers University [accessed on 22 June 2018].

4 See Baumann, T. (2017) “S-risks: An introduction”, Reducing Risks of Future Suffering [accessed on 30 June 2018]. Daniel, M. (2017) “S-risks: Why they are the worst existential risks, and how to prevent them”, Center on Long-Term Risk, 20 June [accessed on 16 April 2020].

5 Even animal advocates often view this with skepticism or think it’s not an important issue, even though the odds that artificial forms of sentience will be developed in the future are in fact very high. On this see Mannino, A.; Althaus, D.; Erhardt, J.; Gloor, L.; Hutter, A. & Metzinger, T. (2015) “Artificial intelligence: Opportunities and risks”, Center on Long-Term Risk, p. 9 [accessed on 23 April 2018].