ETHICS, COMPUTING, AND AI | PERSPECTIVES FROM MIT
The Tools of Moral Philosophy | Caspar Hare and Kieran Setiya
“Framing a discussion of the risks of advanced technology entirely in terms of ethics suggests that the problems raised are ones that can and should be solved by individual action. In fact, many of the challenges presented by computer science will prove difficult to address without systemic change.”
— Caspar Hare and Kieran Setiya, MIT Professors of Philosophy
Caspar Hare is a professor of philosophy in MIT’s Department of Linguistics and Philosophy. He has written numerous articles on ethics, metaphysics, and practical rationality, and is the author of two books: The Limits of Kindness (Oxford University Press, 2013) and On Myself and Other, Less Important Subjects (Princeton University Press, 2009).
Kieran Setiya is a professor of philosophy at MIT, where he works on ethics and on related questions about human agency and human knowledge. He is the author of Reasons without Rationalism (Princeton University Press, 2007) and Knowing Right From Wrong (Oxford University Press, 2012), and most recently, of a self-help book, Midlife: A Philosophical Guide (Princeton University Press, 2017).
• • •
For the “good of all”
What obligations do individuals, businesses, educational institutions, and governments have to make sure that advanced technologies reflect core human values? What can we in the MIT community do to ensure that applications that emerge from our research fulfill President L. Rafael Reif’s charge to work for the “good of all”? How can the Institute’s research in computing and artificial intelligence (AI) meet the highest ethical standards? Do such standards even exist? And if not, how can we help shape them?
Because such questions — which will be central to the work of MIT’s new Schwarzman College of Computing — have few ready or easy answers, and because the word “ethics” is used with varying degrees of precision, SHASS Communications spoke with MIT philosophers Caspar Hare and Kieran Setiya to better understand the term, as it is used in moral philosophy. The following commentary is based on this conversation.
Q: What guidance does philosophy provide in addressing questions of ethics?
In a broad sense, ethics is concerned with “normative” questions about how individuals, organizations, and societies should act. (These normative questions contrast with descriptive questions about how they do act.) In a narrower usage, ethics focuses specifically on questions of individual behavior. Essentially, philosophers who work in ethics provide systematic approaches to the question “What should I do?”
This area of scholarship centers on the obligations of individuals to one another — moral obligations — as well as their obligations to themselves. The goal is to establish general ways of thinking about ethical issues that can then be applied to specific cases. For example, moral philosophers are less likely to start with the question of whether an individual should take a particular tax deduction than with the larger questions of whether and when it is ethical to cheat, or why we should obey the law.
More than just opinion
Some people doubt that there are objective answers in ethics, but people face ethical questions every day, and philosophy provides a disciplined, productive way to think about them and to establish principles by which actions may be guided and judged.
Although it may be tempting to infer from the diversity of ethical beliefs that deciding what one ought to do is just a matter of opinion, philosophers point out that the persistence of disagreement does not imply that neither answer can be right. After all, humans once held very different beliefs about how and why the sun rises and sets, and they still hold conflicting views about human evolution and climate change — but it would be a mistake to infer that there is no objective truth about these matters. We shouldn’t be too quick to conclude that, when it comes to ethics, anything goes.
Even if you are convinced that ethics is purely subjective, you still need to decide on the principles by which you will live your life. Moral philosophy can help you to articulate these principles and make them more coherent.
"We face ethical questions every day. Philosophy does not provide easy answers for these questions, nor even fail-safe techniques for resolving them. What it does provide is a disciplined way to think about ethical questions, to identify hidden moral assumptions, and to establish principles by which our actions may be guided and judged."
— Kieran Setiya and Caspar Hare, MIT Professors of Philosophy
How philosophers address ethical questions
Almost every philosopher agrees that the consequences of our actions are morally significant. But the consequences are often uncertain. A pressing issue raised by technological advances is the possibility of catastrophic risk — anything from a nuclear explosion to the making of a super-virus to the so-called “singularity” (the prospect that artificial intelligence will be able to improve upon itself, transcending human limitations).
Humans take risks all the time — even just driving a car or flying in a plane — so it can be helpful to think about risk in a systematic way. One tool philosophers and economists have developed for doing this is decision theory, a system for weighing the various factors involved in a decision to determine the optimal result. The results are often surprising. Small risks of catastrophic outcomes can overwhelm deliberation. Is that a feature or a bug?
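The expected-value reasoning that decision theory formalizes can be made concrete with a short sketch. The probabilities and utility numbers below are purely illustrative assumptions, not empirical estimates; the point is only to show how a tiny probability of a catastrophic outcome can dominate the calculation.

```python
# A minimal sketch of expected-utility reasoning from decision theory.
# All probabilities and utilities here are illustrative assumptions.

def expected_utility(outcomes):
    """Sum of probability-weighted utilities over possible outcomes.

    `outcomes` is a list of (probability, utility) pairs.
    """
    return sum(p * u for p, u in outcomes)

# Option A: a safe action with a modest, certain benefit.
safe = expected_utility([(1.0, 10)])

# Option B: a risky action with a large, very likely benefit but a
# one-in-a-million chance of a catastrophic outcome.
risky = expected_utility([(1 - 1e-6, 100), (1e-6, -1e9)])

print(safe)   # 10.0
print(risky)  # about -900: the catastrophic term swamps the likely gain
```

Despite the risky option almost always paying off ten times more than the safe one, its expected utility is negative: the catastrophic term, weighted by even a minuscule probability, overwhelms everything else. This is exactly the "feature or bug" question the authors raise about small risks of catastrophic outcomes.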
Applications of decision theory often take the form of cost-benefit analysis, determining which decision is expected to bring the greatest happiness to the greatest number. This approach derives from the philosophical theory known as “utilitarianism,” and it has similar blind spots. Since it looks at aggregate happiness or well-being, it ignores fair distribution. And it pays no heed to how the greatest benefits are brought about, which may be by infringing on individual rights.
Is it ethical to kill one innocent person and use his or her organs to save five lives? If we look at nothing but aggregate happiness, it might appear to be. But the average person is likely to judge this killing morally wrong. In other cases, as when we redirect a runaway trolley from five victims to one, many believe that it is permissible to kill. If there is a difference between these cases, cost-benefit analysis cannot account for it. (This is a variation of a famous thought experiment designed by MIT philosopher Judith Jarvis Thomson known as "The Trolley Problem," which has in recent years been employed in considering how best to program self-driving cars.)
Identifying hidden moral assumptions
As you can see, philosophy does not provide easy answers, nor even fail-safe techniques for resolving ethical questions. What it does provide is a disciplined way of thinking about these questions and identifying hidden moral assumptions.
To take an example from the ethics of AI: Designers of a software program aimed at reducing substance abuse in a target population concluded that the “optimal” approach was to treat high-risk youths and low-risk youths separately. The average rate of substance abuse was minimized, but as David Gray Grant — who recently received his MIT PhD in philosophy — observed, this was achieved by effectively sacrificing high-risk youth for the sake of others. The hidden assumption was that the right way to evaluate outcomes was in terms of aggregate substance abuse, without regard to the fair distribution of benefits or the rights of individuals. These moral complexities were lost in apparently innocent program design.
Another way in which moral assumptions can be missed is when we focus too narrowly on the questions specific to new technologies — such as social media or driverless cars — ignoring ethical and political issues that are more widespread. If you want to be an ethical person, you have to think not just about data privacy and the trolley problem but about the ways in which corporations like Amazon and Google concentrate wealth and power, and the ways in which new technologies threaten employment or exacerbate inequality. An ethical approach to new technologies must be suitably broad.
"Going forward, it will be vital to put in place social and institutional structures that support, encourage, and guide ethical behavior. One responsibility that falls on us as individuals is to work toward political conditions in which it is possible for us to live and work more ethically."
— Kieran Setiya and Caspar Hare, MIT Professors of Philosophy
The force of societal and institutional structures
Framing a discussion of the risks of advanced technology entirely in terms of ethics suggests that the problems raised are ones that can and should be solved by individual action. In fact, many of the challenges presented by computer science will prove difficult to address without systemic change. For example, users of social media face a collective action problem: the advantages of sharing a platform make it individually costly to withdraw, even if everyone would be better off if they all withdrew together.
Widening our lens from the lone individual, we can look to social philosophy and business ethics: How should organizations — such as Google, Facebook, and MIT — regulate themselves? And to political theory: What should governments do? Going forward, it will be vital to put in place social and institutional structures that support, encourage, and guide ethical behavior. One responsibility that falls on us as individuals is to work toward political conditions in which it is possible for us to live and work more ethically.
As we strive toward positive means of developing and using new technologies, moral philosophy will play an essential role. Philosophers teaching in MIT’s Schwarzman College of Computing could help us think through urgent problems of ethics and politics in more principled ways. Moral philosophers might serve as advisers or consultants on project teams or participate in cross-disciplinary seminars. We should also draw on the expertise of colleagues in the humanities and social sciences to better understand the likely but often unanticipated impacts of technological change.
Over the coming years, MIT will be home to striking and unpredictable advances in engineering and computer science. The humanities, social sciences, and philosophy are crucial to understanding what these advances mean for us and how they can serve MIT’s mission of working “wisely, creatively, and effectively for the betterment of humankind.”
Caspar Hare webpage
Story: The Moral Calculus of Climate Change
Story: Kieran Setiya: How Philosophy Can Address the Problem of Climate Change
MOOC: Introduction to Philosophy, taught by Caspar Hare
The first introductory philosophy MOOC in the U.S.
Ethics, Computing and AI series prepared by MIT SHASS Communications
Office of Dean Melissa Nobles
MIT School of Humanities, Arts, and Social Sciences
Series Editor and Designer: Emily Hiestand, Communication Director
Series Co-Editor: Kathryn O'Neill, Associate News Manager, SHASS Communications
Published 18 February 2019