ETHICS, COMPUTING, AND AI | PERSPECTIVES FROM MIT
Who’s Calling the Shots on Data and AI? | Leigh Hafrey
“When the lifeblood of tech companies is user data, we must adopt a full-on stakeholder view of business in society and the individual in business, lest we cede control of our livelihoods, our identity, and life itself.”
— Leigh Hafrey, Senior Lecturer, Leadership and Ethics, MIT Sloan School of Management
Leigh Hafrey is a Senior Lecturer in behavioral and policy sciences at the MIT Sloan School of Management. Since 1991, Hafrey has worked in professional ethics, with a focus on ethical leadership. For more than 20 years, he has also moderated seminars in programs of the Aspen Institute, an organization focused on values-driven leadership. He is the author of two books on values and leadership: The Story of Success: Five Steps to Mastering Ethics in Business (Other Press, 2005); and War Stories: Fighting, Competing, Imagining, Leading (Business Expert Press, 2016). Hafrey holds an AB in English from Harvard College and a PhD in comparative literature from Yale University.
• • •
Q: How can behavioral and policy sciences inform our thinking about socially beneficial and ethical uses of computing technology and artificial intelligence?
Two decades of teaching leadership and ethics at MIT Sloan have taught me how and why people and organizations optimize their processes. In the age of artificial intelligence (AI), the data that enable machine learning and neural networks come sometimes voluntarily, sometimes not. We have so far welcomed the results both ways, but that acceptance may be changing. Transparency matters: Looking ahead, we must reckon with the potential for institutional abuse of our data, as much in the private as in the public sector.
A case in point comes from a seminar I co-taught last spring on what we might learn from the role of business professionals in the rise of Nazism and the Holocaust. Beyond the physical remains of camp operations that I saw at Auschwitz-Birkenau, historians and others have explored how data-gathering technology, deployed in the census and related operations of Germany and the occupied countries, facilitated the identification, rounding up, and transport of victims to the camps’ gates.
In 1948, to help prevent further genocides, world leaders proclaimed the U.N. Universal Declaration of Human Rights; the declaration posits the dignity of each individual and the personal and institutional respect that redounds to each of us by virtue of our shared humanity.
In the time since, the business community has embraced the declaration’s principles through the U.N. Global Compact for corporate responsibility (now counting nearly 10,000 members); the U.N. Principles for Responsible Investment; and the U.N. Guiding Principles on Business and Human Rights, among other initiatives.
For business practitioners and the rest of us, though, a response to the vast reach of AI systems cannot consist merely of general agreements, no matter how inspired.
Values beyond efficiency
“Efficiency” is a perennial business value and a constant factor in corporate design, strategy, and execution. When we invoke it, we don’t mean the lethal rough draft of modern data management that the Nazis elaborated to support their Thousand-Year Reich. Yet today, data-gathering is enabling a “social credit” system in the People’s Republic of China that aims to enhance trust among institutions (including businesses) and individuals and curb corruption, but may also allow the curtailment, based on a social rating, of individual or organizational rights and amenities.
In the West, some observers see liberal democracy tending to what business scholar Shoshana Zuboff and others have dubbed “surveillance capitalism,” variously defined as an economic system that depends on the commodification or monetization of data, or on the use of data to exercise behavioral, market, or social control. In both instances — whether we call the intended result a “harmonious” or a “free” society — the exercise of social control by larger entities is real.
Against this background, developments in AI have yet to yield the ethics by which we might manage their effects. Isaac Asimov’s “Three Laws of Robotics” (1942) placed design curbs on robots to protect their creators, but the rules were and remain science fiction. In “Why the Future Doesn’t Need Us” (Wired, April 2000), Sun Microsystems’ former Chief Scientist Bill Joy went a step further than Asimov and advocated turning away from advances in robotics, nanotechnology, and genetic engineering to avoid a “gray goo” cataclysm — self-replicating nanobots that would end all biological life on Earth by consuming it. Human curiosity, the lure of new horizons, and commercial or ego-driven competition militate against self-constraint or relinquishment as solutions; and that is why this search for ethical next steps finally leads to ... each of us.
"Human curiosity, the lure of new horizons, and commercial or ego-driven competition militate against self-constraint or relinquishment as solutions; and that is why this search for ethical next steps finally leads to each of us."
— Leigh Hafrey, Senior Lecturer, Leadership and Ethics, Sloan School of Management
Setting ethical rules of engagement
What are we? Rules for relative pronouns in English dictate “who” to refer to people and “that” to things. Yet daily discourse too often offers “that” (the people that I saw at . . . ) for people and “who” (companies or governments who choose . . . ) for institutions. However inadvertent, the slippage reveals the place of both in today’s America: Who has the agency or free will normally associated with individual persons when I myself am a “that,” but a Fortune 500 corporation is a “who”?
The linguistic inversion normalizes power structures and raises a fundamental question of ethical practice for business practitioners. A similar mental slippage may have operated among business professionals in 1930s and ’40s Germany: In our seminar, for example, we discussed J.A. Topf and Sons, a company that had specialized since its 19th-century origins in heating systems, chimneys, and incinerators, and under the Nazis actively aided in the design and production of the crematoria ovens used in the death camps; competence became a driver of complicity.
In Civil Disobedience (1849), the American environmentalist and successful pencil designer/manufacturer, H.D. Thoreau, argued: “It is truly enough said that a corporation has no conscience. But a corporation of conscientious men is a corporation with a conscience.” Well before Citizens United v. FEC and Facebook, Thoreau recognized the centrality of corporate personhood to the national conversation. Writing from Walden Pond today, he might argue that when the lifeblood of tech companies is user data, we must adopt a full-on stakeholder view of business in society and the individual in business, lest we cede control of our livelihoods, our identity, and life itself.
Service and sustainability sit at the heart of such a vision: service not just to shareholders but to customers, employees, suppliers, and communities; and with it, a commitment to the environmental, social, and governance principles enunciated in the initiatives I named earlier. We, the people, must be the “who” who make the “that.” The ethics of AI will follow from our shared undertakings, including the eventual provision of human rights for virtual humanity. The integrity of our vision for that future depends, though, on our learning from the past and celebrating the fact that people, not artifacts and institutions, set our rules of engagement.
Ethics, Computing and AI series prepared by MIT SHASS Communications
Office of the Dean, MIT School of Humanities, Arts, and Social Sciences
Series Editors: Emily Hiestand and Kathryn O'Neill
Published 18 February 2019