The path to ethical, socially beneficial AI
Collaboration is key, say leaders from government, philanthropy, academia, and industry

Panelists for "Computing for the People: Ethics and AI," L to R: Thomas Friedman (Moderator), Ursula Burns, Jennifer Chayes, Ash Carter, Darren Walker, Megan Smith, and Joi Ito. (Photo by Rose Lincoln)

"MIT is going to be the anchor of what we will know in this society as public interest technology. What MIT is doing will set the pace for every other university that wants to be relevant in the future."

— Darren Walker, President, The Ford Foundation

By February 28, toward the close of the three-day celebration of the MIT Stephen A. Schwarzman College of Computing, one takeaway was inescapable: "We are at an inflection point," said John E. Kelly, executive vice president of IBM. "With the progressing technologies of artificial intelligence (AI)," he said, "we are on the verge of incredible things."
Less clear to participants and audience, after a whirlwind of TED-like talks, demonstrations, and discussions, was whether advanced computation will truly work primarily for the benefit of humanity.
"We are undergoing a massive shift that can make the world a better place," noted David Siegel, co-founder and co-chairman of Two Sigma. "But I fear we could move in a direction that is far from an algorithmic utopia."

Meeting the challenges of AI

Many speakers at the three-day celebration threw this double-edged promise of the new machine age into stark relief and called for an approach to education, research, and tool-making that combines collective knowledge from the technology, humanities, arts, and social science fields.

As Melissa Nobles, the Kenan Sahin Dean of MIT’s School of Humanities, Arts, and Social Sciences, introduced the final panel of the celebration — Computing for the People: Ethics and AI, moderated by New York Times columnist Thomas Friedman — she reinforced the need for such an approach, noting that the humanities, social sciences, and arts are grappling “with the ways in which computation is changing the world,” and that “technologists themselves must much more deeply understand what they are doing, how they are deeply changing human life."

In a conversation after the panel, Dean Nobles also emphasized that the goal of the new college is to advance computation and to give all students a greater “awareness of the larger political, social context in which we’re all living.” This is the MIT vision for developing “bilinguals” — engineers, scholars, professionals, civic leaders, and policymakers who have both superb technical expertise and an understanding of complex societal issues gained in humanities, arts, and social science study.  

The perils of speed and limited perspective
The five panelists on Computing for the People — representing industry, academia, government, and philanthropy — contributed particulars to the vision of a society infused with “bilinguals,” and attested to the perils posed by an overly swift integration of advanced computing into all domains of modern existence.
"I think of AI as jetpacks and blindfolds, that will send us careening in whatever direction we're already headed," said Joi Ito, director of the MIT Media Lab. "It's going to make us more powerful but not necessarily more wise.”
The key problem, according to Ito, is that machine learning and AI have to date been exclusively the province of engineers, who tend to talk only with each other. This means they can deny accountability when their work proves socially, politically, or economically destructive. "Asked to explain their code, technological people say ‘we're just technical people, we don't deal with racial or political problems,’" Ito said.

Can AI advance justice, strengthen democracy?

Darren Walker, president of the Ford Foundation, zeroed in on the value void at the center of this new technology. "If we go deep [into AI tool-making] without a view as to whether AI can advance justice, whether it can strengthen our democracy, if we engage this enterprise without those questions driving our discourse, we are doomed," he said.
As a case in point, he cited the predictive analytics of AI that more frequently deny parole to black men than to white men with comparable records. "So AI is in fact reifying and amplifying rather than correcting the historic biases we see every day in America," Walker said. "Will AI be a lever for good, or simply compound disadvantages built into our systems?"
Walker also noted that during the recent congressional hearings featuring the testimony of Facebook CEO Mark Zuckerberg, politicians demonstrated ignorance about the workings of social media platforms and of cellphone technology. "At any other hearing of importance in our society, there would be some smart person sitting behind a congressperson to say, [of the person testifying] ‘Challenge him, he's wrong,’" said Walker. But, he continued, "there are very few people on the Hill working in the public interest on this larger issue of the fourth industrial revolution."

Melissa Nobles, Kenan Sahin Dean of the MIT School of Humanities, Arts, and Social Sciences (Photo by Rose Lincoln)


Collaborations to make a better world

Panelists emphasized that the speed of the current technological transformation threatens to undermine efforts to control it. "By the time we realize there's something we must do to right the ship, the ship will be in the middle of the ocean," said Ursula Burns, executive chairman and CEO of VEON, Ltd.
But Burns and her fellow panelists believe the new College of Computing, by bringing together computer scientists with scholars from the social sciences and humanities, could help reverse the potentially destructive course of AI.
"It's not just about getting a whole bunch of computer scientists writing new programs, it is about making the world a better place," Burns said. "It's active engagement, broad knowledge, and responsibility to other people."

Jennifer Chayes, a technical fellow and managing director of Microsoft Research New England, described an initiative in her labs to promote fairness, accountability, transparency, and ethics (or FATE), in software platforms and information systems.
"It's a nascent field that brings together legal scholars, ethicists, social scientists, and people in AI to ask how we can make some decisions together in a more equitable fashion," she said.
Chayes also highlighted a method she called “algorithmic greenlining,” which makes it possible to purge inherent bias from decision-making codes that determine who in a particular population gets into a school or receives a loan. "We have a fairness component that takes an objective function and optimizes the data in a way that amplifies equities, rather than inequities," she said.

Accountability and human-centered AI

Ash Carter, now director of Harvard's Belfer Center for Science and International Affairs, said that as US Secretary of Defense he learned that "accountability as an algorithmic matter isn't automatic. It needs to be a criterion for people designing AI."
Machines easily amplify "crummy data," Carter said, so unless system designers establish "data standards and transparency, you're just massaging yesterday into a perfected version of 'then' rather than creating 'tomorrow.'"
Throughout his career, which involved deploying new technologies in the most perilous of circumstances, Carter said he always felt the imperative to act and think with broad ethical considerations in mind. In 2012, he recalled, he issued a directive at the Department of Defense dealing with the use of autonomous weapons. "It said that with any decisions to use lethal force on behalf of our people, there must be a human involved in the decision — a directive that is still in force to this day."
Since machines now weigh in on matters of life and death, justice and freedom, there is an urgency to creating an ethical, socially informed culture in the fields of AI and data science. Panelists expressed the hope that the new College of Computing would serve as an incubator for more and much stronger interdisciplinary approaches to research and education.

The future for bilinguals

"With this new college, we could not just diversify tech, but technify everything else and really work on the hardest problems together in a collaborative way," said Megan Smith, former US Chief Technology Officer and founder and CEO of shift7. "Feeding 22 million children in a free and reduced lunch program is a big data problem, more important than self-driving cars, and it's the kind of computing I think we should do on inequality and poverty."

Panelists also voiced confidence that the new college will serve as a model to other higher education institutions seeking to engage the engineering and liberal arts fields to solve important societal problems collaboratively. They discussed the importance of faculty and students representing not just a range of disciplines, but a range of human beings, people whose lived experiences are relevant to discerning the ethical and societal implications of AI tools.

The panelists also welcomed the opportunity to help nurture the MIT bilinguals — students with expertise in both technical and liberal arts fields — who could swiftly assume positions as policy advisors and leaders in government and industry.
"MIT is going to be the anchor of what we will know in this society as public interest technology," predicted Darren Walker. "What MIT is doing will set the pace for every other university that wants to be relevant in the future."


Suggested links

Coda: Post-Panel Conversation

Ethics and AI: Perspectives from MIT

Melissa Nobles

Panelists for Computing for the People: Ethics and AI:
Thomas L. Friedman (moderator), New York Times columnist; Ursula Burns, Executive Chairman and CEO, VEON, Ltd.; Ash Carter, Director, Belfer Center for Science and International Affairs, Harvard Kennedy School, and former US Secretary of Defense; Jennifer Chayes, Technical Fellow and Managing Director, Microsoft Research New England; Joi Ito, Director, MIT Media Lab; Megan Smith, Founder and CEO, shift7, and former US Chief Technology Officer; and Darren Walker, President, Ford Foundation


Story prepared by MIT SHASS Communications
Editorial team: Emily Hiestand and Leda Zimmerman
Photography: Rose Lincoln