ETHICS, COMPUTING AND AI | PERSPECTIVES FROM MIT

Addressing the Societal Implications of AI | Lisa Parks
Broad reach of AI tools raises questions of justice and global impact.
 


Lisa Parks. Photo courtesy of the MacArthur Foundation

“As AI tools proliferate, we need to consider whom or what is best equipped to detect and remedy bias, particularly given the potential of these tools to reinforce, intensify, or create new inequalities and injustices.”

— Lisa Parks, Professor of Comparative Media Studies; Principal Investigator, MIT Global Media Technologies and Cultures Lab





Lisa Parks is the principal investigator for MIT’s Global Media Technologies and Cultures Lab and a professor in Comparative Media Studies/Writing. A 2018 MacArthur Fellow, Parks explores the global reach of information technology infrastructures and the cultural, political, and humanitarian implications of the flow of information. She is the author of Cultures in Orbit: Satellites and the Televisual (Duke University Press, 2005) and Rethinking Media Coverage: Vertical Mediation and the War on Terror (Routledge, 2018).

 

• • •

Q: What ethical and societal challenges do you see emerging or increasing as computing and AI tools have an accelerating role in shaping human culture and planetary health?


 

Three fundamental societal challenges have emerged from the use of AI, particularly for data collection and machine learning.

The first challenge centers on this question: Who has the power to know about how AI tools work, and who does not? Issues of technological knowledge and literacy are increasingly important given digital corporations’ proprietary claims to information about their data collection and algorithms. The concealment or “black boxing” of such information by social media companies such as Facebook, for instance, keeps users naïve about AI tools and the ways those tools shape social media experiences and the information economy.

Most users learn about the AI tools used in social media platforms only inferentially or when information is leaked, as in the Facebook/Cambridge Analytica matter. While digital companies protect their intellectual property to compete in the marketplace, cordoning off technical information carries high stakes. It presents challenges not only for users who want to understand what data is being collected from them and what is done with that data, but also for researchers and policymakers who seek to explore the “back ends” of these platforms and their industrial, behavioral, and juridical implications.

As it stands in the United States, digital corporations’ intellectual property rights supersede consumers’ right to know about the AI tools they encounter online every day. In Europe, the General Data Protection Regulation (GDPR) has attempted to address these issues obliquely by defining “data subject rights,” yet to defend these rights it is essential for regulators and the public to know how AI tools are designed to work and to understand any potential for them to act autonomously. We are entering an age in which some inventors of AI tools do not even understand how their own systems are working.


"Given the power of AI tools to impact human behavior and shape planetary conditions, it is vital that a political, economic, and materialist analysis of the technology’s relation to global trade, governance, natural environments, and culture be conducted."

— Lisa Parks, Professor of Comparative Media Studies; Principal Investigator, MIT Global Media Technologies and Cultures Lab



AI’s global impact

A second societal challenge involves learning how AI tools intersect with international relations and the dynamics of globalization. As AI tools are operationalized across borders, they can be used to destabilize national sovereignty and human rights. This occurred, for instance, during the U.S. drone wars in Pakistan or in the context of Russian interference in the 2016 U.S. presidential election. Meanwhile, think tanks and nonprofits continue to celebrate the potential of AI tools to accelerate global development.

Given these contradictions, we might begin to address this area of concern by specifying which countries or regions have the resources to innovate and contribute to AI technologies and industries, and which ones are being positioned as recipients, subjects, or beneficiaries. What do the vectors of the global AI economy look like? Who are the dominant players? Where are their workforces and exactly what labor are they performing? What are the top-selling AI tools, and how do their supply chains correlate with historical trade patterns, geopolitics, or conditions of disenfranchisement?

Given the power of AI tools to impact human behavior and shape planetary conditions, it is vital that a political, economic, and materialist analysis of the technology’s relation to global trade, governance, natural environments, and culture be conducted. This involves adopting an infrastructural disposition and specifying AI’s constitutive parts, processes, and effects as they take shape across diverse world contexts. Only then can the public understand the technology well enough to democratically deliberate its relation to ethics and policy.


Effects on social justice

Beyond questions of knowledge/power and globalization, it is important to consider the relationship between AI and social justice. Will new AI and computing technologies reinforce or challenge power hierarchies organized around social difference such as race/ethnicity, gender/sexuality, national identity, and so on? MIT researchers are already advancing important projects in this area.

Consider, for instance, Joy Buolamwini’s Algorithmic Justice League or Sasha Costanza-Chock’s book, Design Justice (MIT Press, forthcoming). These projects explore how social power and bias are coded into computational systems, and challenge people to confront structural inequalities, such as racism and sexism, when using or designing AI systems. Their work suggests social justice should be core to AI innovation.

If AI tools are designed in the United States — whether in Silicon Valley or in Cambridge, Massachusetts — by predominantly white, middle-class people who understand technical innovation as distinct from questions of social justice, then AI products are likely to implicitly reproduce the values and worldviews of those with privilege.

Algorithmic bias occurs when the design process is divorced from critical reflection upon the ways social hierarchies impact technological development and use. Arguably, all algorithms are biased to a certain degree, but as AI tools proliferate, we need to consider whom or what is best equipped to detect and remedy bias, particularly given the potential of these tools to reinforce, intensify, or create new inequalities or injustices.


 

Suggested links

Ethics, Computing, and AI | Perspectives from MIT

Lisa Parks | MIT website

Lisa Parks | MacArthur webpage

Global Media Technologies and Cultures Lab

Lisa Parks | publications

Contemplating the eyes in the sky
Global media studies scholar Lisa Parks examines the way satellites and other aerial technologies have changed society.

3 Questions: Lisa Parks on drones, warfare, and the media
MIT media studies professor discusses new essay collection analyzing the impact of drones.

Lisa Parks wins 2018 MacArthur Fellowship
Media studies scholar is the latest MIT faculty member to receive the prestigious “genius grant.”

EUGDPR.org
A resource to educate organisations about the main elements of the General Data Protection Regulation (GDPR) and help them become GDPR compliant.

The Algorithmic Justice League

Design Justice, AI, and Escape from the Matrix of Domination
Essay by Sasha Costanza-Chock, MIT Press
 

 


Ethics, Computing and AI series prepared by MIT SHASS Communications
Office of Dean Melissa Nobles
MIT School of Humanities, Arts, and Social Sciences
Series Editor and Designer: Emily Hiestand, Communication Director
Series Co-Editor: Kathryn O'Neill, Assoc News Manager, SHASS Communications
Published 18 February 2019