ETHICS, COMPUTING, AND AI | PERSPECTIVES FROM MIT

The Environment for Ethical Action | T.L. Taylor
Processes, policies, and structures are fundamental ethical considerations.
 

"We can cultivate our students as ethical thinkers but if they aren’t working in (or studying in) structures that support advocacy, interventions, and pushing back on proposed processes, they will be stymied. Ethical considerations must include a sociological model that focuses on processes, policies, and structures and not simply individual actors."

— T.L. Taylor, MIT Professor of Comparative Media Studies




A professor in the MIT Comparative Media Studies/Writing program, T.L. Taylor is a qualitative sociologist working in the fields of internet and game studies. Her research focuses on the interrelation between culture, social practice, and technology in online leisure environments. She is the author of Watch Me Play: Twitch and the Rise of Game Live Streaming (Princeton University Press, 2018), Raising the Stakes: E-Sports and the Professionalization of Computer Gaming (MIT Press, 2012), and Play Between Worlds: Exploring Online Game Culture (MIT Press, 2006).
 

• • •

 

Q: What opportunities do you see for sociology to inform our thinking about the benefits, risks, and societal/ethical implications of AI?

 


Each spring I teach a course named CMS.614/Network Cultures. In it, we read four or five books that tackle various aspects of what might be broadly thought of as internet and society issues. We’ve read works that explore Google’s growing role in our everyday lives (Vaidhyanathan, 2012), youth and social media (boyd, 2015), content moderation on platforms (Gillespie, 2018), the role of algorithms in perpetuating stereotypes and racism (Noble, 2018), and even a consideration of networked life in Ghana (Burrell, 2012).

Scholars focused on the critical study of the internet and digital platforms have been doing vital work documenting the entanglement of the social and technological in these systems. This means not simply the ways technology impacts society but also how such innovations are always woven through with complex human action, including the work of a variety of people embedded in companies (Seaver, 2018) as well as the often hidden labor of piecemeal workers who augment algorithmic and platform data (Gray and Suri, forthcoming).

Our students understand the stakes

Most of the students in my classes are majoring in engineering and science fields, but all MIT undergraduates, whatever their majors, take classes in the humanities, arts, and social sciences. Being exposed to the work of scholars like the ones above often proves to be an eye-opener for them. Our students see the stakes and understand in a real way that systems can indeed produce harm. While they are often excited by the promises new technologies make, they are also very open to understanding how socio-technical systems can impact society and everyday life — often profoundly.

What they also regularly say, however, is that they don’t yet have ways to even imagine addressing these issues or thinking about them amid the technical work they do. So there is a gap between their training for their future professional lives and what they recognize, critically, with even just a bit of prompting from social science scholarship. That these students are often exactly the people who will go on to work at companies producing the next iteration of developments in AI, algorithmic systems, platform structures, and big data projects is particularly devastating.

A vital step in coding: thinking through implications

This is not simply an issue of teaching ethics. Two insufficient models often drive our pedagogical activities vis-à-vis a critical engagement with technology. On the one hand, all too often the social sciences and humanities are seen as adjunct domains that simply provide an extra layer; students are asked to read some classics in ethics or reflect on a handful of cases. On the other, stand-alone “ethics in domain X” classes get offered to try to fill the gap. While such moves are well-intentioned, each is rooted in a flawed model: one that assumes a dash of ethics can quickly cultivate a reflective thinker and, by hopeful extension, an ethical technical practitioner.

Dr. Casey Fiesler, who studies technology and research ethics, has written on the harm that comes from “siloing” in these ways and argued that such models typically assume ethics is a specialization area or even something outside the domain of core technical competency. She asks instead, “What if we taught students when they first learned to write code or build technologies that a fundamental component is thinking through the implications — and that if you don’t do that, you’ve missed a vital step, an error just as damaging as not learning to test or debug your code.”
 


“At its most honest, this work requires a diverse set of stakeholders beyond technologists to hold enough structural power to even propose, when warranted, that a particular technology not be developed.”

— T.L. Taylor, MIT Professor of Comparative Media Studies



Individual ethics and good intentions are not enough 

I want to link up this valuable point with another truth, one deeply evident to me as a sociologist. There are limits to individualistic models of critical engagement. We can cultivate our students as ethical thinkers, but if they aren’t working in (or studying in) structures that support advocacy, interventions, and pushing back on proposed processes, they will be stymied.

Ethical considerations must include a sociological model that focuses on processes, policies, and structures and not simply individual actors. And we must, as both a university and a high-profile stakeholder in the overall ecology of technological development, move toward thinking sociologically about critical ethical issues. This means not only in our curriculum, but also in our own internal structures as we venture forth into instituting the MIT Schwarzman College of Computing.

The 2018 report from the AI Now Institute at NYU details compellingly how many technological developments — from facial and affect recognition to algorithmic decision systems — are causing real harm, good intentions notwithstanding. These are not trivial matters; they impact everything from the criminal justice system to medical care. While important strides have been made — significantly propelled by the Fairness, Accountability, and Transparency in Machine Learning community — much more remains to be done. Industry-driven attempts at self-regulation have proven insufficient.

The report calls for significant government oversight and regulation, the protection of whistleblowers and conscientious objectors, the waiving of “trade secrecy and other legal claims that stand in the way of accountability in the public sector,” and community and civic participation in AI accountability, to name just a few recommendations. At the heart of the AI Now report is a call to center social justice, accountability, and transparency in technological development.
 




"What might our new College of Computing look like if, at its heart, was a commitment to social justice?"

— T.L. Taylor, MIT Professor of Comparative Media Studies



Enlist insights from scholars who study human systems, processes, life contexts

This is a key intervention because it moves the conversation beyond simply teaching individuals ethics. While teaching ethics is a critical component of what we should be doing, it is not sufficient. How might we include attention to structure and policy in our conversations with students? What might it look like to teach modes of accountability and transparency that operate at both the individual and organizational levels? How might we get students and researchers not just to “involve” communities who will be impacted by their work, but to give external stakeholders real power?

How might we include these orientations in the very structure of the College of Computing? What might our new college look like if, at its heart, was a commitment to social justice?

Dr. Mary Gray, an anthropologist and researcher at Microsoft Research New England, has been hard at work trying to build these bridge conversations, and her efforts are instructive here. Working closely with computer scientists, she and others are creating processes that keep a fundamental truth visible throughout the chain: The systems, experiments, and models getting built and enacted into platforms are fundamentally tied to humans. Processes of consent, accountability, and transparency from fields like anthropology and sociology have much to offer as we trek along these new paths.

Socio-technical systems and diverse stakeholders

This is aligned, I believe, with what the AI Now Institute calls for in discussing accountability across the “full stack supply chain.” This includes “training data, test data, models, application program interfaces (APIs), and other infrastructural components over a product life cycle.” It also syncs well with a call for fuller critical ethical engagement throughout a curriculum.

This work can’t be developed or implemented only by technologists. It will require skills, expertise, and insight from all corners of the Institute. At its most basic level, it requires domain specialists and those with actual social science training — scholars who have invested many years in working with people and everyday life contexts — to provide insights into processes, variables, or confluences that actually produce bias or harm.

This also means training more social scientists and incentivizing expertise beyond the strictly technical. It requires the skills of those who think about all of these developments as fundamentally socio-technical systems — ones subject to broader collective reflection and oversight. And at its most honest, this work requires a diverse set of stakeholders beyond technologists to hold enough structural power to even propose, when warranted, that a particular technology not be developed.
 

 

Suggested Links

Series:

Ethics, Computing, and AI | Perspectives from MIT

T.L. Taylor:

Website


Comparative Media Studies/Writing Program

Stories:

Inside the world of livestreaming as entertainment
Taylor looks at how computer gaming and other forms of online broadcasting became big-time spectator sports.

3Q: T.L. Taylor on diversity in e-sports
MIT sociologist’s “AnyKey” initiative aims to level the playing field of online sports.

Profile: Big game hunter
MIT sociologist T.L. Taylor studies the subcultures of online gaming and the nascent world of online e-sports.

 

References


AI Now Institute. 2018. AI Now Report. Available at https://ainowinstitute.org/AI_Now_2018_Report.pdf.

boyd, danah. 2015. It’s Complicated: The Social Lives of Networked Teens. New Haven, CT: Yale University Press.

Burrell, Jenna. 2012. Invisible Users: Youth in the Internet Cafés of Urban Ghana. Cambridge, MA: The MIT Press.

Fiesler, Casey. 2018. “What Our Tech Ethics Crisis Says About the State of Computer Science Education.” Next, December 5. Available at https://howwegettonext.com/what-our-tech-ethics-crisis-says-about-the-state-of-computer-science-education-a6a5544e1da6.

Gillespie, Tarleton. 2018. Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media. New Haven, CT: Yale University Press.

Gray, Mary and Siddharth Suri. Forthcoming. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. New York, NY: Eamon Dolan/Houghton Mifflin Harcourt.

Gray, Mary. 2017. “Big Data, Ethical Futures.” Anthropology News, January 13. Available at https://anthrosource.onlinelibrary.wiley.com/doi/epdf/10.1111/AN.287.

Microsoft Research. 2014. Faculty Summit Ethics Panel Recap. Available at https://marylgray.org/2014/08/msr-faculty-summit-2014-ethics-panel-recap/.

Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York, NY: New York University Press.

Seaver, Nick. 2018. “Algorithms as Culture: Some Tactics for the Ethnography of Algorithmic Systems.” Big Data & Society July–December: 1–12.

Vaidhyanathan, Siva. 2012. The Googlization of Everything (And Why We Should Worry). Berkeley, CA: The University of California Press.

 


Ethics, Computing and AI series prepared by MIT SHASS Communications
Office of Dean Melissa Nobles
MIT School of Humanities, Arts, and Social Sciences
Series Editor and Designer: Emily Hiestand, Communication Director
Series Co-Editor: Kathryn O'Neill, Associate News Manager, SHASS Communications
Published 18 February 2019