ETHICS, COMPUTING, AND AI | PERSPECTIVES FROM MIT
The Common Ground of Stories | Mary Fuller
Conceptual meeting spaces for thinking together
Mary Fuller, photo by Jon Sachs, MIT SHASS Communications
“Stories allow us to model interpretive, affective, ethical choices; they also become common ground, conceptual meeting places that can serve to gather very different kinds of interlocutors around a common object. We need these.”
— Mary Fuller, Professor of Literature, and Head, MIT Literature
Professor Mary C. Fuller is head of the MIT Literature section. She works on the history of early modern voyages, exploration, and colonization. She is also interested in material books and how readers use them, in the past and in the present. Her books include Voyages in Print: English Travel to America, 1576-1624 (Cambridge University Press, 1995) and Remembering the Early Modern Voyage: English Narratives in the Age of European Expansion (Palgrave, 2008).
• • •
Q: What opportunities do you see for applying insights, knowledge, and methodologies from literature to promote socially beneficial and ethical uses of computing and AI technologies?
In Literature, we study meaning-making through narrative and form. Both these areas of attention carry possibilities for collaboration and exchange with computation and AI. People sometimes say, “I wish I could read” a given text. Usually, they don’t mean that they can’t, literally, read it, but rather that the very things that make literary language so dense with information are opaque or even a barrier for them. By contrast, an expert reader gains information not only from content that can be summarized, but also from the formal structures that organize and amplify what the text says and shape how it makes us feel: the beat pattern of poetic language, the numerology of some Renaissance poems, or any writer’s play with syntax.
That disparity could change. We already have many tools and strategies to aid reading, from footnotes and plot summaries to book groups and online forums; most deal with content, rather than form. But formal patterns are easy for machines to recognize and represent: In the age of AI, we could invent new tools for reading. If the expert reading skills we teach could be made even partially available to readers outside the academy, the gateway to the archive of culture would be wider.
Q: How can literature inform AI and computing projects about the risks and rewards of technological advances in terms of societal and ethical implications?
Narrative is already a research area in AI, as a “keystone competence” for computational modeling of intelligence as well as an aspect of computationally enabled creativity. Activating the existing human capacity to understand stories at depth is something we do every day. As complex systems, stories encode a range of interpretive possibilities; because they can also function, in whole or in part, as memorable metaphors for yet other stories, those possibilities aren’t easily exhausted.
Witness the invocation of Mary Shelley’s Frankenstein by commentators writing about the ambiguous potentials of modern technological innovation. In Shelley’s novel, a man decides in deliberate isolation to make something. He makes it because he can, and the exercise of capability excites him; once the thing is made, he abandons it in horror. Left to make sense of its existence and environment as best it can, what he makes ultimately becomes a monster and comes back to destroy everything the man loves. Shelley reminds us to ask, of our powerful inventions, “what could possibly go wrong?” and to be worried by secrecy and the failure to predict or own outcomes.
Her monster has other lessons to offer, however: As the French sociologist Bruno Latour has suggested in another context, perhaps it’s not that we shouldn’t create new things, so much as that we can’t walk away from what has been created — an ongoing relationship of care is necessary. How to effect that ongoing care is both a technical problem in designing and using deep learning systems — for instance, making a system’s decision process more transparent — and a question of policy. Who will care for these systems, and how will the costs of care be funded?
We might, of course, identify these technical or governance problems independently. So what is the utility of Shelley’s novel, or of narrative in general? Stories allow us to model interpretive, affective, ethical choices; they also become common ground, conceptual meeting places that can serve to gather very different kinds of interlocutors around a common object. We need these: Computer science alone can’t shoulder the task of modeling the future, understanding social and global impacts, and making ethical decisions.
Detail from an etching by Gustave Doré (1832-1883), of Milton's Paradise Lost
A place for global storytellers
I’d like to see us develop a class on “intelligences” that would draw on a broad range of stories about artificial or non-human minds. We need more than one story, productive though Frankenstein is. We need the work of speculative fiction-making, the archive of the past with all its resources and weirdness, and a broad array of voices — not only ones from the Anglophone academy or the best-seller list. Perhaps the MIT Schwarzman College of Computing should host a series of residencies for global storytellers.
Stories are things in themselves, and they are things to think with. Reading about Milton’s angelic intelligences or William Gibson’s “bright lattices of logic” won’t tell us what we should do with the future, or substitute for knowledge about actual work in the field of AI. But reading such stories at MIT may offer a place to think together across the diversity of what and how we know.
For storytelling and understanding as a “keystone competence,” see Patrick Winston, “Moon Shot”; for examples of computationally enabled narrative, see Fox Harrell.
Bruno Latour, “Love Your Monsters” (2011)
For a perspective from a colleague in CSAIL, see “3Q: Aleksander Madry on building trustworthy artificial intelligence” and the related website for a Symposium on Robust, Interpretable AI at MIT in November 2018.
These points are made eloquently by Ed Finn (ASU) in a recent article for the New York Times, “A Smarter Way to Think About Intelligent Machines.”
Ethics, Computing and AI series prepared by MIT SHASS Communications
Office of Dean Melissa Nobles
MIT School of Humanities, Arts, and Social Sciences
Series Editor and Designer: Emily Hiestand, Communication Director
Series Co-Editor: Kathryn O'Neill, Associate News Manager, SHASS Communications
Published 18 February 2019