ETHICS, COMPUTING, AND AI | PERSPECTIVES FROM MIT

Blind Spots | David Kaiser
Why new technology benefits from guidance from the wider community
 


David Kaiser; photo by Jon Sachs/MIT SHASS Communications

“MIT has a powerful opportunity to lead in the development of new technologies while also leading careful, deliberate, broad-ranging, and ongoing community discussions about the 'whys' and 'what ifs,' not just the 'hows.' No group of researchers, flushed with the excitement of learning and building something new, can overcome the limitations of blind spots and momentum alone.”

— David Kaiser, Germeshausen Professor of the History of Science, and Professor of Physics


SERIES: ETHICS, COMPUTING, AND AI | PERSPECTIVES FROM MIT


David Kaiser is Germeshausen Professor of the History of Science in MIT's Program in Science, Technology, and Society, and Professor of Physics in MIT's Department of Physics. His historical research focuses on the development of physics in the US during the Cold War. His physics research focuses on early-universe cosmology. He has also helped to design and conduct novel experiments to test the foundations of quantum theory. His books include Drawing Theories Apart: The Dispersion of Feynman Diagrams in Postwar Physics (University of Chicago Press, 2005), and How the Hippies Saved Physics: Science, Counterculture, and the Quantum Revival (W. W. Norton, 2011).

• • •

 

Q: How can the history of science and technology inform our thinking about the benefits, risks, and societal/ethical implications of AI?



All of us have blind spots; anyone who has completed driver’s education knows that. Even with the best of intentions, we each come to a given topic from a certain perspective, with a limited horizon. The driver’s ed analogy also reminds us about momentum: It is difficult to change direction once a car is barreling down the road. As MIT invests even more dramatically in computing and artificial intelligence, we would do well to keep these two themes — blind spots and momentum — in mind.

Research in topics like deep learning and artificial intelligence is incredibly exciting these days. No one can deny the potency of these new technologies. Already, so many facets of our daily lives are impacted by very recent advances in computing. As ever more sophisticated algorithms become embedded in the infrastructures of everyday life — from communication and education to health care, finance, transportation, public works, and beyond — no individual can credibly expect to understand all the relevant, technical aspects of a given technology, to foresee how it might interact with other complicated systems, and to forestall unintended consequences.

History suggests, moreover, that it is especially difficult to weigh all the relevant cautions and concerns while immersed in the research process. Time pressures, competition, and the sheer excitement — even exuberance — of pushing the boundaries accelerate the intellectual momentum.

Forums for debate

How can we collectively build an infrastructure that can address these twin facets: blind spots and momentum? One approach would be to institute robust forums in which people with many different backgrounds — intellectual training, passions, concerns, and experiences — could brainstorm and debate together. When it comes to technologies as potent and far-reaching as artificial intelligence, we will benefit greatly from having as many engaged stakeholders as possible.

An historical analogy that comes to mind — albeit an imperfect one — concerns the Manhattan Project during World War II. (No, I don’t mean to suggest that artificial-intelligence systems are equivalent to weapons of mass destruction; bear with me.) Many physicists and other members of the technical staff who joined the project had significant misgivings about weapons work, but they had even graver concerns about the Nazis’ recent advances. Several had fled fascism in Europe themselves, or had family members who were directly threatened by the regimes. Others, who had fewer connections to the horrors unfolding on the Continent, nonetheless felt a sense of duty to contribute to the Allied cause during wartime.

Momentum

For many who joined the project, the pulls were at least as strong as the pushes. The phenomenon of nuclear fission had only just been identified. Efforts to understand these new nuclear reactions drew upon some of the latest, most exciting discoveries in physics. As several veterans of the project later recalled, the opportunity to pursue such fascinating scientific questions alongside giants of the field — figures like J. Robert Oppenheimer, Hans Bethe, and Enrico Fermi — was a powerful draw in itself.

By all accounts, work at the wartime Los Alamos, New Mexico, laboratory was incredibly intense, marked by grueling schedules and nonstop time pressures. The researchers tackled significant technical challenges amid a palpable sense of momentum. In fact, virtually no one left the laboratory after the Allies had defeated the Nazis in the spring of 1945 — even though, for many of the staff toiling on the mesa, fears of the Nazis acquiring their own nuclear weapon had been a prime motivator for joining the project. (Only one physicist, Joseph Rotblat, left wartime Los Alamos once it became clear that the Nazis were unlikely to prevail in the war; he later helped found the Pugwash movement to campaign for nuclear disarmament.)

While war still raged in the Pacific, few researchers paused to ask larger questions about the uses or implications of the technology they were working so hard to invent, or whether changing circumstances warranted a fresh look at their own efforts and motivations. Only after the weapons had been used against Japanese cities, in August 1945, did a significant number of the technical staff begin to consider the political, moral, or social implications of their work.

An historic opportunity for MIT to lead

These were neither moral cowards nor mad scientists. Several veterans of the project — including MIT’s Philip Morrison and Victor Weisskopf — were remarkably thoughtful, broadly educated individuals who devoted enormous energy after the war to the new challenges of the nuclear age. Yet during that headlong rush to invent the new technology, their efforts were shaped — inevitably — by blind spots and momentum.

Unlike the Manhattan Project, today’s efforts in artificial intelligence are not (or at least not all) buried deep within secret laboratories. MIT has a powerful opportunity to lead in the development of new technologies while also leading careful, deliberate, broad-ranging, and ongoing community discussions about the “whys” and “what ifs,” not just the “hows.” No group of researchers, flushed with the excitement of learning and building something new, can overcome the limitations of blind spots and momentum alone.

 

Suggested links

Series:

Ethics, Computing, and AI | Perspectives from MIT

David Kaiser:

MIT Webpage

Books and Edited Volumes

Articles: History of Science

Articles: Physics

Editorials and Blog Posts

Program in Science, Technology, and Society

Department of Physics

Stories:

Light from ancient quasars helps confirm quantum entanglement
Results are among the strongest evidence yet for “spooky action at a distance.”

Stars align in test supporting “spooky action at a distance”
Physicists address loophole in tests of Bell’s inequality, using 600-year-old starlight.

Groovy science, man!
Q&A: David Kaiser on our debt to a countercultural era in science.

Historian/physicist David Kaiser wins Physics World’s 'Book of the Year Award'
Kaiser is the Germeshausen Professor of the History of Science, and a Professor of Physics

3 Questions: David Kaiser on Thomas Kuhn’s paradigm shift
Scholars mark 50th anniversary of 'The Structure of Scientific Revolutions.'

Hippie days
How a handful of countercultural scientists changed the course of physics in the 1970s and helped open up the frontier of quantum information.

 


Ethics, Computing and AI series prepared by MIT SHASS Communications
Office of the Dean, MIT School of Humanities, Arts, and Social Sciences

Series Editors: Emily Hiestand and Kathryn O'Neill
Published 18 February 2019