3 Questions with David Mindell
On human-centered robotics, autonomy, and bilingual education
 


David Mindell; photo by Len Rosenstein, MIT Spectrum

"As an engineer and historian, I’ve been “bilingual” my entire career. Dual competence is a good model for undergraduates at MIT as well:  master two fundamental ways of thinking about the world, one technical and one humanistic or social. Sometimes these two modes will be at odds with each other, which raises critical questions. Other times they will be synergistic and energizing."

— David Mindell, Dibner Professor of the History of Engineering and Manufacturing; Professor of Aeronautics and Astronautics; Co-founder and CEO of Humatics Corporation


 

David Mindell, Frances and David Dibner Professor of the History of Engineering and Manufacturing (in STS), and Professor of Aeronautics and Astronautics, researches the intersections of human behavior, technological innovation, and automation. Mindell is the author of five acclaimed books, most recently Our Robots, Ourselves: Robotics and the Myths of Autonomy (Viking, 2015), as well as the co-founder of the Humatics Corporation, which develops technologies for human-centered automation. SHASS Communications spoke with Mindell recently on how his vision for human-centered robotics is developing and his thoughts on the MIT Schwarzman College of Computing, which aims to integrate technical and humanistic research and education.
 

Q: Interdisciplinary programs have proved challenging to sustain given the differing methodologies and vocabularies of the fields being brought together. Do you have thoughts about ways MIT’s Schwarzman College of Computing could design the curriculum to educate bilinguals — students who are adept in both advanced computation and one or more of the humanities, arts, and social science fields? 

Some technology leaders today are naive and uneducated in humanistic and social thinking. They still think that technology evolves on its own and “impacts” society, instead of understanding technology as a human and cultural expression, as part of society. 

As an historian and engineer, and MIT’s only faculty member with a dual appointment in engineering and the humanities, I’ve been “bilingual” my entire career (long before we began using that term). My education started with firm grounding in two fields — electrical engineering and history — that I continue to study. 

Dual competence is a good model for undergraduates at MIT as well. Pick two: not necessarily the two that I chose, but any two disciplines that capture the core of technology and the core of the humanities. Disciplines at the undergraduate level provide structure, conventions, and professional identity (though my appointment is in Aero/Astro, I still identify as an electrical engineer). I prefer the term “dual disciplinary” to “interdisciplinary.”

The College of Computing curriculum should focus on fundamentals, not just engineering plus some dabbling in social implications. That approach sends the wrong message to students: “the technical stuff is core, and then we need to wrap all this humanities and social science around the engineering.” Rather, we need to say: “master two fundamental ways of thinking about the world, one technical and one humanistic or social.” Sometimes these two modes will be at odds with each other, which raises critical questions. Other times they will be synergistic and energizing. For example, my historical work on the Apollo Guidance Computer inspired a great deal of my current engineering work on precision navigation.
 



Photo via the Humatics Corporation website

"Decades of experience have taught us that to function in the human world, autonomy must be connected, relational, and situated. Human-centered autonomy in automobiles must be more than a fancy FitBit on a driver; it must factor into the fundamental design of the systems: What do we wish to control? Whom do we trust? Who owns our data? How are our systems trained?

— David Mindell, Dibner Professor of the History of Engineering and Manufacturing; Professor of Aeronautics and Astronautics; Co-founder and CEO of Humatics Corporation



Q: In naming the company you founded Humatics, you’ve combined “human” and “robotics,” highlighting the synergy between human beings and our advanced technologies. What projects underway at Humatics define and demonstrate how you envision people working collaboratively with machines?   

Humatics builds on the synthesis that has defined my career — the name is the first four letters of “human” and the last four letters of “robotics.” Our mission is to build technologies that weave robotics into the human world, rather than shape human behavior to the limitations of the robots. We do very technical stuff: we build our own radar chips, our own signal processing algorithms, our own AI-based navigation systems. But we also craft our technologies to be human-centered, to give users and workers information that enables them to make their own decisions and work more safely and efficiently.

We’re currently working to incorporate our ultra-wideband navigation systems into subway and mass transit systems. Humatics' technologies will enable modern signaling systems to be installed more quickly and less expensively. It's gritty, dirty work down in the tunnels, but it is a “smart city” application that can improve the daily lives of millions of people. By enabling the trains to navigate themselves with centimeter precision, we can deliver greater rush-hour throughput, fewer interruptions, and even improved access for people with disabilities, at minimal cost compared to laying new track.

A great deal of this work focuses on reliability, robustness, and safety. These are the kinds of large technological systems that MIT used to focus on in its Engineering Systems Division (ESD): legacy infrastructure running at full capacity, with a variety of stakeholders and technical issues hashed out in political debate. As an opportunity to improve people's lives with our technology, this project is highly motivating for the Humatics team.

We see a subway system as a giant robot that collaborates with millions of people every day. Indeed, for all its flaws, it does so today in beautifully fluid ways. Disruption is not an option. Similarly, we see factories, e-commerce fulfillment centers, even entire supply chains as giant human-machine systems that combine three key elements: people, robots (vehicles), and infrastructure. Humatics builds the technological glue that ties these systems together.
 

Q: Autonomous cars were touted as imminent, but their design has run into technical issues and ethical questions. Is there a different approach to the design of AI-enabled vehicles, one that does not attempt to create fully autonomous vehicles? If so, what are the barriers or resistance to human-centered approaches?

Too many engineers still imagine autonomy as meaning “alone in the world.” This approach stems from a specific historical imagination of autonomy, shaped by Defense Advanced Research Projects Agency (DARPA) sponsorship and elsewhere, in which a robot should be independent of all infrastructure. While that may be appropriate for military operations, the promise of autonomy on our roads must be the promise of autonomy in the human world, in myriad exquisite relationships.

Autonomous vehicle companies are learning, at great expense, that they already depend heavily on infrastructure (including roads and traffic signs) and that the sooner they embrace it, the sooner they can deploy at scale. Decades of experience have taught us that to function in the human world, autonomy must be connected, relational, and situated. Human-centered autonomy in automobiles must be more than a fancy Fitbit on a driver; it must factor into the fundamental design of the systems: What do we wish to control? Whom do we trust? Who owns our data? How are our systems trained? How do they handle failure? Who gets to decide?

The current crisis over the Boeing 737 MAX control systems shows that these questions are hard to get right even in aviation. There we have a great deal of regulation, formalism, training, and procedure, not to mention a safety culture that has evolved over a century. For autonomous cars, with radically different regulatory settings and operating environments, not to mention non-deterministic software, we still have a great deal to learn. Sometimes I think it could take the better part of this century to really learn how to build robust autonomy into safety-critical systems at scale.

 

Suggested links

David Mindell website

MIT Program in Science, Technology, and Society

MIT Department of Aeronautics and Astronautics 

Humatics Corporation

MIT News Archive: Robots and Us

MIT News Archive: IAA honors David Mindell for "Digital Apollo"

Archive Interview: 3Q - David Mindell on the human dimensions of technological innovation
 


Interview prepared by MIT SHASS Communications
Editorial and Design Director: Emily Hiestand
Interview conducted by writer Maria Iacobo
Photograph: Len Rosenstein, MIT Spectrum