ETHICS, COMPUTING, AND AI | PERSPECTIVES FROM MIT

Biological Intelligence and AI | Matthew A. Wilson
The emotional intelligence challenge
 

“Understanding how innate morality arises in human intelligence may be an important step in incorporating such a capacity into artificial intelligences.”

— Matthew A. Wilson, Sherman Fairchild Professor of Neuroscience





Matthew A. Wilson is the Sherman Fairchild Professor of Neuroscience in the Department of Brain and Cognitive Sciences, the associate director of the Picower Institute for Learning and Memory, and the associate director of the Center for Brains, Minds, and Machines. Wilson’s research centers on the neural processes within the hippocampus and neocortex that enable memories to form and persist over time.
 

• • •


Q: How might insights and research from neuroscience accelerate the development of ethical safeguards for computing and AI tools?

 

The effort to develop machines that can operate on an intellectual par with humans — what is called artificial general intelligence (AGI) — naturally draws parallels to biological intelligence as the capability being mimicked. The question is, should we hold artificial intelligence (AI) systems to the same standards as the average human? Will we expect AIs to perform at the level of an ideal human? Or will we expect them to exceed humans in ways both expected and unexpected?

Our current sense of the impact AIs will have on society has emerged from advances in specific applied domains: they drive our cars, they diagnose our medical conditions, and they understand our language. In each of these instances, AIs are given specific tasks with clear metrics of performance. But as problem domains grow more complex, so does the difficulty of implementing AIs that can solve general problems in transparent and predictable ways. Understanding how they solve problems will be as important as measuring how well they solve them.

In human society, one way to address such questions is to apply ethics — the established norms of behavior that create an environment of trust among people. We trust that individuals will perform actions that are both constrained and motivated in clearly understood ways. Much of this trust in human intelligence is derived from our common understanding of the innate capacities of other humans.

We share a foundation in how we perceive the world, how we experience and remember our interactions with the world, how we evaluate risks and plan for the future, and how we learn from our actions and the actions of others. Each of these serves to define both our common understanding of what it means to be human and our unique individuality.

Innate morality in humans

One key element of our humanity is the assumption of innate morality — the expectation that we can predict one another's moral judgments in novel situations. Work being done at MIT has shown that even the youngest children have an understanding of moral behavior that can serve as a foundation for predicting future behavior. Often this capacity is framed in terms of how we resolve moral dilemmas, but the issue is not determining the right choice; rather, it is understanding how the choice will be made. Understanding how this kind of innate morality arises in human intelligence may be an important step in incorporating such a capacity into artificial intelligences.

Once an AGI has been developed, one objective might be to determine how “human” it is. Some might challenge this goal, arguing that there is no reason an AI should be constrained to perform like a biological intelligence. And it’s true that we might not need an AI driving a car to behave like a human driver; in fact, we might argue that a primary motivation for developing such an AI is that it NOT drive like a human. In such cases, our trust in an AI could be derived from our ability to predict its behavior in all relevant situational contexts.

However, an AGI by definition will not be able to rely on comprehensive knowledge of situational context, and yet its behavior will have to generalize in ways that still conform to the norms of conduct we would expect of an ideal human. What are those human norms? How would we incorporate them into the programming of an AGI? How would we assess the competence of such an AGI to respond appropriately in novel situations?

Developing artificial emotional intelligence

Emotions can be thought of as representing the kind of generalizable situational context that could be used to drive predictable behavior under conditions of novelty. Developing such artificial emotional intelligence — and evaluating it against human emotional intelligence — are active areas of research that might prove crucial in creating trustworthy AGIs.

An interesting extension to this is the question of how artificial agents imbued with human-like emotional intelligence would blur the line between man and machine. This has been a popular premise in science fiction, but how we deal with this development as it becomes science fact will be part of a broader conversation on society, technology, and human rights.

Of course, there are many other important considerations in developing AI, such as issues of job displacement and wealth distribution, all of which point to the inextricable relationship between technology and society, artificial intelligence and biological intelligence. This relationship will form the basis of ongoing research and development in these rapidly expanding fields.




Suggested links

Series: Ethics, Computing, and AI | Perspectives from MIT

Matthew A. Wilson: Website

The Wilson Lab at MIT

Department of Brain and Cognitive Sciences

MIT Picower Institute for Learning and Memory

Department of Biology

 

 


Ethics, Computing, and AI series prepared by MIT SHASS Communications
Office of Dean Melissa Nobles
MIT School of Humanities, Arts, and Social Sciences
Series Editor and Designer: Emily Hiestand, Communication Director
Series Co-Editor: Kathryn O'Neill, Associate News Manager, SHASS Communications
Published 18 February 2019