Artificial intelligence hits the barrier of meaning
Machine learning algorithms don’t yet understand things the way humans do — with sometimes disastrous consequences.
Unlocking A.I.’s barrier of meaning is likely to require a step backward for the field, away from ever bigger networks and data collections, and back to the field’s roots as an interdisciplinary science studying the most challenging of scientific problems: the nature of intelligence.

— Melanie Mitchell, Professor of Computer Science at Portland State University

You’ve probably heard that we’re in the midst of an A.I. revolution. We’re told that machine intelligence is progressing at an astounding rate, powered by “deep learning” algorithms that use huge amounts of data to train complicated programs known as “neural networks.”

Today’s A.I. programs can recognize faces and transcribe spoken sentences. We have programs that can spot subtle financial fraud, find relevant web pages in response to ambiguous queries, map the best driving route to almost any destination, beat human grandmasters at chess and Go, and translate between hundreds of languages. What’s more, we’ve been promised that self-driving cars, automated cancer diagnoses, housecleaning robots and even automated scientific discovery are on the verge of becoming mainstream.

The Facebook founder, Mark Zuckerberg, recently declared that over the next five to 10 years, the company will push its A.I. to “get better than human level at all of the primary human senses: vision, hearing, language, general cognition.” Shane Legg, chief scientist of Google’s DeepMind group, predicted that “human-level A.I. will be passed in the mid-2020s.”

As someone who has worked in A.I. for decades, I’ve witnessed the failure of similar predictions of imminent human-level A.I., and I’m certain these latest forecasts will fall short as well. The challenge of creating humanlike intelligence in machines remains greatly underestimated. Today’s A.I. systems sorely lack the essence of human intelligence: understanding the situations we experience, being able to grasp their meaning. The mathematician and philosopher Gian-Carlo Rota famously asked, “I wonder whether or when A.I. will ever crash the barrier of meaning.” To me, this is still the most important question.

Full commentary at the New York Times