The Quartz guide to artificial intelligence
What is it, why is it important, and should we be afraid?

Automobiles were a direct replacement for horses, but in the medium and long term led to many other kinds of uses, such as semi-trucks for large-scale shipping, moving vans, minivans, and convertibles. Similarly, AI systems in the short term will be a direct replacement for routine kinds of tasks, but in the medium and long term we will see uses just as dramatic as automobiles.

— Jason Hong, a professor in Carnegie Mellon University’s Human Computer Interaction Lab

What is artificial intelligence? Why is it important? Why is everyone talking about it all of a sudden? If you skim online headlines, you’ll likely read about how AI is powering Amazon and Google’s virtual assistants, or how it’s taking all the jobs (debatable), but you won’t find a good explanation of what it actually is (or whether the robots are going to take over). We’re here to help with this living document, a plain-English guide to AI that will be updated and refined as the field evolves and important concepts emerge.

Artificial intelligence is software, or a computer program, with a mechanism to learn. It then uses that knowledge to make a decision in a new situation, as humans do. The researchers building this software try to write code that can read images, text, video, or audio, and learn something from it. Once a machine has learned, that knowledge can be put to use elsewhere. If an algorithm learns to recognize someone’s face, it can then be used to find them in Facebook photos. In modern AI, learning is often called “training” (more on that later).

Humans are naturally adept at learning complex ideas: we can see an object like an apple, and then recognize a different apple later on. Machines are very literal—a computer doesn’t have a flexible concept of “similar.” A goal of artificial intelligence is to make machines less literal. It’s easy for a machine to tell whether two images of an apple, or two sentences, are exactly the same, but artificial intelligence aims to recognize a picture of that same apple from a different angle or in different light; it’s capturing the visual idea of an apple. This is called “generalizing,” or forming an idea based on similarities in the data rather than just the specific images or text the AI has seen. That more general idea can then be applied to things the AI hasn’t seen before.
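The training-then-generalizing idea above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the data and labels are made up): a nearest-neighbor classifier “learns” from a handful of labeled examples, then labels a measurement it has never seen by finding the most similar training example—a very simple form of generalization.

```python
import math

# "Training": a few labeled examples the program has seen.
# The feature vectors here are invented measurements for illustration.
training_data = [
    ((7.0, 7.5), "apple"),
    ((7.2, 7.0), "apple"),
    ((3.0, 9.0), "banana"),
    ((3.5, 8.5), "banana"),
]

def classify(point):
    """Label a new point with the label of its closest training example."""
    def distance(example):
        features, _label = example
        return math.dist(features, point)
    _features, label = min(training_data, key=distance)
    return label

# "Generalizing": a never-before-seen measurement still gets a sensible label.
print(classify((6.8, 7.3)))  # -> apple
```

Real AI systems replace the handful of hand-picked measurements with millions of examples and far richer notions of similarity, but the shape of the idea—learn from seen data, apply it to unseen data—is the same.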

Full story at Quartz
