When most people think of artificial intelligence (AI) they think of HAL 9000 from “2001: A Space Odyssey,” Data from “Star Trek,” or more recently, the android Ava from “Ex Machina.” But to computer scientists, AI isn’t necessarily any of those things, and the question “what is AI?” can be a complicated one.
One of the standard textbooks in the field, by University of California, Berkeley, computer scientist Stuart Russell and Google’s director of research, Peter Norvig, puts artificial intelligence into four broad categories:
- machines that think like humans,
- machines that act like humans,
- machines that think rationally,
- machines that act rationally.

The differences between them can be subtle, notes Ernest Davis, a professor of computer science at New York University. AlphaGo, the computer program that beat a world champion at Go, acts rationally when it plays the game (it plays to win). But it doesn’t necessarily think the way a human being does, though it engages in some of the same pattern-recognition tasks. Similarly, a machine that acts like a human doesn’t necessarily bear much resemblance to people in the way it processes information.
Even IBM’s Watson, which acted somewhat like a human when playing Jeopardy!, wasn’t using anything like the rational processes humans use.
Davis says he uses another definition, centered on what one wants a computer to do. “There are a number of cognitive tasks that people do easily — often, indeed, with no conscious thought at all — but that are extremely hard to program on computers. Archetypal examples are vision and natural language understanding. Artificial intelligence, as I define it, is the study of getting computers to carry out these tasks,” he said.
Computer vision has made a lot of strides in the past decade — cameras can now recognize faces in the frame and tell the user where they are. However, computers are still not that good at actually recognizing faces, and the way they do it is different from the way people do. A Google image search, for instance, just looks for images in which the pattern of pixels matches the reference image. More sophisticated face recognition systems look at the dimensions of the face to match them with images that might not be simple face-on photos. Humans process the information rather differently, and exactly how that process works is still something of an open question for neuroscientists and cognitive scientists.
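The pixel-pattern matching described above can be sketched in a toy form: comparing a candidate image against a reference, pixel by pixel, with no notion of what a face actually is. This is a minimal illustration of the idea, not how Google’s systems are actually implemented; the function name and the tiny arrays are hypothetical.

```python
import numpy as np

def pixel_match_score(image: np.ndarray, reference: np.ndarray) -> float:
    """Score how closely an image's pixel pattern matches a reference of
    the same shape. Lower is a closer match (sum of squared differences).
    The comparison is purely numeric -- no understanding of content."""
    return float(np.sum((image.astype(float) - reference.astype(float)) ** 2))

# Toy 4x4 grayscale "images": one identical to the reference, one slightly noisy.
reference = np.array([[10, 20, 30, 40]] * 4)
identical = reference.copy()
noisy = reference + 5  # every pixel off by 5

print(pixel_match_score(identical, reference))  # 0.0
print(pixel_match_score(noisy, reference))      # 400.0 (16 pixels * 5**2)
```

A system like this finds near-duplicates well but fails on the same face at a different angle or lighting, which is why, as the article notes, more sophisticated systems measure facial dimensions instead.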
Other tasks, though, are proving tougher. For example, Davis and NYU psychology professor Gary Marcus wrote in Communications of the ACM about “common sense” tasks that computers find very difficult. A robot serving drinks, for example, can be programmed to recognize a request for one, and even to manipulate a glass and pour one. But if a fly lands in the glass, the robot still has a tough time deciding whether to pour the drink and serve it (or not).
The issue is that much of “common sense” is very hard to model. Computer scientists have taken several approaches to get around that problem. IBM’s Watson, for instance, was able to do so well on Jeopardy! because it had a huge database of knowledge to work with and a few rules to string words together to make questions and answers. Watson, though, would have a difficult time with a simple open-ended conversation.
Beyond tasks, though, is the issue of learning. Machines can learn, said Kathleen McKeown, a professor of computer science at Columbia University. “Machine learning is a kind of AI,” she said.
Some machine learning works in a way similar to the way people do it, she noted. Google Translate, for example, uses a large corpus of text in a given language to translate to another language, a statistical process that doesn’t involve looking for the “meaning” of words. Humans, she said, do something similar, in that we learn languages by seeing lots of examples.
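The statistical process McKeown describes can be illustrated with a toy sketch: given parallel sentence pairs, guess each word’s translation simply by counting which target-language word it co-occurs with most often. This is a drastic simplification of how statistical translation systems work, and the function name and three-sentence “corpus” are invented for illustration.

```python
from collections import Counter, defaultdict

def cooccurrence_translations(pairs):
    """From parallel sentence pairs, guess each source word's translation as
    the target word it co-occurs with most often -- pure statistics, with no
    notion of the 'meaning' of any word."""
    counts = defaultdict(Counter)
    for src, tgt in pairs:
        for s in src.split():
            for t in tgt.split():
                counts[s][t] += 1
    return {s: c.most_common(1)[0][0] for s, c in counts.items()}

# Tiny hypothetical English-Spanish parallel corpus.
pairs = [
    ("the cat", "el gato"),
    ("the dog", "el perro"),
    ("a cat", "un gato"),
]
table = cooccurrence_translations(pairs)
print(table["cat"])  # 'gato'
print(table["the"])  # 'el'
```

With only three sentence pairs the counts already separate “cat” from “the,” but the same blindness to meaning is what lets synonyms and connotations fool such a system, as the next paragraph notes.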
That said, Google Translate doesn’t always get it right, precisely because it doesn’t seek meaning and can sometimes be fooled by synonyms or differing connotations.
One area that McKeown said is making rapid strides is text summarization; such systems are sometimes employed by law firms that have to go through large volumes of documents.
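One common approach to this kind of summarization is extractive: score each sentence by how frequent its words are across the whole text and keep the top scorers. The sketch below is a deliberately crude illustration of that idea, not any particular commercial system; the function name and sample text are invented.

```python
from collections import Counter

def extractive_summary(text: str, n_sentences: int = 1) -> str:
    """Keep the sentence(s) whose words are most frequent across the text --
    a crude extractive summary with no understanding of the content."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    freq = Counter(w.lower() for s in sentences for w in s.split())
    scored = sorted(sentences,
                    key=lambda s: -sum(freq[w.lower()] for w in s.split()))
    return ". ".join(scored[:n_sentences]) + "."

text = ("The contract covers delivery of widgets. "
        "Delivery of widgets must occur by June. "
        "Payment is due later.")
print(extractive_summary(text))  # Delivery of widgets must occur by June.
```

Because it only counts words, such a system surfaces repeated themes (“delivery of widgets”) but can miss a short sentence that is actually the most important one.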
McKeown also thinks personal assistants are an area likely to move forward quickly. “I would look at the movie ‘Her,’” she said. In that 2013 movie starring Joaquin Phoenix, a man falls in love with an operating system that has consciousness.
“I initially didn’t want to go see it, I said that’s totally ridiculous,” McKeown said. “But I actually enjoyed it. People are building these conversational assistants, and trying to see how far we can get.”
The upshot is that AIs that can handle certain tasks well exist, as do AIs that look almost human because they have a large trove of data to work with. Computer scientists have been less successful coming up with an AI that can think the way we expect a human being to, or act like a human in more than very limited situations.
“I don’t think we’re in a state that AI is so good that it will do things we hadn’t imagined it was going to do,” McKeown said.