Is Deep Learning the Way Towards Human-Like Intelligent Machines?

Let’s take object recognition, for instance. Researchers have been using ImageNet as a benchmark database since 2010. Using a DL model based on convolutional neural nets, Alex Krizhevsky and his team won the ImageNet Challenge in 2012. They beat their (non-DL) rivals by more than 10 percentage points of error, achieving 63.3% top-1 accuracy. Today, the best DL models reach over 90% accuracy on the ImageNet benchmark. That’s better than a human.
However, those same models suffer a 40–45 percentage point drop in accuracy when classifying real-world images, such as the ones found in ObjectNet, a bias-controlled object dataset. These models can classify ImageNet’s clean images almost perfectly but can’t extrapolate to real-world scenarios.
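To make those benchmark numbers concrete, here is a minimal sketch of how such an evaluation might look in PyTorch. The image folder, its layout, and the class mapping are assumptions for illustration; ObjectNet itself requires a separate download and a mapping from its categories to ImageNet’s.

```python
# Minimal sketch: top-1 accuracy of an ImageNet-pretrained classifier on a
# local folder of real-world photos. The folder path is hypothetical, and
# ImageFolder's label indices would still need to be mapped to ImageNet classes.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(pretrained=True).eval()  # ImageNet-trained CNN

# Hypothetical folder laid out as class_name/image.jpg
dataset = datasets.ImageFolder("real_world_images/", transform=preprocess)
loader = DataLoader(dataset, batch_size=32)

correct, total = 0, 0
with torch.no_grad():
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.size(0)

print(f"Top-1 accuracy: {correct / total:.1%}")
```

Running the same loop on ImageNet’s validation set versus a set of unstaged, real-world photos is exactly the comparison behind the accuracy drop described above.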
Other examples of the lights and shadows of DL are DeepMind’s milestone systems, AlphaZero and AlphaStar, capable of beating world-class players at chess and Go or at StarCraft II, respectively. These feats seem incredible, but appearances can be misleading. AlphaZero can win at chess and Go, but it can’t play both at once. “Retraining a model’s connections and responses so that it can win at chess resets any previous experience it had of Go,” says Douglas Heaven in an article for Nature. “From the perspective of a human,” says Chelsea Finn, a researcher at Stanford University, “this is kind of ridiculous.”
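This forgetting effect can be illustrated with a toy sketch: a small network trained on one synthetic task, then retrained on a second one with no rehearsal of the first. The two tasks here are stand-ins invented for the example (not chess or Go), and in this setup accuracy on the first task typically falls back to roughly chance.

```python
# Toy illustration of the "retraining resets previous experience" problem.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(feature):
    # Synthetic task: the label depends only on the sign of one input feature.
    x = torch.randn(2000, 20)
    y = (x[:, feature] > 0).long()
    return x, y

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

def train(model, x, y, epochs=50):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

xa, ya = make_task(feature=0)   # "task A"
xb, yb = make_task(feature=5)   # "task B"

train(model, xa, ya)
print(f"Task A after training on A: {accuracy(model, xa, ya):.0%}")

train(model, xb, yb)            # retrain on B, with no rehearsal of A
print(f"Task A after training on B: {accuracy(model, xa, ya):.0%}")
print(f"Task B after training on B: {accuracy(model, xb, yb):.0%}")
```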
The above examples illustrate DL’s difficulties with transfer learning: the ability to retain the knowledge learned while solving one problem and apply it to another, related problem. We humans are far better at it. Melanie Mitchell, a computer science professor at Portland State University, says that “machines often are not able to deal with input that is different from the kind of input they have been trained on.”
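For a concrete picture of what transfer learning looks like in practice, here is a minimal fine-tuning sketch in PyTorch. The new task, its dataset, and the number of classes are assumptions for illustration.

```python
# Transfer-learning sketch: reuse an ImageNet-trained backbone for a new,
# related task by freezing its features and training only a new head.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(pretrained=True)

# Freeze everything learned on ImageNet...
for param in backbone.parameters():
    param.requires_grad = False

# ...and replace the final classification layer for the new problem.
num_new_classes = 10  # hypothetical number of classes in the new task
backbone.fc = nn.Linear(backbone.fc.in_features, num_new_classes)

# Only the new head's parameters are passed to the optimizer;
# training then proceeds as usual on the new task's (much smaller) labeled set.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```

Even this standard recipe only works well when the new task is close to the old one, which is precisely the limitation Mitchell points to.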
Furthermore, DL systems need vast amounts of data and computing power, and they are still remarkably dumb. In the words of Yoshua Bengio, one of the ‘Godfathers of AI,’ “[machines] need much more data to learn a task than human examples of intelligence, and they still make stupid mistakes.”
Yann LeCun, another DL pioneer, thinks these systems will remain important in the future, but they will need some changes. He argues that supervised learning, the most common training method today, in which models learn from labeled data, will be replaced with self-supervised learning.
“[Self-supervised learning] is the idea of learning to represent the world before learning a task. This is what babies and animals do,” he says. “Once we have good representations of the world, learning a task requires few trials and few samples.”
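One way to make self-supervised learning concrete is a classic pretext task, rotation prediction, where the training labels are generated from the images themselves rather than annotated by humans. This is an illustrative sketch of that idea, not LeCun’s specific proposal.

```python
# Self-supervised pretext task sketch: predict which of four rotations was
# applied to an unlabeled image. No human labels are needed.
import torch
import torch.nn as nn
from torchvision import models

encoder = models.resnet18(pretrained=False)          # representation learner, from scratch
encoder.fc = nn.Linear(encoder.fc.in_features, 4)    # predict one of 4 rotations

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def rotation_batch(images):
    """Rotate each image by 0/90/180/270 degrees; the rotation index is the label."""
    rotations = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                           for img, k in zip(images, rotations)])
    return rotated, rotations

def train_step(unlabeled_images):
    # One self-supervised step on an unlabeled batch (shape: N x 3 x 224 x 224).
    inputs, targets = rotation_batch(unlabeled_images)
    loss = loss_fn(encoder(inputs), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After this kind of pretraining, the encoder’s representation can be fine-tuned on the actual task with far fewer labeled examples, which is the “few trials and few samples” LeCun describes.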
It’s generally accepted that DL will still be around for some time in one form or another.
That’s the main conclusion from Martin Ford’s book Architects of Intelligence. Among the challenges facing today’s AI are “its application to narrow domains, its overreliance on data, and its limited understanding of the meaning of language.”
In the words of Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence: “I think the reality is that deep learning and neural networks are particularly nice tools in our toolbox, but it’s a tool that still leaves us with a number of problems like reasoning, background knowledge, common sense, and many others largely unsolved.”