How digital machines can learn analogical thinking




Deep learning lacks common sense

In recent years, deep-learning-based algorithms have made great progress in speech recognition and machine translation.

Drawing on several levels of analysis (phonetics, word combinations, and speech synthesis), they have reached up to 90% accuracy in speech recognition. Meanwhile, translation systems such as Google Translate have made great strides since switching to neural machine translation in 2016.

These leaps come largely from algorithmic refinements. First, Google’s Word2vec model introduced a vector-space representation in which a word’s meaning is defined not by a single number but by its multidimensional relations to other words (mathematical vectors). Next, recurrent neural networks made it possible to interpret a word in the light of the sentence so far: the network’s state after each word is carried forward as an input that helps interpret the next one. Finally, so that the machine retains only the relevant information, researchers invented “long short-term memory” (LSTM) models, whose gates select which inputs are kept in memory.
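The article itself contains no code, but a minimal PyTorch sketch can make these two ideas concrete: an embedding layer turns each word into a vector, and an LSTM carries a hidden state forward so that every new word is read in the light of the words before it. The toy vocabulary and layer sizes below are illustrative assumptions, not anything from the original text.

```python
# Minimal sketch (assumed PyTorch example): word vectors + an LSTM that keeps
# a running memory of the sentence read so far.
import torch
import torch.nn as nn

# A tiny, made-up vocabulary just for illustration.
vocab = {"he": 0, "drank": 1, "water": 2, "from": 3, "his": 4, "glass": 5}

embedding = nn.Embedding(num_embeddings=len(vocab), embedding_dim=16)  # word -> vector
lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)        # gated memory

words = ["he", "drank", "water", "from", "his", "glass"]
sentence = torch.tensor([[vocab[w] for w in words]])   # shape: (1, 6)

vectors = embedding(sentence)                          # shape: (1, 6, 16), one vector per word
outputs, (hidden, cell) = lstm(vectors)

# outputs[:, t] is the representation of word t given everything read so far;
# `cell` is the long-term memory the LSTM gates have chosen to keep.
print(outputs.shape, hidden.shape, cell.shape)
```

The point of the sketch is only to show where the sentence-level context lives: not in any single word vector, but in the recurrent state that the gates update word by word.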

Encouraged by these successes in machine translation, many researchers were quick to voice their enthusiasm about AI reaching general language understanding.

However, these results must be put into perspective. The algorithms still struggle with ambiguous expressions whose meaning depends on context and on the sense of the sentence as a whole (idioms such as “take it with a grain of salt”).

Without a human to step in and adjust, these systems lack common sense. They lack the basic, visual knowledge of things that is innate to humans (“a glass is filled with water, drunk from by a human being, held by the hand”). In that example, the machine can easily confuse the role of the glass with that of the water or the hand, and thus translate almost anything. Faced with a phrase like “he drank water from his glass by taking it by the hand”, it may assume that “it” refers to the water, whereas a human reader immediately understands that it refers to the glass.

The machine lacks the intuitive physics that humans deduce from observing their environment and from analogy with their own situation. It lacks an analogical knowledge of things.
