
Authors: Barry Smith and Jobst Landgrebe

https://arxiv.org/abs/1901.02918v3 https://doi.org/10.48550/arXiv.1901.02918

I heard that Barry Smith is going to publish a new book on AI very soon. I wanted to get familiar with his views before reading his book. I found two articles that seem promising to me. This is the first one. It starts with a clear introduction to what machine learning and neural networks are, how they function, and what they can do.

The common distinction the authors follow is that between single-task, selective AI, which can outperform any human within the highly constricted framework of a single task, and general AI, which, as I gathered from another article, has a foundational link to language (in brief: no language, no GAI).

AI, making use of DNNs (deep neural networks), can already outperform humans in the following narrowed-down tasks: games, certain kinds of pattern recognition (medicine, astronomy), and certain industrial tasks (emphasis on repetitiveness). The key is the “pre-curation” or “preselection” of data according to the task at hand.

The authors then turn to the field of language understanding. Keep in mind that this article was published at the beginning of 2019. Its publication coincided with the open-access version of GPT-2, but the article predates GPT-3 and LaMDA. (GPT-2 is not mentioned in the article.)

The authors criticize the then-existing language models in the following way:

“The principal problem with this approach, however, is that embedding into a linear vector of encoding real numbers – no matter how long the vector is – leads to the discarding of all information pertaining to the contexts of the input sentences.”

Making AI Meaningful Again, p. 5

They argue that the contextualization of text is a precondition for the correct understanding of language. Prior knowledge is used to put the text into a context. AI does not have this prior knowledge. It transforms meaningful words (meaningful because of their embeddedness in a context) into meaningless signs (information). “The reason for this shallowness of this so-called ‘neural machine translation’ is that the vector space it uses is merely morpho-syntactical and lacks semantic dimensions.” This is the reason, the authors claim, that AI translators like Google Translate (Transformer) score low (28/100) compared to an average bilingual speaker (75/100) (p. 5).
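To make the quoted critique concrete for myself: a minimal toy sketch (my own illustration, not the paper's model, and the word vectors below are made-up numbers) of how a fixed-length embedding built by pooling word vectors discards contextual information such as word order:

```python
# Toy illustration: a sentence vector built by averaging word vectors
# is order-insensitive, so sentences with opposite meanings can map
# to the exact same embedding. Vectors below are invented for the demo.

def embed(sentence, word_vectors):
    """Average the vectors of the words in a sentence (ignores order)."""
    vecs = [word_vectors[w] for w in sentence.split()]
    dim = len(vecs[0])
    return tuple(sum(v[i] for v in vecs) / len(vecs) for i in range(dim))

word_vectors = {          # hypothetical 3-dimensional word embeddings
    "dog": (1.0, 0.0, 0.2),
    "bites": (0.0, 1.0, 0.5),
    "man": (0.3, 0.2, 1.0),
}

a = embed("dog bites man", word_vectors)
b = embed("man bites dog", word_vectors)
print(a == b)  # True: opposite meanings, identical embedding
```

Of course, later transformer models use position-dependent, contextual embeddings rather than this kind of pooling, which is part of why the landscape shifted so quickly after this article appeared.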

Again, I have to tend to other tasks; I will continue reading as soon as possible.


By AIprism
