Authors: Barry Smith and Jobst Landgrebe
https://arxiv.org/abs/1901.02918v3 https://doi.org/10.48550/arXiv.1901.02918

The common distinction the authors follow is that between single-task, selective AI, which can outperform any human within the highly constricted framework of a single task, and general AI, which, from what I gathered in another article, has a foundational link to language (in brief: no language, no GAI).
AI, making use of dNNs (deep neural networks), can already outperform humans in the following narrowed-down tasks: games, certain kinds of pattern recognition (medicine, astronomy), and certain industrial tasks (with an emphasis on repetitiveness). The key is the “pre-curation” or “preselection” of data according to the task at hand.
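To make the idea of pre-curation concrete, here is a minimal sketch of my own (the paper contains no code): before a single-task model is trained, the raw data is filtered down to exactly the cases the task defines. All field names, labels, and the resolution threshold below are hypothetical.

```python
# A sketch of "pre-curation" (my illustration, not the paper's): the raw
# data is narrowed down to exactly the cases the single task defines.
# Field names, labels, and the resolution threshold are all hypothetical.

raw_records = [
    {"image_id": 1, "label": "melanoma",     "resolution": (1024, 1024)},
    {"image_id": 2, "label": "chest_xray",   "resolution": (512, 512)},
    {"image_id": 3, "label": "benign_nevus", "resolution": (1024, 1024)},
]

TASK_LABELS = {"melanoma", "benign_nevus"}  # the one narrow task: skin lesions
MIN_RESOLUTION = (1024, 1024)               # quality bar set by the curator

def pre_curate(records):
    """Keep only records that fit the narrow task definition."""
    return [
        r for r in records
        if r["label"] in TASK_LABELS and r["resolution"] >= MIN_RESOLUTION
    ]

curated = pre_curate(raw_records)
print(len(curated))  # 2 -- only the skin-lesion images survive
```

Everything outside that narrow frame is discarded before training ever starts, which is precisely why the resulting model can excel at the one task and nothing else.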
The authors then turn to the field of language understanding. Keep in mind that this article was published at the beginning of 2019. Its publication coincided with the open-access release of GPT-2, but the article predates GPT-3 and LaMDA. (GPT-2 is not mentioned in the article.)
The authors criticize the then-existing language models as follows:
“The principal problem with this approach, however, is that embedding into a linear vector of encoding real numbers – no matter how long the vector is – leads to the discarding of all information pertaining to the contexts of the input sentences.”
Making AI Meaningful Again, p. 5
They argue that the contextualization of text is a precondition for the correct understanding of language. Prior knowledge is used to put a text into context, and AI does not have this prior knowledge. It transforms meaningful words (meaningful because of their embeddedness in a context) into meaningless signs (information). “The reason for this shallowness of this so-called ‘neural machine translation’ is that the vector space it uses is merely morpho-syntactical and lacks semantic dimensions.” This, the authors claim, is why AI translators like Google Translate (Transformer) score low (28/100) compared to an average bilingual speaker (75/100) (p. 5).
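The quoted criticism targets the fixed-length sentence encodings used by the neural machine translation systems of that time. As a deliberately crude stand-in of my own (again, not from the paper), the sketch below collapses a sentence into one fixed-length vector by averaging toy word vectors, which makes the information loss visible: word order, and with it context, disappears. The three-dimensional “embeddings” are hypothetical.

```python
# A deliberately crude stand-in (mine, not the paper's) for encoding a
# sentence as one fixed-length vector: average its word vectors. The toy
# 3-dimensional "embeddings" below are hypothetical.

import numpy as np

EMBED = {
    "dog":   np.array([1.0, 0.0, 0.0]),
    "bites": np.array([0.0, 1.0, 0.0]),
    "man":   np.array([0.0, 0.0, 1.0]),
}

def sentence_vector(sentence):
    """Collapse a sentence into the mean of its word vectors."""
    return np.mean([EMBED[w] for w in sentence.split()], axis=0)

v1 = sentence_vector("dog bites man")
v2 = sentence_vector("man bites dog")
print(np.allclose(v1, v2))  # True: opposite meanings, identical vector
```

Real encoder networks are far more sophisticated than this averaging, but the authors’ point applies to any scheme that compresses a sentence into a single fixed-length vector: whatever distinguishes its possible contexts has to be squeezed out.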
Again, I have to tend to other tasks; I will continue reading as soon as possible.