AI and the Problem with the Word “Intelligence”

A Financial Times headline reads, “AI in Finance Is Like ‘From Typewriters to Word Processors’” (June 16, 2024). I don’t think so, despite all the excitement (see “Ray Kurzweil on how AI will change the physical world,” The Economist, June 17, 2024). At the very least, skepticism is warranted about the “generative” form of AI. (IBM defines generative AI as referring to “deep learning models that can generate high-quality text, images, and other content based on data they have been trained on.”)

The conversational power of an AI bot like ChatGPT is impressive. The bot writes better, and is apparently a better conversationalist, than a large proportion of humans. I’m told that he (or she, except that the thing has no gender; let me use “he” as a neutral pronoun anyway) does good work at identifying and classifying things and at simple coding. It is a very complex system. But he relies heavily on his massive database, in which he makes billions of comparisons at electronic speed. I have had occasions to observe that his analytical and creative capacities are limited.

Sometimes, they are surprisingly limited. I recently spent a few hours with the latest version of DALL-E (the image-generating side of ChatGPT) trying to make him understand the following request:

Create a picture of a strong individual (a woman) walking against a crowd led by a king.

He didn’t understand. I had to elaborate, rephrase, and re-explain many times, as in this amended instruction:

Produce an image of a strong and individualist person (a woman) who goes against the nondescript crowd led by the king. The woman is in the foreground and walks proudly from west to east. The crowd led by the king is closer to the background and is moving from east to west. They walk in opposite directions. The camera is to the south.

(By “closer to the background,” I meant “closer to the foreground.” Nobody is perfect.)

DALL-E was able to repeat my instructions back when I tested him, but he could not see the glaring errors in his images, as if he didn’t understand. He produced many pictures in which the woman on one side, and the king and his followers on the other side, walk in the same direction. The first image below provides an amusing example of this basic incomprehension. When the bot finally drew a picture in which the woman and the king walk in opposite directions (reproduced as the second image below), the king’s followers had disappeared! A child learning to draw sees his mistakes better once they are explained to him.

I said of DALL-E “as if he didn’t understand,” and that is exactly the problem: the machine, essentially a piece of code and a huge database, simply does not understand. What it does is impressive compared with what computer programs could do until now, but it is not thinking or understanding; it is not intelligence as we know it. It is a very sophisticated calculator. ChatGPT does not know that he thinks, which means that he does not think and does not understand. He merely reproduces patterns he finds in his database. It looks like analogical reasoning, but without the reasoning. Thinking implies making analogies, but making analogies does not imply thinking. It is therefore not surprising that DALL-E did not suspect a possible individualist interpretation of my request, which I had not spelled out: a private individual refuses to follow a crowd loyal to the king. A computer program is not an individual and does not understand what being one means. As suggested by the featured image of this post (drawn by DALL-E after much prodding, and reproduced below), AI cannot, and I suspect never will, understand Descartes’s Cogito ergo sum (“I think, therefore I am”). And this is not because he cannot find Latin in his database.

Nowhere in his database can DALL-E find a robot with a cactus on its head. Another Dalí, Salvador, could easily have imagined one.

Of course, nobody can forecast the future and how AI will develop; humility is in order. Advances in computing will likely produce what we would now consider miracles. But from what we know about thinking and understanding, we can safely conclude that electronic circuits, useful as they are, will probably never be intelligent. What is missing from “artificial intelligence” is intelligence.

******************************

DALL-E’s erroneous illustration of a simple request from P. Lemieux

Another misunderstanding by DALL-E of a simple request

A self-portrait by DALL-E, under the influence of your humble blogger
