Why not replace the state (all government services) with an AI bot? It is not difficult to imagine that a bot would be more efficient than the government at, say, managing its budget, to name just one example. It is true that citizens would not be able to control the ruling bot, except ex ante as trainers (imposing a "constitution" on the bot) or perhaps ex post by pulling the plug. But of course citizens don't really rule the state now either, except as a mindless and disorganized crowd in which the average person has no influence (I have written several EconLog posts explaining this point from a public-choice perspective). An AI government, however, could not replicate the main advantage of democracy, when it works, which is to throw out the rascals when they harm a majority of citizens.
It is very likely that those who see AI as an imminent threat to humanity are greatly exaggerating the danger. It is hard to see how AI could pose such a threat without dominating individuals. One of the three so-called "godfathers" of AI is Yann LeCun, a professor at New York University and Chief AI Scientist at Meta. He thinks AI as we know it is dumber than a cat. A Wall Street Journal columnist cites LeCun's response to another AI researcher's tweet (see Christopher Mims, "This AI Pioneer Thinks AI Is Dumber Than a Cat," The Wall Street Journal, October 12, 2024):
It seems to me that before we can “rapidly figure out how to control AI systems that are smarter than us,” we need to have a starting point for the design of a system that is smarter than a house cat.
The columnist adds:
[LeCun] likes the cat metaphor. Felines, after all, have a mental model of the visual world, a persistent memory, some capacity to reason, and an ability to plan, he says. None of these qualities are present in today's "frontier" AIs, including those made by Meta itself.
And, quoting LeCun:
We're used to the idea that people or organizations that can express themselves, or manipulate language, are smart—but that's not true. You can manipulate language and not be smart, and that's basically what LLMs [AI's large language models] demonstrate.
The idea that language manipulation is not evidence of intelligence is historically interesting, although just listening to the average con man or post-truth politician confirms it. Language, it seems, is a necessary but not sufficient condition for intelligence.
At any rate, those who believe that AI is so dangerous that it must be controlled by governments ignore how often political power, including the modern state, has been dangerous or dangerously inefficient in human history, as well as the economic theories that explain why. Yoshua Bengio, another of the three "godfathers" of AI and a friend of LeCun's, illustrates this error:
"I don't think we should leave it to the competition between companies and the profit motive alone to protect the public and democracy," said Bengio. "That is why I think we need responsible governments."
The main reason why the state should leave AI alone is that government is a very crude and blunt organization compared with the complexity and productivity of free competition and free social interaction. Free markets generate price signals that embody more information than political processes can, as shown by Friedrich Hayek in his 1945 American Economic Review article, "The Use of Knowledge in Society." Understanding this insight represents a knowledge frontier more important than the current evolution of AI.
Which brings us back to my opening question. At best, AI would not be able to efficiently coordinate the actions of individuals in any society, except perhaps that of a nation of obedient subjects. But this is no reason to extend current governments' dirigisme to AI research and development. One way or another, appointing a wolf to guard the sheep against wolves is not a good idea.
******************************
On the lighter side (or is it?), I had the usual trouble getting DALL-E to draw something close to the image in my mind—in this case, of current AI being dumber than a cat—even though I realize that the data-devouring robot draws much better than I do. My idea was to show a cat puzzled by what it sees in the world. When the bot finally produced something not too far from what I wanted, I wrote, "That's great! She is a beautiful cat." It replied (I reproduce our "conversation" verbatim):
“Thank you! I’m glad you like this picture, and I’m always here to help, like a good cat would be. If there’s anything else you need, feel free to ask!”
“Where did you get your little sense of humor?” I asked. The silicon creature replied:
"I guess you could say my sense of humor is programmed! It's a mix of language patterns, some playful thinking, and a dash of trying to make communication more enjoyable. It's like a cat knocking things off a table—sometimes it works, sometimes it's just silly!"