Let’s say you have a manufacturing process, and the AIs involved operate at IQ = 160, while the humans operate at IQ = 120. The O-Ring model, as you probably know, predicts that the final product will perform at the IQ = 120 level. The model, in short, says that a manufacturing process is no better than its weakest link.
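For intuition, here is a minimal Python sketch of the multiplicative quality logic behind Kremer's O-Ring production function, where output quality is the product of each contributor's quality. The `iq_to_quality` mapping is a made-up normalization for illustration only; it is not part of the model.

```python
import math

def iq_to_quality(iq: float) -> float:
    """Hypothetical mapping from IQ to a per-step quality in (0, 1]."""
    return min(iq / 200.0, 1.0)

def o_ring_output(iqs: list[float]) -> float:
    """O-Ring-style output quality: the product of per-step qualities."""
    return math.prod(iq_to_quality(iq) for iq in iqs)

print(o_ring_output([160, 120]))  # 0.8 * 0.6 = 0.48
print(o_ring_output([160, 160]))  # 0.8 * 0.8 = 0.64
```

Under these assumed numbers, the human link, not the AI, sets the ceiling on the mixed team's output.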
In practical terms, the superior understanding of the more intelligent AIs may be lost on the people who need to work with them. Or the people in the production chain may drag down overall reliability. This latter problem matters most when the process is conjunctive, i.e., each part must work well for the whole to work. Many security problems have that structure.
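To see why a single weaker link dominates a conjunctive process, here is a short sketch; the reliability figures are illustrative assumptions only.

```python
import math

def chain_reliability(steps: list[float]) -> float:
    """End-to-end success probability when every step must succeed."""
    return math.prod(steps)

# Four highly reliable AI steps plus one human step at 90%:
print(chain_reliability([0.99, 0.99, 0.99, 0.99, 0.90]))  # ~0.864
# The same chain if the human step matched the AI steps:
print(chain_reliability([0.99] * 5))                      # ~0.951
```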
Overall productivity may end up somewhat higher than the IQ = 120 level would suggest, if only because the AIs will work longer hours at a much lower rate. Still, the quality of the final product may sit closer to IQ = 120 than you would like.
This is another reason why I think AI productivity gains will spread across the world only gradually.
Sometimes when I read AI analysts, I feel they are imagining production processes made up of AIs only. We may get there eventually, but I do not see that arriving anytime soon, if only for legal and regulatory reasons.
Furthermore, those AIs may have other shortcomings, IQ aside. O-Ring logic can apply to those qualities too, even within a circle of AIs. So if you have Claude and the o1 model “working together,” you might end up with an outcome worse than the best of either.