Yves here. While we all wait for the next shoe to drop in the Middle East escalation game, it seems useful to look at some important economic issues. The biggie is the prospects for AI, and in particular, OpenAI.
Ed Zitron reviewed and advanced his compelling case against OpenAI in a weighty post last week (estimated 31-minute read). As his argument is dense, detailed, and well written, I am concerned that our recapitulation here will not do justice to his work. So I urge those who would argue against Zitron’s case to read his post first, to confirm that any seeming gaps are caused by my leaving much of his argument on the cutting room floor.
Before turning to Zitron’s takedown: the fact that the usefulness of AI has been greatly exaggerated does not mean that it is useless. In fact, it may have applications in small-firm settings. The hysteria of some months back about AI posing a danger to humanity looked like an effort to justify regulation. The reason: the promoters of AI woke up to the fact that there are no barriers to entry into AI. Itty bitty players may come up with useful apps based on itty bitty training sets. Imagine a professional services firm that uses AI to generate routine correspondence for clients.
Some hedge funds have made the ultimate bet, so-called black box trading. I will admit that I have never seen performance statistics on the various strategies (quantitative vs. “event driven” as in merger arbitrage vs. market neutral vs. global macro and a few other flavors). However, I don’t recall any approach being consistently effective over time, black-box AI included. I’m sure the media would be all over it if there were big successes in this field.
Back to Zitron. He portrays OpenAI as the mother of all bezzles, one that must do many impossible or near-impossible things in order to survive. Keep in mind how risk factors compound. If you have to do seven things to make a business successful, and the probability of success in each area is 90%, that’s a winner, right?
No. Pull out a calculator: 0.9 × 0.9 × 0.9 × 0.9 × 0.9 × 0.9 × 0.9 ≈ 0.478, as in less than a 50% chance of success.
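The arithmetic can be checked in a couple of lines. The seven hurdles and the 90% per-hurdle success rate are the hypothetical numbers from the example above, not a claim about OpenAI's actual odds:

```python
# Probability that a venture succeeds when it must clear seven
# independent hurdles, each with a 90% chance of success.
# Independence is the simplifying assumption in the example.
p_each = 0.9
hurdles = 7
p_overall = p_each ** hurdles  # multiply 0.9 by itself seven times
print(round(p_overall, 3))  # prints 0.478, i.e. under a coin flip
```

Note how quickly this deteriorates: at ten hurdles the combined probability falls to about 0.35, even though each step individually looks like a near sure thing.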
He also compares OpenAI to Uber, unfavorably. We would quibble with his positive portrayal of Uber as meeting consumer demand. That becomes questionable when you consider that Uber is inherently a high-cost provider with no barriers to entry. Its popularity has depended heavily on investors subsidizing the cost of rides. If you were getting below-cost service, what’s not to like?
One mistake we may have made in our analysis of Uber was not recognizing it as an investment play. Remember that in the 1800s in the US, railroad after railroad was launched, some with directly competing lines. Yet despite near-inevitable bankruptcies, new operators kept laying redundant track. Why? These were stock market games (some might say swindles), with plenty of takers despite a track record of failure.
Uber and the later unicorns were also helped by venture investors using valuation methods that had the effect of greatly inflating the apparent worth of the business, and thus made these investments look more attractive than they were.
From Zitron:
I think that for OpenAI to survive for more than two years, it will need to (in no particular order):
- Successfully navigate its complex and fraught relationship with Microsoft, which exists both as a lifeline and as a direct competitor.
- Raise more money than any startup has ever raised in history, and continue to do so at a pace unprecedented in the history of venture financing.
- Achieve a technological breakthrough so significant that it reduces the cost of building and operating GPT – or any successor model – by thousands of percent.
- Achieve a technological breakthrough so significant that GPT can take on entirely new use cases, ones that are not currently possible or even considered possible by AI researchers.
- Those use cases must be ones that can create new jobs and automate existing ones in a way that justifies the capital expenditure and infrastructure investment needed to continue.
Ultimately, I believe that OpenAI in its current form is untenable. There is no path to profitability, its burn rate is very high, generative AI as a technology requires so much energy that the power grid would need rebuilding to sustain it, and training these models is equally untenable, both because of ongoing legal issues (a result of theft) and the sheer amount of training data required to develop them.
And, quite simply, any technology that requires hundreds of billions of dollars to prove itself is built on a flawed premise. There is no historical precedent for anything OpenAI needs to have happen. No one has ever raised the amount of money it will need, and no piece of technology has ever required such financial and systemic firepower (such as rebuilding America’s power grid) to survive, let alone proven itself worthy of such investment.
To be clear, this piece focuses on OpenAI rather than Generative AI as a technology – although I believe the continued existence of OpenAI is necessary to keep companies interested/invested in the industry at all…
What I cannot say is that OpenAI will definitely fail, or that generative AI will definitely fail… my point here is to coldly explain why OpenAI, in its current form, cannot survive for more than a few years without a remarkable combination of technological breakthroughs and financial windfalls, some of which are possible, most of which have no historical precedent.
Zitron begins by looking at the opaque yet clearly fraught relationship between Microsoft and OpenAI, and how that might affect OpenAI’s viability. This section is a puzzle for the generalist reader but will reward those with experience in the technology industry and finance. Because this part is necessarily somewhat dense, we suggest you go to Zitron’s post to read it in full.
This discussion goes into the question of financing. The key point here (emphasis original):
Even assuming it exists in a vacuum, OpenAI needs at least $5 billion a year to survive. This would require it to raise more capital than any startup in history, perhaps ever – capital on a scale for which I cannot find a comparable company in the history of business.
Zitron goes through a short list of companies that have raised large amounts of capital in the past and shows why OpenAI is in a class of its own as a money pit, from both a burn-rate and a cumulative-funding perspective.
He then focuses on profitability, or the lack of it, combined with what in earlier days would have been called cost-structure problems:
As I’ve written repeatedly, generative AI is deeply unprofitable, and based on the available information, the cost of goods sold is not sustainable.
OpenAI’s costs have only increased over time, the cost of making these models “better” only increases, and it has yet to, as Jim Covello of Goldman Sachs put it, solve the kind of complex problems that would justify those costs…Since its launch in November 2022, ChatGPT has grown more complex, generationally faster, and able to handle more data, but has yet to produce a “killer app,” an iPhone-esque moment.
In addition, transformer-based models are becoming commoditized…As a result, we are already seeing a race to the bottom…
As a result, OpenAI’s revenue may be increasing, but it is likely increasing by cutting the price of its services rather than its operating costs…
As discussed earlier, OpenAI’s models – like all transformer-based models – need a lot of training data to get “better”…
Doing so may also lead to endless legal action…
And, to be clear, I’m not sure there is enough training data out there to get these models past the next generation. Even if generative AI companies were able to legally and freely download all the text and visual content on the Internet, it does not appear to be enough to train these models…
And then there’s a much bigger, more vexing problem – generative AI doesn’t have product-market fit at the scale needed to sustain its existence.
To be clear, I’m not saying that generative AI is completely useless, or has no product-market fit…
But what it is not, at this point, is essential.
Generative AI has yet to produce a reason you absolutely must adopt it, beyond the fear that your company will look “behind” if it doesn’t use AI. This wouldn’t be a problem if generative AI’s operating costs were a tiny fraction of what they are today, but as things stand, OpenAI is effectively bankrolling the entire generative AI movement while it waits for a genuinely essential use case to emerge, and ChatGPT is only changing the world as much as the markets will allow.
He has a lot more to say about this topic.
Oh, and that’s before getting to the subject of power, which he analyzes in depth.
He then goes back to presenting what OpenAI would need to do to overcome these obstacles, and why it seems unlikely.
Also, if OpenAI or AI in general is a topic of interest, be sure to read all of Zitron’s posts. And make sure you spread it widely.