Late last May, OpenAI CEO Sam Altman testified before the Senate Judiciary Committee on the rising technology of generative AI. One ostensible motive for Altman’s visit was to assuage congressional anxieties born of a combination of ignorance and Matrix-Terminator-RoboCop dystopian mythology. The real motivation was a mix of protectionist concern about the technology’s disruptive effects on domestic labor markets, worry over the spread of misinformation, and anger that the technology was bypassing the administrative state’s tools of control.
While government treatment of technology companies is often adversarial, as demonstrated by Senator Josh Hawley’s (R-Mo.) combative questioning of Google CEO Sundar Pichai, the relationship between corporations and the state is often mutualistic—with consumers as the hosts. Given Altman’s appeals for regulation—including his recent call for an international AI regulatory agency—his private meetings with senators, and his dinners with members of the House, it should come as no surprise that he found a friendly audience among members of the subcommittee on privacy, technology, and the law.
Altman courted policymakers for protection while dressing up his appeals as concern for civil society. The former president of Y Combinator was not above hand-wringing like “if this technology goes wrong, it can go quite wrong.” If Altman’s concern were genuinely for the public, he would have said what he wanted plainly.
Although OpenAI was founded as a non-profit in 2015, it created a capped-profit entity in 2019. There is nothing ipso facto wrong with changing the profit model. OpenAI needs huge amounts of money to afford tens of thousands of H100 GPUs (~$40,000 per GPU), attract talent, and spend the tens of millions of dollars required to train the large language models behind ChatGPT. To cover these costs, OpenAI has had to attract shareholders, talent—including from startups like Inflection and Adept—and strategic investment from corporate rivals the way all firms do: with the promise of future returns.
The result? Generative AI that is easy to use and freely accessible to society.
Still, because OpenAI is (rightly) in the business of maximizing profit, it faces a perverse incentive to earn rents—and returns for its shareholders—through regulatory capture.
Rather than maintain high profit margins through costly, constant innovation and iterative development of ChatGPT, OpenAI can reduce the number of firms entering the market by government fiat.
The suggestion Altman promoted at the hearing? As The New York Times reports: “an agency that issues licenses for the development of large-scale AI models, safety regulations and tests that AI models must pass before being released to the public.”
Read: barriers, barriers, and more barriers to entry.
Capital requirements make market entry difficult; regulatory capture makes it outright impossible.
As Don Lavoie notes in National Economic Planning: What’s Left? (1985), central planning was “no more or less than government-sanctioned measures by the leaders of large industries to protect themselves against the risk and dynamism of market competition.”
Regulation is simply central planning’s successor with smaller ambitions: in Lavoie’s words, a way for elite corporations to “use the power of government to protect their profits from the threat of competitors.” For more on Don Lavoie and an in-depth analysis of his contributions to the knowledge problem, we refer the reader to Cory Massimino’s piece at EconLib.
In fact, OpenAI has pursued both strategies. It monetizes the fruits of its research and development by being first to market, and it recently entered a partnership with Apple to integrate ChatGPT into services such as Siri, leveraging that firm’s existing network of devices and applications. At the same time, OpenAI is seeking rents through regulatory capture. Although all of these strategies may reduce allocative efficiency, the first two are dynamically efficient while the last is not: two increase total value, and the third destroys it.
One would think the current Neo-Brandeisian FTC regime would raise the alarm about such an apparent bid to restrict market entry and facilitate collusion—staff at the Bureau of Competition & Office of Technology even issued a statement, “Generative AI Raises Competition Concerns.” Unsurprisingly, though unfortunately, the regulators raise not a single concern about anticompetitive behavior aided and abetted by government intervention.
Go figure!
As AI advances, Luddism grows, and Congress holds more hearings on regulation, we must keep a close eye out for ostensibly public-minded crackdowns and the regulatory capture sure to follow.
Samuel Crombie is the founder of actionbase.co and a former Product Manager at Microsoft AI.
Jack Nicastro is a Senior Producer with the Foundation for Economic Education and a research fellow at the Cato Institute.