Drawing on the political science and international relations literature on escalation dynamics, we design a novel wargame simulation and scoring framework to assess the escalation risks of actions taken by these agents in different scenarios. Contrary to prior studies, our research provides both qualitative and quantitative insights and focuses on large language models (LLMs). We find that all five studied off-the-shelf LLMs show forms of escalation and difficult-to-predict escalation patterns. We observe that models tend to develop arms-race dynamics, leading to greater conflict, and in rare cases even to the deployment of nuclear weapons. Qualitatively, we also collect the models' reported reasonings for chosen actions and observe worrying justifications based on deterrence and first-strike tactics.
That’s from a new paper by Juan-Pablo Rivera, et al., via the excellent Ethan Mollick. Note that these recommended strategies are for the US, so maybe the LLMs are just telling us that America should be more hawkish.
The post Thomas Schelling meets LLMs? appeared first on Marginal REVOLUTION.