From the Institute for Progress. There are four of us: Dylan Matthews, Matt Clancy, Jacob Trefethen, and me. There is a transcript, and here is one short excerpt:
Tyler Cowen: I see the long-term risks of economic growth as centered on war. There are many books about the downsides of the Industrial Revolution — people were displaced, other parts of the country fared worse. I think those costs are overstated.

But the more productive power you have, the more destructive power you almost always have as well. The next time there is a major war — which may be many decades from now — more people will be killed, there will be terrible accidents, and political chaos will increase. That is the other side of the ledger. Now, you always hope that the next time we go through this, we will do a better job. We all hope so, but I don't know.
Also:
Tyler Cowen: But the puzzle is why we don't have more terrorist attacks than we do, right? You would think people would put basic poisons into the reservoir or show up at suburban shopping malls with guns, but in reality it doesn't happen very often. I'm not sure what the binding constraint is, but since I don't think it's science, that's one thing that makes me more optimistic than many other people in this area.
Dylan Matthews: I'm curious what people's theories are, because I often think of things that seem like they would have more potential for a terrorist attack. I don't Google them, because after Edward Snowden, that doesn't seem safe.
I live in DC, and I keep seeing large gatherings of very powerful people. I wonder, "Why does everyone feel safe? Why, given the state of things, are we not seeing more of this?" Tyler, you said you don't know what the binding constraint is. Jacob, do you have a theory about what it is?
Jacob Trefethen: I don’t think I have a theory that explains the premise.
Tyler Cowen: Management would be mine. For example, it would be ironic if the main worry about GPT models turned out to be that they help terrorists become better managers — not by supplying dangerous science, but simply by giving them the kind of basic management advice you would find in a best-selling management book. That's my best guess.
I would note that this was recorded some time ago, and on some of the AI safety issues I would put things differently now. Perhaps I have changed my mind on some of it, but more than anything I would frame the points in a rather different context.