CISOs to face a slew of emerging regulations after Newsom vetoes California’s AI bill

  • to create or use certain weapons of mass destruction to cause mass casualties,
  • to cause mass casualties or at least $500 million in damages through a cyber attack on critical infrastructure,
  • to operate with limited human oversight and cause death, bodily injury, or property damage in a manner that would be criminal if committed by a person, and
  • other comparable risks.

It also required developers to build in a kill switch, or “full shutdown” capability, in case a model disrupts critical infrastructure. The bill further stipulated that covered models operate under cybersecurity and safety policies subject to strict testing, reporting, and auditing obligations.

Some AI experts say these and other provisions of the bill were excessive. David Brauchler, head of AI and machine learning for North America at NCC Group, tells CSO that the bill “speaks to the danger presented by a culture of fear, where people are scared that these models are going to go off the rails and begin acting out in ways that they weren’t designed to behave. In the space where we’re hands-on with these systems, we haven’t seen that that risk is anywhere close to imminent or an emergent systemic risk.”

Liability burdens were too heavy even for the big players

In addition, the bill’s liability burdens may have been too heavy for even the biggest players to bear. “The definition of critical harm is so broad that developers will have to make assurances and guarantees covering a vast surface area of potential vulnerabilities, guarantees that are very difficult to make if you release that model publicly and openly,” Benjamin Brooks, fellow at the Berkman Klein Center for Internet & Society at Harvard University and former head of public policy for Stability AI, tells CSO.

