CISOs face uncharted territory in preparing for AI security risks

In addition, under the White House’s 2023 AI safety and security order, NIST last week released three final guidance documents, along with a guidance document from the newly created US AI Safety Institute, all intended to help reduce AI risks. NIST has also released a test platform called Dioptra to assess the “trustworthy” aspects of AI, i.e., AI that is “valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed.”

CISOs must prepare for a rapidly changing environment

Despite the intelligence, expertise, and government resources devoted to creating AI risk models, practical advice for CISOs on how to best manage AI risk is currently lacking.

While CISOs and security teams have experienced the supply chain risks of traditional software and code, especially open source software, managing AI risks is a whole new ball game. “The difference is that AI and the use of AI models are new,” Alon Schindel, VP of data and threat research at Wiz, tells CSO.

