Continuous red teaming is your only defense against AI threats

We have seen respected independent bodies such as NIST launch its AI Risk Management Framework and CISA its AI guidance. Various governments have also established new guidelines, such as the EU's AI Ethics Guidelines. The Five Eyes alliance (FVEY), which includes Australia, Canada, New Zealand, the United Kingdom, and the United States, has likewise benchmarked and developed secure AI guidelines: broad recommendations for organizations that address the shared concerns these nations have about this new AI threat.

How businesses can deal with it

To make matters worse, a shortage of cyber talent and an overcrowded roadmap don't help. This new world requires skills that are not available in most IT shops. Consider how many IT employees understand AI models: the answer is not many. Now extend the question to how many understand both cybersecurity and AI models, and the answer is worse.

Until businesses close that skills gap, current best practice is to establish a generative AI standard that includes guidance on how AI may be used and what risks must be considered. Within large enterprises, the focus has been on dividing generative AI use cases into low-risk and medium/high-risk categories. Low-risk cases can proceed quickly; medium- and high-risk models require a strong business case to ensure that new risks are understood and factored into the decision process.
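To make that triage concrete, here is a minimal sketch of how such a tiering policy could be expressed in code. The risk criteria, thresholds, and class names below are illustrative assumptions for this example, not taken from any specific framework or from the article itself.

```python
# Illustrative sketch of a generative AI use-case risk triage.
# The criteria and the two-tier split (low vs. medium/high) are
# assumptions chosen to mirror the tiering described above.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    handles_personal_data: bool = False  # touches customer or employee PII
    customer_facing: bool = False        # output reaches external users
    autonomous_actions: bool = False     # acts without human review

def risk_tier(uc: AIUseCase) -> str:
    """Classify a use case as 'low' or 'medium/high' risk."""
    flags = sum([uc.handles_personal_data,
                 uc.customer_facing,
                 uc.autonomous_actions])
    return "low" if flags == 0 else "medium/high"

cases = [
    AIUseCase("internal meeting summarizer"),
    AIUseCase("support chatbot",
              handles_personal_data=True, customer_facing=True),
]

for uc in cases:
    tier = risk_tier(uc)
    action = ("fast-track" if tier == "low"
              else "requires business case and risk review")
    print(f"{uc.name}: {tier} -> {action}")
```

In practice the rule set would be richer, but even a simple checklist like this forces each use case through the same gate: anything that trips a risk flag must come with a business case before it proceeds.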

