Essentially, they say, “The AI red team strives to go beyond model-level security measures by simulating real-world attacks against end-to-end systems. However, there are many open questions about how red team operations should be carried out and a healthy amount of skepticism about the effectiveness of current red team AI efforts. “
This paper noted that, when it was founded in 2018, the Microsoft AI Red Team (AIRT) was mainly focused on identifying traditional security vulnerabilities and attacks to avoid classical ML models. “Since then,” it said, “both the scope and scale of the AI red team at Microsoft have grown significantly in response to two major trends.”
The first, we said, is that AI has become more sophisticated, and the second is that Microsoft’s recent investment in AI has led to the development of many products that require red-collision. “This increase in volume and the expanded scope of the red AI team made manual testing completely impossible, forcing us to increase our operations with the help of automation,” the authors wrote.
Source link