In November, researchers from JFrog announced the results of their effort to analyze the machine learning tool ecosystem, which led to the discovery of 22 vulnerabilities across 15 different ML projects, in both server-side and client-side components. In early October, Protect AI reported 34 vulnerabilities in open source AI/ML projects, disclosed through its bug bounty program.
Research efforts like these highlight that, as relatively new projects, many AI/ML frameworks may not yet be mature from a security perspective, and have not received the same level of scrutiny from the security research community as other types of software. Although this is changing as researchers test these tools more and more, attackers are examining them as well, and there appear to be plenty of bugs left to find.
7. Security feature bypasses make the attack more powerful
While organizations should always prioritize remote code execution flaws in their patching efforts, it’s worth remembering that in practice attackers can also use less severe but still useful flaws in their attack chains, such as privilege escalations or security feature bypasses.