Additionally, data poisoning can harm user applications in a number of other ways, including the spread of misinformation, biased output, degraded performance, and potential denial-of-service attacks.
Separating applications can help
Synopsys emphasized that, in the absence of a fix, the only available mitigation is to isolate the affected applications from the networks they could impact. The Synopsys Cybersecurity Research Center (CyRC) said in a blog post that it “recommends removing applications from networks immediately.”
“CyRC reached out to the developers but did not receive a response within the 90-day timeline specified by our responsible disclosure policy,” the blog added.
The vulnerability was discovered by Mohammed Alshehri, a security researcher at Synopsys. “There are products where they take existing AI implementations and put them together to create something new,” Alshehri told Dark Reading in an interview. “What we want to highlight here is that even after integration, companies should check to make sure that the same controls that we have for web applications are also applied to the APIs of their AI systems.”
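To illustrate the kind of controls Alshehri is describing, here is a minimal sketch, not taken from the article or the affected product, of applying ordinary web-application hygiene (authentication and input validation) in front of an internal LLM API. All function and variable names are illustrative assumptions.

```python
# Minimal illustrative sketch: apply the same controls used for web apps
# (API-key auth, input validation, size limits) to an LLM-facing API.
# Names and limits here are assumptions, not the affected product's API.
import hmac
import os

MAX_PROMPT_CHARS = 4000
API_KEY = os.environ.get("LLM_API_KEY", "")  # hypothetical deployment secret


def authorized(presented_key: str) -> bool:
    """Constant-time key comparison, as for any authenticated web endpoint."""
    return bool(API_KEY) and hmac.compare_digest(presented_key, API_KEY)


def validate_prompt(prompt: str) -> str:
    """Reject empty or oversized input before it ever reaches the model."""
    if not prompt or not prompt.strip():
        raise ValueError("empty prompt")
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt exceeds size limit")
    return prompt.strip()


def handle_request(presented_key: str, prompt: str) -> str:
    """Gate every request with auth and validation before forwarding it."""
    if not authorized(presented_key):
        raise PermissionError("invalid API key")
    clean_prompt = validate_prompt(prompt)
    # Forwarding clean_prompt to the model backend is omitted here.
    return clean_prompt
```

The point of the sketch is that an integrated AI component is still an API endpoint, and the checks routinely applied to web applications carry over unchanged.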
The study highlights that the rapid integration of AI into business operations carries risks, especially for companies that allow LLMs and other generative AI (GenAI) applications to access data warehouses. Although this is an emerging area, security vendors such as Dig Security, Securiti, Protect AI, and eSentire are already offering tools to protect organizations from evolving GenAI threats.