How organizations can secure their AI-generated code

While Reworkd has been open about its mistake, many similar incidents never become public; CISOs often hear about them only behind closed doors. Financial institutions, healthcare systems, and e-commerce platforms have all run into trouble where AI code-completion tools introduced vulnerabilities, disrupted operations, or corrupted data. Many of the risks associated with AI-generated code stem from hallucinated library names, guessed by the model rather than verified, and from untracked, unvetted third-party dependencies.
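One low-cost defense against hallucinated dependency names is to confirm that each suggested package actually exists in the public registry before anyone installs it. The sketch below, which assumes a hypothetical list of AI-suggested Python packages, queries PyPI's public JSON API and flags names the registry does not recognize:

```python
# Minimal sketch: flag dependency names that do not exist on PyPI,
# a common symptom of AI-hallucinated package names.
# The "suggested" list below is hypothetical; substitute your own.
import urllib.request
import urllib.error

def exists_on_pypi(package: str) -> bool:
    """Return True if the package is published on PyPI."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:  # unknown name, possibly hallucinated
            return False
        raise  # other HTTP errors are not evidence either way

suggested = ["requests", "flask-gpt-utils"]  # hypothetical AI-suggested deps
for name in suggested:
    status = "ok" if exists_on_pypi(name) else "NOT FOUND, verify before installing"
    print(f"{name}: {status}")
```

Existence alone is not proof of safety: attackers have been known to register commonly hallucinated names, so an unfamiliar package that does exist still deserves review.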

“We are facing a perfect storm: increasing reliance on AI-generated code, rapid growth in open source libraries, and the inherent complexity of these systems,” said Jens Wessling, chief technology officer at Veracode. “It is only natural that security risks will increase.”

Code-completion tools such as ChatGPT, GitHub Copilot, and Amazon CodeWhisperer are often used without formal approval or oversight. A Snyk survey found that nearly 80% of developers bypass security policies when adopting AI-generated code. This trend creates blind spots for organizations, which are then left to untangle the security and legal issues that follow.
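One way to shrink that blind spot is to make unreviewed dependencies fail the build. The sketch below assumes a hypothetical team-maintained allowlist file alongside a standard requirements.txt; both filenames are placeholders, and a check like this in CI surfaces packages added outside the review process:

```python
# Minimal sketch: fail CI if requirements.txt names a package that is
# not on the team's approved list. Filenames are hypothetical.
from pathlib import Path

def parse_names(path: str) -> set[str]:
    """Extract bare package names, ignoring comments, blanks, and version pins."""
    names = set()
    for line in Path(path).read_text().splitlines():
        line = line.split("#")[0].strip()  # drop inline comments
        if not line:
            continue
        # strip version specifiers such as ==, >=, <=, ~=
        for sep in ("==", ">=", "<=", "~=", ">", "<"):
            line = line.split(sep)[0]
        names.add(line.strip().lower())
    return names

required = parse_names("requirements.txt")
approved = parse_names("approved-packages.txt")
unapproved = required - approved
if unapproved:
    raise SystemExit(f"Unvetted dependencies found: {sorted(unapproved)}")
print("All dependencies are on the approved list.")
```

The point of the design is that the allowlist change itself goes through code review, so every new dependency, human- or AI-suggested, gets at least one deliberate human decision.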

