Converting LLM Codes – Schneier in Security

Converts LLM codes

Really interesting research: “LLM-Assisted Easy-to-Trigger Backdoor Attacks on Code Completion Models: Hidden Vulnerability Injection vs. Hard Detection“:

Abstract: Large Language Models (LLMs) transform the code com-
pletion activities, which provide context-based suggestions to increase developer productivity in software engineering. Since users often fine-tune these models for specific applications, poisoning and backdoor attacks can subtly change the model’s results. To address this critical security challenge, we introduce CODEBREAKER, an LLM-assisted backend attack framework for code completion models. Unlike recent attacks that embed malicious payloads in accessible or non-essential sections of code (eg, comments), CODEBREAKER supports LLMs (eg, GPT-4) in modifying complex payloads (without affecting performance), ensuring that both toxic data is fined – tuning and generated code can avoid robust vulnerability detection. CODEBREAKER stands out for its comprehensive coverage of vulnerabilities, making it the first to offer such a comprehensive set for testing. Our extensive testing and user studies confirm CODEBREAKER’s robust attack performance across a variety of settings, confirming its superiority over existing methods. By integrating malicious payloads directly into the source code with minimal changes, CODEBREAKER challenges current security measures, underscoring the critical need for strong defenses to break the code.

A smart attack, and another illustration of why trusted AI is important.

Posted November 7, 2024 at 7:07 AM • 0 Comments

Bruce Schneier sidebar photo by Joe MacInnis.


Source link