However, not all claims of AI use may be accurate, noted Jeremy Kirk, an analyst at Intel 471. “We use the term ‘targeted’ to represent that it is a claim made by a threat actor and that it is often unclear to what extent AI is embedded in a product, what LLM model is used, etc.,” he said via email. “As far as cybercrime tool developers jumping into the fray for commercial gain, there seems to be a real effort to see how AI can help in cybercrime work. Underground markets are competitive, and there is often more than one seller of a particular service or product. It is to their commercial advantage that their product works better than another, and AI can help.”
Intel 471 has seen a number of questionable claims, including one involving research by four University of Illinois Urbana-Champaign (UIUC) computer scientists who claimed to have used OpenAI’s GPT-4 LLM to autonomously exploit vulnerabilities in real-world systems by feeding the model common vulnerabilities and exposures (CVE) advisories describing the flaws. However, the firm noted, “Because many important aspects of the research have not been published – such as the agent’s code, prompts or model output – it cannot be accurately reproduced by other researchers, and it invites doubt.”
Some threat actors have offered tools that capture and digest CVE data, as well as a tool that incorporates what Intel 471 calls a known AI model into a multi-purpose hacking tool that allegedly does everything from scanning networks and finding vulnerabilities in content management systems to writing malicious scripts.
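The article does not describe how these tools work internally, but as a rough, benign illustration of what “capture and digest CVE data” can involve, the sketch below pulls a single advisory from NIST’s public National Vulnerability Database (NVD) API and reduces it to a compact summary. The endpoint, parameters, and JSON fields reflect NVD’s documented CVE API 2.0 and are assumptions for this example only; they are not drawn from Intel 471’s reporting or from any of the tools the article mentions.

```python
# Minimal sketch: fetch one CVE advisory from the public NVD API and keep only
# the fields a downstream tool might consume (ID, English description, CVSS score).
# Endpoint and JSON layout are assumptions based on NVD's CVE API 2.0 documentation.
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_cve(cve_id: str) -> dict:
    """Fetch a single CVE record from the NVD and return a compact summary."""
    resp = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    items = resp.json().get("vulnerabilities", [])
    if not items:
        return {}
    cve = items[0]["cve"]
    # Keep the English description and any CVSS v3.1 base score, if present.
    description = next(
        (d["value"] for d in cve.get("descriptions", []) if d.get("lang") == "en"), ""
    )
    metrics = cve.get("metrics", {}).get("cvssMetricV31", [])
    score = metrics[0]["cvssData"]["baseScore"] if metrics else None
    return {"id": cve.get("id"), "description": description, "cvss_v31": score}

if __name__ == "__main__":
    # Log4Shell is used here purely as a well-known example identifier.
    print(fetch_cve("CVE-2021-44228"))
```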