LLMs are fueling the “genAI crime revolution” according to a Netcraft report

However, there may be telltale references in the email or on the site. Netcraft said that malicious actors sometimes mistakenly include large language model (LLM) output in fraudulent emails. For example, one phishing email it encountered, which claimed to contain a link to a file transfer of family photos, also included the sentence, “Sure! Here are 50 more family photo phrases.”

“We can assume that threat actors, using ChatGPT to generate the body text of an email, accidentally inserted the introduction line into their randomizer,” Netcraft said. “This case suggests a combination of genAI and traditional techniques.”

Telltale evidence still reveals which phishing emails are fake

Another phishing email Netcraft examined would be believable — if it weren’t for the LLM introduction line at the beginning: “Here’s your message translated into professional English.” And a fake investment website touting the benefits of a fictitious company looks convincing, except for the headline, “Sure! Here are six key strengths of Cleveland Invest Company.”