The word “free” has always been tempting for employees looking for an app or template to make their work easier. These days, combine “free” with “AI” and the allure is almost irresistible.
Since the release of ChatGPT in late 2022, free AI-themed apps have exploded. Unfortunately, some are created by criminals. One of the latest examples, reported this week by Malwarebytes, claims to be an AI video editor, but actually installs the Lumma Stealer malware.
Victims were lured by promises like “Create amazing videos in minutes,” “No special skills needed – anyone can do it,” and “On September 1st we’re giving our AI editor 50 live licenses!”
According to a report released last month by Slack, the use of AI in business is on the rise. Of those employees who use AI systems, 81% said it improved their productivity. That’s why some may be curious – or eager – to try a free AI app.
However, that same report notes that nearly 40% of respondents said their company has no guidelines for using AI. One result: Shadow AI, defined as the unauthorized use of applications based on artificial intelligence.
CISOs need a strategy to deal with it. It starts with managers deciding whether they want to allow the use of AI in the workplace at all.
There are no magic tricks
To stop employees from falling for fake AI apps, there are no magic tricks – just the general awareness training that prevents the installation of any unwanted app. Tell employees: “There is a company rule: Do not download unauthorized apps” (or, conversely, “Download only authorized apps”).
If there is no list of approved applications, there should be a rule that IT must approve anything added to an employee’s computer beyond what the company has already installed.
If it hasn’t already done so, IT also needs to configure whatever operating system the organization uses so that only those with administrator accounts — and there should be very few employees with that access — can install applications.
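A periodic audit can flag machines where too many accounts hold those rights. The sketch below is illustrative only, assuming Windows endpoints and the built-in “net localgroup” command; a real deployment would run this through the organization’s endpoint management tooling instead.

```python
# Hypothetical sketch: list members of the local Administrators group on a
# Windows endpoint so IT can verify how many users hold install rights.
# "net localgroup" is a standard Windows command; the parsing below is
# illustrative and assumes the default English-language output format.
import subprocess

def local_admins() -> list[str]:
    out = subprocess.run(
        ["net", "localgroup", "Administrators"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    # Member names appear between the dashed separator line and the
    # "The command completed successfully." footer.
    try:
        start = next(i for i, line in enumerate(out) if line.startswith("---")) + 1
    except StopIteration:
        return []
    return [line.strip() for line in out[start:]
            if line.strip() and not line.startswith("The command completed")]

if __name__ == "__main__":
    for account in local_admins():
        print(account)
```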
“AI has sparked a broad interest in all audiences, from cybercriminals looking to perfect their scams to everyday consumers interested in learning more about and boosting their productivity with new AI-powered tools,” Pieter Arntz, Malwarebytes intelligence researcher, told CSO in an email. “This hype has sparked a slew of AI-related scams, and I don’t see it stopping anytime soon.
“Many cybercriminals are focused on making money, and will use any new cultural moment to trick users. I’ve seen scams ranging from free trials tied to very expensive subscriptions to outright malware downloads. I warn people to be wary of new, free tools and to use a browser extension that blocks malware and phishing.”
According to Malpedia, Lumma Stealer (also known as LummaC2 Stealer) is an information stealer, offered under a malware-as-a-service model, that has circulated in Russian-speaking criminal forums since at least August 2022. It primarily targets cryptocurrency wallets and two-factor authentication browser extensions before stealing sensitive information from the victim’s machine. Once the targeted data is obtained, Malpedia notes, it is exfiltrated to the C2 (command and control) server via HTTP POST requests using the user agent “TeslaBrowser/5.5”. The stealer also features a non-resident loader that can deliver additional EXE, DLL, and PowerShell payloads.
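Because that user agent string is documented, defenders can sweep proxy or web server logs for it. Here is a minimal sketch, assuming a line-oriented access log; the log path is a placeholder to adapt to your environment:

```python
# Minimal sketch: sweep a web proxy log for the "TeslaBrowser/5.5" user agent
# that Malpedia associates with Lumma Stealer's HTTP POST exfiltration.
# The log path and line format are assumptions; adapt them to your environment.
from pathlib import Path

IOC_USER_AGENT = "TeslaBrowser/5.5"
LOG_FILE = Path("/var/log/squid/access.log")  # hypothetical location

def suspicious_lines(log_file: Path) -> list[str]:
    hits = []
    with log_file.open(encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if IOC_USER_AGENT in line:
                hits.append(line.rstrip())
    return hits

if __name__ == "__main__":
    for hit in suspicious_lines(LOG_FILE):
        print(hit)
```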
Lumma is often distributed through email campaigns, the Malwarebytes report says, but there’s nothing stopping threat actors from using other lures, such as the download of a fake AI editor, as they did in this case.
To stop infections like this, CISOs must implement Cybersecurity 101. That not only includes security awareness training, it also means making phishing-resistant multifactor authentication mandatory for all employees, and monitoring IT networks for suspicious behavior.
Infosec professionals looking for signs of infection from this campaign should hunt for a file called “Edit-ProAI-Setup-newest_release.exe” on Windows, and “EditProAi_v.4.36.dmg” on macOS.
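A simple filesystem sweep can surface those filenames. The following is a minimal sketch, assuming direct access to each machine; the search root is a placeholder, and in practice an EDR query would do this at scale:

```python
# Minimal sketch: recursively hunt a filesystem for the installer names
# Malwarebytes tied to this campaign. The search root is an assumption;
# in practice you would push this query through your EDR platform instead.
import os

IOC_FILENAMES = {
    "Edit-ProAI-Setup-newest_release.exe",  # Windows lure
    "EditProAi_v.4.36.dmg",                 # macOS lure
}

def hunt(root: str) -> list[str]:
    matches = []
    # onerror callback silently skips directories we cannot read
    for dirpath, _dirnames, filenames in os.walk(root, onerror=lambda e: None):
        for name in filenames:
            if name in IOC_FILENAMES:
                matches.append(os.path.join(dirpath, name))
    return matches

if __name__ == "__main__":
    for path in hunt(os.path.expanduser("~")):
        print("Possible Lumma dropper:", path)
```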