The best-selling product on the darknet: Hacked GenAI accounts

Cybercriminals looking to abuse the power of artificial intelligence to build sophisticated phishing and malware campaigns can now buy easy access to it on underground markets, where dozens of malicious actors offer stolen GenAI credentials for sale every day.

Hackers sell the usernames and passwords of about 400 GenAI accounts per day, according to eSentire research.

“Cybercriminals advertise the information on popular Russian underground markets, which specialize in everything from malware to infostealers to crypters,” eSentire researchers said in a report. “Most GenAI data is stolen from end users’ computers when they are infected with an infostealer.”

The theft record — all the information an infostealer harvests from a victim’s device, including GenAI credentials — currently sells for about $10 apiece on underground markets.

LLM Paradise was one of the most popular

One of the prominent underground marketplaces found to facilitate the exchange of GenAI data was LLM Paradise, the researchers said.

“The threat actor who ran this market had marketing skills, naming his store LLM Paradise and hawking stolen GPT-4 and Claude API keys with ads that read: ‘The only place to get GPT-4 APIKEYS at unbeatable prices,’” the researchers said.

The threat actor advertised GPT-4 and Claude API keys starting at just $15 each, while legitimate prices for various OpenAI models run between $5 and $30 per million tokens used, the researchers added.
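To put those figures in perspective, the short sketch below estimates how much legitimate usage a $15 stolen key would displace at the per-token rates the article cites. The function name and the exact rates used are illustrative, taken directly from the numbers above.

```python
# Illustrative cost comparison using the figures cited in the article:
# a stolen key sells for a flat $15, while legitimate OpenAI pricing
# runs roughly $5-$30 per million tokens.

def tokens_for_budget(budget_usd: float, price_per_million_usd: float) -> int:
    """Return how many tokens budget_usd buys at the given per-million-token rate."""
    return int(budget_usd / price_per_million_usd * 1_000_000)

if __name__ == "__main__":
    for rate in (5, 30):
        # $15 buys 3,000,000 tokens at $5/1M, or 500,000 tokens at $30/1M
        print(f"$15 at ${rate}/1M tokens -> {tokens_for_budget(15, rate):,} tokens")
```

In other words, a single $15 stolen key can be worth hundreds of thousands to millions of tokens of legitimate usage, billed to the victim.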

LLM Paradise did not last long, however, shutting down recently for unknown reasons. Malicious actors are working around the closure: TikTok ads for stolen GPT-4 API keys, published before the market closed, are still live and in use.

Besides GPT-4 and Claude API keys, marketplaces like LLM Paradise also sold keys for Quillbot, Notion, Hugging Face, and Replit.

The information can be used for phishing, malware and data theft

eSentire researchers said that stolen credentials are highly valuable to cybercriminals, offering multiple avenues for profit. “Threat actors use popular AI platforms to create phishing campaigns, develop sophisticated malware, and generate chatbots for their private platforms,” they said.

Additionally, the credentials can be used to break into corporate GenAI accounts, exposing customers’ personal and financial information, intellectual property, and personally identifiable information.

The stolen data could also grant access to data restricted to corporate customers, affecting the GenAI platform providers themselves. OpenAI was found to be the hardest hit, with over 200 OpenAI credential listings posted for sale per day.

Regularly monitoring employee GenAI usage, pushing GenAI providers to implement WebAuthn with MFA options, applying passkey and password best practices for GenAI authentication, and using dark web monitoring tools to identify stolen credentials are just a few steps corporate users can take to protect themselves against GenAI-related attacks.
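The last recommendation above — checking credentials against breach data — can be done without ever sending a full password or its full hash to a third party. The sketch below uses the k-anonymity scheme popularized by Have I Been Pwned’s Pwned Passwords API: only the first five hex characters of the SHA-1 hash leave the client, and matching suffixes are compared locally. The in-memory `SAMPLE_BREACH_DB` stands in for a real dark-web monitoring feed and is entirely hypothetical.

```python
import hashlib

# Hypothetical stand-in for a breach-data feed: SHA-1 hashes of
# passwords known to circulate in stealer logs (example values only).
SAMPLE_BREACH_DB = {
    hashlib.sha1(b"password123").hexdigest().upper(),
    hashlib.sha1(b"letmein").hexdigest().upper(),
}

def suffixes_for_prefix(prefix: str) -> set[str]:
    """Simulated service lookup: return hash suffixes whose hash starts with
    the given 5-character prefix. A real service would answer this over HTTPS."""
    return {h[5:] for h in SAMPLE_BREACH_DB if h.startswith(prefix)}

def is_exposed(password: str) -> bool:
    """Check a password against the breach set without revealing its full hash:
    only the 5-char prefix is 'sent'; the suffix comparison happens locally."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    return suffix in suffixes_for_prefix(prefix)
```

A compromised credential (`is_exposed("password123")` returns `True`) can then trigger a forced reset, while unexposed ones return `False` and pass silently. This is a minimal illustration of the monitoring idea, not a description of any specific vendor’s tool.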
