Similarly, an Iranian operation known as the “International Union of Virtual Media” (IUVM) used AI tools to generate long-form articles published on the vumpress.co website.
Additionally, a commercial company in Israel called “Zero Zeno” used AI tools to create articles and comments that were then posted across multiple platforms, including Instagram, Facebook, X, and its own websites.
“The content posted by these various operations focuses on a wide range of issues, including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticism of the Chinese government by Chinese dissidents and foreign governments,” the report said.
The OpenAI report, the first of its kind by the company, highlights several trends among these activities. Bad actors rely on AI tools like ChatGPT to generate high volumes of content with fewer grammatical errors, create the illusion of social media engagement, and improve productivity by summarizing posts and debugging code. However, the report added that none of the operations was able to “engage authentic audiences in a meaningful way.”
Facebook recently published a similar report echoing OpenAI’s findings about the growing misuse of AI tools to conduct “influence operations” that advance malicious agendas. The company calls these campaigns CIB, or coordinated inauthentic behavior, and describes them as “coordinated efforts to manipulate public debate for a strategic goal, where fake accounts are central to the operation. In each case, people coordinate with one another and use fake accounts to mislead others about who they are and what they are doing.”