OpenAI Disrupts Influence Operations Linked to China, Russia, and Others


Actors behind these operations used OpenAI tools to generate comments, produce articles, or create fake names or bios for social media accounts.

OpenAI announced that it has disrupted five influence operations from four countries using its artificial intelligence (AI) tools to manipulate public opinion and shape political outcomes across the internet.

The company said on May 30 that these covert influence operations were from Russia, China, Iran, and Israel. Actors behind these operations used OpenAI tools to generate comments, produce articles, or create fake names or bios for social media accounts over the last three months.

The report found that the content pushed by these operations targets multiple ongoing issues, including criticisms of the Chinese regime from Chinese dissidents and foreign governments, U.S. and European politics, Russia’s invasion of Ukraine, and the conflict in Gaza.

However, these operations do not appear to have meaningfully increased their audience engagement or reach as a result of the company's services, OpenAI said in a statement.

The company identified common trends in how these actors used its AI tools: generating content, mixing AI-generated material with older or manually created content, faking engagement by writing replies to their own social media posts, and boosting productivity through tasks such as summarizing social media posts.

Pro-Beijing Network

OpenAI said it disrupted an operation run by Spamouflage, a pro-Beijing disinformation and propaganda network in China. The Chinese operation used the company's AI models to seek advice about social media activities, research news and current events, and generate content in Chinese, English, Japanese, and Korean.

Much of the content generated by the Spamouflage network praises the Chinese communist regime, criticizes the U.S. government, and targets Chinese dissidents.

Such content was posted on multiple social platforms, including X, Medium, and Blogspot. OpenAI found that in 2023, the Chinese operation generated articles claiming that Japan had polluted the environment by releasing wastewater from the Fukushima nuclear power plant. Actor and Tibet activist Richard Gere and Chinese dissident Cai Xia were also targets of this network.

The network also used OpenAI's models to debug code and generate content for a Chinese-language website that attacks Chinese dissidents, calling them "traitors."

By Aaron Pan

Read Full Article on TheEpochTimes.com
