
OpenAI Shuts Down Operation Using ChatGPT for U.S. Election Misinformation

Image Credits: Nikos Pekiaridis/NurPhoto / Getty Images

OpenAI has banned a cluster of ChatGPT accounts linked to an Iranian influence operation that was generating content about the U.S. presidential election. In a blog post on Friday, the company detailed how the operation used AI to create articles and social media posts, though the content does not appear to have gained significant traction.

This crackdown is not an isolated incident. OpenAI has previously banned accounts tied to state-affiliated actors misusing ChatGPT. In May, the company disrupted five campaigns that were using AI-generated content to manipulate public opinion.

These efforts echo tactics that state actors have used in past election cycles on platforms like Facebook and Twitter. Now similar groups, or perhaps the same ones, are turning to generative AI to flood social media channels with misinformation. Like the social media companies before it, OpenAI is playing whack-a-mole, banning accounts as they emerge.

OpenAI’s investigation into this latest cluster was supported by a Microsoft Threat Intelligence report published last week, which identified the group as Storm-2035. This Iranian network has been active since 2020, with multiple sites mimicking legitimate news outlets. Their strategy involves engaging U.S. voter groups on opposite ends of the political spectrum, using polarizing messaging on issues such as presidential candidates, LGBTQ rights, and the Israel-Hamas conflict. The goal, as seen in other influence operations, is not necessarily to promote a specific policy but to create division and conflict.

OpenAI identified five websites associated with Storm-2035, which posed as both progressive and conservative news outlets with convincing domain names like “evenpolitics.com.” The group used ChatGPT to draft several long-form articles, including one claiming that “X censors Trump’s tweets.” That claim is not only false, it contradicts Elon Musk’s encouragement of Trump to engage more on X (formerly Twitter).

On social media, OpenAI discovered a dozen X accounts and one Instagram account controlled by this operation. ChatGPT was used to rewrite various political comments, which were then posted on these platforms. One tweet, for example, falsely claimed that Kamala Harris attributed “increased immigration costs” to climate change, followed by the hashtag “#DumpKamala.”

Despite these efforts, OpenAI reports that Storm-2035’s articles did not achieve widespread circulation and most of its social media posts received little to no engagement. That is typical of such operations, whose content can be produced quickly and cheaply with AI tools like ChatGPT. But with the U.S. election approaching, this is likely just the first of many attempts to use AI to influence voters and stir up partisan conflict online.
