OpenAI has taken action against users from China and North Korea who were allegedly exploiting its AI technology for malicious activities, including misinformation campaigns and online fraud.
In a recent report, OpenAI highlighted how authoritarian regimes might attempt to leverage AI-powered tools like ChatGPT for surveillance, opinion manipulation, and cyber operations targeting both their own citizens and foreign nations.
The company did not disclose how many accounts it banned or the exact timeline of its enforcement actions, but it confirmed that AI-driven methods were used to detect and disrupt these operations.
Among the documented cases, OpenAI detailed several instances of misuse. In one misinformation campaign, users instructed ChatGPT to generate anti-US news articles in Spanish, which were later published in mainstream Latin American news outlets under the byline of a purported Chinese company.

In another case, AI was used to create fraudulent CVs and online profiles to help alleged North Korean operatives secure jobs at Western firms under false pretences.
Additionally, a Cambodia-based fraud network used ChatGPT to translate and generate content for scams run across social media and communication platforms such as X (formerly Twitter) and Facebook.
The U.S. government has repeatedly raised concerns over China’s potential misuse of AI to suppress dissent, spread propaganda, and threaten national security.
With over 400 million weekly active users, OpenAI's ChatGPT remains the world's most popular AI chatbot, making it an attractive target for both state-backed and independent bad actors.
Meanwhile, OpenAI is in talks to raise $40 billion, with a potential valuation of $300 billion—a move that could set a new funding record for a private company.
As AI technology advances, so do concerns over its exploitation for misinformation, fraud, and cyber threats, prompting tighter security measures from leading AI firms.