OpenAI has revealed that Chinese groups are increasingly using its AI tools—like ChatGPT—for covert operations online, including spreading political propaganda and supporting cyberattacks. In a new report released Thursday, the company said these efforts are still relatively small in scale but are getting more advanced and harder to detect.
According to the report, OpenAI shut down several accounts linked to Chinese influence campaigns. These accounts used ChatGPT to churn out fake, politically charged content for social media, ranging from attacks on Taiwan-related video games to false accusations against activists and criticism of U.S. policies such as foreign aid and Trump-era tariffs.
Some of the content aimed to inflame both sides of hot-button U.S. political debates, using AI-generated posts and fake profile pictures to stir division and confusion.
OpenAI also found signs of Chinese actors using its AI tools to support cyberattacks—conducting open-source research, modifying hacking scripts, troubleshooting system configurations, and building tools for brute-force password cracking and social media automation.
China’s foreign ministry brushed off the claims, saying there’s “no basis” for them and insisting that China opposes the misuse of AI.
This report adds to the growing list of concerns about how powerful AI tools like ChatGPT can be exploited, not just for spreading misinformation but for fueling cyber warfare. OpenAI says it will keep cracking down on malicious use as its tools become more widespread and more influential.