When artificial intelligence meets geopolitical deception, the stakes for democracy rise.

6/5/2025 | Technology | US

The revelation that state-backed actors are using generative AI tools like ChatGPT to craft propaganda, manufacture engagement, and even write performance reviews for their own operatives feels like a dystopian twist in the ongoing saga of digital disinformation. OpenAI's recent disclosure isn't just about bad actors exploiting technology's flaws; it's a stark reminder that the battleground for truth itself is increasingly automated. The emotional trigger here is visceral: the erosion of authenticity in our most basic digital interactions. Every comment, every post, every piece of content we encounter online now carries an invisible question: Was this written by a human or a machine directed by a geopolitical agenda?

Hidden within OpenAI's report lies an uncomfortable contradiction. The same Silicon Valley ecosystem that champions AI as a democratizing force must now reckon with its weaponization. China's use of ChatGPT to simultaneously criticize and praise the dismantling of USAID programs exposes a hypocrisy in how authoritarian regimes employ ostensibly Western innovations. They leverage openness to undermine openness itself, using AI-generated content to flood discourse with strategically contradictory narratives: a digital equivalent of Orwellian doublethink. This isn't merely spam flooding timelines—it's computational gaslighting at scale.

The human impact extends far beyond intelligence analysts tracking bot networks. Consider mid-career journalists competing with AI-generated 'news' accounts tied to intelligence operations, or small business owners in Taiwan seeing their products review-bombed by synthetic outrage. Historians documenting this era will face a forensic nightmare distinguishing genuine cultural artifacts from algorithmically generated ones. Even ordinary social media users—parents sharing school updates, gamers discussing strategy—now unknowingly inhabit spaces where counterparties might be chatbots following geopolitical scripts. When OpenAI researchers found AI writing performance reviews for propaganda operatives, they uncovered a meta-layer to the crisis: machines are now auditing humans on how effectively they deploy machines to manipulate other humans.

This phenomenon sits at the intersection of several defining 2020s trends: collapsing trust in institutions, the paradox of 'open' technologies enabling closed societies, and the normalization of synthetic media. Recall that just five years ago, Russian election interference relied on human troll farms. Today's operations require far fewer people thanks to large language models that can generate endless permutations of persuasive text. The automation curve of disinformation now mirrors that of legitimate industries—greater output with reduced labor. Worryingly, these techniques aren't confined to superpowers. OpenAI's report notes similar activity from Iran, Russia, and even commercial entities in the Philippines, suggesting an eventual trickle-down to private firms and extremist groups.

Historical parallels exist in the CIA's Cold War funding of abstract expressionist art to counter Soviet aesthetics, or Britain's WWII Political Warfare Executive seeding misinformation through forged documents. What's unprecedented is the velocity and deniability AI enables. Traditional propaganda required physical distribution chains that risked interception; today's AI-generated content disperses globally at light speed with no manufacturing footprint. The 'Sneer Review' operation's use of ChatGPT to fabricate both criticism of a Taiwanese game and the backlash against that criticism reveals a Möbius strip of artificial discourse—a self-referencing system where opposition and support are two prompts away from being computationally conjured.

Yet buried in the technical analysis is an unexpectedly hopeful signal: AI might also unmask its own misuse. Unlike human operatives who can blend into populations, machine-generated content leaves latent fingerprints: metadata patterns and output anomalies detectable by the tools themselves. OpenAI's ability to trace distinctive LLM 'accents' across multiple platforms recalls how 19th-century philologists exposed fake historical documents through statistical analysis of word usage. This hints at a coming arms race between generative and detective AIs—a cryptographic war fought not with ciphers but with transformer models scrutinizing each other's embeddings.
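To make the philologists' analogy concrete, here is a minimal sketch of that kind of statistical comparison. It is purely illustrative: the function names and the sample posts are invented for this example, and real detection pipelines (OpenAI's included) are far more sophisticated than raw word-frequency distance.

```python
from collections import Counter
import math

def token_frequencies(text: str) -> Counter:
    """Lowercase word counts: a crude proxy for a writer's statistical 'accent'."""
    return Counter(text.lower().split())

def accent_distance(text_a: str, text_b: str) -> float:
    """Cosine distance between word-frequency vectors.
    Values near 0 suggest similar 'accents'; values near 1 suggest different ones."""
    fa, fb = token_frequencies(text_a), token_frequencies(text_b)
    vocab = set(fa) | set(fb)
    dot = sum(fa[w] * fb[w] for w in vocab)
    norm_a = math.sqrt(sum(c * c for c in fa.values()))
    norm_b = math.sqrt(sum(c * c for c in fb.values()))
    return 1.0 - dot / (norm_a * norm_b)

# Hypothetical posts from two suspect accounts. A low distance hints at a
# shared generator or template, though it proves nothing by itself.
post_a = "Furthermore, it is crucial to recognize the broader implications here."
post_b = "Moreover, it is crucial to recognize the broader implications here."
print(f"accent distance: {accent_distance(post_a, post_b):.3f}")
```

The same idea scales up when the features are model embeddings or output-probability patterns rather than simple word counts, which is closer to how transformer-era forensics would actually operate.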

The ultimate victims in this scenario aren't platforms or governments, but civic cognition itself. Each AI-generated comment, even if quickly removed, contributes to ambient cynicism about online discourse's validity. Like microplastics in oceans, synthetic propaganda particulates accumulate in our collective attention ecosystem long after individual campaigns end. Civil society now faces a dual challenge: building resilience against automated disinformation while resisting the temptation to dismiss all inconvenient truths as artificial. Ironically, the same tools threatening discourse might eventually help safeguard it—if platforms invest in forensic AI that maintains provenance trails for digital content, akin to artistic authentication.

As generative AI becomes ubiquitous, we must reconceptualize it not just as a productivity tool but as a potential dual-use technology requiring oversight comparable to export-controlled encryption or biotech. The alternative is a world where every debate about human rights, climate policy, or military spending occurs against a buzzing backdrop of artificially amplified perspectives whose origins even their creators might not fully comprehend. When machines learn to persuade, humanity must relearn how to ascertain the truth.

Legal Disclaimer: This opinion piece is a creative commentary based on publicly available news reports and events. It is intended for informational and educational purposes only. The views expressed are those of the author and do not constitute professional, legal, medical, or financial advice. Always consult with qualified experts regarding your specific circumstances.

By Tracey Curl. This article was inspired by this source.