Full-Spectrum Propaganda, Explained: A Conversation with Renée DiResta and Josh Goldstein
New research offers a framework for analyzing how nation-states wage propaganda campaigns in the social media era.
McCourt School Associate Research Professor Renée DiResta and Georgetown University’s Center for Security and Emerging Technology (CSET) Research Fellow Josh Goldstein co-authored research on how states leverage social media for propaganda campaigns.
The paper, Full-Spectrum Propaganda in the Social Media Era, argues that well-resourced states no longer run purely overt or covert operations. Instead, they build capacity across both, combining traditional broadcast outlets, diplomatic social media accounts, state-run news sites and covert persona networks into integrated campaigns. The authors introduce a framework that categorizes propaganda channels along two dimensions: overt versus covert, and broadcast versus social.
Using leaked operational documents and case studies of the Chinese Communist Party and RIA FAN, a Russian state-backed news outlet, the research identifies two tactics that only become visible when the full spectrum is considered together: inauthentic amplification, where covert social accounts boost content from overt state media, and deceptive sourcing, where state broadcast outlets cite fabricated personas or front accounts they themselves created. Both blur the line between channels in ways that existing frameworks tend not to capture.
Expert Insights from Georgetown University Researchers

Renée DiResta, associate research professor
Renée DiResta studies adversarial abuse of online social media platforms and is the author of Invisible Rulers: The People Who Turn Lies Into Reality, which explores how algorithms, influencers and crowds interact to shape public opinion.

Josh Goldstein, research fellow
Josh Goldstein’s research and teaching focus on online manipulation, national security and emerging technology.
Q&A
Q1:
What makes full-spectrum propaganda campaigns different from previous propaganda efforts by state actors?
Josh: Media coverage of state-backed propaganda often focuses on a single channel: a network of fake accounts on Facebook or Twitter, or what a particular state broadcasting outlet is saying. But in practice, there are often more moving pieces. States combine channels — overt and covert, on broadcast and social media — to message to foreign audiences. We try to build out this “full-spectrum” framing to show the range of tools that states have, and the strategies that become possible when they use multiple channels in concert.
Renée: The core intuition here isn’t new. As Jacques Ellul observed, propaganda evolves to fit the technology of its day. Today, multi-channel campaigns are easier to execute because social media offers ready access to foreign publics and pseudonymity. The participatory nature turns audience members into distributors, and the decentralized nature means that state actors can “fail over” from one platform to another if their accounts get discovered and taken down. Social media is not a substitute for traditional media, but a meaningful addition.
Q2:
The research highlights two popular tactics used in integrated propaganda campaigns: inauthentic amplification and deceptive sourcing. Can you explain the implications of these two tactics for the broader online information ecosystem, or for public policy debates?
Renée: Inauthentic amplification is when political actors use fake social media accounts to amplify content from overtly state-sponsored sources. Large numbers of fake likes and shares give ordinary social media users the false impression that more people support a position than actually do, influencing or sometimes crowding out genuine discourse.
Deceptive sourcing is when broadcast media falsify sources or cite sources that they themselves create on social media. In the paper, we give examples of Russian state-linked media embedding tweets from supposed journalists as “evidence,” when in fact their own troll factories had created those journalist persona accounts. This is information laundering: states can make a point seem credible by routing it through an ostensibly independent voice.
Josh: When these tactics get exposed, they can reduce trust in the information environment, which is, in itself, a goal of some state actors. Sometimes the goal of state-backed propaganda isn’t to change minds, but rather to get people to disengage on a topic altogether.
Q3:
The study highlights that full-spectrum propaganda campaigns cannot be comprehensively addressed through a single channel, as propagandists will continue to adapt their strategies based on the media environment. Furthermore, when “participatory propaganda” takes place, unwitting users become amplifiers. What does this mean for counter-propaganda efforts?
Josh: These points highlight the challenges of countering covert or manipulative propaganda. If propagandists run fake accounts on multiple channels, labeling or removing them on one platform (when they violate platform policies) generally won’t stop their operation. They’ll move over to platforms that are unwilling or unable to track them. We describe this as a form of “regulatory arbitrage” and outline how niche platforms, encrypted apps, and federated networks often serve as a new home. The dynamic isn’t exclusive to propaganda; Columbia Professor Tamar Mitts shows a similar pattern in her book Safe Havens for Hate, where extremists who are moderated on one platform migrate to others.
Q4:
How does generative AI impact all of this?
Renée: In 2020, when the Stanford Internet Observatory got research access to GPT-3, it quickly became clear to us that propagandists would be able to use this technology to enhance their operations. Josh and I wrote about that risk early on, describing the ways language models could lower the cost of influence operations, make them more scalable, and improve the quality of deceptive content. I explored these questions publicly in an Atlantic article co-authored with GPT-3 and in a Wired essay warning about the unique challenges of “deepfake text.” It’s harder to identify than video and images, and with advances in agentic behavior, fake accounts can now carry on back-and-forth exchanges, significantly enhancing what’s possible in the realm of “covert social.”
Older fake accounts often posted spammy text or long-form articles that misused slang, copied material, or otherwise felt inauthentic. Generative AI can produce much more convincing language tailored to a particular community or audience. Newer models are also very adept at producing high-quality images, audio, and video that are increasingly difficult to identify as fake, which deepens the broader trust problem. And beyond simply improving older tactics, generative AI may enable entirely new ones, such as more responsive fake personas or more adaptive trolling at scale.
Q5:
For policymakers reading this work, what’s the single most important takeaway that should inform their approach to countering state propaganda?
Josh: The November 2025 National Security Strategy highlights the need to protect the U.S. from “[p]ropaganda, influence operations, and other forms of cultural subversion.” Our work reminds policymakers of the need to look at the system as a whole. Trust and safety efforts by individual companies to track covert state networks, for example, are critical. But propagandists will migrate to more fertile ground, and they have a range of tools at their disposal.
To effectively counter propaganda and influence, policymakers need to recognize the full-spectrum nature of the threat, and calibrate their policy tools to each specific channel. They will also be most effective in their mission if they encourage a flourishing research environment—across social media companies, academic researchers, and independent investigators.
Renée: State actors are persistent, so this problem is not going away. That means we need durable, nimble capacity, not one-off responses. The goal is not to come up with a single fix for a single strategy or adversary, but to build resilient institutions that can adapt as the threat evolves.
