Summary

Disinformation is widely seen as a pressing challenge for democracies worldwide. Many policymakers are grasping for quick, effective ways to dissuade people from adopting and spreading false beliefs that degrade democratic discourse and can inspire violent or dangerous actions. Yet disinformation has proven difficult to define, understand, and measure, let alone address.

Even when leaders know what they want to achieve in countering disinformation, they struggle to make an impact and often don’t realize how little is known about the effectiveness of policies commonly recommended by experts. Policymakers also sometimes fixate on a few pieces of the disinformation puzzle—including novel technologies like social media and artificial intelligence (AI)—without considering the full range of possible responses in realms such as education, journalism, and political institutions.

This report offers a high-level, evidence-informed guide to some of the major proposals for how democratic governments, platforms, and others can counter disinformation. It distills core insights from empirical research and real-world data on ten diverse kinds of policy interventions, including fact-checking, foreign sanctions, algorithmic adjustments, and counter-messaging campaigns. For each case study, we aim to give policymakers an informed sense of the prospects for success—bridging the gap between the mostly meager scientific understanding and the perceived need to act. This means answering three core questions: How much is known about an intervention? How effective does the intervention seem, given current knowledge? And how easy is it to implement at scale?

Overall Findings

  • There is no silver bullet or “best” policy option. None of the interventions considered in this report were simultaneously well-studied, very effective, and easy to scale. Rather, the utility of most interventions seems quite uncertain and likely depends on myriad factors that researchers have barely begun to probe. For example, the precise wording and presentation of social media labels and fact-checks can matter a lot, while counter-messaging campaigns depend on a delicate match of receptive audiences with credible speakers. Bold claims that any one policy is the singular, urgent solution to disinformation should be treated with caution.
  • Policymakers should set realistic expectations. Disinformation is a chronic historical phenomenon with deep roots in complex social, political, and economic structures. It can be seen as jointly driven by forces of supply and demand. On the supply side, there are powerful political and commercial incentives for some actors to engage in, encourage, or tolerate deception, while on the demand side, psychological needs often draw people into believing false narratives. Credible options exist to curb both supply and demand, but technocratic solutionism still has serious limits against disinformation. Finite resources, knowledge, political will, legal authority, and civic trust constrain what is possible, at least in the near- to medium-term.
  • Democracies should adopt a portfolio approach to manage uncertainty. Policymakers should act like investors, pursuing a diversified mixture of counter-disinformation efforts while learning and rebalancing over time. A healthy policy portfolio would include tactical actions that appear well-researched or effective (like fact-checking and labeling social media content). But it would also involve costlier, longer-term bets on promising structural reforms (like supporting local journalism and media literacy). Each policy should come with a concrete plan for ongoing reassessment.
  • Long-term, structural reforms deserve more attention. Although many different counter-disinformation policies are being implemented in democracies, outsized attention goes to the most tangible, immediate, and visible actions. For example, platforms, governments, and researchers routinely make headlines for announcing the discovery or disruption of foreign and other inauthentic online networks. Yet such actions, while helpful, usually have narrow impacts. In comparison, more ambitious but slower-moving efforts to revive local journalism and improve media literacy (among other possibilities) receive less notice despite encouraging research on their prospects.
  • Platforms and tech cannot be the sole focus. Research suggests that social media platforms help to fuel disinformation in various ways—for example, through recommendation algorithms that encourage and amplify misleading content. Yet digital platforms exist alongside, and interact with, many other online and offline forces. The rhetoric of political elites, programming on traditional media sources like TV, and narratives circulating among trusted community members are all highly influential in shaping people’s speech, beliefs, and behaviors. At the same time, the growing number of digital platforms dilutes the effectiveness of actions by any single company to counter disinformation. Given this interplay of many voices and amplifiers, effective policy will involve complementary actions in multiple spheres.
  • Countering disinformation is not always apolitical. Those working to reduce the spread and impact of disinformation often see themselves as disinterested experts and technocrats—operating above the fray of political debate, neither seeking nor exercising political power. Indeed, activities like removing inauthentic social media assets are more or less politically neutral. But other efforts, such as counter-messaging campaigns that use storytelling or emotional appeals to compete with false ideas at a narrative and psychological level, can be hard to distinguish from traditional political advocacy. Ultimately, any institutional effort to declare what is true and what is false—and to back such declarations with power, resources, or prestige—implies some claim of authority and therefore can be seen as having political meaning (and consequences). Denying this reality risks encouraging overreach or inviting blowback, either of which deepens distrust.
  • Research gaps are pervasive. The relatively robust study of fact-checking offers clues about the possibilities and the limits of future research on other countermeasures. On the one hand, dedicated effort has enabled researchers to validate fact-checking as a generally useful tool. Policymakers can have some confidence that fact-checking is worthy of investment. On the other hand, researchers have learned that fact-checking’s efficacy can vary a lot depending on a host of highly contextual, poorly understood factors. Moreover, numerous knowledge gaps and methodological biases remain even after hundreds of published studies on fact-checking. Because fact-checking represents the high-water mark of current knowledge about counter-disinformation measures, it can be expected that other measures will likewise require sustained research over long periods—from fundamental theory to highly applied studies.
  • Research is a generational task with uncertain outcomes. The knowledge gaps highlighted in this report can serve as a road map for future research. Filling these gaps will take more than commissioning individual studies; major investments in foundational research infrastructure, such as human capital, data access, and technology, are needed. That said, social science progresses slowly, and it rarely yields definite answers to the most vexing current questions. Take economics, for example: a hundred years of research has helped Western policymakers curb (though not eliminate) depressions, recessions, and panics—yet economists still debate great questions of taxes and trade and are reckoning only belatedly with catastrophic climate risks. The mixed record of economics offers a sobering benchmark for the study of disinformation, which is a far less mature and robust field.
  • Generative AI will have complex effects but might not be a game changer. Rapid AI advances could soon make it much easier and cheaper to create realistic and/or personalized false content. Even so, the net impact on society remains unclear. Studies suggest that people’s willingness to believe false (or true) information is often not primarily driven by the content’s level of realism. Rather, other factors such as repetition, narrative appeal, perceived authority, group identification, and the viewer’s state of mind can matter more. Meanwhile, studies of microtargeted ads—already highly data-driven and automated—cast doubt on the notion that personalized messages are uniquely compelling. Generative AI can also be used to counter disinformation, not just foment it. For example, well-designed and human-supervised AI systems may help fact-checkers work more quickly. While the long-term impact of generative AI remains unknown, it’s clear that disinformation is a complex psychosocial phenomenon and is rarely reducible to any one technology.

Case Study Summaries

  1. Supporting Local Journalism. There is strong evidence that the decline of local news outlets, particularly newspapers, has eroded civic engagement, knowledge, and trust—helping disinformation to proliferate. Bolstering local journalism could plausibly help to arrest or reverse such trends, but this has not been directly tested. Cost is a major challenge, given the expense of quality journalism and the depth of the industry’s financial decline. Philanthropy can provide targeted support, such as seed money for experimentation. But a long-term solution would probably require government intervention and/or alternative business models. This could include direct subsidies (channeled through nongovernmental intermediaries) or indirect measures, such as tax exemptions and bargaining rights.
  2. Media Literacy Education. There is significant evidence that media literacy training can help people identify false stories and unreliable news sources. However, variation in pedagogical approaches means the effectiveness of one program does not necessarily imply the effectiveness of another. The most successful variants empower motivated individuals to take control of their media consumption and seek out high-quality information—instilling confidence and a sense of responsibility alongside skills development. While media literacy training shows promise, it faces challenges of speed, scale, and targeting. Reaching large numbers of people, including those most susceptible to disinformation, is expensive and takes many years.
  3. Fact-Checking. A large body of research indicates that fact-checking can be an effective way to correct false beliefs about specific claims, especially for audiences that are not heavily invested in the partisan elements of the claims. However, influencing factual beliefs does not necessarily result in attitudinal or behavioral changes, such as reduced support for a deceitful politician or a baseless policy proposal. Moreover, the efficacy of fact-checking depends a great deal on contextual factors—such as wording, presentation, and source—that are not well understood. Even so, fact-checking seems unlikely to cause a backfire effect that leads people to double down on false beliefs. Fact-checkers face a structural disadvantage in that false claims can be created more cheaply and disseminated more quickly than corrective information; conceivably, technological innovations could help shift this balance.
  4. Labeling Social Media Content. There is a good body of evidence that labeling false or untrustworthy content with additional context can make users less likely to believe and share it. Large, assertive, and disruptive labels are the most effective, while cautious and generic labels often do not work. Reminders that nudge users to consider accuracy before resharing show promise, as do efforts to label news outlets with credibility scores. Different audiences may react differently to labels, and there are risks that remain poorly understood: labels can sometimes cause users to become either overly credulous or overly skeptical of unlabeled content, for example. Major social media platforms have embraced labels to a large degree, but further scale-up may require better information-sharing or new technologies that combine human judgment with algorithmic efficiency.
  5. Counter-messaging Strategies. There is strong evidence that truthful communications campaigns designed to engage people on a narrative and psychological level are more effective than facts alone. By targeting the deeper feelings and ideas that make false claims appealing, counter-messaging strategies have the potential to impact harder-to-reach audiences. Yet success depends on the complex interplay of many inscrutable factors. The best campaigns use careful audience analysis to select the most resonant messengers, mediums, themes, and styles—but this is a costly process whose success is hard to measure. Promising techniques include communicating respect and empathy, appealing to prosocial values, and giving the audience a sense of agency.
  6. Cybersecurity for Elections and Campaigns. There is good reason to think that campaign- and election-related cybersecurity can be significantly improved, which would prevent some hack-and-leak operations and fear-inducing breaches of election systems. The cybersecurity field has come to a strong consensus on certain basic practices, many of which remain unimplemented by campaigns and election administrators. Better cybersecurity would be particularly helpful in preventing hack-and-leaks, though candidates will struggle to prioritize it given the practical imperatives of campaigning. Election systems themselves can be made substantially more secure at a reasonable cost. However, there is still no guarantee that the public would perceive such systems as secure in the face of rhetorical attacks by losing candidates.
  7. Statecraft, Deterrence, and Disruption. Cyber operations targeting foreign influence actors can temporarily frustrate specific foreign operations during sensitive periods, such as elections, but any long-term effect is likely marginal. There is little evidence to show that cyber operations, sanctions, or indictments have achieved strategic deterrence, though some foreign individuals and contract firms may be partially deterrable. Bans on foreign platforms and state media outlets have strong first-order effects (reducing access to them); their second-order consequences include retaliation against democratic media by the targeted state. All in all, the most potent tool of statecraft may be national leaders’ preemptive efforts to educate the public. Yet in democracies around the world, domestic disinformation is far more prolific and influential than foreign influence operations.
  8. Removing Inauthentic Asset Networks. The detection and removal from platforms of accounts or pages that misrepresent themselves has obvious merit, but its effectiveness is difficult to assess. Fragmentary data—such as unverified company statements, draft platform studies, and U.S. intelligence—suggest that continuous takedowns might be capable of reducing the influence of inauthentic networks and imposing some costs on perpetrators. However, few platforms even claim to have achieved this, and the investments required are considerable. Meanwhile, the threat posed by inauthentic asset networks remains unclear: a handful of empirical studies suggest that such networks, and social media influence operations more generally, may not be very effective at spreading disinformation. These early findings imply that platform takedowns may receive undue attention in public and policymaking discourse.
  9. Reducing Data Collection and Targeted Ads. Data privacy protections can be used to reduce the impact of microtargeting, or data-driven personalized messages, as a tool of disinformation. However, nascent scholarship suggests that microtargeting—while modestly effective in political persuasion—falls far short of the manipulative powers often ascribed to it. To the extent that microtargeting works, privacy protections seem to measurably undercut its effectiveness. But this carries high economic costs—not only for tech and ad companies, but also for small and medium businesses that rely on digital advertising. Additionally, efforts to blunt microtargeting can raise the costs of political activity in general, especially for activists and minority groups who lack access to other communication channels.
  10. Changing Recommendation Algorithms. Although platforms are neither the sole sources of disinformation nor the main causes of political polarization, there is strong evidence that social media algorithms intensify and entrench these off-platform dynamics. Algorithmic changes therefore have the potential to ameliorate the problem; however, this has not been directly studied by independent researchers, and the market viability of such changes is uncertain. If major platforms optimized for something other than engagement, it would undercut the core business model that enabled them to reach their current size. Users could opt in to healthier algorithms via middleware or civically minded alternative platforms, but most people probably would not. Additionally, algorithms are blunt and opaque tools: using them to curb disinformation would also suppress some legitimate content.

Acknowledgments

The authors wish to thank William Adler, Dan Baer, Albin Birger, Kelly Born, Jessica Brandt, David Broniatowski, Monica Bulger, Ciaran Cartmell, Mike Caulfield, Tímea Červeňová, Rama Elluru, Steven Feldstein, Beth Goldberg, Stephanie Hankey, Justin Hendrix, Vishnu Kannan, Jennifer Kavanagh, Rachel Kleinfeld, Samantha Lai, Laura Livingston, Peter Mattis, Tamar Mitts, Brendan Nyhan, George Perkovich, Martin Riedl, Ronald Robertson, Emily Roseman, Jen Rosiere Reynolds, Zeve Sanderson, Bret Schafer, Leah Selig Chauhan, Laura Smillie, Rory Smith, Victoria Smith, Kate Starbird, Josh Stearns, Gerald Torres, Meaghan Waff, Alicia Wanless, Laura Waters, Gavin Wilde, Kamya Yadav, and others for their valuable feedback and insights. Additional thanks to Joshua Sullivan for research assistance and to Alie Brase, Lindsay Maizland, Anjuli Das, Jocelyn Soly, Amy Mellon, and Jessica Katz for publications support. The final report reflects the views of the authors only. This research was supported by a grant from the Special Competitive Studies Project.

About the Authors

Jon Bateman is a senior fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. His research areas include disinformation, cyber operations, artificial intelligence, and techno-nationalism. Bateman previously was special assistant to Chairman of the Joint Chiefs of Staff General Joseph F. Dunford, Jr., serving as a speechwriter and the lead strategic analyst in the chairman’s internal think tank. He has also helped craft policy for military cyber operations in the Office of the Secretary of Defense and was a senior intelligence analyst at the Defense Intelligence Agency, where he led teams responsible for assessing Iran’s internal stability, senior-level decisionmaking, and cyber activities. Bateman is a graduate of Harvard Law School and Johns Hopkins University.

Dean Jackson is principal of Public Circle Research & Consulting and a specialist in democracy, media, and technology. In 2023, he was named an inaugural Tech Policy Press reporting fellow and an affiliate fellow with the Propaganda Research Lab at the University of Texas at Austin. Previously, he was an investigative analyst with the Select Committee to Investigate the January 6th Attack on the U.S. Capitol and project manager of the Influence Operations Researchers’ Guild at the Carnegie Endowment for International Peace. From 2013 to 2021, Jackson managed research and program coordination activities related to media and technology at the National Endowment for Democracy. He holds an MA in international relations from the University of Chicago and a BA in political science from Wright State University in Dayton, Ohio.

Notes

1 The cells of this table are color-coded: green suggests the most positive assessment for each factor, while red is the least positive and yellow is in between. These overall ratings are a combination of various subfactors, which may be in tension: for example, an intervention can be highly effective but only for a short time or with a high risk of second-order consequences.

A green cell means an intervention is well studied, likely to be effective, or easy to implement. For the first column, this means there is a large body of literature on the topic. While it may not conclusively answer every relevant question, it provides strong indicators of effectiveness, cost, and related factors. For the second column, a green cell suggests that an intervention can be highly effective at addressing the problem in a lasting way at a relatively low level of risk. For the third column, a green cell means that the intervention can quickly make a large impact at relatively low cost and without major obstacles to successful implementation.

A yellow cell indicates that an intervention is less well studied (there is relevant literature, but major questions about efficacy are unanswered or significantly underexplored), is less efficacious (its impact is noteworthy but limited in size or duration, or it carries some risk of blowback), or faces nonnegligible hurdles to implementation, such as cost, technical barriers, or political opposition.

A red cell indicates that an intervention is poorly understood, with little literature offering guidance on key questions; that it is low impact, has only narrow use cases, or has significant second-order consequences; or that it requires an especially high investment of resources or political capital to implement or scale.