Tech & Public Policy program supports research on election misinformation, civic tech improvements and the hidden collection of biometric data
The Tech & Public Policy program at the McCourt School of Public Policy funds $1.5 million in interdisciplinary research projects to address critical questions about technology’s impact on society.
In partnership with Project Liberty’s Institute (formerly the McCourt Institute), the McCourt School’s Tech & Public Policy (TPP) 2024-2025 grant program focused on Technology for the Common Good: Understanding, Designing, Developing and Regulating New Technologies. TPP’s grants explore how technology might better protect values such as democracy, freedom and autonomy.
The TPP program connects threads of research, innovation and engagement to build a vibrant learning community dedicated to examining and advancing policy that addresses the challenges posed by current and future technologies. With technologists, ethicists, legal scholars and social scientists working in collaboration, TPP grantees consider novel uses and misuses of new digital technologies, their effects on individuals and society, and new governance models to replace outdated regulatory frameworks.
The 2024-2025 Grantees
Building Consensus in the Digital Landscape: The “Viewz” Platform Initiative
Nejla Asimovic, McCourt School of Public Policy, Georgetown University; Ivan Ivanek, Viewz
Consensus is essential to democratic governance and social cooperation. But polarized settings can make consensus impossible and fuel “us versus them” thinking. This project explores, among other questions, whether technology can be used to highlight divergences of opinion within groups and so challenge narratives of homogeneity. Using Viewz, a simple online dialogue platform developed by Ivan Ivanek, the research will include a multi-country study that tests different strategies and considers the platform’s potential as a broad model for designing online discourse environments, particularly those that foster a willingness for cross-group collaboration.
Biomanipulation: The Looming Threat, Year 2
Laura Donohue, Georgetown University Law Center on National Security
If left in the shadows, the emergence of biomanipulation could have enormous consequences for contemporary social and political structures. This work aims to understand and illuminate the contours and risks of the emerging field of biomanipulation for researchers, policymakers and, ultimately, end-users. Donohue and her research team will build a patent and scientific research database and publish research papers addressing the theoretical underpinnings and technological scope of biomanipulation.
The team will also educate key stakeholders and federal and state policymakers on the risks of biomanipulation and develop policies that legislators and policymakers can use to address these risks and control the spread of biomanipulation.
Can Civic Tech Reduce Administrative Burdens and Increase Trust? An Evaluation of Two Public Interest Technology Solutions
Pamela Herd, Sebastian Jilke and Don Moynihan, McCourt School of Public Policy, Georgetown University
This work encompasses two projects that study digital innovations through collaboration with civic tech organizations and government. Both projects focus on how digital innovations can reduce administrative burdens in the social safety net. The first asks what role artificial intelligence (AI) can play in supporting caseworker decisions about access to social safety net programs; the second asks whether such innovations improve trust in government.
Redesigning the Governance Stack: New Institutional Approaches to Information Economy Harms
Paul Ohm, Julie Cohen and Meg Leta Jones, Georgetown University Law Center
This project is part of a multi-year effort to reinvent the institutions and tools the administrative state uses to govern technology and technology companies, especially given recent advances in AI. The effort will prioritize public accountability and strong public oversight, and it will seek to restore and recenter the rule of law within a new institutional framework designed around the needs and failure modes of an algorithmically driven information economy. It will also be a “full stack” effort, encompassing the implementation details typical of shorter-term projects while extending to rethink fundamental principles of regulatory organization and operation.
Exploring the “Collateral Damage” Argument in Internet Censorship Resistance
Micah Sherr, College of Arts & Sciences, Georgetown University
This project proposes to be the first to consider the ethics of the mechanisms and arguments that serve as the foundation for modern censorship-resistant systems (CRSes). A common approach of CRSes is to disguise users’ attempts to access censored content as requests to allowed resources, such as requests to Amazon Web Services (AWS). The censoring entity may choose to block access to AWS en masse, which would carry potentially enormous collateral damage, since all access to AWS would then be affected. CRS designers assume that this collateral damage is too politically, economically or socially expensive for the censor, and thus that the censor will not block the CRS. This is the “collateral damage argument” that serves as the foundation of most CRS approaches. The ethics of relying on collateral damage to resist Internet censorship has not been studied in the academic literature, despite its serving as a foundation for many commonly used censorship-resistance systems. The central research focus of this project is to improve our understanding of the ethics of the collateral damage argument and to quantify its potential impact.
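The disguise technique described above is commonly implemented as “domain fronting.” The following Python sketch is purely illustrative: the hostnames are hypothetical placeholders, and a real deployment depends on a provider that still permits fronting.

```python
# Illustrative sketch of domain fronting, the disguise technique described
# above. Hostnames are hypothetical placeholders, not real services.
import http.client
import ssl

FRONT = "allowed-cloud.example"   # innocuous domain the censor permits
HIDDEN = "censored-site.example"  # real destination, hidden from the censor

# The TLS handshake (including its SNI field) names only the permitted front
# domain, so a network observer sees an ordinary connection to FRONT.
context = ssl.create_default_context()
conn = http.client.HTTPSConnection(FRONT, context=context)

# The Host header travels inside the encrypted channel and names the hidden
# destination; a fronting-capable provider routes the request there.
conn.request("GET", "/", headers={"Host": HIDDEN})
response = conn.getresponse()
print(response.status, response.reason)
conn.close()
```

To stop this traffic, a censor must block FRONT itself, and with it every other service hosted behind the same provider. That is exactly the collateral damage the project interrogates.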
This interdisciplinary project will tackle the problem of internet censorship from both technical and philosophical directions, focusing on two main research thrusts. In thrust 1, the researchers will introduce new methods and models to improve our understanding of how current CRSes apply the collateral damage argument and of the risks various stakeholders face if the CRSes’ assumptions fail to hold. Thrust 2 will present a framework for building more ethical CRSes, including the development of principles for obtaining informed consent and the construction of alternative architectures that avoid the collateral damage argument entirely.
Generative AI, Humanness, and Misinformation in the 2024 U.S. Presidential Election
Lisa Singh, Department of Computer Science and McCourt School of Public Policy, Georgetown University; Tiago Ventura, McCourt School of Public Policy, Georgetown University; Leticia Bode, Communication, Culture, and Technology, Georgetown University
This project seeks to understand the nature of content related to the 2024 U.S. presidential election shared on social media platforms on two key dimensions — the extent to which it is true or misleading and the extent to which it is perceived as human.
Identifying misinformation is important because it can mislead people in ways that disenfranchise them, undermine their trust in the electoral process or change their vote choice.
While humanness is not a perfect proxy for AI-generated content, it does reflect the typical user experience with that content. A social media user encountering such content will not know whether it was generated by AI but will be able to perceive whether it feels composed by a human.
USDS Founding, Primary Source Archive
Emily Tavoulareas, Tech & Society and McCourt School of Public Policy, Georgetown University; Kathy Pham, Workday and Harvard Kennedy School
This project aims to create a repository of primary sources related to the founding of the United States Digital Service (USDS), along with synthesized assets, such as publications, interviews and reports, that scholars can cite in the future.
Sign up for the Tech & Public Policy program newsletter to stay updated on events and news.