Misinformation has drawn significant public attention following a spate of volatile civic events, including elections and national referendums, that revealed the prevalence of influence operations. The persistence of misinformation on online platforms testifies to the inadequacy of existing solutions. As generative models for text, images, and video become democratized and their outputs immediately enter online distribution channels, it is important to develop safer governance mechanisms via policies that draw on lessons learned from practitioners and from research in social science and machine learning. We provide a platform to discuss and present research in these areas, and encourage members to develop collaborative early-stage research ideas by tapping into the community.
Topics of interest to us include:
- Generative Models
- Social Network Analysis
- Fake News
- Hate Speech
- Online Toxicity
- Causal Effects and Interventions
- Natural Language Processing
- Agent-based Models
- Multiagent Systems
- Reinforcement Learning
- Transparency and Ethics
- Recommender Systems
- Algorithmic Auditing
- Psychosocial Analysis
This reading group is a space to introduce beginners to research in these fields. It brings together a cross-functional community to continue the conversation and foster nuanced discussion of how to address the challenges of global misinformation through the ethical use of technology and effective governance.
Organizers
The AI, Misinformation, and Policy (AMPol) Seminar is organized by Swapneel Mehta (CDS Ph.D. candidate) and Megan Brown (CSMAP Sr. Research Engineer).
Attendance Information
Our talks are held in person in the 7th-floor open space at NYU Data Science, 60 5th Avenue, New York, with catered snacks provided. If you are not an NYU member and want to attend in person, we encourage you to contact the organizers at least 48 hours before the event so we can arrange building access.
Spring 2023

Date & Time: Tuesday, April 18, 3:30pm
Speaker & Title: Nathaniel Lubin, “Accountability Infrastructure for Recommendation Systems: Methods for Identifying and Mitigating Structural Harms Created by System Architecture” (additional resource: “Accountability Infrastructure” website)
Bio: Nathaniel Lubin has spent his career focused on digital strategy, technology, and politics. Recently, his work has centered on developing novel approaches to improving online discourse, building measurement tools, and combating misinformation. Nathaniel is a Rebooting Social Media Fellow at Harvard’s Berkman Klein Center and a Visiting Fellow at the Digital Life Initiative at Cornell Tech. He founded Fellow Americans, a non-profit that creates and tests more effective digital content on topics like COVID-19 response, civic participation, and improved social trust while working with some of the largest progressive organizations, and he is CEO of Survey 160, a software product designed to source data for polling and research. As a consultant, he has assisted more than 30 startups, major corporations, foundations, and advocacy organizations working to leverage technology and digital tools to better communicate with key audiences. Nathaniel previously served as Director of the Office of Digital Strategy at the White House, where he led a team of strategists and practitioners to modernize the way the White House engaged and communicated with the American public. Before that, he was Director of Digital Marketing at Obama for America in 2012, where he led the largest paid digital fundraising, persuasion, and outreach programs yet run in politics, with a budget of more than $112 million. Originally from New York, Lubin is an honors graduate of Harvard University.
Abstract: Attention capitalism has generated design processes and product development decisions that prioritize platform growth over all other considerations. To the extent limits have been placed on these incentives, interventions have primarily taken the form of content moderation. While moderation is important for what we call “acute harms,” societal-scale harms – such as negative effects on mental health and social trust – require new forms of institutional transparency and scientific investigation, which we group under the name accountability infrastructure.
This is not a new problem. In fact, there are many conceptual lessons and implementation approaches for accountability infrastructure within the history of public health. Channeling these insights, we reinterpret the societal harms generated by technology platforms as a public health problem. To that end, we present a novel mechanism design framework and practical measurement methods for that framework. The proposed approach is iterative and built into the product design process, and it is applicable to either internally motivated (i.e., self-regulation by companies) or externally motivated (i.e., government regulation) interventions.
In doing this, we aim to help shape a research agenda of principles for mechanism design around problem areas on which there is broad consensus and a firm base of support. We offer constructive examples and discussion of potential implementation methods related to these topics, as well as several new data illustrations for potential effects of exposure to online content.

Date & Time: Friday, April 7, 11:15am
Speaker & Title: Dr. Christian Schroeder de Witt, “Bringing Multi-Agent Learning to Societal Impact: From Steganography to Solar Geoengineering”
Recording: View Christian Schroeder de Witt’s Lecture Recording (Form to Request Password to Christian Schroeder de Witt’s Lecture Recording) (captions coming)
Slides: View Christian Schroeder de Witt’s Lecture Slides
Bio: Dr. Christian Schroeder de Witt is an artificial intelligence researcher specialising in fundamental research on multi-agent control in high-dimensional settings. He has authored a variety of highly influential research works and is pioneering numerous real-world applications of deep multi-agent reinforcement learning, ranging from steganography to climate-economics models and solar geoengineering. As part of various industry collaborations, Christian has previously worked on AI for autonomous drone control, as well as automated cybersecurity defence systems. Christian currently holds a postdoctoral researcher role at the University of Oxford, UK.
Born and raised in Frankfurt am Main, Germany, Christian holds a DPhil (Ph.D.) and two distinguished master’s degrees from the University of Oxford, spanning theoretical physics, computer science, and artificial intelligence. Aside from academia, Christian has experience working in diverse industries, including with Google AI, Man AHL, and as Head of Engineering of a Berlin eCommerce company.
In 2020, Christian was named a “30 under 30” youth politician of a UK political party with House of Commons representation. Christian’s political work on climate policy is informed by a long-term research assistantship with Prof. Myles Allen (Coordinating Lead Author of IPCC SR1.5) at Oxford Net Zero, where he has been focusing on decarbonising small and medium-sized companies. In 2022, Christian was selected as a “30 under 35 rising strategist (Europe)” by the Schmidt Futures International Strategy Forum. This fellowship allows him to work with the European Council on Foreign Relations.
Abstract: Over the past years, much progress has been made in deep multi-agent learning in recreational games such as Go or DOTA 2. In this talk, I propose to use these tools to generate societal impact instead. Starting from my recent breakthrough research yielding the world’s first perfectly secure generative steganography algorithm, and moving to robustifying Human-AI systems against illusory attacks, non-linear decision-making in the net-zero transition, and optimizing solar geoengineering, I discuss how multi-agent learning can raise new questions about some of the biggest societal challenges of our time, as well as power new technology to advance equity, expand opportunity, and protect basic rights and liberties.
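For intuition about the “perfectly secure” claim in the abstract: a steganographic scheme is perfectly secure when the distribution of stegotext exactly matches the distribution of innocent covertext, so no statistical test can detect a hidden message. The sketch below is a toy of my own, not the speaker’s algorithm (which, as I understand it, uses minimum entropy coupling to handle realistic, non-uniform generative-model distributions). It shows only the degenerate case: if covertext tokens are uniform over 2^K symbols, mapping uniform (e.g., encrypted) message bits directly onto tokens yields stegotext with exactly the covertext distribution.

```python
# Toy sketch (hypothetical, not the speaker's method): in the degenerate case
# where covertext tokens are uniform over 2**K symbols, mapping uniform
# message bits (e.g., ciphertext) onto tokens gives stegotext whose
# distribution is identical to the covertext distribution.
import secrets

K = 4  # bits carried per token; token alphabet has 2**K symbols

def embed(bits: str) -> list[int]:
    """Map each group of K uniform bits to one token."""
    assert len(bits) % K == 0
    return [int(bits[i:i + K], 2) for i in range(0, len(bits), K)]

def extract(tokens: list[int]) -> str:
    """Invert the mapping to recover the hidden bit string."""
    return "".join(format(t, f"0{K}b") for t in tokens)

message = format(secrets.randbits(16), "016b")  # stand-in for an encrypted payload
stego = embed(message)
assert extract(stego) == message
```

The hard part, and the subject of the research, is achieving this distribution-matching property when the model’s token distribution is not uniform.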

Date & Time: Friday, March 31, 11:15am
Speaker & Title: Kiran Garimella, “Data donation systems for platform research”
Recording: View Kiran Garimella’s Lecture Recording (Form to Request Password to Kiran Garimella’s Lecture) (captions coming)
Slides: View Kiran Garimella’s Lecture Slides
Bio: Kiran Garimella is an Assistant Professor in the School of Communication and Information at Rutgers University. His research uses large-scale data to tackle societal issues such as misinformation, political polarization, and hate speech. Prior to joining Rutgers, Dr. Garimella was the Michael Hammer postdoc at the Institute for Data, Systems, and Society at MIT and a postdoc at EPFL, Switzerland. His work on studying and mitigating polarization on social media won best paper awards at top computer science conferences. Kiran received his Ph.D. in computer science from Aalto University, Finland, and his Master’s and Bachelor’s degrees from IIIT Hyderabad, India. Prior to his Ph.D., he worked as a Research Engineer at Yahoo Research, Barcelona, and QCRI, Doha.
Abstract: Data donation systems are emerging as a new way to facilitate research on social media platforms, where access to user data can be restricted due to privacy concerns. These systems allow users to voluntarily donate their data for research purposes, providing researchers with valuable insights into user behavior and platform dynamics. In this talk, I will discuss the potential benefits and challenges of data donation systems for platform research. I will explore the different models of data donation systems, including opt-in and opt-out approaches, and examine the ethical considerations involved in such systems. I will also discuss the technical challenges of implementing data donation systems, including data quality control, data security, and data anonymization. Finally, I will highlight some of the recent research studies that have been conducted using data donation systems and the potential impact of such studies on our understanding of social media platforms. Overall, data donation systems have the potential to provide researchers with unprecedented access to user data, while also protecting user privacy and autonomy. However, the design and implementation of such systems must be carefully considered to ensure that they are transparent, ethical, and technically feasible.
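To make the opt-in model concrete, here is a minimal sketch (my own illustration, not material from the talk; the names `accept_donation` and `SECRET_SALT` are hypothetical) of a donation intake step: it refuses data without explicit consent, pseudonymizes the donor identifier with a keyed hash so records can be linked per donor without exposing identity, and applies a basic quality-control check before anything is stored for research.

```python
# Hypothetical opt-in data donation intake: consent check, pseudonymization,
# and basic quality control. Illustrative only.
import hashlib
import hmac
import json

SECRET_SALT = b"replace-with-a-managed-secret"  # hypothetical; never hard-code in practice
REQUIRED_FIELDS = {"timestamp", "platform", "content"}

def pseudonymize(user_id: str) -> str:
    """Keyed hash: stable per-donor ID without exposing the raw identifier."""
    return hmac.new(SECRET_SALT, user_id.encode(), hashlib.sha256).hexdigest()

def accept_donation(user_id: str, consented: bool, record: dict) -> dict | None:
    """Return a research-ready record, or None if consent or quality checks fail."""
    if not consented:  # opt-in: no explicit consent, no data
        return None
    if not REQUIRED_FIELDS <= record.keys():  # basic quality control
        return None
    return {"donor": pseudonymize(user_id),
            **{k: record[k] for k in REQUIRED_FIELDS}}

donation = accept_donation(
    "user-123",
    consented=True,
    record={"timestamp": "2023-03-31T11:15:00Z", "platform": "example-app",
            "content": "example post"},
)
print(json.dumps(donation, indent=2))
```

A real system would add much richer quality control (schema validation, deduplication, spam filtering) and manage the salt as a protected secret; anonymization guarantees beyond pseudonymization (e.g., aggregation or differential privacy) are a separate design decision.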

Date & Time: Monday, March 6, 2:30pm
Speaker & Title: Emily Saltz (Jigsaw), “Using mixed methods to understand the effects of online information interventions: Lessons from UX research in industry”
Recording: View Emily Saltz’s Lecture Recording (Form to Request Password to Emily Saltz’s Lecture) (captions coming)
Slides: View Emily Saltz’s Lecture Slides
Bio: Emily Saltz is a UX Researcher at Google Jigsaw, working on tools for platforms and moderators to address online harms. Before that, she was a UX Researcher at the New York Times R&D Lab, conducting research on topics ranging from media credibility (the News Provenance Project) to NLP Q&A tools. She was a 2020 Fellow at the Partnership on AI, and holds a Master’s in Human-Computer Interaction from Carnegie Mellon and a BA in Linguistics from UC Santa Cruz.
Abstract: This talk will provide a glimpse into the experience of studying online harms and information interventions in industry R&D groups such as Google Jigsaw and the New York Times, and how this research is used to inform product decisions deployed at scale. It will also describe the process of working with cross-functional product teams, using in-depth qualitative research alongside larger scale surveys and lab studies to answer questions relevant to both industry and academia, such as user attitudes towards credibility labels across platforms.