On this page: Series Overview • Spring 2025 Events • Fall 2024 Events • Mailing List • Spring 2024 Seminar Info & Recordings

The NYU NLP and Text-as-Data Speaker Series takes place on Thursdays at 60 5th Avenue, at 11:00am EST; specific room locations for each event can be found below. This interdisciplinary series brings together experts from a wide range of fields, including computer science, linguistics, and social sciences, to explore cutting-edge research in Natural Language Processing (NLP) and text-as-data analysis.
Series Overview
Originally focused on text data applied to social science applications, the series has expanded to incorporate the growing interest in NLP from various disciplines. It provides an opportunity for attendees to engage with groundbreaking work that pushes the boundaries of what’s possible in language processing and analysis. The seminar is organized by Sam Bowman, He He, João Sedoc, Eunsol Choi, and Tal Linzen, and is open to anyone who is affiliated with NYU and wishes to attend.
See below for our schedule of speakers and sign up for our mailing list for details about live-streaming of talks and updates to the schedule.
All talks are currently scheduled to be in-person. The schedule listed below is tentative. Live talk attendance is limited, but recordings will be posted on this page and may be accessed by anyone.
Spring 2025 Events
Upcoming

Daphne Ippolito (CMU)
Date & Time: February 20, 11:00am
Title: TBA
Location: Room 150, 60 5th Avenue

Aviral Kumar (CMU)
Date & Time: February 27, 11:00am
Title: TBA
Location: Room 150, 60 5th Avenue

Jonathan Berant (Tel-Aviv University)
Date & Time: April 24, 11:00am
Title: TBA
Location: 7th Floor Open Space, 60 5th Avenue
Fall 2024 Past Events

Michael Hahn (Saarland University) Lecture Recording
Date and Time: December 5, 4:00pm
Title: Understanding Language Models via Theory and Interpretability
Speaker: Michael Hahn
Abstract: Recent progress in LLMs has rapidly outpaced our ability to understand their inner workings. This talk describes our recent work addressing this challenge. First, we develop rigorous mathematical theory describing the abilities (and limitations) of transformers in performing computations foundational to reasoning. We also examine differences and similarities with state-space models such as Mamba. Second, we propose a theoretical framework for understanding success and failure of length generalization in transformers. Third, we propose a method for reading out information from activations inside neural networks, and apply it to mechanistically interpret transformers performing various tasks. I will close with directions for future research.
Bio: Michael holds the Chair for Language, Computation, and Cognition at Saarland University. He received his PhD from Stanford University in 2022, advised by Judith Degen and Dan Jurafsky. He is interested in language, machine learning, and computational cognitive science.
Lecture Slides: Michael Hahn Lecture Slides

Jennifer Hu (Johns Hopkins / Harvard)
Date and Time: November 21, 4:00pm
Title: How to Know What Language Models Know
Speaker: Jennifer Hu
Abstract: As language models (LMs) become more sophisticated, there is growing interest in their cognitive abilities such as reasoning or grammatical generalization. However, we only have access to evaluations, which can only indirectly measure these latent constructs. In this talk, I take inspiration from the concept of “task demands” in cognitive science to design and understand LM evaluations. I will first describe how task demands can affect LMs’ behaviors. I will then present case studies showing how evaluations with different task demands can lead to vastly different conclusions about LMs’ abilities. Specifically, prompt-based evaluations (e.g., “Is the following sentence grammatical? [sentence]“) yield systematically lower performance than string probability comparisons, and smaller LMs are more sensitive to task demands than LMs with more parameters or training, mirroring findings in developmental psychology. These results underscore the importance of specifying the assumptions behind our evaluation design choices before we draw conclusions about LMs’ capabilities.
Bio: Jennifer Hu is an incoming Assistant Professor of Cognitive Science at Johns Hopkins University, where she will direct the Group for Language and Intelligence. Currently, she is a Research Fellow at the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University. Her research aims to understand the computational and cognitive principles that underlie human language.
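To make the contrast in the abstract above concrete, here is a minimal sketch (illustrative only, not material from the talk) of the two evaluation styles applied to a single minimal pair. It assumes GPT-2 and the Hugging Face transformers library as stand-ins; the prompt wording and example sentences are placeholders.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence: str) -> float:
    # Sum of the log-probabilities the model assigns to each token.
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = ids[:, 1:]
    return log_probs.gather(2, targets.unsqueeze(-1)).sum().item()

good = "The keys to the cabinet are on the table."
bad = "The keys to the cabinet is on the table."

# (1) String-probability comparison: does the model assign higher
# probability to the grammatical sentence?
print(sentence_log_prob(good) > sentence_log_prob(bad))

# (2) Prompt-based evaluation: ask the model for an explicit judgment
# and read off its generated answer.
prompt = ("Is the following sentence grammatical? Answer Yes or No.\n"
          f"Sentence: {bad}\nAnswer:")
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=3, do_sample=False,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0, inputs.input_ids.shape[1]:]))

Per the abstract, the prompt-based judgment in (2) places extra task demands on the model and tends to yield lower measured performance than the direct comparison in (1), especially for smaller LMs.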

Swabha Swayamdipta (USC) Seminar Recording
Date and Time: November 7, 4:00pm
Title: Ensuring Safety and Accountability in LLMs, Pre- and Post-Training (Slides)
Speaker: Swabha Swayamdipta
Abstract: As large language models have become ubiquitous, it has proven increasingly challenging to enforce their accountability and safe deployment. In this talk, I will discuss the importance of ensuring the safety, responsibility, and accountability of Large Language Models (LLMs) throughout all stages of their development: pre-training, post-training evaluation, and deployment. First, I will present the idea of a unique LLM signature that can identify the model to ensure accountability. Next, I will present our recent work on reliably evaluating LLMs through our novel formulation of generation separability, and how this could lead to more reliable generation. Finally, I will present some ongoing work that demonstrates LLMs’ ability to understand but not generate unsafe or untrustworthy content.
Bio: Swabha Swayamdipta is an Assistant Professor of Computer Science and a Gabilan Assistant Professor at the University of Southern California. Her research interests lie in natural language processing and machine learning, with a primary focus on the evaluation of generative models of language, understanding the behavior of language models, and designing language technologies for societal good. At USC, Swabha leads the Data, Interpretability, Language, and Learning (DILL) Lab. She received her PhD from Carnegie Mellon University, followed by a postdoctoral position at the Allen Institute for AI. Her work has received awards at ICML, NeurIPS, and ACL. Her research is supported by awards from the National Science Foundation, the Allen Institute for AI, and a Rising Star Award from Intel Labs.

Noah Goodman (Stanford) Seminar Recording
Date and Time: October 17, 2024, 4:00pm
Title: Learning To Reason In Language Models
Speaker: Noah Goodman
Bio: Noah D. Goodman is Associate Professor of Psychology, Linguistics (by courtesy), and Computer Science (by courtesy) at Stanford University. He studies the computational basis of human thought, merging behavioral experiments with formal methods from statistics and logic. Specific projects vary from concept learning and language understanding to inference algorithms for probabilistic programming languages. He received his Ph.D. in mathematics from the University of Texas at Austin in 2003. In 2005 he entered cognitive science, working as a postdoc and research scientist at MIT. In 2010 he moved to Stanford, where he runs the Computation and Cognition Lab.

Tim O’Donnell (McGill University)
Date and Time: October 1, 2024, 4:00pm
Title: Linguistic Compositionality and Incremental Processing
Speaker: Tim O’Donnell
Abstract: In this talk, I will present recent projects focusing on two key properties of natural language. First, I will discuss the problem of incremental processing, presenting modeling and empirical work on the nature of the algorithms that underlie the human sentence processor and discuss information theoretic tools that quantify processing difficulty. Second, I will discuss recent work on developing related information theoretic tools for defining and measuring the degree of compositionality in a system.
Bio: Tim O’Donnell is an associate professor and William Dawson Scholar in the Department of Linguistics at McGill University and a CIFAR Canada AI Chair at Mila, the Quebec AI institute. His research focuses on developing mathematical and computational models of how people learn to represent, process, and generalize language and music. His work draws on techniques from computational linguistics, machine learning, and artificial intelligence, integrating concepts from theoretical linguistics and methods from experimental psychology and looking at problems from all these domains.
Our Mailing List
Never miss a seminar by joining our mailing list!
Spring 2024 Seminar Info & Recordings

Paradoxes in Transformer Language Models: Positional Encodings
Siva Reddy, McGill University / Mila
Speaker: Siva Reddy
Date: February 2, 2024
Abstract: The defining features of Transformer Language Models, such as causal masking, positional encodings, and their monolithic architecture (i.e., the absence of a specific routing mechanism), are paradoxically the same features that hinder their generalization capabilities, and removing them makes them better at generalization. I will present evidence of these paradoxes on various generalizations, including length generalization, instruction following, and multi-task learning.
Bio: Siva Reddy is an Assistant Professor in the School of Computer Science and Linguistics at McGill University. He is also a Facebook CIFAR AI Chair, a core faculty member of Mila Quebec AI Institute and a research scientist at ServiceNow Research. His research focuses on representation learning for language that facilitates reasoning, conversational modeling and safety. He received the 2020 VentureBeat AI Innovation Award in NLP, and the best paper award at EMNLP 2021. Before McGill, he was a postdoctoral researcher at Stanford University and a Google PhD fellow at the University of Edinburgh.

How large language models can contribute to cognitive science
Roger Levy, Massachusetts Institute of Technology
Speaker: Roger Levy
Date: February 22, 2024
Abstract: Large language models (LLMs) are the first human-created artifacts whose text processing and generation capabilities seem to approach our own. But the hardware they run on is vastly different than ours, and the software implementing them probably is too. How, then, can we use LLMs to advance the science of language in the human mind? In this talk I present a set of case studies that exemplify three answers to this question: LLMs can help us place lower bounds on the learnability of linguistic generalizations; they can help us reverse-engineer human language processing mechanisms; and they can help us develop hypotheses for the interface between language and other cognitive mechanisms. The case studies include controlled tests of grammatical generalizations in LLMs; computational models of how adults understand what young children say; psychometric benchmarking of multimodal LLMs; and neurosymbolic models of reasoning in logical problems posed in natural language.
This talk covers joint work with Elika Bergelson, Ruth Foushee, Alex Gu, Jennifer Hu, Anna Ivanova, Benjamin Lipkin, Gary Lupyan, Kyle Mahowald, Stephan Meylan, Theo Olausson, Subha Nawer Pushpita, Armando Solar-Lezama, Joshua Tenenbaum, Ethan Wilcox, Nicole Wong, and Cedegao Zhang.
Bio: Roger Levy joined the Department of Brain and Cognitive Sciences in 2016. Levy received his BS in mathematics from the University of Arizona in 1996, followed by a year as a Fulbright Fellow at the Inter-University Program for Chinese Language Study, Taipei, Taiwan, and a year as a research student in biological anthropology at the University of Tokyo. In 2005, he completed his doctoral work at Stanford University under the direction of Christopher Manning, and then spent a year as a UK Economic and Social Research Council Postdoctoral Fellow at the University of Edinburgh. Before his appointment at MIT, he was a faculty member in the Department of Linguistics at the University of California, San Diego. Levy’s awards include the Alfred P. Sloan Research Fellowship, the NSF Faculty Early Career Development (CAREER) Award, and a Fellowship at the Center for Advanced Study in the Behavioral Sciences.

Practical AI Systems: From General-Purpose AI to (the Right) Specific Use Cases
Sherry Wu, Carnegie Mellon University
Speaker: Sherry Wu
Date: March 7, 2024
Abstract: AI research has made great strides in developing general-purpose models (e.g., LLMs) that can excel across a wide range of tasks, enabling users to explore AI applications tailored to their unique needs without the complexities of custom model training. However, with these opportunities come challenges: general-purpose models prioritize overall performance, which can neglect specific user needs. How can we make these models practically usable? In this talk, I will present our recent work on assessing and tailoring general-purpose models for specific use cases. I will first cover methods for evaluating and mapping LLMs to specific usage scenarios, then reflect on the importance of identifying the right tasks for LLMs by comparing how humans and LLMs may perform the same tasks differently. In my final remarks, I will discuss the potential of training humans and LLMs with complementary skill sets.
Bio: Sherry Tongshuang Wu is an Assistant Professor in the Human-Computer Interaction Institute at Carnegie Mellon University. Her research lies at the intersection of Human-Computer Interaction and Natural Language Processing, and primarily focuses on how humans (AI experts, lay users, domain experts) can practically interact with (debug, audit, and collaborate with) AI systems. To this end, she has worked on assessing NLP model capabilities, supporting human-in-the-loop NLP model debugging and correction, as well as facilitating human-AI collaboration. She has authored award-winning papers in top-tier NLP, HCI, and Visualization conferences and journals such as ACL, CHI, TOCHI, and TVCG. Before joining CMU, Sherry received her Ph.D. degree from the University of Washington and her bachelor’s degree from the Hong Kong University of Science and Technology, and has interned at Microsoft Research, Google Research, and Apple.

LLMs under the Microscope: Illuminating the Blind Spots and Improving the Reliability of Language Models
Yulia Tsvetkov, Paul G. Allen School of Computer Science & Engineering, University of Washington
Speaker: Yulia Tsvetkov
Date: April 4, 2024
Abstract: Large language models (LMs) are pretrained on diverse data sources—news, discussion forums, books, online encyclopedias. A significant portion of this data includes facts and opinions which, on one hand, celebrate democracy and diversity of ideas, and, on the other hand, are inherently socially biased. In this talk, I’ll present our recent work proposing new methods to (1) measure media biases in LMs trained on such corpora, along the social and economic axes, and (2) measure the fairness of downstream NLP models trained on top of politically biased LMs. In this study, we find that pretrained LMs do have political leanings which reinforce the polarization present in pretraining corpora, propagating social biases into social-oriented tasks such as hate speech and misinformation detection. In the second part of my talk, I’ll discuss ideas on mitigating LMs’ unfairness. Rather than debiasing models—which, our work shows, is impossible—we propose to understand, calibrate, and better control for their social impacts using modular methods in which diverse LMs collaborate at inference time.
Bio: Yulia Tsvetkov is an associate professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. Her research group works on fundamental advancements to large language models, multilingual NLP, and AI ethics. This research is motivated by a unified goal: to extend the capabilities of human language technology beyond individual populations and across language boundaries, thereby making NLP tools available to all users. Prior to joining UW, Yulia was an assistant professor at Carnegie Mellon University and a postdoc at Stanford. Yulia is a recipient of an NSF CAREER award, a Sloan Fellowship, an Okawa Research Award, and several paper awards and runner-up awards at NLP, ML, and CSS conferences.