The NYU NLP and Text-as-Data Speaker Series takes place on Thursdays from 4:00 p.m. to 5:30 p.m. at the Center for Data Science, 60 Fifth Avenue (7th floor common area). Expanding from its original emphasis on text data in social science applications, the Series now incorporates the growing interest in Natural Language Processing from a variety of disciplines, especially Computer Science and Linguistics. The Series gives attendees the opportunity to see cutting-edge NLP and other text-as-data work from social science, computer science, and related disciplines. The seminar is organized by Professors Arthur Spirling, Sam Bowman, He He, and Tal Linzen, and is open to anyone affiliated with NYU who wishes to attend.
See below for our schedule of speakers, and sign up for our mailing list for details about live-streaming of talks and updates to the schedule.
Note: All talks are currently scheduled to be in-person but this could change with updated guidance from NYU. The schedule listed below is tentative. Please return to this page or join the mailing list for updates in case of unforeseen circumstances. Live talk attendance is limited, but recordings will be posted on this page and may be accessed by anyone.
Spring 2023
- March 30
- Sasha (Alexander) Rush
- Title: “Pretraining without Attention”
- Abstract: Transformers are essential to pretraining. As we approach five years of BERT, the connection between attention as an architecture and transfer learning remains key to this central thread in NLP. Other architectures such as CNNs and RNNs have been used to replicate pretraining results, but they either fail to reach the same accuracy or require supplemental attention layers. This work revisits the seminal BERT result and considers pretraining without attention. We consider replacing self-attention layers with recently developed approaches for long-range sequence modeling and with transformer architecture variants. Specifically, inspired by recent papers such as the structured state space sequence model (S4), we use simple routing layers based on state-space models (SSMs) and a bidirectional model architecture based on multiplicative gating. We discuss the results of the proposed Bidirectional Gated SSM (BiGS) and present a range of analyses of its properties. Results show that architecture does seem to have a notable impact on downstream performance, and that BiGS carries a different inductive bias worth exploring further. (A toy code sketch of a gated bidirectional state-space layer appears after the Spring 2023 listings below.)
- Bio: Alexander “Sasha” Rush is an Associate Professor at Cornell Tech. His work is at the intersection of natural language processing and generative modeling with applications in text generation, efficient inference, and controllability. He has written several popular open-source software projects supporting NLP research and data science, and works part-time as a researcher at Hugging Face. He is the secretary of ICLR and developed software used to run virtual conferences during COVID. His work has received paper and demo awards at major NLP, visualization, and hardware conferences, an NSF Career Award, and a Sloan Fellowship. He tweets and blogs, mostly about coding and ML, at @srush_nlp.
- April 6
- Ellie Pavlick
- Title: “Mechanisms for Compositionality in Neural Networks”
- Abstract: In this talk, Pavlick will discuss new work that attempts to describe the “algorithms” that neural networks (NNs) implement implicitly as a result of their training. Pavlick will focus specifically on NNs’ ability to encode abstract, compositional functions that consist of interpretable subroutines and operate in a content-independent manner. Pavlick will discuss findings on models trained from scratch on language and vision tasks, as well as large language models (LLMs) in an in-context-learning setting.
- Bio: Ellie Pavlick is an Assistant Professor of Computer Science at Brown University. She received her PhD from University of Pennsylvania in 2017, where her focus was on paraphrasing and lexical semantics. Ellie’s research is on cognitively-inspired approaches to language acquisition, focusing on grounded language learning and on the emergence of structure (or lack thereof) in neural language models. Ellie leads the language understanding and representation (LUNAR) lab, which collaborates with Brown’s Robotics and Visual Computing labs and with the Department of Cognitive, Linguistic, and Psychological Sciences.
- Recording: Ellie Pavlick Lecture Recording
- April 13
- Mohit Iyyer
- Title: “On using large language models to translate literary works, and detecting when they’ve been used”
- Abstract: In this presentation, Mohit will talk about two projects that are only tangentially related (hence the awkward title!). First, he’ll share his lab’s experience working with large language models to translate literary texts (e.g., novels). Mohit and his team’s research in this direction includes (1) efforts to develop human and automatic evaluations of the quality of AI-generated literary translations, which allow us to better understand the types of errors these systems make; and (2) work on building a publicly accessible platform for readers of AI-generated translation, and extensions to collaborative human-AI translation. In the second part of the talk, Mohit will focus on detecting when a piece of text has been generated by a language model. Mohit and his team evaluate several existing AI-generated text detectors (e.g., watermarking, DetectGPT) and discover that they are vulnerable to paraphrase attacks: simply passing text generated by ChatGPT through an external paraphrasing model is enough to fool current detectors. Mohit and his team propose a retrieval-based detection algorithm that proves more robust to paraphrase attacks, but which also has its own limitations. (A toy sketch of retrieval-based detection appears after the Spring 2023 listings below.)
- Bio: Mohit Iyyer is an assistant professor in computer science at the University of Massachusetts Amherst. His research focuses broadly on designing machine learning models for discourse-level language generation (e.g., for story generation and machine translation), and his group also works on tasks involving creative language understanding (e.g., modeling fictional narratives and characters). He is the recipient of best paper awards at NAACL (2016, 2018) and a best demo award at NeurIPS 2015, and he received the 2022 Samsung AI Researcher of the Year award. He received his PhD in computer science from the University of Maryland, College Park in 2017, advised by Jordan Boyd-Graber and Hal Daumé III, and spent the following year as a researcher at the Allen Institute for Artificial Intelligence.
- Recording: Mohit Iyyer Lecture Recording
- April 20
- Asli Celikyilmaz
- Title: “Exploring Machine Thinking”
- Abstract: As language models continue to evolve and become more sophisticated, it will be important to continually evaluate and improve their ability to reason effectively in context to ensure their reliability and accuracy. In this talk, I will present some of our recent work to help models learn such skills, including exploring machine thinking in in-context learning, assessing the correctness of reasoning steps, prompt chaining for compositionality and refinement, and meta-finetuning to assess reasoning abilities. By examining these works, we’ll gain insights into some current systems and discover opportunities for future research.
- Bio: Asli Celikyilmaz is a Research Manager at Fundamental AI Research (FAIR) Labs at Meta AI in Seattle. Formerly, she was a Senior Principal Researcher at Microsoft Research (MSR) in Redmond, Washington. She is also an Affiliate Associate Member at the University of Washington. She received her Ph.D. in Information Science from the University of Toronto, Canada, and later completed postdoctoral study in the Computer Science Department at the University of California, Berkeley. Her research interests include machine learning and natural language processing, with a focus on areas such as machine teaching, long-form reasoning and evaluation, in-context learning, and language generation and evaluation. She serves as Editor-in-Chief of the Transactions of the ACL (TACL) and as an Associate Editor of the Open Journal of Signal Processing (OJSP). She has received several “best of” awards, including at Semantic Computing in 2009 and CVPR in 2019.
- Slides: Asli Celikyilmaz Lecture Slides
- April 27
- Jacob Andreas
- Title: “Language Models as World Models”
- Abstract: The extent to which language modeling induces representations of the world outside text—and the broader question of whether it is possible to learn about meaning from text alone—have remained a subject of ongoing debate across NLP and the cognitive sciences. I’ll present two studies from my lab showing that transformer language models encode structured and manipulable models of situations in their hidden representations. I’ll begin by presenting evidence from semantic probing indicating that LM representations of entity mentions encode information about entities’ dynamic state, and that these state representations are causally implicated in downstream language generation. Despite this, even today’s largest LMs are prone to glaring semantic errors: they hallucinate facts, contradict input text, or even contradict their own previous outputs. Building on our understanding of how LMs build models of entities and events, I’ll present a representation editing model called REMEDI that can correct these errors directly in an LM’s representation space, in some cases making it possible to generate output that cannot be produced with a corresponding textual prompt, and to detect incorrect or incoherent output before it is generated. (A toy sketch of hidden-state editing appears after the Spring 2023 listings below.)
- Bio: Jacob Andreas is the X Consortium Assistant Professor at MIT. His research aims to build intelligent systems that can communicate effectively using language and learn from human guidance. Jacob earned his Ph.D. from UC Berkeley, his M.Phil. from Cambridge (where he studied as a Churchill scholar) and his B.S. from Columbia. As a researcher at Microsoft Semantic Machines, he founded the language generation team and helped develop core pieces of the technology that powers conversational interaction in Microsoft Outlook. He has been named a Samsung AI Researcher of the Year and National Academy of Sciences Kavli Fellow, and has received the NSF CAREER award, MIT’s Junior Bose and Kolokotrones teaching awards, and paper awards at NAACL and ICML.
- Recording: Jacob Andreas Lecture Recording (captions coming soon)
- Slides: Jacob Andreas Lecture Slides
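For readers who want a concrete handle on the architecture in Sasha Rush’s abstract above, here is a minimal, hypothetical sketch of a bidirectional state-space layer with multiplicative gating. It illustrates the general idea only; the class name, dimensions, and the sigmoid parameterization of the recurrence are invented for this sketch and are not the BiGS implementation.

```python
import torch
import torch.nn as nn

class ToyBiSSM(nn.Module):
    """Toy bidirectional state-space layer with multiplicative gating.

    Runs the linear recurrence x_t = a * x_{t-1} + b * u_t in both
    directions and gates the combined output. This is an illustration
    of the general idea, not the actual BiGS architecture.
    """

    def __init__(self, d_model: int):
        super().__init__()
        # Diagonal recurrence parameters, one per channel.
        self.a_logit = nn.Parameter(torch.full((d_model,), -0.5))
        self.b = nn.Parameter(torch.ones(d_model))
        self.gate = nn.Linear(d_model, d_model)

    def scan(self, u: torch.Tensor) -> torch.Tensor:
        # u: (batch, seq_len, d_model); explicit loop for clarity.
        a = torch.sigmoid(self.a_logit)  # keep the recurrence stable (0 < a < 1)
        x = torch.zeros_like(u[:, 0])
        outs = []
        for t in range(u.size(1)):
            x = a * x + self.b * u[:, t]
            outs.append(x)
        return torch.stack(outs, dim=1)

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        fwd = self.scan(u)                  # left-to-right context
        bwd = self.scan(u.flip(1)).flip(1)  # right-to-left context
        # Multiplicative gating on the summed contexts.
        return torch.sigmoid(self.gate(u)) * (fwd + bwd)

layer = ToyBiSSM(d_model=64)
tokens = torch.randn(2, 16, 64)  # stand-in token embeddings
print(layer(tokens).shape)       # torch.Size([2, 16, 64])
```

A real S4-style layer would use a structured parameterization of the recurrence and a parallel scan or convolutional computation rather than the explicit loop above.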
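The retrieval-based detection idea from Mohit Iyyer’s abstract can also be sketched in a few lines: store everything a model has generated and flag new text whose nearest stored generation is highly similar. TF-IDF similarity here is a stand-in for the stronger semantic retrieval a deployed system would use, and the corpus and threshold are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical database of texts the model has previously generated.
generated_corpus = [
    "The quick brown fox jumps over the lazy dog.",
    "Large language models can translate entire novels.",
]

vectorizer = TfidfVectorizer().fit(generated_corpus)
corpus_vecs = vectorizer.transform(generated_corpus)

def looks_machine_generated(text: str, threshold: float = 0.7) -> bool:
    """Flag text whose nearest stored generation is highly similar.

    A paraphrase still overlaps heavily with its source, so matching
    against the generation database degrades more gracefully under
    paraphrase attacks than classifier- or watermark-based detectors.
    """
    sims = cosine_similarity(vectorizer.transform([text]), corpus_vecs)
    return float(sims.max()) >= threshold

# A light paraphrase of the second stored generation still matches.
print(looks_machine_generated("Large language models translate entire novels."))
```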
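Finally, the representation-editing mechanics behind the REMEDI discussion in Jacob Andreas’s abstract can be illustrated with a generic forward hook that nudges a hidden state at an entity mention. REMEDI learns its editor from data; the layer, token position, and edit direction below are arbitrary placeholders, not the actual method.

```python
import torch
import torch.nn as nn

# Stand-in for one block of a pretrained LM.
layer = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)

# Hypothetical learned direction encoding an attribute of an entity
# (REMEDI learns such editors; this one is random for illustration).
edit_direction = 0.1 * torch.randn(32)

def representation_edit_hook(module, inputs, output):
    # Nudge the hidden state of one entity mention (token position 3 here)
    # so that downstream computation reflects the edited attribute.
    output = output.clone()
    output[:, 3, :] += edit_direction
    return output

handle = layer.register_forward_hook(representation_edit_hook)
hidden = torch.randn(1, 8, 32)  # fake hidden states for 8 tokens
edited = layer(hidden)          # forward pass with the edit applied
handle.remove()
print(edited.shape)             # torch.Size([1, 8, 32])
```

In an actual LM, the same hook mechanism would be attached to an intermediate layer of the pretrained network, and the edit vector would come from a learned editor rather than random noise.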
Subscribe to our mailing list
Request to attend
HAVE A VALID NYU ID?
You may attend the Speaker Series without requesting approval!
DON’T HAVE A VALID NYU ID?
Please see the note above about live-streaming and recordings of the talks! We are not able to approve external requests at this time.
Previous speakers
Fall 2022
- 8 September – Dan Jurafsky (Stanford)
- 22 September – Emily Pitler (Google)
- 13 October – Alan Ritter (Georgia Tech)
- 20 October – Chris Manning (Stanford)
- 17 November – Emma Strubell (CMU)
Spring 2022
- 10 March: Chenhao Tan (UChicago) – Towards Human-Centered Explanations of AI Predictions [Recording]
- 24 March: Sameer Singh (UC Irvine) – Lipstick on a Pig: Using Language Models as Few-Shot Learners [Recording] [Slides]
- 31 March: Sebastian Schuster (NYU) – How contextual are contextual language models? [Recording] [Slides]
- 14 April: Greg Durrett (UT Austin) – Why natural language is the right vehicle for complex reasoning [Recording]
- 21 April: Douwe Kiela (HuggingFace) – Progress in Dynamic Adversarial Data Collection & Adventures in Multimodal Machine Learning [Recording]
Fall 2021
- 23 Sept: Ankur Parikh (Google) – Towards High Precision Text Generation
- 21 Oct: Alex Warstadt (NYU) – Testing the Learnability of Grammar for Humans and Machines: Investigations with Artificial Neural Networks [Alex Warstadt Slides]
- 4 Nov: Marianna Apidianaki (UPenn) – Lexical Polysemy and Intensity in Contextualized Representations [Marianna Apidianaki Slides]
- 18 Nov: Danqi Chen (Princeton) – Contrastive Representation Learning in Text [Danqi Chen Slides]
Spring 2021
- 4 Feb: Byron Wallace (Northeastern) — What does the evidence say? Language technologies to help make sense of biomedical texts [Byron Wallace Lecture Video Recording]
- 4 March: Nanyun Peng (UCLA) — Controllable Text Generation Beyond Auto-regressive Models [Nanyun Peng Lecture Video Recording]
- 18 March: Karl Stratos (Rutgers) — Maximal Mutual Information Predictive Coding for Natural Language Processing [Karl Stratos Video Lecture Recording]
- 1 April: Su Lin Blodgett (Microsoft) — Language and Justice: Reconsidering Harms in NLP Systems and Practices [NYU community only: Su Lin Blodgett Lecture Video Recording]
- 15 April: Allyson Ettinger (University of Chicago) — “Understanding” and prediction: Controlled examinations of meaning sensitivity in pre-trained models [Allyson Ettinger Lecture Video Recording]
- 29 April: Wei Xu (Georgia Tech) — Importance of Data and Linguistics in Neural Language Generation [Wei Xu Lecture Video Recording]
Fall 2020
- 17 Sept: Anna Rogers (Copenhagen) — When BERT plays the lottery, all tickets are winning
- 24 Sept: Matt Gardner (AI2) — Contrastive pairs are better than independent samples, for both learning and evaluation [Matt Gardner Lecture Video Recording]
- 8 Oct: Yonatan Belinkov (Technion) — Causal Mediation Analysis for Interpreting NLP Models: The Case of Gender Bias [Yonatan Belinkov Lecture Video Recording]
- 22 Oct: Tatsunori Hashimoto (Stanford) — Robustness based approaches for improving natural language generation and understanding [Tatsunori Hashimoto Video Lecture Recording]
- 29 Oct: Dan Weld (University of Washington) — Semantic Scholar – Advanced NLP to Accelerate Scientific Research [Dan Weld Lecture Video Recording]
- 5 Nov: Dani Yogatama (DeepMind) — Semiparametric Language Models [Dani Yogatama Lecture Video Recording]
- 3 Dec: Angeliki Lazaridou (DeepMind) — Towards multi-agent emergent communication as a building block of human-centric AI [Angeliki Lazaridou Video Lecture Recording]
Spring 2020
- 6 Feb: Ellie Pavlick (Brown) — What do (and should) language models know about language? [Ellie Pavlick Lecture Video Recording]
- 13 Feb: David Lazer (Northeastern) — Fake news on Twitter during the 2016 U.S. presidential election [David Lazer Lecture Recording]
* No meetings from 20 Feb to 9 April *
- 16 Apr: Cancelled
- 23 Apr: Cancelled
- 30 Apr: Cancelled
- 7 May: Cancelled
Fall 2019
- 5 Sept Eunsol Choi (Google / UT Austin) — Learning to Understand Entities In Text
- 12 Sept Tom Kwiatkowski (Google) — New Challenges in Question Answering: Natural Questions and Going Beyond Word Matching
- 19 Sept Zack Lipton (CMU) — Deep (Inter-)Active Learning for NLP: Cure-all or Catastrophe?
- 26 Sept Diyi Yang (Georgia Tech) — Building Language Technologies for Better Online Communities
- 3 Oct Robin Jia (Stanford) — Building Adversarially Robust Natural Language Processing Systems
- 10 Oct Ceren Budak (U Mich) — News Producers, Politically Engaged Citizens, and Social Movement Organizations Online
- 17 Oct Edward Grefenstette (Facebook AI) — Teaching Artificial Agents to Understand Language by Modelling Reward
- 24 Oct Alexis Conneau (Facebook AI) — Learning cross-lingual text representations
- 31 Oct Sebastian Ruder (DeepMind) — Unsupervised cross-lingual representation learning
- 7 Nov **NO MEETING**
- 14 Nov Mohit Iyyer (U Mass) — Rethinking Transformers for machine translation and story generation
- 21 Nov Jennifer Pan (Stanford) — Uncovering Hidden Political Activity with Data Science Tools and Social Science Approaches
- 5 Dec Jonathan Berant (Tel-Aviv U) — Understanding Complex Questions
Spring 2019
- 7 Feb Percy Liang (Stanford) — Can Language Robustify Learning?
- 14 Feb – 21 Mar **No meetings**
- 28 Mar Adji Bousso Dieng (Columbia) — Deep Bayesian Learning as a Paradigm for Text Modeling
- 4 Apr Jacob Devlin (Google) — BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
- 11 Apr Yulia Tsvetkov (CMU) — Towards a Computational Analysis of the Language of Veiled Manipulation
- 18 Apr Zachary Steinert-Threlkeld (UCLA) — The Effect of Violence, Cleavages, and Free-riding on Protest Size
- 25 Apr Rico Sennrich (Edinburgh) — Document-level Machine Translation: Recent Progress and The Crux of Evaluation
- 2 May Hannaneh Hajishirzi (UW)
- 9 May Rashida Richardson (AI Now Institute) — Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice
Fall 2018
- 13 Sep Erin Hengel (U Liverpool) — “Publishing while female: Are women held to higher standards? Evidence from peer review”
- 20 Sep *No meeting: New Directions in Analyzing Text as Data conference at UW*
- 21 Sep (Friday) Dan Roth (UPenn) (special NYC NLP talk at Cornell Tech campus)
- 27 Sep *No meeting*
- 4 Oct Emily Gade (UW/Moore-Sloan) — What Counts as Terrorism? An Examination of Terrorist Designations among U.S. Mass Shootings
- 11 Oct Marine Carpuat (U Maryland) — Semantic and Style Divergences in Machine Translation
- 18 Oct Bryce Dietrich (U Iowa/Harvard) — Do Representatives Emphasize Some Groups More Than Others?
- 25 Oct Taylor Berg-Kirkpatrick (UCSD) — Unsupervised Models for Unlocking Language Data
- 1 Nov Laila Wahedi (Georgetown) — Constructing Networks From Social Media Text: How to Do It and When You Should, From Trolls to Journalists
- 8 Nov Jacob Andreas (Microsoft/MIT) — Learning by Narrating
- 15 Nov Sasha Rush (Harvard) — Controllable Text Generation with Deep Latent-Variable Models
- 22 Nov *No meeting: Thanksgiving*
- 29 Nov Kyle Gorman (CUNY) — Grammar engineering in text-to-speech synthesis
- 6 Dec Walter R. Mebane, Jr. (U Michigan) — What You Say You See is Who You Are: Observing Election Incidents in the United States via Twitter
- 13 Dec Philip Resnik (U Maryland) — Mental Health as an Application Area for Computational Linguistics: Prospects and Challenges
Spring 2018
- 25-Jan Omer Levy (UW) — Towards Understanding Deep Learning for Natural Language Processing
- 1-Feb Kevin Knight (USC) — What are Neural Sequence Models Doing?
- 8-Feb Bruno Gonçalves (NYU CDS / Aix-Marseille Université) — Spatio-temporal analysis of language use
- 15-Feb * No meeting *
- 22-Feb * No meeting *
- 1-Mar Maja Rudolph (Columbia) — Structured Embedding Models for Language Variation
- 8-Mar Justine Zhang (Cornell) — Unsupervised Models of Conversational Dynamics
- 15-Mar *No meeting: spring break*
- 22-Mar Elliott Ash (U of Warwick / ETH Zurich) — Proportional Representation Increases Party Politics: Evidence from New Zealand Parliament using a Supervised Topic Model
- 29-Mar Luke Zettlemoyer (UW) — End-to-end Learning for Broad Coverage Semantics
- 5-Apr Marie-Catherine de Marneffe (Ohio State) — Computational pragmatics: a case study of “speaker commitment”
- 12-Apr Graham Neubig (CMU) — What Can Neural Networks Teach us about Language?
- 19-Apr Ray Mooney (UT Austin) — Ensembles and Explanation for Visual Question Answering
- 26-Apr Sarah Bouchat (Northwestern) — Making a Long Story Short: Eliciting Prior Information from Previously Published Research
- 3-May Ben Lauderdale (LSE) — Unsupervised Methods for Extracting Political Positions from Text
Fall 2017
- 21-Sep Dean Knox (Microsoft Research/Princeton) and Christopher Lucas (Harvard) — Measuring Speaker Affect in Audio Data: Dynamics of Supreme Court Oral Arguments
- 28-Sep Rich Nielsen (MIT) — Text Analysis of Internet Islam
- 5-Oct Jordan Boyd-Graber (UMD) — Cooperative and Competitive Machine Learning through Question Answering
- 12-Oct **No meeting: Text as Data 2017 Conference at Princeton**
- 19-Oct Claire Cardie (Cornell) — Structured Prediction for Opinions and Arguments
- 26-Oct David Weiss (Google) — Parsimonious Representation Learning for NLP
- 2-Nov Emily Bender (UW) — Articulating How Our Data and Systems Do and Don’t Represent the World
- 9-Nov Hal Daumé III (UMD) — Learning Language Through Interaction
- 16-Nov Gerard De Melo (Rutgers) — Learning Semantics and Commonsense Knowledge from Heterogeneous Data
- 23-Nov **No meeting: Thanksgiving**
- 30-Nov Jenn Wortman Vaughan (Microsoft Research) — The Human Components of Machine Learning
- 7-Dec Damon Centola (UPenn) — The Emergence of Linguistic Norms: An Experimental Study of Cultural Evolution
- 14-Dec Fernando Diaz (Spotify) — Local Natural Language Processing in Information Retrieval
Summer 2017
- 26-Jul Yejin Choi (UW) — From Naive Physics to Connotation: Learning about the World from Language
Spring 2017
- 2-Feb Chris Callison-Burch (Penn) — The promise of crowdsourcing for natural language processing, public health, and other data sciences
- 9-Feb Vinod Prabhakaran (Stanford) — NLP and Society: Understanding Social Context from Language Use
- 16-Feb Matthew Denny (Penn State) and Arthur Spirling (NYU) — Text Preprocessing For Unsupervised Learning: Why It Matters, When It Misleads, And What To Do About It
- 23-Feb **No Meeting**
- 2-Mar **No Meeting**
- 9-Mar Jason Eisner (JHU)
- 16-Mar **No Meeting: Spring Break**
- 23-Mar Amber Boydstun (UC Davis)
- 30-Mar Hong Yu (UMass Medical)
- 6-Apr **No Meeting**
- 13-Apr **Meeting Cancelled**
- 20-Apr Yoav Artzi (Cornell Tech)
- 27-Apr Kosuke Imai (Princeton) and Brendan T. O’Connor (UMass Amherst)
Fall 2016
- 22-Sep David Bamman (Berkeley) — Beyond Bags of Words: Linguistic Structure in the Analysis of Text as Data
- 29-Sep Regina Barzilay (MIT) — How Can NLP Help Cure Cancer?
- 6-Oct Justin Grimmer (Stanford) — Exploratory and Confirmatory Causal Inference for High Dimensional Interventions
- 13-Oct **No Meeting: Text-as-Data Conference**
- 20-Oct Erin Baggott Carter (USC) — Propaganda and Protest: Evidence from Post-Cold War Africa (coauthored with Brett Carter)
- 27-Oct Matt Taddy (Chicago) — Measuring Polarization in High-Dimensional Data: Method and Application to Congressional Speech
- 3-Nov Ken Benoit (LSE) — Measuring and Explaining Political Sophistication Through Textual Complexity
- 10-Nov Edouard Grave (Facebook) — Large scale learning for natural language processing
- 17-Nov Lillian Lee (Cornell) — Can language change minds?
- 24-Nov **No Meeting: Thanksgiving**
- 1-Dec Gary King (Harvard) — An Improved Method of Automated Nonparametric Content Analysis for Social Science
- 8-Dec Sam Bowman (NYU) — Learning neural networks for sentence understanding with the Stanford NLI corpus
Spring 2016
- 4-Feb Marc Ratkovic (Princeton) — Estimating Common and Idiosyncratic Factors from Multiple Datasets
- 11-Feb David Mimno (Cornell) — Topic models without the randomness: new perspectives on deterministic algorithms
- 18-Feb Pablo Barberá (NYU) — Text vs Networks: Inferring Sociodemographic Traits of Social Media Users
- 25-Feb Jacob Eisenstein (GA Tech) — Sociolinguistic Structure Induction
- 3-Mar Slav Petrov (Google NY) — Towards Universal Syntactic Processing of Natural Language
- 10-Mar Laura Nelson (Northwestern, Kellogg) — Measuring Collective Cognitive Structures via Collectively Produced Text
- 17-Mar *Spring Break*
- 24-Mar Cristian Danescu-Niculescu-Mizil (Cornell) — Language and Social Dynamics
- 31-Mar Jason Weston (Facebook) — Evaluating Prerequisite Qualities for End-to-End Dialog Systems
- 7-Apr *MPSA Conference*
- 14-Apr Sven-Oliver Proksch (McGill) — Multilingual Sentiment Analysis: A New Approach to Measuring Conflict in Parliamentary Speeches
- 21-Apr Noémie Elhadad (Columbia) — Summarizing the Patient Record
- 28-Apr Molly Roberts (UCSD) — Matching Methods for High-Dimensional Data with Applications to Text
- 5-May Mark Dredze (JHU/Bloomberg) — Topic Models for Identifying Public Health Trends
Fall 2015
- 10-Sep Brandon Stewart (Princeton) — Text Analysis with Document Context: the Structural Topic Model
- 17-Sep Yacine Jernite (NYU) — Semi-supervised methods of text processing, and an application to medical concept extraction.
- 24-Sep Andrew Peterson (NYU) — Legislative Text and Regulatory Authority
- 1-Oct John Henderson (Yale) — Crowdsourcing Experiments to Estimate an Ideological Dimension in Text
- 8-Oct Ken Benoit (LSE) — Mining Multiword Expressions to Improve Bag of Words Models in Political Science Text Analysis
- 15-Oct Noah Smith (U of Washington) — Learning Political Embeddings from Text
- 19-Oct Intro to Text Analysis Using R, a one-day workshop led by Ken Benoit (LSE)
- 22-Oct David Blei (Columbia) — Probabilistic Topic Models and User Behavior
- 29-Oct Zubin Jelveh (NYU) and Suresh Naidu (Columbia) — Political Language in Economics
- 5-Nov Hanna Wallach (Microsoft Research) — The Bayesian Echo Chamber: Modeling Influence in Conversations
- 12-Nov Jacob Montgomery (WashU) — Funneling the Wisdom of Crowds: The SentimentIt Platform for Human Computation Text Analysis
- 19-Nov Michael Colaresi (Michigan State) — Learning Human Rights, Lefts, Ups and Downs: Using Lexical and Syntactic Features to Understand Evolving Human Rights Standards
- 3-Dec **Talk Cancelled**
- 10-Dec Bruno Gonçalves (NYU)