CDS faculty (including associated and affiliated faculty), researchers, and student authors are marked below with an asterisk (*):
- 2D-Shapley: A Framework for Fragmented Data Valuation: Zhihong Liu, Hoang Anh Just, Xiangyu Chang, *Xi Chen, Ruoxi Jia
- A Generalization of ViT/MLP-Mixer to Graphs: Xiaoxin He, Bryan Hooi, Thomas Laurent, Adam Perold, *Yann LeCun, Xavier Bresson
- Adaptive Whitening in Neural Populations with Gain-modulating Interneurons: Lyndon Duong, David Lipshutz, *David Heeger, Dmitri Chklovskii, *Eero Simoncelli
- An Effective Meaningful Way to Evaluate Survival Models: Shi-ang Qi, Neeraj Kumar, Mahtab Farrokh, Weijie Sun, Li-Hao Kuan, *Rajesh Ranganath, Ricardo Henao, Russell Greiner
- Diagnosis, Feedback, Adaptation: A Human-in-the-Loop Framework for Test-time Policy Adaptation: Andi Peng, Aviv Netanyahu, *Mark K Ho, Tianmin Shu, Andreea Bobu, Julie Shah, Pulkit Agrawal
- Distilling Internet-Scale Vision-Language Models into Embodied Agents: Theodore R Sumers, Kenneth Marino, Arun Ahuja, *Rob Fergus, Ishita Dasgupta
- Drug Discovery under Covariate Shift with Domain-Informed Prior Distributions over Functions: Leo Klarner, *Tim G. J. Rudner, Michael Reutlinger, Torsten Schindler, Garrett M Morris, Charlotte Deane, Yee Whye Teh
- Evaluating Unsupervised Denoising Requires Unsupervised Metrics: Adria Marcos-Morales, Matan Leibovich, Sreyas Mohan, Joshua Lawrence Vincent, Piyush Haluai, Mai Tan, Peter Crozier, *Carlos Fernandez-Granda
- Extrapolative Controlled Sequence Generation via Iterative Refinement: *Vishakh Padmakumar, Richard Yuanzhe Pang, *He He, Ankur Parikh
- Function-Space Regularization in Neural Networks: A Probabilistic Perspective: *Tim G. J. Rudner, Sanyam Kapoor, Shikai Qiu, *Andrew Wilson
- HyperTuning: Toward Adapting Large Language Models without Back-propagation: *Jason Phang, Yi Mao, Pengcheng He, Weizhu Chen
- Learning useful representations for shifting tasks and distributions: *Jianyu Zhang, Leon Bottou
- Minimax estimation of discontinuous optimal transport maps: The semi-discrete case: *Aram-Alexandre Pooladian, Vincent Divol, *Jonathan Niles-Weed
- Model Ratatouille: Recycling Diverse Models for Out-of-Distribution Generalization: Alexandre Rame, Kartik Ahuja, *Jianyu Zhang, Matthieu Cord, Leon Bottou, David Lopez-Paz
- Multi-Fidelity Covariance Estimation in the Log-Euclidean Geometry: Aimee Maurais, Terrence Alsup, *Benjamin Peherstorfer, Youssef Marzouk
- Multisample Flow Matching: Straightening Flows with Minibatch Couplings: *Aram-Alexandre Pooladian, Heli Ben-Hamu, Carles Domingo-Enrich, Brandon Amos, Yaron Lipman, Ricky Tian Qi Chen
- Optimization for Amortized Inverse Problems: Tianci Liu, Tong Yang, Quan Zhang, *Qi Lei
- Perturbation analysis of neural collapse: Tom Tirer, Haoxiang Huang, *Jonathan Niles-Weed
- Pretraining Language Models with Human Preferences: Tomasz Korbak, Kejian Shi, Angelica Chen, Rasika Bhalerao, Christopher L. Buckley, *Jason Phang, *Samuel R. Bowman, Ethan Perez
- RankMe: Assessing the Downstream Performance of Pretrained Self-Supervised Representations by Their Rank: Quentin Garrido, Randall Balestriero, Laurent Najman, *Yann LeCun
- Reduce, Reuse, Recycle: Compositional Generation with Energy-Based Diffusion Models and MCMC: Yilun Du, Conor Durkan, Robin Strudel, Josh Tenenbaum, Sander Dieleman, *Rob Fergus, Jascha Sohl-Dickstein, Arnaud Doucet, Will Grathwohl
- Self-supervised learning of Split Invariant Equivariant representations: Quentin Garrido, Laurent Najman, *Yann LeCun
- Simple and Fast Group Robustness by Automatic Feature Reweighting: Shikai Qiu, *Andres Potapczynski, Pavel Izmailov, *Andrew Wilson
- Statistical whitening of neural populations with gain-modulating interneurons: Lyndon R. Duong, David Lipshutz, *David J. Heeger, Dmitri B. Chklovskii, *Eero P. Simoncelli
- The SSL Interplay: Augmentations, Inductive Bias, and Generalization: Vivien Cabannes, Bobak T Kiani, Randall Balestriero, *Yann LeCun, Alberto Bietti
- Towards Understanding and Improving GFlowNet Training: Max Shen, Emmanuel Bengio, Ehsan Hajiramezanali, Andreas Loukas, *Kyunghyun Cho, Tommaso Biancalani
- User-defined Event Sampling and Uncertainty Quantification in Diffusion Models for Physical Dynamical Systems: Marc Finzi, Anudhyan Boral, *Andrew Wilson, Fei Sha, Leonardo Zepeda-Nunez
- When do Minimax-fair Learning and Empirical Risk Minimization Coincide?: *Harvineet Singh, Matthäus Kleindessner, Volkan Cevher, *Rumi Chunara, Chris Russell
- Why did the Model Fail?: Attributing Model Performance Changes to Distribution Shifts: Haoran Zhang, *Harvineet Singh, Marzyeh Ghassemi, Shalmali Joshi
ICML Workshops
- DMLR Workshop: Data-centric Machine Learning Research: Ce Zhang, Praveen Paritosh, Newsha Ardalani, Nezihe Merve Gürel, William Gaviria Rojas, Yang Liu, Rotem Dror, Manil Maskey, Lilith Bat-Leah, Tzu-Sheng Kuo, Luis Oala, Max Bartolo, Ludwig Schmidt, Alicia Parrish, Daniel Kondermann, *Najoung Kim
- Localized Learning: Decentralized Model Updates via Non-Global Objectives: David I. Inouye, *Mengye Ren, Mateusz Malinowski, Michael Eickenberg, Gao Huang, Eugene Belilovsky
- The Second Workshop on Spurious Correlations, Invariance and Stability: *Yoav Wald, Claudia Shi, Aahlad Puli, Amir Feder, Limor Gultchin, Mark Goldstein, Maggie Makar, Victor Veitch, Uri Shalit
- Workshop on Computational Biology:
- A Variational Inference Approach to Single-Cell Gene Regulatory Network Inference using Probabilistic Matrix Factorization: *Claudia Skok Gibbs (New York University); *Omar Mahmood (New York University); Richard Bonneau (New York University); *Kyunghyun Cho (New York University) (Winner of Best Paper Award)
- BOtied: Multi-objective Bayesian optimization with tied multivariate ranks: Natasa Tagasovska (Prescient Design, Genentech); Ji Won Park (Prescient Design, Genentech/Roche); Michael Maser (Prescient Design, Genentech); Stephen Ra (Prescient Design, Genentech); *Kyunghyun Cho (New York University)