Denoising diffusion probabilistic models are optimally adaptive to unknown low dimensionality.
Zhihan Huang, Yuting Wei, Yuxin Chen
Stochastic Runge-Kutta methods: Provable acceleration of diffusion models.
Yuchen Wu, Yuxin Chen, Yuting Wei
A sharp convergence theory for the probability flow ODEs of diffusion models.
Gen Li, Yuting Wei, Yuejie Chi, Yuxin Chen
Towards a mathematical theory for consistency training in diffusion models.
Gen Li∗, Zhihan Huang∗, Yuting Wei (∗=equal contribution)
Theoretical insights for diffusion guidance: A case study for Gaussian mixture models.
Yuchen Wu, Minshuo Chen, Zihao Li, Mengdi Wang, Yuting Wei
Short version accepted to ICML, 2024
Accelerating convergence of score-based diffusion models, provably.
Gen Li∗, Yu Huang∗, Timofey Efimov, Yuting Wei, Yuejie Chi, Yuxin Chen (∗=equal contribution)
ICML, 2024
A non-asymptotic distributional theory of approximate message passing for sparse and robust regression.
Gen Li, Yuting Wei
Towards faster non-asymptotic convergence for diffusion-based generative models.
Gen Li, Yuting Wei, Yuxin Chen, Yuejie Chi
Approximate message passing from random initialization with applications to Z2 synchronization.
Gen Li, Wei Fan, Yuting Wei
Proceedings of the National Academy of Sciences (PNAS), 2023
A non-asymptotic framework for approximate message passing in spiked models.
Mitigating multiple descents: A model-agnostic framework for risk monotonization.
Pratik Patil, Arun Kumar Kuchibhotla, Yuting Wei, Alessandro Rinaldo
Minimum L1 interpolators: Precise asymptotics and multiple descent.
Yue Li, Yuting Wei
The Lasso with general Gaussian designs with applications to hypothesis testing.
Michael Celentano, Andrea Montanari, Yuting Wei (alphabetical order) (slides)
Annals of Statistics, 2023
Uniform consistency of cross validation estimators for high-dimensional ridge regression.
Pratik Patil, Yuting Wei, Alessandro Rinaldo, Ryan Tibshirani
AISTATS, oral presentation, 2021
Sharp statistical guarantees for adversarially robust Gaussian classification.
Chen Dan, Yuting Wei, Pradeep Ravikumar
ICML, 2020
Early stopping for kernel boosting algorithms: A general analysis with localized complexities.
Yuting Wei∗, Fanny Yang∗, Martin Wainwright (∗=equal contribution)
Short version accepted to NeurIPS, spotlight presentation, 2017
IEEE Transactions on Information Theory, 2019
Statistical inference for temporal difference learning with linear function approximation.
Weichen Wu, Gen Li, Yuting Wei, Alessandro Rinaldo
Hybrid reinforcement learning breaks sample size barriers in linear MDPs.
Kevin Tan, Wei Fan, Yuting Wei
NeurIPS, 2024
Federated natural policy gradient methods for multi-task reinforcement learning.
Tong Yang, Shicong Cen, Yuting Wei, Yuxin Chen, Yuejie Chi
Settling the sample complexity of model-based offline reinforcement learning.
Gen Li, Laixi Shi, Yuxin Chen, Yuejie Chi, Yuting Wei
Annals of Statistics, 2024
High-probability sample complexities for policy evaluation with linear function approximation.
Gen Li∗, Weichen Wu∗, Yuejie Chi, Cong Ma, Alessandro Rinaldo, Yuting Wei
IEEE Transactions on Information Theory, 2024
The curious price of distributional robustness in reinforcement learning with a generative model.
Laixi Shi, Gen Li, Yuting Wei, Yuxin Chen, Matthieu Geist, Yuejie Chi
Short version accepted to NeurIPS, 2023
Is Q-learning minimax optimal? A tight sample complexity analysis.
Gen Li, Changxiao Cai, Yuxin Chen, Yuting Wei, Yuejie Chi
Short version accepted to ICML, 2021
Operations Research, 2024
Breaking the sample size barrier in model-based reinforcement learning with a generative model.
Short version accepted to NeurIPS, 2020 (slides)
Minimax-optimal multi-agent RL in zero-sum Markov games with a generative model.
Gen Li, Yuejie Chi, Yuting Wei, Yuxin Chen
NeurIPS, oral presentation, 2023
Fast policy extragradient methods for competitive games with entropy regularization.
Shicong Cen, Yuting Wei, Yuejie Chi
Short version accepted to NeurIPS, 2021
Journal of Machine Learning Research, 2024
Pessimistic Q-learning for offline reinforcement learning: Towards optimal sample complexity.
Laixi Shi, Gen Li, Yuting Wei, Yuxin Chen, Yuejie Chi
ICML, 2022
Softmax policy gradient methods can take exponential time to converge.
Short version accepted to Conference on Learning Theory (COLT), 2021
Mathematical Programming, 2023
Sample-efficient reinforcement learning is feasible for linearly realizable MDPs with limited revisiting.
Gen Li, Yuxin Chen, Yuejie Chi, Yuantao Gu, Yuting Wei
NeurIPS, 2021 (slides)
Fast global convergence of natural policy gradient methods with entropy regularization.
Shicong Cen, Chen Cheng, Yuxin Chen, Yuting Wei, Yuejie Chi
Operations Research, 2022 (INFORMS George Nicholson award finalist)
Sample complexity of asynchronous Q-learning: sharper analysis and variance reduction.
Gen Li, Yuting Wei, Yuejie Chi, Yuantao Gu, Yuxin Chen
IEEE Transactions on Information Theory, 2021
Derandomizing Knockoffs.
Zhimei Ren, Yuting Wei, Emmanuel Candès
Journal of the American Statistical Association, 2023 (code)
Tackling small eigen-gaps: Fine-grained eigenvector estimation and inference under heteroscedastic noise.
Chen Cheng, Yuting Wei, Yuxin Chen
Randomized tests for high-dimensional regression: A more efficient and powerful solution.
Yue Li, Ilmun Kim, Yuting Wei
NeurIPS, 2020
The geometry of hypothesis testing over convex cones: Generalized likelihood tests and minimax radii.
Yuting Wei, Martin Wainwright, Adityanand Guntuboyina
Annals of Statistics, 2019
From Gauss to Kolmogorov: Localized measures of complexity for ellipses.
Yuting Wei, Billy Fang, Martin Wainwright
Electronic Journal of Statistics, 2020
The local geometry of testing in ellipses: Tight control via localized Kolmogorov widths.
Yuting Wei, Martin Wainwright
IEEE Transactions on Information Theory, 2020
Debiasing evaluations that are biased by evaluations.
Jingyan Wang, Ivan Stelmakh, Yuting Wei, Nihar B. Shah
Short version accepted to AAAI Conference on Artificial Intelligence, 2021
Adaptive estimation of planar convex sets.
Tony Cai, Adityanand Guntuboyina, Yuting Wei (alphabetical order)
Annals of Statistics, 2018
Sharp minimax bounds for testing discrete monotone distributions.
International Symposium on Information Theory (ISIT), 2016
Integration and transfer learning of single-cell transcriptomes via cFIT.
Minshi Peng, Yue Li, Brie Wamsley, Yuting Wei, Kathryn Roeder
Proceedings of the National Academy of Sciences (PNAS), 2021 (code)
Cell type hierarchy reconstruction via reconciliation of multi-resolution cluster tree.
Minshi Peng, Brie Wamsley, Andrew Elkins, Daniel M Geschwind, Yuting Wei, Kathryn Roeder
Nucleic Acids Research, 2021 (code)
MHC binding prediction with KernelRLSpan and its variations.
Wen-Jun Shen∗, Yuting Wei∗, Xin Guo∗, Stephen Smale, Hau-San Wong, Shuai Cheng Li (∗=equal contribution)
Journal of Immunological Methods, 2014