Speaker: 吴雨晨 (Cornell University, USA)
Time: 10:30-11:30 AM, Friday, January 9, 2026
Venue: Conference Room 315, 国交二号楼
Abstract: In this talk, I will present recent theoretical advances in diffusion models, a class of deep generative models driving many cutting-edge applications. In the first part, I will introduce a training-free acceleration method for diffusion models. Our approach is simple to implement, compatible with any pre-trained diffusion model, and comes with a convergence rate that strengthens prior theoretical results. We demonstrate the effectiveness of our algorithm across multiple real-world image generation tasks. In the second part, I will discuss a new class of sampling algorithms whose design is based on the structure of diffusion models. Our approach replaces the score networks in the diffusion model architecture with more efficient denoising algorithms that encode information about the target distribution. As applications, we use our method for posterior sampling in two high-dimensional statistical problems: sparse regression and low-rank matrix estimation under the spiked model. In both cases, we develop algorithms with accuracy guarantees in the regime of constant signal-to-noise ratio.