Mingda Wan
About Me
Hi! I am Mingda Wan (万明达), a fourth-year undergraduate student at Anhui University.
Previously, I was fortunate to be advised by Zhao Song at UC Berkeley, and by Yingyu Liang and Zhenmei Shi at UW–Madison, working on machine learning theory and generative models.
Currently, I am looking for further research opportunities and an MPhil or PhD position starting in Fall 2025. If you believe my background aligns with your research interests, please feel free to reach out via email.
Email /
WeChat /
CV /
Google Scholar /
GitHub
Dream
I aspire to provide insightful theoretical analysis of machine learning methods and to leverage that theoretical understanding to guide the design and optimization of future techniques.
Publications
* denotes alphabetical author order
NRFlow: Towards Noise-Robust Generative Modeling via High-Order Flow Matching
Bo Chen*, Chengyue Gong*, Xiaoyu Li*, Yingyu Liang*, Zhizhou Sha*, Zhenmei Shi*, Zhao Song*, Mingda Wan*, Xugang Ye*
UAI 2025
Camera Ready Paper / Preview
Attention Scheme Inspired Softmax Regression
Zhihang Li*, Zhizhou Sha*, Zhao Song* and Mingda Wan*
ICLR 2025 Workshop
Camera Ready Paper
An Improved Sample Complexity for Rank-1 Matrix Sensing
Zhihang Li*, Zhizhou Sha*, Zhao Song* and Mingda Wan*
ICLR 2025 Workshop
Camera Ready Paper
High-Order Matching for One-Step Shortcut Diffusion Models
Bo Chen*, Chengyue Gong*, Xiaoyu Li*, Yingyu Liang*, Zhizhou Sha*, Zhenmei Shi*, Zhao Song* and Mingda Wan*
ICLR 2025 Workshop
Camera Ready Paper
Theoretical Constraints on the Expressive Power of RoPE-based Tensor Attention Transformers
Xiaoyu Li*, Yingyu Liang*, Zhenmei Shi*, Zhao Song* and Mingda Wan*
arXiv preprint arXiv:2412.18040
arXiv Link
HOFAR: High-Order Augmentation of Flow Autoregressive Transformers
Yingyu Liang*, Zhizhou Sha*, Zhenmei Shi*, Zhao Song* and Mingda Wan*
arXiv preprint arXiv:2503.08032
arXiv Link
Force Matching with Relativistic Constraints: A Physics-Inspired Approach to Stable and Efficient Generative Modeling
Yang Cao*, Bo Chen*, Xiaoyu Li*, Yingyu Liang*, Zhizhou Sha*, Zhenmei Shi*, Zhao Song* and Mingda Wan*
arXiv preprint arXiv:2502.08150
arXiv Link