Machine Learning and Data Science Research
Data plays a critical role in all areas of IEOR, from theoretical developments in optimization and stochastics to applications in automation, logistics, health care, energy, finance, and other areas. Much of the recent interest in data science and machine learning has been spurred by the growing ability to apply vast computational power to large-scale datasets in nearly every application domain. Faculty and students in the UC Berkeley IEOR department conduct cutting-edge, interdisciplinary research in ML/DS on topics including scalable and memory-efficient learning algorithms, the integration of prediction and optimization models, sparse learning models, fairness, reinforcement learning and control, clustering and learning with network data, and applications of ML/DS across domains.
On Efficient and Scalable Computation of the Nonparametric Maximum Likelihood Estimator in Mixture Models
Zhang, Y., Cui, Y., Sen, B., & Toh, K. (2022). On Efficient and Scalable Computation of the Nonparametric Maximum Likelihood Estimator in Mixture Models. arXiv preprint. https://arxiv.org/abs/2208.07514
A New Perspective on Boosting in Linear Regression via Subgradient Optimization and Relatives
Freund, R. M., Grigas, P., & Mazumder, R. (2017). A New Perspective on Boosting in Linear Regression via Subgradient Optimization and Relatives. The Annals of Statistics, 45. https://doi.org/10.1214/16-AOS1505
Learning to Simulate Sequentially Generated Data via Neural Networks and Wasserstein Training
Zhu, T., Liu, H., & Zheng, Z. (2023). Learning to Simulate Sequentially Generated Data via Neural Networks and Wasserstein Training. ACM Transactions on Modeling and Computer Simulation. https://doi.org/10.1145/3583070
Logarithmic regret for episodic continuous-time linear-quadratic reinforcement learning over a finite-time horizon
Basei, M., Guo, X., Hu, A., & Zhang, Y. (2022). Logarithmic Regret for Episodic Continuous-Time Linear-Quadratic Reinforcement Learning over a Finite-Time Horizon. Journal of Machine Learning Research, 23(178), 1-34.
Conic Optimization for Quadratic Regression Under Sparse Noise
Molybog, I., Madani, R., & Lavaei, J. (2020). Conic Optimization for Quadratic Regression Under Sparse Noise. Journal of Machine Learning Research, 21. https://www.jmlr.org/papers/v21/18-881.html
When Demands Evolve Larger and Noisier: Learning and Earning in a Growing Environment
Zhu, F., & Zheng, Z. (2020). When Demands Evolve Larger and Noisier: Learning and Earning in a Growing Environment. International Conference on Machine Learning (ICML). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3637905
On Projection Robust Optimal Transport: Sample Complexity and Model Misspecification
Lin, T., Zheng, Z., Chen, E. Y., Cuturi, M., & Jordan, M. I. (2021). On Projection Robust Optimal Transport: Sample Complexity and Model Misspecification. International Conference on Artificial Intelligence and Statistics (AISTATS). https://arxiv.org/abs/2006.12301