Machine Learning and Data Science Research
Data plays a critical role in all areas of IEOR, from theoretical developments in optimization and stochastics to applications in automation, logistics, health care, energy, finance, and beyond. Much of the recent interest in data science and machine learning has been spurred by the growing ability to apply vast computational power to large-scale datasets in nearly every application domain. Faculty and students in the UC Berkeley IEOR department are engaged in cutting-edge, interdisciplinary research in ML/DS on topics including scalable and memory-efficient learning algorithms, the integration of prediction and optimization models, sparse learning, fairness in machine learning, reinforcement learning and control, clustering and learning with network data, and applications of ML/DS to a wide range of domains.
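As one concrete illustration of the sparse-learning theme mentioned above, the sketch below solves a lasso regression problem with proximal gradient descent (ISTA). It is a minimal, generic Python example: the synthetic data, step size, and regularization weight are arbitrary illustrative choices and are not taken from any of the publications listed further down.

```python
# Minimal sketch: sparse linear regression via ISTA (iterative soft-thresholding).
# Illustrative only; not tied to any specific publication on this page.
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the l1 norm: shrink each entry toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_lasso(X, y, lam=1.0, n_iter=500):
    """Minimize 0.5*||X w - y||^2 + lam*||w||_1 with proximal gradient steps."""
    n, d = X.shape
    step = 1.0 / np.linalg.norm(X, ord=2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)                 # gradient of the smooth (least-squares) part
        w = soft_threshold(w - step * grad, step * lam)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 50))
    w_true = np.zeros(50)
    w_true[:5] = rng.standard_normal(5)          # only 5 nonzero coefficients
    y = X @ w_true + 0.01 * rng.standard_normal(200)
    w_hat = ista_lasso(X, y, lam=1.0)
    print("nonzeros recovered:", np.count_nonzero(np.abs(w_hat) > 1e-3))
```

The l1 penalty drives most coefficients exactly to zero, which is what makes sparse models both interpretable and memory-efficient at scale.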
Faculty
Selected Publications
In Situ Answer Sentence Selection at Web-scale
Zhang, Zeyu & Vu, Thuy & Moschitti, Alessandro. (2024). In Situ Answer Sentence Selection at Web-scale. 4298-4302. 10.1145/3627673.3679946.
A survey on geocoding: algorithms and datasets for toponym resolution
Zhang, Zeyu & Bethard, Steven. (2024). A survey on geocoding: algorithms and datasets for toponym resolution. Language Resources and Evaluation. 1-22. 10.1007/s10579-024-09730-2.
Improving Toponym Resolution by Predicting Attributes to Constrain Geographical Ontology Entries
Zhang, Zeyu & Laparra, Egoitz & Bethard, Steven. (2024). Improving Toponym Resolution by Predicting Attributes to Constrain Geographical Ontology Entries. 35-44. 10.18653/v1/2024.naacl-short.3.
Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch
Wu, Xidong & Gao, Shangqian & Zhang, Zeyu & Li, Zhenzhen & Bao, Runxue & Zhang, Yanfu & Wang, Xiaoqian & Huang, Heng. (2024). Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch. 16163-16173. 10.1109/CVPR52733.2024.01530.
Reinforcement Learning from Answer Reranking Feedback for Retrieval-Augmented Answer Generation
Nguyen, Minh & Nguyen, Toan & KC, Kishan & Zhang, Zeyu & Vu, Thuy. (2024). Reinforcement Learning from Answer Reranking Feedback for Retrieval-Augmented Answer Generation. 4044-4048. 10.21437/Interspeech.2024-2147.
Regret Analysis of Learning-Based MPC With Partially-Unknown Cost Function
Dogan, Ilgin & Shen, Max & Aswani, Anil. (2024). Regret Analysis of Learning-Based MPC With Partially-Unknown Cost Function. IEEE Transactions on Automatic Control. PP. 1-8. 10.1109/TAC.2023.3328827.
Dynamic Pricing with External Information and Inventory Constraint
Li, Xiaocheng & Zheng, Zeyu. (2023). Dynamic Pricing with External Information and Inventory Constraint. Management Science. 10.1287/mnsc.2023.4963.
Behavioral Analytics for Myopic Agents
Mintz, Yonatan & Aswani, Anil & Kaminsky, Philip & Fukuoka, Yoshimi. (2023). Behavioral Analytics for Myopic Agents. European Journal of Operational Research. 310. 10.1016/j.ejor.2023.03.034.
Estimating and Incentivizing Imperfect-Knowledge Agents with Hidden Rewards
Dogan, Ilgin & Shen, Max & Aswani, Anil. (2023). Estimating and Incentivizing Imperfect-Knowledge Agents with Hidden Rewards. 10.48550/arXiv.2308.06717.
On Efficient and Scalable Computation of the Nonparametric Maximum Likelihood Estimator in Mixture Models
Zhang, Y., Cui, Y., Sen, B., & Toh, K. (2022). On Efficient and Scalable Computation of the Nonparametric Maximum Likelihood Estimator in Mixture Models. arXiv preprint arXiv:2208.07514.
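The last entry above concerns the nonparametric maximum likelihood estimator (NPMLE) of a mixing distribution. Purely as a point of reference, the sketch below shows the textbook fixed-grid EM baseline for a Gaussian location mixture with unit variance; it is not the scalable method developed in that paper, and the grid choice (support at the data points) and iteration count are arbitrary illustrative assumptions.

```python
# Fixed-grid EM baseline for the NPMLE of a Gaussian location mixture (unit variance).
# Illustrative reference implementation only, not the scalable algorithm from the cited paper.
import numpy as np
from scipy.stats import norm

def npmle_fixed_grid(x, grid=None, n_iter=500):
    """Maximize sum_i log( sum_j w_j * N(x_i; grid_j, 1) ) over mixture weights w."""
    x = np.asarray(x, dtype=float)
    if grid is None:
        grid = np.unique(x)                      # common heuristic: put support at the data points
    L = norm.pdf(x[:, None], loc=grid[None, :], scale=1.0)   # n-by-m likelihood matrix
    w = np.full(grid.size, 1.0 / grid.size)      # start from uniform weights
    for _ in range(n_iter):
        dens = L @ w                             # current mixture density at each observation
        w = w * (L / dens[:, None]).mean(axis=0) # EM update; weights stay normalized
    return grid, w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-2.0, 1.0, 150), rng.normal(2.0, 1.0, 150)])
    grid, w = npmle_fixed_grid(x)
    print("grid points with weight > 1%:", grid[w > 0.01])
```

Because the log-likelihood is concave in the weights for a fixed grid, this simple scheme converges, but it scales poorly with the sample size; addressing that scalability is the focus of the publication cited above.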