Paul Grigas — Learning, Optimization, and Generalization in the Predict-then-Optimize Setting
March 9 @ 3:30 pm - 4:30 pm
Abstract: In the predict-then-optimize setting, the parameters of an optimization task are predicted based on contextual features, and it is desirable to leverage the structure of the underlying optimization task when training a machine learning model. A natural loss function in this setting is based on the cost of the decisions induced by the predicted parameters, in contrast to standard measures of prediction error. While directly optimizing this loss function is computationally challenging, we propose the use of a novel convex surrogate loss function, which we prove is statistically consistent under mild conditions. We also provide an assortment of novel generalization bounds, including bounds based on a combinatorial complexity measure and substantially improved bounds under an additional strong convexity assumption. Finally, we discuss extensions and opportunities for further developing new results and methodologies in the predict-then-optimize setting.
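The decision-focused loss the abstract contrasts with standard prediction error can be sketched concretely. The snippet below is a minimal illustration, not material from the talk: the feasible set, cost vectors, and helper names (`solve`, `spo_loss`) are assumptions chosen for exposition, with the loss taken as the excess true cost of the decision induced by the predicted parameters.

```python
# Illustrative sketch of a decision-based loss for predict-then-optimize.
# The feasible set, cost vectors, and function names here are assumptions
# for exposition, not definitions given in the talk announcement.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def solve(c, feasible):
    """Oracle for the optimization task: minimize c^T w over a
    finite feasible set (e.g. the vertices of a polytope)."""
    return min(feasible, key=lambda w: dot(c, w))

def spo_loss(c_pred, c_true, feasible):
    """Excess cost, under the true parameters, of the decision
    induced by the predicted parameters."""
    w_pred = solve(c_pred, feasible)
    w_true = solve(c_true, feasible)
    return dot(c_true, w_pred) - dot(c_true, w_true)

# Two candidate decisions; the prediction ranks them incorrectly.
S = [(1, 0), (0, 1)]
c_true = (1.0, 3.0)   # under the true costs, decision (1, 0) is optimal
c_pred = (2.0, 1.0)   # the predicted costs select (0, 1) instead

print(spo_loss(c_pred, c_true, S))  # decision cost gap: 3.0 - 1.0 = 2.0
```

Note that a prediction can have large squared error yet zero decision loss, as long as it induces the same optimal decision; this is the sense in which the decision-based loss differs from standard prediction error.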
Bio: Paul Grigas is an assistant professor of Industrial Engineering and Operations Research at the University of California, Berkeley. Paul's research interests are in large-scale optimization, statistical machine learning, and data-driven decision making. He is also broadly interested in the applications of data analytics, and he has worked on applications in online advertising. Paul's research is funded by the National Science Foundation, including an NSF CRII Award. Paul was awarded the 2015 INFORMS Optimization Society Student Paper Prize and an NSF Graduate Research Fellowship. He received his B.S. in Operations Research and Information Engineering (ORIE) from Cornell University in 2011, and his Ph.D. in Operations Research from MIT in 2016.