Prof. Paul Grigas Wins INFORMS Junior Faculty Interest Group Paper Competition

Paul Grigas, Berkeley IEOR

Berkeley IEOR Professor Paul Grigas has been selected as the winner of the 2020 INFORMS Junior Faculty Interest Group (JFIG) Paper Competition for his paper Smart “Predict, then Optimize”, co-authored with Adam Elmachtoub of Columbia University.

Grigas and Elmachtoub were selected for proposing a new prediction and optimization framework, called Smart “Predict, then Optimize” (SPO), which directly leverages the structure of the optimization problem to design better prediction models. A key result of their work is the SPO loss function, which measures the decision error induced by a prediction.

As part of the paper competition, all selected finalists presented their work virtually at the INFORMS Annual Conference on November 8, 2020. Among the finalists, Grigas and Elmachtoub were awarded first place. Berkeley IEOR extends hearty congratulations to the researchers on the award!

The abstract of the paper is provided below. To access the paper, click here.

Abstract: Many real-world analytics problems involve two significant challenges: prediction and optimization. Due to the typically complex nature of each challenge, the standard paradigm is predict-then-optimize. By and large, machine learning tools are intended to minimize prediction error and do not account for how the predictions will be used in the downstream optimization problem. In contrast, we propose a new and very general framework, called Smart “Predict, then Optimize” (SPO), which directly leverages the optimization problem structure, i.e., its objective and constraints, for designing better prediction models. A key component of our framework is the SPO loss function, which measures the decision error induced by a prediction.

Training a prediction model with respect to the SPO loss is computationally challenging, and thus we derive, using duality theory, a convex surrogate loss function which we call the SPO+ loss. Most importantly, we prove that the SPO+ loss is statistically consistent with respect to the SPO loss under mild conditions. Our SPO+ loss function can tractably handle any polyhedral, convex, or even mixed-integer optimization problem with a linear objective. Numerical experiments on shortest path and portfolio optimization problems show that the SPO framework can lead to significant improvement under the predict-then-optimize paradigm, in particular when the prediction model being trained is misspecified. We find that linear models trained using the SPO+ loss tend to dominate random forest algorithms, even when the ground truth is highly nonlinear.
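To make the two losses concrete, here is a minimal sketch in Python of how they can be computed for a toy problem whose feasible region has a small set of extreme points that are simply enumerated. The data and the enumeration-based solver are illustrative assumptions, not the paper’s experiments; the formulas in the comments follow the paper’s definitions (the SPO loss is the excess cost of acting on a prediction, and the SPO+ loss is the duality-based convex surrogate).

```python
import numpy as np

# Toy decisions: extreme points of the feasible region S (hypothetical data).
# The nominal problem is min_{w in S} c^T w, solved here by enumeration.
W = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])

def solve(c):
    """Return the optimal value z*(c) and an optimal decision w*(c)."""
    vals = W @ c
    i = int(np.argmin(vals))
    return vals[i], W[i]

def spo_loss(c_hat, c):
    """SPO loss: true cost of the decision induced by the prediction,
    minus the best cost achievable in hindsight: c^T w*(c_hat) - z*(c)."""
    _, w_hat = solve(c_hat)
    z_star, _ = solve(c)
    return float(c @ w_hat - z_star)

def spo_plus_loss(c_hat, c):
    """SPO+ surrogate: max_{w in S} {(c - 2 c_hat)^T w} + 2 c_hat^T w*(c) - z*(c)."""
    z_star, w_star = solve(c)
    return float(np.max(W @ (c - 2.0 * c_hat)) + 2.0 * c_hat @ w_star - z_star)

c_true = np.array([1.0, 2.0, 3.0])
c_pred = np.array([2.5, 2.0, 3.0])   # misranks the costs of the first two decisions

print(spo_loss(c_pred, c_true))       # 1.0: one unit of excess cost
print(spo_plus_loss(c_pred, c_true))  # 2.0: upper-bounds the SPO loss
```

In this example the prediction misranks the costs and so induces the wrong decision, making the SPO loss positive; the SPO+ value upper-bounds it, which, together with its convexity in the prediction, is what makes the surrogate usable for training.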