BayOpt 2023

The Sixth Bay Area Optimization Meeting will be hosted at the University of California, Berkeley. The event will take place in Room 100 of Mudd Hall on May 5, 2023. The meeting will bring together leaders in optimization, variational analysis, and applications. Several lectures by experts in the field will describe state-of-the-art models and algorithms as well as real-world applications from the public and private sectors.

Please register to attend BayOpt 2023: Registration Form

Golden Gate Bridge

Location:

University of California, Berkeley

Mudd Hall, Room 100
Seeley G. Mudd Bldg
1798 Scenic Ave
Berkeley, CA 94709

Public parking is available at the Lower Hearst Parking Structure (fee required).

Program Committee:

Anil Aswani, University of California, Berkeley (Local organizer)

Johannes O. Royset, Naval Postgraduate School (Chair)

John Duchi, Stanford University

Program:

09:10      Welcome & Introduction

09:20      Jose Blanchet, Stanford

10:00      Qi Gong, UC Santa Cruz

10:40      Coffee Break

11:10      Nika Haghtalab, UC Berkeley

11:50      Yueyue Fan, UC Davis

12:30      Lunch*

14:10      Krishnakumar Balasubramanian, UC Davis

14:50      Alper Atamturk, UC Berkeley

15:30      Coffee Break

16:00      Aaron Sidford, Stanford

*Participants are responsible for their own lunch. You can find a list of restaurants here: Best of Berkeley Restaurants. There are also a number of great restaurants on Euclid Avenue (a five-minute walk from Mudd Hall).

Registration:

There is no registration fee, but attendees must register at the link found here. Registration deadline details are forthcoming.

Titles and Abstracts:

Speaker: Jose Blanchet

Title: Advances in Distributionally Robust Optimization (DRO): Unifications, Extensions, and Applications

Abstract: We will discuss recent developments in distributionally robust optimization, including a tractable class of problems that simultaneously unifies and extends most of the formulations studied in DRO (including phi-divergence, inverse-phi-divergence, Wasserstein, and Sinkhorn). This unification is based on optimal transport theory with martingale constraints. We discuss various benefits of the flexibility offered by these formulations in connection with, for example, the theory of epi-convergence and statistical robustness. We apply some of these new developments to optimal portfolio selection. Our implementations are motivated by intriguing experiments that show unexpected out-of-sample performance of non-robust policies on real data.

This talk is partly based on joint work with Daniel Kuhn, Jiajin Li, Yiping Lu, and Bahar Taskesen.
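For readers less familiar with the area, a generic DRO problem of the kind unified in this talk can be written (in standard notation, not necessarily the speaker's) as

\[
\min_{x \in X} \; \sup_{Q \,:\, D(Q,\widehat{P}_n) \le \delta} \; \mathbb{E}_{Q}\big[\ell(x,\xi)\big],
\]

where $\widehat{P}_n$ is the empirical distribution of the data, $\ell$ is a loss function, $D$ is a discrepancy between distributions (e.g., a phi-divergence, the Wasserstein distance, or a Sinkhorn divergence), and $\delta \ge 0$ is the radius of the ambiguity set. The unification described above replaces $D$ by an optimal-transport cost with martingale constraints; the precise formulation is given in the talk.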

Short Bio: Jose Blanchet is a Professor of Management Science and Engineering (MS&E) at Stanford. Prior to joining MS&E, he was a professor at Columbia (Industrial Engineering and Operations Research, and Statistics, 2008-2017), and before that he taught at Harvard (Statistics, 2004-2008). Jose is a recipient of the 2010 Erlang Prize and several best publication awards in areas such as applied probability, simulation, operations management, and revenue management. He also received a Presidential Early Career Award for Scientists and Engineers in 2010. He worked as an analyst at Protego Financial Advisors, a leading investment bank in Mexico. He has research interests in applied probability and Monte Carlo methods. He is the Area Editor of Stochastic Models in Mathematics of Operations Research. He has served on the editorial board of Advances in Applied Probability, Bernoulli, Extremes, Insurance: Mathematics and Economics, Journal of Applied Probability, Queueing Systems: Theory and Applications, and Stochastic Systems, among others.

Speaker: Qi Gong

Title: Model-based Data-driven Learning Methods for Optimal Feedback Control

Abstract: Computing optimal feedback controls for nonlinear systems generally requires solving Hamilton-Jacobi-Bellman (HJB) equations, which, in high dimensions, is a well-known challenging problem due to the curse of dimensionality. In this talk, we present a model-based data-driven method to approximate solutions to HJB equations for high-dimensional nonlinear systems. To accomplish this, we model solutions to HJB equations with neural networks trained on data generated without any state-space discretization. Training is made more effective and efficient by leveraging the known physics of the problem and by generating training data in an adaptive fashion. We further develop different neural network approximation structures to improve robustness during learning and to enhance the closed-loop stability of the learned controller.
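As a rough illustration of the general idea (this is not the speakers' code, and the data-generation step is a placeholder), one can fit a small neural network V_theta(x) to sampled value-function data (x_i, V(x_i)) by least squares; in the actual method such data would come from solving open-loop optimal control problems at sampled states, and training would further exploit the known problem physics and adaptive sampling.

# Minimal sketch: regress a neural-network surrogate V_theta(x) onto sampled
# value-function data, as one might when approximating an HJB solution without
# state-space discretization. All quantities below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 2000                            # state dimension, number of samples
X = rng.uniform(-1.0, 1.0, (n, d))        # sampled states (no grid)
V = np.sum(X**2, axis=1, keepdims=True)   # stand-in for computed optimal values

h = 64                                    # two-layer network V_theta(x)
W1 = rng.normal(0.0, 0.5, (d, h)); b1 = np.zeros(h)
W2 = rng.normal(0.0, 0.5, (h, 1)); b2 = np.zeros(1)

lr = 1e-2
for it in range(2000):
    Z = np.tanh(X @ W1 + b1)              # hidden features
    err = Z @ W2 + b2 - V                 # residual on the sampled data
    gW2 = Z.T @ err / n; gb2 = err.mean(axis=0)
    dZ = (err @ W2.T) * (1.0 - Z**2)      # backprop through tanh
    gW1 = X.T @ dZ / n; gb1 = dZ.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - V) ** 2))
print("training MSE of the surrogate:", mse)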

Short Bio: Qi Gong received his Ph.D. in Systems and Control Engineering from Case Western Reserve University in 2004. That same year, he moved to the Department of Mechanical and Astronautical Engineering at the Naval Postgraduate School, where he was a National Research Council postdoc and research associate. Dr. Gong joined the Department of Applied Mathematics at the University of California, Santa Cruz in 2008 as an Assistant Professor; he was promoted to Associate Professor in 2012 and to Professor in 2018. He chaired the Applied Mathematics department from 2019 to 2022. His current research focuses on learning-based feedback control design, computational optimal control, and control applications.

Speaker: Aaron Sidford

Title: Efficiently Minimizing the Maximum Loss

Abstract: In this talk, I will discuss recent advances in the fundamental robust optimization problem of minimizing the maximum of a finite number of convex loss functions. In particular, I will show how to develop stochastic methods for approximately solving this problem with a near-optimal number of gradient queries. Along the way, I will cover several optimization techniques of broader utility, including accelerated methods based on ball-optimization oracles and stochastic bias-reduced gradient methods.

This talk is based in part on joint work with Hilal Asi, Yair Carmon, Arun Jambulapati, and Yujia Jin, including https://arxiv.org/abs/2105.01778 and https://arxiv.org/abs/2106.09481.
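As background only (these are not the accelerated, stochastic methods of the talk), a plain subgradient method for min_x max_i f_i(x) over a finite family of convex losses can be sketched as follows; the absolute-deviation losses are placeholders.

# Background sketch: subgradient descent on F(x) = max_i f_i(x) with convex
# f_i(x) = |a_i^T x - b_i| (placeholders). The talk's methods achieve a
# near-optimal number of gradient queries; this basic method does not.
import numpy as np

rng = np.random.default_rng(1)
m, d = 50, 10
A = rng.normal(size=(m, d))
b = rng.normal(size=m)

def max_loss(x):
    return float(np.max(np.abs(A @ x - b)))

x = np.zeros(d)
best_val = max_loss(x)
for t in range(1, 5001):
    r = A @ x - b
    i = int(np.argmax(np.abs(r)))          # index attaining the maximum loss
    g = np.sign(r[i]) * A[i]               # a subgradient of max_i f_i at x
    x = x - g / np.sqrt(t)                 # diminishing step size
    best_val = min(best_val, max_loss(x))  # subgradient steps need not descend

print("best maximum loss found:", best_val)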

Short Bio: Aaron Sidford is an assistant professor in the departments of Management Science and Engineering and Computer Science at Stanford University. He received his PhD from the Electrical Engineering and Computer Science Department at the Massachusetts Institute of Technology, where he was advised by Professor Jonathan Kelner. His research interests lie broadly in the design and analysis of algorithms, optimization theory, and the theory of computation with an emphasis on work at the intersection of continuous optimization, graph theory, numerical linear algebra, and data structures. He is the recipient of a Microsoft Research Faculty Fellowship, a Sloan Research Fellowship, an NSF CAREER Award, an ACM Doctoral Dissertation Award honorable mention, and best paper awards in COLT, FOCS, and SODA for work in these areas.

Speaker: Yueyue Fan

Title: Parametric programming for a convex network optimization problem: a new geometric perspective for uncertainty mapping from travel demand space to network flow space

Abstract: Network optimization problems appear in many transportation and logistics applications. In this talk, I will discuss new results that improve our understanding of how uncertainty propagates from demand to network flows over a network structure. Unlike traditional approaches that focus on a specific demand value or a small neighborhood of one, we approach the problem directly from a parametric programming perspective and establish a relationship between the problem's input and output spaces: the demand space and the network flow space. First, I will present a minimum-norm solution mapping (MNSM), established using concepts from variational analysis to overcome the challenges posed by set-valued mappings. This mapping has desirable mathematical properties, including well-definedness, uniqueness, and continuity, which are necessary for analytical and numerical purposes. I will then show that, when link cost functions are piecewise linear, the parametric network optimization problem can be treated over a finite number of partitions with efficient computational methods, and that a neat linear relation between the transformed demand and transformed network flow spaces can be established in each partition. Finally, I will discuss how insights from this seemingly abstract problem connect to emerging problems in the infrastructure systems domain, including learning- and equity-related challenges.
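In generic notation (the talk's precise setting may differ), the parametric problem can be written as a convex network flow problem indexed by the demand vector $d$,

\[
X^\ast(d) \;=\; \arg\min_{x \ge 0} \Big\{ \textstyle\sum_{a} c_a(x_a) \;:\; N x = d \Big\},
\]

where $N$ encodes the flow-conservation constraints and the $c_a$ are convex link cost functions. Because $X^\ast(d)$ is in general set-valued, the mapping described above selects a distinguished (minimum-norm) element of $X^\ast(d)$ for each $d$, which restores single-valuedness and allows the continuity of the demand-to-flow map to be studied.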

Bio: Yueyue Fan is a Professor of Civil and Environmental Engineering at the University of California, Davis, and a faculty member in the graduate program in Applied Mathematics at UC Davis. She received her PhD in Civil Engineering from the University of Southern California in 2003. Dr. Fan’s research is on transportation and energy infrastructure systems modeling, with a special interest in integrating applied mathematics and engineering domain knowledge to address fundamental challenges arising from data and system uncertainty, dynamics, and underdetermination. Dr. Fan is currently serving as the program director of the Civil Infrastructure Systems (CIS) program at the National Science Foundation.

Speaker: Krishna Balasubramanian

Title: High-dimensional Scaling Limits of Least-squares Online SGD Iterates and Their Fluctuations

Abstract: Stochastic Gradient Descent (SGD) is widely used in modern data science. Existing analyses of SGD have predominantly focused on the fixed-dimensional setting. In order to perform high-dimensional statistical inference with such algorithms, it is important to study the dynamics of SGD under high-dimensional scalings. In this talk, I will discuss high-dimensional limit theorems for the online least-squares SGD iterates used to solve over-parameterized linear regression. Specifically, focusing on the asymptotic setting in which both the dimension and the number of iterations tend to infinity, I will present the mean-field limit (in the form of an infinite-dimensional ODE) and the fluctuations (in the form of an infinite-dimensional SDE) for the online least-squares SGD iterates. A direct consequence of the result is explicit expressions for the mean-squared estimation and prediction errors and their fluctuations under high-dimensional scalings.
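For concreteness, a finite-dimensional version of online least-squares SGD (the talk studies its scaling limit as dimension and iterations grow together) processes one fresh observation per step; the sketch below is illustrative only.

# Illustrative online SGD for least-squares linear regression: one fresh sample
# (a_t, y_t) per iteration. The talk concerns the regime where the dimension d
# and the number of iterations grow together; here both are fixed and small.
import numpy as np

rng = np.random.default_rng(2)
d, steps, eta, noise = 20, 5000, 0.05, 0.1
theta_star = rng.normal(size=d) / np.sqrt(d)    # ground-truth coefficients
theta = np.zeros(d)                             # SGD iterate

for t in range(steps):
    a = rng.normal(size=d)                      # fresh covariate vector
    y = a @ theta_star + noise * rng.normal()   # noisy response
    grad = (a @ theta - y) * a                  # gradient of 0.5 * (a @ theta - y)**2
    theta -= eta * grad                         # online SGD update

print("estimation error:", float(np.linalg.norm(theta - theta_star)))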

Bio: Krishna Balasubramanian is an assistant professor in the Department of Statistics at the University of California, Davis. His research interests include stochastic optimization and sampling, geometric and topological statistics, and theoretical machine learning. His research has been supported by a Facebook PhD Fellowship and by CeDAR and NSF grants.

Speaker: Nika Haghtalab

Title: Looking beyond the Worst-Case Adversaries in Machine Learning and Optimization

Abstract: Robustness to changes in data is one of the main challenges faced by sequential machine learning and decision-making algorithms. Yet most efficient and highly optimized algorithms deployed today were designed to work well on fixed data sets and ultimately fail when data evolves in unpredictable or adversarial ways. Even more concerning, for most fundamental problems in machine learning and optimization it is impossible to provide performance guarantees that are not completely diminished in the presence of all-powerful adversaries.

We will explore the smoothed analysis perspective on adaptive adversaries in machine learning and optimization, which goes beyond the worst-case scenario. We will examine both information-theoretic and computational perspectives and present general-purpose techniques that provide strong robustness guarantees in practical domains for a wide range of applications, such as online learning, differential privacy, discrepancy theory, sequential probability assignment, and learning-augmented algorithm design. Our conclusion is that even small perturbations to worst-case adaptive adversaries can make learning in their presence as easy as learning over a fixed data set.
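For intuition, a standard way to formalize such perturbations (details may differ from the talk) is via smoothed adversaries: a distribution over inputs is called $\sigma$-smooth if its density is at most $1/\sigma$ times that of the uniform distribution on the domain, and a smoothed adaptive adversary must draw each round's input from some $\sigma$-smooth distribution rather than choosing it outright.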

Bio: Nika Haghtalab is an Assistant Professor in the Department of Electrical Engineering and Computer Sciences at UC Berkeley. She works broadly on the theoretical aspects of machine learning and algorithmic economics. Prof. Haghtalab's work builds theoretical foundations for ensuring both the performance of learning algorithms in the presence of everyday economic forces and the integrity of the social and economic forces that arise from the use of machine learning systems. Previously, Prof. Haghtalab was an Assistant Professor in the Computer Science Department at Cornell University in 2019-2020. She received her Ph.D. from the Computer Science Department of Carnegie Mellon University. She is a co-founder of the Learning Theory Alliance (LeT-All). Among her honors are the CMU School of Computer Science Dissertation Award, a SIGecom Dissertation Honorable Mention, and a NeurIPS Outstanding Paper Award.

Speaker: Alper Atamturk

Title: 2x2-Convexifications for Convex Quadratic Optimization with Indicator Variables

Abstract: In this talk, we present new strong relaxations for the convex quadratic optimization problem with indicator variables. For the bivariate case, we describe the convex hull of the epigraph in the original space of variables and also give a conic quadratic extended formulation. Then, using the convex hull description for the bivariate case as a building block, we derive an extended SDP relaxation for the general case. This new formulation is stronger than other SDP relaxations proposed in the literature for this problem, including Shor's SDP relaxation, the optimal perspective relaxation, and the optimal rank-one relaxation. Computational experiments indicate that the proposed formulations are quite effective in reducing the integrality gap of the optimization problems. This is joint work with Shaoning Han and Andres Gomez.
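In generic form (notation not necessarily the speakers'), the problem class is

\[
\min_{x \in \mathbb{R}^n,\; z \in \{0,1\}^n} \; a^\top x + x^\top Q x + c^\top z
\quad \text{s.t.} \quad x_i (1 - z_i) = 0,\; i = 1,\dots,n,
\]

with $Q \succeq 0$, where $z_i$ indicates whether $x_i$ may be nonzero. The relaxations discussed in the talk are built from the exact convex-hull description of the two-variable case; by comparison, the classical perspective relaxation strengthens a separable term $d_i x_i^2$ to $d_i x_i^2 / z_i$.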