Regret Analysis of Learning-Based MPC With Partially-Unknown Cost Function
Publication Date: October 31, 2023
Dogan, Ilgin; Shen, Max; Aswani, Anil (2024). "Regret Analysis of Learning-Based MPC With Partially-Unknown Cost Function." IEEE Transactions on Automatic Control, vol. PP, pp. 1-8. doi: 10.1109/TAC.2023.3328827.
The exploration/exploitation trade-off is an inherent challenge in data-driven adaptive control. Although this trade-off has been studied for multi-armed bandits (MABs) and reinforcement learning for linear systems, it is less well studied for learning-based control of nonlinear systems. A significant theoretical challenge in the nonlinear setting is that there is no explicit characterization of an optimal controller for a given set of cost and system parameters. We propose the use of a finite-horizon oracle controller with full knowledge of the parameters as a reasonable surrogate for the optimal controller. This allows us to develop policies in the context of learning-based MPC and MABs and to conduct a control-theoretic analysis, using techniques from MPC and optimization theory, showing that these policies achieve low regret with respect to this finite-horizon oracle. Our simulations demonstrate the low regret of our policy on a heating, ventilation, and air-conditioning (HVAC) model with a partially-unknown cost function.
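To make the performance criterion concrete, the following is a minimal sketch of cumulative regret measured against a finite-horizon oracle; the stage cost c, horizon T, and true cost parameter θ* are illustrative notation chosen here, not taken from the paper.

% Hedged sketch: cumulative regret of a learning-based policy pi relative to a
% finite-horizon oracle controller that knows the true cost parameter theta*.
% The symbols c, T, and theta* are assumed notation for illustration only.
\[
  \mathrm{Regret}(T) \;=\;
    \sum_{t=0}^{T-1} c\bigl(x_t^{\pi}, u_t^{\pi}; \theta^{\ast}\bigr)
    \;-\;
    \sum_{t=0}^{T-1} c\bigl(x_t^{\mathrm{or}}, u_t^{\mathrm{or}}; \theta^{\ast}\bigr)
\]

Here \((x_t^{\pi}, u_t^{\pi})\) denotes the closed-loop trajectory of the learning-based policy, which must estimate the unknown part of the cost online, and \((x_t^{\mathrm{or}}, u_t^{\mathrm{or}})\) denotes the trajectory of the finite-horizon oracle controller with full knowledge of the cost and system parameters. "Low regret" means this gap grows sublinearly in T.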