Mean-Field Controls with Q-Learning for Cooperative MARL: Convergence and Complexity Analysis

Publication Date: April 12, 2023

H. Gu, X. Guo, X. Wei, and R. Xu, "Mean-Field Controls with Q-Learning for Cooperative MARL: Convergence and Complexity Analysis," SIAM Journal on Mathematics of Data Science, 2021.

Multi-agent reinforcement learning (MARL), despite its popularity and empirical success, suffers from the curse of dimensionality. This paper builds the mathematical framework to approximate cooperative MARL by a mean-field control (MFC) approach. By establishing an appropriate form of the dynamic programming principle for both the value function and the Q function, it proposes a model-free kernel-based Q-learning algorithm (MFC-K-Q), which is shown to have a linear convergence rate for the MFC problem, the first of its kind in the MARL literature. It further establishes that the convergence rate and the sample complexity of MFC-K-Q are independent of the number of agents N, so the algorithm provides an approximation to the MARL problem with N agents in the learning environment. Empirical studies of the network traffic congestion problem demonstrate that MFC-K-Q outperforms existing MARL algorithms when N is large, for instance when N > 50.
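The paper's MFC-K-Q algorithm operates on lifted, distribution-valued states and actions; as a rough illustration only, the sketch below runs Q-learning with kernel interpolation over a grid of anchor distributions on the probability simplex. Everything here is a hypothetical stand-in rather than the paper's construction: the state and action sizes S and A, the Gaussian kernel, the anchor grid, and the toy env_step dynamics (a congestion-style reward that penalizes concentration of the population distribution).

```python
import numpy as np

# Minimal sketch of kernel-based Q-learning on a mean-field state
# (the empirical state distribution mu) for a finite local state space
# of size S and a finite action space of size A. The anchor grid,
# Gaussian kernel, and toy environment are illustrative assumptions.

S, A = 3, 2                  # local state / action space sizes (assumed)
GAMMA, LR, EPS = 0.9, 0.1, 0.1

# Anchor points: a coarse grid on the probability simplex over S states.
anchors = np.array([[i, j, 10 - i - j]
                    for i in range(11) for j in range(11 - i)]) / 10.0

def kernel_weights(mu, bandwidth=0.1):
    """Gaussian kernel weights of mu against each anchor distribution."""
    d2 = np.sum((anchors - mu) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    return w / w.sum()

Q = np.zeros((len(anchors), A))  # Q-table indexed by anchor point

def q_value(mu, a):
    """Kernel-interpolated Q(mu, a) read off the anchor-point table."""
    return kernel_weights(mu) @ Q[:, a]

def env_step(mu, a):
    """Toy mean-field dynamics: the action shifts mass between states;
    the reward penalizes congestion (a concentrated distribution).
    Purely a placeholder for the true learning environment."""
    shift = 0.1 if a == 0 else -0.1
    new_mu = np.clip(mu + shift * (np.roll(mu, 1) - mu), 0, None)
    new_mu /= new_mu.sum()
    return new_mu, -np.sum(new_mu ** 2)

rng = np.random.default_rng(0)
mu = np.ones(S) / S
for t in range(5000):
    # Epsilon-greedy action on the interpolated Q-function.
    a = rng.integers(A) if rng.random() < EPS else \
        int(np.argmax([q_value(mu, b) for b in range(A)]))
    new_mu, r = env_step(mu, a)
    target = r + GAMMA * max(q_value(new_mu, b) for b in range(A))
    # Spread the TD update across anchors by their kernel weights.
    Q[:, a] += LR * kernel_weights(mu) * (target - q_value(mu, a))
    mu = new_mu

print("Greedy action at uniform mu:",
      int(np.argmax([q_value(np.ones(S) / S, b) for b in range(A)])))
```

The one idea the sketch tries to mirror is that each temporal-difference update is distributed over nearby anchor distributions by kernel weight, so the learned Q-function generalizes across the continuum of population distributions, and the table size depends on the grid resolution rather than on the number of agents N.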