BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//UC Berkeley IEOR Department - Industrial Engineering & Operations Research - ECPv5.2.0//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:UC Berkeley IEOR Department - Industrial Engineering & Operations Research
X-ORIGINAL-URL:https://ieor.berkeley.edu
X-WR-CALDESC:Events for UC Berkeley IEOR Department - Industrial Engineering & Operations Research
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20200308T100000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20201101T090000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20201102T153000
DTEND;TZID=America/Los_Angeles:20201102T163000
DTSTAMP:20201020T181009Z
CREATED:20201007T064135Z
LAST-MODIFIED:20201017T195815Z
UID:14378-1604331000-1604334600@ieor.berkeley.edu
SUMMARY:11/2: Jianqing Fan - Analysis of Deep Q-Learning
DESCRIPTION:Abstract: Despite the great empirical success of deep reinforcement learning\, its theoretical foundation is less well understood. In this work\, we make the first attempt to theoretically understand the deep Q-network (DQN) algorithm from both algorithmic and statistical perspectives. Specifically\, we focus on a slight simplification of DQN that fully captures its key features. Under mild assumptions\, we establish the algorithmic and statistical rates of convergence for the action-value functions of the iterative policy sequence obtained by DQN. In particular\, the statistical error characterizes the bias and variance that arise from approximating the action-value function using deep neural networks\, while the algorithmic error converges to zero at a geometric rate. As a byproduct\, our analysis provides justifications for the techniques of experience replay and target network\, which are crucial to the empirical success of DQN. Furthermore\, as a simple extension of DQN\, we propose the Minimax-DQN algorithm for the zero-sum Markov game with two players. Borrowing the analysis of DQN\, we also quantify the difference between the policies obtained by Minimax-DQN and the Nash equilibrium of the Markov game in terms of both the algorithmic and statistical rates of convergence. \nBio: Jianqing Fan is a statistician\, financial econometrician\, and data scientist. He is the Frederick L. Moore ’18 Professor of Finance\, Professor of Statistics\, and Professor of Operations Research and Financial Engineering at Princeton University\, where he chaired the department from 2012 to 2015. He is the winner of the 2000 COPSS Presidents’ Award\, the Morningside Gold Medal for Applied Mathematics (2007)\, a Guggenheim Fellowship (2009)\, the Pao-Lu Hsu Prize (2013)\, and the Guy Medal in Silver (2014). He received his Ph.D. in Statistics from the University of California\, Berkeley\, in 1989.
\nFan is interested in statistical theory and methods in data science\, statistical machine learning\, finance\, economics\, computational biology\, and biostatistics\, with particular expertise in high-dimensional statistics\, nonparametric modeling\, longitudinal and functional data analysis\, nonlinear models\, survival analysis\, time series\, and wavelets\, among others. \nFan has co-authored three books: Local Polynomial Modeling (1996)\, Nonlinear Time Series: Parametric and Nonparametric Methods (2003)\, and Statistical Foundations of Data Science\, and has authored or co-authored over 200 articles on finance\, economics\, statistical machine learning\, computational biology\, semiparametric and nonparametric modeling\, nonlinear time series\, survival analysis\, longitudinal data analysis\, and other aspects of theoretical and methodological statistics. He has been consistently ranked among the top 10 most highly cited mathematical scientists since such rankings began\, and has received numerous awards and board appointments for his work.
URL:https://ieor.berkeley.edu/event/jianqing-fan-analysis-of-deep-q-learning/
LOCATION:Zoom Webinar (Virtual)\, https://berkeley.zoom.us/meeting/94404355397
ATTACH;FMTTYPE=image/png:https://ieor.berkeley.edu/wp-content/uploads/2020/10/fan.png
END:VEVENT
END:VCALENDAR