Abstract
Our paper studies policy transfer, a well-known transfer learning technique adopted in large language models, for continuous-time reinforcement learning (RL) problems. In the case of continuous-time linear-quadratic (LQ) systems with Shannon entropy regularization, we fully exploit the Gaussian structure of the optimal policy and the stability of the associated Riccati equations. In the general case, where the system may have non-linear and bounded dynamics, the key technical component is the stability of diffusion SDEs, which we establish by invoking rough path theory. Our work provides the first theoretical proof of policy transfer for continuous-time RL: an optimal policy learned for one RL problem can be used to initialize the search for a near-optimal policy for another closely related RL problem, while achieving (at least) the same rate of convergence as the original algorithm. As a byproduct of our analysis, we derive the stability of a concrete class of continuous-time score-based diffusion models via their connection with LQRs. To illustrate the benefit of policy transfer for RL, we also propose a novel policy learning algorithm for continuous-time LQRs, which achieves global linear convergence and local super-linear convergence. This is joint work with Prof. Xin Guo.
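To give a flavor of the warm-start idea behind policy transfer, here is a minimal sketch in a much simpler setting than the paper's: Kleinman-style policy iteration for a scalar, deterministic, continuous-time LQR without entropy regularization. All parameter values are hypothetical and chosen only for illustration; this is not the algorithm from the talk.

```python
def policy_iteration(a, b, q, r, K0, tol=1e-12, max_iter=100):
    """Kleinman policy iteration for the scalar continuous-time LQR
    dx/dt = a*x + b*u, cost = \int (q*x^2 + r*u^2) dt, feedback u = -K*x.
    Returns the (near-)optimal gain and the number of iterations used.
    K0 must be stabilizing, i.e. a - b*K0 < 0.
    """
    K = K0
    for it in range(1, max_iter + 1):
        a_cl = a - b * K                       # closed-loop dynamics
        assert a_cl < 0, "initial/current gain must be stabilizing"
        p = (q + r * K * K) / (-2.0 * a_cl)    # scalar Lyapunov equation
        K_new = b * p / r                      # policy improvement step
        if abs(K_new - K) < tol:
            return K_new, it
        K = K_new
    return K, max_iter

# Source problem: a=1, b=1, q=1, r=1 (hypothetical values).
K_src, _ = policy_iteration(1.0, 1.0, 1.0, 1.0, K0=5.0)

# Closely related target problem: slightly perturbed a and q.
# Warm start from the source optimum vs. a cold stabilizing guess.
K_warm, it_warm = policy_iteration(1.2, 1.0, 1.1, 1.0, K0=K_src)
K_cold, it_cold = policy_iteration(1.2, 1.0, 1.1, 1.0, K0=10.0)
```

Both runs converge to the same optimal gain for the target problem, but the transferred initialization starts inside the quadratic-convergence basin of this Newton-type iteration and so typically needs fewer iterations than the cold start.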
Our speaker
Zijiu Lyu is a PhD student in the IEOR department at UC Berkeley, advised by Prof. Xin Guo. His research resides at the intersection of stochastic control, reinforcement learning, and rough path theory. Prior to joining UC Berkeley, he collaborated with Prof. Hao Ni on characterizing probability measures on infinite-dimensional tensor algebra spaces. He received a Master's degree from the University of Oxford, and a Bachelor's degree from Tongji University.
To become a member of the Rough Path Interest Group, register here for free.