We develop a computational framework for robust mean-field control problems driven by common noise. Our approach leverages a fictitious-play scheme to decompose the robust problem into a sequence of standard mean-field control problems with common noise. Each standard problem is then solved by actor–critic reinforcement learning, built on a policy gradient theorem that we derive and combined with path-signature representations to efficiently approximate conditional moments. In a climate-change application, the learned robust equilibrium policies demonstrate how uncertainty aversion systematically reshapes optimal abatement and R&D investment decisions, mitigating disparities in long-term damage outcomes. This is joint work with Michael Barnert, Lars Hansen, and Hezhong Zhang.
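To give a flavor of the signature-features idea mentioned in the abstract, here is a minimal sketch (not the authors' implementation): it computes the depth-2 truncated signature of a discretized path in NumPy and fits a linear readout to a toy conditional-moment target. The function name, the toy paths, and the least-squares readout are all illustrative assumptions.

```python
import numpy as np

def sig_level2(path):
    """Depth-2 truncated signature of a piecewise-linear path of shape (T, d).
    Level-1 terms are the total increments; level-2 terms are the iterated
    integrals S^{ij} = sum_{k<l} dx_k^i dx_l^j + 0.5 * sum_k dx_k^i dx_k^j."""
    inc = np.diff(path, axis=0)                # segment increments, shape (T-1, d)
    d = path.shape[1]
    s2 = np.zeros((d, d))                      # level-2 iterated integrals
    run = np.zeros(d)                          # running level-1 signature
    for dx in inc:
        s2 += np.outer(run, dx) + 0.5 * np.outer(dx, dx)
        run += dx
    return np.concatenate([run, s2.ravel()])   # run == total increment (level 1)

# Toy usage: regress a conditional-moment proxy of simulated common-noise
# paths on signature features with a plain least-squares readout.
rng = np.random.default_rng(0)
paths = rng.standard_normal((256, 50, 2)).cumsum(axis=1)   # 256 toy 2-d random-walk paths
X = np.stack([sig_level2(p) for p in paths])               # signature feature matrix
X = np.hstack([np.ones((len(X), 1)), X])                   # level-0 term (constant 1)
y = paths[:, -1, 0] ** 2                                    # illustrative moment target
coef, *_ = np.linalg.lstsq(X, y, rcond=None)                # linear readout on signatures
```

The linear readout is the natural choice here because linear functionals of the signature are dense among continuous functionals of paths on compact sets, which is what motivates using signature features to approximate conditional moments.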
Our speaker
Ruimeng Hu is an Associate Professor in the Department of Mathematics and the Department of Statistics & Applied Probability at the University of California, Santa Barbara. Her research focuses on stochastic analysis, machine learning, game theory, and stochastic partial differential equations. She has contributed to the theoretical foundations and algorithmic development of learning-based approaches for high-dimensional stochastic systems, as well as their applications in finance, climate economics, and adversarial decision-making. Her research is supported by grants from the NSF and ONR. She was a plenary speaker at the SIAM Conference on Financial Mathematics and Engineering (SIAM FM25).
To become a member of the Rough Path Interest Group, register here for free.