Agile learning-based control of nonlinear robotic systems with inherent instabilities remains a difficult problem. These problems are not without structure, however, and this talk advocates incorporating that structure, in particular by harnessing nonlinear control-theoretic tools, within learning-based solution methodologies. In the first part of the talk, I will discuss our work on stabilisable dynamics learning in the context of model-based reinforcement learning. Specifically, I will show how to use contraction theory to design constrained optimisation problems for learning controlled dynamics models that are amenable to planning and control. In the second part of the talk, I will discuss more recent work on constructing dynamical-systems-based policy architectures for agile visuomotor control in an imitation learning setting. These models leverage Neural Rough Differential Equations (N-RDEs) and allow for seamless multi-frequency sensor fusion. I will conclude with a discussion of active research on generating multi-modal control functions using Controlled N-RDEs and Energy-Based Models.
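As a rough, hypothetical illustration of the contraction-based constraint mentioned above: the PyTorch sketch below penalises violations of a contraction linear matrix inequality at sampled states. The network sizes, the constant metric `M`, and the rate `lam` are assumptions made for illustration; the work discussed in the talk formulates this via control contraction metrics inside a constrained optimisation problem, which this simplified penalty only gestures at.

```python
import torch

# Hypothetical learned dynamics x_dot = f(x, u); all sizes are illustrative.
x_dim, u_dim = 4, 2
f_net = torch.nn.Sequential(torch.nn.Linear(x_dim + u_dim, 64),
                            torch.nn.Tanh(),
                            torch.nn.Linear(64, x_dim))

def dynamics(x, u):
    return f_net(torch.cat([x, u], dim=-1))

lam = 0.5              # desired contraction rate lambda > 0 (assumed)
M = torch.eye(x_dim)   # contraction metric; a constant identity for simplicity

def contraction_penalty(x, u):
    # A = df/dx, the Jacobian of the learned dynamics at (x, u).
    A = torch.autograd.functional.jacobian(
        lambda z: dynamics(z, u), x, create_graph=True)
    # Contraction LMI: M A + A^T M + 2*lambda*M must be negative semidefinite.
    lmi = M @ A + A.T @ M + 2.0 * lam * M
    # Hinge on the largest eigenvalue: the penalty is zero exactly when the LMI holds.
    return torch.relu(torch.linalg.eigvalsh(lmi).max())

# Added to a regression loss over trajectory data, this term nudges the learned
# model toward dynamics that admit exponentially stabilising feedback.
x, u = torch.randn(x_dim), torch.randn(u_dim)
loss = contraction_penalty(x, u)
```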
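Similarly, a minimal sketch of the dynamical-systems policy idea: a hidden state driven by a control path assembled from sensor streams, dz = f_theta(z) dX. A genuine N-RDE would integrate log-signature features of the path over windows; the explicit-Euler recurrence below, with assumed dimensions and made-up increments, only shows how asynchronous, multi-rate channels fuse naturally as increments of a single control path.

```python
import torch

state_dim, n_channels = 8, 3   # hidden state size and sensor channels (assumed)

# Vector field f_theta: maps the hidden state z to a (state_dim x n_channels)
# matrix, so that dz = f_theta(z) dX for a control path X built from the sensors.
vf = torch.nn.Sequential(torch.nn.Linear(state_dim, 64),
                         torch.nn.Tanh(),
                         torch.nn.Linear(64, state_dim * n_channels))

def step(z, dX):
    """One explicit-Euler step of dz = f_theta(z) dX: z <- z + f_theta(z) @ dX."""
    F = vf(z).reshape(state_dim, n_channels)
    return z + F @ dX

z = torch.zeros(state_dim)
# Asynchronous sensors: between policy steps, only the channels that actually
# ticked contribute a nonzero increment; the rest stay zero. Illustrative data:
increments = [torch.tensor([0.10, 0.00, 0.00]),    # fast sensor only
              torch.tensor([0.10, 0.00, 0.00]),
              torch.tensor([0.10, -0.20, 0.05])]   # all sensors report here
for dX in increments:
    z = step(z, dX)
# z now summarises the fused multi-rate streams and could feed an action head.
```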
Our speaker
Sumeet Singh is a research scientist with Google Brain Robotics in NYC. He completed his PhD in the Autonomous Systems Lab at Stanford University in 2019, under the supervision of Marco Pavone. Sumeet's current research interests include (i) efficient and robust motion planning for constrained robotic systems, and (ii) leveraging nonlinear control theory for the design of learning-based controllers and algorithms. Sumeet is the recipient of the Stanford Graduate Fellowship (2013-2016), the Qualcomm Innovation Fellowship (2018), and the Stanford Department of Aeronautics & Astronautics Best PhD Thesis Award (2020).
To become a member of the Rough Path Interest Group, register here for free.