Neural graphical modelling in continuous-time: consistency guarantees and algorithms
The discovery of structure from time series data is a key problem in fields that study complex systems. Most identifiability results and learning algorithms assume the underlying dynamics to be discrete in time. Comparatively few explicitly define dependencies over infinitesimal intervals of time, independently of the scale of observation and of the regularity of sampling. In this paper, we consider score-based structure learning for the study of dynamical systems. We prove that, for vector fields parameterized by a large class of neural networks, least-squares optimization with adaptive regularization schemes consistently recovers directed graphs of local independencies in systems of stochastic differential equations. Using this insight, we propose a score-based learning algorithm based on penalized Neural Ordinary Differential Equations (modelling the mean process) that is applicable to the general setting of irregularly-sampled multivariate time series and that we show outperforms the state of the art across a range of dynamical systems.
Alexis is a postdoctoral research scientist at Columbia University sponsored by Professor Elias Bareinboim. Prior to Columbia, he graduated with a PhD in Applied Mathematics from the University of Cambridge under the supervision of Professor Mihaela van der Schaar. His research focuses on the study of causality from data and its applications, with the twin objectives of broadening the use of causal insights to improve the robustness of machine learning algorithms, and of leveraging modern machine learning, where possible, to improve causal discovery and causal inference methods.
To become a member of the Rough Path Interest Group, register here for free.