Abstract
In many machine learning tasks, it is crucial to extract low-dimensional and descriptive features from a data set. In this talk, I present a method to extract features from multi-dimensional space-time signals, motivated, on the one hand, by the success of path signatures in machine learning, and on the other hand, by the success of models from the theory of regularity structures in the analysis of PDEs. I will present a flexible definition of a model feature vector along with numerical experiments in which we combine these features with basic supervised linear regression to predict solutions to parabolic and dispersive PDEs with a given forcing and boundary conditions. Interestingly, in the dispersive case, the predictive power relies heavily on whether the boundary conditions are appropriately included in the model. The talk is based on the following joint work with Andris Gerasimovics and Hendrik Weber: https://arxiv.org/abs/2108.05879
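To give a flavour of the signatures-plus-linear-regression paradigm the abstract alludes to, here is a minimal sketch (not the authors' model feature vectors; all names and the toy target are illustrative assumptions). It computes the level-1 and level-2 signature terms of a piecewise-linear path and fits ordinary least squares on top, with the toy target chosen to be exactly linear in those features:

```python
import numpy as np

def signature_level2(path):
    """Level-1 and level-2 signature terms of a piecewise-linear path.

    path: array of shape (n_points, d). Returns a feature vector of
    length d + d*d: the total increments S^i and the iterated
    integrals S^{ij} = int (X^i_s - X^i_0) dX^j_s.
    """
    inc = np.diff(path, axis=0)          # segment increments, shape (n-1, d)
    level1 = inc.sum(axis=0)             # total increment per channel
    d = path.shape[1]
    level2 = np.zeros((d, d))
    running = np.zeros(d)                # path value relative to the start
    for dx in inc:
        # exact iterated integral over one linear segment:
        # int (X_s - X_0) dX_s = outer(running, dx) + 0.5 * outer(dx, dx)
        level2 += np.outer(running, dx) + 0.5 * np.outer(dx, dx)
        running += dx
    return np.concatenate([level1, level2.ravel()])

# Toy experiment: predict int f dt of a random forcing path f from the
# truncated signature of the time-augmented path (t, f_t) via least squares.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
X, y = [], []
for _ in range(200):
    f = np.cumsum(rng.normal(size=100)) / 10.0
    f -= f[0]                                        # start the path at 0
    path = np.stack([t, f], axis=1)                  # time-augmented path
    X.append(signature_level2(path))
    # trapezoidal int f dt, which equals the S^{21} signature feature here
    y.append(0.5 * ((f[:-1] + f[1:]) * np.diff(t)).sum())
X, y = np.array(X), np.array(y)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
```

Because the target is a linear functional of the level-2 signature, the fit is essentially exact; richer, nonlinear functionals of the forcing would require higher signature levels (or, as in the talk, model features adapted to the PDE).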
Our speaker
Ilya Chevyrev is a Reader at the University of Edinburgh. He obtained his DPhil from the University of Oxford in 2016 and, prior to Edinburgh, was a postdoc at TU Berlin and a Junior Research Fellow at St John's College in Oxford. His research interests lie in stochastic analysis and its applications to mathematical physics and data science.