Federico Danieli


Abstract

Recurrent Neural Networks (RNNs) laid the foundation for sequence modeling, but their intrinsic sequential nature restricts parallel computation, creating a fundamental barrier to scaling. This has led to the dominance of parallelizable architectures like Transformers and, more recently, State Space Models (SSMs). While SSMs achieve efficient parallelization through structured linear recurrences, this linearity constraint limits their expressive power and precludes modeling complex, nonlinear sequence-wise dependencies. To address this, we present ParaRNN, a framework that breaks the sequence-parallelization barrier for nonlinear RNNs. Building on prior work, we cast the sequence of nonlinear recurrence relationships as a single system of equations, which we solve in parallel using Newton's iterations combined with custom parallel reductions. Our implementation achieves speedups of up to 665x over naive sequential application, allowing nonlinear RNNs to be trained at unprecedented scales. To showcase this, we apply ParaRNN to adaptations of LSTM and GRU architectures, successfully training models of 7B parameters that attain perplexity comparable to similarly-sized Transformers and Mamba2 architectures. To accelerate research in efficient sequence modeling, we release the ParaRNN codebase as an open-source framework for automatic training-parallelization of nonlinear RNNs, enabling researchers and practitioners to explore new nonlinear RNN models at scale.
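For readers curious how the parallelization works in practice, the sketch below is a minimal JAX illustration of the idea described in the abstract: stacking all T nonlinear recurrence equations h_t = f(h_{t-1}, x_t) into one residual system and solving it with Newton's method, where each Newton step reduces to a linear recurrence that an associative (parallel) scan can evaluate. This is not the ParaRNN implementation itself, which relies on custom parallel-reduction kernels; the function name, the zero initial guess, and the fixed iteration count are illustrative assumptions.

```python
import jax
import jax.numpy as jnp


def newton_parallel_rnn(f, h0, xs, num_iters=8):
    """Illustrative sketch: evaluate h_t = f(h_{t-1}, x_t) for t = 1..T
    without looping sequentially over time.

    All T equations are stacked into one residual system
        F_t(H) = h_t - f(h_{t-1}, x_t) = 0,
    solved with Newton's method. The Jacobian of F is block-bidiagonal,
    so each Newton update dH satisfies the *linear* recurrence
        dh_t = A_t dh_{t-1} + b_t,  A_t = df/dh_{t-1},  b_t = -F_t(H),
    which an associative scan evaluates in logarithmic parallel depth.
    """
    T, d = xs.shape[0], h0.shape[0]
    H = jnp.zeros((T, d))  # initial guess for the whole trajectory

    step_fn = jax.vmap(f)                          # f applied at every time step
    jac_fn = jax.vmap(jax.jacobian(f, argnums=0))  # df/dh_{t-1} at every time step

    def combine(elem_i, elem_j):
        # Composition of two affine maps x -> A x + b (earlier map applied first).
        A_i, b_i = elem_i
        A_j, b_j = elem_j
        return A_j @ A_i, jnp.einsum('...ij,...j->...i', A_j, b_i) + b_j

    for _ in range(num_iters):  # fixed iteration count, for simplicity
        H_prev = jnp.concatenate([h0[None, :], H[:-1]], axis=0)  # h_{t-1}
        b = step_fn(H_prev, xs) - H   # -F_t(H), the Newton right-hand side
        A = jac_fn(H_prev, xs)        # (T, d, d) per-step Jacobians

        # Since dh_0 = 0, the cumulative offsets of the scan are the update dH.
        _, dH = jax.lax.associative_scan(combine, (A, b))
        H = H + dH
    return H


# Example: a small tanh-RNN cell as the nonlinear transition function.
key = jax.random.PRNGKey(0)
d, d_in, T = 4, 3, 16
W = jax.random.normal(key, (d, d)) * 0.1
U = jax.random.normal(key, (d, d_in)) * 0.1
cell = lambda h, x: jnp.tanh(W @ h + U @ x)

H = newton_parallel_rnn(cell, jnp.zeros(d), jax.random.normal(key, (T, d_in)))
```

In practice one would monitor the Newton residual rather than run a fixed number of iterations; the speedups reported in the abstract additionally come from the custom parallel reductions in the ParaRNN implementation, which this sketch does not attempt to reproduce.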

Our speaker

I’m relatively new to Machine Learning, having first started working in the field when I joined Apple MLR in 2022. My research mainly revolves around efficient alternatives to Transformers. Before that, my PhD was at the interface between High-Performance Computing and Applied Mathematics, with a focus on accelerating the numerical solution of differential equations. My current research direction is largely shaped by my background, as much of the inspiration for recent work draws from the strong connection between dynamical systems and modern Recurrent Neural Networks. Today I’ll be presenting our latest work on parallelising the training of nonlinear RNNs, which addresses one of the fundamental computational bottlenecks that has historically limited the application of these models at scale.

