The interplay between dynamical systems and algorithm design is central to cutting-edge theoretical and practical advances in data science. On the one hand, emerging data-intensive applications (such as dynamic brain connectivity in neuroscience, image generation in computer vision, robotics in artificial intelligence, and population genetics in biology) routinely require learning from complex dynamical systems, and they call for new computational toolboxes and algorithms that accommodate the explosive growth in data volume, model complexity, and data dependency. On the other hand, many simple, scalable machine learning algorithms are built on recursive first-order or second-order optimization methods, which can themselves be viewed as discrete-time dynamical systems.
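To make the last point concrete, here is a minimal illustrative sketch (not from the source; the objective f(x) = x² and the step size are assumptions for illustration) of gradient descent written explicitly as a discrete-time dynamical system, i.e., repeated application of the map x ↦ x − η∇f(x):

```python
def gradient_descent_trajectory(x0, eta, steps):
    """Iterate the map x -> x - eta * f'(x) for f(x) = x**2 (so f'(x) = 2x).

    Viewed as a dynamical system, each gradient step is one application of
    a fixed map; the minimizer x* = 0 is a fixed point of that map.
    """
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] - eta * 2 * xs[-1])
    return xs

# For 0 < eta < 1 the map x -> (1 - 2*eta) * x is a contraction,
# so the iterates converge geometrically to the fixed point 0.
trajectory = gradient_descent_trajectory(x0=1.0, eta=0.1, steps=50)
```

Convergence of the algorithm is then exactly stability of the fixed point of this map, which is the kind of question dynamical systems theory is built to answer.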
The rich theory of dynamical systems provides natural ground for a principled understanding of these algorithms, including their convergence, robustness, implicit bias, and the underlying optimization landscapes. Such a dynamical systems perspective not only helps lay the theoretical foundations of the discipline, but also opens enormous opportunities to design new, adaptive, and even intelligent data-driven algorithms that could benefit the broader community.