Anima Anandkumar



Bren Professor at Caltech
Director of Machine Learning Research at NVIDIA

Title:

Efficient Quantum Optimization via Multi-Basis Encodings and Tensor Rings



Abstract:

Despite extensive research efforts, few quantum algorithms for classical optimization demonstrate an advantage realizable on near-term devices. The utility of many quantum algorithms is limited by high requisite circuit depth and nonconvex optimization landscapes. We tackle these challenges by introducing a new variational quantum algorithm that utilizes multi-basis graph encodings and nonlinear activation functions. Our technique results in increased optimization performance, a factor-of-two increase in effective quantum resources, and a quadratic reduction in measurement complexity. Further, we construct exact circuit representations using factorized tensor rings. This enables us to successfully optimize the MaxCut of the non-local 512-vertex DIMACS library graphs on a single A100 GPU using only shallow circuits. We further provide an efficient distributed implementation via the TensorLy-Quantum library.
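
A minimal sketch of the multi-basis idea, under an assumed convention chosen for illustration (not the authors' implementation or the TensorLy-Quantum API): each qubit hosts two graph vertices, one read out in the Z basis and one in the X basis, with a tanh activation applied to the measured expectations.

```python
import numpy as np

def multibasis_maxcut_loss(edges, weights, z_exp, x_exp):
    """Relaxed MaxCut objective over single-qubit expectation values.

    Vertex v is read from qubit v // 2: even vertices in the Z basis,
    odd vertices in the X basis (an assumed convention for this sketch).
    A tanh activation keeps each relaxed spin in (-1, 1).
    """
    def spin(v):
        q = v // 2
        return np.tanh(z_exp[q] if v % 2 == 0 else x_exp[q])

    # Minimizing sum_ij w_ij * s_i * s_j pushes adjacent spins toward
    # opposite signs, i.e. opposite sides of the cut.
    return sum(w * spin(u) * spin(v) for (u, v), w in zip(edges, weights))

# Toy 4-vertex cycle encoded on 2 qubits; random stand-ins for the
# expectations a shallow circuit would be measured to produce.
rng = np.random.default_rng(0)
edges, weights = [(0, 1), (1, 2), (2, 3), (3, 0)], [1.0] * 4
z, x = rng.uniform(-1, 1, 2), rng.uniform(-1, 1, 2)
print(f"relaxed cut loss: {multibasis_maxcut_loss(edges, weights, z, x):.3f}")
```

Encoding two vertices per qubit in this way is what underlies the factor-of-two gain in effective quantum resources claimed in the abstract.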



Bio:

Anima Anandkumar is a Bren Professor at Caltech and Director of ML Research at NVIDIA. She was previously a Principal Scientist at Amazon Web Services. She has received several honors, including the Alfred P. Sloan Fellowship, the NSF CAREER Award, Young Investigator Awards from the DoD, and Faculty Fellowships from Microsoft, Google, Facebook, and Adobe. She is part of the World Economic Forum's Expert Network. She is passionate about designing principled AI algorithms and applying them to interdisciplinary applications. Her research focuses on unsupervised AI, optimization, and tensor methods.






Anwar Walid



Director of the Network Intelligence
and Distributed Systems Research Group
(Nokia Bell Labs)

Title:

High Performance Computation for Tensor Network Learning



Abstract:

In this talk, we study high performance computation for tensor networks to address time and space complexities that grow rapidly with tensor size. We propose primitives that exploit the parallelism in tensor learning for efficient implementation on GPUs.
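
As a generic illustration of the kind of primitive involved (a sketch of a standard kernel, not the talk's actual proposals), the mode-1 tensor-times-matrix product at the heart of many tensor learning algorithms maps onto a single batched matrix multiplication on the GPU via einsum.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

X = torch.randn(64, 32, 32, device=device)   # I x J x K data tensor
U = torch.randn(16, 64, device=device)       # R x I factor matrix

# Mode-1 product: contract the first tensor index with U. The backend
# fuses this into one GEMM over the flattened trailing modes, which is
# where the GPU parallelism lives.
Y = torch.einsum("ri,ijk->rjk", U, X)
print(Y.shape)  # torch.Size([16, 32, 32])
```

Higher-order contractions and batching across samples follow the same pattern, which is what makes such primitives amenable to parallel GPU execution.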



Bio:

Anwar Walid is Director of Network Intelligence and Distributed Systems Research, and a Distinguished Member of Research at Bell Labs (Murray Hill, N.J.). He also served at Bell Labs as Head of the Mathematics of Systems Research Department and Director of Global University Research Partnerships. He received his Ph.D. from Columbia University, and his B.S. and M.S. from New York University. He holds over 20 granted US and international patents on various aspects of computing, communications, and networking. He has received awards from the ACM and IEEE, including the 2019 ACM SIGCOMM Networking Systems Award for “development of a networking system that has had a significant impact on the world of computer networking” and the 2017 IEEE Communications Society William R. Bennett Prize. Dr. Walid has served on various editorial boards, including the IEEE IoT Journal’s 2019 Special Issue on AI-Enabled Cognitive Communications and Networking for IoT and the IEEE Transactions on Network Science. He served as General Co-Chair of the 2018 IEEE/ACM Conference on Connected Health (CHASE). He is an adjunct Professor in Columbia University’s Electrical Engineering Department, a Fellow of the IEEE, and an elected member of IFIP (International Federation for Information Processing) Working Group 7.3. https://www.bell-labs.com/usr/anwar.walid






Danilo P. Mandic



Professor
Imperial College London, UK

Title:

Multi-graph Tensor Networks: Big Data Analytics on Irregular Domains



Abstract:

The current availability of powerful computers and huge data sets is creating new opportunities in computational mathematics to bring together concepts and tools from tensor algebra, graph theory, machine learning and signal processing. In discrete mathematics, a tensor is merely a collection of points (nodes in a graph) which are arranged as a multiway array. The power of such tensors lies in the fact that they can represent entities as diverse as the users of social networks or financial market data, and that these can be transformed into low-dimensional signals which can be analyzed using data analytics tools. In this talk, we aim to provide a comprehensive introduction to advanced data analytics on graphs using tensors. We will then establish a relationship between tensors and graphs, in order to move beyond the standard regular sampling in time and space. This facilitates modelling in many important areas, including communication networks, computer science, linguistics, social sciences, biology, physics, chemistry, transport, town planning, financial systems, personal health and many others. The tensor and graph topologies will be revisited from a modern data analytics point of view, and we will then proceed to establish a taxonomy of graph tensor networks. With this as a basis, we show that such a framework allows even the most challenging machine learning tasks, such as clustering, to be performed in an intuitive and physically meaningful way. Unique aspects of the multi-graph tensor network (MGTN) framework will be outlined, such as its benefits for processing data acquired on irregular domains, its ability to fine-tune statistical learning procedures through local information processing, the concepts of random signals on graphs and tensors, the learning of graph topology from data observed on graphs, and its confluence with deep neural networks and Big Data. Extensive examples are included to render the concepts more concrete and to facilitate a greater understanding of the underlying principles.
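
To make the graph-tensor pairing concrete, here is a small illustrative sketch (synthetic data, not an example from the talk) of a tensor-valued signal whose first mode indexes the vertices of an irregular graph, smoothed by a single graph shift along the vertex mode only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6                                # vertices of an irregular graph
A = rng.integers(0, 2, (n, n))
A = np.triu(A, 1); A = A + A.T       # symmetric adjacency, no self-loops
d = np.maximum(A.sum(1), 1)
S = A / d[:, None]                   # row-normalized graph shift operator

X = rng.standard_normal((n, 4, 3))   # vertices x time x features
# One multilinear graph shift: filter along the vertex mode, leaving the
# regular time and feature modes untouched.
X_smooth = np.einsum("uv,vtf->utf", S, X)
print(X_smooth.shape)                # (6, 4, 3)
```

Replacing regular time/space sampling with the graph shift in one mode, while keeping tensor structure in the others, is the basic move the multi-graph tensor network framework builds on.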



Bio:

Danilo P. Mandic is a Professor of signal processing at Imperial College London, UK, and has been working in the areas of adaptive signal processing and bioengineering. He is a Fellow of the IEEE and Vice-President of the International Neural Networks Society (INNS). He has more than 300 publications in journals and conferences. Prof Mandic received the 2019 Dennis Gabor Award from the International Neural Networks Society (for outstanding achievements in neural engineering), and the President's Award for Excellence in Postgraduate Supervision at Imperial. He has coauthored the research monographs "Recurrent Neural Networks for Prediction" (Wiley, 2001), "Complex Valued Nonlinear Adaptive Filters: Noncircularity, Widely Linear and Neural Models" (Wiley, 2009), and "Tensor Networks for Dimensionality Reduction and Large Scale Optimisation" (Now Publishers, 2017). He is a 2018 recipient of the Best Paper Award in IEEE Signal Processing Magazine, for the paper "Tensor Decompositions for Signal Processing Applications", and a winner of an Outstanding Paper Award at the IEEE ICASSP 2021 conference. His work related to this talk is a series of three articles entitled "Data Analytics on Graphs", published in Foundations and Trends in Machine Learning, December 2020, and his monograph on Tensor Networks.






Nadav Cohen



Assistant Professor
School of Computer Science
Tel Aviv University

Title:

Implicit Regularization in Quantum Tensor Networks



Abstract:

The mysterious ability of neural networks to generalize is believed to stem from an implicit regularization, a tendency of gradient-based optimization to fit training data with predictors of low “complexity.” Despite vast efforts, a satisfying formalization of this intuition is lacking. In this talk I will present a series of works theoretically analyzing the implicit regularization in quantum tensor networks, known to be equivalent to certain (non-linear) neural networks. Through dynamical characterizations, I will establish an implicit regularization towards low tensor ranks, different from any type of norm minimization, in contrast to prior beliefs. I will then discuss implications of this finding to both theory (potential explanation for generalization over natural data) and practice (compression of neural network layers, novel regularization schemes). An underlying theme of the talk will be the potential of quantum tensor networks to unravel mysteries behind deep learning. Works covered in the talk were in collaboration with Sanjeev Arora, Wei Hu, Yuping Luo, Asaf Maman and Noam Razin.
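
A toy reproduction of the phenomenon (an assumed setup for illustration, not the experiments from the papers covered): gradient descent on a depth-3 matrix factorization, fit to a few observed entries of a low-rank target, drives the end-to-end matrix toward low rank without any explicit norm or rank penalty.

```python
import torch

torch.manual_seed(0)
n, depth = 10, 3
target = torch.outer(torch.randn(n), torch.randn(n))  # rank-1 ground truth
mask = torch.rand(n, n) < 0.3                         # sparse observations

# Deep factorization W = W3 @ W2 @ W1 with small near-zero initialization.
Ws = [(0.05 * torch.randn(n, n)).requires_grad_() for _ in range(depth)]
opt = torch.optim.SGD(Ws, lr=0.2)

def end_to_end():
    W = Ws[0]
    for Wi in Ws[1:]:
        W = Wi @ W
    return W

for _ in range(3000):
    loss = ((end_to_end() - target)[mask] ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# In typical runs the spectrum collapses: one dominant singular value,
# i.e. an implicit bias toward low rank rather than low norm.
print(torch.linalg.svdvals(end_to_end().detach())[:4])
```

The dynamical analyses in the talk characterize precisely this kind of trajectory, and the tensor network view extends it beyond matrices.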

Bio:

Nadav Cohen is an Assistant Professor of Computer Science at Tel Aviv University. His research focuses on the theoretical and algorithmic foundations of deep learning. In particular, he is interested in mathematically analyzing aspects of expressiveness, optimization and generalization, with the goal of deriving theoretically founded procedures and algorithms that will improve practical performance. Nadav earned a BSc in electrical engineering and a BSc in mathematics (both summa cum laude) at the Technion Excellence Program for Distinguished Undergraduates. He obtained his PhD (direct track) at the School of Computer Science and Engineering at the Hebrew University of Jerusalem, after which he was a postdoctoral research scholar at the School of Mathematics of the Institute for Advanced Study in Princeton. For his contributions to the theoretical understanding of deep learning, Nadav has received a number of awards, including the Google Doctoral Fellowship in Machine Learning, the Rothschild Postdoctoral Fellowship, the Zuckerman Postdoctoral Fellowship, and the Google Research Scholar Award.






Stefanos Kourtis



Assistant Professor
in the Department of Physics at
Université de Sherbrooke

Title:

Classical and Quantum Computations as Tensor Networks



Abstract:

In this talk I will illustrate how tensor networks can be used as a universal tool to represent classical and quantum computations. I will then introduce highly efficient tensor network algorithms that perform or simulate computations in practice and demonstrate their performance on benchmarks of current interest, such as model counting and quantum circuit simulation. I will close with an outline of ongoing work and an outlook on future directions.
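
A minimal, self-contained sketch of the counting-as-contraction viewpoint (a toy formula of our own choosing, not a benchmark from the talk): each clause of a CNF becomes a 0/1 tensor over its variables' truth values, repeating an index ties together all occurrences of the same variable, and a single contraction returns the model count.

```python
import numpy as np

# Clause tensors for (x1 OR x2) AND ((NOT x2) OR x3): entry = 1 iff the
# clause is satisfied by that assignment (index 0 = False, 1 = True).
c1 = np.ones((2, 2)); c1[0, 0] = 0   # x1 OR x2 fails only at (F, F)
c2 = np.ones((2, 2)); c2[1, 0] = 0   # NOT x2 OR x3 fails only at (T, F)

# Shared index b is the variable x2; indices a (x1) and c (x3) are summed
# freely. Contracting everything sums over all assignments, i.e. counts
# the models of the formula.
count = np.einsum("ab,bc->", c1, c2)
print(int(count))  # 4 satisfying assignments out of 8
```

The same construction scales to arbitrary CNF instances and, with complex-valued gate tensors in place of clause tensors, to amplitudes of quantum circuits, which is what makes tensor networks a universal language for both settings.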



Bio:

Stefanos Kourtis is an Assistant Professor in the Department of Physics at Université de Sherbrooke. He holds a Research Chair in Quantum Computing awarded by the Ministère de l'Économie et de l'Innovation du Québec. He was previously a postdoctoral scholar at Boston University (2016-2020), Princeton University (2015-2016), and the University of Cambridge (2014-2015). He received his PhD in theoretical condensed-matter physics from Dresden University of Technology in 2014.