Important Dates


Submission Deadline: Oct. 10, 2020 (extended from Oct. 2, 2020)
Acceptance Notification: Oct. 30, 2020
Camera-ready Deadline: Dec. 1, 2020
Workshop Date: Dec. 11, 2020


Submission Guidelines


The workshop solicits the following types of submissions.

  • Regular paper (up to 8 pages, not including references) describing original research work that has not been published before.
  • Position paper (up to 6 pages, not including references) reporting preliminary research findings or discussing new and inspiring research directions.
  • Extended abstract (up to 4 pages, not including references) highlighting significant work that has already been published.

Formatting guidelines, LaTeX styles, and Word template:

  • neurips_2020.tex -- LaTeX template
  • neurips_2020.sty -- style file for LaTeX 2e
  • neurips_2020.pdf -- example PDF output

The references and any appendix should be included in the same (single) PDF document and do not count toward the page limit.
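
For authors new to the NeurIPS format, a minimal document skeleton is sketched below. It is illustrative only and should be checked against the template files above; the package options shown (no option for the anonymized submission, [final] for the camera-ready version) follow the usual NeurIPS 2020 conventions.

    % Minimal skeleton for a submission (illustrative; compare with neurips_2020.tex)
    \documentclass{article}
    \usepackage{neurips_2020}            % anonymized submission version for double-blind review
    % \usepackage[final]{neurips_2020}   % uncomment for the camera-ready version after acceptance

    \title{Title of Your QTNML Workshop Submission}
    \author{Anonymous Author(s)}

    \begin{document}
    \maketitle
    % Main text here; references and any appendix go in the same single PDF.
    \end{document}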



Submission site: https://cmt3.research.microsoft.com/QTNML2020



The reviewing process is double-blind. All submissions will be peer reviewed and evaluated based on technical contribution, originality, relevance to areas of interest, and presentation clarity. Papers may be accepted for either oral or poster presentation. We will also publish all the accepted papers and extended abstracts on the workshop website (with the authors' permission).


Relevant Topics


This workshop aims to promote discussions among researchers investigating innovative QTNML technologies from the perspectives of fundamental theory and algorithms, novel approaches in machine learning and deep neural networks, and various applications in computer vision, biomedical image processing, natural language processing, and many other related fields. Furthermore, researchers from multiple disciplines, including physics, mathematics, and machine learning, are encouraged to join the workshop to discuss challenging problems and future research directions.

We are soliciting contributions that address a wide range of theoretical and practical issues including, but not limited to:

  • Fundamental theory and algorithms for quantum tensor networks in machine learning
  • Quantum speedups in machine learning
  • Quantum machine learning via tensor networks
  • Quantum reinforcement learning
  • Quantum understanding of classical machine learning
  • Supervised, unsupervised, and self-supervised quantum machine learning via tensor networks
  • Tensor networks for dimensionality reduction and quantum-enhanced feature extraction
  • Tensor networks for probabilistic modeling
  • Tensor networks for quantum many-body systems
  • Tensor network representations for quantum generative models and graphical models
  • Tensor network contraction
  • Automatic differentiable programming for quantum tensor networks in machine learning
  • Software development for tensor networks and quantum machine learning
  • High performance quantum tensor networks on GPU/FPGA/ASIC platforms
  • High performance classical simulation of quantum machine learning
  • Applications of tensor networks and quantum machine learning
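
To make the notion of tensor network contraction in the list above concrete for readers from other fields, the short NumPy sketch below contracts a three-site matrix product state (MPS) into a full tensor. It is purely illustrative and not part of the call itself; the tensor names (A1, A2, A3) and the dimensions d and chi are invented for this example.

    import numpy as np

    # Toy matrix product state (MPS): 3 sites, physical dimension d = 2, bond dimension chi = 4.
    d, chi = 2, 4
    A1 = np.random.rand(d, chi)        # left boundary tensor:  (physical, right bond)
    A2 = np.random.rand(chi, d, chi)   # bulk tensor:           (left bond, physical, right bond)
    A3 = np.random.rand(chi, d)        # right boundary tensor: (left bond, physical)

    # Contract the shared bond indices to recover the full d x d x d tensor.
    psi = np.einsum('ia,ajb,bk->ijk', A1, A2, A3)
    print(psi.shape)  # (2, 2, 2)

In practice, dedicated tensor network libraries choose optimized contraction orders rather than forming the full tensor explicitly.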