Oriel Kiss

Ph.D. Researcher in quantum computing





Quantum Technology Initiative

CERN








Trainability barriers and opportunities in quantum generative modeling


Journal article


Manuel S. Rudolph, Sacha Lerch, Supanut Thanasilp, Oriel Kiss, Sofia Vallecorsa, Michele Grossi, Zoë Holmes
arXiv preprint arXiv:2305.02881, 2023 May

arXiv
Cite

APA
Rudolph, M. S., Lerch, S., Thanasilp, S., Kiss, O., Vallecorsa, S., Grossi, M., & Holmes, Z. (2023). Trainability barriers and opportunities in quantum generative modeling. arXiv preprint arXiv:2305.02881.


Chicago/Turabian
Rudolph, Manuel S., Sacha Lerch, Supanut Thanasilp, Oriel Kiss, Sofia Vallecorsa, Michele Grossi, and Zoë Holmes. “Trainability Barriers and Opportunities in Quantum Generative Modeling.” arXiv preprint arXiv:2305.02881 (May 2023).


MLA
Rudolph, Manuel S., et al. “Trainability Barriers and Opportunities in Quantum Generative Modeling.” arXiv preprint arXiv:2305.02881, May 2023.


BibTeX

@article{rudolph2023a,
  title = {Trainability barriers and opportunities in quantum generative modeling},
  year = {2023},
  month = may,
  journal = {arXiv preprint arXiv:2305.02881},
  author = {Rudolph, Manuel S. and Lerch, Sacha and Thanasilp, Supanut and Kiss, Oriel and Vallecorsa, Sofia and Grossi, Michele and Holmes, Zoë},
}

Figures: trainability results for quantum generative modeling; overview of the quantum generative modeling workflow; training on 4×4 calorimeter images.
Abstract

Quantum generative models, in providing inherently efficient sampling strategies, show promise for achieving a near-term advantage on quantum hardware. Nonetheless, important questions remain regarding their scalability. In this work, we investigate the barriers to the trainability of quantum generative models posed by barren plateaus and exponential loss concentration. We explore the interplay between explicit and implicit models and losses, and show that using implicit generative models (such as quantum circuit-based models) with explicit losses (such as the KL divergence) leads to a new flavour of barren plateau. In contrast, the Maximum Mean Discrepancy (MMD), which is a popular example of an implicit loss, can be viewed as the expectation value of an observable that is either low-bodied and trainable, or global and untrainable depending on the choice of kernel. However, in parallel, we highlight that the low-bodied losses required for trainability cannot in general distinguish high-order correlations, leading to a fundamental tension between exponential concentration and the emergence of spurious minima. We further propose a new local quantum fidelity-type loss which, by leveraging quantum circuits to estimate the quality of the encoded distribution, is both faithful and enjoys trainability guarantees. Finally, we compare the performance of different loss functions for modelling real-world data from the High-Energy-Physics domain and confirm the trends predicted by our theoretical results.
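The abstract notes that the MMD's trainability hinges on the choice of kernel. As a rough classical illustration only (not the paper's quantum implementation; the Gaussian kernel and its bandwidth `sigma` are assumed here for concreteness), the squared MMD between two distributions can be estimated directly from samples:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise RBF kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    diff = x[:, None, :] - y[None, :, :]
    return np.exp(-np.sum(diff ** 2, axis=-1) / (2.0 * sigma ** 2))

def mmd_squared(p, q, sigma=1.0):
    # Biased (V-statistic) estimate of MMD^2 between sample sets p and q
    return (gaussian_kernel(p, p, sigma).mean()
            + gaussian_kernel(q, q, sigma).mean()
            - 2.0 * gaussian_kernel(p, q, sigma).mean())

rng = np.random.default_rng(0)
# Same distribution -> MMD^2 estimate near zero
same = mmd_squared(rng.normal(0.0, 1.0, (500, 1)),
                   rng.normal(0.0, 1.0, (500, 1)))
# Shifted distribution -> markedly larger MMD^2 estimate
diff = mmd_squared(rng.normal(0.0, 1.0, (500, 1)),
                   rng.normal(3.0, 1.0, (500, 1)))
```

In the quantum setting discussed in the paper, this same quantity is recast as the expectation value of a kernel-dependent observable, which is what makes its body-ness, and hence its trainability, depend on the kernel.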

