Oriel Kiss

Ph.D. Researcher in quantum computing



Quantum Technology Initiative

CERN

Trainability barriers and opportunities in quantum generative modeling


Journal article


Manuel S. Rudolph, Sacha Lerch, Supanut Thanasilp, Oriel Kiss, Sofia Vallecorsa, Michele Grossi, Zoë Holmes
npj Quantum Information, vol. 10, November 2024, p. 116


Cite

APA
Rudolph, M. S., Lerch, S., Thanasilp, S., Kiss, O., Vallecorsa, S., Grossi, M., & Holmes, Z. (2024). Trainability barriers and opportunities in quantum generative modeling. npj Quantum Information, 10, 116. https://doi.org/10.1038/s41534-024-00902-0


Chicago/Turabian
Rudolph, Manuel S., Sacha Lerch, Supanut Thanasilp, Oriel Kiss, Sofia Vallecorsa, Michele Grossi, and Zoë Holmes. “Trainability Barriers and Opportunities in Quantum Generative Modeling.” npj Quantum Information 10 (November 2024): 116.


MLA
Rudolph, Manuel S., et al. “Trainability Barriers and Opportunities in Quantum Generative Modeling.” npj Quantum Information, vol. 10, Nov. 2024, p. 116, doi:10.1038/s41534-024-00902-0.


BibTeX

@article{rudolph2024a,
  title = {Trainability barriers and opportunities in quantum generative modeling},
  year = {2024},
  month = nov,
  journal = {npj Quantum Information},
  pages = {116},
  volume = {10},
  doi = {10.1038/s41534-024-00902-0},
  author = {Rudolph, Manuel S. and Lerch, Sacha and Thanasilp, Supanut and Kiss, Oriel and Vallecorsa, Sofia and Grossi, Michele and Holmes, Zoë}
}

Figures: trainability results for quantum generative modeling; overview of the quantum generative modeling workflow; training on 4x4 calorimeter images.
Quantum generative models, which provide inherently efficient sampling strategies, show promise for achieving a near-term advantage on quantum hardware. Nonetheless, important questions remain regarding their scalability. In this work, we investigate the barriers to the trainability of quantum generative models posed by barren plateaus and exponential loss concentration. We explore the interplay between explicit and implicit models and losses, and show that using implicit generative models (such as quantum circuit-based models) with explicit losses (such as the KL divergence) leads to a new flavour of barren plateau. In contrast, the Maximum Mean Discrepancy (MMD), a popular example of an implicit loss, can be viewed as the expectation value of an observable that is either low-bodied and trainable, or global and untrainable, depending on the choice of kernel. In parallel, however, we highlight that the low-bodied losses required for trainability cannot in general distinguish high-order correlations, leading to a fundamental tension between exponential concentration and the emergence of spurious minima. We further propose a new local quantum fidelity-type loss which, by leveraging quantum circuits to estimate the quality of the encoded distribution, is both faithful and enjoys trainability guarantees. Finally, we compare the performance of different loss functions for modelling real-world data from the high-energy physics domain and confirm the trends predicted by our theoretical results.
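For orientation, the two losses contrasted in the abstract have standard textbook definitions; the forms below are a reminder of those conventional definitions rather than the paper's exact formulation (in particular, the kernel k and its bandwidth are design choices the article analyses, not fixed here). For a target distribution p and a model distribution q_\theta over measurement outcomes x:

\[
D_{\mathrm{KL}}(p \,\|\, q_\theta) = \sum_{x} p(x)\, \log \frac{p(x)}{q_\theta(x)},
\qquad
\mathrm{MMD}^2(p, q_\theta) = \mathbb{E}_{x,x' \sim p}\left[k(x,x')\right] - 2\, \mathbb{E}_{x \sim p,\, y \sim q_\theta}\left[k(x,y)\right] + \mathbb{E}_{y,y' \sim q_\theta}\left[k(y,y')\right].
\]

Evaluating the KL divergence requires explicit access to the model probabilities q_\theta(x), whereas the MMD can be estimated from samples alone; this is the explicit/implicit distinction the abstract draws, with the choice of kernel k determining whether the corresponding MMD observable is low-bodied (trainable) or global (prone to exponential concentration).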

