Quantum machine learning is a promising programming paradigm for the optimization of quantum algorithms in the current era of noisy intermediate-scale quantum computers. A fundamental challenge in quantum machine learning is generalization, as the designer targets performance under testing conditions while having access only to limited training data. Existing generalization analyses, while identifying important general trends and scaling laws, cannot be used to assign reliable and informative “error bars” to the decisions made by quantum models. In this article, we propose a general methodology that can reliably quantify the uncertainty of quantum models, irrespective of the amount of training data, the number of shots, the ansatz, the training algorithm, and the presence of quantum hardware noise. The approach, which builds on probabilistic conformal prediction (CP), turns an arbitrary, possibly small, number of shots from a pretrained quantum model into a set prediction, e.g., an interval, that provably contains the true target with any desired coverage level. Experimental results confirm the theoretical calibration guarantees of the proposed framework, referred to as quantum CP.
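To make the calibration mechanism concrete, the sketch below illustrates probabilistic conformal prediction for a scalar regression target: a calibration step converts per-input predictive samples (e.g., measurement shots from a pretrained quantum model, mapped to scalar predictions) into a calibrated radius, and a prediction step returns a union of intervals around the test-time samples. This is a minimal sketch under stated assumptions, not the paper's implementation; the function names `pcp_calibrate` and `pcp_predict`, the NumPy interface, and the toy Gaussian data are illustrative.

```python
import numpy as np

def pcp_calibrate(cal_samples, cal_targets, alpha=0.1):
    """Calibration step of probabilistic conformal prediction (PCP).

    cal_samples : (n, K) array -- K predictive samples per calibration
        input, e.g., shots from a pretrained quantum model mapped to
        scalar predictions.
    cal_targets : (n,) array -- true scalar targets.
    alpha : miscoverage level; the sets target (1 - alpha) coverage.
    Returns the calibrated radius used to build prediction sets.
    """
    # Nonconformity score: distance from each true target to the
    # nearest of its K predictive samples.
    scores = np.min(np.abs(cal_samples - cal_targets[:, None]), axis=1)
    n = scores.shape[0]
    # Finite-sample-corrected empirical quantile, clipped at 1.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, level, method="higher")

def pcp_predict(test_samples, radius):
    """Prediction set: union of intervals of the calibrated radius
    centered at each test-time predictive sample, overlaps merged."""
    intervals = []
    for s in np.sort(np.asarray(test_samples)):
        lo, hi = s - radius, s + radius
        if intervals and lo <= intervals[-1][1]:
            intervals[-1][1] = max(intervals[-1][1], hi)  # merge overlap
        else:
            intervals.append([lo, hi])
    return [tuple(iv) for iv in intervals]

# Toy usage: Gaussian stand-ins for per-input shot histograms.
rng = np.random.default_rng(0)
cal_samples = rng.normal(0.0, 1.0, size=(200, 16))
cal_targets = rng.normal(0.0, 1.0, size=200)
radius = pcp_calibrate(cal_samples, cal_targets, alpha=0.1)
print(pcp_predict(rng.normal(0.0, 1.0, size=16), radius))
```

Note that the coverage guarantee holds irrespective of the number of samples K per input, which is what allows the framework to operate with an arbitrary, possibly small, number of shots; more shots simply tend to yield tighter (smaller) prediction sets.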