Abstract:
Error mitigation techniques are crucial to achieving near-term quantum advantage. Classical post-processing of quantum computation outcomes is a popular approach for error mitigation, which includes methods such as Zero Noise Extrapolation, Virtual Distillation, and learning-based error mitigation. However, these techniques are limited by the uncertainty that propagates from the finite number of shots in quantum measurements. In this work, we introduce general and unbiased methods for quantifying the uncertainty and error of error-mitigated observables based on the strategic sampling of error mitigation outcomes. We then extend our approach to optimize the performance and robustness of error mitigation under uncertainty. To illustrate our methods, we apply them to Zero Noise Extrapolation and Clifford Data Regression for the ground state of the XY model, simulated using depolarizing and IBM Toronto noise models, respectively. In particular, we optimize the choice of noise levels and the allocation of shots for Zero Noise Extrapolation, and the distribution of the training circuits for Clifford Data Regression. While our methods are readily applicable to any post-processing-based error mitigation approach, in practice they must not be prohibitively expensive, even though they optimize error mitigation hyperparameters by sampling a statistical distribution of error mitigation outcomes. By leveraging surrogate-based optimization, we show that our methods can efficiently perform optimal design for a Zero Noise Extrapolation implementation. We further demonstrate that the learned Zero Noise Extrapolation hyperparameters transfer to other similar circuits.

