This article examines the role of the quantum Fisher information matrix (FIM) in enhancing the performance of reinforcement learning agents based on parameterized quantum circuits (PQCs). While previous studies have demonstrated the effectiveness of PQC-based policies preconditioned with the quantum FIM in contextual bandits, the quantum FIM's impact in broader reinforcement learning settings, such as Markov decision processes, is less clear. Through a detailed analysis of Löwner inequalities between the quantum and classical FIMs, this study uncovers the nuanced distinctions and implications of using each type of FIM. Our results indicate that a PQC-based agent using the quantum FIM without additional insights typically incurs a larger approximation error and is not guaranteed to outperform an agent using the classical FIM. Empirical evaluations on classic control benchmarks suggest that although quantum FIM preconditioning outperforms standard gradient ascent, it is not, in general, superior to classical FIM preconditioning.
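The preconditioning discussed above amounts to a natural-gradient policy update: instead of ascending along the raw gradient, the gradient is multiplied by the inverse of a Fisher information matrix (quantum or classical). A minimal sketch of such an update step follows; the function name, learning rate, and damping term are illustrative assumptions, not the paper's implementation, and the damped linear solve is a standard trick for ill-conditioned FIMs.

```python
import numpy as np

def natural_gradient_step(theta, grad, fim, lr=0.1, damping=1e-3):
    """One FIM-preconditioned (natural-gradient) ascent step.

    Solves (F + damping * I) d = grad rather than inverting F directly,
    which is numerically safer when the FIM is near-singular.
    theta, grad: parameter and gradient vectors; fim: square PSD matrix.
    """
    d = np.linalg.solve(fim + damping * np.eye(len(theta)), grad)
    return theta + lr * d

# Toy check: with an identity FIM and no damping, the update
# reduces to vanilla gradient ascent, theta + lr * grad.
theta = np.zeros(3)
grad = np.array([1.0, 2.0, 3.0])
new_theta = natural_gradient_step(theta, grad, np.eye(3), lr=0.1, damping=0.0)
```

Whether `fim` is the classical or the quantum FIM of the PQC policy is exactly the distinction the article analyzes; the update rule itself is the same in both cases.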
André Sequeira, Luis Paulo Santos, Luis Soares Barbosa