Machine learning (ML) classification tasks can be carried out on a quantum computer (QC) using probabilistic quantum memory (PQM) and its extension, parametric PQM (P-PQM), by calculating the Hamming distance between an input pattern and a database of r patterns, each containing z features with a distinct attributes. For PQM and P-PQM to correctly compute the Hamming distance, the features must be encoded using one-hot encoding, which is memory intensive for multiattribute datasets with a > 2. We can represent multiattribute data more compactly by replacing one-hot encoding with label encoding; both encodings yield the same feature-level Hamming distance. Implementing this replacement on a classical computer is trivial. However, making this replacement on a QC is not straightforward because PQM and P-PQM operate at the bit level rather than at the feature level (a feature is represented by a binary string of 0's and 1's). We present an enhanced P-PQM, called efficient P-PQM (EP-PQM), that allows label encoding of data stored in a PQM data structure and reduces the circuit depth of the data storage and retrieval procedures. We show implementations for an ideal QC and a noisy intermediate-scale quantum (NISQ) device. Our complexity analysis shows that the EP-PQM approach requires O(z log2(a)) qubits, as opposed to O(za) qubits for P-PQM. EP-PQM also requires fewer gates, reducing the gate count from O(rza) to O(rz log2(a)). For five datasets, we demonstrate that training an ML classification model using EP-PQM requires 48% to 77% fewer qubits than P-PQM for datasets with a > 2. EP-PQM reduces circuit depth by 60% to 96%, depending on the dataset; with a decomposed circuit, the reduction is larger still, ranging from 94% to 99%. EP-PQM requires less space; thus, it can train on and classify larger datasets than previous PQM implementations on NISQ devices.
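The following is a minimal classical sketch (plain Python, no quantum libraries) of the encoding argument above: at the feature level, one-hot and label encodings carry the same Hamming-distance information, while label encoding uses only ceil(log2(a)) bits per feature instead of a. The pattern values and helper names below are illustrative, not taken from the paper.

```python
from math import ceil, log2

def one_hot(feature, a):
    """Encode a feature value in [0, a) as an a-bit one-hot string."""
    return [1 if i == feature else 0 for i in range(a)]

def label(feature, a):
    """Encode a feature value in [0, a) as a ceil(log2(a))-bit binary string."""
    width = max(1, ceil(log2(a)))
    return [int(b) for b in format(feature, f"0{width}b")]

def feature_hamming(p, q):
    """Feature-level Hamming distance: number of differing features."""
    return sum(x != y for x, y in zip(p, q))

a = 4                      # distinct attribute values per feature
p = [0, 2, 3, 1]           # input pattern, z = 4 features
q = [0, 1, 3, 2]           # stored pattern

# Bit-level distance on one-hot data is exactly 2x the feature-level
# distance, so it preserves the pattern ranking that PQM relies on.
bits_p = sum((one_hot(f, a) for f in p), [])
bits_q = sum((one_hot(f, a) for f in q), [])
bit_dist = sum(x != y for x, y in zip(bits_p, bits_q))
assert bit_dist == 2 * feature_hamming(p, q)

# Label encoding stores the same information in z*ceil(log2(a)) bits
# instead of z*a, but its raw bit-level distance no longer tracks the
# feature-level distance -- the gap EP-PQM's circuits are built to close.
print(len(bits_p), "one-hot bits vs", len(p) * ceil(log2(a)), "label bits")
```

Running the sketch prints "16 one-hot bits vs 8 label bits", mirroring the O(za) versus O(z log2(a)) qubit counts reported above for this small example.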
