Nonnegative/Binary matrix factorization with a D-Wave quantum annealer

A Review of Machine Learning Classification Using Quantum Annealing for Real-World Applications

SN Computer Science, 2021

Optimizing the training of a machine learning pipeline helps reduce training costs and improve model performance. One such optimization strategy is quantum annealing, an emerging computing paradigm that has shown potential for optimizing the training of machine learning models. A physical quantum annealer has been realized by D-Wave Systems and is available to the research community for experiments. Recent experiments applying quantum annealing to a variety of machine learning applications have shown promising results in settings where classical machine learning techniques are constrained by scarce training data and high-dimensional features. This article explores the application of D-Wave's quantum annealer to optimizing machine learning pipelines for real-world classification problems. We review the application domains in which a physical quantum annealer has been used to train machine learning classifiers. We discuss and analyze the experiments performed on the D-Wave quantum annealer for applications such as image recognition, remote sensing imagery, computational biology, and particle physics. We discuss the possible advantages and the problems for which quantum annealing is likely to be advantageous over classical computation.

Convex Non-negative Matrix Factorization Through Quantum Annealing

2021 IEEE 23rd Int Conf on High Performance Computing & Communications; 7th Int Conf on Data Science & Systems; 19th Int Conf on Smart City; 7th Int Conf on Dependability in Sensor, Cloud & Big Data Systems & Application (HPCC/DSS/SmartCity/DependSys)

In this paper we provide a quantum version of the Convex Non-negative Matrix Factorization algorithm (Convex-NMF) using the D-Wave quantum annealer. More precisely, we use the D-Wave 2000Q to find a low-rank approximation of a fixed real-valued matrix X as the product of two non-negative factor matrices W and G such that the Frobenius norm of the difference X − XWG is minimized. To solve this optimization problem we proceed in two steps. In the first step we transform the global real-valued optimization problem over W and G into two quadratic unconstrained binary optimization (QUBO) problems, one in W and one in G. In the second step we use an alternating strategy between the two QUBO problems corresponding to W and G to find the global solution. Running these two QUBO problems on the D-Wave 2000Q requires an embedding onto its Chimera graph, and this embedding is limited by the number of available qubits. We therefore study the maximum amount of real-valued data that our approach can handle on the D-Wave 2000Q, as a function of the number of qubits used to represent each real variable. We also test our approach on the D-Wave 2000Q with several randomly generated data sets to show that it is faster than the classical approach and obtains the best results.
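
To make the two-step reduction concrete, the following is a minimal sketch of how one factor can be optimized as a QUBO while the other is held fixed. The fixed-point binary encoding and its precision p are illustrative assumptions, not necessarily the exact encoding used in the paper.

```latex
% Sketch of the alternating QUBO reduction (illustrative binary encoding).
\begin{align*}
&\text{Convex-NMF objective:}\qquad
  \min_{W,\,G \ge 0}\; \lVert X - XWG \rVert_F^2 \\[4pt]
&\text{Binary encoding of each entry of } W:\qquad
  W_{ij} \;=\; \sum_{k=0}^{p-1} 2^{-k}\, b_{ij}^{(k)},
  \qquad b_{ij}^{(k)} \in \{0,1\},\ \text{so } W_{ij} \ge 0 \text{ by construction.} \\[4pt]
&\text{With } G \text{ fixed, } X - XWG \text{ is affine in } W,
  \text{ so } \lVert X - XWG \rVert_F^2 \text{ is quadratic in the bits:}\qquad
  \min_{b \in \{0,1\}^m}\; b^\top Q_W\, b \quad\text{(a QUBO).} \\[4pt]
&\text{The symmetric expansion with } W \text{ fixed gives a QUBO in the bits of } G;
  \text{ the two QUBOs are solved alternately.}
\end{align*}
```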

Quantum computing methods for supervised learning

Quantum Machine Intelligence

The last two decades have seen explosive growth in the theory and practice of both quantum computing and machine learning. Modern machine learning systems process huge volumes of data and demand massive computational power. As silicon semiconductor miniaturization approaches its physical limits, quantum computing is increasingly being considered to cater to these computational needs in the future. Small-scale quantum computers and quantum annealers have been built and are already being sold commercially. Quantum computers can benefit machine learning research and application across all science and engineering domains. However, owing to its roots in quantum mechanics, research in this field has so far been confined within the purview of the physics community, and most work is not easily accessible to researchers from other disciplines. In this paper, we provide a background and summarize key results of quantum computing before exploring its application to supervised machine learning problems. By eschewing results from physics that have little bearing on quantum computation, we hope to make this introduction accessible to data scientists, machine learning practitioners, and researchers from across disciplines.

Towards Feature Selection for Ranking and Classification Exploiting Quantum Annealers

Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval

Feature selection is a common step in many ranking, classification, or prediction tasks and serves many purposes. By removing redundant or noisy features, the accuracy of ranking or classification can be improved and the computational cost of the subsequent learning steps can be reduced. However, feature selection can itself be a computationally expensive process. While for decades confined to theoretical algorithmic papers, quantum computing is now becoming a viable tool to tackle realistic problems, in particular special-purpose solvers based on the Quantum Annealing paradigm. This paper aims to explore the feasibility of using currently available quantum computing architectures to solve some quadratic feature selection algorithms for both ranking and classification. The experimental analysis includes 15 state-of-the-art datasets. The effectiveness obtained with quantum computing hardware is comparable to that of classical solvers, indicating that quantum computers are now reliable enough to tackle interesting problems. In terms of scalability, current-generation quantum computers are able to provide a limited speedup over certain classical algorithms, and hybrid quantum-classical strategies show lower computational cost for problems of more than a thousand features.
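
As a point of reference for what a "quadratic feature selection" problem looks like, the sketch below builds one common QUBO-style objective that rewards relevance and penalizes pairwise redundancy. The correlation-based scores, the trade-off parameter alpha, and the brute-force solver are illustrative assumptions, not the exact formulations or solvers evaluated in the paper.

```python
# Minimal sketch of a quadratic (QUBO-style) feature-selection objective.
# On real hardware the matrix Q would be handed to a quantum annealer;
# here an exhaustive search stands in for the solver on a toy-sized problem.
import itertools
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                 # 200 samples, 8 candidate features
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200) > 0).astype(float)

# Relevance: |corr(feature, target)|; redundancy: |corr(feature_i, feature_j)|.
rel = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
red = np.abs(np.corrcoef(X, rowvar=False))

alpha = 0.5                                    # relevance/redundancy trade-off
Q = alpha * red                                # quadratic penalty for correlated pairs
np.fill_diagonal(Q, -rel)                      # linear reward for relevant features

def qubo_energy(z, Q):
    """Energy z^T Q z of a binary selection vector z (1 = feature selected)."""
    return z @ Q @ z

best = min((np.array(bits) for bits in itertools.product([0, 1], repeat=Q.shape[0])),
           key=lambda z: qubo_energy(z, Q))
print("selected features:", np.flatnonzero(best))
```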

Quantum-Classical Hybrid Machine Learning for Image Classification (ICCAD Special Session Paper)

2021 IEEE/ACM International Conference On Computer Aided Design (ICCAD), 2021

Image classification is a major application domain for conventional deep learning (DL). Quantum machine learning (QML) has the potential to revolutionize image classification. In a typical DL-based image classification pipeline, a convolutional neural network (CNN) extracts features from the image and a multi-layer perceptron (MLP) creates the actual decision boundaries. QML models can be useful in both of these tasks. On one hand, convolution with parameterized quantum circuits (Quanvolution) can extract rich features from the images. On the other hand, quantum neural network (QNN) models can create complex decision boundaries. Therefore, Quanvolution and QNN can be combined into an end-to-end QML model for image classification. Alternatively, we can extract image features separately using classical dimension reduction techniques such as Principal Component Analysis (PCA) or a Convolutional Autoencoder (CAE) and use the extracted features to train a QNN. We review two proposals for quantum-classical hybrid ML models for image classification, namely the Quanvolutional Neural Network and classical dimension reduction followed by a QNN. In particular, we make a case for trainable filters in Quanvolution and for CAE-based feature extraction on image datasets (instead of dimension reduction using linear transformations such as PCA). We discuss various design choices, potential opportunities, and drawbacks of these models. We also release a Python-based framework to create and explore these hybrid models with a variety of design choices.
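
The second proposal (classical dimension reduction followed by a QNN) can be sketched in a few lines. The snippet below uses scikit-learn's PCA and a PennyLane variational circuit as stand-ins; the ansatz, qubit count, loss, and training loop are illustrative assumptions and not the framework released with the paper.

```python
# Sketch of the "classical dimension reduction + QNN" pipeline (illustrative).
import numpy as np
import pennylane as qml
from sklearn.decomposition import PCA

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnn(features, weights):
    # Encode PCA-reduced features as rotation angles, apply a trainable
    # entangling ansatz, and read out one Pauli-Z expectation as the score.
    qml.AngleEmbedding(features, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

# Toy data: reduce 16-dimensional inputs down to n_qubits features.
rng = np.random.default_rng(1)
X = rng.normal(size=(64, 16))
y = np.sign(X[:, 0])                          # +/-1 labels for a simple task
X_red = PCA(n_components=n_qubits).fit_transform(X)

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
weights = qml.numpy.array(rng.normal(size=shape), requires_grad=True)

def cost(w):
    preds = qml.numpy.stack([qnn(x, w) for x in X_red])
    return qml.numpy.mean((preds - y) ** 2)   # squared loss between <Z> and labels

opt = qml.GradientDescentOptimizer(stepsize=0.1)
for _ in range(20):                           # short training loop for illustration
    weights = opt.step(cost, weights)
print("final cost:", float(cost(weights)))
```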

Training Restricted Boltzmann Machines With a D-Wave Quantum Annealer

2021

The Restricted Boltzmann Machine (RBM) is an energy-based, undirected graphical model commonly used for unsupervised and supervised machine learning. Typically, an RBM is trained using contrastive divergence (CD). However, training with CD is slow and does not estimate the exact gradient of the log-likelihood cost function. In this work, the model expectation term of the RBM gradient is estimated using a quantum annealer (D-Wave 2000Q), where obtaining samples is faster than the Markov chain Monte Carlo (MCMC) sampling used in CD. Training and classification results of an RBM trained using quantum annealing are compared with the CD-based method. The performance of the two approaches is compared with respect to classification accuracy, image reconstruction, and log-likelihood. The classification accuracy results indicate comparable performance of the two methods. Image reconstruction and log-likelihood results show improved performance of the CD-based method. It is shown th...
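
For contrast with the annealer-based estimator, the snippet below sketches one step of standard CD-1 training for a small binary RBM in NumPy; the annealing approach described above replaces the Gibbs-sampled negative phase with samples drawn from the quantum hardware. Array shapes and hyperparameters are illustrative.

```python
# One CD-1 update for a binary RBM in NumPy. The quantum-annealing variant
# replaces the Gibbs-sampled negative phase (v_neg, p_h_neg) with annealer samples.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 16, 8, 0.05
W = 0.01 * rng.normal(size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)                      # visible biases
b_h = np.zeros(n_hidden)                       # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v_data, W, b_v, b_h):
    # Positive phase: hidden probabilities given the data.
    p_h = sigmoid(v_data @ W + b_h)
    h_sample = (rng.random(p_h.shape) < p_h).astype(float)

    # Negative phase: one Gibbs step back to visibles and hiddens.
    p_v = sigmoid(h_sample @ W.T + b_v)
    v_neg = (rng.random(p_v.shape) < p_v).astype(float)
    p_h_neg = sigmoid(v_neg @ W + b_h)

    # Gradient approximation: <v h>_data - <v h>_model (the model term is
    # what annealer samples would replace).
    n = v_data.shape[0]
    dW = (v_data.T @ p_h - v_neg.T @ p_h_neg) / n
    db_v = (v_data - v_neg).mean(axis=0)
    db_h = (p_h - p_h_neg).mean(axis=0)
    return W + lr * dW, b_v + lr * db_v, b_h + lr * db_h

batch = (rng.random((32, n_visible)) < 0.5).astype(float)  # toy binary batch
W, b_v, b_h = cd1_update(batch, W, b_v, b_h)
```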

Quantum algorithms for supervised and unsupervised machine learning

Machine-learning tasks frequently involve problems of manipulating and classifying large numbers of vectors in high-dimensional spaces. Classical algorithms for solving such problems typically take time polynomial in the number of vectors and the dimension of the space. Quantum computers are good at manipulating high-dimensional vectors in large tensor product spaces. This paper provides supervised and unsupervised quantum machine learning algorithms for cluster assignment and cluster finding. Quantum machine learning can take time logarithmic in both the number of vectors and their dimension, an exponential speed-up over classical algorithms. In machine learning, information processors perform tasks of sorting, assembling, assimilating, and classifying information [1-2]. In supervised learning, the machine infers a function from a set of training examples. In unsupervised learning, the machine tries to find hidden structure in unlabeled data. Recent studies and applications focus in particular on the problem of large-scale machine learning [2] (big data), where the training set and/or the number of features is large. Various results on quantum machine learning investigate the use of quantum information processors to perform machine learning tasks [3-9], including pattern matching [3], Probably Approximately Correct learning [4], feedback learning for quantum measurement [5], binary classifiers [6-7], and quantum support vector machines [8].
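
As a classical point of comparison for the cluster-assignment task, the snippet below assigns query vectors to the nearest of k centroids. Each assignment costs time proportional to the dimension, and forming the centroids scales with the number of vectors, which is the polynomial dependence the quantum routines aim to reduce to logarithmic. The data, dimension, and k are illustrative.

```python
# Classical nearest-centroid cluster assignment: per-query cost scales with the
# dimension d, and building centroids scales with the number of vectors N.
import numpy as np

rng = np.random.default_rng(0)
d, k = 128, 3                                   # illustrative dimension and cluster count
centroids = rng.normal(size=(k, d))             # e.g. means of labeled training clusters
queries = rng.normal(size=(10, d))

# Squared Euclidean distance from every query to every centroid: O(k * d) per query.
dists = ((queries[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
assignments = dists.argmin(axis=1)
print(assignments)
```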

Supervised Learning Using Quantum Technology

2020

In this paper, we present classical machine learning algorithms enhanced by quantum technology to classify a data set. The data set contains binary input variables and binary output variables. The goal is to extend classical approaches such as neural networks by using quantum machine learning principles. Classical algorithms struggle as the dimensionality of the feature space increases. We examine the use of quantum technologies to speed up these classical algorithms and to introduce the quantum paradigm into the machine diagnostics domain. Most prognosis models based on binary or multi-valued classification have become increasingly complex in recent years. Starting with a short introduction to quantum computing, we present an approach that combines quantum computing and classical machine learning. Further, we show different implementations of quantum classification algorithms. Finally, the algorithms presented are applied to a data set. The re...