**Mohammad Amin**, D-Wave Systems

*Quantum Boltzmann Machine using a Quantum Annealer*

Machine learning is a rapidly growing field in computer science with applications in computer vision, voice recognition, medical diagnosis, spam filtering, search engines, etc. In this presentation, I will introduce a new machine learning approach based on the quantum Boltzmann distribution of a transverse-field Ising model. Due to the non-commutative nature of quantum mechanics, the training process of the Quantum Boltzmann Machine (QBM) can become nontrivial. I will show how to circumvent this problem by introducing bounds on the quantum probabilities. This allows the QBM to be trained efficiently by sampling. I will then show examples of QBM training with and without the bound, using exact diagonalization, and compare the results with classical Boltzmann training. Finally, after a brief introduction to D-Wave quantum annealing processors, I will discuss the possibility of using such processors for QBM training and applications.
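As a rough illustration of the setting (not the speaker's code), the quantum Boltzmann distribution of a small transverse-field Ising model can be computed by exact diagonalization; the fields, couplings, and system size below are arbitrary toy values:

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], float)
sz = np.array([[1, 0], [0, -1]], float)
I2 = np.eye(2)

def op(single, site, n):
    """Embed a single-qubit operator at `site` in an n-qubit Hilbert space."""
    return reduce(np.kron, [single if k == site else I2 for k in range(n)])

def tfim_hamiltonian(b, w, gamma):
    """Transverse-field Ising Hamiltonian: longitudinal fields b, couplings w,
    uniform transverse field gamma."""
    n = len(b)
    H = np.zeros((2**n, 2**n))
    for i in range(n):
        H -= gamma * op(sx, i, n) + b[i] * op(sz, i, n)
    for (i, j), wij in w.items():
        H -= wij * op(sz, i, n) @ op(sz, j, n)
    return H

def quantum_boltzmann_probs(H):
    """Diagonal of rho = exp(-H)/Z in the computational basis."""
    evals, evecs = np.linalg.eigh(H)
    weights = np.exp(-(evals - evals.min()))   # shift for numerical stability
    rho_diag = (np.abs(evecs) ** 2) @ weights  # <k|rho|k> before normalization
    return rho_diag / rho_diag.sum()

H = tfim_hamiltonian([0.5, -0.2, 0.1], {(0, 1): 1.0, (1, 2): -0.5}, gamma=1.0)
probs = quantum_boltzmann_probs(H)
```

Exact diagonalization like this is only feasible for a handful of spins, which is exactly why sampling-based training (and hardware samplers) matter at scale.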

**Peter Broecker**, University of Cologne

*Machine learning quantum phases of matter beyond the fermion sign problem*

**Kieron Burke**, University of California, Irvine

*Finding density functionals with machine-learning*

Density functional theory (DFT) is an extremely popular approach to electronic structure problems in materials science, chemistry, and many other fields. Over the past several years, often in collaboration with Klaus Mueller at TU Berlin, we have explored using machine learning to find the density functionals that must be approximated in DFT calculations. I will summarize our results so far and report on two new works.

**Juan Carrasquilla**, Perimeter Institute

*Machine Learning Phases of Matter*

**Matthew Fisher**, Kavli Institute for Theoretical Physics

*Quantum Crystals, Quantum Computing and Quantum Cognition*

Quantum mechanics is down to earth - quite literally - since the electrons within the tiny crystals found in a handful of dirt manifest a dizzying world of quantum motion. Each crystal has its own unique choreography, with the electrons entangled in a myriad of quantum dances. Quantum entanglement also holds the promise of futuristic Quantum Computers - which might be composed of electron and nuclear spins inside diamond, or of atoms confined in traps, or of small superconducting grains, among a plethora of suggested platforms. In this talk I will describe ongoing efforts to elucidate the mysteries of Quantum Crystals and to design and assemble Quantum Computers, before ruminating about “Quantum Cognition” - the proposal that our brains are capable of quantum processing.

**Christopher Granade**, University of Sydney

*Rejection and Particle Filtering for Hamiltonian Learning*

Many tasks in quantum information rely on accurate knowledge of a system's Hamiltonian, including calibrating control, characterizing devices, and verifying quantum simulators. In this talk, we pose the problem of learning Hamiltonians as an instance of parameter estimation. We then solve this problem with Bayesian inference, and describe how rejection and particle filtering provide efficient numerical algorithms for learning Hamiltonians. Finally, we discuss how filtering can be combined with quantum resources to verify quantum systems beyond the reach of classical simulators.
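A minimal sketch of the particle-filtering idea, for a hypothetical single-parameter Hamiltonian probed by Ramsey-type experiments with Born-rule likelihood P(0 | ω, t) = cos²(ωt/2); the experiment schedule, prior, and resampling kick are illustrative choices, not the speaker's:

```python
import numpy as np

rng = np.random.default_rng(0)

def likelihood(outcome, omega, t):
    """Born-rule likelihood for a Ramsey-type experiment."""
    p0 = np.cos(omega * t / 2) ** 2
    return p0 if outcome == 0 else 1 - p0

def particle_filter(true_omega=0.7, n_particles=2000, n_shots=100):
    particles = rng.uniform(0, 2, n_particles)      # prior over omega
    weights = np.full(n_particles, 1 / n_particles)
    for k in range(n_shots):
        t = (k % 10) + 1                             # simple evolution-time schedule
        outcome = int(rng.random() > np.cos(true_omega * t / 2) ** 2)
        weights *= likelihood(outcome, particles, t) # Bayes update
        weights /= weights.sum()
        if 1 / np.sum(weights ** 2) < n_particles / 2:   # resample on low ESS
            idx = rng.choice(n_particles, n_particles, p=weights)
            particles = particles[idx] + rng.normal(0, 0.01, n_particles)
            weights = np.full(n_particles, 1 / n_particles)
    return np.sum(weights * particles)               # posterior mean estimate

estimate = particle_filter()
```

The same loop structure carries over to multi-parameter Hamiltonians; only the likelihood (a quantum simulation) and the prior change.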

**Sergei Isakov**, Google

*Towards Quantum Supremacy with Near-Term Devices*

Can quantum computers outperform classical computers on any computational problem in the near future? We study the problem of sampling from the output distribution of random quantum circuits. Sampling from this distribution requires an exponential amount of classical computational resources. We argue that quantum supremacy can be achieved in the near future with approximately fifty superconducting qubits and without error correction, despite the fact that quantum random circuits are extremely sensitive to errors.

**Ashish Kapoor**, Microsoft Research

*Comparing Classical and Quantum Methods for Supervised Machine Learning*

Supervised Machine Learning is one of the key problems that arises in modern big data tasks. In this talk, I will first describe several different classical algorithmic paradigms for classification and then contrast them with quantum algorithmic constructs. In particular, we will look at classical methods such as the nearest neighbor rule, optimization based algorithms (e.g. SVMs), Bayesian inference based techniques (e.g. Bayes point machine) and provide a unifying framework so that we can get a deeper understanding about the quantum versions of the methods.
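As a concrete reference point for the simplest classical paradigm mentioned above, the nearest-neighbor rule labels a query point with the class of its closest training example (toy data below is made up):

```python
import numpy as np

def nearest_neighbor_predict(X_train, y_train, x):
    """1-NN rule: return the label of the training point closest to x."""
    dists = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(dists)]

# toy two-class data
X = np.array([[0., 0.], [0., 1.], [5., 5.], [6., 5.]])
y = np.array([0, 0, 1, 1])
label = nearest_neighbor_predict(X, y, np.array([5.5, 4.5]))
```

Quantum analogues of this rule replace the distance computation with amplitude-based distance estimates; the classical version above is the baseline being accelerated.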

**Rosemary Ke**, MILA, University of Montreal

*Deep Learning: An Overview*

**Seth Lloyd**, Massachusetts Institute of Technology

*Quantum algorithm for topological analysis of data*

This talk presents a quantum algorithm for performing persistent homology, the identification of topological features of data sets such as connected components, holes and voids. Finding the full persistent homology of a data set over n points using classical algorithms takes time O(2^{2n}), while the quantum algorithm takes time O(n^2), an exponential improvement. The quantum algorithm does not require a quantum random access memory and is suitable for implementation on small quantum computers with a few hundred qubits.
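For intuition about what persistent homology computes, here is a classical sketch of the easiest case only, Betti-0 (connected components), traced with union-find over edges of increasing length; the higher-dimensional features (holes, voids) are where the classical cost explodes and the quantum algorithm claims its advantage. The point cloud below is made up:

```python
import numpy as np

def betti0_persistence(points):
    """Betti-0 bars of a Vietoris-Rips filtration via union-find:
    add edges in order of increasing length; each merge kills a component."""
    n = len(points)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    edges = sorted(
        (np.linalg.norm(points[i] - points[j]), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    deaths = []                             # scale at which a component disappears
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)
    return deaths                           # n-1 finite bars; one bar lives forever

pts = np.array([[0., 0.], [0., 1.], [5., 0.], [5., 1.]])
bars = betti0_persistence(pts)              # two tight pairs merge early, then join late
```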

**Alejandro Perdomo-Ortiz**, NASA Ames Research Center

*A quantum-assisted algorithm for sampling applications in machine learning*

An increase in the efficiency of sampling from Boltzmann distributions would have a significant impact on deep learning and other machine learning applications. Recently, quantum annealers have been proposed as potential candidates to speed up this task, but several limitations still bar these state-of-the-art technologies from being used effectively. One of the main limitations is that, while the device may indeed sample from a Boltzmann-like distribution, quantum dynamical arguments suggest it will do so with an instance-dependent effective temperature, different from the physical temperature of the device. Unless this unknown temperature can be unveiled, it might not be possible to use a quantum annealer effectively for Boltzmann sampling. In this talk, we present a strategy to overcome this challenge with a simple effective-temperature estimation algorithm. We provide a systematic study assessing the impact of the effective temperature on the learning of a kind of restricted Boltzmann machine embedded on quantum hardware, which can serve as a building block for deep learning architectures. We also provide a comparison to k-step contrastive divergence (CD-k) with k up to 100. Although assuming a suitable fixed effective temperature also allows one to outperform one-step contrastive divergence (CD-1), only with an instance-dependent effective temperature do we find a performance close to that of CD-100 for the case studied here. We discuss generalizations of the algorithm to other, more expressive generative models beyond restricted Boltzmann machines.
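For reference, the classical baseline in this comparison, k-step contrastive divergence, looks as follows for k = 1 on a toy binary RBM (sizes, data, and learning rate are illustrative, not the study's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=0.05):
    """One CD-1 update for a binary RBM: weights W, visible bias b, hidden bias c."""
    ph0 = sigmoid(v0 @ W + c)                        # hidden probabilities given data
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b)                      # one Gibbs step back to visibles
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    # positive-phase minus negative-phase gradient estimates
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c

# toy data: two repeated binary patterns over 6 visible units
data = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]] * 20, float)
W = rng.normal(0, 0.1, (6, 4)); b = np.zeros(6); c = np.zeros(4)
for _ in range(500):
    W, b, c = cd1_step(data, W, b, c)
```

CD-k with larger k replaces the single Gibbs step with k steps; the quantum-assisted scheme replaces that negative-phase chain with hardware samples, rescaled by the estimated effective temperature.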

**Barry Sanders**, University of Calgary

*Learning in Quantum Control: High-Dimensional Global Optimization for Noisy Quantum Dynamics*

Quantum control is valuable for various quantum technologies such as high-fidelity gates for universal quantum computing, adaptive quantum-enhanced metrology, and ultra-cold atom manipulation. Although supervised machine learning and reinforcement learning are widely used for optimizing control parameters in classical systems, quantum-control parameter optimization is mainly pursued via gradient-based greedy algorithms. Although the quantum fitness landscape is often amenable to greedy algorithms, they sometimes yield poor results, especially for high-dimensional quantum systems. We employ differential evolution algorithms to circumvent the stagnation problem of non-convex optimization, and we average over the objective function to improve quantum control fidelity for noisy systems. To reduce computational cost, we introduce heuristics for early termination of runs and for adaptive selection of search subspaces. Our implementation is massively parallel and vectorized to reduce run time even further. We demonstrate our methods with two examples, namely quantum phase estimation and quantum gate design, for which we achieve better fidelity and scalability than greedy algorithms.
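A minimal sketch of the approach: a hand-rolled DE/rand/1/bin optimizer applied to a toy single-qubit gate-design problem. The Euler-angle "pulse" parametrization and the Hadamard target are illustrative stand-ins, not the speaker's control model:

```python
import numpy as np

rng = np.random.default_rng(2)
H_target = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # target: Hadamard gate

def unitary(p):
    """Single-qubit unitary from three Euler-like control parameters."""
    th, ph, la = p
    return np.array([
        [np.cos(th / 2), -np.exp(1j * la) * np.sin(th / 2)],
        [np.exp(1j * ph) * np.sin(th / 2), np.exp(1j * (ph + la)) * np.cos(th / 2)],
    ])

def infidelity(p):
    """Gate infidelity 1 - |Tr(V† U)| / 2 (phase-insensitive)."""
    return 1 - abs(np.trace(H_target.conj().T @ unitary(p))) / 2

def differential_evolution(f, bounds, pop=30, gens=200, F=0.7, CR=0.9):
    """Minimal DE/rand/1/bin minimizer over the box `bounds`."""
    lo, hi = bounds
    X = rng.uniform(lo, hi, (pop, len(lo)))
    fx = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice(pop, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)      # differential mutation
            cross = rng.random(len(lo)) < CR               # binomial crossover
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft < fx[i]:                                 # greedy selection
                X[i], fx[i] = trial, ft
    return X[fx.argmin()], fx.min()

best, err = differential_evolution(infidelity, (np.zeros(3), np.full(3, 2 * np.pi)))
```

The population-based search is what resists stagnation on rugged landscapes; noise-averaging would replace `f(trial)` with a mean over several noisy evaluations.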

**David Schwab**, Northwestern University

*Physical approaches to the extraction of relevant information*

In the first part of this talk, I will focus on the physics of deep learning, a popular subfield of machine learning where recent performance on tasks such as visual object recognition rivals human performance. I present work relating greedy training of deep belief networks to a form of variational real-space renormalization. This connection may help explain how deep networks automatically learn relevant features from data and extract independent factors of variation. Next, I turn to the information bottleneck (IB), an information theoretic approach to clustering and compression of relevant information that has been suggested as a framework for deep learning. I present a new variant of IB called the Deterministic Information Bottleneck, arguing that it better captures the notion of compression while retaining relevant information.
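A toy sketch of the Deterministic Information Bottleneck's hard-assignment update (following the general idea, not the paper's implementation): each x is assigned to the cluster t maximizing log q(t) − β·KL(p(y|x) ‖ q(y|t)). The joint distribution, cluster count, and β below are made up, and convergence to a nontrivial clustering depends on initialization:

```python
import numpy as np

def deterministic_ib(p_xy, n_clusters, beta=10.0, n_iters=50, seed=0):
    """DIB sketch: hard-assign each x to a cluster t, trading off
    compression (cluster entropy) against relevant information about y."""
    rng = np.random.default_rng(seed)
    nx, ny = p_xy.shape
    p_x = p_xy.sum(axis=1)
    p_y_given_x = p_xy / p_x[:, None]
    f = rng.integers(0, n_clusters, nx)              # random hard assignment
    for _ in range(n_iters):
        # cluster marginals and conditionals induced by the current assignment
        q_t = np.array([p_x[f == t].sum() for t in range(n_clusters)]) + 1e-12
        q_y_given_t = np.zeros((n_clusters, ny)) + 1e-12
        for t in range(n_clusters):
            if np.any(f == t):
                q_y_given_t[t] = p_xy[f == t].sum(axis=0) / q_t[t]
        q_y_given_t /= q_y_given_t.sum(axis=1, keepdims=True)
        # DIB update: f(x) = argmax_t [ log q(t) - beta * KL(p(y|x) || q(y|t)) ]
        kl = (p_y_given_x[:, None, :] *
              (np.log(p_y_given_x[:, None, :] + 1e-12)
               - np.log(q_y_given_t[None]))).sum(-1)
        f = np.argmax(np.log(q_t)[None] - beta * kl, axis=1)
    return f

# two groups of x's predicting different y's
p = np.array([[0.24, 0.01], [0.24, 0.01], [0.01, 0.24], [0.01, 0.24]])
labels = deterministic_ib(p / p.sum(), 2)
```

The stochastic IB replaces the argmax with a soft Boltzmann assignment over the same objective.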

**Maria Schuld**, University of KwaZulu-Natal

*Classification on a quantum computer: Linear regression and ensemble methods*

Quantum machine learning algorithms usually translate a machine learning method into an algorithm that can exploit the advantages of quantum information processing. One approach is to tackle methods that rely on matrix inversion with the quantum linear-systems routine. We give such a quantum algorithm based on unregularised linear regression. In contrast to closely related work by Wiebe, Braun and Lloyd [PRL 109 (2012)], our scheme focuses on a classification task and uses a different combination of core routines that allows us to process non-sparse inputs and significantly improves the dependence on the condition number. The second part of the talk presents an idea that transcends the reproduction of classical results. Instead of considering a single trained classifier, practitioners often use ensembles of models to make predictions more robust and accurate. Under certain conditions, even infinite ensembles can lead to good results. We introduce a quantum sampling scheme that uses the parallelism inherent to a quantum computer in order to sample from 'exponentially large' ensembles that are not explicitly trained.
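The classical counterpart of the first part is plain unregularised least-squares classification: fit w by pseudoinverse (the matrix-inversion step that the quantum routine performs coherently) and classify by the sign of the prediction. The toy data are made up:

```python
import numpy as np

def train_ls_classifier(X, y):
    """Unregularised least-squares fit: w = X^+ y via the Moore-Penrose pseudoinverse."""
    return np.linalg.pinv(X) @ y

def predict(w, X):
    """Classify by the sign of the linear prediction."""
    return np.sign(X @ w)

# toy binary classification with labels +-1
X = np.array([[1., 0.2], [0.9, 0.1], [-1., -0.3], [-0.8, 0.05]])
y = np.array([1., 1., -1., -1.])
w = train_ls_classifier(X, y)
preds = predict(w, X)
```

The quantum speedup targets exactly the `pinv` step, whose classical cost grows polynomially in the matrix dimension.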

**Cyril Stark**, Massachusetts Institute of Technology

*Physics-inspired techniques for association rule mining*

Imagine you run a supermarket, and assume that for each customer “u” you record what “u” is buying. For instance, you may observe that u=1 typically buys bread and cheese and u=2 typically buys bread and salami. Studying your dataset you suspect that generally, customers who are likely to buy cheese are likely to buy bread as well. Rules of this kind are called association rules. Mining association rules is of significant practical importance in fields like market basket analysis and healthcare. In this talk I introduce a novel method for association rule mining which is inspired by ideas from classical statistical mechanics and quantum foundations.
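For concreteness, the standard support/confidence formulation of association rules, the baseline notion that the talk's physics-inspired method addresses, can be mined by brute force on the supermarket example; the baskets and thresholds below are made up:

```python
from itertools import combinations

def mine_rules(transactions, min_support=0.4, min_confidence=0.7):
    """Brute-force single-antecedent rules A -> B, keeping those with
    support(A ∪ B) >= min_support and confidence = support(A ∪ B) / support(A)."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    rules = []
    for a, b in combinations(items, 2):
        for ante, cons in [({a}, {b}), ({b}, {a})]:
            s = support(ante | cons)
            if s >= min_support and support(ante) > 0:
                conf = s / support(ante)
                if conf >= min_confidence:
                    rules.append((ante, cons, s, conf))
    return rules

baskets = [
    {"bread", "cheese"}, {"bread", "salami"},
    {"bread", "cheese"}, {"bread", "cheese"},
]
rules = mine_rules(baskets)   # e.g. {cheese} -> {bread} with confidence 1.0
```

Brute force is exponential in the itemset size, which is why smarter mining strategies (the subject of the talk) matter in practice.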

**James Steck**, Wichita State University

*Learning quantum annealing*

**Damian Steiger**, ETH Zurich & Google

*Racing in parallel: Quantum versus Classical*

In a fair comparison of the performance of a quantum algorithm to a classical one it is important to treat them on equal footing, both regarding resource usage and parallelism. We show how one may otherwise mistakenly attribute speedup due to parallelism as quantum speedup. As an illustration we will go through a few quantum machine learning algorithms, e.g. Quantum Page Rank, and show how a classical parallel computer can solve these problems faster with the same amount of resources.

Our classical parallelism considerations are especially important for quantum machine learning algorithms, which either use QRAM, allow for unbounded fanout, or require an all-to-all communication network.

**Miles Stoudenmire**, University of California, Irvine

*Learning with Quantum-Inspired Tensor Networks*

We propose a family of models with an exponential number of parameters, but which are approximated by a tensor network. Tensor networks are used to represent quantum wavefunctions, and powerful methods for optimizing them can be extended to machine learning applications as well. We use a matrix product state to classify images, and find that a surprisingly small bond dimension yields state-of-the-art results. Tensor networks offer many advantages for machine learning, such as better scaling for existing machine learning approaches and the ability to adapt hyperparameters during training. We will also propose a generative interpretation of the trained models.
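A minimal sketch of the classification step: each pixel is lifted by a local feature map, and the resulting vectors are contracted through a matrix product state whose extra index gives per-class scores. The MPS below is random (untrained) and the sizes are toy values; training would optimize the tensors:

```python
import numpy as np

rng = np.random.default_rng(3)

def feature_map(pixels):
    """Local feature map: pixel value x in [0, 1] -> [cos(pi x / 2), sin(pi x / 2)]."""
    return np.stack([np.cos(np.pi * pixels / 2), np.sin(np.pi * pixels / 2)], axis=-1)

def mps_scores(pixels, tensors, label_tensor):
    """Contract an MPS classifier: sweep a bond-space vector through the chain,
    absorbing one pixel's feature vector per site; the label tensor maps the
    final bond space to per-class scores."""
    phis = feature_map(pixels)
    v = np.ones(1)                           # left boundary (bond dimension 1)
    for A, phi in zip(tensors, phis):
        v = np.einsum('l,lpr,p->r', v, A, phi)
    return label_tensor @ v                  # shape: (n_classes,)

# random MPS over 4 pixels, bond dimension 3, 2 classes
bond, n_sites, n_classes = 3, 4, 2
dims = [1] + [bond] * (n_sites - 1) + [bond]
tensors = [rng.normal(0, 0.5, (dims[i], 2, dims[i + 1])) for i in range(n_sites)]
label_tensor = rng.normal(0, 0.5, (n_classes, bond))

scores = mps_scores(np.array([0.1, 0.8, 0.3, 0.5]), tensors, label_tensor)
```

The contraction cost is linear in the number of pixels and polynomial in the bond dimension, which is the hyperparameter that can be adapted during training.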

**Giacomo Torlai**, University of Waterloo

*Learning Thermodynamics with Boltzmann Machines*

The introduction of neural networks with deep architectures has led to a revolution, giving rise to a new wave of technologies empowering our modern society. Although data science has been the main focus, the idea of generic algorithms which automatically extract features and representations from raw data is quite general and applicable in multiple scenarios. Motivated by the effectiveness of deep learning algorithms in revealing complex patterns and structures underlying data, we are interested in exploiting such tools in the context of many-body physics. I will first introduce the Boltzmann Machine, a stochastic neural network that has been extensively used in the layers of deep architectures. I will describe how such a network can be used to model thermodynamic observables for physical systems in thermal equilibrium, and show that it can faithfully reproduce observables of the two-dimensional Ising model. Finally, I will discuss how to adapt the same network to implement the classical computation required to perform quantum error correction in the 2D toric code.
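The training data in this setting are equilibrium spin configurations, which can be generated classically by Metropolis sampling of the 2D Ising model; a minimal sketch (lattice size, temperature, and sweep count below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def metropolis_ising(L=8, T=2.0, n_sweeps=300):
    """Metropolis sampling of the 2D Ising model with periodic boundaries;
    configurations like this are the kind of training data a Boltzmann
    Machine learns thermodynamics from."""
    spins = rng.choice([-1, 1], size=(L, L))
    for _ in range(n_sweeps):
        for _ in range(L * L):               # one sweep = L*L attempted flips
            i, j = rng.integers(L), rng.integers(L)
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2 * spins[i, j] * nb        # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1
    return spins

config = metropolis_ising(T=1.5)
m = abs(config.mean())   # magnetization; large below T_c ≈ 2.269
```

A Boltzmann Machine trained on many such configurations at each temperature can then reproduce observables like energy and magnetization from its own samples.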