This series consists of talks in the area of Foundations of Quantum Theory. Seminar and group meetings will alternate.
Quantum mechanics can be seen as a set of instructions for calculating probabilities by associating mathematical objects with physical procedures, such as the preparation, manipulation, and measurement of a system. Quantum theory then yields probabilities that are neutral with respect to their use, e.g., in a Bayesian or a frequentist way. We investigate a different approach to quantum theory, and to physical theories in general, in which we aim for subjective predictions in the Bayesian sense. This gives a structure different from the operational framework of general probabilistic theories.
Seven years ago, the first paper was published [1] on what has come to be known as the “Many Interacting Worlds” (MIW) interpretation of quantum mechanics (QM) [2,3,4]. MIW is based on a new formulation of QM [1,5,6], in which the wavefunction Ψ(t, x) is discarded entirely. Instead, the quantum state is represented as an ensemble, x(t, C), of quantum trajectories or “worlds.” Each of these worlds has well-defined real-valued particle positions and momenta, and is thereby classical-like.
The path integral formulation of quantum mechanics has been immensely influential, particularly in high energy physics. However, its application to quantum circuits has so far been more limited. In this talk I will discuss the sum-over-paths approach to computing transition amplitudes in Clifford circuits. In such a formulation, the relative phases of different discrete-time paths through the configuration space can be defined in terms of a classical action which is provided by the discrete Wigner representation.
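The flavour of a sum over discrete-time paths can be illustrated with a toy calculation for Hadamard gates, where the path phases are just signs; this is a minimal sketch of the general idea, not the talk's Wigner-representation construction:

```python
# Toy sum-over-paths sketch (illustrative only, not the talk's formalism).
# The amplitude <y|H|x> for a single Hadamard on one bit is (-1)^(x*y)/sqrt(2);
# for two Hadamards in sequence, the transition amplitude is a sum over
# discrete-time paths x -> z -> y through the intermediate bit value z.

def amp_H(x, y):
    # single-gate amplitude: a phase (+1 or -1) times a fixed magnitude
    return (-1) ** (x * y) / 2 ** 0.5

def amp_HH(x, y):
    # sum over the two paths x -> 0 -> y and x -> 1 -> y
    return sum(amp_H(x, z) * amp_H(z, y) for z in (0, 1))

print(round(amp_HH(0, 0), 6), round(amp_HH(0, 1), 6))  # 1.0 0.0 — off-diagonal paths cancel
```

Since HH is the identity, the two paths interfere constructively when y = x and cancel otherwise, recovering the correct circuit amplitude from purely sign-valued path phases.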
Device-independent self-testing allows a user to identify the quantum states measured and the measurements made in a device from the observed statistics alone. Such a verification scheme requires only the validity of quantum theory and the no-signalling principle. As such, no assumptions are needed about the inner workings of the device.
What can machine learning teach us about quantum mechanics? I will begin with a brief overview of attempts to bring together the two fields, and the insights this may yield. I will then focus on one particular framework, Projective Simulation, which describes physics-based agents that are capable of learning by themselves. These agents can serve as toy models for studying a wide variety of phenomena, as I will illustrate with examples from quantum experiments and biology.
We construct a hidden variable model for the EPR correlations using a Restricted Boltzmann Machine. The model reproduces the expected correlations and thus violates the Bell inequality, as required by Bell's theorem. Unlike most hidden-variable models, this model does not violate the locality assumption in Bell's argument. Rather, it violates measurement independence, albeit in a decidedly non-conspiratorial way.
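As a worked illustration of the correlations at stake, the standard CHSH check below evaluates the singlet-state correlation function at the optimal measurement angles; this is a textbook sketch of the Bell violation being reproduced, not the RBM construction from the talk:

```python
import math

def corr(a, b):
    # Singlet-state correlation E(a, b) = -cos(a - b) for spin measurements
    # along directions at angles a and b
    return -math.cos(a - b)

# Measurement angles (radians) that maximise the quantum CHSH value
a0, a1 = 0.0, math.pi / 2
b0, b1 = math.pi / 4, 3 * math.pi / 4

S = corr(a0, b0) - corr(a0, b1) + corr(a1, b0) + corr(a1, b1)
print(round(abs(S), 3))  # 2.828 = 2*sqrt(2), exceeding the local bound of 2
```

Any hidden-variable model reproducing these statistics must give up one of Bell's assumptions; the talk's model keeps locality and instead relaxes measurement independence.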
In this talk I am going to describe Spekkens’ toy model, a non-contextual hidden variable model with an epistemic restriction, a constraint on what an observer can know about reality. The aim of the model, developed for continuous and discrete prime degrees of freedom, is to advocate the epistemic view of quantum theory, where quantum states are states of incomplete knowledge about a deeper underlying reality. In spite of its classical flavour, many aspects that were thought to belong only to quantum mechanics can be reproduced in the model.
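The epistemic restriction can be made concrete for a single elementary system of the model (a "toy bit"); the sketch below enumerates its pure epistemic states under the knowledge-balance principle, assuming the standard presentation of the discrete model:

```python
from itertools import combinations

# A toy bit has four ontic states. The knowledge-balance principle says an
# observer's knowledge can single out at most a 2-element subset: a valid
# pure epistemic state answers one binary question but not the other.
ontic_states = [1, 2, 3, 4]
epistemic_states = list(combinations(ontic_states, 2))
print(len(epistemic_states))  # 6 — mirroring the six single-qubit stabilizer states
```

The count of six matches the six eigenstates of the qubit Pauli observables, one of the many quantum-like features the model reproduces despite its classical flavour.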
What is the essence of quantum theory? In the present talk I want to approach this question from a particular operationalist perspective. I take advantage of a recent convergence between operational approaches to quantum theory and axiomatic approaches to quantum field theory. Removing anything special to particular physical models, including underlying notions of space and (crucially) time, what remains is what I shall call "abstract quantum theory".
What can we say about the spectra of a collection of microscopic variables when only their coarse-grained sums are experimentally accessible? In this paper, using the tools and methodology from the study of quantum nonlocality, we develop a mathematical theory of the macroscopic fluctuations generated by ensembles of independent microscopic discrete systems. We provide algorithms to decide which multivariate Gaussian distributions can be approximated by sums of finitely-valued random vectors.
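The basic setting can be pictured with a Monte Carlo sketch: each microscopic system is a discrete (here ±1-valued) variable, and only the centred, rescaled sum — the macroscopic fluctuation — is observed, converging to a Gaussian. This illustrates the coarse-graining setup only, not the paper's decision algorithms:

```python
import random
import statistics

# Each microscopic system is a +/-1-valued discrete variable; only the
# rescaled sum over n_micro systems (the macroscopic fluctuation) is observed.
random.seed(0)
n_micro, n_samples = 1000, 5000
fluctuations = [
    sum(random.choice((-1, 1)) for _ in range(n_micro)) / n_micro ** 0.5
    for _ in range(n_samples)
]

# By the central limit theorem, the fluctuations approach a standard Gaussian.
print(round(statistics.mean(fluctuations), 1), round(statistics.stdev(fluctuations), 1))
```

The paper's question runs in the converse direction: given a target multivariate Gaussian for the macroscopic fluctuations, decide whether some ensemble of finitely-valued microscopic systems can generate it.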
Computational complexity theory is a branch of computer science dedicated to classifying computational problems in terms of their difficulty. While computability theory tells us what we can compute in principle, complexity theory tells us what is feasible. In this chapter I argue that the science of quantum computing illuminates the foundations of complexity theory by emphasising that its fundamental concepts are not model-independent. However, this does not, as some have suggested, force us to radically revise the foundations of the theory.