This series consists of talks in the area of Foundations of Quantum Theory. Seminar and group meetings will alternate.
Device-independent self-testing allows a user to identify the quantum states measured and the measurements performed in a device using only the observed statistics. Such a verification scheme requires assuming only the validity of quantum theory and no-signalling; no assumptions are needed about the inner workings of the device.
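As an illustration of the kind of statistics such schemes rely on, the sketch below (a minimal NumPy computation, not the actual self-testing protocol) evaluates the CHSH value for a maximally entangled pair. Observing the maximal value 2*sqrt(2) is the classic signature used to self-test the underlying state and measurements.

```python
import numpy as np

# Pauli observables and the maximally entangled state |Phi+> = (|00>+|11>)/sqrt(2)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

def obs(theta):
    """Dichotomic observable cos(theta) Z + sin(theta) X (eigenvalues +/-1)."""
    return np.cos(theta) * Z + np.sin(theta) * X

def corr(alpha, beta):
    """Correlator E(alpha, beta) = <Phi+| A(alpha) x B(beta) |Phi+>."""
    AB = np.kron(obs(alpha), obs(beta))
    return float(np.real(phi_plus.conj() @ AB @ phi_plus))

# Settings that maximise the CHSH combination for this state
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = corr(a, b) - corr(a, b2) + corr(a2, b) + corr(a2, b2)
print(S)  # ~2.828..., the Tsirelson bound 2*sqrt(2), above the classical bound 2
```

For this state the correlator reduces to E(alpha, beta) = cos(alpha - beta), which is what the chosen angles exploit.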
What can machine learning teach us about quantum mechanics? I will begin with a brief overview of attempts to bring together the two fields, and the insights this may yield. I will then focus on one particular framework, Projective Simulation, which describes physics-based agents that are capable of learning by themselves. These agents can serve as toy models for studying a wide variety of phenomena, as I will illustrate with examples from quantum experiments and biology.
We construct a hidden variable model for the EPR correlations using a Restricted Boltzmann Machine. The model reproduces the expected quantum correlations and therefore violates the Bell inequality; by Bell's theorem, it must then break at least one of the theorem's assumptions. Unlike most hidden-variable models, however, this model does not violate the locality assumption in Bell's argument. Rather, it violates measurement independence, albeit in a decidedly non-conspiratorial way.
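The RBM construction itself is beyond a short sketch, but a simpler, well-known textbook-style example shows the same logical point: a local deterministic model whose hidden-variable distribution depends on the settings (i.e. one that gives up measurement independence) can reproduce the singlet correlations exactly. The model below is an assumed illustrative stand-in, not the model from the abstract.

```python
import numpy as np

def singlet_corr(a, b):
    """Quantum prediction for the singlet state: E(a, b) = -cos(a - b)."""
    return -np.cos(a - b)

def model_corr(a, b):
    """Local deterministic model that gives up measurement independence.

    The hidden variable lam = (s, t) fixes both outcomes locally
    (A = s, B = t), but its distribution depends on the settings:
        p(s, t | a, b) = (1 - s*t*cos(a - b)) / 4
    """
    return sum((1 - s * t * np.cos(a - b)) / 4 * s * t
               for s in (+1, -1) for t in (+1, -1))

# Because the model reproduces the singlet correlations exactly,
# the CHSH combination reaches magnitude 2*sqrt(2).
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = model_corr(a, b) - model_corr(a, b2) + model_corr(a2, b) + model_corr(a2, b2)
```

A short calculation confirms the correlator: summing s*t*(1 - s*t*cos(a-b))/4 over the four hidden states gives exactly -cos(a - b), since the s*t terms cancel.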
In this talk I am going to describe Spekkens' toy model, a non-contextual hidden-variable model with an epistemic restriction, i.e. a constraint on what an observer can know about reality. The model, developed for both continuous and discrete (prime-dimensional) degrees of freedom, is intended to advocate the epistemic view of quantum theory, in which quantum states are states of incomplete knowledge about a deeper underlying reality. In spite of its classical flavour, the model reproduces many aspects that were thought to belong exclusively to quantum mechanics.
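The simplest instance, the toy bit, can be enumerated in a few lines. In this sketch (an assumed minimal rendering of the standard construction) the ontic space has four states, a state of maximal knowledge is any two-element subset, and perfect distinguishability corresponds to disjoint supports; the three disjoint pairs mirror the three mutually unbiased bases of a qubit.

```python
from itertools import combinations

# The toy bit: four ontic states; an epistemic state of maximal knowledge
# is a two-element subset (the observer knows one binary question's answer).
ontic = {1, 2, 3, 4}
epistemic = [frozenset(p) for p in combinations(sorted(ontic), 2)]  # 6 states

# Analogue of orthogonality: two epistemic states are perfectly
# distinguishable iff their supports are disjoint.
disjoint_pairs = [(s, t) for s, t in combinations(epistemic, 2) if not (s & t)]

print(len(epistemic), len(disjoint_pairs))  # 6 epistemic states, 3 disjoint pairs
```

The six epistemic states play the role of the six qubit stabilizer states, and the overlapping (non-disjoint) pairs imitate non-orthogonal quantum states that cannot be perfectly distinguished.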
What is the essence of quantum theory? In the present talk I want to approach this question from a particular operationalist perspective. I take advantage of a recent convergence between operational approaches to quantum theory and axiomatic approaches to quantum field theory. Removing anything special to particular physical models, including underlying notions of space and (crucially) time, what remains is what I shall call "abstract quantum theory".
What can we say about the spectra of a collection of microscopic variables when only their coarse-grained sums are experimentally accessible? In this paper, using the tools and methodology from the study of quantum nonlocality, we develop a mathematical theory of the macroscopic fluctuations generated by ensembles of independent microscopic discrete systems. We provide algorithms to decide which multivariate Gaussian distributions can be approximated by sums of finitely-valued random vectors.
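The univariate intuition behind such approximations is the central limit theorem: the normalised sum of many independent, finitely-valued microscopic variables looks Gaussian at the macroscopic level. The following minimal sketch (an assumed illustration, not the paper's algorithm) checks this numerically for +/-1-valued variables.

```python
import numpy as np

rng = np.random.default_rng(0)

# Macroscopic variable: normalised sum of n independent +/-1 microscopic
# variables; by the central limit theorem it approaches a standard Gaussian.
n, trials = 400, 200_000
micro = rng.choice([-1, 1], size=(trials, n))
macro = micro.sum(axis=1) / np.sqrt(n)

print(macro.mean(), macro.var())  # both close to the Gaussian values 0 and 1
```

The interesting regime in the abstract is the multivariate one, where deciding which Gaussian covariance structures are reachable by such sums is non-trivial.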
Computational complexity theory is a branch of computer science dedicated to classifying computational problems in terms of their difficulty. While computability theory tells us what we can compute in principle, complexity theory informs us regarding what is feasible. In this chapter I argue that the science of quantum computing illuminates the foundations of complexity theory by emphasising that its fundamental concepts are not model-independent. However, this does not, as some have suggested, force us to radically revise the foundations of the theory.
Entropy is an important information measure. A complete understanding of entropy flow will have applications in quantum thermodynamics and beyond; for example, it may help to identify the sources of fidelity loss in quantum communications and methods to prevent or control them. Because entropy is nonlinear in the density matrix, its evaluation for quantum systems requires the simultaneous evolution of more than one density matrix.
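The nonlinearity is easy to see for the von Neumann entropy S(rho) = -Tr(rho ln rho). A minimal sketch (assuming natural-log units):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho ln rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # convention: 0 ln 0 = 0
    return float(-np.sum(evals * np.log(evals)))

rho0 = np.array([[1.0, 0.0], [0.0, 0.0]])   # pure state |0><0|
rho1 = np.array([[0.0, 0.0], [0.0, 1.0]])   # pure state |1><1|
mix = 0.5 * rho0 + 0.5 * rho1               # maximally mixed qubit

# Nonlinearity in the density matrix: S(mix) = ln 2, although
# S(rho0) = S(rho1) = 0, so S of a mixture is not the mixture of the S's.
print(von_neumann_entropy(rho0), von_neumann_entropy(mix))
```

This strict concavity is exactly why entropy cannot be tracked by evolving a single density matrix linearly.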
When utilized appropriately, the path integral offers an alternative to the ordinary quantum formalism of state-vectors, self-adjoint operators, and external observers -- an alternative that seems closer to the underlying reality and more in tune with quantum gravity. The basic dynamical relationships are then expressed, not by a propagator, but by the quantum measure, a set-function $\mu$ that assigns to every (suitably regular) set $E$ of histories its generalized measure $\mu(E)$.
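Concretely, in Sorkin's scheme the quantum measure derives from the decoherence functional built out of path amplitudes; a standard sketch (with $A(\gamma)$ the amplitude of history $\gamma$ and the delta enforcing coincident final endpoints) is

$$\mu(E) \;=\; D(E,E), \qquad D(E,F) \;=\; \sum_{\gamma \in E}\,\sum_{\gamma' \in F} A(\gamma)\,\overline{A(\gamma')}\;\delta_{\gamma(t_f),\,\gamma'(t_f)}.$$

Unlike a classical probability measure, $\mu$ is not additive on disjoint sets (interference), but it does obey the quadratic sum rule

$$\mu(E \cup F \cup G) \;=\; \mu(E \cup F) + \mu(E \cup G) + \mu(F \cup G) - \mu(E) - \mu(F) - \mu(G)$$

for pairwise-disjoint $E$, $F$, $G$.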
In this talk I will discuss how we might go about performing a Bell experiment in which humans are used to decide the settings at each end. The radical possibility we wish to investigate is that, when humans are used to decide the settings (rather than various types of random number generators), we might then expect to see a violation of quantum theory, with the statistics instead satisfying the relevant Bell inequality. Such a result, while very unlikely, would be tremendously significant for our understanding of the world (and I will discuss some interpretations).