Scaling Limits of Bayesian Inference with Deep Neural Networks

PIRSA ID: 24040103
Event Type: Seminar
Speaker(s):
  • Boris Hanin, Princeton University

Large neural networks are often studied analytically through scaling limits: regimes in which structural network parameters (e.g. depth, width, or number of training datapoints) tend to infinity. Such limits are challenging to identify and study, in part because the limits as different structural parameters diverge typically do not commute. I will present recent and ongoing work with Alexander Zlokapa (MIT), in which we provide the first solvable models of learning, in this case by Bayesian inference, with neural networks whose depth, width, and number of datapoints can all be large.
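As a minimal illustration of the kind of scaling limit discussed above (not the model presented in the talk), the classic infinite-width result says that the output of a randomly initialized one-hidden-layer network at a fixed input approaches a Gaussian as the width grows, by the central limit theorem. The sketch below, with an assumed tanh nonlinearity and 1/sqrt(fan-in) weight scaling, checks this numerically via the excess kurtosis, which vanishes for a Gaussian:

```python
import numpy as np

# Illustrative sketch of the infinite-width Gaussian limit for a random
# one-hidden-layer network (hypothetical setup, not the speaker's model).
rng = np.random.default_rng(0)

def random_net_output(x, width, n_samples):
    # i.i.d. Gaussian weights with 1/sqrt(fan-in) scaling.
    W1 = rng.normal(size=(n_samples, width, x.size)) / np.sqrt(x.size)
    w2 = rng.normal(size=(n_samples, width)) / np.sqrt(width)
    h = np.tanh(W1 @ x)                   # hidden activations, (n_samples, width)
    return np.einsum("sw,sw->s", w2, h)   # one scalar output per weight sample

x = np.ones(3)
for width in (4, 32, 256):
    out = random_net_output(x, width, n_samples=10000)
    # Excess kurtosis of the output distribution; tends to 0 (Gaussian) as
    # width grows, while the variance stays O(1) under this parameterization.
    k = np.mean((out - out.mean()) ** 4) / out.var() ** 2 - 3.0
    print(f"width={width:4d}  var={out.var():.3f}  excess kurtosis={k:+.3f}")
```

At fixed depth this limit is well understood; the difficulty the abstract points to is that taking depth, width, and dataset size large simultaneously gives different answers depending on the order of limits.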
