PIRSA ID: 22020057

Event Type: Seminar

Scientific Area(s): Other

End date: 2022-02-09

Speaker(s): Shirley Ho (Flatiron Institute)

We develop a general approach to "interpreting" what a network has learned by introducing strong inductive biases. In particular, we focus on Graph Neural Networks (GNNs).

The technique works as follows: we first encourage sparse latent representations when training a GNN in a supervised setting; we then apply symbolic regression to components of the learned model to extract explicit physical relations. The symbolic expressions extracted from the GNN using our technique also generalize to out-of-distribution data better than the GNN itself. Our approach offers alternative directions for interpreting neural networks and discovering novel physical principles from the representations they learn.
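
As a rough illustration of this pipeline, the sketch below trains a small edge model with an L1 penalty on its latent messages (the sparsity-encouraging step) and then fits a symbolic expression to the most active message component using the open-source PySR symbolic-regression library. The toy one-dimensional spring-force data, network sizes, and penalty weight are illustrative assumptions, not the setup used in the talk; in the full method, the messages of a trained graph network play the role of the latent components fitted here.

```python
# Minimal sketch: sparse latent messages + symbolic regression (assumed setup).
import torch
import torch.nn as nn
from pysr import PySRRegressor  # requires a working PySR/Julia installation

# Toy data: pairs of 1-D particle positions and the spring force between them.
torch.manual_seed(0)
x1, x2 = torch.randn(1000, 1), torch.randn(1000, 1)
force = 2.0 * (x2 - x1)  # hidden "physical law" we hope to recover

# Edge model of a GNN: maps a pair of node states to a latent message vector.
# A wide latent space plus an L1 penalty pushes most message components toward
# zero, i.e. the sparse latent representation described above.
edge_model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 16))
decoder = nn.Linear(16, 1)  # reads the predicted force off the messages

opt = torch.optim.Adam(
    list(edge_model.parameters()) + list(decoder.parameters()), lr=1e-3
)
inputs = torch.cat([x1, x2], dim=1)
for step in range(2000):
    msg = edge_model(inputs)
    loss = ((decoder(msg) - force) ** 2).mean() + 1e-2 * msg.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Symbolic-regression step: fit an explicit expression to the component of the
# learned message with the largest average magnitude.
with torch.no_grad():
    msg = edge_model(inputs)
    top = msg.abs().mean(dim=0).argmax()
reg = PySRRegressor(niterations=40, binary_operators=["+", "-", "*"],
                    unary_operators=[])
reg.fit(inputs.numpy(), msg[:, top].numpy())
print(reg)  # expect an expression proportional to (x2 - x1)
```

In this simplified two-body case the decoder stands in for the GNN's node model; the recovered expression should be a rescaled version of the true force law, which is the sense in which the symbolic surrogate can generalize beyond the training data.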

As examples, we will show the recovery of Newton's law and the masses of Solar System bodies from real ephemeris data, and the recovery of the Navier-Stokes equations from a turbulence dataset. We will also speculate on what one can do with this new tool.