Will artificial intelligence (AI) revolutionize physics, or is it hype? Here are the experts actually using it, and how they see AI shaping the future of their fields.

On a cool August morning, forty-odd physicists gather at Perimeter Institute for Theoretical Physics. A navy-clad scientist stands at a podium and poses a question to the audience: “Are AI models really reasoning?”

He’s Moritz Münchmeyer, a Perimeter alum and now assistant professor at the University of Wisconsin-Madison. The occasion? A symposium for physicists seeking to lay out a vision for the future of their field, as part of Perimeter Institute’s 25th anniversary.    

AI is just one subject for discussion at the conference, but it is by no means the least of them. The topic has cultural, scientific, and economic buzz. Does it have substance, too?

Physicists from around the world gathered at Perimeter Institute for the Charting the Future Symposium, celebrating 25 years of breakthroughs in cosmology, particle physics, and strong gravity.


“A major problem is that AI models often make solutions that look plausible, but do not follow rigid logic,” says Münchmeyer, acknowledging the technology’s limitations. Anyone who’s used a large language model (LLM) like ChatGPT knows they can make mistakes and ‘hallucinate’ answers. Yet his talk isn’t doom and gloom. In fact, he’s optimistic about the future of AI in physics, and is actively working to solve the challenges currently holding it back.

What is AI good for?

If AI can make mistakes, why bother using it at all? How can scientists – who put so much weight on evidence and verification – trust it?

The first answer: scientists can be selective about which problems they ask AI to tackle.

AI tools have proven extremely useful for problems that are hard to solve but easy to verify. You can let an AI generate a large number of guesses that would otherwise be incredibly time- and resource-draining to produce, and physicists can then work through the answers and toss out the obviously incorrect ones. So far, problems of this type are the primary use case of AI in physics.
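A toy illustration of this "hard to solve, easy to verify" pattern, with integer factoring standing in for a physics problem (the function and numbers here are invented for illustration): candidate answers are cheap to propose and trivial to check, so wrong guesses can simply be thrown away.

```python
import random

def find_factor(n, trials=100_000, seed=0):
    """Toy 'guess then verify' search: propose random candidates and
    keep only those that pass the cheap verification step."""
    rng = random.Random(seed)
    for _ in range(trials):
        guess = rng.randrange(2, n)   # guessing is cheap...
        if n % guess == 0:            # ...and verifying is trivial
            return guess
    return None

print(find_factor(91))   # a nontrivial factor of 91 (7 or 13)
print(find_factor(97))   # 97 is prime, so the search comes up empty
```

The heavy lifting (many guesses) is automated; the human-auditable part (checking a candidate) stays fast and reliable.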

But recently, it’s become possible to work with LLMs on problems that require reasoning and calculation, too. The latest LLMs can solve math and physics problems as well as top students can. Still, there’s a lot of room to improve in this area before all physicists will be satisfied.

Münchmeyer and his colleagues are working on fixing that, using techniques designed to improve AI’s reasoning capabilities. One of these is called reinforcement learning. Instead of just memorizing a large set of training data, an AI model is given both a problem to solve and the correct answer, but not the solution. Every time it gets the correct answer, it ‘learns’ that its reasoning was correct in that instance.
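As a rough sketch of that idea (not Münchmeyer’s actual setup, and with invented toy strategies), reinforcement from final answers alone can be illustrated with a learner that sees problems and correct answers, never worked solutions, and gradually learns which candidate "reasoning strategy" earns reward:

```python
import random

# Outcome-only reinforcement, in miniature: the learner is rewarded
# when its final answer matches the known correct answer, and it keeps
# a running estimate of each strategy's success rate.
def strategy_a(x):  # correct strategy: square the input
    return x * x

def strategy_b(x):  # flawed strategy: double the input
    return 2 * x

strategies = [strategy_a, strategy_b]
values = [0.0, 0.0]   # learned estimate of each strategy's success
counts = [0, 0]
rng = random.Random(1)

for step in range(500):
    x = rng.randrange(1, 10)
    target = x * x                # known answer, unknown derivation
    if rng.random() < 0.1:        # occasionally explore...
        i = rng.randrange(len(strategies))
    else:                         # ...otherwise exploit the best so far
        i = max(range(len(strategies)), key=lambda j: values[j])
    reward = 1.0 if strategies[i](x) == target else 0.0
    counts[i] += 1
    values[i] += (reward - values[i]) / counts[i]   # running average
```

After training, the learner strongly prefers the strategy that reliably produces correct answers, even though it was never shown why that strategy works.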

That work is still in progress, but the capabilities of AI are improving rapidly, and it’s something to watch in the years to come.

AI in practice: big data cosmology

Perhaps the best evidence that AI can be useful in physics is that physicists are already putting it to use.

At Perimeter Institute, cosmologists have been putting AI to the test on cosmological datasets to tease out new information.

A central question in theoretical physics is: what were the laws of physics before the big bang, when the initial conditions of our universe were created? Kendrick Smith, Daniel Family James Peebles Chair in Theoretical Physics, is analyzing cosmology datasets to search for "primordial non-Gaussianity" – that is, fluctuations in the initial conditions that deviate from a simple bell curve. Primordial non-Gaussianity is predicted in some theories of the early universe but not others, so by searching for it in present-day data, we can learn about physics before the big bang.

Using AI in this research can be tricky. Neural networks can significantly improve error bars for a set of statistical data, but they are not always robust, says Smith. Take a network trained on one set of data and apply it to another, and it may make mistakes. So the goal is to find a method that keeps the robustness of traditional methods, while improving statistical precision using neural networks.

Smith and collaborators proposed an "AI-enhanced" version of a traditional data analysis method. The traditional method works by searching for large-scale variations in galaxy properties (such as colour or size) which can only arise in a universe with primordial non-Gaussianity. By training a neural network to learn which galaxy properties are optimal for this purpose, it is possible to construct an AI-enhanced data analysis pipeline which is the best of both worlds: the statistical power of AI, and the robustness of traditional methods.

NASA’s James Webb Space Telescope captured this deep field image of galaxy cluster SMACS 0723, revealing thousands of distant galaxies magnified by gravitational lensing.


Another Perimeter cosmologist, cross-appointed with York University, is Matthew Johnson. He’s betting that AI can help cosmologists better understand the distribution of dark matter and gas (both invisible to telescopes) in the universe. The datasets he works with tend to come in two forms: graphs and images. Graphs organize information about galaxies, such as their luminosity, and help map relational information between groups of galaxies. The pixelated information in images, meanwhile, helps map the dark matter and gas densities clumping around galaxies.

Existing neural networks have been built to work with either graphs or images, but Johnson and colleagues wanted to build a hybrid neural network that could do both.

“We demonstrated that such a hybrid is possible, and that it can be leveraged to increase the fidelity of the inferred dark matter and gas distribution given a set of galaxies,” Johnson says.
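A minimal sketch of what such a hybrid looks like in spirit (not Johnson’s actual architecture; the data and weights below are invented toy values): one branch mixes a galaxy’s catalogue features with those of its graph neighbours, another branch pools a pixel patch around the galaxy, and the two feature vectors are concatenated before a final linear readout.

```python
def graph_branch(features, neighbours, i):
    """Average a galaxy's features with its graph neighbours' features."""
    group = [features[i]] + [features[j] for j in neighbours[i]]
    dim = len(features[i])
    return [sum(v[k] for v in group) / len(group) for k in range(dim)]

def image_branch(patch):
    """Mean-pool a pixel patch around the galaxy into a single value."""
    flat = [p for row in patch for p in row]
    return [sum(flat) / len(flat)]

def fused_readout(features, neighbours, patches, weights, i):
    """Concatenate both branches and apply a linear readout."""
    z = graph_branch(features, neighbours, i) + image_branch(patches[i])
    return sum(w * v for w, v in zip(weights, z))

features = [[1.0, 0.5], [0.8, 0.4]]      # e.g. luminosity, colour
neighbours = {0: [1], 1: [0]}            # which galaxies are linked
patches = [[[0.2, 0.4], [0.6, 0.8]],     # toy pixel data per galaxy
           [[0.1, 0.1], [0.1, 0.1]]]
weights = [0.5, 0.5, 1.0]                # untrained, for illustration
print(fused_readout(features, neighbours, patches, weights, 0))
```

In a real network both branches and the readout would be learned jointly, but the fusion step, concatenating graph-derived and image-derived features, has this basic shape.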

Unblurring black holes with AI tools

The Event Horizon Telescope (EHT)’s 2019 image of black hole M87* was a landmark achievement in physics, but EHT scientists haven’t been resting on their laurels in the years since. Avery Broderick, cross-appointed between Perimeter Institute and the University of Waterloo, recently set about finding a way to sharpen the EHT images further, and turned to machine learning to make it happen.

The problem: empty space between us and the black hole isn’t really empty. Interstellar plasma, made up of wandering electrons, causes a scattering effect that distorts radio waves before they reach Earth. The result: blurry images.

In a 2025 paper, Broderick and colleagues trained a neural network to mitigate the effects of the interstellar medium, de-scattering and, in effect, unblurring the images. The team took steps to keep the training data unbiased (for example, the data does not presuppose that the resulting image should produce a ring-like shape, as expected from a black hole). Similarly, they limited the deblurring to simulated astronomical distances that match the EHT’s capabilities, to “avoid erroneously emphasizing small-scale structures that are inaccessible to EHT.”
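To illustrate the general idea of learned de-scattering (a toy stand-in, not the EHT pipeline): if a known blur mixes neighbouring samples of a 1-D signal, a small linear filter can be fitted by gradient descent, using only pairs of blurred and sharp signals, to approximately undo the blur.

```python
import random

def blur(x):
    """Simple 3-tap 'scattering' model with clamped edges."""
    n = len(x)
    return [0.25 * x[max(i - 1, 0)] + 0.5 * x[i] + 0.25 * x[min(i + 1, n - 1)]
            for i in range(n)]

def apply_filter(w, y):
    """Apply a learnable 3-tap de-scattering filter."""
    n = len(y)
    return [w[0] * y[max(i - 1, 0)] + w[1] * y[i] + w[2] * y[min(i + 1, n - 1)]
            for i in range(n)]

def mse(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)

rng = random.Random(0)
w = [0.0, 1.0, 0.0]                        # start from the identity filter
lr, eps = 0.05, 1e-5
for step in range(2000):
    x = [rng.random() for _ in range(16)]  # random 'sharp' training signal
    y = blur(x)
    base = mse(apply_filter(w, y), x)
    grads = []                             # numerical gradient per tap
    for k in range(3):
        w2 = list(w)
        w2[k] += eps
        grads.append((mse(apply_filter(w2, y), x) - base) / eps)
    w = [wk - lr * g for wk, g in zip(w, grads)]
```

After training, the fitted filter recovers signals noticeably closer to the originals than the blurred inputs are, the same win, in miniature, that a de-scattering network aims for on EHT data.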

An example of the effects of interstellar scattering. Kouroshnia et al.


The result showed that the technique was viable (outcompeting existing de-scattering processes), suggesting that it could be incorporated into the real EHT image reconstruction pipeline in the future.

This isn’t the first time the EHT team has put machine learning to work, although it is something they do cautiously. “Our forays into AI have been an exercise in understanding how to build verifiable tools – an AI version of ‘trust but verify,’” says Broderick. 

With that in mind, PSI master’s student (and now PhD student) Ali SaraerToosi began developing a machine learning tool called ALINet in 2023, designed to generate theoretical model images of a black hole’s accretion flows that can be compared to the real EHT data. ALINet’s key advantage is that it can generate these models very fast. Previous schemes took too long to deploy effectively, but ALINet makes the task viable by completing computational procedures that once took a minute in just milliseconds. In practice, it makes an impossible task possible.

Quantum computers controlled by AI?

At Perimeter Institute’s Quantum Intelligence Lab (PIQuIL), researchers like Roger Melko are working to improve the capabilities of quantum computers. 

They use data from quantum computers to train AI agents, and in turn, those agents can be used to control quantum computers. The task of ‘controlling’ a quantum computer is no mean feat. It is among the main barriers to full-scale, fault-tolerant quantum computing today. Quantum control involves manipulating quantum bits (qubits) to carry out tasks, while ensuring the entangled qubits aren’t disrupted by ‘noise’ or interference, which causes a loss of quantum coherence called decoherence.

In normal circumstances, a classical computer is involved in the quantum control process, but PIQuIL is investigating ways to also incorporate the unique abilities of AI agents into the control stack. So far, the results have been promising.
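A cartoon of the control problem (invented for illustration, not PIQuIL’s method): a qubit drifts away from a target state, and a simple agent searches over candidate corrective phase pulses, keeping whichever best restores the state, as judged by fidelity with the target.

```python
import cmath
import math

def fidelity(state, target):
    """|<target|state>|^2 for normalized 2-component states."""
    overlap = sum(t.conjugate() * s for t, s in zip(target, state))
    return abs(overlap) ** 2

plus = [1 / math.sqrt(2), 1 / math.sqrt(2)]           # target state |+>
drift = 0.7                                           # unknown to the agent
drifted = [plus[0], plus[1] * cmath.exp(1j * drift)]  # noisy evolution

# Agent: try corrective phases and keep the best-scoring one.
candidates = [k * 0.01 for k in range(-314, 315)]
best = max(candidates,
           key=lambda phi: fidelity(
               [drifted[0], drifted[1] * cmath.exp(1j * phi)], plus))
corrected = [drifted[0], drifted[1] * cmath.exp(1j * best)]
```

The agent recovers a correction close to the (negated) drift purely from the fidelity signal. Real quantum control replaces this brute-force search with trained agents acting on many qubits under realistic noise.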

Machine learning for gravitational wave modelling

In 2015, for the first time, scientists detected a collision between two black holes using gravitational waves – ripples in spacetime predicted by the theory of general relativity. Since then, 218 such collisions have been detected, involving both black holes and neutron stars.

To get the most out of gravitational wave detectors, scientists develop models of what certain kinds of collisions should look like in the data. These model waveforms tell researchers what to look for, helping sort out which signals are evidence of interesting physics and which are just noise.

Waveforms that represent the collision of neutron stars are in some ways more difficult to model than those of black hole mergers, because the dynamics and turbulence of the stars' plasma and magnetic fields need to be accounted for. In 2022, Perimeter researchers Tim Whittaker, Will East, Huan Yang, and Luis Lehner – who holds the Carlo Fidani Rainer Weiss Chair in Theoretical Physics – tested machine learning solutions to overcome these difficulties by applying a machine learning method called a conditional variational autoencoder. Although there is more work to be done, they showed that given enough training data, this method can provide an accurate generative model suited to the task.

Using physics to improve AI

While many physicists are looking for ways that AI can support their research, some are looking for ways to use physics to make AI better. 

Perimeter postdoctoral researcher Anindita Maiti is one of these. She’s trying to solve what is known as the “interpretability” problem. The process an AI uses to derive an answer is often obscure, a ‘black box’ between the information input into it, and the output on the other end. But if AI is to be trusted, researchers want to be able to understand what happens inside it.

Perimeter researcher Anindita Maiti develops physics-informed AI models that make the 'black box' of machine learning more transparent and trustworthy.


Maiti’s solution: give the AI a rulebook built on the laws of physics. If we know the rules, we know the game, and the black box becomes transparent. This ruleset might take the form of a particular quantum field theory (QFT), for example, which gives statistical information about the behaviour of an elementary particle. An AI constrained to follow a particular QFT gives us an exact map of the possible processes it can use to produce its output.

AI as the future of physics?

It’s clear that AI is changing the way we do physics. It isn’t a cure-all – it’s one tool among many – and it has shortcomings that can be severe if not handled carefully. Nonetheless, there are multiple areas of physics where it has demonstrable, practical uses, and scientists are being smart about using AI in ways that enhance research without falling into the technology’s pitfalls.

The research projects discussed here are exciting examples. There are others. Researchers at Perimeter are also delving into how AI can be put to work on new interferometry techniques that could improve the resolution of Earth’s optical telescopes. The possibilities are growing, and the case for smart, thoughtful use of AI is clear.

So, is AI all hype? No.

Is it a magic bullet? Also no.

Is it changing how physics is done? Resoundingly, yes.

About PI

Perimeter Institute is the world’s largest research hub devoted to theoretical physics. The independent Institute was founded in 1999 to foster breakthroughs in the fundamental understanding of our universe, from the smallest particles to the entire cosmos. Research at Perimeter is motivated by the understanding that fundamental science advances human knowledge and catalyzes innovation, and that today’s theoretical physics is tomorrow’s technology. Located in the Region of Waterloo, the not-for-profit Institute is a unique public-private endeavour, including the Governments of Ontario and Canada, that enables cutting-edge research, trains the next generation of scientific pioneers, and shares the power of physics through award-winning educational outreach and public engagement. 

For more information, contact:
Communications & Public Engagement
Media Relations
416-797-9666