In 2019, the world got its first look at a supermassive black hole, M87*, at the heart of the giant elliptical galaxy Messier 87. The image showed a ring-like structure with a central depression, and it was consistent with the predictions of Einstein’s theory of general relativity. The image made headlines around the world.
“That first image of M87* is beautiful and stark,” says Avery Broderick, Perimeter researcher, Professor at the University of Waterloo, and founding member of the Event Horizon Telescope (EHT) collaboration that brought the images to the world. “The black hole appears as a black hole should appear. Through quantitative predictions, we knew the ring should be a certain size, the shadow should be a certain size – and it was.”
Today, the EHT has two supermassive black holes in its sights. Both M87* and Sagittarius A* (Sgr A*, pronounced "sadge-ay-star"), a supermassive black hole at the centre of our own Milky Way, are prime candidates for detailed study. Black holes – once among the greatest mysteries in the universe – are also its most extreme gravitational objects. Scientists are eager to learn more – and in some cases, they are developing new machine learning tools to help research go faster.
Those tools – created and verified by the scientists who use them – may play an important role in helping researchers access new information about black holes, including the secrets of cosmology and quantum gravity.
‘Direct to physics’ for more efficiency
The EHT is a global network of radio telescopes that work together to create an Earth-sized telescope. But the EHT doesn’t take pictures directly. Instead, EHT data comes in a ‘mixed-up’ format that needs to be translated so researchers can identify candidate images to study, and then interpret them.
EHT researchers employ a method known as nonlinear interpolation to estimate unknown values between measured data points, filling in the gaps where the collected data is incomplete.
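As a loose illustration only – the real EHT pipeline works on complex-valued interferometric data, not a toy 1D signal – nonlinear interpolation amounts to fitting a smooth curve through sparse samples and reading off values in between. A minimal sketch with SciPy’s cubic spline:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Sparse "measurements" of a smooth underlying signal (illustrative only)
x_obs = np.array([0.0, 1.0, 2.5, 4.0, 6.0])
y_obs = np.sin(x_obs)

# Nonlinear (cubic spline) interpolation fills the gaps between samples
spline = CubicSpline(x_obs, y_obs)

x_fine = np.linspace(0.0, 6.0, 61)
y_fine = spline(x_fine)

# The interpolant reproduces the measurements exactly...
assert np.allclose(spline(x_obs), y_obs)
# ...and approximates the true signal reasonably well in between
print(np.max(np.abs(y_fine - np.sin(x_fine))))
```

The same principle – constrain a smooth model with the data you have, then evaluate it where you have none – carries over to the far harder 2D imaging problem.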
There are a couple of approaches to this interpolation problem. In some cases, scientists want to use the raw data without making any assumptions to reconstruct images. But to get the most out of the data received, EHT scientists often want to compare the data against physical models of black holes based on theoretical understandings of how we expect them to behave: how the plasma around them moves, how radiation is produced, and how it propagates across space to reach Earth. The latter allows researchers to discern the physical parameters of Sgr A* with unparalleled accuracy.
In practice, the results of these efforts are not a single image but libraries of billions of images, reflecting many possible ways of interpreting the data, and enabling rigorous statistical analysis of the black hole’s features.
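The idea of scoring a library of model images against data can be sketched in miniature. The toy below is an assumption-laden stand-in – a one-parameter “ring radius” model and a simple chi-squared-style score, where the real analysis involves billions of physics-based images and full statistical machinery:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "library": images generated from a single parameter (ring radius),
# standing in for a library of physics-based model images (illustrative only)
def model_image(radius, size=32):
    y, x = np.mgrid[:size, :size] - size / 2
    r = np.hypot(x, y)
    return np.exp(-((r - radius) ** 2) / 2.0)

radii = np.linspace(4.0, 12.0, 81)
library = np.stack([model_image(r) for r in radii])

# Simulated noisy "observation" with a true ring radius of 8.0
data = model_image(8.0) + 0.1 * rng.standard_normal((32, 32))

# Chi-squared-style score: which library image best fits the data?
scores = ((library - data[None]) ** 2).sum(axis=(1, 2))
best = radii[np.argmin(scores)]
print(best)  # should recover a radius near 8.0
```

Comparing every candidate against the data is what lets researchers say which parameters fit and which don’t – and it is exactly this exhaustive comparison that becomes expensive at the scale of billions of images.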
“It lets us zero in on which parameters and images fit the data and which images don’t,” says Broderick, “but it’s a time-consuming process. At even 1 second per image, it’s a very long time. We can use ~10,000 computer cores at a time, but even so, it used to take a month to run one example. What we really want to do is go straight from the data to the interpretation more quickly.”
It’s an approach he calls ‘direct to physics,’ because it allows physicists to skip the time-consuming step of producing images before interpretation can begin. Machine learning seemed to offer a solution.
“I knew very little about machine learning, but I had a set of potential problems in mind, and the idea was to develop a tool that would develop candidate images that we could deploy on many potential theoretical models and get direct-to-physics analyses,” he says.
Broderick enlisted the help of a master’s student, Ali SaraerToosi, to create ALINet, a purpose-built image generation tool in the form of a generative machine-learning model, as part of his PSI Master’s project. ALINet speeds up the process of generating candidate images for comparison by many orders of magnitude.
“We can now generate a comparatively tiny 100,000 images and train up a machine learning model to produce the required billions of images a thousand times faster than before, making this possible on ~30 computer cores in a single day,” says Broderick.
By reducing the computational cost of generating an image, this tool facilitates parameter estimation and model validation for observations of black hole systems. After being trained on a set of simulated images and validated against held-out test images, ALINet is now in use with the EHT.
“This is enabling technology for us. We’re getting to answers faster, and we’re verifying them as we go,” says Broderick.
Can AI ‘learn’ to see black holes more clearly?
There’s another challenge with the images of black holes: seeing them across so much intervening space can ruin the view. The problem is interstellar scattering, which produces two types of distortion at millimetre wavelengths: it blurs the image and adds refractive noise. It’s a problem Broderick and University of Waterloo graduate student Chunchong (Rufus) Ni posed to undergraduate students, who are working on another AI architecture to ‘denoise’ images of Sgr A*.
“This is a proof of principle question that we had. When we look toward Sgr A*, we see it through the disc of the galaxy. In that disc there’s gas and free electrons and magnetic fields that cause fluctuations in the image, and it’s kind of like looking at it through a window that has frosted from freezing rain, or through a window screen,” says Broderick.
The frequency of the light matters. At higher frequencies, the blurring from scattering weakens, but Earth’s atmosphere becomes more opaque, making observations much harder.
“Sgr A* is protected by a privacy screen, and so you can go to higher frequency, but that’s quite difficult on Earth, because the atmosphere is not very kind to electromagnetic radiation,” says Broderick.
So how do we get past this privacy screen to see the galactic centre? Fortunately, scientists get many views of Sgr A* because it is constantly changing – and the ‘screen’ through which we see it changes at a much slower rate.
“If you are looking at an object through a screen on your window, sometimes the screen imposes itself on your vision – but if that object starts to move, you see it clear as day, because your brain pieces together all the parts of the thing that’s moving and defocuses the screen,” Broderick says.
It’s a mathematical concept called ‘deconvolution’ – an undoing of blurring or distortion effects.
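The team’s actual tool is an AI denoiser, but the underlying idea of deconvolution can be illustrated with a classical stand-in. In the hypothetical sketch below, a sharp ring is blurred by a known Gaussian kernel (playing the role of scattering), and Wiener deconvolution – dividing in Fourier space with a regularizer to keep noise under control – substantially undoes the blur:

```python
import numpy as np

rng = np.random.default_rng(1)

size = 64
y, x = np.mgrid[:size, :size] - size // 2

# A sharp "source": a thin ring, loosely evoking a black hole image
truth = ((np.hypot(x, y) > 12) & (np.hypot(x, y) < 16)).astype(float)

# A known Gaussian blur kernel, standing in for interstellar scattering
kernel = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
kernel /= kernel.sum()

# Blur via FFT convolution, then add a little noise
K = np.fft.fft2(np.fft.ifftshift(kernel))
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * K))
observed = blurred + 0.01 * rng.standard_normal((size, size))

# Wiener deconvolution: divide in Fourier space, regularized against noise
eps = 1e-3
W = np.conj(K) / (np.abs(K) ** 2 + eps)
restored = np.real(np.fft.ifft2(np.fft.fft2(observed) * W))

# The restored image should be closer to the truth than the blurred one
print(np.abs(restored - truth).mean(), np.abs(blurred - truth).mean())
```

Wiener deconvolution needs the blur kernel to be known; the appeal of a learned approach is that it can cope when the distortion is only partially characterized, as with the turbulent interstellar medium.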
“Some students came up with the idea of adapting a machine-learning tool that takes noisy speckles out of X-ray astronomy images,” says Broderick. “And we thought, maybe we could build something that would let us see through the window screen or see a clearer picture.”
In May 2025, the team published a paper in The Astrophysical Journal, in which they demonstrate that it is possible to nearly completely mitigate interstellar scattering at a wavelength of 1.3 mm (the wavelength at which the EHT operates). They validated their tool using simulations.
Broderick qualifies this as a ‘proof of principle question.’ First, the team needs to see if it’s possible to ‘pull the screen off.’ Once they know there is a way, the next step is learning how to implement it in a way that is fully verifiable.
Trust but verify
“Science is about asking, ‘how sure can I be that I have seen what I think I saw?’ And that’s a very difficult task that we have taken on in my group,” Broderick says. “We have a ‘trust but verify’ approach, and we deploy AI only in places where we can explicitly verify that it behaves the way it’s supposed to behave.”
Broderick sees it as part of a learning process for everyone. As AI tools become more common, he expects the public will improve its understanding of how to differentiate between tools that create never-before-seen outputs and those that perform simple mapping functions.
“The trick is going to be how to harness AI in a responsible way that allows us to deploy it in discovery applications,” Broderick says. “How do you use AI to define what you’ve never seen? We’re constructing our own architecture designed to do image analysis in a way that we can verify. In science, in my group’s research, we will always validate AI’s answers.”
About PI
Perimeter Institute is the world’s largest research hub devoted to theoretical physics. The independent Institute was founded in 1999 to foster breakthroughs in the fundamental understanding of our universe, from the smallest particles to the entire cosmos. Research at Perimeter is motivated by the understanding that fundamental science advances human knowledge and catalyzes innovation, and that today’s theoretical physics is tomorrow’s technology. Located in the Region of Waterloo, the not-for-profit Institute is a unique public-private endeavour, including the Governments of Ontario and Canada, that enables cutting-edge research, trains the next generation of scientific pioneers, and shares the power of physics through award-winning educational outreach and public engagement.