A new (old) lens on the universe

By Rose Simone
Scientists are reviving an old optical telescope technique that might outperform today’s best observatories and show us the universe in a new light.

Back in the 1950s, Robert Hanbury Brown and Richard Q. Twiss had a controversial idea.

They wanted to combine the signals from two separated light detectors to create, in effect, a much bigger optical telescope. This basic idea, a method known as interferometry, was already established for radio telescopes at the time.

But optical and radio telescopes are quite different. With radio telescopes, it is possible to record the electrical signals and then piece the information together afterwards. With optical telescopes, one has to count photons and measure their arrival times very precisely (the light intensity) and then cross-correlate that information.

Many scoffed. 

The technical challenges were huge, and some scientists, who had embraced the idea of photons as discrete particles with individual properties, did not think it was even theoretically possible.

Hanbury Brown and Twiss proved them wrong.

Their technique, known as intensity interferometry, was used to measure the diameter of Sirius A in 1956, at the Jodrell Bank Observatory in the UK.


The success led Hanbury Brown and others to design the Narrabri Stellar Intensity Interferometer in Australia, used to measure the diameters of many more stars.

It was revolutionary for its time. But in the 1970s, their technique was largely abandoned. The detectors and timing instruments for intensity interferometry were simply too slow. It took hours of work to capture and cross-correlate the information about light. 

Meanwhile, scientists were making improvements to an even older technique, known as amplitude interferometry, resulting in a new generation of large optical telescopes. With that, intensity interferometry was put on the shelf.

But now, with the advent of better and faster photodetectors that are already available for commercial uses, coupled with super-fast processing, scientists are buzzing with excitement about the possibilities for intensity interferometry all over again.

Perimeter Institute recently hosted a Future Prospects of Intensity Interferometry workshop that brought together the experts who are developing the ultra-fast photodetectors and spectroscopy capabilities needed to make a modern version of the technique viable, alongside the scientists who are finding new ways to use that technology in cosmology. They are hoping to inspire a revival of intensity interferometry and pioneer new ways of seeing objects in the universe.

The workshop was co-organized by Neal Dalal, a Perimeter faculty member in cosmology, Marios Galanis, a Perimeter postdoctoral researcher, Junwu Huang, a Perimeter faculty member in cosmology and particle physics, and Masha Baryakhtar, a former Perimeter postdoctoral researcher who is now an assistant professor in physics at the University of Washington.

Dalal and Galanis are also co-authors of a new Physical Review D paper that discusses the possibility of using intensity interferometry to get better images of active galactic nuclei (AGN), which are supermassive black holes that emit bright jets and winds that shape their galaxies. The paper was published with their colleagues Charles Gammie of the University of Illinois at Urbana-Champaign, Samuel Gralla of the University of Arizona, and Norman Murray of the University of Toronto.

“Currently, there is a lot of controversy regarding the physics of how gas flows into these objects. But with the intensity interferometry technique and using large enough arrays, we could literally take pictures of these sources just to see how they look. We can take movies,” Dalal says. “With these pictures and movies, you can really start to understand the physics of these objects.” 

Combining the information from two or more telescopes to create an image that looks as if it were coming from one big telescope is well established in the radio telescope realm.

This is exactly how the Event Horizon Telescope works. It uses radio antennas all around the world, which collect and record the electrical signals from the region around a black hole's event horizon. The data about the amplitudes and phases of those signals can then be correlated with software, allowing an image to be pulled together.

That gave us the amazing first-ever image of a black hole at the centre of Messier 87 (M87), a galaxy 50 million light-years from Earth. When that famous image was released to the public, scientists described it as a feat akin to capturing an image of an orange on the surface of the Moon.

But with optical telescopes, achieving the same thing is much more complicated. Optical telescopes need to count the photons and measure their arrival times very precisely as they hit the camera pixels at different telescopes. This information about the coherence (light waves oscillating at the same frequency and travelling in the same direction) must be integrated to get the image.
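The core of that measurement can be captured in a few lines of code. Below is a minimal Python sketch of the kind of intensity cross-correlation involved; the photon timestamps, bin width, and observation length are invented for the example, not real instrument values. Photon counts from two telescopes are binned in time and correlated, and any excess correlation over chance encodes information about the source.

```python
# A minimal sketch of the intensity cross-correlation at the heart of
# intensity interferometry. All numbers here are illustrative assumptions.
import numpy as np

def g2(timestamps_a, timestamps_b, bin_width, duration):
    """Normalized second-order correlation between two detectors.

    timestamps_*: photon arrival times (seconds) at each telescope
    bin_width: timing resolution of the detectors (seconds)
    duration: total observation time (seconds)
    """
    n_bins = int(duration / bin_width)
    # Turn photon arrival times into binned intensities (counts per bin).
    i_a, _ = np.histogram(timestamps_a, bins=n_bins, range=(0.0, duration))
    i_b, _ = np.histogram(timestamps_b, bins=n_bins, range=(0.0, duration))
    # g2 = <I_a * I_b> / (<I_a> * <I_b>); values above 1 signal correlated
    # intensity fluctuations, whose strength as a function of telescope
    # separation encodes the angular size of the source.
    return np.mean(i_a * i_b) / (np.mean(i_a) * np.mean(i_b))

# Toy usage: uncorrelated (Poisson) arrivals should give g2 close to 1.
rng = np.random.default_rng(0)
t_a = np.sort(rng.uniform(0.0, 1.0, 100_000))
t_b = np.sort(rng.uniform(0.0, 1.0, 100_000))
print(g2(t_a, t_b, bin_width=1e-4, duration=1.0))
```

In a real instrument, the hard part is exactly what stalled the field in the 1970s: detecting and time-stamping individual photons fast enough for the correlation to be measurable.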

However, optical telescopes have one big advantage: the wavelength of visible light is much shorter than that of radio waves, and the shorter the wavelength, the finer the detail a telescope array can resolve. That is why scientists today are excited about reviving intensity interferometry.

“The EHT observes in millimetre wavelengths. This matters because the resolution of your telescope depends not only on the separation between the telescopes, but also on the wavelength that you are observing in,” Dalal explains. 

“Visible light, however, has much shorter wavelengths, smaller than a micron, and therefore, for the same baseline or separation between the telescopes, you could get an angular resolution many thousands of times better.”
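Dalal's point can be checked with one line of arithmetic: diffraction-limited angular resolution scales as wavelength divided by baseline. The wavelengths and baseline in this sketch are illustrative round numbers.

```python
# Back-of-envelope comparison of resolution at radio vs. optical wavelengths.
EHT_WAVELENGTH = 1.3e-3      # metres (the EHT observes near 1.3 mm)
OPTICAL_WAVELENGTH = 0.5e-6  # metres (visible light, about half a micron)
BASELINE = 1.0e7             # metres (roughly an Earth-scale separation)

theta_radio = EHT_WAVELENGTH / BASELINE      # angular resolution, radians
theta_optical = OPTICAL_WAVELENGTH / BASELINE

print(f"radio:   {theta_radio:.1e} rad")
print(f"optical: {theta_optical:.1e} rad")
print(f"improvement: {theta_radio / theta_optical:.0f}x")  # ~2600x finer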

As Robin Kaiser, a cold atom researcher at the Nice Institute of Physics (affiliated with the University Côte d'Azur in France), explained at the workshop, another early expert in this field, Antoine Labeyrie, improved a different method, known as amplitude interferometry, for multi-telescope imaging in the 1970s.

Labeyrie's method produced, for the first time, interference fringes from Vega using two separate telescopes at the Observatory of Nice. Over the decades, his method has been improved with laser adaptive optics systems and other new technologies.


This is the type of interferometry used at visible-light observatories such as the Very Large Telescope (VLT), operated by the European Southern Observatory in the Atacama Desert, and the Center for High Angular Resolution Astronomy (CHARA) Array on Mount Wilson in California.

These big telescopes using Labeyrie's method have been successes in their own right. The independent telescopes at the VLT have used amplitude interferometry to obtain very high-resolution images of bright objects, such as Betelgeuse. The CHARA Array has measured the diameters and temperatures of stars, imaged spots and flares on their surfaces, and mapped the orbits of close binary companions.

But the revival of intensity interferometry could make optical telescopes even better. 

The technique has advantages because it is less affected by atmospheric turbulence and by optical imperfections in the telescopes. Moreover, the telescopes can be spaced farther apart, meaning the effective aperture of the combined instrument can be bigger.

Today, single-photon avalanche diode (SPAD) arrays and superconducting nanowire single-photon detectors (SNSPDs) can capture photons with the speed and timing precision needed to overcome those past limitations.

These new detectors have already revolutionized everything from smartphones to self-driving cars. They are useful in medical imaging and in manufacturing settings where fast-moving objects must be detected in low-light conditions. They are also being used in emerging technologies like quantum computing.

Scientists like Kaiser are now hoping to use them to do astrophysical imaging. His research group in France hopes to use these new detectors to directly measure the angular size of a white dwarf star. While scientists know a lot about white dwarfs from their models, “we want to measure the angular size of a white dwarf directly, for the first time, because every time you measure something, you might come upon surprises,” Kaiser says. 

There are also other intensity interferometry projects being taken up by various groups around the world. Scientists with the Very Energetic Radiation Imaging Telescope Array System (VERITAS) collaboration measured the angular diameter of stars using stellar intensity interferometry for the first time in nearly 50 years and demonstrated both improvements to the sensitivity of the technique and its scalability using digital electronics. There is also an effort to build out intensity interferometry capacity using the Cherenkov Telescope Array (CTA) gamma ray telescopes. 

Philip Mauskopf, an experimental cosmologist at Arizona State University, was a speaker at the conference and is excited about putting the new prospects for intensity interferometry to work in the field.

At the moment it is extremely hard to see dim objects like exoplanets with ground-based optical telescopes. With intensity interferometry techniques and a network of telescopes, the view might become much better, he says. “That would be cool,” he adds.

Those possibilities alone are enough to generate enthusiasm, but a cosmologist such as Dalal is also looking forward to using the images to solve long-standing scientific conundrums such as the expansion rate of the universe.

Currently, there is a controversy over the Hubble constant, which describes how fast objects in the universe, such as galaxies, are moving away from us, and therefore how fast the universe is expanding. Different types of measurements of the Hubble constant produce different results. Based on fundamental physics and observations of the early universe, the value should be around 68 kilometres per second per megaparsec, meaning the recession speed increases by 68 km/s for every megaparsec of distance from Earth (1 megaparsec is about 3.26 million light-years). But some measurements have pushed the value as high as 74 km/s/Mpc.
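The Hubble law itself is simple enough to evaluate directly. This short snippet, using an arbitrary example distance, shows the scale of the disagreement between the two contested values.

```python
# Hubble law in one line: recession speed = H0 x distance.
# The example distance is arbitrary; 1 Mpc is about 3.26 million light-years.
DISTANCE_MPC = 100.0  # an example galaxy 100 megaparsecs away

for h0 in (68.0, 74.0):  # the two contested values, in km/s/Mpc
    speed = h0 * DISTANCE_MPC  # km/s
    print(f"H0 = {h0:.0f} km/s/Mpc -> recession speed {speed:.0f} km/s")
```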

That difference between 68 and 74 may not seem like much, but it is a huge deal in cosmology. The discrepancy between predictions and observations implies there could be something major missing from our current understanding of the universe. Getting an accurate value for the Hubble constant is important for understanding how the universe will evolve and what its ultimate fate will be.

Dalal says intensity interferometry could help solve that problem by allowing scientists to measure both the angular size (how much of the sky an object appears to cover) and the actual physical size of supermassive black hole regions. We might be able to see and measure light fluctuating and travelling from the central black hole out to the ionized gas clouds around it (the source of the emission lines). From that, it is also possible to get the redshift (how fast an object is moving away from us). Comparing the physical size with the angular size then gives the distance directly, and distance plus redshift gives the expansion rate.
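The arithmetic behind that geometric approach fits in a few lines. In the sketch below, every input value is an illustrative assumption, not a real measurement: a physical size of roughly ten light-days for the emitting region, an angular size of about 20 microarcseconds, and a small redshift.

```python
# A minimal sketch of the geometric distance idea: physical size divided by
# angular size gives a distance; combined with redshift, that yields an
# expansion rate. All input values are illustrative assumptions.
C_KM_S = 299_792.458       # speed of light, km/s
MPC_KM = 3.086e19          # one megaparsec in kilometres

physical_size_km = 2.6e11  # assumed size of emitting region (~10 light-days)
angular_size_rad = 1.0e-10 # assumed measured angular size (~20 microarcsec)
redshift = 0.02            # assumed measured redshift

distance_mpc = (physical_size_km / angular_size_rad) / MPC_KM
recession_speed = C_KM_S * redshift   # km/s; valid for small redshifts
h0 = recession_speed / distance_mpc   # km/s/Mpc
print(f"distance ~ {distance_mpc:.0f} Mpc, H0 ~ {h0:.0f} km/s/Mpc")
```

With these made-up inputs the result lands near 71 km/s/Mpc; the point is that no calibration ladder of standard candles enters anywhere in the calculation.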

“It would give us a purely geometric way to measure how fast the universe is expanding. Theorists like me like these types of methods because they are clean, and we can understand them from first principles. There are no uncertain calibration factors,” Dalal says. 

“That is just one tiny sliver of the sort of science you could imagine enabling if you can increase the resolution by factors of thousands,” he adds.

Dalal says the other great potential of intensity interferometry is that it would be much cheaper than the usual way of seeing with giant optical telescopes.

The modern detectors that are commercially available have come down in price, but also, the mirrors in the telescopes don’t need to be as precise for intensity interferometry. 

With big professional telescope mirrors, “a lot of the time in manufacturing is spent getting the surface errors down from millimetres to nanometers,” he says.

In other words, smaller, less expensive telescopes can be used. They can have cheaper mirrors, but when arranged at a distance from one another, they can together act as a lens with incredible resolution, Dalal says.

The group at the Perimeter workshop has already decided to reconvene for another workshop next year in Germany, to strengthen the collaborations between the detector technology experts and the cosmologists who want to use the technology.

What is needed to reignite the field of intensity interferometry to make all this science possible?

Compared to other cosmology projects, not a whole lot, Dalal says. 

“We already have the detectors. We have already figured out how to synchronize the clocks. I don’t see many showstoppers, except for the funding,” he says.

“One of the outcomes of the workshop is that we are putting together the science book to describe the current state of the art and what is possible. We are laying out the science case, and then from that, people can pull from that book and write their funding proposals.”

Some intensity interferometry projects are already being done on the brightest stars in the sky, “but we want to scale things up,” Dalal says. Scientists would like to build different intensity interferometry arrays of different sizes, he adds.

“There is a lot of enthusiasm toward making this happen.”

About PI

Perimeter Institute is the world’s largest research hub devoted to theoretical physics. The independent Institute was founded in 1999 to foster breakthroughs in the fundamental understanding of our universe, from the smallest particles to the entire cosmos. Research at Perimeter is motivated by the understanding that fundamental science advances human knowledge and catalyzes innovation, and that today’s theoretical physics is tomorrow’s technology. Located in the Region of Waterloo, the not-for-profit Institute is a unique public-private endeavour, including the Governments of Ontario and Canada, that enables cutting-edge research, trains the next generation of scientific pioneers, and shares the power of physics through award-winning educational outreach and public engagement. 

For more information, contact:
Communications & Public Engagement
Media Relations
416-797-9666