Fostering collaboration in a unique environment

Perimeter’s postdoctoral program, one of the largest in theoretical physics, offers early-career researchers a unique opportunity for independent research. Postdoctoral researchers at Perimeter are appointed as independent researchers and encouraged to collaborate broadly and tackle ambitious, challenging problems while embedded in a supportive environment. Perimeter aims to attract highly creative and intellectually adventurous theorists from around the world, offering three-year positions as well as prestigious named four-year fellowships, senior five-year fellowships, and fellowships jointly appointed with partner universities and research institutions.

Postdoctoral positions at Perimeter are highly sought after: over 1,300 applicants vied for 15 postdoctoral positions in the 2024/25 reporting year. Perimeter postdoctoral researchers go on to successful careers in industry and prestigious academic positions around the world. Highlights from this year’s outgoing postdocs include:

Daniel Egana-Ugrinovic, a senior postdoctoral researcher, is now a Senior Photonics Engineer at Xanadu Quantum Technologies in Toronto, Canada.

Jessica Muir, a postdoctoral researcher, is now an assistant professor at the University of Cincinnati.

Perimeter had a total of 70 postdocs from 25 countries this year, with 35 percent of the group identifying as women or additional genders. See the full list in the Appendix.

PI People: Postdoctoral Researcher

Anindita Maiti

Anindita Maiti in Perimeter's atrium.

Solving AI’s black box


When we ask ChatGPT a question, we don’t really know how it arrives at its answer. Sure, we can infer that a mix of data science, algorithms, and training went into its response. But there’s a divide between the user and the artificial intelligence (AI), a kind of black box between our inputs and its output. Many AI and machine learning experts admit that we simply don’t know what many models are doing inside that black box.

This issue is broadly referred to as “AI interpretability,” the idea that AI models should be legible and transparent at every step. Perimeter Institute researcher Anindita Maiti is hoping to solve this challenge. She proposes that instead of feeding AI models more training data, developers could constrain the AI to operate within a theoretical framework, allowing for a more transparent understanding of its logic.
