Tensor Processing Units (TPUs) as scientific supercomputers
Google's TPUs were designed exclusively to accelerate and scale up machine learning workloads, amid the ongoing planet-wide race to build faster specialized hardware for artificial intelligence. But surely one must be able to use this hardware for other challenging computational tasks, right? We explored how to turn a TPU pod (2048 TPU v3 cores) into a dense linear algebra supercomputer, capable, for example, of multiplying two matrices of size 1,000,000 x 1,000,000 in just 2 minutes. We then used this power to perform a number of quantum physics and quantum chemistry computations at scale. For instance, we recently completed two of the largest-ever computations of their kind: a Density Functional Theory (DFT) computation of electronic structure (with N = 248,000 orbitals), and a Density Matrix Renormalization Group (DMRG) computation (with bond dimension D = 65,000). Cloud-based TPU pods and GPU pods are accessible to anyone and are poised to revolutionize the scientific supercomputing landscape.
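The core pattern behind multiplying such enormous matrices is block decomposition: the matrices are split into tiles that fit on individual cores, and each output tile is accumulated from products of tile pairs. The sketch below (an illustration of the general idea, not the actual TPU pod implementation, which distributes the tiles and communication across 2048 cores) shows the blocked multiplication on a single host with NumPy; the function name and block size are ours:

```python
import numpy as np

def blocked_matmul(a, b, block):
    """Multiply square matrices a and b by accumulating products of
    (block x block) tiles -- the same decomposition that a distributed
    matmul spreads over many accelerator cores."""
    n = a.shape[0]
    c = np.zeros((n, n), dtype=a.dtype)
    for i in range(0, n, block):          # row-block of the output
        for j in range(0, n, block):      # column-block of the output
            for k in range(0, n, block):  # contraction over inner tiles
                c[i:i+block, j:j+block] += (
                    a[i:i+block, k:k+block] @ b[k:k+block, j:j+block]
                )
    return c
```

On a pod, each core would own a subset of the tiles of `a` and `b`, and the inner accumulation becomes a sequence of local matrix multiplications interleaved with inter-core communication of tiles, so the total work scales with the number of cores.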