In theoretical computer science, the computational complexity of matrix multiplication dictates how quickly the operation of matrix multiplication can be performed. Matrix multiplication algorithms are a central subroutine in theoretical and numerical algorithms for numerical linear algebra and optimization, so finding the right amount of time it should take is of major practical relevance. Directly applying the mathematical definition of matrix multiplication gives an algorithm that requires n^3 field operations to multiply two n × n matrices over that field (Θ(n^3) in big O notation).

Surprisingly, algorithms exist that provide better running times than this straightforward "schoolbook algorithm". The first to be discovered was Strassen's algorithm, devised by Volker Strassen in 1969 and often referred to as "fast matrix multiplication". The optimal number of field operations needed to multiply two square n × n matrices, up to constant factors, is still unknown; this is a major open question in theoretical computer science.

As of October 2022, the best announced bound on the asymptotic complexity of a matrix multiplication algorithm is O(n^2.37188) time, given by Duan, Wu and Zhou in a preprint. This improves on the bound of O(n^2.3728596) time, given by Josh Alman and Virginia Vassilevska Williams. However, this and similar improvements to Strassen's algorithm are not used in practice, because they are galactic algorithms: the constant coefficient hidden by the big O notation is so large that they are only worthwhile for matrices too large to handle on present-day computers.
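To make the two running times concrete, here is a minimal Python sketch (not drawn from the article itself) of the definition-based schoolbook algorithm, assuming dense n × n matrices stored as lists of lists; the three nested loops perform exactly n^3 multiplications:

```python
def schoolbook_matmul(A, B):
    """Multiply two n x n matrices directly from the definition: Θ(n^3) field operations."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]  # sum over the shared index k
    return C
```

Strassen's algorithm improves on this by splitting each matrix into four half-size blocks and combining them with only seven recursive products instead of the eight that block multiplication naively requires, giving T(n) = 7·T(n/2) + O(n^2), i.e. O(n^log2(7)) ≈ O(n^2.807). The following sketch assumes, for simplicity, that n is a power of two; the helper names are illustrative, not standard:

```python
def strassen(A, B):
    """Strassen's recursive scheme: seven half-size products per level, O(n^2.807) overall."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2

    def quad(M):  # split M into four h x h quadrants
        return ([row[:h] for row in M[:h]], [row[h:] for row in M[:h]],
                [row[:h] for row in M[h:]], [row[h:] for row in M[h:]])

    def add(X, Y): return [[x + y for x, y in zip(r, s)] for r, s in zip(X, Y)]
    def sub(X, Y): return [[x - y for x, y in zip(r, s)] for r, s in zip(X, Y)]

    A11, A12, A21, A22 = quad(A)
    B11, B12, B21, B22 = quad(B)

    # The seven products (instead of eight) -- this is the source of the speedup.
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))

    # Recombine into the quadrants of the result.
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)

    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bottom


# Quick check that both methods agree on a small example:
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert strassen(A, B) == schoolbook_matmul(A, B)  # [[19, 22], [43, 50]]
```

The asymptotically faster bounds cited above (O(n^2.3728596), O(n^2.37188)) rest on far more intricate techniques and, being galactic algorithms, have no comparably simple implementation.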