Entry | Matrix multiplication algorithm |
Definition |
Because matrix multiplication is such a central operation in many numerical algorithms, much work has been invested in making matrix multiplication algorithms efficient. Applications of matrix multiplication in computational problems are found in many fields, including scientific computing and pattern recognition, and in seemingly unrelated problems such as counting the paths through a graph.[1] Many different algorithms have been designed for multiplying matrices on different types of hardware, including parallel and distributed systems, where the computational work is spread over multiple processors (perhaps over a network). Directly applying the mathematical definition of matrix multiplication gives an algorithm that takes time on the order of {{math|n3}} to multiply two {{math|n × n}} matrices ({{math|Θ(n3)}} in big O notation). Better asymptotic bounds on the time required to multiply matrices have been known since the work of Strassen in the 1960s, but it is still unknown what the optimal time is (i.e., what the complexity of the problem is).

Iterative algorithm

The definition of matrix multiplication is that if {{math|C {{=}} AB}} for an {{math|n × m}} matrix {{mvar|A}} and an {{math|m × p}} matrix {{mvar|B}}, then {{mvar|C}} is an {{math|n × p}} matrix with entries {{math|cij {{=}} Σk aikbkj}}, where the sum runs over {{math|k {{=}} 1, …, m}}. From this, a simple algorithm can be constructed which loops over the indices {{mvar|i}} from 1 through {{mvar|n}} and {{mvar|j}} from 1 through {{mvar|p}}, computing the above using a nested loop: {{framebox|blue}}
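// A sketch of the straightforward iterative algorithm described above; variable names follow the text.
Input: matrices A (n × m) and B (m × p)
Let C be a new n × p matrix
For i from 1 to n:
    For j from 1 to p:
        Let sum = 0
        For k from 1 to m:
            Set sum ← sum + A[i][k] × B[k][j]
        Set C[i][j] ← sum
Return C
{{frame-footer}}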
This algorithm takes time {{math|Θ(nmp)}} (in asymptotic notation).[1] A common simplification for the purpose of algorithm analysis is to assume that the inputs are all square matrices of size {{math|n × n}}, in which case the running time is {{math|Θ(n3)}}, i.e., cubic.[1]

Cache behavior

The three loops in iterative matrix multiplication can be arbitrarily swapped with each other without an effect on correctness or asymptotic running time. However, the order can have a considerable impact on practical performance due to the memory access patterns and cache use of the algorithm;[2] which order is best also depends on whether the matrices are stored in row-major order, column-major order, or a mix of both. In particular, in the idealized case of a fully associative cache consisting of {{mvar|M}} bytes with {{mvar|b}} bytes per cache line (i.e., {{math|{{sfrac|M|b}}}} cache lines), the above algorithm is sub-optimal for {{mvar|A}} and {{mvar|B}} stored in row-major order. When {{math|n > {{sfrac|M|b}}}}, every iteration of the inner loop (a simultaneous sweep through a row of {{mvar|A}} and a column of {{mvar|B}}) incurs a cache miss when accessing an element of {{mvar|B}}. This means that the algorithm incurs {{math|Θ(n3)}} cache misses in the worst case. {{As of|2010}}, the speed of memories compared to that of processors is such that the cache misses, rather than the actual calculations, dominate the running time for sizable matrices.[3]

The optimal variant of the iterative algorithm for {{mvar|A}} and {{mvar|B}} in row-major layout is a tiled version, where the matrix is implicitly divided into square tiles of size {{math|{{radic|M}}}} by {{math|{{radic|M}}}}:[3][4] {{framebox|blue}}
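// A sketch of the tiled (blocked) loop order described above; the tile size T is chosen so that a
// T × T tile of each matrix fits in cache, i.e. roughly T ≈ √M elements (an illustrative reconstruction).
Input: matrices A (n × m) and B (m × p), tile size T
Let C be a new n × p matrix, initialized to zero
For I from 1 to n in steps of T:
    For J from 1 to p in steps of T:
        For K from 1 to m in steps of T:
            // Multiply the tile of A starting at (I, K) by the tile of B starting at (K, J),
            // accumulating into the tile of C starting at (I, J).
            For i from I to min(I + T − 1, n):
                For j from J to min(J + T − 1, p):
                    For k from K to min(K + T − 1, m):
                        Set C[i][j] ← C[i][j] + A[i][k] × B[k][j]
Return C
{{frame-footer}}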
In the idealized cache model, this algorithm incurs only {{math|Θ({{sfrac|n3|b {{radic|M}}}})}} cache misses; the divisor {{math|b {{radic|M}}}} amounts to several orders of magnitude on modern machines, so that the actual calculations dominate the running time, rather than the cache misses.[3]

Divide and conquer algorithm

An alternative to the iterative algorithm is the divide and conquer algorithm for matrix multiplication. This relies on the block partitioning {{math|A {{=}} (A11, A12; A21, A22)}}, {{math|B {{=}} (B11, B12; B21, B22)}}, {{math|C {{=}} (C11, C12; C21, C22)}}, which works for all square matrices whose dimensions are powers of two, i.e., the shapes are {{math|2n × 2n}} for some {{mvar|n}}. The matrix product is now {{math|C11 {{=}} A11B11 + A12B21}}, {{math|C12 {{=}} A11B12 + A12B22}}, {{math|C21 {{=}} A21B11 + A22B21}}, {{math|C22 {{=}} A21B12 + A22B22}}, which consists of eight multiplications of pairs of submatrices, followed by an addition step. The divide and conquer algorithm computes the smaller multiplications recursively, using the scalar multiplication {{math|c11 {{=}} a11b11}} as its base case. The complexity of this algorithm as a function of {{mvar|n}} is given by the recurrence[1] {{math|T(1) {{=}} Θ(1)}}; {{math|T(n) {{=}} 8T(n/2) + Θ(n2)}}, accounting for the eight recursive calls on matrices of size {{math|n/2}} and {{math|Θ(n2)}} to sum the four pairs of resulting matrices element-wise. Application of the master theorem for divide-and-conquer recurrences shows this recursion to have the solution {{math|Θ(n3)}}, the same as the iterative algorithm.[1]

Non-square matrices

A variant of this algorithm that works for matrices of arbitrary shapes and is faster in practice[3] splits matrices in two instead of four submatrices, as follows.[5] Splitting a matrix now means dividing it into two parts of equal size, or as close to equal sizes as possible in the case of odd dimensions. {{framebox|blue}}
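// A sketch of the two-way splitting recursion described above; the base-case threshold is illustrative.
Inputs: matrices A of size n × m and B of size m × p
Base case: if max(n, m, p) is below some threshold, compute A × B with the (possibly tiled) iterative algorithm.
Recursive cases, splitting along the largest dimension:
    If max(n, m, p) = n, split A horizontally into A1 (top rows) and A2 (bottom rows), and compute
        C = (A1; A2) B = (A1 B; A2 B)
    Otherwise, if max(n, m, p) = p, split B vertically into B1 (left columns) and B2 (right columns), and compute
        C = A (B1, B2) = (A B1, A B2)
    Otherwise, max(n, m, p) = m. Split A vertically into (A1, A2) and B horizontally into (B1; B2), and compute
        C = (A1, A2) (B1; B2) = A1 B1 + A2 B2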
{{frame-footer}}

Cache behavior

The cache miss rate of recursive matrix multiplication is the same as that of a tiled iterative version, but unlike that algorithm, the recursive algorithm is cache-oblivious:[5] there is no tuning parameter required to get optimal cache performance, and it behaves well in a multiprogramming environment where cache sizes are effectively dynamic due to other processes taking up cache space.[3] (The simple iterative algorithm is cache-oblivious as well, but much slower in practice if the matrix layout is not adapted to the algorithm.) The number of cache misses incurred by this algorithm, on a machine with an ideal cache of {{mvar|M}} bytes organized in lines of {{mvar|b}} bytes, is likewise bounded by {{math|Θ({{sfrac|n3|b {{radic|M}}}})}}.[5]{{rp|13}}

Sub-cubic algorithms

Algorithms exist that provide better running times than the straightforward ones. The first to be discovered was Strassen's algorithm, devised by Volker Strassen in 1969 and often referred to as "fast matrix multiplication". It is based on a way of multiplying two {{math|2 × 2}} matrices which requires only 7 multiplications (instead of the usual 8), at the expense of several additional addition and subtraction operations. Applying this recursively gives an algorithm with a multiplicative cost of {{math|O(nlog2 7) ≈ O(n2.807)}}. Strassen's algorithm is more complex, and the numerical stability is reduced compared to the naïve algorithm,[6] but it is faster in cases where {{math|n > 100}} or so[2] and appears in several libraries, such as BLAS.[7] It is very useful for large matrices over exact domains such as finite fields, where numerical stability is not an issue.

The current {{math|O(nk)}} algorithm with the lowest known exponent {{mvar|k}} is a generalization of the Coppersmith–Winograd algorithm with an asymptotic complexity of {{math|O(n2.3728639)}}, due to François Le Gall.[8] Le Gall's algorithm, and the Coppersmith–Winograd algorithm on which it is based, are similar to Strassen's algorithm: a way is devised for multiplying two {{math|k × k}} matrices with fewer than {{math|k3}} multiplications, and this technique is applied recursively. However, the constant coefficient hidden by the big O notation is so large that these algorithms are only worthwhile for matrices that are too large to handle on present-day computers.[9][10]

Since any algorithm for multiplying two {{math|n × n}} matrices has to process all {{math|2n2}} entries, there is an asymptotic lower bound of {{math|Ω(n2)}} operations. Raz proved a lower bound of {{math|Ω(n2 log(n))}} for bounded-coefficient arithmetic circuits over the real or complex numbers.[11]

Cohn et al. put methods such as the Strassen and Coppersmith–Winograd algorithms in an entirely different group-theoretic context, by utilising triples of subsets of finite groups which satisfy a disjointness property called the triple product property (TPP). They show that if families of wreath products of Abelian groups with symmetric groups realise families of subset triples with a simultaneous version of the TPP, then there are matrix multiplication algorithms with essentially quadratic complexity.[12][13] Most researchers believe that this is indeed the case.[10] However, Noga Alon, Amir Shpilka and Chris Umans have shown that some of these conjectures implying fast matrix multiplication are incompatible with another plausible conjecture, the sunflower conjecture.[14]

Freivalds' algorithm is a simple Monte Carlo algorithm that, given matrices {{mvar|A}}, {{mvar|B}} and {{mvar|C}}, verifies in {{math|Θ(n2)}} time whether {{math|AB {{=}} C}}.
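The verification idea behind Freivalds' algorithm can be illustrated with a short sketch (NumPy-based; the function name freivalds_verify and the repetition count r are illustrative choices, not taken from the cited sources): instead of recomputing {{math|AB}}, multiply both sides by a random 0/1 vector and compare the results, which costs only three matrix–vector products per trial. {{framebox|blue}}
import numpy as np

def freivalds_verify(A, B, C, r=20, rng=None):
    """Monte Carlo check of whether A @ B == C (a sketch, assuming square n x n inputs).

    Each trial picks a random 0/1 vector x and compares A(Bx) with Cx, using only
    O(n^2) work. If A @ B != C, a single trial misses the mismatch with probability
    at most 1/2, so r independent trials miss it with probability at most 2**(-r).
    """
    rng = np.random.default_rng() if rng is None else rng
    n = C.shape[1]
    for _ in range(r):
        x = rng.integers(0, 2, size=n)            # random vector in {0, 1}^n
        # Over an exact domain an exact comparison would be used; allclose
        # tolerates floating-point rounding in this illustrative version.
        if not np.allclose(A @ (B @ x), C @ x):
            return False                          # definitely A @ B != C
    return True                                   # probably A @ B == C

# Example usage (illustrative):
# A = np.random.rand(500, 500); B = np.random.rand(500, 500)
# assert freivalds_verify(A, B, A @ B)
{{frame-footer}}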
Parallel and distributed algorithms

Shared-memory parallelism

The divide and conquer algorithm sketched earlier can be parallelized in two ways for shared-memory multiprocessors. These are based on the fact that the eight recursive matrix multiplications in the block decomposition above can be performed independently of each other, as can the four summations (although the algorithm needs to "join" the multiplications before doing the summations). Exploiting the full parallelism of the problem, one obtains an algorithm that can be expressed in fork–join style pseudocode:[15] {{framebox|blue}} Procedure {{math|multiply(C, A, B)}}:
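    // A sketch of the fork–join recursion described in the text; the base case is illustrative.
    Base case: if n = 1, set C11 ← A11 × B11 (or multiply a small block matrix directly).
    Otherwise, allocate space for a new matrix T of the same shape as C, then:
        Partition A into A11, A12, A21, A22; partition B, C and T likewise.
        In parallel:
            Fork multiply(C11, A11, B11).
            Fork multiply(C12, A11, B12).
            Fork multiply(C21, A21, B11).
            Fork multiply(C22, A21, B12).
            Fork multiply(T11, A12, B21).
            Fork multiply(T12, A12, B22).
            Fork multiply(T21, A22, B21).
            Fork multiply(T22, A22, B22).
        Join (wait for the forked multiplications to complete).
        add(C, T).
        Deallocate T.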
Procedure {{math|add(C, T)}} adds {{mvar|T}} into {{mvar|C}}, element-wise:
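    // Sketch continued; the base case is again illustrative.
    Base case: if n = 1, set C11 ← C11 + T11 (or add a small block matrix directly).
    Otherwise:
        Partition C into C11, C12, C21, C22; partition T likewise.
        In parallel:
            Fork add(C11, T11).
            Fork add(C12, T12).
            Fork add(C21, T21).
            Fork add(C22, T22).
        Join.
{{frame-footer}}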
Here, fork is a keyword that signals a computation may be run in parallel with the rest of the function call, while join waits for all previously "forked" computations to complete. {{math|partition}} achieves its goal by pointer manipulation only. This algorithm has a critical path length of {{math|Θ(log2 n)}} steps, meaning it takes that much time on an ideal machine with an infinite number of processors; therefore, it has a maximum possible speedup of {{math|Θ(n3/log2 n)}} on any real computer. The algorithm isn't practical due to the communication cost inherent in moving data to and from the temporary matrix {{mvar|T}}, but a more practical variant achieves {{math|Θ(n2)}} speedup, without using a temporary matrix.[15]

Communication-avoiding and distributed algorithms

On modern architectures with hierarchical memory, the cost of loading and storing input matrix elements tends to dominate the cost of arithmetic. On a single machine this is the amount of data transferred between RAM and cache, while on a distributed-memory multi-node machine it is the amount transferred between nodes; in either case it is called the communication bandwidth. The naïve algorithm using three nested loops uses {{math|Ω(n3)}} communication bandwidth.

Cannon's algorithm, also known as the 2D algorithm, is a communication-avoiding algorithm that partitions each input matrix into a block matrix whose elements are submatrices of size {{math|{{sqrt|M/3}}}} by {{math|{{sqrt|M/3}}}}, where {{mvar|M}} is the size of fast memory.[16] The naïve algorithm is then used over the block matrices, computing products of submatrices entirely in fast memory (a serial simulation sketch of Cannon's block-shifting schedule is given at the end of this section). This reduces communication bandwidth to {{math|O(n3/{{sqrt|M}})}}, which is asymptotically optimal (for algorithms performing {{math|Ω(n3)}} computation).[17][18]

In a distributed setting with {{mvar|p}} processors arranged in a {{math|{{sqrt|p}}}} by {{math|{{sqrt|p}}}} 2D mesh, one submatrix of the result can be assigned to each processor, and the product can be computed with each processor transmitting {{math|O(n2/{{sqrt|p}})}} words, which is asymptotically optimal assuming that each node stores the minimum {{math|O(n2/p)}} elements.[18] This can be improved by the 3D algorithm, which arranges the processors in a 3D cube mesh, assigning every product of two input submatrices to a single processor. The result submatrices are then generated by performing a reduction over each row.[19] This algorithm transmits {{math|O(n2/p2/3)}} words per processor, which is asymptotically optimal.[18] However, this requires replicating each input matrix element {{math|p1/3}} times, and so requires a factor of {{math|p1/3}} more memory than is needed to store the inputs. This algorithm can be combined with Strassen's algorithm to further reduce runtime.[19] "2.5D" algorithms provide a continuous tradeoff between memory usage and communication bandwidth.[20] On modern distributed computing environments such as MapReduce, specialized multiplication algorithms have been developed.[21]

Algorithms for meshes

There are a variety of algorithms for multiplication on meshes. For multiplication of two {{math|n × n}} matrices on a standard two-dimensional mesh using the 2D Cannon's algorithm, one can complete the multiplication in {{math|3n − 2}} steps, although this is reduced to half this number for repeated computations.[22] The standard array is inefficient because the data from the two matrices do not arrive simultaneously and must be padded with zeroes.
Multiplication is even faster on a two-layered cross-wired mesh, where only {{math|2n − 1}} steps are needed.[23] The performance improves further for repeated computations, leading to 100% efficiency.[24] The cross-wired mesh array may be seen as a special case of a non-planar (i.e. multilayered) processing structure.[25]
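The block-shifting schedule of Cannon's 2D algorithm mentioned above can be illustrated with a short serial simulation (a sketch in Python/NumPy, not drawn from the cited sources; the function name cannon_multiply, the grid size q, and the assumption that n is divisible by q are illustrative). Each of the q × q block positions plays the role of one processor: after an initial skew, every position repeatedly multiplies the blocks it currently holds and then passes its A block left and its B block up. {{framebox|blue}}
import numpy as np

def cannon_multiply(A, B, q):
    """Serial simulation of Cannon's 2D schedule on a q-by-q grid of blocks.

    A sketch assuming A and B are n x n with n divisible by q.
    """
    n = A.shape[0]
    s = n // q
    # Split A and B into q x q grids of s x s blocks, and zero-initialize C blocks.
    Ab = [[A[i*s:(i+1)*s, j*s:(j+1)*s].copy() for j in range(q)] for i in range(q)]
    Bb = [[B[i*s:(i+1)*s, j*s:(j+1)*s].copy() for j in range(q)] for i in range(q)]
    Cb = [[np.zeros((s, s)) for _ in range(q)] for _ in range(q)]
    # Initial skew: row i of A shifts left by i, column j of B shifts up by j.
    Ab = [[Ab[i][(j + i) % q] for j in range(q)] for i in range(q)]
    Bb = [[Bb[(i + j) % q][j] for j in range(q)] for i in range(q)]
    for _ in range(q):
        # Each grid position multiplies the blocks it currently holds.
        for i in range(q):
            for j in range(q):
                Cb[i][j] += Ab[i][j] @ Bb[i][j]
        # Shift A blocks one position left and B blocks one position up.
        Ab = [[Ab[i][(j + 1) % q] for j in range(q)] for i in range(q)]
        Bb = [[Bb[(i + 1) % q][j] for j in range(q)] for i in range(q)]
    return np.block(Cb)

# Example usage (illustrative):
# A = np.arange(36.0).reshape(6, 6); B = np.eye(6)
# assert np.allclose(cannon_multiply(A, B, q=3), A @ B)
{{frame-footer}}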
References

1. {{Introduction to Algorithms|3|pages=75–79}}
2. {{cite book |first=Steven |last=Skiena |authorlink=Steven Skiena |title=The Algorithm Design Manual |publisher=Springer |year=2008 |pages=45–46, 401–403 |doi=10.1007/978-1-84800-070-4_4 |chapter=Sorting and Searching |isbn=978-1-84800-069-8}}
3. {{cite web |first1=Saman |last1=Amarasinghe |first2=Charles |last2=Leiserson |title=6.172 Performance Engineering of Software Systems, Lecture 8 |year=2010 |publisher=Massachusetts Institute of Technology |website=MIT OpenCourseWare |url=http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-172-performance-engineering-of-software-systems-fall-2010/video-lectures/lecture-8-cache-efficient-algorithms/ |accessdate=27 January 2015}}
4. {{cite conference |first1=Monica S. |last1=Lam |first2=Edward E. |last2=Rothberg |first3=Michael E. |last3=Wolf |title=The Cache Performance and Optimizations of Blocked Algorithms |conference=Int'l Conf. on Architectural Support for Programming Languages and Operating Systems (ASPLOS) |year=1991}}
5. {{cite thesis |type=Master's |first=Harald |last=Prokop |authorlink=Harald Prokop |title=Cache-Oblivious Algorithms |publisher=MIT |year=1999 |url=http://supertech.csail.mit.edu/papers/Prokop99.pdf}}
6. {{Citation |last1=Miller |first1=Webb |title=Computational complexity and numerical stability |citeseerx=10.1.1.148.9947 |year=1975 |journal=SIAM News |volume=4 |pages=97–107}}
7. {{cite book |last1=Press |first1=William H. |last2=Flannery |first2=Brian P. |last3=Teukolsky |first3=Saul A. |author3-link=Saul Teukolsky |last4=Vetterling |first4=William T. |title=Numerical Recipes: The Art of Scientific Computing |publisher=Cambridge University Press |edition=3rd |isbn=978-0-521-88068-8 |year=2007 |page=108 |title-link=Numerical Recipes}}
8. {{Citation |last1=Le Gall |first1=François |contribution=Powers of tensors and fast matrix multiplication |year=2014 |arxiv=1401.7714 |title=Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation (ISSAC 2014) |bibcode=2014arXiv1401.7714L}}. The original algorithm was presented by Don Coppersmith and Shmuel Winograd in 1990; it has an asymptotic complexity of {{math|O(n2.376)}}. It was improved in 2013 to {{math|O(n2.3729)}} by Virginia Vassilevska Williams, giving a time only slightly worse than Le Gall's improvement: {{cite web |url=http://www.cs.stanford.edu/~virgi/matrixmult-f.pdf |title=Multiplying matrices faster than Coppersmith-Winograd |first=Virginia Vassilevska |last=Williams |authorlink=Virginia Vassilevska Williams}}
9. {{citation |last=Iliopoulos |first=Costas S. |doi=10.1137/0218045 |issue=4 |journal=SIAM Journal on Computing |mr=1004789 |quote=The Coppersmith–Winograd algorithm is not practical, due to the very large hidden constant in the upper bound on the number of multiplications required. |pages=658–669 |title=Worst-case complexity bounds on algorithms for computing the canonical structure of finite abelian groups and the Hermite and Smith normal forms of an integer matrix |url=http://www.williamstein.org/home/wstein/www/home/pernet/Papers/Hermite/Iliopoulos88.pdf |volume=18 |year=1989 |citeseerx=10.1.1.531.9309}}
10. {{Cite journal |last1=Robinson |first1=Sara |title=Toward an Optimal Algorithm for Matrix Multiplication |url=http://www.siam.org/pdf/news/174.pdf |year=2005 |journal=SIAM News |volume=38 |issue=9}}
11. Ran Raz. On the complexity of matrix product. In Proceedings of the thirty-fourth annual ACM symposium on Theory of computing. ACM Press, 2002. {{doi|10.1145/509907.509932}}.
12. Henry Cohn, Robert Kleinberg, Balázs Szegedy, and Chris Umans. Group-theoretic Algorithms for Matrix Multiplication. {{arxiv|math.GR/0511460}}. Proceedings of the 46th Annual Symposium on Foundations of Computer Science, 23–25 October 2005, Pittsburgh, PA, IEEE Computer Society, pp. 379–388.
13. Henry Cohn, Chris Umans. A Group-theoretic Approach to Fast Matrix Multiplication. {{arxiv|math.GR/0307321}}. Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science, 11–14 October 2003, Cambridge, MA, IEEE Computer Society, pp. 438–449.
14. Noga Alon, Amir Shpilka, Chris Umans. On Sunflowers and Matrix Multiplication.
15. {{cite thesis |type=Ph.D. |last=Randall |first=Keith H. |title=Cilk: Efficient Multithreaded Computing |publisher=Massachusetts Institute of Technology |year=1998 |pages=54–57 |url=http://supertech.csail.mit.edu/papers/randall-phdthesis.pdf}}
16. Lynn Elliot Cannon, A cellular computer to implement the Kalman Filter Algorithm, Technical report, Ph.D. Thesis, Montana State University, 14 July 1969.
17. {{cite journal |last=Hong |first=J. W. |first2=H. T. |last2=Kung |title=I/O complexity: The red-blue pebble game |journal=STOC '81: Proceedings of the Thirteenth Annual ACM Symposium on Theory of Computing |year=1981 |pages=326–333}}
18. {{cite journal |last=Irony |first=Dror |first2=Sivan |last2=Toledo |first3=Alexander |last3=Tiskin |title=Communication lower bounds for distributed-memory matrix multiplication |journal=J. Parallel Distrib. Comput. |date=September 2004 |volume=64 |issue=9 |pages=1017–1026 |doi=10.1016/j.jpdc.2004.03.021 |citeseerx=10.1.1.20.7034}}
19. {{cite journal |last=Agarwal |first=R. C. |first2=S. M. |last2=Balle |first3=F. G. |last3=Gustavson |first4=M. |last4=Joshi |first5=P. |last5=Palkar |title=A three-dimensional approach to parallel matrix multiplication |journal=IBM J. Res. Dev. |date=September 1995 |volume=39 |issue=5 |pages=575–582 |doi=10.1147/rd.395.0575 |citeseerx=10.1.1.44.3404}}
20. {{cite journal |last=Solomonik |first=Edgar |first2=James |last2=Demmel |title=Communication-optimal parallel 2.5D matrix multiplication and LU factorization algorithms |journal=Proceedings of the 17th International Conference on Parallel Processing |year=2011 |volume=Part II |pages=90–109}}
21. {{cite journal |last1=Bosagh Zadeh |first1=Reza |last2=Carlsson |first2=Gunnar |title=Dimension Independent Matrix Square Using MapReduce |url=http://stanford.edu/~rezab/papers/dimsum.pdf |accessdate=12 July 2014}}
22. Bae, S. E., T.-W. Shinn, T. Takaoka (2014) A faster parallel algorithm for matrix multiplication on a mesh array. Procedia Computer Science 29: 2230–2240.
23. Kak, S. (1988) A two-layered mesh array for matrix multiplication. Parallel Computing 6: 383–385.
24. Kak, S. (2014) Efficiency of matrix multiplication on the cross-wired mesh array. https://arxiv.org/abs/1411.3273
25. Kak, S. (1988) Multilayered array computing. Information Sciences 45: 347–365.
Category | Matrix multiplication algorithms |