# Computational complexity
The computational complexity of an algorithm characterizes the number of operations it requires, so that we compare the algorithm rather than the hardware it runs on. Linear algebra algorithms are characterized in terms of the dimensions of their arguments (vector length, matrix size, etc.). Complexity is expressed in big O notation (similar to the accuracy of function approximations): only the leading order is kept, and constant factors are disregarded. For example, an algorithm requiring \(2n^3 + 5n^2\) operations is \(O(n^3)\).
## Example
Traditional matrix multiplication of two \(n \times n\) matrices involves \(n^3\) operations and is thus \(O(n^3)\).
NB: This is algorithm-dependent. The Strassen algorithm, for example, achieves \(O(n^{2.81})\).
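To make the operation count concrete, here is a minimal sketch of the standard algorithm (the function name `naive_matmult` is ours): each of the \(n^2\) entries of the result is a dot product of length \(n\), giving \(n^3\) multiply-add operations in total.

```python
import numpy as np

def naive_matmult(A, B):
    # Standard O(n^3) algorithm: n^2 output entries, each a length-n dot product
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i, j] += A[i, k] * B[k, j]
    return C
```

NumPy's `@` operator, used below, delegates to an optimized BLAS routine, but its asymptotic cost is the same \(O(n^3)\).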
```python
import numpy as np

def time_matmult(n):
    # Time the product of two random n x n matrices
    A = np.random.rand(n, n)
    B = np.random.rand(n, n)
    %timeit C = A @ B
```
time_matmult(100)
time_matmult(1000)
time_matmult(2000)
time_matmult(4000)
123 µs ± 51.3 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
98.4 ms ± 37.9 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
514 ms ± 11 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
4.64 s ± 736 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Each doubling of \(n\) (from 1000 to 2000 to 4000) increases the runtime by roughly a factor of \(2^3 = 8\), consistent with \(O(n^3)\) scaling; the smallest case is dominated by overhead.
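One way to check the exponent empirically is a log-log fit of runtime against \(n\). The sketch below (our own helper, using `time.perf_counter` instead of `%timeit`) should report a slope of roughly 3:

```python
import time
import numpy as np

def fit_exponent(sizes):
    # Fit p in time ~ c * n^p via least squares on log(time) vs log(n)
    times = []
    for n in sizes:
        A, B = np.random.rand(n, n), np.random.rand(n, n)
        t0 = time.perf_counter()
        _ = A @ B
        times.append(time.perf_counter() - t0)
    p, _ = np.polyfit(np.log(sizes), np.log(times), 1)
    return p

print(fit_exponent([500, 1000, 2000, 4000]))  # roughly 3 for the standard algorithm
```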
Some complexities of common linear algebra operations:
| Operation | Description | Complexity |
|---|---|---|
| Matrix addition | Adding two matrices of the same size | O(n²) |
| Matrix multiplication | Multiplying two matrices | O(n³) (standard algorithm) |
| Matrix-vector multiplication | Multiplying a matrix by a vector | O(n²) |
| Matrix inversion | Finding the inverse of a matrix | O(n³) |
| Determinant calculation | Computing the determinant of a matrix | O(n³) |
| Transpose | Swapping the rows and columns of a matrix | O(n²) |
| Matrix norm (Frobenius) | Square root of the sum of squared entries | O(n²) |
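For reference, the corresponding NumPy calls (a sketch, assuming dense \(n \times n\) inputs; note that NumPy's transpose is a special case):

```python
import numpy as np

n = 500
A, B = np.random.rand(n, n), np.random.rand(n, n)
x = np.random.rand(n)

A + B              # addition: O(n^2)
A @ B              # multiplication: O(n^3) with the standard algorithm
A @ x              # matrix-vector product: O(n^2)
np.linalg.inv(A)   # inversion: O(n^3)
np.linalg.det(A)   # determinant (via LU factorization): O(n^3)
A.T                # transpose: O(1) in NumPy (returns a view), O(n^2) if copied
np.linalg.norm(A)  # Frobenius norm: O(n^2)
```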
High-performance computing introduces a further element to complexity: how well an algorithm parallelizes. This is called scaling and is measured as a function of the number of nodes. The goal is usually linear scaling, where doubling the number of nodes halves the runtime, but this is hard to achieve!
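As an illustration on a single machine, here is a minimal strong-scaling sketch, using process count as a stand-in for node count and a CPU-bound toy task (`work` is a hypothetical placeholder). Perfect linear scaling would double the speedup each time the process count doubles:

```python
import time
from multiprocessing import Pool

def work(chunk):
    # CPU-bound stand-in for a real computation
    return sum(i * i for i in range(chunk))

if __name__ == "__main__":
    total, base = 10_000_000, None
    for procs in [1, 2, 4, 8]:
        t0 = time.perf_counter()
        with Pool(procs) as pool:
            # Split the fixed total workload evenly across the processes
            pool.map(work, [total // procs] * procs)
        elapsed = time.perf_counter() - t0
        base = base or elapsed
        print(f"{procs} processes: {elapsed:.2f} s, speedup {base / elapsed:.1f}x")
```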