October 18th, 13:00 in G.15.25
Sparse matrix operations are fundamental components of sparse linear solvers and simulations. Increases in problem size and complexity stress the performance of these sparse operations. Commonly used sparse solvers, such as Krylov subspace methods and algebraic multigrid, rely on sparse operations such as sparse matrix-vector (SpMV) and sparse matrix-matrix (SpGEMM) multiplication as key components. However, the performance and scalability of these operations suffer on modern architectures due to the large costs associated with communication. This talk focuses on extending the performance of sparse matrix operations to larger core counts, particularly at the strong scaling limit. I will present methods for reducing communication costs both at the solver level and by altering the underlying communication algorithms.
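As background for the talk, the SpMV kernel mentioned above can be sketched for a matrix stored in the common compressed sparse row (CSR) format. This is a minimal serial illustration, not the parallel or communication-avoiding method the talk will discuss:

```python
import numpy as np

def spmv_csr(indptr, indices, data, x):
    """Compute y = A @ x where A is stored in CSR format:
    indptr[i]:indptr[i+1] gives the slice of nonzeros in row i,
    indices holds their column positions, data their values."""
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# Example: the 3x3 matrix
# [[2, 0, 1],
#  [0, 3, 0],
#  [0, 0, 4]]
indptr = np.array([0, 2, 3, 4])
indices = np.array([0, 2, 1, 2])
data = np.array([2.0, 1.0, 3.0, 4.0])
x = np.array([1.0, 1.0, 1.0])
print(spmv_csr(indptr, indices, data, x))  # [3. 3. 4.]
```

In a distributed setting, the rows of A and the entries of x are partitioned across processes, so each process must communicate the entries of x that its off-process column indices reference; this communication is the cost the talk addresses.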