Parallel sparse direct solvers are an interesting alternative to iterative methods for some classes of large sparse systems of linear equations. In the context of a parallel sparse multifrontal solver (MUMPS), we describe a new dynamic scheduling strategy that aims to balance both the workload and the memory usage. More precisely, this hybrid approach balances the workload under memory constraints. We show that the peak memory usage can be significantly reduced, while the performance of the solver is also improved.
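To illustrate the idea of workload balancing under a memory constraint, here is a minimal sketch, not taken from MUMPS: among the candidate worker processes, the scheduler selects the least-loaded one whose projected memory usage stays under its limit. All structures and names below are hypothetical.

```c
#include <stdio.h>
#include <stddef.h>

typedef struct {
    double workload;   /* estimated pending work (e.g. flops)  */
    double mem_used;   /* current memory usage (bytes)         */
    double mem_limit;  /* per-process memory limit (bytes)     */
} Worker;

/* Return the index of the worker chosen for a task needing `task_mem`
 * bytes, or -1 if no worker can accept it without exceeding its limit. */
static int choose_worker(const Worker *w, size_t n, double task_mem)
{
    int best = -1;
    for (size_t i = 0; i < n; ++i) {
        if (w[i].mem_used + task_mem > w[i].mem_limit)
            continue;                        /* memory constraint  */
        if (best < 0 || w[i].workload < w[best].workload)
            best = (int)i;                   /* workload balancing */
    }
    return best;
}

int main(void)
{
    Worker workers[3] = {
        { 5.0e9, 1.5e9, 2.0e9 },   /* lightly loaded, little memory left */
        { 8.0e9, 0.5e9, 2.0e9 },   /* more loaded, plenty of memory      */
        { 6.0e9, 1.9e9, 2.0e9 },   /* almost out of memory               */
    };
    int id = choose_worker(workers, 3, 0.8e9);
    printf("selected worker: %d\n", id);     /* picks worker 1 here */
    return 0;
}
```

A purely workload-driven scheduler would pick worker 0 here; the memory constraint redirects the task to worker 1, trading a little imbalance for a lower memory peak.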
We then present preliminary work on a parallel out-of-core extension of the MUMPS solver, which makes it possible to solve ever larger simulation problems.
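The basic out-of-core mechanism can be sketched as follows; this is an assumed illustration, not the MUMPS implementation: once a factor block is no longer needed by the active computation, it is written to disk and its memory is released, to be read back during the solve phase. The function and file names are illustrative only.

```c
#include <stdio.h>
#include <stdlib.h>

/* Write a factor block to disk and free it; return 0 on success. */
static int evict_factor(const char *path, double *block, size_t n)
{
    FILE *f = fopen(path, "wb");
    if (!f)
        return -1;
    size_t written = fwrite(block, sizeof(double), n, f);
    fclose(f);
    if (written != n)
        return -1;
    free(block);              /* memory becomes available for later fronts */
    return 0;
}

int main(void)
{
    size_t n = 1u << 20;      /* one factor block of ~8 MB */
    double *block = malloc(n * sizeof(double));
    if (!block)
        return 1;
    for (size_t i = 0; i < n; ++i)
        block[i] = (double)i; /* stand-in for computed factor entries */
    if (evict_factor("factor_block.bin", block, n) != 0)
        return 1;
    puts("factor block written to disk and freed");
    return 0;
}
```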
This is joint work with P. Amestoy, A. Guermouche, S. Pralet, and E. Agullo.