Author
Croci, M
Souza, G
Last updated
2021-11-12T09:54:05.067+00:00
Abstract
Mixed-precision algorithms combine low- and high-precision computations in
order to benefit from the performance gains of reduced-precision without
sacrificing accuracy. In this work, we design mixed-precision
Runge-Kutta-Chebyshev (RKC) methods, where high precision is used for accuracy,
and low precision for stability. Generally speaking, RKC methods are low-order
explicit schemes with a stability domain growing quadratically with the number
of function evaluations. For this reason, most of the computational effort is
spent on stability rather than accuracy purposes. In this paper, we show that a
naïve mixed-precision implementation of any Runge-Kutta scheme can harm the
convergence order of the method and limit its accuracy, and we introduce a new
class of mixed-precision RKC schemes that are instead unaffected by this
limiting behaviour. We present three mixed-precision schemes: a first- and a
second-order RKC method, and a first-order multirate RKC scheme for multiscale
problems. These schemes perform only the few function evaluations needed for
accuracy (1 or 2 for first- and second-order methods respectively) in high
precision, while the rest are performed in low precision. We prove that while
these methods are essentially as cheap as their fully low-precision equivalent,
they retain the convergence order of their high-precision counterpart. Indeed,
numerical experiments confirm that these schemes are as accurate as the
corresponding high-precision method.
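To illustrate the flavour of the idea (not the authors' actual schemes, which are constructed more carefully to preserve the convergence order), here is a minimal sketch of an undamped first-order RKC step in which only the first, accuracy-critical function evaluation is done in double precision, while the remaining stability stages evaluate the right-hand side in single precision. The function names and the choice of `float32`/`float64` as the low/high precisions are illustrative assumptions.

```python
import numpy as np

def rkc1_mixed(f, y, h, s):
    """One step of an (undamped) first-order RKC method with s stages.

    Illustrative sketch only: the first function evaluation is done in
    float64 ("high" precision), the remaining s-1 in float32 ("low"
    precision), mimicking the idea of spending high precision on accuracy
    and low precision on stability.
    """
    w1 = 1.0 / s**2                       # omega_1 for the undamped scheme
    yjm2 = y
    yjm1 = y + h * w1 * f(y)              # accuracy stage: high precision
    for j in range(2, s + 1):
        # Stability stages: right-hand side evaluated in low precision.
        f_low = np.float32(f(np.float32(yjm1)))
        yj = 2.0 * yjm1 - yjm2 + 2.0 * h * w1 * float(f_low)
        yjm2, yjm1 = yjm1, yj
    return yjm1

# Dahlquist test problem y' = -lam*y: with s stages the real stability
# interval of undamped RKC1 is [-2 s^2, 0], so h up to 2*s**2/lam is stable.
lam = 400.0
f = lambda y: -lam * y
s = 16
h = 1.9 * s**2 / lam                      # inside the stability interval
y = 1.0
for _ in range(20):
    y = rkc1_mixed(f, y, h, s)
print(abs(y) < 1.0)                       # solution decays, no blow-up
```

The quadratic growth of the stability interval with the stage count is what makes the trade-off attractive: for this step size a plain forward Euler step would be wildly unstable, yet only one of the 16 stage evaluations per step is performed in high precision.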
Symplectic ID
1201716
Download URL
http://arxiv.org/abs/2109.12153v1
Publication type
Journal Article
Publication date
24 September 2021
Created on 13 Oct 2021 - 17:30.