Mixed precision randomized low-rank approximation with GPU tensor cores
Abstract
Randomized projection methods have been shown to be very efficient at computing low-rank approximations (LRA) of large matrices. In this work, we investigate the design and development of such methods capable of exploiting recent mixed precision accelerators, such as GPUs equipped with tensor core units. We combine three new ideas to exploit mixed precision arithmetic in randomized LRA. The first is to perform the matrix multiplication with mixed precision fp16/fp32 tensor cores. The second is to use CholeskyQR orthonormalization, which is much faster on GPUs, while mitigating its numerical instability by using fp64 arithmetic. The third is to use a recently proposed iterative refinement method for LRA, which improves the accuracy by calling the LRA twice. We implement the proposed approach on various GPU architectures and analyze its performance and accuracy. We compare with a standard randomized LRA entirely in fp32 arithmetic, which achieves an average accuracy of order 10⁻⁴. Our results show that our approach without refinement is up to 8× faster, with an average accuracy of order 10⁻², which may be acceptable for some applications. Otherwise, we show that using refinement significantly improves the accuracy to an average of order 10⁻⁵, while remaining up to 2.2× faster than the standard fp32 randomized LRA. This work illustrates the convergence of approximate computing techniques by combining low-rank approximations, randomization, mixed precision arithmetic, and GPU acceleration.
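To make the pipeline summarized above concrete, the following NumPy sketch illustrates a randomized LRA that orthonormalizes the sketch with CholeskyQR in fp64 and emulates the reduced-precision sketching GEMM by casting the operands to fp16. It is only an illustration of the general technique, not the authors' tensor-core implementation; the function names, test matrix, and parameter choices (rank, oversampling) are assumptions for the example, and the iterative refinement step is omitted.

```python
import numpy as np

def cholesky_qr(Y):
    # CholeskyQR: G = Y^T Y, G = R^T R (Cholesky), Q = Y R^{-1}.
    # Done in fp64, as in the approach above, to mitigate the method's
    # well-known sensitivity to ill-conditioning of Y.
    Y64 = Y.astype(np.float64)
    G = Y64.T @ Y64
    R = np.linalg.cholesky(G).T              # upper-triangular Cholesky factor
    Q = np.linalg.solve(R.T, Y64.T).T        # Q = Y @ inv(R)
    return Q

def randomized_lra(A, rank, oversample=10, seed=None):
    # Randomized range finder: sketch Y = A @ Omega, orthonormalize Y,
    # then approximate A ~= Q @ (Q^T A).
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    Omega = rng.standard_normal((n, rank + oversample), dtype=np.float32)
    # On a GPU this GEMM would run on fp16/fp32 tensor cores; here the
    # reduced precision is emulated by rounding the operands to fp16.
    Y = (A.astype(np.float16) @ Omega.astype(np.float16)).astype(np.float32)
    Q = cholesky_qr(Y)                        # fp64 orthonormalization
    B = Q.T @ A.astype(np.float64)            # small (rank+oversample) x n factor
    return Q, B                               # A ~= Q @ B

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, n, r = 2000, 1500, 60
    A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
    A += 1e-2 * rng.standard_normal((m, n))   # small noise: numerically low rank
    Q, B = randomized_lra(A.astype(np.float32), rank=r, seed=1)
    print("relative error:", np.linalg.norm(A - Q @ B) / np.linalg.norm(A))
```

In this sketch the only fp64 work is on the thin sketch matrix, so the dominant cost remains the reduced-precision product with the random test matrix, which is the part the tensor cores accelerate.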