|Title:||Algebraic multigrid methods on parallel architectures|
|Language:||English|
|Authors:||Wagner, Markus|
|Qualification level:||Diploma|
|Keywords:||Algebraic Multigrid Methods; Parallel Architectures; Preconditioner; ViennaCL; GPU Computing; C++|
|Advisor:||Langer, Erasmus|
|Issue Date:||2011|
|Number of Pages:||80|
|Abstract:||
Algebraic multigrid (AMG) methods are multigrid methods that construct their grid hierarchies from algebraic information contained in the system matrix of the linear system to be solved. This makes AMG applicable to more general problems, including those for which no grid interpretation is available. AMG offers optimal, linear complexity and is therefore especially well suited to large problem sizes. While AMG is an efficient stand-alone solver for linear systems, it is even more often used as a preconditioner for Krylov methods to improve their convergence. Many variants of the AMG algorithm have been developed; the approaches usually differ in convergence, setup time, and complexity. One important aspect, driven by recent hardware trends, is parallelism: although parts of the basic AMG approach are sequential, this can be overcome by certain modifications to the algorithm. For this thesis, several AMG preconditioners were implemented for ViennaCL, a mathematical C++ library supporting parallel computing via OpenCL. These implementations exploit parallel execution by using OpenMP threads for the setup phase (CPU) and OpenCL for the solver phase (GPU). Benchmark results show that GPU computing can increase solver performance for large matrices by a factor close to the theoretical maximum, but total computation time remains bounded by the setup phase, which runs on the CPU. Different AMG methods are optimal for different problems, which justifies the multitude of available AMG variants. An interesting consequence of these benchmarks is that computation time is relatively independent of the convergence factor, and even more so when the GPU is used: methods with slower convergence usually offer faster setup as well as fewer coarse levels and lower operator-matrix density, reducing solver time even though more iterations are required.
Complexity, therefore, seems to be a better indicator of AMG performance than convergence.
|Library ID:||AC07811925|
|Organisation:||E360 - Institut für Mikroelektronik|
|Publication Type:||Thesis|
|Appears in Collections:||Thesis|
Files in this item:
|Algebraic multigrid methods on parallel architectures.pdf||430.8 kB||Adobe PDF|
checked on Feb 18, 2021
Items in reposiTUm are protected by copyright, with all rights reserved, unless otherwise indicated.