My master's thesis

Scalable solving of boundary element methods utilizing the Green cross approximation method and GPUs

This project is maintained by Bennet Carstensen

Mar 30, 2017 by Bennet Carstensen
Let’s get started - Motivation by a model problem

In this blog I will collect ideas for my master's thesis with the working title "An efficient solver for the Green cross approximation method on GPUs", based on the article "Approximation of integral operators by Green quadrature and nested cross approximation" [1].

We consider integral equations, e.g. of the form

$$\int_\Omega g(x, y) \, u(y) \, \mathrm{d}y = f(x) \qquad \text{for all } x \in \Omega,$$

with $\Omega \subseteq \mathbb{R}^d$, $d \in \{2, 3\}$, and

$$g(x, y) = \begin{cases} -\dfrac{1}{2\pi} \log \|x - y\| & \text{if } d = 2,\\[1ex] \dfrac{1}{4\pi \, \|x - y\|} & \text{if } d = 3, \end{cases}$$

as a sample kernel function, a fundamental solution of the negative Laplace operator $-\Delta$, occurring e.g. in the integral formulation of Poisson's equation.

To compute an approximation of the solution, we consider a test space $V$ and look for the Galerkin approximation $u \in V$ satisfying the variational equation

$$\int_\Omega v(x) \int_\Omega g(x, y) \, u(y) \, \mathrm{d}y \, \mathrm{d}x = \int_\Omega v(x) \, f(x) \, \mathrm{d}x \qquad \text{for all } v \in V.$$

Since our first priority is the Galerkin discretization of the variational formulation, we let $n \in \mathbb{N}$, introduce basis functions $(\varphi_i)_{i \in \mathcal{I}}$, where $\mathcal{I}$ is a suitable finite index set with $\#\mathcal{I} = n$, and replace the space $V$ by

$$V_n := \operatorname{span}\{\varphi_i \,:\, i \in \mathcal{I}\}$$

following the standard Galerkin approach. This yields the finite-dimensional variational problem: find $u_n \in V_n$ such that

$$\int_\Omega \varphi_i(x) \int_\Omega g(x, y) \, u_n(y) \, \mathrm{d}y \, \mathrm{d}x = \int_\Omega \varphi_i(x) \, f(x) \, \mathrm{d}x \qquad \text{for all } i \in \mathcal{I}.$$

To compute $u_n$, we express it as a linear combination of our basis with the coefficient vector $x = (x_j)_{j \in \mathcal{I}} \in \mathbb{R}^{\mathcal{I}}$:

$$u_n = \sum_{j \in \mathcal{I}} x_j \, \varphi_j,$$

and thus we can describe an $n$-dimensional linear system of equations

$$\sum_{j \in \mathcal{I}} x_j \int_\Omega \varphi_i(x) \int_\Omega g(x, y) \, \varphi_j(y) \, \mathrm{d}y \, \mathrm{d}x = \int_\Omega \varphi_i(x) \, f(x) \, \mathrm{d}x \qquad \text{for all } i \in \mathcal{I}.$$

Defining the matrix $G \in \mathbb{R}^{\mathcal{I} \times \mathcal{I}}$ as well as the vector $b \in \mathbb{R}^{\mathcal{I}}$ by

$$g_{ij} := \int_\Omega \varphi_i(x) \int_\Omega g(x, y) \, \varphi_j(y) \, \mathrm{d}y \, \mathrm{d}x, \qquad b_i := \int_\Omega \varphi_i(x) \, f(x) \, \mathrm{d}x \qquad \text{for all } i, j \in \mathcal{I},$$

we can write this linear system in the form

$$G x = b.$$

Since $G$ is an $n$-dimensional square matrix, we have to store $n^2$ coefficients. To approximate $u$ reasonably accurately with $u_n$, we have to choose $n$ rather large; thus, storing $n^2$ coefficients is rather unattractive.

The matrix $G$ is typically non-sparse since the integral operator is non-local; in other words, we have $g(x, y) \neq 0$ for almost all $x, y \in \Omega$, and thus $g_{ij} \neq 0$ for all $i, j \in \mathcal{I}$.
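To make the storage problem concrete, the following sketch assembles the dense Galerkin matrix for a simplified one-dimensional model: the logarithmic kernel on $[0, 1]$ with piecewise-constant basis functions. The midpoint quadrature off the diagonal and the analytic value of the weakly singular diagonal integral are my own illustrative choices, not the discretization used in the thesis.

```python
import numpy as np

def assemble_dense(n):
    """Assemble the Galerkin matrix G for the kernel
    g(x, y) = -log|x - y| / (2 pi) with piecewise-constant
    basis functions on n equal panels of [0, 1].

    Off-diagonal entries use a one-point (midpoint) quadrature;
    the weakly singular diagonal entries use the analytic value
    of the double integral over a single panel."""
    h = 1.0 / n
    mid = (np.arange(n) + 0.5) * h            # panel midpoints
    d = np.abs(mid[:, None] - mid[None, :])   # pairwise midpoint distances
    with np.errstate(divide="ignore"):        # diagonal is overwritten below
        G = -np.log(d) / (2.0 * np.pi) * h * h
    # analytic: int_0^h int_0^h log|x - y| dx dy = h^2 (log h - 3/2)
    np.fill_diagonal(G, -(h * h) * (np.log(h) - 1.5) / (2.0 * np.pi))
    return G

G = assemble_dense(256)
```

Already for $n = 256$ this stores $256^2 = 65536$ coefficients, and every single entry is nonzero, which illustrates why the dense approach does not scale.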

For matrices of this kind, we can group the common techniques proposed to handle them into three categories: kernel-based, matrix-based, and hybrid techniques.

The first category replaces the kernel function by a degenerate approximation which the corresponding method handles efficiently. The panel clustering method [2], for example, uses Taylor expansion or interpolation [3] to approximate the kernel function.
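The interpolation variant can be sketched in a few lines: for two well-separated point clusters, replacing $g(x, y)$ by its polynomial interpolant in $y$ yields a degenerate (separated) kernel and hence a low-rank matrix block. The intervals, cluster sizes, and interpolation order below are illustrative assumptions, not values from the article.

```python
import numpy as np

def lagrange_matrix(nodes, pts):
    """Evaluate all Lagrange basis polynomials for `nodes`
    at the points `pts`; returns a (len(pts), len(nodes)) matrix."""
    L = np.ones((len(pts), len(nodes)))
    for k in range(len(nodes)):
        for l in range(len(nodes)):
            if l != k:
                L[:, k] *= (pts - nodes[l]) / (nodes[k] - nodes[l])
    return L

def kernel(x, y):
    return -np.log(np.abs(x - y)) / (2.0 * np.pi)

# two well-separated 1D clusters (illustrative choice)
x = np.linspace(0.0, 1.0, 50)
y = np.linspace(2.0, 3.0, 50)

# Chebyshev interpolation points on the target interval [2, 3]
m = 8
xi = 2.5 + 0.5 * np.cos((2 * np.arange(m) + 1) * np.pi / (2 * m))

# degenerate approximation: g(x, y) ~ sum_nu g(x, xi_nu) L_nu(y)
U = kernel(x[:, None], xi[None, :])   # 50 x m kernel evaluations
V = lagrange_matrix(xi, y)            # 50 x m Lagrange values
A_full = kernel(x[:, None], y[None, :])
A_lowrank = U @ V.T                   # rank-m approximation of the block

err = np.linalg.norm(A_full - A_lowrank) / np.linalg.norm(A_full)
```

Instead of $50 \times 50$ entries, the block is represented by two $50 \times 8$ factors, and because the kernel is analytic away from the diagonal, the error decays exponentially in the interpolation order $m$.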

Matrix-based techniques work directly with the matrix entries to construct an approximation. The cross approximation method [4] in particular computes a small number of "crosses", each represented by one row and one column of a submatrix of $G$, which leads to a low-rank approximation. For an efficient implementation we evaluate the matrix only partially to retrieve the "crosses"; hence we have to specify a pivoting strategy and an error estimator to construct the low-rank approximation, which leads to the adaptive cross approximation [5].
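A minimal sketch of adaptive cross approximation with partial pivoting, under simple illustrative choices (first pivot row fixed to 0, a rough running Frobenius-norm estimate as stopping criterion): the matrix is only touched through an entry-wise callback, so only the rows and columns of the chosen crosses are ever evaluated.

```python
import numpy as np

def aca(entry, n_rows, n_cols, eps=1e-6, max_rank=50):
    """Adaptive cross approximation with partial pivoting (sketch).

    `entry(i, j)` returns a single matrix entry, so the matrix is
    evaluated only where a cross needs it.  Returns U, V with
    A ~ U @ V."""
    us, vs = [], []
    used = set()
    i = 0                        # first pivot row (simple choice)
    norm2 = 0.0                  # running estimate of ||A_k||_F^2
    for _ in range(max_rank):
        used.add(i)
        # residual of row i with respect to the current approximation
        row = np.array([entry(i, j) for j in range(n_cols)])
        for u, v in zip(us, vs):
            row -= u[i] * v
        j = int(np.argmax(np.abs(row)))
        if abs(row[j]) < 1e-14:           # (near-)zero residual row
            break
        col = np.array([entry(k, j) for k in range(n_rows)])
        for u, v in zip(us, vs):
            col -= v[j] * u
        u_new, v_new = col, row / row[j]  # new cross, scaled by the pivot
        us.append(u_new)
        vs.append(v_new)
        norm2 += (np.linalg.norm(u_new) * np.linalg.norm(v_new)) ** 2
        # heuristic stopping criterion based on the size of the new cross
        if np.linalg.norm(u_new) * np.linalg.norm(v_new) <= eps * np.sqrt(norm2):
            break
        # next pivot row: largest entry of u_new among unused rows
        cand = np.abs(u_new)
        cand[list(used)] = -1.0
        i = int(np.argmax(cand))
    return np.array(us).T, np.array(vs)

# example: kernel matrix between two well-separated clusters
x = np.linspace(0.0, 1.0, 60)
y = np.linspace(2.0, 3.0, 60)
U, V = aca(lambda i, j: -np.log(abs(x[i] - y[j])) / (2.0 * np.pi), 60, 60)
```

For the well-separated example block the algorithm stops after roughly ten crosses, so only a small fraction of the $60 \times 60$ entries is ever computed; note that the stopping criterion is exactly the heuristic part the text warns about.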

Kernel- and matrix-based approximations both have advantages as well as disadvantages. Kernel-based approximations are typically proven to converge at a certain rate, and they do not depend on the choice of basis functions, but they are frequently less efficient than matrix-based approximations. Matrix-based approximations typically lead to high compression rates. Finding a reliable stopping criterion, however, becomes challenging because of the partial evaluation, and furthermore, efficient pivoting strategies [5] presently rely on further stability assumptions.

The third category tries to combine kernel- and matrix-based techniques to compensate for the disadvantages while maintaining the advantages. The hybrid cross approximation [6], for example, applies cross approximation to a small submatrix resulting from interpolation, thus avoiding error estimators.
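The core idea can be sketched for the one-dimensional model kernel: interpolation reduces the problem to the small matrix of kernel values at the interpolation points, cross approximation with full pivoting on that small matrix selects the pivots, and the final factors evaluate the kernel itself at the chosen pivot points only. Cluster intervals and the interpolation order are again illustrative assumptions.

```python
import numpy as np

def kernel(x, y):
    return -np.log(np.abs(x - y)) / (2.0 * np.pi)

def cheb(a, b, m):
    """Chebyshev points on the interval [a, b]."""
    return 0.5 * (a + b) + 0.5 * (b - a) * np.cos((2 * np.arange(m) + 1) * np.pi / (2 * m))

def hca(x, y, bx, by, m=8, eps=1e-6):
    """Hybrid cross approximation (sketch): cross approximation with
    full pivoting runs only on the small m x m matrix of kernel values
    at interpolation points -- so no error estimator for the large
    matrix is needed -- and the kernel is then evaluated at the chosen
    pivots.  Returns U, C, V with A ~ U @ C @ V."""
    xi, eta = cheb(*bx, m), cheb(*by, m)
    S = kernel(xi[:, None], eta[None, :])   # small coupling matrix
    R = S.copy()                            # residual of the cross approximation
    rows, cols = [], []
    tol = eps * np.abs(S).max()
    while np.abs(R).max() > tol and len(rows) < m:
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
        rows.append(i)
        cols.append(j)
        R = R - np.outer(R[:, j], R[i, :]) / R[i, j]
    # skeleton form of S with the selected pivot rows/columns
    C = np.linalg.inv(S[np.ix_(rows, cols)])
    U = kernel(x[:, None], eta[cols][None, :])   # kernel at pivot columns
    V = kernel(xi[rows][:, None], y[None, :])    # kernel at pivot rows
    return U, C, V
```

The pivot search touches only the $m \times m$ matrix $S$, while the large factors $U$ and $V$ are direct kernel evaluations, which is what makes the method cheap and free of a stopping heuristic on the large block.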


Available at http://www.h2lib.org