My master's thesis

Scalable solving of boundary element methods utilizing the Green cross approximation method and GPUs


Aug 23, 2017 by Bennet Carstensen
Adaptive cross approximation

The low-rank approximation we prepared so far still offers room for improvement [1]: the two matrices of the factorization describe the influence of the Neumann and of the Dirichlet values, respectively. Using the Poincaré-Steklov operator, we can construct the Neumann values from the Dirichlet values. In other words, we can obtain the one factor from the other, and thus the rank of the block is lower than the explicit factorization suggests.

Though it is possible to discard one of the factors explicitly by approximating the Poincaré-Steklov operator, this approach requires solving an integral equation on the boundary to high accuracy in order to preserve the exponential convergence of the quadrature approximation [1].

Because of the performance implications, we choose an implicit approach: we assume that the property mentioned above holds, i.e. the rank of the block is lower than its number of columns, and we use an algebraic procedure to approximate it by a matrix of lower rank. The adaptive cross approximation [1, 5] is an interesting method for this situation, since it allows us to construct an algebraic interpolation operator we can use to approximate matrix blocks based on a small number of their entries.

Assuming our row and column index sets have the standard form $\{1,\dots,n\}$ and $\{1,\dots,m\}$, we can interpret the block as a matrix in the usual sense. To find an adaptive cross approximation of it, we first consider an LU factorization with a lower triangular and an upper triangular factor.

Fixing a rank $k$, we can split the two factors into a leading part, consisting of the first $k$ columns of the lower and the first $k$ rows of the upper triangular factor, and the respective remaining parts.

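As a sketch, and only to fix a notation for the rest of this section (the names $M$, $L$, $U$, $L_k$, $U_k$, $n$, $m$ and $k$ are chosen here for illustration), the factorization and its split take the form

$$M = L U, \qquad L = \begin{pmatrix} L_k & L_\ast \end{pmatrix}, \qquad U = \begin{pmatrix} U_k \\ U_\ast \end{pmatrix},$$

with $M \in \mathbb{R}^{n \times m}$, a lower triangular $L$ and an upper triangular $U$, where $L_k \in \mathbb{R}^{n \times k}$ collects the first $k$ columns of $L$ and $U_k \in \mathbb{R}^{k \times m}$ the first $k$ rows of $U$, so that $M = L_k U_k + L_\ast U_\ast$.
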
We construct row pivot indices and column pivot indices and apply the standard LU factorization to the correspondingly permuted matrix. We construct these pivot indices in parallel to the LU factorization to avoid a zero pivot on the diagonal.

Starting with a pair of pivot indices whose matrix entry does not vanish, the first column of L and the first row of U are given by the pivot column, scaled by the pivot entry, and by the pivot row, respectively.

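As a sketch in the notation fixed above, with the pivot indices written as $i_1$ and $j_1$ and with unit vectors $e_{i_1}$, $e_{j_1}$ (these symbols are again chosen only for illustration), this first step can be written as

$$\ell_1 := \frac{1}{M_{i_1 j_1}}\, M e_{j_1}, \qquad u_1^{\mathsf{T}} := e_{i_1}^{\mathsf{T}} M,$$

so that the scaled pivot column satisfies $(\ell_1)_{i_1} = 1$, and the remainder of the first step is $R^{(1)} := M - \ell_1 u_1^{\mathsf{T}}$, whose $i_1$-th row and $j_1$-th column vanish.
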
We proceed by constructing the remainder, i.e. by subtracting this rank-one cross from the matrix, and look for an LU factorization of the remainder in the same fashion. The resulting algorithm looks as follows:
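
A minimal, self-contained C sketch of this procedure (not the thesis implementation): for clarity it keeps the whole block in memory and uses full pivoting, i.e. the remainder entry of largest modulus, together with a simple absolute stopping tolerance, whereas practical, partially pivoted variants only touch a few rows and columns of the block. All names (`aca_lu`, `M`, `L`, `U`, `eps`) and the test matrix in `main` are choices made for this illustration.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Adaptive cross approximation of the n x m matrix stored row-major in M.
 * In every step the pivot column (scaled by the pivot entry) becomes the
 * next column of L, the pivot row becomes the next row of U, and the
 * rank-one cross is subtracted from M, which thus turns into the remainder.
 * Returns the rank of the constructed approximation. */
static int
aca_lu(double *M, int n, int m, double *L, double *U, int kmax, double eps)
{
  int i, j, k, pi, pj;

  for (k = 0; k < kmax; k++) {
    /* Full pivoting: entry of largest modulus in the current remainder,
     * which in particular avoids a zero pivot on the diagonal. */
    pi = pj = 0;
    for (i = 0; i < n; i++)
      for (j = 0; j < m; j++)
        if (fabs(M[i * m + j]) > fabs(M[pi * m + pj])) {
          pi = i;
          pj = j;
        }

    /* Stop as soon as the remainder is sufficiently small. */
    if (fabs(M[pi * m + pj]) <= eps)
      return k;

    /* k-th column of L: pivot column, scaled so that its entry in the
     * pivot row equals one. */
    for (i = 0; i < n; i++)
      L[i * kmax + k] = M[i * m + pj] / M[pi * m + pj];

    /* k-th row of U: pivot row of the remainder. */
    for (j = 0; j < m; j++)
      U[k * m + j] = M[pi * m + j];

    /* New remainder: pivot row and pivot column vanish by construction. */
    for (i = 0; i < n; i++)
      for (j = 0; j < m; j++)
        M[i * m + j] -= L[i * kmax + k] * U[k * m + j];
  }

  return kmax;
}

int
main(void)
{
  const int n = 50, m = 40, kmax = 20;
  double *M = malloc(sizeof(double) * n * m);   /* original block          */
  double *R = malloc(sizeof(double) * n * m);   /* working copy / remainder */
  double *L = calloc((size_t) n * kmax, sizeof(double));
  double *U = calloc((size_t) kmax * m, sizeof(double));
  double err, s;
  int i, j, l, k;

  /* Smooth, numerically low-rank test matrix. */
  for (i = 0; i < n; i++)
    for (j = 0; j < m; j++)
      R[i * m + j] = M[i * m + j] = 1.0 / (1.0 + i + j);

  k = aca_lu(R, n, m, L, U, kmax, 1e-10);

  /* Maximal entrywise error of the rank-k approximation L U. */
  err = 0.0;
  for (i = 0; i < n; i++)
    for (j = 0; j < m; j++) {
      s = 0.0;
      for (l = 0; l < k; l++)
        s += L[i * kmax + l] * U[l * m + j];
      if (fabs(M[i * m + j] - s) > err)
        err = fabs(M[i * m + j] - s);
    }

  printf("rank %d, maximal entrywise error %.2e\n", k, err);

  free(M); free(R); free(L); free(U);
  return 0;
}
```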

This constructs the two factors in such a way that their product reproduces the original matrix up to a remainder, and the iteration stops once this error is sufficiently small. We can then use the product of the factors as a rank-$k$ approximation of the original matrix block.
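
In the illustrative notation from above, and with a tolerance $\epsilon$ and a choice of norm that are assumptions of this sketch, this reads

$$M = L_k U_k + R^{(k)}, \qquad \bigl\| R^{(k)} \bigr\| = \bigl\| M - L_k U_k \bigr\| \le \epsilon, \qquad M \approx L_k U_k.$$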

We can interpret this approximation, as mentioned before, as an algebraic procedure: we introduce a matrix that maps a vector to its entries at the first $k$ selected row pivot indices. Interpreted as an algebraic interpolation, the pivot elements play the role of interpolation points.
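
A possible form of this selection matrix, with the row pivot indices written as $i_1,\dots,i_k$ and the name $P$ chosen here for illustration:

$$P \in \mathbb{R}^{k \times n}, \qquad P x := \begin{pmatrix} x_{i_1} \\ \vdots \\ x_{i_k} \end{pmatrix} \quad \text{for all } x \in \mathbb{R}^{n}.$$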

For an equivalent of the Lagrange polynomials, we first need to realize that the algorithm effectively sets all entries of the already selected pivot rows and pivot columns to zero in the current remainder matrix.

Applying the corresponding permutations to this matrix results in a block matrix whose leading rows and columns vanish.

Interpreting the two factors as an array of column vectors and an array of row vectors, respectively, and applying the selection matrix introduced above to the factor holding the columns, we obtain a small square matrix whose entries are exactly the values of these column vectors at the row pivot indices.

This matrix is lower triangular and, due to our choice of scaling, its diagonal entries are equal to one; consequently it is invertible, and the product of the column factor with its inverse is well-defined. The columns of this product now act as the Lagrange polynomials, and the algebraic interpolation operator obtained by composing this product with the selection matrix satisfies the projection property of an interpolation operator.

Since the pivot rows of the final remainder are set to zero, applying the selection matrix to the original matrix and to its rank-$k$ approximation yields the same result, which leads to the conclusion that the interpolation operator maps the original matrix exactly to the low-rank approximation: the approximation results from algebraic interpolation.
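
Put together, a sketch of this construction in the notation used above: write $E := P L_k$ for the lower triangular matrix with unit diagonal just described, and let $V$ and $\mathcal{I}$ (names chosen here for illustration) denote the "Lagrange" matrix and the interpolation operator,

$$V := L_k E^{-1} = L_k \left( P L_k \right)^{-1}, \qquad \mathcal{I} := V P = L_k \left( P L_k \right)^{-1} P, \qquad \mathcal{I}^2 = \mathcal{I}.$$

The columns of $V$ take the value one at their own pivot row and zero at the other pivot rows, since $P V = E E^{-1} = I_k$, just like Lagrange polynomials at their interpolation points. Because the pivot rows of the final remainder vanish, $P M = P L_k U_k = E U_k$, and therefore

$$\mathcal{I} M = L_k \left( P L_k \right)^{-1} P M = L_k E^{-1} E\, U_k = L_k U_k,$$

so the low-rank approximation is exactly the image of $M$ under the algebraic interpolation operator.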