Scalable solving of boundary element methods utilizing the Green cross approximation method and GPUs
This project is maintained by Bennet Carstensen
As we already mentioned, matrices resulting from integral equations are typically dense, thus we consider data-sparsity: we are interested in an alternative representation or approximation of the matrix requiring only a small number of coefficients. Hierarchical matrix methods [8], [9] achieve this by dividing the matrix into submatrices, approximating those that satisfy an admissibility condition and storing the rest in standard two-dimensional arrays if they are small enough.
Definition (Tree) [10] Let $V$ be a finite set, let $r \in V$ and $E \subseteq V \times V$. Define $T := (V, r, E)$. We call $T$ a tree if for each $v \in V$ there is precisely one sequence $v_0, \dots, v_\ell \in V$, $\ell \in \mathbb{N}_0$, with $v_0 = r$ and $v_\ell = v$ such that $(v_{i-1}, v_i) \in E$ holds for all $i \in \{1, \dots, \ell\}$.
We call $V$ the nodes, $r$ the root and $E$ the edges of $T$. Furthermore, we call
$$\operatorname{sons}(t) := \{ t' \in V \,:\, (t, t') \in E \}$$
the set of sons of $t \in V$ and $t$ the father of each $t' \in \operatorname{sons}(t)$. The father is unique for each node except the root, which has no father. Hereinafter we use $T$ as a synonym for $V$, e.g., we write $t \in T$ instead of $t \in V$.
One might interpret a tree as a directed graph where each node represents a subset of an index set, e.g., one resulting from the discretization of the Galerkin approximation. To divide the index set, and thus the matrix, into subsets we consider a clustering method.
Definition (Cluster tree) Let $I$ be a finite set, let $T = (V, r, E)$ be a tree and let a subset $\hat t \subseteq I$, called the label of $t$, be given for each $t \in V$. We call $T_I := (V, r, E, (\hat t)_{t \in V})$ a cluster tree for $I$ if the tree fulfills the following requirements:

- the root is labeled with the entire index set, i.e., $\hat r = I$,
- the label of every non-leaf node is the union of the labels of its sons, i.e., $\hat t = \bigcup_{t' \in \operatorname{sons}(t)} \hat t'$ for all $t \in V$ with $\operatorname{sons}(t) \neq \emptyset$, and
- the labels of different sons are disjoint, i.e., $\hat t_1 \cap \hat t_2 = \emptyset$ for all $t \in V$ and all $t_1, t_2 \in \operatorname{sons}(t)$ with $t_1 \neq t_2$.

We call the nodes of a cluster tree clusters and denote the set of leaves by $\mathcal{L}(T_I) := \{ t \in T_I \,:\, \operatorname{sons}(t) = \emptyset \}$.
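The requirements above can be illustrated by a small sketch. The following is a hypothetical example, not code from this project: it builds a cluster tree over $\{0, \dots, n-1\}$ by recursive bisection (one of the simplest clustering strategies) and extracts the leaf labels; all names (`Cluster`, `build_cluster_tree`, `leaf_size`) are illustrative.

```python
# Minimal sketch of a cluster tree built by recursive bisection.
# The 'indices' attribute plays the role of the label \hat{t}.

class Cluster:
    def __init__(self, indices, sons=()):
        self.indices = indices  # label of this cluster
        self.sons = sons        # sons(t); empty tuple for a leaf

def build_cluster_tree(indices, leaf_size=2):
    """Split the index set in half until clusters are small enough."""
    if len(indices) <= leaf_size:
        return Cluster(indices)
    mid = len(indices) // 2
    return Cluster(indices,
                   (build_cluster_tree(indices[:mid], leaf_size),
                    build_cluster_tree(indices[mid:], leaf_size)))

def leaves(t):
    if not t.sons:
        return [t]
    return [leaf for son in t.sons for leaf in leaves(son)]

root = build_cluster_tree(list(range(8)), leaf_size=2)
# The leaf labels form a disjoint partition of the index set:
partition = [leaf.indices for leaf in leaves(root)]
print(partition)  # [[0, 1], [2, 3], [4, 5], [6, 7]]
```

Note how the three requirements hold by construction: the root carries the whole index set, each father's label is the union of its sons' labels, and the sons' labels are disjoint.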
This way, we can represent submatrices of a matrix $G \in \mathbb{R}^{I \times J}$, with index sets $I$ and $J$, by pairs of clusters chosen from two cluster trees $T_I$ and $T_J$. To find suitable submatrices to approximate, we introduce another tree structure.
Definition (Block tree) Let $T_I$ and $T_J$ be cluster trees for index sets $I$ and $J$. We call a tree $T_{I \times J}$ a block tree for $T_I$ and $T_J$ if the tree fulfills the following requirements:

- each node has the form $b = (t, s)$ with clusters $t \in T_I$ and $s \in T_J$,
- the root is the pair of roots, i.e., $\operatorname{root}(T_{I \times J}) = (\operatorname{root}(T_I), \operatorname{root}(T_J))$,
- the label of a node $b = (t, s)$ is $\hat b = \hat t \times \hat s$, and
- for each node $b = (t, s)$ with $\operatorname{sons}(b) \neq \emptyset$ we have $\operatorname{sons}(b) = \operatorname{sons}(t) \times \operatorname{sons}(s)$.

We call the nodes of a block tree for cluster trees $T_I$ and $T_J$ blocks and denote the set of leaves by $\mathcal{L}(T_{I \times J}) := \{ b \in T_{I \times J} \,:\, \operatorname{sons}(b) = \emptyset \}$.
A block tree $T_{I \times J}$ represents a cluster tree for the Cartesian product index set $I \times J$.
For any cluster tree, the labels of the leaves form a disjoint partition of the index set, as one can show by induction. The leaves of a block tree in particular form the disjoint partition
$$\{ \hat b = \hat t \times \hat s \,:\, b = (t, s) \in \mathcal{L}(T_{I \times J}) \}$$
of the index set $I \times J$, i.e., the one resulting from the discretization of the Galerkin approximation.
Since we won’t be able to approximate each block, we define an admissibility condition
$$\operatorname{adm} \colon T_I \times T_J \to \{\text{true}, \text{false}\}$$
which indicates whether we approximate a submatrix $G|_{\hat t \times \hat s}$. We call a block $b = (t, s)$ admissible if $\operatorname{adm}(t, s) = \text{true}$ holds.
Definition (Admissibility condition) Let $T_I$ and $T_J$ be cluster trees for index sets $I$ and $J$. We call a mapping $\operatorname{adm} \colon T_I \times T_J \to \{\text{true}, \text{false}\}$ an admissibility condition for $T_I$ and $T_J$ if admissibility is inherited by sons, i.e., $\operatorname{adm}(t, s) = \text{true}$ implies $\operatorname{adm}(t', s) = \text{true}$ and $\operatorname{adm}(t, s') = \text{true}$ for all $t' \in \operatorname{sons}(t)$ and all $s' \in \operatorname{sons}(s)$.
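A widely used admissibility condition in the $\mathcal{H}$-matrix literature compares cluster diameters with the distance between clusters, $\max\{\operatorname{diam}(t), \operatorname{diam}(s)\\} \le \eta \operatorname{dist}(t, s)$. The following sketch implements this condition for clusters of points on the real line; the helper names and the parameter `eta` are illustrative, and this is not necessarily the condition used in this project.

```python
# Sketch of the standard eta-admissibility condition for 1D point clusters.

def diam(points):
    """Diameter of the bounding interval of a point cluster."""
    return max(points) - min(points)

def dist(points_t, points_s):
    """Distance between the bounding intervals (0 if they overlap)."""
    return max(0.0, max(min(points_t) - max(points_s),
                        min(points_s) - max(points_t)))

def adm(points_t, points_s, eta=1.0):
    return max(diam(points_t), diam(points_s)) <= eta * dist(points_t, points_s)

# Well-separated clusters are admissible, touching clusters are not:
print(adm([0.0, 1.0], [3.0, 4.0]))  # True  (diameters 1, distance 2)
print(adm([0.0, 1.0], [1.0, 2.0]))  # False (distance 0)
```

Note that this condition is inherited by sons as the definition requires: passing to a son can only shrink the diameters and increase the distance.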
Since the leaves of a block tree form a disjoint partition of the index set, they give rise to a decomposition of the matrix $G$ into submatrices: we expect the leaves to correspond either to the submatrices we approximate, i.e., the admissible ones, or to those we represent as arrays of coefficients since they are small enough.
Definition (Admissible and inadmissible leaves) Let $T_{I \times J}$ be a block tree for cluster trees $T_I$ and $T_J$ and let $\operatorname{adm}$ be an admissibility condition. We split the set of leaves of $T_{I \times J}$ into the sets
$$\mathcal{L}^+ := \{ b = (t, s) \in \mathcal{L}(T_{I \times J}) \,:\, \operatorname{adm}(t, s) = \text{true} \}$$
of admissible and
$$\mathcal{L}^- := \mathcal{L}(T_{I \times J}) \setminus \mathcal{L}^+$$
of inadmissible leaves.
Definition (Admissible block tree) Let $T_{I \times J}$ be a block tree, and let $\mathcal{L}^+$ and $\mathcal{L}^-$ denote the admissible and inadmissible leaves of $T_{I \times J}$.
We call the block tree admissible if
$$\operatorname{sons}(t) = \emptyset \text{ or } \operatorname{sons}(s) = \emptyset \quad \text{for all } b = (t, s) \in \mathcal{L}^-,$$
and we call it strictly admissible if
$$\operatorname{sons}(t) = \emptyset \text{ and } \operatorname{sons}(s) = \emptyset \quad \text{for all } b = (t, s) \in \mathcal{L}^-.$$
With cluster trees $T_I$, $T_J$ and an admissibility condition given, we can construct a (strictly) admissible block tree by checking the admissibility condition for the root $(\operatorname{root}(T_I), \operatorname{root}(T_J))$. Our construction is finished if the condition holds. Otherwise, we check the sons and further descendants recursively.
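This recursive construction can be sketched in a few lines. The following toy example is illustrative only (the `Cluster` class, the tuple-based tree encoding, and the interval-separation test stand in for the real data structures): a pair of clusters becomes an admissible leaf if the condition holds, an inadmissible leaf if either cluster cannot be split further, and is subdivided otherwise.

```python
# Sketch of the recursive block-tree construction.
from dataclasses import dataclass

@dataclass
class Cluster:
    indices: list
    sons: tuple = ()

def build_block_tree(t, s, adm):
    """Admissible pairs and unsplittable pairs become leaves;
    every other pair is subdivided into pairs of sons."""
    if adm(t, s):
        return ("admissible", t, s)
    if not t.sons or not s.sons:
        return ("inadmissible", t, s)
    return ("subdivided",
            [build_block_tree(t1, s1, adm)
             for t1 in t.sons for s1 in s.sons])

# Toy cluster tree over {0, ..., 3} and an admissibility condition that
# accepts non-overlapping index ranges (illustrative only):
t0, t1 = Cluster([0, 1]), Cluster([2, 3])
root = Cluster([0, 1, 2, 3], (t0, t1))
adm = lambda t, s: max(t.indices) < min(s.indices) or max(s.indices) < min(t.indices)

tree = build_block_tree(root, root, adm)
kinds = [b[0] for b in tree[1]]
print(kinds)  # ['inadmissible', 'admissible', 'admissible', 'inadmissible']
```

The two off-diagonal blocks are admissible, while the diagonal blocks end up as inadmissible leaves, mirroring the typical block structure of discretized integral operators.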
Constructing an approximation of $G$ now means defining approximations of $G|_{\hat t \times \hat s}$ for all $b = (t, s) \in \mathcal{L}^+$.
Definition (Hierarchical matrix) Let $T_{I \times J}$ be a block tree for cluster trees $T_I$, $T_J$, and let $\mathcal{L}^+$ denote the admissible leaves of $T_{I \times J}$.
We call a matrix $G \in \mathbb{R}^{I \times J}$ a hierarchical matrix (or $\mathcal{H}$-matrix) of local rank $k \in \mathbb{N}$ if for each $b = (t, s) \in \mathcal{L}^+$ we find $A_b \in \mathbb{R}^{\hat t \times k}$ and $B_b \in \mathbb{R}^{\hat s \times k}$ such that
$$G|_{\hat t \times \hat s} = A_b B_b^*,$$
where $B_b^*$ denotes the transpose of $B_b$.
In typical applications, approximating the admissible submatrices in this way reduces the storage requirements of a hierarchical matrix to $\mathcal{O}(n k \log n)$, with $n := \max\{|I|, |J|\}$ and local rank $k$ [9].
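The saving behind the factorized representation is simple arithmetic: an admissible block with $n$ rows and $m$ columns stored as $A_b B_b^*$ of rank $k$ needs $k(n + m)$ coefficients instead of $n m$. A quick illustrative calculation (the numbers are made up for demonstration):

```python
# Coefficient count: dense block versus rank-k factorization A_b B_b^*.
n = m = 1000   # block dimensions
k = 10         # local rank
dense = n * m             # 1,000,000 coefficients
low_rank = k * (n + m)    # 20,000 coefficients
print(dense // low_rank)  # 50-fold reduction for this block
```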
We can avoid the logarithmic factor by refining the representation: we choose sets of basis vectors for all clusters $t \in T_I$ and represent the admissible blocks by these basis vectors.
Definition (Cluster index) We call a family $K = (K_t)_{t \in T_I}$ of finite index sets a cluster index for $T_I$.
Definition (Cluster basis) Let $K = (K_t)_{t \in T_I}$ be a cluster index for $T_I$. We call a family $V = (V_t)_{t \in T_I}$ of matrices a cluster basis of rank $K$ if

- $V_t \in \mathbb{R}^{\hat t \times K_t}$ holds for all $t \in T_I$, and
- for all $t \in T_I$ with $\operatorname{sons}(t) \neq \emptyset$ and all $t' \in \operatorname{sons}(t)$ there are matrices $E_{t'} \in \mathbb{R}^{K_{t'} \times K_t}$ such that $V_t|_{\hat t' \times K_t} = V_{t'} E_{t'}$.

In this case, we call the matrices $V_t$ basis matrices and the matrices $E_t$ transfer matrices.
Definition (Nested representation) Let $V = (V_t)_{t \in T_I}$ be a cluster basis of rank $K$ and let $E = (E_t)_{t \in T_I}$ be a family of transfer matrices satisfying $V_t|_{\hat t' \times K_t} = V_{t'} E_{t'}$ for all $t \in T_I$ and $t' \in \operatorname{sons}(t)$.
We call $((V_t)_{t \in \mathcal{L}(T_I)}, (E_t)_{t \in T_I})$ a nested representation of the cluster basis $V$.
In a nested representation, we store the matrices $V_t$ of the leaves $t \in \mathcal{L}(T_I)$ explicitly and all other basis matrices implicitly via $V_t|_{\hat t' \times K_t} = V_{t'} E_{t'}$.
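The nestedness relation can be made concrete with tiny matrices. In the sketch below (plain nested lists stand in for matrices; all values are made up), the basis matrix of a father cluster is never stored: it is recovered by stacking the products $V_{t'} E_{t'}$ over the sons.

```python
# Sketch of the nestedness relation V_t|_{t'} = V_{t'} E_{t'}.

def matmul(A, B):
    """Naive matrix product for lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Leaf basis matrices (stored explicitly) and transfer matrices:
V1 = [[1.0], [2.0]]   # basis matrix of son t1, rank 1
V2 = [[3.0], [4.0]]   # basis matrix of son t2, rank 1
E1 = [[1.0]]          # transfer matrix of t1
E2 = [[0.5]]          # transfer matrix of t2

# Implicit basis matrix of the father: rows of V1 @ E1 over \hat{t1},
# rows of V2 @ E2 over \hat{t2}.
V_t = matmul(V1, E1) + matmul(V2, E2)
print(V_t)  # [[1.0], [2.0], [1.5], [2.0]]
```

Only the small transfer matrices are stored for non-leaf clusters, which is exactly what removes the logarithmic factor from the storage bound.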
Definition ($\mathcal{H}^2$-matrix) Let $G \in \mathbb{R}^{I \times J}$ be a hierarchical matrix for nested cluster bases $V = (V_t)_{t \in T_I}$ of rank $K$ and $W = (W_s)_{s \in T_J}$ of rank $L$. If for each $b = (t, s) \in \mathcal{L}^+$ we can find $S_b \in \mathbb{R}^{K_t \times L_s}$ such that
$$G|_{\hat t \times \hat s} = V_t S_b W_s^*,$$
we call $G$ an $\mathcal{H}^2$-matrix. We call the matrices $S_b$ coupling matrices.
Representing all admissible submatrices by coupling matrices and the cluster bases by basis and transfer matrices, we can reduce the storage requirements of an $\mathcal{H}^2$-matrix to $\mathcal{O}(n k)$, where $k := \max\{\max_{t \in T_I} |K_t|, \max_{s \in T_J} |L_s|\}$ and $n := \max\{|I|, |J|\}$.
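The same factorized structure also speeds up matrix-vector products: an admissible block is applied as $V_t (S_b (W_s^* x))$, touching only the small factors instead of the full submatrix. A minimal sketch with made-up rank-1 factors (plain lists stand in for matrices):

```python
# Applying one admissible H^2-block to a vector via the small factors.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

V_t = [[1.0], [2.0]]   # cluster basis of t, rank 1
W_s = [[1.0], [1.0]]   # cluster basis of s, rank 1
S_b = [[3.0]]          # coupling matrix of the block b = (t, s)
x = [[2.0], [5.0]]     # input vector restricted to \hat{s}

# y = V_t (S_b (W_s^* x)): two small products and one coupling product
y = matmul(V_t, matmul(S_b, matmul(transpose(W_s), x)))
print(y)  # [[21.0], [42.0]]
```

Evaluating the block this way costs $k(|\hat t| + |\hat s|) + k^2$ operations instead of $|\hat t| \cdot |\hat s|$, which is the per-block saving the GPU implementation can exploit.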