Geofísica internacional
On-line version ISSN 2954-436X; print version ISSN 0016-7169
Geofís. Intl vol. 50 no. 4, Ciudad de México, Oct./Dec. 2011
Review paper
A brief overview of nonoverlapping domain decomposition methods
Ismael Herrera*, Antonio Carrillo-Ledesma and Alberto Rosas-Medina
Instituto de Geofísica, Universidad Nacional Autónoma de México, Ciudad Universitaria, Delegación Coyoacán 04510, México D.F. *Corresponding author: iherrera@geofisica.unam.mx
Received: July 1, 2011.
Accepted: July 7, 2011.
Published on line: September 30, 2011.
Resumen
An overview of domain decomposition methods with nonoverlapping subdomains is presented. The most efficient methods in existence at present, BDDC and FETI-DP, are placed in a 'primal' framework (the 'derived-vector space (DVS)'), which permits a synthetic and effective presentation of both the primal and the 'dual' formulations. The conceptual framework of the derived-vector space bears some similarity to that used by BDDC, but an important difference is that, in the DVS framework, the problem treated is transformed into another one defined in the product vector space, whereas in BDDC no such transformation is made. This simplifies the algorithms, which are summarized in a brief set of very general matrix formulas that are applicable to symmetric, nonsymmetric and indefinite matrices, when they arise from the discretization of partial differential equations or systems of such equations. The matrix formulas of this set are explicit and can be used directly to develop computational codes. As far as we know, two of the preconditioned algorithms of the mentioned set are entirely different from any reported in the literature and should be the subject of future research.
Key words: iterative substructuring, nonoverlapping domain decomposition methods, BDD, BDDC, FETI, FETI-DP, preconditioners, product space, Lagrange multipliers.
Abstract
An overview of nonoverlapping domain decomposition methods is presented. The most efficient methods that exist at present, BDDC and FETI-DP, are placed in a 'primal' framework (the 'derived-vector space (DVS)'), which permits a synthetic and effective presentation of both primal and 'dual' formulations. The derived-vector space is similar to the setting used in BDDC. A significant difference is that, in the DVS framework, the problem considered is transformed into one that is defined in a product vector space, while in BDDC that is not done. This simplifies the algorithmic formulations, which are summarized in a set of matrix formulas applicable to the symmetric, nonsymmetric and indefinite matrices generated when treating numerically partial differential equations or systems of such equations. They can be used directly for code development. Two preconditioned algorithms of the mentioned set had not been reported previously in the DDM literature, as far as we know, and merit further research.
Key words: iterative substructuring, nonoverlapping domain decomposition, BDD, BDDC, FETI, FETI-DP, preconditioners, product space, Lagrange multipliers.
Introduction
In this paper we present a synthetic and brief overview of some of the most important algebraic formulas of nonoverlapping domain decomposition methods (DDM). We will use a framework that is very convenient for this purpose, which will be called the 'derived-vector space (DVS)' framework [Herrera and Yates 2010; Herrera and Yates 2009].
Among the frameworks that are used in nonoverlapping DDM formulations, two categories are distinguished: dual frameworks, which, as in the case of FETI and its variants, use Lagrange multipliers, and primal frameworks, which, as in the case of BDD and its variants, tackle the problems directly without recourse to Lagrange multipliers [Dohrmann 2003; Mandel and Dohrmann 2003; Mandel et al., 2005; Toselli and Widlund 2005]. The derived-vector space (DVS) framework used here is a primal framework similar to that of the BDDC formulations. A significant difference between the DVS framework and that of BDDC formulations is that in the DVS framework the problem is transformed into one defined in the derived-vector space, which is a product space containing the discontinuous functions, and thereafter all the work is carried out in it. In BDDC formulations, on the other hand, the original space of continuous functions is never completely abandoned; indeed, one frequently goes back and forth between the degrees of freedom associated with the original space of continuous functions and the degrees of freedom associated with the substructures, which in such formulations play the role of the product space (see Section 15 for a more detailed discussion).
Although the DVS framework is a primal framework, dual formulations can also be accommodated in it; this feature permits unifying in its realm both dual and primal formulations, in particular BDDC and FETI-DP. Also, the derived-vector space constitutes a Hilbert space with respect to a suitable inner product, the Euclidean inner product, and, while using the DVS formulation, we profit from its Hilbert-space structure, achieving in this manner great simplicity in the algorithm formulations. Furthermore, the theory of partial differential equations in discontinuous piecewise-defined functions [Herrera 2007] is used for establishing clear correspondences between the problems at the continuous level and those obtained after discretization (see Section 9 of [Herrera and Yates 2010] and Appendix "B" of the present article). In particular, in this paper, using the DVS framework, we present simple explicit matrix formulas that can be applied to simplify code development for models governed by a single differential equation or by systems of such equations; they have a wide range of applications to practical problems, including nonsymmetric and indefinite matrices. We also remark that all our developments are carried out in vector spaces subjected to constraints, and therefore all the DVS algorithms here presented are algorithms with constraints.
In this paper, a survey of the nonoverlapping DDM algorithms that can be developed in the DVS framework is carried out, which yields a synthetic and brief overview of some of the most important algebraic formulas of nonoverlapping domain decomposition methods. In particular, FETI-DP [Farhat and Roux 1991; Mandel and Tezaur 1996; Farhat et al., 2001; Toselli and Widlund 2005] and BDDC [Dohrmann 2003; Mandel and Dohrmann 2003; Mandel et al., 2005; Mandel 1993; Mandel and Brezina 1993; Mandel and Tezaur 2001], which are the most successful nonoverlapping DDM, are incorporated, producing in this manner DVS versions of them. In recent years a number of papers have discussed connections between BDDC and FETI-DP [Mandel et al., 2005; Li and Widlund 2006; Klawonn and Widlund 2001], and similar connections encountered using the DVS framework are discussed here. Also, by now the developments reported in the literature on DDM for nonsymmetric and indefinite matrices are significant (see, for example, [Da Conceição 2006; Farhat and Li 2005; Li and Tu 2009; Toselli 2000; Tu and Li]). As said before, the DVS framework for nonoverlapping DDM is applicable to such kinds of matrices; indeed, [Herrera and Yates 2009] was devoted to extending the DVS framework to nonsymmetric and indefinite matrices. The assumptions under which such an extension is possible were spelled out with precision and detail in Section 9 of [Herrera and Yates 2009]. When such results are complemented with those presented in Sections 7 to 14 of this paper, they permit establishing with certainty and precision when such algorithms can be applied in each case. Thus far, we have not seen this topic discussed with such generality elsewhere, in spite of its obvious importance.
In conclusion, our results can be effectively summarized in eight matrix formulas; of these, those with greater practical interest are, of course, the preconditioned formulations. The nonpreconditioned ones are included because they are important for properly understanding the theoretical developments. Of the four preconditioned matrix formulas contained in that summary, as said before, two correspond to the BDDC and FETI-DP algorithms, while for the other two we have not been able to find suitable counterparts in the published DDM literature, although the effectiveness of their performance can be expected to be of the same order as that of BDDC or FETI-DP.
The paper is organized as follows. Section 2 is devoted to presenting an overview of the DVS framework. The problem to be dealt with (the original problem) is stated in Section 3, while Sections 4 and 5 introduce the notions of the derived-vector space. The general problem with constraints defined in the derived-vector space, equivalent to the original problem, is formulated in Section 6. The guidelines for the manner in which Sections 7 to 14 were organized are supplied by the results of Appendix "B". Only two basic nonpreconditioned algorithms were considered: the Dirichlet-Dirichlet and the Neumann-Neumann algorithms; a primal and a dual formulation is supplied for each one of them. In turn, each one of the four nonpreconditioned formulations so obtained is preconditioned. This yields, in total, the eight algorithms mentioned before. Sections 7 to 10 are devoted to the four nonpreconditioned algorithms, while Sections 11 to 14 are devoted to the preconditioned algorithms. The DVS versions of BDDC and FETI-DP are obtained when the primal and dual formulations of the nonpreconditioned Dirichlet-Dirichlet algorithm are preconditioned, respectively. Some comparisons and comments about FETI-DP and BDDC, as seen from the DVS framework, are made in Section 15, while the conclusions are presented in Section 16. Three appendices are included, in which some complementary technical details are given.
Section 2
Overview of the DVS framework
In previous papers [Herrera and Yates 2010; Herrera and Yates 2009] a general framework for domain decomposition methods, here called the 'derived-vector space framework (DVS framework)', has been developed. Its formulation starts with the system of linear equations that is obtained after the partial differential equation, or system of such equations, has been discretized. We shall call this system of linear equations the 'original problem'. Independently of the discretization method used, it is assumed that a set of nodes and a domain partition have been defined, and that both the nodes and the partition-subdomains have been numbered. Generally, some nodes belong to more than one partition-subdomain (Figure 1). For the formulation of nonoverlapping domain decomposition methods, this is an inconvenient feature. To overcome this problem, the DVS framework introduces a new set of nodes, the 'derived nodes'; a derived node is a pair of natural numbers: a node-index followed by a subdomain-index, which may be any that fulfills the condition that the node involved belongs to the corresponding partition-subdomain. As for the node-indices, the nodes carrying them are referred to as the 'original nodes'.
Furthermore, with each partition-subdomain we associate a 'local subset of derived nodes', which is constituted by the derived nodes whose subdomain-index corresponds to that partition-subdomain. The family of local subsets of derived nodes so obtained, one for each partition-subdomain, constitutes a truly disjoint (i.e., nonoverlapping) partition of the whole set of derived nodes (Figure 2). Therefore, it is adequate for overcoming the difficulty mentioned above. Thereafter, the developments are relatively straightforward. A 'derived vector' is defined to be a real-valued function¹ defined on the whole set of derived nodes; the set of all derived vectors constitutes a linear space: the 'derived-vector space (DVS)'. This latter vector space must be distinguished from that constituted by the real-valued functions defined on the original nodes, which is referred to as the 'original vector space'. A new problem, which is equivalent to the original problem, is defined in the derived-vector space. Of course, the matrix of this new problem is different from the original matrix, which is not defined in the derived-vector space, and the theory supplies a formula for deriving it; the procedure for constructing it is similar to substructuring (see Appendix "A"). From there on, in the DVS framework, all the work is done in the derived-vector space and one never goes back to the original vector space.
The derived-vector space is a kind of product space; namely, the product of the family of local subspaces associated with the local subsets of derived nodes mentioned above. Furthermore, it constitutes a (finite-dimensional) Hilbert space with respect to a suitable inner product, called the Euclidean inner product, and it is handled as such throughout. Although the DVS framework was originally developed having in mind applications to symmetric and definite matrices, in [Herrera and Yates 2009] it was extended to nonsymmetric and indefinite matrices. The assumptions under which such extensions are possible were spelled out in detail there (see Section 9 of [Herrera and Yates 2009]). In this paper, we carry out a survey, as exhaustive as possible, of the DDM algorithms that can be developed in the DVS framework, both preconditioned and nonpreconditioned. Thereby, DVS versions of both the BDDC and FETI-DP algorithms are produced.
At the continuous level, the most studied procedures are the Neumann-Neumann and the Dirichlet-Dirichlet algorithms [Toselli and Widlund 2005; Quarteroni and Valli 1999]. During the development of the DVS framework, very precise and clear correspondences between the processes at the continuous level, before discretization, and the processes at the discrete level, after discretization, were established [Herrera and Rubio 2011]. Using such correspondences, the results of our survey can be summarized in a brief and effective manner. They are:
I) Nonpreconditioned Algorithms
a) The primal Dirichlet-Dirichlet problem (Schur-complement algorithm)
b) The dual formulation of the Neumann-Neumann problem
c) The primal formulation of the Neumann-Neumann problem
d) The second dual formulation of the Neumann-Neumann problem
II) Preconditioned Algorithms
a) Preconditioned Dirichlet-Dirichlet (the DVS version of BDDC)
b) Preconditioned dual formulation of the Neumann-Neumann problem (the DVS version of FETI-DP)
c) Preconditioned primal formulation of the Neumann-Neumann problem
d) Preconditioned second dual formulation of the Neumann-Neumann problem
All these algorithms are formulated in vector spaces subjected to constraints, so the algorithms are constrained algorithms.
Section 3
The original problem
The DVS framework applies to the system-matrix that is obtained after discretization. Its procedures are independent of the discretization method used; it could be FEM, finite differences, or any other. It requires, however, that some assumptions (or axioms) be fulfilled, as is explained in what follows. Such axioms are stated in terms of the system-matrix and two additional concepts: the original nodes and a family of subsets of such nodes, which is associated with a domain partition (or domain decomposition). To illustrate how such concepts are introduced, consider a variational formulation of the discretized version of a general boundary value problem. It consists in finding $\hat{u} \in V$ such that

$$a(\hat{u}, v) = \langle g, v \rangle, \quad \forall v \in V \qquad (3.1)$$
Here, V is a finite-dimensional linear space of real-valued² functions defined in a certain spatial domain Ω, while $g \in V$ is a given function.
Let $\hat{N} = \{1, \dots, n\}$ be the set of indices that number the nodes used in the discretization, and let $\{\varphi_1, \dots, \varphi_n\} \subset V$ be a basis of V such that, for each $i \in \hat{N}$, $\varphi_i = 1$ at node i and zero at every other node. Then, since $\hat{u} \in V$, we have:

$$\hat{u} = \sum_{i=1}^{n} u_i \varphi_i \qquad (3.2)$$
Here, $u_i$ is the value of $\hat{u}$ at node i. Let $\hat{u}$ and $\hat{g}$ be the vectors $\hat{u} \equiv (u_1, \dots, u_n)$ and $\hat{g} \equiv (g_1, \dots, g_n)$³, with

$$g_i \equiv \langle g, \varphi_i \rangle, \quad i = 1, \dots, n \qquad (3.3)$$
The variational formulation of Eq. (3.1) is equivalent to:

$$\hat{A}\,\hat{u} = \hat{g} \qquad (3.4)$$
The matrix $\hat{A}$, which will be referred to as the 'original matrix', is given by

$$\hat{A}_{ij} \equiv a(\varphi_j, \varphi_i), \quad i, j = 1, \dots, n \qquad (3.5)$$
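To fix ideas, the assembly of Eqs. (3.4)-(3.5) can be sketched in a few lines of code. The specific equation, mesh and source term below are illustrative assumptions (a 1D Poisson problem with hat functions), not part of the general formulation:

```python
import numpy as np

# Minimal sketch (illustrative assumptions): the 'original matrix'
# A_ij = a(phi_j, phi_i) for -u'' = g on (0, 1), u(0) = u(1) = 0,
# with n interior hat functions on a uniform mesh of size h.
n = 5
h = 1.0 / (n + 1)
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 2.0 / h                          # a(phi_i, phi_i)
    if i + 1 < n:
        A[i, i + 1] = A[i + 1, i] = -1.0 / h   # a(phi_j, phi_i), |i - j| = 1

g_vec = h * np.ones(n)          # g_i = <g, phi_i> for the constant source g = 1
u = np.linalg.solve(A, g_vec)   # the 'original problem', Eq. (3.4)
```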
After the problem has been discretized, a partition of Ω into a set of nonoverlapping subdomains, $\{\Omega_1, \dots, \Omega_E\}$, is introduced; more precisely, for each $\alpha = 1, \dots, E$, $\Omega_\alpha$ is open and:

$$\bar{\Omega} = \bigcup_{\alpha=1}^{E} \bar{\Omega}_\alpha, \quad \text{while} \quad \Omega_\alpha \cap \Omega_\beta = \emptyset \ \text{ when } \alpha \neq \beta \qquad (3.6)$$
where $\bar{\Omega}_\alpha$ stands for the closure of $\Omega_\alpha$. The set of 'subdomain-indices' will be

$$\hat{E} \equiv \{1, \dots, E\} \qquad (3.7)$$
$\hat{N}_\alpha$, $\alpha = 1, \dots, E$, will be used for the subset of original nodes that correspond to nodes pertaining to $\bar{\Omega}_\alpha$. As usual, nodes will be classified into 'internal' and 'interface' nodes: a node is internal if it belongs to only one partition-subdomain closure, and it is an interface node when it belongs to more than one. For the application of dual-primal methods, interface nodes are classified into 'primal' and 'dual' nodes. We define:
$\hat{I}$ as the set of internal nodes;
$\hat{\Gamma}$ as the set of interface nodes;
$\hat{\pi}$ as the set of primal nodes⁴; and
$\hat{\Delta}$ as the set of dual nodes.
The set $\hat{\pi} \subset \hat{\Gamma}$ is chosen arbitrarily and then $\hat{\Delta}$ is defined as $\hat{\Delta} \equiv \hat{\Gamma} \setminus \hat{\pi}$. Each one of the following two families of node subsets is disjoint: $\{\hat{I}, \hat{\Gamma}\}$ and $\{\hat{I}, \hat{\pi}, \hat{\Delta}\}$. Furthermore, these node subsets fulfill the relations:

$$\hat{N} = \hat{I} \cup \hat{\Gamma} = \hat{I} \cup \hat{\pi} \cup \hat{\Delta}$$
The real-valued functions defined on $\hat{N} = \{1, \dots, n\}$ constitute a linear vector space that will be denoted by $\hat{W}$ and referred to as the 'original vector space'. Vectors of $\hat{W}$ will be written as $\hat{u} = (u_1, \dots, u_n)$, where $u_i$, for $i = 1, \dots, n$, are the components of the vector $\hat{u}$. Then, the 'original problem' consists in: "Given $\hat{g} \in \hat{W}$, find a $\hat{u} \in \hat{W}$ such that Eq. (3.4) is fulfilled". Throughout our developments the original matrix $\hat{A}$ is assumed to be nonsingular (i.e., it defines a bijection of $\hat{W}$ into itself).
Conditions under which the DVS framework is applicable to indefinite and/or nonsymmetric matrices were given in [Herrera and Yates 2009]; in particular, the following assumption ('axiom') is adopted here: "Let the indices $i \in \hat{N}_\alpha$ and $j \in \hat{N}_\beta$ be internal original nodes, with $\alpha \neq \beta$; then:

$$\hat{A}_{ij} = 0\text{"}$$
Section 4
Derived Nodes
As said before, when a nonoverlapping partition is introduced, some of the nodes used in the discretization belong to more than one partition-subdomain. To overcome this inconvenient feature, in the DVS framework, besides the original nodes, another set of nodes is introduced, called the 'derived nodes'. The general developments are better understood through a simple example, which we explain first.
Consider the set of twenty-five nodes of a "nonoverlapping" domain decomposition which consists of four subdomains, as shown in Figure 1. Thus, we have a set of nodes and a set of subdomains, which are numbered using the index sets $\hat{N} \equiv \{1, \dots, 25\}$ and $\hat{E} \equiv \{1, 2, 3, 4\}$, respectively. Then, the family of node subsets corresponding to such a nonoverlapping domain decomposition is actually overlapping, since the four subsets

$$\hat{N}_1, \ \hat{N}_2, \ \hat{N}_3, \ \hat{N}_4$$

are not disjoint (see Figure 1). Indeed, for example:

$$\hat{N}_1 \cap \hat{N}_2 \neq \emptyset$$
In order to obtain a "truly" nonoverlapping decomposition, we replace the set of 'original nodes' by another set: the set of 'derived nodes'; a 'derived node' is defined to be a pair of numbers, $(p, \alpha)$, where p corresponds to a node that belongs to $\bar{\Omega}_\alpha$. In symbols: a 'derived node' is a pair of numbers $(p, \alpha)$ such that $p \in \hat{N}_\alpha$. We denote by X the set of derived nodes; we observe that the total number of derived nodes is 36, while that of original nodes is 25. Then, we define $X_\alpha$ as the set of derived nodes that can be written as $(p, \alpha)$, where $\alpha$ is kept fixed. Taking $\alpha$ successively as 1, 2, 3 and 4, we obtain a family of four subsets, $\{X_1, X_2, X_3, X_4\}$, which is a truly disjoint decomposition of X, in the sense that (see Figure 2):

$$X = X_1 \cup X_2 \cup X_3 \cup X_4, \quad \text{with} \quad X_\alpha \cap X_\beta = \emptyset \ \text{ when } \alpha \neq \beta$$
Of course, the cardinality (i.e., the number of members) of each one of these subsets is 36/4 = 9.
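The counting just made can be checked mechanically. The following sketch builds the derived nodes for this example; the node numbering and the 2×2 layout of the subdomains are illustrative assumptions consistent with Figures 1 and 2:

```python
# Original nodes p = 1, ..., 25 on a 5x5 grid; p -> (row, col).
coords = {p: divmod(p - 1, 5) for p in range(1, 26)}

# Four subdomain closures, each a 3x3 block sharing the middle row/column.
blocks = {1: (range(0, 3), range(0, 3)), 2: (range(0, 3), range(2, 5)),
          3: (range(2, 5), range(0, 3)), 4: (range(2, 5), range(2, 5))}
N = {a: {p for p, (i, j) in coords.items() if i in rows and j in cols}
     for a, (rows, cols) in blocks.items()}

# A derived node is a pair (p, alpha) with p in N[alpha].
X = {(p, a) for a in N for p in N[a]}
X_local = {a: {(p, b) for (p, b) in X if b == a} for a in N}

assert len(X) == 36 and all(len(X_local[a]) == 9 for a in N)
assert set().union(*X_local.values()) == X   # truly disjoint decomposition

# Multiplicity m(p): number of subdomain closures containing node p;
# the central node (p = 13) belongs to all four of them.
m = {p: sum(p in N[a] for a in N) for p in coords}
assert m[13] == 4
```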
The above discussion had the sole purpose of motivating the more general and formal developments that follow. So, now we go back to the general case introduced in Section 3, in which the sets of node-indices and subdomain-indices are $\hat{N} = \{1, \dots, n\}$ and $\hat{E} = \{1, \dots, E\}$, respectively, and define a 'derived node' to be any pair of numbers, $(p, \alpha)$, such that $p \in \hat{N}_\alpha$. Then, the total set of derived nodes fulfills:

$$X \equiv \{(p, \alpha) \mid p \in \hat{N}_\alpha, \ \alpha \in \hat{E}\}$$
In order to avoid unnecessary repetitions, in the developments that follow, where we deal extensively with derived nodes, the notation $(p, \alpha)$ is reserved for pairs such that $(p, \alpha) \in X$. Some subsets of X are defined next:
With each $\alpha = 1, \dots, E$, we associate a unique 'local subset of derived nodes':

$$X_\alpha \equiv \{(p, \beta) \in X \mid \beta = \alpha\}$$
The family of subsets $\{X_1, \dots, X_E\}$ is a truly disjoint decomposition of the whole set of derived nodes, in the sense that:

$$X = X_1 \cup \dots \cup X_E, \quad \text{with} \quad X_\alpha \cap X_\beta = \emptyset \ \text{ when } \alpha \neq \beta$$
Section 5
The "Derived VectorSpace (DVS)"
Firstly, we recall from Section 3 the definition of the vector space $\hat{W}$. Then, for each $\alpha = 1, \dots, E$, we define the vector subspace $\hat{W}^\alpha \subset \hat{W}$, which is constituted by the vectors with the property that, for each $i \notin \hat{N}_\alpha$, the i-th component vanishes. With this notation, the 'product space' W is defined by

$$W \equiv \hat{W}^1 \times \dots \times \hat{W}^E \qquad (5.1)$$
As explained in Section 3, the 'original problem' of Eq. (3.4) is a problem formulated in the original vector space $\hat{W}$, and in the developments that follow we transform this problem into one that is formulated in the product space W, which is a space of discontinuous functions.
By a 'derived vector' we mean a real-valued function⁵ defined on the set X of derived nodes. The set of derived vectors constitutes a linear space, which will be referred to as the 'derived-vector space'. Corresponding to each local subset of derived nodes, $X_\alpha$, there is a 'local subspace of derived vectors', $W^\alpha$, which is defined by the condition that vectors of $W^\alpha$ vanish at every derived node that does not belong to $X_\alpha$. A formal manner of stating this definition is:
• $u \in W^\alpha \subset W$, if and only if, $u(p, \beta) = 0$ whenever $\beta \neq \alpha$
An important difference between the subspaces $W^\alpha$ and $\hat{W}^\alpha$ that should be observed is that $W^\alpha \subset W$, while $\hat{W}^\alpha \not\subset W$. In particular,

$$W = \hat{W}^1 \times \dots \times \hat{W}^E = W^1 \oplus \dots \oplus W^E \qquad (5.2)$$

In words: the space W is the product of the family of subspaces $\{\hat{W}^1, \dots, \hat{W}^E\}$, but at the same time it is the direct sum of the family $\{W^1, \dots, W^E\}$. In view of Eq. (5.2), it is straightforward to establish a bijection (actually, an isomorphism) between the derived-vector space and the product space. Thus, in what follows we identify both.
For every pair of vectors, $u \in W$ and $w \in W$, the 'Euclidean inner product' is defined to be

$$u \cdot w \equiv \sum_{(p,\alpha) \in X} u(p, \alpha)\, w(p, \alpha) \qquad (5.3)$$
In applications of the theory to systems of equations, when $u(p, \alpha)$ itself is a vector, Eq. (5.3) is replaced by

$$u \cdot w \equiv \sum_{(p,\alpha) \in X} u(p, \alpha) \cdot w(p, \alpha) \qquad (5.4)$$
Here, $u \cdot w$ means the inner product of the vectors involved. An important property is that the derived-vector space, W, constitutes a finite-dimensional Hilbert space with respect to the Euclidean inner product. We observe that the Euclidean inner product is defined independently of the nature of the original matrix $\hat{A}$; in particular, $\hat{A}$ may be nonsymmetric or indefinite.
The natural injection, $R: \hat{W} \to W$, of $\hat{W}$ into W, is defined by the condition that, for every $\hat{u} \in \hat{W}$, one has

$$(R\hat{u})(p, \alpha) = \hat{u}(p), \quad \forall (p, \alpha) \in X \qquad (5.5)$$
The 'multiplicity', m(p), of any original node p is the number of derived nodes of the form $(p, \alpha)$; it is characterized by the property [Herrera and Yates 2010; Herrera and Yates 2009]:

$$R\hat{u} \cdot R\hat{w} = \sum_{p \in \hat{N}} m(p)\, \hat{u}(p)\, \hat{w}(p), \quad \forall \hat{u}, \hat{w} \in \hat{W} \qquad (5.6)$$
The space W will be decomposed into two orthogonal complementary subspaces, $W_{11} \subset W$ and $W_{12} \subset W$, so that

$$W = W_{11} + W_{12} \quad \text{and} \quad \{0\} = W_{11} \cap W_{12} \qquad (5.7)$$

Here, the subspace $W_{12} \subset W$ is the natural injection of $\hat{W}$ into W; i.e.,

$$W_{12} \equiv R\hat{W}$$

and $W_{11} \subset W$ is its orthogonal complement with respect to the Euclidean inner product. For later use, we point out that the inverse of $R: \hat{W} \to W$, when restricted to $W_{12} \subset W$, exists and will be denoted by $R^{-1}: W_{12} \to \hat{W}$. Here, we recall that it is customary to use the direct-sum notation

$$W = W_{11} \oplus W_{12}$$

when the pair of equalities of Eq. (5.7) holds. The 'subspace of continuous vectors' is defined to be $W_{12} \subset W$, while the 'subspace of zero-average vectors' is defined to be $W_{11} \subset W$. Two matrices, $a: W \to W$ and $j: W \to W$, are here introduced; they are the projection operators, with respect to the Euclidean inner product, on $W_{12}$ and $W_{11}$, respectively. The first one will be referred to as the 'average operator' and the second one as the 'jump operator'. We observe that, in view of Eq. (5.7), every vector $u \in W$ can be written, in a unique manner, as the sum of a zero-average vector plus a continuous vector (we could say: a zero-jump vector); indeed:

$$u = ju + au$$
The vectors $ju$ and $au$ are said to be the 'jump' and the 'average' of $u \in W$, respectively.
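The following sketch illustrates, under simplified assumptions (three original nodes, one of them shared by two subdomains), the natural injection R of Eq. (5.5) and the projections a and j; building a as $R(R^T R)^{-1}R^T$ is one convenient realization, since $R^T R$ is the diagonal matrix of multiplicities:

```python
import numpy as np

# Minimal sketch (illustrative assumptions): three original nodes, node 2
# shared by two subdomains, so the derived-vector space has dimension 4.
derived = [(1, 1), (2, 1), (2, 2), (3, 2)]   # derived nodes (p, alpha)
orig = [1, 2, 3]                             # original nodes

# Natural injection R: (R u_hat)(p, alpha) = u_hat(p), Eq. (5.5).
R = np.array([[float(p == q) for q in orig] for (p, _) in derived])

# R^T R = diag(m(p)) collects the multiplicities of the original nodes.
M = R.T @ R
assert np.allclose(np.diag(M), [1.0, 2.0, 1.0])

# Average and jump projections with respect to the Euclidean inner product:
a = R @ np.linalg.inv(M) @ R.T      # projection on W12 = R(W_hat)
j = np.eye(len(derived)) - a        # projection on the zero-average space W11

u = np.array([3.0, 1.0, 5.0, 2.0])  # a discontinuous derived vector
assert np.allclose(a @ a, a) and np.allclose(j @ j, j)   # idempotent
assert np.allclose(a @ u + j @ u, u)                     # u = average + jump
assert np.allclose((a @ u) @ (j @ u), 0.0)               # orthogonality
```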
The linear subspaces that are defined next are chosen to mimic those used by other authors [Mandel and Dohrmann 2003; Mandel et al., 2005]. In particular, $W_I$, $W_\Gamma$, $W_\pi$ and $W_\Delta$ are defined by imposing on their members the restrictions that follow. Vectors of:
• $W_I$ vanish at every derived node that is not an internal node;
• $W_\Gamma$ vanish at every derived node that is not an interface node;
• $W_\pi$ vanish at every derived node that is not a primal node; and
• $W_\Delta$ vanish at every derived node that is not a dual node.
Furthermore, we define

$$W_\Pi \equiv W_I + aW_\pi \quad \text{and} \quad W_r \equiv W_\Pi + W_\Delta = W_I + aW_\pi + W_\Delta$$
We observe that each one of the following families of subspaces is linearly independent:

$$\{W_I, W_\Gamma\}, \quad \{W_I, W_\pi, W_\Delta\}, \quad \{W_\Pi, W_\Delta\}$$

and also that

$$W = W_I + W_\Gamma = W_I + W_\pi + W_\Delta$$
The above definition of $W_r$ is appropriate when considering dual-primal formulations; other kinds of restrictions require changing the term $aW_\pi$ by $rW_\pi$, where $r$ is a projection on the restricted subspace.
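A minimal sketch of the restriction just described, under the same illustrative assumptions as in the previous sketch and taking the shared node as the only primal node; averaging the primal copies is one simple way of realizing the continuity constraint at primal nodes:

```python
import numpy as np

# Sketch (illustrative assumptions): same 4 derived nodes as above, with the
# shared original node 2 taken as the single primal node. Vectors of W_r are
# continuous at primal nodes; P projects onto that constrained subspace.
derived = [(1, 1), (2, 1), (2, 2), (3, 2)]
primal = {2}

P = np.eye(len(derived))
idx = [k for k, (p, _) in enumerate(derived) if p in primal]
for row in idx:                    # average the copies of each primal node,
    for col in idx:                # leave all other derived nodes untouched
        P[row, col] = 1.0 / len(idx)

u = np.array([3.0, 1.0, 5.0, 2.0])
v = P @ u                          # v lies in W_r: its primal copies agree
assert np.allclose(v[idx[0]], v[idx[1]]) and np.allclose(P @ P, P)
```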
Section 6
The general problem with constraints
The following result is similar to results shown in [Herrera and Yates 2010; Herrera and Yates 2009]; its proof, as well as the definition of the matrix $A^t: W_r \to W_r$ that is used in it, is given in Appendix "A":
"A vector is solution of the original problem, if and only if, u' = R WrW fulfills the equalities:
The vector = (R) W12 Wr , will be written as ≡ Π + Δ, with Π WΠ and Δ WΔ."
This is the 'dual-primal problem formulated in the derived-vector space'; or, simply, the DVS dual-primal problem. We remark that this problem is formulated in the subspace $W_r$ of the derived-vector space W, in which the restrictions have been incorporated. Thus, all the algorithms to be discussed include such restrictions; in particular, those imposed by means of primal nodes. In what follows, the matrix $A^t: W_r \to W_r$ is assumed to be invertible. In many cases this can be granted when a sufficiently large number of primal nodes, adequately located, is taken. Let $u' \in W_r$ be a solution of it; then, necessarily, $u' \in W_{12} \subset W$, since $ju' = 0$, and one can apply the inverse of the natural injection to obtain

$$\hat{u} = R^{-1} u' \qquad (6.2)$$
Since this problem is formulated in the derived-vector space, in the algorithms to be presented all the operations are carried out in such a space; in particular, we never return to the original vector space, $\hat{W}$, except at the end, when we apply Eq. (6.2).
Section 7
The Schur-complement algorithm
The matrix $A^t$ of Eq. (6.1) can be written as (here, we draw from Appendix "A"):

$$A^t = \begin{pmatrix} A^t_{\Pi\Pi} & A^t_{\Pi\Delta} \\ A^t_{\Delta\Pi} & A^t_{\Delta\Delta} \end{pmatrix} \qquad (7.1)$$
Using this notation, we define the 'dual-primal Schur-complement matrix' by

$$S \equiv A^t_{\Delta\Delta} - A^t_{\Delta\Pi} \left(A^t_{\Pi\Pi}\right)^{-1} A^t_{\Pi\Delta} \qquad (7.2)$$
Let $u_\Delta \equiv u' - u'_\Pi$; then Eq. (6.1) is equivalent to: "Given $f_\Delta \in aW_\Delta$, find a $u_\Delta \in W_\Delta$ such that

$$aSu_\Delta = f_\Delta \quad \text{and} \quad ju_\Delta = 0 \qquad (7.3)\text{"}$$

Here, $u' = u'_\Pi + u_\Delta$ and

$$f_\Delta \equiv f'_\Delta - A^t_{\Delta\Pi}\left(A^t_{\Pi\Pi}\right)^{-1} f'_\Pi, \quad u'_\Pi = \left(A^t_{\Pi\Pi}\right)^{-1}\left(f'_\Pi - A^t_{\Pi\Delta} u_\Delta\right)$$
We observe that the Schur-complement matrix, $S: W_\Delta \to W_\Delta$, is invertible when so is $A^t: W_r \to W_r$ [Herrera and Yates 2010; Herrera and Yates 2009].
In Appendix "B" it is shown that Eq. (7.3) is the discrete version of a nonpreconditioned DirichletDirichlet problem. Thus, this algorithm could be called the 'nonpreconditioned DirichletDirichletalgorithm'. However, in what follows, the algorithm that corresponds to Eq. (7.3) will be referred to as the Schurcomplement algorithm', since it is a variant of one of the simplest forms of substructuring methods described, for example, in [Smith et al., 1996].
Section 8
The dual Neumann-Neumann problem
In this paper we present three alternative procedures for obtaining the algorithm we are about to derive. One is as a Neumann-Neumann formulation, discussed in Appendix "B" using an operator which is the counterpart of the Steklov-Poincaré operator; one more is in what could be called the classical manner, which consists in using a Lagrange-multipliers treatment of the problem of Section 7 [Toselli and Widlund 2005] (the DVS version of this approach is presented in Appendix "C"); and the third one, used in this Section, stems from the identity

$$S = aS + jS \qquad (8.1)$$
This latter equation is clear since $a + j = I$.
Eqs. (7.3) and (8.1), together, imply that

$$Su_\Delta + \lambda = f_\Delta \qquad (8.2)$$

when the vector $\lambda$ is defined to be

$$\lambda \equiv -jSu_\Delta \qquad (8.3)$$

Therefore, $\lambda \in jW_\Delta$. Thus, the problem of finding $u_\Delta$ has been transformed into that of finding the 'Lagrange multiplier'⁶ $\lambda$, since, once $\lambda$ is known, one can apply $S^{-1}$ to Eq. (8.2) to obtain

$$u_\Delta = S^{-1}\left(f_\Delta - \lambda\right) \qquad (8.4)$$
Furthermore, in Eq. (8.4), $ju_\Delta = 0$, so that

$$jS^{-1}\left(f_\Delta - \lambda\right) = 0 \qquad (8.5)$$

Hence, $\lambda \in jW_\Delta$ fulfills

$$jS^{-1}\lambda = jS^{-1}f_\Delta \quad \text{and} \quad a\lambda = 0 \qquad (8.6)$$
Thereby, we mention that $aSu_\Delta$ is the discrete version of the average of the normal derivative [Herrera and Yates 2010; Herrera and Yates 2009].
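With the sign conventions used above, the relations (8.2)-(8.6) can be checked on a toy configuration: two subdomains sharing a single dual node, so that $W_\Delta$ is two-dimensional, with an arbitrarily chosen SPD matrix playing the role of S. All numbers below are illustrative assumptions:

```python
import numpy as np

# Toy verification of Section 8 (illustrative assumptions): two subdomains
# share one dual node; a averages the two copies, j is the jump projection.
a = np.full((2, 2), 0.5)
j = np.eye(2) - a
S = np.array([[3.0, 1.0], [1.0, 2.0]])   # stand-in for the Schur complement

u = np.array([4.0, 4.0])                 # a continuous solution: j u = 0
f = a @ S @ u                            # then a S u = f, Eq. (7.3)
lam = -(j @ S @ u)                       # Lagrange multiplier, Eq. (8.3)

Sinv = np.linalg.inv(S)
assert np.allclose(Sinv @ (f - lam), u)             # Eq. (8.4)
assert np.allclose(j @ Sinv @ lam, j @ Sinv @ f)    # Eq. (8.6)
```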
Section 9
The primal Neumann-Neumann problem
In Appendix "B", it was shown that there is a second and more direct manner of formulating the nonpreconditioned NeumannNeumann problem, which is given by Eq.(18.28). Here, we derive it for the general problem we are considering. Our starting point will be Eq.(7.3).
We multiply the first equality in Eq. (7.3) by $S^{-1}$, observing that $af_\Delta = f_\Delta$, to obtain

$$S^{-1}aSu_\Delta = S^{-1}f_\Delta \qquad (9.1)$$
Thus, Eq. (7.3) can be transformed into
or
If we define:
Eq. (9.3) is transformed into:
The iterative form of this algorithm is obtained multiplying by $S^{-1}$:
If the solution of Eq. (7.3) is known, then $v_\Delta \in W_\Delta$, defined by Eq. (9.4), fulfills Eq. (9.6); conversely, if $v_\Delta \in W_\Delta$ satisfies Eq. (9.6), then the vector $u_\Delta$ given by Eq. (9.7) is a solution of Eq. (7.3). We shall refer to the iterative algorithm defined by Eq. (9.6) as the 'multipliers-free formulation of the nonpreconditioned Neumann-Neumann problem'.
Section 10
The second dual Neumann-Neumann problem
Our starting point will be Eq. (8.6). Firstly, we observe the following identity:

$$S^{-1} = aS^{-1} + jS^{-1} \qquad (10.1)$$
Then, we multiply the first equality in Eq. (8.6) by $S$ to obtain
Or
If we multiply the first of these equalities by $S^{-1}$ and define:
Eq. (10.3) is transformed into:
We observe that this latter equation is equivalent to Eq. (10.3), because $S^{-1}$ is nonsingular. If the solution of Eq. (8.6) is known, then $\mu_\Delta \in W_\Delta$, defined by Eq. (10.4), fulfills Eq. (10.5). Conversely, if $\mu_\Delta \in W_\Delta$ satisfies Eq. (10.5), then the vector given by Eq. (10.6)

is a solution of Eq. (8.6).
We notice that Eq. (10.5) does not define an iterative algorithm. However, multiplying Eq. (10.5) by $S$, an iterative algorithm is obtained:

Eq. (10.7) supplies an alternative manner of applying the Lagrange-multipliers approach.

The equality $aS^{-1}\mu_\Delta = 0$ may be interpreted as a restriction; indeed, it can be shown that it is equivalent to $\mu_\Delta \in SjW_\Delta$.
Section 11
The DVS version of the BDDC algorithm
The DVS version of the BDDC algorithm is obtained when the Schur-complement algorithm of Section 7 is preconditioned by means of the matrix $aS^{-1}$. It is: "Given $f_\Delta \in aW_\Delta$, find $u_\Delta \in W_\Delta$ such that

$$aS^{-1}aS\,u_\Delta = aS^{-1}f_\Delta \quad \text{and} \quad ju_\Delta = 0 \qquad (11.1)\text{"}$$
The following properties should be noticed:
a) This is an iterative algorithm;
b) The iterated matrix is $aS^{-1}aS$;
c) The iteration is carried out in the subspace $aW_\Delta \subset W_\Delta$;
d) This algorithm is applicable whenever the Schur-complement matrix $S$ is such that the following logical implication is fulfilled for any $w \in W_\Delta$:

$$aS^{-1}w = 0 \ \text{ and } \ jw = 0 \implies w = 0 \qquad (11.2)$$

e) In particular, it is applicable when $S$ is definite.
Properties a) to c) are interrelated. The condition $ju_\Delta = 0$ is equivalent to $u_\Delta \in aW_\Delta$; thus, the search is carried out in $aW_\Delta$. When the matrix $aS^{-1}aS$ is applied repeatedly, one remains in $aW_\Delta$, because for every $w \in W_\Delta$ one has $j\left(aS^{-1}aSw\right) = 0$. As for property d), it means that, when the implication of Eq. (11.2) holds, Eq. (11.1) implies Eq. (7.3). To prove this, observe that

$$j\left(aSu_\Delta - f_\Delta\right) = 0 \qquad (11.3)$$

and also that Eq. (11.1) implies

$$aS^{-1}\left(aSu_\Delta - f_\Delta\right) = 0 \qquad (11.4)$$

When the implication of Eq. (11.2) holds, Eqs. (11.3) and (11.4) together imply

$$aSu_\Delta - f_\Delta = 0 \qquad (11.5)$$

As desired, this proves that Eq. (11.1) implies Eq. (7.3) when Eq. (11.2) holds.
The condition of Eq. (11.2) is weaker than (or generalizes) that of requiring that the Schur-complement matrix be definite, since the implication of Eq. (11.2) is always satisfied when $S$ is definite. Assume that $S$ is definite; then, for any vector $w \in W_\Delta$ such that $aS^{-1}w = 0$ and $jw = 0$, one has

$$0 = w \cdot aS^{-1}w = w \cdot S^{-1}w$$

This implies $w = 0$, because $S^{-1}$ is definite when so is $S$. Thereby, property e) is clear.
Section 12
The DVS version of the FETI-DP algorithm
The DVS version of the FETI-DP algorithm is obtained when the 'Lagrange-multipliers formulation of the nonpreconditioned Neumann-Neumann problem' of Section 8, Eq. (8.6), is preconditioned by means of the matrix $jS$. It is: "Given $f_\Delta \in aW_\Delta$, find $\lambda \in W_\Delta$ such that

$$jSjS^{-1}\lambda = jSjS^{-1}f_\Delta \quad \text{and} \quad a\lambda = 0 \qquad (12.1)\text{"}$$
For this algorithm the following properties should be noticed:
i. This is an iterative algorithm;
ii. The iterated matrix is $jSjS^{-1}$;
iii. The iteration is carried out in the subspace $jW_\Delta \subset W_\Delta$;
iv. The algorithm is applicable whenever the Schur-complement matrix $S$ is such that the following logical implication is fulfilled for any $w \in W_\Delta$:

$$jSw = 0 \ \text{ and } \ aw = 0 \implies w = 0 \qquad (12.2)$$

v. In particular, it is applicable when $S$ is positive definite.
Properties i) to iii) are interrelated. The condition $a\lambda = 0$ is equivalent to $\lambda \in jW_\Delta$; thus, the search is carried out in $jW_\Delta$. When the matrix $jSjS^{-1}$ is applied repeatedly, one remains in $jW_\Delta$, because for every $\lambda \in W_\Delta$ one has $a\left(jSjS^{-1}\lambda\right) = 0$. As for property iv), it means that, when the implication of Eq. (12.2) holds, Eq. (12.1) implies Eq. (8.6). To prove this, assume Eq. (12.1) and observe that

$$a\left(jS^{-1}\lambda - jS^{-1}f_\Delta\right) = 0 \qquad (12.3)$$

and also that Eq. (12.1) implies

$$jS\left(jS^{-1}\lambda - jS^{-1}f_\Delta\right) = 0 \qquad (12.4)$$

When the implication of Eq. (12.2) holds, Eqs. (12.3) and (12.4) together imply

$$jS^{-1}\lambda - jS^{-1}f_\Delta = 0 \qquad (12.5)$$

As desired, this proves that Eq. (12.1) implies Eq. (8.6) when Eq. (12.2) holds.
The condition of Eq. (12.2) is weaker than (or generalizes) that of requiring that the Schur-complement matrix be definite, since the implication of Eq. (12.2) is always satisfied when $S$ is definite. Indeed, assume that $S$ is definite; then, for any vector $w \in W_\Delta$ such that $jSw = 0$ and $aw = 0$, one has

$$0 = w \cdot jSw = w \cdot Sw$$

This implies $w = 0$, because $S$ is definite. Thereby, property v) is clear.
Section 13
Preconditioned primal Neumann-Neumann algorithm
This algorithm is a preconditioned version of the multipliers-free formulation of the nonpreconditioned Neumann-Neumann problem. It can be derived multiplying the first equality in Eq. (9.5) by the preconditioner $S^{-1}$. Thus, such an algorithm consists in searching for a function $v_\Delta \in W_\Delta$ which fulfills
For this algorithm the following properties should be noticed:
A. This is an iterative algorithm;
B. The iterated matrix is $S^{-1}jSj$;
C. The iteration is carried out in the subspace $S^{-1}jW_\Delta \subset W_\Delta$;
D. The algorithm is applicable whenever the Schur-complement matrix $S$ is such that the following logical implication is fulfilled for any $w \in W_\Delta$:

$$jSw = 0 \ \text{ and } \ aw = 0 \implies w = 0 \qquad (13.2)$$

E. In particular, it is applicable when $S$ is positive definite.
Properties A) to C) are interrelated. The condition $aSv_\Delta = 0$ is equivalent to $v_\Delta \in S^{-1}jW_\Delta$; thus, the search is carried out in the subspace $S^{-1}jW_\Delta$. When the matrix $S^{-1}jSj$ is applied repeatedly, one remains in $S^{-1}jW_\Delta$, because for every $v_\Delta \in W_\Delta$ one has $aS\left(S^{-1}jSj\,v_\Delta\right) = 0$. As for property D), it means that, when the implication of Eq. (13.2) holds, Eq. (13.1) implies Eq. (9.5). To prove this, assume Eq. (13.1) and define

Furthermore, in view of Eq. (13.1),

Using Eq. (13.2), it is seen that Eqs. (13.3) and (13.4) together imply

Now Eq. (9.5) is clear and the proof is complete.
The condition of Eq. (13.2) is weaker than (or generalizes) that of requiring that the Schur-complement matrix be definite, since the implication of Eq. (13.2) is always satisfied when $S$ is definite. Indeed, assume that $S$ is definite; then, for any vector $w \in W_\Delta$ such that $jSw = 0$ and $aw = 0$, one has

$$0 = w \cdot jSw = w \cdot Sw$$

This implies $w = 0$, because $S$ is definite. Thereby, property E) is clear.
Section 14
Preconditioned second dual Neumann-Neumann algorithm
This algorithm is a preconditioned version of the 'second form of the Lagrange-multipliers formulation of the nonpreconditioned Neumann-Neumann problem' of Section 10. Multiplying Eq. (10.5) by the matrix $SjS^{-1}$, we obtain:

The iterative nonoverlapping algorithm that is obtained by the use of Eq. (14.1) is similar to FETI-DP. The following properties should be noticed:
I. Firstly, this is an iterative algorithm;
II. The iterated matrix is $SjS^{-1}j$;
III. The iteration is carried out in the subspace $SjW_\Delta \subset W_\Delta$;
IV. This algorithm is applicable whenever the Schur-complement matrix $S$ is such that the following logical implication is fulfilled for any $w \in W_\Delta$:

$$aS^{-1}w = 0 \ \text{ and } \ jw = 0 \implies w = 0 \qquad (14.2)$$

V. In particular, it is applicable when $S$ is positive definite in $W_\Delta$.
Properties I) to III) are interrelated. The condition $aS^{-1}\mu_\Delta = 0$ is equivalent to $\mu_\Delta \in SjW_\Delta$; thus, the search is carried out in the subspace $SjW_\Delta$. When the matrix $SjS^{-1}j$ is applied repeatedly, one remains in $SjW_\Delta$, because for every $\mu_\Delta \in W_\Delta$ one has $aS^{-1}\left(SjS^{-1}j\,\mu_\Delta\right) = 0$. As for property IV), it means that, when the implication of Eq. (14.2) holds, Eq. (14.1) implies Eq. (10.5). To prove this, assume Eq. (14.1) and define

Furthermore, in view of Eq. (14.1),

Eqs. (14.3) and (14.4), together, imply

Now Eq. (10.5) is clear and, as desired, it has been shown that Eq. (14.1) implies Eq. (10.5) when the condition of Eq. (14.2) holds.
The condition of Eq. (14.2) is weaker than (or generalizes) that of requiring that the Schur-complement matrix be definite, in the sense that any positive definite matrix fulfills it. Indeed, assume that $S$ is definite; then, for any vector $w \in W_\Delta$ such that $aS^{-1}w = 0$ and $jw = 0$, one has

$$0 = w \cdot aS^{-1}w = w \cdot S^{-1}w$$

This implies $w = 0$, because $S^{-1}$ is definite when so is $S$. Thereby, property V) is clear.
Section 15
FETI-DP and BDDC from the DVS perspective
As said in the Introduction, both FETI-DP and BDDC can be accommodated in the DVS framework. In this Section, we show that the DVS version of FETI-DP, presented in Section 12, is obtained when suitable choices are made in the general expressions of FETI-DP. As for BDDC, its relation with the algorithm presented in Section 9 is a little more complicated.
FETI-DP
The FETI preconditioner is given by

and we now have to solve the preconditioned system

where $F_r = B_r S^\dagger B_r^T$ and $d_r = B_r S^\dagger f$ (see page 157, Eqs. (6.51) and (6.52), of [Toselli and Widlund 2005]). Developing the expression and replacing $M_r^{-1}$, we get

We take $P_r = I$. So, we have

Replacing $B_{D,r} = D_r B_r$, we get

and, simplifying,

Now, we choose $B_r \equiv j$ and $D_r \equiv j$, so that $B_r^T = j$, to obtain

One advantage of introducing $j$ is its very convenient algebraic properties; for example, it is idempotent. In particular, here we have used the fact that $j\lambda = \lambda$, since $a\lambda = 0$. Except for slight changes of notation, this is the same as Eq. (12.1).
BDDC
In the standard notation used in BDDC [Dohrmann 2003; Mandel and Dohrmann 2003; Mandel et al., 2005; Da Conceição 2006],

where S and the preconditioner $M^{-1}$ are

respectively. Furthermore, N is the number of subdomains and, for each $i = 1, \dots, N$, $R_i: \Gamma \to \Gamma_i$ is the restriction operator from Γ into $\Gamma_i$; when applied to a function defined in Γ, it yields its restriction to $\Gamma_i$. As for $R_{D,i}$, $R_{D,i}: \Gamma \to \Gamma_i$ is given by $R_{D,i} = D_i R_i$. Here, $D_i = \operatorname{diag}\{\delta_i\}$ is a diagonal matrix defining a partition of unity. Substituting S and $M^{-1}$ in Eq. (15.8), we obtain
This equation is to be compared with our Eq. (9.1). For the purpose of comparison, the vectors u and f of Eq. (15.11) can be identified with vectors $\hat{u}$ and $\hat{f}$ of our original space, $\hat{W}$. Furthermore, we apply our natural injection, $R: \hat{W} \to W$, defined by Eq. (5.5), to Eq. (9.1) and premultiply the resulting equation also by the natural injection, with $u_\Delta$ and $f_\Delta$ replaced by $R\hat{u}$ and $R\hat{f}$, respectively. In this manner we obtain Eq. (15.12). We have verified that, indeed, Eqs. (15.11) and (15.12) are equivalent.
Comparisons
The DVS approach, and therefore also the DVS versions of FETI-DP and BDDC here presented, starts with the matrix that is obtained after the problem has been discretized and, for its application, does not require any information about the system of partial differential equations from which it originated. Generally, all the nonpreconditioned DVS algorithms that have been presented throughout this paper are equally applicable to symmetric, indefinite and nonsymmetric matrices. The specific conditions required for their applicability are spelled out in detail in Section 9 of [Herrera and Yates 2009]. Throughout all the developments it is assumed that the dual-primal Schur-complement matrix $S$, defined in Section 7, Eq. (7.2), is nonsingular.
As said before, for FETI we show that the DVS version of FETI-DP can be obtained when suitable choices are made in the general expressions of FETI-DP. Although these choices represent a particular case of the general FETI-DP algorithm, in some sense the choices made are optimal, because $a$ and $j$ are complementary orthogonal projections, as has been verified numerically in [Herrera and Yates 2010; Herrera and Yates 2009] and in other more recent numerical implementations. When carrying out the incorporation of BDDC into the DVS framework, we encountered more substantial differences. For example, when the inverses of the local Schur-complements exist, in the DVS framework the inverse of $S$ is given by (see Appendix "A"):
A similar relation does not hold for the BDDC algorithm. Indeed, in this latter approach we have instead:

and

even when the inverses of the local Schur-complements exist and no restrictions are used.
The origin of this problem, encountered in the BDDC formulation, may be traced back to the fact that the BDDC approach does not work directly in the product space. Indeed, one frequently goes back to the degrees of freedom associated with the original nodes. This is done by means of the restriction operators $R_i: \Gamma \to \Gamma_i$, which can be interpreted as transformations of the original vector space into the product vector space (or derived-vector space). If the algorithm of Eq. (15.11) is analyzed from this point of view, it is seen that it repeatedly goes from the original vector space to the product space (or derived-vector space) and back. For example, consider the expression

occurring in Eq. (15.11). After starting with the vector u in the original vector space, we go to the derived-vector space with $R_i u$ and remain there when we apply $S_i$. However, we go back to the original vector space when $R_i^T$ is applied. A similar analysis can be made of the term
Summarizing, in the operations indicated in Eq. (15.16), four trips between the original vector space and the derived-vector space were made: two one way and two the other way. In the DVS framework, on the other hand, from the start the original problem is transformed into one defined in the derived-vector space, where all the work is done afterwards, and that permits avoiding all those unnecessary trips. Thereby, the matrix formulas are simplified, and so is code development. The unification and simplification achieved in this manner permit producing more effective and robust software.
Section 16
Conclusions and discussion
1. A primal framework for the formulation of nonoverlapping domain decomposition methods has been proposed, which is referred to as 'the derived-vector space (DVS) framework';
2. Dual and primal formulations have been derived in a unified manner. Symmetric, nonsymmetric and indefinite matrices are also included. Furthermore, detailed conditions that such matrices need to satisfy in order for the general algorithms to be applicable to them have been given in Section 9 of [Herrera and Yates 2009] and in Sections 7 to 14 of the present paper;
3. A brief and effective summary of nonoverlapping domain decomposition methods has been obtained. It consists of eight matrix formulas: four are primal formulations and the other four are dual formulations;
4. The nonpreconditioned formulas are:
5. The preconditioned formulas are:
6. The most commonly used methods, BDDC and FETI-DP, have been incorporated into this framework, producing in this manner DVS versions of such methods. Eq. (16.1) is the primal formulation of a Dirichlet-Dirichlet problem; when this is preconditioned, the DVS-BDDC algorithm is obtained. The formulation of a Neumann-Neumann problem using the counterpart of the Steklov-Poincaré operator, given in Appendix "B", yields a dual formulation, which is stated in the second equality of Eq. (16.2). The DVS version of FETI-DP, of Eq. (16.4), is the preconditioned form of this formulation;
7. For the other matrix formulas, two preconditioned and two more nonpreconditioned, we have not been able to find suitable counterparts in the DDM literature already published;
8. Using the detailed definitions given in [Herrera and Yates 2010; Herrera and Yates 2009], the above DVS formulas can be used directly for code development. They are somewhat simpler than those of the BDDC framework and have permitted us to simplify code development and also to develop very robust computational codes for the examples considered in [Herrera and Yates 2010; Herrera and Yates 2009].
FETI-DP and BDDC are optimal in the sense that the condition number κ of their interface problem grows asymptotically as [Dohrmann 2003; Klawonn and Widlund 2001; Tezaur 1998]:

$$\kappa \leq C \left(1 + \log\frac{H}{h}\right)^2$$
Furthermore, they perform quite similarly when the same set of primal constraints is used. Therefore, to be competitive, the last two preconditioned algorithms of Eq. (16.5) should have a similar behavior; at present, however, that is an open question.
Acknowledgement
We thank Dr. Norberto Vera-Guzmán for his valuable assistance in several aspects of the research reported in this article.
Bibliography
Da Conceição D.T. Jr., 2006, Balancing domain decomposition preconditioners for non-symmetric problems. Instituto Nacional de Matemática Pura e Aplicada, Agencia Nacional do Petróleo PRH-32, Rio de Janeiro, May 9.
Dohrmann C.R., 2003, A preconditioner for substructuring based on constrained energy minimization. SIAM J. Sci. Comput., 25(1):246-258.
Farhat Ch., Roux F., 1991, A method of finite element tearing and interconnecting and its parallel solution algorithm. Internat. J. Numer. Methods Engrg., 32:1205-1227.
Farhat C., Lesoinne M., LeTallec P., Pierson K., Rixen D., 2001, FETI-DP: a dual-primal unified FETI method, Part 1: A faster alternative to the two-level FETI method. Int. J. Numer. Methods Engrg., 50:1523-1544.
Farhat C., Li J., 2005, An iterative domain decomposition method for the solution of a class of indefinite problems in computational structural dynamics. Applied Numerical Mathematics, 54:150-166.
Herrera I., 2007, Theory of differential equations in discontinuous piecewise-defined functions. Numer. Meth. Part. D. E., 23(3):597-639, DOI 10.1002/num.20182.
Herrera I., Rubio E., 2011, Unified theory of differential operators acting on discontinuous functions and of matrices acting on discontinuous vectors. 19th International Conference on Domain Decomposition Methods, Zhangjiajie, China, 2009 (oral presentation). Internal report #5, GMMC-UNAM.
Herrera I., Yates R.A., 2009, The multipliers-free domain decomposition methods. Numer. Meth. Part. D. E., 26:874-905, DOI 10.1002/num.20462 (published online January 28, 2010).
Herrera I., Yates R.A., 2010, The multipliers-free dual-primal domain decomposition methods for nonsymmetric matrices. Numer. Meth. Part. D. E., DOI 10.1002/num.20581 (published online April 28, 2010).
Klawonn A., Widlund O.B., 2001, FETI and Neumann-Neumann iterative substructuring methods: connections and new results. Comm. Pure Appl. Math., 54(1):57-90.
Li J., Tu X., 2009, Convergence analysis of a balancing domain decomposition method for solving a class of indefinite linear systems. Numer. Linear Algebra Appl., 16:745-773.
Li J., Widlund O.B., 2006, FETI-DP, BDDC, and block Cholesky methods. Int. J. Numer. Methods Engng., 66:250-271.
Mandel J., 1993, Balancing domain decomposition. Comm. Numer. Meth. Engrg., 9:233-241.
Mandel J., Brezina M., 1993, Balancing domain decomposition: theory and performance in two and three dimensions. UCD/CCM report 2.
Mandel J., Dohrmann C.R., 2003, Convergence of a balancing domain decomposition by constraints and energy minimization. Numer. Linear Algebra Appl., 10(7):639-659.
Mandel J., Dohrmann C.R., Tezaur R., 2005, An algebraic theory for primal and dual substructuring methods by constraints. Appl. Numer. Math., 54:167-193.
Mandel J., Tezaur R., 1996, Convergence of a substructuring method with Lagrange multipliers. Numer. Math., 73(4):473-487.
Mandel J., Tezaur R., 2001, On the convergence of a dual-primal substructuring method. Numer. Math., 88:543-558.
Quarteroni A., Valli A., 1999, Domain decomposition methods for partial differential equations. Numerical Mathematics and Scientific Computation, Oxford Science Publications, Clarendon Press, Oxford.
Smith B., Bjorstad P., Gropp W., 1996, Domain Decomposition. Cambridge University Press, New York.
Tezaur R., 1998, Analysis of Lagrange multipliers based domain decomposition. Ph.D. Thesis, University of Colorado, Denver.
Toselli A., 2000, FETI domain decomposition methods for scalar advection-diffusion problems. Technical Report TR2000-800, Courant Institute of Mathematical Sciences, Department of Computer Science, New York University.
Toselli A., Widlund O.B., 2005, Domain Decomposition Methods - Algorithms and Theory. Springer Series in Computational Mathematics, Springer-Verlag, Berlin, 450 pp.
Tu X., Li J., A balancing domain decomposition method by constraints for advection-diffusion problems. www.ddm.org/DD18/
Tu X., Li J., BDDC for nonsymmetric positive definite and symmetric indefinite problems. www.ddm.org/DD18/
¹ For the treatment of systems of equations, vector-valued functions are considered instead.
² The theory to be presented, with slight modifications, works as well in the case where the functions of V are vector-valued.
³ Strictly, these should be column vectors. However, when they are incorporated in the middle of the text, we write them as row vectors to save space.
⁴ In order to mimic standard notations, as we try to do in most of this paper, we should use Π instead of the lower-case π. However, we have found it convenient to reserve the letter Π for another use.
⁵ For the treatment of systems of equations, such as those of linear elasticity, such functions are vector-valued.
⁶ In Appendix C, it is shown that λ is indeed the Lagrange multiplier when such an approach is adopted.