American Journal of Computational and Applied Mathematics

p-ISSN: 2165-8935    e-ISSN: 2165-8943

2013;  3(1): 13-22

doi:10.5923/j.ajcam.20130301.03

Iteratively Regularized Gradient Method for Determination of Source Terms in a Linear Parabolic Problem

Arzu Erdem

Kocaeli University, Faculty of Arts and Sciences, Department of Mathematics, Umuttepe Campus, 41380, Kocaeli, Turkey

Correspondence to: Arzu Erdem, Kocaeli University, Faculty of Arts and Sciences, Department of Mathematics, Umuttepe Campus, 41380, Kocaeli, Turkey.

Copyright © 2012 Scientific & Academic Publishing. All Rights Reserved.

Abstract

This paper investigates a numerical computation for determination of source terms in a linear parabolic problem. The source term is defined in the linear parabolic equation and Robin boundary condition from the measured final data and the measurement of the temperature in a subregion. We demonstrate how to compute Fréchet derivative of Tikhonov functional based on the solution of the adjoint problem. Lipschitz continuity of the gradient is proved. Iteratively regularized gradient method is applied for numerical solution of the problem. We conclude with several numerical tests by using the theoretical results.

Keywords: Inverse Coefficient Source Problem, Parabolic Equation, Adjoint Problem, Fréchet Derivative, Lipschitz Continuity

Cite this paper: Arzu Erdem, Iteratively Regularized Gradient Method for Determination of Source Terms in a Linear Parabolic Problem, American Journal of Computational and Applied Mathematics, Vol. 3 No. 1, 2013, pp. 13-22. doi: 10.5923/j.ajcam.20130301.03.

1. Introduction

In describing the heat conduction in a material occupying a domain , the temperature distribution is modeled by
(1)
(2)
(3)
where denotes internal heat source, is spatial varying heat conductivity, is an initial condition and denotes the convection between conducting body and the ambient environment. If one cannot measure the pair directly, one can try to determine from the final state observation of
(4)
and from the observation of over the subregion ,
(5)
Source term identification problems like (3)-(4) appear in hydrology [2], material science [26], heat transfer [3] and transport problems [31].
The problem of reconstructing the right-hand side of a parabolic equation was investigated earlier in [18, 20, 23, 24, 28].
The unique solvability of the inverse problem of determining the right-hand side in the parabolic problem under an integral overdetermination condition is given in [12]. Determining the unknown functions representing source terms in inverse heat conduction problems, together with gradient-based iterative procedures for the optimization problem, has been presented in [30]. How the inverse problem can be formulated for the pair, based on the weak solution approach, has been investigated in [16, 17].
To solve the inverse source problem one can use explicit and implicit methods [5, 6, 15, 19, 25]. Explicit methods provide analytical solutions to the inverse source problem directly from the measured data, but they are limited to simple medium geometries with spatially non-varying parameters. For more complex geometries and heterogeneous media no explicit methods are available, and implicit methods must be employed. Implicit methods solve the inverse source problem iteratively, using a solution of a forward model to provide predicted measurement data. An update of an initial source distribution is then sought by minimizing a functional that measures the goodness of fit between the predicted and the experimental data.
Our approach is based on the quasisolution method. We also introduce an adjoint problem: the adjoint problem technique yields the gradient of the objective function. This technique can also be applied to similar inverse problems [10, 13, 14] or to sensitivity analysis, where the derivative of an error function is sought. Its distinct advantages are a relatively simple numerical implementation and the resulting low computational cost. In view of the quasisolution approach, this inverse problem can be formulated as a minimization problem for the objective function [27]. In most cases gradient methods are used for the numerical solution of this minimization problem [4]. To this end, in many applications various gradient formulas are either derived empirically or computed numerically [21]. Although an empirical gradient formula has been employed within a regularization algorithm, there was no mathematical framework for it. At the same time, for any gradient method one needs to estimate the iteration parameter. The choice of the iteration parameter defines the various gradient methods, although in many situations estimating this parameter is a difficult problem. However, when the gradient of the objective function is Lipschitz continuous, the parameter can be estimated via the Lipschitz constant, which improves the convergence properties of the iteration process [29].
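To make the last point concrete, the following sketch runs gradient descent on a simple quadratic objective with the step size 1/L dictated by the Lipschitz constant of the gradient. It is illustrative only: the matrix A, the data y and all identifiers are assumptions, not the paper's operators.

```python
import numpy as np

# Quadratic objective J(f) = 0.5 * ||A f - y||^2; its gradient
# grad J(f) = A^T (A f - y) is Lipschitz with constant L = ||A^T A||_2.
A = np.array([[2.0, 0.0], [1.0, 3.0]])
y = np.array([1.0, 2.0])

L = np.linalg.norm(A.T @ A, 2)   # Lipschitz constant of the gradient
step = 1.0 / L                   # admissible step size estimated from L

f = np.zeros(2)
for _ in range(500):
    grad = A.T @ (A @ f - y)
    f = f - step * grad

# f now approximates the least-squares solution of A f = y
```

With the step chosen from the Lipschitz constant no line search is needed, which is exactly the convergence advantage alluded to above.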
In this paper we show how the adjoint problem technique can be utilized to prove Fréchet differentiability of the objective function; this has been hinted at in previous treatments [16]. Here we extend the objective function to include the regularization parameter. We then show how the Fréchet differentiability result extends to Lipschitz continuity properties of the gradient. Finally, we illustrate the application of our technique.
The paper is outlined as follows. In Section 2 we summarize the basic notation and define the regularized objective function. The Fréchet differentiability results proven in Section 3 give a unique regularized solution of the inverse problem. In Section 4 the iteratively regularized gradient method is proposed for the numerical solution, and some numerical examples are presented.

2. Regularization Method

Let us denote by the set of admissible unknown sources and . The scalar product in is defined as follows:
where We also assume that.
We denote the unique solution of problem (3) by, corresponding to this source term. The direct problem could be to predict the evolution of the described system from knowledge of. We denote by the set of measured output data and the set of measured output data and set. Hence the inverse problem (3)-(4) can be formulated in the following operator form
(6)
According to [8, 9, 10], the mapping is defined to be the input-output mapping. The scalar product in can be defined similarly to that in :
There is a fundamental difference between the direct and the inverse problem: the inverse problem is ill-posed (improperly posed) in the sense of Hadamard, while the direct problem is well-posed. A mathematical model for a physical problem is called well-posed if it has the following three properties:
There exists a solution of the problem (existence).
There is at most one solution of the problem (uniqueness).
The solution depends continuously on the data (stability).
When the operator is bounded, linear and injective between the Hilbert spaces and , and , the existence and uniqueness of the mapping are clear. If the desired output data and are not attainable, one tries to obtain approximations and as close as possible to and , respectively. Then the function
will be defined to be the final state noisy output data and the noisy data over the subregion. For the analysis of the approximation quality of the regularized solutions, we require a bound on the data noise:
The problem of solving (3)-(4) with noisy data may be equivalently reformulated as finding the minimum of the functional, which has been given in [16] for the final state output data only:
On the other hand, in the case where is given, the inverse problem of determining from the observation , can be transformed to a Fredholm equation of the second kind, where there might exist a non-trivial solution which implies the non-uniqueness for such an inverse problem. Of course, the solution to this minimization problem again does not depend continuously on the data. One possibility to restore stability is to add the data over the subregion and a penalty term to the functional involving the norm of:
(7)
The parameter is called regularization parameter. A regularized solution is defined by
Regularization methods replace an ill-posed problem by a family of well-posed problems; their solutions, called regularized solutions, are used as approximations to the desired solution of the inverse problem. These methods always involve some parameter measuring the closeness of the regularized and the original (unregularized) inverse problem. Rules (and algorithms) for the choice of these regularization parameters, as well as convergence properties of the regularized solutions, are central points in the theory of these methods, since only they allow one to find the right balance between stability and accuracy.
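A minimal numerical illustration of this stabilization follows. The ill-conditioned model matrix here is an assumption (a Hilbert matrix, not the parabolic operator of the paper): the penalty term keeps the reconstruction error bounded where the unregularized least-squares solution is destroyed by even tiny data noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ill-conditioned forward operator (a Hilbert matrix),
# standing in for the input-output mapping of the inverse problem.
n = 20
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
f_true = np.ones(n)
y_noisy = A @ f_true + 1e-6 * rng.standard_normal(n)  # tiny data noise

# Unregularized least squares: the noise is amplified enormously.
f_naive = np.linalg.lstsq(A, y_noisy, rcond=None)[0]

# Tikhonov regularization: minimize ||A f - y||^2 + alpha ||f||^2,
# i.e. solve the well-posed normal equations (A^T A + alpha I) f = A^T y.
alpha = 1e-8
f_reg = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y_noisy)

err_naive = np.linalg.norm(f_naive - f_true)
err_reg = np.linalg.norm(f_reg - f_true)
```

Choosing alpha is the balance discussed above: too small and the noise amplification returns, too large and the regularized solution is biased away from the true source.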

3. Properties of Regularization Method

This section contains the main results of this paper. In the forthcoming theorem we prove that the functional (7) is Fréchet differentiable and provide the explicit form of its derivative. Let us first give some preparations.
Definition 3.1. Let X, Y be normed spaces, and let U be an open subset of X. A mapping is called Fréchet differentiable at if there exists a bounded linear operator such that
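In standard notation (the symbols below are illustrative, since the original inline notation did not survive extraction), Definition 3.1 is the usual one:

```latex
% Fréchet differentiability (illustrative notation):
% F : U \subset X \to Y is Fréchet differentiable at u \in U if there
% exists a bounded linear operator F'(u) : X \to Y such that
\lim_{\|h\|_X \to 0}
  \frac{\|F(u+h) - F(u) - F'(u)h\|_Y}{\|h\|_X} = 0 .
```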
The proof of the following lemma can be found in[16].
Lemma 3.2. Let be two solutions of the direct problem (3) corresponding to admissible sources . The following equality holds:
(8)
where , , is the solution of the following sensitivity problem
(9)
(10)
(11)
and is the solution of the backward parabolic problem:
Lemma 3.3. Let be two solutions of the direct problem (3) corresponding to admissible sources . The following equality holds:
(15)
where is the solution of the backward parabolic problem:
with the following discontinuous right-hand side
Proof. We start by replacing the left-hand side of equality (15) with the right-hand side of problem (18):
We use integration by parts
and employ the initial and boundary conditions of problems (11) and (18) to conclude the proof of the lemma.
The following lemma gives the first variation of the functional (7).
Lemma 3.4. The first variation of the functional (7) is given by
(19)
Proof. By definition of the functional (7) we observe
Adding and subtracting suitable terms, we get
Finally, this with the integral identities (8) and (15) leads to
Lemma 3.5. There exists a constant such that
(20)
where is solution of the parabolic problem (11).
Proof. This follows from Lemma 3.2 of [16].
Lemma 3.6. There exists a constant such that
(21)
where is solution of the parabolic problem (11).
Proof. Due to the energy equality of the parabolic problem (11), we write
Applying Cauchy inequality to the right hand side of the above equality we obtain
(22)
Since
we have
(23)
Using (23) on the right-hand side of inequality (22) and the lower bound of , we get the following estimate:
and satisfies
(24)
where . In this case, requiring we obtain the bound . Further, from the requirement we have the second bound . Thus, assuming for the parameter
Taking into account (23) and (24)
where
Theorem 3.7. Assume that and is the solution of the parabolic problem (11) corresponding to admissible source . Then, the functional (7) is Fréchet differentiable, with Fréchet differential:
(25)
where
Proof. We take the two sources , instead of in (19)
Using the estimates in Lemma 3.5 and Lemma 3.6, we have
Then, due to Definition 3.1, we obtain the Fréchet derivative of the functional (7):
Theorem 3.8. If the conditions of Theorem 3.7 hold, then the functional (7) has a unique minimizer in for . This minimum is given by the solution of the following equation:
Moreover
Proof. Assume that minimizes the functional (7). The choice implies by (25) that
To show that , defined by the solution of the above equation, minimizes the functional (7), note that for all the function is a polynomial of degree 2 with and . Hence, with equality only for , is a minimizer of the functional (7). Due to the convexity of the functional (7), we obtain the uniqueness of the solution. Since the functional (7) attains its minimum at and , we have
which implies
(26)
A crucial question in regularization methods is how to choose the regularization parameter to obtain optimal convergence rates. Theorem 3.9 shows that converges towards a solution of (6) in a set-valued sense with and .
Theorem 3.9. Let be a weakly closed set and be the exact solution of (6) in . If is injective and, then converges to as tends to zero.
Proof. Let us assume the contrary. Then there exist an and a sequence such that
Since the functional (7) attains its minimum at
(27)
Hence
(28)
According to the conditions of the theorem, there is a constant c, independent of , such that . Then we obtain . Further, using the weak compactness of a ball in Hilbert space, we conclude that converges weakly to , since is a weakly closed subset. Together with the lower semicontinuity of the norm and inequality (28):
(29)
By (27)
By limit transition as , we conclude . Due to the weak convergence we obtain . This contradiction proves the theorem.

4. Identification Process and Computational Results

Another idea is to minimize the functional (7) by a gradient method. This leads to the recursion formula of the conjugate gradient method
(30)
where is the search step size, is the direction of descent, and is the iteration parameter. The direction of descent is given by
(31)
where different expressions for the conjugation coefficient can be used, such as Polak-Ribiere or Fletcher-Reeves [1, 7, 11]. In the Polak-Ribiere version, the conjugation coefficient is obtained from the following expression:
(32)
In the Fletcher-Reeves version, the conjugation coefficient is given by the following expression:
(33)
By using a first-order Taylor series approximation, the following expression results for the step size:
(34)
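A compact sketch of the recursion (30)-(34) on a quadratic model functional follows. The matrix Q, the vector b and all identifiers are illustrative assumptions, and the exact line search below plays the role of the step size (34), for which the first-order Taylor approximation is exact on quadratics.

```python
import numpy as np

def conjugate_gradient(Q, b, variant="PR", iters=50):
    """Minimize J(f) = 0.5 f^T Q f - b^T f by the conjugate gradient
    recursion (30)-(31) with coefficient (32) or (33)."""
    f = np.zeros_like(b)
    g = Q @ f - b                      # gradient of J at f
    d = -g                             # initial direction of descent
    for _ in range(iters):
        step = -(g @ d) / (d @ (Q @ d))    # line search along d; cf. (34)
        f = f + step * d                    # update (30)
        g_new = Q @ f - b
        if np.linalg.norm(g_new) < 1e-12:  # converged
            break
        if variant == "PR":                # Polak-Ribiere coefficient (32)
            beta = g_new @ (g_new - g) / (g @ g)
        else:                              # Fletcher-Reeves coefficient (33)
            beta = (g_new @ g_new) / (g @ g)
        d = -g_new + beta * d              # new direction of descent (31)
        g = g_new
    return f

Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
f_pr = conjugate_gradient(Q, b, "PR")
f_fr = conjugate_gradient(Q, b, "FR")
```

On a quadratic the two coefficient choices coincide; they differ, and the Polak-Ribiere variant is often preferred, on general nonlinear functionals such as (7).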
To use a numerical method with rapid convergence properties for the solution of the inverse problem, we must require higher regularity of defined by (6) than mere continuity. In particular, to generate an affine approximation to , is required to be Fréchet differentiable, which we have already established in the previous section. To obtain high-order convergence of the numerical method, this Fréchet derivative must also be Lipschitz continuous.
For the next results we refer to [16, 17].
Theorem 4.1. If and are the solutions of problems (3) and (14), respectively, then and the following estimate holds:
(35)
Corollary 4.2. Assume that . Then implies
The proof of the monotonicity of the sequence is given by Corollary 4.1 in [16].
Theorem 4.3. The sequence is a monotone decreasing sequence. Moreover;
Since an expression for the gradient of the functional (7) is explicitly available and easily obtained by solving the adjoint problem (14), the gradient method can be readily implemented. The gradient algorithm [22] applied to the optimization problem takes the following form:
Step 1 Choose, and set.
Step 2 Solve the direct problem (3) with and determine the residuals
Step 3 Solve the adjoint problems (14) and (18)
Step 4 Compute the gradient with (25).
Step 5 Update the conjugation coefficient from (32) or (33) and then the direction descent from (31).
Step 6 By setting solve the sensitivity problem (11) to obtain and on subregion .
Step 7 Compute the step size from (34).
Step 8 Update from (30).
Step 9 Stop computing if the stopping criterion
is satisfied. Otherwise, set and go to Step 2.
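For a linear model problem the nine steps above reduce to the following sketch: the smoothing matrix A, the data, alpha, tau and delta are all assumed stand-ins for the paper's operators and measurements, the regularized gradient is iterated with a Lipschitz-based step, and the iteration is stopped by the discrepancy principle.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed linear model: a smoothing (convolution-like) matrix A standing
# in for the input-output mapping, with noisy data y (noise bound delta).
n = 30
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 3.0) ** 2) / 3.0
f_true = np.sin(np.linspace(0.0, np.pi, n))
delta = 1e-3
y = A @ f_true + delta * rng.standard_normal(n) / np.sqrt(n)

alpha, tau = 1e-6, 1.1                  # regularization parameter, tolerance
L = np.linalg.norm(A.T @ A, 2) + alpha  # Lipschitz constant of the gradient
f = np.zeros(n)                         # Step 1: initial guess
for it in range(20000):
    residual = A @ f - y                # Step 2: forward solve, residuals
    if np.linalg.norm(residual) <= tau * delta:
        break                           # Step 9: discrepancy principle
    grad = A.T @ residual + alpha * f   # Steps 3-4: gradient (via the adjoint)
    f = f - grad / L                    # Steps 7-8: update with step 1/L
```

The conjugate-direction Steps 5-6 are omitted here for brevity; this plain gradient variant already exhibits the monotone decrease of the residual guaranteed by Theorem 4.3.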
Now, we perform some numerical experiments using the above algorithm.
Example 4.4. In the first numerical experiment we take
The final state observation and the observation over the subregion are given by
It is easy to check that satisfies the problem (3) for . The noisy data and are generated as follows:
where is the noise level and is generated by the MATLAB function . The exact solutions and together with the numerical solutions for various values of the noise level are shown in Figure 1. Due to the discrepancy principle we use the stopping criterion , where is the tolerance; for noise-free data , and the regularization parameter
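The perturbation can be mimicked as follows. The multiplicative form and all names here are assumptions, since the exact formula's symbols were lost; MATLAB's randn is mirrored by NumPy's standard normal generator.

```python
import numpy as np

rng = np.random.default_rng(42)

def add_noise(y, p):
    """Perturb exact data y with relative noise level p (e.g. p = 0.01
    for 1% noise) using standard normal numbers, as MATLAB's randn."""
    gamma = rng.standard_normal(y.shape)
    return y * (1.0 + p * gamma)

y_exact = np.sin(np.linspace(0.0, np.pi, 101))
y_noisy = add_noise(y_exact, 0.01)   # 1% multiplicative noise
```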
Example 4.5. In the second numerical experiment we take
The final state observation and the observation over the subregion are given by
satisfies the problem (3) for . The exact solutions and together with the numerical solutions for various values of the noise level are presented in Figure 2. The stopping criterion is for noise-free data, and the regularization parameter
Figure 1. Results obtained by conjugate gradient method for Example 4.4
Figure 2. Exact solutions and and numerical reconstructions of and for various amounts of noise p = {1, 2}% for Example 4.5
Example 4.6. This example tries to reconstruct both and when an analytical solution of the problem (3) is not available:
The final state observation and the observation over the subregion are computed numerically for . The exact solutions and together with the numerical solutions for various values of the noise level are presented in Figure 3. The stopping criterion is for noise-free data, and the regularization parameter .
Figure 3. Exact solutions and reconstructions of and for various amounts of noise p = {2, 4}% for Example 4.6
Table 1. The values of it, and with various noise levels for Example 4.4, Example 4.5 and Example 4.6
In Table 1 we present some numerical results: the stopping iteration numbers and the percentage errors in and . Here it denotes the stopping iteration number, and and denote the percentage errors in and , respectively, where and are the approximate values of and .

References

[1]  O. M. Alifanov, Inverse Heat Transfer Problems, Springer-Verlag, 1994.
[2]  J. Bear, Dynamics of Fluids in Porous Media, Elsevier, New York, 1972.
[3]  V. Beck, B. Blackwell, St. C.R. Clair, Inverse Heat Conduction, Ill-Posed Problems, Wiley-Interscience, New York, 1985.
[4]  A. M. Bruaset, A Survey of Preconditioned Iterative Methods, Addison-Wesley, New York, 1995.
[5]  M. Choulli, M. Yamamoto, Generic well-posedness of an inverse parabolic problem - the Hölder-space approach, Inverse Problems, 12 (1996), 195-205.
[6]  M. Choulli, An inverse problem for a semilinear parabolic equation, Inverse Problems, 10 (1994), 1123-1132.
[7]  J.V. Daniel , The Approximate Minimization of Functionals , Prentice-Hall, Englewood Cliffs, 1971.
[8]  P. DuChateau, Monotonicity and invertibility of coefficient-to-data mappings for parabolic inverse problems, SIAM J. Math. Anal. 26(6)(1995) 1473-1487.
[9]  P. DuChateau, Introduction to inverse problems in partial differential equations for engineers, physicists and mathematicians, In: Parameter Identification and Inverse Problems in Hydrology, Geology and Ecology (J. Gottlieb, P. DuChateau, eds), Kluwer Academic Publishers, The Netherlands (1996) 3-38.
[10]  P. DuChateau, R. Thelwell, G. Butters, Analysis of an adjoint problem approach to the identification of an unknown diffusion coefficient, Inverse Problems 20(2004) 601-625.
[11]  R. Fletcher, C.M. Reeves, Function Minimization by Conjugate Gradients, Computer J., 7(1964), 149-154.
[12]  O. F. Gozukizil, M. Yaman, A Note on the Unique Solvability of an Inverse Problem with Integral Overdetermination, Applied Mathematics E-Notes, 8 (2008), 223-230.
[13]  A. Hasanov, A. Demir, A. Erdem, Monotonicity of input-output mappings in inverse coefficient and source problems for parabolic equations, J. Math. Anal. Appl., 335 (2007), 1434-1451.
[14]  A. Hasanov, P. DuChateau, B. Pektas, An adjoint problem approach and coarse-fine mesh method for identification of the diffusion coefficient in a linear parabolic equation, Inverse and Ill-Posed Problems 14(2006) 435-463.
[15]  A. Hasanov, J. Mueller, A numerical method for backward parabolic problems with non-self adjoint elliptic operators, Appl. Numer. Math., 37(2001), 55-78.
[16]  A. Hasanov, Simultaneous determination of source terms in a linear parabolic problem from the final overdetermination: Weak solution approach, Journal of Mathematical Analysis and Applications, 330(2) (2007), 766-779.
[17]  A. Hasanov, An inverse source problem with single Dirichlet type measured output data for a linear parabolic equation, Appl. Math. Lett., 24(7) (2011), 1269-1273.
[18]  V. Isakov, Inverse parabolic problems with the final overdetermination, Comm. Pure Appl. Math., 44(1991), 185-209.
[19]  V. Isakov, Inverse Source Problems, Mathematical Surveys and Monographs, 34, American Mathematical Society, Providence, RI, 1990.
[20]  V. L. Kamynin, On the inverse problem of determining the right-hand side of a parabolic equation under an integral overdetermination condition, Math. Notes, 77-4(2005), 482-493.
[21]  S. Narayan, M. B. Dusseault, D. C. Nobes, Inversion techniques applied to resistivity inverse problems, Inverse Problems, 10 (1994), 669-686.
[22]  E. Polak, Optimization: Algorithms and Consistent Approximations, Springer-Verlag, New York, 1997.
[23]  A. I. Prilepko, A. B. Kostin, On some inverse problems for parabolic equations with final and integral overdetermination, Mat. Sb., 183(4) (1992).
[24]  A.I. Prilepko, D.G. Orlovskii and I.A. Vasin, Methods for Solving Inverse Problems in Mathematical Physics, Marcel Dekker, New York, (2000).
[25]  A. I. Prilepko, V. V. Solov'ev, I. A. Vasin, Methods for Solving Inverse Problems in Mathematical Physics, Dekker, New York, 2000.
[26]  M. Renardy, W.J. Hursa, J.A. Nohel, Mathematical Problems in Viscoelasticity, Wiley, New York, 1987.
[27]  A. Tikhonov, V. Arsenin, Solutions of Ill-Posed Problems, John Wiley, New York, 1977.
[28]  D. S. Tkachenko, On an Inverse Problem for a Parabolic Equation, Mat. Zametki, 75:5 (2004), 729-743.
[29]  F. P. Vasil'ev, Methods for Solving Extremal Problems, Nauka, Moscow, 1981.
[30]  J. Wang, A. J. S. Neto, F. D. M. Neto, J. Su, Function estimation with Alifanov's iterative regularization method in linear and nonlinear heat conduction problems, Applied Mathematical Modelling, 26 (2002), 1093-1111.
[31]  C. Zheng, G.D. Bennett, Applied Contaminant Transport Modelling: Theory and Practice, Van Nostrand Reinhold, New York, 1995.