Applied Mathematics
p-ISSN: 2163-1409 e-ISSN: 2163-1425
2012; 2(2): 21-27
doi: 10.5923/j.am.20120202.05
M. T. Darvishi, Norollah Darvishi
Department of Mathematics, Razi University, Kermanshah, 67149, Iran
Correspondence to: M. T. Darvishi, Department of Mathematics, Razi University, Kermanshah, 67149, Iran.
Copyright © 2012 Scientific & Academic Publishing. All Rights Reserved.
In this paper, we present the SOR-Steffensen-Newton (SOR-SN) algorithm to solve systems of nonlinear equations. We study the convergence of the method. The computational aspects of the method are also studied using some numerical experiments. In comparison with the SOR-Newton, SOR-Steffensen and SOR-Secant methods, our method performs better in terms of CPU time and number of iterations.
Keywords: Nonlinear System, SOR-Newton Method, SOR-Steffensen Method, SOR-Secant Method, CPU Time
Consider the system of nonlinear equations

$$F(x) = 0, \qquad\qquad (1)$$

where $F = (f_1, f_2, \ldots, f_n)^T : D \subseteq \mathbb{R}^n \rightarrow \mathbb{R}^n$, that is, a system with $n$ equations and $n$ unknowns. Finding roots of systems of nonlinear equations efficiently is of major importance and has widespread applications in numerical and applied mathematics. There are many approaches to solve (1). The second order Newton's method is one of the most common iterative methods adopted for finding approximate solutions of nonlinear system (1) (for more details see [1]). Newton's method for (1) is an important and basic method which converges quadratically. Recently, some third and fourth order iterative methods have been proposed and analyzed for solving systems of nonlinear equations; they improve some classical methods such as Newton's method and the Chebyshev-Halley methods. It has been demonstrated that these methods are efficient and can compete with Newton's method; for more details see [2-10]. To solve system (1), Darvishi and Barati [11-13] presented some high order iterative methods, free from the second order derivative of the function $F$. Darvishi [14,15] presented some multi-step iterative methods, free from the second order derivative of $F$. Frontini and Sormani [16] obtained a third-order method based on a quadrature formula to solve systems of nonlinear equations. Babajee et al. [17] proposed a fourth order iterative method to solve system (1).

The one-step SOR-Newton method to solve nonlinear system (1), presented in [1], is as follows:

$$x_i^{(k+1)} = x_i^{(k)} - \omega\,\frac{f_i\big(x_1^{(k+1)},\ldots,x_{i-1}^{(k+1)},x_i^{(k)},\ldots,x_n^{(k)}\big)}{\partial_i f_i\big(x_1^{(k+1)},\ldots,x_{i-1}^{(k+1)},x_i^{(k)},\ldots,x_n^{(k)}\big)}, \qquad i = 1, \ldots, n,$$

where $k$ is the iteration index, $\omega$ is the relaxation parameter of the SOR method, and $\partial_i f_i$ denotes the partial derivative of $f_i$ with respect to its $i$-th argument. It is obvious that for $\omega = 1$ this iteration reduces to the one-step (nonlinear) Gauss-Seidel-Newton method.
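To make the componentwise structure of this iteration concrete, the following is a minimal Python sketch of the one-step SOR-Newton method described above. The names `F` and `J` (the residual vector and its Jacobian) and the default tolerances are our own illustrative choices, not part of the original paper.

```python
import numpy as np

def sor_newton(F, J, x0, omega=1.0, tol=1e-10, max_iter=100):
    """One-step SOR-Newton iteration for F(x) = 0.

    F : callable returning the residual vector (f_1, ..., f_n) at a point
    J : callable returning the Jacobian matrix of F at a point
    """
    x = np.asarray(x0, dtype=float).copy()
    for k in range(1, max_iter + 1):
        for i in range(x.size):
            # the update of x_i already sees the new values of x_1, ..., x_{i-1}
            fi = F(x)[i]
            dfi = J(x)[i, i]            # partial derivative of f_i w.r.t. x_i
            x[i] -= omega * fi / dfi    # relaxed one-dimensional Newton step
        if np.linalg.norm(F(x), np.inf) < tol:
            return x, k
    return x, max_iter
```

For instance, one could call `sor_newton(F, J, np.array([1.0, 1.0]), omega=0.9)` for a hypothetical two-dimensional system with residual `F` and Jacobian `J`.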
The aim of this paper is to introduce the SOR-SN algorithm. To achieve this, we follow the two-step Steffensen type method presented by Ren et al. [18], who introduced a class of one-parameter Steffensen type methods with fourth-order convergence to solve nonlinear equations, given in (2). For each component of the system, an intermediate value is first defined by solving equation (3); this intermediate value is then obtained from (3) explicitly, in terms of Steffensen type divided differences. Finally, the new iterate is set according to (4).
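The precise componentwise formulas of the SOR-SN method are those referred to in (2)-(4) above, which are not reproduced here. Purely as an illustration of the underlying idea, the sketch below replaces the partial derivative used in the SOR-Newton sweep by a Steffensen divided difference along the $i$-th coordinate and adds a second correction step; the structure shown is our assumption, not the authors' exact scheme.

```python
import numpy as np

def sor_steffensen_newton_sweep(F, x, omega):
    """One illustrative componentwise sweep in the spirit of SOR-SN (assumed form)."""
    n = x.size
    for i in range(n):
        fi = F(x)[i]
        if fi == 0.0:
            continue                   # this component is already solved
        e = np.zeros(n)
        e[i] = fi
        # Steffensen divided difference replacing the partial derivative of f_i w.r.t. x_i
        g = (F(x + e)[i] - fi) / fi
        y = x.copy()
        y[i] = x[i] - omega * fi / g   # first (Steffensen type) step
        fyi = F(y)[i]
        x[i] = y[i] - omega * fyi / g  # second step reusing the same divided difference
    return x
```

A full method would repeat such sweeps until a stopping criterion in the infinity norm is met, as in the numerical experiments reported below.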
To study the convergence of the SOR-SN method, we consider the following assumptions. b) For system (1), $F$ is the gradient of a function $f$, namely $F = \nabla f$, and the function $f$ is strictly convex. c) On the domain $D$, the function $f$ is twice continuously differentiable. d) The level set $L(x^0) = \{x \in D : f(x) \le f(x^0)\}$ is nonempty and compact for some $x^0$. Note that the minimum of $f$ over $L(x^0)$ exists, and the corresponding set of minimizers is nonempty and compact. e) A nondegeneracy condition holds at every point of the level set, unless that point is the one at which $f$ attains its minimum; here $H(x)$ denotes the Hessian matrix of $f$ at $x$.

Assumptions a) and b) show that the Hessian matrix of $f$ is positive definite. It is obvious that the level sets $L(c) = \{x : f(x) \le c\}$ are convex for all $c$. Assumption d) shows that $f$ attains its minimum at some point $x^*$. If assumptions a), b) and c) hold and the function $f$ attains its minimum at some point $x^*$, then assumption d) is nontrivially satisfied; that is, there exists $x^0$ such that the level set $L(x^0)$ is nonempty and compact, and $f$ is non-increasing along the generated sequence. From b) the minimum point $x^*$ is unique. Finally, a point $x^*$ is the minimum point of the function $f$ if and only if $F(x^*) = 0$.
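For readability, the minimization framework behind these assumptions can be restated compactly; the symbols $f$, $F$, $L(x^0)$ and $x^*$ are the ones introduced above, and the display is only a summary of assumptions b) and d) together with the last remark, not an additional hypothesis:

$$F(x) = \nabla f(x), \qquad L(x^0) = \{\, x : f(x) \le f(x^0) \,\}, \qquad F(x^*) = 0 \;\Longleftrightarrow\; x^* = \arg\min_{x} f(x).$$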
Now we can present the following theorem on the convergence of the SOR-SN method.

Theorem 1. Let the sequence $\{x^k\}$ be generated by the SOR-SN method; in fact, the iteration can be written in the form (5). Suppose, in addition, that the auxiliary quantities appearing in (5) are defined as above, for a parameter which satisfies the stated condition. Then, for any admissible starting point, the sequence $\{x^k\}$ is well-defined and converges to $x^*$.
Proof. We first prove that the sequence $\{x^k\}$ is well-defined. By the structure of the sets introduced above, the first iterate belongs to the level set $L(x^0)$; since the sequence of function values is non-increasing, the next iterate remains in this set. In a similar manner, and by mathematical induction, we can prove that all terms of the sequence $\{x^k\}$ are contained in $L(x^0)$; hence the sequence is well-defined.

We now prove that $\{x^k\}$ converges. By using Taylor's theorem on the function $f$ along the open line segment joining $x^k$ and $x^{k+1}$ we obtain (6); thus, from (5), we obtain (7) and, for a parameter satisfying the stated condition, (8). Consequently, from (8), the decrease $f(x^k) - f(x^{k+1})$ is bounded from below by a nonnegative quantity, since the Hessian matrix of $f$ is positive semidefinite. We consider two cases for this quantity.

Case 1. By assumption e) there exists some subsequence of $\{x^k\}$ which converges to a limit point; as $\{f(x^k)\}$ is a non-increasing sequence and $f$ is a continuous function, the limit point is the minimizer and the whole sequence converges to $x^*$.

Case 2. We know that the sequence $\{f(x^k)\}$ is non-increasing and bounded from below, hence it is a convergent sequence. Suppose that a limit point of $\{x^k\}$ is given, and consider two index sets of iterations, according to the size of the steps taken. From (7), and from the convergence of Steffensen's method, the steps tend to zero along the first set; as the sequence is bounded, this shows that the first set is nonempty. If the second set is empty, then, as $\{f(x^k)\}$ is a non-increasing sequence, the whole sequence converges to $x^*$. Otherwise, for the indices of the second set we define suitable sets; these sets are closed and nonempty, so the distance between them is positive. Since the steps tend to zero, there is an integer beyond which every step is smaller than this distance. From the continuity of $F$ there exists a neighbourhood on which the relations (9) and (10) hold for all points of a suitably defined set; otherwise, for every sequence which reduces to zero there exist corresponding points, all contained in a compact set, and hence a limit point for which, from (9) and (10), two inequalities hold involving the inner product of the corresponding vectors. This leads to a contradiction, because $f$ is a strictly convex function. Hence, beyond some index, no iteration can belong to the second set, so this set is empty, which is again a contradiction. Thus the sequence converges to $x^*$.

For the remaining case we can state a similar discussion, and the proof is complete.
In the reported results, $\|\cdot\|_\infty$ denotes the infinity norm. Note that in the SOR-Secant method we need two initial guesses, while the other methods need only one initial guess. In the following, we present some examples to compare these methods.

Example 1. Consider the following nonlinear system.
The exact solution of the system is known. For an initial guess we choose a starting point, and for the SOR-Secant method a second initial guess is also chosen. The numerical results are given in Table 1.
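Since the comparisons in the tables are in terms of iteration counts and CPU time, the following is a small sketch of how such a comparison can be organized; the `benchmark` helper, the dictionary of method callables and the tolerance shown are illustrative assumptions, not the exact experimental setup of the paper.

```python
import time
import numpy as np

def benchmark(methods, F, x0, tol=1e-10, max_iter=200):
    """Run each method on F from x0; record iteration count and CPU time.

    methods : dict mapping a name to a callable (F, x0, tol=..., max_iter=...) -> (x, iters)
    """
    results = {}
    for name, method in methods.items():
        start = time.perf_counter()
        x, iters = method(F, x0.copy(), tol=tol, max_iter=max_iter)
        cpu_time = time.perf_counter() - start
        results[name] = {
            "iterations": iters,
            "cpu_time": cpu_time,
            "residual": float(np.linalg.norm(F(x), np.inf)),  # infinity-norm residual
        }
    return results
```

For instance, the `sor_newton` sketch above can be registered as `{"SOR-Newton": lambda F, x0, **kw: sor_newton(F, J, x0, omega=1.0, **kw)}` for a given Jacobian `J`.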
Example 2. Consider another nonlinear system; its exact solution is known. For an initial guess we choose a starting point, and for the SOR-Secant method a second initial guess is also chosen. The numerical results are given in Table 2.
Example 3. Consider another nonlinear system; its exact solutions are known for different values of the problem parameter. For an initial guess we choose a starting point, and for the SOR-Secant method a second initial guess is also chosen. The numerical results are given in Table 3.
Example 4. One of the exact solutions of the system is known for different values of the problem parameter. For an initial guess we choose a starting point, and for the SOR-Secant method a second initial guess is also chosen. The numerical results are given in Table 4.
In the tables, $x^k$ denotes the $k$-th approximation of the solution.

Example 5. We consider the following boundary value problem, which is given in [3]. The exact solution of this problem is known. To discretize the problem we use the second order finite difference method; the zeros of the resulting nonlinear functions provide an estimation of the solution of the problem. In this system, second order approximations are used for the derivatives appearing in the equation, with a fixed step size; the grid nodes are set accordingly, and each unknown of the system denotes the approximate value of the solution at the corresponding node.
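Because the boundary value problem of [3] is not reproduced here, the sketch below only illustrates the discretization step described above on an assumed model problem $u'' = g(t, u, u')$ with Dirichlet boundary conditions: second order central differences on a uniform grid turn the problem into a nonlinear system whose unknowns are the nodal values.

```python
import numpy as np

def discretize_bvp(g, a, b, ua, ub, n):
    """Second order finite-difference discretization of u'' = g(t, u, u'),
    u(a) = ua, u(b) = ub (an assumed model problem, not the one from [3]).

    Returns a residual function G such that G(x) = 0 is the nonlinear system
    whose unknowns approximate u at the n interior grid nodes.
    """
    h = (b - a) / (n + 1)                     # uniform step size
    t = a + h * np.arange(1, n + 1)           # interior nodes

    def G(x):
        u = np.concatenate(([ua], x, [ub]))   # attach the boundary values
        res = np.empty(n)
        for i in range(1, n + 1):
            upp = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / h**2  # central approx. of u''
            up = (u[i + 1] - u[i - 1]) / (2.0 * h)           # central approx. of u'
            res[i - 1] = upp - g(t[i - 1], u[i], up)
        return res

    return G
```

The returned `G` plays the role of `F` in the iteration sketches above.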
In Table 5 some results can be observed. For each initial estimation, we report the number of iterations and the CPU time needed to converge to the solution. For the SOR-Secant method, the second initial guess is chosen differently in Table 5 and in Table 6. As we can see from Tables 5 and 6, in all cases the number of iterations and the CPU time of our new method are smaller than those of the other methods.