American Journal of Mathematics and Statistics
p-ISSN: 2162-948X e-ISSN: 2162-8475
2018; 8(5): 140-143
doi:10.5923/j.ajms.20180805.06

A Projection Method for Variational Inequalities over the Fixed Point Set

Hoang Thi Cam Thach1, Nguyen Kieu Linh2
1Department of Mathematics, University of Transport Technology, Vietnam
2Department of Scientific Fundamentals, Posts and Telecommunications Institute of Technology, Hanoi, Vietnam
Correspondence to: Hoang Thi Cam Thach, Department of Mathematics, University of Transport Technology, Vietnam.
Copyright © 2018 The Author(s). Published by Scientific & Academic Publishing.
This work is licensed under the Creative Commons Attribution International License (CC BY).
http://creativecommons.org/licenses/by/4.0/

Abstract: In this paper, we introduce a new iteration method, called the projection method, for solving a variational inequality over the fixed point set of a firmly nonexpansive mapping, where the cost function is continuous and monotone. The algorithm is a variant of the subgradient and projection methods.
Keywords: Variational inequality, Subgradient algorithm, Firmly nonexpansive mapping
Cite this paper: Hoang Thi Cam Thach, Nguyen Kieu Linh, A Projection Method for Variational Inequalities over the Fixed Point Set, American Journal of Mathematics and Statistics, Vol. 8 No. 5, 2018, pp. 140-143. doi: 10.5923/j.ajms.20180805.06.
1. Introduction

Let $C$ be a nonempty closed convex subset of a real Hilbert space $\mathcal{H}$ with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\|\cdot\|$, let $T: C \to C$ be a firmly nonexpansive mapping, i.e.,
\[
\|T(x) - T(y)\|^2 \le \langle T(x) - T(y), x - y \rangle \quad \text{for all } x, y \in C,
\]
and let $F: C \to \mathcal{H}$ be a given mapping. We consider the following variational inequality over the fixed point set (shortly, VI(F, Fix(T))): Find $x^* \in \operatorname{Fix}(T)$ such that
\[
\langle F(x^*), x - x^* \rangle \ge 0 \quad \text{for all } x \in \operatorname{Fix}(T),
\]
where $\operatorname{Fix}(T) = \{ x \in C : T(x) = x \}$ is the fixed point set of $T$.
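As an orienting special case (added here for illustration; this is a standard fact rather than text from the paper), if $T$ is itself the metric projection $P_C$ onto $C$ (recalled in Section 2), then VI(F, Fix(T)) reduces to the classical variational inequality over $C$:

```latex
% Standard special case, stated for illustration (not from the paper):
% the metric projection P_C is firmly nonexpansive and Fix(P_C) = C, so
% taking T = P_C turns VI(F, Fix(T)) into the classical problem VI(F, C).
\[
  T = P_C
  \;\Longrightarrow\;
  \operatorname{Fix}(T) = C
  \quad\text{and}\quad
  \text{VI}(F, \operatorname{Fix}(T)):\;
  \text{find } x^* \in C \text{ with } \langle F(x^*), x - x^* \rangle \ge 0
  \;\text{for all } x \in C.
\]
```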
Problem VI(F, Fix(T)) is a special class of equilibrium problems over a nonempty closed convex constraint set. Many iterative methods for solving such problems have been presented in [1, 2, 3, 4, 5, 8, 9]. In this paper, we investigate a new and efficient globally convergent algorithm for solving variational inequalities over the fixed point set of a firmly nonexpansive mapping. Most current algorithms for this problem are based on the metric projection onto a nonempty closed convex constraint set, which, in general, is not easy to compute. The fundamental difference here is that, at each main iteration of the proposed algorithm, we only require computing the projection onto a simple set (a closed ball), which is available in explicit form. Moreover, by choosing suitable regularization parameters, we show that the iterative sequence globally converges to a solution of Problem VI(F, Fix(T)).

The paper is organized as follows. Section 2 recalls some concepts related to variational inequalities over the fixed point set of a nonexpansive mapping that will be used in the sequel, and presents a new iteration scheme. Section 3 establishes the convergence theorem for the iteration sequence presented in Section 2, which is the main result of our paper.
2. Preliminaries

Let $C$ be a nonempty closed convex subset of $\mathcal{H}$. For $x \in \mathcal{H}$, we denote by $P_C(x)$ the metric projection of $x$ onto $C$, i.e.,
\[
P_C(x) = \arg\min \{ \|y - x\| : y \in C \}.
\]
The mapping $F: C \to \mathcal{H}$ is said to be
(i) monotone on $C$ if for each $x, y \in C$,
\[
\langle F(x) - F(y), x - y \rangle \ge 0;
\]
(ii) pseudomonotone on $C$ if for each $x, y \in C$,
\[
\langle F(y), x - y \rangle \ge 0 \;\Longrightarrow\; \langle F(x), x - y \rangle \ge 0.
\]
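For intuition, we add a standard example here (it is not drawn from the paper): an affine mapping generated by a positive semidefinite linear operator is monotone, and every monotone mapping is automatically pseudomonotone.

```latex
% Illustrative example (not from the paper): F(x) = Ax + b with a positive
% semidefinite linear operator A is monotone, since for all x, y,
%   <F(x) - F(y), x - y> = <A(x - y), x - y> >= 0.
% Monotonicity implies pseudomonotonicity: if <F(y), x - y> >= 0, then
%   <F(x), x - y> = <F(x) - F(y), x - y> + <F(y), x - y> >= 0.
\[
  F(x) = Ax + b, \quad \langle Az, z \rangle \ge 0 \ \forall z
  \;\Longrightarrow\;
  \langle F(x) - F(y), x - y \rangle = \langle A(x - y), x - y \rangle \ge 0 .
\]
```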
It is well-known that the gradient method in [10] solves the convex optimization problem
\[
\min \Big\{ f(x) : x \in C := \bigcap_{i=1}^{m} C_i \Big\}, \tag{2.1}
\]
where $C_i$ is a closed convex subset of $\mathcal{H}$ for all $i = 1, \dots, m$, and $f$ is a differentiable convex function on $\mathcal{H}$. The iteration sequence $\{x^k\}$ of the method is defined by
\[
x^{k+1} = P_C\big( x^k - \lambda_k \nabla f(x^k) \big), \qquad \lambda_k > 0, \; k = 0, 1, 2, \dots
\]
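A minimal sketch of one such iteration, written in Python for concreteness (the names `grad_f`, `proj_C`, and `step` are placeholders for the problem data, not notation from the paper):

```python
import numpy as np


def projected_gradient_step(x, grad_f, proj_C, step):
    """One projected-gradient iteration: x_next = P_C(x - step * grad_f(x)).

    proj_C must evaluate the metric projection onto the whole constraint
    set C, which is exactly the operation that can be expensive when C is
    a general closed convex set.
    """
    return proj_C(x - step * grad_f(x))


# Toy usage: minimize ||x - d||^2 over the unit ball.
d = np.array([3.0, 4.0])
proj_unit_ball = lambda y: y if np.linalg.norm(y) <= 1.0 else y / np.linalg.norm(y)
x = np.zeros(2)
for _ in range(50):
    x = projected_gradient_step(x, grad_f=lambda y: 2.0 * (y - d),
                                proj_C=proj_unit_ball, step=0.25)
# x is now close to d / ||d|| = (0.6, 0.8), the projection of d onto the ball.
```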
When $C$ is an arbitrary closed convex set, the computation of the metric projection $P_C$ is, in general, not easy, and hence the method is not efficient for solving the convex optimization problem. To overcome this drawback, Yamada in [11] proposed the fixed point iteration method
\[
x^{k+1} = T(x^k) - \lambda_{k+1} \nabla f\big(T(x^k)\big), \qquad k = 0, 1, 2, \dots,
\]
where $T$ is a nonexpansive mapping such that $\operatorname{Fix}(T) = \bigcap_{i=1}^{m} C_i = C$, for example the composition $T = P_{C_m} P_{C_{m-1}} \cdots P_{C_1}$ of the projections onto the individual sets $C_i$. Under certain conditions on the parameters $\{\lambda_k\}$, the sequence $\{x^k\}$ converges to a solution of Problem (2.1).
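A minimal sketch of one iteration of this type, assuming the update has the form recalled above (the function names are illustrative only, not taken from [11]):

```python
import numpy as np


def hybrid_steepest_descent_step(x, T, grad_f, lam):
    """One Yamada-type step: x_next = T(x) - lam * grad_f(T(x)).

    Only the nonexpansive mapping T is evaluated (e.g. a composition of
    projections onto the simple sets C_i); no projection onto the full
    intersection C is required.
    """
    y = T(x)
    return y - lam * grad_f(y)


# Toy usage: C is the intersection of the unit ball and the half-space y[0] >= 0.
proj_ball = lambda y: y if np.linalg.norm(y) <= 1.0 else y / np.linalg.norm(y)
proj_half = lambda y: np.array([max(y[0], 0.0), y[1]])
T = lambda y: proj_ball(proj_half(y))          # nonexpansive, Fix(T) = C
grad_f = lambda y: 2.0 * (y - np.array([-2.0, 0.5]))
x = np.array([1.0, 1.0])
for k in range(1, 200):
    x = hybrid_steepest_descent_step(x, T, grad_f, lam=1.0 / (k + 1))
# x gradually approaches (0.0, 0.5), the minimizer of f over C in this toy instance.
```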
Very recently, Iiduka in [6] proposed a fixed point optimization algorithm for solving the variational inequality over the fixed point set Fix(T) of a firmly nonexpansive mapping $T: C \to C$, where $C$ is a nonempty closed convex subset of $\mathcal{H}$: find $x^* \in \operatorname{Fix}(T)$ such that $\langle F(x^*), x - x^* \rangle \ge 0$ for all $x \in \operatorname{Fix}(T)$. In each iteration of that algorithm, in order to obtain the next iterate $x^{k+1}$ from the current iterate $x^k$, one orthogonal projection onto a closed convex set containing $\operatorname{Fix}(T)$ is calculated. Under certain conditions on the parameters, and provided an asymptotic optimization condition is satisfied, the iterative sequence converges to a solution of the variational inequality over the fixed point set of the firmly nonexpansive mapping. In fact, the asymptotic optimization condition is, in some cases, very difficult to verify. In order to avoid this requirement, we propose a new iteration method that requires neither the asymptotic optimization condition nor the computation of the metric projection onto a general closed convex set. Our algorithm is described in more detail as follows.

Algorithm 2.2.
Initialization. Take a starting point $x^0$, a positive number $\rho > 0$, and positive parameter sequences verifying the conditions in (2.2).
Step 1. At iteration $k$, choose an arbitrary element satisfying the prescribed condition, and define the auxiliary point and the constraint set used in the next step.
Step 2. Compute the next iterate $x^{k+1}$ as the metric projection of the auxiliary point onto the constraint set defined in Step 1.
Note that this constraint set is a closed ball. Therefore, the metric projection onto it is given by an explicit formula.
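Since the projection in Step 2 is onto a closed ball, it can be evaluated in closed form. A minimal sketch of this computation (the names `center`, `radius`, and `project_ball` are illustrative and not taken from the paper):

```python
import numpy as np


def project_ball(x, center, radius):
    """Metric projection of x onto the closed ball B(center, radius).

    If x already lies in the ball, it is returned unchanged; otherwise it
    is mapped to the boundary point on the segment joining the center to x.
    """
    d = x - center
    dist = np.linalg.norm(d)
    if dist <= radius:
        return np.array(x, dtype=float)
    return center + (radius / dist) * d


# Example: project (3, 4) onto the ball of radius 2 centered at the origin.
print(project_ball(np.array([3.0, 4.0]), np.array([0.0, 0.0]), 2.0))  # [1.2 1.6]
```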
3. Convergence Results

Lemma 3.1. Let $\{a_k\}$, $\{b_k\}$ and $\{c_k\}$ be three nonnegative sequences satisfying the following condition:
\[
a_{k+1} \le (1 + c_k)\, a_k + b_k \quad \text{for all } k \ge 0.
\]
If $\sum_{k=0}^{\infty} b_k < \infty$ and $\sum_{k=0}^{\infty} c_k < \infty$, then $\lim_{k \to \infty} a_k$ exists.
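As a concrete instance (added purely as an illustration, assuming the classical form of the sequence lemma recalled above):

```latex
% Illustration (not from the paper): take b_k = c_k = 2^{-k}. Then
% a_{k+1} <= (1 + 2^{-k}) a_k + 2^{-k}, and both series sum to 2 < infinity,
% so the hypotheses of Lemma 3.1 hold and lim_{k -> infinity} a_k exists.
\[
  a_{k+1} \le \big(1 + 2^{-k}\big) a_k + 2^{-k},
  \qquad
  \sum_{k=0}^{\infty} 2^{-k} = 2 < \infty
  \;\Longrightarrow\;
  \lim_{k \to \infty} a_k \ \text{exists.}
\]
```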
We are now in a position to prove the convergence theorem.

Theorem 3.2. Let $C$ be a nonempty closed convex subset of $\mathcal{H}$, let $T: C \to C$ be a firmly nonexpansive mapping such that $\operatorname{Fix}(T)$ is nonempty and bounded by $M > 0$, and let $F: C \to \mathcal{H}$ be monotone. Then, the sequence $\{x^k\}$ generated by Algorithm 2.2 converges to a solution of Problem VI(F, Fix(T)).

Proof. We divide the proof into five steps.
Step 1. We first establish estimate (3.1).
Indeed, one verifies that (3.2) holds for all $k \ge 0$. Substituting the iterates of Algorithm 2.2 into (3.2) and combining the result with an auxiliary inequality yields (3.3). From (3.3) and the equality (3.4), we get (3.5). It then follows that (3.6) holds, which in turn leads to (3.7); this implies (3.1).
Step 2. We next establish two limit properties of the iterates.
Indeed, using (3.3), (3.4) and the definition of a firmly nonexpansive mapping, we obtain (3.8) and (3.9). Combining these with the preceding estimates, and using the nonexpansive property of $T$, the desired limit properties follow. Moreover, since the sequence $\{x^k\}$ is bounded, there exist a point $\bar{x}$ and a subsequence $\{x^{k_i}\}$ of $\{x^k\}$ which converges to $\bar{x}$ as $i \to \infty$.
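For the reader's convenience, we recall the standard form of Opial's condition invoked below (a well-known property of Hilbert spaces, restated here; it is not quoted from the paper):

```latex
% Opial's condition (satisfied by every Hilbert space): whenever a sequence
% {x_n} converges weakly to x, then for every y != x,
%   liminf_{n -> infinity} ||x_n - x|| < liminf_{n -> infinity} ||x_n - y||.
\[
  x_n \rightharpoonup x
  \;\Longrightarrow\;
  \liminf_{n \to \infty} \|x_n - x\| < \liminf_{n \to \infty} \|x_n - y\|
  \quad \text{for all } y \ne x .
\]
```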
Step 3. We claim that $\bar{x} \in \operatorname{Fix}(T)$.
Indeed, suppose, on the contrary, that $T(\bar{x}) \ne \bar{x}$. Then, combining Step 2 with Opial's condition, we arrive at a contradiction. So, $\bar{x} \in \operatorname{Fix}(T)$.
Step 4. We claim that $\bar{x}$ solves Problem VI(F, Fix(T)) and that the whole sequence $\{x^k\}$ converges to $\bar{x}$.
Indeed, from (3.6) and Step 3 it follows that $\bar{x}$ is a local minimizer of an associated convex function $g$. Since $\operatorname{Fix}(T)$ is nonempty and convex, $\bar{x}$ is also a global minimizer of $g$, i.e., $g(\bar{x}) \le g(x)$ for all $x \in \operatorname{Fix}(T)$. This means that $\bar{x} \in \operatorname{Sol}(F, \operatorname{Fix}(T))$. To prove that the whole sequence $\{x^k\}$ converges to $\bar{x}$, suppose that another subsequence $\{x^{k_j}\}$ converges to some point $\hat{x}$ as $j \to \infty$. In the same way as above, we also have $\hat{x} \in \operatorname{Sol}(F, \operatorname{Fix}(T))$. Suppose that $\hat{x} \ne \bar{x}$. Then, using Opial's condition, we obtain a contradiction. Thus, the sequence $\{x^k\}$ converges to $\bar{x} \in \operatorname{Sol}(F, \operatorname{Fix}(T))$.