American Journal of Mathematics and Statistics

p-ISSN: 2162-948X    e-ISSN: 2162-8475

2018;  8(5): 140-143

doi:10.5923/j.ajms.20180805.06

 

A Projection Method for Variational Inequalities over the Fixed Point Set

Hoang Thi Cam Thach1, Nguyen Kieu Linh2

1Department of Mathematics, University of Transport Technology, Vietnam

2Department of Scientific Fundamentals, Posts and Telecommunications Institute of Technology, Hanoi, Vietnam

Correspondence to: Hoang Thi Cam Thach, Department of Mathematics, University of Transport Technology, Vietnam.


Copyright © 2018 The Author(s). Published by Scientific & Academic Publishing.

This work is licensed under the Creative Commons Attribution International License (CC BY).
http://creativecommons.org/licenses/by/4.0/

Abstract

In this paper, we introduce a new iteration method, called the projection method, for solving a variational inequality over the fixed point set of a firmly nonexpansive mapping, where the cost function is continuous and monotone. The algorithm is a variant of subgradient and projection methods.

Keywords: Variational inequality, Subgradient algorithm, Firmly nonexpansive mapping

Cite this paper: Hoang Thi Cam Thach, Nguyen Kieu Linh, A Projection Method for Variational Inequalities over the Fixed Point Set, American Journal of Mathematics and Statistics, Vol. 8 No. 5, 2018, pp. 140-143. doi: 10.5923/j.ajms.20180805.06.

1. Introduction

Let T : ℝⁿ → ℝⁿ be a firmly nonexpansive mapping, i.e.,
‖T(x) − T(y)‖² ≤ ⟨T(x) − T(y), x − y⟩ for all x, y ∈ ℝⁿ,
and let F : ℝⁿ → ℝⁿ be a given mapping. We consider the following variational inequality over the fixed point set Fix(T) = {x ∈ ℝⁿ : T(x) = x} (shortly, VI(F, Fix(T))):
Find x* ∈ Fix(T) such that ⟨F(x*), x − x*⟩ ≥ 0 for all x ∈ Fix(T).
Problem VI(F, Fix(T)) is a special class of equilibrium problems on a nonempty closed convex constraint set. Many iterative methods for solving such problems have been presented in [1, 2, 3, 4, 5, 8, 9].
In this paper, we investigate a new and efficient globally convergent algorithm for solving variational inequalities over the fixed point set of a firmly nonexpansive mapping. Most current algorithms for this problem are based on the metric projection onto a nonempty closed convex constraint set, which in general is not easy to compute. The fundamental difference here is that, at each main iteration of the proposed algorithm, we only require computing a projection onto a simple set, namely a closed ball. Moreover, by choosing suitable regularization parameters, we show that the iterative sequence globally converges to a solution of Problem VI(F, Fix(T)).
The paper is organized as follows. Section 2 recalls some concepts related to variational inequalities over the fixed point set of a nonexpansive mapping that will be used in the sequel, and presents a new iteration scheme. Section 3 investigates the convergence of the iteration sequence presented in Section 2, which constitutes the main result of our paper.

2. Preliminaries

We list some well-known definitions and the metric projection under the Euclidean norm, which will be required in the following analysis.
Definition 2.1 Let C be a nonempty closed convex subset of ℝⁿ. For x ∈ ℝⁿ, we denote by Pr_C(x) the metric projection of x onto C, i.e.,
Pr_C(x) = argmin{‖y − x‖ : y ∈ C}.
A mapping F : C → ℝⁿ is said to be
(i) monotone on C if ⟨F(x) − F(y), x − y⟩ ≥ 0 for all x, y ∈ C;
(ii) pseudomonotone on C if, for all x, y ∈ C, ⟨F(y), x − y⟩ ≥ 0 implies ⟨F(x), x − y⟩ ≥ 0.
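As a concrete illustration (not part of the original text), an affine mapping F(x) = Px is monotone whenever P is positive semidefinite. The hypothetical sketch below checks the defining inequality of Definition 2.1(i) on random sample points; such a check can only refute monotonicity, never prove it.

```python
import numpy as np

def is_monotone_on_samples(F, points):
    """Check <F(x) - F(y), x - y> >= 0 for every pair of sample points.
    A numerical spot check: it can refute monotonicity, not certify it."""
    for x in points:
        for y in points:
            if np.dot(F(x) - F(y), x - y) < -1e-12:
                return False
    return True

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
P = A @ A.T                      # A A^T is positive semidefinite => F monotone
F = lambda x: P @ x
samples = [rng.standard_normal(3) for _ in range(20)]
print(is_monotone_on_samples(F, samples))   # True
```

The same routine applied to F(x) = Px with an indefinite P would typically return False on random samples, which is how monotonicity fails in practice.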
It is well known that the gradient projection method in [10] solves the convex optimization problem
min{f(x) : x ∈ C},  (2.1)
where C = C₁ ∩ ⋯ ∩ C_m, each C_i is a closed convex subset of ℝⁿ for all i = 1, …, m, and f is a differentiable convex function on ℝⁿ. The iteration sequence of the method is defined by
x^{k+1} = Pr_C(x^k − λ_k ∇f(x^k)), λ_k > 0.
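A minimal sketch of the gradient projection iteration just described, on an illustrative instance chosen by us (a quadratic objective over a box, whose projection is a coordinate-wise clip); the instance is an assumption, not taken from the paper:

```python
import numpy as np

def projected_gradient(grad_f, proj, x0, step=0.1, iters=500):
    """Gradient projection iteration x^{k+1} = Pr_C(x^k - step * grad_f(x^k))."""
    x = x0
    for _ in range(iters):
        x = proj(x - step * grad_f(x))
    return x

# Illustrative instance: minimize f(x) = ||x - t||^2 over the box C = [0, 1]^2.
# The minimizer is the projection of t onto C, here (1, 0).
t = np.array([2.0, -0.5])
x_star = projected_gradient(lambda x: 2.0 * (x - t),
                            lambda x: np.clip(x, 0.0, 1.0),
                            x0=np.array([0.5, 0.5]))
print(x_star)   # [1. 0.]
```

For a box the projection is trivial; the difficulty the paper addresses is precisely that for a general closed convex C the operator Pr_C has no such closed form.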
When C is an arbitrary closed convex set, the computation of the metric projection Pr_C is in general not easy, and hence the gradient projection method is not effective for solving the convex optimization problem. To overcome this drawback, Yamada in [11] proposed the fixed point iteration method
x^{k+1} = T(x^k) − λ_k μ ∇f(T(x^k)),
where T is a nonexpansive mapping defined by T = Pr_{C₁} ∘ Pr_{C₂} ∘ ⋯ ∘ Pr_{C_m} such that Fix(T) = C₁ ∩ ⋯ ∩ C_m. Under certain conditions on the parameters, the sequence {x^k} converges to a solution of Problem (2.1). Very recently, Iiduka in [6] proposed a fixed point optimization algorithm for solving the following variational inequality:
Find x* ∈ Fix(T) such that ⟨F(x*), x − x*⟩ ≥ 0 for all x ∈ Fix(T),
over the fixed point set Fix(T) of a firmly nonexpansive mapping T. In each iteration of that algorithm, one orthogonal projection onto a closed convex set containing Fix(T) is calculated in order to obtain the next iterate x^{k+1} from the current iterate x^k. Under certain conditions on the parameters, and provided that an asymptotic optimization condition is satisfied, the iterative sequence converges to a solution of the variational inequality over the fixed point set of the firmly nonexpansive mapping. In practice, the asymptotic optimization condition is, in some cases, very difficult to verify. In order to avoid this requirement, we propose a new iteration method that requires neither the asymptotic optimization condition nor the computation of the metric projection onto a general closed convex set. Our algorithm is described in detail as follows.
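Yamada's hybrid steepest-descent idea [11] replaces the hard projection Pr_C by a cheap nonexpansive map T whose fixed point set is the constraint set. A hedged sketch under illustrative assumptions of ours (T is the projection onto the unit ball, f is quadratic, and the diminishing step λ_k = 1/(k + 1)); the instance is not from the paper:

```python
import numpy as np

def hybrid_steepest_descent(T, grad_f, x0, mu=0.5, iters=2000):
    """Yamada-type iteration x^{k+1} = T(x^k) - lam_k * mu * grad_f(T(x^k))
    with the diminishing step lam_k = 1 / (k + 1)."""
    x = x0
    for k in range(iters):
        y = T(x)
        x = y - (1.0 / (k + 1)) * mu * grad_f(y)
    return x

# Illustrative T: projection onto the unit ball, a firmly nonexpansive map
# whose fixed point set is the ball itself.
def T(x):
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

# Minimize f(x) = ||x - t||^2 over Fix(T); the solution is (1, 0).
t = np.array([2.0, 0.0])
x_star = hybrid_steepest_descent(T, lambda x: 2.0 * (x - t), np.zeros(2))
print(np.round(x_star, 2))   # close to [1. 0.]
```

Note that the iterate itself may sit slightly outside Fix(T) at any finite k; only in the limit does it reach the constraint set, which is why no explicit projection onto Fix(T) is ever needed.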
Algorithm 2.2 Initialization. Take a starting point, a positive number ρ > 0, and positive sequences verifying the following conditions:
(2.2)
Step 1. Let Choose arbitrary such that for all Define and
Step 2. Compute
Note that the constraint set arising in Step 2 is a closed ball. Therefore, its metric projection can be computed explicitly.
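The metric projection onto a closed ball B(c, r) has the closed form Pr_B(x) = x if ‖x − c‖ ≤ r, and Pr_B(x) = c + r(x − c)/‖x − c‖ otherwise. A minimal sketch (the test points are illustrative):

```python
import numpy as np

def project_ball(x, center, radius):
    """Closed-form metric projection onto the closed ball B(center, radius)."""
    d = x - center
    n = np.linalg.norm(d)
    if n <= radius:
        return x.copy()                  # x already lies in the ball
    return center + (radius / n) * d     # radial pull-back onto the sphere

c = np.zeros(3)
print(project_ball(np.array([0.1, 0.2, 0.0]), c, 1.0))  # unchanged: inside
print(project_ball(np.array([3.0, 0.0, 4.0]), c, 1.0))  # [0.6 0.  0.8]
```

This O(n) formula is the "simple projection" that replaces the generally intractable projection onto an arbitrary closed convex set.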

3. Convergence Results

To investigate the convergence of Algorithm 2.2, we recall the following technical lemmas, which will be used in the sequel.
Lemma 3.1 (see [7]) Let {a_k}, {b_k}, and {c_k} be three nonnegative sequences satisfying the following condition:
a_{k+1} ≤ (1 + b_k) a_k + c_k for all k ≥ 0.
If Σ_{k=0}^∞ b_k < ∞ and Σ_{k=0}^∞ c_k < ∞, then lim_{k→∞} a_k exists.
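A quick numerical illustration of the lemma (ours, not from the paper): iterating the extreme case a_{k+1} = (1 + b_k) a_k + c_k with the summable choice b_k = c_k = 2^{-k} produces a sequence whose successive differences vanish, consistent with the existence of the limit.

```python
def lemma_limit_demo(a0, bs, cs):
    """Iterate a_{k+1} = (1 + b_k) * a_k + c_k, the equality case of the
    inequality in Lemma 3.1, and return the whole trajectory."""
    a = [a0]
    for b, c in zip(bs, cs):
        a.append((1 + b) * a[-1] + c)
    return a

# Summable perturbations b_k = c_k = 2^{-k}: the sequence must converge.
K = 60
traj = lemma_limit_demo(1.0, [2.0**-k for k in range(K)],
                             [2.0**-k for k in range(K)])
print(abs(traj[-1] - traj[-2]))   # successive differences shrink to ~0
```

With a non-summable choice such as b_k = 1/(k + 1) the same recursion diverges, which shows that the summability hypotheses cannot be dropped.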
We are now in a position to prove some convergence theorems.
Theorem 3.2 Let C be a nonempty closed convex subset of ℝⁿ, let T be a firmly nonexpansive mapping such that Fix(T) is bounded by M > 0, and let F be monotone. Then, the sequence generated by Algorithm 2.2 converges to a solution of Problem VI(F, Fix(T)).
Proof. We divide the proof into five steps.
Step 1. For each we have
(3.1)
Indeed, from it follows that
(3.2)
Using the assumption for all and for all k ≥ 0, we have Then, substituting into (3.2), we get
Combining this and the inequality

we have
From (3.3) and the equality
(3.4)
we get
Thus
(3.5)
From and it follows that
(3.6)
By the definition of the metric projection and (3.6), we have

(3.7)
Combining (3.5), (3.6) and (3.7), we get
This implies (3.1).
Step 2. Claim that and
Indeed, using (3.3), (3.4) and the definition of the firmly nonexpansive mapping, we have
Hence, we have
(3.8)
Applying Lemma 3.1 for the sequences in the inequality (3.1), there exists
(3.9)
From the Initialization of Algorithm 2.2, it follows that
Combining this with (3.8) and (3.9), we get
Using this and the nonexpansive property of T, we have
Since is bounded, there exists and a subsequence which converges to as i → ∞.
Step 3. Claim that where and the open ball is defined by
Indeed, from it follows that there exists such that for all It means that for all Then, we have
Thus,
Now we suppose that By Step 2 and Opial’s condition, we get
This is a contradiction. So,
Step 4. Claim that and the sequence converges to
Indeed, from (3.6), it follows that
Using and , we have Combining this and Step 3, we have
Denote Then, g is convex and
Thus, is a local minimizer of g. Since Fix(T) is nonempty and convex, it is also a global minimizer of g, i.e., for all This means that
So, Sol(F, Fix(T)).
To prove that converges to we suppose that the subsequence also converges to as j → ∞. In the same way, we also have VI(F, Fix(T)). Suppose that Then, using Opial's condition, we have
This is a contradiction. Thus, the sequence converges to Sol(F, Fix(T)).

4. Conclusions

This paper presented an iterative algorithm for solving variational inequalities over the fixed point set of a nonexpansive mapping T. By choosing suitable regularization parameters, we showed that the sequence generated by the algorithm globally converges to a solution of Problem VI(F, Fix(T)). Compared with existing methods, the fundamental difference is that the algorithm requires only the continuity of the mapping F, and its convergence requires only that F be monotone. Moreover, since computing the exact subgradient of a subdifferentiable function is in general too expensive, our algorithm requires only approximate computations.

References

[1]  P.N. Anh, A logarithmic quadratic regularization method for solving pseudomonotone equilibrium problems, Acta Mathematica Vietnamica, 34 (2009), 183-200.
[2]  P.N. Anh, and N.X. Phuong, Linesearch methods for variational inequalities involving strict pseudocontractions, Optim., 64 (2015), 1841-1854.
[3]  P.N. Anh, and J.K. Kim, Outer approximation algorithms for pseudomonotone equilibrium problems, Computers and Mathematics with Applications, 61 (2011), 2588-2595.
[4]  E. Blum, and W. Oettli, From optimization and variational inequality to equilibrium problems, The Mathematics Student, 63 (1994), 127-149.
[5]  P. Daniele, F. Giannessi, and A. Maugeri, Equilibrium problems and variational models, Kluwer (2003).
[6]  H. Iiduka, Fixed point optimization algorithm and its application to power control in CDMA data networks, Math. Program., (2012), DOI 10.1007/s10107-010-0427-x.
[7]  L. Qihou, Iterative sequences for asymptotically quasinonexpansive mappings, J. Math. Anal. Appl. 259 (2001), 1-7.
[8]  T.D. Quoc, P.N. Anh, and L.D. Muu, Dual extragradient algorithms to equilibrium problems, J. of Global Optimization, 52 (2012), 139-159.
[9]  K. Slavakis, I. Yamada, and N. Ogura, The adaptive projected subgradient method over the fixed point set of strongly attracting nonexpansive mappings, Optimization, 27 (2006), 905-930.
[10]  P. Wolfe, Finding the nearest point in a polytope, Math. Program 11 (1976), 128-149.
[11]  I. Yamada, The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings, Studies in Computational Mathematics, 8 (2001), 473-504.