International Journal of Statistics and Applications
p-ISSN: 2168-5193 e-ISSN: 2168-5215
2017; 7(2): 73-92
doi:10.5923/j.statistics.20170702.03

Otaru O. A. P., Iwundu M. P.
Department of Mathematics and Statistics, University of Port Harcourt, Port Harcourt, Nigeria
Correspondence to: Iwundu M. P., Department of Mathematics and Statistics, University of Port Harcourt, Port Harcourt, Nigeria.
Copyright © 2017 Scientific & Academic Publishing. All Rights Reserved.
This work is licensed under the Creative Commons Attribution International License (CC BY).
http://creativecommons.org/licenses/by/4.0/

A competitive assessment of the performance of two Variance Weighted Gradient (VWG) methods against some standard gradient and non-gradient optimization methods is presented for optimizing polynomial response surfaces. The variance weighted gradient methods may involve several response surfaces defined on a joint feasible region having the same constraints, or several response surfaces defined on disjoint feasible regions having different constraints. The gradient and non-gradient optimization methods considered are the Quasi-Newton (QN), Genetic Algorithm (GA), Mesh Adaptive Direct Search (MADS) and Generalized Pattern Search (GPS) methods. The variance weighted gradient methods perform comparably well and require few iterative steps to reach convergence.
Keywords: Response Surfaces, Optimization, Gradient Methods, Non-Gradient Methods, Weighted Gradient Methods
Cite this paper: Otaru O. A. P., Iwundu M. P., Competitive Assessment of Two Variance Weighted Gradient (VWG) Methods with Some Standard Gradient and Non-Gradient Optimization Methods, International Journal of Statistics and Applications, Vol. 7 No. 2, 2017, pp. 73-92. doi: 10.5923/j.statistics.20170702.03.
Let $f_k(\mathbf{x})$, $k = 1, 2, \ldots, K$, be n-variate, p-parameter polynomial functions of degree m defined by constraints on the same feasible region, given by

$$f_k(\mathbf{x}) = \mathbf{z}'(\mathbf{x})\,\boldsymbol{\beta}_k + e_k, \qquad \mathbf{a}_s'\mathbf{x} \leq b_s, \quad s = 1, 2, \ldots, S, \qquad (1)$$

where $\boldsymbol{\beta}_k$ is a p-component vector of known coefficients, $e_k$ is the random error component assumed normally and independently distributed with zero mean and constant variance, $\mathbf{a}_s$ is a component vector of known coefficients and $b_s$ is a scalar, for each of the $S$ constraints. The Variance Weighted Gradient (VWG) method for a joint feasible region with the same constraints is defined by the following iterative steps:

i) Obtain $N$ support points $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_N$ from the feasible region.
ii) Define the design measure $\xi_N$, made up of the $N$ support points, such that the support points are spread evenly in the feasible region.
iii) From the support points $l = 1, 2, \ldots, N$ that make up the design measure, compute the starting point as the arithmetic mean vector of the support points.
iv) Obtain the n-component gradient function for the kth response function, $\mathbf{g}_k(\mathbf{x}) = \nabla f_k(\mathbf{x})$, where each component of $\mathbf{g}_k$ is an $(m-1)$th degree polynomial with a t-component vector of known coefficients.
v) Compute the corresponding kth gradient vectors by substituting each of the $l$ design points into the gradient function $\mathbf{g}_k$.
vi) Using the gradient function and the design measure, obtain the corresponding design matrices. To form the design matrix, use a single polynomial that combines the respective gradient functions associated with each response function.
vii) Compute the variance of each of the $l$ design points for the kth function.
viii) Obtain the direction vector $\mathbf{d}_k$ for the kth function and the normalized direction vector $\hat{\mathbf{d}}_k = \mathbf{d}_k / \lVert \mathbf{d}_k \rVert$, such that $\lVert \hat{\mathbf{d}}_k \rVert = 1$.
ix) Compute the step length $\rho_j$.
x) With $\hat{\mathbf{d}}_k$ and $\rho_j$, make a move to $\mathbf{x}_{j+1} = \mathbf{x}_j + \rho_j \hat{\mathbf{d}}_k$.
xi) To make the next move, set $j = j + 1$, define the design measure accordingly, repeat the process from step (iii), and obtain $f_k(\mathbf{x}_{j+1})$.
xii) If $\lvert f_k(\mathbf{x}_{j+1}) - f_k(\mathbf{x}_j) \rvert \leq \varepsilon$, where $f_k(\mathbf{x}_j)$ is the value of the kth function at the jth step and $f_k(\mathbf{x}_{j+1})$ is its value at the next $(j+1)$th step, then set the optimizer as $\mathbf{x}^* = \mathbf{x}_{j+1}$ and STOP. Else, set $j = j + 1$ and repeat the process from step (iii).
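To make the iterative steps concrete, the following is a minimal Python sketch of steps (i) through (xii) for two hypothetical quadratic response surfaces on a shared feasible region. The test functions, the support points, the initial step length, the step-halving safeguard, and the variance weighting (each function's gradient weighted by the reciprocal of the total variance of its gradient over the design points) are illustrative assumptions; the paper's exact variance and step-length formulas are not reproduced here.

```python
import numpy as np

# Two hypothetical quadratic response surfaces (not the paper's test problems).
def f1(x):
    return (x[0] - 1.5) ** 2 + (x[1] - 0.5) ** 2

def g1(x):  # gradient of f1
    return np.array([2.0 * (x[0] - 1.5), 2.0 * (x[1] - 0.5)])

def f2(x):
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] - 1.0) ** 2

def g2(x):  # gradient of f2
    return np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] - 1.0)])

funcs, grads = [f1, f2], [g1, g2]

# Steps i)-ii): N support points spread evenly over the feasible region.
support = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
                    [1.0, 1.0], [0.5, 0.5]])

# Step iii): starting point as the arithmetic mean of the support points.
x = support.mean(axis=0)

rho, eps = 0.1, 1e-8  # assumed initial step length and stopping tolerance
for j in range(1000):
    # Steps iv)-vii): gradients at the design points and a variance per
    # function (total variance of its gradient over the design -- an assumed
    # stand-in for the paper's variance formula). The design measure is held
    # fixed here, whereas the paper redefines it at every move.
    weights = [1.0 / np.array([g(p) for p in support]).var(axis=0).sum()
               for g in grads]

    # Step viii): variance-weighted combined descent direction, normalized.
    d = -sum(w * g(x) for w, g in zip(weights, grads))
    d_hat = d / np.linalg.norm(d)

    # Steps ix)-x): trial move with the current step length.
    x_new = x + rho * d_hat

    # Steps xi)-xii): stop when no response function changes appreciably.
    if max(abs(f(x_new) - f(x)) for f in funcs) < eps:
        x = x_new
        break

    # Assumed safeguard: accept the move only if the weighted objective
    # improves; otherwise halve the step length.
    if sum(w * f(x_new) for w, f in zip(weights, funcs)) < \
       sum(w * f(x) for w, f in zip(weights, funcs)):
        x = x_new
    else:
        rho *= 0.5

print("optimizer estimate:", x, "responses:", [round(f(x), 4) for f in funcs])
```

The sketch converges to a compromise point between the two (assumed) response optima, with the lower-variance gradient receiving the larger weight.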
Similarly, let

$$f_r(\mathbf{x}) = \mathbf{z}'(\mathbf{x})\,\boldsymbol{\beta}_r + e_r, \qquad r = 1, 2, \ldots, R, \qquad (2)$$

be polynomial response functions defined on $R$ disjoint feasible regions, each supported by $S$ constraints $\mathbf{a}_{rs}'\mathbf{x} \leq b_{rs}$, where $\boldsymbol{\beta}_r$ is a p-component vector of known coefficients, $\mathbf{x}$ is the vector of independent variables, and $e_r$ is the random error component assumed normally and independently distributed with zero mean and constant variance, while $\mathbf{a}_{rs}$ is a component vector of known coefficients and $b_{rs}$ is a scalar, for each of the $S$ constraints in the rth region. The Variance Weighted Gradient (VWG) method for disjoint feasible regions, with different constraints, is given by the following sequential steps:

i) From the $R$ disjoint regions, obtain the design measures $\xi_1, \xi_2, \ldots, \xi_R$, which are made up of support points from the respective regions, such that the support points are spread evenly in each region.
ii) From the support points that make up each design measure, compute the $R$ starting points as the arithmetic mean vectors.
iii) Obtain the n-component gradient function for the rth region, $\mathbf{g}_r(\mathbf{x}) = \nabla f_r(\mathbf{x})$, where each component of $\mathbf{g}_r$ is an $(m-1)$th degree polynomial with a t-component vector of known coefficients.
iv) Compute the corresponding $r$ gradient vectors by substituting each design point defined on the rth region into the gradient function $\mathbf{g}_r$.
v) Using the gradient function and the design measures, obtain the corresponding design matrices. To form the design matrix, use a single polynomial that combines the respective gradient functions associated with each response function.
vi) Compute the variance of each of the $l$ design points defined on the rth region.
vii) Obtain the direction vector $\mathbf{d}_r$ in the rth region and the normalized direction vector $\hat{\mathbf{d}}_r = \mathbf{d}_r / \lVert \mathbf{d}_r \rVert$, such that $\lVert \hat{\mathbf{d}}_r \rVert = 1$.
viii) Compute the step length $\rho_j$. With $\hat{\mathbf{d}}_r$ and $\rho_j$, make a move to $\mathbf{x}_{r,j+1} = \mathbf{x}_{r,j} + \rho_j \hat{\mathbf{d}}_r$. Using $\mathbf{x}_{r,j+1}$, evaluate the projection operator $P_r$ and obtain the projected optimizers for each region.
ix) To make the next move, set $j = j + 1$, define the design measure accordingly, repeat the process from step (ii), and obtain $f_r(\mathbf{x}_{r,j+1})$.
x) If $\lvert f_r(\mathbf{x}_{r,j+1}) - f_r(\mathbf{x}_{r,j}) \rvert \leq \varepsilon$, where $f_r(\mathbf{x}_{r,j})$ is the value of the rth function at the jth step and $f_r(\mathbf{x}_{r,j+1})$ is its value at the next $(j+1)$th step, then set the optimizers as $\mathbf{x}_r^* = \mathbf{x}_{r,j+1}$ and STOP. Else, set $j = j + 1$ and repeat the process from (ii).
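As a sketch of the projection in step (viii), the following assumes each region is an intersection of halfspaces and uses cyclic Euclidean projections as a stand-in for the paper's projection operator $P_r$; the constraint data and the points being projected are hypothetical.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Euclidean projection of x onto the halfspace {y : a'y <= b}."""
    slack = a @ x - b
    if slack <= 0.0:
        return x                      # already feasible
    return x - (slack / (a @ a)) * a  # move perpendicularly to the boundary

def project_region(x, constraints, sweeps=50):
    """Approximate projection onto an intersection of halfspaces by cyclic
    projections (an assumed stand-in for the paper's projection operator)."""
    for _ in range(sweeps):
        for a, b in constraints:
            x = project_halfspace(x, a, b)
    return x

# Hypothetical constraint sets for two disjoint regions R1 and R2.
R1 = [(np.array([1.0, 1.0]), 2.0),     # x1 + x2 <= 2
      (np.array([-1.0, 0.0]), 0.0),    # x1 >= 0
      (np.array([0.0, -1.0]), 0.0)]    # x2 >= 0
R2 = [(np.array([-1.0, -1.0]), -3.0),  # x1 + x2 >= 3
      (np.array([1.0, 0.0]), 4.0),     # x1 <= 4
      (np.array([0.0, 1.0]), 4.0)]     # x2 <= 4

# After a gradient move, each region's iterate is projected back onto its own
# feasible region; an optimizer found in one region can also be projected into
# another region, as the concluding remarks of the paper suggest.
x1 = project_region(np.array([1.8, 1.7]), R1)
x2 = project_region(x1, R2)
print("projected onto R1:", x1, "-> projected onto R2:", x2)
```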
The methods are first compared on two test problems: Minimize $f_1(\mathbf{x})$ subject to its constraints (Ghadle and Pawar, 2015), and Maximize $f_2(\mathbf{x})$ subject to its constraints (Hillier and Lieberman, 2001). The solution to the minimization problem using the Variance Weighted Gradient (VWG) method is the optimizer (1.4960, 0.5030) with a response function value of 0.5000, from the initial starting point (0.66, 0.66). Similarly, the solution to the maximization problem using the Variance Weighted Gradient (VWG) method is the optimizer (0.7560, 1.2430) with a response function value of 3.1249, from the initial starting point (0.66, 0.66). The QN, GA, MADS and GPS methods obtained their respective solutions to the minimization problem from their respective initial guess values, and likewise for the maximization problem. The summary results are tabulated in Tables 1 and 2.
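The Quasi-Newton baseline in these comparisons can be reproduced with an off-the-shelf constrained solver. The sketch below uses SciPy's SLSQP method on a hypothetical problem constructed so that its optimizer is (1.5, 0.5) with objective value 0.5000, matching the scale of the reported minimization results; it is not the paper's actual test problem.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in: minimize (x1-2)^2 + (x2-1)^2 subject to x1 + x2 <= 2,
# whose solution is (1.5, 0.5) with value 0.5.
objective = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
constraints = [{"type": "ineq", "fun": lambda x: 2.0 - x[0] - x[1]}]

result = minimize(objective, x0=np.array([0.0, 1.0]),
                  method="SLSQP", constraints=constraints)
print(result.x, result.fun, result.nit)  # ~[1.5, 0.5], ~0.5, few iterations
```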
The methods are further compared on two maximization problems: Maximize $f_3(\mathbf{x})$ subject to its constraints (Hillier and Lieberman, 2001), and Maximize $f_4(\mathbf{x})$ subject to its constraints (Hillier and Lieberman, 2001). The solution to the first maximization problem using the Variance Weighted Gradient (VWG) method is the optimizer (0.9800, 1.5290) with a response function value of 11.4977, from the initial starting point (0.7500, 0.7900). Similarly, the solution to the second maximization problem using the Variance Weighted Gradient (VWG) method is the optimizer (0.4280, 0.5710) with a response function value of 1.3849, from the initial starting point (0.2700, 0.3300). The QN, GA, MADS and GPS methods obtained their respective solutions to each maximization problem from their respective initial guess values. The summary results are tabulated in Tables 3 and 4.
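Among the standard baselines, the Generalized Pattern Search method can be illustrated by a generic compass-search skeleton: poll the objective at $x \pm \Delta e_i$, move to any improving poll point, and halve the mesh size $\Delta$ when no poll point improves. This sketch is a minimal unconstrained variant, not MATLAB's patternsearch, and the test function is hypothetical.

```python
import numpy as np

def compass_search(f, x0, delta=1.0, tol=1e-6, max_iter=1000):
    """Generic pattern (compass) search: poll along +/- coordinate directions,
    accept any improving poll point, otherwise shrink the mesh."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * delta
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            delta *= 0.5  # refine the mesh
            if delta < tol:
                break
    return x, fx

# Hypothetical unconstrained test function; constraints would be handled by
# rejecting or penalizing infeasible poll points.
x_opt, f_opt = compass_search(lambda x: (x[0] - 1.5) ** 2 + (x[1] - 0.5) ** 2,
                              [0.0, 0.0])
print(x_opt, f_opt)
```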
In minimizing $f_1(\mathbf{x})$ subject to its constraints, the Quasi-Newton Method (QNM), with the initial guess starting point (0.0000, 1.0000), locates the optimizer (1.5000, 0.5000) in 4 iterations with a response function value of 0.5000. The Genetic Algorithm (GA), with the initial guess starting point (0.0000, 1.0000), locates the optimizer (1.5040, 0.4950) in 51 iterations with a response function value of 0.5001. The Mesh Adaptive Direct Search (MADS) and Generalized Pattern Search (GPS) methods, with the initial guess starting point (0.0000, 0.0000), locate the same optimizer (1.5000, 0.5000) with a response function value of 0.5000 in 25 and 26 iterations, respectively. The Variance Weighted Gradient (VWG) method obtained the optimizer (1.4960, 0.5030) in 5 iterations, with a response function value of 0.5000, from the initial starting point (0.66, 0.66).

In maximizing $f_2(\mathbf{x})$ subject to its constraints, the Quasi-Newton Method (QNM), with the initial guess starting point (0.0000, 0.0000), locates the optimizer (0.7500, 1.2500) in 3 iterations with a response function value of 3.1250. The Genetic Algorithm (GA), with the initial guess starting point (0.0000, 0.0000), locates the optimizer (0.7540, 1.2500) in 51 iterations with a response function value of 3.1249. The Mesh Adaptive Direct Search (MADS) method, with the initial guess starting point (0.0000, 0.0000), locates the optimizer (0.7570, 1.2420) in 135 iterations with a response function value of 3.1244. The Generalized Pattern Search (GPS) method, with the initial guess starting point (1.0000, 0.0000), locates the optimizer (0.7500, 1.2500) in 24 iterations with a response function value of 3.1250. The Variance Weighted Gradient (VWG) method obtained the optimizer (0.7560, 1.2430) in 3 iterations, with a response function value of 3.1249, from the initial starting point (0.66, 0.66).

In maximizing $f_3(\mathbf{x})$ subject to its constraints, the Quasi-Newton Method (QNM), with the initial guess starting point (0.0000, 1.0000), locates the optimizer (1.0000, 1.5000) in 4 iterations with a response function value of 11.4999. The Genetic Algorithm (GA), with the initial guess starting point (0.0000, 1.0000), locates the optimizer (0.9960, 1.5040) in 51 iterations, with a response function value of 11.4999. The Mesh Adaptive Direct Search (MADS) method, with the initial guess starting point (1.0000, 0.0000), locates the optimizer (1.0000, 1.5000) in 23 iterations, with a response function value of 11.5000. The Generalized Pattern Search (GPS) method, with the initial guess starting point (0.0000, 1.0000), locates the optimizer (1.0000, 1.5000) in 24 iterations, with a response function value of 11.5000. The Variance Weighted Gradient (VWG) method obtained the optimizer (0.9800, 1.5290), from the initial starting point (0.7500, 0.7900), in 2 iterations, with a response function value of 11.4977.

Figure 1. Optimum point of $f_1(\mathbf{x})$ subject to its constraints
Figure 2. Optimum point of $f_2(\mathbf{x})$ subject to its constraints
Figure 3. Optimum point of $f_3(\mathbf{x})$ subject to its constraints
Figure 4. Optimum point of $f_4(\mathbf{x})$ subject to its constraints
In maximizing $f_4(\mathbf{x})$ subject to its constraints, the Quasi-Newton Method (QNM), with the initial guess starting point (0.0000, 1.0000), locates the optimizer (0.4220, 0.5770) in 5 iterations with a response function value of 1.3849. The Genetic Algorithm (GA), with the initial guess starting point (0.0000, 0.0000), locates the optimizer (0.4220, 0.5770) in 51 iterations with a response function value of 1.3849. The Mesh Adaptive Direct Search (MADS) method, with the initial guess starting point (0.5000, 0.5000), locates the optimizer (0.4290, 0.5770) in 109 iterations with a response function value of 1.3848. The Generalized Pattern Search (GPS) method, with the initial guess starting point (1.0000, 0.0000), locates the optimizer (0.4220, 0.5770) in 40 iterations with a response function value of 1.3849. The Variance Weighted Gradient (VWG) method obtained the optimizer (0.4280, 0.5710), from the initial starting point (0.2700, 0.3300), in 3 iterations with a response function value of 1.3849.

The VWG method is able to obtain optimizers for polynomial response functions defined on joint and disjoint feasible regions simultaneously, within very few iterations. The comparative assessment shows that the results obtained using VWG simultaneous optimization are efficient in locating the optimizers of several response surfaces. The norm of the difference between the optimizers obtained using the VWG methods and those obtained using the existing methods is very small, with a maximum of 0.0352. Also, the absolute differences between the values of the response functions obtained using the new method and those of the existing methods are approximately zero. The present study raises the possibility that optimizers of multiple functions defined on a joint feasible region can be added to the design points of the region, and that optimizers of multiple functions defined on one feasible region can be projected onto another feasible region in obtaining optimal solutions. We propose that further research be undertaken on multifunction polynomial response surfaces defined by constraints on different feasible regions and on unconstrained multifunction polynomial response surfaces.