International Journal of Finance and Accounting
p-ISSN: 2168-4812 e-ISSN: 2168-4820
2012; 1(3): 23-27
doi:10.5923/j.ijfa.20120103.01
Vida Varahrami
Kargar-e-Shomali Avenue, Faculty of Economics, University of Tehran, Tehran, Iran
Correspondence to: Vida Varahrami, Kargar-e-Shomali Avenue, Faculty of Economics, University of Tehran, Tehran, Iran.
Copyright © 2012 Scientific & Academic Publishing. All Rights Reserved.
Studies show that neural networks achieve better results in predicting financial time series than any linear or non-linear functional form used to model price movements. Neural networks have the advantage of simulating non-linear models when little a priori knowledge of the structure of the problem domain exists, when the number of immeasurable input variables is large, or when the system has chaotic characteristics. Among different methods, an MLFF neural network with a back-propagation learning algorithm and a GMDH neural network with a genetic learning algorithm are used to predict gas prices from the Henry Hub database covering the 01/01/2004-13/7/2009 period. This paper uses moving average crossover inputs, and the results confirm that (1) there is short-term dependence in gas price movements, (2) the EMA moving average gives better results, and (3) by means of GMDH neural networks, prediction accuracy can be improved in comparison to MLFF neural networks.
Keywords: Artificial Neural Networks (ANN), Multi Layered Feed Forward (MLFF), Group Method of Data Handling (GMDH)
Cite this paper: Vida Varahrami, Good Prediction of Gas Price between MLFF and GMDH Neural Network, International Journal of Finance and Accounting, Vol. 1 No. 3, 2012, pp. 23-27. doi: 10.5923/j.ijfa.20120103.01.
A commonly used activation function is the logistic sigmoid, $f(x)=\frac{1}{1+e^{-x}}$. It has a continuous derivative, which allows it to be used in back-propagation. This function is also preferred because its derivative is easily calculated: $f'(x)=f(x)\left(1-f(x)\right)$.
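As a minimal illustration (the paper gives no code, so Python here is our assumption), the sigmoid and the cheap reuse of its value in the derivative:

```python
import numpy as np

def sigmoid(x):
    # Logistic sigmoid: f(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    # The derivative reuses the function value, f'(x) = f(x) * (1 - f(x)),
    # which is why it is inexpensive to evaluate during back-propagation.
    s = sigmoid(x)
    return s * (1.0 - s)
```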
Multi-layer networks use a variety of learning techniques; the most popular is the back-propagation algorithm (BPA). The BPA is a supervised learning algorithm that aims at reducing the overall system error to a minimum [1,9]. This algorithm has made multilayer neural networks suitable for various prediction problems. In this learning procedure, an initial weight vector $w_i$ is updated according to [10]:

$$w_i(k+1)=w_i(k)+\mu\left(t_i-o_i\right)x_i \qquad (1)$$

where
$w_i$ is the weight vector associated with the $i$th neuron;
$x_i$ is the input of the $i$th neuron;
$o_i$ is the actual output of the $i$th neuron;
$t_i$ is the target output of the $i$th neuron;
and $\mu$ is the learning rate parameter.

Here the output values $o_i$ are compared with the correct answers to compute the value of some predefined error function. The neural network is trained with the weight update equation (1) to minimize the mean squared error given by [10]:

$$E=\frac{1}{2}\sum_{i}\left(t_i-o_i\right)^2$$
The error is then fed back through the network by various techniques. Using this information, the algorithm adjusts the weights of each connection in order to reduce the value of the error function by some small amount. After repeating this process for a sufficiently large number of training cycles, the network will usually converge to a state where the error of the calculations is small; in this case one says that the network has learned a certain target function. To adjust the weights properly, one applies a general method for non-linear optimization called gradient descent: the derivative of the error function with respect to the network weights is calculated, and the weights are then changed such that the error decreases [18]. The gradient descent back-propagation learning algorithm is based on minimizing the mean square error. An alternative approach to gradient descent is the exponentiated gradient descent algorithm, which minimizes the relative entropy [19].
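For concreteness, a minimal single-layer sketch of the update rule of equation (1) in Python; the initialization scale, learning rate, and epoch count are illustrative assumptions, and a full MLFF network would additionally propagate the error through its hidden layers via the chain rule:

```python
import numpy as np

def train_layer(X, T, mu=0.05, epochs=2000):
    """Train one layer of sigmoid neurons with the update of equation (1):
    w(k+1) = w(k) + mu * (t - o) * x, reducing the squared error."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(X.shape[1], T.shape[1]))  # initial weights
    for _ in range(epochs):
        O = 1.0 / (1.0 + np.exp(-X @ W))   # actual outputs o_i
        W += mu * X.T @ (T - O)            # feed the error (t_i - o_i) back
    return W

# Usage: learn a logical AND; the constant-1 column provides a bias input.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
T = np.array([[0], [0], [0], [1]], dtype=float)
W = train_layer(X, T)
```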
The formal definition of the identification problem is to find a function $\hat{f}$ that can be approximately used instead of the actual one, $f$, in order to predict the output $\hat{y}$ for a given input vector $X=(x_1,x_2,x_3,\dots,x_n)$ as close as possible to its actual output $y$ [22]. Therefore, given $M$ observations of multi-input, single-output data pairs so that

$$y_i=f(x_{i1},x_{i2},x_{i3},\dots,x_{in}) \quad (i=1,2,\dots,M),$$

it is now possible to train a GMDH-type neural network to predict the output values $\hat{y}_i$ for any given input vector $X=(x_{i1},x_{i2},x_{i3},\dots,x_{in})$, that is,

$$\hat{y}_i=\hat{f}(x_{i1},x_{i2},x_{i3},\dots,x_{in}) \quad (i=1,2,\dots,M).$$

The problem is now to determine a GMDH-type neural network so that the square of the difference between the actual output and the predicted one is minimized, that is,

$$\sum_{i=1}^{M}\left[\hat{f}(x_{i1},x_{i2},x_{i3},\dots,x_{in})-y_i\right]^2\rightarrow\min.$$

The general connection between the input and output variables can be expressed by a complicated discrete form of the Volterra functional series in the form of

$$y=a_0+\sum_{i=1}^{n}a_i x_i+\sum_{i=1}^{n}\sum_{j=1}^{n}a_{ij}x_i x_j+\sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{k=1}^{n}a_{ijk}x_i x_j x_k+\cdots \qquad (2)$$

This full form can be represented by a system of partial quadratic polynomials consisting of only two variables (neurons) in the form of

$$\hat{y}=G(x_i,x_j)=a_0+a_1 x_i+a_2 x_j+a_3 x_i x_j+a_4 x_i^2+a_5 x_j^2 \qquad (3)$$

The coefficients $a_i$ in equation (3) are calculated using regression techniques so that the difference between the actual output, $y$, and the calculated one, $\hat{y}$, for each pair of $x_i$, $x_j$ as input variables is minimized. Indeed, it can be seen that a tree of polynomials is constructed using the quadratic form given in equation (3), whose coefficients are obtained in a least-squares sense. In this way, the coefficients of each quadratic function $G_i$ are obtained to optimally fit the output in the whole set of input-output data pairs, that is [16],

$$E=\frac{1}{M}\sum_{i=1}^{M}\left(y_i-G_i\right)^2\rightarrow\min. \qquad (4)$$

In the basic form of the GMDH algorithm, all possible pairs of two independent variables out of the total $n$ input variables are taken in order to construct the regression polynomial in the form of equation (3) that best fits the dependent observations ($y_i$, $i=1,2,\dots,M$) in a least-squares sense. Consequently, $\binom{n}{2}=\frac{n(n-1)}{2}$ neurons will be built up in the first hidden layer of the feed-forward network from the observations $\{(y_i,x_{ip},x_{iq});\ i=1,2,\dots,M\}$ for different $p,q\in\{1,2,\dots,n\}$. In other words, it is now possible to construct $M$ data triples $\{(y_i,x_{ip},x_{iq});\ i=1,2,\dots,M\}$ from the observations using such $p,q\in\{1,2,\dots,n\}$ in the form

$$\begin{bmatrix} x_{1p} & x_{1q} & y_1 \\ x_{2p} & x_{2q} & y_2 \\ \vdots & \vdots & \vdots \\ x_{Mp} & x_{Mq} & y_M \end{bmatrix}$$

Using the quadratic sub-expression in the form of equation (3) for each row of the $M$ data triples, the following matrix equation can be readily obtained:

$$A\,\mathbf{a}=Y$$

where $\mathbf{a}$ is the vector of unknown coefficients of the quadratic polynomial in equation (3),

$$\mathbf{a}=\{a_0,a_1,a_2,a_3,a_4,a_5\}, \qquad (5)$$

and $Y=\{y_1,y_2,y_3,\dots,y_M\}^{T}$ is the vector of output values from the observations. It can be readily seen that

$$A=\begin{bmatrix} 1 & x_{1p} & x_{1q} & x_{1p}x_{1q} & x_{1p}^2 & x_{1q}^2 \\ 1 & x_{2p} & x_{2q} & x_{2p}x_{2q} & x_{2p}^2 & x_{2q}^2 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & x_{Mp} & x_{Mq} & x_{Mp}x_{Mq} & x_{Mp}^2 & x_{Mq}^2 \end{bmatrix}$$

The least-squares technique from multiple-regression analysis leads to the solution of the normal equations in the form of

$$\mathbf{a}=\left(A^{T}A\right)^{-1}A^{T}Y \qquad (6)$$
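For illustration, a minimal Python sketch of fitting one GMDH neuron: build the design matrix $A$ row by row and solve equation (6) for the six coefficients of equation (3). Function names are our own, and a numerically stable least-squares solver stands in for the explicit matrix inverse:

```python
import numpy as np

def fit_gmdh_neuron(xp, xq, y):
    """Fit the six coefficients of the quadratic polynomial (3) to the
    M data triples (x_ip, x_iq, y_i), i.e. solve equation (6)."""
    xp, xq, y = (np.asarray(v, dtype=float) for v in (xp, xq, y))
    # Each row of A is [1, x_p, x_q, x_p*x_q, x_p^2, x_q^2]
    A = np.column_stack([np.ones_like(xp), xp, xq, xp * xq, xp**2, xq**2])
    # a = (A^T A)^{-1} A^T Y, computed via a stable least-squares routine
    a, *_ = np.linalg.lstsq(A, y, rcond=None)
    return a

def gmdh_neuron(a, xp, xq):
    """Evaluate y_hat = G(x_p, x_q) from equation (3)."""
    return a[0] + a[1]*xp + a[2]*xq + a[3]*xp*xq + a[4]*xp**2 + a[5]*xq**2
```

In a full GMDH network, this fit is repeated for every pair $(p,q)$ of inputs in each layer, and the best-performing neurons (here selected by a genetic algorithm) survive to feed the next layer.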
The simple moving average over a window of $n$ periods is

$$SMA_t=\frac{C_t+C_{t-1}+\cdots+C_{t-n+1}}{n}$$

where $C_t$ is the price at time $t$ [17]. The shorter the time period, the more reactive a moving average becomes. A typical short-term moving average ranges from 5 to 25 days, an intermediate-term from 5 to 100 days, and a long-term from 100 to 250 days. In our experiment, the window of the time interval $n$ is 5.
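A minimal sketch of this simple moving average in Python, using the window $n=5$ from the experiment:

```python
import numpy as np

def sma(prices, n=5):
    """Simple moving average: the mean of the last n prices C_t ... C_{t-n+1}.
    Returns one value per full window (the first n-1 points have none)."""
    prices = np.asarray(prices, dtype=float)
    return np.convolve(prices, np.ones(n) / n, mode="valid")
```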
The longer the period of the exponential moving average, the less total weight is applied to the most recent price. The advantage of an exponential average is its ability to pick up on price changes more quickly. In our experiment, the window of the time interval $n$ is 5.
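A minimal sketch of the exponential moving average; the paper does not state its exact weighting scheme, so the common smoothing factor $\alpha=2/(n+1)$ is an assumption here:

```python
def ema(prices, n=5):
    """Exponential moving average with alpha = 2 / (n + 1); the larger n is,
    the less weight the most recent price receives."""
    alpha = 2.0 / (n + 1)
    out = [float(prices[0])]               # seed with the first price
    for price in prices[1:]:
        out.append(alpha * float(price) + (1.0 - alpha) * out[-1])
    return out
```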